Wrong Notes — Writing my symphony with all the wrong notes. By chubbsondubs (www.blogger.com/profile/06708078598697844829).

<h3>The Metaverse is stoopid, and it will fail just like Facebook has (2021-10-30)</h3>
<p>
So Facebook has died, and long live Meta, which I think marks a huge historical milestone for humanity: the day social media died. Facebook is now going to try to morph itself into "The Metaverse," some sort of 3D virtual playground where nobody hates Mark Zuckerberg anymore? Honestly I can't make heads or tails of what the Metaverse is supposed to actually be beyond what has already come before as a persistent virtual reality social game. Just pick one and that's the "Metaverse": Second Life, Roblox, Minecraft, practically every MMO game invented in the 21st century. So why is Meta going to revolutionize this space? Because they have lots of money and an image to save? What fertile ground is actually left in this idea? And do you really want to live in a virtual-reality-based internet?
</p>
<p>
The "Metaverse" is basically trying to deliver on a promise as old as the internet. Most people cite <a href="https://en.wikipedia.org/wiki/Snow_Crash">Neal Stephenson's "Snow Crash"</a> as the genesis of this idea, but it's probably only the most popular telling of it. There have been attempts before to create it, or something that could plausibly be considered it. Persistent, social, VR-style worlds and communities have been built before: most notably Second Life. In fact I'd argue Second Life came closest to this idea, because it wasn't trying to be anything beyond an excessively flexible virtual world. It also had the vision of being literally a second life where you designed a new image of yourself within it. At its peak, Second Life's virtual economy was reportedly comparable to the GDP of a small country. It was quite popular for a time, but eventually it faded from the zeitgeist of the internet. However crude Second Life was, it did approximate this Metaverse vision, and it ultimately failed as an idea. That could well be the fate waiting for "The Metaverse." I have serious doubts that a Facebook-powered "Metaverse" is going to be so radically different and transformative that the concept finally sticks.
</p>
<p>
I bet most Metaverse proponents would disagree with my comparison, and bristle at their project being boiled down to a 4-5 year period in the mid 2000s that has been long forgotten. But there were many other attempts to bring about the "Metaverse" or the technology that would support it. VRML was a promising idea that could have democratized 3D worlds using a standards-based approach, but it flamed out before it could even get going. Most proponents chalked that failure up to lacking implementations and compatibility, but there has been no real attempt to reboot it either, mostly because the concept of the web as a 3D world was just not what people wanted. While we now have better platforms like Unity or Unreal, I don't think the crudeness of these inventions was really the problem.
</p>
<p>
Sure, Mark Zuckerberg and John Carmack can debate over whether the technology is ready or not, or whether new technology will somehow improve the experience. However, the technology isn't really that much better than it was 10 or even 20 years ago. Maybe we have nicer graphics, higher resolutions, even cheap VR headsets, but none of it has really made VR more desirable. It's only really found a following inside a small pocket of gamers, and most of the time 2D monitors are still the preferred way for gamers to experience their games. That points to the truth about 3D experiences: unless you are trying to build immersive gaming worlds, most of us would rather just use 2D. We don't need to consume our Instagrams and TikToks in 3D to get the point. Never have. That's why these 3D world platforms never last. VR interfaces are the video phone from the 1950s: sounds really cool and sci-fi, but not a practical use case.
</p>
<p>
For a world that just spent roughly 18 months indoors and isolated, with our only social outlets being Zoom happy hours and conducting business over Zoom calls, I think what most people want right now is less computer time and more face time in real life, or whatever is left of it. That's what makes this pathetic attempt to revitalize a company's image by showcasing some bright new vision of the future read as a transparent attempt to distract rather than deliver. It feels like the "Metaverse" is just some sort of escapist fantasy for Zuckerberg to run away from the problems he created through his belief in excessively permissive First Amendment rights.
</p>
<p>
Modern social media is almost completely unregulated and unmoderated. Sure, Facebook has standards of conduct and employs armies of people to moderate Facebook and Instagram, but it falls way short of effectively policing them. It took the near overthrow of the US government to force it to deplatform a President, which shows every other bad actor on Facebook, Twitter, etc. where the line is: very far away for almost all of them. These problems won't go away with a new platform, and besides, what army will moderate the "Metaverse"? Misinformation and hate speech abound on the platform every minute of the day, and whatever Facebook is doing hasn't come close to being a solution to the problem. Bot armies reign supreme, ready to defame and spew nonsense 24-7, 8 days a week. And it's largely those unchecked forces that spoiled Facebook and Twitter in the first place. Cyberbullying at scale runs <a href="https://www.washingtonpost.com/technology/2021/10/27/meghan-markle-twitter-hate-campaign/">rampant in smear campaigns by a vocal minority</a>, and <a href="https://www.insider.com/police-say-man-killed-texas-woman-for-voting-joe-biden-2021-9">your neighbor believes in all sorts of psychotic conspiracy theories about how you are a liberal nazi who must be assassinated</a>.
</p>
<p>
The problem is that while Facebook is home to the cesspool of humanity, so will the "Metaverse" be. While we shake our finger at Mark Zuckerberg and his abomination of a creation, he is only partly to blame, because we're the ones at the other end of the like and reshare buttons. And we'll be the exact same customers occupying "the Metaverse" if that becomes a thing. We're the ones that believe rumors, innuendo, and outright batshit-crazy conspiracy theories that pander to humanity's worst inner beliefs and fears. We are the ones that like and reshare content from the cesspool, afraid to look away from the computer for fear we might miss something. In a way we destroyed social media just as much as Zuckerberg or Jack Dorsey did. The only real sin they committed was continuing to hold fast to the idea that "no censorship" is a laudable goal, and that letting us "be our true selves" was a good idea. It wasn't. We're horrible people. If the world were an ideal place inhabited by rational and logical people, you quite possibly could hold very open, permissive First Amendment beliefs, but in reality we aren't rational nor ideal. And we don't deserve unfettered First Amendment rights. We are quite easily manipulated by the dumbest schemes. Those faults require that public spaces censor out the ones that would pollute them; see also the <a href="https://en.wikipedia.org/wiki/Paradox_of_tolerance">Paradox of Tolerance</a>. And none of that changes by peddling "The Metaverse."
</p>
<p>
This is all happening at just the time when, globally, we are waking up to how horrible we've been, and how badly Mark and other technophiles have mismanaged their platforms. Now Mark Zuckerberg is trying to save what little is left of his company's credibility. It reminds me of the early 2000s, when another company's image was falling apart because of rampant security issues: Microsoft. After Windows XP, an inordinate storm of security vulnerabilities was threatening Microsoft, and they couldn't patch fast enough. Sun had slapped Microsoft down in court over Java licensing and its "embrace and extend" philosophy, the DOJ had gone after it for antitrust, and it was basically the all-around one-million-pound bad guy of tech. Then came the promises of Vista. A new OS done right, with a new language at its heart that would fix all of these pesky security issues in one fell swoop, and deliver us into a new world of discoverable services, the .NET platform, and nirvana. The SQL database reimagined, hard disks and file systems "rethought" for "today's computing." Meanwhile the world moved on from Microsoft dominance, and old MS totally and completely missed the rise of Web 2.0 and modern browsers. We began to break away from Internet Explorer 6.0, and software as a service started to become a reality like never before. Apple introduced its OS X, and people started to filter out into other operating systems, away from the Intel-MS hegemony. Vista turned out to be a major failure, and it took a decade for MS to realize their mistake. And while that was happening, they were moved out of the way. Vista distracted Microsoft and completely enveloped them in a world of their own making. And the rest of computing just drove right past that exit to a happier and more exciting industry.
</p>
<p>
I think "The Metaverse" will be Facebook's Vista. A new vision created when a company has reached a critical failure in its product. "The Metaverse" will be a project unto itself that Facebook will dedicate untold effort towards. A project that will become so big and all-consuming that Facebook will struggle to explain what it's doing and why. It will become the project that no one actually wants, and it will lead them to take their eye off the ball, just like Microsoft did. They'll become less important in the fabric of the web economy, and will fade. They won't disappear unless governments regulate them out of existence, but they will become less important in our lives. After all, there is still a Myspace floating out there, and Facebook has enough gravity to keep floating around too. A little more relevant than Myspace, but less relevant than now.
</p>
<p>
There are definite limits to any institutions we create, even software-based ones. We can never really perfect them or continue to build them ever larger. At some point the Tower of Babel always falls apart. Will Facebook have a second act? Is Meta that second act, or just the rebound girlfriend/boyfriend? I'm betting on the latter.
</p>

<h3>Mysql and the Groovysh (2020-01-24)</h3>
I wanted to use MySQL in the Groovy shell, and I found it a bit trickier than usual. There is also almost nothing covering how to do this on the internet. The first thing is to configure your grapeConfig.xml file.
Edit your grapeConfig.xml file (for *nix ~/.groovy/grapeConfig.xml). Then drop your config in there:
<script type="text/plain" class="language-xml">
<ivysettings>
<settings defaultResolver="downloadGrapes"/>
<resolvers>
<chain name="downloadGrapes">
<ibiblio name="central" root="https://repo1.maven.org/maven2/" m2compatible="true"/>
</chain>
</resolvers>
</ivysettings>
</script>
I had to update mine because the central repo changed to require HTTPS.
Fire up the groovy shell. Then execute this command:
<pre>
<code class="language-groovy">
groovy:000> :grab 'mysql:mysql-connector-java:8.0.18'
</code>
</pre>
Now here is where things get a little weird:
<pre>
<code class="language-groovy">
groovy:000> import groovy.sql.Sql
groovy:000> sql = Sql.newInstance('jdbc:mysql://localhost:3306/someDB',
                'person',
                'none of your business!')
ERROR java.sql.SQLException:
No suitable driver found for jdbc:mysql://localhost:3306/someDB
</code>
</pre>
So that sucks. Unfortunately the driver isn't known to the DriverManager so no workie. Too close for missiles, switching to guns for this one:
<pre>
<code class="language-groovy">
groovy:000> sql = Sql.newInstance('jdbc:mysql://localhost:3306/someDB',
'person',
'none of your business!',
"com.mysql.cj.jdbc.Driver")
</code>
</pre>
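As an aside, the same explicit-driver trick appears to carry over to a standalone Groovy script. This is just a sketch I haven't run against a live server; the database name and credentials are the same placeholders as above, and note that @Grab needs @GrabConfig(systemClassLoader=true) so DriverManager can actually see the driver jar:
<pre>
<code class="language-groovy">
// Grab the driver onto the system classloader so java.sql.DriverManager can find it.
@GrabConfig(systemClassLoader=true)
@Grab('mysql:mysql-connector-java:8.0.18')
import groovy.sql.Sql

// The 4th argument names the driver class explicitly, same as in groovysh above.
def sql = Sql.newInstance('jdbc:mysql://localhost:3306/someDB',
                          'person',
                          'none of your business!',
                          'com.mysql.cj.jdbc.Driver')
sql.eachRow('SELECT 1 AS one') { row -> println row.one }
sql.close()
</code>
</pre>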
Bam! Now we're ready to start slamming some SQL in groovysh (woot).

<h3>Jenkins, S3 Copy Artifact, Deploy Plugin, and ROOT Context Tricks (2015-05-01)</h3>
<p>
I've spent several frustrating days trying to get Jenkins to deploy my application remotely to my Tomcat 7.x app server. My build setup is a pretty vanilla Java application built with Ant and Ivy. Overall I like Jenkins, and I'm trying to learn how to use it for continuous deployment, but the lack of documentation explaining some of the plugins makes it extremely frustrating. This hopefully will explain some of the subtle configuration options for these plugins. For this I've set up Jenkins with these plugins:
</p>
<ul>
<li><a href="https://wiki.jenkins-ci.org/display/JENKINS/Git+Client+Plugin">GIT Client plugin</a></li>
<li><a href="https://wiki.jenkins-ci.org/display/JENKINS/GitBucket+Plugin">GitBucket</a></li>
<li><a href="https://wiki.jenkins-ci.org/display/JENKINS/S3+Plugin">S3 Plugin</a></li>
</ul>
<p>
I have two jobs. One builds my application and uses a post-build step to publish artifacts to an S3 bucket. The second job remotely deploys the artifacts from the first job to the Tomcat 7.x server.
</p>
<p>
The 2nd job is a parameterized build with the following configuration:
</p>
<pre>
Build selector for Copy Artifact
name = BUILD_SELECTOR
Execute Shell
Command = rm -rf $WORKSPACE/build
Copy S3 Artifact
Project Name = MyApp
Which Build = Specified by Build Parameter
Parameter Name = BUILD_SELECTOR
Artifact Copy = webappname-*.war
Target Directory = $WORKSPACE/build
</pre>
<p>
A few things to note. BUILD_SELECTOR is the name of the environment variable that holds the user's selected build. The <b>artifact to copy</b> setting is not a path; it's just a pattern used to select the artifact. I execute the rm command to clean up the artifacts between successive builds.
</p>
<p>
The first problem I encountered was that the 2nd job kept failing because it said there weren't any artifacts from the 1st job. It is NOT documented anywhere that I could find, but the fix was to go back to the 1st job and check <b>"Manage Artifacts"</b> on the <b>"Publish artifacts to S3 bucket"</b> step. Once that was checked, it finally recognized the artifacts and I got the following!
</p>
<pre>
Copied 1 artifact from "MyApp" build number 301 stored in S3
</pre>
<p>
But the next problem was the deploy plugin kept failing with a very obtuse error.
</p>
<pre>
java.io.IOException: Error writing request body to server
</pre>
<p>
I found out that if I removed my application from the Tomcat 7.x webapps directory then it would actually deploy! But if I tried to redeploy it, it failed with that obtuse error. I was deploying my app to Tomcat's ROOT context, so my configuration looked like this:
</p>
<pre>
WAR/EAR = build/myapp-*.war
Context = ROOT
Container = Tomcat7x
Manager User Name = none of your business
Password = also none of your business
Tomcat URL = http://somehost
</pre>
<p>
So the clue was the following logging written out in the console output: "is not deployed."
</p>
<pre>
Copied 1 artifact from "MyApp" build number 301 stored in S3
Deploying /var/lib/jenkins/workspace/MyApp/build/myapp-1.0-301.war to container Tomcat 7.x Remote
[/var/lib/jenkins/workspace/DeployMyApp/build/myapp-1.0-301.war] is not deployed. Doing a fresh deployment.
</pre>
<p>
So after looking through the Cargo code (which the deploy plugin is based on) I found out that Cargo has to use redeploy or undeploy if a webapp is already deployed. So why is it not working? It turns out Cargo cannot handle a context of ROOT, because the Tomcat manager data doesn't list the webapp deployed on ROOT by that name! The workaround is to change ROOT to '/' (without quotes) in Jenkins and voila! It works!
</p>
<h3>Simplest Explanation of Ivy Configurations (2014-02-17)</h3>
If you are here then you are probably trying to understand Ivy's configuration concept. Quite frankly, even after getting comfortable with it I still can't understand their docs. They are freaking obtuse. Going on Stack Overflow proves frustrating as well. I'm going to try to explain this with a really straightforward example: one, because the information out there isn't good, and two, so people can throw stones at my explanation and improve my understanding. Here goes.
<br />
<h3>
What does Ivy do?</h3>
Ivy downloads dependencies and puts them into directories your Ant script will use when compiling, packaging, etc. The important part of that is Ivy downloads dependencies and organizes them. It's up to your Ant script to use them appropriately.
<br />
An ivy-module (ie ivy.xml file) has two main parts:
<br />
<ul>
<li>What dependencies do you need?</li>
<li>How do you want them organized?</li>
</ul>
The first part is configured under the <dependencies> element. The 2nd is controlled by the <configurations> element. For example:
<br />
<script type="text/plain" class="language-xml">
<ivy-module ...="">
<configurations>
<conf name="compile" visibility="public"></conf>
</configurations>
<dependencies>
<dependency name="junit" org="junit" rev="3.8.1"></dependency>
<dependency name="mysql-connector-java" org="mysql" rev="5.1.18"></dependency>
<dependency name="log4j" org="log4j" rev="1.2.15"></dependency>
</dependencies>
</ivy-module>
</script>
This is fairly straightforward. We have three dependencies. If we have an Ant build file configured for Ivy, this will download all three of these jar files and put them into the ${ivy.lib.dir}/compile/jar directory. That's great, but when we go to package our application, some of these aren't needed. For one, we don't care to ship junit with our application, so can we segment that out?
<br />
You could do this with filesets and excludes in Ant, but that is tedious and error prone. Ivy will do this for you if you know how to ask it. Ivy will put the dependencies in different directories based on whether that dependency is needed for testing, compilation, or runtime. This is where configurations start to matter. So let's change what we have so that we divide up our dependencies using configurations. Let's create a 'test' configuration for this purpose.
<br />
<script type="text/plain" class="language-xml">
<ivy-module ....="">
<configurations>
<conf name="compile" visibility="public"></conf>
<conf name="test" visibility="public"></conf>
</configurations>
<dependencies>
<dependency conf="test" name="junit" org="junit" rev="3.8.1"></dependency>
<dependency name="mysql-connector-java" org="mysql" rev="5.1.18"></dependency>
<dependency name="log4j" org="log4j" rev="1.2.15"></dependency>
</dependencies>
</ivy-module>
</script>
Ok that was easy right? Well if you run this you'll find two directories under ${ivy.lib.dir}:
<br />
<ul>
<li>${ivy.lib.dir}/compile</li>
<li>${ivy.lib.dir}/test</li>
</ul>
However, all three dependencies will be in test, and the other two will also be in compile! Doh! That's not what we wanted, so what happened?! This comes from the fact that if you don't specify a conf attribute on a dependency, it defaults to "*". Well, sort of; it's a bit more complicated, but you can think of it as matching all configs. And because those dependencies match all configs, mysql and log4j were copied to both the test and compile directories. So let's fix that.
<br />
<script type="text/plain" class="language-xml">
<ivy-module ....="">
<configurations>
<conf name="compile" visibility="public"></conf>
<conf name="test" visibility="public"></conf>
</configurations>
<dependencies>
<dependency conf="test" name="junit" org="junit" rev="3.8.1"></dependency>
<dependency conf="compile" name="mysql-connector-java" org="mysql" rev="5.1.18"></dependency>
<dependency conf="compile" name="log4j" org="log4j" rev="1.2.15"></dependency>
</dependencies>
</ivy-module>
</script>
Alright, now everything should be as we expect! But it's annoying to have to specify conf="compile" every time we add a dependency. This is where defaults come into play. Remember I said the conf attribute defaults to "*" when nothing is specified? Well, we can override that by setting defaultconf on the dependencies tag.
<br />
<script type="text/plain" class="language-xml">
<ivy-module ....="">
<configurations>
<conf name="compile" visibility="public"></conf>
<conf name="test" visibility="public"></conf>
</configurations>
<dependencies defaultconf="compile">
<dependency conf="test" name="junit" org="junit" rev="3.8.1"></dependency>
<dependency name="mysql-connector-java" org="mysql" rev="5.1.18"></dependency>
<dependency name="log4j" org="log4j" rev="1.2.15"></dependency>
</dependencies>
</ivy-module>
</script>
Alright! Now we can just add dependencies and they will always be added to the compile configuration by default! Much easier.
<br />
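For completeness, the Ant side has to ask Ivy to lay the files out by configuration; the conf names alone do nothing until retrieved. Here is a minimal sketch of that, assuming the standard Ivy Ant tasks are loaded — the target name and retrieve pattern are my own choices, not anything Ivy prescribes:
<br />
<script type="text/plain" class="language-xml">
<target name="resolve" xmlns:ivy="antlib:org.apache.ivy.ant">
    <ivy:resolve/>
    <!-- [conf] in the pattern is what fans the jars out into
         ${ivy.lib.dir}/compile, ${ivy.lib.dir}/test, etc. -->
    <ivy:retrieve pattern="${ivy.lib.dir}/[conf]/[type]/[artifact]-[revision].[ext]"/>
</target>
</script>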
<h3>Transitive Dependencies</h3>
Now, there are some complexities about Ivy that I've shielded you from thus far. And they have to do with the decisions Ivy has to make while resolving dependencies. See, when you declare you depend on A, well, A might also depend on B and C. Therefore you depend on not just A, but A, B, and C. B and C are called transitive dependencies. These are hidden from you because, using Maven's POM files, Maven (and Ivy) can figure out those transitive dependencies. And that is where the information I've shielded from you lies: in Maven's POM file.
<br />
See, Maven has a different way to section out dependencies, called scopes. And unlike Ivy's configurations, they are fixed. But when Ivy is downloading these dependencies it needs to know what scopes to use when pulling the transitive ones (are we pulling this for testing, runtime, compilation, etc.?). That should make your head spin a bit. But this is a real problem, because we have to tell Ivy how to map our configurations to Maven scopes so it knows what to pull.
<br />
Without mapping our configurations to scopes they don't really work well, so you have to understand this, but it's not that complicated once it's explained. So let's say we want to pull all of the dependencies JUnit has. We'd do the following:
<br />
<script type="text/plain" class="language-xml">
<dependencies defaultconf="compile">
<dependency conf="test->default" name="junit" org="junit" rev="3.8.1"></dependency>
...
</dependencies>
</script>
Whoa, what the heck is test->default? This looks weird, but what we are saying is: our configuration is test, and we want to map it to the default scope in Maven. This will have the effect of pulling all of junit's transitive dependencies. If we did the following:
<br />
<script type="text/plain" class="language-xml">
<dependencies defaultconf="compile">
<dependency conf="test->master" name="junit" org="junit" rev="3.8.1"></dependency>
</dependencies>
</script>
That would only pull the dependency junit directly declares, but not ITS transitive dependencies. You might use test->master if you wanted to compile against just junit, but not actually package it up in your application because it's optional: the user of your library must provide that library if they want to use that integration. The Servlet API is a good example, where you only need it for compilation but you don't need it shipped with your WAR.
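A concrete sketch of that provided-style pattern (the Servlet API coordinates here are the old javax ones from Maven Central, used purely for illustration):
<br />
<script type="text/plain" class="language-xml">
<dependencies defaultconf="compile">
    <!-- available at compile time, but its jar never ends up packaged -->
    <dependency org="javax.servlet" name="servlet-api" rev="2.5" conf="compile->master"/>
</dependencies>
</script>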
<br />
So there is the mystery of the -> operator in Ivy. It maps Ivy configurations onto Maven scopes when resolving dependencies, so Ivy knows exactly what to pull down. It's that simple.
<br />
Back to our example now, because we used the defaultconf attribute to specify compile, but we didn't map it to a scope yet. We can do that as follows:
<br />
<script type="text/plain" class="language-xml">
<dependencies defaultconf="compile->default">
...
</dependencies>
</script>
We can go further and simply specify this at the configurations level so that we don't have to specify it every time we change a conf attribute.
<br />
<script type="text/plain" class="language-xml">
<ivy-module ....="">
<configurations defaultconfmapping="*->default">
<conf name="compile" visibility="public"></conf>
<conf name="test" visibility="public"></conf>
</configurations>
<dependencies defaultconf="compile">
<dependency conf="test" name="junit" org="junit" rev="3.8.1"></dependency>
<dependency name="mysql-connector-java" org="mysql" rev="5.1.18"></dependency>
<dependency name="log4j" org="log4j" rev="1.2.15"></dependency>
</dependencies>
</ivy-module>
</script>
Notice we didn't use test->default anymore? That's because we specified the mapping at the configurations level, and all of our configs are mapped to the default scope in Maven for us.
<br />
There is a lot more to configurations that I don't fully understand, but I think this will demystify most things about them so you can start to structure your project appropriately using Ivy without trawling Stack Overflow and the Ivy docs for vague answers.

<h3>Groovy Mixins and the undocumented features of this pointer (2013-06-06)</h3>
<p>
I've been using Groovy and Grails lately and I love the platform. It's a great productivity tool. However, the docs for the Groovy language are languishing and haven't been kept up to date as the platform has evolved. One of those poorly documented evolutions is Mixins, and I'm specifically talking about dynamic Mixins. Compile-time Mixins use annotations, and there are several variations using <b>@Mixin</b> and <b>@Category</b>, but essentially they are limited in their use because you can't add a mixin to a class you didn't author. That means you have to use a different mechanism to augment 3rd party classes. This leaves either modifying the metaClass property on the class, or using the newer dynamic Mixin feature.
</p>
<p>
For example, let's say we want to add a zip method to java.util.File. This method would take this File instance and produce a zipped version of it. For files it simply compresses the file, and for directories it compresses the whole directory, returning the resulting file. Using the metaClass property we could add this as follows:
</p>
<pre>
<code class="language-groovy">
import java.util.zip.*

File.metaClass.zip = { String destination ->
    OutputStream result = new ZipOutputStream(new FileOutputStream(destination))
    result.withStream { ZipOutputStream zipOutStream ->
        delegate.eachFileRecurse { f ->
            if (!f.isDirectory()) {
                zipOutStream.putNextEntry(new ZipEntry(f.getPath()))
                new FileInputStream(f).withStream { stream ->
                    zipOutStream << stream
                    zipOutStream.closeEntry()
                }
            }
        }
    }
}
</code>
</pre>
<p>
This works well, and now you can do something as simple as new File('some/directory').zip('some_directory.zip') and boom, it writes out a zipped copy of that directory! Pretty awesome, isn't it? I think you're seeing the reason why we want to do this.
</p>
<p>
Now let's see if we can translate that into a dynamic Mixin. Here is the version in Mixin form:
</p>
<pre>
<code class="language-groovy">
import java.util.zip.*

class EnhancedFile {
static {
File.metaClass.mixin( EnhancedFile )
}
void zip( String destination ) {
OutputStream result = new ZipOutputStream(new FileOutputStream(destination))
result.withStream { ZipOutputStream zipOutStream ->
eachFileRecurse { f ->
if (!f.isDirectory()) {
zipOutStream.putNextEntry(new ZipEntry(f.getPath()))
new FileInputStream(f).withStream { stream ->
zipOutStream << stream
zipOutStream.closeEntry()
}
}
}
}
}
}
</code>
</pre>
<p>
Some small changes were made to the code. One is the static block at the top, which now mixes this class into the File object when this class is loaded. This is where Mixins for 3rd party classes could be better. Essentially I just want to use this to augment 3rd party libraries, and it could be added at compile time through a simple annotation that lets me annotate the Mixin instead of the target of the Mixin. For example, if I could use <b>@MixinTarget(File)</b> on the Mixin to augment File, it could be registered at compile time, but sadly that doesn't exist. This is why we are using runtime mixins here.
</p>
<p>
The other change was removing the delegate member. In metaClass-closure land, delegate is a magic keyword that points back to the target of the mixin, i.e. the instance your code was mixed into. In dynamic Mixin land the delegate keyword doesn't exist. However, you can refer to methods on the target class by calling them as if they were instance methods of the dynamic Mixin. Notice how File's eachFileRecurse() method is called within the mixin.
</p>
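<p>
Here is a tiny self-contained sketch of that dispatch behavior (the class names are mine, invented purely for illustration): the mixin method calls size(), which is only defined on the target it gets mixed into.
</p>
<pre>
<code class="language-groovy">
class ListStats {
    // size() is not defined on this class; at runtime it resolves
    // against the ArrayList instance the mixin was mixed into.
    boolean isBig() { size() > 2 }
}

ArrayList.metaClass.mixin( ListStats )

assert [1, 2, 3].isBig()
assert ![1].isBig()
</code>
</pre>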
<p>
This is our first clue that dynamic Mixins are different from metaClass closures. In dynamic Mixin land, delegate is not defined, so referring back to the target is undocumented! There is no discussion of how it works or how it's supposed to work. That is the point of this blog post.
</p>
<p>
Now let's say we want to add an unzip method to our Mixin. Let's look at the metaClass version first:
</p>
<script type="text/plain" class="language-groovy">
File.metaClass.unzip = { File destination ->
    ZipFile zf = new ZipFile( (File)delegate )
    Enumeration<? extends ZipEntry> entries = zf.entries()
    while( entries.hasMoreElements() ) {
        ZipEntry entry = entries.nextElement()
        File f = new File( destination, entry.name )
        if( !f.getParentFile().exists() ) f.getParentFile().mkdirs()
        new FileOutputStream( f ).withStream { OutputStream stream ->
            stream << zf.getInputStream( entry )
        }
    }
}
File.metaClass.unzip = { String destination ->
    return delegate.unzip( new File( destination ) )
}
</script>
<p>
In this example I have two overloaded versions of the unzip method. That's cool, because Groovy honors Java's differentiation of calls by type, but the crux is in the first one. It's pretty straightforward: it unzips this File instance into the destination File. See any issue with porting? That first line passes the target of the mixin, via the <b>delegate</b> keyword, to ZipFile! How can we implement that in a dynamic Mixin?! This is the confusing part. In dynamic Mixin land, what does the <b>this</b> pointer point to? Why, it points to the instance of the Mixin; in this case an instance of EnhancedFile. Well, that doesn't do us much good, does it? But what is the relationship between mixin and mixee? That gets a bit fuzzy. We could try casting this to a File; after all, it appears this is a File, because we can simply call File's instance methods as if they were inside EnhancedFile too. Let's try that:
</p>
<pre>
<code class="language-groovy">
ZipFile zf = new ZipFile( (File)this )
</code>
</pre>
<p>
But that doesn't work and throws a ClassCastException. What about using the <b>as</b> keyword to convert it?
</p>
<pre>
<code class="language-groovy">
ZipFile zf = new ZipFile( this as File )
</code>
</pre>
<p>
That actually works! And here is a simple test you can try out:
</p>
<pre>
<code class="language-groovy">
class MeMixin {
def me() {
return this
}
}
class MeTarget {
}
MeTarget.mixin MeMixin
target = new MeTarget()
println( target.equals( target.me() as MeTarget ) )
println( target.equals( target.me() ) )
</code>
</pre>
<p>
The above code will print true then false. So the <b>as</b> keyword somehow converts the <b>this</b> pointer of the Mixin into the target class, and it's the same reference as the original (that's important; it'd be pretty useless if it wasn't). Why this works, I can't explain yet.
</p>
<p>
Here is the full code:
</p>
<script type="text/plain" class="language-groovy">
import java.util.zip.ZipEntry
import java.util.zip.ZipFile
import java.util.zip.ZipOutputStream

class EnhancedFile {
    static {
        File.metaClass.mixin( EnhancedFile )
    }

    void zip( String destination ) {
        OutputStream result = new ZipOutputStream( new FileOutputStream( destination ) )
        result.withStream { ZipOutputStream zipOutStream ->
            eachFileRecurse { f ->
                if( !f.isDirectory() ) {
                    zipOutStream.putNextEntry( new ZipEntry( f.getPath() ) )
                    new FileInputStream( f ).withStream { stream ->
                        zipOutStream << stream
                        zipOutStream.closeEntry()
                    }
                }
            }
        }
    }

    void unzip( File destination ) {
        ZipFile zf = new ZipFile( this as File )
        Enumeration<? extends ZipEntry> entries = zf.entries()
        while( entries.hasMoreElements() ) {
            ZipEntry entry = entries.nextElement()
            File f = new File( destination, entry.name )
            // create any missing parent directories, not the file itself
            if( !f.getParentFile().exists() ) f.getParentFile().mkdirs()
            new FileOutputStream( f ).withStream { OutputStream stream ->
                stream << zf.getInputStream( entry )
            }
        }
    }

    void unzip( String destination ) {
        unzip( new File( destination ) )
    }
}
</script>chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0tag:blogger.com,1999:blog-1923221109868193008.post-69378982669338340702011-07-22T23:38:00.016-04:002011-08-01T09:22:35.324-04:00Now can we please raise the debt ceiling?!I wanted to look at how bad it's gotten just by looking at the numbers we're up against. What we are arguing over is the money the US takes in vs. pays out in obligations. At present the government spends $3.834 trillion, and takes in $2.567 trillion. You should already see the problem. We're spending $1.267 trillion that we don't have. So where do we get that from without raising taxes?<br /><br />By issuing more bonds, but we can't do that until the debt ceiling is raised. See, we've been doing this since the 1980s. We spend more than we take in, and to get money we sell US Treasury bonds to people to keep operating. However, the debt ceiling is a law on the books that states the US Government won't borrow more than X, and every time we reach X Congress votes to raise it to Y, sells more bonds to cover the deficit, and we keep going. And people are perfectly happy to buy them because the USA has NEVER defaulted on those obligations.<br /><br />Now of that $3.834 trillion in spending, some of it is allocated by law. By law we have to spend it. If we wanted to change it Congress would have to create a new law that cuts that spending. These are things like Social Security, Medicare/Medicaid, National Debt Interest, Income Security, and Veterans Benefits. This doesn't get discussed much because passing a law to cut these is really difficult, and politicians, on both sides, don't want to be the ones that slash these because they will be voted out. Some of these you can't do anything about, like National Debt Interest. You don't pay that and that spells default: the USA gets its AAA rating slashed, interest rates rise, babies die, and Jesus weeps. 
The $250 billion in National Debt interest is interest on all that borrowing we keep doing. For the remaining items, Social Security and Income Security are funded by specific taxes. If you cut those programs it doesn't help, because those special taxes can't be used to pay for other spending. That is illegal. So what does that leave? Medicare, Medicaid, and the discretionary budget as places you can cut. I'm leaving Veterans Benefits out of it because it's $68 billion, which even if you completely cut it to zero would contribute squat, and the person who cut it would make Casey Anthony look like Mother Teresa.<br /><br />What we're really talking about is the Discretionary Budget, which in 2011 is $1.415 trillion, of which 63% ($895 billion) is spent on Military spending and 37% ($520 billion) is spent on non-Military spending. In 2004 the Discretionary Budget was $782 billion: 51% ($399 billion) for Military Spending and 49% ($383 billion) for non-Military Spending. That's an 81% increase in the budget in 7 years. You'll also notice how much the military percentage of the pie has increased. That means it's rising at roughly 8.8% per year, nearly three times the typical 3% inflation rate. But the more disturbing trend is that military spending has increased over 12% per year while non-military spending rose only about 4.5% per year. Why is that important? Because Discretionary Military spending is the single largest expense the American government pays out, hence if we really want to make serious cuts it has to start with military spending.<br /><br />If we didn't want to raise the debt ceiling we'd need to come up with $1.267 trillion by cutting spending or raising taxes. If we didn't want to raise taxes and you don't want to cut the non-Discretionary items, then we'd need to cut $1.267 trillion from the $1.415 trillion Discretionary budget. That would leave $148 billion for the government (both military and non-military) to run on. 
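The percentages in this post can be re-derived from its own budget figures (all in billions of dollars) with a few lines of Java, assuming compound annual growth:

```java
public class BudgetGrowth {
    // Compound annual growth rate between two figures over a number of years.
    static double annualGrowth(double start, double end, int years) {
        return Math.pow(end / start, 1.0 / years) - 1;
    }

    public static void main(String[] args) {
        // Figures from the post, in billions of dollars (2004 vs. 2011).
        System.out.printf("deficit: $%.0f billion%n", 3834.0 - 2567.0);
        System.out.printf("tax increase to close it: %.1f%%%n", 100 * 1267.0 / 2567.0);
        System.out.printf("total discretionary: %.1f%%/yr%n", 100 * annualGrowth(782, 1415, 7));
        System.out.printf("military: %.1f%%/yr%n", 100 * annualGrowth(399, 895, 7));
        System.out.printf("non-military: %.1f%%/yr%n", 100 * annualGrowth(383, 520, 7));
    }
}
```

Recomputing this way gives a $1,267 billion deficit, a roughly 49% tax increase to close it, and growth of about 8.8% per year for the whole discretionary budget (about 12.2% military, 4.5% non-military).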
Our government couldn't function, no matter how much the Tea Party wishes that were true.<br /><br />What if we consider the full budget for cutting funding? In order to cut spending enough, so we don't have to raise taxes, we'd need to cut 58% from Discretionary Military spending, Discretionary Non-Military spending, Medicare, and Medicaid. If we included Income Security in those cuts we could get it down to 46% cuts across the board. And if we included Social Security it'd be around 36% cuts across the board.<br /><br />Ok, so let's look at what we'd need to do to raise taxes to cover it. In order to get $1.267 trillion more we'd need to increase taxes by 50%! A 50% tax increase would cover the deficit without cutting any spending. Now if you thought cutting spending to cover it was insane, raising taxes by 50% is bonkers. I can't afford a 50% tax hike, and I bet neither can you, and corporations would get a shock so bad Wall Street would absolutely freak their shit. And send their K Street soldiers to figure out a way to shirk their responsibility. Yep, same song, different verse. So even if you could pass the bill, I bet they couldn't collect on those taxes.<br /><br />Those are the two extremes of the argument. You can't cut your way to a balanced budget, and you can't tax your way to one either. However, getting really serious about fixing those problems means serious cuts and serious tax hikes. Looking at closing loopholes to raise revenue, and cutting spending, is the only way you could reasonably do it. But again, there's no perfect answer given the constraints. It will still require serious cuts and tax hikes. Even raising taxes 10%, you'll need to cut $1 trillion in spending across the board. That is going to be very hard. What about the Bush tax cuts? Even rolling those back will only add $300 billion-ish in revenue.<br /><br />The easiest way out is to raise the debt ceiling, because defaulting will have tremendous consequences. 
And to think it will get worked out if we miss the Aug 2nd deadline is a farce, because we're already on borrowed time. This thing was supposed to get wrapped up 6 months ago, and the Treasury did some funny accounting to get more time. They've been in a stalemate since then. So if they can't figure it out in 6 months, what makes you think they'll figure it out in another 6 months when the Treasury is out of money? They've been living on life support for 6 months.<br /><br />So given all of the facts, can we please just raise the debt ceiling? My 401K doesn't need a 3rd shot to the junk in 10 years.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0tag:blogger.com,1999:blog-1923221109868193008.post-5160611523004366112011-03-28T20:42:00.005-04:002011-03-28T21:26:02.012-04:00How Failing Fast allows you to reframe the problemI just read an article on Fast Company about how human powered flight was solved by Paul MacCready. It's really cool because it's not a software story, but it has so many similarities with software. Success centers around creating an environment where you can iterate on your idea. I like stories like this because the motto of "fail fast" gets hollow as it is overused. After a while it's hard to remember what it originally meant. Stories help re-affirm its meaning.<br /><br />In so many ways this is really the environment agile software development is trying to get you to. Agile demands a lot from your team, and the only way you can live up to the promises of agile development is to create this environment. Without it you'll just fail, or worse, just survive on far less productivity.<br /><br />No more big design up front. 
It failed for human-powered flight, it failed for cars, and it failed for software.<br /><br /><a href="http://www.fastcodesign.com/1663488/wanna-solve-impossible-problems-find-ways-to-fail-quicker">http://www.fastcodesign.com/1663488/wanna-solve-impossible-problems-find-ways-to-fail-quicker</a>chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0tag:blogger.com,1999:blog-1923221109868193008.post-5839540175966208062011-03-19T17:21:00.011-04:002011-03-19T17:55:07.759-04:00View Source and SVG on the iPadI'm playing around with SVG on the iPad, and I find it's hard to really debug even the smallest thing on it. Apple is a lot of things, but calling them a developer of great development environments would be a grandiose lie. Before I get into anything that needs a lot of explaining, I wanted to share a script for viewing the source of SVG documents on your iPad.<br /><br /><pre><br />javascript:var%20sourceWindow%20%3D%20window.open('about%3Ablank')%3B%20%0Avar%20newDoc%20%3D%20sourceWindow.document%3B%20%0AnewDoc.open()%3B%20%0AnewDoc.write('%3Chtml%3E%3Chead%3E%3Ctitle%3ESource%20of%20'%20%2B%20document.location.href%20%2B%20'%3C%2Ftitle%3E%3C%2Fhead%3E%3Cbody%3E%3C%2Fbody%3E%3C%2Fhtml%3E')%3B%20%0AnewDoc.close()%3B%20%0Avar%20pre%20%3D%20newDoc.body.appendChild(newDoc.createElement(%22pre%22))%3B%20%0Avar%20src%20%3D%20''%3B%0Aif(%20document.documentElement.innerHTML%20)%20%7B%0A%20%20%20src%20%3D%20document.documentElement.innerHTML%3B%0A%7D%20else%20%7B%0A%20%20%20var%20div%20%3D%20newDoc.createElement(%22div%22)%3B%0A%20%20%20div.appendChild(%20document.documentElement.cloneNode(true)%20)%3B%0A%20%20%20src%20%3D%20div.innerHTML%3B%0A%7D%0Apre.appendChild(newDoc.createTextNode(src))%3B<br /></pre><br /><br />To get this on the iPad follow these steps.<br /><br /><ol><br /><li>Open this page on the iPad.</li><br /><li>Select and copy all of the text in the block above</li><br /><li>Add a bookmark for this page.</li><br
/><li>Edit the 2nd field and paste the copied text in there</li><br /><li>Now open an SVG document and click your new bookmark</li><br /></ol><br /><br />This is a modified version of source code from <a href="http://banagale.com/view-source-from-safari-on-ipad.htm">Rob's Blog</a>. The only problem with Rob's version is the use of innerHTML. Unfortunately, SVG doesn't have innerHTML. This code will handle document nodes that don't have an innerHTML property by cloning them and placing the clone in a DIV element. That way we can properly get the innerHTML from there. Using this code will allow you to see both SVG and HTML source.<br /><br />Here's the source code for this bookmarklet for easy debugging if you have trouble:<br /><br /><pre class="brush: javascript"><br />var sourceWindow = window.open('about:blank'); <br />var newDoc = sourceWindow.document; <br />newDoc.open(); <br />newDoc.write('<html><head><title>Source of ' + document.location.href + '</title></head><body></body></html>'); <br />newDoc.close(); <br />var pre = newDoc.body.appendChild(newDoc.createElement("pre")); <br />var src = '';<br />if( document.documentElement.innerHTML ) {<br /> src = document.documentElement.innerHTML;<br />} else {<br /> var div = newDoc.createElement("div");<br /> div.appendChild( document.documentElement.cloneNode(true) );<br /> src = div.innerHTML;<br />}<br />pre.appendChild(newDoc.createTextNode(src));<br /></pre>chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0tag:blogger.com,1999:blog-1923221109868193008.post-9914802011404015772010-11-18T10:03:00.010-05:002010-11-18T13:55:00.618-05:00When you're doing it wrong...How do you know when you're doing it right? Most of the time I know I'm doing it right when it feels like I'm always hitting my goals, and it's getting easier than it was yesterday. Although that might be a little lie I tell myself, because it might just be that I know when I'm doing it wrong, and how bad that feels. 
If I don't have those bad feelings I know I must be doing something right. <br /><br />Here's a great example of doing it wrong. I'm at a place that loves to branch code. Most of the time they are branching because the business demands a release, but they have such a large team that in order to keep everyone "busy-ish" they have to branch. They have an idea they're doing it wrong, but they don't really have a clue as to how to do it right, so they just do what they know. The developers don't like branching, but the business doesn't give them much choice.<br /><br />Problem is, refactoring is important because the code base is pretty hard to work with. Now add it up: multiple branches + refactoring + big team = a double black diamond level of difficulty in the merges. So that's bad, but another side effect is that when a merge is going on it prevents people from modifying the repository. Nobody can use the source control system while this is happening. It's an all stop. One of these merges is going on its 5th day. That's 5 days where no one has integrated their changes, built all of the code, or synchronized with other people's changes. Now all of a sudden the choices to use SCM, continuous integration, refactoring, and small agile practices are really losing their benefit. One developer suggested we send around patches to each other while the merge is going on. We specifically picked an SCM system so we don't have to do that. Once the merge is done the SCM system is going to be hit with tons of changes, and when something breaks functionality they won't be able to easily resolve it because of the volume of changes. Now quality is suffering directly because of relentless branching. <br /><br />Funny thing is, I can't think of anyone out there that suggests branching as a technique for achieving quality. However, there are countless examples from experts who generally agree that using SCM, continuous integration, refactoring, and small changes helps overall quality. 
Why are we doing something that sacrifices those best-of-breed practices? Now the guy with the "big picture" view seems to believe it's the actual code quality that's to blame for the productivity problems and quality issues. He thinks more code reviews and education about how to write "good" code will right the ship. At some level the business is just throwing crap over the wall without any real conversations.<br /><br />This is what I call an "everything is arduous and ridiculous" environment. Everything about this place feels over-the-top hard. Why don't the people around me seem to realize this is a ridiculous way to operate? Haven't they ever had that effortless feeling of productivity? Where you're always the man, and it's just right? Sure this works in that we are producing a product, very slowly, but it doesn't feel like a success. Is it luck? Is it innate to the problem you're trying to solve? Well...maybe.<br /><br />Sure, some problems are harder than others. Building Google mail is harder than building an Android app. But I've worked on some pretty nasty Android apps. Which tells me there is a way to make an easy problem hard, and a hard problem easy. So what are we doing that makes this problem so hard?<br /><br />Sometimes it's not being smart enough. We all love algorithms, and finding that simple algorithm that just makes the problem go away is sublime. That's what we all fell in love with if we have any formal training. But those types of problems are few and far between. Mostly what we do is slog crap from one database, slap it on the glass, then slog the new crap back into the database. Rinse and repeat 1000x and you've got a product. There is no algorithm that makes that easier. If there's no algorithm then what is it?<br /><br />Technique. There's a difference between algorithm and technique, and the types of problems they are best suited for. Technique isn't going to come up with map reduce. That's algorithm. 
Technique is your choice of what you're going to use, and how you're going to use it, so the problem is easier. Technique is also about how you choose to define the problem, which means technique comes before algorithm. How can you choose an algorithm if you don't know what your problem is?<br /><br />Technique breaks down into two parts: choosing a set of tools and processes, and how you apply those tools or processes. Technique extends past the end product into the support systems that nurture how that end product is created, with bug tracking systems, source control management systems, continuous integration, user forums, etc. And those choices can have a greater effect on the end product than what you put into the product. Just reread the example above for justification.<br /><br />To some degree, we place too much emphasis on tool choice, because how you apply a tool can undermine the choice of using it. If your technique doesn't match the tool, the tool will never matter. Have two bug tracking systems because one group doesn't want to give up their existing one? Been there, real story, doesn't work, definitely doing it wrong. (Actually, that same place could probably fill a book of "doing it wrong" ideas.) As in the example above, at some point the application of those tools made the choice of SCM moot.<br /><br />In the end we need to discuss technique more passionately than specific technologies. The two do go hand in hand, but it's the technique in the end that makes the difference. So how do you know you're doing it right? When technique matters more than technology.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com1tag:blogger.com,1999:blog-1923221109868193008.post-59379330868227482692010-09-23T00:05:00.017-04:002010-11-28T11:58:51.812-05:00Flexjson meet Android<a href="http://flexjson.sourceforge.net">Flexjson 2.1</a> now supports running Flexjson on Android. 
So I thought I'd show a quick example of using Flexjson in an Android application. Hopefully this will spark some ideas about what you can use Flexjson for in your own application. I'm going to start simple by creating a quick Android app that pulls recipes from <a href="http://www.puppyrecipe.com">Recipe Puppy</a>, parses them using Flexjson, and displays them in a list. Let's get started.<br /><br />Recipe Puppy has a very simple REST API, almost too simple, that returns responses in JSON. Recipe Puppy allows you to search recipes by the ingredients they contain using the URL parameter <b>i</b>. Individual ingredients are separated by a comma, and URL encoded. Here is a simple example:<br /><br /><a href="http://www.recipepuppy.com/api/?&i=banana,chicken&p=1">http://www.recipepuppy.com/api/?&i=banana,chicken&p=1</a><br /><br />Exciting, isn't it? If you click that link you'll see the JSON response. It's a little hard to read like that, so here is a simple breakdown with a little formatting:<br /><br /><pre class="brush: js"><br />{<br /> "title":"Recipe Puppy",<br /> "version":0.1,<br /> "href":"http:\/\/www.recipepuppy.com\/",<br /> "results":[<br /> {<br /> "title":"Chicken Barbados \r\n\r\n",<br /> "href":"http:\/\/www.kraftfoods.com\/kf\/recipes\/chicken-barbados-53082.aspx",<br /> "ingredients":"chicken, orange zest, chicken, banana, orange juice, brown sugar, flaked coconut",<br /> "thumbnail":"http:\/\/img.recipepuppy.com\/602538.jpg"<br /> },<br /> ...<br /> ]<br />}<br /></pre><br /><br />This is pretty straightforward. We have a little header, and what we're really interested in is the results property, which is an array of recipe objects. So we'll create two simple Java classes to map those data members. 
RecipeResponse for the header portion, and Recipe for the objects contained within the "results" property.<br /><br />Here are those objects:<br /><br /><pre class="brush: java"><br />public class RecipeResponse {<br /> public String title;<br /> public Double version;<br /> public String href;<br /> public List<Recipe> results;<br /><br /> public RecipeResponse() {<br /> }<br />}<br /><br />public class Recipe {<br /><br /> private String title;<br /> private String href;<br /> private String ingredients;<br /> private String thumbnail;<br /> private Drawable thumbnailDrawable;<br /><br /> public Recipe() {<br /> }<br /><br /> public String getTitle() {<br /> return title;<br /> }<br /><br /> public void setTitle(String title) {<br /> this.title = title.trim();<br /> }<br /><br />}<br /></pre><br /><br />In the Recipe object I actually created a Java Bean with getter/setter, but I didn't include most of those methods. I did make a point to show the setter for the title property. Turns out some of the data coming out of Recipe Puppy contains extra newline characters in the title. To get rid of those I'm doing a trim() in the setter. Flexjson is smart enough to call the setter method if you have defined it instead of setting values directly into the instance variables. However, if you use public instance variables it will set values directly into those too. <b>This was a fix made in 2.1 with respect to using public instance variables during the deserialization process</b>. You'll be happy to know it works now.<br /><br />So let's jump to the usage of Flexjson in the Android code. We create a RecipeActivity that contains a List to display the recipes. We're going to look at the AsyncTask that loads the data using Flexjson. 
Here is the full code for that:<br /><br /><pre class="brush: java"><br /> new AsyncTask<String, Integer, List<Recipe>>() {<br /><br /> private final ProgressDialog dialog = new ProgressDialog(RecipeActivity.this);<br /><br /> @Override<br /> protected void onPreExecute() {<br /> dialog.setMessage("Loading Recipes...");<br /> dialog.show();<br /> }<br /><br /> @Override<br /> protected List<Recipe> doInBackground(String... strings) {<br /> try {<br /> return getRecipe( null, 1, "banana", "chicken" );<br /> } catch( IOException ex ) {<br /> Log.e( RECIPES, ex.getMessage(), ex );<br /> return Collections.emptyList();<br /> }<br /> }<br /><br /> @Override<br /> protected void onPostExecute(List<Recipe> results) {<br /> if( dialog.isShowing() ) {<br /> dialog.dismiss();<br /> }<br /> Log.d( RECIPES, "Loading " + results.size() + " Recipes" );<br /> recipes.setList( results );<br /> new ThumbnailLoader( recipes ).execute( recipes.toArray( new Recipe[ recipes.size() ]) );<br /> Log.d( RECIPES, "Loaded " + recipes.size() + " Recipes" );<br /> }<br /><br /> protected List<Recipe> getRecipe( String query, int page, String... ingredients ) throws IOException {<br /> String json = HttpClient.getUrlContent( String.format( "http://www.recipepuppy.com/api/?q=%s&i=%s&p=%d",<br /> query != null ? URLEncoder.encode(query) : "",<br /> ingredients.length > 0 ? URLEncoder.encode(join(ingredients,",")) : "",<br /> page ) );<br /> RecipeResponse response = new JSONDeserializer<RecipeResponse>().deserialize(json, RecipeResponse.class );<br /> return response.results;<br /> }<br /><br /> // join() helper referenced above (it wasn't shown in the original post)<br /> protected String join( String[] items, String separator ) {<br /> StringBuilder builder = new StringBuilder();<br /> for( int i = 0; i < items.length; i++ ) {<br /> if( i > 0 ) builder.append( separator );<br /> builder.append( items[i] );<br /> }<br /> return builder.toString();<br /> }<br /> }.execute();<br /></pre><br /><br />The method you're probably most interested in is getRecipe(). This method formats the URL we're going to load. It then loads that URL and passes the results returned as a JSON block to the JSONDeserializer. JSONDeserializer will take a JSON-formatted String and bind that into a Java object. In this example, we're binding into a RecipeResponse object. 
Here is how that is done:<br /><br /><pre class="brush: java"><br />RecipeResponse response = new JSONDeserializer<RecipeResponse>().deserialize(json, RecipeResponse.class );<br /></pre><br /><br />A single line of code does that. The deserialize() method performs the deserialization and binding. The first argument is the JSON String, and the second is the top level class we want to bind into. Notice we didn't have to mention anything about Recipe. Flexjson is smart enough to use the data types from the top level object to figure out any other data types contained within. So if you refer to the RecipeResponse.results instance variable you can see the List data type with a generic type. Flexjson will use generics whenever possible to figure out concrete types to instantiate. Of course polymorphism, interfaces, abstract classes, and the like cause issues with this, but we're not going into that right now. See the Flexjson home page to find out more.<br /><br />You'll notice the RecipeResponse object is returned fully populated with the JSON data, but we're really only interested in <b>response.results</b> so we just return that. It'd be nice if Recipe Puppy returned how many total pages there were in the header (hint, hint) so this could be more interesting. Anyway, it is beta. That list is then added to the ListAdapter and displayed on the screen.<br /><br />Other things Flexjson could be used for include saving state by serializing objects to JSON, and then deserializing them when Activities are reconstituted. This can be easier than writing ContentProviders to dump stuff into the database. One of my biggest gripes with Android is how hard it is to reliably send objects between Activities, because Intents require you to break everything down to primitives. With Flexjson we can simply serialize an object, put that in the Intent, and then deserialize it on the other side. 
So no more boilerplate code to flatten your objects.<br /><br />Here's a simple example serializing our recipes to the disk:<br /><br /><pre class="brush: java"><br /> File f = app.getFilesDir();<br /> Writer writer = new BufferedWriter( new FileWriter( new File( f, "recipes.json") ) );<br /> try {<br /> new JSONSerializer().deepSerialize(favorites, writer);<br /> writer.flush();<br /> } finally {<br /> writer.close();<br /> }<br /></pre><br /><br />Now I know there are people worried about performance, but when I timed the code above it ran on device in less than 40ms, which is within acceptable bounds for UI performance. If you need more performance you can cache the JSONSerializer/JSONDeserializer instance, which optimizes data type mappings so it doesn't recompute those when it serializes and deserializes. As always: measure, measure, measure.<br /><br />You've now had an introduction to how Flexjson can make it easier to work with JSON data on Android.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com1tag:blogger.com,1999:blog-1923221109868193008.post-73848195263094009002010-05-09T12:28:00.009-04:002010-05-09T17:28:14.789-04:00Cairngorm is Architectural PoisonNo one has ever accused me of shying away from sensational titles, and now's not the time to get timid. I must confess I've never been a fan of Cairngorm. My first impression of Cairngorm was that it's over-engineered. Why are there so many layers? Isn't that going to just slow you down, having to develop a View, Command, Delegate, Service, etc. for every round trip I make to the backend? Now that I'm working on a project that has gone horribly wrong, I see how the Cairngorm architecture directly contributed to the problems. We've been called in to straighten out the mess and put down a more suitable architecture. 
After understanding what the team has done, I began to see that the techniques Cairngorm purports as best practices create more work for you the longer you use them.<br /><br />Software architecture should organize your work so you can work at a higher level, closer to the problem you are trying to solve. It does this by fostering reuse in your code. It should allow you to reuse what you did yesterday to apply to today's problem, reducing the work required to get things done. As your project grows, the only way you can move quickly is through reuse. Without reuse the work grows exponentially to the point where value can't be delivered. How long does it take before your code base becomes unproductive? Ten releases? Ten years? I've seen it happen in 1 release, and less than 1 year.<br /><br />Key signs this has happened in your project are talk of rewriting your application or major refactoring. Other signs show up when your customer says something should be easy, and then developing it takes significantly more time than you'd expect. Bad architecture robs your team of the performance to deliver value. If this goes on too long your project will get scrapped, and if you're lucky you'll be allowed to start another project. Most likely you won't, because the business will be putting your project in maintenance mode while they spin up the "solution". Good architecture is quite the opposite. Easy things are easy and hard things are possible. At the core of this is the level of reuse in your project.<br /><br />Cairngorm doesn't foster reuse. It stalks it, attacks, leaves it dead, and poisons the earth to keep it from ever fostering. At the heart of this is the age-old singleton problem. Singletons are a seriously bad technique, and I wish every developer out there understood this. Using a singleton to limit a class to a single instance is not altogether bad, but using it as a locator pattern is where the serious issues begin that make your code single use. 
Unfortunately you can't limit a singleton to the good parts without accepting the very serious downsides, and this is the reason I try to avoid them at all costs: the downsides are that damaging. Cairngorm has no problem using the ModelLocator (which is a singleton) in your views (mxml). And no amount of other techniques you can introduce will overcome the problems that come with this. I don't care if you're using Code Behind, Presentation Model, or whatever. <span style="font-weight:bold;">If you use a singleton in your views you can't reuse them.</span> Anything that directly references singletons becomes single-purpose in its use as well, including anything referencing those objects, and so on and so on.<br /><br />Why is that a problem? Well, consider if we wrote DataGrid with the same techniques Cairngorm purports as acceptable practices. Let's say DataGrid.dataProvider was hard coded to look in ModelLocator.getInstance().dataProvider. Now how can you have two instances of DataGrid in your program pointing at two different dataProviders? You can't. And this is precisely the problem that leads to serious architecture problems with Cairngorm programs. Now throw in calls to getController().eventManager.addEventListener() in your views and you have a serious recipe for disaster.<br /><br />You might find my example contrived, so let me describe a more real-world scenario. Say you have a signup process that people fill out on your site, and you have a view that represents the information you want to gather. In that view you're using the ModelLocator. Now the customer wants to add a new way to sign up because they're doing an email campaign, and they'd like to pre-populate that form with details like the email address and ad campaign number passed into the view. Unfortunately, ModelLocator makes it difficult to put two different models into your view because it's hard coded to one. 
What would have been an easy task of instantiating another instance of your view has turned into creating another view from scratch. So let's say you need to do this fast, and you copy the view and make the changes to create two different views. Then the customer wants to add a field to both views. Now you need to update two places in your application. This is precisely what I mean when I say Cairngorm creates more work for you. Over time, if enough of these exist in your application your productivity will drift to zero because maintaining all of it is too much work.<br /><br />Now hopefully I've convinced you that the patterns Cairngorm suggests are not helping you, and you decide to banish ModelLocator from the views. However, the problem of getting something from the model bound into the view still exists. So what part of the Cairngorm architecture will interact with the ModelLocator and the view? Normally this would be the Controller in a traditional MVC pattern. In Cairngorm the Command is supposed to be this part, but it doesn't have access to the view. Therefore, how will it set the data properties on the view? You could do some gymnastics by passing references through the FrontController into the Commands, but at this point I'm taking some serious liberties to modify Cairngorm's architectural design to make it work. If the architects of Cairngorm had realized this, their examples would have shown how to do it. 
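The signup-form story boils down to a few lines. Here is a hypothetical sketch in plain Java rather than ActionScript (all names invented) contrasting a view hard-wired to a singleton locator with the same view taking its model as a parameter:

```java
public class LocatorDemo {
    // A singleton locator, in the style of a ModelLocator (hypothetical names).
    static class ModelLocator {
        private static final ModelLocator INSTANCE = new ModelLocator();
        static ModelLocator getInstance() { return INSTANCE; }
        String signupModel = "default signup form";
    }

    // A view hard-wired to the locator: every instance renders the same model.
    static class HardWiredView {
        String render() { return ModelLocator.getInstance().signupModel; }
    }

    // The same view with its model passed in: each instance can differ.
    static class ReusableView {
        private final String model;
        ReusableView(String model) { this.model = model; }
        String render() { return model; }
    }

    public static void main(String[] args) {
        System.out.println(new HardWiredView().render());                     // always the same
        System.out.println(new HardWiredView().render());                     // always the same
        System.out.println(new ReusableView("email campaign form").render()); // per-instance data
    }
}
```

Two HardWiredView instances can never render different data, while ReusableView can be instantiated once for the normal signup and once for the email campaign.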
But singletons have all the same problems that global variables do, and by and large most developers realize global variables are poison if not very carefully used.<br /><br />Cairngorm is the EJB of Flex: over-engineered. Expect there to be serious changes to Cairngorm in the future to save the marketing that Adobe has done with clients, just like Sun did with EJB 3. Sun had overemphasized the benefits (if there ever really were any) of using EJB, and once the community realized EJB was over-engineered and more trouble than it was worth, Sun had no choice but to hire Hibernate's creator to design EJB 3.0 and start begging for forgiveness. Adobe will have to do the same.<br /><br /><h3>GreenThread: Problems with Recursive Functions (2010-03-08)</h3>In previous blog posts on GreenThreads I mentioned that one downside of using GreenThreads is that you can't write recursive functions. In one of the comments I was asked to expand on this idea, and after the comment got so long I figured a blog article might be a better forum for the topic. I'm going to discuss the issues with regular recursive functions, then we'll explore the differences between two types of recursive functions, and finally look at potential changes that could make it easier to write recursive GreenThreads.<br /><br />Let's start by examining the following function:<br /><br /><pre class="brush: as3"><br />public function factorial( i : int ) : int {<br /> if( i == 0 ) return 1;<br /> return i * factorial( i - 1 );<br />}<br /></pre><br /><br />It's like the "hello world" of recursive functions. What makes a function recursive is the fact that factorial() calls itself in evaluating its value.
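<br /><br />For contrast, here is the same computation written as a plain loop (a sketch of my own, not from the GreenThread code base). Between any two passes of the loop body there is a natural point where a scheduler could stop, let Flash paint, and resume on the next frame:<br /><br /><pre class="brush: as3"><br />public function factorialIterative( i : int ) : int {<br /> var result : int = 1;<br /> // Each trip around this loop is an independent unit of work,<br /> // so execution could pause between any two iterations.<br /> while( i > 0 ) {<br /> result *= i;<br /> i--;<br /> }<br /> return result;<br />}<br /></pre><br /><br />The recursive factorial() above has no such seam.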
If we were to run it in a GreenThread there's no way for the system to interrupt the function calls should the function take longer than the length of a frame. For example, say you ran factorial(5) in the GreenThread. The call stack will look like the following:<br /><br /><pre class="brush: plain"><br />factorial(5) -> factorial(4) -> factorial(3) -> factorial(2) -> factorial(1) -> factorial(0)<br /></pre><br /><br />There's no way to let Flash insert a paint in between factorial(3) calling factorial(2) because factorial(3) calls factorial(2) directly.<br /><br />The way the GreenThread framework works is that it handles repeatedly calling your GreenThread until the time has elapsed for a single frame. At that point it lets Flash have control again, then resumes on the next frame, repeating this process until your GreenThread says it's finished. This is implemented using a big loop outside your GreenThread.<br /><br />It gets even harder for recursive functions. Look back at the factorial() function. Notice that factorial(5) has to compute factorial(4) before multiplying by 5 so it can return its value. Therefore, it's not possible to break out of the function call, allow Flash to paint, then resume within the function so it can multiply by 5 to finish the computation. (Not unless Flash supported continuations, but that's a whole other topic.) So recursive functions can't be interrupted because the function directly calls itself, and, depending on how you write your recursive function, it's not possible to insert a break because of operations that come after the recursive call finishes.<br /><br />There are other issues with recursive functions, but the reason they can't be used in a GreenThread is this dependency between stack frames. However, there is another style of recursion that can help eliminate those dependencies.
Let's rewrite our recursive function to remove the dependency on local operations:<br /><br /><pre class="brush: as3"><br />public function factorial( i : int, accumulator : int = 1 ) : int {<br /> if( i == 0 ) return accumulator;<br /> return factorial( i - 1, i * accumulator );<br />}<br /></pre><br /><br />Now notice that factorial(5) doesn't have extra work to run after factorial(4,5) returns like we did before. This means the call factorial(5) could be replaced by the return value of factorial(4, 5). In fact factorial(5) == factorial(4,5)! This technique is called tail recursion, and in certain languages it lets the runtime reuse stack frames so that deep recursion doesn't overflow the stack. Actionscript doesn't perform that optimization, but tail recursion will still let us work around the second problem we have. We still have our original problem, though, so we'll have to tackle that before we're done.<br /><br />We still have factorial(5) directly calling factorial( 4, 5 ), so Flash can't interrupt the function calls in order to paint. However, what if we had a special call that would delay invoking factorial(4,5), let Flash do whatever it wanted, then resume our recursive function?<br /><br />Well, there exists such a function: callLater(). callLater() can be used to schedule a function to be called in the next frame, and in fact from all the testing done it's safe to use callLater() as a technique for implementing GreenThreads. However, using it directly suffers from poor performance because of the long waits between function calls. So, let's assume there is a new function in GreenThread that acts like callLater(), but achieves better performance.
Now our recursive function could look like:<br /><br /><pre class="brush: as3"><br />public function factorial( i : int, accumulator : int = 1 ) : Boolean {<br /> if( i == 0 ) return false;<br /> return invokeOnThread( factorial, i - 1, i * accumulator );<br />}<br /></pre> <br /><br />Now invokeOnThread() doesn't exist in the current code base, but it could be written. Actually the accumulator would probably be best served as an instance variable within your GreenThread, and we'd need to change some more features to fit within the framework. Assuming that's done, we could support recursive functions given these constraints:<br /><br /><ul><br /> <li>You must write your recursive calls so they conform to tail recursion.</li><br /> <li>You must use invokeOnThread() to recursively call your function.</li><br /> <li>You must conform to the contracts of the GreenThread framework.</li><br /></ul><br /><br />The upside is the ability to think recursively. While every algorithm can be expressed either iteratively or recursively, it's not always easy to convert between the two forms. Some algorithms are easier to express using recursion and can be very hard to write iteratively. The downside is the requirement to write tail recursive algorithms, which can be difficult for the uninitiated, but it's a skill that can be honed.<br /><br /><h3>What if HTML wasn't Top Dog? (2009-09-11)</h3>What if HTML wasn't the top node in our web pages? Sounds strange, right? But what if it was just another node inside a larger structure? HTML is great for defining textual documents where you want page flow layout. However, it is really painful to use as a general purpose UI layout language, which I would argue is the more common need these days.
Even the simplest blogs, forums, or search pages have some form of application layout involved. Why is it so hard with HTML? A lot of that derives from page flow layout and legacy support of that concept. But if we embedded HTML in a larger structure we could do whatever we wanted.<br /><br />What sucks most about HTML? I would argue it's all the time I waste trying to get the layout I want. It took probably six months or more of learning CSS and HTML 4 before I felt comfortable with them, and I could come close to the layout I had in my mind. However, as soon as I switched browsers my beautiful layout went to crap, and I had to dig into arcane browser hacks to make it work. Who enjoys that?<br /><br />How much of that is the complexity of the CSS rules and HTML? Ever tried, or thought about, creating a browser? It's NOT easy. I find the interaction between CSS and HTML arcane as a web designer, and if I find it hard then it's really hard for the browser developer to get it right. And that's precisely what we've seen: lots of inconsistency in how browsers interpret the meaning of things, leading to browser incompatibilities. If something is simple to understand then it's simple to implement, and if it's simple to implement it's easier for two people to come to a common expectation.<br /><br />Let's get specific. Say I wanted my node to be positioned relative to its parent: I want to set the top left corner of an element to be 50 pixels from the left and 50 pixels from the top. In CSS I can set the left and top of my element, but I also have to set the child's position to absolute and the parent's to relative. This is a common practice in other UI toolkits, but it's complex in HTML. What if all I did was this:<br /><br /><pre class="brush: xml"><br /><application><br /> <box top="50" left="50" width="200" height="200"></box><br /></application><br /></pre><br /><br />Pretty simple, right?
Although this isn't that far from HTML/CSS, there are other things that aren't so easy. What if I wanted to horizontally align that box relative to the parent's center?<br /><br /><pre class="brush: xml"><br /><application><br /> <box width="800" height="600" horizontalCenter="0"></box><br /></application><br /></pre><br /><br />Simple. In HTML/CSS you would use margin: auto? WTF!? Doesn't horizontalCenter make more sense? Of course it does. Try vertical centering on for size:<br /><br /><pre class="brush: xml"><br /><application><br /> <box width="800" height="600" verticalCenter="0"></box><br /></application><br /></pre><br /><br />Try that with HTML and you'll come up short, or at best with something bizarre.<br /><br />What about defining boxes that grow when the window is resized? That can be easy too:<br /><br /><pre class="brush: xml"><br /><application><br /> <box id="banner" left="0" right="0" top="0" height="50"></box><br /> <box id="leftmenubar" left="0" width="250" top="50" bottom="0"></box><br /> <box id="content" left="250" right="0" top="50" bottom="0"></box><br /></application><br /></pre><br /><br />Simple. The content area sets its left and right relative to the parent's edges, so when the parent grows so does the child. The leftmenubar is fixed in width, while the banner grows its width as the parent's width grows.<br /><br />Even supporting legacy HTML documents could be simple:<br /><br /><pre class="brush: xml"><br /><application><br /> <box width="800" height="600" horizontalCenter="0" verticalCenter="0"><br /> <HTML width="100%" height="100%"><br /> </HTML><br /> </box><br /></application><br /></pre><br /><br />HTML just becomes another possible node within the super document. It would create yet another box that displays text documents using what you actually want for text documents: page flow layout. HTML nodes could occur as many times as we need in our overall application.<br /><br />Furthermore, legacy HTML documents (e.g.
those starting with an HTML root) could be converted to our application format just by wrapping the application tag around the legacy HTML-only document, making all HTML documents forward compatible with application documents.<br /><br />It's a simple idea to fix the constant layout problems of the web.<br /><br /><h3>On the Importance of being Synchronous: Asynchronous + Actionscript (2009-09-03)</h3><a href="http://kuwamoto.org/2006/05/16/dealing-with-asynchronous-events-part-2/">Dealing with Asynchronous Events Part-2</a><br /><br />Damn Blogspot sucks. What the hell? Why haven't they added a single new feature in like 5 years? Trackbacks, hello???? WTF? I'm not up for cobbling a solution together with greasemonkey, yada, yada, yada. I need a new blog platform. Enough about that, let's get to code.<br /><br />Anyway, I wanted to add my fuel to the fire on asynchronous programming. This is a topic I'm very interested in because Actionscript isn't the only language suffering from it. It's rooted in classic Computer Science, so it's a deep topic. That blog post is old, but it still doesn't have a satisfactory answer. Computer scientists have been discussing this topic in one form or another since the 1970s.<br /><br />I'm very satisfied with my solution to the single request problem. By that I mean making a single round trip to the server and back.
Here is roughly how I do asynchronous calls in Actionscript:<br /><br /><pre class="brush: as3"><br />var tag : String = "Archive";<br />var loader : URLLoader = defaultLoader();<br />loader.addEventListener( Event.COMPLETE, function( event : Event ) : void {<br /> var json : Array = JSON.decode( loader.data ) as Array;<br /> var mail : Array = json.map( function( item : Object, index : int, arr : Array ) : Email {<br /> return new Email( item );<br /> } );<br /> var mailEvent : DynamicEvent = new DynamicEvent('mail.loaded');<br /> mailEvent.mail = mail;<br /> dispatch( mailEvent );<br />} );<br />loader.load( session.httpGet( '/home/email/', { tag: tag } ) );<br /></pre><br /><br />Really tight code, and it doesn't feel like the infrastructure for making the calls is in the way of understanding what's going on. I'm using several factory methods to encapsulate common error handling, the host name, authentication tokens, etc. All of this can be overridden, but having defaults keeps that code out of the flow of how you work.<br /><br />The difficult part comes when you need synchronous flow control over asynchronous calls. This only starts to show up with more than one trip to the server. Say, for example, server call 1 must complete before server call 2.
You can chain them like so:<br /><br /><pre class="brush: as3"><br />var loader : URLLoader = defaultLoader();<br />loader.addEventListener( Event.COMPLETE, function( event : Event ) : void {<br /> var json1 : Object = JSON.decode(loader.data);<br /><br /> // do something with json1<br /><br /> var nextLoader : URLLoader = defaultLoader();<br /> nextLoader.addEventListener( Event.COMPLETE, function( event : Event ) : void {<br /> var json2 : Object = JSON.decode( nextLoader.data );<br /><br /> // do something else with json2, and maybe json1<br /><br /> });<br /> nextLoader.load( session.httpPost( '/home/update', { arg1: json1.arg1 } ) );<br />} );<br />loader.load( session.httpGet( '/home/synchronize/', { hashkey: hashkey } ) );<br /></pre><br /><br />It's doable, but it's getting messy, and quite frankly a little hard to understand. Is that all we need? If so, then we can stop here and be OK. Sadly, no. The rabbit hole gets deeper and more twisted. Say we want to do server call 1, server call 2, or both, based on some conditions! And we want to maintain the order that call 1 precedes call 2 if call 1 is done. Kind of like:<br /><br /><pre class="brush: as3"><br />var result : Object = null;<br />if( someExpression ) {<br /> result = executeServerCall1();<br />}<br /><br />var result2 : Object = null;<br />if( someOtherExpression ) {<br /> result2 = executeServerCall2( result.arg1 );<br />}<br /></pre><br /><br />Now I want to stop right here and say: look how easy that was to specify in synchronous code. Junior programmers can understand that code. Conditional logic, control flow, data flow, and, more importantly, re-usability are all effortless. Just doing simple control flow between asynchronous calls is a real challenge.<br /><br />One thing I really have trouble with is refactoring logic into a re-usable method that I can call from multiple locations. In synchronous land I can wrap behavior around such a method, doing logic before and after it.
I can easily pass data in and out. All of these properties lead to reuse and powerful constructs for hiding details: the basis of easy to follow and maintainable algorithms.<br /><br />Adding logic before and after is very hard when the method uses asynchronous calls. I've decided that passing callbacks into the calls is the best route. For example:<br /><br /><pre class="brush: as3"><br />public function updateUser( user : User, callback : Function ) : void {<br /> var loader : URLLoader = defaultLoader();<br /> loader.addEventListener( Event.COMPLETE, function( event : Event ) : void {<br /> var json : Object = JSON.decode(loader.data);<br /> var updated : User = new User( json );<br /> callback( updated );<br /> });<br /> loader.load( session.httpPost( '/user/update/', { id: user.id, email: user.email } ) );<br />}<br /></pre><br /><br />I prefer callbacks to event listeners. The main reason is that event listeners are longer lived, i.e. they outlive a single method call. If you use event listeners you have to register and unregister between calls, i.e. more mess. If this method is a part of a longer-living instance, as mine typically are, you could get more than one callback happening. Callbacks are isolated between method calls so they can be independent of one another. (There's a lot to discuss here too, but I'll save that for later.)<br /><br />I'm working on my next evolution of this idea: an architecture to help make multiple round trips to the server, in order, without adding fuss, and hopefully allowing an outside person to read my code without needing a lobotomy to put my brain in their head. We will have to step away from our friend the closure for this to work. But I want to leave you with this thought.<br /><br />All of the tools we use today are aided by synchronous control flow. When we remove synchronous flow our tools fall apart.
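<br /><br />As a taste of that next evolution, here is a hedged sketch (my own invention for this post, not part of any framework; every name in it is made up) of a tiny sequencer that restores top-to-bottom ordering over callback-based calls:<br /><br /><pre class="brush: as3"><br />// Runs an Array of step functions in order. Each step receives the<br />// previous step's result plus a next() function to call when its<br />// asynchronous work completes, so the ordering reads top to bottom.<br />public function chain( steps : Array ) : void {<br /> var index : int = 0;<br /> var next : Function = null;<br /> next = function( result : Object = null ) : void {<br /> if( index < steps.length ) {<br /> var step : Function = steps[ index ];<br /> index++;<br /> step( result, next );<br /> }<br /> };<br /> next( null );<br />}<br /></pre><br /><br />A step that wants to skip its server call just invokes next() immediately, which covers the conditional call 1/call 2 example above. It's still not as readable as the synchronous version, and that gap is exactly the point of the closing thought.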
We have very few tools at our disposal to help specify complex flow using asynchronous semantics. Closures are about it, but they aren't enough and fall apart quickly. We need new constructs that aid in asynchronous control flow, possibly a way to write code that reads synchronously but is asynchronous underneath. If we had those constructs we could do this type of work independently of the things we typically think of, like threads, processes, and message passing. Those constructs could sit underneath, but we as programmers would be less involved with their presence.<br /><br /><h3>Agile Methods are Disruptive (2009-05-21)</h3>Now I might actually start a fight with that title, but at least it got you here. I recently finished reading "The Innovator's Dilemma" by Clayton Christensen. It's an amazing book that focuses on the difference between disruptive technologies and sustaining technologies. This distinction is important because it changes how your organization should develop and manage these types of technologies. Within the book he discusses some of the social forces at play that make developing disruptive technologies different from sustaining ones. I believe these same dynamics are at work in adopting agile development processes.<br /><br />In order to understand this you have to understand the difference between what is disruptive and what is sustaining. Sustaining technologies are complementary to the technologies your customers already use, so they will be easily accepted by your existing customer base. That property makes it very easy for you to develop them within your product line using your existing resources and processes. Disruptive technologies are the opposite.
They most likely won't be accepted by your customers at first, and your organization will find it extremely hard to develop them in house. Christensen's argument is that disruptive technologies only work if they are spun off into a separate organization independent of your own. They must be quarantined away from the core, or else your organization will kill them at all costs.<br /><br />It's a fascinating phenomenon, and at the end of the book he gives hints as to why. He talks about the three components that make up any organization: resources, processes, and values. This is where the book starts to sound more like a sociology study of business structure. He points out that resources are portable: people, assets, software, money, etc. They can be hired, fired, moved, procured, sold, and bought. They don't care where they are, and they can be applied anywhere you want. This is the key difference from the other two components, because processes and values are NOT portable. It's much harder to move processes and values between organizations. These properties are very important, because without them the organization would disappear. Remember, resources move in and out of an organization, but it's the processes and values that stay behind and keep it alive. This also means that changing these processes and values is next to impossible. Why? Because they are what define the organization: if you change them the organization dies and a new one comes into being.<br /><br />This got me thinking about agile environments and how they try to affect the latter two components. Agile is a process, mainly targeted at software development, and with it comes a certain set of values you must adopt or else you're going to find it very hard to follow the process.
If you don't accept the idea that high levels of communication and collaboration are much better than comprehensive documentation, then you'll find agile methods very hostile.<br /><br />The other day a group of developers were all talking about agile development. Eventually we drifted towards the difficulty we were all having trying to convert an organization into an agile one. Almost all of us felt like it was somewhere between limited success and impossible. Then it finally hit me: we're trying to do the exact thing Christensen says you can't do. Change a company's processes AND values! Not so much a company, but a development team, which like a company has processes and values.<br /><br />We all had anecdotal evidence of a lack of success in doing so. In fact, all of the organizations I know that have successfully adopted agile development were green-field starts, or they were able to convert everyone in the organization all at once. This normally meant small shops or isolated teams. My only successful attempt was on a team that was separate from the rest of the development organization and had virtually no dependencies on non-agile groups. My other attempts were with very large groups, or groups that had lots of dependencies on other non-agile groups. No surprise those all failed to reap the benefits of agile development.<br /><br />Why do groups with dependencies fail? It seemed obvious at the time, but I think another idea Christensen mentions is to blame, and that is Resource Dependence Theory. In the book Christensen explains a theory of management that says something like the following: employees (the CEO, the board, VPs, managers, etc.) aren't in control of the decisions in a company. The customers they serve are. I would add that it's not just the customers, but suppliers, partners, etc.
For example, think of the car dealers for GM and how they have crippled GM's ability to cut costs by closing dealerships through the years. The CEO could do nothing to change this until they almost went bankrupt. That's how entrenched customers can make an organization.<br /><br />This same idea of resource dependence comes into play with agile teams. If you have a lot of dependency between you and another team, you will find it increasingly difficult to be agile yourself. Why? Just as company heads aren't in charge of their companies, you aren't in charge of your group.<br /><br />Just as when you develop a new disruptive technology, you can't stop at the product development teams. You have to break everything off: sales, marketing, etc. Agile is much the same, as it requires you to adopt a new set of processes and values in how you build your products. <br /><br />Agile values are in conflict with traditional development values. For one, agile can't predict both what will be in the release and when it will be done. It can only predict one or the other, not both. Traditional development thinks it can do both, but really it can't. This points out a key difference in values: traditional shops like both pieces, or they like the idea that they might know both pieces. Traditional development believes you can reliably predict outcomes and plan for long-term success. Agile shops believe predictions are unreliable and reject the idea of long-term planning in terms of project management. Traditional shops call for loads of documentation and checkpoints. Agile groups reject documentation as wasteful and unproductive in favor of collaboration and a high level of communication. These differences in values make traditional shops find reasons to reject agile, or at best neuter it into submission.<br /><br />I remember one such conversation that illuminated how ferocious this difference in values can be.
We were in a meeting trying to explain how agile development practices work. It turned into a huge argument with one of the vice presidents about why agile practices would never work for product development. He fixated on the lack of "robust" agile procedures. He claimed that they might work in the consulting world, but don't apply to product development because product development needs more "robust" procedures. His main evidence was the problem of predictability: agile development could not predict both features and delivery schedule. The VP insisted agile development would not deliver the quality software a product company needs. In many ways it sounded exactly like the way an existing customer might attack a disruptive technology. Disruptive technologies typically don't have the same level of performance as the entrenched technology at first. The VP was making the same argument over quality that Christensen says existing customers make against disruptive technologies. Disruptive technologies, at first, don't perform as well, scale as well, or meet the high-end needs of the existing customer base, so they usually find their foothold in smaller markets with lower margins. The incumbents are often all too happy to let the disruptive technology provider enter those markets because they don't make much money from them anyway. I think it's interesting how agile development found its foothold in consulting and small teams first.<br /><br />If you want to succeed with agile development in your organization, don't try to change your existing development process into an agile one. It won't work, or if you do manage it, it will be a very frustrating and tiresome process. Better to start a separate agile organization: separate them from non-agile groups, and give them the autonomy to affect the processes outside your development organization. 
Don't see agile as just something your engineers do.<br /><br /><h3>GridGain, GigaSpaces, Windows HPC on EC2 (2009-03-12)</h3>For those of you interested in grid computing, I found an older but great post about the scalability of EC2 for grid-based applications. The thing that caught my eye was the final test using Windows HPC and Velocity. The tests were not comparable to each other, but the final test shows how much degradation you suffer when your data is stored away from your computations: in their tests, a 31x reduction in performance when the data is stored "out of the cloud". I think this really shows the importance of good redundant storage at the point of computation.<br /><br /><a href="http://highscalability.com/your-cloud-scalable-you-think-it">http://highscalability.com/your-cloud-scalable-you-think-it</a><br /><br />The good news for GridGain is the near-linear scalability up to 512 nodes in pure CPU tests. Not as high as <a href="http://www.cs.washington.edu/homes/ak/clusterworkshop/slides/YahooHadoopDISC08.pdf">2000 nodes for Hadoop</a>, but those are the only real numbers I've seen anywhere on it. It does hint that GridGain's network overhead is pretty light.<br /><br /><h3>Grid Computing: Intro To GridGain Talk is Online (2009-03-12)</h3>I finally got some time to put up the slides and source code for the talk I gave at the <a href="http://devnexus.com">Devnexus</a> conference in Atlanta.
Here is the link to the <a href="http://app.sliderocket.com/app/FullPlayer.aspx?id=0EACB07D-839A-D23F-AEFB-E7CE7494085E">slides</a>, and the source code is <a href="http://sites.google.com/site/phreeus/Home/src.zip?attredirects=0">here</a>.<br /><br /><h3>Actionscript and Concurrency (III of III) (2009-02-21)</h3>In the <a href="http://wrongnotes.blogspot.com/2009/02/actionscript-and-concurrency-ii-of-iii.html">previous article</a> we covered techniques for breaking up our long-running job, but the performance was 40x slower than if we just ran our algorithm straight out. The problem is our algorithm spends very little time doing work and a lot of time waiting for the next frame. Actionscript's performance is really quite high. We need to increase the time spent running our algorithm and minimize the time we spend doing nothing. We can do that by doing many iterations per frame instead of just one. Using getTimer() we measure how much time we've spent looping and back off right before the next frame.
Let's look at the code:<br /><br /><pre class="brush: as3"><br />public function start() : void {<br /> Application.application.addEventListener( Event.ENTER_FRAME, onCycle );<br />}<br /><br />private function onCycle( event : Event ) : void {<br /> var cycle : Boolean = true;<br /> var start : Number = getTimer();<br /> var milliseconds : Number = 1000 / Application.application.stage.frameRate - DELTA;<br /> while( cycle && (getTimer() - start) < milliseconds ) {<br /> cycle = doLongWork();<br /> }<br /><br /> if( cycle == false ) {<br /> Application.application.removeEventListener( Event.ENTER_FRAME, onCycle );<br /> }<br />}<br /> <br />public function doLongWork() : Boolean {<br /> // do some work<br /> i++;<br /> return i < total;<br />}<br /></pre><br /><br />Now we've broken our algorithm up into an extra method. First is the start() method, which we've already seen. The new method is onCycle(), which calculates how long a frame is in milliseconds. The loop continues until either doLongWork() returns false or we run out of time. The DELTA constant keeps us from eating up the entire frame; we need to give Flash a little breathing room to drain its queue. Notice how our doLongWork() method contains just the code pertaining to our job. This makes it easier to build a general purpose solution that we can reuse.<br /><br /><h3>Green Threads</h3><br />We can't use true OS threads in Actionscript, but any language can emulate threads. This technique is often called green threads, and lots of languages have used it in the past. Threads in Ruby are still green, and early versions of Java were green as well. Now Actionscript can be too. I should pause and give credit to <a href="http://blog.generalrelativity.org/?p=29">Drew Cummins</a> who implemented a version of this for Flash Player 10.
I've rewritten this to remove the dependency on Flash Player 10, changed some of the API so event dispatch is more natural, and added easy progress events and optional progress tracking. Let's see how our Mandelbrot algorithm changes when we use this.<br /><br />In order to use GreenThreads, create a subclass of GreenThread, override the run() method, and optionally override the initialize() method to add code that runs at the start. Here is an example:<br /><br /><pre class="brush: as3"><br />public class Mandelbrot extends GreenThread {<br /> private var _bitmap : BitmapData;<br /> private var _maxIteration : uint = 100;<br /> private var _realMin : Number = -2.0;<br /> private var _realMax : Number = 1.0;<br /> private var _imaginaryMin : Number = -1.0;<br /> private var _imaginaryMax : Number = 1.0;<br /> private var _shader : Shader;<br /> <br /> private var _realStep : Number;<br /> private var _imaginaryStep : Number;<br /> private var screenx : int = 0;<br /> private var screeny : int = 0;<br /><br /> override protected function initialize( ) : void {<br /> _bitmap = new BitmapData( width, height, false, 0x020202 );<br /> screenx = screeny = 0;<br /> _realStep = (_realMax - _realMin) / Number(_bitmap.width);<br /> _imaginaryStep = ( _imaginaryMax - _imaginaryMin ) / Number( _bitmap.height );<br /> }<br /><br /> override protected function run():Boolean {<br /> if( screenx >= _bitmap.width ) {<br /> screenx = 0;<br /> screeny++;<br /> }<br /> if( screeny < _bitmap.height ) {<br /> var x : Number = screenx * _realStep + _realMin;<br /> var y : Number = screeny * _imaginaryStep + _imaginaryMin;<br /> var x0 : Number = x;<br /> var y0 : Number = y;<br /> var iteration : int = 0;<br /> while( x * x + y * y <= (2 * 2) && iteration < _maxIteration ) {<br /> var xtemp : Number = x * x - y * y + x0;<br /> y = 2 * x * y + y0;<br /> x = xtemp;<br /> iteration = iteration + 1;<br /> }<br /> <br /> if( iteration == _maxIteration ) {<br /> _bitmap.setPixel( screenx, screeny, 0x000000 );<br
/> } else {<br /> _bitmap.setPixel( screenx, screeny, _shader.lookup( Number(iteration) / Number(_maxIteration) ) );<br /> }<br /> screenx++;<br /> return true;<br /> } else {<br /> return false;<br /> }<br /> }<br />}<br /></pre><br /><br />The run() method is the body of our loop. The initialize() method is called once after the user calls the start() method. After that the run() method is called repeatedly until it returns false. It's perfectly acceptable to call start() more than once to kick off the thread again after it's finished. That means you can calculate the Mandelbrot set at different zoom levels without needing to create new instances. The initialize() method will be called every time start() is called. Check out the results <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/FractalViewer.swf?attredirects=0">here</a>.<br /><br />You can also add optional progress tracking by setting the maximum and progress members. This will automatically dispatch ProgressEvents so that your instance can act as the source for a ProgressBar, which makes tracking your job easy. GreenThread also subclasses EventDispatcher so you can dispatch your own events from within the run() method.<br /><br />By and large we've solved the performance problems, or gotten very close. What's holding us back is the resolution of getTimer(). Since we only have millisecond precision we can't risk going smaller than 1 millisecond for our DELTA. That costs us a few iterations of our run() method, which can add up over 1000 cycles. We could be a full second behind Actionscript that just ran the job straight through. 
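To make the progress tracking described above concrete, here is a hedged sketch of how the Mandelbrot thread might report progress. The maximum and progress member names come from the description above; treat this as an illustration of the idea rather than the library's literal API:<br /><br /><pre class="brush: as3"><br />override protected function initialize( ) : void {<br /> // one unit of progress per pixel (assumed convention)<br /> maximum = _bitmap.width * _bitmap.height;<br /> progress = 0;<br />}<br /><br />override protected function run() : Boolean {<br /> // ... compute one pixel as before ...<br /> progress++; // GreenThread dispatches the ProgressEvent for us<br /> return progress < maximum;<br />}<br /></pre><br /><br />A ProgressBar could then simply point at the thread instance as its source.<br /><br />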
There are a few things we can do to squeeze a little more performance out of GreenThreads.<br /><br />Frame rate governs everything we do, and by default Flex applications run at 24 frames/s. Most Flex applications don't do that much animation, though, so if we dropped the frame rate in half to 12 frames/s we would be able to run for longer periods uninterrupted. The fewer interruptions we have, the faster we'll go.<br /><br />GreenThreads also allows you to configure how much of the frame's time you dedicate to running your thread. By default it's set at 0.99, which roughly leaves 1 ms to update the UI. In my experiments this has proven to work quite well without creating lots of timeouts, but if you want to tweak it just provide a new value in the start method like so:<br /><br /><pre class="brush: as3"><br />public function go() : void {<br /> start( 0.5 );<br />}<br /></pre><br /><br />If the delta is less than 1 it means a percentage of the length of a frame. If it's >=1 it means the number of milliseconds to subtract from the length of a frame. Some more thought needs to go into this so that the pause remains appropriate as your application runs on machines with different CPUs. In the future it might need to be dynamically adjusted as the algorithm runs.<br /><br /><h3>Thread Statistics</h3><br />GreenThreads supports runtime statistics for tracking your job. To turn on thread statistics pass true to the GreenThread constructor. Thread statistics collect the total time the job took, the number of timeouts, min and max iteration times, the average time a single iteration took, how many cycles it took, etc. There is a fair amount of information that can be gathered to help tune your thread. 
You can access that information by doing the following:<br /><br /><pre class="brush: as3"><br />public class SomeJob extends GreenThread {<br /><br /> public function SomeJob() {<br /> super( true ); // turn on debug statistics<br /><br /> addEventListener( Event.COMPLETE, function( event : Event ) : void {<br /> trace( statistics.print() );<br /> });<br /> }<br />}<br /></pre> <br /><br /><h3>Conclusion</h3><br />There are some drawbacks to doing concurrency this way. One is that your algorithm has to be cooperative and stop processing in the middle to let Flash do its thing. That means your algorithm normally has to be rewritten to conform to this approach, which can be particularly difficult for recursive algorithms. More research needs to be done into how you might address that with the callLater() technique. The biggest drawback is that we cannot take advantage of multiple processors. No matter how much code you write, Flash runs it on a single OS thread. This is a serious disadvantage going forward because as Actionscript developers we cannot access boosts in hardware performance as cores are added.<br /><br />It's been a lot of information, but hopefully you now understand the theory behind concurrency in Actionscript, and you have a new library that helps you optimize your code. You can access the <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/Mandelbrot.zip?attredirects=0&d=1">source code here</a>, and download the <a href="http://code.google.com/p/greenthreads/">GreenThreads library here</a>. 
I look forward to hearing about what sorts of long running jobs you create.<br /><br />Full source code of the Mandelbrot set is <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/Mandelbrot.zip?attredirects=0&d=1">here</a>.<br /><br /><a href="http://code.google.com/p/greenthreads/">Download the GreenThreads library here</a>.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com9tag:blogger.com,1999:blog-1923221109868193008.post-20178915298778387952009-02-19T14:21:00.023-05:002010-11-28T12:30:24.921-05:00Actionscript and Concurrency (II of III)Now that we understand more about <a href="http://wrongnotes.blogspot.com/2009/02/concurrency-and-actionscript-part-i-of.html">how Flash works</a> internally, we can begin to talk about strategies to chop our long running job into smaller pieces. We'll render the Mandelbrot set. It's a time consuming algorithm, and it's fun when your demos make pretty pictures too. Let's look at a simple implementation.<br /><br /><pre class="brush: as3"><br />public function calculate( width : int, height : int ) : void {<br /> _bitmap = new BitmapData( width, height, false, 0x020202 );<br /><br /> var realStep : Number = (_realMax - _realMin) / Number(_bitmap.width);<br /> var imaginaryStep : Number = ( _imaginaryMax - _imaginaryMin ) / Number( _bitmap.height );<br /> <br /> for( var screeny : int = 0; screeny < _bitmap.height; screeny++ ) {<br /> for( var screenx : int = 0; screenx < _bitmap.width; screenx++ ) {<br /> var x : Number = screenx * realStep + _realMin;<br /> var y : Number = screeny * imaginaryStep + _imaginaryMin;<br /> var x0 : Number = x;<br /> var y0 : Number = y;<br /> var iteration : int = 0;<br /> while( x * x + y * y <= (2 * 2) && iteration < _maxIteration ) {<br /> var xtemp : Number = x * x - y * y + x0;<br /> y = 2 * x * y + y0;<br /> x = xtemp;<br /> iteration = iteration + 1;<br /> }<br /> <br /> if( iteration == _maxIteration ) {<br /> _bitmap.setPixel( 
screenx, screeny, 0x000000 );<br /> } else {<br /> _bitmap.setPixel( screenx, screeny, _shader.lookup( Number(iteration) / Number(_maxIteration) ) );<br /> }<br /> }<br /> }<br /> var evt : Event = new Event( Event.COMPLETE );<br /> dispatchEvent( evt );<br />}<br /></pre><br /><br />Click <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/FractalViewer_blocking.swf?attredirects=0">here</a> to see it in action. Notice how there was a pause before it actually drew the Mandelbrot set. Maybe you even got the pinwheel of death or a "Not Responding". That is what happens when you hold up the Event Queue.<br /><br />At a high level this algorithm calculates whether or not the pixel at (screenx,screeny) is inside the Mandelbrot set (i.e. x * x + y * y stays below 4). If it stays below 4 after _maxIteration passes of the inner loop, the pixel is colored black. If not, its color is based on how many times the inner while loop ran before exceeding 4. It's not as important that you understand what the algorithm is doing as it is to see the parts of the algorithm. The parts that make this a long job are the three nested loops.<br /><br /><h3>Breaking Apart Algorithms</h3><br />In order to split this job up and run it across many frames we'll need to break up those top two loops. Before we jump into that, let's talk in a little more general terms. 
Say we want to bust up a general purpose loop something like:<br /><br /><pre class="brush: as3"><br />public function doLongWork( arg1 : String ) : void {<br /> for( var i : int = 0; i < 1000000; i++ ) {<br /> // do some work<br /> }<br />}<br /></pre><br /><br /><h3>callLater() Technique</h3><br />We could use the UIComponent.callLater() method to trigger our loop. That might look like the following:<br /><br /><pre class="brush: as3"><br />public function doLongWork( arg1 : String, i : int, total : int ) : void {<br /> // do work<br /> if( i < total ) {<br /> uicomponent.callLater( doLongWork, [ arg1, i + 1, total ] );<br /> }<br />}<br /></pre><br /><br />This is nice. We've removed the for loop and replaced it with what looks like a recursive call, but it's not. What we're actually doing is a single iteration of the loop, and then scheduling the Event Queue to call us back later to do the next iteration. We do this until i reaches total, and at that point we stop.<br /><br /><h3>Timer Technique</h3><br />Another way we could restructure our code is to use a timer. Here is another way to do this:<br /><br /><pre class="brush: as3"><br />public function start() : void {<br /> i = 0; total = 1000000;<br /> var milliseconds : Number = 1000 / Application.application.stage.frameRate;<br /> _runner = new Timer( milliseconds );<br /> _runner.addEventListener( TimerEvent.TIMER, doLongWork );<br /> _runner.start();<br />}<br /> <br />public function doLongWork( event : TimerEvent ) : void {<br /> // do some work<br /> i++;<br /> if( i >= total ) {<br /> _runner.stop();<br /> _runner.removeEventListener( TimerEvent.TIMER, doLongWork );<br /> }<br />}<br /></pre><br /><br />A little more code, but this works too. Now we're scheduling a timer to call us at an interval of a single frame. That's the first line, where we calculate in milliseconds the length of a single frame. Then we register our doLongWork method with the timer and start it. We remove the listener and stop the timer once i reaches total. 
Notice that in this technique we have to move i and total into instance variables, which means we have to initialize them in some sort of start method.<br /><br /><h3>ENTER_FRAME Technique</h3><br />The final option we could use is the ENTER_FRAME event. That looks like this:<br /><br /><pre class="brush: as3"><br />public function start() : void {<br /> i = 0; total = 1000000;<br /> Application.application.addEventListener( Event.ENTER_FRAME, doLongWork );<br />}<br /> <br />public function doLongWork( event : Event ) : void {<br /> // do some work<br /> i++;<br /> if( i >= total ) {<br /> Application.application.removeEventListener( Event.ENTER_FRAME, doLongWork );<br /> }<br />}<br /></pre><br /><br />This is nice because we don't have to fiddle with any math to get called back at the frame rate. We just register a listener, and when we're done we unregister it. Not as clean as callLater(), but this works in both Flash and Flex. What's important to remember is that all of these techniques are equivalent. There is no discernible difference in performance between them.<br /><br /><h3>Restructuring Our Demo</h3><br />So now we can restructure our Mandelbrot algorithm to match one of these patterns. In our case we'll move screenx, screeny, _realStep, _imaginaryStep, and our BitmapData out of our calculate() method into instance variables, and initialize them inside calculateAsync(). 
Here's our Mandelbrot algorithm restructured:<br /><br /><pre class="brush: as3"><br />public function calculateAsync( width : int, height : int ) : void {<br /> _bitmap = new BitmapData( width, height, false, 0x020202 );<br /> screenx = screeny = 0;<br /> _realStep = (_realMax - _realMin) / Number(_bitmap.width);<br /> _imaginaryStep = ( _imaginaryMax - _imaginaryMin ) / Number( _bitmap.height );<br /> Application.application.addEventListener( Event.ENTER_FRAME, calculate );<br />}<br /><br />private function calculate( event : Event ) : void {<br /> if( screenx >= _bitmap.width ) {<br /> screenx = 0;<br /> screeny++;<br /> }<br /> if( screeny < _bitmap.height ) {<br /> var x : Number = screenx * _realStep + _realMin;<br /> var y : Number = screeny * _imaginaryStep + _imaginaryMin;<br /> var x0 : Number = x;<br /> var y0 : Number = y;<br /> var iteration : int = 0;<br /> while( x * x + y * y <= (2 * 2) && iteration < _maxIteration ) {<br /> var xtemp : Number = x * x - y * y + x0;<br /> y = 2 * x * y + y0;<br /> x = xtemp;<br /> iteration = iteration + 1;<br /> }<br /> <br /> if( iteration == _maxIteration ) {<br /> _bitmap.setPixel( screenx, screeny, 0x000000 );<br /> } else {<br /> _bitmap.setPixel( screenx, screeny, _shader.lookup( Number(iteration) / Number(_maxIteration) ) );<br /> }<br /> screenx++;<br /> } else {<br /> Application.application.removeEventListener( Event.ENTER_FRAME, calculate );<br /> }<br />}<br /></pre><br /><br />Now if we <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/FractalViewer_slow.swf?attredirects=0">ran this version</a> you'd see it's 40x slower! Why is that? Well, we're executing a single iteration at the start of every frame (~40ms). If you timed the calculate method, you'd see it probably takes less than a millisecond to complete a single iteration. So we've taken a single iteration that ran in <1ms, and now we're running it once every 40ms! 
Yikes!<br /><br />Remember when I said Flash is all about animation? Running jobs at the frame rate is great for animation, but horrible for general purpose concurrency. We need a better way. In our final part we'll discuss optimizing our technique and demonstrate a general purpose solution so that you don't have to reinvent the wheel when you need to run jobs for long periods of time. See you in the next installment.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com1tag:blogger.com,1999:blog-1923221109868193008.post-80258334675470905452009-02-19T12:51:00.023-05:002009-11-18T23:43:40.453-05:00Actionscript and Concurrency (Part I of III)This is a three part series from a talk I gave at the <a href="http://www.affug.org">Atlanta Flex and Flash User Group</a> meeting in February. Original slides are <a href="http://sites.google.com/site/phreeus/Home/actionscript-and-concurrency/Concurrency.swf?attredirects=0">here</a>.<br /><br /><div><ul><li>Part I Intro to the Event Queue</li><li>Part II <a href="http://wrongnotes.blogspot.com/2009/02/actionscript-and-concurrency-ii-of-iii.html">Techniques for Long running Jobs</a></li><li>Part III <a href="http://wrongnotes.blogspot.com/2009/02/actionscript-and-concurrency-iii-of-iii.html">Increasing Performance and a General Purpose Solution</a></li></ul></div><br /><br />I can hear it now: "Actionscript can't do that." And some of you might ask, "What is concurrency?" Concurrency goes by a lot of names, but it simply means doing two things at once, or more accurately, making progress on more than one task over the same span of time. Now I can hear what you might say: "But Flash already does that? I mean, I can animate two objects on the screen at once." While that's true, Flash is optimized for animation, not for general purpose concurrency. 
In fact the design of animation is so foundational it governs everything that happens in Flash.<br /><br /><h3>Timeline</h3><br />If you're a Flash developer I'm sure you're familiar with the timeline. The timeline is divided into a series of frames, and Flash executes those frames one after the other at a particular rate. For Flex applications the rate is 24 frames/second. If you do the math, that means each frame lasts a little more than 40ms. The timeline is very natural for designers, but for developers the concept is a little strange because it's hard to understand where your code actually executes.<br /><br /><h3>Event Queue</h3><br />For developers I think a thought experiment helps clarify the timeline. Say you were hired to implement the timeline concept. What data structure would help you in doing this task? The answer is surprisingly simple and bears a lot of resemblance to most UI toolkits out there. At the heart of the Flash platform there exists a queue of events. The Event Queue, as it's known, is where all code is triggered. Every piece of code in Flex and Flash is related back to some event being triggered. So when you move your mouse, click a button, type on the keyboard, or set a timer, an event goes onto the queue. Flash then pops those events off the queue and executes them one by one. When one event is done processing, the next event is processed. There's no way any two events can be processed at the same time. It all happens one at a time.<br /><br />In Flash even the frames from the timeline are modeled as events. The difference between them and other events is that frame events must occur at certain points in time. Unlike mouse or keyboard events, which can wait until the next frame to be processed, frame events must be processed every ~40ms (1000 ms / 24).<br /><br />That means if one event takes too long to process, Flash can't update the screen, process other events, or do anything else. 
What happens is Flash locks up and you get a pinwheel of death or a Not Responding next to your application. In fact Flash punishes such acts and stops processing mouse, keyboard, or button clicks till it catches up. Why? Well that because if you block the queue from processing it won't update that nice animation, and Flash prioritizes that over anything else.<br /><br /><span style="font-weight:bold;">Rule number one</span> is that any code you write must execute within the time span of a frame. If not you run the risk of having Flash coming down on you. This leads us to the <span style="font-weight:bold;">second rule</span> which is <span style="font-weight:bold;">Flash is not concurrent</span>. There is no way for Flash to update the UI and run your job at the same time.<br /><br /><h3>Timeslicing</h3><br />So what do you do if you have a long running job? Well the answer lies in how animation works in Flash. Animating several objects at once requires a series of many smaller steps. Say you want to move a ball from one side of the screen to the other. That means you need to move it a little, redraw, move it some more, redraw, etc until the ball is at the other side of the screen. In a way animation is chopping up a long running job into several smaller jobs that run once per frame. We can do that too by chopping up our job into many smaller jobs, and executing them a little at time until we're done!<br /><br />So how might you do this? Well there are several tricks you can use, and they all perform the same. It's more a matter of taste in which one you choose, but remember technically there is no real difference between these. 
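To make the moving-ball analogy concrete, here's a minimal sketch (the Sprite named ball is an illustrative assumption, not code from the talk) where each frame does one small slice of the total movement:<br /><br /><pre class="brush: as3"><br />ball.addEventListener( Event.ENTER_FRAME, step );<br /><br />function step( event : Event ) : void {<br /> ball.x += 5; // one small slice of the whole move<br /> if( ball.x >= stage.stageWidth ) {<br /> ball.removeEventListener( Event.ENTER_FRAME, step ); // job done<br /> }<br />}<br /></pre><br /><br />A long running computation can be chopped up the same way: replace the position update with one iteration of your algorithm.<br /><br />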
In our <a href="http://wrongnotes.blogspot.com/2009/02/actionscript-and-concurrency-ii-of-iii.html">next part</a> we'll look at each of these techniques and discuss the unique problems of doing concurrent work in Flash.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com4tag:blogger.com,1999:blog-1923221109868193008.post-59390729708431522842009-01-07T11:33:00.008-05:002010-11-28T12:32:50.043-05:00Fun with Fluent Interfaces and JavaI've written about fluent interfaces before, but I thought I'd share this one I use quite a bit. You never know how much you like something until it's gone. I never thought I really liked Java's InputStream and OutputStream that much until I had to do a lot of streaming work in Actionscript, which has no abstraction for doing stream manipulations. But let's be honest: Java's I/O first settlers haven't changed much since their introduction. In fact their interfaces have not changed one bit. Sad really, because they are so ubiquitous. I find I'm always copying data from one stream to another, dealing with IOExceptions, remembering to close streams, etc. And I got really tired of doing it over and over. What started out as static methods has evolved into a very simple object called ExtendedInputStream. Extended as in extending the interface to add richer functionality.<br /><br />The greatest single thing about InputStream and OutputStream is that they are the quintessential example of a decorator. Decorator is one of the foundational software patterns. What I love about decorators is the ability to encapsulate related classes behind a new interface while still retaining interoperability with other decorators.<br /><br />ExtendedInputStream is an InputStream so it can interact just as a plain old InputStream would, but it adds methods like copy, closeQuietly, copyAndClose, and integration with File objects, which has always been a pet peeve of mine with InputStream. 
Let's look at some examples:<br /><br />Here is copying a file to a directory.<br /><br /><pre class="brush: java"><br />new ExtendedInputStream( someFile ).copyAndClose( dir );<br /></pre><br /><br />One liner! It's amazing how the File object doesn't have these methods already, but then again this approach is much more flexible because we can copy a file to any OutputStream. Here is copying a set of files to a zip.<br /><br /><pre class="brush: java"><br />ZipOutputStream zout = new ZipOutputStream( out );<br />for( File myfile : files ) {<br /> ZipEntry entry = new ZipEntry( myfile.getName() );<br /> zout.putNextEntry( entry );<br /> new ExtendedInputStream( myfile ).copyAndClose( zout );<br />}<br /></pre><br /><br />Five lines of code! Not bad given that 4 of those lines are just to work with ZipOutputStream. Notice how I'm not saving the reference to the ExtendedInputStream here. The copyAndClose() method copies the contents of the file to the OutputStream and closes the InputStream. Closing the OutputStream is your responsibility.<br /><br />And the more general case of copying a plain old InputStream to any OutputStream:<br /><br /><pre class="brush: java"><br /> URLConnection remote = new URL("...").openConnection();<br /> remote.setDoOutput( true );<br /> new ExtendedInputStream( new URL("...").openStream() ).copyAndClose( remote.getOutputStream() );<br /></pre><br /><br />Here is a more advanced version. Say we want to pull down a URL and save it to a file on our local filesystem.<br /><br /><pre class="brush: java"><br /> File someDirectory = ...;<br /> new ExtendedInputStream( new URL("...").openStream() ).name( "SavedUrl.txt" ).copyAndClose( someDirectory );<br /></pre><br /><br />Here we use the optional method name() to set the name of the stream, so when we save to a directory it will be used as the filename. 
You could have just as easily done new File( someDirectory, "SavedUrl.txt" ), but it's not always convenient.<br /><br />You can use a similar pattern to increase the buffer size used when copying as well.<br /><br /><pre class="brush: java"><br /> new ExtendedInputStream( new URL("...").openStream() ).bufferSize( 8096 * 2 ).copyAndClose( someDir ); <br /></pre><br /><br />While I have enjoyed writing this simple class, I think I've enjoyed using it even more. I really can't start a new Java project without it now. It's a lot of fun to use. I'd be interested in hearing what other features people might want to see added.<br /><br /><pre class="brush: java"><br />package com.wrongnotes.util;<br /><br />import java.io.*;<br /><br />public class ExtendedInputStream extends InputStream {<br /><br /> private InputStream delegate;<br /> private String name;<br /> private int bufferSize = 8096;<br /><br /> public ExtendedInputStream( InputStream stream ) {<br /> this( "no_name_file", stream );<br /> }<br /><br /> public ExtendedInputStream(String name, InputStream delegate) {<br /> this.name = name;<br /> this.delegate = delegate;<br /> }<br /><br /> public ExtendedInputStream( File src ) throws FileNotFoundException {<br /> name = src.getName();<br /> delegate = new BufferedInputStream( new FileInputStream( src ) );<br /> }<br /><br /> public int read() throws IOException {<br /> return delegate.read();<br /> }<br /><br /> public int read(byte b[]) throws IOException {<br /> return delegate.read(b);<br /> }<br /><br /> public int read(byte b[], int off, int len) throws IOException {<br /> return delegate.read(b,off,len);<br /> }<br /><br /> public long skip(long n) throws IOException {<br /> return delegate.skip(n);<br /> }<br /><br /> public int available() throws IOException {<br /> return delegate.available();<br /> }<br /><br /> public void close() throws IOException {<br /> delegate.close();<br /> }<br /><br /> public synchronized void mark(int readlimit) {<br /> 
delegate.mark(readlimit);<br /> }<br /><br /> public synchronized void reset() throws IOException {<br /> delegate.reset();<br /> }<br /><br /> public long copy(File dest) throws IOException {<br /> if( dest.isDirectory() ) {<br /> dest = new File( dest, name );<br /> }<br /> FileOutputStream out = new FileOutputStream(dest);<br /> try {<br /> return copy( out );<br /> } finally {<br /> out.close();<br /> }<br /> }<br /><br /> public long copy(OutputStream out) throws IOException {<br /> long total = 0;<br /> byte[] buffer = new byte[bufferSize];<br /> int len;<br /> while ((len = this.read(buffer)) >= 0) {<br /> out.write(buffer, 0, len);<br /> total += len;<br /> }<br /> out.flush();<br /> return total;<br /> }<br /><br /> public void closeQuietly() {<br /> try {<br /> close();<br /> } catch( IOException ioe ) {<br /> // ignore<br /> }<br /> }<br /><br /> public void copyAndClose( File file ) throws IOException {<br /> try {<br /> copy( file );<br /> } finally {<br /> close();<br /> }<br /> }<br /><br /> public void copyAndClose(OutputStream out) throws IOException {<br /> try {<br /> copy( out );<br /> } finally {<br /> close();<br /> }<br /> }<br /><br /> public ExtendedInputStream bufferSize( int size ) {<br /> bufferSize = size;<br /> return this;<br /> }<br /><br /> public ExtendedInputStream name( String newName ) {<br /> name = newName;<br /> return this;<br /> }<br />}<br /></pre>chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com2tag:blogger.com,1999:blog-1923221109868193008.post-72407105263385600602008-12-17T17:13:00.007-05:002009-02-19T16:33:45.309-05:00AC/DC Black Ice Tour: AtlantaSo my ears are still ringing even as I rock out to AC/DC at work. Last night AC/DC rocked Atlanta for a sold-out crowd. The set list was the typical one they've been playing in other cities. The concert started with a mini-movie segment of Angus driving a train, with two very slutty women coming into the cab to derail the train. 
It's chock-full of not-so-subtle innuendo and general male humor. I'm not so sure why the women felt it necessary to try and stop the train; they looked like groupies, so their motive goes unexplained. The overt sexual references and general absurdity of the piece had me laughing. Of course it ended in a huge on-stage explosion and pyrotechnics display; a huge train smashed through the stage and out popped AC/DC playing their latest, "Rock 'N Roll Train". It was an awesome entrance.<br /><br />Then they followed it up with a Bon Scott original, "Hell Ain't A Bad Place To Be". While those are some of my favorite AC/DC tunes, I really wanted them to play "It's a Long Way To the Top (If You Wanna Rock 'N' Roll)" and "Rock 'N' Roll Singer", which are probably my favorites. Well, that's not true; I always have trouble choosing my favorites, but I do play those a lot.<br /><br />Then they broke out "Back in Black", which I thought they played too early. This is their get-your-damn-hands-up anthem, and it definitely got the crowd going. It's hard to pick what they could have shuffled around because it's a solid set, but "Back in Black" is just such a powerful song it's got to be deeper in the set list.<br /><br />Then "Big Jack" off their new album, followed by "Dirty Deeds Done Dirt Cheap", followed by "Thunderstruck", which was my wife's favorite. I do have a warm spot in my heart for "Thunderstruck". Thank god her favorite wasn't "Dirty Deeds".<br /><br />Then it was back to the new album for "Black Ice", and at this point I thought I'd have to go to the bathroom. But I stuck it out, and was treated to another movie with "War Machine". My favorite part was the parachuting women walking on the tank treads. Hilarious.<br /><br />Then it was back to the hits with "Anything Goes" and "You Shook Me All Night Long", with a flaming model dancing on the screens. Cue the 5 ladies in the audience to jump up and start dancing. 
While a lot of AC/DC songs are about women, I can tell you there aren't many who listen to them. Then it was into "T.N.T.", which rocked.<br /><br />Other highlights of the night were Angus' guitar solo atop an elevated stage, and the gigantic inflatable Rosie that rode the train during "Whole Lotta Rosie". Finally it was real cannon fire with "For Those About to Rock (We Salute You)". There was one long encore ending with "Highway to Hell" to finish the night. I don't know if my ears could have taken any more. It rocked them off.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0tag:blogger.com,1999:blog-1923221109868193008.post-25004360533651284132008-08-09T12:39:00.008-04:002008-08-09T22:31:14.765-04:00Rails on IntelliJAs I promised, here's my verdict on Ruby on Rails with IntelliJ. First and foremost, I haven't gone back to RadRails/Aptana Studio. I've been using IntelliJ's Ruby plugin for 2 weeks now, and I still really like it. It's stable and very peppy; much more so than Java editing, I hate to say, though the Rails plugin doesn't have as many features as mainstay Java. Just the editor alone is so much nicer than Eclipse: hitting Home goes to the beginning of your code on a line, not the start of the first column. And it doesn't just crash periodically like Eclipse does; Aptana Studio was really flaky. I've only shut down IntelliJ to reboot my machine for Apple updates, so it's very stable.<br /><br />By and large it covers most of Rails and Ruby development. You get all the comforts of home with generator scripts. Just right-click on your project > New > Controller and voilà, it runs the Rails generator, creating all of the parts you'd expect: the controller, its helper, tests, the controller's view, and such. You have access to all the generators you'd expect: Controller, Model, View, Migration, plain old Ruby, etc. And it works just like you'd expect with IntelliJ. <br /><br />One drawback is undo. 
It looks like they tried their best to support undoing a generator script, but all too often it gives up, saying "Undo is too complex" or something like that.<br /><br />Syntax highlighting is a go. Nothing really unexpected there: Controllers, Models, Views, and even a YAML editor.<br /><br />Code completion: well, it doesn't work that well in Aptana, and it's not too good in IntelliJ either. I've gotten to the point where I just don't use it. I think if they were to try and make this work it would have to involve some sort of smart guessing about what you're doing. But in the end code completion just isn't going to work for dynamic languages. Oh well.<br /><br />Refactoring gets the same verdict as code completion. It does support move, copy, rename, and migrate. But these aren't the fire-and-forget refactorings you love with Java. Always preview...always. Sorry, that's just what you have to give up when working with agile dynamic languages.<br /><br />Navigating between files is just as you'd expect with Ctrl-N and Ctrl-Shift-N (Splat-N and Shift-Splat-N for Mac users). They both work well, and I go back and forth between the class version and the file version. The class version works for Controllers, Models, Tests, and Migrations. The file version works for views and pretty much everything else. It's a little weird having these different ways to navigate, but if you're an average IntelliJ user you won't even notice. It also still carries the "Include Java Files" check box when searching in Files. That should probably be removed.<br /><br />There are two views I usually go between: the normal Project view and a special Rails view. When I do Java I never switch from the Project view, but the special Rails view is really quite nice. It pulls all of your code together, showing your controllers, models, and tests. Under models you can access your migrations right there, which is very nice. 
It's always a pain to navigate to the db folder to find your migrations. It also pulls in your public folder for direct access to static resources. But it's strangely absent of views, and this is probably why I find myself switching back to the Project view. You can't see your view files (rhtml, rxml) from the Rails view. This is something I really wish they'd fix. It doesn't even make sense why they left that out.<br /><br />You also have access to Rake tasks. Just right click on your project view > Rake. From there, a bunch of fold-out menus let you run Rake tasks. It's really nice to see all of the options you have. I'm always finding new tasks I didn't know I got for free. The ability to view all of the Rake tasks is very important given that the Rails geeks don't document very well. But trying to right click and then navigate all of those fold-out menus is a futile exercise in mouse dexterity. I'd rather see a Rake view similar to the Ant and Maven docking views. That would be really sweet. Then I could just type portions of my Rake tasks and quickly run them with the find feature, just like I do with Ant.<br /><br />I normally use script/server inside my own terminal to run the Rails server, but IntelliJ supports running the server inside the IDE. You have your choice of Mongrel, WEBrick, and Lighttpd. You can run your server in the various environments, but you have to specify that in the server arguments field. You can save various versions (development, production, etc.) as separate run configurations and just swap between them in the drop-down box. You can run Ruby scripts, Ruby tests, the Rails server, and RSpec.<br /><br />The thing I miss is the ability to run script/console from within the IDE. I think you could make IntelliJ do this, as it will run normal Ruby scripts. I tried, but it failed with an exception related to readline. 
It probably has something to do with the fact that I rebuilt Ruby to include the readline library, and IntelliJ just can't load it.<br /><br />I think they could make this experience a little easier by pre-populating your project with run configurations for development, test, production, and the console. That would be a nice enhancement.<br /><br />Finally, the coup de grâce for RadRails is that the Rails plugin comes packed with inline template completion (remember sout?). Think TextMate-like editing! The best feature of TextMate is its fast abbreviations for common tasks in Rails. Well, IntelliJ users have nothing to covet. Type vc Ctrl-J and you get a validates_confirmation_of. Type rec for redirect_to with a controller. Type rf for render file. There are tons of these. Most are targeted at Ruby, but there are some Rails specifics. I'd like to see more migration templates, but the good news is you can add them.<br /><br />Overall, IntelliJ's Rails plugin covers everything you need. I'm not switching back to Aptana/RadRails, period. I think there are some things they can improve, but overall it's the same quality you've come to love from Java development with IntelliJ. It's really one of the best Rails environments available to you.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com3tag:blogger.com,1999:blog-1923221109868193008.post-30999430101460833602008-07-26T13:16:00.004-04:002008-07-26T13:23:05.359-04:00Location of the Ruby SDK and IntelliJ on MacI've been getting back into Rails development again, and I've been trying to find the right IDE to work in. I'm an IntelliJ fan, and I hate Eclipse. But the last time I was using Rails I used RadRails, and it was OK. However, now IntelliJ 7 has support for Rails development, and I thought I'd try it out.<br /><br />The first thing I ran into when configuring my project was setting up the Ruby SDK. 
Much like a Java project needs to know where the JDK home directory lives, Ruby projects need to know where the Ruby SDK lives. The only problem is that on a Mac or Linux box those aren't so simple to find. I had also <a href="http://hivelogic.com/articles/2007/02/ruby-rails-mongrel-mysql-osx">rebuilt Ruby</a> so I could get readline support, and installed it under /usr/local.<br /><br />I tried /usr/local/lib/ruby; that didn't work. I tried /usr/local/lib/ruby/1.8; that didn't work. Finally, after stumbling around without any success, I tried <span style="font-weight:bold;">/usr/local</span> and voilà, it worked.<br /><br />If you didn't rebuild Ruby and are using the default Mac install, it would be <span style="font-weight:bold;">/usr</span>. Make sense? No, not at all, but hopefully this blog post will help some peeps in the future.chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com3tag:blogger.com,1999:blog-1923221109868193008.post-86789023761966488112008-04-07T11:28:00.010-04:002010-11-28T12:38:38.550-05:00Objects as Functions Part II: A lightweight web app validation utilityBinding and validation is something almost all web frameworks must have a solution for. Some are more elegant than others, but the problem with most of them is that they are tied specifically to the use of that framework. I've yet to see a reusable utility that is framework agnostic for handling this. This is another take on the <a href="http://wrongnotes.blogspot.com/2008/01/objects-as-functions.html">Objects as Functions</a> post I did a while back. Only now I'm applying it to binding and validation for the web. The results are usable by any Java developer using any framework they want, creating yet another instance of the idea "From Frameworks to Object Oriented Utilities."<br /><br />I had some code that I had written a while back where I coded the binding and validation by hand. In other words, it was just a bunch of if-else statement ladders. 
It resembled something like the following:<br /><br /><pre class="brush: java"><br />setEmail( request.getParameter("email") );<br /><br />List<String> errors = new ArrayList<String>();<br /><br />if( isNotSpecified( getEmail() ) ) {<br /> errors.add("Email is missing.");<br />} else if( isNotSpecified( confirmEmail ) ) {<br /> errors.add("Confirm email is missing.");<br />} else if( !isValidEmailFormat( getEmail() ) ) {<br /> errors.add("Email address provided is not valid.");<br />} else if( !isValidEmailFormat( confirmEmail ) ) {<br /> errors.add("Confirm email address provided is not valid.");<br />} else if( !confirmEmail.equals( getEmail() ) ) {<br /> errors.add("Email and Confirm Email did not match.");<br />}<br /><br />return errors;<br /></pre><br /><br />Ok so yikes! I just jumped back 10 years by writing code like that! But I did it because I was in a situation where the "architect" hadn't really thought about these problems, and hadn't picked a framework that gave us that ability. So most people weren't doing any validation, and very poor binding. Think Vietnam of web apps here.<br /><br />After I wrote this code once I knew I needed something better, but it wasn't until I was about to write it again that I went back and refactored the common code into a utility to make my job easier, and that's how I came up with a general solution. If you'll notice, in the code above there are some handy instance methods I created in this class to help specify the validation language. So I started pulling those common methods out into a separate class. I'll spare you the details of the refactoring for another blog post. 
I'll start with some simple examples:<br /><br /><pre class="brush: java"><br />public List<String> validateAndBind( RequestValidater validater ) {<br /> setFirstName( validater.param("firstName").require().toValue() );<br /> return validater.getErrors();<br />}<br /></pre><br /><br />This first example simply validates that the parameter "firstName" was specified in the request, fetches that value, and binds it into the instance object using a setter method. You'll notice there is no reflection taking place here. I'm a huge fan of reflection, but I think you'll see that this is so easy you actually don't need it. Remember that even in reflective frameworks you have to specify the validation rules, and specifying the binding (i.e. calling the setter method by hand) isn't really where the hard work is.<br /><br />Going into detail on what this does: the first step requests the "firstName" parameter from the validater object. Then it calls the require() method on it. This method checks whether the parameter is present; if not, it adds a default error message. The RequestValidater object keeps track of all the errors it encounters while executing the validation rules. Finally, the toValue() method returns the parameter's value as a String, passing it to the object's setter method. If the value isn't present it simply returns null.<br /><br />In this simple example, if the firstName parameter is missing then it creates an error message like "First Name is missing". Because the parameter uses camel case (i.e. firstName), the validater can infer a display name by breaking apart the parameter's name on the capital-letter boundaries. So "firstName" becomes "First Name". You can override this by supplying a second parameter to the param() method, like:<br /><br /><pre class="brush: java"><br /> setFirstName( validater.param("firstName", "First name").require().toValue() );<br /></pre><br /><br />It will also infer using underscores as well (i.e. 
"first_name" = "First Name"). It's better to accept the default since it's less work, but realize that you can customize it if you wish. The second way is to supply an actual error message to the require() method as a parameter. While this might be necessary sometimes, particularly with the matches() method, it's usually best to accept the defaults.<br /><br />Here's another example that validates and binds a date object:<br /><br /><pre class="brush: java"><br />public List<String> validateBindings( RequestValidater validater ) {<br /> setBirthDate( validater.param("birthDate").require().toDate() );<br /><br /> return validater.getErrors();<br />}<br /><br /></pre><br /><br />The key difference in this example is the call to toDate() rather than toValue(). The to*() methods convert strings into other values like integers, dates, etc. These methods usually end the chain of validation rule methods. You can also pass a default value into the to*() methods to provide a default date, integer, etc. Of course you wouldn't do that with a require() validation rule in place.<br /><br />Here are a couple more examples:<br /><br /><pre class="brush: java"><br />public List<String> validateBindings( RequestValidater validater ) {<br /> setUsername( validater.param("username").require().between( 5, 30 ).toValue() );<br /> setEmail( validater.param("email").require().validateAsEmail().equals( validater.param("confirmEmail") ).toValue() );<br /><br /> return validater.getErrors();<br />}<br /></pre><br /><br />In this example we see some more methods for performing validations. The between() method validates that a parameter's length is between the two values; if not, it adds an error message. You can see two more in the email example: validateAsEmail(), which makes sure the value conforms to an email address, and equals(), which tests whether the value matches some other value. 
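The display-name inference described earlier is simple enough to sketch on its own. This is my own standalone approximation of the camel-case and underscore handling, not the utility's actual code (that appears in the full source at the end of this post):

```java
// A hypothetical standalone sketch of the display-name inference described
// in this post; not the RequestValidater's actual implementation.
public class DisplayNameSketch {
    static String toDisplay(String name) {
        StringBuilder out = new StringBuilder();
        if (name.contains("_")) {
            // "first_name" -> "First Name": title-case each underscore-separated word
            for (String word : name.split("_")) {
                if (out.length() > 0) out.append(' ');
                out.append(Character.toUpperCase(word.charAt(0))).append(word.substring(1));
            }
        } else {
            // "firstName" -> "First Name": insert a space at each capital-letter boundary
            out.append(Character.toUpperCase(name.charAt(0)));
            for (int i = 1; i < name.length(); i++) {
                char c = name.charAt(i);
                if (Character.isUpperCase(c)) out.append(' ');
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toDisplay("firstName"));  // First Name
        System.out.println(toDisplay("birthDate"));  // Birth Date
        System.out.println(toDisplay("first_name")); // First Name
    }
}
```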
In this example you can see how validater.param("confirmEmail") can be used again to refer to another parameter in the request.<br /><br />Finally, there's a matches() method for making sure parameters conform to a regular expression. Here is an example of that:<br /><br /><pre class="brush: java"><br />public List<String> validateAndBind( RequestValidater validater ) {<br /><br /> Pattern phoneNumber = Pattern.compile("\\(?\\d\\d\\d\\)?(-|\\s)\\d\\d\\d(-|\\s)\\d\\d\\d\\d");<br /><br /> setPhoneNumber( validater<br /> .param("phoneNumber")<br /> .require().matches( phoneNumber, "Phone Number provided does not look like a phone number.")<br /> .toValue() );<br /><br /> return validater.getErrors();<br />}<br /><br /></pre><br /><br />Here's how you can use the RequestValidater in your controllers:<br /><br /><pre class="brush: java"><br /> MyObject obj = new MyObject();<br /> List<String> errors = obj.validateBindings( new RequestValidater( request ) );<br /> if( errors.isEmpty() ) {<br /> // no errors means the user's request was valid<br /> } else {<br /> // we have some errors so send them back with the form data.<br /> }<br /><br /></pre><br /><br />After I finished writing this utility I went back and refactored my code to use it. I had something like 100-150 lines of validation code that I reduced to a simple 8 lines of code. Actually, I added some new lines and formatting between some of the chained method calls, which inflated it to around 25 lines, but that's still an amazing amount of code reduction. And that doesn't include the lines I would've written to do validation in the second object.<br /><br />This is yet another example of how you can use objects in Java as functions to really change how you reuse code. Notice that I didn't create some static method utility class to do this, because there was state being kept and managed inside RequestValidater for me. If I had used a static method I'd have had to keep track of that state myself. 
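To make that static-versus-stateful point concrete, here's a compressed, hypothetical comparison. MiniValidater and its methods are my own names for illustration, not the real class; the point is only who owns the error list:

```java
import java.util.*;

// Contrast sketch: a static helper forces every caller to thread the error
// list through each call, while a stateful object carries it internally.
class Sketch {
    // Static style: the caller owns and passes the state.
    static String require(Map<String,String> params, String name, List<String> errors) {
        String v = params.get(name);
        if (v == null || v.isEmpty()) errors.add(name + " is missing.");
        return v;
    }

    // Object style: the validator owns the state (like RequestValidater).
    static class MiniValidater {
        private final Map<String,String> params;
        private final List<String> errors = new ArrayList<>();
        MiniValidater(Map<String,String> params) { this.params = params; }
        String require(String name) {
            String v = params.get(name);
            if (v == null || v.isEmpty()) errors.add(name + " is missing.");
            return v;
        }
        List<String> getErrors() { return errors; }
    }

    public static void main(String[] args) {
        Map<String,String> params = new HashMap<>();
        params.put("firstName", "Charlie");

        // Static style: the same list must be handed to every call site.
        List<String> errors = new ArrayList<>();
        require(params, "firstName", errors);
        require(params, "lastName", errors);

        // Object style: errors accumulate inside the validator itself.
        MiniValidater v = new MiniValidater(params);
        v.require("firstName");
        v.require("lastName");
        System.out.println(v.getErrors()); // [lastName is missing.]
    }
}
```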
Even the error messages can be standardized across my entire app by using this class. Also notice that the fact I'm binding and validating against HttpServletRequest is hidden from my model objects. This is another great example of how encapsulation hides the details of the system from my model objects, something static utilities can't do for me. Why is this important? Well, I didn't go into it, but I also made changes to the RequestValidater so that it's easy to use in unit tests by just instantiating it with a HashMap of parameters. That makes it really easy to automate your validation testing, because your model objects aren't bound to the HttpServletRequest interface. Without encapsulation it wouldn't have been that easy to reuse RequestValidater in a different context. You can see an example in the main method included in the source of how to reuse it in unit tests.<br /><br />Finally, my last thought on this is that it's a single class. There is no framework you have to adopt to use this code. Java has many choices when it comes to web frameworks. It's both a blessing and a curse, but the reality of the matter is most people are using Struts! Yuck! Why continue to build code that no one else can use? 
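That testability extends to individual rules, too. For instance, here's a tiny harness around the email pattern used by validateAsEmail(). The pattern string comes from the source in this post; the harness class and its names are mine:

```java
// Probing one validation rule in isolation: the email regular expression
// used by validateAsEmail() in the source below. EmailRegexCheck is a
// hypothetical harness, not part of the utility.
public class EmailRegexCheck {
    static final String EMAIL = "(\\w|\\.)+@\\w+\\.\\w+(\\.\\w+)*";

    static boolean looksLikeEmail(String value) {
        // String.matches() requires the whole string to match the pattern.
        return value.matches(EMAIL);
    }

    public static void main(String[] args) {
        System.out.println(looksLikeEmail("jep1957@mindspring.com")); // true
        System.out.println(looksLikeEmail("this.email@bad"));        // false: no dot after the @
        System.out.println(looksLikeEmail("my address@bad.com"));    // false: space in the local part
    }
}
```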
This marks the time when we need to move away from frameworks and to utilities.<br /><br />Now here's the source:<br /><br /><pre class="brush: java"><br />package com.cci.web.validation;<br /><br />import javax.servlet.http.HttpServletRequest;<br />import java.text.ParseException;<br />import java.text.SimpleDateFormat;<br />import java.util.*;<br />import java.util.regex.Pattern;<br /><br />public class RequestValidater {<br /> private HttpServletRequest request;<br /> private Map<String,String> params;<br /> private List<String> errors = new ArrayList<String>();<br /><br /> public RequestValidater(HttpServletRequest request) {<br /> this.request = request;<br /> }<br /><br /> public RequestValidater( Map<String,String> params ) {<br /> this.params = params;<br /> }<br /><br /> public boolean hasErrors() {<br /> return !errors.isEmpty();<br /> }<br /><br /> public List<String> getErrors() {<br /> return errors;<br /> }<br /><br /> public Parameter param( String name ) {<br /> return new Parameter( name );<br /> }<br /><br /> public Parameter param( String name, String displayName ) {<br /> return new Parameter( name, displayName );<br /> }<br /><br /> protected String getParameter( String name ) {<br /> if( request != null ) {<br /> return request.getParameter(name);<br /> } else {<br /> return params.get(name);<br /> }<br /> }<br /><br /> public class Parameter {<br /> private String displayName;<br /> private String name;<br /> private String value;<br /><br /> public Parameter(String name) {<br /> this.name = name;<br /> this.displayName = convertToDisplay( name );<br /> this.value = getParameter(name);<br /> }<br /><br /> public Parameter(String name, String displayName) {<br /> this.name = name;<br /> this.displayName = displayName;<br /> this.value = getParameter(name);<br /> }<br /><br /> private String convertToDisplay(String camelCase) {<br /> StringBuilder builder = new StringBuilder();<br /> if( !camelCase.contains("_") ) {<br /> builder.append( 
Character.toTitleCase( camelCase.charAt(0) ) );<br /> for( int i = 1; i < camelCase.length(); i++ ) {<br /> char next = camelCase.charAt(i);<br /><br /> if(Character.isUpperCase( next ) ) {<br /> builder.append( ' ' );<br /> }<br /> builder.append( next );<br /> }<br /> } else {<br /> String[] words = camelCase.split("_");<br /> for( String word : words ) {<br /> if( builder.length() > 0 ) builder.append( ' ' ); // separate the words with spaces<br /> builder.append( Character.toUpperCase( word.charAt(0) ) );<br /> builder.append( word.subSequence( 1, word.length() ) );<br /> }<br /> }<br /> return builder.toString();<br /> }<br /><br /> public Parameter require() {<br /> return require( displayName + " is missing." );<br /> }<br /><br /> public Parameter require(String error ) {<br /> if( value == null || value.length() < 1 ) { // check the value, not the name<br /> errors.add( error );<br /> }<br /> return this;<br /> }<br /><br /> public Parameter between( int minSize, int maxSize ) {<br /> return between( minSize, maxSize, displayName + " must be at least " + minSize + " characters, but no more than " + maxSize + " characters.");<br /> }<br /><br /> public Parameter between( int minSize, int maxSize, String error ) {<br /> if( value == null ) return this;<br /><br /> if( value.length() < minSize || value.length() > maxSize ) {<br /> errors.add( error );<br /> }<br /> return this;<br /> }<br /><br /> public Parameter matches( Pattern pattern, String error ) {<br /> if( value == null ) return this;<br /><br /> if( !pattern.matcher( value ).matches() ) {<br /> errors.add( error );<br /> }<br /> return this;<br /> }<br /><br /> public Parameter validateAsEmail() {<br /> if( value == null ) return this;<br /><br /> if( !value.matches("(\\w|\\.)+@\\w+\\.\\w+(\\.\\w+)*") ) {<br /> errors.add( value + " is not a valid email address.");<br /> }<br /> return this;<br /> }<br /><br /> public Parameter equals( Parameter param ) {<br /> return equals( param.value, displayName + " does not match " + param.displayName + "." 
);<br /> }<br /><br /> public Parameter equals( String aValue, String error ) {<br /> if( value == null ) return this;<br /><br /> if( !value.equals( aValue ) ) {<br /> errors.add( error );<br /> }<br /> return this;<br /> }<br /><br /> public Date toDate() {<br /> return toDate( "MM/dd/yyyy");<br /> }<br /><br /> public Date toDate( String datePattern ) {<br /> return toDate( datePattern, value + " is not a valid date. (" + datePattern + ")" );<br /> }<br /><br /> public Date toDate( String datePattern, String error ) {<br /> if( value == null ) return null;<br /><br /> try {<br /> SimpleDateFormat dateFormat = new SimpleDateFormat( datePattern );<br /> return dateFormat.parse( value );<br /> } catch( ParseException pex ) {<br /> errors.add( error );<br /> return null;<br /> }<br /> }<br /><br /> public Integer toInt() {<br /> return toInt( (Integer)null );<br /> }<br /><br /> public Integer toInt( Integer defaultVal ) {<br /> return toInt( displayName + " must be a number without a decimal point.", defaultVal );<br /> }<br /><br /> public Integer toInt( String error ) {<br /> return toInt( error, null );<br /> }<br /><br /> public Integer toInt( String error, Integer defaultValue ) {<br /> if( value == null ) return defaultValue;<br /><br /> try {<br /> return Integer.parseInt( value );<br /> } catch( NumberFormatException nex ) {<br /> errors.add( error );<br /> return null;<br /> }<br /> }<br /><br /> public String toValue() {<br /> return value;<br /> }<br /><br /> public String toValue( String defaultValue ) {<br /> return value != null ? 
value : defaultValue;<br /> }<br /> }<br /><br /> public static void main(String[] args) {<br /> // a quick demonstration of reusing the validater outside a servlet container<br /> validateThese( "jep1957@mindspring.com", "charlie.hubbard@coreconcept.com", "this.email@bad", "my address@bad.com", "bad@bad@bad@bad", "hiya", "foo.bar" );<br /> }<br /><br /> private static void validateThese( String... emails ) {<br /> for( String email : emails ) {<br /> Map<String,String> params = new HashMap<String,String>();<br /> params.put("email", email );<br /><br /> RequestValidater validater = new RequestValidater( params );<br /> String val = validater.param("email").validateAsEmail().toValue();<br /> System.out.println( val + " was " + ( validater.hasErrors() ? "not valid!" : "valid" ) );<br /> }<br /> }<br />}<br /><br /></pre>chubbsondubshttp://www.blogger.com/profile/06708078598697844829noreply@blogger.com0