10/30/2021

The Metaverse is stoopid, and it will fail just like Facebook has.

So Facebook has died, and long live Meta, which I think marks a huge historical milestone for humanity: the day social media died. Facebook is now going to try to morph itself into "The Metaverse", which is some sort of 3D virtual playground where nobody hates Mark Zuckerberg anymore? Honestly, I can't make heads or tails of what the Metaverse is supposed to actually be beyond what has already come before as a persistent virtual reality social game. Just pick one and that's the "Metaverse": Second Life, Roblox, Minecraft, practically every MMO invented in the 21st century. So why is Meta going to revolutionize this space? Because they have lots of money and an image to save? What fertile ground is actually left in this idea? And do you really want to live in a virtual-reality-based internet?

The "Metaverse" is basically trying to deliver on a promise as old as the internet. Most people cite Neal Stephenson's "Snow Crash" as the genesis of this idea, but it's probably just the most popular telling of it. There have been attempts to build it, or something that could plausibly be considered it, before: persistent, social, VR-style worlds and communities, most notably Second Life. In fact I'd argue Second Life came closest to this idea because it wasn't trying to be anything beyond an excessively flexible virtual world. It also had the vision of being literally a second life where you designed a new image of yourself within it. At its peak, Second Life's economy was big enough that people compared it to the GDP of small countries. It was quite popular for a time, but eventually it faded from the zeitgeist of the internet. However crude Second Life was, it did approximate this Metaverse vision, and it still ultimately failed as an idea; that could well be the fate waiting for "The Metaverse". I have serious doubts that a Facebook-powered "Metaverse" is going to be radically different and transformative enough to make the concept stick.

I bet most Metaverse proponents would disagree with my comparison and bristle at the idea that their project can be boiled down to a 4-5 year period in the mid-2000s that has long been forgotten. But there were many other attempts to bring about the "Metaverse", or technology that would support it. VRML was a promising idea that could have democratized 3D worlds using a standards-based approach, but it flamed out before it could even get going. Most proponents chalked that failure up to lacking implementations and compatibility, but there has been no real attempt to reboot it either, mostly because the concept of the web as a 3D world was just not what people wanted. While we have better platforms now, like Unity or Unreal, I don't think the crudeness of these inventions was really the problem.

Sure, Mark Zuckerberg and John Carmack can debate over whether the technology is ready or not, or whether new technology will somehow improve the experience. However, the technology isn't really that much better than it was 10 or even 20 years ago. Maybe we have nicer graphics, higher resolutions, or even cheap VR headsets, but that hasn't really made VR more desirable. It has only really found a following inside a small pocket of gamers, and most of the time 2D monitors are still the preferred way for gamers to experience games. And that points to the truth about 3D experiences: unless you are trying to build immersive gaming worlds, most of us would rather just use 2D. We don't need to consume our Instagrams and TikToks in 3D to get the point. Never have. That's why these 3D world platforms never last. VR interfaces are the video phone from the 1950s: sounds really cool and sci-fi, but not a practical use case.

For a world that just spent roughly 18 months indoors and isolated, with our only social outlet being Zoom happy hours and conducting business over Zoom calls, I think what most people want right now is less computer time and more face time in real life, or whatever is left of it. That's what makes this pathetic attempt to revitalize a company's image by showcasing some bright new vision of the future feel like a transparent attempt to distract rather than deliver. It feels like the "Metaverse" is just some sort of escapist fantasy for Zuckerberg to run away from the problems he created through his belief in excessively permissive First Amendment rights.

Modern social media is almost completely unregulated and unmoderated. Sure, Facebook has standards of conduct and employs armies of people to moderate Facebook and Instagram, but it falls way short of effectively policing them. It took a near overthrow of the US government to force it to deplatform a President, which shows every other bad actor on Facebook, Twitter, etc. where the line is: very far away for almost all of them. These problems haven't gone away with a new platform, and besides, what army will moderate the "Metaverse"? Misinformation and hate speech abound on the platform every minute of the day, and whatever Facebook is doing hasn't come close to solving the problem. Bot armies reign supreme, ready to defame and spew nonsense 24-7, 8 days a week. And it's largely those unchecked forces that spoiled Facebook and Twitter in the first place. Cyberbullying at scale, rampant smear campaigns by a vocal minority, and your neighbor believing all sorts of psychotic conspiracy theories about how you are a liberal nazi who must be assassinated.

The problem is that while Facebook is home to the cesspool of humanity, so will the "Metaverse" be. While we shake our finger at Mark Zuckerberg and his abomination of a creation, he is only partly to blame, because we're the ones at the end of the like and reshare buttons. And we'll be the exact same customers occupying "the Metaverse" if that becomes a thing. We're the ones that believe rumors, innuendo, and outright batshit-crazy conspiracy theories that pander to humanity's worst inner beliefs and fears. We are the ones that like and reshare content from the cesspool, afraid to look away from the computer for fear we might miss something. In a way we destroyed social media just as much as Zuckerberg or Jack Dorsey did. The only real sin they committed was continuing to hold fast to the idea that "no censorship" is a laudable goal, and that letting us "be our true selves" was a good idea. It wasn't. We're horrible people. If the world were an ideal place inhabited by rational and logical people, you could quite possibly hold very open, permissive First Amendment beliefs, but the reality is we aren't rational nor ideal. And we don't deserve unfettered First Amendment rights. We are quite easily manipulated by the dumbest schemes. Those faults require that public spaces censor out the ones who will pollute them; see also the Paradox of Tolerance. And none of that has changed by peddling "The Metaverse".

This is all happening just as we are globally waking up to how horrible we've been, and how badly Mark and other technophiles have mismanaged their platforms. Now Mark Zuckerberg is trying to save what little is left of his company's credibility. It reminds me of the early 2000s, when another company's image was falling apart because of rampant security issues: Microsoft. After Windows XP, an inordinate storm of security vulnerabilities was threatening Microsoft, and they couldn't patch them fast enough. Sun had slapped Microsoft down in court for violating its Java license with the "embrace and extend" philosophy, the DOJ had gone after it for antitrust, and it was basically the all-around million-pound bad guy of tech. Then came the promise of Vista: a new OS done right, with a new language at its heart, that would fix all of these pesky security issues in one fell swoop and deliver us into a new world of discoverable services, the .NET platform, and nirvana. The SQL database was reimagined; hard disks and file systems were going to be "rethought" for "today's computing". Meanwhile the world moved on from Microsoft dominance, and old MS totally and completely missed the rise of Web 2.0 and modern browsers. We began to break away from Internet Explorer 6.0, and software as a service really started to become a reality like never before. Apple introduced OS X, and people started to filter out into other operating systems away from the Intel/MS hegemony. Vista turned out to be a major failure, and it took a decade for MS to realize their mistake. And while that was happening they were moved out of the way. Vista distracted Microsoft and enveloped them in a world entirely of their own making. And the rest of computing just drove right past that exit to a happier and more exciting industry.

I think "The Metaverse" will be Facebook's Vista. A new vision created when a company has reached a critical failure in its product. "The Metaverse" will be a project unto itself that Facebook will dedicate untold effort towards. A project that will become so big and all-consuming that Facebook will struggle to explain what it's doing and why. It will become the project that no one actually wants and will end up leading them to take their eye off the ball, just like Microsoft did. They'll become less important in the fabric of the web economy, and they will fade. They won't disappear unless governments regulate them out of existence, but they will become less important in our lives. After all, there still is a Myspace floating out there, and Facebook has enough gravity to keep floating around too. A little more relevant than Myspace, but less relevant than now.

There are definite limits to any institutions we create, even software-based ones. We can never really perfect them or continue to build them ever larger. At some point the Tower of Babel always falls apart. Will Facebook have a second act? Is Meta that second act, or just the rebound girlfriend/boyfriend? I'm betting on the latter.

1/24/2020

Mysql and the Groovysh

I wanted to use MySQL in the Groovy shell, and I found it a bit trickier than usual. Also, there is practically nothing covering how to do this on the internet. The first thing is to configure your grapeConfig.xml file (on *nix it lives at ~/.groovy/grapeConfig.xml) and drop your repository config in there. I had to update mine because I guess the central repo requires https now.
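If you don't already have one, something like this minimal sketch should work; it's modeled on Groovy's stock Grape settings with the Maven Central resolver pointed at an https URL, and the resolver names are just labels:

<ivysettings>
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <chain name="downloadGrapes" returnFirst="true">
      <!-- check the local grape cache first -->
      <filesystem name="cachedGrapes">
        <ivy pattern="${user.home}/.groovy/grapes/[organisation]/[module]/ivy-[revision].xml"/>
        <artifact pattern="${user.home}/.groovy/grapes/[organisation]/[module]/[type]s/[artifact]-[revision](-[classifier]).[ext]"/>
      </filesystem>
      <!-- Maven Central over https -->
      <ibiblio name="central" root="https://repo1.maven.org/maven2/" m2compatible="true"/>
    </chain>
  </resolvers>
</ivysettings>

Then fire up the groovy shell and execute this command: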

groovy:000> :grab 'mysql:mysql-connector-java:8.0.18'

Now here is where things get a little weird:

groovy:000> import groovy.sql.Sql
groovy:000> sql = Sql.newInstance('jdbc:mysql://localhost:3306/someDB', 
    'person', 
    'none of your business!')
ERROR java.sql.SQLException:
No suitable driver found for jdbc:mysql://localhost:3306/someDB

So that sucks. Unfortunately the driver isn't known to the DriverManager, so no workie. Too close for missiles; switching to guns for this one:

groovy:000> sql = Sql.newInstance('jdbc:mysql://localhost:3306/someDB', 
    'person', 
    'none of your business!', 
    "com.mysql.cj.jdbc.Driver")

Bam! Now we're ready to start slamming some SQL in groovysh (woot).

5/01/2015

Jenkins, S3 Copy Artifact, Deploy Plugin, and ROOT Context Tricks

I've spent several frustrating days trying to get Jenkins to deploy my application remotely to my Tomcat 7.x app server. My build setup is a pretty vanilla Java application built with Ant and Ivy. Overall I like Jenkins and I'm trying to learn how to use it for continuous deployment, but the lack of documentation explaining some of the plugins makes it extremely frustrating. Hopefully this will explain some of the subtle configuration options for these plugins better. For this I've set up Jenkins with the plugins named in the title: the S3 artifact publish/copy plugins and the Deploy plugin.

I have two jobs. One builds my application and uses a post-build step to publish artifacts to an S3 bucket. The second job remotely deploys the artifacts from the first job to the Tomcat 7.x server.

The 2nd job is a parameterized build with the following configuration:

Build selector for Copy Artifact
name = BUILD_SELECTOR

Execute Shell
Command = rm -rf $WORKSPACE/build

Copy S3 Artifact
Project Name = MyApp
Which Build = Specified by Build Parameter
Parameter Name = BUILD_SELECTOR
Artifact Copy = webappname-*.war
Target Directory = $WORKSPACE/build

A few things to note. BUILD_SELECTOR is the name of the environment variable that holds the user's selected build. The artifact to copy setting is not a path; it's just a pattern used to select the artifact. I execute the rm command to clean up the artifacts between successive builds.

The first problem I encountered was the 2nd job kept failing because it said there were not any artifacts from the 1st job. It is NOT documented anywhere that I could find, but I had to go back to the 1st job and check "Manage Artifacts" on the "Publish artifacts to S3 bucket" step. Once that was checked, the 2nd job finally recognized the artifacts and I got the following:

Copied 1 artifact from "MyApp" build number 301 stored in S3

But the next problem was the deploy plugin kept failing with a very obtuse error.

java.io.IOException: Error writing request body to server

I found out that if I removed my application from Tomcat's webapps directory then it would actually deploy! But if I tried to redeploy it, it failed with that obtuse error. I was deploying my app to Tomcat's ROOT context, so my configuration looked like this:

WAR/EAR = build/myapp-*.war
Context = ROOT
Container = Tomcat7x
Manager User Name = none of your business
Password = also none of your business
Tomcat URL = http://somehost

So the clue was the following logging written out in the console output: "is not deployed."

Copied 1 artifact from "MyApp" build number 301 stored in S3
Deploying /var/lib/jenkins/workspace/MyApp/build/myapp-1.0-301.war to container Tomcat 7.x Remote
  [/var/lib/jenkins/workspace/DeployMyApp/build/myapp-1.0-301.war] is not deployed. Doing a fresh deployment.

So after looking through the Cargo code (which the deploy plugin is based on), I found out that Cargo has to use redeploy or undeploy if a webapp is already deployed. So why is that not working? It turns out Cargo cannot handle a context of ROOT, because the Tomcat manager doesn't list the webapp deployed on the ROOT context under that name. The workaround is to change ROOT to / (without quotes) in Jenkins and voila! It works!
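If you're curious why, you can ask the Tomcat manager's text interface for its application list yourself; the ROOT webapp is reported under the path /, not under the name ROOT (host and credentials below are placeholders):

$ curl -u manager_user http://somehost/manager/text/list
OK - Listed applications for virtual host localhost
/:running:0:ROOT
/manager:running:0:manager

Cargo matches on that path, so telling Jenkins the context is / lines up with what the manager actually reports.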

2/17/2014

Simplest Explanation of Ivy Configurations

If you are here, then you are probably trying to understand Ivy's configuration concept. And quite frankly, even after getting comfortable with it, I still can't understand their docs. They are freaking obtuse. Going on Stack Overflow proves frustrating as well. I'm going to try to explain this with a really straightforward example: one, because the information out there isn't good, and two, so people can throw stones at my explanation and improve my understanding. Here goes.

What does Ivy do?

Ivy downloads dependencies and puts them into directories your Ant script will use when compiling, packaging, etc. The important part of that is: Ivy downloads dependencies and organizes them. It's up to your Ant script to use them appropriately.
An ivy-module (i.e. the ivy.xml file) has two main parts:
  • What dependencies do you need?
  • How do you want them organized?
The first part is configured under the <dependencies> element. The second is controlled by the <configurations> element. For example:
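Something along these lines; the organisation, module, and revision numbers are just illustrative placeholders, and it assumes your Ant build retrieves with a pattern like ${ivy.lib.dir}/[conf]/[type]/[artifact]-[revision].[ext]:

<ivy-module version="2.0">
    <info organisation="com.example" module="myapp"/>
    <configurations>
        <conf name="compile"/>
    </configurations>
    <dependencies>
        <!-- note: no conf attribute on any of these yet -->
        <dependency org="mysql" name="mysql-connector-java" rev="5.1.29"/>
        <dependency org="log4j" name="log4j" rev="1.2.17"/>
        <dependency org="junit" name="junit" rev="4.11"/>
    </dependencies>
</ivy-module>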
This is fairly straightforward. We have three dependencies. If we have an Ant build file configured for Ivy, this will download all three of these jar files and put them into the ${ivy.lib.dir}/compile/jar directory. That's great, but when we go to package our application some of these aren't needed. For one, we don't care to ship junit with our application, so can we segment that out?
You could do this with filesets and excludes in Ant, but that is tedious and error prone. Ivy will do this for you if you know how to ask it. Ivy will put the dependencies into different directories based on whether each dependency is needed for testing, compilation, or runtime. This is where configurations start to matter. So let's change what we have so that we divide up our dependencies using configurations. Let's create a 'test' configuration for this purpose.
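Roughly like this (same placeholder revisions as before); notice that only junit gets an explicit conf attribute:

<configurations>
    <conf name="compile"/>
    <conf name="test"/>
</configurations>
<dependencies>
    <dependency org="mysql" name="mysql-connector-java" rev="5.1.29"/>
    <dependency org="log4j" name="log4j" rev="1.2.17"/>
    <dependency org="junit" name="junit" rev="4.11" conf="test"/>
</dependencies>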
Ok, that was easy, right? Well, if you run this you'll find two directories under ${ivy.lib.dir}:
  • ${ivy.lib.dir}/compile
  • ${ivy.lib.dir}/test
However, all three dependencies will be in test, and the other two will be in compile! Doh! That's not what we wanted, so what happened?! This comes from the fact that if you don't specify a conf attribute on a dependency it defaults to "*". Well, sort of; it's a bit more complicated, but you can think of it as matching all configs. And because those dependencies match all configs, mysql and log4j were copied to both the test and compile directories. So let's fix that.
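A sketch of the fix is simply to give every dependency an explicit conf:

<dependencies>
    <dependency org="mysql" name="mysql-connector-java" rev="5.1.29" conf="compile"/>
    <dependency org="log4j" name="log4j" rev="1.2.17" conf="compile"/>
    <dependency org="junit" name="junit" rev="4.11" conf="test"/>
</dependencies>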
Alright, now everything should be as we expect! But it's annoying to have to specify conf="compile" every time we add a dependency. This is where defaults come into play. Remember I said the conf attribute defaults to "*" when nothing is specified? Well, we can override that by setting defaultconf on the dependencies tag.
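Something like this (again, the revisions are placeholders):

<dependencies defaultconf="compile">
    <dependency org="mysql" name="mysql-connector-java" rev="5.1.29"/>
    <dependency org="log4j" name="log4j" rev="1.2.17"/>
    <dependency org="junit" name="junit" rev="4.11" conf="test"/>
</dependencies>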
Alright! Now we can just add dependencies and they will always be added to the compile configuration by default! Much easier.

Transitive Dependencies

Now there are some complexities about Ivy that I've shielded you from thus far, and they have to do with the decisions Ivy has to make while trying to resolve dependencies. See, when you declare you depend on A, well, A might also depend on B and C. Therefore you depend not just on A, but on A, B, and C. B and C are called transitive dependencies. These are hidden from you because, using Maven's POM files, Maven (and Ivy) can figure out those transitive dependencies. And that is where the information I've shielded you from lies: in Maven's POM file.
See, Maven has a different way to section out dependencies, called scopes. And unlike Ivy's configurations, they are fixed. But when Ivy is downloading these dependencies it needs to know what scopes to use when pulling the transitive dependencies (are we pulling this for testing, runtime, compilation, etc.?). That should make your head spin a bit. But this is a real problem, because we have to tell Ivy how to map our configurations to Maven scopes so it knows what to pull.
Without mapping our configurations they don't really work well, so you have to understand this, but it's not that complicated once it's explained. So let's say we want to pull all of the dependencies junit has; we'd do the following:
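Something along these lines on the junit dependency:

<dependency org="junit" name="junit" rev="4.11" conf="test->default"/>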
Whoa, what the heck is test->default? This looks weird, but what we are saying is: our configuration is test and we want to map it to the default scope in Maven. This has the effect of pulling all of junit's transitive dependencies. If we did the following instead:
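That is, mapping to the master scope instead:

<dependency org="junit" name="junit" rev="4.11" conf="test->master"/>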
That would only pull the junit artifact itself, not its transitive dependencies. You might do test->master if you wanted to compile against just junit but not actually package it up in your application because it's optional; the user of your library must provide that library if they want to use that integration, for example. The Servlet API is a good example where you only need it for compilation, but you don't need it shipped with your WAR.
So here is the mystery of the -> operator in Ivy. It maps Ivy configurations onto Maven scopes when resolving dependencies so Ivy knows exactly what to pull down. It's that simple.
Back to our example now: we used the defaultconf attribute to specify compile, but we didn't map it to a scope yet. We can do that as follows:
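For instance, by folding the mapping into defaultconf:

<dependencies defaultconf="compile->default">
    <dependency org="mysql" name="mysql-connector-java" rev="5.1.29"/>
    <dependency org="log4j" name="log4j" rev="1.2.17"/>
    <dependency org="junit" name="junit" rev="4.11" conf="test->default"/>
</dependencies>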
We can go further and specify this at the configurations level so that we don't have to spell out the mapping every time we use a conf attribute.
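A sketch of that, using defaultconfmapping on the configurations element:

<configurations defaultconfmapping="*->default">
    <conf name="compile"/>
    <conf name="test"/>
</configurations>
<dependencies defaultconf="compile">
    <dependency org="mysql" name="mysql-connector-java" rev="5.1.29"/>
    <dependency org="log4j" name="log4j" rev="1.2.17"/>
    <dependency org="junit" name="junit" rev="4.11" conf="test"/>
</dependencies>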
Notice we didn't use test->default anymore? That's because we specified the mapping at the configurations level, and all of our configs are mapped to the default scope in Maven for us.
There is a lot more to configurations that I don't fully understand, but I think this will demystify most things about them so you can start to structure your project appropriately using Ivy without trawling Stack Overflow and the Ivy docs for vague answers.

6/06/2013

Groovy Mixins and the undocumented features of the this pointer

I've been using Groovy and Grails lately and I love the platform. It's a great productivity tool. However, the docs for Groovy the language are languishing and haven't been kept up to date as the platform has evolved. One of those poorly documented evolutions is Mixins, and I'm specifically talking about dynamic Mixins. Compile-time Mixins use annotations, and there are several flavors like @Mixin and @Category, but essentially they are limited in their use because you can't add a mixin to a class you didn't author. That means you have to use a different mechanism to augment 3rd party classes. This leaves either modifying the metaClass property on the class or using the newer dynamic Mixin feature.

For example, let's say we want to add a zip method to java.io.File. This method would take the File instance and produce a zipped version of it. For plain files it simply compresses the file, and for directories it compresses the whole directory and returns the resulting file. Using the metaClass property we could add it like this:


import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

File.metaClass.zip = { String destination ->
    OutputStream result = new ZipOutputStream(new FileOutputStream(destination))
    result.withStream { ZipOutputStream zipOutStream ->
        // delegate is the File instance this closure was mixed into
        delegate.eachFileRecurse { f ->
            if (!f.isDirectory()) {
                zipOutStream.putNextEntry(new ZipEntry(f.getPath()))
                new FileInputStream(f).withStream { stream ->
                    zipOutStream << stream
                    zipOutStream.closeEntry()
                }
            }
        }
    }
}

This works well, and now you can do something as simple as new File('some/directory').zip('some_directory.zip'), and boom, it writes out a zipped copy of that directory! That's pretty awesome, isn't it? I think you're seeing the reason why we want to do this.

Now let's see if we can translate that into a dynamic Mixin. Here is the version in Mixin form:


import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

class EnhancedFile {

    static {
        File.metaClass.mixin( EnhancedFile )
    }

    void zip( String destination ) {
        OutputStream result = new ZipOutputStream(new FileOutputStream(destination))
        result.withStream { ZipOutputStream zipOutStream ->
            eachFileRecurse { f ->
                if (!f.isDirectory()) {
                    zipOutStream.putNextEntry(new ZipEntry(f.getPath()))
                    new FileInputStream(f).withStream { stream ->
                        zipOutStream << stream
                        zipOutStream.closeEntry()
                    }
                }
            }
        }
    }
}

Some small changes were made to the code. One is the static block at the top, which now mixes EnhancedFile into File when this class is loaded. This is where mixins added to 3rd party classes could be better. Essentially I just want to use this to augment 3rd party libraries, and it could be added at compile time through a simple annotation that lets me annotate the Mixin instead of the target of the Mixin. For example, if I could use @MixinTarget(File) on the Mixin to augment File, it could be registered at compile time, but sadly that doesn't exist. This is why we are using runtime mixins here.

The other change was removing the delegate member. In metaClass mixin land, delegate is a magic keyword that points back to the target of the mixin, or the instance your code was mixed into. In dynamic Mixin land the delegate keyword doesn't exist. However, you can refer to methods on the target class by calling them as if they were instance methods on the dynamic Mixin. Notice how File's eachFileRecurse() method is called within the mixin.

This is our first clue to how dynamic Mixins are different than metaClass mixins. In dynamic mixin land delegate is not defined, so referring back to the target is undocumented! There is no discussion about how it works or how it's supposed to work. That is the point of this blog post.

Now let's say we want to add an unzip method to our Mixin. Let's look at the metaClass version first:
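Roughly, the metaClass version looks like this (a minimal sketch; the overload taking a String just wraps the File version):

import java.util.zip.ZipFile

File.metaClass.unzip = { File destination ->
    // the first line hands the mixin target (this File) to ZipFile via delegate
    ZipFile zipFile = new ZipFile( delegate )
    zipFile.entries().each { entry ->
        File output = new File( destination, entry.name )
        if( entry.isDirectory() ) {
            output.mkdirs()
        } else {
            output.parentFile?.mkdirs()
            output.withOutputStream { out ->
                out << zipFile.getInputStream( entry )
            }
        }
    }
}

File.metaClass.unzip = { String destination ->
    delegate.unzip( new File( destination ) )
}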

In this example I have two overloaded versions of the unzip method. That's cool because Groovy honors Java's call differentiation by type, but the crux is in the first one. It's pretty straightforward: it unzips this File instance into the destination File. See any issue with porting it? That first line is passing the target of the mixin to ZipFile using the delegate keyword! How can we implement that in a dynamic Mixin?! This is the confusing part. In dynamic Mixin land, what does the this pointer point to? Why, it points to the instance of the Mixin; in this case it's an instance of EnhancedFile. Well, that doesn't do us much good, does it? But what is the relationship between Mixin and Mixee? That gets a bit fuzzy. We could try casting this to a File; after all, it appears this is a File because we can simply call File's instance methods as if they were inside EnhancedFile too. Let's try that:


    ZipFile zf = new ZipFile( (File)this )

But that doesn't work and throws a ClassCastException. What about using the as keyword to convert it?


    ZipFile zf = new ZipFile( this as File )

That actually works! And here is a simple test you can try out:


    class MeMixin {
        def me() {
           return this
        }
    }

    class MeTarget {
    }

    MeTarget.mixin MeMixin

    target = new MeTarget()
    println( target.equals( target.me() as MeTarget ) )
    println( target.equals( target.me() ) )

The above code will print true then false. So the as keyword somehow changes the this pointer of the Mixin into the target class, and it's the same reference as the original (that's important; it'd be pretty useless if it weren't). Now, why this works I can't explain yet.

Here is roughly the full code, putting the pieces above together:
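// Sketch assembled from the snippets above, not a verbatim original listing
import java.util.zip.ZipEntry
import java.util.zip.ZipFile
import java.util.zip.ZipOutputStream

class EnhancedFile {

    static {
        File.metaClass.mixin( EnhancedFile )
    }

    void zip( String destination ) {
        OutputStream result = new ZipOutputStream(new FileOutputStream(destination))
        result.withStream { ZipOutputStream zipOutStream ->
            eachFileRecurse { f ->
                if (!f.isDirectory()) {
                    zipOutStream.putNextEntry(new ZipEntry(f.getPath()))
                    new FileInputStream(f).withStream { stream ->
                        zipOutStream << stream
                        zipOutStream.closeEntry()
                    }
                }
            }
        }
    }

    void unzip( File destination ) {
        // 'this as File' converts the mixin's this pointer back into the target File
        ZipFile zipFile = new ZipFile( this as File )
        zipFile.entries().each { entry ->
            File output = new File( destination, entry.name )
            if( entry.isDirectory() ) {
                output.mkdirs()
            } else {
                output.parentFile?.mkdirs()
                output.withOutputStream { out ->
                    out << zipFile.getInputStream( entry )
                }
            }
        }
    }

    void unzip( String destination ) {
        unzip( new File( destination ) )
    }
}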

7/22/2011

Now can we please raise the debt ceiling?!

I wanted to look at how bad it's gotten just by looking at the numbers we're up against. What we are arguing over is the money the US takes in vs. pays out in obligations. At present the government spends $3.834 trillion and takes in $2.567 trillion. You should already see the problem. We're spending $1.267 trillion that we don't have. So where do we get that from without raising taxes?

By issuing more bonds, but we can't do that until the debt ceiling is raised. See, we've been doing this since the 1980s. We spend more than we take in, and to get money we sell US Treasury bonds to keep operating. However, the debt ceiling is a law on the books that says the US Government won't borrow more than X, and every time we reach X, Congress votes to raise it to Y, sells more bonds to cover the deficit, and we keep going. And people are perfectly happy to buy them because the USA has NEVER defaulted on those obligations.

Now, of that $3.834 trillion in spending, some of it is allocated by law. By law we have to spend it. If we wanted to change it, Congress would have to pass a new law that cuts that spending. These are things like Social Security, Medicare/Medicaid, National Debt Interest, Income Security, and Veterans Benefits. This doesn't get discussed much because passing a law to cut these is really difficult, and politicians on both sides don't want to be the ones that slash them, because they will be voted out. Some of these you can't do anything about, like National Debt Interest. You don't pay that and that spells default: the USA gets its AAA rating slashed, interest rates rise, babies die, and Jesus weeps. The $250 billion in National Debt Interest is interest on all that borrowing we keep doing. Of the remaining items, Social Security and Income Security are funded by specific taxes. If you cut those programs it doesn't help, because those special taxes can't be used to pay for other spending; that is illegal. So what does that leave? Medicare, Medicaid, and the discretionary budget as places you can cut. I'm leaving Veterans Benefits out of it because it's $68 billion, which even if you completely cut it to zero would contribute squat, and the person who cut it would make Casey Anthony look like Mother Teresa.

What we're really talking about is the Discretionary Budget, which in 2011 is $1.415 trillion, of which 63% ($895 billion) is Military spending and 37% ($520 billion) is non-Military spending. In 2004 the Discretionary Budget was $782 billion: 51% ($399 billion) Military and 49% ($383 billion) non-Military. That's roughly an 81% increase in the budget in 7 years, or about 8.8% per year, nearly three times the typical 3% inflation rate. You'll also notice how much the military share of the pie has grown. The more disturbing trend is that military spending has increased about 12.3% per year while non-military spending rose only about 5% per year. Why is that important? Because Discretionary Military spending is the single largest expense the American government pays out, hence if we really want to make serious cuts it has to start with military spending.

If we didn't want to raise the debt ceiling, we'd need to come up with $1.267 trillion by cutting spending or raising taxes. If we didn't want to raise taxes and we don't want to cut the non-discretionary items, then we'd need to cut $1.267 trillion from the $1.415 trillion Discretionary Budget. That would leave $148 billion for the government (both military and non-military) to run on. Our government couldn't function on that, no matter how much the Tea Party wishes it could.

What if we consider the full budget for cuts? In order to cut spending enough that we don't have to raise taxes, we'd need to cut 58% across Discretionary Military spending, Discretionary Non-Military spending, Medicare, and Medicaid. If we included Income Security in those cuts we could get it down to 46% cuts across the board. And if we included Social Security it'd be around 36% cuts across the board.

Ok, so let's look at what we'd need to do to raise taxes to cover it. In order to get $1.267 trillion more, we'd need to increase taxes by 50%! A 50% tax increase would cover the deficit without cutting any spending. Now, if you thought cutting spending to cover it was insane, raising taxes by 50% is bonkers. I can't afford a 50% tax hike, and I bet neither can you, and corporations would get a shock so bad Wall Street would absolutely freak their shit. And send their K Street soldiers to figure out a way to shirk their responsibility. Yep, same song, different verse. So even if you could pass the bill, I bet they couldn't collect on those taxes.

Those are the two extremes of the argument. You can't cut your way to a balanced budget, and you can't tax your way to one either. Getting really serious about fixing these problems means serious cuts and serious tax hikes. Closing loopholes to raise revenue and cutting spending is the only way you could reasonably do it. But again, there's no painless answer given the constraints. It will still require serious cuts and tax hikes. Even raising taxes 10%, you'd still need to cut roughly $1 trillion in spending across the board. That is going to be very hard. What about the Bush tax cuts? Even rolling those back will only add around $300 billion in revenue.

The easiest way out is to raise the debt ceiling, because defaulting will have tremendous consequences. And to think it will get worked out if we miss the Aug 2nd deadline is a farce, because we're already on borrowed time. This thing was supposed to get wrapped up 6 months ago, and the Treasury did some funny accounting to buy more time. Congress has been in a stalemate since then. So if they can't figure it out in 6 months, what makes you think they'll figure it out in another 6 months when the Treasury is out of money? They've been living on life support for 6 months.

So given all of the facts, can we please just raise the debt ceiling? My 401K doesn't need a 3rd shot to the junk in 10 years.

3/28/2011

How Failing Fast allows you to reframe the problem

I just read an article in Fast Company about how human-powered flight was solved by Paul MacCready. It's really cool because it's not a software story, but it has so many similarities with software. Success centers around creating an environment where you can iterate on your idea. I like stories like this because the motto of "fail fast" rings hollow as it gets overused. After a while it's hard to remember what it originally meant. Stories help reaffirm its meaning.

In so many ways this is really where agile software development is trying to get you. Agile demands a lot from your team, and the only way you can live up to the promises of agile development is to create this environment. Without it you'll just fail, or worse, merely survive on far less productivity.

No more big design up front. It failed for human-powered flight, it failed for cars, and it failed for software.

http://www.fastcodesign.com/1663488/wanna-solve-impossible-problems-find-ways-to-fail-quicker