Still here

No, I haven’t abandoned this site. I’ve just been working my butt off at a new startup. I’ll try to get some content up soon, but in the meantime, enjoy some pictures of the newest member of our family.

Productivity in Java vs. Rails

I am far more productive when writing Rails code than when writing Java. I just realized that one of the reasons for my lower productivity in Java is the need to recompile every time I make a change to a page on the site. In the 15 seconds or so it takes to recompile and redeploy to Tomcat, I get bored and am apt to go check my new favorite news site, popurls, or my RSS feeds, or (less likely) post to my blog. Suddenly those 15 seconds have become 5 minutes. And this happens many times throughout the day.

With Rails, I make a change, refresh my browser, and there it is. On to the next step.

HTTP Authorization with Apache/FastCGI

It took me forever to figure this one out, but if you want HTTP Authentication to work with Apache 2 and mod_fastcgi, you need this in your apache conf file:

FastCgiConfig -pass-header Authorization

FastCGI doesn’t pass the Authorization header by default for some reason.
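Once that directive is in place, the header shows up in the request environment and you can decode it yourself. Here’s a minimal sketch of pulling Basic auth credentials out of the env hash; the `basic_credentials` helper is my own name for illustration, not a Rails or Rack API:

```ruby
# Sketch: decode HTTP Basic credentials from the request environment,
# assuming Apache/FastCGI is now passing the Authorization header through.
require 'base64'

def basic_credentials(env)
  header = env['HTTP_AUTHORIZATION']
  return nil unless header && header =~ /\ABasic (.+)\z/
  Base64.decode64($1).split(':', 2)   # => [user, password]
end

env = { 'HTTP_AUTHORIZATION' => 'Basic ' + Base64.strict_encode64('alice:s3cret') }
user, pass = basic_credentials(env)
# user is "alice", pass is "s3cret"
```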

Processes, Threads, and Ruby

While researching the best way to handle calling an external program from Ruby (and capturing stdout & stderr), I came across this post, which is a good review of how processes and threads work:

http://www.ruby-forum.com/topic/65155#75363

I still haven’t figured out exactly how I’m going to do this, but I’ll post it here when I figure it out. Ruby has a few different ways of opening and communicating with processes, but all seem to be lacking in some way or another. IO.popen lets you write to the process’ stdin, and read from its stdout, but you can’t get stderr without jumping through serious hoops (like redirecting stderr to a file and then reading the file…ugh). Open3.popen3 (brilliant naming) gives you stdin, stdout, and stderr, but the subprocess runs as a grandchild, so there seems to be no way to wait for it to finish.
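For reference, later versions of Ruby’s stdlib grew Open3.capture3, which sidesteps the whole problem by handing you stdout, stderr, and the exit status in one call. A sketch, assuming a Ruby recent enough to have it:

```ruby
# Run a command and capture stdout, stderr, and exit status in one shot.
require 'open3'

out, err, status = Open3.capture3('ruby', '-e', 'puts "to stdout"; warn "to stderr"')

out              # => "to stdout\n"
err              # => "to stderr\n"
status.success?  # => true
```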

Database War Stories

There’s an interesting series of posts over at O’Reilly Radar. Tim O’Reilly asked people how they were using databases in their “Web 2.0” applications (although I think the Web 2.0 part of it is for the most part irrelevant). The responses so far have made for interesting reading. So far we’ve heard from Second Life, Bloglines and Memeorandum, Flickr, NASA World Wind, and craigslist. One of the lessons learned is that with a high-traffic site, at some point you have to break your database up so that the “hot” data is spread across a number of boxes.

This got me thinking. It should be possible to build a tool that analyzes your database usage and, given a number of slave boxes to configure as it sees fit, automatically configures masters and slaves and distributes your data across those boxes as necessary. This would not be a one-time only process either; it would continue to monitor usage and performance and adjust accordingly. Certainly not an easy task, but should be doable.
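The innermost piece of such a tool would just be a router that maps each key to one of N boxes. A hypothetical sketch (the `ShardRouter` name and `connections` argument are mine, purely illustrative):

```ruby
# Hypothetical sketch: deterministically spread rows across N database
# connections by hashing the row key. Real rebalancing is the hard part;
# this only shows the stable key-to-shard mapping.
require 'zlib'

class ShardRouter
  def initialize(connections)
    @connections = connections
  end

  # The same key always lands on the same shard.
  def shard_for(key)
    @connections[Zlib.crc32(key.to_s) % @connections.size]
  end
end

router = ShardRouter.new([:db0, :db1, :db2])
shard = router.shard_for('user:42')  # stable for a given key
```

The monitoring-and-rebalancing half the post imagines is much harder, of course: moving hot rows means changing this mapping while traffic is live, without losing track of where anything is.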