I'm sure lots of us have heard that we should be writing unit tests for our code, but haven't quite got round to writing them because the features of our project were a more immediate concern.
Well, very recently I started a new project, Boogaloo (see previous entry). Although it was a diversion from my main project, I decided to go all out this time and write unit tests -- or more specifically, write the tests before I wrote any of the features. This practice is called test-driven development, i.e. your unit tests drive the evolution of your application.
Before I come to the punch line, I must first tell you that recently I have been battling with what I identified as one of my major programming weaknesses -- complexity. In the past, I'd start out by trying to envision how I wanted a project to be in n months and begin writing code to accommodate all the different eventualities. I'd even add complexity at the class and method level in an attempt to make the application as flexible and diverse as possible. A university friend commented the other day that "Ian's code doesn't look like anyone else's" -- he was talking about the needless complexity.
So, I've been consciously trying to avoid complexity where I can, but it doesn't always work, and it's easy to want to slip in an extra little feature here and there. Test-driven development has changed that though. As great as unit tests are, I don't particularly enjoy writing them -- and that's actually a good thing in this case! Because I write the tests first, I don't bother with adding tests for any extra little features, just the essential functionality. Once the functionality is in place and the test starts passing, I don't want to add any extra complexity, because that means I'll have to write new tests and probably refactor the existing ones. Problem solved!
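To illustrate the rhythm, here is a hypothetical sketch using Test::Unit (the Cache class and its put/get methods are invented for this example): the test is written first and describes only the essential behaviour, and only then is the simplest passing implementation written.

```ruby
require 'test/unit'

# Test first: this describes the essential behaviour before Cache exists.
class CacheTest < Test::Unit::TestCase
  def test_put_then_get_returns_the_value
    cache = Cache.new
    cache.put(:greeting, "hello")
    assert_equal "hello", cache.get(:greeting)
  end
end

# Then the simplest implementation that makes the test pass -- and no more.
class Cache
  def initialize
    @store = {}
  end

  def put(key, value)
    @store[key] = value
  end

  def get(key)
    @store[key]
  end
end
```

Anything not covered by a test simply doesn't get written, which is exactly the pressure against needless complexity described above.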
The end result is a simple application that does exactly what you wanted it to, plus you know for sure it actually works, and will continue to work as you make changes. This is a plus for the users too as they don't have to wade through your cruft to get at what they want.
At some point the project may approach the stage of a little more than 'simple', but at least it's still accessible, and you've only added the complexity where the users wanted it.
I'm definitely won over by test-driven development; it suits my development style and forces me to walk to begin with so that I can run later on.
I'm still discovering the wonders of testing, so no doubt I'll be blogging more about it very soon.
"Simple software is a lot harder to write than complex software" -- me
Saturday, October 14, 2006
Boogaloo Simple Cache Server
I've just made the first release of a new project named Boogaloo.
It's a simple cache server that provides persistent and temporary caching. You can also use it to host your own custom services.
Give it a look! http://boogaloo.rubyforge.org/
Wednesday, September 20, 2006
Just-In-Time including Liquid partials
If you're caching your Liquid templates and including partials into that template, updates to your partials will not become visible until you clear the cached template object from your cache.
But how do you know which template to clear from the cache? You'd either need to perform some kind of reverse mapping of partial includes to determine all templates including the partial, or maintain your own metadata. Neither sounds very appealing.
Another solution is to make your partial includes happen at run-time.
class JustInTimeInclude < ::Liquid::Include
  # Keep a handle on the original parse, then defer it until render-time.
  alias :super_parse :parse

  def parse(tokens)
    @tokens = tokens
  end

  def render(context)
    super_parse(@tokens)
    super
  end
end
and replace the current include tag with your own:
Liquid::Template.register_tag('include', JustInTimeInclude)
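The effect of deferring parsing until render-time can be sketched without Liquid at all (the class names here are invented for illustration): an eager include captures the partial's content once at parse-time and goes stale, while a just-in-time include re-reads it on every render.

```ruby
# Plain-Ruby sketch of the deferral pattern, no Liquid dependency.
class EagerInclude
  def initialize(source)
    @body = source.call      # parsed once; stale if the source changes later
  end

  def render
    @body
  end
end

class JustInTimeIncludeSketch
  def initialize(source)
    @source = source         # just remember what to parse
  end

  def render
    @source.call             # parsing happens on every render
  end
end

partial = "v1"
source  = lambda { partial }

eager = EagerInclude.new(source)
lazy  = JustInTimeIncludeSketch.new(source)

partial = "v2"               # the partial is updated after caching

eager.render  # => "v1" (stale)
lazy.render   # => "v2" (fresh)
```

The cached template object can now live in the cache indefinitely; only the partial's source needs to be current.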
Tuesday, September 19, 2006
Home made cache with DRb
I recently came across the need to cache various Ruby objects in a persistent way. Memcached and memcached-client looked like the obvious solution, but I soon realised that Memcached isn't persistent -- it will start removing cached objects when it gets close to filling its allocated cache quota. I also wanted something that could be shared among servers and processes; as an Agile-abiding developer, I refused to violate the DRY principle. So what other options did I have?
At this point I was getting a little worried that I was going to have to write my own distributed object system around Ruby's marshaling. A quick search around the web came up with a few prior implementations, but all were a lot more than I needed. I just wanted somewhere to store objects, no message bus, ACLs or events; just a simple store.
I soon came across DRb, Ruby's distributed computing library -- perfect! I'd heard of this library before, but never really looked into it or what it actually was. It turns out it was exactly what I needed. It provides (as with most things Ruby) a dead easy way to transfer objects over a socket. All I had to write was my storage class, which is nothing more than a wrapper around a Hash.
In my case, the cache is for storing my parsed Liquid template objects.
The server:
require 'drb'

class TemplateCache
  def initialize
    @store = {}
  end

  def get(key)
    @store[key]
  end

  def put(key, value)
    @store[key] = value
  end
end

DRb.start_service("druby://:7777", TemplateCache.new)
DRb.thread.join
Run that tiny script and you have yourself a cache.
To connect to the cache:
DRb.start_service
template_cache = DRbObject.new(nil, 'druby://:7777')
You can now access template_cache as if it were a local instance of TemplateCache.
So with all the time DRb has saved us, we can go a little further and add more services to our cache server. I also needed somewhere to store my registered Liquid drops (classes made available to the template). Notice that in our current example the cache is bound to a specific port; we either need to open up another port for our drop registry, or create a gateway mechanism that makes all of our services available over a single port.
Example gateway:
class ServiceGateway
  # Stash the service in an instance variable and define a reader for it,
  # so each registered service becomes a method on the gateway.
  def register_service(name, instance)
    instance_variable_set("@" + name, instance)
    self.class.instance_eval do
      define_method(name) do
        instance_variable_get("@" + name)
      end
    end
  end
end
and to startup the server:
gateway = ServiceGateway.new
gateway.register_service("template_cache", TemplateCache.new)
DRb.start_service("druby://:7777", gateway)
DRb.thread.join
and the client now uses the cache like this:
DRb.start_service
cache = DRbObject.new(nil, 'druby://:7777')
cache.template_cache.put(:my_key, "hello, world")
But wait a moment, this isn't going to work. Calling the put method of our TemplateCache instance will give us a NoMethodError. You need to include DRbUndumped in your TemplateCache class. This tells DRb not to marshal the instance returned to the client, but to pass back only a reference, meaning that all methods called on the instance at the client end will be forwarded to the object on the server.
require 'drb'

class TemplateCache
  include DRbUndumped

  def initialize
    ...
So there you have it, a very simple distributed object cache in only a few lines of code.
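Putting the pieces together, here is the whole thing as one runnable sketch. It binds to localhost with port 0 so the OS picks a free port (the examples above use 7777), and for brevity the client runs in the same process; in a real deployment the client would be a separate process connecting to the server's URI, and the server would block on DRb.thread.join.

```ruby
require 'drb'

class TemplateCache
  include DRbUndumped   # hand remote clients a reference, not a marshalled copy

  def initialize
    @store = {}
  end

  def get(key)
    @store[key]
  end

  def put(key, value)
    @store[key] = value
  end
end

class ServiceGateway
  # Each registered service becomes a method on the gateway.
  def register_service(name, instance)
    instance_variable_set("@" + name, instance)
    self.class.instance_eval do
      define_method(name) do
        instance_variable_get("@" + name)
      end
    end
  end
end

# Server side.
gateway = ServiceGateway.new
gateway.register_service("template_cache", TemplateCache.new)
DRb.start_service("druby://localhost:0", gateway)

# Client side (same process here, so we can reuse DRb.uri).
cache = DRbObject.new(nil, DRb.uri)
cache.template_cache.put(:my_key, "hello, world")
cache.template_cache.get(:my_key)  # => "hello, world"
```

Registering a second service -- say, the drop registry -- is just another register_service call, and it's immediately reachable through the same port.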