“There Will Be Code”

I spent a very interesting afternoon yesterday in west London, at the free Amazon Web Services Tech Summit for Developers and Architects. It was quite something, and my little brain is still cogitating, ruminating and speculating, but I shall attempt to summarise nonetheless…


The event was billed as something that would show developers, engineers, and architects how to get started with AWS infrastructure services and how to architect applications for the cloud, and on that they most certainly delivered: I am itching to give the environment a fair crack of the whip, and some of the possibilities presented were mind-boggling. Now, before you start rolling your eyes and looking to rant about “the cloud” versus “on-premises” and all that, let me state for the record that I believe in both states of play, and that I don’t necessarily think they’re mutually exclusive. On more than one occasion that afternoon, a presenter posited the idea that it is natural for an organisation to approach the cloud with some circumspection, perhaps starting off small with a couple of pet projects, and seeing where that leads them. Makes sense to me; I really can’t be doing with the idea of a “big bang” approach to IT projects, whether you’re kicking out a website or a full-fledged infrastructure renewal programme. Baby steps!

So, back to the content. All of the speakers were engaging, personable and knowledgeable. Iain Gavin (AWS UK sales rep.) was our master of ceremonies, and did a fine job. He was accompanied by the effervescent AWS “Evangelist” Matt Wood, who presented the deeper technical content—two sessions in fact, one on architecting apps in the cloud, the second a summary of the components available within the AWS world, and how to get started with them. Matt’s presentations sandwiched a series of customer “tech talks”, wherein a CTO, a director, two architects, and a developer advocate all presented their respective organisations’ tales of life in the cloud. Each of these sessions had valuable points to make, and together they made for essential viewing, helping to bring the concepts behind AWS to life. There were a few take-aways here:

  • Martin Frost, Chief Architect at Playfish, discussed the challenges presented by a gaming website with over fifty million unique users per month (!), and how such a company deals with massive rates of growth whilst maintaining no servers in-house. His adoption graphs were just off the scale; I’d never seen anything like it, and their approach to AWS is truly innovative. One point Martin made bears emphasis: AWS is a great leveller. Anyone can use any piece of the AWS offering, so if a one-man shop wants to start using pretty sophisticated load-balancing solutions or dynamically-scaling compute capacity—even location-based redundancy for their data—well, it’s there for the taking. This is not something readily available from traditional hosting providers at traditional small shop rates!
  • Michael Brunton-Spall talked about how The Guardian tooled up for two “unknowns”: their Open Platform API initiative, plus the backing web API / data sources for their iOS apps. The key lesson here was around the cloud providing extra infrastructure at no risk—particularly how one can scale down as well as up. For example, if you cannot predict demand for your new project, go overboard on the EC2 instances so that you can satisfy demand until you know what you’re dealing with. Project accounting is nice and simple too: assign your given instances to a specific project, and you have a clear costs analysis for the bean-counters (and it’s all operating expenditure to boot, rather than hefty up-front capital costs).
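The project-accounting point above lends itself to a back-of-the-envelope sketch. Here is a minimal illustration in Python; the instance names and hourly rates are assumptions for the example, not quoted AWS prices:

```python
# Hypothetical back-of-the-envelope EC2 project costing. The rates and
# instance names below are illustrative assumptions, not actual AWS pricing.
HOURLY_RATES = {
    "m1.small": 0.085,  # assumed hourly rate, circa 2010
    "t1.micro": 0.02,
}

def monthly_cost(instance_type, count, hours_per_month=720):
    """Operating cost for `count` instances running around the clock."""
    return HOURLY_RATES[instance_type] * count * hours_per_month

# Over-provision at launch, then scale down once demand is known:
launch_phase = monthly_cost("m1.small", 10)  # 10 instances while demand is unknown
steady_state = monthly_cost("m1.small", 3)   # scale back once traffic settles
print(f"Launch month: ${launch_phase:.2f}, steady state: ${steady_state:.2f}")
```

Because everything is billed hourly and tagged to a project, the “scale down as well as up” lesson shows up directly in the sums: the over-provisioned launch month is a bounded, temporary operating cost, not sunk capital.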
  • Craig Box, fresh from SEE2010, gave a thoroughly entertaining presentation (Commodore VIC 20s used in server topology graphics: inspired!) on how the Symbian Foundation uses (used??) AWS. His focus was on the fact that the organisation had existing open-source software, such as MediaWiki, MySQL and Drupal, and simply wanted to shift all that to the AWS environment. The key learning point here is that AWS approaches the cloud from the point of view of it being infrastructure-based rather than the approach adopted by vendors such as Salesforce (“software as a service”) and Google (“platform as a service”).
  • We also enjoyed notable presentations from Jim Brown, CTO of the geo-data specialists CloudMade, and Andy Nichol of the Telegraph Media Group (who gave a wry talk about the development and onward hosting of their impressive new fashion site; his “This is so out of my comfort zone…” aside raised a few guffaws!)

So, to some form of summary:

AWS is pretty mature—next year the company celebrates its fifth birthday—and the tooling / product offerings are constantly being updated and expanded. The AWS development team seems very responsive to customer requests, and the sales guys were certainly interested to hear what people think. Areas slated for improvement in some of the customer presentations included ways of providing shared storage space to multiple EC2 instances (Symbian wrote their own solution and released it—syncfs) and continued development of the relatively new Elastic Load Balancing service.

Another theme running through many of the presentations, especially given how many EC2 instances some of them had, was that of configuration management. Now this is an area I know nothing about, so it was interesting to hear about what people use. How on earth do you look after, say, 200 servers? Well, the common tool of choice was Puppet, an open source configuration management tool with its own declarative scripting language.

(I should also point out that there was honourable mention in the various presentations for Chef, an alternative configuration management solution, in which one writes “recipes” to outline how a server should be set up. Oh, and like Puppet, Chef is written in Ruby.)
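For a flavour of what Puppet’s declarative language looks like, here is a minimal sketch of a manifest; the module name, file path and service name are illustrative, not taken from any of the talks. You describe the desired state, and Puppet works out how to get each node there:

```puppet
# Illustrative manifest: ensure every node runs ntp with the same
# config file, restarting the service whenever the file changes.
class ntp {
  package { 'ntp':
    ensure => installed,
  }
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
  }
  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}
```

The appeal for a 200-server estate is clear: you maintain one declaration of intent rather than 200 hand-configured boxes.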

The “Getting Started” summary provided by Matt Wood was a fitting end to the afternoon, and he ran through all the main offerings from AWS, from SimpleDB through CloudFront to the Amazon security and economics white papers. AWS tooling is nice and simple: there are command line tools, there is an AWS console once logged in on the website, and of course, one can code all sorts of shenanigans using the AWS APIs, which are available for various languages.

All in all, an afternoon away from the screens very well spent. I got to talk to some interesting people, drank in a lot of useful information (drank a nice glass of red wine too), and saw whole other aspects of this software business we’re in.


(By the way, in case you’re wondering about the title of this post, that comes from the content of one of Matt Wood’s slides!)


  1. Thanks for the great review Ben. I hope that we see something similar in Australia at some stage.

    Did they talk about pricing at all? I looked into setting up a Domino image but I got put off by the confusing pricing.

    Ethann Castell
  2. Thanks Ethann. Re pricing, not really; some talk of the “micro” EC2 stuff, which comes in at $0.02 per hour. Pricing is pretty clear now, and there’s a free start-up level to boot.

    Ben Poole

Comments on this post are now closed.


I’m a software architect / developer / general IT wrangler specialising in web, mobile web and middleware using things like node.js, Java, C#, PHP, HTML5 and more.

Best described as a simpleton, but kindly. You can read more here.