I’ve been learning Node at work and, to be honest, it’s the most fun I’ve had learning anything in a while. I am going to cover the basics, using Windows, and I am going to assume that you have written JavaScript in the browser before.


Installation on Windows is pretty straightforward. You download the executable for your platform (32-bit or 64-bit Windows). If you are on Windows 8 like me, you should see the following in your start screen.


If you have Windows 7 you will have a start menu folder with the same items in it. If you choose the highlighted item “Node.js”, a command window will open with an empty prompt.


This is Node’s REPL interface; REPL stands for read-eval-print loop. It simply evaluates each line you type into the interface.

  1. ‘Hello World’;

This echoes “Hello World” back in the REPL interface. Similarly, if I enter the following in the REPL I get this output.

  1. > var a = 0
  2. undefined
  3. > a
  4. 0

The interface evaluates each expression as you enter it. Assigning zero to the variable a returns undefined, since an assignment statement has no meaningful value of its own. Evaluating the variable a then returns 0, the value we just assigned. The REPL interface in Node is very similar to the console in Chrome.


Running Scripts

Obviously the usefulness of the REPL interface is limited to simpler tasks. For more complicated tasks you’re going to need to run a script file. Thankfully, this is quite easy: when installing node, node is added to your path. This means you can run node by simply opening a command window and typing:

  1. node <my script>.js

And the script will execute. I have the following script “count-down.js” and it does exactly as you expect.

  1. var i = 10;
  2. do {
  3.     console.log(i);
  4. } while ( i-- );


  1. C:\Users\Luke\Desktop>node count-down.js
  2. 10
  3. 9
  4. 8
  5. 7
  6. 6
  7. 5
  8. 4
  9. 3
  10. 2
  11. 1
  12. 0

Note: You’ll notice the while condition evaluates to false once our variable reaches zero. Those familiar with JavaScript and other dynamically typed languages will know that zero evaluates to false in our while clause.
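For reference, a quick sketch of which values JavaScript treats as falsy; you can paste this into the REPL or run it as a script:

```javascript
// Zero is one of JavaScript's falsy values, which is why `while (i--)`
// stops once i reaches zero. The other falsy values behave the same way:
var falsyValues = [0, '', null, undefined, NaN, false];

falsyValues.forEach(function (value) {
  console.log(Boolean(value)); // prints false for every entry
});

console.log(Boolean(1)); // true -- any non-zero number is truthy
```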

Setting up a simple development environment

The examples I have given so far are quite trivial if you have studied programming before. Now I want to get into what Node is really capable of, but before we do that we need to set up a basic development environment.

Installing node-dev

At the moment, if you modify your script you need to run node again manually; this becomes tedious. It is easily rectified:

  1. npm install node-dev -g


What exactly have I done there?

  1. I have invoked the node package manager using the command “npm”.
  2. The install keyword tells npm that I wish to install the package “node-dev”.
  3. Finally the “-g” flag tells npm to install this globally throughout the environment.

More on this later.

If I use node-dev to start my script, it won’t fall back to the command prompt once the script has finished executing. If I edit the script while node-dev is running, it will detect that I have changed the script and restart execution.

Installing node-inspector

For debugging purposes you could put various print statements throughout the code, but we all know this isn’t ideal. Node-inspector lets us hook the Chrome debugger up to Node. Again, we use npm to install node-inspector.

  1. npm install node-inspector -g

It installs far too many dependencies to list here.

Once node-inspector is installed, we can start it. I usually have it running in another command window. To start it, simply type:

  1. node-inspector

You should see the following output.



Now you need to start your script.

  1. node --debug-brk count-down.js

The “--debug-brk” option stops script execution on the first line of code. To use the Chrome debugger, you simply open Chrome and navigate to localhost:8080/debug?port=5858. As the screenshot below shows, you can use the Chrome debugger just as you normally would in the browser.


Note: the address the first screenshot says the debugger was opened on doesn’t work for me, but putting localhost does. I don’t know why, and it hasn’t bothered me enough to research the underlying reason.

Node’s Package Manager

Node comes with a package manager called “npm”. Just above, we used it to install some tools to help us with development.

NPM installs and manages third-party code libraries that you specify, along with the libraries they rely on. If you have used tools like NuGet, Gems or PEAR, or are familiar with Linux package managers, you will recognise the concept. For now we will be downloading packages from the npm repository. Usage of the package manager when installing is pretty straightforward:

  1. npm install <package name>

NPM will install the package to a folder in your working directory called node_modules. Only node scripts in that working directory will be able to require those libraries. If you want to install a package globally, you need to use the “-g” flag as we did before.

Note: npm sometimes has to compile libraries that are native, i.e. C++ libraries. You will need a C++ compiler, otherwise the package installation will fail. Node-gyp is the component that builds these C++ libraries; there are instructions on what you need to get node-gyp (and ultimately npm) working.

On Windows 8 I found it simplest to install VS Express for Desktop and ActivePython, and then use the “VS2012 x64 Cross Tools Command Prompt” when running npm install.

There is a claim made quite a lot in the open source vs proprietary software debate: “many eyes make all bugs shallow”. While most software developers would admit that a fresh set of eyes does help them solve problems, the effect doesn’t scale. I believe the myth comes from the idea that if one person can see a defect, then logically a lot of people will be able to find a lot of defects.

Why doesn’t it scale?

There are several reasons why, and I will list them in inverse order of importance; they all come back to understanding the problem domain.

  1. Developers will argue over whether something is done the best way, when that may be unimportant as long as it works.
  2. Developers without a good understanding of the problem domain will incorrectly identify problems with the code.
  3. Code takes longer to read and understand than it took to write.

I work with a very large code-base and I am changing parts of it every day. Documentation and comments are very sparse, so I have to read the code to work out the original author’s intent. The first hurdle is understanding what a particular method is trying to do.

Now, if you accept that it is potentially difficult to understand somebody else’s code, it is doubly difficult when you don’t have a good idea of the problem it is trying to solve or the requirement it is trying to implement.

If you have good unit-tests, requirements and use cases, then you probably won’t run into these problems … but this is irrelevant to whether the software is open source or proprietary. You still need skilled individuals who know the problem domain and have a good understanding of the code they are working with.

Internet Explorer is a thorn in the side of any web developer. I have come out in its defence over some of the things I think are unfairly said about the world’s most popular browser.

I recently watched a video called “Internet Explorer the story so far”. What was most interesting was that it explained Microsoft’s philosophy about updates to their browser.

Some of the important things were:

  • Enterprise users expect a stable version tied to a version number, with a feature set for that release which is set in stone.
  • Microsoft ship IE with their operating system, so they have an obligation to provide web browsing features with the OS.
  • They would rather not ship a feature at all if it is buggy or incomplete.
  • They are trying to engage developers with preview versions of the newest browser, so developers can send Microsoft feedback.
  • They are sorry for pre-IE9 mistakes.

The video is worth watching, though I think people are more likely to keep complaining about IE than to look at the differences between the customers Microsoft makes IE for and those their competitors build their browsers for.

Microsoft provide a decent out-of-the-box browser that can be easily administered by system administrators, while other vendors produce their browsers for various sets of non-corporate users.

Microsoft are trying to do two things.

  • Ship a standard set of features the browser supports per release.
  • Make sure they implement well understood and defined web standards.

They are trying to walk a tightrope between the needs of enterprise computing and those of modern non-corporate customers. Personally I think the modern versions of IE (9 & 10) are good browsers, and while the inspector tools are lagging behind, for the majority of users they deliver a very pleasant browsing experience.

Recently there was an article about CSScomb on Smashing Magazine. It looks like a good tool, but I took exception to a particular statement in the post.

The only way to sort CSS properties usefully is to arrange them functionally. This is the sort order included in CSScomb by default. All properties are divided into several groups and arranged in the most logical order within each group.

I don’t agree with the first statement in this paragraph at all, and I felt compelled to reply on the article myself. A few commenters misunderstood why I was objecting, saying I was missing the point because the tool solved these problems by ‘deciding for you’, and others disagreed with my claim that alphabetical order was the best for code maintenance.

I will concede that you should use whatever style you and your team reach a consensus on, and I wouldn’t be so presumptuous as to tell you how you should be coding or what standard to follow.

You can see my replies on the thread, but for brevity I will summarize my position here. I disagree with the first sentence in the article because of the following:

  • Grouping by functionality or another set of rules is great if you are a sole developer or a small team that has reached a consensus.
  • I believe alphabetical order is the simplest for individuals to agree upon.
  • It is by far the simplest convention for a developer of any experience level to implement; alphabetical order is a no-brainer. My friend said it best over Skype while we were discussing this:

    David Walsh: You could spend something like 20 minutes trying to explain to another developer exactly how to follow functionally ordering, or just say “order by alpha”.

  • There is ambiguity in how different developers might group related properties.
  • In larger teams I believe it is better to choose the simplest coding standard to follow, as developers are then more likely to follow it. This is very similar to an idea presented on thedailywtf.com. As previously stated, alphabetical order is easily understood by everyone.

I expected a bit of vitriol in response, and to be honest I was pleasantly surprised that a lot of replies argued their position quite logically and politely.

To my surprise, the maintainers of the project on GitHub actually added alphabetical ordering as an option to the tool itself, which I think is pretty damn awesome. They responded to the feedback in a positive way, considering I was having a bit of a rant at the time.


I would like to summarise my feelings about this:

  • Use whichever coding style works best for your team. Don’t let me sway you if you have something in place that already works.
  • I believe alphabetical ordering of declarations is simplest to understand and implement when you have several teams working on CSS style sheets.
  • It was pretty cool that those that develop CSS Comb updated it to support an option for alphabetical ordering. Kudos to them!
  • I think some of the articles Smashing Magazine provides are great for spreading new ideas to novices and small teams. However, a novice working in a larger team must understand that Smashing Magazine is targeted at smaller web agencies and individuals; many of its ideas may not work in an environment with larger teams and a more rigorous release process.

However, I did share some ground with the other web developers: while we may disagree on specifics, there was a consensus that following a standard organisation of declarations, and using tools that help with that, is a good idea. That consensus matters more than any specific opinion of mine.

P.S. I am sure there will be those that will point out that the CSS styles on this website aren’t in alphabetical order. I downloaded a basic theme and modified it myself for my personal use.


I saw an article on Nettuts the other day.

Your IT department has probably tested those applications with newer browsers, and have recommended to management against using them within your company. This is very bad – but very much a reality!

This attitude irritates me, because the author makes it sound like some sort of short-sighted decision by the IT department. As much as I wouldn’t like it either, there are several reasons why it is perfectly understandable, and as a professional you should in some cases embrace it.

The application is working perfectly fine

The old adage, “If it ain’t broke, don’t fix it” is the first thing that comes to mind. If there is a significant number of intranet applications humming along quite nicely, why would it be a good idea to break what is currently working? There is no good reason.

Changes cost the business time and therefore money

Making an application work with newer browsers may require a significant investment of time which probably outweighs any maintenance costs. I hate to sound like someone’s manager, but there is no benefit to the business in changing an existing, working application unless the maintenance costs are significantly expensive or time-consuming. Remember there are many other projects the business may be taking on, and making an intranet application work with ‘browser version X’ isn’t important in the grand scheme of things.

Also, it isn’t just the developer’s man-hours that are going to be invested in the project. Off the top of my head:

  • There will be someone managing it.
  • There will be someone monitoring the change request.
  • There will be a test effort which may require several individuals.
  • There will be someone that will need to deploy these changes.

It is not just your time; there are many others involved.

The requirements/functional specification for the application may be long lost

Reimplementing any functionality should start with the functional specification. If none exists and you re-factor or re-implement, you are most likely changing the software in a way that doesn’t conform to the now-lost specification. In a perfect world, a requirements elicitation process would start and a new specification would be drawn up; this requires resources and has a cost associated with it.

Everything in that application that exists today, as terrible and ugly as it may be, was probably implemented by someone for a very good reason. Sometimes it was done badly, but that doesn’t matter as long as it works and the users know the work-arounds for known bugs.

‘Whiz Bang’ CSS and JavaScript probably isn’t needed

I would wager most intranets don’t need the following to be useful:

  • Drag and Drop.
  • Smooth Animations.
  • Rounded Corners.
  • Text Shadow.

I am sure I could pluck others off the top of my head, but I think you get the picture.

What users probably need to do is view, create and edit information, and there is no reason why most legacy browsers, such as IE6, can’t do this. You don’t need all the fancy features of the latest and greatest browser. The intranet is not public facing and aesthetics aren’t as important. While I agree the information and user interface should be tidy and straightforward, it doesn’t need to be pretty or sexy, though it may well need to fit in with corporate design guidelines and branding. And while I would like to produce something internal users loved to use, the users of said system will be well versed in the ins and outs of how it works (probably better than you, the developer, will be).

If they must use these applications, legacy browsers will most likely be virtualised

One of the points made in the article near the end is:

Remember that, inevitably, machines reach the end of their lifecycles. Hard Drives crash, motherboards fail, and software makers like Microsoft stop supporting and releasing patches and service packs for older operating systems. With new machines come newer and better browsers.

It is probably cheaper for the business to provide a virtual environment for users, whether it be Citrix, Windows XP Mode in Windows 7 or any other sort of virtualisation, than to actually upgrade or replace the existing solution.


It isn’t an ideal situation to be in if you are a web developer, and you may feel like you are falling behind (you may well be). However, that doesn’t invalidate the good reasons the business hasn’t upgraded its legacy browser. If you are that frustrated with your job, you should consider looking for another.

Churn for the sake of change is never good, but nor is stagnation. I have recently been working on a project, changing things that would have been a good idea if I were working with a perfectly sane code-base; I wasn’t. Some of the delays were admittedly my fault, because I didn’t have a complete understanding of the business and how the site worked. Others happened because the code in places was not quite ideal, and there were implications of my changes that I just did not foresee. Some of the blame lies on my head; the rest lies with existing problems in the code base. I have since become a lot more cautious about making changes that shouldn’t affect things.

Change on legacy systems that depend on certain browser features (such as ActiveX) should be carefully considered, and in a lot of circumstances may be undesirable. Simply implying that the IT department is making a bad decision because it doesn’t fit your personal opinions is not professional, and not healthy for you or your employer.

I had this error today in IE7 & 8, from a script I was given to fix.

SCRIPT438: Object doesn’t support property or method ‘bind’

I had never come across bind() before. I discovered it is part of the ECMAScript 5 specification and is only supported by newer browsers (IE9+, Firefox 4.0, Chrome 7.0, Opera 11.60); it is not supported in earlier browsers or in Safari.

I found the solution on MDN. bind() is a new method that returns a copy of a function with the ‘this’ value fixed to whatever you supply. A bit of copy and paste of some code from MDN into my own script, and I had bind working again.
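The MDN shim handles several edge cases (such as using a bound function as a constructor), but the core idea can be sketched like this; note this is a simplified version, not MDN’s full implementation:

```javascript
// Define bind only when the browser (e.g. IE7/8) lacks it natively.
if (!Function.prototype.bind) {
  Function.prototype.bind = function (context) {
    var fn = this;
    // Arguments after the context are pre-filled (partial application).
    var preset = Array.prototype.slice.call(arguments, 1);
    return function () {
      var args = preset.concat(Array.prototype.slice.call(arguments));
      return fn.apply(context, args);
    };
  };
}

function greet(greeting) { return greeting + ', ' + this.name; }
var bound = greet.bind({ name: 'IE8' });
console.log(bound('Hello')); // "Hello, IE8"
```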



I loaded up nettuts and I was greeted with this.


If not, the W3C mailing lists have been on fire ever since it was discussed (and essentially announced) that Microsoft, Opera, and Firefox will begin to adopt and style webkit-prefixed properties. One of the reasons behind this decision is that we developers aren’t being responsible when coding our stylesheets; we’re applying too many webkit-specific properties, without considering other browsers.

Oh, brilliant. It seems I wasn’t quite correct in the first place: WebKit is the new IE6!

History Repeating Itself

The software industry has a long history of “lock-in”, and I’m not talking about it in the terms many open source advocates use; I am talking about design decisions. For example, many of the critical files for the Windows operating system live under a directory called system32, as I am sure many of you know.

System 32

Until Microsoft started releasing x86-64 versions of Windows, these files and programs were built for a 32-bit processor (hence the name). However, to preserve backward compatibility with older programs, they couldn’t make another directory called system64, since many older programs assumed the existence of system32. Instead, all the 64-bit versions of these files were put under system32, and all the 32-bit versions were put into a folder called SysWOW64.

So in essence:

  • All the 32-bit critical files are in a directory whose name ends in “64”.
  • All the 64-bit critical files are in a directory whose name ends in “32”.

Microsoft made a design decision almost 20 years ago that still haunts them today. They are locked into it for as long as they keep backwards compatibility with older programs.

There are other (possibly better) examples of this, but this is the most recent issue I knew enough about to explain.

This has happened many times in the history of software; MIDI is another example of lock-in.

Browser Vendors

So, much like Microsoft, browser makers made a design choice. At some point they decided they needed a way of letting developers evaluate the following:

  • Browser support for new CSS features.
  • The specification for new proposed CSS features.

Vendor prefixes were the way they decided to support this. They were never meant to exist in production. However, browser vendors decided their browsers should recognise their own vendor prefixes irrespective of whether a site is in production or simply evaluating new CSS features.

This design decision is now causing problems because, much like IE6 back in 2001, WebKit is almost ubiquitous on tablets and smartphones, as IE6 was on desktop machines. We are now locked into that design decision.


So far all sorts of odd solutions have been put forward.

While these are all good ideas in principle, let’s remember they all work on the premise that people will do the right thing. They assume that people will listen, that most developers actually care about their craft beyond the pay-cheque it brings, that clients are willing to pay for sites to be updated, and that browser developers are willing to lose users (which ironically might make the situation even worse).

Personally, I don’t trust other developers. The vast majority of developers I have ever met have been like electrons, in that they tend to follow the path of least resistance. I have met two or three developers I could actually trust when working with them.


In Object-Oriented Programming you have the concept of encapsulation.

A language mechanism for restricting access to some of the object’s components.

I know there is much more to encapsulation, but without these language features it doesn’t work well. Encapsulation keeps the inner workings of an object completely hidden from any code outside of it. This has clear benefits: it stops methods being called in ways they weren’t intended, and important variables being changed from outside the object in error.

By doing this you pretty much stop anyone misusing your code (including yourself). If I try compiling code that accesses a “private” variable of another class, the compiler will mock me, tell me I am an idiot and call me rude names.

It has many other benefits as well, but they all exist because of this initial design decision in the language. Encapsulation is trivial in C# and Java, however in JavaScript it is much more difficult (I am not entirely sure it is fully possible). If Java and C# didn’t have the words “private”, “protected”, “internal (C#)” and “public”, encapsulation would be much more difficult.
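For what it’s worth, one common JavaScript pattern that gets close to encapsulation uses closures; a minimal sketch:

```javascript
// JavaScript has no `private` keyword, but a closure can hide state:
// `count` below is invisible to all code outside makeCounter.
function makeCounter() {
  var count = 0; // effectively private
  return {
    increment: function () { count += 1; },
    value: function () { return count; }
  };
}

var counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2
console.log(counter.count);   // undefined -- no direct access from outside
```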

My Solution

So what does encapsulation in OOP have to do with WebKit prefixes? I believe the only way to stop lazy developers abusing a software feature is to make sure one of the following holds:

  • They can’t do it.
  • It is made extremely difficult.
  • There are better alternatives which are less difficult and therefore preferred.

So I think the best way of stopping people abusing vendor prefixes would be to have them turned off in the browser by default; they should only work when the browser is put into a “development” mode.

This could be implemented in several ways:

  • Have a development check-box in the advanced settings of the web browser.
  • Much like IE checks for a compatibility-mode meta tag telling it to render the page as IE7 or 8, you could have a “development” meta tag; if the tag is included in the page, it would be rendered in “development” mode.
  • Make a developer version of the browser.

However, I believe the first is the best solution. The second would just be abused like vendor prefixes themselves, and the third would lead to everyone acquiring the developer version once word got round.

Since vendor-prefixed styles wouldn’t render on “normal” people’s machines, developers couldn’t be lazy and assume they can rely on a vendor prefix. Meanwhile, with a development mode, new browser-specific CSS features could still be tested by developers.

This will cause things to break, it will cause heartbreak … but it will force developers not to make assumptions about their visitors’ platform in an effort to cut corners. In the short term it will cause many problems, but in the long term (much like encapsulation) it will make things better.

Of course, this would require Apple and Google to have enough spine to actually put out such an update in the next iteration of their browsers. I doubt it will happen; there would be too much short-term breakage, which would cost a lot of money to fix.

The benefit however is that we won’t have another browser manufacturer essentially dictating the direction of the web.

At the moment I have no proper internet in Spain. This has put me in the unusual position of basically having no internet except my mobile phone, which costs me £2 a day to use abroad, with 25MB a day before they charge me another £5.

So I went into Chrome’s options and disabled images and plug-ins (i.e. Adobe Flash) in an effort to make the best use of this limited bandwidth. It was a good idea: I haven’t had a text from Vodafone saying they are going to charge me an extortionate rate for something that is essentially free for them. Text-only browsing uses very little bandwidth and has a small download footprint.

The first thing you’ll notice is that the Internet looks very different when you turn a lot of this off.

A quick overview of how some websites look

Facebook just doesn’t work well (I will post some pictures from my work computer, because I don’t want to waste bandwidth). There is no alternative text on a lot of icons and no titles on links, and I am not talking about people’s posts; I am talking about the four big main buttons on the left-hand side. I am basically using Facebook from visual memory.

The conversation bar on the right-hand side uses images for online status; this could have been done in CSS quite easily, reducing their bandwidth (stuff like that might actually matter to Facebook; with so many hits, even something as small as a favicon can cane the bandwidth). As it is, I can’t see anyone’s status in Facebook Chat.

Twitter works very well; it looks almost the same without images and is easily viewable.

Gmail, in comparison, looks pretty much the same. There are some missing icons, but alternative text or a title attribute tells you what each button does on hover. Everything else looks exactly the same; in fact, most of Google’s services work pretty well, considering.

Some smaller websites such as cracked.com work almost the same, even though they are quite image-heavy. I won’t go into YouTube, because there is no point: one video would probably take up three or four times my entire daily bandwidth.

However, there are a lot of sites where you have to rely on visual memory.

What does this have to do with the title of the post?

Back to the point. This all got me thinking about how to design and implement a site if the user had low bandwidth or limited download capacity.

And I got it down to some simple rules in my head, these being:

  • Semantic mark-up.
  • Use text for text; don’t rely on people being able to see a styled link or button that uses an image in place of text.
  • Style as much as possible with CSS rather than background images.
  • Make sure you use title and alternative text on anchors and images.
  • Compress your CSS and JavaScript, maybe even your HTML; every byte counts.
  • Make sure your web page is laid out more or less the same way whether or not it has images and plug-ins such as Flash.

I am sure some people would add a few more, but I think giving consideration to these should go a long way.

These are basic tenets of good web development in general; they almost sound obvious. However, a good number of web developers won’t bother, because they have high bandwidth, and unless forced like I am to use their browser as though it were low-bandwidth, they won’t consider the implications of how they create their web pages.

You may say, however, that this isn’t a regular occurrence and most people have high bandwidth. But things that need higher bandwidth take longer to download, and that means time.

However the big guns like Google have considered this and their services work just as well when you are strapped for bandwidth. That is because every millisecond counts, things have to be instant and Google is geared to performance.

Amazon is geared towards performance.

Performance of your website is a feature, because people are impatient. I remember a newspaper reporting that Amazon reckon they gain £20,000 for every millisecond faster their web page loads (I tried to find a link, but I lost the bookmark and couldn’t find anything after about 10 minutes of Googling).


Design as though your website is a document first, and then make it pretty later. Look to see where things are taking time to download.

View the page with things such as Images and Plug-ins and maybe even JavaScript disabled. Does the website still work?

I have made some pretty background-image-heavy websites, and I don’t even want to look at how they appear on a mobile device or with images turned off.

Modern web development seems worse than before, in my opinion. Each time I read about some new web specification coming out, it seems to be worse than what came before it. These are my main problems with it, and I am surprised nobody is mentioning them.


I have been reading some books on HTML5 on my Kindle, since I didn’t know a huge amount about it and wanted to learn how to use the new semantic elements correctly, along with some of the new APIs.

I had seen plenty of material on the web about HTML5, and I decided I wanted a complete overview. I was horrified by the first chapter.

The main theme was “you can do this now, and you couldn’t before”, which was fine until I realised you don’t have to close your tags! Really, are we going back to HTML 3.2? I am one of those guys who will immediately reformat the mark-up of a whole front end if it is badly formatted.

I give my ‘id’ attributes sensible names, and everything is indented perfectly. The reasoning seemed to centre on the fact that many developers are doing it wrong and the specification should allow for that. WHY?

I had to stop myself from throwing my Kindle across the room. Why should we pander to developers who simply aren’t good enough to close their tags properly? Sure, you can serve XHTML5, but what is the point when lazy web developers will say “I can just not bother and use this”?

I know parts of the specification say a tag will auto-close when certain HTML elements follow it, but surely it is easier to just specify that you must close your tags.

Canvas Tag

All I have to say about this is: why does it exist, other than as some sort of attempt at killing Adobe Flash?

I was told that the idea of a web page was that it was a document first.

The idea was quite basic: make the content meaningful without a style sheet. If we then apply a different style sheet to present it differently (say for a printer or a mobile phone), the content means the same thing but is presented correctly for the device it is displayed on.

So while we have gone forwards with CSS media queries, we have gone backwards with the Canvas tag. All the Canvas tag does, in my mind (correct me if I am wrong), is the sort of stuff you are likely to want to do with Flash, but with JavaScript. Flash already implements a version of ECMAScript.
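For context, this is roughly what the canvas element offers: a blank bitmap you draw onto with script, the kind of thing previously done in Flash. A minimal sketch; the id, dimensions and co-ordinates are purely illustrative:

```html
<canvas id="demo" width="200" height="100"></canvas>
<script>
  var canvas = document.getElementById('demo');
  var ctx = canvas.getContext('2d');   // the 2D drawing API
  ctx.fillStyle = 'navy';
  ctx.fillRect(10, 10, 100, 50);       // draw a filled rectangle
  ctx.strokeText('Hello, canvas', 10, 80);
</script>
```

Note that the pixels carry no meaning for a style sheet, a screen reader or a search engine, which is exactly the "document first" concern above.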

Why are we reinventing the wheel, just because Apple doesn't want Flash on iOS and a few people run alternative operating systems out of idealism?

Video / Audio Tag

In my previous post I highlighted the decisions you are likely to make when choosing a strategy for adding video to a web-site. There were four different options to cover everything, all of which take up space on a server.

Unlike your home machine (before Thailand was flooded), in a professional hosting environment you don't just plug in a terabyte of space and forget about it. 500GB cost my previous employer thousands of pounds to implement, and my manager uttered something like "cheaper than I thought".

So instead of deciding which codec should be used, they left it unspecified. Maybe because:

  • WebM wasn't around yet.
  • Theora was around, but nobody except nerds seemed to know what it was.
  • H264 was popular and many video chipsets had hardware acceleration for it. However, it is proprietary.
  • Technology may change in the future.

So now we have 3 competing video formats. Once YouTube flips the switch, I expect WebM will become the de-facto standard … oh wait, we still have lots of people using iOS devices, and the most used browser on Windows 7 (which will become the new Windows XP) is Internet Explorer 9; both use h264 for their video codec.

So maybe it will still be Flash, h264 and WebM as competing formats, or we could just say sod it … let's use Flash and h264 and still cover 99.9% of visitors, and those on the alternative operating systems are still SOL.

Which, to my mind, defeats the whole point of the new video tag in the first place.
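In practice, covering the competing formats means stacking sources and a Flash fallback inside one element. A sketch with illustrative file names: the browser plays the first `<source>` it supports, and browsers with no `<video>` support at all fall through to the Flash object.

```html
<video controls width="640" height="360">
  <source src="clip.webm" type="video/webm">  <!-- Firefox 4+, Chrome, Opera -->
  <source src="clip.mp4"  type="video/mp4">   <!-- h264: IE9, Safari, iOS -->
  <source src="clip.ogv"  type="video/ogg">   <!-- Theora: older Firefox -->
  <!-- Fallback for browsers with no <video> support at all -->
  <object data="player.swf" type="application/x-shockwave-flash"
          width="640" height="360">
    <param name="movie" value="player.swf">
    <param name="flashvars" value="file=clip.mp4">
  </object>
</video>
```

Four encodes of the same clip, which is exactly the storage problem described above.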


Every browser, at its current version at the time of writing, is rubbish. Maybe IE9 is decent.

  • Chrome is rubbish; we have some weird issues with Chrome rendering certain form elements. I will put up a demonstration of this behaviour when I can be bothered. To give you a brief summary: if the legend is displayed as block, it won't render properly until you add a padding. Another browser had similar behaviour; there is an MSDN article about it.
  • Firefox and Chrome won't reliably load a web page. You know the spinning loading icon at the top; the animation will continue for no particular reason and the browser will never load the page. This isn't too much of an annoyance while surfing the web, but when developing it is a whole different matter.
  • Firefox: basic form submission has some weird caching bollox going on. For example, in phpMyAdmin I logged in and made a typo in my password. So I typed it again, fairly certain I was correct: Incorrect. I then opened up Notepad, typed the password there, cut and pasted it in: Incorrect. I then loaded up Internet Explorer and entered the same credentials … SUCCESS! Firefox had cached the form details on a login box, for crying out loud. This behaviour must have been thought up by someone who likes to torment people, plain and simple.
  • Internet Explorer 9 does everything slightly differently than every other browser, yet again.
  • Safari is awful on Windows and uses QuickTime for its <video> tag (see above). So if you want to use Safari on Windows (I have no idea why you would) and want to use HTML5 video, then you must install QuickTime as well.
  • Opera to be honest can just fuck off ;-)
  • CSS rendering problems aren't a real issue IMHO. What is a problem is when the browser decides to use up one of your processor cores for no reason (Chrome), or to eat up all your RAM on an 8-gigabyte workstation and make typing slow (Firefox).

Some of these problems aren't that bad if you are a home user. I am not; I am a web developer, and I need to trust these things, just a little bit. I don't expect them to be perfect, just better than "good enough". There is nothing more frustrating than something misleading you when you are trying to develop; it wastes valuable time.

For all of their problems, at least Internet Explorer 6/7 and 8 would load a page when told to.

CSS Vendor Extensions

Have you seen -webkit-<blah> in the CSS of a page? Well, that is a vendor extension. There are similar extensions for pretty much every browser. They are put there as a way of letting developers test new CSS properties, which is a great feature until it is abused.

The thing is that these vendor extensions are picked up by browsers other than the developer's own. So what happened was that a developer using Safari on a Mac would use the -webkit vendor prefix to make the browser do what they wanted, when other browsers couldn't do the same styling. The developer makes a nice webpage and it is job done. Well, not quite.

The problem is that the page only really renders right in WebKit, and when other browsers caught up with Safari and Chrome in their CSS support, the page still isn't rendered correctly, even though the majority of browsers can now do the equivalent CSS feature without the prefix.

I think vendor prefixes should only be enabled if the web browser is in a "development" mode; this would stop developers mis-using vendor prefixes.
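The defensive pattern, until browsers do something like that, is to list every prefixed form and finish with the unprefixed property, so browsers that have dropped the prefix still pick the rule up. A sketch; the selector and values are illustrative:

```html
<style>
  /* Prefixed forms first, the standard property last so it wins
     once the browser supports it unprefixed. */
  .card {
    -webkit-border-radius: 8px; /* older WebKit (Safari, Chrome) */
    -moz-border-radius: 8px;    /* older Firefox */
    border-radius: 8px;         /* the standard property */
  }
</style>
```

A page that only ships the -webkit line is exactly the abuse described above: it stays broken everywhere else even after the feature goes standard.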

The new shiny shiny

I don't know many websites that are seriously using HTML 5. I am sure many are going to say "Google" or another big-name website; I mean websites that aren't developed by big names or by people gunning for the new tech. XHTML 1.0 Transitional (which is essentially HTML 4) is just fine for 95% of what you want to do. Flash is great for video, and YouTube is a testament to this.

I haven't spoken to a developer in person who has used HTML 5 for their projects, except one guy who said he didn't care about anyone who wasn't using the latest and greatest.

I am not against progress: the new semantic elements are, in my opinion, a good step forward, especially things like <nav>, <header>, <footer>, <section> and <article>, because they mean something very unambiguous. My main problem is just that everyone is racing to use it, and a lot of the time they don't even know why. CSS 3 is brilliant (I have used up to 9 divs to style some boxes with drop shadows) and is another welcome improvement.
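This is what those elements buy you: a page skeleton that says what each part is, rather than a pile of anonymous divs. A sketch of a typical layout; the headings and links are placeholders:

```html
<header>
  <h1>My Site</h1>
  <nav>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/about">About</a></li>
    </ul>
  </nav>
</header>
<section>
  <article>
    <h2>A post title</h2>
    <p>The post content lives here.</p>
  </article>
</section>
<footer>
  <p>Copyright notice, links and so on.</p>
</footer>
```

Compare that with <div id="header">, <div id="nav"> and friends: the content is identical, but the meaning is now in the mark-up itself.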

Browsers being updated every six weeks in some sort of rapid development process is becoming irritating. I have used software since 1999 that still works fine (I am looking at you, Winamp 2.81 :D). When most pages are still in XHTML 1.0, why should I have to upgrade my browser every month?


I think it boils down to this.

  • Browser Vendors please fix issues instead of pushing new features first.
  • Don’t abuse vendor prefixes.
  • Write semantic markup even if the world seems against you.
  • Flash video is going to be about for a while.
  • Please close your tags ;-).


Someone has pointed out to me that I may have been over-dramatic about "the attempted murder of Adobe Flash" ;-).

I think I got too carried away here, and some parts sound like a rant. I would like to clarify that I don't think the WHATWG and W3C are trying to kill Flash. However, I do believe that some people would like to see the demise of Adobe Flash for reasons that aren't purely technical, or think that HTML 5 will be some sort of silver bullet against it.

I would like to link to this article by Forbes, which explains my position better than I could.

There was a lot of debate about codecs and the HTML video tag not so long ago. Many people were upset with h264, since distributors (such as browser vendors) have to pay royalties because the codec's software algorithms are patented and licensed by the MPEG LA. This is of course a problem for smaller browser vendors; many people were angry at the MPEG LA and at the browser vendors that defaulted to this codec (these being Apple and Microsoft).


It has also become "cool" to be a Flash hater. While I abhor websites that are built completely in Flash, Flash is great for things such as video, audio players and games. Flash also works absolutely fine on the most popular platforms, with the exception of iOS.

It is also much, much faster than JavaScript for complex animations on older browsers; the people running those browsers are likely to be on older machines, and I have seen some JavaScript animations cause processor usage to spike.

Yes, it is owned by Adobe, but it is a solution that works and is freely available for the overwhelming majority of platforms.

Possible Scenarios

There are 4 possible scenarios when dealing with support for the video tag.

  • No browser support at all (IE before 9, Safari on Windows without QuickTime installed).
  • Theora support (Firefox 3.5 and later, Chrome, Opera).
  • Mp4/h264 support (Safari on Mac OS X, Safari on Windows with QuickTime installed, iOS devices).
  • WebM support (IE with WebM installed, Firefox 4.0 and later, Chrome, Opera).

To cover all scenarios you need to encode to 3 file formats plus Flash video. So you could be using up to 4 times as much storage space as you need. Hard disk space in hosted environments is quite expensive compared to a home computer, and popping in a terabyte of extra storage isn't cheap. So what is one to do?

Take the ideological approach

The h264 codec is patented, and of course this caused quite a stir. Google then released WebM, and every new major browser supports it except IE9, which supports h264. Most people who use alternatives to IE and Safari keep their browsers up to date, so they are likely to have WebM support. Now that YouTube is supporting WebM, I suspect many will forget about Theora entirely.

You could support only WebM, as it is the open-source codec, and not support IE at all. I don't recommend this approach; it sends a message to your visitors that you care more about their browser choice than about their experience on your site.

This is okay if it is just going to be you and your circle of friends using the site, or if you are trying to make some sort of ideological point.

Just use Flash

The overwhelming majority of people browsing the web have the Flash plugin installed. You could just opt to use a Flash video player on your site. However, you will be alienating iOS users and people who don't have the plugin installed. If only a small number of your users can't or won't install the plugin, this is a good option.

Take the third party approach

You could host your video on YouTube, Vimeo or another third-party video host and use an embedding script. This is a good option for a personal website or smaller commercial websites.

However, this may not be an option for a larger website, because under the terms of service you essentially give up some of your rights over the content; to some organisations this may not be acceptable.

For example, from section 6C of YouTube's Terms of Service:

For clarity, you retain all of your ownership rights in your Content. However, by submitting Content to YouTube, you hereby grant YouTube a worldwide, non-exclusive, royalty-free, sublicenseable and transferable license to use, reproduce, distribute, prepare derivative works of, display, and perform the Content in connection with the Service and YouTube’s (and its successors’ and affiliates’) business, including without limitation for promoting and redistributing part or all of the Service (and derivative works thereof) in any media formats and through any media channels. You also hereby grant each user of the Service a non-exclusive license to access your Content through the Service, and to use, reproduce, distribute, display and perform such Content as permitted through the functionality of the Service and under these Terms of Service. The above licenses granted by you in video Content you submit to the Service terminate within a commercially reasonable time after you remove or delete your videos from the Service. You understand and agree, however, that YouTube may retain, but not display, distribute, or perform, server copies of your videos that have been removed or deleted. The above licenses granted by you in user comments you submit are perpetual and irrevocable.

Quite a mouthful; however, YouTube is basically saying that once you submit Content to them, it is as much YouTube's as it is yours, for as long as they are using it on YouTube. Oh, and by the way, any YouTube user can view the content, and they can keep it on their servers forever. This also covers things such as comments, as well as the video itself.

It’s ultimately up to you or your organisation whether you are prepared to give up some rights to your content in exchange for possibly extra exposure and YouTube doing all the hard work.

Also be prepared for the embedded player to most likely be Flash video.

Look at your user statistics

When I was working in my previous job, most of our users were on IE7 or 8, and Firefox on Windows. There was a small number of Chrome and Safari users, users of alternative operating systems such as Linux distributions were almost non-existent, and almost all our mobile traffic came from iPhones.

The browsers that 99% of our visitors used supported Flash, with the exception of iOS devices. I expect this is a common case; however, it is best to check your own user statistics. The decision at my previous place of work was quite simple: serve h264 and Flash video, because it saves on storage.

The ideal

The ideal scenario is that you have lots of server storage and can afford to encode each video many times over. However, I wouldn't bother encoding to Theora, for the reasons mentioned above.

Server vs Client side detection

For the vast majority of cases, client-side detection is going to be fine. You can quite easily roll your own; however, JavaScript libraries such as Modernizr are excellent and do all of the hard work for you.

You can use device feature-detection libraries such as WURFL, which keep a database of device capabilities looked up from the browser's user agent. However, for detecting video support this is overkill; honestly, if the phone can't manage some basic JavaScript, then it probably can't play video anyway. Tools such as WURFL are best used as part of a progressive enhancement strategy.
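Rolling your own detection is only a few lines, using the standard canPlayType method on a media element. A sketch; the codec strings are the commonly quoted examples, and the fallback comments stand in for whatever player logic you use:

```html
<script>
  // Create a detached <video> element purely to probe support.
  var probe = document.createElement('video');

  // canPlayType returns "", "maybe" or "probably".
  if (probe.canPlayType) {
    if (probe.canPlayType('video/webm; codecs="vp8, vorbis"')) {
      // serve the WebM file
    } else if (probe.canPlayType('video/mp4; codecs="avc1.42E01E"')) {
      // serve the h264 file
    }
  } else {
    // no <video> element at all: fall back to the Flash player
  }
</script>
```

This is essentially what Modernizr does for you, with the edge cases already handled.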


I think that Flash video will be around for quite a while yet, and it is a good solution that works most of the time. Think about your users and choose wisely how you serve video to your viewers. Any decent web statistics package will be able to break down who your users are and what they are doing.