Why redirecting to a mobile website is bad

Nowadays when browsing the web with a mobile device, you often get a choice to use the mobile version of a website. When you say yes, you get redirected to this mobile version. Sometimes you are even redirected automatically based on your browser's user agent string. Usually the mobile version has a different origin (most of the time just a different hostname), but sometimes the URL is modified in some other way to indicate that you are using the mobile version.

Often there is nothing wrong with this, but things get annoying when you use something like Firefox's or Chrome's sync service, which synchronizes the state of your browser across all your devices. Now think what happens when you use your mobile device and then want to continue on your desktop machine. Everything gets synchronized nicely, but then comes the annoying part: because you started browsing on the mobile device, you get the mobile version even after switching to desktop. Hardly ever is this mobile version optimized for desktop use. To make things even worse, it is quite common that the mobile version has a completely different structure that does not map to the non-mobile version. In such cases, switching between the mobile and desktop versions might not even be straightforward.

For the past few years there has been a trend in web design called responsive design. The idea is that you design your web site so that the layout and behaviour adapt dynamically depending on certain rules, instead of building separate versions of the site for different types of clients. To simplify things a bit, the idea is to have a single index.html that works for all kinds of clients and dynamically adjusts the layout to the most optimal layout profile for each client. When you build your web site or app properly, you can easily support more than just mobile and desktop layouts: one for small-screen phones, one for larger-screen phones and small-screen tablets, a third for large-screen tablets, a fourth for desktops, and one more for large screens viewed from a distance, like TVs or billboards.

If your work involves creating web sites or apps, go and find out more about responsive design and make your product properly responsive. Make sure your web site's structure works well with different layouts. And keep in mind that regardless of the client type, the URL should always stay the same, identifying the page correctly no matter which layout is used. This way you make sure that users get the best possible experience and can switch between devices.

To test your site's or app's design, you can use something like Firefox's "Responsive Design View". As an example of the basics of responsive design, you can check my notes app on GitHub. It is not very pretty, but it shows how you can switch between handset, small tablet and desktop layouts using mainly plain CSS.


First mini-module based building blocks

To make mini-module more interesting, I have added the first two modules that help with writing apps and new modules, and simplify sharing code between node.js and browsers. These modules are mini-promise and events. In addition, I have improved the documentation and added somewhat naïve test code for mini-module and these new modules. You can check the code in test/tester.js for hints on how to use these modules. The modules have fairly accurate JSDoc documentation, so that might also help.


As the name hints, mini-promise is a promise/future implementation built as a mini-module. There are all kinds of promise/future implementations available, but I have been using this one in my own projects for some time now and finally decided to convert it into a mini-module and publish the code. When I started to write it, I was not aware of DOM Futures, so the API does not match that one. Instead, I was inspired by some other promise implementation, but I cannot remember which one. In the end, I ended up writing my own, because the promise implementation I was trying to use did not handle synchronous payloads and I needed one that does. Another reason to write my own implementation was that I wanted to understand more about promises and futures, and I think writing your own implementation is one of the best ways to learn.

This module handles asynchronous and synchronous payloads. You can also force it to run asynchronously if a synchronous payload causes problems. It also caches results, so that as long as you have a handle to the promise, your callback will be called even when the promise finished before you registered your callback. You can find the code on GitHub.


As you might guess, this module just brings the node.js events module to the browser side so that you can use EventEmitter easily. By default all events are emitted asynchronously, but if you wish you can force EventEmitter to call listeners synchronously. The code is also available on GitHub.


Sharing modules between Node.js and browser, my way

I have been playing with JavaScript for a while now, using node.js and browsers to run my code. I have become a bit frustrated that I need to write my components either for node.js or for the browser. It is quite annoying that the same code does not run easily in both environments and that a common pattern for modularization is not available out of the box.

Node.js has a quite nice module implementation where components can require other components, and each module defines its API by exporting its public parts. Of course, this module mechanism is not available when running inside browsers. Instead, in browser-land there is a fairly common pattern of wrapping your module in a self-executing anonymous function that encapsulates the private parts and returns the module API, which is assigned to a global variable defining your namespace. If you wish for something more sophisticated, there are many other approaches to providing modules in browsers.

I started to check a few different module libraries, but for one reason or another I did not like any of them. Sometimes the usage did not match node.js. Sometimes you had to use extra components in node.js to share the code. Sometimes defining modules was a bit more complicated than I hoped. And most of the time I did not like them because they were not invented by me ;)

To be honest, the real reason to write my own module implementation was to understand the problem. At the same time, I wanted to find a nice pattern for writing components so that they work in both node.js and browsers.

The code related to this article that implements my module library is freely available on GitHub.

Pattern for writing modules

To use my module implementation, you need to follow a fairly simple pattern of encapsulation. This prevents polluting the global namespace in browsers. You need to wrap your component inside a self-executing function that encapsulates it:

(function(exports) {
    var myLocal = "this value is visible inside the module";
    function callMe() {
        return myLocal;
    }

    exports.func1 = callMe;
})(typeof exports !== "undefined" ? exports : this.myModule = {});

To explain a bit, the syntax (function(exports) { ... })(...); creates a closure that is executed automatically when your JavaScript file is evaluated. In this pattern, we pass the exports object to the module as a parameter, and if exports is not available we pass this.myModule instead, where this is most probably the window object and myModule becomes a global object.

Often in node.js you want to export everything as a single object. In this case you need to use the module object instead of exports and set the module's exports property. See the example below:

(function(module) {
    var myLocal = "this value is visible inside the module";
    function callMe() {
        return myLocal;
    }

    module.exports = {
        func1: callMe,
        version: "1.0"
    };
})(typeof module !== "undefined" ? module : null);

In this case, if module is not available, we pass null, which will eventually cause the code to throw an error. In the end, that is only fair, because the writer of the component expects module to exist and mini-module.js to be used.

Using APIs from other modules

As in node.js, you can use the require(…) function to load other modules that your module needs. Keep in mind that you must not call require outside your module's encapsulating function, or you will pollute the global scope. You can use either relative paths or module names to require other modules:

(function(module) {
    var otherModule1 = require("../../otherModules/otherModule1.js");
    var otherModule2 = require("otherModule2");

    var myLocal = "this value is visible inside the module";
    function callMe() {
        return myLocal;
    }

    module.exports = {
        func1: callMe,
        version: "1.0"
    };
})(typeof module !== "undefined" ? module : null);

Using relative paths should be fairly clear, but using module names requires a bit more explanation. In node.js, when you use npm to install modules, you can later refer to them by name and you do not need to know the actual location of the module. Node.js uses certain module load paths to find modules installed under node_modules folders, and these modules can be referred to just by module name. This makes it somewhat challenging to replicate the behaviour in a browser.

Using modules in browsers

Now we have a nice encapsulation pattern for creating modules, a way to define a public API and a way to require other modules. Next we need a way to include our components in a web application so that things still continue to work with node.js. What I do not like so much in other module implementations is the way they load scripts dynamically at execution time by injecting script elements, or sometimes by doing something as dangerous as loading a script file with an AJAX request and evaluating the content. There are definite pros in dynamic script element injection, but it requires a separate JavaScript blob to be included that defines the modules you want to load and their dependencies. This approach allows calculating the module load order so that modules are loaded in the right order based on their dependencies, but on the other hand I have not seen an implementation that works with node.js out of the box.

I prefer the approach where all scripts are defined in the HTML as static script elements, so that it is easy for the developer to see what gets loaded and in which order. The main challenge is to create an export mechanism that maps script elements to JavaScript modules. Luckily, modern browsers allow this through document.currentScript, which returns the script element matching the currently running JavaScript file. Use of document.currentScript is abstracted into the internals of mini-module.js, so that when you use module or exports this mapping is created, and a possible module name mapping is also created based on the data-module-name attribute of the script element. There is also a fallback mechanism included for browsers that do not support document.currentScript. The fallback mechanism is tested only with Chrome and Firefox, but it should be fairly easy to add support for other browsers too.

To use mini-module.js, you need to add a script element to your HTML file that loads mini-module.js, and it needs to be located before any other script elements that use module, exports or require. This script element must not be loaded asynchronously. When loading your own modules, you need to be careful with the order of the script elements: if you have inter-module dependencies, you need to make sure everything is loaded in the right order. Also be careful with asynchronous loading (the async attribute of the script element). If you load your modules asynchronously, there is no guarantee of the order in which your JavaScript is executed, and if your module requires another module, there is no guarantee that the required module is available. The easiest way is to avoid the async attribute, or at least not to use it with modules that are required by other modules.
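For example, the script elements could be ordered like this (the file names are made up; otherModule2 gets its module name from the data-module-name attribute mentioned above):

```html
<!-- mini-module.js first, never async -->
<script src="mini-module.js"></script>
<!-- dependencies before the modules that require them -->
<script src="otherModule2.js" data-module-name="otherModule2"></script>
<script src="myModule.js"></script>
```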

The purpose of this module concept is to be as simple as possible and to provide a development experience that allows writing modules without installing extra modules in node.js. It does not provide fancy features like dependency calculation or dynamically loaded dependency chains. Also, minifying all JavaScript into a single file does not work with mini-module.js; if you want to minify your JavaScript, I suggest minifying each file separately.

The code is available on GitHub. Under the test folder you can find an example index.html loading a few modules that are also loaded by a node.js script. Feel free to fork the code and hack as much as you like.


Web App Example Using IndexedDB

This article explains how to build a note-taking application with Web technologies that works completely as a standalone application, without needing an active internet connection after the initial load. It uses IndexedDB for saving notes and utilizes AppCache to make sure that all resources are available for offline use. In short, it behaves like any desktop application, but does not require installation. To show some less trivial uses, the application includes features like saving documents while typing, using the Memento design pattern to separate the serialized state (the state that is stored in IndexedDB), data-to-HTML bindings with Knockout.js, and live updates from the database to HTML, also using Knockout.js. All the code for this example application can be found on GitHub.

What is IndexedDB

IndexedDB is a fairly new technology in the HTML5 scene that allows applications to store data locally. It is a NoSQL database engine built right into your browser. Currently Firefox, Chrome and IE10 support IndexedDB. For browsers that support only WebSQL, like Safari, there is a shim implementation. When I started to write my example application for this article, Chrome implemented a slightly older version of the IndexedDB specification, but this seems to be fixed in Chrome version 24. I have tested the example with Chrome and Firefox.

The basic idea of IndexedDB is that you can store JavaScript objects in the database and access them quickly. Everything happens inside your browser, without any need for an active internet connection. If you are familiar with some NoSQL database, you should have no problems getting started with IndexedDB. If your background is in SQL databases, you probably need to adjust your way of thinking a little when it comes to structuring your data and finding and accessing objects in the database. NoSQL does not mean that you do not need to think about the structure of your data; it just allows more flexibility.

Coming from the SQL world, IndexedDB can feel a bit strange in the beginning. You do not need to define the complete structure of your data, and there is no query language. Instead, you just define key-value pairs where the key is one of the properties of your object and the value is the actual object. Additionally, you can define indices, which can be seen as alternative key-value pairs that provide a mapping from some other properties to your objects. When you get data from the database, you always get the complete object or objects; there is no support for views, filtering or combining data. The important thing to understand is that accessing objects is always a really fast operation, as long as you do not store big blobs of data in your objects.

Using IndexedDB

This part goes through the basic use of IndexedDB quite quickly; there are many much better articles about how to use IndexedDB. This article tries to focus more on where to use IndexedDB and what the challenges are. For more detail, you can always read the source code of the related example.

As mentioned earlier, IndexedDB is a key-value storage where objects have a predefined property that is used as the key identifying the object, or such a property can be generated automatically for you by a key generator when you save a new object. Stored objects are pure-data JavaScript objects; functions are not allowed, or at least not stored. To access these objects, you either get them by searching with the key, or you build indices, which are alternative property combinations for finding your objects.

The database itself is divided into object stores, where each object store is expected to hold a certain type of objects. Indices are defined per object store, each object store belongs to a database, and each database belongs to an origin; more about origins is discussed in the same-origin policy part. An object store also defines constraints on the data by expecting to find the properties that are used as the key or in indices. Other structural definitions or constraints cannot be made.

One really nice feature of IndexedDB is that it has transactions. In IndexedDB's case you can have multiple readonly transactions at any given time, but when an object store is used in a readwrite transaction, all other transactions on that store need to wait. This should be quite basic, but it is a good thing to keep in mind when writing your transactions. For example, when saving data in the example application, you need to remember that all other operations on that object store are blocked. That is also why my example application does not call IndexedDB's put operation every time you type something, even though it has save-while-typing behaviour; instead it marks documents as modified or dirty.

Creating Database and Handling Changes in Its Structure

To create a database you just open it; if the database does not exist, it will be created. Creation happens inside the upgradeneeded event handler. This same event is emitted when the database version has changed and you need to upgrade to a new version of the structure.
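The open-and-upgrade flow can be sketched as follows. This is a simplified stand-in for what App.init() and NotesProvider.initStorage() do, not the application's exact code; the key settings ("id" key, autoIncrement) are assumptions. indexedDB is passed in as a parameter so the helper can also be exercised outside a browser:

```javascript
// Sketch: open the "Notes" database and create the schema on demand.
function openNotesDatabase(indexedDB, onReady) {
    var request = indexedDB.open("Notes", 2); // request schema version 2

    request.addEventListener("upgradeneeded", function(event) {
        // Fires on first creation (oldVersion 0) and on every version upgrade.
        var db = event.target.result;
        if (event.oldVersion < 1) {
            // Hypothetical key settings: auto-generated "id" key, "modified" index.
            var store = db.createObjectStore("notes", { keyPath: "id", autoIncrement: true });
            store.createIndex("modified", "modified");
        }
        // ...upgrade steps from version 1 to 2 would go here...
    }, false);

    request.addEventListener("success", function(event) {
        onReady(event.target.result); // database is open and up to date
    }, false);
}
```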

The example application creates a database called "Notes" that has a single object store, "notes", for storing Note objects. The code for initializing and opening the database is divided between main.js and notes.js, so that App (a type defined in main.js) takes care of opening the database and calls NotesProvider (a type defined in notes.js) to initialize the object store "notes" for Note objects. In case of an upgradeneeded event, App calls NotesProvider.initStorage() to create the object store and update its indices. Database initialization and opening is divided in two so that you could easily add new components that handle initializing and upgrading their own object stores.

App.init() takes care of opening the database, and if the database version has changed, it requests NotesProvider to update the object store. The actual object store is created and updated in NotesProvider.initStorage(). This method handles the initial creation (you can think of it as an upgrade from version 0 to 1) and the upgrade from version 1 to 2. Later, in NotesProvider.init(), the application expects the database to be in good order and gets all records to populate the list of notes on the HTML side, using the ViewModel that is used for the Knockout.js bindings.

Finding and Accessing Your Objects

There are two approaches to finding or querying objects in IndexedDB: get by key or key range, and find through indices. The initial step for any operation is to start a transaction that defines which object stores the transaction uses and which mode to use: readonly or readwrite. By default the mode is readonly.

For example, to fetch all notes when initializing NotesProvider, I create a readonly transaction that uses the object store "notes", and I use the index for the property called modified. I want to use the modified property because I want to populate the list of notes in order of modification, so that the last modified note is top-most on the HTML side. See Notes.init() for the complete code; below is only the part of the init function that shows how to get all notes:

var store;
var index;
var cursorRequest;

store = self.app.db.transaction(["notes"]).objectStore("notes");
index = store.index("modified");

cursorRequest = index.openCursor();

cursorRequest.addEventListener("success", function(e) {
    console.log("NotesProvider.init, cursorRequest success");
    var result = e.target.result;
    if (!!result == false) {
        return; // no more results, iteration is done
    }
    console.log(JSON.stringify(result.value, null, '\t'));
    self.app.viewModel.addNote(new Note(self, result.value));
    result.continue(); // move the cursor to the next object
}, false);

When using a cursor like above, event.target.result is the cursor itself: calling continue() moves it forward to the next object, and to iterate backwards you open the cursor with the direction "prev". By reading result.value, you can access the object the cursor is currently pointing to.

Add, Modify and Delete Objects

To modify data, you need to start a readwrite transaction. This works like getting data, but this time you define the mode by passing "readwrite" as the second parameter when creating your transaction.

To add new documents, see NotesProvider.addNote() in notes.js. Basically I start a transaction and call add to save the new note into the object store:

var store;
var request;
var transaction = this.app.db.transaction(["notes"], "readwrite");

store = transaction.objectStore("notes");

request = store.add(note._memento);

request.addEventListener("success", function(event) {
    console.log("NotesProvider.addNote, succeed.");
    console.log("Add new note to top of the list and make it current.");
    self.app.viewModel.notes.splice(0, 0, note);
}, false);

To modify documents, again you need to start with a readwrite transaction. Inside the transaction you can either save the object using the object store's put method, or call update when accessing the object through a cursor. In the example application I use only the put method. If you need to update multiple objects while iterating through them with a cursor, you may want to use the update method. Another solution is to find all objects inside a readonly transaction and then modify them separately inside a readwrite transaction; this approach has the advantage that iterating through all items does not block other read operations.

Below is a snippet from the example application where I save a note. There should be nothing too special in it apart from the use of the properties _saving and _dirty. These are needed because the example saves notes automatically when they are modified, and that could generate a lot of sequential readwrite transactions. To prevent database operations from starving the system, there needs to be some logic for deciding when to call the put method to save a note. In the example I make sure that only one transaction for saving a note is happening at a time and no other save operations are queued. This is achieved so that if we are in the middle of a save transaction, new save calls are skipped and we just set the _dirty flag to indicate that there are unsaved changes. Later, when the transaction completes, we check whether the note was modified during the save operation, and if it was, we launch the save operation again. In theory this example has a possibility of losing changes when multiple notes are saved at the same time, but since the save operation requires user interaction and the user can interact with a single note at a time, in practice this is not going to happen. The only possible situation might be when the user modifies a document and then really quickly clicks to create a new note, but this is only possible in the desktop or tablet layouts, where machines are most probably fast enough to render this case impossible.

NotesProvider.prototype.saveNote = function(note) {
    var self = this;
    var store;
    var request;
    var transaction;

    note._saving = true;

    transaction = this.app.db.transaction(["notes"], "readwrite");
    transaction.addEventListener("complete", function(event) {
        console.log("NotesProvider.saveNote, transaction completed.");
        note._saving = false;
        if (note._dirty) {
            console.log("NotesProvider.saveNote, re-save since note was updated during the operation.");
            self.saveNote(note);
        }
    }, false);
    transaction.addEventListener("error", function(event) {
        console.log("NotesProvider.saveNote, error: ", event);
        alert("Failed to save note.");
    }, false);

    store = transaction.objectStore("notes");

    note._dirty = false;

    request = store.put(note._memento);

    request.addEventListener("success", function(event) {
        console.log("NotesProvider.saveNote, saved.");
    }, false);
};

Deleting objects happens by calling the object store's delete method inside a readwrite transaction. As a parameter, you need to pass the value of the key that identifies your object. Another option is to use a cursor and delete the object it is pointing at. My example application does not use delete, but below is an example using a cursor.

store = self.app.db.transaction(["notes"], "readwrite").objectStore("notes");
index = store.index("modified");
var cursorRequest = index.openCursor();
cursorRequest.addEventListener("success", function(event) {
    var result = event.target.result;
    if (!!result == false) {
        return; // no more results
    }
    result.delete(); // delete the object the cursor is pointing at
    result.continue();
}, false);

Making the Application Work Offline

Using IndexedDB does not make much sense unless you make sure that your application can be used completely offline. To allow your application to work in offline mode, you need to create an AppCache manifest that defines which resources are needed for offline use.

The location of the AppCache manifest is defined in your HTML file as an attribute of the html tag. In my example application it is in index.html: <html manifest="notes.appcache">.

Since my application is fairly simple, the actual content of the manifest just lists all the files that are part of the application:
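A minimal manifest along those lines could look like this (the file names here are illustrative, not the application's exact list):

```
CACHE MANIFEST
# bump this comment to force a cache refresh: v1

index.html
main.js
notes.js
main.css
```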



To react to cache changes, this application just notifies the user, and when the user clicks OK, it calls window.location.reload() to take the new version into use. This is done in main.js:

window.applicationCache.addEventListener('updateready', function(e) {
    console.log("Appcache update ready.");
    if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
        // Browser downloaded a new app cache.
        // Swap it in and reload the page to get the new hotness.
        if (confirm('A new version of this site is available. Load it?')) {
            window.location.reload();
        }
    } else {
        // Manifest didn't change. Nothing new available.
    }
}, false);

There are some caveats with AppCache. One that you will learn sooner or later: if you modify files without changing the content of the manifest file, your application will not get refreshed. There are multiple ways of handling this problem, but the issue is good to keep in mind, and you probably want to disable caching while developing the application. Dive Into HTML5 has a really good article about AppCache that you probably want to read: Dive Into HTML5, offline.

Same-Origin Policy and Security

How about security? Web browsers have a simplified security model called the same-origin policy. Slightly simplifying, this means that all resources an application or web page stores can be accessed only by other applications or pages that share the same origin. The origin itself is defined by scheme, address and port. In other words, if you have http://example.com/app1 and http://example.com/app2, these applications have the same origin and both can access all resources saved by the other. If you have http://example.com, https://example.com and http://example.com:8080, all three have different origins and cannot see each other's resources, even if the application is actually the same one. All applications and pages with the same origin can access each other's resources, and there is no way to restrict this; vice versa, if your origin is different, there is no way to allow applications or web pages from other origins to access your data. At least not without additional help from something like CORS or some other method, but those are out of the scope of this article.

When it comes to IndexedDB, for good and bad, the same-origin policy simplifies how you think about security: there is only a single way to protect your data from other applications and sites. If this is not enough, you probably do not want to store your data on the client side. On the other hand, this simplicity might bring some annoying constraints. Sometimes it would be nice to expose part of the data to applications from another origin, but this is not possible. Think of an example where you have contacts and email applications and you want to pick contacts as recipients of an email. This is not possible directly by using the contacts database from the email application, but by using web intents, postMessage or web widgets you could work around the issue. Especially with web intents and postMessage, you could pop up a window that loads the contacts application, where you pick the contacts and then return the data to the email application. As an alternative approach, web widgets could allow you to embed a contacts picker widget that actually runs inside another origin and passes the selected contacts to the email application. I have not tried this approach and I am just guessing that it might be possible. I might try it and write another article on this topic in the future.

Where to Use IndexedDB

When and how should you use IndexedDB? When writing web applications, you have a few possibilities for storing your data: use server-side storage or a database, use the File API and write your data to the filesystem, use WebStorage, or use IndexedDB. If you aim for a web application that works in offline mode, then the first one is obviously out of scope. The second is not necessarily optimal and might require quite a lot of logic. So in the end, only the last two are truly viable choices.

WebStorage has been around quite some time, and it is a very simple key-value storage where you can list your keys and get and save objects identified by a key. It is very simple storage, but often it is good enough for simple use cases. For example, it would be good enough for my notes application.
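For comparison, the core of such a notes use case on top of WebStorage could look roughly like this (localStorage is a browser API, so the sketch falls back to a tiny in-memory stand-in to stay runnable elsewhere; the key naming is made up):

```javascript
// Use real localStorage in a browser, or a minimal in-memory stand-in otherwise.
var storage = typeof localStorage !== "undefined" ? localStorage : (function() {
    var data = {};
    return {
        setItem: function(key, value) { data[key] = String(value); },
        getItem: function(key) { return key in data ? data[key] : null; }
    };
})();

// WebStorage stores only strings, so objects are serialized with JSON.
function saveNote(id, note) {
    storage.setItem("note:" + id, JSON.stringify(note));
}

function loadNote(id) {
    var raw = storage.getItem("note:" + id);
    return raw === null ? null : JSON.parse(raw);
}

saveNote(1, { title: "Groceries", modified: 1357600000000 });
console.log(loadNote(1).title); // "Groceries"
```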

IndexedDB requires a bit more learning, since you need to get a bit deeper into the NoSQL database world and you need to start using transactions. On the other hand, it is more capable and gives you some additional speed when searching a larger dataset. Another gain over WebStorage is that you have a way to divide your data into silos by saving it into type-specific object stores.

When you think about desktop applications that store data on your machine and do not synchronize it with external services or between multiple devices, IndexedDB has all the capabilities to match such needs. Additionally, since you can write your application in HTML, CSS and JavaScript, development is fairly fast and easy, at least for simple applications. Distribution and updating are also really easy, since all you need to run the application is a modern web browser. When correctly combined with AppCache, the application can be used in offline mode, and when connected again, the browser checks for the availability of a new version. What would be an easier distribution mechanism than this?

Unfortunately, the world is moving towards having data synchronized between all your devices and allowing the cloud to keep a replica of the data. This is where the shortcomings of IndexedDB start to bite.

Shortcomings of IndexedDB

IndexedDB is a fairly new technology, and HTML 5.0 is the first version of the standard where it is included. Being a new technology, it has some shortcomings that limit its usefulness. It is natural that these kinds of shortcomings exist with new technologies, and I hope they will go away in the future as the technology matures and users point out what kind of new features are needed.

Searching by Using Partial Match

Coming from the SQL world, one thing that you definitely miss is search by partial match of a text field. In SQL you can use LIKE in a WHERE clause with % to match strings starting with (LIKE "<search term>%"), including (LIKE "%<search term>%") or ending with (LIKE "%<search term>") the search term.

If you are familiar with the NoSQL world, it should not be too surprising that IndexedDB lacks this kind of capability. NoSQL databases are built around fetching objects and building views or indices that point to them. Keys and indices are often stored in a tree-like data structure, which makes partial matching of field values very hard, while traversing every item in the database and doing partial matches on certain fields can be quite expensive. Playing nicely with this limitation requires some changes in how you think about and design your application and your database. If you need this capability, there are ways to achieve, at least partially, the same features LIKE would give you in SQL.

Of course, one possible solution is to walk through all your items and match the field you want against the search term. But this is quite inefficient and, especially on mobile devices, can take a while and eat your battery.
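To make that concrete, here is a minimal sketch of such a full scan using an IndexedDB cursor. The "notes" store name and its text field are assumptions for illustration; note that every record is visited and the match is done in JavaScript, which is exactly why this gets expensive on larger datasets.

```javascript
// Full scan: open a cursor over the whole "notes" store and collect
// every record whose text field contains the search term.
function searchNotes(db, term, callback) {
  const results = [];
  const tx = db.transaction("notes", "readonly");
  tx.objectStore("notes").openCursor().onsuccess = function (event) {
    const cursor = event.target.result;
    if (cursor) {
      if (cursor.value.text && cursor.value.text.indexOf(term) !== -1) {
        results.push(cursor.value);
      }
      cursor.continue(); // visit the next record
    } else {
      callback(results); // cursor exhausted: every record was visited
    }
  };
}
```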

An alternative solution is to create an index on your text field; you can then use a key range to search by the start of the string. Often this is enough. Taking it further, you can build a dictionary of word-to-object mappings, using the words as keys. This way you can search by the beginning of any word and find the objects containing words that start with your search term, pretty much like SQL's LIKE "<search term>%" would give you. The idea is a key range search matching anything from your search term up to, but not including, the string you get by replacing the last character of the term with the character following it. As an example, to match all words starting with "rep" you would create a key range with lower bound "rep" and upper bound "req", with the upper bound excluded from the results. This gives you all items whose key starts with "rep", like "reptile" or "representative", but not those sorting at "req" or later, like "return". The only challenge with this approach is that you need to build the dictionary.
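As a sketch of that key range trick: the helper below computes the exclusive upper bound by replacing the last character of the search term with its successor, and the commented lines show how the range would drive a cursor in the browser (the store name is an assumption):

```javascript
// Compute the exclusive upper bound for a prefix search: "rep" -> "req".
function upperBoundFor(term) {
  const lastCode = term.charCodeAt(term.length - 1);
  return term.slice(0, -1) + String.fromCharCode(lastCode + 1);
}

// In the browser the range would then be used like this:
//   const range = IDBKeyRange.bound("rep", upperBoundFor("rep"),
//                                   false, true); // upper bound excluded
//   wordStore.openCursor(range).onsuccess = ...; // "rep", "reptile", ...
```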

To build this dictionary you need to split your text fields into separate words and then insert those into an object store that provides the needed index and mapping. In theory, a generic component doing this for you could be created quite easily. In practice it is a bit challenging, because there is no proper event mechanism for listening to database change notifications that such a component could hook into to harvest the words.
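A sketch of that splitting step might look like the following; the word pattern and the shape of the dictionary entries are assumptions of mine, not anything the IndexedDB specification defines:

```javascript
// Split a text field into lowercase words for the dictionary.
function tokenize(text) {
  return text
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter(function (word) { return word.length > 0; });
}

// Build word -> note mappings to insert into a "words" object store that
// has an index on "word", so key range prefix searches work against it.
function wordEntries(noteId, text) {
  return tokenize(text).map(function (word) {
    return { word: word, noteId: noteId };
  });
}
```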

Lack of Event Mechanism for Changes

IndexedDB provides no means to listen for changes in the database. This makes it very complicated to create applications that keep multiple open windows up to date while each of them is modifying the data, or to handle multiple clients that synchronize data through a server or in a peer-to-peer manner. Another challenge is creating generic components that manipulate or harvest information from records as they are created or modified.

Consider the first case, where two windows or tabs access the same data: one view shows a list of the items in your database and another has a single item open for editing. There is no way for the list view to get an event telling it that one of the items in the list was modified. To be honest, there are ways, but you need to use or create an event mechanism that crosses window boundaries, and implementing one is an annoying task.

An even more useful target for such an event mechanism would be a generic server-client data synchronization library that could easily be included in any application. Without this mechanism you can still create such a library, but you need a lot of plumbing to notify about changes on the client side or to propagate changes from the server to connected clients. As one use case, think of a CouchDB connector: how easy it would be to create a generic connector that reads your object stores, follows changes in them and synchronizes everything with CouchDB.

A third use case is generic data workers. It would be useful to create generic workers that listen for changes and then do something with new and modified objects. As an example, think about the example application, where you would build a dictionary of all the words found in your notes and then search notes by the beginning of any word. Currently you need to hook into your own save/update functions and launch your indexer to harvest and update all the words found in your notes. Instead, it would be very handy to listen for change notifications and do the processing separately.

To solve this shortcoming, we need a way to emit events when objects are created, modified or deleted in an object store. Providing this should not require much, especially since the browser already has an event mechanism. Having it would allow generic libraries that hook into object stores and process items, keep data in sync between client and server, or synchronize views across multiple windows.
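Until the specification provides this, the workaround lives in application code. Here is a minimal sketch of such a change-notification layer: your own save/update/delete code calls emit() after each successful IndexedDB request, and listeners (an indexer, a sync component, another view) react. The event shape and all names here are hypothetical.

```javascript
// Minimal change emitter: applications call emit() after each successful
// write so that subscribed listeners hear about every change.
function ChangeEmitter() {
  this.listeners = [];
}
ChangeEmitter.prototype.subscribe = function (listener) {
  this.listeners.push(listener);
};
ChangeEmitter.prototype.emit = function (type, store, value) {
  const change = { type: type, store: store, value: value };
  this.listeners.forEach(function (listener) { listener(change); });
};

// Usage sketch: notify listeners once a put() request succeeds.
//   store.put(note).onsuccess = function () {
//     changes.emit("update", "notes", note);
//   };
```

Note that this only covers changes made through your own code in the same window; crossing window boundaries still needs extra plumbing, such as messaging between the windows.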

Lack of Globally/Universally Unique Identifiers

It would be very nice if you could use UUIDs as IndexedDB identifiers, since this would make it easier to synchronize your data between server and client. Currently keys are just numbers that are incremented automatically per object store; if you need UUIDs, you have to insert them into your objects yourself and define that field as the key path when creating your object store. The problem is that this requires including yet another library, or writing your own, to generate UUIDs. Also, if you follow the UUID version 4 specification you need a proper random number generator, and as far as I know JavaScript's Math.random() has some issues which may lead to UUID collisions.

Having UUIDs or similar unique identifiers as object identifiers is pretty common in the NoSQL world, and it makes me wonder why IndexedDB does not have this in the specification. Instead it uses the approach more common in the SQL world: an auto-incrementing key field.

Conclusions

IndexedDB is very nice and easy to get started with, and I am really happy to see it included in the HTML5 standard. Having this kind of real database engine as part of the browser lets you do more and more with HTML applications and reduces the need for native applications.

When comparing IndexedDB to some other NoSQL databases, I am glad to see that it is fairly easy to use. The inclusion of transactions also makes me happy; I have missed those when using CouchDB.

The only major problem is the lack of an event mechanism that notifies about changes in the database. I really cannot understand how this kind of feature was left out of the specification when there are so many obvious use cases that absolutely scream for it.

Accepting the shortcomings, and admitting that this really is the first version of the specification, I have to say IndexedDB is fairly good. If you think of it as a Web replacement for an embedded database, used by a single application process at a time and without any synchronization support, then IndexedDB actually matches the needs of Web applications. If you would like to use it as a client-side cache for a big server-side database, then IndexedDB will give you some headaches.

Luckily the shortcomings are not insurmountable obstacles, and there are projects like PouchDB, which matches the CouchDB API and is built on IndexedDB. The nice thing about PouchDB is that it lets you stay in sync with a remote CouchDB. The only thing I would wish from them is an API that matches IndexedDB instead of CouchDB; that would make it even more interesting, if you ask me.

I hope you enjoyed reading this fairly long article and found it useful. If you have any questions or comments about the example code, do not hesitate to contact me.