Data center colocation benefits and drawbacks

Is it the right solution for you?

Data center colocation (or server colocation) is an interesting option for many organizations. In-house facilities can be expensive, and managed facilities may not provide enough flexibility. However, a few key points need to be considered before choosing data center colocation.

After working closely with a data center for an analytics project, we have listed a few benefits and drawbacks that organizations planning to set up their infrastructure at a colocation site should consider.


  • Scale of operation
  • Risk management
  • Choice of hardware/software

Scale of operation

Smaller organizations can benefit from data center colocation, especially when they do not have a lot of resources to invest in a data center setup. By colocating, these organizations can take advantage of better bandwidth, specialist staff, and (comparatively) lower cost.

Risk management

Colocation hosting providers also take care of redundant systems. This, in turn, allows them to offer 99.999% uptime. Many of them also offer 24/7 support that can help customers in case of issues. This lowers the risk of (unplanned) downtime and allows organizations to provide continuous service to their customers.
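As a quick sanity check on what a "99.999% uptime" guarantee actually promises, the figure translates into only minutes of downtime per year. The short Python calculation below is illustrative and not tied to any particular provider's SLA:

```python
# Annual downtime permitted by an uptime guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_downtime_minutes(uptime_fraction):
    """Minutes of downtime per year allowed at the given uptime level."""
    return MINUTES_PER_YEAR * (1 - uptime_fraction)

five_nines = max_downtime_minutes(0.99999)  # "five nines"
three_nines = max_downtime_minutes(0.999)
```

So "five nines" allows roughly 5.3 minutes of downtime per year, while a more common 99.9% guarantee allows nearly nine hours.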

Choice of hardware/software

Managed data center hosting has its benefits. However, many organizations are not comfortable with the hardware and software choices on offer. With data center colocation, customers can choose their own hardware and software; the colocation provider usually supplies only the physical space and network infrastructure.


  • Responsibility
  • Expense
  • Maintenance


Responsibility

Customers are responsible for their own hardware and deployment. When a problem arises during deployment or installation, the responsibility lies with the customer. This makes data center colocation a more demanding proposition than a managed data center.


Expense

Data center colocation may be a good choice for smaller organizations, but it can be expensive. Perhaps not as expensive as an in-house facility, but still more than some other available options (IaaS, third-party managed DC, shared hosting, etc.).


Maintenance

There can be difficulties in terms of maintenance. With data center colocation, the hardware is off premises, so for maintenance, engineers need to travel from the customer's location to the colocation site. The travel time involved should also be considered when choosing a data center.

Overall, data center colocation can be a sweet spot. Organizations with moderate resources can leverage its flexibility while keeping overall cost under control.

Real Time Data Analytics: The good, the bad and the hard place

Data analytics has probably been around since the invention of the abacus and has improved over the centuries. The planet's population and the complexity of our problems have increased exponentially since then. Solving a problem in batches and then deciding on corrective action is often no longer enough.

Enter real time analytics! Its main goal has been to solve problems quickly as they happen, or even better, before they happen. Today we all want to use real time data analytics to foresee and solve problems for our business. But how exactly is the industry faring?

The Good

Real time data analytics has come a long way in the past decade, especially with the advent of big data and the general industry inclination toward data-driven decision-making. While there has been an ongoing debate about instinct vs. insight in business, the importance of data has risen exponentially with the introduction of big data.

The best part is that technologies and tools are easily available today; real time data analytics capability for any business is right at our fingertips. From open source solutions, which are absolutely free (a Kafka-Druid combination, for example), to enterprise-grade solutions (New Relic, Splunk), businesses with the patience and/or the funds to afford them are not short of choices.

The Bad

Real time analytics is yet to be utilized to its full potential. We often see businesses using real time analytics tools merely to monitor a single data source, such as a data center or a website. While the idea of real time analytics fascinates business decision makers, the technical team of an organization often does not align with the business teams to fulfil the promise of real time data.

Often, the challenge comes from the technology stack (open source components for building real time analytics tools are often not convenient for rapid development) or from the tools themselves (enterprise-grade tools are priced out of reach for small and medium businesses).

The Hard Place

Stuck between a rock and a hard place. That is how small and medium businesses often end up, working with tools with limited features. Eventually they lose to bigger organizations in terms of agility.

Building a custom open-source-based analytics tool requires money, time, effort, and know-how. On the other hand, the pricing of tools like Splunk (lowest tier around $900/GB) or New Relic (separate pricing for data and hosting) often puts them out of these organizations' reach.


The good news is that several companies have recently been trying to bring affordable solutions to this space. The likes of IQLECT (approx. $0.93/GB), DataDog ($15/host/month) and Scout ($10/host/month) are making a decent effort to provide alternatives for small and medium businesses.

Picture credit: Freepik

Real time analytics in eCommerce

Traditional retailers and store owners have always grappled with the difficulties of customer retention and loyalty, and have been among the foremost adopters of methods that help them understand their customers better. After working three years at one of the world's largest retailers (where we called our customers 'guests'), I have seen large teams focused on analyzing every single data point that affects the customer experience. Every decision on store planning, product merchandizing, logistics, etc. was taken on the basis of how it would affect customer experience. The physical nature of brick-and-mortar stores, however, limited the kind of insights we could generate at that point.

It all changed with the arrival of the internet and eCommerce. With the capability to collect data and analyze customer behavior, buying patterns, eCommerce store monitoring, and logistics in a continuous manner, marketers have moved very close to the 'Universal Customer Profile' – the Holy Grail of marketing. This has been possible due to massive advancements in data technologies and computation capabilities. Does this mean that every problem has been solved? Not really; there are still plenty of challenges to overcome.

Challenges in eCommerce

Consider a very frequent challenge that eCommerce companies face: how to engage better with a prospective customer who has a clear intent to purchase but leaves the online store? This could happen for several reasons, ranging from loss of interest, to the availability of additional information (a competitor's website offering better value), to performance issues with the infrastructure (website or server performance).

By the time today's business managers learn that a customer is losing interest, analyze the root cause, and take corrective action, it is likely that the customer has already been lost. A brick-and-mortar store owner could possibly answer the customer's queries, engage the customer in conversation, and avoid losing the customer to competitors (who are typically a few hours away). But with eCommerce, it takes only a few seconds for a customer to move to a competitor.

How do we solve this problem?

The time is now

The short story ‘Three Questions’ by Leo Tolstoy comes to mind when tackling this challenge. The first question in the story was “What is the right time to begin/act?”. As the hermit indicated at the end of the story – “Now”.

If eCommerce businesses draw inspiration from this story, they are possibly headed toward a satisfactory result. Broadly speaking, one would require two main components to actually implement that:
1. Real-time knowledge: knowing that you have a visitor who is genuinely interested in purchasing a product at that very instant.
2. Real-time action: taking the right action to help the customer make the leap from interested visitor to buyer. More importantly, doing it while s/he is still engaged with the online store.

The Beginning…

Whether or not Tolstoy had foresight of eCommerce when he narrated his story a century ago, the principle of real-time absolutely holds true for it today. The technology to achieve this at massive scale was not widely available until a few years ago, but with plenty of options today, it is more a question of "How can we best use real-time data?".

This post (the first in a series of three) just touches upon the importance of real-time knowledge and action for the eCommerce industry. In the following posts, I will dive a little deeper into the possibilities open to an eCommerce company and compare the multiple options one has to achieve this.
Watch this space for more!

Pictures designed by Freepik

Why do we need real time data analytics: Analogy to project management lifecycle

With the advent of the Internet and ever-changing data, the landscape and value of data have changed to a far greater extent than 25 years ago. In today's world, the quicker a company can take action, the more impact it can have on the outcome. With stronger competition, every company in every industry needs to be more alert than ever before. The importance of data over its lifecycle has hence become a really important factor in decision-making. While the older method of accumulating data, analyzing it, and then taking a business decision (across a period of a few weeks) is still practiced, we need look no further than the software industry to see a change in practices that we should be adopting.

A decade ago, most IT companies preferred to follow the Waterfall model of project management. Today, however, most modern companies follow the Agile model, where continuous engagement is required. Agile methodology gives a team the ability to act quickly when any change is needed, and hence eventually makes the final product more stable and suitable. The continuous iteration process also squashes bugs that would otherwise plague a product developed through the Waterfall model. While both models have their pros and cons, the Agile model brings more dynamism to the whole approach.

Waterfall Model

Agile Model

The need of the hour can be explained by drawing an analogy to these project management methodologies. Just as the continuous process of product development is extremely important, it is important to provide key inputs for the requirements phase, and that is only possible if one gathers information from the market and other sources continuously. Hence the data we feed into our decision processes has a different value as time passes.

The value and importance of data along its lifetime is depicted in the chart below (credit: Sachin Sinha).
Data value vs Cost comparison

In brief, the data we gather keeps losing its value for decision-making as time passes. On the other hand, if one had to make a decision on stored data, more storage would be required with passing time. For a given business, assuming the daily volume of collected data remains constant, a team that analyzes its data every month will require more storage capacity than a team that analyzes its data every week.
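The storage point can be sketched with some toy arithmetic; the 5 GB/day ingest rate below is an assumed figure purely for illustration:

```python
# Peak storage that accumulates between two analysis runs, assuming a
# constant daily ingest and that data is cleared after each run.
def peak_storage_gb(daily_ingest_gb, analysis_interval_days):
    return daily_ingest_gb * analysis_interval_days

weekly = peak_storage_gb(5, 7)    # data held before a weekly run
monthly = peak_storage_gb(5, 30)  # data held before a monthly run
```

At the same ingest rate, the monthly team holds over four times as much data as the weekly team before it ever looks at it.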

As our software development methodologies have changed to accommodate the faster development requirements and quicker change management, our business models and modus operandi are also changing to accommodate faster decision-making and to stay competitive in the market. Any business should seriously consider analyzing their data in real time to gain insights that could give them an edge over their competitors.

“To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of” – Ronald Fisher

As we navigate the worlds of agile/scrum, unit economics and nanotechnologies, one thing becomes certain: continuous iteration and management at the unit level will be necessary to stay ahead of the curve in any industry in the near future. In terms of data, every second will count when we think of impactful data analytics.

BangDB vs LevelDB – Performance Comparison

This post is a performance comparison of BangDB and LevelDB. Following is a high-level overview of the two dbs.

LevelDB

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values. LevelDB is based on the LSM (Log-Structured Merge) tree and uses SSTables and a MemTable for the database implementation. It's written in C++ and available under the BSD license. LevelDB treats keys and values as arbitrary byte arrays and stores keys in ordered fashion. It uses Snappy compression for data compression. Writes and reads are concurrent, but write performs best with a single thread, whereas read scales with the number of cores.

BangDB

BangDB is a high-performance, multi-flavored, distributed, transactional NoSQL key-value store. It's written in C++ and available under the BSD license. BangDB treats keys and values as arbitrary byte arrays and stores keys either ordered, using a B-tree, or unordered, using a hash. Writes and reads are concurrent and scale well with the number of cores. The embedded version of BangDB is used here, since LevelDB is an embedded db, but BangDB is also available in other flavors such as client/server, clustered, and Data Fabric (upcoming).

The following commodity machine (~$400) was used for the test:

  • Model: 4 CPU cores, Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz, 64bit
  • CPU cache : 6MB
  • OS : Linux, 3.2.0-54-generic, Ubuntu, x86_64
  • RAM : 8GB
  • Disk : 500GB, 7200 RPM, 16MB cache
  • File System: ext4

The following configuration is kept constant throughout the analysis (unless restated before a test):

  • Assertion: OFF
  • Compression for LevelDB: ON
  • Write and Read: Sequential and Random as mentioned before the test
  • Access method: Tree/Btree
  • Key Size: 10 bytes
  • Value Size: 16 bytes

The tests are designed to cover the performance of the dbs for:

  1. sequential write and read for 100M keys using 1 thread
  2. sequential write and read for 75M keys using 4 threads
  3. random write and read for 75M keys using 1 thread
  4. random write and read for 75M keys using 4 threads
  5. sequential write and read for 1 Billion keys using 1 thread

Note that the data written to disk is much more than the available RAM on the machine, especially in the billion-key test, hence the db has to flush data and make new pages available for ongoing operations to continue. Typically the amount of data written ranges from a few GB to hundreds of GB, with only 8GB of RAM. BangDB's buffer pool was set to 5GB overall, so it never used more than 5GB of memory, whereas LevelDB uses as much memory as it can.

Also, most of the tests are done with a single thread, mainly because LevelDB's put performance is best with a single thread and degrades considerably with more concurrent threads. BangDB, however, leverages the CPUs well, and both read and write improve with more threads. Tests 2 and 4 are the cases in point here.
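The measurement methodology behind figures like "ops/sec" can be sketched as a simple timing loop. The snippet below uses a plain Python dict as a stand-in store (it is not the actual harness and says nothing about BangDB or LevelDB themselves); it only shows how throughput numbers of this kind are derived:

```python
import time

def measure_iops(op, n):
    """Run op(i) for i in 0..n-1 and return the throughput in ops/sec."""
    start = time.perf_counter()
    for i in range(n):
        op(i)
    return n / (time.perf_counter() - start)

store = {}                         # stand-in for the key-value store
value = b"x" * 16                  # 16-byte values, matching the test setup
key = lambda i: str(i).zfill(10)   # 10-byte keys, matching the test setup

put_iops = measure_iops(lambda i: store.__setitem__(key(i), value), 100_000)
get_iops = measure_iops(lambda i: store[key(i)], 100_000)
```

The real tests run the same pattern against the db under test, with sequential or random key order and one or more threads.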

1. Sequential put and get for 100M keys and values using 1 thread

For put, we note that both dbs perform well, with IOPS around 580,000 ops/sec.

For get, we note that BangDB finishes the operations around 60 sec before LevelDB, taking around 25% less time than LevelDB.

IOPS for BangDB = 600,000 ops/sec
IOPS for LevelDB = 450,000 ops/sec

2. Sequential put and get for 75M keys and values using 4 threads

BangDB improves its performance with 4 threads, whereas LevelDB's performance with 4 threads decreases threefold.

Avg IOPS for BangDB = 680,000 ops/sec
Avg IOPS for LevelDB = 205,000 ops/sec

For get, here again BangDB finishes the task much before LevelDB, though LevelDB later picks up and runs at higher IOPS.

Avg IOPS for BangDB = 650,000 ops/sec
Avg IOPS for LevelDB = 385,000 ops/sec

3. Random put and get for 75M keys and values using 1 thread

We note that for put operations with random keys, both LevelDB and BangDB perform well, except that LevelDB takes a dip for around 75 seconds (~23% of its total run time) where its performance drops to almost zero. This makes LevelDB take around 125 seconds more than BangDB: BangDB finishes the put of 75M keys in 250 sec, and LevelDB in around 325 sec.

IOPS for BangDB = 300,000 ops/sec
IOPS for LevelDB = 230,000 ops/sec

Note that the IOPS for both dbs are lower with random keys than with sequential keys; however, LevelDB's performance goes down by a larger margin.

Here we see interesting data: LevelDB's get performance for random keys goes down drastically, whereas BangDB takes time to pick up but finishes much ahead of LevelDB.

IOPS for BangDB = 210,000 ops/sec
IOPS for LevelDB =   55,000 ops/sec

4. Random put and get for 75M keys and values using 4 threads

BangDB overall performs better than LevelDB, though LevelDB remains consistent throughout.

IOPS for BangDB = 375,000 ops/sec
IOPS for LevelDB = 130,000 ops/sec

BangDB's IOPS increases with more threads because it leverages the number of CPUs available on the machine, whereas LevelDB does not for put operations.

Again, BangDB performs better with more threads, but LevelDB, which typically performs better with more threads for get operations, here performs worse than with 1 thread. This is mainly because LevelDB is very good at handling sequential ops and improves with more threads for sequential gets; for random IO, however, its performance degrades with more threads. Again, on a machine with multiple CPUs, BangDB exploits the situation much better.

Avg IOPS for BangDB =  240,000 ops/sec
Avg IOPS for LevelDB =   40,000 ops/sec

5. Sequential put and get for 1 Billion keys and values using 1 thread

For put, LevelDB performs better than BangDB in terms of IOPS, but both are very consistent and highly performant.

Avg IOPS for BangDB = 560,000 ops/sec
Avg IOPS for LevelDB = 560,000 ops/sec

For read, LevelDB takes a lot more time than BangDB to complete the job.

Avg IOPS for BangDB = 500,000 ops/sec
Avg IOPS for LevelDB = 150,000 ops/sec

LevelDB spends a lot of time initially, for almost half of its run time, at just a few thousand ops/sec, and only later picks up to much higher numbers.


Both BangDB and LevelDB are high-performance databases. However, there are certain highlights based on the data collected above:

  • BangDB and LevelDB both perform very well for sequential operations
  • For random operations, LevelDB's performance goes down considerably, whereas BangDB still performs well
  • BangDB fully leverages the available CPUs on the machine and performs better with more threads (up to the number of CPUs), whereas LevelDB's write is best with a single thread and its read improves with more threads only for sequential ops. Hence BangDB is more suitable for multi-core machines
  • Use of an SSD would benefit both dbs, but it would benefit BangDB more: it exploits the CPUs better than LevelDB, and in the absence of seek time BangDB would give a lot better performance than LevelDB (to be demonstrated in an upcoming blog)

Please see the earlier blog on the same topic at HighScalability. The current blog also addresses the requests received there for performance tests with larger numbers of keys and values.

Note that BangDB also runs as a server, so interested folks can also see the comparison with Redis here.

Upcoming are separate blogs on BangDB as a Data Fabric, a Document Database, and a Columnar DB.

Those interested in trying out BangDB, please visit iqlect

Walden International Chairman & Founder Lip-Bu Tan joins the advisory board of IQLECT

Bangalore, INDIA, 15th Nov 2015: IQLECT, a converged big data platform company headquartered in Bangalore, today announced that it has appointed Lip-Bu Tan to its advisory board. Lip-Bu Tan is Chairman and Founder of Walden International (WI) and has been active in the venture capital industry for the past two decades. Additionally, he introduced and pioneered the U.S. venture capital concept in Asia and contributed to the promotion of early-stage technology investing in the Asia-Pacific region. Lip-Bu currently serves on the boards of HP Enterprise, Semiconductor Manufacturing International Corporation, and the Global Semiconductor Association. Sachin Sinha, Founder of IQLECT, commented, “We are delighted to have Lip-Bu join our advisory board. His experience, knowledge and extensive network in the electronics industry will be of great help to IQLECT.”

“IQLECT’s converged all-in-a-box solution for real time data analysis will simplify and accelerate big data analysis,” said Lip-Bu Tan. “I look forward to working closely with IQLECT to make them a world leader in real time analytics platforms.”


IQLECT offers a hardware-software converged platform that provides actionable data insights in real time. Real time insights are the need of the hour for businesses in e-commerce, financial services, web and digital media, personalized and targeted marketing, mobility, and the internet of things. However, it is currently difficult to set up such an infrastructure: the available options are either costly or complex, requiring the integration of multiple software components, which means spending months of effort just to get started.

IQLECT simplifies the overall proposition and offers a fully baked, off-the-shelf converged software platform, either in the cloud or as a converged hardware-software all-in-a-box platform. The convergence of all the necessary software and hardware in one box lets users get up and running in a few hours, making the proposition highly scalable, cost effective, and easy to integrate, and accelerating enterprises' time to market.

The team at IQLECT represents some of the best minds and leaders, with decades of experience in this area from companies such as Amazon, IBM Research, Facebook, EMC, Informatica, Microsoft, Oracle, Yahoo, Novell, etc. They have academic backgrounds from reputed institutes like the IITs, MIT, Berkeley, Univ of Utah, and Univ of Wisconsin-Madison, with B.Tech, MS and PhD degrees in the database and analytics domains.

Analytics with BangDB


One of the main goals of BangDB is to let users deal with high volumes of data in an efficient and performant manner across various use-case scenarios. Features like different types of tables, key types, multiple indexes, JSON support, treating BangDB as a document database, and so on allow users to model data according to their requirements and give them flexibility in storing and retrieving data as needed. This certainly means users can create their own custom apps using BangDB for data analysis.

The other approach is to provide fully baked native constructs: abstractions that can be used off the shelf to enable data analysis in different ways. The abstractions hide all the complexity and expose simple APIs for storing and retrieving data for analysis. These built-in constructs free developers from worrying about data modelling, configuring db objects, processing the input, querying methods, post-processing, and so on, and allow them to enable analysis simply by using an object of the given type.

Native constructs or abstractions

The following high-level constructs are provided by BangDB 1.5, and the goal is to keep adding more abstractions and capabilities so that users find BangDB useful for many other analytical requirements.

1. Sliding Window 
2. Counting
3. TopK 

Sliding Window

In real time analysis, we are interested in the most recent data and wish to analyse it accordingly. This is different from the typical hot/cold data concept, where older data could be hotter than recent data. Here we strictly want to work within a defined recent window.

BangDB provides Sliding Window as a type: the user defines 'recent' by providing a time range, and then always works within that range as the window keeps sliding continuously.

To further ease development, BangDB also provides a sliding table concept: the user can simply create a table that always works on the recent data window, sliding continuously. Similar abstractions exist for counting and topk.
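To illustrate the idea (this is a generic sketch in plain Python, not BangDB's actual API), a sliding window can be kept as a deque that evicts anything older than the configured span:

```python
from collections import deque
import time

class SlidingWindow:
    """Keep only events that fall inside the most recent `span` seconds."""

    def __init__(self, span_seconds):
        self.span = span_seconds
        self.events = deque()   # (timestamp, value) pairs, oldest first

    def put(self, value, now=None):
        now = time.time() if now is None else now
        self.events.append((now, value))
        self._evict(now)

    def values(self, now=None):
        self._evict(time.time() if now is None else now)
        return [v for _, v in self.events]

    def _evict(self, now):
        # Drop everything that has slid out of the window.
        while self.events and self.events[0][0] <= now - self.span:
            self.events.popleft()
```

With a 10-second window, an event put at t=0 is visible at t=6 but gone by t=12; the window "slides" simply by evicting on every access.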


Counting

In almost all analytical work, counting is inevitable. Many times we need an exact total count; sometimes an approximate count within an acceptable error margin is sufficient; and in many cases we need unique counting, or non-unique in other scenarios. Again, this counting could be from the beginning of time or over a specified time window that keeps sliding. For such use cases, BangDB provides native constructs for counting.

Counting can be done in various ways using BangDB. For example, we can simply create an object of the Counting type and let it count uniquely forever. In some cases this is fine, but imagine a scenario where the user would like to count each entity uniquely; if the number of entities is large, the overhead of counting becomes very high. Say we have 100M entities and we would like to keep a count for each one. Even with only 16 bytes dedicated per entity, we would need 1.6GB of space, and since we need to respond quickly we would like to keep these counts in memory as much as possible. In such a scenario, if we are fine with not counting exactly and can tolerate an error margin of, say, 0.05%, then BangDB provides a construct with which we can count with only a few MB of overhead (less than 4-5 MB, as compared to a few GB). This is probabilistic counting using the HyperLogLog concept.

All of this counting can also be done in a sliding window, and there are many configuration options for different settings in different use cases.
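The memory trade-off described above is the classic HyperLogLog idea. The following minimal Python sketch is illustrative only (BangDB's internal implementation will differ); it estimates distinct counts with a fixed number of small registers instead of storing state per entity:

```python
import hashlib
import math

class HyperLogLog:
    """Approximate distinct counting with 2**b byte-sized registers."""

    def __init__(self, b=10):
        self.b = b
        self.m = 1 << b                              # number of registers
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)   # bias correction, m >= 128

    def add(self, item):
        h = int(hashlib.sha1(item.encode()).hexdigest(), 16) & ((1 << 64) - 1)
        idx = h & (self.m - 1)                       # low b bits pick a register
        w = h >> self.b                              # remaining 64 - b bits
        rank = (64 - self.b) - w.bit_length() + 1    # position of leftmost 1-bit
        if rank > self.registers[idx]:
            self.registers[idx] = rank

    def estimate(self):
        e = self.alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if e <= 2.5 * self.m and zeros:              # small-range correction
            e = self.m * math.log(self.m / zeros)
        return e
```

Here 1024 one-byte registers (about 1 KB of state) estimate large distinct counts to within a few percent, which is why the per-entity overhead collapses from GBs to MBs.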


TopK

This is another important feature from the analytics perspective. TopK has been a topic of interest for many researchers and analysts, and is therefore used in many places. BangDB provides a native construct for TopK.

TopK means keeping track of the top k items. These could be anything, for example: the top 30 users with the most items in their cart, the top 20 products searched every 15 minutes, the top 10 queries run every hour, etc. Using BangDB's topk abstraction, users can perform topk analysis with just the get and put APIs.

TopK can again be done in an absolute manner or within a sliding window, with different settings. These are available in BangDB as fully baked constructs and hence may be used directly. Users can also enable other analytical capabilities using BangDB's various features. In the coming days, more such abstractions will be added for different analysis needs.
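As a plain-Python illustration of the idea (again, not BangDB's actual API), a top-k query over a batch of events can be expressed with a counter and a heap:

```python
from collections import Counter
import heapq

def top_k(events, k):
    """Return the k most frequent items as (item, count), highest first."""
    counts = Counter(events)
    return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])

# e.g. search terms seen in the last 15 minutes
searches = ["shoes", "tv", "shoes", "phone", "shoes", "tv"]
top2 = top_k(searches, 2)
```

A streaming construct like BangDB's would maintain such counts incrementally per window instead of recomputing over a batch, but the result it exposes is the same shape.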

In the next blog we will go into the details of these concepts and provide example code for each. Their power can be illustrated by noting that we can now build a Google Analytics-like portal within an organization, covering far more data points, in less than a few hours. We will demonstrate this in an upcoming blog.

Redis vs BangDB – Performance Comparison

This post is not related to our series of posts on "distributed computing". I have digressed a bit: since I released BangDB in a master-slave cluster configuration, I thought of doing a simple performance comparison with the very popular db Redis. This post is a simple performance comparison of Redis and BangDB (server).

Redis

Redis is an open source, BSD licensed, advanced key value store. It is also referred to as a data structure server, since keys can map to strings, hashes, lists, and sorted sets.

In order to achieve its outstanding performance, Redis works with an in-memory dataset. Depending on the use case, one can persist it either by dumping the dataset to disk every once in a while or by appending each command to a log.

Redis also supports trivial-to-set-up master-slave replication, with very fast non-blocking first synchronization and auto-reconnection. Other features include transactions, Pub/Sub, Lua scripting, keys with a limited time-to-live, and configuration to make Redis behave like a cache.

BangDB

BangDB is a multi-flavored, BSD licensed key value store. The goal of BangDB is to be a fast, reliable, robust, scalable, and easy-to-use data store for the various data management services required by applications.

BangDB is a transactional key value store which supports full ACID by implementing optimistic concurrency control with parallel verification for high performance and concurrency. BangDB implements its own buffer pool and write-ahead log with crash recovery, and gives users many configuration options to control the execution environment, including the memory budget.

BangDB works as an embedded, standalone server and cluster db. It is very simple to set up in a master-slave configuration, with highly performant non-blocking slave synchronization that never brings the server to a standstill.

The following machine (commodity hardware) was used for the test:
  • Model : 4 CPU cores, Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz, 64bit
  • CPU cache : 6MB
  • OS : Linux, 3.2.0-51-generic, Ubuntu, x86_64
  • RAM : 8GB
  • Disk : 500GB, 7200 RPM, 16MB cache
  • File System: ext4
The BangDB configuration:
  • Key size : 12 bytes  (unique, random)
  • Val size : 20 bytes – randomly picked
  • Page size : 8KB
  • Write ahead log : ON
  • Log split check : ON, check every 30 ms
  • Buffer pool size : 512MB
  • Background log flush : every 50 ms
  • Checkpointing : every ~4 sec
  • Buffer pool background work : ON, every 60 ms
  • Number of concurrent threads : 4

The Redis configuration:

All default values provided in redis.conf were used. However, we switched the save part OFF/ON as mentioned before the individual tests below. Note that with save OFF, Redis performs better, as it doesn't have to dump data frequently.


The goal of the exercise is to test Redis and BangDB as servers. I will not go into the details of slave synchronization and replication to test them as a cluster (master-slaves); I will cover that area in a future post.

The client and server are on the same machine in order to avoid the network latency for the tests

Test 1

10M (key, val) put and get, with save = ON and OFF. For save = ON, Redis saves with default values and BangDB flushes with 50ms frequency. In both cases fsync is off.

Test 2

50M (key, val) put and get. Redis completes the test when save is OFF, that is, when Redis is not writing anything to disk, not even the append-only log. But when save is ON, it takes a very long time to finish; in fact it took more than 10 times as long to complete the test (compared with BangDB). Here are the graphs:


Test 3

100M (key, val) put and get. Here I was unable to complete the test for Redis even with save = OFF. Redis works well until 67M keys, but then its performance drops really low and it starts taking a second to put 50 keys. I had to halt the test in the interest of time, so the graph shows only partial figures for Redis. Note that this happens when save is off and the db is doing no disk writes at all. Redis's performance goes down further if we enable save; it gets stuck well before 50M keys when save = ON. For get, Redis performs well even at higher numbers, but since I was unable to insert 100M keys and values, I could not run the get test for 100M keys.

BangDB, on the other hand, behaves as expected and performs well for both read and write.


Test 4

The last test is to put and get 1 billion keys and values. As expected, I was unable to complete the Redis test: it was not done even after running for more than 10-12 hours, hence I am presenting the results for BangDB only.


BangDB and Redis are both very high-performance dbs and serve well for both reads and writes of key-value pairs, though BangDB performed better in this comparison, for both read and write, and whether in save mode or non-save mode (as a cache). We also note that beyond a certain point Redis's performance suffers badly, dropping to around 100 keys per second for writes, which is too low from any perspective. The case in point is the 100M key-value insert test, which could not be completed because Redis was too slow after 67M inserts, whereas BangDB continued at the expected performance even for the billion-key insert. The tests used 20-byte values; with larger values Redis would slow down well before the 67M keys experienced here.

BangDB implements its own buffer pool with a semi-adaptive page prefetch and page flush algorithm. BangDB also implements a write-ahead log that is append-only, which optimizes disk writes by avoiding random seeks as much as possible. An interesting point is the log flush frequency, which can be set down to microsecond granularity; for our test it was 50 ms.

Redis has many features and hence is also called a data structure server, whereas BangDB's focus is on handling a high volume of data with predictably high performance. If specific features are required (like lists, sets, etc.), Redis works very well and BangDB does not support them. But when handling a large amount of data is required, survival under highly stressed scenarios matters, and performance is the important criterion, BangDB is the better fit. Note that transactions, sync, and replication are other features supported by both dbs.

Note that BangDB comes in various flavors, namely an embedded db, a client-server model, and a p2p-based clustered elastic data space. Interestingly, the BangDB client is the same for all flavors, which means that once code is written for any flavor, the user can potentially switch from one model to another, based on requirements, in a few minutes.

The test apps were written in C++ and used hiredis for Redis and bangdb-client for BangDB. Both dbs are available free of cost under the BSD license, so anyone is free to download them and test accordingly. Also note that for BangDB we can specify a memory budget for the server: if we allocate 2GB on an 8GB RAM machine, it will use only 2GB and not go beyond that. For the above tests we used a 5GB memory budget on an 8GB RAM machine. Redis, on the other hand, was using all available memory, with no limit set.

Please post your comments and thoughts.

Sachin Sinha

Model of distributed system

Post #2 of the distributed computing discussion series 

The previous post gave a high-level introduction to distributed systems. In this post we will discuss the model of a distributed system.

Once defined, the model will help us understand the many features and flavours of distributed computing challenges, put them in perspective, and allow us to formulate workarounds or solutions to overcome those challenges. These models will be used throughout the next set of posts on the topic, so it is important to understand them clearly.

How do we visualize a distributed system?

Here is how I would describe a distributed system in simple terms. 

  • message passing system – nodes interact by sending/receiving messages
  • loosely coupled – no upper bound on message arrival time
  • no shared memory – all nodes have their own private memory
  • no global clock – clocks of different nodes can’t be synchronized globally
  • a graph topology – consisting of processes as graph nodes and channels for message passing as edges (directional)
  • no message ordering – ordering of messages in the channels is not assumed

A simple figure to describe what I wrote above;

State and Event

A process, or a node in the broader sense, can be seen in the context of the following properties;

  • set of states,
  • set of events, and
  • set of initial conditions

Hence it's easy to see that in a graph kind of topology, with nodes as processes and edges as channels, the states of both a node and an edge change when an event arrives. The following diagram makes it a bit clearer;

Here we have a node in state S2 and an incoming channel to the node in state S1. An event e arrives and changes the states of both the channel and the node.

Thus we can safely assume that an event changes the state of at least one node and of a channel connected to it. This concept will come in handy in future discussions.

Global State

We have just discussed that a distributed system has many nodes connected with each other through channels, interacting by sending messages over those channels. We have also discussed that the system is loosely coupled in the sense that we can't put a cap on message arrival times or assume ordering of messages in a channel. Also, the state of a node changes when an event arrives at it (the state of the channel changes as well).

With this much information, we can safely define the global state of a distributed system as the cross product of the local states of all the nodes and channels in the system. When all channels are empty, we call this the initial state of the global system.
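As a rough sketch (all type and function names below are mine), the definitions above map almost directly to code: a node holds a local state, a channel holds its in-flight messages, delivering an event mutates both, and a global state is just the tuple of all local states:

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <vector>

struct Node    { int state = 0; };                 // local state of a process
struct Channel { std::deque<std::string> msgs; };  // in-flight messages (state of the edge)

// Delivering an event changes the state of the channel (a message
// is consumed) and of the receiving node.
void deliver(Channel& ch, Node& to) {
    if (ch.msgs.empty()) return;
    ch.msgs.pop_front();
    to.state += 1;  // stand-in for an arbitrary state transition
}

// Global state = cross product of all local node and channel states.
struct GlobalState {
    std::vector<int> node_states;
    std::vector<std::size_t> channel_depths;
};

GlobalState snapshot(const std::vector<Node>& nodes, const std::vector<Channel>& chans) {
    GlobalState g;
    for (const auto& n : nodes) g.node_states.push_back(n.state);
    for (const auto& c : chans) g.channel_depths.push_back(c.msgs.size());
    return g;
}
```

With this representation, the initial state of the post corresponds to a snapshot in which every channel_depths entry is zero.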


When we execute a distributed program, it generates many events (such as function outputs, and the sending and receiving of messages). Each of these events would not make much sense if we didn't apply an order to them. There are many ways of thinking about how order can be applied to the events; here are some:

1. All events in the system are ordered. This means that across the system, with all nodes and channels, each and every event is ordered. That is, there exists a global sequence of events which is properly ordered. This assumes the following:
  • we have a shared global clock available
  • all events are instantaneous
  • no two events are simultaneous
2. Let's relax the first assumption, that is, now assume that there is no shared global clock available. This means that globally we can no longer compute a sequence of events. But we still have full ordering on a particular process or node. Thus we have partial order in the system but total order on a node.
3. Finally, let's also relax the assumption that every event is caused by some previous event(s). This makes total order on a single node difficult too, and hence we are left with partial order in the system and partial order on a node.
Model of distributed system

Now we can define the models of a distributed system by naming the above three ways of ordering in a formal manner.

1. Interleaving (Physical time) – Total order across the system
2. Happened Before (Logical order) – Total order on a process, partial order on the system
3. Potential Causality (Causality) – Partial order on a process or system
The important difference between 2 and 3 is that 2 assumes that all events on a process have a cause-and-effect relationship, which is not always true. An example would be two different messages received on two different ports of a machine, or two threads accessing or changing mutually disjoint sets of variables.
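To make the difference concrete, here is a minimal sketch (all names are mine) of the happened-before relation as a reachability check: e1 happened before e2 if e2 follows e1 in program order on the same process, or can be reached from e1 through program order and message (send to receive) edges. Under potential causality we would additionally drop program-order edges between causally unrelated local events, which is exactly what distinguishes model 3 from model 2:

```cpp
#include <set>
#include <utility>
#include <vector>

struct Event { int process; int seq; };  // seq = position in the process's local order

// Message edges: (send event index, receive event index) into a global event table.
using Edge = std::pair<int, int>;

// True if events[a] happened before events[b] (Lamport's relation).
bool happened_before(const std::vector<Event>& events, const std::vector<Edge>& msgs,
                     int a, int b) {
    if (a == b) return false;
    // Same process: program order decides.
    if (events[a].process == events[b].process) return events[a].seq < events[b].seq;
    // Different processes: search forward along program order and message edges.
    std::set<int> frontier{a}, seen{a};
    while (!frontier.empty()) {
        std::set<int> next;
        for (int e : frontier) {
            for (int f = 0; f < (int)events.size(); ++f)  // program-order successors
                if (events[f].process == events[e].process &&
                    events[f].seq > events[e].seq && !seen.count(f))
                    next.insert(f);
            for (const Edge& m : msgs)                    // message edges
                if (m.first == e && !seen.count(m.second)) next.insert(m.second);
        }
        if (next.count(b)) return true;
        for (int f : next) seen.insert(f);
        frontier = next;
    }
    return false;  // a and b are concurrent (or b precedes a)
}
```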

Another important point to note here is that a potential causality order is equivalent to the set of happened-before orders that are consistent with it; this, of course, depends on the potential causality relation on each process.

Now that we have defined the models of a distributed system, the question we might have is: which model is the appropriate one? The answer depends on the application for which the model is being used. But, as seen logically, a distributed program can be viewed as a set of potential causality orders; each potential causality order, in turn, is equivalent to a set of happened-before orders; and each happened-before order is equivalent to a set of global sequences of events (if each process is taken separately).

Here are graphical descriptions of two of the models described above;

The first one is for happened before, whereas the latter is for potential causality.

That's all for this post. In the next one we will discuss the logical clock and vector clock algorithms, which are very important for synchronizing events in a distributed system. Basically, these clocks are used to track the order of events in the different models of a distributed system as defined above.

Sachin Sinha

Distributed System – a high level introduction

Post #1 of the distributed computing discussion series

This is the beginning of a series of articles on distributed computing. I will first put forward some of the basic concepts of distributed computing and then take up some of the related problems and dig deeper. In the process I will also dedicate a few posts to existing products/systems relevant to the discussion, and try to explain why certain things are done in certain ways, or how someone has solved or overcome a problem.

This post quickly gives a high-level introduction to distributed computing, as it will be referred to in many places in future posts. Please note that this subject is too vast to cover in a single post, so I will try to focus on what matters from a practical design and implementation perspective.

Distributed System

For simplicity and for the sake of discussion, let's define a distributed system as a collection of multiple processors (programs, computers, etc.) connected by a communication network. These connected processors try to achieve a common goal by communicating with each other using messages sent over the network.

How many ways are there to construct such a system? Many, but let's consider two. First, we can divide a task into m subtasks and distribute the subtasks over the network. Second, we can have each program in the distributed system execute a different subset of jobs. In both scenarios (or, for that matter, any other scenario you can come up with) we clearly see the power of the system: it scales linearly, can handle a huge amount of load, and remains available all the time in a practical sense. Also, very importantly, we don't need million-dollar tailor-made machines for relatively big tasks. Rather, we can use many pieces of cheap commodity hardware to create a distributed system and achieve the same task with far fewer dollars spent, and probably with more redundancy and availability. The use of commodity hardware, compared with tailor-made bigger machines, provides more resiliency in a typical practical environment.

Hence the next question: why are there so few software programs that exploit cheap hardware and do computing economically using a distributed system?

The answer is simple: distributed systems and computing require a different set of tools and techniques than traditional applications do. For simplicity, we can view non-distributed systems as sequential systems; this helps us understand the concepts relatively easily.

So what are the benefits and challenges of distributed systems and computing? We will cover that after clarifying one widely discussed and often confused topic.

Parallel vs Distributed computing

Without going into a detailed discussion, this is how we can distinguish between the two. In parallel programming we have multiple processors working and communicating with each other through a shared memory, whereas in the distributed programming model we lack shared memory in the true sense, so each processor works with local memory and processors communicate by sending messages. But note that this holds only at the logical level: in the real world we can simulate message passing using shared memory in the case of parallel processors, and, on the other hand, we can simulate shared memory over the connected network of distributed processors by sending messages.
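The "simulate message passing over shared memory" direction is easy to sketch (the class and function names here are mine, not from any library): two threads share mutex-protected queues and treat push as send and pop as receive:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A shared-memory "channel": push is send, pop is receive.
class MsgChannel {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int recv() {  // blocks until a message is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        int msg = q_.front();
        q_.pop();
        return msg;
    }
};

// Two "processors" exchanging messages over shared memory.
int ping_pong() {
    MsgChannel to_worker, to_main;
    std::thread worker([&] {
        int v = to_worker.recv();  // "receive" a message
        to_main.send(v + 1);       // "reply"
    });
    to_worker.send(41);
    int reply = to_main.recv();
    worker.join();
    return reply;
}
```

The opposite simulation, shared memory over messages, is what distributed shared memory systems do, at the cost of network round trips.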

A simple diagram will fix the concept (inspired by the wiki):

The first figure is for a distributed system, whereas the second one is for a parallel system.

Now let's list why one should choose distributed computing over parallel computing.

Benefits of distributed system

1. Scalability: In a parallel system, shared memory becomes the bottleneck as we increase the number of processors; there are too many contentions for a single resource

2. Availability: A distributed system is inherently more available due to the natural redundancy in it, whereas special efforts are required in a parallel system to achieve the same

3. Resiliency: A distributed system doesn't demand the same type of processor when we want to add an extra node to it. The sheer modularity and heterogeneity in a distributed system provide a resiliency that is not possible in a parallel system

4. Sharing: Data and resources can be shared in distributed computing. Multiple organizations, or divisions within the same organization, can share data and resources with each other. For example, a high-powered expensive machine can be connected into the distributed system so that other machines/processors can use it

5. Geographical benefit: Local computation and accessibility are possible with a distributed structure

6. Reliability: A single processor/machine failure doesn't fail the overall system

7. Economy: This, in my view, is very important. Since we can plug less expensive commodity hardware into a distributed system, the overall expense goes down to a great extent compared to hugely costly tailor-made multiprocessor machines

Benefits of parallel system

1. Accessing shared memory: Accessing shared memory is very fast compared to sending and receiving messages over the network

2. Fine grained parallelism: Since data can be communicated with much more ease and speed, fine-grained parallelism is easily achieved in a parallel system

3. Distributed systems are hard to develop and maintain: We will see in the next section the challenges posed by distributed systems and computing. All these complexities can be avoided by using a parallel system

4. Hardware support: The presence of multiple processors in commodity hardware actually spurs the use of parallel programming in an implicit manner

Note: In fact, throughout the discussion, in this post and upcoming ones, I will always assume and support parallel processing on a single node/machine. So when I say a single-node processor, I mean a single piece of commodity hardware with multiple processors in it. Hence I assume and encourage the use of multiprocessing/multithreading on a node.
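In that spirit, using the cores of a single node is cheap. A minimal sketch (my own code, not tied to any system discussed above) that splits a sum across threads, with each thread writing only to its own slot to avoid contention:

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Sum a vector by splitting it across nthreads; each thread works
// on its own disjoint slice and its own output slot, so there is no
// shared-state contention during the computation.
long parallel_sum(const std::vector<long>& data, unsigned nthreads) {
    std::vector<long> partial(nthreads, 0);
    std::vector<std::thread> workers;
    std::size_t chunk = (data.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            std::size_t lo = t * chunk;
            std::size_t hi = std::min(data.size(), lo + chunk);
            for (std::size_t i = lo; i < hi; ++i) partial[t] += data[i];
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```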

Challenges of distributed system

We can summarize the challenges in the following three points;

1. Lack of a shared clock: Due to uncertainty in communication delays over the network, it's not possible to precisely synchronize the clocks of different processors in a distributed system

2. Lack of shared memory: It's impossible for any processor or node to define the global state of the whole system at any point in time, hence no global property of the system can be observed directly

3. Lack of accurate failure detection: In most scenarios, due to communication issues, it's not possible to tell whether a processor has failed, is unreachable, or is simply slow to reach. In a nutshell, it's not possible to detect failure with great accuracy, especially in the asynchronous model

The above three challenges require different tools and techniques to address the issues faced in distributed computing.


In the next post, we will discuss the 'model' of distributed computation. In the one after that we will discuss logical clocks, with which we can order events in a distributed scenario, and so on. Very soon we will have all the concepts in place to discuss real-world problems of distributed computing, how to address them effectively, and how others (not many) have applied these techniques to design their products.

Thanks for reading through the post!

Sachin Sinha