
Will Vitalik’s ‘sharding’ proposal fix Ethereum’s scaling problems?


At Devcon last week, Vitalik Buterin presented a proposal for dealing with the scaling problem facing the network as adoption of Ethereum and the smart contracts it supports continues to grow. His “modest proposal” for Ethereum’s future was timely and exciting.

A problem common to all blockchains, and to Ethereum in particular because of the explosive growth of the ICO market, is that the size of the database underlying the blockchain grows without bound, in direct proportion to the number of people using it.

This translates into major overhead for the infrastructure needed to run Ethereum, and the burden is getting worse over time.

As a direct result, the Ethereum database has grown over the last few years from under a gigabyte to tens of gigabytes today, and it appears to be on an exponential trajectory. This is clearly not sustainable.

Vitalik’s solution is ‘sharding’

The key technology Vitalik believes will solve this is “sharding”: splitting the data into pieces and partitioning the network. He explained that sharding the Ethereum blockchain would allow each node to store only a part of the complete network, while the nodes validate the whole through the underlying mathematics and mutual communication.

It is interesting to note that ‘sharding’ is a term borrowed from database technology. It resonates particularly well with the team at Bluzelle, because the Bluzelle database was architected from the outset around three key technologies, one of which is database sharding.

Briefly, sharding consists of three steps (sketched in code after the list):

  1. Break up the data into small chunks (also called shards)
  2. Partition the network into subnetworks (Bluzelle calls each a swarm. Vitalik calls each a universe)
  3. Distribute the shards into the swarms
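
To make those three steps concrete, here is a minimal, hypothetical sketch in Python. The chunk size, the hash-based assignment of shards to swarms, and all the names (`Shard` fields, `shard_data`, `make_swarms`, `distribute`) are illustrative assumptions, not Ethereum’s or Bluzelle’s actual design.

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative sketch only: chunk size, swarm count, and the hash-based
# assignment are assumptions, not Ethereum's or Bluzelle's actual design.

CHUNK_SIZE = 1024          # bytes per shard (assumed)
NUM_SWARMS = 4             # number of subnetworks (assumed)

@dataclass
class Swarm:
    """A subnetwork of nodes that collectively stores some of the shards."""
    swarm_id: int
    shards: list = field(default_factory=list)

def shard_data(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Step 1: break the data into small fixed-size chunks (shards)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def make_swarms(n: int = NUM_SWARMS) -> list[Swarm]:
    """Step 2: partition the network into subnetworks (swarms)."""
    return [Swarm(swarm_id=i) for i in range(n)]

def distribute(shards: list[bytes], swarms: list[Swarm]) -> None:
    """Step 3: assign each shard to a swarm, here by hashing its content."""
    for shard in shards:
        digest = hashlib.sha256(shard).digest()
        index = int.from_bytes(digest[:4], "big") % len(swarms)
        swarms[index].shards.append(shard)

if __name__ == "__main__":
    swarms = make_swarms()
    distribute(shard_data(b"x" * 10_000), swarms)
    for s in swarms:
        print(f"swarm {s.swarm_id}: {len(s.shards)} shard(s)")
```

Using a hash of the shard’s content to pick its swarm makes the assignment deterministic, so any participant can recompute which swarm holds a given shard without consulting a central index.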

What is interesting about Vitalik’s proposal is that, if properly executed, it will enable Ethereum to scale well past the size limits it faces now. The specifics of how the subnetworks will communicate with one another remain to be worked out, and Vitalik is undoubtedly working on them.

Bluzelle and ‘sharding’

I thought it would be interesting to share some insight into how database sharding is used within Bluzelle.

We accomplish scalability by sharding all database data and storing the shards in Bluzelle’s own flavour of network partition, the swarm. A Bluzelle swarm is a collection of nodes that all replicate the same shards of data and together form the unique network that defines that swarm.
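
As a rough illustration of that replication model, here is a hypothetical Python sketch. The `SwarmOfNodes` class, the `put`/`get` methods, and the hash-based routing are my own simplifications for this article, not Bluzelle’s actual API.

```python
import hashlib

# Hypothetical sketch of swarm-level replication; not Bluzelle's real API.

class SwarmOfNodes:
    """A swarm: every node in it replicates the same shards of data."""
    def __init__(self, swarm_id: int, node_count: int):
        self.swarm_id = swarm_id
        # Each node is modelled as its own dict of replicated key/value shards.
        self.nodes = [dict() for _ in range(node_count)]

    def put(self, key: str, value: bytes) -> None:
        # Replicate the shard to every node in the swarm.
        for node in self.nodes:
            node[key] = value

    def get(self, key: str) -> bytes:
        # Any single node in the swarm can serve the read.
        return self.nodes[0][key]

def route_to_swarm(key: str, swarms: list[SwarmOfNodes]) -> SwarmOfNodes:
    """Deterministically map a key to the swarm responsible for it."""
    digest = hashlib.sha256(key.encode()).digest()
    return swarms[int.from_bytes(digest[:4], "big") % len(swarms)]

swarms = [SwarmOfNodes(i, node_count=3) for i in range(4)]
route_to_swarm("user:42", swarms).put("user:42", b"profile-data")
print(route_to_swarm("user:42", swarms).get("user:42"))
```

The point of the sketch is the division of labour: routing decides which swarm owns a key, while replication inside the swarm provides redundancy, so no single node ever has to hold the whole database.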

Bluzelle’s swarm is very similar to Vitalik’s universe. Our database shards are small chunks of a database that can be spread across swarms, just as small chunks of the Ethereum network would be assigned to an Ethereum universe, thereby enabling Ethereum to scale.

Our scalability approach is conceptually the same as what Vitalik plans for Ethereum’s future.

Today’s announcement strongly validates our fundamental architecture and design principles for scalability, and it shows how one of the largest and fastest-growing blockchains plans to use the same principles as Bluzelle to achieve mass scalability.

Neeraj Murarka

Neeraj Murarka is CTO of Bluzelle, and a software engineer and computer systems architect with more than 20 years of expertise in cutting-edge technology. He has worked on projects for Google, IBM, Hewlett Packard, Lufthansa, Thales Avionics, and Zynga.
