Bazaarvoice is a digital marketing company based in Austin, Texas. Its service enables retailers to add customer reviews to their websites.

It also helps big brands like Adidas & Samsung syndicate reviews written by their customers across multiple retail e-commerce websites.

This write-up is an insight into their service-oriented architecture, which they built from the ground up & migrated their workload to in parts, following a divide & conquer approach, away from the existing monolithic architecture.

The new architecture sailed them smoothly through events like Black Friday & Cyber Monday, serving record traffic of over 300 million visitors.

At peak, the Bazaarvoice platform handled over 97k requests per second, serving over 2.6 billion review impressions, a 20% increase over the previous year.


The Original Monolithic Architecture

Right from the start, Bazaarvoice had a Java-based monolithic architecture. The UI was rendered server-side.

With custom deployments, tenant partitioning & horizontal read scaling of MySQL/Solr architecture, they managed the traffic pretty well.

But as the business grew, new business use cases emerged, for instance, a mobile-first responsive design & managing social content from portals like Twitter, Instagram & Facebook.

Bazaarvoice customers needed to display reviews written by their shoppers across multiple e-commerce & social portals.

This was handled by copying the reviews many times over throughout the network, but the approach wasn’t scalable & was expensive as the data grew pretty fast.

Below is the monolithic architecture diagram of the Bazaarvoice platform:

Bazaarvoice monolithic architecture


The aim of the engineering team was to introduce client-side rendering, put an efficient system in place to manage fast-growing data & migrate the workload to a distributed service-oriented architecture.

Alright, the need for managing big data & transitioning to a distributed architecture, I get it. But why the need for client-side rendering?


Why Client-Side Rendering? What were the Problems with Server-Side Rendering?

Client-side vs Server-side rendering deserves a separate write-up in itself. I’ll just quickly provide the gist, the pluses & minuses of the two approaches.

Server-side rendering means the HTML is generated on the server when the user requests a page.

This ensures faster initial delivery of the UI, avoiding a long loading time in the browser window, as the page is already created & the browser doesn’t have to do much assembling & rendering work.

This kind of approach is perfect for delivering static content, such as WordPress blogs. It’s also good for SEO, as crawlers can easily read the generated content.

But modern websites are heavily Ajax-based: the required content for a particular module or section of a page is fetched & rendered on the fly.

Server-side rendering doesn’t help much here. For every Ajax request, instead of sending just the required content to the client, the approach regenerates the entire page, which consumes unnecessary bandwidth & fails to provide a smooth user experience.

Another big downside is that as the number of concurrent users on the website rises, rendering every page on the server puts unnecessary load on it.

Client-side rendering works best for modern dynamic Ajax-based websites.

We can also leverage a hybrid approach to get the most out of both techniques: server-side rendering for the home page & other static content on our website, and client-side rendering for the dynamic pages.
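To make the client-side approach concrete, here is a minimal sketch of what rendering reviews in the browser might look like: the client fetches only the review data as JSON & assembles the HTML itself. All names & shapes below are illustrative, not Bazaarvoice’s actual API.

```typescript
// Shape of a review as it might arrive from an Ajax/JSON endpoint.
interface Review {
  author: string;
  rating: number;
  text: string;
}

// Turn raw review data into an HTML fragment for one section of the page.
// Only this JSON payload travels over the wire, not a whole regenerated page.
function renderReviews(reviews: Review[]): string {
  const items = reviews
    .map(r => `<li><strong>${r.author}</strong> (${r.rating}/5): ${r.text}</li>`)
    .join("");
  return `<ul class="reviews">${items}</ul>`;
}

// In a browser, this fragment would be injected into the page, e.g.:
//   document.querySelector("#reviews").innerHTML = renderReviews(data);
```

The server’s job shrinks to serving JSON; the assembly cost moves to each visitor’s browser, which is exactly why this scales better as concurrent users grow.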


Technical Insights

Bazaarvoice adopted a distributed big data architecture based on Hadoop & HBase to stream data from hundreds of millions of websites into its analytics system.

Understanding this data would delineate the entire user flow, helping Bazaarvoice clients study user shopping behaviour.

Cassandra, a wide-column open-source NoSQL data store, was picked as the primary display storage. This technology choice was inspired by Netflix’s use of Cassandra as a data store.

On top of Cassandra, they built a custom service called Emo, intended to overcome potential data consistency issues in Cassandra & to guarantee ACID database operations.
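Emo’s internals aren’t detailed here, but one common technique for layering stronger guarantees on an eventually consistent store is optimistic concurrency: every record carries a version, and a write succeeds only if the version the writer read is still current. The sketch below shows that idea with an in-memory store; the class & method names are invented for illustration, and Emo’s actual design may differ.

```typescript
// A versioned document, as an optimistic-concurrency layer might store it.
interface VersionedDoc {
  version: number;
  data: Record<string, unknown>;
}

class OptimisticStore {
  private docs = new Map<string, VersionedDoc>();

  get(key: string): VersionedDoc | undefined {
    return this.docs.get(key);
  }

  // Conditional write: apply only if the caller's expected version matches
  // what is currently stored. A mismatch means another writer got in first.
  compareAndSet(
    key: string,
    expectedVersion: number,
    data: Record<string, unknown>
  ): boolean {
    const current = this.docs.get(key);
    const currentVersion = current ? current.version : 0;
    if (currentVersion !== expectedVersion) return false; // lost the race
    this.docs.set(key, { version: currentVersion + 1, data });
    return true;
  }
}
```

Rejected writers re-read and retry, which gives atomic, isolated updates per key without locking the whole store.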

For the search use cases, Elasticsearch was picked, along with a flexible rules engine called Polloi to abstract away the indexing & aggregation complexities from the teams that would use the service.
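The value of a rules engine like Polloi is that consuming teams declare *what* should be indexed rather than *how*. A tiny sketch of that abstraction, with a rule shape & names invented purely for illustration:

```typescript
// A declarative indexing rule: which documents it applies to,
// and which search index they should land in.
interface Doc {
  type: string;
  [key: string]: unknown;
}

interface IndexRule {
  name: string;
  matches: (doc: Doc) => boolean;
  targetIndex: string;
}

// The engine evaluates every rule against a document and returns the
// indexes it belongs in; callers never touch index names or mappings directly.
function routeToIndexes(doc: Doc, rules: IndexRule[]): string[] {
  return rules.filter(r => r.matches(doc)).map(r => r.targetIndex);
}
```

Changing how content is indexed then means editing rules in one place, not every team’s code.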

The workload is deployed on the AWS Cloud which also helped them manage monitoring, elasticity & security.

The entire existing workload was moved to the service-oriented architecture on the AWS cloud part by part, following a divide & conquer approach, to avoid any major blow-ups.

Below is the new service-oriented architecture diagram of the Bazaarvoice platform:

Bazaarvoice service oriented architecture


Originally, the customers used a template-based front end. The engineering team wrote a new client-side rendering front end with JavaScript.

As you can see in the diagram, the system as a whole has the original monolith as well as the distributed design working in conjunction with each other, since not all the customers were moved at once.

The engineering team wrote an API service that could hit either the monolith or the distributed services just by changing an API endpoint key.
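The routing idea can be sketched in a few lines: a single API façade forwards a request either to the legacy monolith or to a new service, chosen purely by an endpoint key. The keys & URLs below are made up for illustration, not Bazaarvoice’s real configuration.

```typescript
// Map of endpoint keys to backend base URLs (hypothetical values).
const endpoints: Record<string, string> = {
  monolith: "https://legacy.internal.example.com",
  distributed: "https://services.internal.example.com",
};

// Resolve which backend a request should go to, based only on the key.
function resolveBaseUrl(endpointKey: string): string {
  const base = endpoints[endpointKey];
  if (!base) throw new Error(`unknown endpoint key: ${endpointKey}`);
  return base;
}

// Migrating a customer is then just flipping their key from "monolith"
// to "distributed"; no client-side code changes are needed.
```

This indirection is what let them move customers in batches, with an instant rollback path if something went wrong.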

Starting with just a few clients at a time, the scalable architecture eventually allowed them to move up to 500 customers at a time.



All this massive engineering effort needed dedicated DevOps teams for monitoring, deployment & scalability, & for continually testing the performance of the workload running in the cloud.

A microservices architecture enabled different teams to take dedicated responsibility for their respective modules: right from understanding the requirements, to writing code, running automated tests, deployments & 24/7 operations.

The platform infrastructure team developed a program called Beaver, an automated process that examined the cloud environment in real time to ascertain that all the best practices were followed.

An additional service called the Badger monitoring service helped them automatically discover nodes as they spun up in the cloud.
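The write-up doesn’t detail Badger’s mechanism, but the core idea of automatic discovery can be sketched as a registry that newly spun-up nodes announce themselves to, which the monitoring service periodically sweeps for additions. The names here are hypothetical; the real system presumably leans on AWS APIs or instance tags.

```typescript
// Minimal discovery sketch: nodes announce themselves on startup,
// and the monitor sweeps for ones it hasn't started watching yet.
class NodeRegistry {
  private known = new Set<string>();
  private watched = new Set<string>();

  // Called by (or on behalf of) a node as it spins up.
  announce(nodeId: string): void {
    this.known.add(nodeId);
  }

  // Return nodes that appeared since the last sweep and mark them watched,
  // so monitoring attaches to each new node exactly once.
  discoverNew(): string[] {
    const fresh = Array.from(this.known).filter(id => !this.watched.has(id));
    fresh.forEach(id => this.watched.add(id));
    return fresh;
  }
}
```

With elastic scaling, this kind of automation matters: nobody has to remember to add a freshly launched instance to the monitoring dashboards.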


A key takeaway from this massive engineering feat, as the Bazaarvoice engineering team puts it, is: do not let the notion of a perfect, ideal implementation or transition hold you back.

Start small, keep iterating, keep evolving. Keep moving ahead with patience. Celebrate each step of the architectural journey.

It took them three years of hard work to pull off this massive monolith-to-microservices transition.

Source for this write-up:


More on the Blog

How Hotstar scaled with 10.3 million concurrent users – An architectural insight

How Evernote migrated & scaled their cloud with Google Cloud Platform

Designing a video search service with AWS – Cloud architecture 

What database does Facebook use – a deep dive 

How does LinkedIn identify its users online


Well, guys, this is pretty much it. If you enjoyed reading the article, do share it with your folks. You can subscribe to the blog on social media.

I’ll see you in the next article.
Until then…