
Asynchronous Microservices with RabbitMQ and Node.js

Scalability + Fault Tolerance Powered by Messages and Events

The giants of the web (Netflix, Amazon …) have been using Microservices for over 10 years. But recently, these techniques have become more accessible to smaller software projects. And startups can now leverage them to build robust and scalable cloud software. In this article, we’ll explore a simple way to get started with Microservices using RabbitMQ and Node.js.

What is a Microservice?

I like this simple definition by Martin Fowler:

A particular way of designing software applications as suites of independently deployable services. <cite>– Martin Fowler</cite>

The traditional Monolith app handles API requests, authentication, background data processing for multiple tasks, and more. The idea is to split it up into specialised Microservices, each of which handles a smaller set of tasks.

Refactoring a Monolith into Microservices

Microservices communication patterns

Now that we have broken up the monolith into multiple services, we need to set up a way for them to collaborate. The 2 most common patterns used for inter-service communication are:

  • Remote Procedures
  • Asynchronous Messaging

Remote Procedures

Remote procedures essentially means sending a request directly from Service A to Service B and waiting for the reply, in a *synchronous* style. Examples of technologies to achieve this are REST and gRPC.

Asynchronous Messaging

Asynchronous messaging involves using a separate layer or message broker to send a message from Service A to Service B. Service A does not wait for the reply. Service B will send the result (if a result is expected) when it is available, usually through the same messaging system.

Don’t call us, we’ll call you
<cite>— Asynchronous Messaging</cite>

Examples of technologies that we can leverage to achieve this are RabbitMQ and Apache Kafka.

Building a fault tolerant data processing service

To illustrate a simple use case for Microservices, we’ll be building a web application that receives external requests via a REST API, does some expensive processing on the data and saves the results.

Data Processing App Flow

The requirements for this app are:

  • Fault Tolerance: We want the data processing to be retried if it fails, without affecting other parts of the system.
  • Scalability: We want to be able to scale the data processing feature independently of the web API.

The issues with a Monolith solution

One way to solve this problem would be to build a single app that handles all the steps of this process. But some issues with this approach are:

  • What happens when we add new, independent features to our app? How can we scale each of these features efficiently?
  • What if we received a really large, unexpected or malformed request on one of our endpoints? Does it take down the entire app?

The issues with a 2-services synchronous solution

We will be splitting our app into 2 services: WebService and ProcessorService

  • WebService will handle the incoming API requests, pass the requests to ProcessorService, and log the results as received from ProcessorService.
  • ProcessorService will process the data as it comes in from WebService and send back the results.

If we were to use a REST API as a way for WebService and ProcessorService to communicate, we would have to deal with these potential issues:

  • ProcessorService could be offline when WebService tries to reach it. What does WebService do then? Does it save the requests somewhere and retry later? When should it retry? How often?
  • ProcessorService could be at full capacity when a new request arrives.
  • A data processing attempt could take a long time or fail while WebService is waiting for a reply.
  • If ProcessorService has finished processing the data but WebService is now offline, what should it do with the results?
  • If we add more instances of ProcessorService, how do we tell WebService which instance to send the request to?

A 2-services async solution with Node.js and RabbitMQ

A possible solution to these problems is to introduce a third party, a “message broker”, that acts as an intermediary, passing messages from WebService to ProcessorService and from ProcessorService back to WebService. The technologies we’ll use to implement this are:

  • RabbitMQ: The website states it’s “the most widely deployed open source message broker”. It’s easy to install, low maintenance, very fast and has libraries for most popular programming languages.
  • Node.js: Node.js provides an event-driven programming model that is perfect for this asynchronous task.

Tools Setup + Code

I’ve set up a GitHub repo so that you can refer to the code and follow along.

Node.js and RabbitMQ setup

We’ll be using the latest Node.js LTS release (10.15.0 at the time of writing this article) which you can download here.

The easiest way to get started with RabbitMQ is to use CloudAMQP. They offer RabbitMQ as a Service, allowing us to focus on building our app instead of configuring and maintaining RabbitMQ. Setting up RabbitMQ this way requires only a few easy steps:

First, sign up for a free Manifold Account.
Then create a Manifold Project:

Create a Manifold Project

Add a CloudAMQP resource to the Manifold Project:

Add CloudAMQP

Select the CloudAMQP Free Plan:

CloudAMQP Free Plan

Click on “Show Credentials” in the next screen and then click on “Download .env”. Save the file under the filename “.env” in your project path:

CloudAMQP Credentials

RabbitMQ queues configuration

Our RabbitMQ instance is now ready to use. You can run the following Node.js script to set up the queues:

<a href="https://gist.github.com/didil/e8a2c62d934b6eb8f4d724c3fe3d2c0f" class="embedly-card" data-card-width="100%" data-card-controls="0">Embedded content: https://gist.github.com/didil/e8a2c62d934b6eb8f4d724c3fe3d2c0f</a>

What we did in the script above is:

  • Declare an exchange “processing”
  • Declare 2 queues: “processing.requests” will store the requests and “processing.results” will store the results
  • Bind the queues to the exchange

You can read more about the RabbitMQ concepts here, but the general idea is that WebService will send requests to the processing.requests queue, ProcessorService will read them from there and post the results to processing.results, where WebService can access them.
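Under the hood, the setup script boils down to a handful of channel calls. Here is a minimal sketch, assuming an amqplib-style channel object is passed in; the exchange type ("direct") and the use of queue names as binding keys are illustrative assumptions, not taken verbatim from the gist:

```javascript
// Queue topology for the demo: one exchange, two bound queues.
// Assumes an amqplib-style channel (assertExchange / assertQueue / bindQueue).
const EXCHANGE = "processing";
const REQUESTS_QUEUE = "processing.requests";
const RESULTS_QUEUE = "processing.results";

async function setupTopology(channel) {
  // Durable exchange and queues survive a broker restart.
  await channel.assertExchange(EXCHANGE, "direct", { durable: true });
  await channel.assertQueue(REQUESTS_QUEUE, { durable: true });
  await channel.assertQueue(RESULTS_QUEUE, { durable: true });
  // Bind each queue, reusing the queue name as the routing key.
  await channel.bindQueue(REQUESTS_QUEUE, EXCHANGE, REQUESTS_QUEUE);
  await channel.bindQueue(RESULTS_QUEUE, EXCHANGE, RESULTS_QUEUE);
}
```

Passing the channel in keeps the sketch broker-agnostic: the same function works against a real amqplib channel or a test double.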

WebService Code

The code for WebService can be found here. It’s a web API built with Express.js that handles `POST` requests to the endpoint `/api/v1/processData`. It assigns each request a requestId, sends it to RabbitMQ and returns the requestId as the response. The service also listens for the results from ProcessorService and logs them.

<a href="https://gist.github.com/didil/8040e2b6dc4f4daa6e4bdbd1755df187" class="embedly-card" data-card-width="100%" data-card-controls="0">Embedded content: https://gist.github.com/didil/8040e2b6dc4f4daa6e4bdbd1755df187</a>
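The publish side can be sketched as follows. The incrementing integer requestId and the JSON message shape are assumptions for illustration (the logs later in the article do show small integer ids); the channel is again an amqplib-style object:

```javascript
// Publish side of WebService: wrap the payload with a requestId
// and hand it to the broker. Assumes an amqplib-style channel.
let nextRequestId = 0;

function buildRequestMessage(data) {
  nextRequestId += 1;
  const body = { requestId: nextRequestId, data };
  return { requestId: nextRequestId, content: Buffer.from(JSON.stringify(body)) };
}

function publishRequest(channel, data) {
  const { requestId, content } = buildRequestMessage(data);
  // persistent: true asks RabbitMQ to write the message to disk.
  channel.publish("processing", "processing.requests", content, { persistent: true });
  return requestId; // returned to the HTTP client in the response
}
```

Because the publish is fire-and-forget, the HTTP handler can respond immediately with the requestId while processing happens elsewhere.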

ProcessorService Code

The code for ProcessorService can be found here. It listens for request messages on the processing.requests queue that we defined earlier, processes them and sends the results back via the processing.results queue. For the purposes of this demo, we simulate the heavy-processing part by waiting 5 seconds and concatenating ‘-processed’ at the end of the input string.

<a href="https://gist.github.com/didil/b969672482db3b1780898a193f87e483#file-processor-service-js" class="embedly-card" data-card-width="100%" data-card-controls="0">Embedded content: https://gist.github.com/didil/b969672482db3b1780898a193f87e483#file-processor-service-js</a>
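The simulated heavy processing is just a delay plus a string concatenation. A sketch, with the 5-second delay made a parameter so it is easy to test:

```javascript
// Simulated "heavy" processing: wait, then tag the input string.
// The demo uses a 5-second delay; here it is a parameter.
function processData(input, delayMs = 5000) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(`${input}-processed`), delayMs);
  });
}
```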

Exploring the results

We start by firing up the WebService only and sending it a couple of requests from the terminal:


$ cd web-service
$ node web-service.js
Listening on port 3000.

# From a different terminal
$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"data":"my-data"}' \
  http://localhost:3000/api/v1/processData

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"data":"more-data"}' \
  http://localhost:3000/api/v1/processData


If you go to the RabbitMQ management dashboard on CloudAMQP at this point you’ll see something like this:

RabbitMQ Dashboard after Step 1

The requests have been queued and are not yet being processed as we don’t have a ProcessorService running yet.

To see how ProcessorService works and to show that it is scalable out of the box, we’ll now start 2 instances of it at the same time. Make sure to stop the web service for now.


$ cd processor-service
$ node processor-service.js
Received a request message, requestId: 1
Published results for requestId: 1

# In a second terminal just after starting the first instance
$ cd processor-service
$ node processor-service.js
Received a request message, requestId: 2
Published results for requestId: 2


Each instance has processed a message. Back to the RabbitMQ dashboard, we see that the results have been queued.

RabbitMQ Dashboard after Step 2

Finally we start the web service again:


$ cd web-service
$ node web-service.js
Received a result message, requestId: 1 processingResults: my-data-processed
Received a result message, requestId: 2 processingResults: more-data-processed


If you check the dashboard again, both queues should be empty as all the messages have been processed.

Thanks to the decoupled architecture, we can scale this app by adding more WebService or ProcessorService instances independently. Failed messages will also be requeued to RabbitMQ automatically, making our app a bit more robust.
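That automatic requeueing relies on RabbitMQ acknowledgements: a consumer acks a message only after processing succeeds, and nacks it with the requeue flag set on failure. A minimal sketch, assuming an amqplib-style channel (amqplib's `nack(msg, allUpTo, requeue)` signature):

```javascript
// Consumer wrapper: ack on success, nack + requeue on failure.
// Assumes an amqplib-style channel (ack / nack with a requeue flag).
async function handleMessage(channel, msg, processFn) {
  try {
    await processFn(JSON.parse(msg.content.toString()));
    channel.ack(msg); // success: RabbitMQ may now delete the message
  } catch (err) {
    // nack(msg, allUpTo = false, requeue = true): put it back on the queue
    channel.nack(msg, false, true);
  }
}
```

Note that blind requeueing can loop a poison message forever; in production a retry limit or dead-letter queue is usually added on top.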


In this article we’ve looked at Node.js + RabbitMQ through CloudAMQP, but we can of course replace RabbitMQ with other solutions such as:

  • Apache Kafka: An alternative that is a bit different in its design but very popular as well. One interesting aspect of Kafka is that it can store and replay all the messages received, in order. RabbitMQ deletes messages once we acknowledge receipt.
  • Google Cloud Pub/Sub and AWS SQS: Alternative proprietary solutions from Google and AWS. They can be handy if you prefer using one of these cloud providers.

Finally, here are some extra steps needed to take our solution to production:

  • Using a High Availability setup. A single RabbitMQ instance design would quickly become a single point of failure for our system.
  • Adding unit and integration tests.
  • Centralised logging and monitoring.
  • Building Docker Containers for each Microservice to streamline deployment.
