Azure Chronicles - Mule

Overview

Continuing from the previous post, I now move to the other end of the spectrum: from a Raspberry Pi to the cloud. Microsoft Azure has been making impressive strides in the cloud space, especially among enterprises.

This post is an experiment in running the same Mule runtime on an Azure cloud server, created in the East US Azure region.

I created an A series server (Extra Small) in Azure using ARM templates. An A series Extra Small server has 1 CPU core and 0.75 GB of RAM. On this server, the Mule runtime runs on a JVM with a 256 MB heap.

I then ran some scripts to perform the following:
  • Install Oracle JDK 8
  • Configure the Mule runtime (version 3.9)
  • Deploy a simple REST-based API in Mule

Objective

The purpose of this experiment is to test how much load a very simple REST API in Mule can handle while running on a very low-specification server in Azure. In short: how does the Mule runtime hold up under load on minimal hardware?

A simple REST API is created using Anypoint Studio. The API's response is dynamic, based on the parameters passed to it, which ensures the response does not get cached at any layer: server, network, or client.
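The actual flow is built in Anypoint Studio, but the cache-busting idea is easy to illustrate. Below is a hypothetical stand-in handler (the field names are illustrative, not those of the sample RAML): because the body echoes the request parameters, no two distinct requests can share a cached response.

```python
import json

def handle_request(params: dict) -> str:
    """Toy stand-in for the Mule flow: the JSON body is derived from the
    request parameters, so each distinct request gets a distinct response."""
    body = {
        "requestedId": params.get("id"),
        "message": f"Hello, {params.get('name', 'world')}",
    }
    return json.dumps(body)

# Two different parameter sets produce two different response bodies:
print(handle_request({"id": "1", "name": "alice"}))
print(handle_request({"id": "2", "name": "bob"}))
```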


Once the REST API is deployed in the Mule runtime in Azure, a sample request is tested from a browser; the JSON response is shown below. This sample API is based on the sample RAML available on the MuleSoft documentation site.



As the REST API is publicly exposed, the load tests are set up in Blazemeter.
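Blazemeter drives real HTTP traffic from outside the network, but the shape of such a test (N concurrent virtual users hammering one endpoint while latencies are recorded) can be sketched in a few lines. The sketch below uses an in-process stub rather than a live URL; a real run would issue HTTP GETs against the deployed endpoint.

```python
import random
import threading
import time

def call_endpoint() -> float:
    """Stub for one HTTP GET; returns the observed latency in ms.
    A real test would use urllib.request against the API's public URL."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated service time
    return (time.perf_counter() - start) * 1000

def virtual_user(n_requests: int, out: list) -> None:
    """One virtual user issuing a fixed number of sequential requests."""
    for _ in range(n_requests):
        out.append(call_endpoint())

latencies: list = []
# 20 concurrent virtual users, matching the Blazemeter scenario in this post
users = [threading.Thread(target=virtual_user, args=(10, latencies))
         for _ in range(20)]
for u in users:
    u.start()
for u in users:
    u.join()
print(f"{len(latencies)} requests completed")
```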



Before starting the load test from Blazemeter, the resource statistics on the server are normal (pressing F5 in the htop utility shows the processes in a tree structure).


The Blazemeter tests are now kicked off, and the results are impressive. The 90th percentile response time is under 200 ms, which is pretty good for a "read" operation on a REST API. The average throughput also looks good, at 170 hits/second. The test simulates 20 concurrent virtual users hitting the REST API endpoint. For an A0 server, this performance is not bad at all.
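For readers unfamiliar with the two headline metrics: the 90th percentile is the latency below which 90% of requests complete (the nearest-rank convention is a common way load tools compute it), and throughput is simply completed requests over elapsed wall-clock time. A sketch with made-up sample numbers, not the actual Blazemeter measurements:

```python
import math

def p90(latencies_ms):
    """Nearest-rank 90th percentile, the figure load-test reports usually quote."""
    ordered = sorted(latencies_ms)
    return ordered[math.ceil(0.90 * len(ordered)) - 1]

# Illustrative samples only -- not real data from this test:
samples = [120, 130, 140, 150, 155, 160, 170, 180, 190, 950]
print("p90:", p90(samples), "ms")  # note: one slow outlier barely moves the p90

# Average throughput = completed requests / elapsed seconds:
completed, elapsed_s = 1700, 10.0
print("throughput:", completed / elapsed_s, "hits/s")
```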




During the above test, the CPU load spiked to 100%. The server statistics can be viewed from the Azure Portal.


Next, I set up the New Relic agent on the server, restarted the Mule runtime, and re-ran the same test. The results are now starkly different. The 90th percentile response time has gone up five-fold, to above 1 second. The average throughput has dropped by more than 50%, to 71 hits/second.
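As a quick sanity check on those figures (using the rounded numbers quoted above):

```python
# Rounded figures from the two runs in this post:
p90_before_ms, p90_after_ms = 200, 1000   # "under 200 ms" vs "above 1 second"
tps_before, tps_after = 170, 71           # average throughput, hits/second

slowdown = p90_after_ms / p90_before_ms                  # ~5x slower
drop_pct = (tps_before - tps_after) / tps_before * 100   # ~58% fewer hits/s
print(f"p90 slowdown: {slowdown:.0f}x, throughput drop: {drop_pct:.0f}%")
```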


While we can see the application statistics in New Relic, there is also a warning: a circuit breaker has been triggered in the New Relic agent.


The circuit breaker details show that the JVM heap is under pressure.
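The idea behind the agent's circuit breaker is that when the JVM shows sustained memory pressure, the agent sheds its own instrumentation work rather than making the garbage-collection churn worse. The exact signals and thresholds are internal to New Relic; the sketch below is a toy version of the pattern with made-up thresholds.

```python
class CircuitBreaker:
    """Toy sketch of the circuit-breaker pattern: trip when heap usage and
    GC overhead are both high, so optional work (here, instrumentation)
    can be suspended. Thresholds are illustrative, not New Relic's."""

    def __init__(self, heap_limit: float = 0.95, gc_limit: float = 0.10):
        self.heap_limit = heap_limit   # fraction of heap in use
        self.gc_limit = gc_limit       # fraction of CPU spent in GC
        self.tripped = False

    def record(self, heap_used_frac: float, gc_cpu_frac: float) -> None:
        """Feed in a sample of JVM health; trip if both limits are exceeded."""
        if heap_used_frac > self.heap_limit and gc_cpu_frac > self.gc_limit:
            self.tripped = True

    def allow(self) -> bool:
        """Whether instrumentation should keep running."""
        return not self.tripped

cb = CircuitBreaker()
cb.record(heap_used_frac=0.97, gc_cpu_frac=0.25)  # a JVM under heap pressure
print("instrumentation enabled:", cb.allow())
```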

Summary

  • When running APIs or websites in a production environment, there is usually a server cluster/farm that handles all requests. While a Mule runtime can technically run on an A0 server, one should probably use at least a Dv3 or an Ev3 VM type in Azure. For details on more Linux VM types, please refer here.
  • The New Relic agent implements the "circuit breaker" pattern, which is quite thoughtful and helpful.
  • When sizing the memory for a JVM-based application, keep in mind the additional CPU/memory requirements of an agent such as New Relic.
  • If the APIs are publicly exposed, evaluate cloud-based load-testing tools such as Blazemeter. Also, ensure that the cloud-based tool and the server are in different clouds, to simulate traffic across the internet. For instance, here the server is in Azure while the Blazemeter test runner is in AWS.
  • Potentially, Blazemeter could be used not just for load testing of APIs but also as a synthetic monitoring tool.
  • Mule Community Edition is a great way to explore the tool. From an enterprise perspective, multiple deployment options are currently available: On-Premise, Anypoint CloudHub (SaaS), and Anypoint Runtime Fabric (PaaS).
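The memory-sizing point can be made concrete with a back-of-the-envelope budget for the A0 box used here. All numbers other than the total RAM and the 256 MB heap are rough assumptions, but they show how quickly 0.75 GB disappears once off-heap JVM memory and an APM agent are accounted for.

```python
# Back-of-the-envelope memory budget for an A0 server (rough assumptions):
total_ram_mb      = 768   # A0 "Extra Small" has 0.75 GB of RAM
os_and_misc_mb    = 200   # assumed: OS, sshd, background daemons
jvm_heap_mb       = 256   # the Mule runtime's heap size in this experiment
jvm_offheap_mb    = 150   # assumed: metaspace, thread stacks, code cache
agent_overhead_mb = 100   # assumed: extra footprint of an APM agent

headroom = total_ram_mb - (os_and_misc_mb + jvm_heap_mb
                           + jvm_offheap_mb + agent_overhead_mb)
print(f"headroom: {headroom} MB")  # very little margin before swapping or GC pressure
```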
