

Technical Post: Big Performance Improvement in Data Serialization

Usually, this blog looks at the LEAPWORK Automation Platform from a user perspective. We talk about how to automate different kinds of work processes, and every now and then we share a few tips and tricks. Recently, however, we've made some changes to the core engine, so we thought it would be interesting to look into the engine room for a change. Buckle up, because we're about to get a little technical.

For a long time, LEAPWORK has been using the JSON.Net code library to serialize and deserialize data we send between Studio and the Controller. We’ve also used it to serialize complex structures we send to the Agent during both preview and scheduled runs.
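To make that concrete, here is a minimal sketch of the kind of round trip involved, using JSON.Net (Newtonsoft.Json). The Keyframe type and its fields are simplified placeholders for illustration; they are not LEAPWORK's actual data model.

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// Simplified stand-in for a keyframe; LEAPWORK's real structures are larger and more nested.
public class Keyframe
{
    public Guid Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string BlockTitle { get; set; }
    public string Status { get; set; }
}

public static class JsonNetExample
{
    public static void Main()
    {
        var keyframes = new List<Keyframe>
        {
            new Keyframe
            {
                Id = Guid.NewGuid(),
                Timestamp = DateTime.UtcNow,
                BlockTitle = "Click Web Element",
                Status = "Passed"
            }
        };

        // Serialize on one side of the wire...
        string json = JsonConvert.SerializeObject(keyframes);

        // ...and deserialize on the other. With hundreds of thousands of keyframes,
        // this is where CPU and memory pressure builds up.
        var roundTripped = JsonConvert.DeserializeObject<List<Keyframe>>(json);

        Console.WriteLine($"Round-tripped {roundTripped.Count} keyframe(s) as {json.Length} characters of JSON.");
    }
}
```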

JSON.Net is a very powerful and extremely popular code library, but when working with large and complex data structures at scale, it falls short: the result is high CPU load and high memory consumption.

At LEAPWORK we strive for continuous improvement, so we decided to fix both.

After researching different solutions, we ended up loving a component of the open source Azos framework for building scaled-up business applications. It handles JSON and BSON serialization, both of which are needed in LEAPWORK.
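We won't reproduce LEAPWORK's internals here, but a common way to adopt a different serializer without touching the rest of an engine is to hide the choice behind a small interface. The sketch below is illustrative only: the IPayloadSerializer name and the JSON.Net-backed implementation are assumptions for the example, not LEAPWORK's actual code, and an Azos-backed counterpart would simply be another implementation of the same interface.

```csharp
using Newtonsoft.Json;

// Illustrative seam only; not LEAPWORK's actual code.
public interface IPayloadSerializer
{
    string Serialize(object payload);
    T Deserialize<T>(string payload);
}

// One implementation backed by JSON.Net. An Azos-backed implementation
// would be a second class behind the same interface, so the engine
// can switch serializers without changing the calling code.
public class JsonNetPayloadSerializer : IPayloadSerializer
{
    public string Serialize(object payload) => JsonConvert.SerializeObject(payload);

    public T Deserialize<T>(string payload) => JsonConvert.DeserializeObject<T>(payload);
}
```

The value of a seam like this is that serializers can be benchmarked and replaced in isolation, which is essentially what the comparison further down does.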

We had to make some minor improvements and adjustments to the component, which we contributed back to the Azos project, and the results were very good.

We saw a significant improvement in LEAPWORK's performance when working with large volumes of keyframes, that is, the individual steps that are recorded when flows run. Based on this success, we are happy to announce that the new component has already been included in our upcoming service release.

Actions speak louder than words, so let us show you some of the results you will get to experience with the upcoming release. The following graph shows how much memory LEAPWORK consumed while gathering and communicating approximately 300,000 keyframes between an Agent and the Controller. Here you can see the difference between the current and the upcoming release:

Chart: Memory Usage for 300,000 Keyframes

This change has reduced not just memory consumption but also CPU load. You can see a significant performance boost in the following graph, where we show the difference in time spent serializing and deserializing large volumes of keyframes:

Chart: Elapsed Time for 60,000 Keyframes
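If you want to get a feel for measurements like these on your own data, here is a minimal sketch of the approach: build a large batch of keyframe-like objects (reusing the placeholder Keyframe type from the first sketch), then record elapsed time and managed-memory growth around a serialization round trip. The charts above come from LEAPWORK's own measurements; this sketch only illustrates how such a comparison can be set up.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using Newtonsoft.Json;

public static class SerializationBenchmark
{
    public static void Main()
    {
        // Build a large batch of keyframe-like objects
        // (placeholder Keyframe type from the first sketch).
        var keyframes = new List<Keyframe>(60_000);
        for (int i = 0; i < 60_000; i++)
        {
            keyframes.Add(new Keyframe
            {
                Id = Guid.NewGuid(),
                Timestamp = DateTime.UtcNow,
                BlockTitle = "Step " + i,
                Status = "Passed"
            });
        }

        long memoryBefore = GC.GetTotalMemory(forceFullCollection: true);
        var stopwatch = Stopwatch.StartNew();

        // Round-trip through the serializer under test (JSON.Net here;
        // repeat with another serializer to compare).
        string json = JsonConvert.SerializeObject(keyframes);
        var roundTripped = JsonConvert.DeserializeObject<List<Keyframe>>(json);

        stopwatch.Stop();
        long memoryAfter = GC.GetTotalMemory(forceFullCollection: false);

        Console.WriteLine($"Round-tripped {roundTripped.Count} keyframes in {stopwatch.ElapsedMilliseconds} ms");
        Console.WriteLine($"Approx. managed memory growth: {(memoryAfter - memoryBefore) / (1024 * 1024)} MB");
    }
}
```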

Together, this means much faster execution of large automation flows and an overall better experience working with the LEAPWORK Automation Platform.

We always strive for the best user experience, so we will continue to improve how data travels through LEAPWORK. This includes serialization and deserialization, as these account for a large share of the CPU load and memory consumption in the Controller.

 

If you would like to know more about the LEAPWORK Automation Platform, or how this upcoming release can help you improve your automation efforts, book a demo using the link below.

Book Demo

Claus Topholt
CTO and co-founder of LEAPWORK.
