
The LEAP

Automation insights and productivity tips from LEAPWORK.


Technical Post: Big Performance Improvement in Data Serialization

In this blog, we usually look at the LEAPWORK Automation Platform from a user perspective: we talk about how to automate different kinds of work processes, and we share a few tips and tricks every now and then. Recently, however, we made some changes to the core engine, so we thought it would be interesting to look into the engine room for a change. Buckle up, because we’re about to get a little technical.

For a long time, LEAPWORK has been using the JSON.Net code library to serialize and deserialize data we send between Studio and the Controller. We’ve also used it to serialize complex structures we send to the Agent during both preview and scheduled runs.

JSON.Net is a very powerful and extremely popular code library, but when working with large and complex data structures at scale, it falls short: the result is high CPU pressure and large memory consumption.
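To illustrate the kind of workload involved, here is a minimal Python sketch. The keyframe record below is a hypothetical stand-in (LEAPWORK’s actual data model is not public), but it shows why serializing a large batch is costly: a text serializer must materialize the entire payload in memory on top of the objects themselves.

```python
import json

# Hypothetical nested "keyframe" record; LEAPWORK's real data model
# is not public, so this structure is an illustrative assumption.
keyframe = {
    "id": 1,
    "block": "Click Web Element",
    "result": {"status": "passed", "retries": 0},
    "log": [f"line {i}" for i in range(20)],
}
batch = [dict(keyframe, id=i) for i in range(50_000)]

# A text serializer must build the whole payload as one string,
# so memory use grows with batch size on top of the objects themselves.
payload = json.dumps(batch)
print(f"payload is {len(payload) / 1e6:.1f} MB of JSON text")
```

At the scale of hundreds of thousands of keyframes, both the CPU time to encode and the transient string allocations add up quickly, which is exactly the pressure described above.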

At LEAPWORK we strive for continuous improvement, so we decided to fix both.

After researching different solutions, we settled on a component of the open-source Azos framework for building scaled-up business applications. It handles JSON and BSON serialization, both of which are needed in LEAPWORK.

We had to make some minor improvements and adjustments to the component, which we contributed back to the Azos project, and the results were very good.

We saw a significant impact on LEAPWORK’s performance when working with large volumes of keyframes, that is, the individual steps that occur when flows run. Thanks to this success, we are happy to announce that the new component is already included in our upcoming service release.

Actions speak louder than words, so let us show you some of the results you will experience with the upcoming release. The following graph shows how much memory LEAPWORK consumed while gathering and communicating approximately 300,000 keyframes between an Agent and the Controller. Here you can see the difference between the current and the upcoming release:

Chart: Memory Usage for 300,000 Keyframes
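If you want to take this kind of memory measurement in your own code, Python’s standard `tracemalloc` module offers a rough equivalent. This is a generic sketch with a hypothetical flat keyframe record, not LEAPWORK’s actual schema or measurement method:

```python
import json
import tracemalloc

# Hypothetical flat keyframe records; the real LEAPWORK schema is an assumption here.
keyframes = [{"id": i, "status": "passed"} for i in range(100_000)]

tracemalloc.start()
payload = json.dumps(keyframes)      # serialize the whole batch in memory
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"payload: {len(payload) / 1e6:.1f} MB, "
      f"peak allocation while serializing: {peak / 1e6:.1f} MB")
```

The peak figure captures the transient allocations made while encoding, which is the quantity that balloons when a serializer handles very large batches.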

This change has reduced not just memory consumption but also CPU pressure. You can see a significant performance boost in the following graph, where we show the difference in time spent serializing and deserializing large volumes of keyframes:

Chart: Elapsed Time for 60,000 Keyframes
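The chart above compares time spent on serialization and deserialization. A generic way to take those same two measurements in Python (again with a hypothetical keyframe record, not LEAPWORK’s real data) is to time each phase separately:

```python
import json
import time

# Hypothetical keyframe records; the field names are illustrative assumptions.
keyframes = [{"id": i, "name": f"kf-{i}", "ok": True} for i in range(60_000)]

t0 = time.perf_counter()
payload = json.dumps(keyframes)              # serialize
t1 = time.perf_counter()
restored = json.loads(payload)               # deserialize
t2 = time.perf_counter()

assert restored == keyframes                 # round-trip is lossless
print(f"serialize:   {t1 - t0:.3f} s")
print(f"deserialize: {t2 - t1:.3f} s")
```

Timing the two phases separately matters because serialization and deserialization often have very different cost profiles, and an optimization may help one far more than the other.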

Together, these improvements mean much faster execution of large automation flows and an overall better experience working with the LEAPWORK Automation Platform.

We always strive for the best user experience, and for that reason we will continue to improve how data travels through LEAPWORK. This includes serialization and deserialization, as these account for a large share of the CPU pressure and memory consumption in the Controller.

 

If you would like to know more about the LEAPWORK Automation Platform, or how this upcoming release can help improve your automation efforts, book a demo using the link below.

Book Demo

Claus Topholt
CTO and co-founder of LEAPWORK.
