Why NextRoll Chose Rust as Its Main Programming Language for Its New Bidders
At NextRoll, we believe in using the right tool for the job. We don't believe any single programming language or technology fits every problem perfectly. Instead, our philosophy centers on finding the most suitable language for each system we build. In some cases, that means building solutions on multiple programming languages, such as Python, Erlang, Elixir, Go, Scala, and, most recently, Rust.
Following Google's initiative to create web standards for advertising without the use of third-party cookies (the Privacy Sandbox), we decided to create a completely new bidding system that would integrate with the initiative's technologies: the Protected Audience API, the Topics API, the Attribution Reporting API, and the Private Aggregation API.
Rust has gained attention and adoption among NextRoll's engineers and product teams for a few years now, and for good reason. It offers high performance and memory safety (thanks to its strict type system and borrow checker), and it's even fun to use: all important factors when selecting a language for our third-party cookie deprecation projects. For our integration with the Privacy Sandbox technologies, we needed to bring 10+ years of bidding experience and knowledge into a completely new bidding system. Not only is the system new, but it was built in a relatively short period of time, in a language that was relatively new to most of our engineers. To top it all off, those engineers span multiple teams and specialties, each with expertise in different programming languages.
While we can't say Rust is the fastest language to prototype in, it enabled almost twenty engineers to quickly assemble a new system without running into strange bugs, errors, or other roadblocks. It was incredible that, with everything moving so quickly, things actually worked as we expected when we flipped the switch. It just worked!
Despite the language's steep learning curve (there is definitely some initial cognitive load in understanding lifetimes, the type system, and the borrow checker), Rust enabled even newcomers to quickly make meaningful contributions. Over time, as everyone has gained experience with Rust, we have become more efficient at producing and shipping robust contributions.
Although implementing an idea takes some upfront effort, once that is done and the system compiles, its reliability and efficiency afford the team greater focus on the system's functionality, design, data flow, and integration with other systems. A few benefits we've experienced include:
Performance: Although Rust programs are fast out of the box, we were surprised by the level of traffic our web server could handle without requiring especially powerful hardware. When we did need to make a few space or memory optimizations, Rust let us do so cleanly. Any issues with the speed of data flow came down to adjusting our approach to take advantage of what Rust already offers.
Memory Safety and Correctness: Rust's safety guarantees give us confidence that we are avoiding race conditions, even though this project runs primarily in asynchronous contexts. We can hot-reload data that is in use by multiple threads at runtime without fearing data races or corruption (see the hot-reload sketch after this list). And since the code is much safer, more readable, and more maintainable, we don't worry about memory or data-flow integrity issues, which means no long debugging sessions chasing pointer errors or memory corruption.
High- to Low-Level Versatility: From high-level components, like our web server, log processor, and data loader, to low-level components, like memory-efficient storage, optimized lookup data structures, and optimized serialization formats, Rust fits the system we've built well. We have been able to build these components quickly and without memory issues, and the language encourages reusing code across the system for shared logic and functionality.
Standardized Ecosystem: Rust's compiler is extremely helpful when refactoring. It spots errors and points to exactly which code needs to change where, giving us peace of mind that when a type changes, it's highly unlikely any other part of the code will keep using the wrong type. The ecosystem offers quality, well-documented crates (https://crates.io) for many different needs, saving us from building things ourselves. Dependency management has also been smooth: upgrading crates, or even moving to newer versions of Rust, hasn't caused major conflicts or forced major refactors of our code.
Data De/serialization: Rust's ecosystem has settled on a streamlined, unified approach to de/serializing data. For most formats, we define a struct describing how we want the data represented, and that's it: no custom code to optimize de/serialization and no separate, sprawling functions for each specific format (see the de/serialization sketch after this list). This lets us quickly develop components that ingest and emit data, while keeping a single representation of the data inside the system.
Interoperability With Other Languages: Rust interoperates nicely with other languages, such as C and Python. It's also a powerful contender for WASM generation, which has been useful for our Protected Audience API integration, where we needed to expose some bidding logic in JavaScript. We have minimized our JavaScript surface by writing Rust that is compiled to WASM and served to the browser (see the WASM sketch after this list). That code is not only safer and more performant, it can also be shared between the browser and our backend server, de/serialization included.
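To make the hot-reloading point concrete, here is a minimal sketch using only the standard library; the ModelData struct and load_model function are hypothetical stand-ins, and this is one common pattern rather than a description of our exact implementation (crates such as arc-swap provide a more specialized take on the same idea). Readers grab a cheap snapshot of the current data while a background thread atomically swaps in fresh versions:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Hypothetical bidding-model data shared across worker threads.
struct ModelData {
    version: u64,
    weights: HashMap<String, f64>,
}

// Stand-in for reading fresh data from disk or a remote service.
fn load_model(version: u64) -> ModelData {
    ModelData { version, weights: HashMap::new() }
}

fn main() {
    // Readers clone the inner Arc, so the lock is held only for an instant
    // and old snapshots stay alive until their last reader drops them.
    let shared: Arc<RwLock<Arc<ModelData>>> =
        Arc::new(RwLock::new(Arc::new(load_model(0))));

    // Background reloader: atomically swaps in a fresh snapshot at runtime.
    let writer = Arc::clone(&shared);
    thread::spawn(move || {
        for version in 1u64.. {
            thread::sleep(Duration::from_secs(60));
            *writer.write().unwrap() = Arc::new(load_model(version));
        }
    });

    // A worker takes a cheap snapshot per request; the compiler guarantees
    // it can never observe a half-written update.
    let snapshot = Arc::clone(&shared.read().unwrap());
    println!(
        "bidding with model version {} ({} weights)",
        snapshot.version,
        snapshot.weights.len()
    );
}
```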
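The unified de/serialization described above is what the serde ecosystem provides. The sketch below uses a hypothetical BidRequest struct (the field names are illustrative, not our actual schema): a single derive covers both directions, and supporting another format means swapping out the serde_json calls rather than rewriting any struct:

```rust
use serde::{Deserialize, Serialize};

// One struct definition drives every format; these field names are
// illustrative, not our actual bid request schema.
#[derive(Serialize, Deserialize, Debug)]
struct BidRequest {
    auction_id: String,
    floor_price_micros: u64,
    interest_groups: Vec<String>,
}

fn main() -> Result<(), serde_json::Error> {
    let raw = r#"{
        "auction_id": "abc-123",
        "floor_price_micros": 1500,
        "interest_groups": ["sports", "travel"]
    }"#;

    // Deserialize: no hand-written parsing code.
    let request: BidRequest = serde_json::from_str(raw)?;
    println!("parsed: {:?}", request);

    // Serializing back out is a one-liner too.
    let encoded = serde_json::to_string(&request)?;
    println!("encoded: {}", encoded);
    Ok(())
}
```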
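Finally, for the WASM point, here is a sketch of what exposing bidding logic to the browser can look like with the wasm-bindgen crate; the tooling choice and the compute_bid function are assumptions for illustration, not our production code:

```rust
use wasm_bindgen::prelude::*;

// Toy scoring logic shared between our backend (compiled natively) and
// the browser (compiled to WASM and called from a bidding script).
#[wasm_bindgen]
pub fn compute_bid(base_cpm: f64, signal_strength: f64) -> f64 {
    // Deliberately simple; real bidding logic is far more involved.
    base_cpm * signal_strength.clamp(0.0, 1.0)
}
```

Built with a tool like wasm-pack, the generated module can be imported and called from JavaScript like any other function, while the backend links the very same Rust code natively.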
Of course, not everything has been perfect with Rust. A few issues we’ve encountered include:
Long Compile Times: There are options for caching compiled dependencies, but no definitive solution when deployments build from scratch.
Extra Cognitive Load From the Type System, Lifetimes, and Borrow Checker: Especially in asynchronous code, their strictness can cause moments of frustration. Code can accumulate complex type and trait requirements that are sensitive to type changes, sometimes forcing refactors or entirely different ways to implement an idea (see the sketch after this list).
Refactors Larger Than Expected: Some refactors turn out bigger than they first appear, especially when changing types, since a change can break type and trait requirements downstream. Fortunately, the compiler serves as a good guide, making sure we don't forget to refactor everything that's affected.
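To illustrate the kind of async friction mentioned above, here is a minimal sketch (assuming the tokio runtime, which we use here purely for illustration) of one classic case: holding a std::sync::MutexGuard across an .await makes a spawned task non-Send, so the program won't compile until the code is restructured, for example by switching to an async-aware mutex:

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(Default)]
struct Counters {
    bids: u64,
}

async fn record_bid(counters: Arc<Mutex<Counters>>) {
    // With std::sync::Mutex, holding the guard across an `.await` makes
    // this future non-Send, and tokio::spawn rejects it at compile time.
    // tokio's async-aware Mutex yields a guard that may be held across
    // await points, so this version compiles.
    let mut guard = counters.lock().await;
    guard.bids += 1;
    // ...further `.await` points could safely occur here...
}

#[tokio::main]
async fn main() {
    let counters = Arc::new(Mutex::new(Counters::default()));
    let handle = tokio::spawn(record_bid(Arc::clone(&counters)));
    handle.await.unwrap();
    println!("bids recorded: {}", counters.lock().await.bids);
}
```

Errors like these are caught before the code ever runs, but working out why a future isn't Send, and how to restructure it, is where much of the initial frustration comes from.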
The new bidder functionality we've built with Rust isn't just some rusty magic (pun intended). All of the talented engineers on this project have been able to quickly learn the language and apply it with great results. A good system architecture that lets us iterate quickly, combined with strong knowledge of the Privacy Sandbox technologies, lets us see how to take advantage of what Rust has to offer and use the right tools for each solution we need to build.
We are confident that Rust will continue to serve us well as we build out our new bidders, and whatever comes next, with the robustness and quality that we demand.
Ricardo Murillo is a Staff Software Engineer at NextRoll