Privacy, Progress, and the Open Web: Where We’ve Been and Where We’re Going


We at NextRoll have been intensely privacy-focused for some time now: developing ways to protect users’ data mathematically and technologically, helping advertisers spend their marketing budgets effectively, and supporting publishers to keep the Web open. A lot of progress has been made, particularly in the last year. January is classically a time to look both forward and back, so this seemed an opportune moment to reflect on the past and discuss what we’re excited about for the future.

Where have we been?

Famously, Google announced its Privacy Sandbox initiative in 2019, with a larger push and initial technical specifications published in 2020. I was excited to get involved, not only because I have personally been pro-privacy for a very long time, but also because it seemed like a great chance to learn some new math I had only dabbled in before. The bulk of these discussions happened at the World Wide Web Consortium (W3C).

Google deserves a ton of credit for kick-starting this process. Yes, Google is an advertising company, but they correctly recognized that simply deprecating third-party cookies in Chrome, the most popular web browser in the world, would be a disastrous upheaval for the open web. Instead, they proffered a basic set of solutions meant to provide a meaningful replacement for that functionality.

This very quickly turned into a true industry-wide effort, across DSPs, SSPs, browser vendors, and more. Soon, the W3C was flush with other specifications that either enhanced Google’s proposals or worked fundamentally differently. (I’m very proud to have been the principal author on one of these proposals.) Google incorporated much of this feedback into new specifications. It took some time to smooth these specs out, but by 2023, there were APIs ready enough to test.

And so an ambitious goal was set for 2024: test the APIs, in the wild, with real traffic, and real money, across multiple players in the adtech ecosystem.

This test happened and, in my opinion, was remarkably successful. I’ve said as much in some interviews, but it bears repeating here: yes, the Privacy Sandbox APIs left some things to be desired, and yes, the performance characteristics were not great. But these are new technologies, and for a first-run, real-world test with so many different organizations needing to coordinate, the fact that it worked and that we were able to collect meaningful data as an industry is a true achievement. Software almost never runs so smoothly on an initial launch even when it’s all under one company’s purview. More work is needed, but such is the way of the world.

Many organizations, including NextRoll, supplied their testing results to the United Kingdom’s Competition and Markets Authority (CMA). The aggregate results have yet to be published, but NextRoll and others shared some of their individual results and suggestions for future development.

Not long after this, Google announced that it would no longer deprecate third-party cookies outright, but would instead make this a choice for users. This created a flurry of speculation that the Privacy Sandbox was no longer relevant. We had a different position: the Privacy Sandbox is here to stay, most users will take the private option, and that makes it critical.

And so it has proven. Development on the Privacy Sandbox APIs continues unabated, NextRoll continues to test and build against these APIs, and more problem-solving and coordination happens every day. A private web is a better web. The future web is a private web.

Where are we going?

Let’s get the obvious caveat out of the way: I don’t know the future. But I’m excited for 2025 because I see it brimming with potential to push privacy forward in meaningful ways.

First, we have an excellent partnership with Audigent, which we announced earlier this month. This partnership was really born out of our collective desire to solve for privacy. By working together, Audigent and NextRoll can activate audiences in a fully private way using Chrome’s Protected Audience API (PAAPI). Because PAAPI is fundamentally about first-party data activation, we were able to take Audigent’s best-in-class audiences and show ads to them across tens of thousands of websites. This really demonstrates the power of these APIs when adtech players work together to find solutions.

Second, I strongly suspect that more adtech players will want to test the Privacy Sandbox at a deeper level this year. On one hand, yes, only 1% of Chrome browsers currently lack third-party cookies, and you could frame the question as how to reclaim that addressability. On the other hand, the Privacy Sandbox APIs are available on all Chrome browsers (modulo users who opt out), which means it’s feasible to test at much larger scale when the will is there. Last year, the tests were about implementation and beginning to figure out how to make the APIs work best. This year, more people are going to come on board because the early adopters have already made strides.

As an addendum to the previous prediction, I think 2025 is still going to be dominated by testing across the adtech industry. If you haven’t read any of my content before: I urge everyone to get involved as early as possible. The specifications were already fairly complex in 2020; as we’ve solved more problems and added new features, they have only grown more complex, and the knowledge deficit has grown with them. It will continue to grow, as there’s still more to do on the specs. We still need many more players to help us advance the cause of privacy; for these reasons, a sea change in how the broader adtech industry operates in 2025 seems unlikely.

Third, I would expect some real guidance from the CMA on what the user-choice solution is going to look like in Chrome. It’s been half a year since the user-choice announcement, so it would be surprising to hear nothing this year. I think this is a pretty safe prediction, so to make it a bit spicier, I’ll say the CMA is going to land on the positive side of Google’s plan that “elevates user choice.” While the CMA of course cares about competition, giving the option to the user is compatible with the standard philosophy we see in privacy legislation: let users dictate what their data may be used for.

I also think we’re going to see continued adoption of other privacy-centric technologies beyond the scope of the Privacy Sandbox. Governments and users have been demanding more privacy, and forward-thinking businesses will want to be ahead of the curve. This will include things like clean rooms, differentially private machine learning, and anonymization pipelines. While these technologies can take some time to get up and running, once they’re humming along they can actually save a lot of compliance headaches because the data are blind. (As an aside, the machine learning and AI that powers NextRoll’s BidIQ? Years and years ago, we stripped out all user identifiers from the data because they just weren’t necessary to do the job.)
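To give a flavor of what “blind” data can mean in practice, here is a minimal sketch of differential privacy using the classic Laplace mechanism: an analyst sees only a noisy count, never individual records. This is a toy illustration of the general technique, not NextRoll’s actual pipeline, and the function names are my own.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list, epsilon: float, seed: int = 0) -> float:
    """Return the count of records with epsilon-differential privacy.

    Adding or removing one record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    """
    rng = random.Random(seed)  # fixed seed here only to make the demo reproducible
    return len(records) + laplace_noise(1.0 / epsilon, rng)

# The analyst learns roughly how many users there are, but the noise
# mathematically masks any single individual's presence or absence.
noisy = private_count(["user"] * 10_000, epsilon=0.1)
```

The key dial is epsilon: a smaller epsilon adds more noise, giving stronger privacy at the cost of accuracy, which is the same trade-off the aggregate reporting pieces of the Privacy Sandbox expose.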

Finally, I’m expecting a shift toward server-side solutions in trusted execution environments. To briefly unpack the jargon: the most developed portions of PAAPI require many computations to happen on users’ browsers. Shifting those workloads to servers running in trusted execution environments (hardware-isolated enclaves that can prove to outside parties that they run only approved code) allows for a number of nice properties, including larger machine learning and AI models. These specifications are less developed and less tested, but I think the benefits will begin to draw people toward them.

What’s the big picture?

I made a very broad prediction through an analogy at a roundtable discussion a few months ago. It garnered enough curious attention that I figured it might be worth documenting here; I’d be curious to hear other people’s takes. I’ll begin with the question I asked myself:

What is the single most valuable part of the web?

I don’t think it’s advertising. I don’t think it’s publishers. I don’t even think it’s human interconnectedness. My personal answer to that question is public-key cryptography.

This requires some explanation. Public-key cryptography is a collection of mathematical tools that enables trust. Most people don’t understand how it works under the hood, but it has become so standard and ubiquitous that we now implicitly trust the trust mechanisms themselves.

Without public-key cryptography, you could not trust that when you typed google.com into your browser, you were actually communicating with Google. Without public-key cryptography, you would never dream of purchasing things online by submitting your credit card number, let alone doing any sort of online banking. Without public-key cryptography, how could you have that level of human interconnectedness if you couldn’t be sure you were communicating with the right person, even pseudonymously? Nothing’s perfect, but by and large, public-key cryptography enables all of these things.
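To make the trust mechanism concrete, here is textbook RSA with deliberately tiny numbers (real keys use primes hundreds of digits long plus padding schemes, so never use this for actual security). The point is the asymmetry: anyone can encrypt or verify with the public key, but only the key holder can decrypt or sign.

```python
# Textbook RSA with tiny primes -- illustrative only, never secure in practice.
p, q = 61, 53                # two secret primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

message = 65

# Encryption: anyone holding the public key (n, e) can encrypt...
ciphertext = pow(message, e, n)
# ...but only the private-key holder (d) can decrypt.
decrypted = pow(ciphertext, d, n)

# Signing is the mirror image: only the key holder can produce the signature,
# yet anyone can verify it with the public key. This asymmetry is what lets
# strangers on the web establish trust without sharing any secret beforehand.
signature = pow(message, d, n)
verified = pow(signature, e, n) == message
```

That one-way-ness, easy to verify, infeasible to forge, is the whole trick; everything from TLS certificates to signed software updates is built on variations of it.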

These mathematical tools, at one time in history completely abstract, became so mainstream that trust, in many ways, is inherent to our typical web experience. And when people and businesses trust each other more, ideas, information, and money flow more freely.

And that is the analogy with privacy-preserving technologies. We’re at the outset with new mathematical tools that initially feel inscrutable, seemingly fit only for the most niche use cases, applied to limited web traffic and the occasional dataset.

In the long run, I believe these technologies will become second nature: tools that most people still won’t completely understand, but that are deemed ultimately trustworthy. People will not have to be so concerned about where their data are going, because the data will be protected by default, in the same way cryptography protects our bank details. But the big promise is that these data, despite being private, remain usable: to provide better user experiences, to conduct scientific research more readily, and, of course, to keep the web open and vibrant.

Andrew Pascoe is NextRoll's Vice President of Data Science Engineering.