
A Root-Cause Framework for Trans-Atlantic Data Privacy

By Chris McLellan

When the United States and the European Commission together announced a new Trans-Atlantic Data Privacy Framework earlier this year, the news didn’t raise too many eyebrows. After all, there’s nothing particularly objectionable about the new framework – the goal is to “foster trans-Atlantic data flows” and address concerns about the underlying EU-U.S. Privacy Shield framework, and who could complain about that? It’s a step forward, which is good. 

Here’s the problem: The market right now needs giant steps, not incremental changes.

Consider the foundation. Data privacy is a universal problem – consumers everywhere have a right to expect that their most confidential information is protected – but that doesn’t mean there’s a universal solution. Europe deserves praise for setting the bar higher than ever before, but regulatory cross-pollination goes only so far. Bringing U.S. regulations into closer alignment with the sweeping General Data Protection Regulation (GDPR), for example, won’t resolve the issues that fundamentally hinder consumers’ ability to gain meaningful control of their personal data. In fact, any kind of national mandate would likely run into huge obstacles around cultural norms (which differ even between states), technology capabilities, enforcement policies, and more.

In the U.S., meanwhile, we’re seeing incremental progress. California set the tone with its groundbreaking California Consumer Privacy Act (CCPA), soon to be subsumed by the more comprehensive California Privacy Rights Act (CPRA), and many other states, including Utah, Virginia, and Maine, have already enacted or are preparing their own data privacy mandates. It’s possible that within the next couple of years, most states will have some regulations in place.

The federal government is inching forward too. On July 20, 2022, the House Energy and Commerce Committee voted 53-2 to advance the American Data Privacy and Protection Act (ADPPA), a strongly bipartisan result. The act aims to create a national standard in place of a patchwork of state laws, and it includes updates that grant the California Privacy Protection Agency, created under the state’s privacy law, express authority to enforce the ADPPA.

The Department of Commerce also recently announced the Global Cross-Border Privacy Rules Forum, whose signatories are mostly Asia-Pacific nations. However, the most important data flow is between the U.S. and the EU, and this has been an area of heightened concern since 2015, when the principles known as “Safe Harbor” were declared invalid by the Court of Justice of the European Union. Their successor, known as Privacy Shield, was struck down by the same court in 2020 (in a decision widely known as “Schrems II”). The new Trans-Atlantic Data Privacy Framework reestablishes this important legal mechanism for regulating transfers of EU personal data to the U.S.

All this matters because most modern apps operate as part of a “data supply chain” in which information is routinely exchanged between application databases (aka data silos) that usually span multiple regulatory jurisdictions. These data flows often include personal and sensitive information such as search, location, and transaction data, which is why setting a baseline of rules between nations to govern who can access and use this data is vital to establishing data privacy rights for individuals and institutions alike. However, regulations can’t do everything, and they don’t address the root cause of why data is so difficult to control in the first place: data fragmentation.

The big variable is technology, and the problems start at the core. Consider the application developers at the center of this storm: the people who must comply with rigorous data protection frameworks while trying to keep data, particularly personally identifiable information (PII), under meaningful control. They work in an environment where every new app, regardless of origin or purpose, traps its data in an app-specific silo that can be connected to other silos only by exchanging copies. This is point-to-point data integration, and it’s truly the bane of modern-day digital operations.
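To make that concrete, here is a minimal sketch of copy-based, point-to-point integration between two hypothetical silos. The databases, tables, and fields are invented for illustration, not drawn from any real system:

```python
import sqlite3

# Two hypothetical app-specific silos: a CRM and an analytics store.
crm = sqlite3.connect("crm.db")
analytics = sqlite3.connect("analytics.db")

crm.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")
analytics.execute("CREATE TABLE IF NOT EXISTS users_copy (id INTEGER, email TEXT, city TEXT)")
crm.execute("INSERT INTO users (email, city) VALUES (?, ?)", ("alice@example.com", "Lyon"))
crm.commit()

def nightly_etl():
    """Point-to-point integration: PII is physically duplicated into a
    second silo, beyond the reach of the source's access controls."""
    rows = crm.execute("SELECT id, email, city FROM users").fetchall()
    analytics.executemany("INSERT INTO users_copy VALUES (?, ?, ?)", rows)
    analytics.commit()

nightly_etl()
# The email now lives in two places; each copy must be governed separately.
```

Multiply this pattern across every pair of apps in an organization and the governance surface grows quadratically, with each new copy carrying the same PII under a different set of controls.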

This process of endless copying makes it difficult, if not impossible, to achieve desired GDPR outcomes like ubiquitous data access controls, portability, custodianship, deletion (the right to be forgotten), and precision auditability. For the record, these provisions could be included in the post-Privacy Shield framework. But that’s not likely to happen.
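The right to be forgotten shows why. Reopening the two hypothetical silos from the sketch above, an erasure request honored at the source quietly leaves the downstream copy intact:

```python
import sqlite3

crm = sqlite3.connect("crm.db")              # the source silo from above
analytics = sqlite3.connect("analytics.db")  # the downstream copy

def handle_erasure_request(email: str) -> None:
    """A right-to-be-forgotten handler that only knows about the source
    silo; every copy produced by an ETL job is invisible to it."""
    crm.execute("DELETE FROM users WHERE email = ?", (email,))
    crm.commit()

handle_erasure_request("alice@example.com")

# The source is clean, but the copy still holds the PII:
leftover = analytics.execute(
    "SELECT COUNT(*) FROM users_copy WHERE email = ?", ("alice@example.com",)
).fetchone()[0]
print(f"Copies remaining after erasure: {leftover}")  # prints 1, not 0
```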

This goes to the heart of data privacy: Cybercriminals stealing and selling private information for rampant misuse is a huge problem, but these core data management issues, and their effect on data governance, are just as serious.

So, what’s the best path forward? 

We must establish new approaches. Remember that when technology is the problem, it’s also often the solution. 

Organizations should begin by evaluating and adopting privacy-enhancing technologies that help anonymize and encrypt data, minimize data usage, and better manage consent. They should also consider first-party and zero-party data collection practices that redirect sensitive data away from third-party apps over which they have no control (e.g., Google Analytics, which has been deemed in violation of the GDPR in France, Austria, and Italy). There are also processes and workflows that help establish “purpose-based” data access requests, as sketched below.
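Two of these ideas, pseudonymization and purpose-based access, fit in a few lines of Python. This is a loose sketch, not any particular product’s API; the consent registry and field names here are hypothetical:

```python
import hashlib
from typing import Optional

# Hypothetical consent registry: data subject -> purposes they agreed to.
CONSENT = {"alice@example.com": {"billing", "support"}}

def pseudonymize(email: str) -> str:
    """One-way token for analytics, so raw PII never leaves the silo.
    (A real deployment would use a keyed hash plus key management.)"""
    return hashlib.sha256(email.encode("utf-8")).hexdigest()[:16]

def request_field(subject: str, value: str, purpose: str) -> Optional[str]:
    """Purpose-based access: release a value only if the data subject
    consented to this specific purpose; otherwise deny."""
    return value if purpose in CONSENT.get(subject, set()) else None

print(pseudonymize("alice@example.com"))                            # stable token, no PII
print(request_field("alice@example.com", "555-0100", "billing"))    # released
print(request_field("alice@example.com", "555-0100", "marketing"))  # None (no consent)
```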

Even at the international level, there are realistic frameworks that can be adopted without diplomatic wrangling. For example, we recommend that jurisdictions everywhere consider adopting Zero-Copy Integration, a pending national standard in Canada that incentivizes software developers to decouple the management of data from applications and take controlled, collaborative approaches to application design. This fosters an environment where data never needs to be copied in order to be operationalized (i.e., used to power new digital solutions). In the process, it eliminates traditional copy-based data integration (via ETL pipelines and APIs), which in turn translates into meaningful access controls that can be universally enforced.
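The standard itself defines the specifics, but as a rough, hypothetical sketch of the underlying idea (one governed store, with access granted in place rather than copies exported), consider:

```python
import sqlite3

# One shared, governed store instead of per-app silos (illustrative only).
store = sqlite3.connect(":memory:")
store.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, city TEXT)")
store.execute("INSERT INTO users (email, city) VALUES ('alice@example.com', 'Lyon')")

# Hypothetical grants: which columns each application may query in place.
GRANTS = {"analytics_app": {"city"}, "billing_app": {"email", "city"}}

def query_in_place(app, columns):
    """Serve a scoped view against the single store. No rows are exported,
    so revoking a grant (or deleting a row) takes effect everywhere at once."""
    denied = [c for c in columns if c not in GRANTS.get(app, set())]
    if denied:
        raise PermissionError(f"{app} is not granted: {denied}")
    # Column names were validated against the grant allowlist above.
    return store.execute(f"SELECT {', '.join(columns)} FROM users").fetchall()

print(query_in_place("analytics_app", ["city"]))  # [('Lyon',)]
try:
    query_in_place("analytics_app", ["email"])    # PII not granted
except PermissionError as exc:
    print(exc)
```

Because there is only one copy, the erasure problem from the earlier sketch disappears: deleting a user’s row removes their data for every application at once.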

The key here is that it’s a win-win. Zero-Copy Integration enables meaningful data ownership while simultaneously generating significant time- and cost-saving efficiencies. 

This kind of data governance will be meaningful and comprehensive, affording data the same level of respect as, for example, intellectual property and currency – neither of which can legally be copied. Without it, we’ll continue to see the rampant proliferation of data silos and the wide-scale copying of personal data up and down digital supply chains.

The Zero-Copy Integration framework represents one of the most promising approaches we have to squaring that circle: placing control and collaboration at the heart of new technology.
