In recent weeks, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) released a Motion for a Resolution reviewing today’s privacy law landscape. The motion found that the most popular Privacy Enhancing Technologies (PETs) used by businesses today do not provide adequate protection for common Big Data use cases.
The context for this review is clear: the methods and purposes of data processing have changed significantly over the past several decades. At the same time, technological developments make it increasingly easy for businesses to move from the original primary purpose for data collection and processing to secondary processing involving advanced analytics, artificial intelligence (AI) and machine learning (ML).
While Big Data continues to evolve utilising these technologies, data protection techniques and their associated legal requirements must show the same level of progress. The introduction of the General Data Protection Regulation (GDPR) was a significant step towards ensuring that data subjects’ fundamental rights were respected, with organisations obliged to comply with data protection obligations. However, nearly three years on it is clear that non-compliance remains a widespread issue, as highlighted by the recent ruling of the Court of Justice of the European Union, known as “Schrems II”, which invalidated the EU-US Privacy Shield framework for international data transfers for failing to meet GDPR requirements.
Simple data protection is no longer enough
One crucial aspect highlighted by the LIBE Motion is that businesses can no longer rely on simple “data housekeeping” practices to satisfy the data protection requirements of both GDPR and Schrems II. Indeed, the LIBE noted the importance of data protection by design and by default for all processing, which requires technical and organisational measures to protect data both in the EU (as required by the GDPR) and outside of the EU (as confirmed by the Schrems II ruling regarding lawful secondary data processing and cross-border data transfers).
Despite this, the reality is that many businesses and organisations are still only focusing on simple data protection for primary data collection and processing. This does not satisfy the legal requirements for secondary processing via analytics, AI and ML, which requires a different lawful basis: ‘legitimate interests’ processing, something the LIBE highlights as crucial for further Big Data development. That basis requires a “balancing of interests” test to be satisfied, ensuring the processing is proportionate and protects the rights of data subjects.
GDPR Pseudonymisation
While the LIBE recommends that the European Data Protection Board (EDPB) create guidelines and “a list of unambiguous criteria to achieve anonymisation”, many do not believe anonymisation is even possible in today’s Big Data world.
New technical measures are needed, and the GDPR itself provides a blueprint for what these should be. Heightened requirements for Pseudonymisation are explicitly set out in the GDPR as a means to implement data protection by design and by default, and GDPR Pseudonymisation is also recommended by the EDPB as a means to continue international data transfers while remaining compliant with Schrems II.
Schrems II highlighted the issue of lower protection standards in third countries such as the US. However, techniques such as Pseudonymisation can protect data from surveillance in other countries and thereby satisfy the requirements for global data transfers in the Schrems II ruling.
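In practice, GDPR Pseudonymisation means replacing direct identifiers with tokens while keeping the “additional information” needed for re-identification separately and under technical safeguards (Article 4(5) GDPR). The sketch below illustrates one common approach, a keyed hash (HMAC), in Python; the function names, key handling and record shape are assumptions for illustration only, not a prescribed or sufficient compliance implementation:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    The secret key is the 'additional information' that must be held
    separately from the pseudonymised dataset; without it, the token
    cannot be linked back to the original identifier.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Illustrative only: in practice the key would be managed by a
# separate, access-controlled key service, not embedded in code.
key = b"key-held-separately-from-the-dataset"

record = {"email": "alice@example.com", "purchases": 12}

# The record shared for secondary processing carries a pseudonym,
# not the direct identifier.
protected = {
    "user": pseudonymise(record["email"], key),
    "purchases": record["purchases"],
}
```

Because the same identifier always maps to the same token under a given key, analytics and ML can still link records belonging to one user, while re-identification requires access to the separately held key.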
The path forward for compliance
The growing shift towards Big Data processing, and the accompanying need for more advanced data protection, underlines the importance of technical and organisational controls that allow data processing to continue without revealing user identity except under authorised conditions.
The European Parliament has explicitly highlighted that it is no longer acceptable for businesses to solely rely on “data housekeeping” practices. Organisations must therefore act now and transition towards more effective data protection techniques to control Big Data use, or else face the risk of substantial financial penalties as well as wider reputational damage.