
Organisations are increasingly looking to anonymisation as a method of processing data. This is because it allows them to achieve their intended processing purposes without being held to the restrictions placed on the processing of personal data under data protection law. The European Data Protection Supervisor (EDPS) and the Spanish Data Protection Authority (AEPD) recently published a useful paper on anonymisation. The paper’s objective is to “raise awareness about some misunderstandings about anonymisation, and to motivate its readers to check assertions about the technology, rather than accepting them without verification”. We look at the 10 lessons on anonymisation that the EDPS and AEPD think you should learn.

Anonymisation is the process of rendering personal data anonymous so that data protection law no longer applies to it. Regulatory guidance and case law show the high threshold that exists for anonymisation. The Paper does not delve into the details of what constitutes anonymisation, or the various techniques that can be used to achieve it. These can get quite complicated, both from a legal and a technical perspective. Instead, the Paper focuses, at a relatively high level, on the common “misunderstandings” of anonymisation that the authorities wish to correct.

10 Lessons from the EDPS and AEPD Paper

1. Pseudonymisation is not the same as anonymisation

Pseudonymisation involves “processing of personal data in such a manner that the data can no longer be attributed to a specific data subject without the use of additional information”. Pseudonymous personal data is still personal data under GDPR. With truly anonymous data, an individual is no longer identifiable and so the information will not fall within the scope of the GDPR.
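The distinction can be illustrated in code. The sketch below is a hypothetical example (the records and field names are invented): direct identifiers are replaced with random tokens, but because a separate lookup table retains the mapping, the data can still be attributed to individuals and so remains personal data.

```python
import secrets

# Hypothetical illustration of pseudonymisation: names are replaced with
# random tokens, but the separately held lookup table is the "additional
# information" that makes re-attribution possible.
records = [{"name": "Ana", "city": "Madrid"}, {"name": "Ben", "city": "Dublin"}]

lookup = {}          # must be kept separately under GDPR, but still exists
pseudonymised = []
for rec in records:
    token = secrets.token_hex(8)
    lookup[token] = rec["name"]                      # retained mapping => reversible
    pseudonymised.append({"id": token, "city": rec["city"]})

# Anyone holding `lookup` can re-attribute every record to an individual,
# which is why pseudonymised data is still personal data.
```

Truly anonymous data would have no such mapping, retained by anyone, capable of linking the records back to individuals.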

2. Encryption is not an anonymisation technique

Encryption does not render data anonymous, as it is capable of being reversed through decryption. Encryption can be a strong privacy enhancement tool, particularly for sharing information securely.
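The reversibility point can be shown with a deliberately simplified example. The XOR routine below is a toy cipher, not real cryptography, but it illustrates the structural issue: anyone holding the key can recover the original personal data in full.

```python
# Toy illustration (NOT real cryptography): encryption is reversible with
# the key, so encrypted personal data is pseudonymised at best, never anonymous.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
email = b"jane.doe@example.com"          # hypothetical personal data
ciphertext = xor_cipher(email, key)

assert ciphertext != email               # unreadable without the key
assert xor_cipher(ciphertext, key) == email  # fully reversible with the key
```

The same logic applies to production-grade encryption: the ciphertext is only as "anonymous" as the key is inaccessible, and the key holder is always able to re-identify the data.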

3. Anonymisation of data will not always be possible

It is not always possible to lower the risk of identification below a certain threshold while also retaining a useful dataset. Whether the re-identification risks can be sufficiently mitigated depends on the context and the nature of the data. The Paper provides an example of a dataset containing only the 705 members of the European Parliament, where the total number of possible individuals is too small to allow effective anonymisation. This is worth keeping in mind when considering anonymisation for relatively small datasets.

4. Anonymisation may not always be forever

Anonymising data does not mean the process can never be reversed in the future. There is a residual risk that technical developments or the availability of additional information, e.g. through a personal data breach, may make re-identification possible in the future. There are examples of studies in recent years showing the power of AI to reverse engineer incomplete datasets. The takeaway here is that for very sensitive data, you should build in contingencies to protect the data on the basis that the anonymisation may be compromised in the future.

5. Anonymisation does not always reduce the probability of re-identification of a dataset to zero

The anonymisation process and the way it is implemented will have a direct influence on the likelihood of re-identification. The aim of anonymisation is to reduce the risk of re-identification below a certain threshold. The Paper notes that the threshold in each case will depend on several factors. These include existing mitigation controls, and the impact on an individual’s privacy in the event of re-identification. The most desirable goal would be 100% anonymisation, but in some cases this is not possible. Given this, organisations should build in good data handling practices that take account of the risk of re-identification.

6. Anonymisation is not binary and can be measured

It is possible to analyse and measure the degree of anonymisation. The expression “anonymous data” does not mean that datasets can simply be labelled as anonymous or not. Each record in a dataset has a probability of being re-identified based on how easy it is to single that record out. An anonymisation process should assess the re-identification risks over time.
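One common way to quantify this, offered here as an illustrative sketch rather than anything prescribed by the Paper, is k-anonymity: a dataset is k-anonymous if every combination of quasi-identifiers (attributes like age band or postcode that can indirectly identify someone) is shared by at least k records. The dataset and column names below are invented.

```python
from collections import Counter

# Hypothetical dataset: age band and postcode are quasi-identifiers that
# could be combined with outside information to single a person out.
dataset = [
    {"age_band": "30-39", "postcode": "D01", "diagnosis": "flu"},
    {"age_band": "30-39", "postcode": "D01", "diagnosis": "asthma"},
    {"age_band": "40-49", "postcode": "D02", "diagnosis": "flu"},
]

def k_anonymity(rows, quasi_identifiers):
    """Return the smallest group size over all quasi-identifier combinations."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

k = k_anonymity(dataset, ["age_band", "postcode"])
# k == 1 here: the third record is unique on (age_band, postcode) and can
# be singled out, so this dataset is not meaningfully anonymised.
```

A higher k means each record hides in a larger crowd; generalising the quasi-identifiers (e.g. wider age bands) raises k at the cost of data utility.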

7. Human intervention is needed in anonymisation

Human expert intervention is an important part of the anonymisation process, in addition to automated tools. Indirect identifiers will not always be obvious and will require detailed review and analysis to avoid re-identification, which may not be picked up by automated tools. Organisations should be cautious when deciding which anonymisation processes they choose to automate and should include human supervision to ensure indirect identifiers are being picked up.

8. Anonymisation can keep data functional for a given purpose

Anonymisation may restrict certain use cases of data but may keep a dataset functional for other useful purposes. Anonymisation can be particularly useful for storing data for longer periods than would otherwise be permitted under GDPR, while retaining a limited use of the data. An example provided is the anonymisation of a website’s access logs. This allows the growth of the website to be tracked into the future without any personal data being processed.
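The access-log example could work along the following lines (a minimal sketch with invented log entries, not the Paper’s own implementation): identifying fields such as IP addresses are discarded and only aggregate daily counts are kept, which is enough to track traffic growth.

```python
from collections import Counter

# Hypothetical raw access log: the IP address makes each entry personal data.
raw_logs = [
    {"ip": "203.0.113.7", "date": "2021-06-01", "path": "/home"},
    {"ip": "198.51.100.4", "date": "2021-06-01", "path": "/about"},
    {"ip": "203.0.113.7", "date": "2021-06-02", "path": "/home"},
]

# Aggregation discards the identifiers; only per-day hit counts survive,
# so the site's growth can still be tracked without processing personal data.
daily_hits = Counter(entry["date"] for entry in raw_logs)
```

Note that simply dropping the IP column from row-level logs may not be enough if other fields (timestamps, paths, user agents) still allow singling out; aggregation removes that residual linkability.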

9. Processes must be individually tailored

Anonymisation processes must be tailored to the scope and context of the processing, as well as the specific associated risks. When data is only made available to a limited number of recipients, the re-identification risk will be lower. However, this would change substantially if, for example, the data was made available to the general public. The general point here is that following an anonymisation technique used successfully by another organisation will not necessarily produce the same results for your organisation.

10. There is a risk in finding out to whom the data refers

The re-identification of a data subject could have a serious impact on their rights. The Paper gives the example of an individual’s television preferences that could lead to inferences about that person’s political opinions.


The Paper sets out the EDPS and AEPD’s anonymisation lessons at a high level. It also serves as a useful checklist for organisations considering an anonymisation solution for certain processing activities.

The UK’s Information Commissioner’s Office (ICO) has recently published a first chapter on anonymisation, pseudonymisation and privacy enhancing technologies for public consultation. The first chapter is mainly focused on explaining the core concepts of anonymisation and pseudonymisation. It is open for public consultation until November 2021. The ICO intends to publish further draft chapters for comment throughout the summer and autumn.

For more information please contact a member of our Technology team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.
