When you purchase a product or use a service, at some point you will probably receive a feedback form. It’s almost an inevitability.

It might be a form that arrives by email, or an irritating pop-up in an app. If you use a smart speaker, you may even get a notification that proceeds to tell you, “Two months ago, you bought cat food. How many stars would you give this product?” The question is easy to answer, although, depending on how irritating the distraction is, the validity of the feedback is questionable!

Whenever these pop-ups appear, you’re told “Your responses will remain anonymous”. The statement is so common that most of us probably don’t even notice it. With the smart speakers, there is no privacy information at all. We all assume our feedback is anonymous. Maybe it’s worth taking a step back and asking ourselves, “What is anonymisation anyway?”


What is Anonymisation?

Anonymous data is defined in recital 26 of the GDPR as “Information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable”.

Anonymous data is, therefore, not subject to the provisions of the UK GDPR. However, anonymisation is not as simple as removing names and addresses, particularly given how broadly personal data is now defined. The UK GDPR defines personal data as data relating to an identified or identifiable natural person.


Understanding Identifiability

To understand the breadth of identifiability, let’s look at a stock image.

[Image: long-exposure stock photo of London commuters waiting for a train; both the commuters and the arriving train are blurred, making the individuals hard to identify.]

This image is from Adobe Stock, a website where you pay a licence fee to use images in commercial works. If I hadn’t told you where the image came from, you could find out quickly with a reverse image search. We’re not going to get into the debate here about invasion of privacy by the photographer, or whether publishing the image for sale truly puts it into the public domain.

If we go to the stock image website, we’ll find the name of the photographer who took the picture. We could then contact the photographer and ask about this particular photo. The photographer might say that it was a candid photo, without any models, but that they took it at 6.40am on the 5th of October 2020.

You could then canvass around this station at the same time of day the photographer took the picture. Given how many people use the tube for their daily commute, there’s a distinct possibility you’ll find some of the people in the photograph.

In three or four steps, you can identify the individuals in the photograph. The individuals are identifiable, so this picture could be defined as personal data. It’s easy to see from here why anonymisation is a harder task than it used to be.


True Anonymisation

So, are our responses to those rating questions anonymous? The answer to that question is “maybe.”

If the data is requested and collected in a way that provides the rating to the company with no other details, then we could say the feedback was anonymous.

However, let’s take an experience that many of us are familiar with. You download an app on your phone and happily set about completing puzzles, building civilisations or destroying aliens. After a while, a request pops up asking for a review.

For Apple users, this is all handled by the App Store. Interestingly, an application provider may only request this information three times a year. The application provider must therefore record how many times they’ve sent a pop-up notification, so it’s clear they must store some personal data.

Seeing as the App Store handles ratings and reviews, you could consider Apple a data processor, running the review process on behalf of the app developer. So maybe they are processing personal data after all.

Let’s think about a simpler example. You run an event in a school and ask for feedback afterwards. Say you send out a link to a Google Form and someone answers about the lack of wheelchair access, or about rapidly flashing lights shown without warning. If only one attendee uses a wheelchair, or only one has photosensitive epilepsy, then the anonymity of that feedback is very much weakened.
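The risk in that scenario can be made concrete with a simple check, in the spirit of k-anonymity: if the number of respondents sharing a distinguishing attribute is very small, their answers can often be traced back to them. This is only a sketch with hypothetical data and field names, not a tool the article describes.

```python
from collections import Counter

def risky_groups(responses, attribute, k=3):
    """Flag attribute values shared by fewer than k respondents.

    A response whose attribute value appears fewer than k times in the
    dataset can often be linked back to an individual, so its
    anonymity is weak (a k-anonymity-style check).
    """
    counts = Counter(r[attribute] for r in responses)
    return {value for value, n in counts.items() if n < k}

# Hypothetical feedback data: only one wheelchair user attended.
feedback = [
    {"access_need": "none", "comment": "Great event"},
    {"access_need": "none", "comment": "Too long"},
    {"access_need": "none", "comment": "Loved it"},
    {"access_need": "wheelchair", "comment": "No ramp at the entrance"},
]

print(risky_groups(feedback, "access_need", k=2))  # {'wheelchair'}
```

Any respondent in a flagged group is effectively identifiable, however “anonymous” the form claims to be.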


Managing Anonymisation

The bigger question is “Do you actually need perfectly anonymous data?”

For the education sector, feedback is essential to improve teaching, educational resources and student wellbeing. Educational organisations often need to show their commitment to progress and equality. The publication of statistical data can support that.

The UK Data Service provides advice on anonymising both Quantitative Data (numbers and statistics) and Qualitative Data (opinions, statements and written responses).

However, if you take sensible anonymisation measures (or sensible alternatives such as pseudonymisation), and you protect the data you gather as if it were personal data, the risks can be cut substantially, and you can get on with driving improvements based on the results of your feedback.
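One common pseudonymisation measure is to replace direct identifiers with salted hashes. The sketch below is a minimal illustration with hypothetical field names, not a recommendation from the source; note that, because whoever holds the salt can re-link the pseudonyms, the result remains personal data under the UK GDPR, just better protected.

```python
import hashlib
import secrets

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 digest.

    The salt must be stored separately and securely: anyone holding it
    can recompute the mapping, so this is pseudonymisation, not
    anonymisation.
    """
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

salt = secrets.token_bytes(16)  # keep this secret, apart from the data
record = {"email": "student@example.com", "rating": 4}  # hypothetical record
record["email"] = pseudonymise(record["email"], salt)
```

The same respondent always maps to the same pseudonym, so you can still spot repeat feedback without storing the raw identifier alongside the responses.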