For today’s post, we’re taking a quick dive into the murky depths of Subject Access Requests.

Imagine this scenario. One of your students, staff members, or anyone you might have information about is standing at the front desk, and they’re asking for all of their data. What do you do? What happens next? For many organisations, it’s a daunting thought.

In reality, answering a Subject Access Request can be reasonably simple. However, the process is easier to understand with a little background knowledge:

What is a Subject Access Request?

 

Every individual is entitled to know what information you are holding about them. Individuals have a range of rights, including the right to rectification and the right to be forgotten, but the right most commonly exercised is the right to access.

Individuals exercise this right by submitting a Subject Access Request. Through this process, individuals can ask you to provide them with a copy of the personal data you hold about them.

An individual has been able to submit a Subject Access Request (SAR) since before the GDPR. Although the process was slightly different, you could request your data under the 1998 Data Protection Act. The GDPR has simplified the process for individuals, so they can exercise their rights with more ease.

Receiving a Subject Access Request

Back to the situation at hand, you’re manning the front desk and you’re face-to-face with a SAR. What comes next?

The first thing to note is that a Subject Access Request can come to anyone within an organisation, and can come in any format. Your organisation may have a specified route for Subject Access Requests such as an email address or phone number, but individuals are not obligated to use these methods.

A Subject Access Request can be verbal or written and can be sent through mechanisms like social media. You can receive a Subject Access Request via tweet. When you receive a request via one of these routes, it’s best to make your own written record of it, but you cannot require the individual to resubmit the request in writing.

An individual can send in a Subject Access Request as an instant message.

In this scenario, the first thing you’ll want to do is gather a little more information. You can ask the individual for their name, and whether they have any other information such as a reference number to help locate their records.

After that, it’s best to enquire if there is anything specific the individual is looking for. Sometimes an individual is looking for specific information, or data from a specific time-period. They’re under no obligation to narrow the scope of their request, but it can sometimes save them from sifting through hundreds of records and save time for your organisation.

 

A Note About Identification

It is important to check that the person making the request is who they claim to be, but this process requires some common sense: identity verification must not be used to block access to information. For requestors such as students or staff, you already hold information about their identity, and their daily presence at your organisation means that further checking is not required.

When requestors are less well known, verification of identity is required. This is easiest to do in person. The requestor can show identification documents such as:

  • Passport
  • Current driving licence
  • Utility bill
  • Bank or building society statement
  • Letter from the Benefits Agency
  • Letter from a professional

In the scenario used here, you might not be absolutely certain of the requestor’s identity. You could ask to see identification when they first make the request. If they didn’t have any, you could advise them to return with ID when possible, but that you’ll start processing their request immediately.

Documents like passports, driving licences and utility bills can be used to verify someone’s identity.

 

Moving down the chain

Once you’ve taken this information, the next step depends on how your organisation works. For larger organisations, you might have a designated data protection team. For smaller organisations, you may have a single point of contact, or you might be the data protection lead. In particularly small organisations, it’s not unusual for employees to have more than one hat to wear.

Regardless of size, your next step will involve recording the request, and setting the data discovery process in motion. Your organisation has a calendar month to respond to a Subject Access Request, and this begins when anyone in the organisation receives a request, not when the designated data protection lead receives it, so it’s important you get the ball rolling quickly.

You should have somewhere to record any Subject Access Requests your organisation receives. It might be a cloud-based system like ours. You should record the request, along with the date requested and any additional details. If someone makes a written request or sends a tweet, you should record that request verbatim so there can’t be any confusion.
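To make the one-month clock concrete, here’s a minimal sketch (in Python, with a made-up record structure) of logging a request and working out the due date. The “corresponding date in the next month, or the last day of that month” rule follows ICO guidance on calculating a calendar month:

```python
from datetime import date

def response_due(received: date) -> date:
    # One calendar month later: the corresponding date in the next
    # month, or the last day of that month if no such date exists
    # (e.g. 31 January -> 28/29 February).
    year = received.year + received.month // 12
    month = received.month % 12 + 1
    day = received.day
    while True:
        try:
            return date(year, month, day)
        except ValueError:
            day -= 1

# Hypothetical log entry, recording the request verbatim
request_log = {
    "received": date(2021, 1, 31),
    "channel": "in person, front desk",
    "verbatim": "I'd like a copy of all the data you hold about me.",
    "due": response_due(date(2021, 1, 31)),  # 28 February 2021
}
```

Whatever system you use, the key point is that the received date and the verbatim request are captured as soon as the request arrives, not when it reaches the data protection lead.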

Data Discovery

The request has been recorded and passed on to the correct team. If you’re in a large organisation, that might be the last you hear of the request. If your organisation is smaller, you are more likely to be involved in data discovery.

At this stage you’ll need to look through all your records for any personal data relating to the requestor. If the scope has been narrowed, you may not need to locate all the personal data for the individual. When searching, make sure to use any identifiers your organisation might use: if customers have a reference number, or individuals are referred to by initials, search for these as well as the requestor’s name. It’s also worth noting that something can be personal data even if none of these identifiers appear. For instance, a cursory glance through all relevant emails might turn up one or two results that a filtered search missed.
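As a rough illustration of searching on several identifiers at once, here’s a sketch in Python (the records and identifiers are invented; a real search would run across your email, document and database systems):

```python
def find_matches(records, identifiers):
    # A record matches if it contains any of the requestor's known
    # identifiers: full name, reference number, initials, and so on.
    terms = [term.lower() for term in identifiers]
    return [r for r in records if any(t in r.lower() for t in terms)]

documents = [
    "Meeting notes: J. Smith raised a complaint about parking",
    "Invoice 4412 issued to customer REF-0092",
    "Unrelated minutes from the canteen committee",
]

# Searching on the full name alone would miss both relevant records;
# the initials and the reference number catch them.
matches = find_matches(documents, ["Jane Smith", "J. Smith", "REF-0092"])
```

The point of the sketch is the list of search terms: one pass per identifier, not one pass per requestor.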

This can be quite a complex process, and if you’re struggling to hit the deadline, it might be worth looking at extensions.

Redact and Return

The data has been found and it’s been collated. Now you need to provide it to the individual. In general, SARs are sent out in electronic format, but a general rule of thumb would be to respond to a request in the format you received it. In this case, the request was verbal, and it wouldn’t make sense to try and provide the response verbally. Instead, the data protection team are using the contact details you collected at the start of the process and will be emailing out the response. Good for the bees, good for the trees, and easily accessible to the requestor.

However, before this data can be sent out, it must be redacted. An individual only has the right to access their own data, so anybody else’s personal data needs to be removed.

There are a few different methods to redact information, either in hard copy or in digital format. A quick scribble in black pen seems to suffice in the movies, but it’s not a reliable form of redaction. To ensure you don’t give away data you shouldn’t, have a look into electronic redaction, which covers the data and then deletes it from underneath so it cannot be recovered.
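As a toy illustration of why redaction must remove the underlying text rather than just cover it, here’s a minimal sketch in Python that replaces third-party names outright (real redaction of PDFs or scanned documents needs dedicated tooling):

```python
import re

def redact(text, third_party_names):
    # Replace each third party's name with a marker; the original
    # characters are gone from the output, not hidden beneath it.
    for name in third_party_names:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

redact("Anna spoke to Ben Jones about the incident.", ["Ben Jones"])
# "Anna spoke to [REDACTED] about the incident."
```

A black scribble, or a black box drawn over a PDF, leaves the text recoverable underneath; replacing the characters, as above, does not.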

An individual is only entitled to their own personal data. You need to redact or remove other people’s information.

The end of the line

A month has passed, and your student or staff member has received an email with the personal data they requested. They’re happy, you’re happy, and there have been no inadvertent data breaches.

Subject Access Requests might seem daunting to begin with, but they just require a little teamwork and some forward planning. With a step-by-step process, you can handle a SAR with ease.

Here’s a good news story to kick off the new month…

 

The GDPR has provided a whole new framework for data protection, a framework that is centred around an individual’s right to privacy rather than an organisation’s desire for data. Your rights are now stronger and clearer, and organisations must safeguard data and be transparent about how they use it.

Individuals have benefited from tighter data security, greater control over their information and clearer requests for consent. However, the GDPR has also provided an additional benefit. GDPR is good for the environment!

What does GDPR have to do with the environment?

At first, it’s quite hard to see the connection. How can a data protection law affect climate change? This becomes clearer when we remember that the majority of data these days is held electronically. We use electricity to run computers, servers and routers, as well as to manufacture computing equipment. Each email sent creates around 5g of CO2. This may not sound like a lot, but when the average office worker sends and receives 140 emails a day, it all adds up.
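Using the figures above, the arithmetic is quick to check (the 230 working days per year is our own rough assumption):

```python
grams_per_email = 5        # rough estimate quoted above
emails_per_day = 140       # average office worker, per the text

daily_grams = grams_per_email * emails_per_day  # 700 g of CO2 per worker, per day
yearly_kg = daily_grams * 230 / 1000            # ~161 kg per worker, per working year
```

Roughly 700 g per worker per day, or the best part of a couple of hundred kilograms a year, before we count a single server.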

So, using the internet has an environmental impact, but that still doesn’t explain why the GDPR has been good for the environment.

Infographic: Environmental Benefits of the GDPR. A simple image of green hills with wind turbines. Underneath, a green box states: “Data Protection and Environmental Protection may seem completely unrelated, but they are linked through electricity. We power our digital world through electricity, often generated using fossil fuels. So, every time we process digital data, we’re adding to our carbon footprint. The GDPR is helping us reduce CO2 production.”

 

We’re receiving fewer emails

Since the GDPR came into effect, organisations have reduced the number of marketing emails they send. The GDPR made consent requirements more stringent; organisations must provide an opt-in mechanism for emails, rather than an opt-out mechanism. This means that only those interested in a product or service receive marketing emails. Not only is this better for individuals, but it also cuts down emissions. The reduction in the number of emails sent saves over 360 tonnes of CO2 every day, the same as a flight from London to New York.

Websites are Slimming Down

Additionally, many websites have cut the number of third-party cookies and ad trackers they use. A report from Jet Global found that since the implementation of the GDPR, UK news sites have 45% fewer third-party cookies on their sites. UK companies aren’t the only ones to slim down. When the GDPR came into effect, USA Today ran a cut-down version of its site for EU customers. This version (with all the tracking code removed) was one-tenth the size of the original, and took just 3 seconds to load, compared to 45 seconds. Slow sites use more energy, and most of this energy is provided by fossil fuels and non-renewables. Digital sustainability expert Chris Adams estimated that the cut-down version of USA Today can save about as much CO2 a day as a flight from Chicago to New York.

 

The GDPR is good for individuals, and is good for businesses. Turns out, it’s also good for the planet.

 

In schools, responsibility surrounding children’s mental health and wellbeing is clearly documented. Legislation such as the Children Act 1989, and guidance such as “Keeping Children Safe in Education,” set out the responsibilities of staff and governors. Furthermore, it is made abundantly clear that data protection concerns should not prevent action from being taken to support the welfare of a child or young person.

However, these requirements only apply to those under 18. In the Keeping Children Safe in Education documentation, all individuals under the age of 18 are classed as children. Yet in higher and further education, most students are adults. This makes management of their welfare a more complicated puzzle.

 

The Statistics: Mental Health in Universities

In a 2020 survey of 1,800 university students, Randstad (a human resources consulting firm) found nearly 40% of participants felt their studies were affecting their mental wellbeing. Of students considering leaving their course, 55% considered mental health decline a leading factor in their decision. There is plenty of evidence that attending university can create stress for students, and exacerbate mental health problems.

There is also plenty of evidence that early intervention supports recovery from mental health problems, but this can only happen if those who can intervene know that it’s needed.

One obvious source of support would be parents, or another trusted adult. Many parents want to provide that support, but if the student won’t ask for help, does the university or college have the right to get in touch and flag the concern?

To disclose, or not to disclose?

Let’s think about this purely from a data protection standpoint. A data controller is in possession of special category personal data (concerns about their mental wellbeing) for an adult. Without explicit consent or a statutory duty, can the data controller disclose that personal data to a third party?

The obvious answer to this question is no. It would be a clear breach of confidentiality.

The issue is, from the perspective of the College or University, they can only go so far in providing welfare services, so surely sharing concerns with parents/guardians would be justified in the best interests of the student?

This is where everything becomes complicated. At a point of crisis, where an individual’s life is at risk, the Vital Interests lawful basis could allow an emergency contact to be made, but only to alert the contact of the emergency. Universities can only disclose additional detail if the individual is incapable of giving consent.

If mental health intervention is most effective before an emergency situation, we can’t disclose health data on the basis of Vital Interests.  We also have to consider that the student may have good reasons for not wanting their parents to be contacted, even in an emergency. It’s possible the issues may relate directly to the parents or to their attitudes towards the student. In some cases, informing parents could put the student at risk of serious harm.

How can the institution know the family background of the thousands of students they deal with? Worse still, if parents were contacted when a concern was flagged, then this might put students off coming forward in the first place.

 

Creating a new infrastructure

Information cannot be disclosed without consent. For a person with declining mental health, getting that consent can be difficult. Depending on the particular problem being faced, that consent may not even be considered freely given.

As such, the best time to address this problem is during the process of application and enrolment. A structured opt-in scheme with very clear rules could be established at universities, so students can give their consent for concerns to be shared with a named contact. This scheme could be promoted in school during applications, and by universities during open days.

This wouldn’t be used if students miss a lecture or don’t hand work in on time. University is not school. While most students are on the younger side, they should be treated with the same respect given to any adult. However, for serious concerns about a student’s wellbeing, this opt-in mechanism would enable help to be sought before a crisis. Should a student not consent or withdraw their consent later, the university or college would follow the normal welfare process, giving students freedom of choice. For those who do consent, this scheme would act as an additional safety net, giving students and parents a level of reassurance.

 

A Widespread Solution

Opt-in systems are appearing at various universities, but it’s unclear whether these will be adopted more widely. UCAS, who manage most university applications, currently have nothing in place in their application process. Furthermore, the Office for Students say that “individual universities are responsible for developing their own mental health policies.”

As such, it’s unlikely there will be a standardised process across all colleges and universities. However, even if the Office for Students does not mandate an opt-in system, Universities and Colleges could implement them independently.

This is the most obvious method of balancing the issues of consent and concern. Yet, a recent Freedom of Information Request found that of 149 responses, only 32 higher education institutions had a system for students to opt-in to parent/guardian contact.

 

Finding the Perfect Balance

From a broader perspective, this debate raises questions about where duty of care lies at university. From a data protection perspective, it’s relatively simple. An individual’s right to consent or object to processing/sharing of data should be respected.

A consent based system gives students the choice they are entitled to. However, it also means that a student’s support system can be contacted if they are in crisis. 96% of students at the University of Bristol opted in to their welfare scheme, and the university used the scheme 36 times in the first year. By stepping back, and adding additional infrastructure designed with data privacy in mind, higher education institutions can keep both data protection and welfare as a priority.

 

We’ve reached a new checkpoint in Boris’s Covid roadmap. Yesterday, non-essential shops reopened, and many flocked to their local pub to enjoy a pint outside. For many, yesterday also marked their first day back in the office. While teaching staff have been back for a few weeks now, for others the full time return to the office is just happening. While returning to the office is cause for celebration, we should also take the opportunity to renew our data protection vigilance. 

Many organisations have used lockdown as an opportunity to reshuffle the office. Speaking from experience, it can be a little jarring when you come back to work and find your desk in a whole new place! 

 

Separating Work and Home Life

 

It’s important to acknowledge that changes like this can leave people a little uncertain, and that working away from the office can lead to lapses in data protection practices. Now is a good time to check staff still have the little things in place, such as locking computers when they leave their desk. It’s important to remember that not all information is suitable for sharing with colleagues, particularly if you’re in an office with people from different teams.

Now is also a good time to check your digital distancing. Lockdown compressed our work lives and personal lives into a single space. Now, much like social distancing, we need to make sure we leave enough space between those two lives, that data can’t cross from one to the other. While working from home, you may have had to use your own devices, or store things in personal drawers in your house. Now we have access to the office again, take a few moments to assess your home workspace. Have you left any paperwork at home? Do you still have access to work emails and files on your home computer? 

Another common occurrence in lockdown has been the use of personal mobiles for business related tasks. We’ve talked previously about some of the dangers of this, but for many, it has seemed like the only choice to keep organisations running smoothly. Now we can return to our usual methods of communication, it might be best to close down any work WhatsApp or Messenger groups, as they greatly increase the risk of a data breach. 

Moving Forwards Safely

 

 It’s wonderful to be back in the office, and it’s wonderful to see colleagues face to face, albeit from a safe distance. It might take a little while for things to feel normal again, but hopefully this is a permanent step towards ‘Business as Usual’.  As we move forward along the Covid roadmap, we can start getting excited for holidays and weddings, as well as indoor sport and museums. However, we should also keep a careful eye on our behaviour, so our excitement isn’t dampened by a data breach and its consequences. 

And on that note, may we move cautiously, but optimistically, to May 17th, the next step in our journey to normality.   

 

 

When you purchase a product or use a service, at some point you will probably receive a feedback form. It’s almost an inevitability.

It might be a form that arrives on email, or an irritating pop-up in an app. Recently, if you use a smart speaker you may get a notification which proceeds to tell you “Two months ago, you bought cat food, how many stars would you give this product?” It’s easy to answer the question. Although, depending on how irritating the distraction is, the validity of the feedback is questionable!

Whenever these pop-ups appear, you’re told “Your responses will remain anonymous”. It’s such a common appearance that most of us probably don’t even notice. With the smart speakers, there is no privacy information at all. We all assume our feedback is anonymous. Maybe it’s worth taking a step back and asking ourselves “What is anonymisation anyway?”

 

What is Anonymisation?

Anonymous data is defined in recital 26 of the GDPR as “Information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable”.

Anonymous data is, therefore, not subject to the provisions of the UK GDPR. However, anonymisation is not as simple as removing names and addresses, particularly with the new definition of personal data. The UK GDPR defines personal data as data relating to an identified or identifiable natural person.

 

Understanding Identifiability

To understand the breadth of identifiability, let’s look at a stock image.


A stock image of London commuters, taken with a long exposure. The photo is too blurry to easily identify the individuals in it.

This image is from Adobe Stock, a website where you pay a licence fee to use images in commercial works. If I hadn’t told you where the image came from, you could find out quickly with a reverse image search. We’re not going to get into the debate here about invasion of privacy by the photographer, or whether publishing the image for sale truly puts it into the public domain.

If we go on the Stock image website, we’ll find the name of the photographer who took the picture. We could then contact the photographer and ask about this particular photo. The photographer might say that it was a candid photo, without any models, but that they took it at 6.40am on the 5th October 2020.

You could then canvass around this station at the same time of day the photographer took the picture. Given how many people use the tube for their daily commute, there’s a distinct possibility you’ll find some of the people in the photograph.

In three or four steps, you can identify the individuals in the photograph. The individuals are identifiable, so this picture could be defined as personal data. It’s easy to see from here why anonymisation is a harder task than it used to be.

 

True Anonymisation

So, are our responses to those rating questions anonymous? The answer to that question is “maybe.”

If the data is requested and collected in a way that provides the rating to the company with no other details, then we could say the feedback was anonymous.

However, let’s take an experience that many of us are familiar with. You download an app on your phone and happily set about completing puzzles, building civilisations or destroying aliens. After a while a request pops up asking for a review.

For Apple users, this is all provided by the App Store. Interestingly, an application provider may only request this information three times a year. The application provider must therefore record how many times they’ve shown the pop-up notification. So, it’s clear they must store some personal data.
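That bookkeeping is easy to picture. Here’s a sketch of the state a rate-limited review prompt implies (a hypothetical class, not Apple’s actual implementation):

```python
from datetime import date

class ReviewPrompter:
    # To enforce "at most N prompts a year", the app has to remember
    # when it previously asked. That history is stored data.
    def __init__(self, max_per_year=3):
        self.max_per_year = max_per_year
        self.history = []  # dates on which we showed the prompt

    def may_prompt(self, today):
        recent = [d for d in self.history if (today - d).days < 365]
        return len(recent) < self.max_per_year

    def record(self, today):
        self.history.append(today)
```

Even this tiny sketch keeps a per-user history of dates, which is exactly the kind of quiet record-keeping that makes “your responses are anonymous” harder to take at face value.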

Seeing as the App Store handles ratings and reviews, you could consider Apple as a data processor, running the review process on behalf of the App developer. So, maybe they are processing personal data after all.

Let’s think about a simpler example. You run an event in a school and ask for feedback afterwards. Let’s say you send out a link to a Google Form, and someone answers with a comment about the lack of wheelchair access, or about rapidly flashing lights shown without warning. If you have one person who uses a wheelchair, or one person with photo-sensitive epilepsy, then the anonymity of the feedback is very much weakened.

 

Managing Anonymisation

The bigger question is “Do you actually need to have perfectly anonymous data?”

For the education sector, feedback is essential to improve teaching, educational resources and student wellbeing. Educational organisations often need to show their commitment to progress and equality. The publication of statistical data can support that.

The UK Data Service provides advice on anonymising both Quantitative Data (numbers and statistics) and Qualitative Data (opinions, statements and written responses).

However, if you take sensible anonymisation measures (or use sensible alternatives such as pseudonymisation measures) and you protect the data you gather as personal data, any risks can be cut substantially, and you can get on with driving improvements based on the results of your feedback.
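Pseudonymisation can be as simple as replacing direct identifiers with a keyed token. A minimal sketch in Python (the key value and the token length are our own choices):

```python
import hashlib
import hmac

# In practice the key must be stored separately from the data:
# anyone who holds both can re-identify individuals.
SECRET_KEY = b"keep-this-somewhere-else"

def pseudonymise(identifier):
    # Keyed hash: the same person always maps to the same token, but
    # the token cannot be reversed without the key.
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]
```

Because the mapping is consistent, you can still link multiple feedback responses from the same person, which is exactly what distinguishes pseudonymised data (still personal data under the UK GDPR) from truly anonymous data.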

 

 

Last week, a global hacking campaign targeted Microsoft Exchange servers, and compromised hundreds of UK companies. It was estimated that more than 500 email servers in the UK were hacked, alongside many more across the world. Attackers used newly discovered vulnerabilities in the software to gain access to data, or to install ransomware.

Ransomware can cripple an organisation, with hackers locking the organisation out of their own servers and removing access to data unless the organisation hands over a hefty fee. Attackers often delete or sell the data they held hostage, even if the victim pays the ransom. We’ve talked about the damaging impact ransomware can have on operations in previous posts, such as the Travelex incident in early 2020. The company spent several weeks unable to function, with all of their systems offline.  In short, a ransomware attack can bring an organisation to its knees.

A ‘Zero-Day’ Hack With Widespread Damage

The recent hack has been particularly damaging due to multiple factors:

Firstly, thousands of organisations use Microsoft Exchange. These range in size from large corporations like Metro and the Independent, to individual schools with a handful of students. Smaller organisations may not have dedicated IT staff, so they are less likely to spot growing problems, and may miss a patch that removes a vulnerability which could later be exploited. When an attack compromises widely used software, small organisations often suffer the most disruption.

The second factor in this hack is the type of vulnerability that was exploited. According to Microsoft, hackers used new techniques that had not been seen before. This meant that attackers knew of vulnerabilities in the Microsoft Exchange software before the software developers did. This is referred to as a “zero-day” vulnerability: the developers have “zero days” to fix the problem that has just been exposed, and perhaps already exploited by hackers. Software vendors must work to release a patch quickly while the world waits and customers are at risk. If developers fail to release a patch before hackers exploit the security hole, the “zero-day” vulnerability becomes a “zero-day” attack.

Preventing Zero-Day Attacks:

While these attacks can lead to personal data breaches, zero-day attacks are a broader cyber-security issue. In organisations such as schools and colleges the two issues overlap: most of the data held on systems such as Microsoft Exchange will be personal data.

More complex preventative measures require a more detailed understanding of IT, but there are still some simple things you can put in place to reduce risk. Having a specialist on call should you run into a problem might also be worth considering; some insurance policies can provide access to this type of expertise.

  1. Ensure you have Firewalls and Anti-Virus software in place, and you update the software regularly.
  2. Make sure to install any new patches or updates released for your software. These patches are likely to be securing vulnerabilities in the software.
  3. Keep an eye on the news. If a software you use appears as part of a hack or cyber-attack, letting IT staff know as soon as possible gives them a head start to tackle any issues that arise.
  4. Ensure your organisation has a secure backup in place, and that you hold the backup separately to your main servers. Should hackers delete your records, you may be able to retrieve lost data from your backup.
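Point 2 above can even be partially automated. A sketch of a version check against an advisory list (the package names and versions are invented):

```python
# Versions as tuples compare element by element, so (15, 0, 4) < (15, 1, 0).
installed = {
    "exchange-connector": (15, 0, 4),
    "mail-filter": (2, 7, 1),
}

# First version of each package that contains the security fix,
# taken from a hypothetical advisory feed.
patched_in = {
    "exchange-connector": (15, 1, 0),
}

needs_update = [name for name, version in installed.items()
                if name in patched_in and version < patched_in[name]]
# ["exchange-connector"]
```

Even a manual spreadsheet of installed software and versions makes this comparison possible the moment an advisory is published.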

Disaster Recovery and Workforce Education

These are just a few ideas as to how to keep your organisation safe from cyber-attacks. However, you can’t prevent every single attack. The nature of zero-day attacks means that you don’t know about a vulnerability until after an attack. Therefore, having a disaster recovery plan is useful, should you need to deal with such a situation.

A final point. In this post we’ve explored some of the aspects of a personal data breach caused by a cyber-attack, rather than human error. Chances are, the majority of data breaches you encounter will be caused by human error. The preventative measures discussed above are important for reducing the risk of a cyber-attack, but you should combine them with workforce education and a strong data protection ethos. A breach caused by an individual can have just as damaging an effect as one caused by code.

 

Last month, the UK Commissioner for Public Appointments posted an advertisement for a new Information Commissioner. Current Commissioner Elizabeth Denham previously announced that she would be leaving her post in October, having overseen the UK’s transition to new data protection laws.

Whoever is hired will be stepping into quite a sizable pair of shoes. Data protection complaints doubled in 2018/19, from around 21,000 to over 40,000 complaints. There were slightly fewer complaints in 2019/20, but the number is still far higher than before the implementation of GDPR. This is not necessarily a bad thing; the rise in numbers is partially due to more stringent rules, but it has also come from increased public awareness around data protection. Individuals are learning to be a bit savvier about their data, and they’re learning where to go when they feel their data is being misused. 

Awareness is important in current times, as we are producing personal data almost all the time. All our time on electronics, our use of smart technology, and even the signing of a humble visitor book. It all creates data. In 2025, we’re due to hit a total of 175 zettabytes of data in the global datasphere. A single zettabyte is around a trillion gigabytes. Not all of that is personal data, but the numbers are still rather overwhelming to think about. Increasing amounts of data, and increased pressure on the Information Commissioner’s Office (ICO), must make the job of Information Commissioner rather daunting to apply for.   

 

What is an Information Commissioner?

The Information Commissioner is the head of the ICO. For the most part, they act as the public head of the body, and lead the organisation through its strategic development. The Information Commissioner looks at the big picture, while those employed by the ICO manage the day-to-day duties of the office.

In the government’s advertisement for the role, they describe some of the duties of the ICO: 

  • Give advice to members of the public about their information rights; 
  • Give guidance to organisations about their obligations with respect to information rights; 
  • Help and advise businesses on how to comply with data protection laws;  
  • Gather and deal with concerns raised by members of the public; 
  • Support the responsible use of data; 
  • Take action to improve the information rights practices of organisations; and 
  • Co-operate with international partners, including other data protection authorities. 

 

To a Data Protection Officer (DPO), this role description might sound familiar. Most Data Protection Officers work within a smaller remit, and unfortunately there’s rarely an opportunity for an international visit. However, many of the same principles apply. Data Protection Officers need a good memory and a keen eye for relevant legislation. In schools and colleges, they also need strong problem-solving skills. In an environment with a lot of personal data, and quite a few wildcards, a DPO needs to be prepared for a brand-new scenario every day. Taking a piece of legislation (written largely for businesses) and translating it into practical solutions for the education sector can take some talent. Maybe the public appointments office needs to have a look round schools for their next candidate.  

Image of DNA model, with vertical code in green text in the background.

In the second instalment of our Emerging Tech series, we look at the development of commercial genetic testing, and the data protection implications of widespread genetic screening. 

 

“Customers who are genetically similar to you consume 60mg more caffeine a day than average.” 

“You are not likely to be a sprinter/power athlete” 

“Customers like you are more likely to be lactose intolerant” 

“You are less likely to be a deep sleeper” 

These are all reports you can get from commercial genetic testing, from companies such as 23 and Me, Ancestry.com, MyHeritage, and DNAfit. We’ve talked about the rise of genetic testing before, but recent announcements from Richard Branson have brought the topic back into discussion.

Earlier this month Richard Branson announced he was investing in 23 and Me, and the company would be going public (meaning shares will be traded on the New York Stock Exchange). This push for growth and investment has reopened the proverbial can of worms, and people are once again considering the privacy implications of genetic testing. 

What is genetic testing?

Genetic testing takes a DNA sample, such as hair or saliva, and identifies variations in your genetic code. These variants can increase or decrease your risk of developing certain conditions. These tests can also identify variations in ‘junk DNA’ that have no impact on your life, but can be used to identify relatives and ancestors. 

Genetic screening first appeared in the 1950s. Researchers later developed more detailed DNA profiling in the 1980s, used for crime scene investigation. Technology has come on in leaps and bounds since then. Once an expensive and laborious feat, testing is now reasonably affordable, with kits available in many pharmacies or online. In Estonia, the government offers genetic testing to citizens, screening for predisposition to certain conditions and helping individuals act early with personalised lifestyle plans or preventative medication.

There have been suggestions to utilise genetic screening in the Education sector as well. In 2006, two years before 23 and Me began offering their first testing kits, geneticists suggested schools as the perfect place to carry out widespread screening. Researchers have also investigated the possibility of genetically informed teaching, with teaching style tailored to an individual’s predisposition to certain learning styles. 

For those outside education, the biggest development has been Direct to Consumer (DTC) genetic testing. DTC testing began mostly as a tool for ancestry identification; now there are millions of consumers, and even companies offering tailor-made nutrition plans designed around your genetics.

I find myself writing this a lot, but it sounds like science fiction. Yet again, the science of today has caught up with the fiction of yesterday. However, if growing up surrounded by shelves of sci-fi has taught me anything, it’s that a cautious approach is often best. This is definitely true of genetic testing. There are many possible advantages, but there are also risks.

A Breach with Big Implications:

Data breaches are always a possibility when you entrust your information to someone else. However, genetic data is clearly a sensitive type of personal data, particularly if a customer has opted for genetic health screening. 

Companies will put swathes of protective measures in place, but in a world where a cyber-attack occurs approximately once every 39 seconds, there will be breaches. In fact, there already have been. In July last year, hackers targeted the genetic database GEDmatch, and later used the information to target users of MyHeritage. Even without cyberattacks, breaches occur. When recovering from the recent hack, GEDmatch reset all user permissions. This opened up over a million genetic profiles to police forces, despite users having opted out of visibility to law enforcement.

If genetic testing is ever to be used in schools or offered nationwide, one key issue will be ensuring the data is held securely. If schools and colleges offered genetically informed teaching, they would have to hold that data too. Adequate security measures for such information can be difficult to manage, particularly if education budgets stay the same. Infrastructure would require radical change before genetic testing could ever be implemented safely.

Breaches are nothing new, but with such precious data, they can be worrying. 

Secondary Functions and Sources of Discrimination:

Under the Data Protection Act, data controllers must set out what they will use your personal data for. They cannot use that data for unrelated purposes without informing you. However, over recent years, several cases of ambiguity over who can access genetic data have made the news.

Individuals can opt in to share their data with 23 and Me research teams. Many customers were comfortable with researchers using their data for medical advances. It was not until the company’s public deal with GlaxoSmithKline that it became clear genetic data was being passed to pharmaceutical companies for profit.

This data was anonymised, so the outcry following the announcement was more about ethics than data protection. However, there have been multiple cases where companies have allowed law enforcement to access their databases, despite stating otherwise in their privacy policy. 

Your genetic data reveals a huge amount about you and your characteristics, so it’s important to know exactly who can see it. For example, variations of the MAOA gene have been linked to levels of aggression, as well as conditions such as ADHD. Identification of these types of variants could help employers find individuals more likely to succeed in their field. However, it could just as easily lead to discrimination in hiring. Researchers have also linked other conditions, such as bipolar disorder, to certain genetic variants. Should that information be available to employers, it might lead to workplace discrimination: for example, bosses declining to promote individuals they think might later become “unstable.” 

There has been speculation that biological data could be used for identifying terrorist subjects, tracking military personnel, or even rationing out treatment in overstretched health systems. This is all speculation. Even so, there are fears of discrimination based on the possibility of you developing a certain condition or trait. 

The Risk of Re-identification:

The speculation above works on the basis of genetic data being individually identifiable. Companies use anonymisation to reduce the risk of such discrimination, and genetic companies go to great lengths to separate genetic data from identifiers: anonymising data for research purposes, for instance, or storing personal and contact details on a separate server from the genetic data. The view has always been that if you separate personal identifiers from the raw genetic data, the individuals remain anonymous.

Unfortunately, research has already shown that it is possible, in principle, to identify an individual’s genomic profile within a large dataset of pooled data. It’s an interesting thought. Companies are often quite willing to share anonymised data for additional purposes; it is no longer personal data and isn’t protected by the same legal safeguards. But if a data subject can be re-identified, the data requires the same level of security and legal protection as personal data. Dawn Barry, cofounder of genetic research company LunaDNA, said “we need to prepare for a future in which re-identification is possible”.
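The basic mechanics of re-identification can be illustrated with a minimal linkage-attack sketch. Everything below — the records, the field names, the “public register” — is invented for illustration; real attacks work on the same principle but at much larger scale:

```python
# A toy linkage attack: an "anonymised" dataset is re-identified by joining
# it to a named public record on shared quasi-identifiers (postcode area,
# birth year, sex). All records here are invented for illustration.

anonymised = [
    {"postcode": "SW1A", "birth_year": 1962, "sex": "F", "variant": "MAOA-L"},
    {"postcode": "M1", "birth_year": 1985, "sex": "M", "variant": "BRCA1"},
]

public_register = [
    {"name": "A. Example", "postcode": "SW1A", "birth_year": 1962, "sex": "F"},
    {"name": "B. Sample", "postcode": "LS2", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(anon_rows, named_rows):
    """Pair each anonymised row with any named row sharing all quasi-identifiers."""
    matches = []
    for anon in anon_rows:
        for person in named_rows:
            if all(anon[key] == person[key] for key in QUASI_IDENTIFIERS):
                matches.append((person["name"], anon["variant"]))
    return matches

print(reidentify(anonymised, public_register))  # → [('A. Example', 'MAOA-L')]
```

No field in the “anonymised” dataset names anyone, yet one combination of ordinary attributes is unique enough to hand back a name and a genetic variant together.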

If this data could be re-identified, it raises questions over the definition of anonymity. It also reignites the discussion over who Genetic Testing companies should be sharing data with. 

Understandable Worries? Or Needless Fear?

Schools and colleges have always been a proving ground for new technologies. It’s worth remembering that fingerprint scanning had been used in UK schools for over ten years before the Protection of Freedoms Act caught up and enforced parental consent.

It is easy to see how a “scientifically based, individualised learning experience” could be presented as an ideal way of helping all students achieve the best outcomes.

Interestingly, Direct to Consumer genetic testing has now been available for just over a decade, so there is still plenty of room for development. However, we’re still some way from genetic screening shaping the day-to-day life of students in education.

Here’s a sobering thought though. Should the worst happen, and something compromises your data, you can change your passwords, you can change your bank details. You can even change your appearance and your name. You can’t change your DNA. We’ve got to keep that in mind as the world of biometrics continues to grow. 

Next time, we’ll look at remote learning and the technologies that are being developed for the virtual classroom. Find previous posts from this series here.

 

WhatsApp have spent the last month putting out self-inflicted fires. After a disastrous announcement of changes to their terms of service, the company have been scrambling to convince users to stick with the app. However, even with delayed implementation of the new terms of service, and hundreds of reassurances, their PR nightmare has prompted many organisations to take a closer look at their use of the messaging app.

The Story of WhatsApp

WhatsApp have marketed themselves as a safe and secure messaging app from the start in 2009, emphasising their end-to-end encryption, the minimal amounts of data they collect, and the fact that they don’t share that data with anyone. However, Facebook acquired WhatsApp in 2014, much to the chagrin of privacy activists. Many feared it was the start of a slippery slope, leading to abuse of user data. To allay fears, executives reassured users that WhatsApp would operate separately, and data would not be shared with Facebook.

Fast-forward now to the beginning of this year. Upon opening the app, users found a notification that WhatsApp’s privacy policy and terms of service were changing. It seemed they were going against their promises and intended to share user information with Facebook. For users in the EU and the UK, there was additional confusion. It wasn’t clear what changes would be applicable to those under the GDPR.  There was chaos, there was confusion, and there was a lot of hopping to new messaging platforms.

Ultimately, WhatsApp cut their losses and delayed implementation to May, in order to re-evaluate their plans. Even so, it’s left a sour taste in everyone’s mouth, and that might not be a bad thing. It’s hard to deny the ease of using WhatsApp, but is it really appropriate for professional use?

 

Messaging Apps and Privacy Problems

Are you part of an employee WhatsApp group? Many people would answer yes. A study from 2019 found that 53% of global frontline workers check messaging apps up to six times a day for work-related issues. Over half of respondents were using personal messaging apps like WhatsApp for professional correspondence. There are plenty of issues with this, as you can see below:

 

  1. The first issue is that business use actually goes against the WhatsApp terms of service, which prohibit any “non-personal use of our Services unless otherwise authorized by us.” Violating these terms could lead to suspension or deletion of your account, but there are additional data protection issues with the app. When you create a WhatsApp account, you add your list of contacts to the app, meaning you upload the data of other individuals without their consent. When using WhatsApp personally, this is less of a problem, but if you use WhatsApp for business purposes, any processing of personal data falls under the GDPR.

  2. Individuals can also be added to a WhatsApp group without giving consent. Once added, anyone else in the group can see their contact information, any information held within their bio, and when they were last active. Unless you have provided every member of staff with a work mobile, employees will be using their personal numbers to create WhatsApp accounts. Create an “All Staff WhatsApp Group” and you’ve just handed out the personal contact numbers of all your employees. While you may think you know your staff well, you can’t be sure there aren’t underlying tensions or conflicts that could escalate should one member of staff be able to contact another outside of work hours. Ultimately, it’s not why you originally collected that data, and it’s not how it should be used.

  3. It’s not a sensible place to discuss school or college related matters. It’s not unusual to hear someone groaning because their phone has deleted all their chat history, or that they left their phone in a taxi at the weekend. Should something happen to your phone or your WhatsApp account, you could be dealing with a breach of availability. Conversely, when you work with personal data, you need to delete it after the appropriate time; a rather complicated venture when it’s sitting on hundreds of mobile devices.

  4. Finally, discussing work on a personal device always increases the risk of a breach of confidentiality. In the 2019 study, 30% of respondents found that the 24/7 nature of messaging apps made it hard to maintain a work/personal life balance. Answering work queries late at night, or in the middle of personal time, can lead to sending information to the incorrect group. Indeed, 12% of respondents said they worried about a serious data breach via a messaging app.

 

Using Messaging Apps for Work

These are just a few of the complications that can arise from using messaging apps like WhatsApp. They are easy to use, but not designed for business communication. Is it time to retire the faithful green speech bubble? For business communication, it’s certainly worth considering. Finding an alternative can be difficult, but there are business messaging apps out there.

A woman lying in bed on her side in a dark room, illuminated by the screen of her phone. She is yawning and covering her mouth with her left hand, whilst her right holds her phone.

The 24-hour nature of messenger apps can make it hard to keep business to appropriate hours.

However, it’s best to take a moment and assess whether a new messaging app would be the best step forward. They’re hard to regulate and it’s difficult to ensure people only see information they need to see. With staff in many schools and colleges taking on multiple roles, messaging groups can get quite messy. Add on that many of these groups operate under the noses of HR, and you get a breeding ground for breaches.

Image of face breaking into cubes, representing AI and Machine Learning

Anyone involved in last year’s exam grade saga probably harbours a level of resentment against algorithms. 

The government formula was designed to standardise grades across the country. Instead, it affected students disproportionately, raising grades for students in smaller classes and more affluent areas. Conversely, students in poorer-performing schools had their grades reduced, based on their schools’ results from previous years.

Most of us are well versed in the chaos that followed. Luckily, the government have already confirmed that this year’s results will be mercifully algorithm-free.  

We touched on the increased use of AI in education in an article last year. Simple algorithms are already used to mark work in online learning platforms. Other systems can trawl through the websites people visit and the things that they write, looking for clues about poor mental health or radicalisation. Even these simple systems can create problems, but the future brings machine learning algorithms designed to support detailed decision making with major impacts on people’s lives. Many see machine learning as an incredible opportunity for efficiency, but it is not without its controversies.

Image-generation algorithms have been the latest to cause issues. A new study from Carnegie Mellon University and George Washington University found that unsupervised machine learning led to ‘baked-in biases’. Namely, the assumption that women simply prefer not to wear clothes. When researchers fed the algorithm pictures of a man cropped below his neck, 43% of the time the image was autocompleted with the man wearing a suit. When they fed it similarly cropped photographs of women, 53% of the time it autocompleted with a woman in a bikini or a low-cut top.

In a more worrying example of machine-learning bias, a man in Michigan was arrested and held for 30 hours after a false positive facial recognition match. Facial recognition software has been found to be mostly accurate for white males but woefully inadequate for other demographics.


Where it all goes wrong:

These issues arise because of one simple problem: garbage in, garbage out. Machine learning engines take mountains of previously collected data and trawl through them to identify patterns and trends. They then use those patterns to predict or categorise new data. However, feed an AI biased data, and it will spit out a biased response.

An easy way to understand this is to imagine you take German lessons twice a week and French lessons every other month. Should someone talk to you in German, there’s a good chance you’ll understand, and be able to form a sensible reply. However, should someone ask you a question in French, you’re a lot less likely to understand, and your answer is more likely to be wrong. Facial recognition algorithms are often trained on a predominantly white dataset. The lack of diversity means that when the algorithm comes across data from another demographic, it can’t make an accurate prediction.

Coming back to image generation, the reality of the internet is that images of men are a lot more likely to be ‘safe for work’ than those of women. Feed that to an AI, and it’s easy to see how it would assume women just don’t like clothes.  
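The garbage-in, garbage-out effect can be sketched with a toy “autocomplete” model. The training counts below are invented, deliberately skewed data, loosely echoing the percentages in the study above; a real model is vastly more complex, but the principle is the same:

```python
from collections import Counter, defaultdict

# A toy autocomplete model: it completes a prompt with whatever continuation
# it saw most often in training. The skewed counts below are invented for
# illustration only.
training_pairs = (
    [("man", "suit")] * 43 + [("man", "t-shirt")] * 57
    + [("woman", "bikini")] * 53 + [("woman", "suit")] * 47
)

def train(pairs):
    counts = defaultdict(Counter)
    for prompt, completion in pairs:
        counts[prompt][completion] += 1
    # The "model" is just the most frequent continuation for each prompt,
    # so any skew in the training data is reproduced directly in the output.
    return {prompt: c.most_common(1)[0][0] for prompt, c in counts.items()}

model = train(training_pairs)
print(model)  # the skewed data yields correspondingly skewed completions
```

The model never decides anything; it simply mirrors whichever pattern dominated its training data, which is exactly how web-scraped image sets end up baked into an algorithm’s assumptions.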

AI in Applications:

While there’s no denying that being wrongfully arrested would have quite an impact on your life, it’s not something you see every day. However, most people will experience the job application process. Algorithms are shaking things up here too.  

Back in 2018, Reuters reported that Amazon’s machine learning specialists scrapped their recruiting engine project. Designed to rank hundreds of applications and spit out the top five or so applicants, the engine was trained to detect patterns in résumés from the previous ten years.  

In an industry dominated by men, most résumés came from male applicants. Amazon’s algorithm therefore copied the pattern, learning to lower the ratings of CVs including the word “women’s”. Should someone mention they captain a women’s debating team, or play on a women’s football team, their résumé would automatically be downgraded. Amazon ultimately ended the project, but individuals within the company have stated that Amazon recruiters did look at the generated recommendations when hiring new staff.
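Stripped of the machine learning machinery, the learned behaviour amounts to a keyword penalty. The scorer below is a hypothetical sketch — the term list, weights, and base score are all invented — but it shows how a pattern absorbed from skewed historical data behaves in practice:

```python
# Toy CV scorer illustrating a learned keyword penalty. The penalised term,
# its weight, and the base score are invented for illustration; the real
# engine derived its penalties from ten years of skewed historical résumés.
PENALISED_TERMS = {"women's": -2}

def score_cv(text, base=5):
    """Return a rating for a CV, docked for each penalised term it contains."""
    score = base
    lowered = text.lower()
    for term, weight in PENALISED_TERMS.items():
        if term in lowered:
            score += weight
    return score

score_cv("Captain of the women's debating team")  # scores 3
score_cv("Captain of the debating team")          # scores 5
```

Two otherwise identical CVs get different ratings purely because one mentions a women’s team — no one wrote that rule deliberately; the data did.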

Image of white robotic hand pointing at a polaroid of a man in a suit, with two other polaroids to the left and one to the right. The robot is selecting the individual in the picture they are pointing at.

Algorithms are already in use for recruitment. Some sift through CVs looking for keywords. Others analyse facial expressions and mannerisms during interviews.

Protection from Automated Processing:

Amazon’s experimental engine clearly illustrated how automated decision making can drastically affect the rights and freedoms of individuals. It’s why the GDPR includes specific safeguards against automated decision-making.  

Article 22 states that, apart from a few exceptions, an individual has the right not to be subject to a decision based solely on automated processing. Individuals have the right to obtain human intervention should they contest the decision made, and in most cases an individual’s explicit consent should be gathered before using any automated decision making.
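In system design terms, that safeguard can be expressed as a gate: a solely automated decision is never released until a human has reviewed it. The sketch below is hypothetical — the class, field, and function names are all invented — and a real workflow would also log the review and let the subject contest the outcome:

```python
from dataclasses import dataclass
from typing import Optional

# A minimal, hypothetical sketch of an Article 22-style safeguard: a solely
# automated decision cannot be finalised without a named human reviewer.
@dataclass
class Decision:
    subject: str
    outcome: str
    automated: bool = True
    reviewed_by: Optional[str] = None

def finalise(decision: Decision, reviewer: Optional[str] = None) -> Decision:
    """Refuse to release a solely automated decision without human review."""
    if decision.automated and reviewer is None:
        raise PermissionError("solely automated decision requires human review")
    decision.reviewed_by = reviewer
    return decision
```

The point of the pattern is that the human check is enforced by the system itself, rather than left as a policy that busy staff can skip.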

This is becoming increasingly important to remember as technology continues to advance. Amazon’s experiment may have fallen through, but there are still AI-powered hiring products on the market. Companies such as Modern Hire and Hirevue provide interview analysis software, automatically generating ratings based on an applicant’s facial expressions and mannerisms. Depending on the datasets these products were trained on, these machines may also be brimming with biases.  

As data controllers, we must keep assessing the data protection impact of every product and every process. Talking to wired.co.uk, Ivana Bartoletti (Technical Director, Privacy, at consultancy firm Deloitte) stated that she believed the current Covid-19 pandemic will push employers to implement AI-based recruitment processes at “rocket speed”, and that these automated decisions can “lock people out of jobs”.

Battling Bias:

We live in a world where conscious and unconscious bias affects the lives and chances of many individuals. If we teach AI systems based on the world we have now, it’s little wonder that the results end up the same. With the mystique of a computer-generated answer, people are less likely to question it.

As sci-fi fantasy meets workplace reality (and it’s going to reach recruitment in schools and colleges first), it is our job to build in safeguards and protections. Building in a human check, informing data subjects, and completing Data Protection Impact Assessments are all tools to protect rights and freedoms in the battle against biased AI.

Heavy stuff. It seems only right to finish with a machine learning joke: 

A machine learning algorithm walks into a bar… 

The bartender asks, “What will you have?” 

The algorithm immediately responds, “What’s everyone else having?” 

 

The technologies used to process personal data are becoming more sophisticated all the time.

This is the first article of an occasional series where we will examine the impact of emerging technology on Data Protection. Next time, we’ll be looking at new technologies in the area of remote learning.