Last month, the U.K. Commissioner for Public Appointments posted an advertisement for a new Information Commissioner. Current Commissioner Elizabeth Denham previously announced that she was leaving her post in October, having overseen the UK’s transition to new data protection laws.  

Whoever is hired will be stepping into quite a sizable pair of shoes. Data protection complaints doubled in 2018/19, from around 21,000 to over 40,000 complaints. There were slightly fewer complaints in 2019/20, but the number is still far higher than before the implementation of GDPR. This is not necessarily a bad thing; the rise in numbers is partially due to more stringent rules, but it has also come from increased public awareness around data protection. Individuals are learning to be a bit savvier about their data, and they’re learning where to go when they feel their data is being misused. 

Awareness is important, because we are producing personal data almost constantly. All our time on electronic devices, our use of smart technology, even the signing of a humble visitor book: it all creates data. In 2025, we’re due to hit a total of 175 zettabytes of data in the global datasphere. A single zettabyte is around a trillion gigabytes. Not all of that is personal data, but the numbers are still rather overwhelming to think about. Increasing amounts of data, and increased pressure on the Information Commissioner’s Office (ICO), must make the job of Information Commissioner rather daunting to apply for.   
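For those who like to see the arithmetic, here is a quick back-of-the-envelope check, assuming the standard decimal definitions (1 GB = 10⁹ bytes, 1 ZB = 10²¹ bytes):

```python
# Back-of-the-envelope scale check for the 175 ZB figure.
# Assumes decimal SI units: 1 GB = 10**9 bytes, 1 ZB = 10**21 bytes.
GB = 10**9
ZB = 10**21

gb_per_zb = ZB // GB        # gigabytes in one zettabyte
total_gb = 175 * ZB // GB   # the projected 2025 datasphere, in gigabytes

print(f"{gb_per_zb:,}")     # 1,000,000,000,000 -- a trillion gigabytes
print(f"{total_gb:,}")
```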

 

What is an Information Commissioner?

The Information Commissioner is the head of the ICO. For the most part, they act as the public head of the body and lead the organisation through its strategic development. The Information Commissioner looks at the big picture, while those employed by the ICO manage the day-to-day duties of the office.  

In the government’s advertisement for the role, they describe some of the duties of the ICO: 

  • Give advice to members of the public about their information rights; 
  • Give guidance to organisations about their obligations with respect to information rights; 
  • Help and advise businesses on how to comply with data protection laws;  
  • Gather and deal with concerns raised by members of the public; 
  • Support the responsible use of data;  
  • Take action to improve the information rights practices of organisations; and 
  • Co-operate with international partners, including other data protection authorities. 

 

To a Data Protection Officer (DPO), this role description might sound familiar. Most Data Protection Officers work within a smaller remit, and unfortunately there’s rarely an opportunity for an international visit. However, many of the same principles apply. Data Protection Officers need a good memory and a keen eye for relevant legislation. In schools and colleges, they also need strong problem-solving skills. In an environment with a lot of personal data, and quite a few wildcards, a DPO needs to be prepared for a brand-new scenario every day. Taking a piece of legislation (written largely for businesses) and translating it into practical solutions for the education sector can take some talent. Maybe the public appointments office needs to have a look round schools for their next candidate.  

In the second instalment of our Emerging Tech series, we look at the development of commercial genetic testing, and the data protection implications of widespread genetic screening. 

 

“Customers who are genetically similar to you consume 60mg more caffeine a day than average.” 

“You are not likely to be a sprinter/power athlete” 

“Customers like you are more likely to be lactose intolerant” 

“You are less likely to be a deep sleeper” 

These are all reports you can get from commercial genetic testing, offered by companies such as 23 and Me, Ancestry.com, MyHeritage, and DNAfit. We’ve talked about the rise of genetic testing before, but recent announcements from Richard Branson have brought the topic back into discussion. 

Earlier this month Richard Branson announced he was investing in 23 and Me, and the company would be going public (meaning shares will be traded on the New York Stock Exchange). This push for growth and investment has reopened the proverbial can of worms, and people are once again considering the privacy implications of genetic testing. 

What is genetic testing?

Genetic testing takes a DNA sample, such as hair or saliva, and identifies variations in your genetic code. These variants can increase or decrease your risk of developing certain conditions. These tests can also identify variations in ‘junk DNA’ that have no impact on your life, but can be used to identify relatives and ancestors. 

Genetic screening first appeared in the 1950s. Researchers later developed more detailed DNA profiling in the 1980s, used for crime scene investigation. Technology has come on in leaps and bounds since then. Once an expensive feat, genetic testing is now available as a reasonably affordable kit in many pharmacies or online. In Estonia, the government are offering genetic testing to citizens, to screen for predisposition to certain conditions and help individuals act early with personalised lifestyle plans or preventative medication. 

There have been suggestions to utilise genetic screening in the Education sector as well. In 2006, two years before 23 and Me began offering their first testing kits, geneticists suggested schools as the perfect place to carry out widespread screening. Researchers have also investigated the possibility of genetically informed teaching, with teaching style tailored to an individual’s predisposition to certain learning styles. 

For those outside education, the biggest development has been Direct to Consumer (DTC) genetic testing. DTC testing began mostly as a tool for ancestry identification; now there are millions of consumers, and even companies offering tailor-made nutrition plans designed around your genetics. 

I find myself writing this a lot, but it sounds like science fiction. Yet again, the science of today has caught up with the fiction of yesterday. However, if growing up surrounded by shelves of sci-fi has taught me anything, it’s that a cautious approach is often best. This is definitely true of genetic testing. There are many possible advantages, but there are also risks. 

A Breach with Big Implications:

Data breaches are always a possibility when you entrust your information to someone else. However, genetic data is clearly a sensitive type of personal data, particularly if a customer has opted for genetic health screening. 

Companies will put swathes of protective measures in place, but in a world where a cyber-attack occurs approximately once every 39 seconds, there will be breaches. In fact, there already have been. In July last year, hackers targeted the genetic database GEDmatch, and later used the information to target users of MyHeritage. Even without cyberattacks, breaches occur. When recovering from the recent hack, GEDmatch reset all user permissions. This opened up over a million genetic profiles to police forces, despite users having opted out of visibility to law enforcement. 

If genetic testing is ever to be used in schools or offered nationwide, one key issue will be holding that data securely. If schools and colleges offered genetically informed teaching, they would have to hold genetic data too. Adequate security measures for such information can be difficult to manage, particularly if education budgets stay the same. Infrastructure would require radical change before genetic testing could ever be implemented safely. 

Breaches are nothing new, but with such precious data, they can be worrying. 

Secondary Functions and Sources of Discrimination:

Under the Data Protection Act, data controllers must set out what they will use your personal data for. They cannot use that data for unrelated purposes without informing you. However, over recent years, several cases where it was unclear who could access genetic data have made the news. 

Individuals can opt in to share their data with 23 and Me research teams. Many customers were comfortable with researchers using their data for medical advances. It was not until the company’s public deal with GlaxoSmithKline that it became clear genetic data was being passed to pharmaceutical companies for profit. 

This data was anonymised, so the outcry following the announcement was more about ethics than data protection. However, there have been multiple cases where companies have allowed law enforcement to access their databases, despite stating otherwise in their privacy policy. 

Your genetic data reveals a huge amount about you and your characteristics, so it’s important to know exactly who can see it. For example, variations of the MAOA gene have been linked to levels of aggression, as well as conditions such as ADHD. Identification of these types of variants could help employers find individuals more likely to succeed in their field. However, it could just as easily lead to discrimination in hiring. Researchers have also linked conditions such as bipolar disorder to certain genetic variants. Should that information be available to employers, it might lead to workplace discrimination; for example, bosses not promoting individuals they think might later become “unstable.” 

There has been speculation that biological data could be used for identifying terrorist subjects, tracking military personnel, or even rationing out treatment in overstretched health systems. This is all speculation. Even so, there are fears of discrimination based on the possibility of you developing a certain condition or trait. 

The Risk of Re-identification:

The speculation above works on the basis of genetic data being individually identifiable. Companies use anonymisation to reduce the risk of such discrimination, going to great lengths to separate genetic data from identifiers: for instance, anonymising data for research purposes, or storing personal and contact details on a separate server from the genetic data. The view has always been that if you separate personal identifiers from the raw genetic data, the individuals remain anonymous. 

Unfortunately, research has already shown that it is possible, in principle, to identify an individual’s genomic profile from a large dataset of pooled data. It’s an interesting thought. Companies are often quite willing to share anonymised data for additional purposes; it is no longer personal data and isn’t protected by the same legal safeguards. But if a data subject can be re-identified, the data requires the same levels of security and legal protection as personal data. Dawn Barry, cofounder of genetic research company LunaDNA, said “we need to prepare for a future in which re-identification is possible”. 
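To see why separating identifiers isn’t always enough, here is a toy “linkage attack” with entirely invented records and field names: an anonymised dataset stripped of names can still be joined to auxiliary public data on shared quasi-identifiers.

```python
# Toy linkage attack: re-identifying "anonymised" records by joining
# on quasi-identifiers. All data here is invented for illustration.

# Anonymised research release: names removed, quasi-identifiers kept.
anonymised = [
    {"year_of_birth": 1982, "postcode_prefix": "LS1", "variant": "MAOA-L"},
    {"year_of_birth": 1990, "postcode_prefix": "SW4", "variant": "LCT-13910"},
]

# Auxiliary public data (e.g. scraped profiles) with names attached.
public = [
    {"name": "A. Smith", "year_of_birth": 1982, "postcode_prefix": "LS1"},
    {"name": "B. Jones", "year_of_birth": 1990, "postcode_prefix": "SW4"},
]

def reidentify(anonymised, public):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = {}
    for record in anonymised:
        for person in public:
            if (person["year_of_birth"] == record["year_of_birth"]
                    and person["postcode_prefix"] == record["postcode_prefix"]):
                matches[person["name"]] = record["variant"]
    return matches

print(reidentify(anonymised, public))
# {'A. Smith': 'MAOA-L', 'B. Jones': 'LCT-13910'}
```

With only two quasi-identifiers and two records the join is trivial; real attacks work the same way, just at scale.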

If this data could be re-identified, it raises questions over the definition of anonymity. It also reignites the discussion over who genetic testing companies should be sharing data with. 

Understandable Worries? Or Needless Fear?

Schools and colleges have always been a proving ground for new technologies. It’s worth remembering that fingerprint scanning was used in UK schools for over ten years before the Protection of Freedoms Act caught up and enforced parental consent.  

It is easy to see how a “scientifically based, individualised learning experience” could be presented as an ideal way of helping all students achieve the best outcomes.  

Interestingly, Direct to Consumer genetic testing has been available for just over a decade, so there is still plenty of room for development. However, we’re still some way from genetics determining the day-to-day life of students in education. 

Here’s a sobering thought though. Should the worst happen, and something compromises your data, you can change your passwords, you can change your bank details. You can even change your appearance and your name. You can’t change your DNA. We’ve got to keep that in mind as the world of biometrics continues to grow. 

Next time, we’ll look at remote learning and the technologies that are being developed for the virtual classroom. Find previous posts from this series here.

 

WhatsApp have spent the last month putting out self-inflicted fires. After a disastrous announcement of changes to their terms of service, the company have been scrambling to convince users to stick with the app. However, even with delayed implementation of the new terms of services, and hundreds of reassurances, their PR nightmare has prompted many organisations to take a closer look at their use of the messaging app.

The Story of WhatsApp

WhatsApp have marketed themselves as a safe and secure messaging app since their launch in 2009, emphasising their end-to-end encryption, the minimal amount of data they collect, and the fact that they don’t share that data with anyone. However, Facebook acquired WhatsApp in 2014, much to the chagrin of privacy activists. Many feared it was the start of a slippery slope, leading to abuse of user data. To allay fears, executives reassured users that WhatsApp would operate separately, and data would not be shared with Facebook.

Fast-forward now to the beginning of this year. Upon opening the app, users found a notification that WhatsApp’s privacy policy and terms of service were changing. It seemed they were going against their promises and intended to share user information with Facebook. For users in the EU and the UK, there was additional confusion. It wasn’t clear what changes would be applicable to those under the GDPR.  There was chaos, there was confusion, and there was a lot of hopping to new messaging platforms.

Ultimately, WhatsApp cut their losses and delayed implementation to May, in order to re-evaluate their plans. Even so, it’s left a sour taste in everyone’s mouth, and that might not be a bad thing. It’s hard to deny the ease of using WhatsApp, but is it really appropriate for professional use?

 

Messaging Apps and Privacy Problems

Are you part of an employee WhatsApp group? Many people would answer yes. A 2019 study found that 53% of global frontline workers check messaging apps up to six times daily for work-related issues. Over half of respondents were using personal messaging apps like WhatsApp for professional correspondence. There are plenty of issues with this, as you can see below:

 

  1. The first issue is that business use actually goes against the WhatsApp terms of service. The terms of service prohibit any “non-personal use of our Services unless otherwise authorized by us.” Violating these terms could lead to suspension or deletion of your account, but there are additional data protection issues with the app. When you create a WhatsApp account, you add your list of contacts to the app, meaning you upload the data of other individuals without their consent. When using WhatsApp personally, this is less of a problem, but if you use WhatsApp for business purposes, any processing of personal data falls under the GDPR.

 

  2. Individuals can also be added to a WhatsApp group without giving consent. Once added, anyone else in the group can see their contact information, any information held within their bio, and when they were last active. Unless you have provided every member of staff with a work mobile, employees will be using their personal numbers to create WhatsApp accounts. Create an “All Staff WhatsApp Group” and you’ve just handed out the personal contact numbers of all your employees. While you may think you know your staff well, you can’t be sure there aren’t underlying tensions or conflicts that could escalate should one member of staff be able to contact another outside of work hours. Ultimately, it’s not why you originally collected that data, and it’s not how it should be used.

 

  3. It’s not a sensible place to discuss school or college related matters. It’s not unusual to hear someone groaning because their phone has deleted all their chat history. Nor is it unusual to hear that someone left their phone in a taxi at the weekend. Should something happen to your phone or your WhatsApp account, you could be dealing with a breach of availability. Conversely, when you work with personal data, you need to delete it after the appropriate time; a rather complicated venture when it’s sitting on hundreds of mobile devices.

 

  4. Finally, discussing work on a personal device always increases the risk of a breach of confidentiality. In the 2019 study, 30% of respondents found that the 24/7 nature of messaging apps made it hard to maintain a work/personal life balance. Answering work queries late at night, or in the middle of personal time, can lead to sending information to the incorrect group. Indeed, 12% of respondents said they worried about a serious data breach via a messaging app.

 

Using Messaging Apps for Work

These are just a few of the complications that can arise from using messaging apps like WhatsApp. They are easy to use, but not designed for business communication. Is it time to retire the faithful green speech bubble? For business communication, it’s certainly worth considering. Finding an alternative can be difficult, but there are business messaging apps out there.


The 24-hour nature of messenger apps can make it hard to keep business to appropriate hours.

However, it’s best to take a moment and assess whether a new messaging app would be the best step forward. They’re hard to regulate and it’s difficult to ensure people only see information they need to see. With staff in many schools and colleges taking on multiple roles, messaging groups can get quite messy. Add on that many of these groups operate under the noses of HR, and you get a breeding ground for breaches.

Anyone involved in last year’s exam grade saga probably harbours a level of resentment against algorithms. 

The government formula was designed to standardise grades across the country. Instead, it affected students disproportionately, raising grades for students in smaller classes and more affluent areas, while students in poorer-performing schools had their grades reduced based on their schools’ results in previous years.  

Most of us are well versed in the chaos that followed. Luckily, the government have already confirmed that this year’s results will be mercifully algorithm-free.  

We touched on the increased use of AI in education in an article last year. Simple algorithms are already used to mark work in online learning platforms. Other systems can trawl through the websites people visit and the things that they write, looking for clues about poor mental health or radicalisation. Even these simple systems can create problems, but the future brings machine learning algorithms designed to support detailed decision making with major impacts on people’s lives. Many see machine learning as an incredible opportunity for efficiency, but it is not without its controversies.  

Image-generation algorithms have been the latest to cause issues. A new study from Carnegie Mellon University and George Washington University found that unsupervised machine learning led to ‘baked-in biases’. Namely, the assumption that women simply prefer not to wear clothes. When researchers fed the algorithm pictures of a man cropped below his neck, 43% of the time the image was auto-completed with the man wearing a suit. When they fed it similarly cropped photographs of women, 53% of the time it auto-completed with a woman in a bikini or a low-cut top.  

In a more worrying example of machine-learning bias, a man in Michigan was arrested and held for 30 hours after a false positive facial recognition match. Facial recognition software has been found to be mostly accurate for white males, but for other demographics it is woefully inadequate.  


Where it all goes wrong:

These issues arise because of one simple problem: garbage in, garbage out. Machine learning engines take mountains of previously collected data and trawl through them to identify patterns and trends. They then use those patterns to predict or categorise new data. However, feed an AI biased data, and it’ll spit out a biased response.

An easy way to understand this is to imagine you take German lessons twice a week and French lessons every other month. Should someone talk to you in German, there’s a good chance you’ll understand, and be able to form a sensible reply. However, should someone ask you a question in French, you’re a lot less likely to understand, and your answer is more likely to be wrong. Facial recognition algorithms are often taught with a white leaning dataset. The lack of diversity means that when the algorithm comes across data from another demographic, it can’t make an accurate prediction.  

Coming back to image generation, the reality of the internet is that images of men are a lot more likely to be ‘safe for work’ than those of women. Feed that to an AI, and it’s easy to see how it would assume women just don’t like clothes.  
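The “garbage in, garbage out” effect is easy to reproduce. The sketch below is a deliberately crude illustration with invented data, not any real system: a model trained on a dataset dominated by one group does well on that group and poorly on the group it rarely saw.

```python
from collections import Counter

# Invented, deliberately skewed training data: each example is
# (group, true_label), with group "A" heavily over-represented.
train = ([("A", "match")] * 90 + [("A", "no_match")] * 10
         + [("B", "match")] * 2 + [("B", "no_match")] * 3)

# A crude "model": always predict the most common label seen in training.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(group):
    # The group is ignored entirely; the majority pattern is baked in.
    return majority_label

# Held-out test labels for each group (also invented).
held_out = {"A": ["match"] * 9 + ["no_match"] * 1,
            "B": ["match"] * 4 + ["no_match"] * 6}

accuracy = {g: sum(predict(g) == label for label in labels) / len(labels)
            for g, labels in held_out.items()}
print(accuracy)   # {'A': 0.9, 'B': 0.4} -- fine for "A", poor for "B"
```

Real systems are vastly more sophisticated, but the failure mode is the same: the model faithfully reproduces whatever imbalance its training data contained.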

AI in Applications:

While there’s no denying that being wrongfully arrested would have quite an impact on your life, it’s not something you see every day. However, most people will experience the job application process. Algorithms are shaking things up here too.  

Back in 2018, Reuters reported that Amazon’s machine learning specialists scrapped their recruiting engine project. Designed to rank hundreds of applications and spit out the top five or so applicants, the engine was trained to detect patterns in résumés from the previous ten years.  

In an industry dominated by men, most résumés came from male applicants. Amazon’s algorithm therefore copied the pattern, learning to lower the ratings of CVs including the word “women’s”. Should someone mention they captain a women’s debating team, or play on a women’s football team, their résumé would automatically be downgraded. Amazon ultimately ended the project, but individuals within the company have stated that Amazon recruiters did look at the generated recommendations when hiring new staff. 
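To illustrate the shape of the problem (and only the shape: the keywords, weights, and scores below are invented, not Amazon’s actual model), a scoring engine that has “learned” a penalty from biased historical data might look like this:

```python
# Toy illustration of a learned keyword bias in CV screening.
# The bonus/penalty lists are invented; this mimics the *shape*
# of the reported behaviour, not any real recruiting engine.

LEARNED_BONUSES = {"python": 1, "leadership": 1}
LEARNED_PENALTIES = {"women's": -2}   # pattern absorbed from biased history

def score_cv(text):
    """Score a CV by summing keyword bonuses and penalties."""
    lowered = text.lower()
    score = sum(b for word, b in LEARNED_BONUSES.items() if word in lowered)
    score += sum(p for word, p in LEARNED_PENALTIES.items() if word in lowered)
    return score

print(score_cv("Python developer, leadership experience"))             # 2
print(score_cv("Python developer, captain of women's football team"))  # -1
```

Identical qualifications, different scores: the second CV is downgraded purely for a word correlated with gender in the historical data.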


Algorithms are already in use for recruitment. Some sift through CVs looking for keywords. Others analyse facial expressions and mannerisms during interviews.

Protection from Automated Processing:

Amazon’s experimental engine clearly illustrated how automated decision making can drastically affect the rights and freedoms of individuals. It’s why the GDPR includes specific safeguards against automated decision-making.  

Article 22 states that (apart from a few exceptions), an individual has the right not to be subject to a decision based solely on automated processing. Individuals have the right to obtain human intervention, should they contest the decision made, and in most cases an individual’s explicit consent should be gathered before using any automated decision making.  
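As a rough sketch of what an Article 22-style safeguard can look like in practice (the function and field names below are illustrative, not a reference implementation):

```python
# Minimal sketch of an Article 22-style safeguard: a decision made
# solely by automated means must be routed to a human when the data
# subject contests it. Names here are illustrative only.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    solely_automated: bool
    contested: bool = False
    human_reviewed: bool = False

def finalise(decision: Decision) -> str:
    # The right to human intervention: a contested, solely automated
    # decision cannot stand until a person has reviewed it.
    if (decision.solely_automated and decision.contested
            and not decision.human_reviewed):
        return "pending human review"
    return decision.outcome

auto = Decision(outcome="rejected", solely_automated=True, contested=True)
print(finalise(auto))       # "pending human review"

auto.human_reviewed = True
print(finalise(auto))       # "rejected" -- upheld after human review
```

The point is the workflow, not the code: somewhere in the process there must be a gate that a human, not the model, controls.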

This is becoming increasingly important to remember as technology continues to advance. Amazon’s experiment may have fallen through, but there are still AI-powered hiring products on the market. Companies such as Modern Hire and HireVue provide interview analysis software, automatically generating ratings based on an applicant’s facial expressions and mannerisms. Depending on the datasets these products were trained on, these machines may also be brimming with biases.  

As Data Controllers, we must keep assessing the data protection impact of every product and every process. Talking to wired.co.uk, Ivana Bartoletti (Technical Director, Privacy, at consultancy firm Deloitte) said she believed the current Covid-19 pandemic would push employers to implement AI-based recruitment processes at “rocket speed”, and that these automated decisions can “lock people out of jobs”.

Battling Bias:

We live in a world where conscious and unconscious bias affects the lives and chances of many individuals. If we teach AI systems based on the world we have now, it’s little wonder that the results end up the same. And with the mystique of a computer-generated answer, people are less likely to question it. 

As sci-fi fantasy meets workplace reality (and it’s going to reach recruitment in schools and colleges first), it is our job to build in safeguards and protections. Building in a human-based check, informing data subjects, and completing Data Protection Impact Assessments are all tools to protect rights and freedoms in the battle against biased AI.  

Heavy stuff. It seems only right to finish with a machine learning joke: 

A machine learning algorithm walks into a bar… 

The bartender asks, “What will you have?” 

The algorithm immediately responds, “What’s everyone else having?” 

 

The technologies used to process personal data are becoming more sophisticated all the time.

This is the first article of an occasional series where we will examine the impact of emerging technology on Data Protection. Next time, we’ll be looking at new technologies in the area of remote learning.

Today is Data Protection Day. It’s not on the front page of the papers, but you might see a little notification on the bottom of the Google Homepage.

In 2007, the Council of Europe designated January 28th Data Protection Day (or Data Privacy Day in other parts of the world), to highlight the importance of protecting our own personal data. The reminder is not just to individuals. It also serves as a reminder to organisations that a right to a private life is a fundamental human right.

How does this relate to Education?

Schools and colleges have been feeling the strain. Home learning has been extended for at least another three weeks, and many schools are trying to find new and inventive ways to facilitate learning. Teachers are searching every corner of the internet for additional tools and software, just to give their students the best shot at success. Administrators are in overdrive, devising and executing plans to roll out internet connections and new devices to disadvantaged families. There is a lot of hard work going on behind the scenes to keep young people from being left behind.

These moves are important. They’re integral to the continued support of students in the pandemic, but they are not without risk. Many of the new applications in use are based in the USA, where data protection laws are far less stringent. Schools and colleges must consider whether these applications are adequately protecting the privacy of their students. With the EU-US privacy shield now defunct (as discussed in a previous blog post), ensuring adequacy takes a little more work. We must also consider the possibility that families provided with an electronic device may be unfamiliar with the risks of the online world.

Schools and colleges must also be cognisant of the risks around collaborative working. Socialisation is vital for young people. Without the community of the classroom, teachers are relying on collaborative working solutions to provide that important interaction. However, staying alert is key. Take care to ensure details about individuals are not disclosed in error, and be extra aware of the use of personal email addresses. With so many more emails flying about, additional errors are inevitable.

Staying Alert

So, on a day where even Google is reminding you about Data Privacy, there are a few simple questions we can keep in mind to help us keep personal data safe:

  • Do members of staff have a simple process to keep online sessions secure?
  • Are you taking steps to minimise the risks of sharing email addresses?
  • Are new applications being checked by your Data Protection team before you use them?
  • Have you thought ahead about the risks that will come up when students do start to return?

Taking two seconds to run through a quick privacy checklist can save you a lot of difficulty in the future. Privacy is a fundamental right, and it’s a right we can all work to protect.

With all of that said, Happy Data Protection Day!

 

It has now been over a year since Chinese authorities reported the first case of Covid-19 to the World Health Organisation. This year has brought tragedy for the many people who have lost loved ones. It’s also brought difficulties for all, with restrictions on our daily life that haven’t been seen since the 1940s.

Mental wellbeing and Data Protection?

We’ve had months of lockdown, daily emergency briefings reinforcing the gloomy outlook. We live under more tiers than a wedding cake, and enough Government U-Turns to make a driving instructor pale. Many of us are feeling long term fatigue even though our physical activity levels have dropped.

This is because we have all been facing the stress of COVID-19. When we face psychological stress, our bodies enter ‘fight or flight’ mode: heart rate and blood pressure go up, while non-essential functions like digestion and the immune system are suppressed.

This response is ideal if you’re being chased by a bear. However, with a long-term stressor like COVID-19, you’re on high alert for long periods. This takes a toll on your energy levels and leaves you at risk from the normal coughs and colds that are still circulating.

This chronic stress takes its greatest toll on our mental wellbeing. Poor sleep, depression, anxiety, and a loss of concentration are common results. A study by the ONS found that 1 in 5 adults reported depression, compared to 1 in 10 before the pandemic. Symptoms for those with pre-existing conditions have worsened and many have reported feelings of stress and anxiety.

What does this have to do with data protection?

It might seem strange that an opinion piece on GDPR is delving into the pandemic’s impact on mental health. However, poor mental wellbeing might put a strain on your data protection measures.

Several studies have found that sleep loss and fatigue can lead to increased risk taking, poorer reaction times, and lower levels of neural and data processing. Fatigued people are simply more likely to make a mistake. When handling personal data this can result in a data breach.

A higher risk of data breaches is concerning at any time, but it is even more worrying in current circumstances. Schools and colleges are providing education in a very difficult environment.

Just over a month ago, the UK government announced that secondary schools and colleges had the chance to set up mass asymptomatic testing of pupils and staff. Since the announcement, there has been a flurry of activity from the DfE, teachers’ unions, and the MHRA (Medicines and Healthcare products Regulatory Agency). While education leaders have been calling for testing for a long time, there have been understandable levels of apprehension.

Unlike pilot schemes, schools and colleges are to provide their own staff to assist with testing, cleaning, and administration. Of course, in the latest U-Turn, the government dropped repeated daily testing for students who had contact with a positive case.

The Secretary of State for Education confirmed that regular mass testing of staff, and eventually of returning pupils, will continue. It is also likely to be extended to all settings. Unlike Test and Trace, schools will be responsible for processing test data. In a totally novel process, schools and colleges will be processing and storing hundreds of items of special category data. That data will be held internally on pre-existing systems not specifically designed for this process. Furthermore, schools will need to inform individuals who test positive. It is a veritable nightmare of new data protection risks.

Combine this with everything we’ve said about the increased risk of error, and maybe it makes more sense that a Data Protection company is talking about mental health in the pandemic.

Protecting Yourself from Breaches

Unfortunately, there is no quick fix. Each day brings more vaccinations, and with them more hope for normality, but this has always been a marathon rather than a sprint. The best we can do is be aware of the increased risk and put additional safety nets in place. For instance, a quick guide that test administrators can refer to should they become unsure might reduce mistakes in administration, or in contacting individuals.

When writing procedures for test administration, building in checking stages can help as well. Making a double check part of the routine will help staff catch mistakes they might have made on the first run.

Even taking a few minutes to check in on your staff can make a huge difference. Just talking about a worry can reduce anxiety. It also gives you an opportunity to assess where additional support might be needed to prevent data protection problems.

We are all working hard to simply manage our daily lives while the landscape shifts every minute. This fatigue is understandable and normal. We just have to be aware of it, so we can try to minimise mistakes, and save ourselves from even more stress.

It has been a year of chaos. The Oxford English Dictionary usually nominates a single ‘Word of the Year’. This year, there has been so much change that they couldn’t narrow it down to just one: “Covid-19”, “Lockdown”, “Anti-Maskers”, “Unmute”. Not to forget “Bushfire”, from when millions of acres of Australian bushland burnt at the beginning of the year.

The point is, a lot has changed.

However, the more things change, the more they stay the same. Throughout all the madness, GDPR has not gone anywhere. As we reach another festive period, the same questions pop up again. Chiefly, “What about Christmas cards?”

Secondary school teachers and university and college staff, you probably get off lightly here! En masse Christmas card delivery is usually a feature of primary schools. Unfortunately for primary schools, it can be a little complex.

 

The Christmas Card Conundrum

Children in your class can send Christmas cards. They’re (tiny) individuals, sending cards to other (tiny) individuals. Nothing wrong with spreading a little holiday cheer! However, many teachers get asked for a class list to make the process easier for parents…

…And you can’t give it to them. Giving out a class list is disclosing personal data. The task isn’t essential to running a school, so you’d have to get consent. Co-ordinating consent forms for the parents of hundreds of children? Not fun, and not easy.

So you can’t hand out a list.

“But how can the little Boys and Girls spread their festive cheer?”  you ask. Never fear, there is a solution.

There’s nothing stopping an individual from looking round the classroom and writing down the names of their friends. They’re an individual, not an organisation, and all they’re doing is writing down publicly disclosed information (it’s hard to keep your identity secret when you’re sat in the second row). If you’re working with young children, it might even be an opportunity for some yuletide handwriting practice!

So Christmas isn’t cancelled, and with a little bit of sideways thinking you can still have Holiday Cards flying around the classroom! Although we’re not health and safety experts, so it’s your choice on the flying part.

The EU-US Privacy Shield, a framework designed by the U.S. Department of Commerce and the European Commission, has been struck down by the European Court of Justice (ECJ).

The framework, approved by the EU in 2016, has been at the centre of several international discussions for the last few years. The program allowed companies to sign up and certify that their data protection practices were in line with EU law.

With the introduction of GDPR, the EU-US privacy shield acted as strong evidence that a company was holding data in a compliant manner, allowing organisations to justify using American firms, and sending data out of the EU to be processed.

However, European data protection law states that data should only be transferred out of the EU if there are appropriate safeguards in place. Due to differing laws on national security and law enforcement, the ECJ has now deemed that US domestic law does not provide appropriate safeguards for personal data, and that “surveillance programs … [are] not limited to what is strictly necessary”.

Therefore, the agreement between the European Union and the USA has been struck down.

What does this mean?

In practical terms, this ruling could have a substantial impact on education settings. Most schools and colleges use various apps and games to facilitate learning, and many institutions will be using external software to manage HR and finance. Some of the apps and software used will be capturing personal data, hosted on servers within the US.

Previously, if a school or college wished to use a company hosting data outside of the EU, they could check whether the company was signed up to the Privacy Shield. The Shield gave organisations sufficient comfort that their data would be handled in a compliant manner. With that safeguard gone, it will be harder to demonstrate that using a data processor from outside the EU is appropriate.

Moving Forward:

This is likely to affect a proportion, but not all, of your organisation’s suppliers. A list of companies signed onto the scheme can be found here, but here are a few examples used in the Education setting:

Mailchimp: The automated email company stores all data on servers in the US, including recipient and sender details.

Consilio: A firm supplying legal services and software, Consilio provides hosting, processing and analysis on data, and is hosted on servers in the US.

Egress: Egress provides secure email and file transfer for many organisations, including the NHS and many councils. However, Egress does have servers within the UK.

It’s worth noting that in some cases a supplier may have their main services based in the EU, but sub-processors or performance management functions are based in the US. Zoom is an example of this, but you can relax. They use Standard Contractual Clauses (SCCs) and not the Privacy Shield.

However, the Standard Clauses do not prevent the access the US Government has to data hosted within the country, and the individual who brought this case to the ECJ has now taken aim at SCCs, so watch this space.

A review of your suppliers is now in order, and you may have to make difficult decisions.

With more than 3,000 cases of the new coronavirus confirmed, Italy has announced that it will be shutting all schools for 10 days, to slow the spread of the disease. With cases beginning to increase in the UK, the possibility of similar action being taken here is also increasing.

Most students, teachers and lecturers are currently working on business as usual, but behind the scenes, admin staff are frantically devising plans for remote learning. As the spread of the coronavirus cannot be easily predicted, institutions need to be prepared to continue the working day from home at a moment’s notice. The difficulties of such an endeavour are more complex than you might expect.

An Unexpected Break:

In the event of mass school closure, few primary and secondary schools have the infrastructure to support remote learning. This is less of a worry for Higher Education, as many universities already have lecture streaming and recording facilities in place. They often also use virtual learning sites such as Moodle. Lots of University students already use their personal devices to access work, so while closures wouldn’t be ideal, they’d be manageable.

For schools, closures are much more likely to be an issue. Setting up a secure way to share lessons and resources online takes time, and often money too. For schools and their increasingly tight budgets, bespoke software and quick fixes are far out of reach. Various free or cheap file-sharing services such as SharePoint, Google Drive and Dropbox can make sending resources to students possible, but returning work and grades can easily become a messy affair.

Even if personal data is transferred between staff and students in a secure manner, it’s very difficult to control where it goes next. Most services offer the ability to download files onto your device, and personal devices are under far less scrutiny than those owned by schools and universities. SARS-CoV-2 (the virus causing the disease) is not the only virus that threatens institutions. Should a teacher have a computer virus on their device, the data of their students could be compromised. Does this mean that schools need to provide malware protection for their employees? It’s another cost and another worry to add to the list.

Finding a Way for Face Time:

Assuming schools can find a secure solution to send work home, textbooks and tests aren’t a patch on a good teacher. File sharing can keep children learning long division at home, but what about ‘W plane transformations’ in A level Further Maths?

Many schools are trying to find a solution that lets teachers still run lessons for their pupils. However, with little preparation time, schools are turning to personal accounts on media like WhatsApp and Skype. Anyone who has had any involvement in education knows that this rings warning bells. Sharing personal accounts poses significant safeguarding risks. Regulating a private video call between staff and students is nigh on impossible. It would be the responsibility of the individual to record and report any issues, and it’s far too easy to say “I forgot to record the meeting”. Safeguarding issues aside, should your call be recorded by the application you use, your personal data is being held by yet another data processor.

Increased Ratios. Increased Risk:

Should schools and universities remain open, Covid-19 could still cause problems. The Prime Minister has announced that relaxing rules on class sizes could be used to combat staff absences, should the virus take hold. While this could reduce issues with childcare and education, it’s likely to increase pressure on staff. If students are moving class, their data is moving too. Teachers will not only be working under the increased pressure of cramped classrooms, but will be responsible for safeguarding more personal data than usual, all while being more stressed than usual. These are the ideal conditions for a serious data breach. Even working in a different classroom poses risks: when all this is over, no one will be completely sure where all the student and staff records have ended up.

A Necessary Compromise:

Every school, college and university across the country is desperately trying to find a balance. A situation where safeguarding is still prioritised, data protection is ensured, and students can still receive the quality of education they deserve. There is no simple answer, and there will be much relief when the crisis has passed.

For now, we will be closely watching the situation as it unfolds. Preparation is key, and according to Boris Johnson, a lathering of soap and singing ‘Happy Birthday’ twice over will save us all.

Last week, the University of East Anglia (UEA) paid out over £140,000 in compensation to students affected by a 2017 data breach. An email containing information on personal issues, health problems and circumstances such as bereavement was mistakenly sent to 300 UEA students. The email contained sensitive personal data of over 190 people. UEA reported that the email was mistakenly sent to a group email address that autofilled when the sender started typing. This very simple mistake had a severe impact on hundreds of already vulnerable students.

This is an all too common example of how a simple slip can have a major impact. As Outsourced DPO for many schools and education institutions, we’ve seen just how often these mistakes are made. A misdirected email is one of the most frequent breaches logged by our customers. These breaches may be prevalent, but mismanagement of communication can have a devastating impact on an organisation.

What’s more, there are additional risks in Higher and Further Education. Unlike in trusts, student information is often held centrally, rather than in separate faculties or sites. Universities such as the University of London, The Open University and the University of Manchester all have over 100,000 students enrolled. When mistakes happen, and they will happen, a lot of people could be affected.

As this data breach occurred in 2017, the General Data Protection Regulation was not yet in effect. Had the Information Commissioner’s Office (ICO) decided to fine the University, the amount would have been far lower than if the breach occurred today. In the event, the ICO decided that no punishment was required. Yet, while the regulatory consequences were low, the University was not absolved of its responsibility to the affected students.

As compensation for the damage from the breach, UEA paid a large sum to all affected students. The breach also damaged UEA’s reputation. Prospective students and their parents may worry about attending a university that is known for leaking personal data. Due to the high levels of media coverage, this breach could affect UEA’s admission rates for quite some time.

How can this be prevented?

The key tool here is knowledge. Providing training on good data protection practice can help staff stay on top of breach risks. The more aware they are, the more mindful they will be when handling personal data. Training can also help staff put their own preventative measures in place: for example, adding a ‘grace period’ to your email account. This puts a delay on sent emails, allowing you to cancel them if you realise they are being sent to the wrong person, or hold the wrong information.
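To make the idea concrete, here is a minimal sketch of how a send delay can work behind the scenes. This is a hypothetical illustration, not the implementation of any particular mail client; the class and method names are our own invention.

```python
from datetime import datetime, timedelta

class OutboxWithGracePeriod:
    """Hypothetical sketch of an email 'grace period': messages wait in a
    queue for a short delay, during which they can still be cancelled."""

    def __init__(self, grace_seconds=30):
        self.grace = timedelta(seconds=grace_seconds)
        self.pending = {}  # message_id -> (message, time queued)

    def send(self, message_id, message):
        # Queue the message instead of dispatching it immediately.
        self.pending[message_id] = (message, datetime.now())

    def cancel(self, message_id):
        # Succeeds only while the message is still in the pending queue,
        # i.e. before its grace period has run out.
        return self.pending.pop(message_id, None) is not None

    def flush(self, now=None):
        # Dispatch every message whose grace period has expired.
        now = now or datetime.now()
        dispatched = []
        for mid, (msg, queued) in list(self.pending.items()):
            if now - queued >= self.grace:
                dispatched.append(msg)  # a real client would hand off to SMTP here
                del self.pending[mid]
        return dispatched
```

The point of the pattern is simply that the window between “send” and actual dispatch is where the sender can catch a wrong recipient; real clients such as Gmail and Outlook expose this as an “undo send” or delayed-delivery setting.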

Other measures include adding the email recipient last, after writing the email and checking attachments. This gives staff the chance to double-check emails and stops incomplete messages from being sent. A motto we encourage among our customers is “check twice, send once.” Taking the time to review a piece of work saves time and stress in the long run.

What about when breaches can’t be prevented?

It’s important to note that breaches will happen. However hard your organisation works at prevention, mistakes happen to the best of us. Therefore, it is important to put a plan in place to mitigate the effects of a breach. Having a clear pathway for recording and managing breaches can ensure an issue doesn’t spiral into a reportable offence. Once again, knowledge is key. Make sure that members of your organisation know how to recognise a breach and know exactly who to tell. When you discover a breach, the clock is ticking. Not only do you have just 72 hours to report major breaches to the ICO, but the faster you act on a breach, the more effectively you can contain the impact. To avoid compensation pay-outs, a quick and efficient management process is vital.

The GDPR may feel like a hassle to many, but for all the students who entrust their data to educational institutions, following the regulations is integral to their wellbeing. The University of East Anglia has reminded us just how important data protection is. Effective management, good training, and staff awareness are all incredibly important.