
[Image: a face breaking into cubes, representing AI and machine learning]

Anyone involved in last year’s exam grade saga probably harbours a level of resentment against algorithms. 

The government’s formula was designed to standardise grades across the country. Instead, it affected students disproportionately, raising grades for students in smaller classes and more affluent areas. Conversely, students in poorer-performing schools had their grades reduced, based on their schools’ results in previous years.

Most of us are well versed in the chaos that followed. Luckily, the government have already confirmed that this year’s results will be mercifully algorithm-free.  

We touched on the increased use of AI in education in an article last year. Simple algorithms are already used to mark work in online learning platforms. Other systems can trawl through the websites people visit and the things they write, looking for clues about poor mental health or radicalisation. Even these simple systems can create problems, but the future brings machine learning algorithms designed to support detailed decision-making with major impacts on people’s lives. Many see machine learning as an incredible opportunity for efficiency, but it is not without its controversies.

Image-generation algorithms have been the latest to cause issues. A new study from Carnegie Mellon University and George Washington University found that unsupervised machine learning led to ‘baked-in biases’. Namely, the assumption that women simply prefer not to wear clothes. When researchers fed the algorithm pictures of a man cropped below his neck, 43% of the time the image was auto-completed with the man wearing a suit. When they fed it similarly cropped photographs of women, 53% of the time it auto-completed with a woman in a bikini or a low-cut top.

In a more worrying example of machine-learning bias, a man in Michigan was arrested and held for 30 hours after a false positive facial recognition match. Facial recognition software has been found to be mostly accurate for white males but, for other demographics, it is woefully inadequate.


Where it all goes wrong:

These issues arise because of one simple problem: garbage in, garbage out. Machine learning engines take mountains of previously collected data and trawl through them to identify patterns and trends. They then use those patterns to predict or categorise new data. However, feed an AI biased data and it will spit out a biased response.

An easy way to understand this is to imagine you take German lessons twice a week and French lessons every other month. Should someone talk to you in German, there’s a good chance you’ll understand and be able to form a sensible reply. However, should someone ask you a question in French, you’re a lot less likely to understand, and your answer is more likely to be wrong. Facial recognition algorithms are often trained on a white-leaning dataset. The lack of diversity means that when the algorithm comes across data from another demographic, it can’t make an accurate prediction.
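To make the ‘garbage in, garbage out’ point concrete, here is a minimal sketch in Python (using scikit-learn). It is not a facial recognition system: the data, the ‘groups’ and the features are entirely synthetic, invented purely to show that a model trained almost exclusively on one group performs far worse on another.

```python
# Toy illustration of training-data imbalance, not a real recognition system.
# All data is synthetic; "group A" and "group B" are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    """Generate toy samples for one group.

    The label depends on a different feature for each group, standing in for
    the fact that the patterns a model needs to learn differ across demographics.
    """
    X = rng.normal(size=(n, 5))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B - the imbalance is the whole problem.
X_a, y_a = make_group(950, informative_feature=0)
X_b, y_b = make_group(50, informative_feature=1)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equally sized test sets for each group.
X_a_test, y_a_test = make_group(1000, informative_feature=0)
X_b_test, y_b_test = make_group(1000, informative_feature=1)
print("Accuracy on group A:", accuracy_score(y_a_test, model.predict(X_a_test)))  # high
print("Accuracy on group B:", accuracy_score(y_b_test, model.predict(X_b_test)))  # near chance
```

Nothing about the model is malicious; it simply never saw enough of group B to learn anything useful about it, which is essentially what happens when facial recognition is trained on an unrepresentative dataset.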

Coming back to image generation, the reality of the internet is that images of men are a lot more likely to be ‘safe for work’ than those of women. Feed that to an AI, and it’s easy to see how it would assume women just don’t like clothes.  

AI in Applications:

While there’s no denying that being wrongfully arrested would have quite an impact on your life, it’s not something you see every day. However, most people will experience the job application process. Algorithms are shaking things up here too.  

Back in 2018, Reuters reported that Amazon’s machine learning specialists scrapped their recruiting engine project. Designed to rank hundreds of applications and spit out the top five or so applicants, the engine was trained to detect patterns in résumés from the previous ten years.  

In an industry dominated by men, most résumés came from male applicants. Amazon’s algorithm therefore copied the pattern, learning to lower the ratings of CVs that included the word “women’s”. Should someone mention they captained a women’s debating team, or played on a women’s football team, their résumé would automatically be downgraded. Amazon ultimately ended the project, but individuals within the company have stated that recruiters did look at the generated recommendations when hiring new staff.
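To illustrate the mechanism (not Amazon’s actual system, whose details were never published), here is a toy sketch in Python using scikit-learn. The ‘résumés’ and hiring outcomes below are invented; the point is simply that a model trained on historically biased decisions learns a negative weight for words associated with the group that was discriminated against.

```python
# Entirely synthetic sketch of how a CV-screening model absorbs historical bias.
# This is not Amazon's engine; the snippets and outcomes are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, python developer",
    "python developer, led the robotics society",
    "captain of the women's debating team, python developer",
    "women's football team, experienced java developer",
    "experienced java developer, led the robotics society",
    "women's chess club captain, java developer",
]
# Historical outcomes (1 = hired) reflect a biased process, not applicant ability.
hired = [1, 1, 0, 0, 1, 0]

vectoriser = CountVectorizer()          # simple bag-of-words features
X = vectoriser.fit_transform(resumes)   # note: "women's" is tokenised as "women"
model = LogisticRegression().fit(X, hired)

# The model dutifully learns that "women" predicts rejection.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print("learned weight for 'women':", round(weights["women"], 3))  # negative
```

A human reviewer can see at a glance that the word “women’s” says nothing about ability; the model only sees that it correlates with past rejections.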

[Image: a white robotic hand selecting one of several polaroid photographs of job applicants]

Algorithms are already in use for recruitment. Some sift through CVs looking for keywords. Others analyse facial expressions and mannerisms during interviews.

Protection from Automated Processing:

Amazon’s experimental engine clearly illustrated how automated decision making can drastically affect the rights and freedoms of individuals. It’s why the GDPR includes specific safeguards against automated decision-making.  

Article 22 states that, apart from a few exceptions, an individual has the right not to be subject to a decision based solely on automated processing where that decision produces legal or similarly significant effects on them. Individuals have the right to obtain human intervention should they contest a decision, and in most cases an individual’s explicit consent should be gathered before any automated decision-making is used.

This is becoming increasingly important to remember as technology continues to advance. Amazon’s experiment may have fallen through, but there are still AI-powered hiring products on the market. Companies such as Modern Hire and HireVue provide interview analysis software, automatically generating ratings based on an applicant’s facial expressions and mannerisms. Depending on the datasets these products were trained on, they too may be brimming with biases.

As Data Controllers, we must keep assessing the data protection impact of every product and every process. Talking to wired.co.uk, Ivana Bartoletti (Technical Director – Privacy at consultancy firm Deloitte) said she believed the current Covid-19 pandemic will push employers to implement AI-based recruitment processes at “rocket speed”, and that these automated decisions can “lock people out of jobs”.

Battling Bias:

We live in a world where conscious and unconscious bias affects the lives and chances of many individuals. If we teach AI systems based on the world we have now, it’s little wonder that the results end up the same. And with the mystique of a computer-generated answer, people are less likely to question it.

As sci-fi fantasy meets workplace reality (and it’s going to reach recruitment in schools and colleges first), it is our job to build in safeguards and protections. Building in a human check, informing data subjects, and completing Data Protection Impact Assessments are all tools to protect rights and freedoms in the battle against biased AI.

Heavy stuff. It seems only right to finish with a machine learning joke: 

A machine learning algorithm walks into a bar… 

The bartender asks, “What will you have?” 

The algorithm immediately responds, “What’s everyone else having?”

 

The technologies used to process personal data are becoming more sophisticated all the time.

This is the first article of an occasional series where we will examine the impact of emerging technology on Data Protection. Next time, we’ll be looking at new technologies in the area of remote learning.

Careless Talk Causes Breaches

(and can be costly too!)

 

GDPR is not normally associated with parties, but I recently overheard the end of a conversation about an office Christmas party, and it set me thinking about the impact a misplaced sentence can have. Friendships and working relationships can be badly damaged, in some cases irreparably.

If I choose to pass on my unvarnished opinion about a colleague during the Christmas bash, I can find myself in a lot of trouble. If, on the other hand, I whisper information that has come from the data controller, then not only am I in hot water, but I’ve also given the extra present of a data breach.

Article 32, Paragraph 4 of the GDPR says:

“The controller and processor shall take steps to ensure that any natural person acting under the authority of the controller or the processor who has access to personal data does not process them except on instructions from the controller, unless he or she is required to do so by Union or Member State law.”

Put more simply, you must ensure that people are given clear guidance about what they can and can’t do with personal data and you must ensure they stick to those rules.

Bear in mind that it doesn’t matter how information is disclosed for it to be a breach. Whether you’ve been hacked, sent an email to the wrong person, lost a paper file or repeated information to someone who shouldn’t know it, a breach has occurred.

With verbal disclosure the situation is often made worse by the fact that our natural desire is to share more ‘interesting’ information, which is also usually more confidential and leads to greater upset.

We’ve seen examples where incidents have been dealt with from a disciplinary standpoint but have gone unrecognised as a data breach. If such an incident later turns out to be reportable to the ICO, you’ll have to explain why you missed the 72-hour deadline for reporting. It is difficult to claim you have a sound regime for data protection when you have missed such a high-profile target.

What steps should you take to avoid these issues:

Training
  • All your staff need to know about the risks of verbal disclosure. Include it in your normal GDPR training but you may need to provide a special briefing. As well as knowing that they need to notify your DPO or GDPR lead, it’s a great time to remind people of the perils of letting information slip.
Easy reporting
  • Take away any barriers that prevent staff from alerting you to an issue. Have an email address just for staff to alert you of issues or consider an online form.
A response procedure
  • If people do report issues, then you need a well-established procedure to deal with them. Get it recorded, and you can even practise to make sure the 72-hour deadline can be met.
Joined up processes
  • Issues which trigger disciplinary procedures may relate to data protection issues and vice versa. Make sure there is a section in the guidance for both areas that highlights the risks, and include this in your general training, particularly induction training.

So, as you contemplate the upcoming festivities, it may be worth a timely reminder to everyone that we have to consider what we’re saying just as much as what goes into an email.

Being as clear as mud when it comes to Data Protection

A key principle of data protection is transparency. You must be upfront about what you plan to do with personal data.

A failure to be transparent has recently brought the Department for Education into the sights of the Information Commissioner’s Office. Information from the annual census returns was apparently shared with immigration officials without the data subjects being informed. This is the opposite of what you should be doing.

According to an article in the Guardian, the ICO’s position is that:

“Our view is that the DfE is failing to comply fully with its data protection obligations, primarily in the areas of transparency and accountability, where there are far reaching issues, impacting a huge number of individuals in a variety of ways.”

It’s not clear yet what the consequences for the DfE will be from these findings.

Just a few days before the news about the DfE broke, the Information Commissioner felt compelled to ensure that political parties understood their data protection responsibilities as we head towards the election in December.

In addition to telling the parties that they needed to follow the principles of data protection, she also specifically addressed the controversial issue from the Brexit Referendum and subsequent elections – advertising on social media.  You can read the Commissioner’s full statement here.

These concerns are about transparency: how do you know what someone is doing with your personal data, and how might that usage affect your rights and freedoms? The GDPR is very clear about the information that should be provided when your personal data is used, but it’s not always clear how well individuals understand the information presented to them, if they are given any at all.

The principle of transparency doesn’t just apply to political parties and government departments. It should be the cornerstone of the data protection policies and practices for every organisation.

So, what does Transparency mean for an organisation in the Education sector?

  • You must be clear about why you are processing personal data
  • You must be able to show you’re using the minimum data necessary
  • You must be able to show you have a legal basis for your processing and sharing of data
  • You must take action to inform individuals about how their data is being processed

How can you demonstrate that you’re meeting these requirements?

Your data mapping, provided it follows the model set out by the ICO, will address the first three, and your privacy notices should address the last item.

Our experience is that many schools and colleges haven’t mapped their use of personal data to the level of detail that the regulation expects, and this could become a problem if a complaint is raised by a data subject.

You may know in detail how data is collected, stored, updated and shared, but the legislation requires that this is documented. Does your documentation fully cover the movement of personal data around the organisation?

Do your privacy notices strike the balance of informing individuals about how their data is used while remaining accessible and unambiguous?

While you try to separate the fake news from the real in the run-up to the election, it may be a good time to ensure that you’re being properly open about the way you collect and use personal data.

If you are unsure how you process data, or would like some guidance on how to document this, please contact our GDPR experts on 0113 804 2035 or click here.