09/10/2023
As AI services experience a rapid boom in both capability and variety, organisations will be looking at new and novel ways to employ them. However, as with any tool, adopters need to make sure they understand the implications and uses of AI. In this article we discuss the use of AI in the employment sector, in the light of a recent report from the House of Commons and a number of reported concerns about the use of AI and machine learning.
What is AI?
There are a number of different types and definitions of AI. For employers, however, the two most commonly encountered types are:
Large language models (LLMs) such as ChatGPT
Large language models are built from a very large snapshot of human-written material (hundreds of gigabytes of filtered text in the case of GPT-3, for example). This allows the model to appear to ‘think’, responding to prompts by producing the words it assesses as most likely to come next. At their very core, LLMs can be seen as very advanced text autocomplete programs.
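To make the ‘autocomplete’ comparison concrete, the sketch below is a toy illustration only: it predicts the next word from simple word-pair counts over a made-up corpus, whereas real LLMs learn probabilities with neural networks trained on vast datasets. All names and data in it are invented.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then suggest the most frequent continuation. Real LLMs learn
# these probabilities with neural networks over vast corpora, but the
# underlying task - pick a likely next token - is the same.
corpus = (
    "the employee submitted the report "
    "the employee submitted the form "
    "the manager reviewed the report"
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("employee"))  # -> submitted
print(predict_next("manager"))   # -> reviewed
```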
Algorithmic management
‘Algorithmic management’ usually refers to tools designed to monitor and manage employees and contractors, such as those used by Uber or Amazon, assessing performance and informing decisions about evaluation. These tools require a great deal of data about the people they monitor and can be highly invasive; they therefore raise significant privacy concerns.
Why use AI?
At its best, AI can be a cost-effective, reliable and efficient tool for carrying out tasks – for example, providing a ‘self-help’ chatbot to handle everyday employee queries, or reviewing and screening job applications for hiring managers. Alternatively, it could be used to assess and evaluate employees in real time and in substantial detail, suggesting solutions so that managers can address emerging performance issues as soon as they appear.
Challenges
However, AI also presents a number of specific challenges and risks.
How does it reach decisions?
Ultimately, the employer will likely be responsible for any decisions made on the basis of AI input, and therefore needs to ensure that the tool is reliable. For example, an AI used to sift job applications could unknowingly have been trained on decisions made by racist hiring managers. The AI would then carry that same prejudice into its analysis, but would have no way of clearly communicating that prejudice to its users. The employer using it could therefore unknowingly be carrying out unlawful discrimination on the basis of race. Understanding the model and identifying any bias in an AI product will not be an easy task.
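One simplified way to start looking for such bias is to compare the tool’s pass rates across applicant groups. The sketch below is a hypothetical audit with invented numbers; the 80% (‘four-fifths’) threshold is a rule of thumb borrowed from US selection-rate guidance, not a legal test under the Equality Act 2010.

```python
# Hypothetical audit of an AI screening tool: compare the rate at which
# applicants from each group are passed through to interview.
# All figures below are invented for illustration.
screening_outcomes = {
    # group: (applicants screened, applicants passed to interview)
    "group_a": (200, 90),
    "group_b": (180, 45),
}

rates = {g: passed / screened for g, (screened, passed) in screening_outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    # A selection rate well below the highest group's is a red flag worth
    # investigating (the "four-fifths" rule of thumb uses 80% of the best).
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: pass rate {rate:.0%} [{flag}]")
```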
Unfair dismissal
If an organisation uses data collected by AI to dismiss an employee, it could face an unfair dismissal claim (provided the employee has the required two years’ service to bring such a claim). An employment tribunal will have to decide whether there was a fair reason for the dismissal and whether a fair process was followed. Organisations may face difficulties proving that a dismissal was fair if they cannot explain how the AI works and how the data relied upon to dismiss the employee was produced.
Additionally, if decisions about an employee’s salary or bonus are made by AI and these decisions cannot be explained due to the complexity of the AI involved in making those decisions, this could lead to issues with the employer-employee relationship. For example, an AI decision not to give an employee a bonus may be interpreted as a breach of the implied contractual term of ‘mutual trust and confidence’ between an employee and employer. This could lead to a claim for constructive unfair dismissal.
Discrimination
The Equality Act 2010 will still apply to decisions made by organisations using AI. As mentioned above, organisations should be aware of the potential for in-built bias relating to any of the protected characteristics defined by the Equality Act (such as race, religion and sex). There is also a discrimination risk if AI used to monitor performance does not account for the effect of an employee’s disability on their individual performance.
What are the data protection concerns?
AI use raises a number of data protection issues. Firstly, employers should ensure that any processing of personal data is UK GDPR compliant (including that it is lawful, fair and transparent) – in particular for any employee monitoring that could be seen as invasive. Secondly, there is a specific carve-out: in some situations individuals have the right not to be subject to decisions based solely on automated processing, including profiling, i.e. an AI can recommend a decision, but a human must take it. Thirdly, there are specific requirements for further agreements and controls where personal data is exported from the UK, and many AI providers’ servers are likely to be international.
The interaction between AI and privacy is a complex issue, and any data controller considering AI products should carefully assess the risks.
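On the second point above, the ‘recommend, don’t decide’ requirement is often implemented as a human-in-the-loop gate: the system may score or suggest, but a named person records the final decision. The sketch below illustrates that pattern only; the class and field names are hypothetical and not drawn from any particular product or from the UK GDPR itself.

```python
from dataclasses import dataclass

# Minimal human-in-the-loop pattern: the AI only ever *recommends*;
# the record is incomplete until a named human reviewer decides.
@dataclass
class Decision:
    subject: str            # e.g. an employee or applicant reference
    ai_recommendation: str  # what the model suggested
    final_outcome: str | None = None
    decided_by: str | None = None  # human reviewer; never the model

    def record_human_decision(self, reviewer: str, outcome: str) -> None:
        self.decided_by = reviewer
        self.final_outcome = outcome

    @property
    def is_complete(self) -> bool:
        # A recommendation alone is never a decision.
        return self.decided_by is not None and self.final_outcome is not None

d = Decision(subject="applicant-042", ai_recommendation="reject")
assert not d.is_complete  # AI output alone cannot conclude the case
d.record_human_decision(reviewer="hiring.manager@example.com", outcome="interview")
assert d.is_complete
```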
What can organisations do?
Best practice will inevitably develop over time as more organisations use AI and overall understanding of it increases. However, transparency about AI decision-making processes is likely to be key to mitigating the risks AI may pose. Organisations should train decision makers who rely on AI-derived data so that they understand how the tool works and how its decisions are reached. It would also be beneficial for staff whose targets or roles are monitored by AI to understand how this is calculated. Organisations may want to consider creating an internal policy to assist with this, and should be conducting data protection impact assessments.
The future of AI is unclear, but it is very likely that greater regulation will arrive at some point. Organisations will need to monitor any AI systems they use regularly to ensure they remain suitable for use.