ARTIFICIAL Intelligence (AI) has the potential to transform our lives both personally and in business. From personalised Netflix recommendations to predictive analytics, AI is becoming more prevalent in every aspect of our daily lives.
The recent phenomenon of the online AI platform ChatGPT has sparked much conversation and debate about the use of AI and just how far-reaching it can be.
AI can greatly benefit businesses if they implement the right technology. Tools such as natural language processing, sales forecasting and chatbots can increase efficiency, reduce overheads and, in turn, contribute to overall profitability.
However, who is responsible when AI or other advanced technologies fail to perform or cause harm? The operation of AI and robotics often involves many parties – the developer, the manufacturer, the provider, the AI system itself and the end user.
This question of liability in the context of AI and robotics is a new and developing area of law, so there is relatively little case law in the UK.
The tragic death of a man in 2015, following complications in a robotically assisted operation, the first of its kind at the Freeman Hospital in Newcastle, prompted the courts to look at this issue more closely.
The coroner leading the inquest into the patient's death concluded that he died because the operation was undertaken with robotic assistance, the surgeon responsible for the procedure had not been sufficiently trained or observed, and proper guidelines were not in place for the introduction of new surgical techniques and technologies.
The coroner further stated that the use of such an innovative surgical technique should have been discussed with the patient, along with its risks, benefits and potentially unforeseeable outcomes.
Trying to separate a business, organisation or individual from liability for harm or failed performance caused by AI and robotics will ultimately be difficult. The coroner's findings in the robotic surgery case can also be viewed from a broader perspective.
If businesses or organisations are implementing AI and robotics, certain steps can be taken to ensure they reap the benefits of these advanced technologies while giving themselves the necessary protection should things go wrong.
It would certainly be prudent to implement policies around a business' use of AI. For example, if using it in a job selection process, a business should have proper governance in place to make sure the AI does not exhibit unintended biases that could expose the employer to potential discrimination claims.
Businesses utilising AI and other advanced technologies should make sure staff are adequately trained and that guidance on usage is sufficient and continually updated. Suitable indemnity insurance would also be advisable. Consent from clients, customers and patients to the use of these technologies should also be explicitly considered.
Over the next few years, we are likely to see increased litigation over liability for increasingly 'autonomous' products that have failed to perform or have caused harm.
Ultimately, the law in this area has not developed at the same pace as the technology. If businesses and organisations take pre-emptive steps now, compliance with future legislation will be smoother and protection against potential claims will be stronger.
:: Niamh Dunford is a solicitor in commercial law firm McKees (www.mckees-law.com) specialising in disputes