
AI Liability in Software Development Projects

by Joseph Chigwidden | December 16, 2024

AI software development is now governed by a whole new suite of legal regulations. What these new laws generally fail to address unequivocally, however, is the question of artificial intelligence and legal liability: where responsibility lies for damage that arises from AI-based solutions. The whole point of AI software is to bring a greater level of autonomy and efficiency to software products, and this poses the most fundamental challenge to the traditional liability framework, since it becomes increasingly complicated to identify intent, fault, and causality. Yet for all stakeholders in AI-based systems, from AI software developers to third-party users, the fundamental goal, as with any consumer product, should remain safety and security. For AI software developers and clients alike, this translates into taking a reasonable level of care in the design, testing, and implementation of AI solutions.

Risk Assessment and Documentation

One of the key features of the EU’s AI Act is that it addresses the complexities of AI systems through a risk-based assessment. Put simply, AI systems deemed to pose an unacceptable risk to EU users are banned, high-risk AI systems must meet substantial compliance requirements, and low-risk AI systems face comparatively few obligations. These compliance requirements include certain reporting obligations and adequate risk assessment and mitigation systems.
 

For AI software developers and clients, it is prudent that an adequate risk assessment is carried out during the discovery phase for such projects, and that appropriate documentation is collated regarding the purpose of the project, the projected datasets to be used to train the AI model, and the levels of security and human oversight to be assigned to the model. This will go some way to protecting against liability for inadequate safeguards or oversight mechanisms, and the use of flawed algorithms or training data biases.
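To make this concrete, the sketch below shows one way a development team might capture that documentation in a structured form from the discovery phase onwards. It is purely our own illustration in Python; the field names and categories are assumptions made for the example, not a template prescribed by the AI Act or any regulator.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRiskAssessment:
    # All fields are illustrative; adapt them to the project and the
    # applicable regulatory guidance.
    project_purpose: str
    risk_category: str              # e.g. "high" or "low" in the AI Act's tiered sense
    training_datasets: List[str]    # projected datasets, with provenance noted
    known_limitations: List[str]
    human_oversight: str            # who reviews the model's output, and how often
    security_measures: List[str] = field(default_factory=list)

assessment = AIRiskAssessment(
    project_purpose="Automated triage of customer support tickets",
    risk_category="low",
    training_datasets=["historical support tickets (anonymized)"],
    known_limitations=["may misroute tickets written in unsupported languages"],
    human_oversight="weekly manual review of a sample of routed tickets",
    security_measures=["role-based access to training data"],
)

Kept under version control alongside the codebase, a record like this gives both the developer and the client a shared, dated account of which risks were identified and which safeguards were agreed.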

 

By undertaking these preliminary measures, stakeholders in an AI-based project ensure that they have at least begun to minimize the potential for AI liability in a space where the question remains comparatively fraught. Indeed, it has been suggested that a risk-based approach should be the first step in answering who may be held responsible for damage caused by an AI system: that is, proportional liability based on risk sharing between the various stakeholders of an AI system. This, at least, is one of the approaches suggested to the EU Commission, which is currently revising the Product Liability Directive and the AI Liability Directive.

 

AI Proportional Liability: A New Framework

So, what is this proposed framework that AI-system stakeholders should be aware of? In essence, proportional liability distributes responsibility between AI software developers, AI system owners, and even third parties, based on their level of involvement and the risks they have assumed. Theoretically, this approach recognizes that no AI system is completely risk-free and that a multitude of parties contribute to the functioning of these systems, including AI software developers, platform providers, integrators, and end-users. Instead of burdening one party entirely, proportional liability divides the costs of accidents according to each party’s role and the risks they have knowingly accepted.
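As a purely hypothetical illustration of how such a split might work in practice (the parties, percentages, and figures below are invented for the example, not drawn from any directive or case), damages could be apportioned according to the share of risk each party knowingly assumed:

# Hypothetical proportional split of damages; the weights are invented for illustration.
assumed_risk_share = {
    "platform_provider": 0.50,   # e.g. flaws in the underlying foundation model
    "software_developer": 0.30,  # e.g. gaps in integration and testing
    "system_owner": 0.20,        # e.g. use outside the agreed operational guidelines
}

total_damages = 100_000  # hypothetical damages award

liability = {party: total_damages * share for party, share in assumed_risk_share.items()}
print(liability)  # {'platform_provider': 50000.0, 'software_developer': 30000.0, 'system_owner': 20000.0}

The point is not the arithmetic but the principle: each party’s exposure tracks the role it played and the risks it accepted, rather than one party absorbing the entire loss.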

 

With that in mind, and as alluded to above, it is prudent that from the beginning of an AI project, AI software developers conduct a risk assessment and disclose the potential risks of the AI-based project, such as system limitations, decision-making processes, and the level of autonomy. AI system owners, on the other hand, should clearly inform AI software developers about the intended use of the AI system and comply with the operational guidelines prepared by the AI software developer. Both parties are encouraged to share vital information transparently to minimize these risks and thereby limit their liability in a proportionate manner.

 

As suggested, these steps should be undertaken during the discovery phase of an AI project, but we suggest that they continue throughout the lifespan of an AI-based system and become a standard part of system maintenance. This will not only help protect both AI software developers and their clients against potential claims, but also ensure that all parties remain abreast of rapidly changing ethical standards and technical regulations, thereby maintaining safety standards in an increasingly regulated space.


This is all the more important in circumstances where liability may fall squarely on AI platform developers, that is, the giants of the AI space, from OpenAI to Google AI and Microsoft, who give users the capability to further software development projects and bring other AI-based ideas to life. Where there are fundamental flaws in an AI platform, there is scope for AI software developers and clients alike to seek redress, provided a causal link is established between the platform’s fault and the harm caused. To illustrate, harm could result from “improper” training and the use of copyrighted works, or from the unpredictable behavior of self-adapting AI, which leads to an outcome unexpected from the designer’s perspective.

 

To establish negligence in these circumstances, it would need to be shown that the circumstances in which the AI system caused harm were foreseeable and that the AI platform developer did not account for them during the design and programming stages. This has been evidenced in the explosion of copyright claims against AI platforms accused of using copyrighted works to train their AI models. Put simply, AI software developers and their clients could reasonably rely on such platforms having the necessary permissions for the data used to train their models, and therefore work with those platforms in the development of a client’s vision. Liability for copyright infringements in such a case would lie squarely with the AI platform.

 

However, predicting the vast range of scenarios an AI system with machine-learning capabilities might encounter, and how it might self-adapt as a result, is nearly impossible. This is why, even for the main AI platforms, the ‘proportional framework’ model appears to be one of the more logical approaches, since it balances the need to prove causation and harm against the freedom to innovate that the development of the AI ecosystem requires.


Conclusion: A Fairer, Forward-Looking Legal Approach

As AI systems continue to evolve alongside government regulation, it appears that a proportional AI liability model based on risk sharing offers a more adaptable and fairer framework for addressing the complexities of AI-related harm. By encouraging transparency, mutual responsibility, and international cooperation, this approach provides a balanced solution that fosters innovation while ensuring accountability. It is up to all stakeholders of AI systems to work together transparently, with the fundamental goal being consumer safety. 

About the AI Software Development Projects and Liability Guide

This guide was authored by Joseph Chigwidden, In-House Legal Consultant.

Scopic provides quality and informative content, powered by our deep-rooted expertise in software development. Our team of content writers and experts have great knowledge in the latest software technologies, allowing them to break down even the most complex topics in the field. They also know how to tackle topics from a wide range of industries, capture their essence, and deliver valuable content across all digital platforms.

If you would like to start a project, feel free to contact us today.