Liability and Risk Management: When an AI System Causes Harm

By Eric Prindle, Esq. | Reviewed by Canaan Suitt, J.D. | Last updated on February 19, 2026

As people and businesses have increasingly embraced artificial intelligence tools to complete tasks previously done either manually by humans or through deterministic automation, the question of what happens when these AI systems cause harm has come to the fore.

Traditional legal concepts such as negligence, strict liability, and professional responsibility are, on their face, applicable to harms caused by AI technology. However, these areas of law have, in some cases, started adapting as AI challenges traditional legal assumptions. Some jurisdictions have started implementing new AI-specific regulatory regimes.

For those responsible for risk management within companies and institutions, it is important to consider not only the applicability of current law to AI risks, but also the ways in which the law may continue to evolve, in order to properly allocate and mitigate risk. For legal guidance, reach out to an attorney with experience in technology legal issues.

Evaluating AI Liability Under Tort Law

In the United States, liability for harms where no specific statute applies is generally managed through tort law. Tort law is a branch of the common law, the body of judge-made law originating in England that courts have developed over the centuries.

While some states have passed laws governing specific aspects of AI technology, no wholesale new regime for managing AI liability claims has been created. Therefore, for the immediate future, it will fall to tort law to assign liability for AI harms.


Negligence and the Reasonable Care Analysis

One tort claim that those harmed by artificial intelligence could use to pursue compensation is negligence, a claim that is widely used in personal injury cases.

To succeed in a negligence claim, a plaintiff must show that the defendant had a duty of care to take reasonable precautions to avoid injuring others, that they breached that duty, and that their breach was the actual and proximate (or reasonably foreseeable) cause of an injury.

Who Is the Responsible Party in an AI Case?

The first question when someone has suffered an injury involving AI technology is who the responsible party would be. While the inner workings of AI systems are often described casually as a form of “reasoning,” an AI is not a person. Nobody would seriously claim that an AI itself could be sued for failing to exercise reasonable care. Rather, a person, business, or other legal entity would need to be sued.

In the case of AI products marketed directly to consumers, someone who believes they have been harmed through their interactions with an AI would presumably pursue a claim against the AI developer that created and marketed the system. An example could be someone steered toward self-harm through interactions with an AI chatbot.

On the other hand, a person harmed by someone else's use of an AI tool could pursue a claim against the AI user who was ultimately responsible for the harm.

That party could then consider whether they, in turn, might have any claims against the AI developer. An example of this could be someone who was defamed in an article written by someone who was using a generative AI application to create content.

In either case, in order to pursue an AI liability claim under the law of negligence, a plaintiff would need to demonstrate that each of the elements listed above applies to the defendant. It remains to be seen how the courts will handle these claims.

Applying Strict Liability to Software Products

In certain areas, the courts and/or legislators have determined that the risk of harm is great enough to impose an alternative theory of tort liability called strict liability.

Under strict liability, it is not necessary to prove a breach of a duty of care, or indeed any fault at all. This theory is therefore more favorable to injury victims. One prominent example of strict liability is product liability, the legal responsibility for harms caused by defective and dangerous products.

Traditionally, under a product liability analysis, computer software has been considered a service rather than a product. A plaintiff harmed by computer software could therefore pursue a strict liability claim only if that software was embedded in a physical product. With consumer product manufacturers rushing in recent years to add “smart” features to their products, often powered by AI models, embedded software creates real potential for strict liability claims.

However, the penetration of AI technology and other forms of software into so many facets of life has also caused courts and legislators to reconsider the traditional distinction between products and services.

For instance, in Garcia v. Character Technologies, Inc., a case in the U.S. District Court for the Middle District of Florida, the judge determined that an AI app should be treated as a product for strict liability purposes. Likewise, outside the U.S., the European Union in 2024 extended its product liability framework to explicitly include software products.

When Professionals Misuse AI: Non-Delegable Duties

Certain professions, such as law and medicine, impose specific ethical responsibilities on their practitioners. These responsibilities are considered non-delegable, meaning that even if a lawyer or doctor delegates specific tasks to someone else, they are responsible for the outcome. The concept of non-delegable duties applies in legal cases for malpractice, as well as in professional discipline matters such as license revocation.

As professionals increasingly use AI algorithms for tasks they would traditionally have done themselves or delegated to assistants, it is generally accepted that the professional remains responsible for any errors introduced by the AI system.

For instance, multiple lawyers have been sanctioned by courts for allowing AI-generated errors, such as citations to nonexistent cases, to make it into court filings. If such an error were to cost a client their case, the client would presumably be able to bring a malpractice claim.

Likewise, AI tools designed to assist in healthcare procedures have been known to generate false outputs, which could result in medical malpractice liability on the part of a doctor using such a tool.

Risk Management Considerations in Deployment of AI

As part of the risk management function within any size institution, it is important to consider the potential for new risks created by the use of AI systems, the applicability of current law to those risks, and the likelihood that the law in this area will continue to evolve.

Risk mitigation strategies could include:

  • Clear and consistent decision-making processes and governance frameworks around the use of AI systems and other emerging technologies, so that individuals do not adopt these systems in ways that create unanticipated risks
  • Review of indemnity clauses in contracts with vendors and customers to properly allocate liability for harms that could be introduced as AI is deployed in products and services

It is inevitable that, as artificial intelligence technologies continue to be incorporated into more aspects of everyday life, courts and legislatures will have decisions to make about how to assign responsibility for new kinds of harm.

Staying abreast of, anticipating, and potentially influencing these decisions will be important tasks for deployers of AI. For specialized legal advice, reach out to a science and technology lawyer.
