The executive body of the European Union (EU) has introduced draft regulation that would make it easier for individuals and organizations to sue companies whose products – including drones – cause harm, damage, or injury due to malfunctioning or misused integrated artificial intelligence (AI) technology.
The EU Commission unveiled its AI Liability Directive yesterday to address public concerns about not only the potential trouble that products like drones enhanced with smart capabilities can cause, but also the current legal difficulty of successfully suing over such mishaps. In doing so, the proposal shifts the burden of proving responsibility from plaintiffs onto the companies that created the AI apps behind the accidents – or, when applicable, the manufacturers or users of the goods that prompted the complaints.
Critically, the EU’s package of rules would allow victims in cases of AI gone awry – say, the crash of an autonomous drone that causes injury, or a property assessment whose faulty findings depress a home’s value – to sue on the grounds of basic rights violations, including privacy. In being founded on those sacrosanct EU principles, the proposal gives plaintiffs stronger legal footing from the outset of cases, and transfers the onus of proving the AI involved wasn’t responsible to the company that created it.
Current laws require victims to hire expensive legal teams to prove the smart apps were at fault – a daunting, if not impossible, task given the complex, super-sophisticated “black box” nature of AI systems. If passed, the new rules would allow courts to order AI companies to lift that obscurity by providing detailed information substantiating that their tech and risk-control contingencies had functioned as designed.
“The new rules will give victims of damage caused by AI systems an equal chance and access to a fair trial and redress,” said EU Justice Commissioner Didier Reynders in describing the initiative. “If we want to have real trust of consumers and users in the AI application, we need to be sure that it’s possible to have such an access to compensation and to have access to real decision in justice if it’s needed, without too many obstacles, like the opacity of the systems.”
Given the spreading incorporation of AI in not just drone platforms, but in an expanding number of tech-driven automated systems, the potential for future litigation in the EU under the proposed law is enormous. Imagine, say, a student whose exams suffered from flawed machine grading, a job seeker’s ideal application being rebuffed by a dodgy computer analysis, or someone injured by a self-driving vehicle.
The proposal, however, also makes it possible for AI developers to escape the legal crosshairs by presenting evidence that the companies that integrated their apps, or the end users operating them when the trouble occurred, hadn’t used the tech as required.
Lobbyists for tech companies potentially affected by the proposed EU rules will have time to put the squeeze on officials before it takes effect – if, indeed, it does.
The initiative first needs to be approved by all 27 national governments in the EU Council, and would then have to clear the EU Parliament – a fiercely independent body that may oppose the current text as having weakened the corporate liability provisions its earlier recommendations contained.
Photo: Christian Lue/Unsplash