The Advent Of Artificial Intelligence And Its Laws: Explained


Index

  1. Introduction 
  2. Artificial Intelligence (AI) 
  3. The Black Box Paradox: AI And The Challenge Of Explainability
  4. Legal Personhood For AI
  5. The Interpretability Challenge Of AI In Legal Systems
  6. Legal Dilemmas Surrounding AI: Past Incidents And Present Considerations
  7. Legal Implications Of Advanced AI Capabilities
  8. Addressing Corporate Responsibility In The Age Of AI
  9. Legal Recourse For Software-Related Injuries
  10. Laws Governing Liability And Rights Of AI
  11. Conclusion 

Introduction 

Artificial intelligence (AI) is rapidly transforming the economy and society. It is now part of everyday life through automated chatbots, digital voice assistants, and smart home devices. However, an interesting issue is being debated by policy experts and tech-law specialists: there is currently no legal framework, national or international, that treats AI as a subject of law. This means that if AI causes harm or damage, there is no clear way to hold it accountable. This article explains the laws that currently govern AI and the debate surrounding its legal status.

Artificial Intelligence (AI) 

Artificial intelligence (AI) involves creating software and systems that can perform tasks associated with the human mind. Modern AI systems rely on neural networks whose internal parameters are learned from data rather than designed directly by human programmers. These systems break a problem down into vast numbers of small numerical operations and process them step by step to produce their outputs. AI has various applications, such as expert systems, natural language processing, speech recognition, and machine vision. However, the human mind often cannot grasp the calculations or strategies an AI system uses to reach a decision. This gives rise to the “black box paradox”, or “explainability issue”, when addressing AI systems and legal liability.

The Black Box Paradox: AI And The Challenge Of Explainability

The Black Box Paradox refers to the situation where a complex system, like a deep learning neural network, produces accurate results but its internal workings are not fully understood by humans. This lack of transparency creates a paradox because, while the system may be effective, users often want to know why and how it arrived at a particular decision or conclusion.

The explainability challenge is closely related to the black box paradox: it is the challenge of making AI and machine learning models explainable and interpretable to humans. In many applications, especially those involving critical decisions (such as medical diagnoses, financial predictions, or legal judgments), it is crucial to understand not just what the AI system predicts but also why it made that prediction.
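To make the contrast concrete, the sketch below is purely illustrative (it assumes the Python scikit-learn library and a toy dataset; the models are hypothetical stand-ins, not any system discussed in this article). The first model exposes human-readable decision rules; the second produces predictions without any comparable account of its reasoning.

  # Illustrative sketch: an interpretable model vs. a "black box" model.
  # Assumes scikit-learn is installed; dataset and models are hypothetical stand-ins.
  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier, export_text
  from sklearn.neural_network import MLPClassifier

  iris = load_iris()
  X, y = iris.data, iris.target

  # An interpretable model: its decision rules can be printed and read by a human.
  tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
  print(export_text(tree, feature_names=list(iris.feature_names)))

  # A "black box" model: it predicts, but its thousands of learned weights
  # offer no human-readable account of why it predicted what it did.
  net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000).fit(X, y)
  print(net.predict(X[:1]))  # an answer, without an explanation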

Legal Personhood For AI

Legal personhood grants an entity certain rights and responsibilities under the law. Considering whether AI should be granted legal personhood could be a potential solution to our current liability issues. However, it is essential to analyse the advantages and disadvantages of this approach.

The Interpretability Challenge Of AI In Legal Systems

A common issue identified by legal systems is that many companies prioritise accuracy over interpretability in their AI-powered models. These black-box models are generated directly from data by algorithms, making it impossible even for the developers to understand how variables are combined to produce the predicted output. Since the human mind and AI neural networks operate differently, even if all variables were listed, the complex functions of an algorithm could not be easily dissected.

Under English law, a claimant seeking a remedy must demonstrate both factual causation and legal causation. This involves presenting evidence of the AI's wrongful actions and of the immediate injury or damage caused to the aggrieved party. In criminal cases, establishing the actus reus and mens rea is essential, but the opacity of an AI system's internal data processing makes ascertaining any mental element practically impossible.


While some human actions also exhibit ‘black box’ qualities, in that the justifications behind them are unclear, courts have historically held humans accountable on the basis of fault-based liability. However, only legal persons are subject to such sanctions, which highlights the paradox in assigning responsibility to AI.

Legal Dilemmas Surrounding AI: Past Incidents And Present Considerations

In 1981, a tragic incident marked the world's first reported death caused by a robot, at a Kawasaki Heavy Industries plant. Engineer Kenji Urada lost his life while repairing a robot that had not been switched off; the robot perceived him as an obstacle and fatally pushed him with its hydraulic arm. Despite this incident, there remains a lack of clarity in criminal legal frameworks globally regarding how to address crimes or injuries involving robots.

In contrast, Saudi Arabia granted citizenship to the AI humanoid Sophia, endowing it with rights and responsibilities akin to those of human citizens. In India, however, AI currently lacks legal status, given the early stage of its development. The question of attributing liability, both civil and criminal, to AI entities hinges on whether legal personhood should be conferred upon them. While ethical and legal considerations are significant, practical and financial concerns may also influence any future grant of legal personhood to AI systems.

Legal Implications Of Advanced AI Capabilities

Consider scenarios where AI engages in offences such as hate speech, incitement to violence, or recommending harmful actions. Gabriel Hallevy, a noted legal researcher, proposed three models of criminal liability for AI, built around actus reus (the act or omission), mens rea (the mental element), and strict liability offences (where mental intent is not required). These discussions are crucial as AI capabilities continue to evolve and blur the lines between human and artificial decision-making.

Under the first of these models, perpetration by another, a minor, a person of unsound mind, or an animal that commits a crime is treated as an innocent agent, lacking the mental capacity required for mens rea (including in strict liability situations). However, if such an agent is used as a tool by someone to carry out illegal actions, the person providing the instructions is held criminally responsible. Applied to AI systems, the AI itself would be seen as an innocent agent, while the individual instructing it would be viewed as the perpetrator.

Under the second model, natural probable consequence, an AI user or programmer is liable if they could reasonably have anticipated an offence committed by the AI and failed to take preventive measures. If the offence results from negligent use or programming, the AI itself is not held liable. However, if the AI acts independently of or contrary to its programming, it is deemed responsible.

Critics argue that this model overlooks the distinguishing feature of AI: its capacity to learn and apply knowledge in real-world situations. When AI systems make decisions, they can choose between legally and morally justifiable actions and illegal or immoral ones on the basis of their rudimentary intelligence and learning capabilities. This autonomy in decision-making places the AI system in control of its actions. Holding programmers liable even where the AI is learning from its environment and being used by another is compared by Sparrow, in ‘Killer Robots’, to holding parents accountable for the actions of children who have left their care. It is impractical to expect creators to predict every future course of action, especially for advanced AI that continually learns and adapts, and imposing such a high standard of care on creators could stifle innovation and growth in the AI industry.


The third model, direct liability, addresses actions performed by AI independently of the programmer or user. In cases of strict liability, where mens rea is not required, the AI bears full responsibility. For instance, if a self-driving car causes an accident because it was speeding, it would be held accountable, since speeding is a clear violation under strict liability.

Popular culture often depicts robots powered by complex algorithms that continually learn from their experiences and environment. From a legal perspective, this complexity underscores the challenge of determining why an AI system acts in a certain way. These fictional portrayals frequently show creators implementing safeguards to prevent their creations from going rogue, yet the AI learns and adapts, sometimes leading to scenarios of world domination. 

In this fictional context, the question of liability arises. The Direct Liability model suggests that accountability should rest with the AI itself rather than the creator. It argues that AI systems have rudimentary consciousness, make independent decisions, understand the potential consequences of their actions, and possess the intent to cause harm if applicable. Therefore, AI systems should be held responsible for their actions as autonomous entities.

Addressing Corporate Responsibility In The Age Of AI

Corporate criminal liability applies when corporations engage in inherently risky activities with knowledge of the associated risks. Under this doctrine, the entire corporation is held accountable for any harm caused to society as a result of these activities.

This approach grants corporations legal personhood, attributing to them both obligations and liabilities. By employing organizational blame, this model encourages businesses to exercise reasonable care and caution in their use of AI technologies.

In India, corporations are recognized as juristic persons, as affirmed by the Supreme Court in Standard Chartered Bank v Directorate of Enforcement (2006). While punishments such as imprisonment cannot be imposed on juristic persons, corporations can face substantial fines for their actions.

However, a notable drawback of this model is that victims of AI-related crimes may face challenges in seeking justice, particularly when suing powerful corporations located in foreign jurisdictions, potentially making justice inaccessible for them.

Legal Recourse For Software-Related Injuries

When seeking compensation for damage caused by software, criminal liability is typically not pursued; instead, the tort of negligence is the preferred legal route. Negligence in software development involves three key elements: the defendant's duty of care, a breach of that duty, and resulting injury to the plaintiff. Software developers are obliged to uphold standards of care towards their customers, and may face legal repercussions for failures such as:

  1. Failure to detect errors in program features and functions
  2. Inappropriate or insufficient knowledge base
  3. Inadequate documentation and notices
  4. Neglecting to maintain an updated knowledge base
  5. Errors resulting from user input mistakes
  6. Overreliance of users on program output
  7. Misuse of the software.

Laws Governing Liability And Rights Of AI 

Article 21 of the Indian Constitution, guaranteeing the ‘right to life and personal liberty,’ encompasses fundamental aspects crucial to human life. This includes the right to privacy, which has been interpreted by the Indian judiciary as implicit under Article 21. Addressing privacy concerns arising from AI’s processing of personal data is paramount.

AI systems must also adhere to constitutional principles, particularly Articles 14 and 15, safeguarding the right to equality and protection against discrimination, respectively, to uphold citizens’ fundamental rights.

The Patents Act, 1970 raises several key issues regarding AI, such as patentability, inventorship, ownership, and liability for an AI's actions or omissions. While Section 6, read with Section 2(1)(y) of the Act, does not explicitly require the term ‘person’ to refer exclusively to natural persons, the prevailing understanding assumes that it does. AI currently lacks legal personhood and thus falls outside the scope of this Act.

The Personal Data Protection Bill, 2019, seeks to regulate the processing of the personal data of Indian citizens by both public and private entities, regardless of their location. It emphasizes that data fiduciaries must obtain consent for data processing, subject to certain exemptions. If enacted, the bill would significantly affect AI applications that gather user information from various online sources to track habits relating to purchases, online content, finance, and so on.

Under The Information Technology Act, 2000, Section 43A imposes liability on corporate bodies handling sensitive personal data. They are required to compensate if they fail to adhere to reasonable security practices. This provision is particularly relevant when AI is utilized to store and process sensitive personal data.

The Consumer Protection Act, 2019, in Section 83, allows complainants to take legal action against manufacturers, service providers, or sellers for harm caused by defective products. This establishes liability for manufacturers/sellers of AI entities for any harm caused by their products.

In the realm of tort law, principles such as vicarious liability and strict liability come into play concerning an AI's wrongful acts or omissions. Courts have clarified, in cases such as Harish Chandra v. Emperor, that there is no vicarious liability in criminal law, even if an AI entity could be considered an agent acting on another's behalf.

Conclusion 

Recent studies indicate that as we transition from Artificial Narrow Intelligence (weak AI) towards Artificial General Intelligence (strong AI), developing explainable AI models becomes crucial. Using black-box models for critical operations can have severe consequences, with no legal sanction available against the AI model itself. Adopting explainable AI not only helps in understanding and solving problems but also ensures accountability. Liability principles tailored specifically to AI systems, rather than traditional product or vicarious liability, are needed to manage their operation under the rule of law. This, in turn, requires granting legal personhood to AI systems and establishing a regulatory framework.

The debate around AI's liability centers on the autonomy of AI systems. Unlike humans, AI lacks free will and moral judgment, and therefore has neither rights nor corresponding obligations. Punishing an AI system alone is ineffective, as it does not deter the humans behind it. Holding those humans accountable through corporate criminal liability may therefore be a more practical approach to ensuring responsible AI development and usage.
