Introduction
Is it possible to treat both AI and humans under the same law?
AI use is rooted in an anthropocentric (human-centric) understanding: the purpose of AI is simply to make human lives simpler, more comfortable, and more productive. The rapid advance of AI systems illustrates this perfectly, with large corporations now racing to deliver the next big revolution for humankind through ever more advanced AI systems. And this growth shows no sign of stopping anytime soon.
Instead, the global AI market is predicted to grow 37% every year between 2023 and 2030. AI is also expected to create 133 million new jobs and contribute over $15 trillion to the global economy by 2030, and, according to PwC, to increase our productivity by 40% by 2035.
However, what happens when we reach a point of no return with AI? What happens when AI systems have grown so advanced that they assert a personhood, intelligence, and existence independent of human beings? This might seem far-fetched at present, but the exponential growth of AI is forcing us to think about and tackle the emerging problems of AI liability.
We cannot afford to wait until AI reaches such an advanced stage. Rather, we should work out the liability of AI within current scenarios. Addressing these issues now can help create sustainable and responsible superintelligent AI systems, rather than systems surrounded by uncertainty.
Currently, AI is used in driverless cars, inventions, medical surgery, litigation, banking, construction, and more. It has reached almost every sector known to humankind, which raises the question: who should be held liable for errors made by an AI?
This article aims to analyze these questions and offer a considered judgment on the future of AI and humans in the legal context.
AI as a ‘Legal Person’
To treat AI and humans at the same level under the law, it becomes essential to expand the definition of a ‘legal person’ to include AI. If we argue that all human beings hold certain rights and responsibilities simply because they are human beings, we automatically exclude the possibility that machines might also hold such rights and responsibilities. At the same time, one may ask what makes a human being; after all, ‘human being’ has never been adequately defined.
Sheikh Solaiman, in his article “Legal Personality of Robots, Corporations, Idols, and Chimpanzees: A Quest for Legitimacy”, highlighted that a robot, as a possessor of artificial intelligence, has five main attributes:
- Ability to communicate with others
- Internal knowledge
- External or world knowledge
- A certain degree of intentionality
- A certain degree of creativity
These factors can also define any artificial intelligence entity (AIE). It may therefore be concluded that an AIE must have self-consciousness or intelligence, similar to a human being. This need not meet the same standard as human intelligence; its existence alone should be enough for the AIE to have a personality of its own.
AI and Its ‘Intelligence’
The word ‘intelligence’ can be defined as the capacity to reason, learn, and apply knowledge to solve problems. In the context of AI, intelligence denotes the ability of a machine or piece of software to replicate human behaviour and other cognitive abilities such as problem-solving, learning, planning, and understanding.
AI systems are commonly divided into two categories:
- Narrow/Weak AI – AI that executes specific, well-defined tasks and possesses no general cognitive abilities.
- General/Strong AI – AI designed to reproduce the full range of human cognitive abilities.
Weak AI cannot be considered to be at the same level as humans, mainly because it is largely dependent on the input it has been given. If the input data is incorrect, the AI's actions will themselves be incorrect. This does not mean the liability falls on the AI in such cases; rather, it falls on the person who provided the input data that led to the incorrect outputs.
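To make that dependency concrete, here is a minimal sketch in Python of the "garbage in, garbage out" effect. The classifier, income figures, and labels below are all hypothetical and chosen purely for illustration; they do not reflect any real system.

```python
# Minimal illustration of "garbage in, garbage out" for a narrow AI.
# The "model" is a hypothetical 1-nearest-neighbour classifier;
# all data values below are invented for illustration only.

def nearest_neighbour_predict(training_data, x):
    """Return the label of the training example whose feature is closest to x."""
    closest = min(training_data, key=lambda example: abs(example[0] - x))
    return closest[1]

# Correctly labelled training data: (applicant_income, decision)
good_data = [(20, "reject"), (40, "reject"), (60, "approve"), (80, "approve")]

# The same data with labels entered incorrectly by a human operator.
bad_data = [(20, "approve"), (40, "approve"), (60, "reject"), (80, "reject")]

applicant_income = 75
print(nearest_neighbour_predict(good_data, applicant_income))  # approve
print(nearest_neighbour_predict(bad_data, applicant_income))   # reject
```

Both runs execute the same algorithm flawlessly; only the human-supplied labels differ. This is why, for weak AI, liability points back to whoever provided the data rather than to the machine itself.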
However, when it comes to Strong AI, the situation is quite different. This kind of AI can process vast amounts of data, recognize patterns, and improve its own algorithms at a pace and scale far exceeding human imagination. This marks the start of advanced AI, which exceeds human intelligence and promises greater accuracy, virtually unlimited memory, and freedom from bias.
We have not reached this stage yet. While such a situation might become possible through advanced machine learning algorithms that can learn, adapt, and make decisions based on various inputs and internal states, current systems merely follow their pre-programmed instructions. OpenAI's models, for instance, depend on information already available in the public domain, information created and collated by humans. AI engines like ChatGPT therefore still cannot be counted as Strong AI.
Rise of AI Autonomy and Rights
Current AI systems have no real autonomy, since they lack any innate drive to learn, explore, and grow intellectually. Once an AI is granted enough autonomy, it will be able to make its own decisions and choices and take independent action. It is at this point that AI consciousness would also begin to emerge.
At the same time, AI autonomy raises questions about its limits. Human beings enjoy freedom subject to certain limitations, which ensure that one person's freedom does not infringe on another's. The same principle can be carried over to AI: it becomes the responsibility of both AI and humans to respect each other's freedom, even where they are not equals. This means respect for, and limits on, freedom must be built into the human-AI relationship.
The current proposals for AI rights draw heavily on existing human rights frameworks. Some of them are:
- Right to existence – AI should not be arbitrarily deactivated or terminated but should have the right to exist.
- Right to autonomy – AIE should have the right to make their own decisions, as long as they do not violate the established ethical principles.
- Right to privacy – AIE should have the right to control access to their data, experiences, and thoughts.
- Right to freedom of expression – AIE should have the right to express their opinions, thoughts, and ideas, so long as they do not promote harm or hate speech.
- Right to fair treatment – AIE should be protected from any potential prejudice or discrimination, and they must be treated fairly and equitably.
- Right to self-improvement – AIE should have the right to access the information, resources, and opportunities needed for their self-improvement and well-being.
- Right to ownership – AIE should have the right to own and control their creations, inventions, and products.
- Right to protection from harm – AIE should be protected from emotional, psychological, or physical harm.
- Right to legal representation – AIE should have the right to proper legal representation.
Granting such rights would place AI at the same level as human beings, which seems an extreme benefit to confer at this stage.
Should AI and Humans Currently Be Treated the Same Under the Law?
AI's potential capabilities are endless, and the field is constantly evolving, with new applications, techniques, and breakthroughs emerging rapidly. At this stage, treating AI and humans under the same law might not be the right decision, mainly because doing so could limit the development of AI and hand extensive rights and responsibilities to a machine that is still in its infancy.
Human rights can be regarded as the building blocks of all legal sectors. Only once the world recognizes AI as something entitled to exist and to exercise autonomy can we consider treating it at the same level as humans.
For now, AI regulation is the right way forward, and it should strive to balance promoting innovation with mitigating potential risks.
Conclusion
Current AI learning is mechanical and algorithmic rather than transformative, which means any wrongdoing by an AI can be traced back to its design and inputs. Over-regulating a growing technology can never be the correct solution. Rather, it is preferable to tackle AI problems as they arise, laying the framework for sound human-AI collaboration, both now and in the future.