AI’s Blurred Responsibility Lines

As I sit down to write about AI, I’m transported back to the world of En Iniya Iyandhira, my first Tamil book by Sujatha. I remember feeling so immersed in the story that I jumped on my bicycle to visit a small book stall at Singanallur Bus Stand, where I purchased the sequel, Meendum Jeeno, eager to continue the adventure.

En Iniya Iyandhira is set in a dystopian India ruled by a dictator named Jeeva. The population is kept under control through strict rules, including a prescribed age limit beyond which elderly citizens are killed. The government assigns each person a unique two-letter Tamil name, and everyone must strictly follow Jeeva’s rules. While the protagonist is technically Nila, the robotic dog named “Jeeno” steals the show – a portrayal of AI that both fascinated me and gave me pause, as it seemed almost too powerful for the story itself.

“Jeeno”, imagined by Sujatha nearly half a century ago, is no longer just a fictional machine. In the past two years, ChatGPT has gone from an insider secret among developers to something used by everyone. I have a friend who uses ChatGPT to generate content for social media posts; that simply would not have been possible in 2020.


The rise of AI also amplifies concerns about its responsible use. Biases can enter AI systems both through the data they are trained on and through the way they are programmed, and those biases can significantly skew the decisions the systems make. For example, an AI system trained on a dataset of resumes that come mostly from white men is likely to be biased against women and minorities when making hiring decisions.
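To make that concrete, here is a minimal sketch of how bias baked into historical labels gets reproduced by a model trained on them. Everything here is illustrative: the data is synthetic, the “stricter bar” for one group is an assumption I am encoding by hand, and no real hiring system is this simple. It assumes NumPy and scikit-learn are installed.

```python
# Illustrative sketch only: synthetic data, not any real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.random(n)               # skill is distributed identically in both groups
group = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority (hypothetical)

# Biased historical decisions: group 1 was held to a stricter bar (0.7 vs 0.5),
# so the bias lives in the labels, not in the candidates.
hired = skill > np.where(group == 0, 0.5, 0.7)

# A model trained to imitate those decisions learns the bias as if it were signal.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates who differ only in group membership:
probs = model.predict_proba([[0.6, 0], [0.6, 1]])[:, 1]
print(f"P(hire | skill=0.6, majority): {probs[0]:.2f}")  # well above 0.5
print(f"P(hire | skill=0.6, minority): {probs[1]:.2f}")  # noticeably lower
```

Nothing in the model’s code is “unfair”; it simply learns whatever pattern the training labels contain, which is exactly the concern raised above.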

Another concern is the potential for AI systems to be used for malicious purposes. For example, an AI system that is designed to generate fake news could be used to spread misinformation and sow discord. Or, an AI system that is designed to hack into computer systems could be used to steal sensitive data or launch cyberattacks.

These blurred lines of responsibility make it difficult to hold anyone accountable for the harms that AI systems can cause. In many cases, the companies that develop AI systems are not the ones who deploy them, and even when they do deploy them, they may not be the ones deciding how the systems are used.

This diffusion of responsibility makes it harder to prevent AI systems from being put to malicious use, and harder to assign blame for the harms they do cause.

One way to address the problem is to develop clear standards for the responsible development and deployment of AI systems. These standards should require transparency, accountability, and fairness, and should oblige companies to take concrete steps to mitigate bias and the potential for misuse.

Another is to enact laws and regulations that specifically address the use of AI, providing clear guidelines for the companies and individuals who develop, deploy, and use these systems, along with mechanisms for holding them accountable when the systems cause harm.


Let’s take a real-world example: the preceding segment of this article was not written by me. I generated it with Google’s Bard AI. In this situation, it is quite clear that I own the text above, as it was generated from prompts I provided and I am the one posting it here.

The answer is not always so simple, especially in situations like deepfakes, where celebrity images are used to generate explicit content. Here, generating the content cannot morally be equated with owning it. This is why the definitions of ownership and responsibility become even more important, and specific to the circumstances.

With the extraordinary rate at which AI is developing, it certainly doesn’t seem far-fetched to assume one day we’ll have our very own personal “Jeeno” – just be sure to keep an eye on it!
