
What Are The Essential Rules for Governing Ethical AI?

In the "Money Masala EP2" episode titled "The Core Rules Governing Ethical AI" from the channel "AI, SaaS & Agentic Pricing with Monetizely," the hosts engage in a thought-provoking discussion about the ethical considerations surrounding artificial intelligence, prime directives for robots, and the potential timeline for achieving Artificial General Intelligence (AGI).

The Foundation of Robot Ethics: Asimov's Laws

The conversation begins with a reference to Isaac Asimov's famous laws of robotics, which have become foundational to discussions about AI ethics. As one of the speakers explains:

"The robots will always have prime directives which will… more or less drill down to the same laws that were defined by Asimov: a robot will not injure a human being, a robot will have to obey a human being so if a direct order is given it cannot be overridden by the robot, and assuming that the first two laws are not being breached, the robot will make sure that it does not get harmed."

These three laws establish a hierarchical framework for robot behavior that prioritizes human safety above all else. Despite being conceived by a science fiction author decades ago, they remain remarkably relevant in today's discussions about AI development.

The Ongoing Battle for Ethical AI

The concept of "ethical AI" has been a topic of debate since the inception of artificial intelligence technologies. However, there appears to be some skepticism about whether this phrase carries substantial meaning. One speaker notes:

"Ethical AI is a phrase that has been going on ever since AI started becoming a thing, even before the word AI became… I understand that it is a phrase, but I don't think that it means much."

This skepticism is countered by the argument that advocacy for ethical AI is crucial for preventing potentially harmful scenarios. The discussion highlights concerns about data sources and transparency, with one participant pointing out:

"The company that is the number one in AI right now is also the company… that has questions have been put against it about where it is copying the data. I am just saying that the creation follows the pattern of the creator, and if the creator is not above board, then what ethical AI?"

Human Oversight vs. AI Governance

An interesting philosophical point emerges when the speakers consider who should be making decisions about AI ethics. There's a recognition of inherent human limitations:

"The only solution that I see for this is not to have humans making these decisions, because a human will always suffer even if it is a committee, even if it's a democracy… a human body will always contain the flaws of the human body."

This leads to a somewhat paradoxical conclusion that perhaps a "benevolent super AI" might be needed to ensure true impartiality and ethical adherence. The speakers draw parallels between this concept and religious frameworks where humans acknowledge their limitations and seek guidance from higher powers.

"Whether it be us praying to Ganesh ji, Hanuman ji, or AI, we are always admitting that we are weak creatures and we need supervision."

The Timeline to Artificial General Intelligence

The conversation shifts to the timeline for achieving Artificial General Intelligence (AGI), with references to recent expert opinions:

"I was listening to this podcast by… the CVP at Meta where he was saying that right now if we only concentrate on making LLMs better, AGI is not happening in two years. But what my takeaway is that if we concentrate on other architectures, other infrastructures, AGI is a possibility within two to four years."

The speakers distinguish between simply scaling up existing Large Language Models (LLMs) versus developing fundamentally new AI architectures. They argue that merely expanding current models will not achieve true AGI:

"Without changing the circuitry or changing the main components, what you are doing is making it bigger, giving it more data, and giving it more memory. In the end, it is possible that you have… a mega brain which will have decent answers to any and every question that you might ask, but it will still be reliant on the data it has ingested. There is not going to be a new thought."

The Accelerating Pace of AI Development

The conversation concludes with reflections on how rapidly AI capabilities are advancing, particularly in professional domains like coding:

"I think we will probably looking at how fast my job as a coder is being replaced. I am no longer surprised. I am already flabbergasted right every day… as I interact with these AI tools… the pace nothing surprises me anymore to be honest."

This observation underscores the real-world impact of AI advancements and adds urgency to the ethical considerations discussed earlier.

Implications for Business Leaders

For SaaS executives, these discussions have profound implications. The rapid development of AI technologies means companies must actively engage with ethical considerations from the outset of their AI initiatives. Rather than treating ethics as an afterthought or mere compliance exercise, organizations should integrate ethical principles into their AI development processes.

As the speakers suggest, the ethical character of AI systems often reflects the values of their creators. This places a significant responsibility on technology companies to establish robust governance frameworks and transparent practices around data usage, algorithm design, and deployment scenarios.

Moreover, the timeline predictions for AGI—particularly the suggestion that new architectures could accelerate development—indicate that business leaders should be preparing for more transformative AI capabilities within a relatively short timeframe. This preparation extends beyond technical readiness to include ethical frameworks that can accommodate increasingly autonomous systems.

The conversation reminds us that as we delegate more decision-making authority to AI systems, we must ensure they embody the values and safeguards necessary to benefit humanity while minimizing potential harms.