
The Authentic Heart of AI, Pt 1 - Sagwadi Mabunda

AI is hot on the lips of academics, policy makers, technologists and laypersons alike. It is shrouded in as much mystery as it is in intrigue. I will admit that, like many people, I came late to the party. In fact, I cannot even manufacture an affinity for it through a carefully curated love of science fiction. The closest I can get, perhaps, would be vague memories of scenes from I, Robot and, if I’m truly honest, 16-year-old me did not watch it for the broody robot or the deep philosophical questions of whether it should “save the girl” or apply the nano something to the thingamabob.

 

(scene from I, Robot)

It did remind me of Tesla’s unveiling of its humanoid robot creation, Optimus. Elon Musk announced at the launch that the intention was to create fully functional robots which would lead to a “future of abundance, a future where there is no poverty, a future where you can have whatever you want in terms of products and services.” What is unique about Optimus is that it is intended for mass production. It is meant to bring to every home a humanoid robot that can perform complex tasks like lifting objects from the ground, using tools such as a screwdriver or a drill, or climbing stairs.

I would love to have an Optimus companion. Perhaps even more than Optimus, though, I would cash in my pension for a home version of Boston Dynamics’ Spot. Admittedly, its vast array of capabilities would be wasted on me, but if it were a personal pet with “artificial affection”, it would go a long way towards soothing my guilt over not having enough time to spend at home loving and caring for a sentient dog.

Of course, it is an oversimplification to think of AI as ‘just cool robots’; it goes much further than that. To me, Optimus and Spot are just physical representations of what is possible with AI. Absent these, it is difficult to grapple with the promises and dangers of something as abstract as AI. When the likes of Sam Harris and Roman Yampolskiy warn of the dangers of Artificial General Intelligence (AGI) or Superintelligence, it is hard for those of us without the technical expertise to conceptualise these as real threats. That does not mean, however, that we should abdicate our duty to comment on and participate in discussions about other aspects of the impact of AI.

The question that confronts us is how we, as humanity, can pursue the promise of AI without it leading to our ultimate demise. It is a difficult question to answer. While not a new phenomenon, AI is still unfolding and the extent of its potential is still unknown. Ethics, therefore, is the dynamic basis upon which the normative evaluation and guidance of AI technologies are founded. The ethical norms that we adopt as a society should be ones that promote human dignity and well-being and prevent harm. As such, they must act as the compass for scientific and technological development. They should be for the good of humanity.

Naturally, this raises the question: what do we mean by good? Who determines what is good? Will good in the West mean the same thing as good in the Global South? Should we hold the creators or developers of AI responsible for any harm that AI causes? If we do, does it follow that they should have full control over the ethical lens through which they design the AI, or should the fabrication of AI ethics be a multi-stakeholder, collaborative effort?

In a conversation with robotics engineer and podcaster Lex Fridman, Musk was asked whether he thinks it matters who builds AGI. His response is indicative of the need for strong ethical foundations. He stated that he hopes “that whatever AI wins is a maximum truth-seeking AI that is not forced to lie for political correctness or political anything.” He raised serious concerns about “AI succeeding that is programmed to lie even in small ways because [the small ways] become big ways.”

In a 2016 TED Talk, Harris captured it well when he said, “I think we need something like a Manhattan Project on the topic of Artificial Intelligence. Not to build it, because I think we will inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you’re talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right.”

Let me ask: what does it mean to you to have an AI that seeks truth?


Comments

  1. This piece is as insightful as it is informative! The fundamental question raised as to whether what is good in the West will mean the same thing as good in the Global South borders on the issue of cultural differences. In my opinion, AI technology must be developed to align with the cultural peculiarities of the target population.


