
Learning from the environmental justice movement - Nelson Otieno

AI technologies are revolutionizing multiple economic sectors across jurisdictions around the globe. They are transforming the traditional provision of healthcare, insurance, education, finance, and other services in ways never seen before. While this transformation is often attributed to perceptions of AI's efficiency, one must not ignore its negative social impact.

In the wake of this disruption, and the urgent need for a balancing act in its implementation, society also needs to seek and find justice in AI's benefits - that is, an idea of fairness, equality, and order. In that hierarchy, law should be seen not as the ultimate goal but as one of the tools that can be deployed to ensure there is justice.

Role of Law in Managing AI Disruption

Society must be concerned with the need to manage the disruptions brought by AI, to ensure that its disruptive force does not result in additional discrimination, inequality, or unfairness. Laws passed in the form of Executive Orders, Acts, and Strategies can be a tool for managing such disruptions - by stipulating overarching principles and substantive provisions on the use of AI, the prevention of harm, and the avoidance of worsening inequalities.

The use of these legal tools to manage AI's disruption will, however, not necessarily be smooth. This can be attributed to three main reasons - which hinge on the process and substance of the emerging AI laws.

First, while AI technology promises more automation of processes and lifecycles, it does not respect jurisdictions. It is possible, for example, that an AI technology could be manufactured in a state in Asia, procured by a subsidiary company in Africa, and bought by a Kenyan entity but which, in turn, serves customers across other countries in Eastern Africa. 

The above scenario complicates matters for an East African national who might wish to make a claim for justice: where such a claim can be brought, on what legal bases, and under which jurisdiction's law. There are also real fears that this jurisdictional complexity may produce regulatory safe havens, or regulatory lapses, in restricting AI's negative disruptions.

Second, we are at a testing period of AI development, with ambitious development strategies being prioritized. Reports indicate how big tech lobbyists are lurking in engagement rooms with suggestions to water down the substance of AI law in jurisdictions where AI law is being prioritized. Yet, lobbying does not stop there. Reports indicate that it continues all the way to the implementation phase of the emerging legal regime.

Third, laws which have emerged, or are emerging, are not applicable to all possible cases of AI bias, especially the nuanced ones that are visited upon vulnerable persons and communities in Africa.

Connecting AI Disruption, Law, and Justice

The above three main challenges give rise to a pressing concern: how to connect the path for oversight of AI, on the one hand, with justice, on the other. While the novel implications of artificial intelligence and other new technologies require reinventing the wheel in some cases, it is possible to learn lessons from the circumstances surrounding the development of the concept of environmental justice.

Stakeholders who are concerned with better regulation of AI for just ends can, and should, take note of the following lessons from the circumstances surrounding the development of environmental justice as a response to the impacts of the industrial revolution:

No emerging law on AI is definitive. In times of crisis, or rather 'abnormal times', legal developments may not be neutral. The political and economic baggage that informs the structure of AI laws is difficult to ignore and sometimes inevitable. In such cases, regulators and arbiters should be ready, in warranted cases, to treat emerging laws as falling short of being an effective tool for achieving justice.

Finding answers in simpler solutions. In abnormal times, it is the long-held principles that can be safely invoked to bridge the gap between regulatory practice and the desired end of justice. Though the space of AI development is a complex one, complexity must not be the only mantra that guides where, when and how we resort to solutions. Sometimes, solutions lie in multiple quarters. Scholars have demonstrated that when the regulatory environment fails the public, principles such as the rule of law and its related impacts can be used to create necessary obligations on AI manufacturers and other stakeholders.

Keeping an eye on the Grundnorm. As AI laws develop, they should not take the focus away from constitutional developments across the many waves of legal change. Constitutional approaches to transparency, accountability, and access to information and justice are key, providing overarching regulatory stances in respect of AI technologies.

Final call

The space for AI development is a complex one, and it is quite tempting to reinvent the wheel and focus overly on emerging laws to the exclusion of other tools for realizing justice. Lessons from the environmental justice movement show that we must not shift our eyes from the goal of justice as we navigate the advent of AI and its increased deployment. Learnings from past transitions on the use of long-held principles, such as the rule of law, promise just AI transitions for states both with and without AI laws at the moment.
