The turn to Ubuntu in AI ethics scholarship marks a critically important shift toward engaging African moral and politico-philosophical traditions in shaping technological futures. Often encapsulated through the phrase “a person is a person through other persons”, Ubuntu is frequently invoked to highlight ontological interdependency, communal responsibility, relational personhood, and the moral primacy of solidarity and care. It is often positioned as an alternative to individualism, with the potential to complement or “correct” Western liberal frameworks.
But what does this
invocation actually do? Is Ubuntu being used to transform how we think about
ethical challenges in AI, or is the emerging discourse merely softening
existing paradigms with a warmer cultural tone?
The emerging pattern
A recurring pattern across the literature reveals a limited mode of Ubuntu engagement. It begins with a description of AI-related ethical concerns: dependency, bias, privacy, data colonialism, extractive infrastructures, accountability, transparency, alignment, or the exclusion of African voices in global AI discourse. These are crucial diagnoses. Ubuntu is then introduced as a counterpoint — a philosophical tradition grounded in interdependence and communal values.
But from there, the
discussion often stalls. Ubuntu’s ethical promise is asserted without
demonstrating how it fundamentally reorients ethical analysis or
decision-making.
Instead of showing how it reshapes ethical reasoning, concepts or categories, many texts default to general affirmations. Ubuntu is said to promote care, solidarity, relationality, harmony, and shared responsibility. These values are ethically rich, but they are frequently offered in broad strokes, leaving the hard ethical and political work unaddressed.
They often stop short of showing how Ubuntu alters our understanding of power asymmetries in datafication, leads to different institutional or governance practices, or would respond to the political economy of AI differently than existing frameworks like sustainability, fairness, or responsibility.
In short, Ubuntu is
frequently treated as a general ethical orientation rather than a critical
method. It tends to affirm shared values without probing their implications or
reconfiguring dominant frameworks with any depth.
The depth of ethical reframing present
Despite this overall pattern, there are notable exceptions where Ubuntu begins to do more substantive work.
Autonomy,
consent and privacy
One kind of reframing occurs around the triad of consent, autonomy, and privacy. Here, Ubuntu is often used to directly challenge the dominant liberal model of data governance. This model conceptualizes privacy as a personal right to control access to one’s data and positions consent as a discrete, transactional mechanism that legitimates data collection. Within this model, data governance centres on individual autonomy, assuming that informed individuals can freely choose how their data is used.
However, relational frameworks, Ubuntu among them, offer an alternative to this data regime. By highlighting the relationality of data, they challenge the notion that data rights can be fully exercised by individuals alone. This critique unfolds across several dimensions: social (data affects others beyond the individual), epistemic (knowledge production is a collective contribution relying on aggregation and pattern recognition across populations), and political (data use often reinforces structural inequalities).
In contrast, relational approaches like Ubuntu and feminist ethics help to reframe and reimagine privacy as relational integrity, whereby ethical data use depends not on isolated permissions but on collective processes of responsibility, oversight, and mutual recognition.
This reframing can be connected to other AI ethics issues, like data colonialism, by enabling a systemic critique rather than a critique of symptoms such as the misappropriation of personal data. Relational frames, like Ubuntu, challenge the ethical, epistemic and political foundations that make colonial data extraction appear legitimate. After all, data colonialism thrives on the fragmentation and commodification of individual data, where consent operates as a contractual license to extract. Put simply, the isolation and decontextualization of data are the very conditions of possibility for systems of extraction, abstraction, and inequality. By reframing the data subject (as socially embedded) and conceptualisations of privacy (from individual control to collective ownership), relationality undermines the premise that individual consent can justify the extraction and transnational movement of data with wide-reaching implications.
Moreover, the transformations here are profound, shifting from a logic of individual rights to responsibility and care towards others we are linked to through data, in addition to those communities that bear the brunt of extractive infrastructures, and the future generations to whom we bequeath inequity. Some authors propose a data commons model grounded in Ubuntu, where data is not treated as a commodity but as a shared resource governed through cooperation and care — a move that reframes data governance by directly opposing extractive data colonialism and reconceptualizing consent, privacy, and ownership in relational terms.
A
feminist critique of Ubuntu in sustainable AI
Another compelling instance of deeper ethical reframing emerges in Mensah & van Wynsberghe (2025), where the authors engage Ubuntu and the discourse of “sustainable AI” through an African feminist lens. Rather than treating Ubuntu as monolithically virtuous, they interrogate its internal limitations, particularly how patriarchal interpretations may reproduce marginalization. By confronting embedded hierarchies within African ethics traditions themselves, their analysis suggests that Ubuntu must evolve to address gendered power asymmetries in AI development and governance.
In my view, therefore, their contribution to AI ethics lies in their use of African feminist ethics to critically rework Ubuntu, shifting it from a generalized ethos of communal harmony to a framework attentive to gendered power relations and context-specific obligations. They argue that mainstream AI sustainability debates often ignore the lived realities of African women, who disproportionately bear the social and environmental costs of AI infrastructures. For example, they highlight how e-waste and mining sites (essential to AI supply chains) often displace or harm women farmers and caregivers in rural communities. By foregrounding care, vulnerability, and interdependence, the authors call for an AI ethics rooted not just in global principles like fairness or transparency, but in relational accountability and repair. Their approach reframes sustainability as a justice issue and positions Ubuntu, when critically reinterpreted through a feminist lens, as a source of ethical guidance that challenges both Western individualism and patriarchal communitarianism.
Ubuntu
and environmental ethics in African AI policy
Another example of moral reframing can be found in Mensah & van Wynsberghe (2025), where Ubuntu is mobilized to critique the Eurocentric foundations of African AI policy, the technocratic and growth-driven framing of sustainability, and the ethical dumping of AI’s environmental costs onto African communities.
In dominant AI ethics, they observe, environmental harm is often understood in instrumentalist terms (e.g., increased carbon emissions, unsustainable energy usage, or resource depletion). However, Mensah and van Wynsberghe foreground Ubuntu to recast environmental harm not merely as physical degradation or a matter of regulatory concern, but equally as a cosmological transgression — a rupture in spiritual and relational order. Here, nature is not a resource but a living entity, part of a relational chain that includes ancestors, deities, and future generations. This renders environmental care not just a policy preference, but a moral and spiritual duty.
They also offer a robust critique of ethical dumping, the practice of locating AI infrastructure such as data extraction zones and digital waste sites in Africa due to weak regulations. While we tend to see this as a problem of insufficient consent or lack of compliance, the Ubuntu lens recasts it as a violation of ethical interdependence and solidarity. This shifts the framing from regulatory failure to a moral breach that undermines the wellbeing of communities in Africa and their future capacity to flourish.
In the
process, it redefines harm not in terms of lack of safeguards, but as a failure
of mutual responsibility and recognition between regions, peoples, and
generations. Though this moral reframing is deep and forceful, it remains light
on procedural detail, failing to spell out how these values would reshape
actual governance structures, mechanisms, or policy tools. Still, it opens a
critical space for imagining an ethics of AI grounded in relational worldviews.
Conclusion
The use of Ubuntu in AI ethics literature is both promising and precarious. It promises a rich, relational, and communitarian orientation to ethical life and a desperately needed alternative to individualistic and utilitarian frameworks. But its current use is precarious because it often stops short of doing the conceptual and political work needed to reshape the discourse it enters. Moving forward, scholars and practitioners should ask not just whether Ubuntu is present, but what it is allowed to do.
Does it
reframe the ethical stakes? Does it propose alternative forms of reasoning,
obligation, or governance? Does it push back against the epistemic authority of
the dominant paradigms? Or is it invoked only to signal ethical pluralism and
cultural difference (Africanness), without reshaping the ethical grammar of AI
governance?