New Technologies, Old Injustices - Scott Timcke

The increasing use of AI systems poses new risks for cybersecurity in democratic African societies. The fast-changing nature of AI risks requires organisations to adopt new measures to protect sensitive information from unauthorised access. Encryption is one of these measures, as it can secure data from being intercepted or tampered with by malicious actors. 
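
To give a concrete sense of what that protection looks like in practice, here is a minimal Python sketch using the cryptography package's Fernet recipe (my choice for illustration; the post does not prescribe any particular tool). Authenticated encryption keeps intercepted data unreadable and causes decryption to fail outright if the ciphertext has been tampered with.

```python
from cryptography.fernet import Fernet, InvalidToken

# Generate a secret key and encrypt a sensitive message with it.
key = Fernet.generate_key()
fernet = Fernet(key)
ciphertext = fernet.encrypt(b"quarterly payroll for the ministry")

# Without the key, an interceptor sees only opaque bytes.
print(ciphertext[:40])

# Authenticated encryption also detects tampering: flipping even one
# byte of the ciphertext makes decryption fail.
tampered = bytearray(ciphertext)
tampered[-1] ^= 0x01
try:
    fernet.decrypt(bytes(tampered))
except InvalidToken:
    print("tampering detected; ciphertext rejected")

# With the key, the intended recipient recovers the original message.
print(fernet.decrypt(ciphertext))
```

The same idea scales from a single message to databases, backups and communication channels: the secret that matters is the key, not the medium the data travels over.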

While autonomous adversarial attacks may attract the most attention, Matthew Ford and Andrew Hoskins argue that reliance on generic entry-level consumer and corporate IT systems can create a single point of failure that may jeopardise the overall security of an organisation. (In 2022 I wrote a review of Ford and Hoskins’ book for the LSE Review of Books.)

Indeed, as my friends Andrew Rens, Enrico Calandro and Mark Gaffley suggest, these mundane risks are the most susceptible to advances in software. AI can be used, for example, to generate fake correspondence for a large-scale phishing campaign. Machine learning can also enhance the scale and effectiveness of cyber-incidents by manipulating enterprise AI systems through their inputs, causing distortions that benefit an attacker, while adversarial attacks can exploit AI to identify weak points. Apart from state espionage, cybercriminals can infiltrate IT systems and hold them hostage for ransom payments.
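
To make the input-manipulation point concrete, here is a minimal sketch of one well-known technique, a gradient-sign (FGSM-style) perturbation, applied to a toy logistic-regression scorer. The model, feature values and perturbation budget are all invented for illustration; nothing here refers to any specific system discussed in this post.

```python
import numpy as np

# Toy "enterprise" classifier: sigmoid(w.x + b) gives the probability
# that an input is legitimate. Weights are random stand-ins.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input the model currently scores.
x = rng.normal(size=8)

# Gradient-sign perturbation: nudge each feature in the direction that
# most decreases the "legitimate" score, within a small budget eps.
eps = 0.5
grad_wrt_x = predict(x) * (1 - predict(x)) * w  # d sigmoid(w.x+b) / dx
x_adv = x - eps * np.sign(grad_wrt_x)

print("original score:", round(float(predict(x)), 3))
print("perturbed score:", round(float(predict(x_adv)), 3))
```

Even this crude example shows how a small, targeted change to an input can move a model's output in an attacker's favour, which is the core of the adversarial risk mentioned above.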


The international political economy of AI-enabled risk

There is a whole international political economy of AI-enabled risk that also shapes the uses and abuses of technology. By this I mean that transnational economic forces and political objectives affect the development and deployment of AI systems, regardless of the potential risks associated with these technologies. These factors include government policies and regulations, corporate investment and competition, and global trade and economic relations. 

The international political economy of AI risk also covers the social and ethical implications of AI - such as issues related to privacy, autonomy, and the possibility of AI being used for malicious purposes. Encryption can play a role in preserving privacy and autonomy, as well as in deterring or detecting malicious uses of AI. This political economy influences the distribution, circulation and exposure to risk, as well as the differences in power and position between those who produce AI systems and those who bear the brunt of them.

In short, AI is connected to broader issues of social and economic injustice: people in poor, underserved communities are often more likely to be exposed to the hazards of AI or to face unequal enforcement of AI regulations, while also lacking the ability to exercise their digital civil rights.

AI and institutional racism can exacerbate systemic discrimination, with grave consequences for affected communities. International actors are poised to reap many of the benefits of AI systems, while the risks are largely shouldered by others, including the public sector in Africa. Popular AI ethics discourse around explainability, algorithmic fairness and privacy may have blind spots that perpetuate, rather than remedy, discriminatory practices.


The state of cybersecurity

Likewise, there may be blind spots, oversights and issues of impartiality if the state security cluster ‘owns’ cybersecurity, especially when some of the state’s surveillance functions have been declared unconstitutional for violating the right to privacy. Encryption is a key element in protecting the right to privacy and preventing discriminatory practices. It can secure data from being accessed or manipulated by unauthorised parties, including state actors or private entities, and it can enable individuals and groups to communicate and express themselves freely and securely, without fear of surveillance or censorship.
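
As a small illustration of that last point, the sketch below uses the PyNaCl library (an assumption on my part, not a tool named here) to show public-key encryption between two parties: only the intended recipient can read the message, so interception yields nothing but opaque ciphertext.

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys need to be exchanged.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Anyone intercepting the message sees only ciphertext; Bob decrypts it
# with his private key and Alice's public key.
receiving_box = Box(bob_sk, alice_sk.public_key)
print(receiving_box.decrypt(ciphertext))
```

Real secure-messaging tools add key verification and forward secrecy on top of this primitive, but the basic guarantee is the same: communication that third parties cannot read or silently alter.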

Given the lasting impact of colonial underdevelopment in Africa, people living on the continent do not have the same material, institutional, or fiscal resources to address cybersecurity risks as their former colonial occupiers. A related factor is that post-colonial state formation often occurred under neocolonial fiscal relationships which imposed structural adjustment programmes, depriving public institutions of resources; public administration lost technical skills and institutional knowledge for a generation. These are some of the reasons why African countries are ill-equipped to deal with cybersecurity vulnerabilities in general, and why existing risk assessment and mitigation frameworks may need to be revised to account for more fundamental vulnerabilities that may be overlooked elsewhere.

Encryption can help prevent or mitigate ransomware attacks, as it can keep data inaccessible or unreadable to attackers. Public policy should prioritise understanding the scope, magnitude, and implications of AI-enabled cybersecurity risks. This involves examining the potential harm that AI deployment can inflict on digital networks and systems, societies, organisations, and individuals, including AI-enabled cyber-incidents on critical infrastructure, the use of AI to disseminate disinformation or influence elections, and the use of AI to reinforce existing social biases or amplify misconceptions. These are some of the reasons why encryption can help protect AI systems from being compromised or corrupted by external or internal threats.

It is essential that state officials and policymakers collaborate with experts in AI and cybersecurity to understand and address these risks in a comprehensive and effective manner.


If you wish to read more about this area and approach, I have a few pieces on cybersecurity. There is a policy brief with Nawal Omar, New roles for new skills: Drawing upon African technologists to build the AfCFTA, plus our essay Encryption is vital to promoting democracy in Africa. Last year Andrew Rens, Mark Gaffley and I published an article on cybersecurity as industrial policy and developmental practice in the African Journal of Information and Communication.
