The increasing use of AI systems poses new risks for cybersecurity in democratic African societies. The fast-changing nature of AI risks requires organisations to adopt new measures to protect sensitive information from unauthorised access. Encryption is one of these measures, as it can secure data from being intercepted or tampered with by malicious actors. While autonomous adversarial attacks may attract the most attention, Matthew Ford and Andrew Hoskins argue that reliance on generic entry-level consumer and corporate IT systems can create a single point of failure that may jeopardise the overall security of an organisation. (In 2022 I wrote a review of Ford and Hoskins’ book for the LSE Review of Books.) Indeed, as my friends Andrew Rens, Enrico Calandro and Mark Gaffley suggest, these mundane risks are the most susceptible to advances in software. AI can be used, for example, to generate fake correspondence for a large-scale phishing campaign. And machine learning can enh...
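
To make the point about interception and tampering concrete, here is a minimal sketch of authenticated encryption. It uses Python’s third-party `cryptography` package and AES-GCM purely as illustrative choices (the post does not name a particular tool): the ciphertext conceals the message, and any modification in transit is detected and rejected rather than silently accepted.

```python
# Minimal sketch of authenticated encryption with AES-GCM, using the
# third-party `cryptography` package (pip install cryptography).
# Illustrative only; key management and nonce handling are simplified.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # symmetric key, kept secret
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

plaintext = b"Sensitive correspondence"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Decryption succeeds only while the ciphertext is intact...
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext

# ...and fails loudly if an attacker flips even a single bit.
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    print("Tampering detected: ciphertext rejected")
```

The design point is that an authenticated mode such as GCM provides integrity as well as confidentiality, which is what the claim about data being “intercepted or tampered with” turns on.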