
The Properties and Property of AI Products - Scott Timcke

By now you have heard that John Hopfield and Geoffrey Hinton have been awarded the 2024 Nobel Prize in Physics. Their award is “for foundational discoveries and inventions that enable machine learning with artificial neural networks”. Absolutely well done to both and their various teams. 

In conjunction with the recent release of the UN's Governing AI for Humanity report, these events mark a milestone in the surge of public attention given to AI over the past two to three years.

The Nobel committees' decisions celebrate significant scientific achievements. Predicting protein structure from primary amino acid sequences, recognised by the 2024 Chemistry prize and exemplified by tools like AlphaFold, is indeed impressive. If your head is spinning a little, thankfully the Nobel Foundation has provided an infographic that does a fantastic job of explaining all this for a general audience.

More importantly, the prize represents a recognition of how machine learning can enhance the ‘brute compute force’ approach to complex problems. 

As someone who is somewhat more circumspect about developments in tech, I occasionally encounter friction from AI enthusiasts who point to successes like AlphaFold as evidence of AI's unequivocal value. The enthusiasts suggest that people like me are naysayers, individuals who refuse to see the capabilities of these technologies and their potential for transformative impact. 

However, I believe we often speak past one another because we focus on entirely different aspects of AI.


The Problem of Black Boxes 

Too often, AI enthusiasts fail to discuss the technology with sufficient granularity, whether socially, politically, or technically. Through abstraction they become advocates for a monolithic concept of ‘AI’ that lumps very diverse technologies and products under a single umbrella. 

As a result, AI becomes an all-encompassing term, and the positive qualities and value of a specific technology like AlphaFold are then transferred onto products like the YouTube Trending Videos algorithm, or any other AI-related product for that matter. We are creating an ‘AI aura’ that suspends a more detailed evaluation of each product's individual value.

There is also ‘commercial reason’ at play. Tech companies are quite happy to encourage a positive AI aura while also using a degree of opacity to stave off an understanding of how their products work. One functional by-product is a diffusion of responsibility, which can make it harder to scrutinize particular applications.

This overgeneralization suspends critical evaluation of individual AI products and their specific merits, drawbacks, utility and utilization. 

A propensity to treat technology as a "black box" further complicates matters. When we fail to examine the inner workings of AI systems, for example, we miss the opportunity to move from broad product classes to granular assessments of goods, services, their value propositions, and associated costs. Giving credence to the AI aura can prevent us from engaging in necessary levels of analysis that private interests may otherwise discourage.


The Question about Property Regimes

Beyond the properties of any one AI product lies an equally crucial consideration: how these innovations are treated as property, and under what prevailing property regime.

Regardless of the potential benefits that tools like AlphaFold might offer, we must be frank that these are often privately owned assets. This means that any potential goods derived from these technologies are subject to the discretion and pricing strategies of their owners.

In thinking about these issues, I’m reminded of Joseph Keller’s point that “clear language is necessary for clear policy on AI.” 

And so I very much concur with Emily Tucker: “For institutions, private and public, that have power consolidation as a primary goal, it is useful to have a language for speaking about the technologies that they are using to structure people’s lives and experiences which discourages attempts to understand or engage those technologies.” 

Black box thinking won’t help us recognise the next set of socio-technical problems, nor the precise role of property in shaping which properties are afforded to the public, and which are affordable by the public.

So yes, congratulations on these technical achievements are due. But this is also a moment to think about how private property rights control who gets to benefit from these advances.
