October 4, 2021
A few days ago, an algorithm popped an ad into my Facebook feed, inviting me to invest, along with 2,000 other shareholders, in a product developed by graduates of a famous US university. The ad boasted that in this way I would contribute to “building the future” and to “value creation”, words that sound very odd to my ears.
“Everyone today desires ‘something + AI’; it’s a magic formula,” someone mentioned in the comments on the company’s ad. Among the more than 400 comments, many seemed genuinely enthusiastic, while others sounded at least optimistic. People contributed real money to this collection (a kind of masked crowdfunding, with a minimum threshold of a few hundred euros), and from the discussions and comments it looked like many participants had enough confidence to invest. When skeptics asked questions and wanted more details, a company representative responded with new information about the business plan. The explanations ranged from how the product “creates value”, to brief presentations of how they aimed to attract future big investors, to detailed calculations about a future stock market listing, projections of profitability and scalability, and, last but not least, assurances that the promise of a secure investment was based on real data. Although the deadline for achieving these goals was set for 2030, for many this timeline seemed reasonable.
Obviously, everything was presented as an unmissable opportunity in which the value brought by AI would change the lives of those with the courage and inspiration to get to the next level and invest. On closer inspection, though, what seemed to be a great opportunity looked more like a fishy campaign for some kind of start-up. It was not clear where the company was located, and the owners intended to raise just $20,000 to put into practice a rather ambiguous but nonetheless interesting idea: to sell AI.
However, it was not clear what the term AI referred to: what type of AI, what services, or what kind of products they intended to create and sell on the market. The reference to AI was apparently considered sufficient and self-explanatory. In the absence of other information, I assumed that what was on offer was a limited form of AI (i.e., Artificial Narrow Intelligence, or ANI), a category that can perform only specific tasks.
This peculiar invitation to invest, combined with the idea of an AI market, made me very curious. The promise of a safe and easy return, the inaccuracies in the presentation, the information gaps, and the cocktail of beliefs and expectations that emerged from the official announcement reminded me of a kind of ritual communication in which the expression “Artificial Intelligence” was used as a type of magic formula. This is ironic and intriguing, because AI is associated with science rather than magic.
Let me take a step back from this story and shift the focus to the subject of knowledge, which is closely linked with AI. We know, from Francis Bacon to Thomas Hobbes, that scientia potentia est: when we talk about knowledge, we are talking about power. We also know that, in the spirit of this famous dictum, Michel Foucault paid special attention to the power-knowledge and power-discourse relationships, and to the ways in which the monopoly on knowledge contributes to the reproduction of power structures. From this point on, things get more complicated and new, challenging questions arise:
Can knowledge be democratized?
Could AI become a tool for the widespread distribution of knowledge?
Is this new, unarticulated desire, the engine behind the exuberance around AI?
These questions will remain open and I will not try to answer them here, but it is certain that the mixture of social sciences and technology in this old story of knowledge has opened many “black boxes” in recent decades. Some social science researchers thus shifted their attention, lock, stock, and barrel, to the construction of knowledge in AI, a topic that provoked lively debates among visionary computer scientists, skeptical observers, sociologists, anthropologists, philosophers, and other scientists.
In short, two distinct ways of looking at AI emerged. On one side, H.M. Collins (1990) tried to answer questions that remain important even today:
What is knowledge? How is it made, and how does it move around (i.e., from people to, and within, machines)?
Collins states that the locus of knowledge is not the individual, as is commonly believed, but the social group, because what we cannot articulate, we know through the way we act. “Knowing things and doing them are not two separate issues. I know how to talk only by talking to others and I can learn how to talk only by talking to others (…) if I isolate myself for a day in a room, without contact with anyone, when I go out in the evening my knowledge will not have changed too much”, argues Collins. For him, the place of knowledge is not the individual but the social group; therefore, the individual is built by social groups, not the other way around.
In other words, knowledge is based on a process of continuous socialization and interaction. Looking at the ways knowledge acquisition takes place, Collins identifies two models of learning: an “algorithmic model”, which I could also call rational, in which knowledge is stable and transferable, taking the form of a recipe – that is what AI does – and an “enculturational model”, which I could also call intuitive, in which the processes have more to do with unconscious social contagion – that is what people do.
“Machines can only imitate us in those cases where we would prefer to do things in a way close to machines,” adds Collins. His conclusion is that these are the limits of AI; we may say that this is, in fact, what today we call ANI.
Based on ethnographic research into knowledge acquisition processes, Diana E. Forsythe (1993) arrived at a similar conclusion after observing the engineers in the laboratory where she was doing her research. She found that the “knowledge engineers” had operationally re-defined knowledge as globally applicable rules – a stable entity that can be acquired, transferred, and manipulated by a program. Following Star, who had previously said that computer scientists “deleted the social”, Forsythe adds that “they also deleted the cultural”, and thus the AI community managed to automate the concept of knowledge.
On the other side, following Bruno Latour (1991), we could say that such approaches treat AI as one particular segment of a program of action, while the essence of this type of knowledge is its total, unbound existence. This reflects what Latour (2005) identified as one of the great problems of the social sciences: taking “the social” as an explanation rather than as something to be explained. To overcome this shortcoming, he proposed a new definition of the social, no longer seen as a distinct field, a specific domain, or something bounded, but as a particular movement of re-associations and re-assemblies that includes non-humans.
By displacing the anthropocentric perspective of the social, the Paris School (Bruno Latour, John Law, Michel Callon, and others) developed the actor-network theory (ANT), a way of looking at the world symmetrically, putting equal weight on people, machines, objects, texts, calculation formulas, etc. This shift in perspective allows “objects of science and technology to be socially compatible” (Latour 2005).
In other words, we can look at a mobile phone, a kitchen appliance, a computer, etc., as being part of the social, because non-humans – in our case technological artifacts (including AI) – become actants, overcoming the state of helpless bearers of a symbolic projection.
As Michel Callon (1997: 170) puts it: “The sociology of science and technology makes this argument: Entities – human, nonhuman and textual – aren’t solid. They aren’t discrete or clearly separated from their context. They don’t have well-established boundaries. They aren’t, as jargon puts it, distinct subjects and objects, but sets of relations…”. The best example to illustrate this is the fact that we are in the middle of an emergent and extremely dynamic process of expanding AI production and consumption, whose effects take various forms, from chatbots and “entities” such as Siri or Alexa to the daily automated recommendations and targeted ads that we encounter more and more in our everyday interaction with technology.
Who can say that our preferences, decisions to invest, to listen to music, or to watch movies are ours alone today?
Or that the technical solutions to solve particular challenges in areas as varied as agribusiness, banking or insurance, etc. are the results of an automated machine alone?
AI is more and more present in the most diverse areas of everyday life and is being adopted across fields and industries, as the capacity to store, handle, and create large amounts of data constantly grows.
A quick look at the latest industry research in the field shows that AI is increasingly seen as a major driver of economic activity and that the changes it is expected to generate in the next decade are radical. For example, a study by PricewaterhouseCoopers (PwC) predicted that the widespread adoption of AI by companies could generate a 14% increase in GDP by 2030. Another study, this time published by Gartner, showed that AI is a priority for most CIOs (chief information officers), and a McKinsey Global Institute (2019) study found that 70% of companies will adopt at least one type of AI technology by 2030. Keep in mind that all this data was collected before the Covid-19 pandemic, and that more recent news points to an even faster acceleration of these trends.
We speak today not only about our carbon footprint but also about our digital footprint, about the digital universe and its explosive expansion, and about the shortage of the workforce needed to navigate the digital world that is affecting the industry. It is understandable that in this new context Harvard Business Review called data scientist “the sexiest job of the 21st century”. What made this new context possible is that, unlike the 1990s, when AI was part of limited and rather monopolistic research conducted in the laboratories of big university centers or big IT companies, today it could become a widespread means of production. Just as the Internet and social networks are seen as giving rise to a culture of “sharing”, seemingly bringing back “the commons” as opposed to “private ownership”, AI is seen as a tool to unlock and deliver knowledge and to improve decision-making.
By incorporating the non-human, the social becomes a heterogeneous assemblage and, consequently, from this perspective, AI transcends the status of the individual, as it appeared in Collins’ vision, and becomes part of a collective. Entity and network at the same time, co-extensive. In this sense, a service or product that incorporates AI by design can be understood as a web of relationships in which various actors are involved. In trying to uncover these actors, we must pay attention not only to how their tangible and intangible infrastructure is composed, but also to how knowledge is managed: from text to data, from bits of information and mathematical formulas to the people on the project teams, potential clients and their interests, and, finally, the users. The essence of this organization lies in the impossibility of delimiting the entity from the network through which it operates. In this sense, AI is an actor-network, a collective-individual. Reversing Collins’s argument, we could say that AI thus reveals itself not as an individual, but as something built from human and non-human social groups. Moreover, from this perspective, we can observe how the social and the cultural are not deleted but restored and refreshed.
Turning back to the story at the beginning of this article, it is somewhat understandable why the mere invocation of the term AI proved, for some individuals, to be enough incentive to invest. The “techno-optimists” offer a good illustration of what the economist Robert J. Shiller (2000) would call irrational exuberance.
On the other hand, some anthropologists would see in the company’s promises of gain, and especially in the industry research that lays the foundation for such optimism, nothing less than sentences that are neither true nor false, but a form of social action. We can say, following the philosopher John Austin, that by putting into practice and publishing this type of research, the AI industry performs the development of AI and builds not only its own future but also its “system of beliefs”.
In addition, AI uses data generated by people, who sometimes make decisions based on the very same AI’s recommendations, reintroducing into the circuit new data and information that is continuously multiplied, re-arranged, and re-used by data scientists, engineers, and machine learning processes, in a loop. If we uncover the network in which AI is generated and put to work, we can even see the framework of “grounded cognition”. The process of knowledge-making as “heterogeneous engineering” (Callon 1997) is also shaped by the body, brain, and environment, because data scientists, engineers, and developers, as actors in the network, provide vision, movement, audition, emotion, motivation, and so on, to the acquisition of knowledge. In this way, AI is a socio-technical system that holds society together.
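This loop – recommendations shaping choices, choices becoming new training data – can be sketched in a few lines of code. The sketch below is purely illustrative: a toy model with invented items and probabilities, not a description of any real recommender system.

```python
import random

random.seed(0)

# Toy catalogue: item -> popularity count (the "data" the system learns from).
counts = {"a": 1, "b": 1, "c": 1}

def recommend():
    # The system recommends items in proportion to their past popularity.
    items = list(counts)
    weights = [counts[i] for i in items]
    return random.choices(items, weights=weights, k=1)[0]

def user_chooses(recommended):
    # Users mostly follow the recommendation, sometimes pick at random.
    if random.random() < 0.8:
        return recommended
    return random.choice(list(counts))

for _ in range(1000):
    choice = user_chooses(recommend())
    counts[choice] += 1  # the user's choice is fed back in as new data

print(counts)  # the loop tends to amplify whichever preferences emerged early
```

Even in this crude form, the circularity is visible: the machine learns from behaviour that it has itself helped to produce, which is precisely why the data can never be treated as purely “extra-social”.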
Embodied by AI, the relation between technology and society reveals itself as reciprocal. In this way, technology reaffirms its social re-integration once more, embedding AI in the very social fabric and leaving aside, at least for the moment, the long-awaited evolutionary leap to an entity that thinks like a human being.
But in all these new assemblages of mobile bits of knowledge that we encounter in everyday life, we can recognize a hybrid form of knowledge, one with nothing exclusively extra-social or extra-cultural in it. It is more plausible to say that we are dealing with a human and non-human form of knowledge rather than an artificial one – a fuzzy logic, to borrow the terminology of the mathematician Lotfi A. Zadeh, placed in the interstices of society and technology. Maybe this is what gives AI its magical aura: the fact that it is seen more and more as a technical tool with the power to add intelligence or to augment human knowledge. This opens the way for novel possibilities (and markets), because we speak of something that seems – or seemed until recently – impossible to buy. If this is where the “value” that AI engineers emphasize lies, then the exuberance around it is, in a way, understandable.
Collins, H. M. (1990). Artificial Experts: Social Knowledge and Intelligent Machines. Cambridge, MA: MIT Press.
Callon, M., & Law, J. (1997). After the Individual in Society: Lessons on Collectivity from Science, Technology and Society. The Canadian Journal of Sociology / Cahiers canadiens de sociologie, 22(2), 165–182.
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
Shiller, R. J. (2000). Irrational Exuberance. Princeton, NJ: Princeton University Press.
Forsythe, D. E. (1993). ‘Engineering Knowledge: The Construction of Knowledge in Artificial Intelligence’, Social Studies of Science, 23(3), 445–477. doi: 10.1177/0306312793023003002.
McKinsey Global Institute (2019).
PricewaterhouseCoopers (2021). Navigating the Payments Matrix: Charting a Course Amid Evolution and Revolution. Payments 2025 & Beyond.