Limits to digitalisation

Artificial intelligence and the work of the future: The room was well filled with around 45 participants when publicist and theatre director Fabian Scheidler presented his theses on AI. He argued for demythologising the technology and for taking into account the considerable ecological damage caused by AI.

The discussion on 7 December 2023 in the neighbourhood centre in Berlin-Prenzlauer Berg was intended to bring together trade union perspectives on good work in times of digitalisation with reflections on the crisis-ridden transformation of society and the economy, under whose conditions the technology is being used. Dr. Nadine Müller, Head of Innovation and IT at the services trade union ver.di, unfortunately had to cancel her participation at short notice due to illness. Fabian Scheidler, author and theatre director, and the audience made up for her absence with their interesting contributions.

The moderator, Sophia Bickhardt from weltgewandt e.V., began by introducing the institute’s activities and in particular the “Resilient Work” project, which provided the framework for the event. She referred to the learning platform being developed on this topic and invited participants to explore the free courses of the predecessor project “Fresh Up Economics. Towards Economic Literacy in Europe”.

She then outlined the horizon of the evening’s questions and discussed the term artificial intelligence. AI seems to be a magic word at the moment, even though it has been around for a long time: in facial recognition at railway stations, in translation algorithms such as deepl.com, in food delivery by drone or journeys in autonomous taxis. ChatGPT from OpenAI has been in use for a year now – for those who like it, even in girlfriend mode, which makes “endless conversations” with a virtual girlfriend possible. AI is used in medicine – the keyword here is “digital health” – in speculation on the financial markets, in drone missions in war, in journalism and thus also in the social negotiation of what is considered truth. One of the promises of AI is its use in everyday office life, for example when reports and minutes are written with it or students use it to write their papers. AI stands for new (technical) possibilities – and for new markets.

What is artificial intelligence?

According to the Bremen Economic Development Agency, AI can be defined as

“… the attempt to transfer human learning and thinking to the computer and thus give it intelligence. Instead of being programmed for every purpose, an AI can find answers independently and solve problems autonomously. […] The goal of AI research has always been to understand the function of our brain and mind on the one hand and to be able to artificially recreate it on the other.” (1)

In contrast to earlier phases of technological change and industrialisation, AI is now also “mechanising” cognitive activities. Nevertheless, we must bear in mind “… that it is and remains a machine, even if it appears to be human or intelligent.”

The prerequisite for the creation of AI is the mass availability of data, which is analysed at high speed. An AI learns independently from this data – yet it is programmed by humans.

The presenter also pointed out a problem: just as the term “intelligence” is not clearly defined, the term “artificial intelligence” can hardly be either. This is why some people prefer the term “machine learning”. Nevertheless, a distinction is made between so-called strong and weak AI. Strong AI could solve problems of a general nature – it does not (yet) exist. Weak AI refers to algorithms that can process specific tasks “whose solutions they have previously learnt independently”.

This means that when we talk about AI, we generally mean weak AI. It is also important to realise that AI does not have its own consciousness, cannot understand independently and is not creative. These are all characteristics of humans.
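To make the distinction a little more concrete (this example was not part of the event itself): a “weak AI” in the above sense is simply an algorithm that learns one narrowly defined task from data. The following minimal Python sketch, assuming the scikit-learn library, trains a small decision tree to classify iris flowers from measurements – it “learns” this single task from examples, and nothing beyond it.

```python
# Minimal illustration of "weak AI": an algorithm that learns one specific
# task from data (here: classifying iris flowers) and only that task.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, well-known dataset: flower measurements and species labels.
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)

# "Learning" here means fitting a model to the training examples.
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# The model can now solve exactly this task on new data - and nothing else.
print("Accuracy on unseen examples:", model.score(X_test, y_test))
```

The model has no understanding of flowers and no consciousness; it has merely extracted statistical patterns from the examples it was given – which is precisely the sense in which weak AI is “not creative” and “cannot understand independently”.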

Dimensions of the discussion of AI

If we want to systematically take into account the fact that AI is made by humans, the question also arises of the social conditions under which AI is created and the purposes for which it is produced. According to the moderator, it is therefore not enough to discuss AI solely in relation to technological change, as is often the case. She suggested discussing it embedded in production contexts and social situations. She named the following dimensions:

– the socio-technical dimension: the connection between technological development and economic growth. Following on from this, she asked how work would change as a result of AI. What kind of work will (still) exist in the future? Under what conditions will “good work” be possible? And how can growth and sustainability be harmonised with AI?

– the macroeconomic and societal dimension: the various crises in the context of which AI is being developed and used – for example the increasing gap between rich and poor, the destruction of nature, the inability to resolve conflicts peacefully, the increasingly fragile democracies, etc.;

– the economic policy dimension: the current design of the capitalist economic system according to liberal economic concepts of privatisation, liberalisation and deregulation, and thus the increased orientation of work and life towards principles of (monetary) profit or benefit. Which AI will be created under these conditions – and which will not?

– the philosophical and socio-psychological dimension: What power do people give to technology? At what point do they recognise an “entity” in a humanoid robot, for example, by which they – unconsciously – allow themselves to be controlled? Or do they see technology as an instrument, as a tool? Do ChatGPT, Alexa, Siri, LaMDA etc. have a consciousness? Can they be regarded as subjects? Finally, are we entering a “post-human age” (André Gorz) in which the human brain will be replaced by “more efficient” and “smarter” artificial intelligence?

– the democracy dimension: What “upgrade” of democracy is required to ensure that the rapid technological and social change is organised democratically? Do the consequences of changing power constellations due to the emergence of large corporations such as Google, Amazon and others in the last 20 years, whose power is no longer just economic, also need to be taken into account? And how can citizens’ data be effectively protected?

Utopia or dystopia?

The moderator then drew a contrast: could the use of artificial intelligence help to bring us closer to the utopia of a socially just and ecological economy and way of life? Or is the horror scenario of surveillance capitalism (Shoshana Zuboff) more likely to emerge – one that would probably exacerbate these crises while focussing on getting everything under control, as a technical solution to social problems, so to speak, and an orientation towards the creation of new markets?

And because a pure “either-or” is rarely observed in practice, the question is to what extent there are elements of both: AI might, for example, be used to make work in the care sector significantly easier, but also to put workers under permanent pressure through meticulous monitoring, as is the case at Amazon. Reflecting on these developments leads to the question: what social conditions do we, the citizens, want – with and without AI?

Demystifying AI, domesticating corporations

Fabian Scheidler spoke in favour of demystifying AI. The question is less whether the technology is impressively good and humans simply need to adapt to it. Rather, we need to realise that it is man-made – which raises the question of whom it serves and for what purpose. He emphasised the large ecological footprint of AI – high energy consumption and the raw materials, such as rare earths, that it requires – and linked this downside of digitalisation as a whole to the problem of growth in capitalist economies. The question of ownership should also be taken into account, which includes “breaking up corporations”.

This idea was taken up several times by the audience. One participant objected that this was mere “pseudo-radicalism” and itself a neoliberal argument: even if corporations were split up into smaller units, everything would remain entirely market-driven. Nationalisation, he added, was not automatically the better choice either, citing the example of the state banks of some German federal states, which ran into major problems during the financial crisis of 2007/2008 because they had “gambled themselves away”. Instead, he argued for better regulation of corporations and significantly more employee co-determination: employees should have a say in deciding which AI is used and which is not.

Crisis of civilisation

Another participant pointed out an aspect that is almost never mentioned in debates about digitalisation: “If the mass of added value starts to shrink globally due to new technologies, then the reproduction system of capitalism will collapse.” He sees this process happening in the USA and Europe, but now also in China: “We have long since entered the meltdown of the mass of value in China. […] If this process continues, an overall structure of reproduction will collapse and something new may emerge – in an attempt to retain the old property titles. And that is what will cause the situation to explode.” Fabian Scheidler stated in the course of the conversation: “We are in a crisis of civilisation.”

The reactions to the discussion were positive. One participant said: “It was a real relief for me. At last, we were once again discussing social issues.” One visitor was impressed that it was possible to get such a large number of participants actively talking. Several asked when the next events would take place…


(1) Jan Raveling, Was ist Künstliche Intelligenz?, 11.04.2023, https://www.wfb-bremen.de/de/page/stories/digitalisierung-industrie40/was-ist-kuenstliche-intelligenz-definition-ki

– – –

The event was part of the project “Time to Fresh Up. Cultivating Economic Literacy for Resilient Work in Europe”, funded by the Erasmus+ programme of the European Union.