Funded by: República Portuguesa – Cultura / Direção-Geral das Artes
Support: Lisbon City Hall, DINAMIA’CET (ISCTE-IUL) and NOVA LINCS through FCT.IP funds
Design: Marco Balesteros (LETRA)
Photography: Joana Linda
Sound: LAMS
The sixth edition of Human Entities, culture in the age of artificial intelligence, returns to Palácio Sinel de Cordes (Campo de Santa Clara, 142-145, Lisbon) in May and June 2022. An initiative of the art group CADA, the programme of public talks is focused on technological change and its impacts – the ways technology and culture influence each other.
All welcome, free entry, booking required here.
The truth has always been contested. But in recent years the tacit knowledge that provides common ground for argument has steadily eroded, as if consensual reality were slowly slipping away. It is hard to tell whether this epistemic crisis is an overblown idea or a genuine consequence of science denial, postmodern theory, or contemporary capitalism.
Clearly, while radio and TV remain big drivers, the means of truth-telling have changed. The rise of social media based on data extraction is unprecedented. And although the link with post-truth may not be direct, the novel feedback loops between our social, cultural and political lives must surely play a part – as must the way the AI technologies that power the platforms shape our collective cognitive processes. But our aim is not to debate whether social media is good or bad; it is, of course, both. For cultural production and distribution alone, the advantages are clear.
Human Entities 2022 will focus on how network algorithms used to group like-minded individuals reinforce our worldviews and are deliberately designed to foment division. We will also discuss post-truth as a product of platform capitalism and examine how recent battles between a handful of companies to dominate the market in AI reflect a structural transformation in global power. This year's event continues to present alternative, more equitable visions, and we will hear how experimental practice in sound art offers an escape.
CADA (Jared Hawkey/Sofia Oliveira)
How should we share the truth about the environmental crisis? At a moment when even the most basic facts about ecology and the climate face contestation and contempt, environmental advocates are at an impasse. Many have turned to social media and digital technologies to shift the tide. But what if their strategy is not only flawed, but dangerous?
In this presentation, Bram Büscher traces how environmental action is transformed through the political economy of digital platforms and the algorithmic feeds that have been instrumental to the rise of post-truth politics. Building on a novel account of post-truth as an expression of power under platform capitalism, he shows how environmental actors mediate between structural forms of platform power and the contingency of environmental issues in particular places. Key in understanding this mediation is a reconfiguration of the relations between nature, truth and power in the 21st century. Its upshot is the need for an environmental politics that radically reignites the art of speaking truth to power.
Bram Büscher is Professor and Chair of the Sociology of Development and Change group at Wageningen University and is a visiting professor at the Department of Geography, Environmental Management and Energy Studies at the University of Johannesburg. His research and writing revolve around the political economy of environment and development with specific interests in biodiversity, conservation, new media, digitalization and violence. He developed the concept of Nature 2.0, which focuses on the political economy of new media and its implications for participation in nature conservation. He is the author of Transforming the Frontier: Peace Parks and the Politics of Neoliberal Conservation in Southern Africa (2013), co-author, with Robert Fletcher, of The Conservation Revolution: Radical Ideas for Saving Nature Beyond the Anthropocene (2020) and author of The Truth About Nature: Environmentalism in the Era of Post-Truth Politics and Platform Capitalism (2021).
The political economy of artificial intelligence is pivotal to the future of the current technology giants. Yet when we turn to the existing research on AI’s impact on the economy, nearly all the attention has been on what we might call the automation/productivity channel, with discussion centered around whether, when, and how the spread of machine learning will automate and/or augment existing jobs. Much less attention has been given to how the nature of AI today may facilitate the concentration of capital. This talk will examine the impact of AI’s centralization under the control of the major planetary platforms, and ask whether alternatives might be possible.
Nick Srnicek is a Lecturer in Digital Economy in the Department of Digital Humanities at King’s College London. His most recent book, Platform Capitalism (2016), sets out a framework for understanding the novelties of businesses like Google, Amazon, and Alibaba – as well as how digital platforms generate new tendencies within our economies. His current research is continuing this focus by examining the political economy of AI and looking at how (beyond automation) AI will affect the dynamics of contemporary capitalism. Nick’s work is also engaged in the long tradition of anti-work politics. His first book, Inventing the Future (2015, with Alex Williams), was an attempt to elaborate an anti-work politics in the context of modern technological changes. His forthcoming book, After Work: The Fight for Free Time (2023, with Helen Hester), seeks to expand anti-work politics into the field of social reproduction by looking at how the often unwaged work of cleaning, cooking, and caring can be recognised, redistributed, and reduced.
Somewhat like what is commonly said about migrants, autonomous machines are taken to be a potential threat to human labour. In military environments, these systems can, in fact, be more efficient and more lethal than those controlled by people. This idea returns us to the core definition of intelligence which, since the Industrial Revolution, has been deeply linked with efficiency-as-productivity and the avoidance of error. This definition, heir to a type of rationality with origins in the Enlightenment, is placed at the top of a hierarchy above all other human thought systems. Problems linked to managing the natural environment, where other, supposedly "non-rational" human cultures are encountered, have been solved through domination and even annihilation. We can now see that some AI systems continue this legacy.
In this context, AIELSON [a machine learning model Torres trained to generate spoken-word poetry] reflects upon the zeitgeist, incorporating a complex critique in which the system is connected to humanity (as a reflection), since imperfections are embraced rather than discarded. This contradicts the notion of intelligence as the epitome of flawless efficiency and perfection. Torres therefore proposes that we now discuss machine creativity, and how creativity informs human imagination. Her work asks: can we envision another future of possible cooperation between humans and machines, where the natural world is no longer seen as a territory to conquer?
Paola Torres Núñez del Prado takes the exploration of the limits of the senses as her starting point, examining the concepts of interpretation, translation, and misrepresentation, and reflecting on the mediated sensorial experiences that (re)construct our perceived reality and in turn serve to establish a cultural hegemony within the history of technology and the arts. She recently received an Honorary Mention at the Prix Ars Electronica for AIELSON, a system developed during her residency in Google’s Artists + Machine Intelligence programme, 2019-20.
Her performances and artworks, which also form part of the collections of Malmö Art Museum and the Public Art Agency of Sweden, have been presented across the Americas, Central Europe, and Scandinavia, where she is currently based.
It is with regret that CADA must announce that, for reasons of force majeure on the part of the speaker, this talk has been postponed. A new date will be announced as soon as possible.
In Discriminating Data [2021], Wendy Hui Kyong Chun reveals how polarization is a goal—not an error—within big data and machine learning. These methods, she argues, encode segregation, eugenics, and identity politics through their default assumptions and conditions. Correlation, which grounds big data's predictive potential, stems from twentieth-century eugenic attempts to “breed” a better future. Recommender systems foster angry clusters of sameness through homophily. Users are “trained” to become authentically predictable via a politics and technology of recognition. Machine learning and data analytics thus seek to disrupt the future by making disruption impossible.
In this conversation, Chun will discuss the themes of her book with Andrea Pavoni, assistant research professor at DINAMIA’CET, and then take questions from the audience.
Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute. She studied Systems Design Engineering and English Literature, which she combines in her current work on digital media, and is the author of Control and Freedom: Power and Paranoia in the Age of Fiber Optics (2006), Programmed Visions: Software and Memory (2011), Updating to Remain the Same: Habitual New Media (2016) and, more recently, Discriminating Data (2021).