Aesthetics of New AI

How do we find the balance between researching AI as a technical object, without distancing it as “other” to the human?

– Mercedes Bunz

What new aspects does the technical framework of machine learning bring to art-making? And conversely, what can artworks that use AI point to in AI research and development? These questions formed the basis for discussion during an online panel event, convened by the Creative AI Lab in collaboration with the NYU Digital Theory H-Lab for Frieze Week 2020, with Leif Weatherby, Nora Khan, Joanna Zylinska and Murad Khan.

The Creative AI Lab is a collaboration between the R&D Platform at Serpentine and King’s College London’s Department of Digital Humanities. It follows the premise that, collectively, we are at the early stages of understanding the aesthetics of ‘AI’: locating a new poetics, investigating what it means to work with systems that are able to calculate meaning, and practicing art-making in the so-called ‘black box’ of machine learning. This panel formed part of an ongoing and collaborative investigation into AI aesthetics, one of multiple lines of inquiry explored by the Lab. The event was hosted by the Lab’s principal investigators Dr Mercedes Bunz, Senior Lecturer in Digital Society, KCL, and Eva Jäger, Assistant Curator of Arts Technologies, Serpentine Galleries. A reader of the panellists’ work accompanied the event – it can be downloaded here. As Eva Jäger outlined, ‘the Lab holds space for this work, convening diverse conversations, like this one, and compiling a database of tools and resources. The lab also produces knowledge and approaches to digesting and communicating this media through experimental research projects’.

‘It’s not just a black box, it’s at least grey. When you open that up you start to see things that have either aesthetic value, critical value, or both’. This initial provocation from Leif Weatherby (NYU Digital Theory H-Lab) aligned with the impetus for such practices outlined by Mercedes Bunz: that ‘we need to reflect on the societal impact of this technology critically, but also on its creative capacity – art is offering a space for the creative and playful exploration of this technology’.

The panel responded to these considerations alongside a tele-present audience through presentation and discussion. A range of divergent approaches emerged for understanding the reciprocal relationship between AI and artistic research, encompassing philosophical, art historical and conceptual-technological approaches. In particular, two often overlapping modes of practice emerged: working with machine learning to develop a distinctive machinic aesthetics through the generation and display of ‘front-end’ imagery, and works which foreground conceptual exploration, revealing the ‘back-end’ processes and mechanisms.

Joanna Zylinska, Professor of New Media and Communications at Goldsmiths, University of London, argued that AI art research needs to shift away from a binary view of humanity and technology. Critically questioning the motivations behind AI art production and its market, which in her view often reproduce persistent and ‘seductive’ human notions of creativity, she posited that contemporary machine learning practices should in fact tend towards a different “AI” – art for ‘another intelligence’ – exemplified by Katja Novitskova’s work Pattern of Activation (2020).

For Nora Khan, Professor in Digital + Media at Rhode Island School of Design, art criticism is unequipped to attend to the optics of AI. She proposed the need for a new glossary of terminology to confront the shortcomings of our linguistic tools. She outlined the particular necessity to go beyond the prevailing view that ML-generated imagery can be comprehended through the lexicon of the dreamlike or the unconscious; in her view, this can lead to a ‘divorce from a critical reading’. However, it is these moments where language fails that offer her hope for the emergence of a new understanding.

Murad Khan, PhD student at University College London and Visiting Practitioner at Central Saint Martins, dug deeper into the notion of the adversarial. Liberating the concept from its place within the widespread Generative Adversarial Network model of ML practice, in which the adversarial component plays an evaluative role in a productive process, Murad highlighted an alternative operation, derived from a cybersecurity context, in which the adversarial leads the ML model to an incorrect conclusion, or initiates a subversive chain of events. It is through this method of induced error that the black box can be opened up, revealing that as ML processes produce knowledge, latent biases are magnified, and then distorted through aberration. He forwarded the possibility that the failure of AI/ML to avoid racial bias and its magnification, whilst a problem, is not one that is necessarily best solved; perhaps it is better confronted through the adversarial’s ‘refusal of an image which is already dictated by the machine’.
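By way of illustration – this is not Khan’s own method, but a minimal sketch of the adversarial attack he contrasts with the GAN’s evaluative adversary – the ‘induced error’ can be shown in a few lines of Python. A toy logistic classifier (its weights invented for the example) is led to an incorrect conclusion by a small fast-gradient-sign perturbation of its input:

```python
import numpy as np

# Toy logistic classifier with fixed, hypothetical weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the model's probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input the model confidently assigns to class 1.
x = np.array([2.0, -1.0, 1.0])
p_clean = predict(x)

# Fast-gradient-sign step: nudge x against the gradient of its score
# (for a linear model, that gradient is simply w), so the model's
# conclusion flips -- the adversarial as induced error, not evaluation.
epsilon = 1.5
x_adv = x - epsilon * np.sign(w)
p_adv = predict(x_adv)

print(p_clean > 0.5, p_adv < 0.5)  # → True True
```

To a viewer the perturbed input barely differs from the original, yet the model’s conclusion inverts – the gap between the two readings is precisely where such practices locate their critical purchase.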

The panel agreed that art institutions, and particularly curators, play an essential role in creating spaces in which the technology of AI can be explored. In particular, there was a consensus that the operational logic of such technologies can be meaningfully communicated to the public without any need for specialist technical knowledge. In response, Joanna Zylinska contended that alongside the educational role of museums and galleries, universities and educational experiences hold the ability to open up ‘different modes of communicating, sensing, creating environments’, further diversifying and enriching public-facing discourse. For the panel, this led to the question of how we might facilitate such conversations whilst navigating the ‘terminological deadlock’ between metaphorical and ‘literal’ language. Nora Khan highlighted how unstable the lexicon of AI becomes under critical examination; as such, the best path forward might be to develop a language which is co-created across the sciences and humanities. As Mercedes Bunz summarised, it is by finding this balance between researching AI as a technical object, without distancing it as “other” to the human, that its operational logic can begin to be understood.

More Creative AI Lab events are in the pipeline – sign up to the newsletter and explore a wealth of commissioned and compiled resources.

Text by researcher and editor Alasdair Milne. Milne is the recipient of the LAHP/AHRC-funded Collaborative Doctoral Award at King’s College London Department of Digital Humanities in collaboration with Serpentine’s R&D Platform. His work is broadly concerned with collaboration – how to comprehend practices of thinking and making which incorporate both the human and the non-human. His PhD will tackle creative AI as a medium in artistic and curatorial practices.


