IS SEEING BELIEVING?
A Collaborative Art and Design Research Project with Culture A and Lisa Talia Moretti
How do we evolve the visual language of AI? How much of what we think we know about AI is influenced by what we ‘see’ it to be from images transferred across the world? How do we represent the heard but unseen?
The aim of this project is to robustly explore perceptions and understanding of Artificial Intelligence (AI) through visual and non-visual stimuli. The project is divided into two phases: the first, a preparatory phase, is an analysis of the visual language of AI in mass media. The second phase culminates in an immersive sound exhibition. The data gathered during both phases will be used to study socio-technical blindness, with the aim of creating new frameworks for guidelines on ethical AI design.
Inspired by the sound art of Bernhard Leitner and recent scholarship in intelligent human-machine collaboration, Is Seeing Believing? explores society’s understanding of AI through active participation. Language shapes the way we come to understand the world and our place within it. Early research by Lisa Talia Moretti shows that the language and visuals we use to talk about AI, and to get AI to talk, are very quickly shaping both how we understand this kind of technology and how we see our place within the system. Lisa and her team recently published a paper on how to manage bias within language data sets through a strategic framework called PIIE. However, PIIE isn’t just a framework; it’s a new language that anyone within a business can use to interrogate an AI system, product or service. Built around the vocabulary of Purpose, Intent, Experience and Impact, and featuring five key principles that support ethical big data practice, PIIE introduces a new way of coming to know, understand, question and, ultimately, democratise AI. This project approaches the topic from a new perspective: as activated art.