We Live in an Ocean of Air is a multi-sensory immersive installation illuminating the fundamental connection between animal and plant. Step through the canvas and share a breath with the giants of the plant kingdom.
This multi-user VR experience premiered at the Saatchi Gallery, London, running from 7th December 2018 to 5th May 2019.
‘We Live in an Ocean of Air’ was created by London-based immersive art collective Marshmallow Laser Feast in collaboration with Natan Sinigaglia and Mileece I’Anson.
Everyone is happy were tasked with developing a VR-enabled avatar system that would allow guests to see each other, themselves and their relationship with the environment. As well as following the design and art direction, this had to integrate a very special blend of sensing technologies, including body tracking, heart-rate monitors and breath sensors.
A bit of a playful project investigating the real-time generation of singing anime characters: a neural mashup, if you will.
All of the animation is generated in real time by a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
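Animating a GAN in real time typically means walking through its latent space and rendering each step. As a hedged illustration (this is not the project's code, and the actual setup uses vvvv), the sketch below shows spherical interpolation between two random latent vectors, the common way to produce smooth StyleGAN transitions; the 512-dimensional latent size matches StyleGAN's default:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:  # vectors nearly parallel: fall back to linear blend
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # start latent
z_b = rng.standard_normal(512)  # end latent
# 30 latents tracing a smooth path; each would be fed to the generator as one frame
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 30)]
```

In practice each interpolated latent is pushed through the trained generator per frame, so the character appears to morph continuously.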
Lyrics were produced with GPT-2, a large-scale language model trained on 40GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has not yet been publicly released, due to concerns about malicious use (think fake news).
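Text generation with a model like GPT-2 boils down to repeatedly sampling the next token from the model's predicted distribution; OpenAI's released samples commonly use top-k sampling with a temperature. As an illustrative sketch (hypothetical, not the project's actual pipeline), here is that sampling step over a raw logit vector:

```python
import numpy as np

def top_k_sample(logits, k=40, temperature=0.8, rng=None):
    """Sample a token id from the k highest-scoring logits."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top = np.argsort(logits)[-k:]                 # indices of the k best tokens
    probs = np.exp(logits[top] - logits[top].max())  # stable softmax over the top k
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```

Lower temperatures and smaller k make the lyrics more predictable; raising them makes the output stranger, which is often where the interesting lines come from.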
Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.
The setup uses vvvv, Python and Ableton Live.
StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google respectively.
Click below to see snapshots with some of the generated lyrics.
Music: Paul Jebanasam
Sound design: Echoic Audio
Production: Juliette Bibasse
The Layered Realities weekend 5G showcase brings together the University of Bristol’s Smart Internet Lab and Watershed, We The Curious, BT, Nokia, Zeetta, Cambridge Communications Systems, PureLiFi and BiO.
In our first real foray into programming with machine learning, we left a Convolutional Neural Network (CNN) to ‘meditate’ on an image of Saraswati, Goddess of knowledge, music, arts and learning. Once trained in this fashion, the CNN can be fed other images, and it will ‘paint’ them in the style of the image it has learnt, using a process known as Neural Style Transfer. The source images are photos we took while living in Bhutan, a Buddhist kingdom on the Himalayas’ eastern edge.
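In Neural Style Transfer, “style” is usually captured as the Gram matrix of a CNN layer's feature maps: the correlations between channels, independent of where things sit in the image. As a hedged sketch (illustrative only, operating on a raw feature-map array rather than a real CNN), this is the core style-loss computation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-by-channel correlations, normalised by map size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between two feature maps' Gram matrices."""
    return float(np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2))
```

The optimiser then adjusts the output image to shrink this loss against the style image's features while a separate content loss keeps the subject recognisable.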
We’ve also done some pretty cool R&D into applying the same methods to real-time video.
Click below to see the full gallery.
Made in vvvv & HLSL, this program runs in full HD at 120fps on a GTX 1080 graphics card.
This capture was featured as a short film at the 2017 Prix Ars Electronica in Linz, Austria. Music & Coding by Kyle McLean.