
Project Showcase

On 22, Apr 2019 | By Kyle

We Live in an Ocean of Air

We Live in an Ocean of Air is a multi-sensory immersive installation illuminating the fundamental connection between animal and plant. Step through the canvas and share a breath with the giants of the plant kingdom.

This multi-user VR experience premiered at the Saatchi Gallery, London, on 7th December 2018 and ran until 5th May 2019.

‘We Live in an Ocean of Air’ has been created by London-based immersive art collective Marshmallow Laser Feast in collaboration with Natan Sinigaglia and Mileece I’Anson.

Everyone is happy was tasked with developing a VR-enabled avatar system that would allow guests to see each other, themselves, and their relationship with the environment. As well as following the design and art direction, this had to integrate a very special blend of sensing technologies, including body tracking, heart-rate monitors, and breath sensors.
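
To give a flavour of what driving an avatar from physiological signals involves, here is a minimal Python sketch that smooths a noisy, normalised breath reading before mapping it onto an avatar parameter. All names and values are hypothetical illustrations, not the production system.

```python
# Minimal sketch: smoothing a noisy, normalised breath-sensor reading
# before it drives an avatar parameter. All names and values here are
# hypothetical illustrations, not the production system.

class BreathSmoother:
    """Exponential moving average to tame jittery sensor readings."""

    def __init__(self, alpha: float = 0.15):
        self.alpha = alpha  # smoothing factor: lower = smoother, laggier
        self.value = 0.0    # current smoothed estimate

    def update(self, raw: float) -> float:
        # Blend the new reading into the running estimate.
        self.value += self.alpha * (raw - self.value)
        return self.value

smoother = BreathSmoother()
for raw in [0.2, 0.8, 0.75, 0.9, 0.3]:  # stand-in sensor stream, 0..1
    # Map smoothed breath onto a visual parameter, e.g. avatar chest scale.
    chest_scale = 1.0 + 0.25 * smoother.update(raw)
    print(f"avatar chest scale: {chest_scale:.3f}")
```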

Read more…

On 17, Mar 2019 | By Kyle

Waifu Synthesis – real-time generative anime

A playful little project investigating real-time generation of singing anime characters; a neural mashup, if you will.

All of the animation is made in real time using a StyleGAN neural network trained on the Danbooru2018 dataset, a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags.
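
One common way to animate a GAN in real time, and a reasonable assumption about what is happening here, is to interpolate between points in the generator's latent space so that consecutive frames morph smoothly. A minimal Python sketch with a stand-in generator:

```python
# Stand-in for real-time StyleGAN animation: walk smoothly between random
# latent vectors so consecutive frames morph into one another. A real
# setup would feed each latent to a generator with trained weights.
import numpy as np

LATENT_DIM = 512  # StyleGAN's usual latent size

def lerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    return a * (1.0 - t) + b * t

rng = np.random.default_rng(0)
z_from = rng.standard_normal(LATENT_DIM)
z_to = rng.standard_normal(LATENT_DIM)

for frame in range(60):          # one second of animation at 60 fps
    t = frame / 59.0
    z = lerp(z_from, z_to, t)    # interpolated latent for this frame
    # image = generator(z)       # hypothetical trained StyleGAN model
    print(f"frame {frame:02d}: |z| = {np.linalg.norm(z):.2f}")
```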

Lyrics were produced with GPT-2, a large-scale language model trained on 40GB of internet text. I used the recently released 345-million-parameter version; the full model has 1.5 billion parameters and has currently not been released, due to concerns about malicious use (think fake news).
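
For reference, a minimal sketch of sampling text from that 345M checkpoint, loaded as "gpt2-medium" via the Hugging Face transformers library (just one convenient way to get it; the prompt and sampling settings are illustrative assumptions, not the project's actual configuration):

```python
# Sampling lyrics from the released 345M-parameter GPT-2, here loaded as
# "gpt2-medium" via the Hugging Face transformers library. Prompt and
# sampling settings are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "Neon hearts in a painted sky,"
inputs = tokenizer(prompt, return_tensors="pt")

# Top-k sampling keeps the generated lyrics varied rather than repetitive.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=40,
    temperature=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```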

Music was made in part using models from Magenta, a research project exploring the role of machine learning in the process of creating art and music.

The setup uses vvvv, Python and Ableton Live.
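
The post doesn't detail how these pieces talk to each other, but a common pattern for wiring Python to vvvv and Ableton Live is OSC. A hedged sketch using the python-osc library; the addresses and port are assumptions, not the project's actual patch:

```python
# Hypothetical glue code: Python sends control values over OSC, which
# both vvvv and Ableton Live (via an OSC bridge such as Max for Live)
# can receive. Addresses and port are assumptions for illustration.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # e.g. vvvv listening locally

# Push the current latent-interpolation position and a generated lyric.
client.send_message("/waifu/latent_t", 0.42)
client.send_message("/waifu/lyric", "Neon hearts in a painted sky,")
```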

StyleGAN, Danbooru2018, GPT-2 and Magenta were developed by Nvidia, gwern.net/Danbooru2018, OpenAI and Google respectively.

Click below to see snapshots with some of the generated lyrics.

Read more…

On 28, Nov 2018 | By Kyle

CONSTELLATIONS – Joanie Lemercier

It was a pleasure to do some programming and interaction design for Joanie Lemercier, who just happens to be one of our heroes!
Using a unique floating, screen-less projection as its canvas, “Constellations” is an audiovisual installation and interactive piece, part of Lemercier’s ongoing series of artworks on geometry and the cosmos.

Conception and visuals: Joanie Lemercier
Music: Paul Jebanasam
Sound design: Echoic Audio
Production: Juliette Bibasse
Additional code: Kyle McLean

The Layered Realities 5G showcase is produced by Watershed on behalf of the Smart Internet Lab at the University of Bristol. The Smart Internet Lab has secured funding to establish the ‘5GUK Test Networks’, a national asset funded by the UK Government’s Department for Digital, Culture, Media and Sport (DCMS).
The Layered Realities weekend 5G showcase brings together the University of Bristol’s Smart Internet Lab and Watershed, We The Curious, BT, Nokia, Zeetta, Cambridge Communications Systems, PureLiFi and BiO.

On 04, May 2018 | By Kyle

Deep neural network paintings of the Land of the Thunder Dragon

In our first real foray into programming with machine learning, we left a Convolutional Neural Network (CNN) to ‘meditate’ on an image of Saraswati, Goddess of knowledge, music, arts, and learning. Once trained in this fashion, the CNN can be fed other images, and it will ‘paint’ them in the style of the image it has learnt, using a process known as Neural Style Transfer. The source images are photos we took whilst living in Bhutan, a Buddhist kingdom on the Himalayas’ eastern edge.
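
The core of Neural Style Transfer (Gatys et al.) is matching Gram matrices of CNN feature maps between the stylised output and the style image. A minimal PyTorch sketch of that loss; in practice the feature maps come from a pretrained network such as VGG-19, and the toy tensors below are stand-ins:

```python
# Style is captured by Gram matrices of CNN feature maps: channel-by-
# channel correlations that discard spatial layout. Toy tensors stand in
# for real feature maps from a pretrained network such as VGG-19.
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (channels, height, width) activations from one CNN layer
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)  # normalised correlation matrix

def style_loss(generated: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    # Penalise differences in feature correlations, not pixel values.
    return torch.mean((gram_matrix(generated) - gram_matrix(style)) ** 2)

gen = torch.rand(64, 32, 32)  # stand-in activations for the output image
sty = torch.rand(64, 32, 32)  # stand-in activations for the style image
print(style_loss(gen, sty).item())
```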

We’ve also done some pretty cool R&D into using the same methods with realtime video.

Click below to see the full gallery.

Read more…


On 04, May 2018 | By Kyle

Mycelium Droid

R&D for new content creation and rendering techniques in vvvv.

All geometry, rendering, tonemapping & post done procedurally in realtime.
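
As a small illustration of the tonemapping step mentioned above: high-dynamic-range render output has to be compressed into displayable 0–1 values. A simple Reinhard-style operator sketched in Python; the actual vvvv/HLSL pipeline may well use a different curve:

```python
# Reinhard-style tonemapping: compress high-dynamic-range render output
# smoothly into displayable 0-1 values. A sketch of the general idea;
# the actual vvvv/HLSL pipeline may use a different curve.
import numpy as np

def reinhard_tonemap(hdr: np.ndarray, exposure: float = 1.0) -> np.ndarray:
    x = hdr * exposure
    return x / (1.0 + x)  # maps [0, inf) into [0, 1)

hdr_pixels = np.array([0.1, 1.0, 4.0, 16.0])  # toy HDR luminance values
print(reinhard_tonemap(hdr_pixels))
```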

Click below to see the full gallery.

Read more…

TripTech

Made in vvvv & HLSL, this program runs in full HD at 120fps on a GTX 1080 graphics card.

This capture was featured as a short film at the 2017 Prix Ars Electronica in Linz, Austria. Music & Coding by Kyle McLean.


On 09, Dec 2016 | By Kyle

Voxel Noise

Research into realtime raymarching of distance fields in vvvv.
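
The idea behind raymarching distance fields (sphere tracing) is to step along each ray by the distance-field value, which is guaranteed not to overshoot the nearest surface. A minimal CPU sketch of the technique in Python; the vvvv research runs this on the GPU in HLSL:

```python
# Sphere tracing: advance along the ray by the distance-field value,
# which is guaranteed not to overshoot the nearest surface. A CPU sketch
# of the technique; the vvvv research runs this on the GPU in HLSL.
import numpy as np

def sphere_sdf(p: np.ndarray) -> float:
    # Signed distance to a unit sphere centred at (0, 0, 5).
    return float(np.linalg.norm(p - np.array([0.0, 0.0, 5.0])) - 1.0)

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        d = sphere_sdf(origin + direction * t)
        if d < eps:
            return t   # hit: distance along the ray
        t += d         # safe step: nothing is closer than d
        if t > max_dist:
            break
    return None        # miss

hit = raymarch(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print("no hit" if hit is None else f"hit at t = {hit:.4f}")
```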
