Experiments & happy accidents
My very first live AV performance. Berlin, Feb 2019.
Duration: 20 minutes
It's Valentine's Day and I just found out my Instagram project was featured in Computer Arts Magazine almost a year ago (Issue 278, May 2018). Best gift ever lol.
The writer forgot to send an update after the interview. I couldn't get hold of a hard copy myself, so here is the digital version:
Develop ideas to acquire audio material of German language learners speaking German (ideally 500 hours) in order to create a large training dataset for German accent recognition.
Produce audio-based language lessons based on the Pimsleur Method™, which requires you to be an active listener, never a passive one. The audio lessons ask you how to say something or how to respond to a native speaker.
Why Pimsleur Method™?
The audio series based on the Pimsleur Method™ adheres to a fairly rigid structure and timeline, which means you know exactly when and what the learner speaks (text transcript).
- Simple setup (people only need to put on headphones and listen; no textbook to look at)
- Lesson structure supports mass data pre-processing and clean data labelling
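That pre-processing idea can be sketched in a few lines. The timeline format below is hypothetical (the real lessons would need their actual timestamps extracted), but it shows how a rigid lesson structure turns straight into labelled clips:

```python
# Sketch: turning a Pimsleur-style lesson timeline into labelled training
# clips. The timeline tuples here are invented for illustration; a real
# pipeline would also slice the audio file at these times (e.g. via ffmpeg).

def label_segments(timeline, speaker="learner"):
    """Keep only the windows where the learner is prompted to speak,
    pairing each clip's time range with its expected transcript."""
    return [
        {"start": start, "end": end, "label": text}
        for start, end, who, text in timeline
        if who == speaker
    ]

# Hypothetical lesson timeline: (start_sec, end_sec, speaker, transcript)
lesson = [
    (0.0, 4.5, "narrator", "Say: good morning"),
    (4.5, 7.0, "learner", "Guten Morgen"),
    (7.0, 11.0, "narrator", "Ask how much it costs"),
    (11.0, 14.0, "learner", "Wie viel kostet das?"),
]

clips = label_segments(lesson)
# Each clip now carries a clean text label, ready for accent-recognition training.
```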
Concept illustrated below:
The brief was announced beforehand and ideas were submitted before the hackathon for approval; 7 ideas were selected. On the hack day, as project owner, I worked with a team of 6 to realise the concept.
We did not win.
AI erases Romanian communist dictator Nicolae Ceaușescu from his last speech in 1989, using Deep Angel AI.
Turned my GitHub contribution graph into sound. As we move through time, notes are played to represent the number of daily contributions: the larger the number, the higher and louder the note.
Data mapped over 3 octaves, starting with octave 4. Sound done in Sonic Pi.
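The actual sketch lives in Sonic Pi, but the mapping itself is simple enough to show in a few lines of Python. This is my reconstruction of the idea, not the original code; the sample week of contributions is made up:

```python
# Sketch of the contribution-to-note mapping: daily counts are scaled
# across 3 octaves starting at C4 (MIDI 60), and the same count also
# drives loudness. Constants and data below are illustrative.

def contributions_to_notes(counts, base=60, span=36, max_amp=1.0):
    top = max(counts) or 1
    notes = []
    for c in counts:
        pitch = base + round((c / top) * (span - 1))  # 3 octaves = 36 semitones
        amp = round((c / top) * max_amp, 2)           # louder on busier days
        notes.append((pitch, amp))
    return notes

week = [0, 2, 5, 1, 8, 0, 3]   # hypothetical daily contribution counts
notes = contributions_to_notes(week)
```

In Sonic Pi these (pitch, amp) pairs would be fed straight into `play` with its `amp:` option.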
Inspired by Robbie Barrat's AI generated Balenciaga fashion show, I created my own pose2pose demo that translates my webcam image into Björk in one of her music videos. However, the result was not quite as good as I had expected, hence the title.
Using char-rnn to generate fake beauty product reviews on Yesstyle (Asia's equivalent of Sephora/Douglas). Lazy customers can now benefit from its review rewards system: the more reviews you submit, the bigger the discount.
After submission, these reviews were manually checked by Yesstyle, and they all went through!
Captions generated for a series of images of police officers posing with cannabis plants, using Densecap with a pre-trained model, which detects objects in images and describes them in natural language.
I love how it is so good at picking smiling men and green plants.
Images curated by Max Siedentopf
I Google-searched my name, then scraped and cropped faces out of the first 500 results. The images were clustered via the t-SNE dimensionality reduction technique. It helps to mention that I have a fairly common Vietnamese name.
Using rocks to destroy your webcam feed: the number of rocks determines the amount of damage/cracks. Then bring it back to life by throwing in some tape.
How it works:
How damage level is calculated:
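As a rough sketch of the rule above (the exact numbers per rock and per piece of tape are my invention, not the real ones):

```python
# Guess at the damage rule: each rock adds cracks, each piece of tape
# heals some, and damage never goes below zero. Constants are made up.

def damage_level(rocks, tapes=0, cracks_per_rock=2, heal_per_tape=3):
    return max(rocks * cracks_per_rock - tapes * heal_per_tape, 0)

print(damage_level(rocks=4))            # cracks rendered over the feed
print(damage_level(rocks=4, tapes=3))   # tape brings the webcam back to life
```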
Experience what it's like to be Rachel Uwa (one of my favourite humans!). Made with a Tensorflow implementation of pix2pix, running in real time on a webcam. Input on the left, output on the right.
Inspired by Gene Kogan's Trump puppet, in which he used a face tracker to create a generative model that mimics Trump.
A few Shadertoy shaders I converted to work with Processing 3. More to come.
Source code will soon be added to GitHub.
A custom LSTM trained on 5MB of South Park transcripts using Andrej Karpathy's char-rnn code. It was poorly trained (25 epochs), so the result came out somewhat incoherent.
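For context, the first thing char-rnn does with a transcript is build a character vocabulary and encode the text as integer sequences; the LSTM then learns to predict each next character. A miniature version of that prep step (the sample line is illustrative):

```python
# Miniature of char-rnn's data prep: build a character vocabulary from
# the transcript and encode the text as integer IDs.

def encode(text):
    vocab = sorted(set(text))
    char_to_ix = {ch: i for i, ch in enumerate(vocab)}
    return [char_to_ix[ch] for ch in text], vocab

sample = "CARTMAN: Screw you guys, I'm going home."
ids, vocab = encode(sample)
# The LSTM is trained to predict ids[t+1] given everything up to ids[t].
```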
Generated transcript: Sain Pandportal Gobbles
Audio-reactive water ripples. Processing sketch forked from OpenProcessing, modified with the help of Aarón. It would be nice to get this projected onto the entire wall of an empty chamber.
According to Chinese astrology, facial moles can tell your fortune and give insight into your personality. Read more here.
A browser-based mole reader made using Gene Kogan's ml5js face tracker, trained with Wekinator. As you smile, the meanings of these moles become significantly more positive.
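The smile-to-meaning mapping boils down to something like this. The threshold and the readings are invented; the real version maps the trained Wekinator output onto each mole's interpretation:

```python
# Hedged sketch of the mapping: a smile score in [0, 1] (as a face
# tracker might report) flips a mole's reading from grim to lucky.
# Threshold and reading strings are made up for illustration.

def mole_reading(smile, grim="hardship ahead", lucky="great fortune"):
    return lucky if smile >= 0.5 else grim

print(mole_reading(0.1))   # neutral face: ominous reading
print(mole_reading(0.9))   # big smile: the mole turns auspicious
```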
Processing + "Slitscan" GLSL shader. Trained with supervised machine learning via Wekinator.
An application made for my sister based on the concept of Study with Me. It plays a 1-hour-long video of a student quietly studying and nags her every time she looks at the computer. The nagging voice was generated using Google Translate.
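The core loop is tiny. Here `face_is_looking` stands in for whatever webcam/face-tracking check the real app performs, and the nag callback would play one of the Vietnamese audio clips below; everything else is a simulated session:

```python
# Sketch of the nagging loop. `face_is_looking` and `nag` are stand-ins
# for the real app's face check and audio playback.

import time

def study_monitor(face_is_looking, nag, checks=3, interval=0.0):
    """Poll the (hypothetical) gaze check; nag whenever she looks up."""
    nags = 0
    for _ in range(checks):
        if face_is_looking():
            nag()
            nags += 1
        time.sleep(interval)
    return nags

# Simulated session: she glances at the computer on the 2nd of 3 checks.
glances = iter([False, True, False])
count = study_monitor(lambda: next(glances), lambda: print("Back to your books!"))
```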
Audio transcripts (Vietnamese):