Experiments & happy accidents
Concept
Real-time Bitcoin transaction visualizer written in Max. Inspired by machines, anonymity, and the work of Ryoji Ikeda & Alva Noto, the data is converted to visuals and sound and presented in an abstract way. The sound you hear is mainly controlled by how frequently transactions occur.
Development
Uses a Bitcoin WebSocket (https://www.blockchain.info) and Node for Max to receive new transaction data in Max. The received data is converted to binary, combined with some Max magic, and voila!
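Outside of Max, the "convert to binary" step might look something like this. A minimal Python sketch: the message shape follows blockchain.info's `utx` WebSocket payload, and `tx_to_bits` is a hypothetical helper name, not part of the actual patch.

```python
import json

def tx_to_bits(message, width=32):
    """Convert the satoshi value of the first output of a blockchain.info
    'utx' WebSocket message into a fixed-width binary string.
    (Hypothetical helper; the actual patch does this inside Max.)"""
    tx = json.loads(message)["x"]
    value = tx["out"][0]["value"]          # amount in satoshis
    return format(value, "0{}b".format(width))

# Example 'utx' payload, heavily trimmed:
sample = json.dumps({"x": {"out": [{"value": 150000}]}})
bits = tx_to_bits(sample)
```

Each incoming transaction then becomes a string of 0s and 1s that can drive both the visuals and the sound.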
Credits
Part of the code was adapted from the ryoz_ikedar_ln patch by mesa.elech/tele. Made with help from my colleagues at Ableton: Christian Kleine, Lillia Betz, Kostas Katsikas, and James Gray. Thank you!
Download
https://github.com/nmtrang29/bitcoin-transaction-sonification
My first live AV performance at Public Art Lab, Berlin, Feb 2019.
Duration: 20 minutes
It's Valentine's Day and I just found out my Instagram project was featured in Computer Arts Magazine almost a year ago (Issue 278, May 2018). Best gift ever lol.
The writer forgot to send an update after the interview. I couldn't get hold of a hard copy myself, so here is the digital version:
My application for Servus Alexa, a machine learning hackathon organised by the Goethe-Institut and N3XTCODER.
Brief
Develop ideas to acquire audio material of German language learners speaking German (ideally 500 hours) in order to create a large training dataset for German accent recognition.
Concept
Produce audio-based language lessons based on the Pimsleur Method™, which requires you to be an active listener, never passive. The audio lessons ask you how to say something or to respond to a native speaker.
Why Pimsleur Method™?
The audio series based on the Pimsleur Method™ adheres to a fairly rigid structure and timeline, which means you know exactly when and what the learner speaks (text transcript).
In short:
- Simple setup (people only need to put on headphones and listen; no textbook to look at)
- Lesson structure supports mass data pre-processing and clean data labelling
Concept illustrated below:
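Because the lesson timeline is fixed, every prompt's timestamp and expected utterance are known in advance, so recordings can be segmented and labelled automatically. A sketch of that idea, where the lesson script and file-naming scheme are made up for illustration:

```python
# Hypothetical lesson script: each prompt has a known start/end time (seconds)
# and the exact German phrase the learner is expected to say.
LESSON_SCRIPT = [
    (12.0, 15.5, "Guten Tag"),
    (31.0, 36.0, "Ich verstehe Deutsch"),
    (52.5, 58.0, "Sprechen Sie Englisch?"),
]

def label_recording(lesson_script, lesson_id):
    """Turn the fixed lesson timeline into (segment, transcript) labels
    for one learner's recording of that lesson."""
    labels = []
    for i, (start, end, phrase) in enumerate(lesson_script):
        labels.append({
            "clip": "{}_{:03d}.wav".format(lesson_id, i),
            "start": start,
            "end": end,
            "transcript": phrase,
        })
    return labels
```

This is what makes the mass pre-processing and clean labelling possible: no manual transcription, just slicing audio at known offsets.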
The brief was announced beforehand and ideas were submitted before the hackathon for approval. Seven ideas were selected. On the hack day, as project owner, I worked with a team of six to realise the concept.
We did not win.
AI erases Romanian communist dictator Nicolae Ceaușescu from his last speech in 1989, using Deep Angel AI.
Turned my GitHub contribution graph into sound. As we move through time, notes are played to represent the number of daily contributions: the larger the number, the higher and louder the note.
Data mapped over 3 octaves, starting with octave 4. Sound done in Sonic Pi.
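The mapping can be sketched like this. A Python stand-in for what the Sonic Pi script does; the exact scaling and amplitude curve are my assumptions, only the 3-octave range starting at octave 4 comes from the project:

```python
def contribution_to_note(count, max_count, base_midi=60, octaves=3):
    """Map a daily contribution count to a (MIDI pitch, amplitude) pair.
    Zero contributions -> silence; otherwise pitch rises over `octaves`
    octaves starting at C4 (MIDI 60), and amplitude grows with the count.
    (Sketch of the mapping; the original was done in Sonic Pi.)"""
    if count == 0:
        return None
    span = octaves * 12 - 1                  # semitones available
    level = min(count, max_count) / max_count
    pitch = base_midi + round(level * span)
    amp = 0.2 + 0.8 * level                  # louder for busier days
    return pitch, round(amp, 2)
```

Stepping through the graph day by day and playing each result gives the rising-and-falling melody of a year's commits.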
Inspired by Robbie Barrat's AI generated Balenciaga fashion show, I created my own pose2pose demo that translates my webcam image into Björk in one of her music videos. However, the result was not quite as good as I had expected, hence the title.
To create the dataset, I used Posenet to detect Björk's poses in 400 frames extracted from her 'Big Time Sensuality' music video. For training, I used Gene Kogan's version of pix2pix TensorFlow.
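Picking which 400 frames to extract might look like this. A small sketch of the sampling step only; the function name and even spacing are my assumptions:

```python
def frame_indices(total_frames, n_samples=400):
    """Pick n_samples evenly spaced frame indices from a video with
    total_frames frames, for building a pose dataset. (Sketch of the
    dataset-extraction step; Posenet then runs on each chosen frame.)"""
    if n_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]
```

Even spacing keeps the dataset from over-representing any one section of the video.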
Using char-rnn to generate fake beauty product reviews on YesStyle (Asia's equivalent of Sephora/Douglas). Lazy customers can now benefit from its review reward system: the more reviews you submit, the more discount you get.
After submission, these reviews were manually checked by YesStyle and they all went through!
Captions generated for a series of images of police officers posing with cannabis plants, using DenseCap with a pre-trained model, in which a computer detects objects in images and describes them in natural language.
I love how good it is at picking out smiling men and green plants.
Images curated by Max Siedentopf
Googled my name, then scraped and cropped faces from the first 500 results. Images clustered via the t-SNE dimensionality reduction technique. It helps to mention that I have a fairly common Vietnamese name.
Made possible with Aarón Montoya-moraga's tool for scraping Google Images and Andreas Refsgaard's face-cropping Processing sketch.
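The clustering step can be sketched in a few lines. An assumed re-implementation with scikit-learn, not the tooling actually used; `embed_faces` is a hypothetical name and the crops are flattened grayscale images:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_faces(face_crops, perplexity=30):
    """Project flattened face crops to 2-D with t-SNE so that similar
    faces land near each other. face_crops: array-like of shape
    (n_faces, height * width). (A sketch of the clustering step.)"""
    crops = np.asarray(face_crops, dtype=np.float32)
    n = len(crops)
    tsne = TSNE(n_components=2,
                perplexity=min(perplexity, max(2, n // 4)),
                init="random",
                random_state=0)
    return tsne.fit_transform(crops)
```

The resulting 2-D coordinates are then used to lay the face crops out on a canvas, so look-alike strangers who share my name end up side by side.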
Using Robbie Barrat's art-DCGAN, a modified implementation of DCGAN focused on generative art, I trained my own model on a dataset of 1,700 Ukiyo-e paintings scraped from WikiArt.
Throw rocks to destroy your webcam image; the number of rocks determines the amount of damage/cracks. Then bring it back to life by throwing in some tape.
How it works:
How damage level is calculated:
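One possible way to compute it. This mapping is entirely my assumption, since the actual rule isn't documented here:

```python
def damage_level(num_rocks, max_rocks=10):
    """Map the number of rocks thrown to a 0.0-1.0 damage level that
    controls how many cracks are drawn. (Hypothetical mapping.)"""
    return min(num_rocks, max_rocks) / max_rocks

def apply_tape(level, num_tapes, repair_per_tape=0.25):
    """Each piece of tape repairs a fixed fraction of the damage.
    (Also a hypothetical rule.)"""
    return max(0.0, level - num_tapes * repair_per_tape)
```

Clamping at `max_rocks` means the webcam can only get so broken, and enough tape always brings it back to zero.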
Experience what it's like to be Rachel Uwa (one of my favourite humans!). Made with a TensorFlow implementation of pix2pix, running in real time on a webcam. Input on the left, output on the right.
Inspired by Gene Kogan's Trump puppet, in which he used a face tracker to create a generative model that mimics Trump.
Placing similar-sounding audio recordings near each other using the t-SNE dimensionality reduction technique. Sound on!
Dataset: ESC-50, a collection of 2,000 environmental audio recordings.
A few Shadertoy shaders I converted to work with Processing 3. More to come.
Source code will soon be added to Github.
Custom LSTM trained on 5MB of South Park transcripts using Andrej Karpathy's char-rnn code. It was only lightly trained (25 epochs), so the result came out somewhat incoherent.
Generated transcript:
Sain Pandportal Gobbles

Audio-reactive water ripples. Processing sketch forked from OpenProcessing, modified with the help of Aarón. It would be nice to get this projected onto the entire wall of an empty chamber.
According to Chinese astrology, facial moles can tell your fortune and give insight into your personality. Read more here.
A browser-based mole reader made using Gene Kogan's ml5.js face tracker, trained with Wekinator. As you smile, the meanings of these moles become significantly more positive.
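The smile-to-meaning logic boils down to something like this. A hypothetical Python stand-in for the trained Wekinator model, with made-up mole meanings and an assumed 0.5 smile threshold:

```python
# Hypothetical (negative, positive) readings per mole position.
MOLE_MEANINGS = {
    "chin":     ("instability", "a settled, prosperous life"),
    "cheek":    ("lawsuits",    "wealth and good friends"),
    "forehead": ("hardship",    "power and fortune"),
}

def read_mole(position, smile_amount):
    """Pick a reading for a mole: the more you smile (0.0-1.0, e.g. from
    a face tracker), the more positive the meaning. (Stand-in for the
    Wekinator model trained in the project.)"""
    negative, positive = MOLE_MEANINGS[position]
    return positive if smile_amount >= 0.5 else negative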
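The smile-to-meaning logic boils down to something like this. A hypothetical Python stand-in for the trained Wekinator model, with made-up mole meanings and an assumed 0.5 smile threshold:

```python
# Hypothetical (negative, positive) readings per mole position.
MOLE_MEANINGS = {
    "chin":     ("instability", "a settled, prosperous life"),
    "cheek":    ("lawsuits",    "wealth and good friends"),
    "forehead": ("hardship",    "power and fortune"),
}

def read_mole(position, smile_amount):
    """Pick a reading for a mole: the more you smile (0.0-1.0, e.g. from
    a face tracker), the more positive the meaning. (Stand-in for the
    Wekinator model trained in the project.)"""
    negative, positive = MOLE_MEANINGS[position]
    return positive if smile_amount >= 0.5 else negative
```

In the real project the regression is learned from examples in Wekinator rather than thresholded by hand.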
Processing + "Slitscan" GLSL shader. Trained with supervised machine learning via Wekinator.
Detects a (human) neck and stretches it. Made with massive help from Andreas Refsgaard and Meredith Thomas.
Red light as prop. :-)
An application made for my sister, based on the concept of Study with Me. It plays a 1-hour video of a student quietly studying and nags her every time she looks at the computer. The voice audio was generated using Google Translate.
Audio transcripts (Vietnamese):