August 24, 2018
A Hardware and Software Repository

Biometric Sound System

Research and Technology for a Biometric Sound System

This will be a repository for information and various edge-tech experiments. I will use it to keep links to research and thoughts about technology, EEG devices, and music creation hardware and software readily available. None of the links in this post are affiliate links. I will produce further, deeper posts for each aspect of the research path, and those may contain affiliate links in the future.

I have had an idea for a generative music application for several years now. There have been many attempts since the 1960s to fuse biometric measurements with music creation. Some used ECG (EKG) to measure pulse and heartbeat. Others used EEG measurements to generate sound waves. Most required access to MATLAB programming software, research grants and expensive medical devices. Recently the sensors have become easier and cheaper to obtain, and now there are also many more software options.

"Bloom is an endless music machine, a music box for the 21st Century. You can play it, or you can watch it play itself."

BRIAN ENO

One such application that piqued my interest is Bloom, a generative music application developed by ambient pioneer Brian Eno and musician / software designer Peter Chilvers. I am a longtime fan of Eno, ambient soundscapes and Musique Concrète. Created for the iPhone and iPod touch, Bloom is part instrument, part composition and part artwork. Its innovative controls allow anyone to create elaborate patterns and unique melodies by simply tapping the screen, and when left idle it will begin to self-generate. I used this as proof of concept that technology had progressed far enough for me to get serious about the idea. Originally developed in Flash, Bloom keeps loops in grids in an array and triggers them through different interactions. Since I had developed Flash applications and knew JavaScript and ActionScript, I thought that would be a good starting point.

My initial idea was to use EEG measurements to trigger loops kept in a database array. Eno created ambient loops, and Bloom's algorithms and interactions trigger them; I wanted to measure different brainwaves to do the same. Alpha, Beta, Gamma, Theta and Delta are the brainwave frequency bands that can be easily measured.
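To make the idea concrete, here is a minimal Python sketch of that band-to-loop mapping. The loop filenames, threshold values and the random selection are placeholders for illustration, not a working design.

```python
# Sketch of the band-to-loop idea: each brainwave band maps to a pool of
# audio loops, and a band whose relative power crosses a threshold
# triggers one of its loops. All names and numbers here are hypothetical.

import random

# Hypothetical loop database, keyed by EEG band.
LOOP_BANK = {
    "alpha": ["calm_pad_01.wav", "calm_pad_02.wav"],
    "beta":  ["bright_arp_01.wav", "bright_arp_02.wav"],
    "gamma": ["shimmer_01.wav"],
    "theta": ["deep_drone_01.wav"],
    "delta": ["sub_swell_01.wav"],
}

# Placeholder trigger thresholds on relative band power (0.0 - 1.0).
THRESHOLDS = {"alpha": 0.3, "beta": 0.3, "gamma": 0.25, "theta": 0.3, "delta": 0.35}

def pick_loops(band_power: dict) -> list:
    """Return the loops to trigger for this analysis window."""
    triggered = []
    for band, power in band_power.items():
        if power >= THRESHOLDS[band]:
            triggered.append(random.choice(LOOP_BANK[band]))
    return triggered

# Example: a window where alpha and theta dominate.
print(pick_loops({"alpha": 0.45, "beta": 0.1, "gamma": 0.05,
                  "theta": 0.32, "delta": 0.08}))
```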

I have a friend, Jason Decker, who used EKG input to trigger sounds in an interactive music experiment in 1999 when he was at Digital Anvil. My first inquiries into EKG/ECG devices led me to a couple of options. The first was Arduino, which can interact with sensors for ECG, temperature and accelerometers. I wanted at least 3-4 signals to measure for input. The Arduino platform, along with Adafruit sensors, was a cheap solution that required some hacking and coding. I found some commercial ECG wearables before researching Raspberry Pi, though it may also offer the necessary options and programmability, since Adafruit offers add-ons for that platform as well. Note: Adafruit now makes EEG sensors available.

There are several commercial devices available that aren't much more expensive than hacking DIY kits. Shimmer is an Irish company that has created a very innovative sensor platform.

"Shimmer’s wearable sensor platform and equipment allows for simple and effective biophysical and kinematic data capture in real-time for a wide range of application areas."

This wearable device can measure ECG, temperature and movement, much like the Arduino setup, but I wouldn't have to hack the hardware myself, and Shimmer has an open Java-based SDK and encourages experimental projects.

Sync Project is a Boston-based start-up with an application called Unwind that uses biometric data to tailor music to your mood. Unwind measures your heartbeat via your smartphone’s accelerometer and uses those readings to tweak a relaxing ambient track by UK band Marconi Union. They offer a bit of AI and a side of psychology with that as well. Still, I would rather go a different route and use EEG for the data measurements.

One common trend between my original idea and these devices is that they play music in response to biometric input. Your body acts as a DJ, selecting which loops to play from an array. All of these wearable devices were fine for that, but I began to think that I would like the interactive experience to be more active or performance-based and less passive. I had come across a device called Muse a few times in my research, and during a visit to Austin my friend Jason mentioned that he had a colleague who was experimenting with Muse for extrapolating data measurements. Since Muse is a headband sensor array and not a group of sensors attached to fingers, arms or chest, it wouldn't get in the way of performing while it transmitted data.

Muse also actively encourages experimentation with its device and has an open API that can be programmed with JavaScript, Python or MAX/MSP. It is a relatively low-cost device that measures EEG and ERP. That sounds like the perfect hardware interface for my project, and it wasn't much more expensive than the Arduino projects I would have needed to build and group together.

Muse can transmit over Wi-Fi or Bluetooth to mobile devices, or connect via USB to an Apple computer. Using the muse-io SDK, you can stream data from the Muse EEG system directly to MATLAB via the Open Sound Control (OSC) protocol (see http://www.neuroeconlab.com/muse.html).
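As a rough sketch of what receiving that OSC stream looks like in Python, using the python-osc package: the /muse/eeg address and port 5000 follow muse-io's documented defaults, but both are assumptions that would need to match the actual stream configuration.

```python
# Minimal OSC listener sketch, assuming muse-io (or a similar bridge) is
# streaming Muse data to localhost. Address and port are muse-io-style
# defaults and may need adjusting.

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_eeg(address, *channels):
    # Muse sends one float per electrode (e.g. TP9, AF7, AF8, TP10).
    print(f"{address}: {channels}")

dispatcher = Dispatcher()
dispatcher.map("/muse/eeg", on_eeg)

server = BlockingOSCUDPServer(("127.0.0.1", 5000), dispatcher)
print("Listening for Muse OSC data on port 5000...")
server.serve_forever()
```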

Muse can also connect to MIDI devices through Muse Port, a MAX/MSP interface built in Max for Live to communicate with Ableton Live. Since Muse has been used successfully with Ableton Live, I think that is the path of least resistance: I am already familiar with that platform and with the MAX/MSP visual programming editor. In addition to Ableton, Muse has made SDKs available for Python, Android, and Unity.

Unfortunately, on January 29, 2019, two days after I signed up for their developer forum, the Muse team decided to discontinue support for their SDK.

"The SDK has been a popular tool, but it requires a volume of technical support that’s difficult to sustain. To support the community until we can provide a more sustainable solution, we’re happy to continue to offer researcher & developer tools like Muse Direct and File Viewer. Additionally, open source tools built by scientists & tinkerers, like MuseLSL and EEG Notebooks, offer a powerful suite of scientific research options to work with Muse’s sensor data."

Muse R&D team at research@choosemuse.com.

I can understand that. Developers, medical researchers, musicians and artists trying to hack the device to do things it was never intended to do aren't part of Muse's meditation target market. Quite a few developers were hounding the team for updates to the SDK, and Muse doesn't have the time or money to placate them, so they closed it down. I wish they had made an unsupported archive available, but they did mention that there are still several pathways to access and stream EEG and other measurements, like pulse and eye movement.

After looking at Muse Direct (an iOS app) and MuseLSL (a third-party tool available on GitHub), I think I will still be able to connect data from Muse to Ableton Live. Initially I planned on writing my own loop-playing software in Java. Years ago I built an audio mixer and a few synth modules in Java and thought I could group those with an array of loops, similar to Ableton Live or Bitwig, for playback. After looking further into several projects that were able to integrate with existing DAWs, I set that plan aside. Bitwig has a lot of modular synth development options, but it doesn't support MAX/MSP as directly as Ableton Live does, since Ableton now owns and develops that programming language. MAX is lightweight and great for prototyping, more so than Python or Java, for me at least. So whether I use the Muse 2 or the Muse 2016 EEG device, Ableton will still be the playback engine.
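For the MuseLSL route, here is a minimal sketch of pulling raw EEG samples in Python with the pylsl package, assuming `muselsl stream` is already running in another terminal.

```python
# Sketch of reading Muse EEG samples over Lab Streaming Layer, assuming
# MuseLSL is already publishing a stream (run `muselsl stream` first).
# Channel count and order depend on the headset.

from pylsl import StreamInlet, resolve_byprop

# Find the EEG stream that MuseLSL publishes.
streams = resolve_byprop("type", "EEG", timeout=10)
if not streams:
    raise RuntimeError("No EEG stream found - is `muselsl stream` running?")

inlet = StreamInlet(streams[0])
while True:
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)  # one float per electrode
```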

Another option is a wearable headset device from Emotiv called Insight. Similar to the Muse 2, and in the same price range, it measures and streams five bands of brain activity, as well as eye movement, temperature, and head turning/tilting, through several sensors.

At this time they have an open SDK and an app for streaming data, so I should be able to capture that in a similar manner as I would with Muse 2.

OpenEEG is an open source project that supports research into neurofeedback and EEG biofeedback training using several other commercially available headset devices. They also catalog projects and links to code and project examples.

I bookmarked a few sites while researching how to convert EEG signals to audio. Neurobb has information about converting OpenBCI files to WAV files, and EEG Hacker covers several simple ways to do the same within commercially available audio software like Logic Pro and, yet again, Ableton Live.
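The common trick those sites describe is to treat the EEG recording itself as a waveform and play it back at an audio sample rate, which transposes the slow brainwave frequencies up into audible range. A rough Python sketch with NumPy and SciPy, using synthetic data in place of a real recording:

```python
# Sketch of EEG-to-audio conversion: normalize raw EEG samples and write
# them out at an audio sample rate. The input data here is synthetic.

import numpy as np
from scipy.io import wavfile

def eeg_to_wav(eeg_samples, out_path, audio_rate=44100):
    """Normalize raw EEG samples and write them as 16-bit WAV audio."""
    x = np.asarray(eeg_samples, dtype=np.float64)
    x = x - x.mean()                  # remove DC offset
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak                  # normalize to [-1, 1]
    wavfile.write(out_path, audio_rate, (x * 32767).astype(np.int16))

# Playing back 256 Hz EEG at 44100 Hz speeds it up ~172x, so a 10 Hz
# "alpha" oscillation becomes an audible ~1.7 kHz tone.
fake_eeg = np.sin(2 * np.pi * 10 * np.linspace(0, 60, 256 * 60))
eeg_to_wav(fake_eeg, "eeg_audio.wav")
```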

Quite a few projects have involved converting brainwaves into audio waves, some dating as far back as 2005 (I didn't realize I was so far behind!). Some used streaming and some used sampling, with varying degrees of success. A few others have tried to convert brainwaves into MIDI signals instead and have those play notes on a synthesizer, again with varied levels of success. Most were able to do what they set out to do; however, the results were seldom musical. Brain2Midi is one application available for Android phones, and MindStream is another.

My plan, as I mentioned at the start, is to do neither of those things.

I want to build something closer to Brian Eno's Bloom, or one of his other generative music projects, and have brainwaves trigger loops. After all this research, it looks like converting the streaming data into MIDI is the best course of action, but rather than playing notes, the signals would trigger clips in Ableton (yet another thing Ableton Live does better than other DAWs). These clips could be prerecorded audio loops, or they could be MIDI tracks that would then play Ableton Live synths. Another option might be CV (control voltage) messages, but I haven't found as many projects for that as for MIDI.
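A minimal sketch of that triggering step in Python, using the mido package: each band gets a MIDI note, and Ableton's MIDI-map mode can bind those notes to Session View clip slots. The port name, note numbers and threshold are assumptions for illustration.

```python
# Sketch of EEG-band-to-clip triggering over MIDI with mido. The band ->
# note assignments are hypothetical; in Ableton, each note would be
# MIDI-mapped to a Session View clip slot. On macOS the output port
# would typically be an IAC Driver bus.

import mido

# Hypothetical band -> MIDI note assignments for clip launching.
BAND_NOTES = {"alpha": 60, "beta": 62, "gamma": 64, "theta": 65, "delta": 67}

outport = mido.open_output()  # or mido.open_output("IAC Driver Bus 1")

def trigger_clip(band: str, power: float, threshold: float = 0.3):
    """Send a note_on when a band's relative power crosses its threshold."""
    if power >= threshold:
        note = BAND_NOTES[band]
        outport.send(mido.Message("note_on", note=note, velocity=100))
        outport.send(mido.Message("note_off", note=note, velocity=0))

trigger_clip("alpha", 0.45)  # would launch the clip mapped to note 60
```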

The Ableton Live audio engine and its Session View playback system, along with the Max for Live programming integration, make it the perfect platform for what I want to do. I just need to decide whether the Emotiv Insight or the Muse 2 is the better device, and find the best way to convert EEG data to MIDI.