This is an important area to explore, as there seems to be a lot of potential for using musicians' whole bodies, arms, hands and even faces to make music, but we needed to know if these methods would suit our participants' preferred ways of making music.
We also took the opportunity in the session to find out a bit more about how our participants like to make music using digital technology and what they were hoping to create in the SoundLabs.
What did we want to learn?
We wanted to find out if people really connected with using these devices and whether they felt they could control the music in a way that worked for them.
We also wanted to find out how much guidance people needed to get the most out of these devices.
Another big question for us was how visual these music-making experiences need to be. It is becoming clearer that some kind of visual feedback can help the player understand what effect they are having.
Who was involved?
We had four participants for this session.
Lilly continued her experimentation with the iPad, working closely with Ashley to test a range of different beat-making apps to see which ones worked best for her.
Dean of the Dean Rodney Singers project came and tried out both devices. He is using a laptop and GarageBand for a lot of his music making at the moment, so these devices presented a very different way to make sounds.
Wayne has been with us at nearly all the SoundLab sessions. He makes a lot of music with his iPad but is really keen to find new ways to work and new sounds to make.
Stefan is new to digital music making but had just got himself a tablet a couple of days before the session.
How did it work?
Session length: 3 hours
We tried two different computer vision setups.
One setup used a camera to detect the shape of faces. The idea was that the player could try different facial expressions to shape the music. The player would train the machine to do different things based on different expressions. This is called machine learning. We used a simple synth sound playing a melody that the player could change. Raising your eyebrows would always make the machine do one musical thing, whereas opening your mouth would make it do something different.
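As a rough illustration of the training idea, here is a minimal Python sketch. It assumes a face tracker already gives us two numbers per video frame (how raised the eyebrows are and how open the mouth is); the feature values, labels and musical actions are made up for the example and are not the actual SoundLab code.

    # A sketch only: we assume a face tracker gives us two numbers per frame,
    # eyebrow_raise and mouth_open, each between 0.0 and 1.0. The example
    # values and action names are placeholders for illustration.
    from sklearn.neighbors import KNeighborsClassifier

    # Training examples recorded while the player holds each expression.
    X_train = [
        [0.9, 0.1], [0.8, 0.2],   # eyebrows raised
        [0.1, 0.9], [0.2, 0.8],   # mouth open
        [0.1, 0.1], [0.2, 0.2],   # neutral face
    ]
    y_train = ["raise_pitch", "raise_pitch",
               "switch_pattern", "switch_pattern",
               "hold", "hold"]

    model = KNeighborsClassifier(n_neighbors=1)
    model.fit(X_train, y_train)

    # At performance time, each new frame is classified and the melody responds.
    def on_frame(eyebrow_raise, mouth_open):
        action = model.predict([[eyebrow_raise, mouth_open]])[0]
        print(f"eyebrows={eyebrow_raise:.2f} mouth={mouth_open:.2f} -> {action}")

    on_frame(0.85, 0.15)   # raised eyebrows -> raise_pitch
    on_frame(0.15, 0.85)   # open mouth -> switch_pattern

Even in this toy version you can see why the training stage takes time: each expression needs several recorded examples before the mapping from face to sound becomes reliable.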
The other setup used a Kinect camera to track the player's arm and hand position. The player could move their arm through space to trigger different samples. This setup used a visualisation to show the player when and which samples were being triggered.
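To show how simple the underlying mapping can be, here is a minimal sketch, assuming the tracker gives us the hand's horizontal position normalised from 0.0 (far left) to 1.0 (far right); the sample names are invented for the example.

    # A sketch only: we assume tracking gives a normalised hand position x,
    # 0.0 = far left of the play space, 1.0 = far right. Sample names are
    # placeholders, not the actual sounds used in the session.
    SAMPLES = ["kick.wav", "snare.wav", "hat.wav", "clap.wav"]

    def sample_for_hand(x):
        # Split the horizontal space into equal zones, one per sample.
        zone = min(int(x * len(SAMPLES)), len(SAMPLES) - 1)
        return SAMPLES[zone]

    # Simulated frames: the player sweeps their arm from left to right.
    for x in (0.05, 0.30, 0.60, 0.95):
        print(f"hand at x={x:.2f} -> trigger {sample_for_hand(x)}")

A visualisation layer on top of this would draw the zones and highlight the active one, which is exactly the kind of feedback that helps a player connect their movement to the sound.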
What did we learn?
We learned that computer vision is tricky, particularly when combined with machine learning. What does that all mean?
It means that training a machine to learn facial or arm gestures is a slow process. Although Dean really liked the look of the facial recognition setup, it was hard for him to control the sounds. We started to question whether this training process, which lets a repeated gesture produce the same musical result every time, was something our artists could make use of.
Stefan played with the Kinect setup for a while, but it was hard for him to make the connection between the movements he was making and the sounds that were being produced. I had a go and found it hard to feel connected to the experience too. We needed to go back and think about what makes a musical instrument feel connected to us. Clearly, visual and sound feedback are really critical, but the sounds we make must also feel like they are coming from us in some way.
It was a challenging session, but in a good way. We know that computer vision technologies are very powerful, but they need to amplify and support a real musical experience. Designing that core musical experience is not easy. We have very few instrument types that have stood the test of time. There is a great amount of inventiveness going on now to find the new instruments for our current era, but this inventiveness is challenging us to consider how expressivity really works.