The visual and tangible parts of MoveSound.

2008/2009 by [intlink id="2" type="page"]Till Bovermann[/intlink].
MoveSound is an azimuth panning interface for up to 16 sources in a ring of an arbitrary number of loudspeakers. Both the position and the width of arbitrary sound sources can be adjusted with it. By providing the user with an interface to select one or more sources to operate on, the system allows several such sources to be controlled at the same time. Together with the integrated azimuth and width panning control, this functionality opens the field of dynamic sound spatialisation to untrained users as well. MoveSound was designed as a software system that can be easily attached to human interface devices. Its usage scenarios are spatial control of unobtrusive ambient soundscapes, and dynamic spatial control of sound sources in artistic contexts.
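As an illustration of the kind of azimuth/width panning described above, here is a minimal Python sketch of cosine-window gain computation for a source on a loudspeaker ring. The function name and the choice of a cosine window are assumptions for illustration, not MoveSound's actual implementation.

```python
import math

def ring_gains(num_speakers, azimuth, width):
    """Per-speaker gains for a source on a loudspeaker ring.

    azimuth: source direction in radians (0 = direction of speaker 0).
    width: source spread, measured in speaker spacings
           (2.0 spreads the source over roughly two adjacent speakers).
    Note: a cosine-shaped window is assumed here for illustration.
    """
    gains = []
    for i in range(num_speakers):
        # angular distance from source to speaker i, wrapped to [-pi, pi]
        d = (azimuth - 2 * math.pi * i / num_speakers + math.pi) % (2 * math.pi) - math.pi
        # express the distance in units of speaker spacing
        d_sp = d * num_speakers / (2 * math.pi)
        if abs(d_sp) < width / 2:
            gains.append(math.cos(math.pi * d_sp / width))
        else:
            gains.append(0.0)
    return gains
```

With `width = 2.0` this behaves like pairwise equal-power panning: a source exactly on a speaker plays only through that speaker, while a source midway between two speakers is split between them with equal power; larger `width` values smear the source over more of the ring.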

Case Study

We conducted a case study to gain insights into the usefulness of MoveSound as an interface for controlling spatial parameters of soundscapes. Its primary goal was to find out whether the minimalist TAI realised with MoveSound is sufficient for people to control the spatial distribution of sound, and how they feel when operating it. Although the survey was designed to be explorative, we particularly looked for indicators regarding the following questions:

Example Visualisation of the data collected in the MoveSound Case Study

MoveSound Case Study Participant 4 Challenge 4

  1. Do people understand MoveSound’s capabilities?
  2. Do they experience any limits in controlling the system?
  3. Is there a difference in action that depends on the detail of the visual feedback system?

It turned out that people do understand MoveSound’s capabilities, as far as the challenges of this case study are concerned. In most cases, they experienced limits in control only regarding sound source selection, a problem we want to address in future extensions of the interface. Although all participants were able to fulfil the challenges to their satisfaction, we found differences in their actions depending on the visual feedback system. While the full display exclusively attracted the users’ gaze, the reduced visual display let the participants’ gaze wander. We believe that this makes them more present in the soundscape itself, rather than focused on the model provided by the MoveSound interface.

Additional Material

  • PhD Thesis pp. 87–103 ([intlink id="102" type="page"]Publications page with link to PhD Thesis[/intlink])
  • Video – see below

A MoveSound User. In the background you see the loudspeaker ring.

People involved in the Production Process

Till Bovermann, René Tünnermann.