AudioDB's technical implementation.

2007/2008/2009 by [intlink id="2" type="page"]Till Bovermann[/intlink], Christof Elbrechter, Thomas Hermann, and Helge Ritter.

Digital audio in its various forms is ubiquitous in our everyday life. Searching and sorting sounds collected in extensive databases, e.g. sampling libraries for music production or seismographic surveys, is difficult and usually bound to the tight restrictions of the standard human-computer interface of keyboard and mouse. The common technique of tagging sounds and other media files has the additional drawback that it requires descriptive words, which are notoriously hard to find for sounds.
AudioDB was designed to support collaborative navigation in information databases by means of a surface-based Tangible Auditory Interface. It provides a tangible environment in which auditory representations of data, embodied as physical artefacts on a canvas, can be sonically sorted, grouped, and selected. AudioDB is intended to serve as a low-threshold interface to audio data that can be used by several people during a discussion. It can also serve as a basis for grounding work on how humans handle digital information projected onto physical artefacts.
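The core interaction described above can be sketched in a few lines: objects tracked on a surface are each bound to a sound, and a selection over a region of the canvas yields the sounds of the artefacts it covers. This is a minimal illustrative sketch, not the original implementation; the class names, normalised coordinates, and rectangular selection are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TangibleObject:
    """A physical artefact on the surface, bound to one sound (assumed model)."""
    obj_id: int
    sound: str                 # e.g. a sample file name
    pos: tuple = (0.0, 0.0)    # normalised surface coordinates (0..1)

class Surface:
    """Tracks tangible objects and resolves selections to sounds."""
    def __init__(self):
        self.objects: dict[int, TangibleObject] = {}

    def update(self, obj_id: int, sound: str, x: float, y: float):
        """Called whenever the tracker reports an object's position."""
        self.objects[obj_id] = TangibleObject(obj_id, sound, (x, y))

    def select(self, x0: float, y0: float, x1: float, y1: float):
        """Return sounds whose artefacts lie inside a rectangular region."""
        return [o.sound for o in self.objects.values()
                if x0 <= o.pos[0] <= x1 and y0 <= o.pos[1] <= y1]

surface = Surface()
surface.update(1, "bird.wav", 0.2, 0.3)
surface.update(2, "rain.wav", 0.8, 0.9)
print(surface.select(0.0, 0.0, 0.5, 0.5))  # → ['bird.wav']
```

In a real tabletop setup the `update` calls would be driven by a vision-based object tracker, and `select` would trigger playback of the returned sounds.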

Sorting and arranging objects

Observed sorting strategies in the AudioDB Case Study.
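Spatial grouping of artefacts on the canvas, as it appears in such sorting strategies, can be recovered computationally by clustering object positions by proximity. The sketch below uses simple single-linkage merging with a distance threshold; this is an illustrative assumption, not the analysis method used in the case study.

```python
# Illustrative sketch: recover spatial groups of artefacts by proximity.
# Two points belong to the same group if they are linked by a chain of
# distances below `threshold` (single-linkage). Threshold is an assumption.
from math import dist

def group_by_proximity(positions, threshold=0.15):
    """Merge points into groups whenever two lie within `threshold`."""
    groups = []
    for p in positions:
        # Find every existing group this point touches, then merge them.
        touching = [g for g in groups
                    if any(dist(p, q) <= threshold for q in g)]
        for g in touching:
            groups.remove(g)
        groups.append([p] + [q for g in touching for q in g])
    return groups

pts = [(0.1, 0.1), (0.12, 0.14), (0.8, 0.8)]
print(len(group_by_proximity(pts)))  # → 2
```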

Additional Material

  • Original Publication (pdf)
  • PhD Thesis pp. 132–143 ([intlink id="102" type="page"]Publications page with link to PhD Thesis[/intlink])
  • Videos below

People involved in the Production Process

Till Bovermann, René Tünnermann, Thomas Hermann, Eckard Riedenklau, Christof Elbrechter.



@inproceedings{bovermann2008audiodb,
  Author = {Bovermann, T. and Elbrechter, C. and Hermann, T. and Ritter, H.},
  Booktitle = {Proc. of the Int. Conf. on Auditory Display 2008},
  Title = {AudioDB: Get in Touch with Sounds},
  Year = {2008}
}