Tangible Auditory Interfaces
A Tangible Auditory Interface (TAI) combines Tangible Interfaces with Auditory Displays to mediate information back and forth between an abstract data space and user-perceivable reality. The two parts form an integral system for representing abstract objects such as data or algorithms as physical, graspable artefacts with inherent sonic feedback. The tangible part provides the means for manipulating data, algorithms, or their parameterisation, whereas the auditory part serves as the primary medium for displaying the virtual dynamics to the users.
This definition implies the following information flow in TAIs (also visualised in Figure 7.1): The data or algorithmic functionality to be represented by the TAI is pre-processed by a (more or less advanced) model. The AD transforms the pre-processed data into sound that is perceived by the user. Depending on the perceived sounds and their own imagination, the user manipulates the TI and thereby controls parameters of the data pre-processing, which results in a change of the auditory representation. From the user’s point of view, the system reacts directly to their physical manipulations. Due to the immediate and possibly diverse sonic reactions, a flow in operation is established, and the TAI is perceived as Ready-to-Hand in the Heideggerian sense.
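The information flow described above can be sketched in a few lines of code. This is a minimal, purely illustrative model; the class names (`Model`, `AuditoryDisplay`, `TangibleInterface`), the `gain` parameter, and the pitch mapping are invented for this example and not part of any real TAI framework.

```python
class Model:
    """Pre-processes raw data under user-controlled parameters."""
    def __init__(self, data):
        self.data = data
        self.params = {"gain": 1.0}  # hypothetical pre-processing parameter

    def preprocess(self):
        return [x * self.params["gain"] for x in self.data]


class AuditoryDisplay:
    """Maps pre-processed values to sound-event descriptions."""
    def sonify(self, features):
        # one auditory event (here: a pitch in Hz) per data item
        return [220.0 + 20.0 * f for f in features]


class TangibleInterface:
    """Physical manipulation handle; writes back into the model's parameters."""
    def __init__(self, model):
        self.model = model

    def manipulate(self, gain):
        self.model.params["gain"] = gain


# one pass around the loop: data -> model -> AD -> user -> TI -> model
model = Model([1.0, 2.0, 3.0])
ad = AuditoryDisplay()
ti = TangibleInterface(model)

before = ad.sonify(model.preprocess())
ti.manipulate(2.0)  # the user reacts to what they hear by moving the TI
after = ad.sonify(model.preprocess())
```

Note that the user only ever touches the `TangibleInterface`, yet the audible change arrives via the model and the AD, which is precisely why the system feels as if it reacted directly to the physical manipulation.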
Although technically every combination of AD and TI can be called a TAI, some combinations assemble into more powerful setups than others. This effect stems from the interconnection design established between the tangible control and the auditory output. The physical part of a TI suggests possibilities for how, and particularly in which detail, the data set and its manipulation should be represented. Due to the inherent gestalt of the interface, the interaction designer is guided by object reactions found in the natural environment. In the same way, an AD representing a data set may arouse associations between the rendered sounds and the manipulation of physical objects. This observation can be used to design a TI that complements the AD into an integral TAI. Both fields, TI and AD, therefore induce nature-inspired constraints on the design of their complementing part. This limits the number of possible representations for data and algorithmic processes to those based on commonly used associations. Consider, for example, the integration into a TAI of an Auditory Display that represents each data item by a short auditory event. The given association of one data item with one auditory event suggests linking its manipulation to one physical object. The structure induced by the auditory representation is then reflected by the TI. Furthermore, the data-driven sound event can be linked with the object’s physical interaction with the TI’s canvas. The user can then associate this event with the physical interactions of rigid bodies, which are the usual cause of structure-borne sounds.
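The one-item/one-object/one-event mapping from the example above can be sketched as follows. The object identifiers and event fields are invented for this illustration and do not come from any existing system.

```python
# each tangible object stands for exactly one data item
data_items = {"obj_a": 0.3, "obj_b": 0.8}


def on_canvas_contact(object_id):
    """Called when an object touches the canvas: the data item bound to
    the object parameterises a single, impact-like auditory event."""
    value = data_items[object_id]
    return {"event": "impact", "brightness": value}
```

Because each contact yields exactly one short sound event, the user can read the sound as a property of the touched object itself, mirroring the rigid-body association described above.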
Superimposed and Separated Layers
In their publication Bricks, Fitzmaurice et al. propose a design space for brick-like Tangible Interfaces. Among other things, they differentiate between superimposed (i.e. directly coupled) and separated (i.e. indirectly coupled) physical and virtual layers of a brick-utilising interface. Looking at TAIs, however, reveals that this point of view is biased by the visual dominance in their displays, and that other aspects also have to be considered: We often interpret an observed sound as being connected to a synchronous visual action. This does not necessarily have to be the case, since the sound may have been caused by something completely different, but we tend to interpret the temporal correlation as one common event that causes both the visual and the auditory part. A possible reason is that physical processes almost always generate sounds while changing their visually observable state, and it is very unlikely for a synchronised audiovisual stimulus to be caused by two unrelated events. Our mind therefore tends to bind these time-synchronous events together, even if other features like their origin are contradictory. Since it is technically possible to trigger sound events at almost exactly the same time as other (e.g. visual) events are observed, the human mind can be tricked into believing that it observes an actual sound-action coherence. A prominent example of such audiovisual binding is the ventriloquist effect, in which a sound is perceptually relocated to a simultaneously perceived visual source; a related audiovisual integration phenomenon is the McGurk effect, named after one of its discoverers. With this in mind it can be said that, differing from the prerequisites needed for the visual augmentation of tangible objects, where either the objects or the canvas have to be electronically enhanced, TAIs do not require the placement of an active feedback system at the same location as the incorporated objects.
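The temporal-binding argument reduces to a simple predicate: an auditory event is fused with a physical action when both fall inside a short integration window. The following sketch is illustrative only; the 100 ms default window is an assumed value chosen for the example, not a measured perceptual constant.

```python
def bound_as_one_event(t_action_ms, t_sound_ms, window_ms=100.0):
    """True if action and sound lie close enough in time to be
    perceived as having one common cause."""
    return abs(t_sound_ms - t_action_ms) <= window_ms
```

A sound triggered, say, 20 ms after the observed action is bound to it, whereas one arriving several hundred milliseconds later is perceived as a separate event, which is why low-latency sound rendering matters for TAIs.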
Instead, it is sufficient to surround the canvas with a spatial audio system (see e.g. [intlink id=”42″ type=”page”]AudioDB[/intlink]) or, assuming an implementation that features a close coupling between action and auditory feedback, even to place a mono loudspeaker near the canvas (see e.g. [intlink id=”8″ type=”page”]Auditory Augmentation[/intlink]).
Certainly, it is also possible to electronically enhance the objects or their canvas, just as in the visual augmentation counterpart. In addition, none of these audio-based implementations lead to situations in which users physically occlude the system’s (auditory) feedback, a notorious problem of visual display systems. Taking all these observations into account, it can be said that the direct and superimposed indicators for Tangible Interfaces as proposed by Fitzmaurice et al. cannot be interpreted as a duality where either the one or the other holds for a system. Rather, they have to be recognised as independent of each other when dealing with TAIs.