STORE - Staffordshire Online Repository


WAITE, Si (2015) Kafka-Esque. [Composition]


Video (Demonstration performance) - Author's Accepted Version
Available under licence: All Rights Reserved.


Abstract or description

Kafka-Esque involves the use of typed text as a real-time input for an interactive performance system. The piece explores text-based generative systems, links between typing and playing percussion instruments, and the use of typing gestures in contemporary performance practice. The system aims to demonstrate liveness through clear, perceptible links between the performer’s gestures and the system’s audio-visual outputs. It also provides a novel approach to the use of generative techniques in the composition and live performance of songs, in which lyrics are placed at the heart of the performance. Kafka-Esque explores how the rhythmic and melodic aspects of typing can be captured to create musical output that is not wholly consciously designed by the performer. It is anticipated that audiences will sense that the music has a rhythmic and melodic quality, but that these qualities remain tantalisingly elusive. This approach to songwriting and performance is indicative of the author’s wider creative goals (Waite, 2014).

There is a clear similarity between the act of typing on a keyboard and that of playing a percussion instrument such as a piano (Hirt, 2010). A recent study has demonstrated that proficient piano-players are able to generate text at comparable speeds to touch-typists (Feit and Oulasvirta, 2013). This gestural relationship has been exploited in compositions such as Leroy Anderson’s “The Typewriter” (Anderson, 1995) and Steve Reich and Beryl Korot’s “The Cave” (Reich and Korot, 2007), which also featured the live projection of the text as it was rhythmically typed by the performers. It has been argued that many computer users display a degree of virtuosity on a computer keyboard that is comparable to virtuosity on a musical instrument. Digital instrument designers have exploited this to create computer keyboard-based instruments that do not require extensive practice (Fallgatter, 2013). Furthermore, each key does not need to be tied to a particular pitch, meaning that similar gestures can be easily transformed to yield very different sonic results (Kirn, 2009).

The interactive system for the performance of Kafka-Esque is realised in Max. The inputs under the performer’s direct control are a computer keyboard and a USB control surface used to manipulate the volumes and stereo positions of the various sound-producing elements. The text of the piece is treated as the score, which is performed through typing. The live stream of text controls and influences melody, rhythm, timbre and visuals. This stream is projected as it is typed, letter by letter, to reinforce the perception of liveness (a strong connection between a performer’s physical gesture and the resultant sound) for both audience and performer. Several keywords in the text are identified and serve as triggers for visual outputs.
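The keyword-trigger mechanism can be illustrated in pseudocode form. The sketch below is not the author's Max patch; it is a minimal Python analogue of scanning a letter-by-letter text stream for trigger words, where the trigger words and callback names are invented for illustration:

```python
# Illustrative sketch (not the actual Max implementation) of keyword
# detection over a letter-by-letter text stream: as each typed character
# arrives, a short buffer of recent characters is checked against a set
# of trigger words that would fire visual events.

TRIGGERS = {"door", "law", "gate"}          # hypothetical trigger words
MAX_LEN = max(len(w) for w in TRIGGERS)

def make_listener(on_trigger):
    """Return a per-character handler that calls on_trigger(word) on a match."""
    buffer = ""
    def on_char(char):
        nonlocal buffer
        buffer = (buffer + char.lower())[-MAX_LEN:]   # keep a short tail
        for word in TRIGGERS:
            if buffer.endswith(word):
                on_trigger(word)                      # e.g. launch a video clip
    return on_char

if __name__ == "__main__":
    fired = []
    listen = make_listener(fired.append)
    for c in "before the law":
        listen(c)
    print(fired)   # ['law']
```

In a real patch the equivalent logic would live in a Max object chain fed by the `key` object; the point is only that triggering requires nothing more than matching the tail of the typed stream.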

Stored samples of sung vowel sounds, as well as synthesized vowel sounds, are triggered by the live text input. For example, typing “you” or “room” initiates playback of a sung “oo” sound. The pitches of the vocal sounds are controlled both by a real-time version of Guido’s system (Rowe, 1993), a basic generative system that assigns each incoming vowel a pitch value, and by a cyclical, pre-determined melody in which each press of the space bar instigates the next note in the sequence. Two methods are used for capturing rhythmic gestures, both involving a double “listener and player” mechanism to enable simultaneous listening and playback. Using keyboard shortcuts, the performer is able to initiate, change or stop rhythmic playback during the course of the performance.
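The two pitch-selection mechanisms described above can be sketched as follows. This is an illustrative Python analogue, not the piece's actual Max patch: the vowel-to-pitch table, the cycle contents and all names are assumptions chosen only to show the shape of the logic.

```python
# Minimal sketch of two pitch-selection mechanisms: a Guido-style mapping
# from incoming vowels to fixed pitches, and a pre-determined cyclical
# melody advanced one note per press of the space bar.

# Guido-style lookup: each vowel is assigned a fixed pitch (MIDI numbers,
# here an arbitrary C-major subset).
VOWEL_PITCHES = {"a": 60, "e": 62, "i": 64, "o": 65, "u": 67}

# Cyclical melody: the space bar steps through this sequence, wrapping round.
CYCLE = [72, 71, 67, 69]

def make_performer():
    """Return a function mapping each typed character to a pitch (or None)."""
    step = 0
    def on_key(char):
        nonlocal step
        if char == " ":                       # space bar: next note of the cycle
            pitch = CYCLE[step % len(CYCLE)]
            step += 1
            return pitch
        if char.lower() in VOWEL_PITCHES:     # vowels: Guido-style mapping
            return VOWEL_PITCHES[char.lower()]
        return None                           # consonants trigger no pitch here
    return on_key

if __name__ == "__main__":
    on_key = make_performer()
    pitches = [on_key(c) for c in "you room "]
    print(pitches)   # [None, 65, 67, 72, None, 65, 65, None, 71]
```

Because both mechanisms read the same keystream, a single typed line simultaneously yields generative (vowel-driven) and composed (cyclical) pitch material, which is the layering the text describes.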

Although this system does not introduce new techniques, the combination of existing techniques into a novel system affords the performer low-latency response; the simultaneous creation of layered melodies and rhythms; the display of text as it is typed; and a high degree of control and expression. Together with the emphasis on gestural performance and the system’s ability to respond to these gestures (not to mention the immediate display of typing errors!), audiences should perceive a high degree of liveness. The system also succeeds in providing a novel approach to the performance of songs by taking the focus away from the performer and their vocal or instrumental prowess and placing it instead on the lyrics. The combination of typing rhythms, electronic and natural timbres, cyclical and generative melodies and glitchy video creates an aesthetic that sits well with both contemporary popular and experimental styles.
While designed for the performance of one piece, the system remains highly adaptable and can be configured for the performance of other pieces.

Kafka-Esque has been performed at:
- MTI concert series, De Montfort University, 2014
- Sonorities, Queen's University, 2015
- International Festival of Artistic Innovation, Leeds, 2016

And is available at:
- YouTube:
- Vimeo:

Related papers and citations:
Lee, S., Essl, G. and Martinez, M. (2016) Live Writing: Writing as a Real-time Audiovisual Performance. Proceedings of New Interfaces for Musical Expression. Brisbane, Australia.
Lee, S. and Essl, G. (2017) Live Writing: Gloomy Streets. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. New York, USA.
Waite, S. (2014). Sensation and Control: Indeterminate Approaches in Popular Music. Leonardo Music Journal. (24). pp. 78–79.
Waite, S. (2015). Reimagining the Computer Keyboard As a Musical Interface. Proceedings of New Interfaces for Musical Expression. Baton Rouge, USA.

Anderson, L. (1995). The Typewriter. Leroy Anderson Favorites.
Fallgatter, J. (2013). Foundation of Aqwertyan™ Music. [Online]. 2013. Aqwertyan Music Systems. Available from: [Accessed: 7 January 2015].
Feit, A.M. & Oulasvirta, A. (2013). PianoText: Transferring Musical Expertise to Text Entry. In: CHI ’13 Extended Abstracts on Human Factors in Computing Systems. CHI EA ’13. [Online]. 2013, New York, NY, USA: ACM, pp. 3043–3046. Available from: [Accessed: 9 January 2015].
Hirt, K. (2010). When Machines Play Chopin: Musical Spirit and Automation in Nineteenth-Century German Literature. Walter de Gruyter.
Kirn, P. (2004). QWERTY Keyboard Instrument: Samchillian Tip Tip Tip Cheeepeeeee. Create Digital Music. [Online]. Available from: [Accessed: 7 January 2015].
Reich, S. & Korot, B. (2007). Reich: The Cave.
Rowe, R. (1993). Interactive Music Systems: Machine Listening and Composition. Cambridge MA: MIT Press.

Item Type: Composition
Uncontrolled Keywords: typing; music systems; interactive; generative; audio-visual; performance; live; liveness; computer keyboard instrument; digital; experimental; popular; lyrics
Subjects: J900 Others in Technology
W300 Music
W800 Imaginative Writing
Faculty: School of Computing and Digital Technologies > Film, Media and Journalism
Depositing User: Si WAITE
Date Deposited: 05 Sep 2017 11:39
Last Modified: 05 Apr 2018 15:06

