STORE - Staffordshire Online Repository

Kafka-Esque

WAITE, Si (2015) Kafka-Esque. [Composition]


Video (Demonstration performance)
SiWaite_DemonstrationVideo.mov - AUTHOR'S ACCEPTED Version (default)
Available under License All Rights Reserved.


Abstract or description

Short Abstract

Kafka-Esque explores how the computer keyboard can be incorporated into an interactive system for performing music with lyrics, replacing sung lyrics with visually projected typed text. Composing the piece was central to the research process (Candy and Edmonds, 2018) and involved a cyclical, iterative process of literature review, system-building/composing and reflection.
The system builds on previous work in the New Interfaces for Musical Expression (NIME) community that explores the use of QWERTY keyboards for live performance (Fiebrink et al., 2007; Lee et al., 2016), and on other works that use typing gestures in live performance, such as Anderson’s The Typewriter (1953) and Reich and Korot’s The Cave (1994). Unlike these works, Kafka-Esque reveals connections between the act of singing and that of typing, while demonstrating how typing gestures can be captured and processed in several ways to create a multi-timbral audio-visual work. The practice also suggests techniques and strategies for implementation in popular music contexts, which are typically under-represented in work with interactive systems (Marchini et al., 2017). These findings are disseminated in the related NIME paper (Waite, 2015). Furthermore, live performances of Kafka-Esque demonstrate high levels of several aspects of liveness (Sanden, 2013).
Findings have been shared with international academic and professional audiences at Innovations in Music 2017 (London); Tracking the Creative Process in Music 2017 (Huddersfield) and Loop 2017 (Berlin). The piece was the subject of a NIME 2015 paper and demonstration (Baton Rouge, USA) and was discussed in an Artist Statement in the Leonardo Music Journal (2014). Recordings of the piece and accompanying commentary have been published online and the piece has been performed at Sonorities 2015 (Queen’s University), MTI concerts (De Montfort University) and NoiseFloor (Staffordshire University). The software created for the piece is available for free download.

Extended Abstract

Kafka-Esque involves the use of typed text as a real-time input for an interactive performance system. The piece explores text-based generative systems, links between typing and playing percussion instruments and the use of typing gestures in contemporary performance practice. The system aims to demonstrate liveness through clear, perceptible links between the performer’s gestures and the system’s audio-visual outputs. The system also provides a novel approach to the use of generative techniques in the composition and live performance of songs, in which lyrics are placed at the heart of the performance. Kafka-Esque explores how the rhythmic and melodic aspects of typing can be captured to create musical output that is not entirely consciously designed by the performer. It is anticipated that audiences will sense that the music has a rhythmic and melodic quality, but that these qualities remain tantalisingly elusive. This kind of approach to songwriting and performance is indicative of the author’s wider creative goals (Waite, 2014).

There is a clear similarity between the act of typing on a keyboard and that of playing a percussion instrument such as a piano (Hirt, 2010). A recent study has demonstrated that proficient piano-players are able to generate text at speeds comparable to touch-typists (Feit and Oulasvirta, 2013). This gestural relationship has been exploited in compositions such as Leroy Anderson’s “The Typewriter” (Anderson, 1995) and Steve Reich and Beryl Korot’s “The Cave” (Reich and Korot, 2007), which also featured the live projection of the text as it was rhythmically typed by the performers. It has been argued that many computer users display a degree of virtuosity on a computer keyboard that is comparable to virtuosity on a musical instrument. Digital instrument designers have exploited this to create computer keyboard-based instruments that do not require extensive practice (Fallgatter, 2013). Furthermore, each key does not need to be tied to a particular pitch, meaning that similar gestures can easily be transformed to yield very different sonic results (Kirn, 2004).

The interactive system for the performance of Kafka-Esque is realised in Max. The inputs (under direct control by the performer) are a computer keyboard and a USB control surface to manipulate the volumes and stereo positions of the various sound-producing elements. The text of the piece is treated as the score, which is performed through typing. The live stream of text controls and influences melody, rhythm, timbre and visuals. This stream is projected as it is typed, letter by letter, to reinforce the perception of liveness (a strong connection between a performer’s physical gesture and the resultant sound) for both audience and performer. Several keywords in the text are identified, serving as triggers for visual outputs.
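By way of illustration only (the actual system is a Max patch; the keyword set and callback here are hypothetical, not taken from the piece), the keyword-triggering logic described above might be sketched as:

```python
# Sketch: fire a visual cue when a typed word completes and matches a keyword.
# KEYWORDS and the on_keyword callback are invented for illustration.

KEYWORDS = {"door", "law", "gate"}  # hypothetical trigger words

class TextStream:
    def __init__(self, on_keyword):
        self.current_word = []
        self.on_keyword = on_keyword

    def key_press(self, char):
        """Handle one keystroke: accumulate letters, check words on space."""
        if char == " ":
            word = "".join(self.current_word).lower()
            if word in KEYWORDS:
                self.on_keyword(word)  # e.g. cue a projected visual event
            self.current_word = []
        else:
            self.current_word.append(char)

fired = []
stream = TextStream(on_keyword=fired.append)
for c in "before the law stands a doorkeeper ":
    stream.key_press(c)
# "law" completes on the following space and fires its trigger
```

Checking only on completed words (at the space) keeps near-matches such as "doorkeeper" from firing the "door" cue mid-word.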

Stored samples of sung vowel sounds, as well as synthesized vowel sounds, are triggered by the live text input. For example, typing “you” or “room” would initiate playback of a sung “oo” sound. The pitches of vocal sounds are controlled both by a real-time version of Guido’s system (Rowe, 1993), a basic generative system that assigns each incoming vowel a pitch value, and by a cyclical, pre-determined melody in which each press of the space bar instigates the next note in the sequence. Two methods for capturing rhythmic gestures employ a double “listener and player” mechanism to enable simultaneous listening and playback. Using keyboard shortcuts, the performer is able to initiate, change or stop rhythmic playback during the course of the performance.
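The two pitch mechanisms can be sketched as follows; this is a minimal illustration, not the piece’s Max patch, and the vowel-to-pitch table and cyclical melody are invented values:

```python
# Sketch: Guido-style vowel-to-pitch mapping plus a space-bar-stepped cycle.
# VOWEL_PITCHES and CYCLE are hypothetical; the piece's own tables differ.

VOWEL_PITCHES = {"a": 60, "e": 62, "i": 64, "o": 65, "u": 67}  # MIDI notes
CYCLE = [72, 71, 69, 67]  # pre-determined melody, advanced by the space bar

class PitchEngine:
    def __init__(self):
        self.step = 0

    def key_press(self, char):
        """Return a MIDI note for vowels and space presses, None otherwise."""
        if char == " ":
            note = CYCLE[self.step % len(CYCLE)]
            self.step += 1
            return note
        return VOWEL_PITCHES.get(char.lower())

engine = PitchEngine()
notes = [n for n in (engine.key_press(c) for c in "as gregor ") if n is not None]
# vowels yield their mapped pitches; each space steps the cycle forward
```

Keeping the two sources separate in this way means vowel pitches respond to the text itself while the space bar carries a melody the performer can pace rhythmically.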

Although this system does not introduce new techniques, combining existing techniques into a novel system affords the performer low-latency response; the simultaneous creation of layered melodies and rhythms; the display of text as it is typed; and a high degree of control and expression. Together with the emphasis on gestural performance and the system’s ability to respond to these gestures (not to mention the immediate display of typing errors!), audiences should perceive a high degree of liveness. The system also succeeds in providing a novel approach to the performance of songs by taking the focus away from the performer and their vocal/instrumental prowess and placing it instead on the lyrics. The combination of typing rhythms, electronic and natural timbres, cyclical and generative melodies and glitchy video creates an aesthetic that sits well with both contemporary popular and experimental styles.
Although designed for the performance of one piece, the system is highly adaptable and can be configured for other works.

Kafka-Esque has been performed at:
- MTI concert series, De Montfort University, 2014
- Sonorities, Queen's University, 2015
- International Festival of Artistic Innovation, Leeds, 2016

And is available at:
- YouTube: https://www.youtube.com/watch?v=GjiQNfu8tjA
- Vimeo: https://vimeo.com/83867750

Related papers and citations:
Lee, S., Essl, G. and Martinez, M. (2016) Live Writing: Writing as a Real-time Audiovisual Performance. Proceedings of New Interfaces for Musical Expression. Brisbane, Australia.
Lee, S. and Essl, G. (2017) Live Writing: Gloomy Streets. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. New York, USA.
Waite, S. (2014). Sensation and Control: Indeterminate Approaches in Popular Music. Leonardo Music Journal. (24). pp. 78–79.
Waite, S. (2015). Reimagining the Computer Keyboard As a Musical Interface. Proceedings of New Interfaces for Musical Expression. Baton Rouge, USA.

References:
Anderson, L. (1995). The Typewriter. Leroy Anderson Favorites.
Fallgatter, J. (2013). Foundation of Aqwertyan™ Music. [Online]. 2013. Aqwertyan Music Systems. Available from: http://www.aqwertian.com/. [Accessed: 7 January 2015].
Feit, A.M. & Oulasvirta, A. (2013). PianoText: Transferring Musical Expertise to Text Entry. In: CHI ’13 Extended Abstracts on Human Factors in Computing Systems. CHI EA ’13. [Online]. 2013, New York, NY, USA: ACM, pp. 3043–3046. Available from: http://doi.acm.org/10.1145/2468356.2479606. [Accessed: 9 January 2015].
Hirt, K. (2010). When Machines Play Chopin: Musical Spirit and Automation in Nineteenth-Century German Literature. Walter de Gruyter.
Kirn, P. (2004). QWERTY Keyboard Instrument: Samchillian Tip Tip Tip Cheeepeeeee. Create Digital Music. [Online]. Available from: http://createdigitalmusic.com/2004/11/qwerty-keyboard-instrument-samchillian-tip-tip-tip-cheeepeeeee/. [Accessed: 7 January 2015].
Reich, S. & Korot, B. (2007). Reich: The Cave.
Rowe, R. (1993). Interactive Music Systems: Machine Listening and Composition. Cambridge MA: MIT Press.

Item Type: Composition
Uncontrolled Keywords: typing; music; interactive systems; generative; audio-visual; performance; live; liveness; computer keyboard; instrument; digital; experimental; popular; lyrics
Subjects: J900 Others in Technology
W300 Music
W800 Imaginative Writing
Faculty: School of Computing and Digital Technologies > Film, Media and Journalism
Event Title: Sonorities 2015
Event Location: Belfast
Event Dates: 17-26th April 2015
Depositing User: Si WAITE
Date Deposited: 05 Sep 2017 11:39
Last Modified: 21 Feb 2019 10:41
URI: http://eprints.staffs.ac.uk/id/eprint/3768
