
Studio E • Sunday morning, 9:00–12:00

Computational and Corpus-Based Approaches to Music

Johanna Devaney (The Ohio State University), Chair

Daniel C. Tompkins (Microsoft)

A Machine Learning Approach to Modality and Tonality in Early Music

Alexander Morgan (L’université libre de Bruxelles)

Automated Contrapuntal-Rhythm Detection and Reduction for Renaissance Music

Malcolm Sailor and Andie Sigler (McGill University) 

Renaissance “Dissonance Fingerprints”: A Corpus Study of Dissonance Treatment from Dufay to Victoria

Robert T. Kelley (Lander University) 

A Corpus-Based Model of Voice Leading in Tonal Music

Abstracts

A Machine Learning Approach to Modality and Tonality in Early Music

Daniel C. Tompkins (Microsoft)

This paper presents a corpus study that identifies the number of statistically distinct modes used in sacred and secular genres from 1400 to 1750. Corpora used for the study include Masses, motets, and secular songs from the Franco-Flemish School, works by Palestrina, secular Italian songs with alfabeto guitar tablature from the early seventeenth century, and works by J.S. Bach. K-means clustering of key profiles is used to determine the number of distinguishable modes in each corpus. The results of this study show that the number of modes present in a corpus depends not only on date of publication but also on the genre of a composition. Secular genres are more likely to cluster into two modes, while sacred genres cluster into several modes. This paper also explores the differences between systems of notation and musical practice and suggests other ways in which machine learning techniques can be in dialogue with the study of harmonic practice in early music.
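A minimal sketch of the clustering step described above, assuming key profiles are represented as 12-dimensional pitch-class vectors (one per piece). The random stand-in data and the silhouette criterion for choosing the number of clusters are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in corpus: 200 pieces x 12 pitch classes; each row is a normalized
# key profile (hypothetical data, not the study's corpora).
profiles = rng.dirichlet(np.ones(12), size=200)

# Fit k-means for several candidate mode counts and keep the best-separated one.
best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    score = silhouette_score(profiles, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"Estimated number of distinguishable modes: {best_k} (silhouette = {best_score:.3f})")
```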

Automated Contrapuntal-Rhythm Detection and Reduction for Renaissance Music

Alexander Morgan (L’université libre de Bruxelles)

Contrapuntal rhythm (hereafter CR) is the rate at which fundamental counterpoint progresses, expressed as a durational value, most often the minim. Accurate assessment of CR in Renaissance music is crucial for several analytical procedures, including reduction, similarity comparisons, and the quantification of style change. While the concept of CR has been extensively theorized by Ruth DeFord (2015), the present study is the first to offer a precise and dynamic means of ascertaining it in Renaissance music. The main analytical considerations of my period-inspired and fully reproducible approach to assessing a piece’s CR are dissonance treatment, attack density, and cadence placement. When done by an analyst, counterpoint reduction can be convincing and dynamic, but it is difficult for others to reproduce; conversely, the two prevailing automated methods take observations either at every new note (“salami slicing”; Christopher White and Ian Quinn, 2014) or at regular rhythmic intervals (Christopher Antila and Julie Cumming). Both of these approaches are perfectly reproducible but not dynamic, and therefore inevitably produce occasional inaccuracies. By contrast, an essential tenet of my approach is that CR is generally stable but can and does vary within a single piece, without requiring a change in notated time signature. I begin with an examination of how CR and reduction are addressed by Johannes Tinctoris (1477) and Pietro Pontio (1588), and then describe each step of my method with sample analyses.
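As a rough illustration, the sketch below estimates a passage’s CR unit from attack density alone; the dissonance-treatment and cadence-placement criteria named above are omitted, and the offsets, candidate durations, and threshold are hypothetical choices rather than values taken from the method.

```python
def estimate_cr_unit(attack_offsets, candidates=(2.0, 1.0, 0.5), threshold=0.8):
    """Return the coarsest candidate duration (in quarter-note offsets) whose
    grid captures at least `threshold` of the attacks; fall back to the finest.
    Which notated value (e.g., the minim) an offset of 1.0 corresponds to
    depends on the transcription scale."""
    for unit in candidates:  # coarsest first
        hits = sum(1 for t in attack_offsets if t % unit == 0)
        if hits / len(attack_offsets) >= threshold:
            return unit
    return candidates[-1]

# Hypothetical attack offsets pooled across voices in one passage.
attacks = [0, 1, 2, 2.5, 3, 4, 5, 6, 6.5, 7, 8, 9]
print(estimate_cr_unit(attacks))  # -> 1.0: most attacks fall on that grid
```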

Renaissance “Dissonance Fingerprints”: A Corpus Study of Dissonance Treatment from Dufay to Victoria

Malcolm Sailor and Andie Sigler (McGill University)

Using new software that sorts dissonant note-pairs into dissonant idioms (e.g., suspension, cambiata), we present the results of a corpus study of over 2,100 movements of Renaissance counterpoint, from Dufay (1397–1474) to Victoria (1548–1611). As a major nexus of stylistic evolution in Renaissance music, dissonance treatment has long been studied, but computer-assisted analysis enables us to: (1) re-examine it on a more comprehensive scale, (2) describe, search for, and count new dissonant idioms, and (3) calculate individual composers’ “dissonance fingerprints,” which may prove useful in questions of attribution.
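A minimal sketch of the “dissonance fingerprint” idea: normalize a composer’s counts of labeled dissonant idioms into a proportion vector and compare composers by cosine similarity. The idiom labels, counts, and similarity measure below are hypothetical illustrations, not figures or methods from the study.

```python
import numpy as np

IDIOMS = ["passing tone", "neighbor tone", "suspension", "cambiata", "anticipation"]

def fingerprint(idiom_counts):
    """Map {idiom: count} to a proportion vector over a fixed idiom order."""
    v = np.array([idiom_counts.get(i, 0) for i in IDIOMS], dtype=float)
    return v / v.sum()

def similarity(fp_a, fp_b):
    """Cosine similarity between two fingerprints."""
    return float(np.dot(fp_a, fp_b) / (np.linalg.norm(fp_a) * np.linalg.norm(fp_b)))

# Hypothetical idiom counts for two composers.
composer_a = fingerprint({"passing tone": 410, "suspension": 95, "cambiata": 40,
                          "neighbor tone": 55, "anticipation": 20})
composer_b = fingerprint({"passing tone": 620, "suspension": 180, "cambiata": 12,
                          "neighbor tone": 70, "anticipation": 8})
print(similarity(composer_a, composer_b))
```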

Dissonant idiom definitions in Palestrina’s style, as described by theorists such as Jeppesen and Schubert, successfully categorize virtually all High Renaissance dissonance but cannot account for a significant minority of dissonances in earlier music (e.g., Ockeghem). Defining new idioms to account for these earlier dissonances, we show that the trajectory of Renaissance dissonance treatment was one of consolidation, in which general tendencies (e.g., downward resolution) became uniform and the most common dissonant idioms (passing tones, etc.) became very nearly the only acceptable ones.

The rigor required for computer analysis leads us to identify a new family of dissonant idioms: a pair of voices simultaneously attacks a dissonant interval, but one of these voices sounds a pitch class that is either still sounding in another voice or left at the moment of onset. Since explaining such dissonances requires invoking notes from outside the dissonant pair, such idioms indicate an emerging need for something like a chordal model to explain dissonance treatment.
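A rough sketch of how such an idiom might be detected, assuming a simplified score representation in which each note is an (onset, MIDI pitch) pair and the other voices’ held and just-released pitch classes are already known; the interval classification and data below are hypothetical and do not reproduce the authors’ software.

```python
# Interval classes (semitones mod 12) treated as dissonant here; this is a
# simplification, and the status of the fourth in particular is context-dependent.
DISSONANT_IC = {1, 2, 5, 6, 10, 11}

def simultaneously_attacked_anchored_dissonance(note_a, note_b, sounding_pcs, just_left_pcs):
    """note_a, note_b: (onset, midi_pitch) for the dissonant pair.
    sounding_pcs / just_left_pcs: pitch classes currently held by, or just
    released by, the *other* voices at this onset."""
    same_attack = note_a[0] == note_b[0]
    dissonant = abs(note_a[1] - note_b[1]) % 12 in DISSONANT_IC
    anchored = any(n[1] % 12 in (sounding_pcs | just_left_pcs) for n in (note_a, note_b))
    return same_attack and dissonant and anchored

# Both voices attack a second together while another voice already holds
# the pitch class of one of them.
print(simultaneously_attacked_anchored_dissonance((4.0, 65), (4.0, 64), {4}, set()))  # True
```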

A Corpus-Based Model of Voice Leading in Tonal Music

Robert T. Kelley (Lander University)

This study proposes an empirically derived model of tonal voice leading that can be used to teach part writing in undergraduate harmony courses. Machine-learning algorithms divided scale degrees into classes based on their voice-leading tendencies in the corpus. The resulting theory extends Harrison’s (1994) harmonic-function-based voice-leading models, Quinn’s (2005) “Harmonic Function without Primary Triads,” and Shaffer’s (2014) scale-degree categories in Open Music Theory, and provides corpus-derived evidence for the use of scale-degree functions. The dataset used in this study was generated from MIDI files of 1,582 four-voice homorhythmic chorale-style hymns selected from The Cyber Hymnal. An algorithm for determining harmonic function was applied to create a list of each voice’s scale-degree numbers and their harmonic-functional context. From these data, the machine-learning process produced a hidden Markov model with six states that can be interpreted as scale-degree functions. Dividing the model into 21 scale-degree classes clarifies how each of the original six classes behaves in every harmonic context. This computer-generated statistical voice-leading model was then consolidated into a simple set of voice-leading procedures, applicable to all tonal contexts, that undergraduate music majors can easily learn in order to master tonal part writing and analysis.
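A minimal sketch of the modeling step described above, assuming a recent version of the hmmlearn library is available; the integer coding of scale degrees and the toy sequences are hypothetical stand-ins for the hymn-derived data, not the study’s dataset or training setup.

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM

# Each sequence stands in for one voice of one hymn, with scale degrees 1-7
# coded as integers 0-6 (a hypothetical coding).
sequences = [
    [0, 1, 2, 3, 4, 3, 2, 1, 0],
    [4, 3, 2, 1, 0, 6, 0],
    [2, 3, 4, 4, 3, 2, 1, 0],
]
X = np.concatenate(sequences).reshape(-1, 1)
lengths = [len(s) for s in sequences]

# Fit a six-state hidden Markov model over the scale-degree observations.
model = CategoricalHMM(n_components=6, n_iter=200, random_state=0)
model.fit(X, lengths)

# Each hidden state can be read as a candidate scale-degree function: its
# emission row shows which scale degrees it tends to produce, and the
# transition matrix shows how the states follow one another.
print(np.round(model.emissionprob_, 2))
print(np.round(model.transmat_, 2))
```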