Extant quasi-linguistic theories of jazz harmonic syntax typically rely on recursive generative grammars, Schenkerian reduction rules, or the identification of Western classical chord types. The preconceptions embedded in these modes of analysis limit both the kinds of chords analysts treat and the kinds of progressions in which those chords participate. Working from a novel corpus of jazz piano performances, this talk introduces a chord representation scheme that attends to temporality and generalizes across chord structures and progression types. Elementary machine learning methods, including principal component analysis and agglomerative hierarchical clustering, produce syntactic categories of chord behavior in a data-driven way that is explicitly accountable to the temporal statistics of the corpus. The resulting pipeline suggests applications to computational music parsing and algorithmic improvisation.
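The kind of pipeline the abstract describes can be sketched in miniature: chord events encoded as feature vectors, reduced with principal component analysis, then grouped by agglomerative hierarchical clustering. This is a minimal illustration under invented assumptions, not the talk's actual method; the feature layout (12 pitch classes plus a duration column standing in for temporal statistics) and all data here are hypothetical.

```python
# Hypothetical sketch of a PCA + hierarchical-clustering pipeline for chord events.
# The data and feature design are invented for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy chord-event matrix: rows = chord events, columns = 12 pitch classes
# plus one duration feature (a stand-in for temporal information).
X = rng.random((40, 13))

# Principal component analysis via SVD on the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T  # project events onto the top-k principal components

# Agglomerative hierarchical clustering (Ward linkage) on the reduced data,
# cut into a fixed number of candidate "syntactic" categories.
tree = linkage(Z, method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels.shape)
```

In a real corpus study the feature vectors, the number of retained components, and the cut criterion would all be motivated by the temporal statistics of the data rather than fixed in advance as they are here.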
Andrew Jones is a Ph.D. candidate in music theory at Yale University. He holds an AB in Physics from Princeton University, where his thesis work on computational cosmology resulted in a publication in The Astrophysical Journal. His interest in extracting meaningful signals from noisy data sets carries over into his forthcoming dissertation, which treats jazz piano performance as a complex temporal ordering problem amenable to machine learning. With commitments to corpus analysis, musical ontology, linguistics, and data science, Andrew serves as a co-convener of the Yale Sound Studies Working Group, a digitization specialist for the Yale Collection of Historical Sound Recordings, and the assistant editor of the Journal of Music Theory.