The emotional content of Beijing opera is conveyed through historically pre-defined rhythmic patterns known as banshi. There are eight main banshi, categorized by meter, tempo, expressive function, and pitch changes per syllable. The goal of this project is to automatically identify banshi in traditional Beijing opera (jingju). The motivation lies in the novelty of this task: Caro Repetto et al. (2014) pointed out that such identification would be musically meaningful but computationally challenging, owing to the flexibility individual performers bring to each banshi. Successful automatic identification would greatly aid the segmentation of jingju music.
The dataset we used was the Jingju Audio Recording Collection (Caro Repetto et al., 2014), with banshi labels drawn from MusicBrainz. We extracted six low-level features, implemented a tempo detection algorithm, and used a multinomial logistic regression model for classification.
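The classification stage can be sketched with scikit-learn. This is a minimal illustration, not the project's actual code: the feature matrix here is a synthetic placeholder standing in for the six low-level features plus the estimated tempo, and the eight classes stand in for the eight main banshi.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in data: 200 excerpts, 7 columns
# (6 hypothetical low-level features + 1 estimated tempo).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))
y = rng.integers(0, 8, size=200)  # labels for the 8 main banshi

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# With the default lbfgs solver, LogisticRegression fits a
# multinomial (softmax) model when y has more than two classes.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Per-class precision/recall/F1 over the held-out excerpts.
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```

On real features the same report would reveal whether any banshi is separable; on random features like these, scores hover near chance.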
However, the classification report showed that the resulting classification was not meaningful. Through this project, I learned a valuable lesson: existing music information retrieval approaches developed for Western music may not transfer directly to ethnomusicological material, and must be substantially adapted when one attempts such tasks.