Online music databases have grown rapidly as a consequence of the expansion of the Internet and digital audio, demanding faster and more efficient tools for music content analysis. Musical genres are widely used to organize music collections. In this paper, the problem of automatic single- and multi-label music genre classification is addressed by exploring rhythm-based features obtained from a complex network representation of each rhythm. A Markov model is built to analyse the temporal sequence of rhythmic notation events. Feature analysis is performed using two multivariate statistical approaches: principal component analysis (unsupervised) and linear discriminant analysis (supervised). Likewise, two classifiers are applied to identify the category of rhythms: a parametric Bayesian classifier under the Gaussian hypothesis (supervised) and agglomerative hierarchical clustering (unsupervised). Results obtained with the kappa coefficient, together with the resulting clusters, corroborated the effectiveness of the proposed method.
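To illustrate the kind of Markov model described above, the sketch below estimates a first-order transition matrix from a toy sequence of rhythmic-notation events and flattens it into a feature vector. The event alphabet (quarter/eighth/half notes) and the flattening step are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def markov_transition_matrix(events, alphabet):
    """Estimate a first-order Markov transition matrix from a
    sequence of rhythmic-notation events."""
    index = {symbol: i for i, symbol in enumerate(alphabet)}
    counts = np.zeros((len(alphabet), len(alphabet)))
    # Count transitions between consecutive events.
    for prev, curr in zip(events, events[1:]):
        counts[index[prev], index[curr]] += 1
    # Normalize each row to obtain transition probabilities.
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid division by zero for absent rows
    return counts / row_sums

# Hypothetical rhythm: quarter (q), eighth (e), and half (h) notes.
alphabet = ["q", "e", "h"]
events = ["q", "e", "e", "q", "h", "q", "e", "q"]
P = markov_transition_matrix(events, alphabet)
# Flattening P yields one feature vector per rhythm, which could then
# feed the PCA/LDA feature analysis and the classifiers.
features = P.flatten()
```

Each row of `P` is the conditional distribution over the next event given the current one, so rows sum to 1 for every symbol that occurs as a predecessor.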
GENERAL SCIENTIFIC SUMMARY
Introduction and background. The steady growth of online music databases has driven the need for reliable and fast tools for automatic music content analysis. Widely used to organize music collections, music genres are interesting descriptors, since they express characteristics shared by pieces of music. However, there is no definitive, clear taxonomy of music genres, and automatic classification is a nontrivial task because the same music piece can be associated with more than one genre. There are many studies on automatic genre classification, but only a few concern multi-genre classification, which is particularly appropriate in this scenario.
Main results. We analysed similarities and differences among rhythms from four music genres in terms of the occurrence of sequences of events represented in rhythmic notation. We found that the rhythms are very complex and that many features are required to separate them. In addition, multi-genre classification allowed a generalization of the genre taxonomy, since new sub-genres emerged from the original ones.
Wider implications. This study provided strong evidence that music genres are surprisingly complex and contain many redundancies; it is often difficult even for experts to distinguish between genres. We focused only on rhythmic analysis, which offered a substantial reduction in computational cost through a more compact representation. The viability of the proposed methodology indicates that a deeper analysis of the rhythms could further enhance the method's effectiveness.