staff:vatolkin:publications:abstracts [2019-01-12 11:18] (current)
====== Abstracts ======
===== 2019 =====
=== [29] ===
<html><i>I. Vatolkin, D. Stoller</i>: <b><font color=#0000FF>Evolutionary Multi-Objective Training Set Selection of Data Instances and Augmentations for Vocal Detection</font></b>. Accepted for EvoMusArt 2019</html>
The size of publicly available music data sets has grown significantly in recent years, which makes it possible to train better classification models. However, training on large data sets is time-intensive and cumbersome, and some training instances might be unrepresentative and thus hurt classification performance regardless of the model used. On the other hand, it is often beneficial to extend the original training data with augmentations, but only if they are carefully chosen. Therefore, identifying a "smart" selection of training instances should improve performance. In this paper, we introduce a novel multi-objective framework for training set selection with the goal of simultaneously minimising the number of training instances and the classification error. Experimentally, we apply our method to vocal activity detection on a multi-track database extended with various audio augmentations for accompaniment and vocals. Results show that our approach is very effective at reducing classification error on a separate validation set, and that the resulting training set selections either reduce classification error or require only a small fraction of the training instances for comparable performance.
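The bi-objective core of such a framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: each candidate training-set selection is reduced to its two objective values (number of selected instances and classification error, both minimised), and only Pareto-optimal candidates are kept. All candidate values below are hypothetical.

```python
def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates, each given as
    (number_of_training_instances, classification_error)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical objective values for four candidate selections:
candidates = [(1000, 0.12), (400, 0.15), (400, 0.10), (150, 0.22)]
# (1000, 0.12) and (400, 0.15) are both dominated by (400, 0.10),
# so only the trade-off selections survive:
print(pareto_front(candidates))
```

A real run would obtain the error objective by training a vocal detector on each selected subset and evaluating it on a validation set; here the values are fixed to keep the sketch self-contained.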
===== 2018 =====
=== [28] ===
<html><i>I. Vatolkin, G. Rudolph</i>: <b><font color=#0000FF>Comparison of Audio Features for Recognition of Western and Ethnic Instruments in Polyphonic Mixtures</font></b>. Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), pp. 554-560</html>
Studies on instrument recognition are almost always restricted to either Western or ethnic music, and little work has been done to compare both musical worlds. In this paper, we analyse the performance of various audio features for the recognition of Western and ethnic instruments in chords. The feature selection is done with the help of a minimum-redundancy maximum-relevance (mRMR) strategy and a multi-objective evolutionary algorithm. We compare the features found to be the best for individual categories and propose a novel strategy based on non-dominated sorting to evaluate and select trade-off features which may contribute as much as possible to the recognition of both individual and all instruments.
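Non-dominated sorting, the basis of the proposed trade-off selection, can be sketched as follows (an assumed illustration, not the paper's code): feature candidates are scored on several objectives, e.g. classification errors for Western and for ethnic instruments, and partitioned into Pareto fronts, where earlier fronts contain the better trade-offs. The per-feature error values below are hypothetical.

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and
    strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition points into Pareto fronts; front 0 is non-dominated,
    front 1 is non-dominated after removing front 0, and so on."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical per-feature errors as (western_error, ethnic_error):
features = [(0.20, 0.35), (0.25, 0.30), (0.30, 0.50), (0.22, 0.40)]
print(non_dominated_sort(features))
```

Features on front 0 are those for which no other feature is better for both instrument groups at once; selecting from the earliest fronts yields features useful across categories rather than for a single one.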