  
===== Journal Articles =====
<html><b><font color=#006633>[j9]</font></b> <i>F. Ostermann, I. Vatolkin, and M. Ebeling</i>: <b><font color=#0000FF>AAM: A Dataset of Artificial Audio Multitracks for Diverse Music Information Retrieval Tasks</font></b>. Accepted for EURASIP Journal on Audio, Speech, and Music Processing, <font color=#996600>2023</font></html>

<html><b><font color=#006633>[j8]</font></b> <i>I. Vatolkin and C. McKay</i>: <b><font color=#0000FF>Multi-Objective Investigation of Six Feature Source Types for Multi-Modal Music Classification</font></b>. Transactions of the International Society for Music Information Retrieval, 5(1):1-19, <font color=#996600>2022</font></html>
  
  
===== Peer-Reviewed Conference Proceedings =====

<html><b><font color=#006633>[c43]</font></b> <i>L. Fricke, I. Vatolkin, and F. Ostermann</i>: <b><font color=#0000FF>Application of Neural Architecture Search to Instrument Recognition in Polyphonic Audio</font></b>. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), <font color=#996600>2023</font></html>

<html><b><font color=#006633>[c42]</font></b> <i>I. Vatolkin, M. Gotham, N. Nápoles López, and F. Ostermann</i>: <b><font color=#0000FF>Musical Genre Recognition based on Deep Descriptors of Harmony, Instrumentation, and Segments</font></b>. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), <font color=#996600>2023</font></html>

<html><b><font color=#006633>[c41]</font></b> <i>L. Fricke, J. Kuzmic, and I. Vatolkin</i>: <b><font color=#0000FF>Suppression of Background Noise in Speech Signals with Artificial Neural Networks, Exemplarily Applied to Keyboard Sounds</font></b>. Proceedings of the 14th International Conference on Neural Computation Theory and Applications (NCTA), pp. 367-374, <font color=#996600>2022</font></html>

<html><b><font color=#006633>[c40]</font></b> <i>I. Vatolkin and C. McKay</i>: <b><font color=#0000FF>Stability of Symbolic Feature Group Importance in the Context of Multi-Modal Music Classification</font></b>. Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR), pp. 469-476, <font color=#996600>2022</font></html>

<html><b><font color=#006633>[c39]</font></b> <i>F. Ostermann, I. Vatolkin, and G. Rudolph</i>: <b><font color=#0000FF>Artificial Music Producer: Filtering Music Compositions by Artificial Taste</font></b>. Proceedings of the 3rd Conference on AI Music Creativity (AIMC), <font color=#996600>2022</font></html>

<html><b><font color=#006633>[c38]</font></b> <i>I. Vatolkin</i>: <b><font color=#0000FF>Identification of the Most Relevant Zygonic Statistics and Semantic Audio Features for Genre Recognition</font></b>. Proceedings of the International Computer Music Conference (ICMC), <font color=#996600>2022</font></html>
  
<html><b><font color=#006633>[c37]</font></b> <i>I. Vatolkin</i>: <b><font color=#0000FF>Improving Interpretable Genre Recognition with Audio Feature Statistics Based on Zygonic Theory</font></b>. Proceedings of the 2nd Nordic Sound and Computing Conference (NordicSMC), <font color=#996600>2021</font></html>
  
<html><b><font color=#006633>[c36]</font></b> <i>I. Vatolkin, P. Ginsel, and G. Rudolph</i>: <b><font color=#0000FF>Advancements in the Music Information Retrieval Framework AMUSE over the Last Decade</font></b>. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 2383-2389, <font color=#996600>2021</font></html>

<html><b><font color=#006633>[c35]</font></b> <i>I. Vatolkin, F. Ostermann, and M. Müller</i>: <b><font color=#0000FF>An Evolutionary Multi-Objective Feature Selection Approach for Detecting Music Segment Boundaries of Specific Types</font></b>. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1061-1069, <font color=#996600>2021</font></html>

<html><b><font color=#006633>[c34]</font></b> <i>I. Vatolkin, B. Adrian, and J. Kuzmic</i>: <b><font color=#0000FF>A Fusion of Deep and Shallow Learning to Predict Genres Based on Instrument and Timbre Features</font></b>. Proceedings of the 10th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), pp. 313-326, <font color=#996600>2021</font></html>

<html><b><font color=#006633>[c33]</font></b> <i>I. Vatolkin, M. Koch, and M. Müller</i>: <b><font color=#0000FF>A Multi-Objective Evolutionary Approach to Identify Relevant Audio Features for Music Segmentation</font></b>. Proceedings of the 10th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), pp. 327-343, <font color=#996600>2021</font></html>
  
<html><b><font color=#006633>[c32]</font></b> <i>I. Vatolkin</i>: <b><font color=#0000FF>Evolutionary Approximation of Instrumental Texture in Polyphonic Audio Recordings</font></b>. Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 1-8, <font color=#996600>2020</font></html>
 