===== Recent Publications =====
  
{{:staff:bullet_gross.gif|}} <html><font color=#996600>08.2022</font> <b><font color=#3346ff>[c41]</font></b> <i>L. Fricke, J. Kuzmic, I. Vatolkin</i>: <b>Suppression of Background Noise in Speech Signals with Artificial Neural Networks, Exemplarily Applied to Keyboard Sounds</b> is accepted for <a href="https://ncta.scitevents.org/">NCTA 2022</a></html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>07.2022</font> <b><font color=#3346ff>[c40]</font></b> <i>I. Vatolkin, C. McKay</i>: <b>Stability of Symbolic Feature Group Importance in the Context of Multi-Modal Music Classification</b> is accepted for <a href="https://ismir2022.ismir.net">ISMIR 2022</a></html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>07.2022</font> <b><font color=#3346ff>[c39]</font></b> <i>F. Ostermann, I. Vatolkin, G. Rudolph</i>: <b>Artificial Music Producer: Filtering Music Compositions by Artificial Taste</b> is accepted for <a href="https://2022.aimusiccreativity.org/">AIMC 2022</a></html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>03.2022</font> <b><font color=#3346ff>[c38]</font></b> <i>I. Vatolkin</i>: <b>Identification of the Most Relevant Zygonic Statistics and Semantic Audio Features for Genre Recognition</b>. Accepted for Proceedings of the International Computer Music Conference 2022 (ICMC)</html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>11.2021</font> <b><font color=#ee1528>[j7]</font></b> <i>F. Ostermann, I. Vatolkin, G. Rudolph</i>: <b>Evaluating Creativity in Automatic Reactive Accompaniment of Jazz Improvisation</b>. Transactions of the International Society for Music Information Retrieval, 4(1):210-222</html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>07.2021</font> <b><font color=#3346ff>[c36]</font></b> <i>I. Vatolkin, P. Ginsel, and G. Rudolph</i>: <b>Advancements in the Music Information Retrieval Framework AMUSE over the Last Decade</b>. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)</html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>07.2021</font> <b><font color=#3346ff>[c35]</font></b> <i>I. Vatolkin, F. Ostermann, and M. Müller</i>: <b>An Evolutionary Multi-Objective Feature Selection Approach for Detecting Music Segment Boundaries of Specific Types</b>. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)</html>\\
{{:staff:bullet_gross.gif|}} <html><font color=#996600>04.2021</font> <b><font color=#3346ff>[c34]</font></b> <i>I. Vatolkin, B. Adrian, and J. Kuzmic</i>: <b>A Fusion of Deep and Shallow Learning to Predict Genres Based on Instrument and Timbre Features</b>. Proceedings of the 10th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART)</html>\\
[see also [[staff/vatolkin/Publications|the complete publication list]]]\\
  
 