staff:vatolkin:publications, last revision 2023-01-26 15:53 by Igor Vatolkin
===== Peer-Reviewed Conference Proceedings =====
<html><b><font color=#006633>[c43]</font></b> <i>L. Fricke, I. Vatolkin, and F. Ostermann</i>:<b><font color=#0000FF> Application of Neural Architecture Search to Instrument Recognition in Polyphonic Audio</font></b>. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), <font color=#996600>2023</font></html>
<html><b><font color=#006633>[c42]</font></b> <i>I. Vatolkin, M. Gotham, N. Nápoles López, and F. Ostermann</i>:<b><font color=#0000FF> Musical Genre Recognition based on Deep Descriptors of Harmony, Instrumentation, and Segments</font></b>. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART), <font color=#996600>2023</font></html>
<html><b><font color=#006633>[c41]</font></b> <i>L. Fricke, J. Kuzmic, and I. Vatolkin</i>:<b><font color=#0000FF> Suppression of Background Noise in Speech Signals with Artificial Neural Networks, Exemplarily Applied to Keyboard Sounds</font></b>. Proceedings of the 14th International Conference on Neural Computation Theory and Applications (NCTA), pp. 367-374, <font color=#996600>2022</font></html>
<html><b><font color=#006633>[c40]</font></b> <i>I. Vatolkin and C. McKay</i>:<b><font color=#0000FF> Stability of Symbolic Feature Group Importance in the Context of Multi-Modal Music Classification</font></b>. Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR), pp. 469-476, <font color=#996600>2022</font></html>