Publications

M. Evrard, “Transformers in Automatic Speech Recognition,” in Advanced Course on Human-Centered AI, M. Chetouani, V. Dignum, P. Lukowicz, and C. Sierra, Eds., Springer Lecture Notes in Artificial Intelligence (LNAI), 2022 (to be published).

J. Cauzinille, M. Evrard, N. Kiselov, and A. Rilliard, “Annotation of expressive dimensions on a multimodal French corpus of political interviews,” International Conference on Language Resources and Evaluation (LREC), Workshop on Natural Language Processing for Political Sciences (PoliticalNLP), 2022.

M. Evrard, R. Uro, N. Hervé, and B. Mazoyer, “French Tweet Corpus for Automatic Stance Detection,” International Conference on Language Resources and Evaluation (LREC), 2020.

R. Uro, M. Evrard, N. Hervé, and B. Mazoyer, “The Constitution of a French Tweet Corpus for Automatic Stance Detection,” in Statistical Language and Speech Processing (SLSP), Ljubljana, Slovenia, 2019.

D. Doukhan, E. Lechapt, M. Evrard, and J. Carrive, “INA’s MIREX 2018 music and speech detection system,” in Music Information Retrieval Evaluation eXchange (MIREX), 2018.

A. Rilliard, C. d’Alessandro, and M. Evrard, “Paradigmatic variation of vowels in expressive speech: acoustic description and dimensional analysis,” Journal of the Acoustical Society of America (JASA), 2018.

M. Evrard, M. Miwa, and Y. Sasaki, “Semantic graph embeddings and a neural language model for WSD,” Second International Workshop on Symbolic-Neural Learning (SNL), 2018.

M. Evrard, M. Miwa, and Y. Sasaki, “TTI's Approaches to Symbolic-Neural Learning,” First International Workshop on Symbolic-Neural Learning (SNL), 2017.

M. Evrard, “Synthèse de parole expressive à partir du texte: Des phonostyles au contrôle gestuel pour la synthèse paramétrique statistique” [Expressive text-to-speech synthesis: from phonostyles to gestural control for statistical parametric synthesis], PhD thesis, Université Paris-Sud, 2015.

M. Evrard, S. Delalez, C. d’Alessandro, and A. Rilliard, “Comparison of chironomic stylization versus statistical modeling of prosody for expressive speech synthesis,” in Sixteenth Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.

M. Evrard, C. d’Alessandro, and A. Rilliard, “Evaluation of the impact of corpus phonetic alignment on the HMM-based speech synthesis quality,” in Statistical Language and Speech Processing (SLSP), Springer, 2015.

C.-T. Do, M. Evrard, A. Leman, C. d’Alessandro, A. Rilliard, and J.-L. Crebouw, “Objective evaluation of HMM-based speech synthesis system using Kullback-Leibler divergence,” in Fifteenth Annual Conference of the International Speech Communication Association (INTERSPEECH), 2014.

M. Evrard, C. R. André, J. G. Verly, J.-J. Embrechts, and B. F. Katz, “Object-based sound re-mix for the spatially coherent audio rendering of an existing stereoscopic-3D animation movie,” in Audio Engineering Society (AES) Convention 131, New York, NY, USA, 2011.

M. Evrard, C. André, J. Verly, and J.-J. Embrechts, “Adding wave-field-synthesis 3D audio to an existing stereoscopic-3D animation movie,” in Third edition of the 3D Stereo MEDIA international summit, Liège, Belgium, 2011.

M. Evrard, C. André, J.-J. Embrechts, and J. Verly, “3D audio acquisition and reproduction systems,” in Journée ABAV (Association Belge des Acousticiens), Neder-over-Heembeek, Belgium, 2011.

M. Evrard, A. Rilliard, and C. d’Alessandro, “Reproduction de la personnalité vocale d’un acteur” [Reproducing an actor’s vocal personality], in Journées Jeunes Chercheurs en Audition, Acoustique musicale, et Signal audio (JJCAAS), Marseille, France, 2012.