The Winter School on Speech and Audio Processing (WiSSAP) is an annual school, organized in India since 2006. It provides a forum for researchers to enhance their expertise by exposing them to areas at the forefront of speech and audio signal processing.
The past 14 editions of WiSSAP have covered different aspects of speech and audio processing: perception, recognition, coding, enhancement, synthesis, spatialization, production, hearing aids, and more.
The theme for WiSSAP 2020 is Machine Listening: making sense of sound.
The event will host invited talks and tutorials by eminent researchers in the fields of machine perception, soundscape analysis, and deep learning.
Paris Smaragdis is a faculty member in the Computer Science and the Electrical and Computer Engineering departments of the University of Illinois at Urbana-Champaign, as well as a senior research scientist at Adobe Research. He completed his master's, PhD, and postdoctoral studies at MIT, performing research on computational audition. His research focuses on machine learning approaches to audio signal processing problems. In 2006 he was selected by MIT's Technology Review as one of the year's top young technology innovators for his work on machine listening; in 2015 he was elevated to IEEE Fellow for contributions to audio source separation and audio processing; and he served as an IEEE SPS Distinguished Lecturer in 2016-2017. He is currently the chair of the IEEE Audio and Acoustic Signal Processing Technical Committee and a member-at-large of the IEEE Signal Processing Society Board of Governors.
He has previously served as chair of the ICA/LCA steering committee, chair of the IEEE Machine Learning for Signal Processing Technical Committee, a member of the IEEE Audio and Acoustic Signal Processing Technical Committee, an Associate Editor for IEEE Signal Processing Letters, and a Senior Area Editor for the IEEE Transactions on Signal Processing. He holds 37 US patents, along with many others internationally. More here.
Dan Stowell is a research fellow applying machine learning to sound. He develops new techniques in structured "machine listening", combining machine learning and signal processing to analyse soundscapes containing multiple birds. He has also worked on voice, music, birdsong, and environmental soundscapes.
He is an EPSRC research fellow based at Queen Mary University of London.
He is also a Fellow of the Alan Turing Institute. More here.
Sharath Adavanne is a doctoral researcher (graduating 2020) in the audio research group at Tampere University, Finland, as well as a research scientist at ZAPR Media Labs. In his doctoral work, he developed techniques for computational auditory scene analysis using signal processing and deep learning methods. At ZAPR, he currently works on machine listening, speech recognition, and speech synthesis. Previously, he worked in industry on problems related to music information retrieval, audio fingerprinting, and general audio content analysis. More here.