Robust Recognition of Cellular Telephone Speech by Adaptive Vector Quantization
M.K. Sonmez, R. Rajasekaran, and J.S. Baras
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 96), Vol. 1, pp. 503-506, Atlanta, Georgia, May 7-10, 1996
Performance degradation caused by acoustical environment mismatch remains an important practical problem in speech recognition. The problem carries greater significance in applications over telecommunication channels, especially with the wider use of personal communications systems such as cellular phones, which invariably present challenging acoustical conditions. In this work, we introduce a vector quantization (VQ) based compensation technique which both makes use of a priori information about likely acoustical environments and adapts to the test environment to improve recognition. The technique is progressive and requires neither simultaneously recorded speech from the training and testing environments nor EM-type batch iterations. Instead of using simultaneously recorded data, the integrity of the updated VQ codebooks with respect to acoustical classes is maintained by endowing the codebooks with a topology of the reference environment. We report results on the McCaw Cellular Corpus, where the technique decreases the word error rate for continuous ten-digit recognition of cellular hands-free microphone speech with land-line trained models from 23.8% to 13.6%, and the speaker-dependent voice-calling sentence error rate from 16.5% to 10.6%.
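The core idea of topology-preserving, progressive codebook adaptation can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' exact algorithm: each incoming test-environment feature vector pulls its nearest codeword (and, to loosely preserve the neighborhood structure inherited from the reference environment, the second-nearest codeword) a small step toward the observation, with no batch EM iterations and no stereo (simultaneously recorded) data. The function name, learning rates, and neighbor rule are all illustrative assumptions.

```python
import numpy as np

def adapt_codebook(codebook, frames, lr=0.05, neighbor_lr=0.01):
    """Progressively adapt a VQ codebook toward test-environment frames.

    Illustrative sketch (not the paper's exact update rule): each frame
    moves its nearest codeword, and more weakly the second-nearest
    codeword, toward the observation. The weaker neighbor update is a
    crude stand-in for maintaining the topology of the reference
    environment across the adapted codebook.
    """
    codebook = codebook.copy()
    for x in frames:
        # Rank codewords by Euclidean distance to the current frame.
        dists = np.linalg.norm(codebook - x, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Online, sample-by-sample updates: no batch/EM iterations.
        codebook[best] += lr * (x - codebook[best])
        codebook[second] += neighbor_lr * (x - codebook[second])
    return codebook
```

In this toy form the update resembles a self-organizing-map step restricted to the two nearest codewords; the adapted codebook drifts toward the test environment while codewords retain their relative arrangement from the reference environment.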