Speech recognition is almost as natural as breathing for us, but for a computer it has taken more than half a century to solve this ‘problem’. Previously, speech recognition suffered from fundamental drawbacks: poor accuracy, sensitivity to noise, and over-dependence on training to a particular voice. These meant it worked in principle, but not in practice. Accuracy has since improved hugely, often reaching the high nineties in percentage terms, for several reasons: the general increase in the availability of affordable computing power, the advent of the cloud, and the vast numbers of people now using the technology. Last year, IBM announced a major milestone in conversational speech recognition by building a system that achieved a 6.9 percent word error rate. Since then, it has continued to push the boundaries of speech recognition, and today it reached a new industry record of 5.5 percent. These rates are measured on a very difficult speech recognition task: recorded conversations between humans discussing day-to-day topics like “buying a car.” This recorded corpus, known as the “SWITCHBOARD” corpus, has been used for over two decades to benchmark speech recognition systems.
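To make the figures above concrete: word error rate is conventionally computed as the word-level edit distance (substitutions + insertions + deletions) between a reference transcript and the recognizer's hypothesis, divided by the number of reference words. A minimal sketch (the function name and sample sentences are illustrative, not from any benchmark toolkit):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words: WER = 1/6 ≈ 16.7%
print(word_error_rate("i want to buy a car", "i want to buy car"))
```

A 5.5 percent WER on SWITCHBOARD thus means roughly one word in eighteen is substituted, dropped, or spuriously inserted relative to the human reference transcript.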
The 21st century has seen many improvements in this field. In the 2000s, DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text (EARS) in 2002 and Global Autonomous Language Exploitation (GALE). The National Security Agency has made use of a type of speech recognition for keyword spotting since 2006. This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Google’s first effort at speech recognition came in 2007 with the release of “GOOG-411”, a telephone-based directory service. Google voice search is now supported in over 30 languages, and in 2015 Google’s speech recognition reportedly experienced a dramatic performance jump of 49% through new techniques involving deep learning.
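In its simplest form, the keyword spotting described above amounts to searching time-aligned recognizer output for target terms, so an analyst can jump straight to the relevant moments in a recording. A minimal sketch, assuming a hypothetical transcript format of (timestamp, word) pairs such as an ASR engine's word-level alignment might provide:

```python
def spot_keywords(transcript: list[tuple[float, str]],
                  keywords: set[str]) -> list[tuple[float, str]]:
    """Return (start_time_seconds, word) hits for any keyword in a
    timed transcript. Matching is case-insensitive."""
    wanted = {k.lower() for k in keywords}
    return [(t, w) for t, w in transcript if w.lower() in wanted]

# Illustrative data only: a short timed transcript of one utterance.
hits = spot_keywords(
    [(0.0, "we"), (0.4, "discussed"), (0.9, "buying"), (1.3, "a"), (1.5, "car")],
    {"car", "buying"},
)
print(hits)  # [(0.9, 'buying'), (1.5, 'car')]
```

Production systems operate on recognizer lattices with confidence scores rather than a single best transcript, but the principle of indexing speech so it can be searched like text is the same.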
These advancements in speech recognition technology have diversified its applications. It has been adopted by many healthcare and military organizations:
In the health care sector, speech recognition is implemented in either the front end or the back end of the medical documentation process. In front-end speech recognition, the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. In back-end (or deferred) speech recognition, the provider dictates into a digital dictation system; the voice is recognized and a draft document is produced, which is routed along with the original voice file to an editor, who edits and finalizes the draft. Deferred speech recognition is currently the more widely used approach in the industry.
Particularly in short-term-memory re-strengthening of brain AVM patients, the use of speech recognition software in conjunction with word processors has shown significant benefits. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.
High-performance fighter aircraft
Significant progress in the test and evaluation of speech recognition in fighter aircraft has taken place in the last decade. Of particular note has been the US program in speech recognition for the Advanced Fighter Technology Integration (AFTI)/F-16 aircraft (F-16 VISTA). In this program, speech recognizers have been operated successfully in fighter aircraft, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight displays.
Speaker-independent systems are also being developed and are under test for the F-35 Lightning II (JSF). This system has produced word accuracy scores in excess of 98%.
Training air traffic controllers
Training for air traffic controllers (ATC) represents an excellent application for speech recognition systems. Currently, many ATC training systems require a person to act as a “pseudo-pilot”, engaging in a voice dialog with the trainee controller that simulates the dialog the controller would conduct with pilots in a real ATC situation. Speech recognition techniques can eliminate the need for a pseudo-pilot, thus reducing training and support personnel. The USAF, USMC, US Army, US Navy, and FAA, as well as a number of international ATC training organizations, are currently using ATC simulators with speech recognition from different vendors.
Editor’s note: Original Sources
- Schutte, John (15 October 2007). “Researchers fine-tune F-35 pilot-aircraft speech system”. United States Air Force. Archived from the original on 20 October 2007.