Voice Recognition Ready for Consumer Devices

Linley Gwennap

Issue #75, July 2000

This looks like the year that voice recognition finally reaches the mainstream. Motorola unveiled “Mya, the 24-hour talking Internet” at the Oscars. Tellme.com and other startups are deploying voice portals that accept speech commands and read web content over a standard telephone. The latest Jaguar allows drivers to adjust the climate and sound systems using their voice.

Most of these services run on remote servers or PCs where plenty of processing power is available. But the Jaguar example is telling: CPU performance has reached the point that even an inexpensive embedded processor can perform useful voice recognition. Over the next few years, voice will become a common interface in a variety of non-PC devices, many of which will be running Linux.

Until recently, voice recognition required each user to train the system to recognize his or her particular speech patterns. Like most other software, however, voice recognition improves with faster processors and more memory. Recent products reduce training time dramatically, and speaker-independent software eliminates training entirely. To achieve highly accurate speaker-independent recognition with moderate processing requirements, designers must limit the context and vocabulary of the application. For example, a car needs to recognize only a few dozen words, including “temperature”, “radio”, and the numbers needed to select a station.
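
To see how small such a vocabulary can be, here is a minimal sketch in C of what an in-car word list might look like. The word list, the in_vocabulary() helper and the idea of a recognizer that hands back one word token at a time are all hypothetical, standing in for whatever interface a real speech engine exposes.

    /* Hypothetical in-car command vocabulary: the recognizer only
     * needs to distinguish among a few dozen tokens, which keeps
     * speaker-independent accuracy high and CPU load low. */
    #include <stdio.h>
    #include <string.h>

    static const char *vocab[] = {
        "temperature", "radio", "volume", "up", "down", "on", "off",
        "zero", "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine", "point"
    };
    static const int vocab_size = sizeof(vocab) / sizeof(vocab[0]);

    /* Return 1 if a recognized token is in the command vocabulary;
     * anything else is rejected rather than guessed at. */
    int in_vocabulary(const char *token)
    {
        int i;
        for (i = 0; i < vocab_size; i++)
            if (strcmp(token, vocab[i]) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", in_vocabulary("radio"));   /* prints 1 */
        printf("%d\n", in_vocabulary("turnips")); /* prints 0 */
        return 0;
    }

Anything the recognizer cannot match against such a short list can simply be rejected, which is a large part of why these constrained systems stay accurate.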

Lernout & Hauspie (http://www.lhsl.com/), a leading supplier of voice software, supplies speech engines for applications as simple as these, as well as far more complex ones. According to Klaus Schleicher, a director of product management at L&H, the simplest speech engine provides speaker-independent recognition of up to 100 words, but requires less than 200K of memory. L&H offers a more-powerful speech engine that can recognize up to 1,000 words, again without training. This engine requires 2MB of memory and can run on a 200MHz processor. This hardware costs a bit more, but is still easily obtainable for $30 today, and that price will drop over time. The larger vocabulary is suitable for applications such as a TV set-top box that can be programmed by speaking the name of a show or a hand-held PDA that can manage calendars and address books via voice.

Composing arbitrary text, such as an e-mail message, requires a much larger vocabulary. For this purpose, L&H has a speech engine with a 20,000-word vocabulary—twice as large as the average adult's. This engine requires some training, but only about five minutes per user. Even this large vocabulary doesn't require a full-blown PC or server; the company has demonstrated it using a 200MHz StrongARM processor and 32MB of memory. This speech engine could be incorporated into a webpad, allowing users to compose e-mail and other documents without using a keyboard.

One problem is that these speech engines are still not 100% reliable. The smaller the vocabulary, the lower the error rate—after all, there are fewer words to confuse. In addition, a “command and control” application has natural opportunities to seek clarification. For example, if the user says “Turn off the TV” in a noisy room, the system might respond “I didn't understand that; please try again” or “Do you want the TV off?” In these limited-domain applications, the software actually interprets the voice input to determine its meaning: in this case, that the TV should be turned off. One possible interpretation of the input phonemes might be “turnips are meaty”, but the software would quickly discard this possibility as irrelevant in the context of controlling the television. This intelligent interpretation is called natural language processing (NLP). The combination of good voice recognition and a well-programmed NLP back end can produce a reliable system.
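
As a rough illustration of that filtering step, the sketch below shows how an NLP back end might choose among candidate transcriptions. This is invented C for the example, not L&H's or anyone's actual engine; the hypothesis structure, the command list and the 0.60 confidence threshold are all assumptions.

    /* Hypothetical sketch: the recognizer returns several candidate
     * transcriptions with confidence scores; the NLP layer keeps only
     * those that map to a known command, then confirms when unsure. */
    #include <stdio.h>
    #include <string.h>

    struct hypothesis {
        const char *text;
        double confidence;   /* 0.0 to 1.0, from the acoustic model */
    };

    static const char *commands[] = { "turn off the tv", "turn on the tv" };

    static int is_command(const char *text)
    {
        size_t i;
        for (i = 0; i < sizeof(commands) / sizeof(commands[0]); i++)
            if (strcmp(text, commands[i]) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        struct hypothesis hyps[] = {
            { "turnips are meaty", 0.48 },  /* acoustically plausible... */
            { "turn off the tv",   0.41 },  /* ...but only this one fits */
        };
        size_t i, n = sizeof(hyps) / sizeof(hyps[0]);

        for (i = 0; i < n; i++) {
            if (!is_command(hyps[i].text))
                continue;                   /* irrelevant in this domain */
            if (hyps[i].confidence < 0.60)
                printf("Did you mean: %s?\n", hyps[i].text);
            else
                printf("Executing: %s\n", hyps[i].text);
            return 0;
        }
        printf("I didn't understand that; please try again.\n");
        return 0;
    }

The acoustic scores alone would have preferred “turnips are meaty”; it is the domain grammar that settles what the user actually meant.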

A working example is MIT's Jupiter system, a conversational interface for weather information built by the university's Spoken Language Systems group. You can call it (1-888-573-8255, but it is often busy) and ask about the weather anywhere in the U.S. or around the world. Jupiter runs on a 500MHz Pentium III PC under Linux, and it hasn't been optimized to reduce CPU overhead. It has a vocabulary of about 2,000 words and is very usable.

Text dictation, however, has a much larger vocabulary and an unbounded content domain: an e-mail message could have any subject matter, even turnips. NLP for this application is much harder and is generally limited to putting nouns and verbs in the right places. After dictating a few hundred words into even the best speech engine, a user is likely to have to go back and correct at least a dozen errors.

Thus, for applications where a keyboard is available and the user can type reasonably well, typing is likely to remain the most efficient interface for the foreseeable future. But as L&H's Schleicher says, “The human voice is the most natural user interface for communication and computing on a variety of devices.” For command and control applications in cars, information appliances, set-top boxes and even PCs, voice recognition is an excellent interface. The hardware just needs the right programming—and the sound of your voice.

Linley Gwennap (linleyg@linleygroup.com) is the founder and principal analyst of The Linley Group (http://www.linleygroup.com/), a technology analysis firm in Mountain View, California. He is a former editor-in-chief of Microprocessor Report.