One of the more interesting aspects of the computer systems on the Enterprise is the human-computer interface. Computer stations are equipped with audio I/O and handle a seemingly unlimited vocabulary in unrestricted English. Here’s an example:
Computer. Digest log recordings for past five solar minutes. Correlate hypotheses. Compare with life forms register. Question: Could such an entity within discussed limits exist in this galaxy? (Episode: Wolf in the Fold)
There is no way, with current technology, that we could achieve such an understanding of English, or of any language for that matter. The request also implies a high level of intelligence on the part of the computer itself.
What about the whole speech thing?
So the Enterprise relies heavily on speech recognition and semantic comprehension of natural language. Speech recognition takes phonemes (speech sounds) and tries to assemble them into words. In Star Trek, recognition of spoken words has been completely solved. In 1977, state-of-the-art systems could recognize roughly 1,000 words for a single speaker. Is it any better today? We now have Siri, perhaps the forefront of speech I/O. Microsoft has reported a word error rate (WER) of only 6.3%, slightly lower than the 6.9% from IBM’s Watson team. In 1995, IBM’s WER was 43%. Speech recognition has always been challenging because every person’s speech is so different, but great strides are being made.
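To make the WER figures above concrete: word error rate is conventionally computed as the word-level Levenshtein (edit) distance between a reference transcript and the recognizer’s hypothesis, divided by the number of reference words. Here is a minimal sketch in Python (the example sentences are invented for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five reference words -> WER of 0.2 (20%)
print(wer("compare with life forms register",
          "compare with life form register"))
```

By this measure, a 6.3% WER means roughly one word in sixteen is substituted, dropped, or inserted.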
Aside from this, semantic comprehension, or understanding, is a completely different ballgame. What progress has there been on the design of algorithms to analyze the meaning of statements?
Schmucker, K. J., and Tarr, R. M., "The Computers of Star Trek," BYTE, Dec. 1977, pp. 12-14, 172-183.