The computers of Star Trek

The original starship USS Enterprise (circa 2245) was a marvel of engineering. Powered by a controlled matter-antimatter reaction, the Enterprise was brimming with technology, and its computer was tasked with controlling much of it, from monitoring life-support systems to food synthesis. That technology tackled problems computer science is still dealing with today: voice control, automatic programming, and computer analysis of complex problems.

In 1977, Schmucker and Tarr [1] wrote a paper with the same title as this blog post, on which this discussion is based. They postulated the size of the Enterprise’s data repository as 10²² bits, with an access time of 10⁻¹⁵ seconds (a femtosecond). Now 10²² bits is equivalent to 1.25 zettabytes, or about 1.25 trillion gigabytes. To put this into perspective, a recent study estimated that nearly 8 zettabytes of data would be generated globally in 2015, rising to 35 zettabytes by 2020. So we are generating a lot of data, and that doesn’t even take into account data that already exists or is yet to be digitized. Imagine storing imagery of the entire planet. Add to this a couple of centuries more of data, plus data from other planets in the United Federation of Planets, and it is almost impossible to predict how much storage would be needed. Likely somewhat more than 10²² bits.
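For the sceptics, here is a minimal sketch of the unit arithmetic behind those figures, taking a zettabyte as 10²¹ bytes:

    # Convert the Enterprise's postulated 10^22 bits into more familiar units.
    bits = 10**22
    bytes_total = bits / 8                      # 1.25e21 bytes
    zettabytes = bytes_total / 10**21           # 1.25 ZB
    gigabytes = bytes_total / 10**9             # 1.25e12 GB, i.e. 1.25 trillion GB

    print(f"{zettabytes:.2f} zettabytes, or {gigabytes:.2e} gigabytes")
    # -> 1.25 zettabytes, or 1.25e+12 gigabytes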


Isolinear chips

Does the Enterprise use magnetic memory? I doubt it. Later starships made use of isolinear optical chips (circa 2349), which are about the size of a modern USB stick. Each holds 2.15 kiloquads, which works out to 2^215 bytes of data. Now this might make more sense from a data storage point of view. Imagine being able to carry around the equivalent of roughly 5.27 × 10⁴³ zettabytes of information. The nice thing about these chips is that they integrate storage and processing into one entity, which may reduce the reliance on access time. The Galaxy-class ships had three redundant computer cores, each with 2048 storage modules of 144 isolinear chips, for a total of 294,912 chips per core.
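The same back-of-the-envelope arithmetic, assuming that 2.15 kiloquads really does equal 2^215 bytes, looks like this:

    # Capacity of a single isolinear chip, assuming 2.15 kiloquads = 2^215 bytes.
    chip_bytes = 2**215
    chip_zettabytes = chip_bytes / 10**21       # ~5.27e43 ZB per chip

    # Chips in one Galaxy-class computer core: 2048 modules x 144 chips each.
    chips_per_core = 2048 * 144                 # 294,912 chips

    print(f"{chip_zettabytes:.3e} zettabytes per chip, {chips_per_core} chips per core")
    # -> 5.266e+43 zettabytes per chip, 294912 chips per core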

So from a futuristic perspective, I doubt the ability to store data will be an issue. Current solid-state technology allows for capacities of 4 GB, which I’m sure will increase. There has to be a better way, and maybe there is. Scientists at the Swiss Federal Institute of Technology are working on a way to encode data onto DNA. One gram of DNA can potentially hold up to 455 exabytes of data, which is truly amazing. This leads us down the road of bio-inspired computing, which is certainly possible; it will just take time.
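To tie that figure back to Schmucker and Tarr’s estimate, a rough calculation, assuming the quoted 455 exabytes per gram and 1000 exabytes to the zettabyte, suggests the Enterprise’s entire 1.25-zettabyte repository would fit in a few grams of DNA:

    # How much DNA would the Enterprise's 10^22-bit repository need,
    # assuming a density of 455 exabytes per gram?
    repository_exabytes = (10**22 / 8) / 10**18   # 1250 EB = 1.25 ZB
    grams_of_dna = repository_exabytes / 455      # ~2.7 grams

    print(f"{grams_of_dna:.1f} grams of DNA")
    # -> 2.7 grams of DNA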

As for access time? It’s hard to compare without more information on the type of storage. Network speeds are moving towards 32 terabytes per second over a single piece of glass fiber, enough to transfer a 1 GB movie in about 0.03 milliseconds. This may be fast enough for a contained environment like the Enterprise. The downside is of course data access from the storage medium itself: current technology puts this at about 500 MB/s on a top-of-the-line solid state drive, which is fast, but still a bottleneck.
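A quick sketch of those two transfer times, using the speeds quoted above (32 TB/s on the fiber, 500 MB/s off the drive):

    # Time to move a 1 GB movie at the quoted network and SSD speeds.
    movie_bytes = 1 * 10**9                       # 1 GB
    network_bps = 32 * 10**12                     # 32 terabytes per second
    ssd_bps = 500 * 10**6                         # 500 MB per second

    network_ms = movie_bytes / network_bps * 1000 # ~0.03 ms over the fiber
    ssd_s = movie_bytes / ssd_bps                 # ~2 s off the drive

    print(f"network: {network_ms:.3f} ms, SSD: {ssd_s:.1f} s")
    # -> network: 0.031 ms, SSD: 2.0 s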

[1] Schmucker, K.J. and Tarr, R.M., “The computers of Star Trek”, BYTE, Dec. 1977, pp. 12-14, 172-183.
