
The Difficulty of Designing an Error-Correcting Code


Your job is to design a code that can fix its own errors. Could you do it?

MIT News announced "New Frontier in Error-Correcting Codes: Coding scheme for interactive communication is the first to near optimality on three classical measures." What, exactly, is an error-correcting code?

Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call "noise." (Emphasis added.)

Computer geeks might remember "parity bits" as an early attempt to check data integrity across transmission lines. The parity bit is computed from the data bits, typically by recording whether the count of 1s is even or odd, and appended to each data "byte." Running the calculation again on the receiving end checks whether the same result obtains; if not, the byte is re-sent. Detection without correction is costly and time-consuming, since every corrupted byte must be retransmitted.
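As a rough illustration (not any particular line protocol; the helper names are made up for this sketch), here is even parity over a 7-bit payload:

```python
def add_parity(byte: int) -> int:
    """Sender side: append an even-parity bit to a 7-bit payload."""
    parity = bin(byte & 0x7F).count("1") % 2
    return (byte << 1) | parity

def check_parity(frame: int) -> bool:
    """Receiver side: in an intact frame, the total count of 1s is even."""
    return bin(frame).count("1") % 2 == 0

frame = add_parity(0b1011001)
assert check_parity(frame)              # clean frame passes
assert not check_parity(frame ^ 0b10)   # a single flipped bit is caught
# Parity detects an odd number of flipped bits but cannot locate or fix
# them, which is why the corrupted byte must be re-sent.
```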

The news item mentions that the problem gets much harder when the "chunks of data" get smaller. Cell phones that exchange information, for instance, tend to send small chunks over long periods of time, pushing the limits of error correction.

The theory of error-correcting codes dates back to Claude Shannon’s work in 1948. His model assumed random noise, with each bit having an equal, independent chance of being flipped. Today, with hacking and terrorism on everyone’s mind, the "more stringent case" of "adversarial noise" is attracting the attention of computer scientists. Since adversarial noise is the worst case, a code that withstands it withstands random noise as well. How might one go about designing such a code?

Error-correcting codes — both classical and interactive — work by adding some extra information to the message to be transmitted. They might, for instance, tack on some bits that describe arithmetic relationships between the message bits. Both the message bits and the extra bits are liable to corruption, so decoding a message — extracting the true sequence of message bits from the sequence that arrives at the receiver — is usually a process of iterating back and forth between the message bits and the extra bits, trying to iron out discrepancies.
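The news story doesn’t spell out the internals of the MIT scheme, so as a classical stand-in, here is the textbook Hamming(7,4) code: three parity bits are tacked onto four message bits, and each parity bit is the XOR of a different subset of the data (the "arithmetic relationships" mentioned above). Recomputing the checks at the receiver not only detects a single flipped bit but points to its position:

```python
def hamming74_encode(d):
    """Encode 4 data bits with 3 parity bits (Hamming(7,4)).
    Each parity bit is the XOR of a subset of the data bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # standard positions 1..7

def hamming74_decode(r):
    """Recompute the checks; the pattern of failures (the syndrome)
    gives the 1-indexed position of a single flipped bit."""
    p1, p2, d1, p3, d2, d3, d4 = r
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:                      # nonzero syndrome locates the error
        r = r.copy()
        r[syndrome - 1] ^= 1
    return [r[2], r[4], r[5], r[6]]   # the recovered data bits

msg = [1, 0, 1, 1]
received = hamming74_encode(msg)
received[5] ^= 1                          # corrupt one bit in transit
assert hamming74_decode(received) == msg  # corrected, no re-send needed
```

Unlike the bare parity bit above, the syndrome both detects and locates the error, so no retransmission is needed.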

Here are the "three classical measures" of optimality for error-correcting codes (a toy sketch of their trade-offs follows the list):

  1. How much noise can they tolerate?
  2. What’s the maximum transmission rate they can achieve?
  3. How time-consuming are the encoding and decoding processes?
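To make those trade-offs concrete, consider the simplest code of all, a rate-1/3 repetition code (an illustration only, nothing like the MIT construction): it scores well on measures 1 and 3, tolerating one flip per block and decoding by a quick majority vote, but measure 2, the transmission rate, is a dismal one-third:

```python
def encode_repeat(bits, n=3):
    """Rate-1/n repetition code: each bit is sent n times."""
    return [b for b in bits for _ in range(n)]

def decode_repeat(received, n=3):
    """Majority vote per block: fast, and it survives up to (n-1)//2
    flips per block, but the transmission rate is only 1/n."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

msg = [1, 0, 1, 1, 0]
tx = encode_repeat(msg)          # 15 bits sent for 5 bits of message
tx[4] ^= 1                       # one flip inside a block is survivable
assert decode_repeat(tx) == msg
```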

This month, the MIT grad students are presenting their latest achievement at an IEEE symposium: "the first interactive coding scheme to approach the optimum on all three measures." Previous efforts only scored two out of three, like the joke about "Faster, better, cheaper: pick any two." If MIT’s team can score on all three, that’s really something.

The news release describes the strategy the MIT team used to tolerate error rates up to one-fourth of the transmitted bits, considered the maximum tolerable amount for interactive schemes. It took some pretty sophisticated brainpower to design a system this good, but there’s more work to do.

"We still need to worry a little bit about constants," Braverman says. "But before you can worry about constants, you have to know that there is a constant-rate scheme. This is very nice progress and a prerequisite to asking those next questions."

You know what we’re going to say next, don’t you? Living cells employ high-fidelity error-correcting codes.

Image credit: © Jose-Luis Olivares/MIT News.

Evolution News

Evolution News & Science Today (EN) provides original reporting and analysis about evolution, neuroscience, bioethics, intelligent design and other science-related issues, including breaking news about scientific research. It also covers the impact of science on culture and conflicts over free speech and academic freedom in science. Finally, it fact-checks and critiques media coverage of scientific issues.

Tags

Computational Sciences, Science