Error correcting codes

I've just finished reading A Commonsense Approach to the Theory of Error-Correcting Codes. The book does exactly what it says on the tin; it explains error correcting codes in terms of linear modulo-2 algebra, only getting into Galois fields and all that in an appendix.

And, as such, it does little more than scratch the surface. It only goes into Reed-Solomon codes that can correct single-word errors, for example. But hey, I'm not complaining - it's done a great job of giving me an intuitive understanding of Hamming codes, cyclic codes (such as CRCs), the single-word-correcting RS codes, and so on. And I've learnt a lot about linear feedback shift registers.

But it strikes me that the whole field of error correcting codes is a bit insular. The maths required to really grasp it is complex, and finite fields are bizarre things. While lots of people can experiment with things like data compression or basic encryption, error correction codes - the third cornerstone of low-level coding technology - remain quite inscrutable.

This sucks.

Let's go back to basics. An error correcting code is a way of taking some arbitrary binary data, say k bits, and then adding some extra bits to make a larger message of n bits. Since n > k, clearly only a subset of the possible n-bit blocks are "valid". If an error occurs in the n-bit message, one or more bits of it are flipped - in effect, the message is XORed with an "error bitmap" indicating which bits are flipped. If the resulting received message is then not a valid message, we can at least report that fact and discard the message or, assuming that smaller numbers of bits wrong are more common than larger numbers, assume that the original message was probably the "nearest" valid one to the one received.
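
To make that concrete, here's a rough sketch of my own (not anything from the book) of nearest-valid-message decoding: XOR the received block against each valid message, count the differing bits, and pick the closest one, giving up if there's a tie.

    #include <stdio.h>

    /* Count the bits set in x, i.e. its Hamming weight. */
    static int popcount(unsigned x)
    {
        int n = 0;
        while (x) {
            n += x & 1;
            x >>= 1;
        }
        return n;
    }

    /* Return the index of the closest valid message, or -1 on a tie. */
    static int decode_nearest(unsigned received, const unsigned *valid, int nvalid)
    {
        int best = -1, best_dist = 33, tie = 0;
        for (int i = 0; i < nvalid; i++) {
            int dist = popcount(received ^ valid[i]);  /* number of differing bits */
            if (dist < best_dist) {
                best_dist = dist;
                best = i;
                tie = 0;
            } else if (dist == best_dist) {
                tie = 1;
            }
        }
        return tie ? -1 : best;
    }

    int main(void)
    {
        /* Toy code: two 3-bit messages at distance 3, so any one-bit error is
         * still closer to the message it came from. */
        unsigned valid[] = { 0x0 /* 000 */, 0x7 /* 111 */ };
        printf("received 010 -> valid message %d\n", decode_nearest(0x2, valid, 2));
        return 0;
    }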

Simple parity involves adding an extra bit to, say, 7-bit messages to make 8-bit messages. This means that exactly half of the 8-bit messages are valid, so an error that produces an invalid block can only be detected, not corrected - since any invalid message has several valid messages that are just a single bit's change away from it, you can't tell which one it was originally.
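
As a quick illustration (again my own sketch, not the book's), computing an even-parity bit is just XORing the seven message bits together and appending the result:

    #include <stdio.h>

    /* Append an even-parity bit to a 7-bit message, giving an 8-bit block in
     * which the number of 1 bits is always even. A single flipped bit makes
     * the overall parity odd, which can be detected but not located. */
    static unsigned add_even_parity(unsigned msg7)
    {
        unsigned parity = 0;
        for (int i = 0; i < 7; i++)
            parity ^= (msg7 >> i) & 1;
        return (msg7 << 1) | parity;
    }

    int main(void)
    {
        unsigned block = add_even_parity(0x45);    /* 1000101 -> 10001011 */
        printf("0x45 -> 0x%02X\n", block);
        return 0;
    }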

But if we expand 8-bit byte messages, for example, to n-bit messages, and choose our valid n-bit messages such that each one is at least three bits different from any other, then any single-bit error can be corrected, as the resulting invalid message will always have a single nearest valid message. Hamming codes will let you do this by splitting each 8-bit byte into two 4-bit nibbles and applying the standard (7,4) Hamming code to turn them into two 7-bit messages, for a total expansion of each 8-bit byte into 14 bits. Which is better than the 16 bits it would take to transmit everything twice (which would only give you error detection, not correction, anyway). But is it optimal?
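
Here's a minimal sketch of that scheme (my own construction of the standard (7,4) code, with parity bits in positions 1, 2 and 4 and my own choice of bit packing): each nibble picks up three parity bits, and a byte becomes two 7-bit codewords.

    #include <stdio.h>

    /* Encode the low 4 bits of `nibble` into a 7-bit Hamming (7,4) codeword. */
    static unsigned hamming74_encode(unsigned nibble)
    {
        unsigned d1 = (nibble >> 0) & 1;
        unsigned d2 = (nibble >> 1) & 1;
        unsigned d3 = (nibble >> 2) & 1;
        unsigned d4 = (nibble >> 3) & 1;

        /* Parity bits, each covering a different subset of the data bits. */
        unsigned p1 = d1 ^ d2 ^ d4;
        unsigned p2 = d1 ^ d3 ^ d4;
        unsigned p3 = d2 ^ d3 ^ d4;

        /* Pack as p1 p2 d1 p3 d2 d3 d4 (codeword positions 1..7). */
        return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) |
               (d2 << 2) | (d3 << 1) | d4;
    }

    int main(void)
    {
        unsigned byte = 0xA7;
        unsigned low  = hamming74_encode(byte & 0x0F);
        unsigned high = hamming74_encode(byte >> 4);
        printf("0x%02X -> 0x%02X 0x%02X (two 7-bit codewords, 14 bits)\n",
               byte, high, low);
        return 0;
    }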

If we have an 8-bit message expanding into 14 bits, then for each of the 256 valid 14-bit messages we need to 'reserve' the 14 invalid 14-bit messages produced by the 14 possible one-bit errors in that message. So, from basic principles, we need 15*256 = 3,840 distinct messages, while 14 bits encodes 16,384 different possible messages. Perhaps we should be able to do single-bit error correction of 8-bit bytes in just 12 bits: that gives us 4,096 possibilities, and 256 valid messages plus their one-bit errors would only need 13*256 = 3,328 of them...
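
That counting argument is easy to check mechanically. A throwaway sketch: for each total length n, compare the number of messages that 256 codewords and their one-bit neighbours would need against the 2^n available.

    #include <stdio.h>

    int main(void)
    {
        for (int n = 9; n <= 14; n++) {
            long needed    = 256L * (1 + n);   /* each codeword plus its n one-bit errors */
            long available = 1L << n;
            printf("n = %2d: need %5ld, have %5ld -> %s\n",
                   n, needed, available,
                   needed <= available ? "possible" : "ruled out");
        }
        return 0;
    }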

But are there 256 different messages we can pick out of 4,096 which are all sufficiently separate? That's an interesting question. And what better way to find out than brute force...
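
One simple-minded way to brute-force it (a sketch of my own, not necessarily the approach taken here): walk through all 4,096 candidate 12-bit messages in order and keep each one that differs from every message kept so far in at least three bit positions.

    #include <stdio.h>

    /* Count the bits set in x. */
    static int popcount(unsigned x)
    {
        int n = 0;
        while (x) {
            n += x & 1;
            x >>= 1;
        }
        return n;
    }

    int main(void)
    {
        unsigned kept[4096];
        int count = 0;

        for (unsigned c = 0; c < 4096; c++) {
            int ok = 1;
            for (int i = 0; i < count; i++) {
                if (popcount(c ^ kept[i]) < 3) {   /* too close to a kept message */
                    ok = 0;
                    break;
                }
            }
            if (ok)
                kept[count++] = c;
        }

        printf("%d 12-bit messages found with minimum distance 3\n", count);
        return 0;
    }

If that greedy pass (or something smarter) gets to 256 or more, the answer is yes.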

1 Comment

  • By @ndy Macolleague, Wed 30th Apr 2008 @ 2:42 pm

    You're writing recreationally in C again? 🙂

    Maybe this can be simplified into a deduction problem and specified in Prolog? I guess that would take longer to write and you'd end up with a less impressive piece of code. 😉

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales