Worldwide demand for wireless network access is increasing at a tremendous pace. To cope with this demand and provide high throughput and low latency to applications, wireless network protocols need to combat noise, interference from other nodes as well as exogenous sources, and channel fading. These factors cause wireless channel conditions to vary with time, making it challenging to achieve high performance.
A good, perhaps even ideal, solution to this problem---and an attractive architecture for practical wireless networks---is to make communication links rateless. In a rateless network, the sender encodes data without any explicit channel estimation or adaptation, implicitly adapting to the vagaries introduced by noise, interference, or fading. The receiver processes data (symbols or bits) as it arrives, attempting to decode, until the message is received successfully. If the code is good, then the network can perform well without incurring the complexities of implementing multiple fixed-rate codes and choosing among them, implementing several bit-to-symbol mappings and picking a suitable one, and, last but not least, estimating channel quality and choosing well even in the face of imperfect and delayed estimates.
Spinal codes are a new family of rateless codes. They apply a random hash function over the message bits to produce pseudo-random bits, which in turn can (optionally) be mapped directly to a dense Gaussian or uniform constellation for transmission. The schematic of the encoder is shown below (left):
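The hash-chained structure of the encoder can be sketched in a few lines of Python. This is a minimal illustration, not the released implementation: SHA-256 stands in for the hash family, the initial spine value, chunk size k, and symbol width c are arbitrary choices for the sketch, and the output is left as small integers rather than constellation points.

```python
import hashlib


def spinal_encode(message_bits, k=4, passes=2, c=6):
    """Illustrative spinal encoder sketch (SHA-256 as a stand-in hash)."""
    # Split the message into k-bit chunks m_1 ... m_n.
    chunks = [bytes(message_bits[i:i + k]) for i in range(0, len(message_bits), k)]
    spine = b"\x00"  # s_0: a fixed initial spine value (arbitrary here)
    spines = []
    for m in chunks:
        # s_i = h(s_{i-1}, m_i): the hash chains each chunk onto the spine.
        spine = hashlib.sha256(spine + m).digest()
        spines.append(spine)
    # Rateless output: each pass emits a fresh pseudo-random c-bit value per
    # spine, which a modulator could map onto a dense constellation. More
    # passes mean more symbols, i.e. a lower effective rate.
    symbols = []
    for p in range(passes):
        for s in spines:
            stream = hashlib.sha256(s + bytes([p])).digest()
            symbols.append(stream[0] & ((1 << c) - 1))
    return symbols
```

Because the sender can keep emitting passes indefinitely, no rate is fixed in advance; the receiver simply collects symbols until decoding succeeds.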
Spinal codes can be decoded efficiently, for example using the bubble decoder shown schematically above (right). Decoding is a sequential search over a tree in which each edge has a branch cost equal to the "distance" between the received symbol (or bits) and the symbol (bits) that would have been received had there been no noise. The decoder seeks the path with minimum total cost, which corresponds to exact maximum-likelihood decoding if the full tree is maintained. That, unfortunately, is computationally intractable, so the decoder prunes the tree as shown in the picture above: it expands the tree for d steps (with d small), then keeps only the B subtrees rooted d edges in the past whose descendants have the lowest cost, discarding the rest. In practice, B might be about 256.
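The pruned tree search above amounts to a beam search over candidate spines. The sketch below is a toy version under stated assumptions: SHA-256 again stands in for the hash family, symbols are 16-bit integers, pruning happens after every step (i.e. d = 1), and the helper names (`_spine_step`, `_symbol`, `bubble_decode`) are invented for this illustration.

```python
import hashlib
from itertools import product

K = 2  # message bits consumed per spine step (small, so the tree stays tiny)


def _spine_step(spine, chunk_bits):
    # s_i = h(s_{i-1}, m_i): chain the previous spine value with the next chunk.
    return hashlib.sha256(spine + bytes(chunk_bits)).digest()


def _symbol(spine):
    # One pseudo-random 16-bit "symbol" derived from a spine value.
    return int.from_bytes(hashlib.sha256(spine + b"sym").digest()[:2], "big")


def encode(message_bits):
    # Matching toy encoder: one symbol per spine value, single pass.
    chunks = [tuple(message_bits[i:i + K]) for i in range(0, len(message_bits), K)]
    spine, out = b"\x00", []
    for c in chunks:
        spine = _spine_step(spine, c)
        out.append(_symbol(spine))
    return out


def bubble_decode(received, B=8):
    """Beam search over the spine tree, keeping the B lowest-cost paths."""
    beam = [(0, b"\x00", [])]  # (path cost, spine value, decoded bits so far)
    for r in received:
        candidates = []
        for cost, spine, bits in beam:
            for chunk in product((0, 1), repeat=K):
                s = _spine_step(spine, chunk)
                # Branch cost: squared distance between the received symbol
                # and the symbol this candidate path would have produced.
                candidates.append((cost + (r - _symbol(s)) ** 2, s, bits + list(chunk)))
        # Prune: keep only the B best partial paths (the "bubble").
        beam = sorted(candidates, key=lambda t: t[0])[:B]
    return beam[0][2]  # bits along the lowest-cost full path
```

Raising B widens the surviving bubble of paths, which is exactly the computation-versus-resilience trade-off discussed below.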
Spinal decoding with a polynomial-time bubble decoder provably achieves rates approaching Shannon capacity over both additive white Gaussian noise (AWGN) channels and binary symmetric channels (BSC). We have developed software and hardware implementations of spinal codes with several experimental results (see the papers below). Ultimately, hash functions give spinal codes their many desirable properties (and one may view spinal codes as an approach to de-randomize, and render practical, Shannon's random codebook construction from the 1940s).
Spinal codes achieve a large fraction of channel capacity. The instance above (left) has 30% higher average throughput than Strider, and 26% higher than Raptor.
The bubble decoder explores a tree. By exploring more nodes, the decoder becomes more resilient to noise bursts and thereby achieves higher throughput; this, however, comes at the cost of more computation. Device implementors can trade off computation for throughput, as the experimental results show (above, right).
Spinal codes are also noteworthy for performing close to capacity for small message (packet) sizes, which is unusual for a code.
The structure of spinal encoders and decoders helps exploit parallel architectures. We built an Airblue-based FPGA prototype that decodes at 10Mbps (below, left). Over-the-air experiments with the hardware prototype closely match simulation results (below, middle).
To contact us please email cortex at nms . csail . mit . edu.
The full source code of spinal codes, along with the simulation framework and implementations of the compared codes (LDPC, Strider, and Raptor), has been released under the MIT license and is available on GitHub. See the release page for full information.
National Science Foundation
Graduate fellowships: Irwin and Joan Jacobs Fellowship, Claude E. Shannon Assistantship, Intel Fellowship.