Worldwide demand for wireless network access is increasing at a tremendous pace. To cope with this demand and provide high throughput and low latency to applications, wireless network protocols need to combat noise, interference from other nodes as well as exogenous sources, and channel fading. These factors cause wireless channel conditions to vary with time, making it challenging to achieve high performance.

A good, perhaps even ideal, solution to this problem---and an
attractive architecture for practical wireless networks---is to make
communication links **rateless**. In a rateless network, the
sender encodes data without any explicit estimation or adaptation,
implicitly adapting to the vagaries introduced by noise, interference,
or fading. The receiver processes data (symbols or bits) as it
arrives, decoding it, until the message is received successfully. If
the code is good, then the network can perform well without incurring
the complexities and challenges of implementing multiple fixed-rate
codes and picking between them, implementing several bit-to-symbol
mappings and picking a suitable one, and, last but not least,
estimating channel quality and choosing well even in the face of
imperfect and delayed estimates.
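The rateless operation described above can be sketched as a simple link loop. The sketch below is a minimal illustration, not any particular protocol: `encode_stream`, `try_decode`, and `channel` are hypothetical placeholders for a concrete code and channel model.

```python
def rateless_transmit(encode_stream, try_decode, channel, max_symbols=10_000):
    """Sketch of a rateless link: the sender keeps emitting symbols,
    and the receiver attempts to decode after every arrival, stopping
    (in practice, via an ACK) as soon as decoding succeeds."""
    received = []
    for symbol in encode_stream:
        received.append(channel(symbol))   # symbol as seen by the receiver
        message = try_decode(received)     # None until decoding succeeds
        if message is not None:
            return message, len(received)  # message and symbols consumed
        if len(received) >= max_symbols:
            break
    return None, len(received)             # gave up
```

With a good rateless code, the number of symbols consumed adapts automatically to the channel: a clean channel needs few symbols, a noisy one more.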

**Spinal codes** are a new family of rateless codes. They apply
a random hash function over the message bits to produce pseudo-random
bits, which in turn can (optionally) be mapped *directly* to a
dense Gaussian or uniform constellation for transmission. The
schematic of the encoder is shown below (left):

Spinal codes can be decoded efficiently, for example using a
*bubble decoder* shown schematically above (right). This decoding
is sequential, over a tree, where each edge of the tree has a branch
cost equal to the "distance" between the received symbol (or bits) and
the symbol (bits) that would have been received had there been no
noise. The decoder finds the path with minimum total cost, which
corresponds to perfect maximum likelihood decoding if one were to
maintain the full tree. That, unfortunately, is computationally
intractable, so we prune the tree as shown in the picture above: we
expand the tree for d steps (d is small), compute the B best
ancestors d edges in the past, and discard all ancestors other than
those B. In practice, B might be about 256.
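Under simplifying assumptions (a toy hash built from SHA-256, k-bit message chunks, one symbol per spine value, and d = 1), the encoder and a B-wide bubble decoder can be sketched in Python. All names here are illustrative, not the released implementation:

```python
import hashlib

K, C = 4, 16          # bits per message chunk; bits drawn per symbol

def h(spine, chunk):
    """Toy spine hash: mixes the previous spine value with a K-bit chunk."""
    data = spine.to_bytes(4, "big") + bytes([chunk])
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "big")

def symbol(spine):
    """Map C pseudo-random bits of the spine value to a point in [0, 1)."""
    return (spine & ((1 << C) - 1)) / (1 << C)

def encode(chunks):
    """One pass of spinal encoding: s_i = h(s_{i-1}, m_i) with s_0 = 0,
    then one constellation symbol per spine value."""
    spine, out = 0, []
    for m in chunks:
        spine = h(spine, m)
        out.append(symbol(spine))
    return out

def bubble_decode(received, B=16):
    """Beam search over the message tree with d = 1: expand every K-bit
    chunk value from each surviving node, score by squared distance to
    the received symbol, and keep the B lowest-cost paths."""
    beam = [(0.0, 0, [])]                  # (path cost, spine, chunks)
    for y in received:
        grown = []
        for cost, spine, chunks in beam:
            for m in range(1 << K):
                s = h(spine, m)
                grown.append((cost + (y - symbol(s)) ** 2, s, chunks + [m]))
        beam = sorted(grown, key=lambda t: t[0])[:B]
    return beam[0][2]                      # chunks on the cheapest path
```

On a noiseless channel the transmitted path has cost zero and survives every pruning step, so the decoder recovers the message exactly; with noise, a larger B makes survival of the correct path more likely, at the price of more computation.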

Spinal codes decoded with a polynomial-time bubble decoder provably achieve rates essentially equal to Shannon capacity over both additive white Gaussian noise (AWGN) channels and binary symmetric channels (BSC). We have developed software and hardware implementations of spinal codes and report several experimental results (see the papers below). Ultimately, hash functions give spinal codes their many desirable properties; one may view spinal codes as a way to de-randomize, and render practical, Shannon's random codebook construction from the 1940s.

Spinal codes achieve a large fraction of channel capacity. The instance above (left) has 30% higher average throughput than Strider, and 26% higher than Raptor.

The bubble decoder explores a tree. By exploring more nodes, the decoder becomes more resilient to noise bursts and thus achieves higher throughput, but at the cost of more computation. Device implementors can trade off computation for throughput, as can be seen from the experimental results (above, right).

Spinal codes are also noteworthy for performing close to capacity even at small message (packet) sizes, which is unusual among practical codes.

The structure of spinal encoders and decoders helps exploit parallel architectures. We built an Airblue-based FPGA prototype that decodes at 10Mbps (below, left). Over-the-air experiments with the hardware prototype closely match simulation results (below, middle).

People

- PhD students: Jonathan Perry, Peter Iannucci, Kermin Elliott Fleming
- PIs: Hari Balakrishnan and Devavrat Shah
To contact us please email cortex at nms . csail . mit . edu.

Publications

- J. Perry, H. Balakrishnan, D. Shah,
**Rateless Spinal Codes**, HotNets 2011.

*A short position paper motivating the main idea and intuition behind spinal codes. No results or proofs, just the "case for" and basic concept.*

- J. Perry, P. Iannucci, K. Fleming, H. Balakrishnan, D. Shah,
**Spinal Codes**, SIGCOMM 2012.

*The detailed paper on the construction of the encoder, the efficient bubble decoder for spinal codes, and the results of several experiments. Also a proof (sketch) of the decoder's efficiency and capacity results for AWGN.*

- P. Iannucci, J. Perry, H. Balakrishnan, D. Shah,
**No Symbol Left Behind: A Link-Layer Protocol for Rateless Codes**, MobiCom 2012.

*A general link-layer coordination protocol for rateless codes such as Raptor and layered rateless codes such as Strider to determine when the sender should pause for feedback, using a dynamic programming method over the "decoding cumulative distribution function", the distribution of the number of symbols required for a complete decoding of a packet.*

- D. Shah, J. Perry, P. Iannucci, H. Balakrishnan,
**De-randomizing Shannon: The Design and Analysis of a Capacity-Achieving Rateless Code**, Manuscript in preparation/submission.

*Proofs that spinal codes achieve capacity over AWGN and BSC using the method of error exponents, demonstrating an effective de-randomization of Shannon's random codebook idea. The proofs here are very different from, and more general than, those in the SIGCOMM paper above.*

- P. Iannucci, K. Fleming, J. Perry, H. Balakrishnan, D. Shah,
**A Hardware Spinal Decoder**, ANCS 2012.

*The hardware design and implementation of spinal codes on Airblue, demonstrating 12.5 Megabits/s throughput on an FPGA. The paper also demonstrates over-the-air experimental results for SNR between 2 dB and 14 dB.*

Software

The full source code of spinal codes, along with the simulation framework and implementations of the codes we compare against (LDPC, Strider, and Raptor), has been released under the MIT license and is available on GitHub. See the release page for full information.

Related Work

- Shannon: His Gaussian random codebook scheme achieves capacity for AWGN and BSC, and is inherently rateless, but the approach is computationally intractable.
- LT codes: Luby's LT codes use random graph structures to provide excellent codes for the binary *erasure* channel (the packet-loss channel) with an efficient encoder and decoder.
- Raptor codes: Shokrollahi's Raptor codes, built on top of LT codes, provably achieve capacity over the binary erasure channel. In our project, we have adapted Raptor codes for AWGN channels using the methods of Palanki/Yedidia, and have found that Raptor codes perform quite well over AWGN and BSC too. (Spinal codes perform somewhat better, for reasons explained in the SIGCOMM paper above.)
- Layered approach: Erez, Trott, and Wornell use existing base codes and combine them to produce transmission symbols in a rateless way. They show that by producing rateless symbols using the appropriate selection of linear combinations of symbols generated by the base codes, capacity can be achieved by the resulting rateless code as the number of layers becomes large enough, provided the fixed-rate base code achieves capacity at some fixed SNR.
- Strider: Gudipati and Katti developed Strider, which is a code that is essentially the layered approach described by Erez et al. Our experiments in the SIGCOMM paper compare Raptor, Strider, and spinal codes. (Thanks to Aditya Gudipati for providing access to his Strider software.)
- Turbo codes: Not rateless, but widely used and a very good fixed-rate code. We believe their decoder implementation complexity is higher than that of spinal codes.
- LDPC codes: Gallager's low-density parity check code is another excellent fixed-rate code. We compare LDPC codes to spinal codes experimentally in the SIGCOMM paper, finding that spinal codes perform better even when the channel SNR is constant.

Thanks to...

National Science Foundation

Graduate fellowships: Irwin and Joan Jacobs Fellowship, Claude E. Shannon Assistantship, Intel Fellowship.