8 thoughts on “Bandwidth”

  1. I still want to know technical details, myself.

    I don’t disbelieve they’ve got something – a way to improve on TCP failure retries is a plausible thing.

    But I want more than the vague “we use algebra!” explanation before I dive in.

    (Also, their “Code-on” licensing company has the most pathetic web presence I’ve seen in a while…

    Really, MIT? You’re actively trying to license this stuff, and that’s the best you can do?)

  2. An extraordinary claim.

    As best I can make out, it uses some kind of forward error correction applied to groups of packets, allowing entire lost packets to be recovered. How that produces a net improvement in throughput, or doesn’t kill latency, escapes me.

  3. So, their algorithm is a lot cleverer than this, but I can give you the general flavor; there’s an error-correcting code at the inter-packet level. Think of RAID-5 applied to packets and you have the general idea. As long as the packet loss doesn’t exceed some specified rate it can rebuild the lost packet from the others.

    This doesn’t actually help with bandwidth per se – no free lunch – but it might help a lot in situations where raw bandwidth isn’t the limiting factor and pauses for retransmission of lost data happen frequently (mostly wireless networks?). A toy sketch of the parity idea is below.
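
    This is not their actual algorithm, just a minimal Python sketch of the RAID-5-style XOR analogy above, with made-up packet names and sizes: one parity packet per group lets the receiver rebuild any single lost packet without asking for a retransmit.

    ```python
    # Toy illustration of the RAID-5 analogy: XOR parity over a group of packets.
    # Assumes equal-length packets and at most one loss per group.

    def make_parity(packets):
        """XOR all packets together to form a parity packet."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover_lost(received, parity):
        """Rebuild at most one missing packet (marked None) from the rest plus parity."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) > 1:
            raise ValueError("XOR parity can only recover one lost packet per group")
        if not missing:
            return list(received)
        rebuilt = bytearray(parity)
        for p in received:
            if p is not None:
                for i, b in enumerate(p):
                    rebuilt[i] ^= b
        out = list(received)
        out[missing[0]] = bytes(rebuilt)
        return out

    # Example: a group of 4 packets, packet 2 lost in transit.
    group = [b"pkt0-data", b"pkt1-data", b"pkt2-data", b"pkt3-data"]
    parity = make_parity(group)
    damaged = [group[0], group[1], None, group[3]]
    assert recover_lost(damaged, parity)[2] == b"pkt2-data"
    ```

    The cost of the lunch is the extra parity packet per group (and more parity if you want to survive more than one loss per group), which is the bandwidth/latency trade-off the comment above is pointing at.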

  4. The New York Times came up with its own bandwidth solution. In its original form, it consisted of simply reclassifying bad data as good. A breakthrough occurred recently, however, in which it was found that one could simply make up data to fill in for any missing pieces. More recently, it was found that simply making up the entire data stream not only resulted in a huge advance in bandwidth, but also seemed to result in news that Times reporters liked much better…

    1. Newsweek has pioneered a further advance: Stop sending any data. Not quite fully implemented yet, but the bandwidth drop of going entirely online has to be astronomical. (The magazines that were published and distributed to stores but never ‘picked up’ should count, IMNSHO)

  5. If that works, it would be quite helpful. Lost packets are a major contributor to high latency because they blow the pipeline. The cost isn’t the extra bandwidth for the retransmit (although that matters); it’s having to stop a smooth-flowing sequence, restart, and then wait two round-trip times (once for the loss alert to reach the source, then for the source to send the packet again). Packet loss costs go up faster as a network gets busy, because losses become more likely and each retransmit is more expensive. A back-of-envelope sketch of that cost is below.
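
    To put rough numbers on the two-round-trip point, here is a back-of-envelope sketch; the RTT, loss rate, and send rate are invented illustrative values, not measurements, and it assumes each loss stalls the whole flow serially.

    ```python
    # Rough arithmetic for the comment above: a lost packet stalls the flow for
    # roughly two round trips (the loss report travels back to the sender, then
    # the retransmit travels forward again). All numbers are made up.

    rtt = 0.050          # assumed round-trip time: 50 ms
    loss_rate = 0.02     # assumed loss rate: 2% (e.g. a busy wireless link)
    send_rate = 1000     # assumed send rate: 1000 packets per second

    stall_per_loss = 2 * rtt                 # seconds the pipeline waits per lost packet
    losses_per_sec = loss_rate * send_rate   # expected lost packets per second
    stall_per_sec = losses_per_sec * stall_per_loss

    print(f"{losses_per_sec:.0f} losses/s, each stalling the flow ~{stall_per_loss * 1000:.0f} ms")
    print(f"total stall incurred per second of traffic: ~{stall_per_sec:.1f} s")
    # Under these assumptions the flow accrues about 2 s of waiting per second of
    # data, which is why rebuilding the packet locally from a parity group can
    # matter far more than the extra bandwidth the parity itself consumes.
    ```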

Comments are closed.