- What if it were possible to determine the exact link that forms the bottleneck in a connection? I don't think this is possible with the current architecture, but if it were, routers might be able to update link costs temporarily for a particular TCP connection. This would require a small amount of transient state, but it could lead to faster data transfer and less packet queueing if an alternative route with a higher bottleneck rate could be discovered. (A related idea, packet-pair bandwidth estimation, is sketched in the first code block after this list.)
- The paper concludes that certain pathological behaviors, such as out-of-order packet delivery and the loss of data and ACK packets, are highly connection-specific or tied to the traffic level at the time. One interesting avenue of exploration might be to analyze the connections that generated a disproportionate share of the pathological cases to determine their cause. If the cause is purely congestion, for example, then some transient router state for detecting such cases might prove valuable in reducing the number of retransmissions. (A toy reordering detector is sketched in the second code block after this list.)
- The authors hypothesize that 16-bit TCP checksums are insufficient for the large amount of data transferred over the Internet, because the chance of a corrupted packet being accepted as valid is too high. This claim is hard to believe 15 years later, as we still use checksums of the same size while our data transfer rates and Internet usage have only increased. Even so, it might be valuable to mathematically analyze pathological scenarios, such as the probability that a file transfer of a certain size contains undetected corruption, and then evaluate that probability empirically over different connections. (A back-of-the-envelope calculation is sketched in the third code block after this list.)
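To make the first bullet concrete: the classic packet-pair technique estimates the *bandwidth* of the bottleneck link (though not which link it is) from the spacing the bottleneck imposes on back-to-back packets. Below is a minimal sketch of that idea; the function name, the trace format, and the timestamps are my own illustrative assumptions, not anything from the paper.

```python
# Sketch of packet-pair bottleneck estimation: two back-to-back packets of
# size S leave the sender together; the bottleneck link spreads them out,
# so the arrival gap approximates S / bottleneck_bandwidth.

def estimate_bottleneck_bw(arrival_times, packet_size_bytes):
    """Estimate bottleneck bandwidth (bytes/sec) from arrival timestamps
    of packets sent back-to-back."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]
    # Use the median gap to damp out cross-traffic noise.
    gaps.sort()
    median_gap = gaps[len(gaps) // 2]
    return packet_size_bytes / median_gap

# Hypothetical timestamps (seconds) for five 1500-byte packets sent
# back-to-back; the ~1.2 ms spacing implies roughly a 10 Mbit/s bottleneck.
times = [0.0000, 0.0012, 0.0024, 0.0037, 0.0049]
print(f"~{estimate_bottleneck_bw(times, 1500) * 8 / 1e6:.1f} Mbit/s")
```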
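For the second bullet, a first step in such an analysis might be to flag out-of-order arrivals per connection and see which connections dominate. Here is a toy sketch, assuming a trace is simply a list of (timestamp, sequence number) tuples; note that this naive check cannot distinguish genuine reordering from retransmission, which a real analysis would need to do.

```python
# Hedged sketch of flagging out-of-order arrivals in a packet trace; the
# trace format is my own hypothetical simplification, not the paper's
# actual tooling.

def count_reorderings(trace):
    """Count packets arriving with a sequence number lower than the
    highest sequence number seen so far (a simple reordering signal)."""
    highest_seq = -1
    reordered = 0
    for _timestamp, seq in trace:
        if seq < highest_seq:
            reordered += 1
        else:
            highest_seq = seq
    return reordered

trace = [(0.01, 1), (0.02, 2), (0.03, 4), (0.04, 3), (0.05, 5)]
print(count_reorderings(trace))  # -> 1 (packet 3 arrived after packet 4)
```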
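And for the checksum question, a back-of-the-envelope version of the math might look like the following. The per-packet corruption rate and the uniformity of the checksum are assumptions I am making for illustration (the Internet checksum is actually weaker than uniform against some error patterns), not figures from the paper.

```python
# If a corrupted packet slips past a 16-bit checksum with probability
# ~2**-16, and a fraction p_corrupt of packets are corrupted in transit,
# then a transfer of n packets goes undetectably bad with probability
# 1 - (1 - p_corrupt * 2**-16)**n. All numbers below are illustrative.

p_corrupt = 1e-4                  # assumed per-packet corruption rate
p_slip = 2 ** -16                 # chance a corrupted packet checksums OK
packets = 1_000_000_000 // 1460   # ~1 GB sent as 1460-byte segments

p_file_bad = 1 - (1 - p_corrupt * p_slip) ** packets
print(f"P(undetected corruption in transfer) ~= {p_file_bad:.4%}")
# -> roughly 0.1% for a 1 GB transfer under these assumptions
```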
As our teacher told us, this paper is seminal in its field, so it is likely that all of these questions have already been explored. Still, I find them interesting to think about, and perhaps useful for understanding how the Internet works today.