A summary of the QUIC design documents, covering TCP/UDP characteristics, network security, and other considerations.
Many of QUIC's design ideas address four disadvantages of SPDY over TCP:
- A single lost packet blocks the whole stream (head-of-line blocking).
- TCP's congestion avoidance mechanism leads to bandwidth reduction and serialization latency overhead.
- Re-establishing a TLS session incurs waiting-time overhead; on mobile, handovers cost additional round trips.
- TLS decryption overhead: a packet cannot be decrypted until the packets before it have arrived.
You can think of QUIC as an exploration, over UDP, of solutions to the bottlenecks SPDY ran into on TCP. Using SPDY as a reference, QUIC's transfer can be divided into two layers: the upper layer is similar to SPDY, while the lower layer implements, on top of UDP, connection-oriented and reliability characteristics similar to TCP's, together with an encryption process similar to TLS.
What are some of the distinctive techniques being tested in QUIC?
QUIC will employ bandwidth estimation in each direction to feed into congestion avoidance, and then pace packet transmissions evenly to reduce packet loss. It will also use packet-level error correction codes to reduce the need to retransmit lost packet data. QUIC aligns cryptographic block boundaries with packet boundaries, so that packet loss impact is further contained.
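The pacing idea can be sketched as follows. The function name and the per-direction bandwidth estimate it consumes are illustrative, not actual QUIC API; the real pacer is considerably more elaborate.

```python
def paced_send_times(num_packets, packet_bytes, est_bandwidth_bps, start=0.0):
    """Space packet send times evenly at the estimated bandwidth, so that
    packets trickle out instead of bursting and overflowing router queues.

    est_bandwidth_bps is assumed to come from a per-direction bandwidth
    estimator (hypothetical input; QUIC's estimator is not shown here).
    """
    interval = (packet_bytes * 8) / est_bandwidth_bps  # seconds per packet
    return [start + i * interval for i in range(num_packets)]
```

For example, 1250-byte packets paced at 1 Mbit/s go out one every 10 ms rather than back to back.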
Doesn’t SPDY already provide multiplexed connections over SSL?
Yes, but SPDY currently runs across TCP, and that induces some undesirable latency costs (even though SPDY is already producing lower latency results than traditional HTTP over TCP).
Why isn’t SPDY over TCP good enough?
A single lost packet in an underlying TCP connection stalls all of the multiplexed SPDY streams over that connection. By comparison, a single lost packet for X parallel HTTP connections will only stall 1 out of X connections. With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically impact (stall) at most one stream. TCP’s congestion avoidance via a single congestion window also puts SPDY at a disadvantage over TCP when compared to several HTTP connections, each with a separate congestion window. Separate congestion windows are not impacted as much by a packet loss, and we hope that QUIC will be able to more equitably handle congestion for a set of multiplexed connections.
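The head-of-line blocking contrast above can be made concrete with a toy model (purely illustrative names, not QUIC code): over TCP, one lost packet stalls every multiplexed stream; with QUIC's out-of-order delivery, only the stream whose data was in the lost packet waits.

```python
def stalled_streams(num_streams, lost_packet_stream, transport):
    """Which streams stall when one packet (carrying data for a single
    stream) is lost?  'tcp' models SPDY-over-TCP, whose in-order byte
    stream makes every multiplexed stream wait for the retransmission;
    'quic' models out-of-order delivery, where only the affected stream
    waits.  Toy model only."""
    if transport == "tcp":
        return set(range(num_streams))   # head-of-line blocking: all stall
    return {lost_packet_stream}          # only the affected stream stalls
```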
Are there any other reasons why TCP isn’t good enough?
TCP, and TLS/SSL, routinely require one or more round trip times (RTTs) during connection establishment. We’re hopeful that QUIC can commonly reduce connection costs towards zero RTTs. (i.e., send hello, and then send data request without waiting).
Why can’t you just evolve and improve TCP under SPDY?
That is our goal. TCP support is built into the kernel of operating systems. Considering how slowly users around the world upgrade their OS, it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas, and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.
Why didn’t you build a whole new protocol, rather than using UDP?
Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic. Since we couldn’t significantly modify TCP, we had to use UDP. UDP is used today by many game systems, as well as VOIP and streaming video, so its use seems plausible.
Why does QUIC always require encryption of the entire channel?
As we learned with SPDY and other protocols, if we don’t encrypt the traffic, then middle boxes are guaranteed to (wittingly, or unwittingly) corrupt the transmissions when they try to “helpfully” filter or “improve” the traffic.
UDP doesn’t have congestion control, so won’t QUIC cause Internet collapse if widely adopted?
QUIC employs congestion controls, just as it employs automatic retransmission to support reliable transport. QUIC will attempt to be fair with competing TCP traffic. For instance, when conveying Q multiplexed flows, and sharing bandwidth with T concurrent TCP flows, we will try to use resources in the range of Q / (Q+T) bandwidth (i.e., “a fair share” for Q additional flows).
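As a worked example of that fair-share target (illustrative helper, not part of QUIC): 2 multiplexed QUIC flows sharing a 100 Mbit/s link with 8 TCP flows would aim for roughly 2/(2+8) = 20% of the bandwidth.

```python
def quic_fair_share(q_flows, t_tcp_flows, link_bandwidth):
    """Fair-share target from the text: Q multiplexed QUIC flows sharing a
    link with T concurrent TCP flows aim for about Q/(Q+T) of the
    bandwidth, i.e. the share Q extra flows would get."""
    return link_bandwidth * q_flows / (q_flows + t_tcp_flows)
```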
Why didn’t you use existing standards such as SCTP over DTLS?
QUIC incorporates many techniques in an effort to reduce latency. SCTP and DTLS were not designed to minimize latency, and this is significantly apparent even during the connection establishment phases. Several of the techniques that QUIC is experimenting with would be difficult technically to incorporate into existing standards. As an example, each of these other protocols require several round trips to establish a connection, which is at odds with our target of 0-RTT connectivity overhead.
The QUIC multiplexing feature has been inherited from SPDY and provides:
- Prioritization among QUIC streams.
- Traffic bundling over the same UDP connection.
- Compression of HTTP headers over the same connection.
- Connection startup latency and security
The time required to set up a TCP connection is one RTT for the handshake, plus at least one or two extra RTTs for an encrypted connection over TLS. With QUIC, connection setup takes at most one RTT, and on repeat connections to a known server the startup latency drops to zero RTT, even for an encrypted connection. QUIC-Crypto decrypts packets independently, avoiding a serialized decoding dependency that would damage QUIC's ability to provide out-of-order delivery and reduce HOL blocking.
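The RTT arithmetic above can be summarized in a small illustrative helper; the numbers encode the worst cases stated in the text, and the function is not any real QUIC API.

```python
def connection_setup_rtts(protocol, repeat_connection=False):
    """Round trips spent on connection setup before application data can
    flow, per the text: TCP handshake = 1 RTT; TLS adds one or two more
    (worst case 2 used here); QUIC first contact = 1 RTT, and a repeat
    connection to a known server = 0 RTT."""
    if protocol == "tcp":
        return 1
    if protocol == "tcp+tls":
        return 1 + 2
    if protocol == "quic":
        return 0 if repeat_connection else 1
    raise ValueError(protocol)
```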
- Forward Error Correction
The Forward Error Correction module copes with packet losses. It is particularly effective in reducing HOL blocking over a single QUIC stream by promptly recovering a lost packet, especially on high-RTT paths where retransmissions considerably affect HOL latency.
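QUIC's FEC experiments used a simple XOR parity packet per FEC group, which lets the receiver recover when at most one packet of the group is lost, without waiting a full RTT for a retransmission. A minimal sketch, with illustrative function names and equal-length packets assumed for simplicity:

```python
from functools import reduce

def xor_parity(packets):
    """XOR all packets of an FEC group together to form the parity packet
    (equal-length packets assumed in this sketch)."""
    assert len({len(p) for p in packets}) == 1, "equal-length packets assumed"
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Recover the single missing packet: XOR-ing the parity with every
    received packet of the group cancels them out, leaving the lost one."""
    return xor_parity(list(received) + [parity])
```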
- Pluggable Congestion Control
QUIC has been designed to support two congestion control algorithms. The first is an implementation of TCP CUBIC; the second is a pacing-based algorithm that computes the application sending rate from an estimate of the relative forward delay, defined as the difference between the inter-arrival time of two consecutive data packets at the receiver and the inter-departure time of the same packets at the sender.
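The relative forward delay defined above can be computed directly from send and receive timestamps. This is an illustrative sketch, not the controller itself:

```python
def relative_forward_delay(send_times, recv_times):
    """For each pair of consecutive packets, return the inter-arrival time
    at the receiver minus the inter-departure time at the sender.
    Positive values suggest queueing building up on the forward path,
    which a pacing-based controller can react to by lowering its rate."""
    return [
        (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        for i in range(1, len(send_times))
    ]
```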
- Connection Identifier
A QUIC connection is uniquely identified at the application layer by a CID (Connection Identifier), rather than by the pair of IP addresses and port numbers. The first advantage is that, since CIDs are not based on IP addresses, a handover between two networks can be handled transparently by QUIC without re-establishing the connection. The CID is also useful in the case of NAT unbinding, where restoring a connection would otherwise fail because a new pair of IP address and port is typically assigned.
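A sketch of CID-based demultiplexing, showing how a packet arriving from a brand-new address still maps to the same connection. Class and method names are illustrative, not QUIC API:

```python
class ConnectionTable:
    """Demultiplex incoming packets by Connection ID instead of by the
    (src IP, src port, dst IP, dst port) tuple, so a client handover to a
    new network keeps the existing connection alive.  Toy sketch only."""

    def __init__(self):
        self._by_cid = {}

    def register(self, cid, conn):
        self._by_cid[cid] = conn

    def lookup(self, cid, peer_addr):
        conn = self._by_cid.get(cid)
        if conn is not None:
            # Transparently follow the handover: record the new address
            # without tearing down or re-establishing the connection.
            conn.peer_addr = peer_addr
        return conn
```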
- It’s UDP:
QUIC bundles streams over the same UDP connection, and UDP alone has long been a challenge for middle boxes (firewalls, DPI, and NAT engines), which have contributed to the Internet's inability to evolve. [I-D.hildebrand-middlebox-erosion]
Currently QUIC cannot be distinguished from non-QUIC UDP traffic, so networks can defend neither themselves nor their hosts from attack. This is obviously a problem if we cannot distinguish QUIC traffic from attack traffic such as DoS/DDoS.
There is a strong desire to avoid this ossification with QUIC. At the same time, there is a desire to treat QUIC better than normal UDP traffic; that is, to treat QUIC as well as TCP traffic. Unfortunately, the lack of header information in QUIC prevents the network path from identifying QUIC traffic and prevents the path from treating QUIC as a transport protocol on par with TCP.
- Consent to Receive and Rate Limiting:
On many networks UDP is rate-limited or completely blocked, on a per-host or per-link basis. The limits are imposed to prevent compromised hosts from generating high volumes of UDP traffic towards a victim [I-D.byrne-opsec-udp-advisory].
Some protocols are request/response and could have higher rate limits because consent to receive is visible to the path (e.g., DNS, NTP) but others are send-only (e.g., SNMP traps, SYSLOG). The configuration expense and fear of ossification involved in deeper packet inspection is not commensurate with the benefit of higher rate limits for those request/response protocols, so many networks simply rate limit or block UDP.
As of today, the QUIC protocol document says nothing about ICMP, so one can only guess what QUIC implementations will do.
If ICMP is ignored, the middle-box can corrupt, delay, or rate-limit (including rate limiting to 0 bytes per day, a.k.a. ‘drop’).
If ICMP is received, validated, and handled, the endpoint can more quickly react to the block (or path MTU problem, or whatever the ICMP said).
Many Fortune 1000 companies block bidirectional UDP traffic, which means QUIC is blocked as well. If the QUIC stack ignores ICMP, firewalls can't help the QUIC stack fall back to TCP quickly, harming user experience, or requiring more aggressive "Happy Eyeballs" timers (doubling network traffic when trying both QUIC and TCP).
- Association with Existing Consent
Once a consent to receive is established, multiple packets will usually be received in response to a single request. In TCP, both the 5-tuple and the sequence numbers on a given packet provide hints to the path about this association, in an attempt to make the job of off-path attackers more difficult. QUIC does not allow the path to associate packets with a consent with any comparable assurance, so the network cannot filter attacks such as denial of service.
The externally-visible QUIC version number is useful for future protocol agility. However, as it is visible to the path, the path is likely to ossify around that value. Thus, having something else to identify QUIC is useful, so that the version number can change while the identification of a QUIC packet stays the same. Therefore, both a path-visible mechanism to identify a QUIC packet and a path-invisible version number would be needed.
- Spurious Packets
A spurious packet may arrive when an endpoint (client or server):
- loses state due to a reboot
- experiences a QUIC application crash
- acquires another host’s prior IP address
- receives a malicious or accidental QUIC packet.
In those cases, the host might have a QUIC application listening on that port, a non-QUIC application listening on that port, or no application listening on that port.
- Path State Loss
If a firewall, NAT, or load balancer discards its mapping state without notifying the endpoint, both endpoints can take a long time to discover the path state has been lost. To avoid this delay, it is desirable to send a signal that the path state will be lost or has been lost. QUIC in the current state does not provide a way for on-path middleboxes to signal that their mapping will be lost or has been lost.
This section is courtesy of Dan Wing and Joe Hildebrand.
For more detailed reading please have a look at: draft-wing-quic-network-req-00