Precision Time Measurement in USB Enhanced SuperSpeed Protocol

 

ABSTRACT

USB Enhanced SuperSpeed Protocol provides support for Gen 1 and Gen 2 modes of transfer. The specification additionally defines Precision Time Measurement (PTM), a mechanism aimed at improving the timing accuracy, and hence the performance, of the host controller and devices. The propagation delays along the link and between hosts, hubs, and devices are the key factors targeted by this optimization. This white paper presents the concepts and mechanisms of PTM in a simple, straightforward manner distilled from the specification. Certain challenges pertaining to the implementation of PTM are also discussed as separate topics for a tangible understanding and assimilation of the concepts.

INTRODUCTION

USB Enhanced SuperSpeed Protocol defines a new mechanism that can be incorporated into both Gen 1 and Gen 2 operation to provide additional features that improve the performance of USB devices. The idea of Precision Time Measurement (PTM) revolves around precise and accurate measurement of the propagation delay between the upstream and downstream facing ports of hubs and devices with respect to hosts within the USB topology.

What is Precision Time Measurement or PTM?

PTM is a mechanism that enables devices to precisely characterize link delays and propagation delays through hubs. This can be classified into two parts –

  • Link Delay Measurement (LDM) – This mechanism computes the link delay of a device's upstream link using PTM Link Management Packets (LMPs) on an upstream port that supports PTM. The associated delay is termed the LDM Link Delay.
  • Hub Delay Measurement (HDM) – This mechanism improves the accuracy of the Isochronous Timestamps carried by ITPs as they are forwarded downstream. The associated delay is termed the bus interval boundary timing.

PTM is achieved through utilization of timestamps recorded by Requestor or Responder.

A Requestor is a hub or device communicating with an upstream host controller; the Requestor role is played by an upstream facing port.

A Responder is a USB host controller or hub communicating with a downstream hub or device; the Responder role is played by a downstream facing port.

Some of the constituents for implementation of PTM are as follows –

  1. PTM Capability Descriptor – Software discovers PTM through a device-level capability descriptor known as the PTM Capability Descriptor, which is supported by all hubs and devices.
  2. PTM Clock – A signal source with a period of tIsochTimestampGranularity units used to advance various PTM time clocks and time sources.
  3. PTM Root – The LDM Responder that is the source of the bus interval boundary for a PTM domain, i.e. the USB host controller.
  4. PTM Domain – A set of LDM Responders associated with a PTM Root.
  5. PTM Local Time Source – A time clock associated with an LDM Requestor or Responder which is advanced by PTM Clock transitions.

Deep into LDM…

Figure 1 shows the flow diagram for LDM mechanism.

 

As its name suggests, LDM measures the link delay between an LDM Requestor and an LDM Responder using LMPs and PTM Local Time Sources. The LDM TS Request LMP corresponds to the Requestor-to-Responder path, i.e. t2 – t1. The LDM TS Response LMP corresponds to the Responder-to-Requestor path, i.e. t4 – t3. The reference point is the first framing symbol of each LDM LMP. These instants are represented as timestamps t1, t2, t3, and t4, where –

t1 – Timestamp at which LDM Requestor transmits LDM LMP request

t2 – Timestamp at which LDM Responder receives LDM LMP request

t3 – Timestamp at which LDM Responder transmits LDM LMP response

t4 – Timestamp at which LDM Requestor receives LDM LMP response

Response Delay – This is the delay between the instant at which the LDM Responder receives the LDM LMP request and the instant at which it transmits the LDM LMP response. It is given by –

Response Delay = t3 – t2

LDM Link Delay – This is the delay between the first symbol of a packet transmitted by the LDM Responder and the first symbol of the same packet received by the LDM Requestor. It is calculated by the LDM Requestor, subject to LDM Valid, a status flag that indicates whether the port supports LDM. Two more quantities enter the calculation –

  1. TP Transmission Time – The actual time to transmit a TP, including framing and encoding, at the nominal UI; typically greater than 40 ns for Gen 1 or 16.5 ns for Gen 2
  2. tTPTransmissionDelay – The default TP Transmission Time of approximately 40 ns for Gen 1 or 16.5 ns for Gen 2

LDM Link Delay is then computed from the four timestamps together with these two quantities; the TP transmission adjustment term is nullified for a port not supporting PTM.
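As a minimal sketch of the computation, assuming the classic symmetric-link round-trip formula and assuming the adjustment term is subtracted from the round trip (the exact placement of the adjustment is an assumption, not quoted from the specification):

```python
# Hedged sketch: LDM link delay estimated by the Requestor from the four
# timestamps t1..t4, assuming a symmetric link (a stated LDM requirement).
# The adjustment models (TP Transmission Time - tTPTransmissionDelay) and
# is nullified when LDM Valid indicates the port does not support PTM.

def ldm_link_delay(t1, t2, t3, t4, tp_tx_time=0.0,
                   t_tp_transmission_delay=0.0, ldm_valid=True):
    """Return the one-way link delay seen by the LDM Requestor."""
    round_trip = (t4 - t1) - (t3 - t2)   # total on-the-wire time, both ways
    adjustment = (tp_tx_time - t_tp_transmission_delay) if ldm_valid else 0.0
    return (round_trip - adjustment) / 2.0   # symmetric-link assumption

# Example in ns: request sent at 100, received at 150; response sent at
# 180, received at 230 -> round trip 100 ns, one-way delay 50 ns.
print(ldm_link_delay(t1=100, t2=150, t3=180, t4=230))  # 50.0
```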

An LDM Exchange need not be limited to a single transaction within an LDM context; it may be repeated multiple times so that the link delay can be averaged over several values. This allows greater accuracy and precision, improving performance with respect to link delay. The respective timestamps are updated with each consecutive LDM context.

A port not supporting PTM shall drop the LDM LMP, but shall still acknowledge the packet and return credits for it.

Implementing LDM…

LDM is implemented through the LDM LMP whose details are as shown in Figure 2.

LDM Link Exchange and Performance

The goal of LDM Link Exchange is to achieve an optimum delay closer to a minimum and still adhere to the protocol specifications and time synchronization. Figure 3 shows the flow diagram consisting of PTM Path Performance Contributors.

Timestamps t1, t2, t3 and t4 contribute towards path performance along the link between Requestor and Responder, and timestamp measurement planes are taken into consideration at either end. A certain amount of uncertainty in LDM LMP transmission and reception along the Tx and Rx paths may be attributed to asymmetry, which may be adjusted for accordingly. Some requirements for optimal propagation delay measurement are as follows –

  • TS Delay is the delay between the Timestamp Measurement Plane and actual timestamp at link boundary
  • Link Delay between Requestor and Responder must be symmetric
  • LDM Timestamp values must be assigned to TS LMPs or ITPs at actual transmission or reception of the packets i.e. link boundary
  • Each timestamp value should be adjusted for TS Delay so that the LDM LMP or ITP timestamp approximates the actual time at which the packet crosses the Requestor/Responder boundary and link during transmission or reception
  • In case of Asymmetry, the TS Delay is adjusted such that it appears to have been captured at link boundary
  • Link Delay between Requestor and Responder must be constant over time interval between respective LMPs.
  • The worst uncertainty should not exceed tPropagationDelayJitterLimit. Uncertainty may be reduced by two methods:
  1. Timestamp Measurement Plane should be very close to link boundary i.e. TS Delay is minimum
  2. Averaging Link Delay over multiple LDM Exchanges to converge on a minimum value. The averaging algorithms may be implementation specific.
  • Clock stability and precision must be within the clock accuracy requirements of Unit Interval
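Since the averaging algorithm is implementation specific, the following is only one plausible sketch: a trimmed mean that discards jitter outliers before averaging per-exchange link-delay samples. The 25% trim fraction is an illustrative choice, not from the specification.

```python
# Hedged sketch of one implementation-specific averaging strategy: sort the
# per-exchange link-delay samples, drop a fraction from each tail to reject
# jitter outliers, and average what remains.

def average_link_delay(samples, trim_fraction=0.25):
    """Trimmed mean of per-exchange link-delay samples (e.g. in ns)."""
    ordered = sorted(samples)
    k = int(len(ordered) * trim_fraction / 2)   # samples trimmed per tail
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

samples = [50, 51, 49, 50, 120, 50, 48, 50]   # one jitter outlier (120)
print(average_link_delay(samples))  # 50.0
```

Other designs, such as simply taking the minimum observed delay, would satisfy the same uncertainty goal.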

Challenges towards implementing LDM

Essentially, LDM makes use of the Data Link and Transaction Layers, although the LDM message protocol per se applies to the link. One challenge is that, since the logic within these layers is non-deterministic, capturing an accurate timestamp at the point of a particular physical event becomes very difficult.

The second challenge is to measure time at the last symbols transacted at the actual pins of the D+/D– transmitter and receiver. Since the measurement always happens at some internal point in the Rx/Tx path, it must be approximated so as to minimize TS Delay.

The inherent challenge is to bring the Timestamp Measurement Planes and the TS Link LMP boundaries as close as possible. In other words, the TS Delay should ideally be zero, or near the minimum in a practical implementation.

More about HDM…

Figure 4 shows the flow diagram for the HDM mechanism.

HDM, as the name suggests, is applicable to hubs, which act as both Requestor and Responder. HDM uses ITPs to calculate a delay known as the PTM Bus Interval Boundary timing. Two timestamps, tITDFP and tITUFP, are defined for the downstream facing port and the upstream facing port respectively. For this, the following are defined –

  1. PTM Bus Interval Boundary Counter – A pair of counters using a format similar to the 27-bit ITP timestamp
  2. PTM Delta Counter – A 13-bit mod-7500 counter used to measure the delay between consecutive bus interval boundaries; it is incremented by the PTM Clock
  3. PTM Bus Interval Counter – A 14-bit mod-16384 counter that increments on every wrap of the PTM Delta Counter
  4. PTM Root / Host – The source of the PTM Domain, which implements the above counters

While hubs need not implement these counters, PTM devices should implement them. PTM Bus Interval Boundary calculations are performed by the device, host, and hub as follows –
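The counter relationships above can be modeled in a few lines, assuming each PTM Clock transition advances the mod-7500 Delta Counter and each wrap of the Delta Counter increments the mod-16384 Bus Interval Counter:

```python
# Minimal model of the PTM Delta Counter (13-bit, mod 7500) and the
# PTM Bus Interval Counter (14-bit, mod 16384) described in the text.

class PtmCounters:
    def __init__(self):
        self.delta = 0          # PTM Delta Counter, wraps at 7500
        self.bus_interval = 0   # PTM Bus Interval Counter, wraps at 16384

    def tick(self):
        """Advance by one PTM Clock period (tIsochTimestampGranularity)."""
        self.delta = (self.delta + 1) % 7500
        if self.delta == 0:                       # bus interval boundary
            self.bus_interval = (self.bus_interval + 1) % 16384

c = PtmCounters()
for _ in range(7500 * 2 + 3):   # two full bus intervals plus 3 ticks
    c.tick()
print(c.bus_interval, c.delta)  # 2 3
```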

Device – Bus Interval Boundary calculations are done by the device when an ITP is received, where the ITP carries 3 values:

  1. Bus Interval Counter(RxITP) – Contains the current frame number
  2. Delta(RxITP) – Delay between the current ITP and the previous interval boundary
  3. Correction(RxITP) – Negative delay accumulated by the ITP as it passes through hubs

Two conditions apply to Delta(RxITP):

  • Delta(RxITP) >= 7500 – the ITP is ignored
  • Delta(RxITP) < 7500 – the PTM Delta Counter at tITUFP (upstream facing port of the device) is given by –

PTM Delta Counter(tITUFP) = mod7500(ISOCH_DELAY + LDM Link Delay + Delta(RxITP) – Correction(RxITP))

Bus Interval Counter(tITUFP) = Bus Interval Counter(RxITP) + ROUND_DOWN((LDM Link Delay + Delta(RxITP))/7500)
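The two formulas above translate directly into code. In this sketch, all inputs are in PTM Clock units, and wrapping the Bus Interval Counter at 16384 is an assumption consistent with its 14-bit definition rather than something stated in the formula:

```python
# Device-side bus interval boundary calculation from a received ITP,
# per the two formulas in the text. Inputs are in PTM Clock units.

def device_boundary(bic_rx, delta_rx, correction_rx,
                    isoch_delay, ldm_link_delay):
    if delta_rx >= 7500:
        return None          # ITP is ignored per the first condition
    # PTM Delta Counter(tITUFP) = mod7500(...)
    delta_itufp = (isoch_delay + ldm_link_delay
                   + delta_rx - correction_rx) % 7500
    # Bus Interval Counter(tITUFP); the mod-16384 wrap is an assumption
    # based on the counter's 14-bit width.
    bic_itufp = (bic_rx + (ldm_link_delay + delta_rx) // 7500) % 16384
    return bic_itufp, delta_itufp

print(device_boundary(bic_rx=100, delta_rx=7400, correction_rx=10,
                      isoch_delay=50, ldm_link_delay=200))  # (101, 140)
```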

The wValue field of the SET_ISOCH_DELAY request depends on the number of hubs in the path.

It can be observed that a precise, accurate delay may be obtained by successive approximation or by averaging over multiple exchanges of ITPs.

 

Host – Bus Interval Boundary calculations are done by the host using the current values of Bus Interval Counter(TxITP) and Delta(TxITP), with Correction(TxITP) set to zero for ITPs transmitted downstream.

 

Hub – A hub can transmit packets downstream or upstream, hence two cases are considered, along with the Delayed bit in the ITP. The hub uses an ITP Delay Counter.

 

  • Delayed Bit is Not Set – Set the ITP Delay Counter to zero
  1. For a received ITP, queue the ITP for transmission on the downstream facing port
  2. For a transmitted ITP,

Bus Interval Counter(TxITP) = Bus Interval Counter(RxITP)

Delta(tITPDFP) = LDM Link Delay + Delta(RxITP) + (ITP Delay Counter – wHubDelay) – Correction(RxITP)

Three conditions apply to Delta(tITPDFP):

  • 0 < Delta(tITPDFP) < 7500 – then Delta(TxITP) = Delta(tITPDFP) and Correction(TxITP) = 0
  • Delta(tITPDFP) >= 7500 – then Delta(TxITP) = 7500
  • Delta(tITPDFP) < 0 – then Delta(TxITP) = 0 and Correction(TxITP) = –Delta(tITPDFP)

 

The hub then recalculates the CRC-16 for the modified ITP.

  • Delayed Bit is Set – Forward the ITP without modification, in either the downstream or upstream direction
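The forwarding rules for the Delayed-bit-not-set case can be sketched as follows. All inputs are in PTM Clock units; the specification text above does not state Correction(TxITP) for the clamp-high case, so returning 0 there is an assumption:

```python
# Hedged sketch of the hub's Delta/Correction update for a transmitted ITP
# whose Delayed bit is not set, applying the three conditions in the text.

def hub_forward(delta_rx, correction_rx, ldm_link_delay,
                itp_delay_counter, w_hub_delay):
    """Return (Delta(TxITP), Correction(TxITP)) for the outgoing ITP."""
    d = (ldm_link_delay + delta_rx
         + (itp_delay_counter - w_hub_delay) - correction_rx)
    if 0 < d < 7500:
        return d, 0         # in range: forward delta unchanged
    if d >= 7500:
        return 7500, 0      # clamp high (Correction assumed 0 here)
    return 0, -d            # negative: carry magnitude as Correction

# Example: Delta(tITPDFP) works out to -5, so the hub sends Delta = 0
# with a Correction of 5.
print(hub_forward(delta_rx=10, correction_rx=50, ldm_link_delay=20,
                  itp_delay_counter=30, w_hub_delay=15))  # (0, 5)
```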

Implementing HDM…

HDM is implemented through the ITP, whose details are as shown in Figure 5.

Challenges towards implementing HDM/PTM

One unavoidable challenge concerns the Timestamp Measurement Plane and the actual point of transmission of ITPs to and from the hub. Either the delay between them should be minimal, or the implementation should approximate and compensate for the delay.

The second challenge is that implementing the counters in the hub is recommended but not required; implementing them gives more accuracy and precision and avoids mismatches of the current frame number.

The next challenge appears on the device side, where Delta(RxITP) must be kept below 7500, or else the ITP is dropped. This matters when averaging the delay over multiple ITP transfers.

Another challenge is that queued ITPs must be updated with the actual time of transmission, i.e. tITPDFP, and not the queued timestamps.

CONCLUSION

This white paper on Precision Time Measurement has covered the aspects necessary to implement the mechanism in a USB topology. The discussion of the mechanisms under specific conditions provides a comprehensive approach to applying the concepts in the design and verification of reusable IPs. The points of interest and challenges should be kept in mind when tuning performance towards its achievable optimum. While there is always a need to enhance system performance, this leaves scope for improved averaging algorithms and other implementation-specific parameters. In total, this white paper acts as a reliable topic-specific source of information adhering to the standards of the USB Enhanced SuperSpeed Protocol.

Feedback

If you have any suggestion/feedback please email it to feedback@inno-logic.com