Carles Gómez, Josep Paradells, José E. Caballero. Published by: Fundación Vodafone España.
8.2.5. Siphon
Siphon was also designed assuming a WSN used for data collection from sensor nodes to a sink node. In particular, it was conceived as an alternative to other congestion control mechanisms, which are based on decreasing transmission rates and even dropping packets when congestion occurs. In certain scenarios, these mechanisms may compromise the application data delivery ratio at the sink node.
Siphon comprises a set of algorithms that enable congestion to be detected and then mitigated by using virtual sink nodes, which are equipped with at least a secondary, long-range radio (e.g. for cellular packet data services) in addition to the primary one. The virtual sink nodes serve a different purpose from that of the physical sink node, which is the sink that typically exists in data collection WSNs. When congestion is detected in some area of the WSN, part of the traffic is diverted to the virtual sink nodes, which forward the traffic to the physical sink node using the secondary radio. Hence, the virtual sink nodes fool the rest of the nodes, since they appear as new destination devices (i.e. additional sinks), while they actually bypass traffic to the physical sink node over the secondary radio. The authors of Siphon argue that the benefits of deploying some dual-radio devices within a WSN can outweigh their financial cost.
In Siphon, congestion can be detected either by a sensor node or by the physical sink node. In the first case, the techniques used are similar to those in CODA (i.e. measurement of channel load and buffer occupancy). In the second, the physical sink node monitors the fidelity and quality of the data received and activates the use of virtual sink nodes. One advantage of the latter technique is that it does not require the sensor nodes to execute congestion detection tasks. Virtual sink node discovery is based on periodic control traffic transmitted by the physical sink node.
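As a rough illustration, the sensor-node side of this behaviour can be sketched as follows. This is a minimal sketch under stated assumptions: the class name, thresholds and node identifiers are illustrative, not part of Siphon's specification.

```python
# Illustrative sketch of Siphon-style congestion detection and redirection.
# Thresholds and names are assumptions for this example only.

class SiphonNode:
    def __init__(self, load_threshold=0.8, buffer_threshold=0.9):
        self.load_threshold = load_threshold      # fraction of channel busy time
        self.buffer_threshold = buffer_threshold  # fraction of queue occupied

    def congested(self, channel_load, buffer_occupancy):
        # CODA-like detection: either indicator above its threshold
        # signals congestion.
        return (channel_load > self.load_threshold
                or buffer_occupancy > self.buffer_threshold)

    def next_hop(self, channel_load, buffer_occupancy,
                 virtual_sink, route_to_physical_sink):
        # While congestion persists and a virtual sink is reachable,
        # divert traffic to it; otherwise keep the normal route.
        if self.congested(channel_load, buffer_occupancy) and virtual_sink:
            return virtual_sink
        return route_to_physical_sink
```

The virtual sink then relays the diverted traffic to the physical sink over its secondary radio, which is outside the scope of this sketch.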
8.2.6. Summary

Table 8.1 summarizes the main characteristics of the congestion control protocols presented in sections 8.2.1 to 8.2.5.
8.3. Protocols for reliability in WSNs

Reliability in WSNs can be classified into two categories: i) packet reliability, where applications require all packets to be successfully received, and ii) event reliability, where applications require event detection, but reception of all packets is not needed. This section presents various protocols of each category.
8.3.1. ReInForM
Reliable Information Forwarding (ReInForM) is a protocol that offers stochastic packet reliability in WSNs; that is, packets are delivered with a certain probability. ReInForM uses neither Automatic Repeat reQuest (ARQ) mechanisms nor queues.
ReInForM requires that nodes know some network parameters, such as the hop distance between themselves and the sink node, as well as the hop distance between their neighbours and the sink node. In addition, a node must know the channel error probability. To provide this information, the sink node periodically broadcasts a packet called a routing update. From this packet, a receiving node learns its hop distance from the sink, since a field in the packet is updated accordingly every time it is forwarded. Through this packet, the node also discovers who its neighbours are and their hop distance from the sink node.
When a node has a packet to transmit to the sink node, the first step is to assign a priority level. ReInForM defines n priority levels, each of which corresponds to a certain delivery probability. From the hop distance between the node and the sink node and the channel error probability, the node calculates the number of copies of the packet that are required. The copies are preferably sent to neighbours that are one hop closer to the sink node, should such neighbours exist. Otherwise, they are sent to neighbours at the same distance from the sink node, should they exist. Finally, if all neighbours are one hop further from the sink node than the node itself, the copies are sent to one of these neighbours. In each of these three options, the next hop for each copy is chosen randomly among the eligible neighbours, which balances the load across network nodes and maximizes node lifetime.
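The calculation of the number of copies can be illustrated as follows. Assuming independent paths, one copy traverses h lossy hops successfully with probability (1-e)^h, and the smallest N copies satisfying a target delivery probability r follows from 1-(1-(1-e)^h)^N ≥ r. This is a hedged sketch: the function names are invented here, and ReInForM's actual expressions differ in detail.

```python
import math
import random

def reinform_copies(target_reliability, hop_distance, link_error_prob):
    """Copies needed so that at least one reaches the sink with the
    target probability, assuming independent paths (simplified model)."""
    path_success = (1.0 - link_error_prob) ** hop_distance
    if path_success >= target_reliability:
        return 1
    # Smallest N with 1 - (1 - path_success)**N >= target_reliability
    return math.ceil(math.log(1.0 - target_reliability)
                     / math.log(1.0 - path_success))

def pick_next_hop(neighbour_distances, my_distance):
    """Randomly choose among the best available class of neighbours:
    closer to the sink, then equidistant, then farther away."""
    for keep in (lambda d: d < my_distance,
                 lambda d: d == my_distance,
                 lambda d: d > my_distance):
        candidates = [n for n, d in neighbour_distances.items() if keep(d)]
        if candidates:
            return random.choice(candidates)
    return None  # isolated node
```

For example, with a 10% per-link error probability and 5 hops to the sink, a 90% delivery target requires 3 copies under this model.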
8.3.2. RMST

Reliable Multi-Segment Transport (RMST) is a transport layer designed to operate in conjunction with the Directed Diffusion routing protocol (see Chapter 7). RMST benefits from Directed Diffusion in two ways. First, after the failure of a node, a new path is found by Directed Diffusion. Secondly, Directed Diffusion finds paths from source to sink node which are exploited by RMST as reverse paths. RMST provides two transport services: fragmentation and reassembly of long data units, and guaranteed end-to-end packet delivery. RMST offers two mechanisms for the latter.
One option for guaranteeing packet delivery in RMST is based on a hop-by-hop scheme. Nodes may cache the fragments of a larger data unit on the path from source to sink node. This allows a node that identifies a missing fragment to send a repair request to the previous node on the path. The repair request, which is equivalent to a selective Negative Acknowledgment (NACK), indicates which fragment is missing. If the fragment is in the local cache of the previous node, it is retransmitted. Otherwise, the previous node forwards the repair request to its own previous node.
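The hop-by-hop repair chain can be sketched as below. This is an illustrative model only; the class and method names are assumptions, and real RMST operates on Directed Diffusion reinforced paths rather than Python object references.

```python
# Sketch of RMST-style hop-by-hop repair: each node caches forwarded
# fragments and serves selective NACKs from its cache, escalating any
# unresolved fragments to the previous node toward the source.

class RmstNode:
    def __init__(self, upstream=None):
        self.cache = {}          # fragment number -> fragment payload
        self.upstream = upstream # previous node on the path to the source

    def forward(self, frag_no, payload):
        self.cache[frag_no] = payload  # cache while forwarding downstream

    def handle_nack(self, missing):
        """Serve a repair request listing missing fragment numbers."""
        served = {f: self.cache[f] for f in missing if f in self.cache}
        unresolved = [f for f in missing if f not in self.cache]
        if unresolved and self.upstream:
            # Escalate the NACK one hop further toward the source.
            served.update(self.upstream.handle_nack(unresolved))
        return served
```

A repair request thus travels no further upstream than necessary, which is what keeps recovery local when losses are sparse.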
The other mechanism for reliability in RMST is a pure end-to-end approach, where the sink nodes, which receive data from the sources, transmit selective NACKs to the sources.
Losses in RMST are detected upon timer expiration. RMST adds little overhead, since the only control message it introduces is the NACK.
8.3.3. RBC
Reliable Bursty Converge-cast (RBC) was designed for WSN applications where the detection of an event generates a large burst of packets which need to be transported reliably, and with low delay, to a sink node. The need to transmit a large number of packets in a short time leads to channel contention, which is magnified by the fact that packets are transmitted several times along multi-hop routes until they reach the sink node.
RBC takes advantage of the fact that bursty converge-cast does not require in-order delivery. RBC uses a hop-by-hop, window-less block acknowledgment scheme that guarantees continuous packet forwarding regardless of packet or ACK losses. In contrast, window-based mechanisms may suffer transmission stalls that decrease throughput.
RBC uses virtual queues at each node, which are managed without any window-based control and allow newly arrived packets to be sent immediately, instead of waiting for previously sent packets to be acknowledged. A node R that forwards data packets received from a node S includes in each transmitted packet the maximal sequence of packets it has received without any gap. Node S can then overhear node R's transmissions and learn which packets have been correctly received. After sending a packet, the sender starts a retransmission timer; if the timer expires and the packet has not been acknowledged, the packet is retransmitted.
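The implicit block acknowledgment can be sketched as follows. This is a simplified model under stated assumptions: sequence numbers start at 1, the advertised run is a single integer, and the names are invented for illustration.

```python
# Sketch of RBC's window-less block acknowledgment: the forwarder
# advertises its longest gap-free run of received sequence numbers,
# and the upstream sender treats everything in that run as acknowledged.

def longest_gapfree_run(received, start=1):
    """Highest sequence number n such that start..n were all received
    without a gap (start - 1 if the first packet is missing)."""
    n = start - 1
    while n + 1 in received:
        n += 1
    return n

class RbcSender:
    def __init__(self):
        self.unacked = {}  # seq -> packet awaiting implicit acknowledgment

    def send(self, seq, packet):
        self.unacked[seq] = packet  # kept until overheard as delivered

    def overhear(self, advertised_run):
        # Everything up to the advertised run is implicitly acknowledged;
        # whatever remains will be retransmitted on timer expiry.
        self.unacked = {s: p for s, p in self.unacked.items()
                        if s > advertised_run}
```

Because a single overheard value acknowledges a whole block, the sender never stalls waiting for per-packet ACKs.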
RBC has a mechanism for detecting and dropping duplicate packets.
Duplicate packets may be generated when ACKs for correctly received packets are lost and the same packets are retransmitted. To enable this mechanism, each queue has a counter that is incremented every time a new packet is stored in the queue, and the counter value is piggybacked in the packets sent from that queue. Nodes keep the counter value carried by the last packet received from each queue, which allows duplicates to be recognized.
When channel contention occurs, retransmissions can further contribute to it. To ameliorate this, RBC includes a distributed contention control scheme that schedules packet retransmissions. Within a node, the retransmission of a packet is delayed by an amount that grows with the number of times the packet has already been retransmitted. Across nodes, those with more packets to transmit of similar freshness (freshness information is piggybacked on the data packets a node sends) are allowed to transmit earlier.
8.3.4. PSFQ

Pump Slowly, Fetch Quickly (PSFQ) is a transport protocol designed to provide reliability in one-to-many communications in WSNs. An example of such an application is the transmission of code from a source to a group of sensor nodes in order to reprogram them. The approach in PSFQ is a slow data transmission from a source node to the sensor nodes ("pump slowly") combined with an aggressive recovery of missing segments from neighbours ("fetch quickly").
The authors of PSFQ assumed a scenario where packet losses occur due to errors in the wireless links. In PSFQ, error recovery is performed hop-by-hop, which offers better scalability and performance in the presence of errors than an end-to-end approach. Hence, all nodes are in charge of loss detection and recovery. In fact, nodes cache the data they forward (which is reasonable for the class of applications for which PSFQ is designed, since the nodes are in many cases receivers of the same data).
The pump operation is performed as follows: a source node broadcasts data packets to its neighbours at a controlled rate until all the data have been transmitted. For each received packet, a neighbour checks whether the packet has been received before, discarding any duplicate (packets include sequence numbers). Otherwise, the packet is stored and then rebroadcast after a deliberate delay.
When a node finds a gap in the sequence of received packets, it initiates the fetch operation. The node requests a retransmission from neighbouring nodes once loss is detected. The request, which constitutes a NACK message, signals as many gaps within the same message as possible. NACKs are retransmitted if the missing segments cannot be recovered. Only if the number of NACK retransmissions reaches a certain maximum and the missing segments have not been recovered, is the NACK then relayed by the neighbours of the node. This technique controls the problem of message implosion, whereby the message would be retransmitted by all the nodes and might cause the network to collapse.
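The gap detection that triggers the fetch operation, and the aggregation of several gaps into one NACK, can be sketched as below. The function name and the representation of gaps as (first, last) ranges are assumptions for this example, not part of the PSFQ specification.

```python
# Sketch of PSFQ-style gap detection: list the missing sequence-number
# ranges up to the highest segment seen, so that a single NACK message
# can signal several gaps at once.

def find_gaps(received, highest):
    """Return missing ranges as (first_missing, last_missing) pairs."""
    gaps, start = [], None
    for seq in range(1, highest + 1):
        if seq not in received and start is None:
            start = seq                  # a new gap begins here
        elif seq in received and start is not None:
            gaps.append((start, seq - 1))  # the gap ended at seq - 1
            start = None
    if start is not None:
        gaps.append((start, highest))    # gap runs up to the highest seen
    return gaps
```

A node would place these ranges in one NACK, retransmit the NACK while the segments remain unrecovered, and only after a maximum number of retries let its neighbours relay the request, as described above.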
Because PSFQ is based on a NACK approach, and hence the data source has no explicit feedback about data reception at the destinations, PSFQ additionally supports a report operation by which the node furthest from the source transmits a report message back to the source on a hop-by-hop basis. Other nodes en route toward the source may append their own report to this message. This mechanism gives the source some feedback while keeping the number of messages involved in the operation under control.
8.3.5. Summary

Table 8.2 summarizes the main characteristics of the reliability protocols presented in sections 8.3.1 to 8.3.4. Some transport protocols for WSNs manage upstream reliability (i.e. from sensor nodes to a sink node), while others operate on downstream reliability.
8.4. Protocols for congestion control and reliability in WSNs

This section describes the most relevant protocols that have been designed for both congestion control and reliability in WSNs. Both protocols assume a WSN where data are collected by sensor nodes and transmitted to a sink node, and both cope with congestion control and reliability for this traffic pattern.
8.4.1. STCP
Sensor Transmission Control Protocol (STCP) was designed as a transport layer for WSNs which is generic with respect to both the underlying protocols (e.g. MAC protocols) and the applications on top of it.
In STCP, each data flow from a sensor node to the sink node may have different characteristics in terms of flow type, transmission rate and reliability. These characteristics are encoded in a session initiation packet, which a sensor node transmits before sending data packets. Several different flows may exist between a sensor node and the sink node.
Two different reliability mechanisms are used, depending on the nature of flows, which can be classified as i) continuous flows, and ii) event-driven flows.
• In continuous flows, the transmission rate of the source is known by the sink node. Hence, the sink node can calculate the expected arrival time of successive packets. If the sink node does not receive a packet in the expected time, the sink node transmits a Negative Acknowledgment (NACK) for that packet to the source. Upon receipt of a NACK, the source retransmits the corresponding packet. To solve the problem of NACK loss, the sink node periodically checks a record of the packets that have not yet been received and retransmits NACKs if necessary.
• In event-driven flows, the sink node cannot know a priori the packet arrival times. Hence, in this case, the sink node transmits a positive acknowledgment (ACK) to inform the source that a packet has been correctly received. Transmitted packets are buffered by the source until their reception is acknowledged by the sink node. The sensor nodes maintain a timer so that buffered packets are retransmitted when the timer expires.
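The sink-side loss detection for continuous flows can be sketched as follows. This is a minimal model under stated assumptions: the class name, the slack tolerance and the use of a logical clock are illustrative, not part of STCP.

```python
# Sketch of STCP sink-side loss detection for a continuous flow:
# arrival times are predicted from the rate announced in the session
# initiation packet, and overdue packets become NACK candidates.

class StcpContinuousFlow:
    def __init__(self, start_time, interval, slack=0.5):
        self.start_time = start_time  # from the session initiation packet
        self.interval = interval      # inter-packet time, 1 / tx rate
        self.slack = slack            # tolerance before declaring a loss
        self.received = set()         # sequence numbers seen so far

    def expected_time(self, seq):
        return self.start_time + seq * self.interval

    def overdue(self, now):
        """Sequence numbers that should have arrived by `now` but have
        not; the sink would send (and periodically resend) NACKs for
        these until the source retransmits them."""
        last_expected = int((now - self.start_time - self.slack)
                            // self.interval)
        return [s for s in range(1, last_expected + 1)
                if s not in self.received]
```

Rechecking `overdue` periodically also covers the case of lost NACKs, as described above.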
STCP controls variable reliability as follows.
• For continuous flows, the sink node calculates the ratio of successfully received packets. If a packet is not received at the expected time but the current reliability still satisfies the required level, the sink node does not send a NACK to the sensor node. A NACK is only sent when the current reliability falls below the required level.
• For event-driven flows, where instead of reliable transmission of each packet it is only relevant to ensure that the event has been reported to the sink, STCP does not transmit acknowledgments. This is because data from different sensor nodes that detect the same event are correlated. Hence, the energy consumption due to the transmission of acknowledgments is avoided.
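The variable-reliability rule for continuous flows reduces to a threshold comparison, which can be sketched as below; the function and parameter names are assumptions made for this illustration.

```python
# Sketch of STCP's variable-reliability decision for continuous flows:
# a NACK is issued only when the observed delivery ratio has fallen
# below the reliability level requested at session initiation.

def should_nack(packets_received, packets_expected, required_reliability):
    """True if the sink should NACK, i.e. the current delivery ratio
    no longer satisfies the required reliability."""
    current = packets_received / packets_expected
    return current < required_reliability
```

With a 90% reliability requirement, for instance, 8 packets received out of 10 expected triggers a NACK, while 9 out of 10 does not.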