

5.2 Non-Blocking Switch

Demanding applications, such as multimedia and other data-intensive workloads, require higher network transfer rates than traditional bus-based LANs can provide. The use of switches in LANs has therefore become an effective way to increase network bandwidth. Besides improving network performance, switches offer greater flexibility and interconnect scalability in network design. Driven by market demands and trends, industry has invested considerable effort in improving the quality of commercial products; some commercial network products even support non-blocking switching across hundreds or even thousands of ports [71]. A switch is said to be non-blocking if its switching fabric can handle the theoretical aggregate traffic of all ports, so that any routing request to any free output port can be established successfully without interfering with other traffic.
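The non-blocking property can be captured in a minimal sketch: in an ideal non-blocking fabric, a request fails only because two inputs contend for the same output port, never because of the fabric itself. The model and function name below (route_requests) are illustrative assumptions, not part of any real switch API.

```python
def route_requests(requests, n_ports):
    """Model one time slot of an ideal non-blocking crossbar.

    requests maps input port -> desired output port. Every request to a
    still-free output succeeds; a request is refused only on an output
    conflict (two inputs asking for the same output), never because the
    fabric lacks internal capacity.
    """
    granted, busy = {}, set()
    for inp, out in requests.items():
        if out not in busy:
            granted[inp] = out
            busy.add(out)
    return granted

# Any permutation (all inputs to distinct outputs) is routed in full:
perm = {i: (i + 1) % 4 for i in range(4)}
assert route_requests(perm, 4) == perm
```

In a blocking fabric, by contrast, even a conflict-free permutation may fail because internal links are shared; the non-blocking guarantee removes exactly that failure mode.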

Theoretically, connecting all cluster nodes via a single non-blocking switch provides the best performance. To achieve good communication performance in practice, however, traffic flows must be balanced and communications carefully scheduled, as any misjudgment may result in congestion loss. Moreover, although a non-blocking switching fabric guarantees high performance at the fabric level, other internal factors can hinder switch performance; the buffering mechanism used within the switch is one of the most crucial. While there are many variations of switch buffering architecture, most commodity switches fall into one, or a combination, of three basic types: input-buffered, output-buffered, and shared-buffered. In Chapter 4, we investigated and reported on how the switch's buffering architecture affects congestion behavior under heavy congestive loss.

A well-known phenomenon of the input-buffered switch is the Head-Of-Line (HOL) blocking problem. A packet blocked at the head of a FIFO input queue also blocks the packets behind it, even if some of those packets are destined for idle output ports. Queueing analysis shows that HOL blocking reduces the achievable throughput to about 58.6% even under a uniform traffic pattern. Nevertheless, input-port buffering is the simplest to design, since the buffers need only operate at the same speed as the input/output links; such switches are therefore cheap, albeit with some physical constraints. The other two architectures, output-buffered and shared-buffered, do not suffer from the HOL problem and can thus support higher throughput than an input-buffered switch on some traffic patterns. However, due to technological constraints, their buffers must be fast enough to sustain simultaneous accesses [110], which requires a more complex and stringent design; these switches are usually more expensive than the input-buffered switch.
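The 58.6% figure can be reproduced with a short simulation of the standard saturated model: every input queue is always backlogged, each head-of-line packet targets a uniformly random output, and each output accepts one contending HOL packet per slot while the losers stay blocked. This sketch is illustrative only; the function name and parameters are assumptions, not taken from the text.

```python
import random

def hol_throughput(n_ports, n_slots, seed=0):
    """Estimate per-port throughput of a saturated input-buffered switch.

    Each input queue is never empty (saturation), and each packet chooses
    its output uniformly at random when it reaches the head of its queue.
    Per slot, every contended output serves one HOL packet; blocked HOL
    packets keep their destination and stall the packets behind them.
    """
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    delivered = 0
    for _ in range(n_slots):
        contenders = {}
        for inp, out in enumerate(hol):
            contenders.setdefault(out, []).append(inp)
        for out, inputs in contenders.items():
            winner = rng.choice(inputs)       # one packet crosses per output
            hol[winner] = rng.randrange(n_ports)  # next packet reaches HOL
            delivered += 1
    return delivered / (n_ports * n_slots)

# For large port counts the estimate approaches 2 - sqrt(2) ~= 0.586
print(hol_throughput(64, 20000))
```

For small switches the limit is higher (0.75 for a 2x2 switch) and it decreases toward 2 − √2 ≈ 0.586 as the port count grows, which is the analytical bound cited above.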

