A nice primer on InfiniBand appeared in Computer Technology Review. The author, Dave Ellis, is the director of HPC Architecture, Engenio Storage Group, LSI Logic Corporation.
InfiniBand is one of a few I/O architectures initially developed to address the high-bandwidth, low-latency requirements of High Performance Computing (HPC) environments. While early HPC deployments may have used Ethernet interconnects, the latency inherent in TCP/IP limited the overall potential performance of those clusters. Since compute-intensive applications do not need all the features of TCP/IP, development began on streamlined I/O architectures. The resulting solutions, such as Myrinet and InfiniBand, support the Message Passing Interface (MPI) over high-bandwidth (10 gigabits per second, Gb/s), very low latency transport architectures.

Link: http://www.wwpi.com/index.php?option=com_content&task=view&id=1163&Itemid=44
InfiniBand is still a relatively new technology, and today it is supported only in homogeneous networks based on the Linux operating system. As early adopters in HPC and data center environments continue to deploy it and reap the benefits of greatly increased speed and low latency, InfiniBand will eventually become more mainstream. It is expected to be adapted, in time, for use in more general-purpose computing environments, and it even has the potential to replace the PCI bus architecture in high-end servers and PCs.