Technology insights - Why do IP and media fail to mix?

Transporting video and audio over electronic networks is vastly different from moving conventional IT data around.

Media content requires, above all, predictability: the right capacity and bandwidth, at exactly the right time, and with no glitches.

Achieving this predictability with a traditional IT approach would call for heavy over-provisioning and over-investment: stacking equipment until it can handle every envisaged scenario. But even with that overcapacity, traditional network technology will not give you the strict predictability that a media solution requires when it really matters.

FROM MULTIPLE NETWORKS TO ONE IP NETWORK

Today, most media organizations still run multiple types of networks side by side, each with its own technology. Real-time production environments are connected via synchronous SDI networks over coax cables. Storage networks are mostly based on lossless Fibre Channel technology running on fiber-optic cables. File-based media production runs on high-bandwidth 1 Gb/s or 10 Gb/s IP/Ethernet networks. In addition, each company also runs its business processes and office environment over a classic enterprise IT network.

As technology evolves, the trend is to move all these data streams onto a single type of network, spurred on by the industry's mantra that ‘Ethernet always wins’. The trend is driven by the promise of lower cost, greater flexibility, and the availability of mass-produced, ever more powerful hardware components.

If you are involved in media production, chances are that you will soon be looking into provisioning IP networks to transport all your media content.

BEST-EFFORT IS NOT ENOUGH

IP network technology was created and has evolved in enterprise IT environments. As a result, it is built on assumptions that differ fundamentally from those of a media environment.

In a traditional IT environment, traffic runs between a few data servers and a large number of low-bandwidth clients. The traffic is mostly light, consisting of small files or messages, and there are no strict time constraints. E-mail, browsing, office and most business applications expect only a best-effort response from the network. Slow e-mail is still e-mail.

Media companies, in contrast, have to move large files and continuous data streams over their networks. Traffic may be any-to-any, with multiple servers communicating with one another. In most cases, transfers have to run as fast as possible, filling the available bandwidth to the maximum. And sometimes, in editing or play-out environments, they require sustained real-time throughput.

So for your media network, you are looking for fast, guaranteed and predictable traffic and storage. Because a slow video is no longer a video. 

CAN LOAD BALANCING COME TO THE RESCUE? 

To beef up the bandwidth, traditional IP networks combine parallel links to offer their aggregate capacity. IP/Ethernet technology provides the necessary plug-and-play functionality to do so readily, with components that have been fully standardized over the years.

The traffic is then load-balanced over these parallel paths, and throughput is governed by statistical best-effort delivery. This, together with vague throughput expectations (Quality of Service, QoS), creates a false sense of linear scalability.

Despite this parallelization, the simultaneous occurrence of traffic patterns in media environments may still result in overloads, and continuous media transfers may even cause serious congestion. In addition, with any-to-any traffic, two or more servers may be sending streams to the same destination at once. This leads to oversubscription in the network at the receiving server, where the network links converge.
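
As a back-of-the-envelope sketch, the Python snippet below shows what such a convergence looks like in numbers. All figures are illustrative assumptions, not measurements from any particular installation: two hypothetical servers each push a 10 Gb/s stream towards the same receiving link.

    # Hypothetical example: two senders converge on one receiving link.
    # All figures are illustrative assumptions, not measurements.
    GBIT = 1e9  # bits per second

    senders = [10 * GBIT, 10 * GBIT]   # two servers, each sending at 10 Gb/s
    receiving_link = 10 * GBIT         # single 10 Gb/s link towards the destination

    offered_load = sum(senders)
    print(f"Oversubscription: {offered_load / receiving_link:.1f}:1")           # 2.0:1

    # The excess has to be buffered; a shared switch buffer fills up quickly.
    buffer_bits = 12 * 8e6             # assume roughly 12 MB of packet buffer
    excess_rate = offered_load - receiving_link
    print(f"Buffer exhausted after {buffer_bits / excess_rate * 1000:.1f} ms")  # 9.6 ms

Once that buffer is exhausted, the switch has no option left but to drop packets.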

So adding links and load balancing will not work for media IP networks, where we need fully predictable behavior; where data transfers are time-constrained and in many cases even time-critical, as the media stream has to deliver its data on a strict schedule to be played out.

While a slow e-mail is still an e-mail, a slow video is no longer a video. 

NEITHER IS OVER-PROVISIONING A SOLUTION

To provide ‘worst-case’ capacity and bandwidth, the common practice in the IT industry is to heavily over-provision the network. This, of course, leads to continuously increasing investment costs in network, storage and servers, and to an even more inefficient use of resources.

But like load balancing, over-provisioning falls short of providing the strict predictability that an operational solution requires. It may reduce or even mask part of the risk, but it does not guarantee the performance and reliability that you need.

MEDIA TRAFFIC CANNOT BE AVERAGED

For decades, network engineers have been trained to think in terms of macroscopic bandwidth when scaling their networks. They compute and provision the necessary bandwidth by simply adding up the average throughputs of all expected data transfers.

However, this does not work well with media traffic, which is bursty by nature: packets are sent back to back in microbursts at the maximum line rate to move the media across the network.
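
The short Python sketch below makes the difference between average and instantaneous load concrete. The rates are assumed purely for illustration, not taken from a real network.

    # Illustrative only: average throughput versus instantaneous burst rate.
    GBIT = 1e9

    average_rate = 2 * GBIT    # what average-based capacity planning sees per transfer
    line_rate = 10 * GBIT      # what the link sees during each microburst

    # Five such transfers look like a perfect fit on a 10 Gb/s link on paper...
    planned_load = 5 * average_rate
    print(f"Planned load: {planned_load / GBIT:.0f} Gb/s on a 10 Gb/s link")

    # ...but whenever two or more microbursts overlap, the instantaneous demand
    # exceeds the link capacity and packets queue up or are dropped.
    peak_demand = 2 * line_rate
    print(f"Demand during overlapping bursts: {peak_demand / GBIT:.0f} Gb/s")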

While IT traffic on an IP network could be compared to cars driving on a motorway, media traffic resembles bulky trucks or even freight trains suddenly appearing on that same motorway. So any network solution for your media transfers should take this into account. 

IN SUM, IP LACKS PREDICTABILITY

As we look at how media content fares on traditional IP networks, we see multiple media streams channeled through shared queues, and oversubscription at macroscopic timescales as well as at microscopic timescales due to the bursty traffic characteristics. TCP slows down transfers. Packet loss causes retransmissions. The effective use of the available link bandwidth drops far below the ‘expected’ transfer speed.
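
To get a feel for how quickly packet loss erodes TCP throughput, the well-known Mathis approximation for a single TCP flow can be plugged in with some assumed numbers; the round-trip time, segment size and loss rates below are illustrative, not measured values.

    # Rough illustration using the Mathis et al. approximation for one TCP flow:
    #   throughput ≈ C * MSS / (RTT * sqrt(p)), with C ≈ sqrt(3/2)
    # All parameter values are assumptions chosen for this example.
    from math import sqrt

    MSS = 1460 * 8          # maximum segment size in bits (~1460 bytes)
    RTT = 0.010             # assumed 10 ms round-trip time
    C = sqrt(3 / 2)         # Mathis constant, roughly 1.22

    for loss in (1e-5, 1e-4, 1e-3):     # 0.001%, 0.01%, 0.1% packet loss
        throughput = C * MSS / (RTT * sqrt(loss))
        print(f"loss {loss:.3%}: about {throughput / 1e9:.2f} Gb/s achievable")

Even on a 10 Gb/s link, a loss rate of a few hundredths of a percent is enough to cap a single flow at a small fraction of the link speed.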

All of this results in continuous traffic interference, delay and extra latency. And as traffic patterns change dynamically, the perceived network behavior becomes effectively random and totally unpredictable.