WAN Confusion: Why WAN Data Acceleration isn’t WAN Optimisation or SD-WAN

WAN Optimisation and SD-WAN vendors tend to talk about cost-efficiency, scalability, virtual flexibility, cloud-readiness and industry leadership – and about their ability to accelerate any application despite the limitations of TCP/IP and the inhibitions created by latency and packet loss. They are indeed great technologies, and each one has its purpose, but they often fail to live up to expectations – particularly their vendors’ oft-touted claim that they enable more efficient use of bandwidth. By Graham Jarvis, on behalf of Bridgeworks.

Even so, given the push from their vendors and their claims, it’s easy to get confused about what WAN Acceleration, also known as WAN Data Acceleration, does. People can be forgiven for thinking that it’s WAN Optimisation, or even an SD-WAN. However, WAN Optimisation gains little from encrypted data, which it cannot deduplicate. In stark contrast, WAN Acceleration can accelerate it while more efficiently mitigating latency and packet loss. SD-WANs are great, but their performance can be significantly improved with WAN Acceleration overlays.

WAN differences

David Trossell, CEO and CTO of Bridgeworks, explains: “If there is one subject in the Wide Area Network (WAN) industry that causes clients more confusion and misunderstanding, it’s this one about the difference between WAN Optimisation, SD-WANs and WAN Acceleration. It is a very common misconception that WAN Optimisation and WAN Acceleration are different names for the same thing, and that SD-WAN will somehow reduce latency to the point that there is no longer a need for WAN Optimisation and WAN Acceleration.”

He’s therefore keen to discuss the pros and cons of each technology, highlighting how they differ. With this clarification, he hopes that companies and organisations seeking to mitigate the effects of latency and packet loss will be better equipped to make an informed decision about which technology will deliver the performance their needs demand.

Begin at the beginning

Let’s begin with the coining of the term WAN Optimisation (WAN-Op) by Gartner. Trossell explains: “In reality, and I’m going up against the mighty Gartner organisation to say this, it is, or was, a misnomer – it really should be named Data Optimisation, because that is what these products actually do: there is little or no optimisation of the WAN – there, I have said it! To explain my statement, let’s look at how WAN Optimisation works.”

“When we create a document in Word or an Excel spreadsheet file, there are a lot of repeated character or data sequences. The same sequences can exist in other files we create, such as ‘Yours sincerely’, the company address, and so on. So, rather than transmit these sequences in full each time, we create a library of these sequences in the WAN-Op device and assign a unique reference pointer to each, so we only have to transmit the short reference pointer rather than the whole data sequence.

“At the other end of the WAN, the opposite happens in the WAN-Op device: it takes the reference pointer and replaces it with the text or data sequence that was in the original file. As more and more files are created and transmitted, the number of sequences in the library grows to the point where the data transmitted over the WAN can be reduced by 90%. This process is more commonly known as data deduplication.”
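The sequence-library mechanism he describes can be sketched in a few lines of Python. This is a minimal illustration of the idea only – the chunking scheme, function names and fixed chunk size are invented for the example and do not reflect any vendor’s implementation:

```python
# Minimal sketch of the WAN-Op sequence library described above.
# Chunking, names and the fixed chunk size are illustrative only.

def dedup_encode(data: bytes, library: dict, chunk_size: int = 16) -> list:
    """Near-end device: replace known chunks with short reference pointers."""
    stream = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        if chunk in library:
            stream.append(("ptr", library[chunk]))   # send the pointer only
        else:
            library[chunk] = len(library)            # learn the new sequence
            stream.append(("raw", chunk))            # first time: send in full
    return stream

def dedup_decode(stream: list, chunks: list) -> bytes:
    """Far-end device: rebuild the file, learning new chunks as they arrive."""
    out = []
    for kind, payload in stream:
        if kind == "raw":
            chunks.append(payload)                   # mirror the library
            out.append(payload)
        else:
            out.append(chunks[payload])              # expand the pointer
    return b"".join(out)
```

Once a sequence such as “Yours sincerely,” has crossed the WAN in full once, every later occurrence travels as a short pointer; a second file made up of already-known sequences becomes almost nothing but pointers, which is where the 90% reduction comes from.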

He then describes the “WAN optimisation” aspect of the technology as a “very blunt tool”, because it forces the congestion window to a large size and bludgeons its way across the WAN, ignoring the congestion-control mechanisms “that TCP employs to create a fair and controlled network.” He therefore views the technology as being more about data optimisation than WAN optimisation. The trouble, he claims, is that the industry has become conditioned to think that WAN Optimisation is the only tool available “for working and transmitting data over the WAN and there are no alternatives.” Yet WAN Acceleration can be a more effective and efficient alternative, requiring no new infrastructure and no increase in bandwidth.

User experience

Trossell nevertheless recognises that WAN Optimisation excels in the field of user experience: “With the ability to serve the user data locally, when the user calls up the file it is near-instant. Another aspect of WAN-Op is to do the same for application data streams such as CIFS, FTP and a whole host of other protocols.”

However, he explains that WAN Optimisation relies on reducing the data payload moving across the WAN. “Many of the new rich data formats, such as compressed video, sound and pictures, or deduplicated data from backup processes, are already highly reduced; therefore, there is little data reduction possible and the file is sent in its entirety across the WAN”, he says, before adding that there are three hidden gems of which he wants people to become aware:

1. The first is that when a file is first committed, the whole file is sent over the WAN without deduplication – if this is a slow WAN with lots of latency, this can take a considerable time.

2. The second is the storage connected to the deduplication process: if this fills up and has to overwrite some sections, parts of the file may not be available to rebuild it when called upon, and the device will have to fetch the file from the server over the WAN. If the store is completely overwritten – by a misconfigured overnight backup, for example – the time taken to retrieve any file could run into minutes.

3. And the third, which sounds counter-intuitive, is increasing WAN bandwidth. The deduplication process requires a lot of processing power and memory; as we move up into the Gb/s bandwidths, performance starts to plateau, which restricts its ability to make use of multi-gigabit WAN connections.

What about SD-WANs?

SD-WANs are, in his view, a great technology and one that has emerged over the past few years to have a major impact on the world of Wide Area Networks. He explains: “It is like having VMware for the WAN where we separate the controlling software layer from the data layer. This opens up so many possibilities to simplify the configuration and maintenance of the WAN.”

“One of the really useful features of SD-WANs is their ability to create virtual WANs out of combinations of different types of connection, and to control which data flows down which set of connections. With their ability to bond multiple broadband and 5G connections together, it is possible to create high-speed, resilient WANs at a fraction of the cost of connections from traditional suppliers.”

It’s therefore no surprise that many of the WAN Optimisation vendors have entered the SD-WAN market with their own propositions to provide some performance improvement. The trouble is that SD-WANs don’t reduce latency and packet loss – the two things that kill WAN performance. “You still have the issue with rich and compressed media files, as well as what is becoming a major factor in WAN traffic performance: encrypted data. WAN-Op can provide little or no data reduction to improve performance over the WAN”, he elaborates.

Computational limits

He points out that the industry has always looked to compression or deduplication to improve WAN performance. However, this approach runs up against computational limits, and ever more data is compressed, deduplicated or – driven in part by GDPR – encrypted, with an increasing amount of other sensitive data whizzing around the world. Consequently, he claims, WAN Optimisation no longer meets the needs of large data movers.

He therefore asks: “So where does this leave us, with more and more data being created and moved over the WAN without any possibility of improving performance?” Well, it necessitates a different approach to how data is moved, and that is WAN Acceleration. This is required if organisations are to exploit the full potential of the new high-capacity, high-bandwidth WANs.

“The two performance thieves of moving any data over these WANs are latency and packet loss”, he reminds us. Latency is the time it takes to transmit data from one node to another and to receive the acknowledgement that it arrived. It is governed by the speed of light, which, he notes, is no longer fast enough. Latency is therefore the main culprit when it comes to poor WAN performance, and it compounds packet loss between the nodes: packets that are lost have to be detected and re-transmitted, multiplying the effect of latency and slowing everything down even further.
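The ceiling that latency puts on a single TCP stream can be estimated from a simple relationship: at most one window of data can be unacknowledged per round trip, so throughput is bounded by window ÷ RTT. A rough sketch, with an effective window size assumed purely for illustration (chosen to match the ~10% utilisation figure quoted later in the article):

```python
# Back-of-the-envelope: a single TCP stream can have at most one window
# of data in flight per round trip, so throughput <= window / RTT.

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on a single stream's throughput in bits per second."""
    return window_bytes * 8 / rtt_seconds

link_bps = 10e9        # the 10 Gb/s WAN from the example
rtt_s = 0.02           # 20 ms round-trip latency
window_bytes = 2.5e6   # assumed effective window of ~2.5 MB (illustrative)

tput = max_throughput_bps(window_bytes, rtt_s)  # 1 Gb/s ceiling
utilisation = tput / link_bps                   # ~10% of the link
```

Note that no amount of extra link capacity changes this bound – only a larger window, a shorter round trip, or more concurrent streams can.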

Tackling latency

To tackle latency, the two end points could be moved closer together, which, he says, “defeats the whole object of the WAN, but we can reduce or mitigate the effects that latency has on WAN performance by thinking about the problem differently.” This is what WAN Acceleration does. Consider the utilisation figures for a 10 Gb/s WAN with 20 ms of latency: a single stream of data, he explains, will only use about 10% of the potential bandwidth.

That means that 90% of the bandwidth is wasted. So, instead of focusing on latency, there is an opportunity to increase bandwidth utilisation towards that 90%. He concludes: “This can be done by putting more data on the WAN at the same time, thus bringing throughput performance back with the help of WAN Acceleration solutions such as PORTrockIT.”
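The “more data on the WAN at the same time” idea can be illustrated with the same window/RTT arithmetic: if one stream is capped at window ÷ RTT, running several streams in parallel multiplies the aggregate throughput until the link is nearly full. The numbers below are illustrative only and say nothing about how PORTrockIT actually achieves this internally:

```python
import math

# If one stream is capped at window/RTT, run enough parallel streams
# to fill the link. Figures are illustrative, not PORTrockIT internals.

def streams_needed(link_bps: float, window_bytes: float,
                   rtt_s: float, target: float = 0.9) -> int:
    """Parallel streams needed to reach `target` utilisation of the link."""
    per_stream_bps = window_bytes * 8 / rtt_s   # single-stream ceiling
    return math.ceil(target * link_bps / per_stream_bps)

# 10 Gb/s link, 20 ms RTT, assumed ~2.5 MB effective window per stream:
n = streams_needed(10e9, 2.5e6, 0.02)   # streams for ~90% utilisation
```

Each individual stream still suffers the same latency; the aggregate simply keeps the pipe occupied instead of leaving 90% of it idle between acknowledgements.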

Increasing bandwidth alone won’t make any difference; nor will WAN Optimisation, and even SD-WANs can be boosted with a WAN Acceleration overlay. They are all great technologies, but only WAN Acceleration mitigates latency and packet loss while boosting bandwidth utilisation. Latency itself can’t be eliminated – it’s a fact of physics – and its effects can only be mitigated. WAN Acceleration is therefore neither WAN Optimisation nor an SD-WAN: its focus is different, and it can even accelerate the transmission and receipt of encrypted data.

