
What latency is and how to reduce it

There are many factors that affect the speed of a web resource. One of them is network latency. Let’s take a closer look at what latency is, how it affects application performance, and how it can be reduced.

What is latency?

Broadly speaking, latency is any delay in the execution of an operation. There are different types: network latency, audio latency, video latency during livestreams, latency at the storage level, and so on.

Basically, any type of latency results from the limitations of the speed at which any signal can be transmitted.

Most, but not all, types of latency are measured in milliseconds. Latency between the CPU and an SSD, for example, is measured in microseconds.

This article will focus on network latency, hereinafter referred to as “latency”.

Network latency (response time) is the delay that occurs when information is transferred across the network from point A to point B.

Imagine a web application deployed in a data center in Paris, accessed by a user in Rome. The browser sends a request to the server at 9:22:03.000 CET (UTC+1), and the server receives it at 9:22:03.174 CET. The delay in delivering this request is 174 ms.

This is a somewhat simplified example. Note that data volume is not taken into account when measuring latency: transferring 1,000 MB takes longer than transferring 1 KB, but if the transfer rate and path are the same, the latency is the same in both cases.
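The distinction between latency and total delivery time can be shown with a back-of-the-envelope calculation. This is only a sketch; the 100 Mbps bandwidth figure is a hypothetical example, and the 174 ms latency is taken from the Paris-to-Rome scenario above:

```python
def transfer_time_s(size_mb: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Approximate total delivery time: serialization time plus one-way latency."""
    return size_mb * 8 / bandwidth_mbps + latency_ms / 1000

# Same link (100 Mbps, 174 ms latency), very different payload sizes:
small = transfer_time_s(0.001, 100, 174)  # 1 KB: dominated by latency
large = transfer_time_s(1000, 100, 174)   # 1,000 MB: dominated by bandwidth
```

For the small payload almost all of the delivery time is latency; for the large one, latency is a rounding error. The latency itself is identical in both cases.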

The concept of network latency is mainly used when discussing interactions between user devices and a data center. The lower the latency, the faster users will get access to the application that is hosted in the data center.

It is impossible to transmit data with no delays since nothing can travel faster than the speed of light.

What does network latency depend on?

The main factor that affects latency is distance. The closer the information source is to users, the faster the data will be transferred.

For example, a request from Rome to Naples (a little less than 200 km) takes about 10 ms. And the same request sent under the same conditions from Rome to Miami (a little over 8,000 km) will take about 120 ms.
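These figures can be sanity-checked against the physical floor. Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km/s; the sketch below uses that approximation and straight-line distances, both of which are optimistic assumptions:

```python
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 of the speed of light in vacuum

def min_one_way_latency_ms(distance_km: float) -> float:
    """Theoretical lower bound on one-way latency over a straight fiber path."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(min_one_way_latency_ms(200))   # Rome-Naples: 1 ms floor
print(min_one_way_latency_ms(8000))  # Rome-Miami: 40 ms floor
```

The measured values above (about 10 ms and 120 ms) are several times higher than these floors because real routes are not straight lines and every hop adds processing time.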

There are other factors that affect network latency.

Network quality. At speeds above 10 Gbps, copper cables and connectors exhibit too much signal attenuation even over distances of a few meters. That is why, as interface speeds increase, fiber-optic cables are mainly used.

Route. Data on the Internet is usually transmitted across more than one network: information passes through several autonomous systems. At the transition points from one autonomous system to another, routers process the data and forward it toward its destination, and this processing also takes time. Therefore, the more networks and Internet exchange points (IXs) there are on a packet's path, the longer the transfer will take.

Router performance. The faster the routers process data, the faster the information will reach its destination.

In some sources, the concept of network latency also includes the time the server needs to process a request and send a response. In this case, the server configuration, its capacity, and operation speed will also affect the latency. However, we will stick to the above definition, which includes only the time it takes to send the signal to its destination.

What is affected by network latency?

Latency affects other parameters of web resource performance, for example, the RTT and TTFB.

RTT (Round-Trip Time) is the time it takes for sent data to reach its destination, plus the time to confirm that the data has been received. Roughly speaking, this is the time it takes for data to travel back and forth.

TTFB (Time to First Byte) is the time from the moment the request is sent to the server until the first byte of information is received from it. Unlike the RTT, this indicator includes not only the time spent on delivering data but also the time the server takes to process it.

These indicators, in turn, affect the perception of speed and the user experience as a whole. The faster a web resource works, the more actively users will use it. Conversely, a slow application can negatively affect your online business.
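As a rough illustration of what TTFB captures, it can be estimated with a few lines of Python. This is only a sketch using a hand-written HTTP/1.1 request; the host and port in any real call are up to you:

```python
import socket
import time

def measure_ttfb(host: str, port: int = 80, path: str = "/") -> float:
    """Time from sending an HTTP request until the first response byte arrives (seconds)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # blocks until the server's first byte comes back
        return time.perf_counter() - start
```

Unlike a ping, this timing includes the server's processing time, which is exactly the difference between TTFB and pure network latency.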

What is considered optimal latency and how to measure it?

The easiest way to determine your resource's latency is to measure a related indicator such as the RTT, which is the closest to latency. In many cases, the RTT equals roughly twice the latency (when the forward and return paths take the same time).

It is very easy to measure using the ping command: open a command prompt and type ping followed by the resource's IP address or domain name.

Let’s try to ping www.google.com as an example.

C:\Users\username>ping www.google.com

Pinging www.google.com [216.58.207.228] with 32 bytes of data:

Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16

The time parameter is the RTT. In our example, it is 24 ms.
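The same values can be extracted programmatically, for example to average several samples and estimate one-way latency. This is a sketch; the captured output below mirrors the example above in standard Windows ping form:

```python
import re

PING_OUTPUT = """\
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
Reply from 216.58.207.228: bytes=32 time=24ms TTL=16
"""

def average_rtt_ms(output: str) -> float:
    """Average the time=...ms values found in ping output."""
    times = [int(t) for t in re.findall(r"time[=<](\d+)ms", output)]
    return sum(times) / len(times)

rtt = average_rtt_ms(PING_OUTPUT)
print(f"average RTT: {rtt:.0f} ms, estimated one-way latency: {rtt / 2:.0f} ms")
```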

The optimal RTT value depends on the specifics of your project. On average, most specialists consider less than 100 ms to be a good indicator.

RTT value      Meaning
<100 ms        Very good; no improvements required
100–200 ms     Acceptable, but can be improved
>200 ms        Unsatisfactory; improvements are required
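The bands in the table above map directly to a small helper function. The band names are paraphrases of the table, not an industry standard:

```python
def rate_rtt(rtt_ms: float) -> str:
    """Classify a measured RTT using the bands from the table above."""
    if rtt_ms < 100:
        return "very good"
    if rtt_ms <= 200:
        return "acceptable"
    return "unsatisfactory"
```

For the 24 ms RTT measured earlier, this returns "very good".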

How to reduce latency?

Here are some basic guidelines:

  • Reduce the distance between the data origin and the users. Try to place servers as close to your clients as possible.
  • Improve network connectivity. The more peering partners (networks you exchange traffic with directly) and route options you have, the better the routes you can build and the faster data will be transferred.
  • Improve traffic balancing. Distributing large amounts of data over different routes will help reduce the network load. In that way, information will be transferred faster.

A CDN (Content Delivery Network), a network of connected servers that fetch information from the origin, cache it, and deliver it over the shortest route, will help with the first two points. A global network with good connectivity can significantly reduce latency.

However, keep in mind that latency is only one factor affecting users’ perception of application performance. In some cases, the latency is very low, but the website still loads slowly. This happens, for example, when the server is slow in processing requests.

As a rule, complex optimization is required to significantly speed up the application. You can find the main acceleration tips in the article “How to increase your web resource speed”.

Summary

  1. Latency is the time it takes to deliver data across the network from one point to another.
  2. The main factor it depends on is distance. It is also affected by the network quality and the route (number of networks and traffic exchange points).
  3. Latency affects other parameters of the web resource performance, such as RTT and TTFB. They, in turn, affect conversion rates and search engine rankings.
  4. The easiest way to determine the latency of a resource is to measure the RTT. This can be done using the ping command. An optimal RTT is less than 100 ms.
  5. The most effective way to reduce latency is to use a CDN. A content delivery network reduces the distance between the client and the data origin and improves routing. As a result, information is transferred faster.

G‑Core Labs CDN provides excellent data transfer speed. We deliver heavy files with minimal delays anywhere in the world.

We have a free plan. Test our network and see how your resource will speed up.
