Difference between Latency, Throughput and Bandwidth
Posted By : Suraj Mishra | 30-Jul-2019
In this blog, we are going to discuss the difference between Latency, Throughput, Bandwidth and Response Time, walking through a simple example to make the difference between all four clear.
Picture water flowing through a pipe: this analogy gives a good idea of the three important components, i.e. Latency, Bandwidth and Throughput.
Latency: The total time taken by the water to travel from one end of the pipe to the other is known as Latency. It is measured in units of time, such as milliseconds, seconds, minutes or hours. In performance testing, the latency of a request is its travel time from client to server plus from server back to the client; some testers call it “Delay”. Let’s say:
· The request starts at t=0
· The request reaches the server in 1 second (at t=1)
· The server takes 2 seconds to process the request (at t=3)
· The response reaches the client in 1 second (at t=4)
So the Latency will be 2 seconds (1 second for the request to reach the server plus 1 second for the response to come back); the server’s 2 seconds of processing time are not counted.
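To make the arithmetic concrete, here is a minimal Python sketch; the variable names are illustrative and the timestamps are taken from the timeline above:

```python
# Timeline from the example above (all times in seconds).
request_sent = 0        # t=0: client sends the request
request_received = 1    # t=1: request arrives at the server
response_sent = 3       # t=3: server finishes processing and replies
response_received = 4   # t=4: response arrives back at the client

# Latency counts only travel time: client -> server plus server -> client.
latency = (request_received - request_sent) + (response_received - response_sent)
print(latency)  # 2 seconds
```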
Bandwidth: Bandwidth indicates the maximum amount of water that can pass through the pipe. In testing terms, the maximum amount of data that can be transferred through a communication channel is called the channel’s bandwidth. For example, if an ISDN line has 64 Kbps of bandwidth, we can increase it by adding one more 64 Kbps channel, making the total bandwidth 128 Kbps.
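As a quick illustrative sketch (the channel figures come from the ISDN example; the code itself is hypothetical), capacity simply adds when channels are combined:

```python
# Two ISDN channels of 64 Kbps each, bonded together (from the example above).
channels_kbps = [64, 64]
total_kbps = sum(channels_kbps)
print(total_kbps)  # 128 Kbps
```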
Throughput: The water actually flowing out of the pipe represents Throughput. In testing terms, it is “the amount of data moved successfully from one place to another in a given time period, typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps)”. For example, if 20 bits of data are transferred during the 4th second, the throughput at t=4 is 20 bps.
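A minimal sketch of the same calculation, assuming the 20-bit example above:

```python
# Measurement window: the 4th second of the test (t=3 to t=4).
bits_transferred = 20   # bits that arrived successfully in the window
window_seconds = 1

throughput_bps = bits_transferred / window_seconds
print(f"{throughput_bps} bps")  # 20.0 bps, matching the example
```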
Response Time: Response time may be defined as “the amount of time from the moment that a user sends a request until the time that the application indicates that the request has been completed and the response reaches back to the user”. In the Latency example above, the Response time will be 4 seconds (1 second to the server + 2 seconds of processing + 1 second back).
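Extending the same illustrative timeline, response time is simply the full interval the client observes:

```python
# Reusing the timestamps from the latency sketch above.
request_sent = 0
response_received = 4

response_time = response_received - request_sent  # everything the user waits for
processing_time = 2                               # server-side work (t=1 to t=3)
latency = response_time - processing_time         # travel time only

print(response_time, latency)  # 4 seconds total, of which 2 seconds is latency
```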
Some important points to remember:
- Solving a bandwidth problem is easier than solving a latency problem, since bandwidth can be increased by adding capacity while latency is bound by distance and the network path.
- If throughput is nearly equal to bandwidth, it means the full capacity of the network is being utilized.
- An increase in response time while the throughput graph stays flat indicates a network bandwidth issue.
- Consistent throughput indicates that the network bandwidth is delivering its expected capacity.
- Response time is directly related to throughput: if throughput decreases as response time increases, it indicates instability in the application.
- The number of concurrent threads drives the final throughput, at least until the system saturates.
- If you have low latency with small bandwidth, it will take longer for data to travel from point A to point B than on a connection with the same low latency and high bandwidth (see the sketch after this list).
- Latency is affected by three main factors, i.e. connection type, distance, and network congestion.
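To put the bandwidth point above in numbers, here is an illustrative Python sketch (the link speeds and payload size are invented for the example) that treats total transfer time as latency plus the time needed to push all the bits through the link:

```python
def transfer_time(size_bits: float, bandwidth_bps: float, latency_s: float) -> float:
    """Rough total time to move a payload: one-way latency plus the
    time needed to push all the bits through the link."""
    return latency_s + size_bits / bandwidth_bps

payload = 10_000_000  # a 10-megabit payload

# Same low latency (20 ms), very different bandwidth:
print(transfer_time(payload, 1_000_000, 0.02))    # ~10.02 s on a 1 Mbps link
print(transfer_time(payload, 100_000_000, 0.02))  # ~0.12 s on a 100 Mbps link
```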
Thanks