Latency vs bandwidth vs packet loss

Topic: Networking basics

Summary

Latency is round-trip or one-way delay; bandwidth is throughput capacity; packet loss is the fraction of packets that do not arrive. Learn how each affects applications and how to measure them so you can diagnose slow or unreliable connections. Use this when tuning or debugging network performance.

Intent: How-to

Quick answer

  • Latency: time for a packet to go from A to B (one-way) or there and back (round-trip time, RTT); measured in ms; high latency hurts interactive use (SSH, VoIP) and TCP throughput until the window grows large enough to fill the path.
  • Bandwidth: maximum data rate (e.g. Mbps, Gbps); limits how fast you can send; large transfers are bandwidth-bound, but latency and window size limit how much data can be in flight at once (see the worked example after this list).
  • Packet loss: percentage of packets that never arrive; causes retransmits and congestion backoff; even 1% can significantly hurt throughput and cause timeouts; measure it with ping's loss statistics or dedicated tools.
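
For example (a rough sketch with assumed figures: a 100 Mbit/s path and 50 ms RTT), the bandwidth-delay product tells you how much data must be in flight to keep a path full:

    # Bandwidth-delay product (BDP): bytes that must be in flight to fill the path.
    # Assumed example figures: 100 Mbit/s bandwidth, 50 ms RTT.
    awk 'BEGIN { bw = 100e6; rtt = 0.050; printf "BDP = %.0f bytes (~%.0f kB)\n", bw*rtt/8, bw*rtt/8/1000 }'

A TCP window smaller than this (about 625 kB here) caps throughput below the link rate no matter how fast the link is.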

Steps

  1. Define and measure latency

    Latency is delay (one-way or RTT). Measure RTT with ping (e.g. ping -c 10 host); note min/avg/max. High RTT increases time to first byte and limits TCP throughput (throughput ≈ window/RTT until bandwidth cap).

  2. Define and measure bandwidth

    Bandwidth is the maximum sustainable rate. Measure with iperf3 or similar between two hosts; ensure no other traffic. Throughput can be less than bandwidth due to latency (TCP window) or loss.

  3. Define and measure packet loss

    Packet loss is the fraction of packets that never arrive. ping reports loss percentage; for deeper analysis use dedicated probes. Loss causes TCP retransmits and congestion backoff; 1% loss can cut throughput sharply.

  4. Relate to applications

    Interactive (SSH, games): sensitive to latency. Bulk transfer: sensitive to bandwidth and loss. VoIP/video: need low latency and low loss; they typically use UDP with codecs that tolerate some loss.

Summary

Latency is delay; bandwidth is capacity; packet loss is the fraction of packets lost. Each affects applications differently: interactive use suffers from high latency; bulk transfer is limited by bandwidth and hurt by loss. Use this when measuring or explaining slow or unreliable links.

Prerequisites

  • None; this is a foundational concept.

Steps

Step 1: Define and measure latency

Latency is the time for a packet to travel (one-way or round-trip). Round-trip time (RTT) is often measured with ping. High RTT increases response time for interactive traffic and limits TCP throughput until the congestion window is large enough to fill the pipe.
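
For example, on a typical Linux host (the hostname is a placeholder):

    # Send 10 echo requests; the summary line reports min/avg/max/mdev RTT in ms.
    ping -c 10 example.net

The avg value is the usual "latency" figure; a large gap between min and max suggests jitter or intermittent queueing.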

Step 2: Define and measure bandwidth

Bandwidth is the maximum data rate the path can sustain (e.g. Mbps). Measure with tools like iperf3 between two endpoints. Actual throughput can be lower due to latency (TCP window), packet loss, or other traffic.
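
A minimal iperf3 run looks like this (hostnames are placeholders; iperf3 must be installed on both ends):

    # On the remote endpoint: start an iperf3 server.
    iperf3 -s

    # On the local endpoint: run a TCP test (10 seconds by default) toward it.
    iperf3 -c server.example.net

Compare the reported bitrate with the link's nominal rate; add -R to measure the reverse direction.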

Step 3: Define and measure packet loss

Packet loss is the proportion of packets that never arrive. ping reports loss percentage over a number of probes. Loss triggers TCP retransmissions and congestion control; even small loss (e.g. 1%) can significantly reduce throughput.
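
To make small loss rates visible, send enough probes (hostname and counts are illustrative):

    # 100 echo requests at 0.2 s intervals; the summary reports the "% packet loss".
    ping -c 100 -i 0.2 example.net

With 100 probes the resolution is 1%, so run longer (or use a dedicated probe) if you need to see fractions of a percent.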

Step 4: Relate to applications

  • Interactive (SSH, RDP, games): Sensitive to latency; high RTT feels sluggish.
  • Bulk transfer: Limited by bandwidth; also reduced by loss and RTT (window); see the rough calculation after this list.
  • Real-time (VoIP, video): Need low latency and low loss; often use UDP with loss-tolerant codecs.
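
A common rule of thumb, the Mathis approximation for loss-limited TCP (treat it as an order-of-magnitude estimate; the figures below are assumed), shows why bulk transfers suffer even at 1% loss:

    # Throughput is roughly 1.22 * MSS / (RTT * sqrt(loss)) for standard TCP.
    # Assumed figures: 1460-byte MSS, 50 ms RTT, 1% loss.
    awk 'BEGIN { mss = 1460*8; rtt = 0.050; p = 0.01; printf "~%.1f Mbit/s ceiling\n", 1.22*mss/(rtt*sqrt(p))/1e6 }'

With those numbers the ceiling is under 3 Mbit/s, regardless of how much raw bandwidth the path has.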

Verification

  • You can define latency, bandwidth, and packet loss; measure RTT and loss with ping; and explain how each affects at least one type of application.

Troubleshooting

Slow but no loss — Likely latency or limited bandwidth; measure RTT and throughput; check for path changes or congestion.
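
A per-hop view helps separate path latency from local congestion; mtr is one possible probe (assuming it is installed; the hostname is a placeholder):

    # Report mode, wide output, 100 probes per hop.
    mtr -rwc 100 example.net

RTT that jumps at one hop and stays high afterwards points at congestion or a longer route from that point on.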

High loss — Check links, cables, and equipment; look for errors on interfaces; reduce congestion or fix faulty hardware.
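
On Linux, interface counters are a quick first check (the interface name is an example):

    # Show per-interface statistics, including RX/TX errors and dropped packets.
    ip -s link show dev eth0

Growing error or drop counters usually indicate a bad cable, duplex mismatch, or failing hardware rather than simple congestion.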

Next steps

Continue to