I haven't done a longer exposition of duplex modes for a while, so let's have another go:
"Full-Duplex" means "can send and receive at the same time." e.g. ethernet over UTP can do this.
"Half-Duplex" means "can send OR receive but not at the same time." e.g. Wi-Fi, poweline, (ancient) ethernet over coax and some early ethernet over UTP used to be this way. Ethernet can still "fall back" to half-duplex operation in some circumstances (and it's there for legacy reasons.)
Duplex modes are nothing to do with "speeds" and half-duplex does not mean "divide the link rate by two and that's the effective rate in each direction." It is more nuanced than that. Duplex can have an effect on throughput, but in a rather roundabout way...
It's kind of like driving down a road with a lane flowing unimpeded in each direction - "full-duplex" - compared to some road works with a single lane with signals controlling the traffic flow so that traffic can only flow one way or the other - "half-duplex" - but not both at the same time. That may reduce the amount of traffic that can be carried in each direction compared to two normal non-conflicting running lanes, but the actual "speed" of the traffic is not changed - it's just that less of it can get through when it's busy. If the traffic levels are low enough that there's no need to queue up for the single lane (and we arrange that the signals always change to green as one approaches) then it makes no difference and we proceed as if the restriction wasn't there. Similarly if all the traffic flows one way. Or the traffic pattern is asymmetric, but the total throughput (in both directions) is low enough that we can arrange passage without conflicts.
To stretch the roads analogy to more than one station - think of an uncontrolled crossroads with no route having priority and only one vehicle at a time able to proceed through the junction. We've got four "stations" (ingress/egress routes) and a "common" area (the junction) but it's still half-duplex in that only one route at a time can admit traffic onto the junction and all other routes must wait until it's clear (or we get a collision - which happens in data networking - though they tend to result in the colliding data simply being discarded rather than a pile of crumpled metal that needs to be cleared away.) If traffic levels are low enough and all drivers "play nice" - there's no conflict and nothing impairs anyone's progress whatever the respective traffic flow on any route. It's only when traffic levels grow to the point that collisions must be avoided that we start to get queuing, which then affects the traffic flows (throughput) - though once we're on the junction, we proceed at whatever "speed" we would have done without the queues. As traffic levels rise to the point at which conflicts start to occur, the "throughput" on any given route then reduces. Notice, there's no requirement for any sense of "fairness" as to which route gets selected or balancing of the flow rates "nicely" - the same can happen in data networking.
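If you fancy seeing the crossroads effect with numbers, here's a toy slotted-medium simulation in Python. To be clear: the station count, per-slot send probability and the "one sender wins, two or more collide" rule are all made up for illustration - no real MAC scheme (ethernet's CSMA/CD, Wi-Fi's CSMA/CA, etc.) is this crude - but the shape of the result is the point:

```python
import random

# Toy shared half-duplex medium: time is chopped into slots, any station
# may attempt to transmit in a slot. Exactly one sender succeeds; two or
# more at once is a collision and the slot is wasted (the "crumpled"
# frames are simply discarded and would be retried by higher layers).
def simulate(stations=4, slots=100_000, p_send=0.2, seed=42):
    rng = random.Random(seed)
    delivered = collisions = 0
    for _ in range(slots):
        senders = sum(1 for _ in range(stations) if rng.random() < p_send)
        if senders == 1:
            delivered += 1      # sole sender: the junction is theirs
        elif senders > 1:
            collisions += 1     # conflict: everything in this slot is lost
    return delivered / slots, collisions / slots

for p in (0.02, 0.1, 0.3, 0.6):
    d, c = simulate(p_send=p)
    print(f"p_send={p:<4} delivered={d:.2f} of slots, collided={c:.2f}")
```

Run it and the analogy falls out of the numbers: at low offered load almost every attempt gets through unimpeded, and as load climbs, collisions waste more and more slots and the shared medium's total throughput sags - without the "speed" of any individual successful transmission changing at all.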
The principal reason for the difference between the "link rate" of data networking links (the "speeds" on all the kit, specs, NICs, and cockpits) and the "throughput" (observed with speed tests, iperfs, copying files, and what most people "mean" when they say "speed") is down to the operating paradigm of the technology and in particular things like error correction and management traffic (which you don't "see" in your speed tests.)
To use a completely made up example, if micknet runs at 50Mbps "link rate" (as reported in my cockpit) and it uses a very large amount of error correction such that for every bit of "user" data there's an accompanying bit of FEC, then the throughput I observe with a speedtest is 25Mbps. Most real data networking technologies are worse than this as they cannot transmit continuously. Even ethernet at its very best is only about 97% efficient, Wi-Fi is of the order of 55%-75% efficient and HomePlug/Powerline is regularly cited at 45-55% - though the latter two are highly dependent on signalling conditions and the amount of forward error correction and retransmits required (ethernet doesn't have error correction or retransmits.)
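Putting numbers on that is just multiplication. The micknet figure below is the made-up one from above; the others apply the rough efficiency bands just quoted to link rates I've picked purely for illustration:

```python
# Goodput = link rate x efficiency. The link rates and efficiency
# figures are the illustrative numbers from the text, not measurements.
def goodput_mbps(link_rate_mbps, efficiency):
    return link_rate_mbps * efficiency

print(goodput_mbps(50, 0.50))    # micknet: 1 FEC bit per user bit -> 25.0
print(goodput_mbps(1000, 0.97))  # gigabit ethernet at its very best -> 970.0
print(goodput_mbps(300, 0.65))   # Wi-Fi, middle of the 55-75% band -> 195.0
print(goodput_mbps(200, 0.50))   # powerline, middle of 45-55% -> 100.0
```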
Speedtests (and iperf, NetIO, copying a file and timing it on a watch, etc.) take no account of any of the underlying technology including its operating paradigm, duplex modes, error correction, retransmit rates, management overheads, or anything I've discussed here - they only report what they observe at the highest levels in the networking "protocol stack" (this isn't just nerd speak - there's a reference model for data networking called the "OSI 7 Layer Model" which is widely adopted, for anyone who wants to research the detail further.)
Speedtests simply copy a measured amount of data over a measured time and compute a statistical average. In the same way as the trip computer in a car computes average speed without taking any account of the prevailing road traffic conditions, speed limits, weather, road works, junction count (and complexity), the Saturday morning Ikea queue or anything else. It's the roughest of rough guides. Speedtest certainly has its uses, but it is to data networking what a "wet finger in the air" (versus a thermometer) is to temperature measurement.
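For completeness, that whole measurement fits in a dozen lines of Python. The URL is a hypothetical placeholder (point it at any big test file) - and note there's nothing in here that could see duplex modes, FEC, retransmits or management traffic even if it wanted to:

```python
import time
import urllib.request

# A speed test reduced to its essence: move a known amount of data,
# time it, divide. Everything below the top of the protocol stack
# (duplex modes, FEC, retransmits, management frames) is invisible here.
URL = "https://example.com/testfile.bin"  # hypothetical placeholder URL

start = time.monotonic()
with urllib.request.urlopen(URL) as response:
    payload = response.read()
elapsed = time.monotonic() - start

mbps = len(payload) * 8 / elapsed / 1e6
print(f"{len(payload)} bytes in {elapsed:.2f}s = {mbps:.1f} Mbps average")
```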