It's certainly worth a test to see if eliminating your 10/100 switch from the pathway makes any difference. (Of course, if that switch is not in the pathway between source and sink devices - NAS and media players - it'll make no difference.)
Ethernet is a case where throughput scales almost directly with link rate - i.e. a 1000Mbps link should have ten times the capacity of a 100Mbps link. The time it takes packets to pass "through" the switch proper is almost negligible, but the time to transmit data to/from the switch has an impact. If one thinks of it in terms of "time" rather than data volume, it takes pretty much 1/10th of the time to transmit any given data packet over a 1000Mbps link compared with a 100Mbps link.
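To put some illustrative numbers on that, here's a quick back-of-the-envelope sketch in Python (the 1518-byte frame is just the standard Ethernet maximum, chosen for illustration):

```python
# Rough serialization ("wire") time for one full-size Ethernet frame.
FRAME_BYTES = 1518                  # standard maximum Ethernet frame - illustrative choice
FRAME_BITS = FRAME_BYTES * 8

for rate_mbps in (100, 1000):
    seconds = FRAME_BITS / (rate_mbps * 1_000_000)
    print(f"{rate_mbps:>4} Mbps: {seconds * 1e6:.1f} microseconds on the wire")
```

That prints roughly 121 microseconds per frame at 100Mbps versus roughly 12 at 1000Mbps - the ten-fold factor, as advertised.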
Most switches in the SOHO realm use a "store and forward" paradigm (rather like sorting offices in the postal service.) To progress through a switch, a packet takes some time to ingress, thence the switch decides which port the packet needs to egress through, then the packet is despatched through the appropriate egress port(s) - there could be more than one for broadcasts and multicasts.
The time taken to decide which port to egress through is negligible (a really good switch will have decided this before it's even finished receiving the packet, as the addresses needed to make the decision are at the start of the Ethernet frame format.) But the ingress/egress times are significant, so if we shorten them (by making the links faster) we get a performance boost.
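A rough sketch of why shortening ingress/egress helps - note the lookup time here is a made-up nominal figure, since it varies from switch to switch:

```python
# Store-and-forward: receive the whole frame (ingress), look up the egress
# port, then retransmit the whole frame (egress). The lookup figure below
# is a nominal assumption; real switches vary.
FRAME_BITS = 1518 * 8
LOOKUP_SECONDS = 1e-6               # assumed forwarding-decision time

def hop_latency(rate_bps):
    wire_time = FRAME_BITS / rate_bps       # paid once on ingress, once on egress
    return wire_time + LOOKUP_SECONDS + wire_time

for rate_bps in (100e6, 1000e6):
    print(f"{rate_bps / 1e6:>4.0f} Mbps links: {hop_latency(rate_bps) * 1e6:.1f} us per switch hop")
```

Even with the lookup thrown in, the hop time is dominated by the two wire times - which is exactly the bit that faster links shorten.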
It's also worth mentioning that each Ethernet link is of finite capacity - the more competition there is for that capacity, the more likely you are to experience congestion. By increasing the "speed" of a link from 100 to 1000Mbps, you increase the capacity about ten-fold, so paradoxically the "benefit" of the speed increase may not be the speed in and of itself (though that's welcome) but the capacity increase (we call this "bandwidth" in the business.) This can be particularly significant on the "uplink" between switches and/or between switches and router: if some particular path is constrained to 100Mbps, then increasing it to 1000Mbps may be a big help, even if all the end stations downstream of the switch are "only" 100Mbps.
For example, if I've got a switch with 4x 100Mbps end stations and a 100Mbps uplink to the rest of the network, and all those end stations are going at it full chat, then the 100Mbps uplink doesn't have the bandwidth to cope, and increasing it to 1000Mbps will "fix" the problem (or possibly move it somewhere else.) But if each of the 4 end stations "only" consumes a quarter of its capacity - for illustration, let's say the equivalent of 25Mbps - then a 100Mbps uplink would be (just) adequate. Such is the black art of network planning - predicting traffic levels is our Nemesis.
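Spelling out that arithmetic (the per-station figures are just the illustrative numbers above):

```python
# Aggregate downstream demand vs. uplink capacity for the example above.
STATIONS = 4
UPLINK_MBPS = 100

for per_station_mbps in (25, 100):          # modest load vs. "full chat"
    total = STATIONS * per_station_mbps
    verdict = "fits (just)" if total <= UPLINK_MBPS else "congested - the uplink is the bottleneck"
    print(f"{STATIONS} x {per_station_mbps} Mbps = {total} Mbps over a {UPLINK_MBPS} Mbps uplink: {verdict}")
```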
Of course, Gigabit switches are cheap as chips these days - if eliminating your 10/100 switch makes the difference, then upgrading from 10/100 to 10/100/1000 (Gigabit) is a cheap and simple fix.
Gosh, this ended up being a longer post than I planned.