If it's anything like the enterprise kit I installed a few years ago that had "band steering," it basically worked thus: when a client first tries to connect on the 2.4GHz band, the AP ignores it in the hope it will give up and try "somewhere else," i.e. the 5GHz band (or another AP). If it tries again, it ignores it again, but on the third attempt it lets it in anyway. Newer kit has a kind of "hint" mechanism (the 802.11k/v assisted-roaming standards) whereby APs can tell a client "you might be better off talking to X." In extremis, some kit will let you ban designated clients from given APs, bands, SSIDs etc., but that's a pretty brutal approach and not very helpful to your average SOHO deployment.
Internet speed tests are not a very useful tool for assessing local links (be they wired or Wi-Fi), as an Internet speed test exercises the entire pathway between the source and sink devices, which effectively means it tests the slowest "hop" in that path. Usually that is your ISP link, so it isn't "stressing" the local links to anywhere near their maximum capacity.
If you have a couple of devices available locally, you could run up your own speed test server on a PC at home, then test against that instead of an Internet-based test site, thus taking the ISP link (and everything beyond it) out of the equation. NetIO and iPerf (both free) are our favourite tools for doing so in these parts. Ideally you run the "server" programme on a wired PC.
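For example, with iPerf (iperf3 being the current version) installed on both machines, it's typically just two commands - the IP address below is made up, substitute the wired PC's actual address:

On the wired PC: iperf3 -s
On the Wi-Fi device: iperf3 -c 192.168.1.10

The first command starts the server listening; the second runs a roughly ten-second test towards it and prints the throughput. Adding -R on the client reverses the direction (server sends, client receives) so you can test both ways.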
I am fond of saying that such speed tests don't actually test the "speed" of anything; what they do is send a measured amount of data, time how long it takes and compute a statistical average. As such, they don't take any account of other traffic on the network (amongst many other factors that could affect the results). Thus you want to run them a few times and look at the average. Even then it's something of a "wet finger" metric. For example, if testing (say) 100Mbps Ethernet, we don't expect (say) NetIO to come up with the same number every time, and because of things like protocol overheads and so forth we don't expect to get the full 100Mbps - we tend to be looking for order-of-magnitude indications and trend. For example, if we tested a gigabit Ethernet link and got "only" 89Mbps, we'd suspect the link hasn't actually come up at gigabit (i.e. it has negotiated down to 100Mbps) and investigate. We wouldn't worry that our gigabit link "only" tested at (say) 872Mbps - that's fine; for a quick and dirty test, it's in the right ballpark.
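If you're curious what such tools are doing under the hood, here's a rough Python sketch of the idea - purely illustrative, not a substitute for NetIO or iPerf; the port number and the 100MB payload are arbitrary choices of mine:

import socket
import sys
import time

PORT = 12345                 # arbitrary free TCP port for this sketch
CHUNK = 64 * 1024            # send/receive in 64KB lumps
TOTAL = 100 * 1024 * 1024    # push 100MB through the link and time it

def server():
    # Run this on the wired PC: accept one connection, time the incoming
    # data, then report an average rate in Mbit/s.
    with socket.create_server(("", PORT)) as srv:
        print(f"listening on port {PORT} ...")
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
            mbps = received * 8 / elapsed / 1_000_000
            print(f"{received} bytes from {addr[0]} in {elapsed:.2f}s -> ~{mbps:.0f} Mbit/s")

def client(host):
    # Run this on the Wi-Fi device: blast TOTAL bytes at the server.
    buf = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
    print(f"sent {sent} bytes to {host}")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) > 2 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: roughtest.py server | roughtest.py client <server-ip>")

Note that it spits out one number averaged over the whole transfer - it tells you nothing about what else was using the network at the time, which is exactly the point being made above.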
Wi-Fi is fundamentally an "only one thing can transmit at a time" technology - the more "things" there are (including the neighbours') and the more data they want to transmit, the more competition there is for "air time." Thus you could run three speed tests in a row, and if a neighbour or one of the kids kicks off a download at the same time, it could hit your test results. Not to mention all the interference sources. It's fickle.
Wi-Fi, I'm afraid, is "just like that" - the transmission medium, i.e. the radio waves, doesn't "belong" to anyone and everyone is entitled to use it. Essentially, we all have to "play nice together." (There are mechanisms built into the standards to enforce this.) I live in a block of flats, all the neighbours have Wi-Fi too, and finding a radio channel all to myself is impossible.
So, particularly for Wi-Fi, run the tests quite a few times and at different times of day, and look for the trend rather than absolute numbers or the occasional aberration. If at all possible, use the likes of NetIO or iPerf for testing your local links and leave the Internet speed tests for testing your Internet service.