I’m lucky enough to preside over the premier event in wireless networking, Wireless Field Day. As part of that event, I often get to witness cool new product and technology introductions, but the debut of 802.11ac Wi-Fi at WFD3 was perhaps the most notable. Unlike breathless press releases and premature product introductions, the Wireless Field Day delegates were treated to a real look at 802.11ac in action!
Spectrum Analysis of 802.11ac with MetaGeek
The first “ooh-aah” 802.11ac moment at WFD3 was when MetaGeek showed off a spectrum capture in their Chanalyzer Pro product.
As demonstrated by Trent Cutler, Chanalyzer displayed the full 80 MHz channel utilization of current 802.11ac gear, which takes up a substantial chunk of the 5 GHz spectrum available for Wi-Fi.
An Introduction by Aruba Networks
Aruba Networks spent some time presenting background on the realities of 802.11ac and took questions from the delegate panel.
According to Aruba’s Lane, 802.11ac clients could come as soon as the end of 2012, but we shouldn’t expect much use until 2014 arrives. Lane also points out that we shouldn’t expect 7 Gbit Wi-Fi (the maximum theoretical performance) any time soon since it would require some serious hardware to pull it off. In fact, he suggests that 8-stream 802.11ac might never appear!
One of the major issues with 2.4 GHz Wi-Fi is the fact that there are only three non-overlapping channels. This limits the performance of today’s devices in crowded areas. Although 5 GHz has much more spectrum, it will be gobbled up by 80 MHz 802.11ac channels. In fact, the apparent wide-open spaces in 5 GHz actually contain just one non-overlapping 160 MHz channel in the US! There are five 80 MHz channels, however. See Peter’s discussion at the 3 minute mark for a useful diagram!
On the positive side, multi-user MIMO in 802.11ac promises to allow parallel downlinks to devices like smartphones, improving sharability and performance for real-world users.
Aruba expects 802.11ac to work best for close-in clients (about half the current range), delivering roughly 600 Mbps to 1.3 Gbps of data rate for PCs, 433 Mbps for smartphones, and 400 Mbps for tablets. The difference relates to the channel width and number of streams, with smartphones limited to a single stream.
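The arithmetic behind those figures can be sketched as a quick calculation. The per-stream values below are the commonly quoted 802.11ac PHY maxima (highest modulation with a short guard interval), not guaranteed throughput:

```python
# Commonly quoted per-stream 802.11ac PHY maxima in Mbps (top MCS,
# short guard interval). Real-world rates will be lower.
PER_STREAM_MBPS = {40: 200.0, 80: 433.3, 160: 866.7}

def max_rate_mbps(channel_mhz, spatial_streams):
    """Peak PHY rate for a given channel width and spatial-stream count."""
    return PER_STREAM_MBPS[channel_mhz] * spatial_streams

print(max_rate_mbps(80, 1))   # single-stream smartphone: ~433 Mbps
print(max_rate_mbps(80, 3))   # three-stream PC: ~1300 Mbps
print(max_rate_mbps(160, 4))  # 160 MHz, 4 streams: ~3.5 Gbps
```

The same multiplication explains why a single-stream phone tops out around a third of what a three-stream laptop can reach on the same channel.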
Cisco Demonstrates 802.11ac
Cisco provided the biggest gasp of WFD3 when they actually turned on an 802.11ac network right there in front of the delegates! They used a prototype 802.11ac Wave 1 Module for the AP3600 in this demo. This will be released in the first calendar quarter of 2013.
802.11ac will come in at least two “Waves”. Wave 1 (in 2013) will be limited to 1.3 Gbps with single-user MIMO, 80 MHz channels, and 3 spatial streams. Wave 2, expected in 2014, will up this to 3.5 Gbps with 4 spatial streams, 160 MHz channels, and multi-user MIMO.
Like Aruba, Cisco will focus on 40 and 80 MHz channels for now and both are quite interested in multi-user MIMO since it helps with the masses of user devices everyone expects to be connecting to Wi-Fi in the near future.
The demo showed 802.11ac using 80 MHz on channel 36 and passing about 550 Mbps of data. This was impressive since many other devices were in the room using overlapping channels and impacting the demo. Cisco claims to have reached 700 Mbps in shielded rooms.
Gregor Vucajnk, one of the WFD3 delegates, even made his own spectrum capture of the Cisco demo, calling it “the beast!”
Although “7 Gigabit performance” makes for great headlines, the real advance of 802.11ac will be better throughput for more clients. As we move to 5 GHz, 802.11ac will support more simultaneous client connections with MIMO stability. Although performance will improve to 400+ Mbps, users will likely never see multi-gigabit throughput.
For some more on 802.11ac, I recommend Tom Carpenter’s background post at CWNP and Tom Hollingsworth’s thoughts on the rush.
Thanks to Jennifer Huber for the photo of Gregor Vucajnk’s laptop (and arm!).
“expect 7 Gbit Wi-Fi (the maximum theoretical performance)”
In the future it would be helpful to avoid the term “theoretical rate” because it is misleading to the ignorant and completely useless to those who understand the issues. There are TWO “theoretical” numbers of interest.
(a) The PHY rate. This is the rate at which bits can be sent over the air. It’s essentially the bandwidth, times the number of bits per Hz (that is, the modulation, whether BPSK, QAM64, or whatever), times a factor incorporating the error-correction overhead (which can range from as high as, say, 5/6 in good conditions to as low as 1/2 in bad conditions), times an OFDM efficiency factor (incorporating the number of OFDM carriers that can’t be used because they’re serving as pilots, blanked to prevent interference, or whatever) of around 9/10 or so (I don’t know the exact value for 802.11ac), times another efficiency factor for the cyclic prefix and various obligatory guard intervals.
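That chain of multiplications can be written out directly. The numbers below are illustrative placeholders in the spirit of the description above, not exact 802.11ac constants (in particular the two 0.9 efficiency factors are rough guesses):

```python
def phy_rate_bps(bandwidth_hz, bits_per_hz, coding_rate,
                 subcarrier_efficiency, guard_efficiency):
    """Raw over-the-air bit rate as the product of the factors above."""
    return (bandwidth_hz * bits_per_hz * coding_rate *
            subcarrier_efficiency * guard_efficiency)

# Illustrative: 80 MHz channel, 256-QAM (8 bits/Hz), 5/6 coding,
# ~90% of subcarriers usable, ~90% of airtime outside guard intervals.
rate = phy_rate_bps(80e6, 8, 5 / 6, 0.9, 0.9)
print(f"{rate / 1e6:.0f} Mbps")  # about 432 Mbps for one spatial stream
```

Reassuringly, this rough product lands near the ~433 Mbps per-stream figure quoted for 80 MHz 802.11ac.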
OK, so there is this number, the raw rate at which bits can stream out. The problem is the number you ACTUALLY care about is
(b) The MAC rate. The issue is that the whole point of 802.11 (as opposed to, eg, the cell phone system) is that there is no centralized co-ordination of who gets to talk when, rather there is a media access protocol which involves stations listening to see if the channel is unused, backing off if they start to transmit at the same time as another station, and a whole panoply of hacks that have been added to this basic idea over time to make it more efficient.
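The listen-then-back-off behavior described above can be sketched in a few lines. This is a minimal illustration of the 802.11 DCF contention-window rule (double the window on each failed attempt, capped at a maximum), with typical default window sizes assumed:

```python
import random

def dcf_backoff_slots(retries, cw_min=15, cw_max=1023):
    """Pick a random backoff (in slot times) for the given retry count,
    doubling the contention window after each collision, as in 802.11 DCF."""
    cw = min((cw_min + 1) * (2 ** retries) - 1, cw_max)
    return random.randint(0, cw)

# First attempt draws from [0, 15]; every collision doubles the window,
# so heavily contended channels spend more and more time waiting.
for retries in range(4):
    print(retries, dcf_backoff_slots(retries))
```

The waiting these random draws represent is exactly the MAC overhead the commenter is complaining about: time the channel sits idle (or collided) rather than carrying bits.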
The bottom line is that, realistically, each new iteration of the spec starts off with a BEST CASE goodput, once MAC overhead is accounted for (i.e. all the time spent not transmitting because everyone is waiting for an unused channel, has collided, or whatever), of about 50% of the PHY throughput. (This is the best case of one transmitter, one receiver. Situations with one base station and many clients can be quite a bit worse in their total goodput because the MAC overhead gets worse with more clients.)
Each of the specs so far has added various optional features which, as they get implemented over time, tend to increase the best case scenario to about 2/3 of the PHY throughput; but most of us rarely see this glorious situation because either the base station we’re connecting to or the chips in our device are not new enough.
This is a very unfortunate situation, and it is incredibly frustrating how much throughput is wasted on MAC overhead. Unfortunately doing things this way, rather than having a central co-ordinator in the base station doling out access slots, appears to be a religious point of principle with the 802.11 team, and so even with 802.11ac we are still stuck in this rut.
The Ethernet folks EVENTUALLY gave this up and accepted the reality that fast Ethernet was NOT CSMA-CD but in fact a point-to-point protocol between workstation and hub; but the 802.11 group has so far refused to accept the same reality, in spite of the existence proofs present in WiMax and WCDMA/HSPA/LTE.