Posts tagged WCDMA

AT&T 4G LTE in Tucson, AZ – Coming Before End of 2012

It has been what seems like an eternity since I wrote a bit about Verizon 4G LTE coming to Tucson, AZ. Since then, the network has been deployed and working just fine, and has settled into my mental take-it-for-granted state. Cricket has since lit up their own LTE network on AWS (1700/2100 MHz), and next up is AT&T, which recently announced details about its LTE deployment for a bunch of markets before the end of this calendar year. I wrote about the AT&T LTE news at a high level at AnandTech; the announcement comes, not so coincidentally, a week before the next iPhone announcement in an attempt to head off lots and lots of LTE-related churn.

I’m burying the lede a bit, but before the end of 2012 AT&T will finally have LTE lit up in my part of the world. There’s a relevant press release here which is relatively light on specifics – there’s no outline of which parts of town will get LTE, or whether the rollout will include surrounding areas. I guess we can only hope that they mean the greater metro area. I’ve asked a few of my sources for a better timeline, but can only say that LTE should be lit up before December.

I hope it goes without saying, but LTE (3GPP Long Term Evolution) is completely different from the earlier announcements AT&T made about “4G” coming to Tucson in May 2011. That was really just a deployment of HSPA+ with up to 16QAM on the downlink (HSDPA 14.4) and some additional WCDMA carriers for capacity reasons. I’m pretty pleased with the state of AT&T WCDMA in town – I see around 2-3 carriers on PCS (1900 MHz) and what I consider very good peak speeds.

AT&T Spectrum Holdings in Pima County (As of Sept. 7, 2012)

Since AT&T LTE doesn’t use the same channel bandwidth everywhere, it’s worth noting that in this particular market (Pima County), AT&T can run 10 MHz FDD-LTE on Band 17 (Lower 700 MHz B and C blocks), and 5 MHz FDD-LTE on AWS (1700/2100) when the time comes. I haven’t seen AT&T enable any LTE on AWS quite yet; that is likely coming at some later date, after the rollout is closer to completion, or as a way to mitigate loading in the future.
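For a rough sense of why those channel bandwidths matter, here’s a back-of-the-envelope Python sketch of peak downlink rates from standard LTE resource block counts – my own math, assuming 2x2 MIMO and 64QAM, not anything AT&T has published:

```python
# Back-of-the-envelope LTE peak rate math (my own sketch, not AT&T figures).
# A resource block is 12 subcarriers wide; with a normal cyclic prefix there
# are 14 OFDM symbols per 1 ms subframe.

RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}  # MHz -> RBs

def raw_peak_mbps(bandwidth_mhz, mimo_streams=2, bits_per_symbol=6):
    """Raw PHY rate before control/reference-signal overhead (64QAM = 6 bits)."""
    rbs = RESOURCE_BLOCKS[bandwidth_mhz]
    resource_elements_per_sec = rbs * 12 * 14 * 1000
    return resource_elements_per_sec * bits_per_symbol * mimo_streams / 1e6

print(raw_peak_mbps(10))  # Band 17 here: ~100 Mbps raw, ~73 Mbps usable
print(raw_peak_mbps(5))   # 5 MHz on AWS: ~50 Mbps raw, roughly half that usable
```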

Mobile Phone Signal Bars – Thoughts

Something that’s bugged me for a long time is how crude and arbitrary signal bars on mobile phones are. With a few limited exceptions, virtually every phone has the exact same design: four or five bars in ascending order by height, which correspond roughly to the signal strength perceived by the radio stack.

Or does it? Let me just start by saying this is an absolutely horrible way to present a quality metric, and I’m shocked that, years later, it is still essentially the de facto standard. Let me convince you.

It isn’t 1990 anymore…

Let’s start from the beginning. The signal bar analogy is a throwback to times when screens were expensive, physically small, monochromatic if not 8 shades of grey, and anything over 100×100 pixels was outrageously lavish. Displaying the actual RSSI (Received Signal Strength Indicator) number would’ve been confusing for consumers, varying between 8 already hard-to-distinguish shades of grey wouldn’t have communicated much, and making one bar breathe in size would have sacrificed too much screen real estate.

It made sense in that context to abstract the signal quality visualization into something that was both simple, and readable. Thus, the “bars” metaphor was born.

Since then, there have been few if any deviations from that design. In fact, the only major departure thus far has been Nokia, which has steadfastly adhered to a visualization that makes sense:

Ascending Strength Bars

Another Example

Namely, their display metaphor is vertically ascending bars that mirror call quality/strength. This makes sense because it strikes a good balance between screen use and communicating the quality in an easy-to-understand fashion. Moreover, they have 8 levels of signal, from 0 to 7 bars showing. Nokia should be applauded for largely adhering to this vertical format. (In fact, you could argue that the reason nobody has adopted a similar metaphor is that Nokia has patented it, but I haven’t searched around.)

It’s 2010, and the granularity of the quality metric on most phones is arbitrarily limited to 4 or 5 levels at best.

Better designs?

An optimal design balances understandability with level of detail. On one hand, you could simply display the RSSI in dB; on the other, you could sacrifice all granularity and report something boolean: “Can Call” Yes/No.

Personally, I’m waiting for something that either leverages color (by sweeping through a variety of colors corresponding to signal strength) or utilizes every pixel of available width to display the signal strength in a much more analogue way.

Excuse the crudity of this rendering

Green and red are obvious choices for color, given their nearly universal meanings of OK and OH NOES, respectively. Something that literally takes advantage of every pixel, breathing across the full width instead of arbitrarily limiting itself to just 4 or 5 levels, also wouldn’t be hard to understand.
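As a purely hypothetical sketch of the color idea, here’s how a continuous red-to-green sweep might map onto RSSI; the -113/-51 dBm endpoints are my own assumption for a usable reporting range, not anything standardized:

```python
# Hypothetical sketch: map RSSI onto a continuous red-to-green sweep instead
# of 4-5 discrete bars. The -113/-51 dBm endpoints are my own assumption for
# a usable WCDMA reporting range; nothing here is a standard UI.

RSSI_FLOOR = -113.0  # dBm: essentially unusable
RSSI_CEIL = -51.0    # dBm: essentially perfect

def rssi_to_rgb(rssi_dbm):
    """Return an (r, g, b) tuple sweeping red (weak) to green (strong)."""
    clamped = min(max(rssi_dbm, RSSI_FLOOR), RSSI_CEIL)
    frac = (clamped - RSSI_FLOOR) / (RSSI_CEIL - RSSI_FLOOR)
    return (int(255 * (1 - frac)), int(255 * frac), 0)

print(rssi_to_rgb(-60))   # strong: mostly green
print(rssi_to_rgb(-105))  # weak: mostly red
```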

Fundamentally, however, the bars still have completely arbitrary meaning. What constitutes maximum “bars” on one network and device has a totally different meaning on another device or carrier. Even worse, comparing the same visual indicator across devices on the same network can often be misleading. For example, over the past few months I’ve made a habit of switching between the actual RSSI and the resulting visualization, and I’ve noticed that the iPhone seems to have a very optimistic reporting algorithm.

There’s an important distinction to be made between the way signal is reported for WCDMA versus GSM as well:

First off one needs to understand that WCDMA (3G) is not the same thing as GSM (2G) and the bars or even the signal strength can not be compared in the same way, you are not comparing apples to apples. The RSCP values or the signal strength in WCDMA is not the most important value when dealing to the quality of the call from a radio point of view, it’s actually the signal quality (or the parameter Ec/No) that needs also to be taken into account. Source

That said, the cutoff for 4 bars on WCDMA seems to be set quite generously – 4 bars shows down to around -100 dB. 3 bars seems to kick in around -103 dB, 2 bars around -107 dB, and 1 bar anything below that. Even then, I’ve noticed that the iPhone seems to run a weighted average, preferring to gradually decrease the report instead of allowing the sharp declines that actually occur.
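In code, the behavior I’m describing would look something like this – the thresholds are my observations from above, and the smoothing factor is a guess, since Apple doesn’t document the algorithm:

```python
# Sketch of the bar-mapping behavior described above. Thresholds are my
# observed WCDMA cutoffs; the smoothing factor is a guess at the weighted
# average the iPhone appears to apply - Apple doesn't document any of this.

BAR_CUTOFFS = [(-100, 4), (-103, 3), (-107, 2)]  # (dB, bars)

def rssi_to_bars(rssi_db):
    for cutoff, bars in BAR_CUTOFFS:
        if rssi_db >= cutoff:
            return bars
    return 1

def smoothed(readings, alpha=0.2):
    """Exponentially weighted average: the displayed value lags sharp drops."""
    avg = readings[0]
    for r in readings[1:]:
        avg = alpha * r + (1 - alpha) * avg
    return avg

readings = [-95, -95, -110, -110, -110]   # signal falls off a cliff...
print(rssi_to_bars(readings[-1]))         # instantaneous: 1 bar
print(rssi_to_bars(smoothed(readings)))   # smoothed: still shows 3 bars
```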

Signal in dB

Use dB if you’re not averse to math

What you’re reading isn’t really dBm, dBmV, or anything really physical, but rather a quality metric that also happens to be reported in dB. For whatever reason, most people are averse to understanding dB; the most important thing to remember is that 3 dB corresponds to a factor of 2, since 10^(3/10) ≈ 2. Thus, a change of -3 dB means that your signal has halved in power/quality.

The notation dBm is referenced to 1 mW. Strictly speaking, to convert a signal in mW to dBm:

Signal_{dBm}=10\log_{10}\left(\frac{Signal_{mW}}{1\, mW}\right)

Likewise, to convert a signal from dBm back to mW:

Signal_{mW}=10^{\left(\frac{Signal_{dBm}}{10}\right)}
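Both conversions, along with the factor-of-2 rule from earlier, are trivial to sketch in Python:

```python
import math

def mw_to_dbm(power_mw):
    """dBm is decibels referenced to 1 mW."""
    return 10 * math.log10(power_mw / 1.0)

def dbm_to_mw(power_dbm):
    return 10 ** (power_dbm / 10)

print(mw_to_dbm(1))                      # 0.0 dBm: 1 mW by definition
print(dbm_to_mw(-100))                   # 1e-10 mW: a typical weak RSSI
print(dbm_to_mw(-97) / dbm_to_mw(-100))  # ~2.0: a +3 dB step doubles the power
```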

But even directly considering the received signal power, or a quality metric derived from SNR, isn’t the full picture.

In fact, most of the time, complaints that center around iPhones failing to make calls properly stem from overloaded signaling channels used to set up calls – situations where, even though the phone is in a completely acceptable signal area, the node is simply too congested. So, as an end user, you’re left without the quality metrics you need to judge whether you should or shouldn’t be able to complete a data/voice transaction. Whether that transaction succeeds isn’t purely a function of client-tower proximity; it’s also a function of node congestion.

Carriers have a lot to gain from making sure their users are properly informed about network conditions, both so they can make educated decisions about what to expect in their locale, and so they can properly diagnose what’s going on when the worst happens. Perhaps worse, carriers have even more to gain from misreporting or misrepresenting signal as being better than reality. Arguably, the cutoffs I’ve seen on my iPhone 3GS are overly optimistic and compressed into ~13 dB. From my perspective, as soon as you’re below about -105 dB, connection quality is going to suffer on WCDMA; however, that shows up as a misleading 3-4 bars.

Conclusions:

What we need is simple:

  1. Transparency and standardization of reporting – Standardize a certain visualization that is uniform across technology and devices. Choose something that makes sense, so customers can compare hardware in the same area and diagnose issues.
  2. Advanced modes – For those of us that can read and understand the meaning of dB and real quality metrics from the hardware, give the opportunity to display it. Hell, perhaps you’ll even encourage some people to delve deeper and become RF engineers in the future. It’s annoying to have to launch a Field Trial application every time we want to know why something is the way it is.
  3. Leverage recent advances in displays – Limiting display granularity to 4 or 5 levels doesn’t make sense anymore; we aren’t constrained by tiny monochromatic screens.
  4. Tower load reporting – Be honest with subscribers and have the tower report some sort of quality metric/received SNR of its own, so we know which side of the link is messed up. If a node is congested, tell the user. Again, people are more likely to be happy if they’re at least made aware of the link quality rather than left in the dark. (A rough sketch of what such a report could look like follows below.)
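To make items 1, 2, and 4 concrete, here’s a hypothetical sketch of what a standardized signal report might expose – nothing like this exists today, and every field and name is my own invention:

```python
# Hypothetical: what a standardized signal report could expose to users.
# Nothing like this exists today; every field and name here is invented
# purely to illustrate the proposals above.

from dataclasses import dataclass

@dataclass
class SignalReport:
    rssi_dbm: float       # raw received signal strength (advanced mode)
    ec_no_db: float       # WCDMA signal quality, not just strength
    bars: int             # standardized 0-7 visualization, same on every device
    cell_load_pct: float  # congestion reported by the tower itself

    def user_summary(self):
        if self.cell_load_pct > 90:
            return "Strong signal, but the tower is congested - expect delays."
        if self.rssi_dbm < -105:
            return "Weak signal - calls and data may suffer."
        return "Signal and tower load look fine."

report = SignalReport(rssi_dbm=-98.0, ec_no_db=-9.0, bars=5, cell_load_pct=95.0)
print(report.user_summary())  # congestion, not proximity, is the problem here
```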