A while back, Verizon announced more detail about their plans to bring 4G LTE to the Tucson area. I’ve been paying hyper-close attention to each carrier’s 4G rollout plans in my area, primarily out of personal interest, secondarily because that means when phones launch I won’t have to keep driving to Phoenix to test them. The actual press release is here, if you want to read it. If you want the actual nugget of new information, however, just read this:
The 4G LTE network will extend through Tucson between Interstate 10 and Harrison Road, north to Sunrise Drive and south to Valencia Road, including the Tucson International Airport.
That’s a bit curious, actually, since the four thoroughfares specified don’t completely bound a region. Sunrise doesn’t extend all the way to Harrison, and Valencia is a bit discontinuous as well. Further, Interstate 10 bounds both the bottom and left sides of the box. I spent some time figuring out what that actually looks like, and created a Google Maps/Earth .kmz overlay and an image.
There’s a bit of interpolation going on here, namely assuming that Ina will bound the north part after Sunrise disappears, and that the jump from Sunrise to Harrison takes place like shown. It gives a decent impression of what the initial profile will be like, however.
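For the curious, an overlay like this is easy to generate programmatically. Below is a minimal sketch that writes a KML polygon; the corner coordinates are rough placeholders for the interpolated box described above, not the actual Verizon boundary:

```python
# Minimal sketch: generate a KML polygon approximating a coverage boundary.
# The coordinates are rough placeholders (lon, lat), NOT the real boundary --
# swap in your own corner points traced from the press release.
corners = [
    (-111.05, 32.32),  # roughly I-10 / Ina (NW corner, assumed)
    (-110.77, 32.32),  # roughly the Sunrise-to-Harrison jump (NE, assumed)
    (-110.77, 32.13),  # roughly Harrison / Valencia (SE, assumed)
    (-111.05, 32.13),  # roughly I-10 / Valencia (SW, assumed)
]
# KML wants the ring closed, so repeat the first corner at the end.
coords = " ".join(f"{lon},{lat},0" for lon, lat in corners + corners[:1])

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Approximate LTE boundary</name>
    <Polygon><outerBoundaryIs><LinearRing>
      <coordinates>{coords}</coordinates>
    </LinearRing></outerBoundaryIs></Polygon>
  </Placemark>
</kml>"""

with open("lte_boundary.kml", "w") as f:
    f.write(kml)
```

Zip the resulting .kml and rename it .kmz, and Google Earth will render it as the shaded region.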
A few things immediately stand out. First, there’s a bit of the east side that is genuinely clipped off. Second, South Tucson, between I-19 and I-10, doesn’t seem to make sense. It’s definitely a part of “Tucson,” yet the bounded region that Verizon stipulates would seem to preclude coverage making it down there. But perhaps the most head-scratchingly surreal part of the box is the fact that coverage will only extend to Sunrise.
Beyond and around the Sunrise/Skyline line are the Foothills. This is the region where it makes the most sense to deploy 4G LTE, given the kind of neighborhood it is. Extending only to Skyline (and not even a little beyond) seems like a completely missed opportunity. It’ll be interesting to see the actual coverage profile and when things start rolling out. As of right now, I can confirm that there isn’t 4G LTE anywhere in town – I’ve tested at the airport, downtown, the U of A, and throughout town with the HTC Thunderbolt, Pantech UML290, Samsung hotspot, and another unreleased datacard, thus far to no avail. Hopefully it comes soon. Verizon has 22 MHz of 700 MHz spectrum (Upper C Block) in most of Arizona, including Phoenix and Tucson. Only the far west part of Arizona has 34 MHz.
Today Verizon Wireless announced that the Tucson, AZ market will be included in the August 18 nationwide LTE rollout. Last week I heard from a good friend of mine with a Droid Charge that LTE was working in various parts of Tucson already, no doubt as Verizon tests individual eNodeBs for functionality.
At around 10:30 PM on August 17, Verizon 4G LTE went live in Tucson. Some people on Twitter sent me notifications about them seeing the service light up in areas that were even outside the circle painted by earlier press releases, so if you’re reasonably close to the boundary outlined in the press release, there’s a good chance LTE is active in your area.
One person tweeted a link to some speedtests, which show that things are indeed working:
Currently I don’t have any 4G LTE devices, but when I get another one for testing we’ll have a better picture of coverage and speeds in this market.
It’s live, and it’s fast! I’ve tested it thoroughly and published some results already in the context of the Droid Bionic review, which is only a UE Category 2 device. Soon as I get a UE Category 3 LTE device I will run some more tests and get a better picture.
Earlier today, I was reading yet another Digg article on Arizona’s immigration bill. For the most part, the articles and comments I’ve been reading have accused Arizonans of being either gun-toting crazies or racist white elites. I’m sure (read: certain) there’s some demographic of the state that probably is, but the entire state, people? What a way to typecast.
Anyhow, something about what I was reading there finally compelled me to write a bit, and what started small quickly ballooned to a huge comment I left on the post. I’m reproducing it below:
Epic Long Post:
It’s time we settled this illegal immigration dispute once and for all. I’m a native Arizonan, and I can honestly attest to how completely out of hand this situation is getting, and how completely misunderstood and misconstrued the current state of affairs is down here.
First of all, the majority of Arizonans support this legislation. Now, before you write us all off as being racially insensitive bigots and crazies, ask yourself what the rational reasons could be for passing such a sweeping piece of legislation. I’m shocked at the fact that this discussion is almost entirely centered around racial profiling (do you not show your ID for everything else already? Being pulled over? Getting on a plane? Buying something?) and the economy (albeit very superficially). The problem has gotten so immense that it literally has effects on almost every major issue.
To be honest, I don’t know how I feel about the bill, I just think it’s time this issue gets the serious attention it’s been sorely lacking for the greater part of two decades now. If nothing else, Brewer should be applauded for finally getting the border states in the limelight and *some* debate going, even if it’s entirely misplaced.
So just bear with me, put aside your misconceptions about the issue (because odds are, you don’t live here, you don’t follow the issue, and you’re probably not aware of the scope of the problem), and think.
1. The environmental aspect is being completely downplayed. This is something that has even the most liberal of the liberals supporting drastic immigration reform down here in the Sonoran Desert; the long and short of it is that Mexicans and drug traffickers are literally shitting all over the desert. The sheer volume of people crossing through these corridors in the desert, and the trash they bring with them, is absolutely stunning.
Don’t believe me? Look: http://www.tucsonweekly.com/tucson/trashing-arizona/Content?oid=1168857 Some of the locations in here are barely a 10 minute drive from where I sit now. Talk to me about the environment, and then look at the kind of mess being left out there. I don’t care what the solution is, this kind of dumping/shedding of stuff/massive ecological disaster cannot continue. It can’t. It’s literally devastating.
2. Drug trafficking. Has anyone even talked about this? It isn’t just about arresting working Mexican families, it’s about combating the completely out-of-control drug trafficking going on in our backyards. I’d say the main catalyst has more to do with security than with economic drain – in fact, there’s no arguing that the Mexicans living here are probably helping the local economy with their labor and efforts, rather than draining it.
In case you haven’t been following, the drug cartels are now nearly out of control in Mexico; it’s a problem of more immediate concern to us down here (in terms of security) than terrorism. Frankly, screw terrorism – I’m more worried about my family being shot or killed in the crossfire of this ongoing drug battle than about some terrorist setting off a bomb. Read about how insane this is: http://online.wsj.com/article/SB123518102536038463.html
“The U.S. Justice Department considers the Mexican drug cartels as the greatest organized crime threat to the United States.” http://en.wikipedia.org/wiki/Mexican_Drug_War You better believe it. People are being killed in Juarez, Nogales, everywhere. This is literally next door, folks! Not a continent away! Full scale political unrest! Talk about a threat to national security.
3. The murder of Rob Krentz has galvanized support for serious, strong, kid-gloves-off reform in the state. If you aren’t familiar, this high-profile incident involved the murder of a well-liked, peaceful Arizona rancher on his own property some weeks ago. http://azstarnet.com/news/local/crime/article_db544bc6-3b5b-11df-843b-001cc4c03286.html Marijuana was later found on the site, and there are definite drug trafficking ties, as the ranch lies along one of the numerous well-known migration and trafficking corridors that dominate southern Arizona.
I think when the history books are written, this guy’s shooting will be a real inflection point you can point to as leading to this kind of legislation. The sentiment for structured amnesty or some other kind of reform almost completely disappeared after a few similar incidents. Violent, often fatal crime near the border is literally making it a physical hazard to be down here.
Want more proof? Look no further than the concealed carry legislation that also just passed. It isn’t that we’re all a bunch of friggin psychos, it’s that we’re honest-to-god scared of being shot in our homes or out in the desert. I know I sure as hell wouldn’t go walking around out there when even the border patrol is worried about some parts of the desert even just an hour away.
4. Sure, the economy has something to do with it, absolutely. Hell, our economy is worse off than California’s, both overall and per capita: http://www.bizjournals.com/phoenix/stories/2008/02/25/daily29.html
The major public universities in the state are struggling for dollars to keep classes going, there are mandatory furloughs everywhere, and we’re paying for the rest in fees and still not breaking even. Hell yes, the economy has something to do with the perception that illegals are partly responsible (however true or untrue that may actually be – personally, I’d wager Mexican migrant labor has a net positive effect on the local economy; let’s be honest, profiling them as lazy people really *is* racism).
So those are a few good arguments I don’t really feel have been emphasized enough online, anywhere. Sit around and discuss the finer points of constitutional law and whether this is “racial profiling” if you like; honestly, that debate has already been beaten and played out enough.
Meanwhile, the problems down here are getting worse, and worse, and worse, and the very real drug war raging in the desert just continues to get scarier. I think this will be a very interesting and potentially huge states rights issue. In the meantime, some of the points I touched on (I hope) are good food for thought if you think that Arizona suddenly just decided to “go insane” or “lose our collective shit.”
I promise you, it isn’t the case.
See the update at the bottom for the real deal, I was partly wrong about some of the antennas in iPhone 4, though I was indeed right about the connector locations for the bottom, and partly for the top.
I’ve been following the iPhone 4G/HD leak saga like a hawk, and until now I haven’t been able to really add anything to all that’s been said. However, today, Gizmodo published pictures of the inside of the iPhone 4G hardware they obtained. They didn’t talk about much other than the absurd number of screws (upwards of 30), battery size, packaging, and potential ease of replacement. In fact, their primary aim seems to have been locating “APPLE” markings on the few ribbon cables inside, rather than picking apart Apple’s hardware choices. No doubt disassembly was challenging, potentially explaining why there aren’t any photos of the iPhone with the “connect to iTunes” lock screen (broken after disassembly?).
They neglected to remove the EMI shields atop the interesting bits on the PCB, which I would’ve considered the biggest news about the device. So we still know virtually nothing about the SoC, how much NAND flash is onboard, the RAM, the hugely important baseband (and whether this thing is potentially dual CDMA/GSM and UMTS, so it could work on Verizon/Sprint alongside T-Mobile and AT&T), the WiFi or Bluetooth choices (likely the same as the iPad, however), or anything else you’d expect to glean with those shields removed. In short, all the squares in this diagram from the iPhone 3GS are big question marks for the iPhone 4G. Still, we can make very good guesses about the likely choices.
However, being the RF-obsessed dude I am, I scrutinized the photos for some time looking for other interesting bits. I think I’ve found some interesting things.
First and foremost, I think that there are two discrete antenna assemblies in the phone. One at the top, one at the bottom (as you’d hold it in your hand).
Note that the phone in this picture has been rotated; the red circled area on the hardware is actually the bottom. Now, look at the two places I’ve marked with the white arrows. You can very clearly see a pigtail and standard radio connector on the top one, and a connector pad at the tip of the arrow at right. This is 100% certainly an antenna, and it’s also in the same region of the hardware (at the bottom) as the 3GS.
Above is what I’m talking about at 100% resolution.
Above shows the antenna before being removed, with the pigtail clearly connected to the mainboard PCB. We can make an educated guess that whatever is under the EMI shield next door is the baseband.
Now, compare and contrast to the iPhone 3GS’s ribbon/kapton antenna assembly:
And see it inside the black plastic holder (only the trailing ribbon connector is visible at bottom left):
If I’m not mistaken, the two connectors there are for discrete antennas inside, for cellular radio and WiFi/Bluetooth. I’m not infinitely familiar, but there only seems to be one antenna assembly in the 3GS at the bottom.
Now, in the iPhone 4G photos, there appears to be a possible second antenna at the top.
I’ve labeled the connector that I can make out. Given the similar black packaging (possibly housing the flex PCB like in the 3GS), it seems likely this is another antenna.
I’ll leave you to speculate about why Apple might potentially want two discrete cellular antennas in their next generation phone…
After looking through the FCC OET internal photos of a huge number of other dual CDMA/UMTS phone designs, all of which require only one antenna, I’m pretty sure the other top component is something less insidious. It’s entirely possible this is nothing more than a connector or some support structure, or perhaps it is indeed an antenna, but for WiFi (N?). Whatever the case, I’m completely uncertain what this thing is, or whether it’s part of the baseband. Obviously, the part at the bottom is an antenna, but the top part I’m more and more uncertain about.
We’ll see as time goes on and better pictures are made available what it is, but I’m not confident it’s an antenna anymore.
Of course, we now know the real deal with the iPhone 4. I was wrong about what the antennas were, but right about the connectors. Up at the top, if you scrutinize iFixit’s teardown, you can see a small gold pad right above a test junction for the WiFi/GPS/BT 2.4 GHz antenna. There’s a trace on the EMI shield which leads to a contact screw (gold, so it’s visible) leading directly to the antenna. So the connector for the 2.4 GHz antenna is up at the top near that seam.
For the UMTS/GSM antenna, the connector snakes across from the PCB to the left side of the phone facing up (facing down, it snakes to the right, like in this photo):
You can see the test point and connector at the left, the pigtail leading to the right across the EMI shield, and the gold screw which connects the whole deal to the aluminum antenna.
Of course, the interesting part is that this becomes the most active region of the antenna. It’s a monopole, rather than a dipole – in this configuration. The result is that for 1/4 wavelength, that part of the aluminum is very active at radiating RF. This is also the location your palm rests, interestingly.
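To put a rough number on that quarter wavelength, here’s a quick back-of-envelope calculation (plain Python; the frequencies are common cellular bands, not confirmed iPhone 4 band assignments):

```python
# Back-of-envelope: quarter wavelength of a monopole at common cellular bands.
C = 299_792_458  # speed of light, m/s

def quarter_wave_cm(freq_mhz: float) -> float:
    """Quarter wavelength in centimetres for a given frequency in MHz."""
    return C / (freq_mhz * 1e6) / 4 * 100

for f in (850, 900, 1900, 2100):
    print(f"{f} MHz -> lambda/4 = {quarter_wave_cm(f):.1f} cm")
```

At the 850–900 MHz bands, λ/4 works out to around 8–9 cm, so a stretch of the stainless band several centimetres long being the active radiator is entirely plausible.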
I’m going to talk about the real deal on AnandTech shortly, so stay tuned…
It’s live here now: http://www.anandtech.com/show/3794/the-iphone-4-review/1
While I was in Las Vegas for MIX10, I couldn’t suppress my inexplicable urge to run as many speedtests as I could muster. Of course, I was packing the usual iPhone 3GS on AT&T. Sadly, for nearly the entire visit, speeds were barely 250 kilobits/s down and 220 kilobits/s up – if I could even get the speedtest.net application to run. Take a look at the following:
This data is from 13 tests taken during my 3-day stay. They’re from 3G UMTS when it worked, and GSM EDGE when it didn’t – which was virtually the entire time. 3G was either slow or didn’t work at all; switching to EDGE was the only way to do anything.
How is this possible?
Now, it’s fair to say that some of this is sampling bias and the fact that I was at a conference, but even then, there’s no excuse. This is a city used to a huge flux of visitors in a short time for trade conferences. Frankly, I can only begin to imagine how overloaded networks are during major conferences like E3.
Take a look at the following plot of the average speeds for each day:
Can you spot which three days are the ones I’m talking about? Note that on the 16th, I couldn’t even get a test to run to completion; it just didn’t work. There’s nothing more to say about the issue than simply how bad this is. If this is the kind of performance AT&T users see and complain so vocally about in the San Francisco Bay Area and Manhattan, I can completely understand. Frankly, I can see no other reason for this kind of performance degradation than congestion.
Over spring break I spent an amazing – and busy – three days in Las Vegas at Microsoft’s MIX10. I got to see a complete platform reboot with Windows Phone 7 Series, some interesting news about IE 9, and most importantly got to meet some awesome people.
I’ve been writing a lot over that time with AnandTech, which I’ll wrap up here:
- First day MIX10 Windows Phone 7 Series Impressions – link
- Internet Explorer 9 Platform Preview – link
- Windows Phone 7 Series: The AnandTech Guide – link
- If you had to read any one of these, this would be the one. It’s over 8,000 words and, in my opinion, comprehensively wraps up the platform.
There were a couple hilarious quotes that I overheard at the conference, which I think I’ll just share briefly. Keep in mind this is at a development conference.
- “…and we call this checkbox driven development. We can do everything we want just with checkboxes”
- “…and we only had to write one line of code! Just one line, and we’re done!”
- But my favorite: “Can I use the back button for fire? What if I really really want to use the back button?” – immediately after a presentation about how the back button is reserved for going back.
I didn’t have much time this year to follow TED (In fact, when I first sat down to write this, it was still going on). To be honest, I usually watch the videos a few months afterward, once they’re all finally uploaded and the hype has died down. It’s easy to get caught up in how much certain talks are plugged compared to others, especially with how much live information leaks out over twitter.
But I did break that trend this year a bit. I noticed an intriguing project by Robert Scoble on a blog post of his involving taking photos of notes by the attendees and posting them to flickr. Intrigued, I expected to be wowed by the different creative and thoughtful methods employed which I could use myself for note-taking.
Imagine my disappointment, then, when I saw that most attendees were using their iPhones or BlackBerries, scraps of paper, nonstandard spiral-bound notebooks, or just generally chaotic methods for taking notes. I mean, aside from the now-famous mind-mapping note girl (photo here; I can’t look at it again because it makes my brain hurt and my teeth start gnashing), there really wasn’t anything TED-level-inspiring.
Let’s just break it down for a second:
- 34 pictures in the set
- Mobile devices: 9 – 26.5%
- iPhones: 7 – 20.6%
- BlackBerry: 2 – 5.9%
- Paper: 25 – 73.5%
- Notebooks (spiral or bound): 14 – 41.2%
- Mini Notebooks (or similarly sized): 6 – 17.6%
- Program/Scraps: 4 – 14.7%
- PowerPoint Handouts (Bill Gates): 1 – 2.9%
Generally, I abhor Excel plots, but this one does a good job communicating my point:
But that’s not all; of the iPhone note photos, virtually every single one used the built-in notes application. Yeah, the notes application that ships with the iPhone which lacks just about everything imaginable.
No Evernote love? No Google Documents love? That’s certainly surprising. Yet these attendees consider themselves movers and shakers? Definitely avant-garde? Perhaps ahead of the curve in adopting new tech? Sorry, virtually every one of you was thoroughly beaten by mind-map girl entirely by default, entirely because of her uniqueness factor. Even more surprising, the journalists in the photo set aren’t even using steno pads.
With the exception of Bill Gates (who obviously is using PowerPoint handouts for his presentation), there’s really no excuse.
Granted, this could entirely just be bad sampling on Scoble’s part. Whatever the case, it’s a unique opportunity to segue into how much I love the way I take notes.
OneNote – The best kept secret for organizing everything
Ok, those words aren’t entirely my own, but they’re the truth. Microsoft OneNote 2007 (and its predecessors) isn’t just about notes; it’s about collecting, organizing, searching, and making accessible just about anything and everything. You don’t need a tablet, and it isn’t just about text. I think it’s fair to say that OneNote is the best kept secret and most undiscovered part of Office 2007.
My freshman year of college, I decided that I wanted to try using it for all of my notes. At the time, I was intrigued by the notion of using a Samsung Q1 Ultra V, a UMPC, due to its tiny form factor and long battery life. That worked, but I’ve since moved on to a Latitude XT in favor of its active digitizer and capacitive multitouch screen. Regardless, I’ve used OneNote for virtually all my notes since, and it has numerous advantages over paper:
- My notes are searchable, entirely. Not just text in its native form either, but handwritten text from the tablet, images (it searches the images), and audio.
- I don’t have to carry around spiral bound notebooks that are heavy, or waste money on dead trees (hey, this is one aspect of my life that actually is green).
- I can annotate and take notes directly atop PDFs, PowerPoints, or whatever materials are being studied without having to print them beforehand. This is extremely useful as I can get anything into notes by printing it to OneNote.
- My notes can be (and are) backed up regularly. That’s something you can’t really do with paper notes, short of making copies or scanning every day.
- I can keep every year’s worth of notes in one place. Obviously, that’s a ton of stuff 3 years in. I think you’d be hard pressed to carry around your spiral bound notebooks every day.
- I can organize with sections, tabs, notebooks, and pages. The analogues to a notebook are obvious, but there are other things as well that make a lot more sense in the context of digital notes.
- Something which always comes in handy is being able to instantly send my notes to other people; I can make PDFs of pages, sections, or entire notebooks.
- Everything lives in one place: text notes, PowerPoints, images, clips of webpages, even files.
I honestly can’t see how it’d be possible to take notes electronically without OneNote at this point. Granted, there are a lot of alternatives, but I find that they either have show-stopping flaws or are otherwise unwieldy:
- Microsoft Word
- I see this one a lot in classes, and don’t even know where to start. Word is great as a word processing tool, but that’s about all. Sure, you can take notes, but they won’t be searchable (which is a huge drawback for me), and ultimately you’re constrained by this page-by-page model that lies at its core. Combining graphics with notes is possible, but hard. OneNote is almost like Word without pages.
- How the heck are you supposed to take equations down quickly in Word? If you’ve used the equation editor, you know what a lesson in frustration it is.
- Google Docs
- I think using Google Documents makes a lot of sense, especially given its online nature, but it seems just as difficult to manage with lots of media. Of course, the fact that you can access it anywhere is a huge plus.
- A while back on Slashdot I read a great article I could relate to about taking notes in class for science and engineering. It discussed what the optimal computerized note-taking suite was, given an emphasis on entering equations. Of course, LaTeX came up, along with its GUI-wrapped cousin LyX. I’m a big, big fan of LaTeX, especially for documents and other things, but I can’t see it being practical or fast enough for taking notes every day. Granted, there are people out there (like some of my crazier friends) who are faster at typing equations than writing them, but I find myself able to write faster.
- You run into the same problems that Word has here; you’re stuck managing files for each set of notes.
I’ve been meaning to try Evernote, and have heard great things about its integration across virtually every platform. It seems like the way to go, and I’d definitely like to try it out.
I guess the point that I’m trying to make is that there are so many better solutions than just using pen and paper or the default notes application that ships with most smartphones. Even though those are what you might grab for at first, you’re setting yourself up to be locked into two methods that leave much to be desired.
Bandwidth and Latency Data
I’ve always kind of been obsessed with bandwidth. I find myself constantly testing latency, bandwidth, and connection quality (mostly, in fact, through smokeping). Needless to say, that same obsession applies to my mobile habit, and especially given the often-congested perception of AT&T.
It sounds weird, but the two most-run applications on my iPhone are Speedtest.net Speed Test and Xtreme Labs SpeedTest. The Xtreme Labs test used to be my favorite, largely because of its superior accuracy and stability. As great as Speedtest.net’s website is for testing, the iPhone app continually fell short. Tests ended before throughput stabilized; often the test would start, but the data wouldn’t begin being calculated until a second later (skewing the average), or it’d just crash entirely. I could go on and on about the myriad problems I saw, which no doubt contributed negatively to perceptions of network performance.
A few months ago, I wrote a big review and threw it up on the App Store. In the review, I noted that being able to export data would be an amazing feature. At the time, I had emailed Xtreme Labs and asked whether I could get a sample of my speed test results for analysis (I have yet to hear back). On Feb. 2nd, Ookla finally got around to releasing an update to the Speedtest.net app; it included the ability to export data as CSV.
Since then, I’ve been using it exclusively. I’ve gathered a bit of data, and thought it relevant to finally go over some of it. This is all from my iPhone 3GS in the Tucson, AZ market, largely in the central area. I’ve gathered a relatively modest 76 data points. Stats follow:
These stats really mirror my perceptions. Speeds on UMTS/HSPA vary from extremely fast (over 4.2 megabits/s!) to as slow as 82 kilobits/s, but generally hang out around 1.2 megabits/s. On the whole, this is much faster than the average 600 kilobits/s I used to see when I was on Sprint across 3 different HTC phones.
Next, I became curious whether there was any correlation between time of day and down/up speeds. Given the sensitivity of cellular data networks to user congestion (through cell breathing, strain on backhaul, and of course the air link itself), I expected to see a strong correlation. I decided to plot my data per hour, and got the following:
Some interesting trends appear…
- I apparently sample at roughly the same time each day (given the large vertical lines that are evident if you squint hard enough). Makes sense because I habitually test after class, while walking to the next.
- There is a relatively large variation per day for those regular samples, sometimes upwards of a megabit.
- There does appear to be a rough correlation between time of day and bandwidth, but the fact that I’m moving around from cell to cell during the day makes it difficult to gauge.
- Upstream bandwidth is extremely regular, and relatively fast at that.
I’m still mentally processing what to make of the whole dataset. Obviously, I’m going to continue testing and gathering more data, and hopefully more trends will emerge. You can grab the data here in Excel form. I’ve redacted my latitude and longitude, just because my daily trends could be pretty easily extracted from those points, and that’s just creepy.
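For anyone wanting to reproduce the per-hour analysis from their own CSV export, here’s a minimal sketch. The column names (“Date”, “Download”) and the date format are assumptions – check them against the actual header of the app’s export before using this:

```python
# Sketch: bucket exported speedtest results by hour of day and average the
# downstream numbers. Column names ("Date", "Download") and the timestamp
# format are assumptions -- adjust to match the real CSV export.
import csv
from collections import defaultdict
from datetime import datetime

def mean_download_by_hour(path):
    """Return {hour_of_day: mean download} from a speedtest CSV export."""
    buckets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            when = datetime.strptime(row["Date"], "%Y-%m-%d %H:%M:%S")
            buckets[when.hour].append(float(row["Download"]))
    return {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}
```

The resulting dict drops straight into any plotting tool, and the same pattern works for upstream or latency columns.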
3G Bands – Where is the 850?
Lately I’ve been getting an interesting number of hits regarding the 850/1900 MHz coverage of AT&T here in Tucson.
To be honest, I’ve read a number of different things; everything from certainty that our market has migrated HSPA (3G) to 850 MHz, to that AT&T doesn’t even have a license for that band in Arizona. For those of you that don’t know, migrating 3G to the 850 MHz bands is favorable because lower frequencies propagate better through walls and buildings compared to the 1900 MHz bands. In general, there’s an industry wide trend to move 3G to lower frequencies for just that reason.
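The size of that propagation advantage can be roughed out with the free-space path loss formula. The sketch below captures only the frequency term (real-world building penetration differences are larger and messier), but it gives a feel for why carriers want the lower band:

```python
# Rough illustration: free-space path loss (FSPL) difference between 850 MHz
# and 1900 MHz at the same distance. Building penetration in the real world
# behaves differently, but the frequency term alone shows the trend.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

d = 1000  # 1 km from the tower
delta = fspl_db(d, 1.9e9) - fspl_db(d, 850e6)
print(f"1900 MHz loses about {delta:.1f} dB more than 850 MHz in free space")
```

That roughly 7 dB delta is independent of distance, since only the frequency term differs between the two calls.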
I’ve been personally interested in this myself for some time, and finally decided to take the time to look it up.
Maps, maps, maps…
The data I’ve found is conflicting. Cellularmaps.com shows the following on this page:
Note that the entire state of Arizona doesn’t have 850 MHz coverage/licensing.
Note that the 3G data coverage map is labeled ambiguously; HSPA coverage exists, but it could be on either 1900 or 850. However, what we do glean is that (at least according to GSM World) there is equal 850 and 1900 MHz coverage in Tucson and the surrounding area. This contradicts the earlier map.
Then you have maps like these, which are relatively difficult to decipher but supposedly show regions of 800-band coverage from Cingular and AT&T before the merger:
Finally, you have websites such as these that claim Arizona is only 1900 MHz.
So what’s the reality? Uncertain at this point.
The map given by cellularmaps.com is sourced from 2008, whereas the GSM world maps are undated, and ostensibly current. The other maps are also undated, but the majority consensus is that AT&T isn’t licensed to use 800 MHz in this market.
If anyone knows about some better resources or information, I’d love to see it.
Update – 3/24/2010
I finally spoke with someone at AT&T, and it turns out that my initial suspicions were correct – Arizona does not have the 850 MHz UMTS Band 5. It’s as simple as that.
Oh well, at least we know now!
If you’ve read my big post on the Zoneminder configuration I have at home, you’ll notice that I favored capture of JPEG stills over using MJPEG during initial configuration.
At the time, the reason was simple: I couldn’t make MJPEG work. I’ve now succeeded in doing so, and I understand why it didn’t work the first time.
I remembered reading something in the Zoneminder documentation about a shared memory setting resulting in capture at higher resolutions failing. Originally, when I first encountered the problem I decided that it was simply me getting something wrong with the path to the .mjpeg streams on the cameras, since I was more familiar with capture of jpeg stills from prior scripting.
However, I stumbled across some documentation here from another tinkerer, which also pointed to the memory sharing issue.
The problem is that the buffer of frames (usually between 50 and 100 per camera) must be contained in shared memory for processing. If the size of that buffer – roughly frames × width × height × bytes per pixel – exceeds the shared memory maximum, you’ll run into errors or see the camera status go yellow/orange instead of green. (It can get pretty confusing trying to troubleshoot based on those status colors unless you’re checking the logs… /doh)
In fact, the problem I was seeing was likely a direct result of the large capture image size of my Axis 207mW, as the docs cite it directly:
Note that with Megapixel cameras like the Axis 207mw becoming cheaper and more attractive, the above memory settings are not adequate. To get Zoneminder working with a full 1280×1024 resolution camera in full colour, increase 134217728 to, for example, 268424446
/facepalm. I really wish I had come across this the first time around. Either way, you’ll ultimately run into this problem with higher framerates, colour capture, or higher resolutions.
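To see why the defaults fall over, it helps to run the numbers. Here’s a rough back-of-the-envelope sketch; the 4-bytes-per-pixel figure and the 50-frame ring buffer are assumptions on my part, and the exact per-frame overhead varies by ZoneMinder version:

```python
# Rough estimate of the shared memory one ZoneMinder monitor needs.
def shm_bytes(width, height, ring_buffer_frames, bytes_per_pixel=4):
    """Approximate shared memory for one monitor's frame ring buffer.

    Assumes 32-bit colour storage (4 bytes/pixel); real usage adds a
    small amount of per-frame bookkeeping overhead on top of this.
    """
    return width * height * bytes_per_pixel * ring_buffer_frames

# Axis 207mW at full 1280x1024 colour with a 50-frame ring buffer:
need = shm_bytes(1280, 1024, 50)
print(f"{need / 2**20:.0f} MB")  # 250 MB -- already near the 256 MB mark
```

One megapixel camera alone nearly exhausts a 256 MB shmmax, which is why the documentation’s suggested values need to grow so aggressively.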
I followed the tips here, but doubled the values since the machine I’m running ZM on has a pretty good chunk of memory available.
The process is simple. You’re going to have to edit /etc/sysctl.conf to include the following somewhere:
# Memory modifications for ZoneMinder (kernel.shmmax = 512 MB in bytes; kernel.shmall is measured in 4 KB pages)
kernel.shmall = 33554432
kernel.shmmax = 536870912
Now, apply the settings with:
sudo sysctl -p
Which forces a reload of that file. Next, you can check that the memory parameters have been changed:
brian@brian-desktop:~$ cat /proc/sys/kernel/shmall
33554432
brian@brian-desktop:~$ cat /proc/sys/kernel/shmmax
536870912
Which matches what we set. You can also check with ipcs -l. Now restart ZoneMinder and you shouldn’t have any problems.
Motion JPEG Time!
Having made these changes, I was ready to finally explore whether MJPEG works! I went ahead and decided to use the MJPEG streams from my two types of cameras in place of the static video links. These are:
Linksys WVC54GCA: http://YOURIPADDY/img/video.mjpeg
Axis 207mW: http://YOURIPADDY/axis-cgi/mjpg/video.cgi?resolution=640x480&clock=0&date=0&text=0
I also discovered (by reading the manual) that there’s a handy utility on the Axis config page (under Live Video Config -> HTML Examples -> Motion JPEG) which generates the proper URL based on a handy configuration tool where you can select size, compression, and other options:
The idle load on the system has increased, as expected, but that’s partly because I raised the FPS limit to 10 (which seems reasonable) and enabled continual recording with motion detection (mocord).
I’m making a lot of tweaks as I get ready to transition everything onto a VM on a faster computer with much more disk space (on the order of 8 TB). If you’re interested in reading more about the Linux kernel shared memory settings, I found some good documentation:
Something that’s bugged me for a long time is how crude and arbitrary signal bars on mobile phones are. With a few limited exceptions, virtually every phone has the exact same design: four or five bars in ascending order by height, which correspond roughly to the perceived signal strength of the radio stack.
Or do they? Let me just start by saying this is an absolutely horrible way to present a quality metric, and I’m shocked that years later it’s still essentially the de facto standard. Let me convince you.
It isn’t 1990 anymore…
Let’s start from the beginning. The signal bar metaphor is a throwback to times when screens were expensive, physically small, monochromatic if not limited to 8 shades of grey, and anything over 100×100 pixels was outrageously lavish. Displaying the actual RSSI (Received Signal Strength Indicator) number would’ve been difficult and confusing for consumers, varying between 8 already-similar shades of grey would have been hard to read, and making one bar breathe in size would have sacrificed too much screen real estate.
It made sense in that context to abstract the signal quality visualization into something that was both simple, and readable. Thus, the “bars” metaphor was born.
Since then, there have been few if any deviations away from that design. In fact, the only major departure thus far has been Nokia, which has steadfastly adhered to a visualization that makes sense:
Namely, their display metaphor is vertically ascending bars that mirror call quality/strength. This makes sense because it’s an optimal balance between screen use and communicating quality in an easy-to-understand fashion. Moreover, they have 8 levels of signal, showing 0–7 bars. Nokia should be applauded for largely sticking with this vertical format. (In fact, you could argue that the reason nobody else has adopted a similar metaphor is that Nokia patented it, but I haven’t searched.)
It’s 2010, and the granularity of the quality metric on most phones is still arbitrarily limited to 4 or 5 levels at best. An optimal design balances understandability with level of detail: at one extreme, you could simply display the raw RSSI in dB; at the other, you could sacrifice all granularity and report a single boolean, “Can Call: Yes/No.”
Personally, I’m waiting for something that either leverages color (by sweeping through a variety of colors corresponding to signal strength) or utilizes every pixel of length for displaying the signal strength in a much more analogue way.
Green and red are obvious choices for color, given their nearly universal meaning for OK and OH NOES, respectively. Something that literally takes advantage of every pixel by breathing around instead of arbitrarily limiting itself to just 4 or 5 levels also wouldn’t be hard to understand.
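To make the color-sweep idea concrete, here’s a minimal sketch in Python. The -113 to -51 dBm endpoints are borrowed from the GSM RSSI reporting scale, and the linear red-to-green sweep is just one possible mapping, not any phone’s actual behavior:

```python
def rssi_to_rgb(rssi_db, worst=-113.0, best=-51.0):
    """Sweep linearly from red (weak signal) to green (strong signal).

    The worst/best endpoints here follow the GSM RSSI reporting range;
    a real implementation would tune these per air interface.
    """
    frac = (rssi_db - worst) / (best - worst)
    frac = max(0.0, min(1.0, frac))  # clamp out-of-range readings
    return (round(255 * (1 - frac)), round(255 * frac), 0)

print(rssi_to_rgb(-113))  # (255, 0, 0) -- pure red at the floor
print(rssi_to_rgb(-51))   # (0, 255, 0) -- pure green at the ceiling
```

Even this trivial mapping communicates hundreds of distinct levels instead of four or five, using no more screen space than a bar icon.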
Fundamentally, however, the bars still have completely arbitrary meaning. What constitutes maximum “bars” on one network and device has a totally different meaning on another device or carrier. Even worse, comparing the same visual indicator across devices on the same network can often be misleading. For example, the past few months I’ve made a habit of switching between the actual RSSI and the resulting visualization, and I’ve noticed that the iPhone seems to have a very optimistic reporting algorithm.
There’s an important distinction to be made between the way signal is reported for WCDMA versus GSM as well:
First off one needs to understand that WCDMA (3G) is not the same thing as GSM (2G) and the bars or even the signal strength can not be compared in the same way, you are not comparing apples to apples. The RSCP values or the signal strength in WCDMA is not the most important value when dealing to the quality of the call from a radio point of view, it’s actually the signal quality (or the parameter Ec/No) that needs also to be taken into account. Source
That said, the cutoff for 4 bars on WCDMA seems to be relatively low, around -100 dB; 3 bars appears around -103 dB, 2 bars around -107 dB, and 1 bar anything below that. Even then, I’ve noticed that the iPhone seems to run a weighted average, preferring to decrease the reading gradually rather than allowing sharp drops.
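Put as code, the mapping I’ve observed looks roughly like this; these cutoffs are my empirical estimates from flipping between Field Test and the bar display, not anything Apple documents:

```python
def iphone_wcdma_bars(rssi_db):
    """Map a reported WCDMA signal level to a bar count.

    Thresholds are the author's empirical observations on an
    iPhone 3GS, not an official specification.
    """
    if rssi_db >= -100:
        return 4
    if rssi_db >= -103:
        return 3
    if rssi_db >= -107:
        return 2
    return 1

print(iphone_wcdma_bars(-95))   # 4 bars
print(iphone_wcdma_bars(-105))  # 2 bars
```

Notice how the entire useful range gets squeezed into about 13 dB: everything from a perfect signal down to -100 dB reads identically as “full bars.”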
Use dB if you’re not averse to math
What you’re reading isn’t really dBm, dBmV, or anything truly physical, but rather a quality metric that happens to be reported in dB. For whatever reason, most people are averse to dB; the most important thing to remember is that 3 dB corresponds to a factor of 2. Thus, a change of -3 dB means that your signal has halved in power/quality.
The notation dBm is referenced to 1 mW. Strictly speaking, to convert a signal P given in mW to dBm:

P(dBm) = 10 · log10( P(mW) / 1 mW )

Likewise, to convert a signal from dBm back to mW:

P(mW) = 1 mW · 10^( P(dBm) / 10 )
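Both conversions are easy to sanity-check in a few lines of Python, including the factor-of-2 rule of thumb:

```python
import math

def mw_to_dbm(p_mw):
    """Convert power in milliwatts to dBm (referenced to 1 mW)."""
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm):
    """Convert dBm back to milliwatts."""
    return 10 ** (p_dbm / 10)

print(mw_to_dbm(1))     # 0.0 -- 1 mW is 0 dBm by definition
print(mw_to_dbm(2))     # ~3.01 -- doubling the power adds ~3 dB
print(dbm_to_mw(-100))  # 1e-10 mW -- a weak received signal
```

The last line shows why dB exists at all: nobody wants to reason about a received power of 0.0000000001 mW.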
But even directly considering the received power strength or the quality metric from SNR isn’t the full picture.
In fact, most of the time, complaints about iPhones failing to make calls stem from overloaded signaling channels used to set up calls, or situations where the phone is in a perfectly acceptable signal area but the node is too congested. As an end user, you’re left without the quality metrics you need to judge whether you should or shouldn’t be able to complete a data/voice transaction. The signal quality metric isn’t solely a function of client-tower proximity; node congestion matters just as much.
Carriers have a lot to gain from making sure their users are properly informed about network conditions; both so they can make educated decisions about what to expect in their locale, as well as to properly diagnose what’s going on when the worst happens. Worse, perhaps, carriers have even more to gain from misreporting or misrepresenting signal as being better than reality. Arguably, the cutoffs I’ve seen on my iPhone 3GS are overly optimistic and compressed into ~13 dB. From my perspective, as soon as you’re below about -105 dB, connection quality is going to suffer on WCDMA, however, that shows up as a misleading 3-4 bars.
What we need is simple:
- Transparency and standardization of reporting – Standardize a certain visualization that is uniform across technology and devices. Choose something that makes sense, so customers can compare hardware in the same area and diagnose issues.
- Advanced modes – For those of us that can read and understand the meaning of dB and real quality metrics from the hardware, give the opportunity to display it. Hell, perhaps you’ll even encourage some people to delve deeper and become RF engineers in the future. It’s annoying to have to launch a Field Trial application every time we want to know why something is the way it is.
- Leverage recent advances in displays - Limiting display granularity to 4 or 5 levels doesn’t make sense anymore; we aren’t constrained by tiny monochromatic screens.
- Tower load reporting - Be honest with subscribers and have the tower report some sort of quality metric/received SNR of its own so we know which path of the link is messed up. If a node is congested, tell the user. Again, people are more likely to be happy if they’re at least made aware of the link quality rather than left in the dark.