Self Referential

AT&T Bands in Las Vegas – 850 GSM/EDGE, 1900 UMTS/3G

Last time I was in Las Vegas it was for MIX 10 and Windows Phone 7 (back when it included ‘series’ at the end). This time, the reason is CES 2011 with AnandTech and a whole bunch more mobile devices.

I thought it was interesting last time I came that most casino floors in Las Vegas had shockingly poor or non-existent UMTS (3G) coverage on AT&T. I guess I didn’t find it too shocking, since coverage inside buildings in a dense urban environment is probably the most challenging case for mobile networks, but it seemed to be a consistent problem. After getting frustrated about 6 hours into my stay, I decided to switch entirely to EDGE for the duration, just because of how annoying being constantly handed between GSM/EDGE and UMTS is when you’re trying to get things done. For whatever reason, back then I didn’t think to pull up Field Test on the iPhone 3GS I was carrying at the time to see which bands were assigned to which network technology.

Now that I’m back, I decided to check. Thankfully, Apple has restored most if not all of the Field Test data products in iOS 4.2.1 – a huge step forward from 4.1, which only allowed signal strength in dBm at the top left, and a far cry from 4.0, which shipped with no field test whatsoever. To save potential readers some googling: to get there, enter *3001#12345#* from the dialer and hit call – if it hasn’t been removed yet, you’ll get dumped into Field Test on iOS.

While on EDGE, tapping on GSM RR Info makes it immediately obvious why I saw that behavior last time I was here:

ARFCN dictates what channel inside what band we’re on, and 142 just happens to lie inside the GSM 850 band; ARFCNs between 128 and 251 are GSM 850 spectrum. The number basically refers to the FDD pair of frequencies the phone is currently using. You can calculate exactly what frequencies the downlink and uplink are on with a little math and a reference guide (there’s a good table here), but the short version is that with an ARFCN of 142 we know immediately that GSM/EDGE is on AT&T’s 850 MHz spectrum.
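
For the curious, the arithmetic is simple enough to script. Here’s a minimal sketch for the GSM 850 case only (channel spacing and band edges per the standard ARFCN tables; the function name and the rounding are just mine for illustration):

# Rough ARFCN-to-frequency helper, GSM 850 only (ARFCN 128-251).
# Uplink starts at 824.2 MHz with 0.2 MHz channel spacing; downlink sits 45 MHz higher.
def gsm850_freqs_mhz(arfcn):
    if not 128 <= arfcn <= 251:
        raise ValueError("not a GSM 850 ARFCN")
    uplink = 824.2 + 0.2 * (arfcn - 128)
    return round(uplink, 1), round(uplink + 45.0, 1)

print(gsm850_freqs_mhz(142))  # (827.0, 872.0) - squarely inside AT&T's 850 MHz spectrum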

Now, what about UMTS/3G? Enabling 3G (look at how weak that signal is…) and going into UMTS RR info, I saw the following:

Looking at the fields “Downlink Frequency” and “Uplink Frequency”, we can see the device’s UARFCN channel numbers. It’s the same idea, just with a U for UMTS. Again, with a reference aid (read: Wikipedia) we can see that UMTS/3G is operating in the PCS 1900 MHz band.
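
The UARFCN math is even simpler for bands that don’t use an extra offset, PCS 1900 included: the carrier frequency is just the channel number times 0.2 MHz. A quick sketch (same caveats as above; check a band table if you’re on one of the offset bands):

def uarfcn_to_mhz(uarfcn):
    # For UMTS bands without an additional frequency offset (PCS 1900 / Band II included),
    # the carrier frequency is 0.2 MHz times the channel number.
    return round(0.2 * uarfcn, 1)

# PCS 1900 downlink channels run roughly 9662-9938, i.e. about 1932-1988 MHz
print(uarfcn_to_mhz(9662))  # 1932.4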

Remember that higher frequencies are less effective at propagating through buildings. It’s pretty obvious now why getting good 3G coverage on AT&T is a challenge deep inside a casino in Las Vegas. There’s nothing inherently wrong with putting GSM/EDGE on 850 and UMTS on 1900; it’s just interesting in practice how immediately obvious the difference is walking around. Propagation is a challenge in dense urban environments with lots of people moving around to begin with, and I’m sure this doesn’t help in Las Vegas. AT&T promised to put all of its 3G (UMTS) network on the 850 MHz band (wherever it’s licensed to use it) by the end of 2010, but sadly that hasn’t happened quite yet, at least in this market. I’ll keep checking, but thus far it’s been solidly in 1900 PCS. Oh well.

Arizona SB1070 – My Thoughts

Earlier today, I was reading yet another Digg article on Arizona’s immigration bill. For the large part, most of the articles and comments I’ve been reading have accused Arizonans of either being gun-toting crazies or racist white elites. I’m sure (read: certain) there’s some demographic of the state that probably is, but the entire state people? What a way to typecast.

Anyhow, something about what I was reading there finally compelled me to write a bit, and what started small quickly ballooned to a huge comment I left on the post. I’m reproducing it below:


Epic Long Post:

It’s time we settle this illegal immigration dispute once and for all, honestly. I’m a native Arizonan, and I can honestly attest to how completely out of hand this situation is getting, and how completely misunderstood and misconstrued the current state of affairs is down here.

First of all, the majority of Arizonans support this legislation. Now, before you write us all off as being racially insensitive bigots and crazies, ask yourself what the rational reasons could be for passing such a sweeping piece of legislation. I’m shocked at the fact that this discussion is almost entirely centered around racial profiling (do you not show your ID for everything else already? Being pulled over? Getting on a plane? Buying something?) and the economy (albeit very superficially). The problem has gotten so immense that it literally has effects on almost every major issue.

To be honest, I don’t know how I feel about the bill, I just think it’s time this issue gets the serious attention it’s been sorely lacking for the greater part of two decades now. If nothing else, Brewer should be applauded for finally getting the border states in the limelight and *some* debate going, even if it’s entirely misplaced.

So just bear with me, put aside your misconceptions about the issue (because odds are, you don’t live here, you don’t follow the issue, and you’re probably not aware of the scope of the problem), and think.

1. The environmental aspect is being completely downplayed. This is something that has even the most liberal of the liberals supporting drastic immigration reform down here in the Sonoran Desert; the long and short of it is that Mexicans and drug traffickers are literally shitting all over the desert. The sheer volume of people crossing through these corridors in the desert, and the trash they bring with them, is absolutely stunning.

Example photo of waste in the desert

Don’t believe me? Look: http://www.tucsonweekly.com/tucson/trashing-arizona/Content?oid=1168857 Some of the locations in here are barely a 10 minute drive from where I sit now. Talk to me about the environment, and then look at the kind of mess being left out there. I don’t care what the solution is, this kind of dumping/shedding of stuff/massive ecological disaster cannot continue. It can’t. It’s literally devastating.

2. Drug trafficking. Has anyone even talked about this? It isn’t just about arresting working Mexican families, it’s about combating the completely out of control drug trafficking problem going on in our backyards. In fact, I’d say the main catalyst has more to do with security than economic drain – there’s no arguing that the Mexicans living here are probably helping us out with their labor and efforts, rather than draining the local economy.

In case you haven’t been following, the drug cartels in Mexico are now nearly out of control; it’s a problem that’s of more immediate concern to us down here (in terms of security) than terrorism. In fact, screw terrorism, I’m more worried about my family being shot or killed in the crossfire of this ongoing drug battle than some terrorist setting a bomb off. Read about how insane this is: http://online.wsj.com/article/SB123518102536038463.html

“The U.S. Justice Department considers the Mexican drug cartels as the greatest organized crime threat to the United States.” http://en.wikipedia.org/wiki/Mexican_Drug_War You better believe it. People are being killed in Juarez, Nogales, everywhere. This is literally next door, folks! Not a continent away! Full scale political unrest! Talk about a threat to national security.

3. The murder of Rob Krentz has galvanized support for serious, strong, kid-gloves-off reform in the state. If you aren’t familiar, this super high profile incident involved the murder of a well-liked, peaceful Arizona rancher on his own property some weeks ago. http://azstarnet.com/news/local/crime/article_db544bc6-3b5b-11df-843b-001cc4c03286.html Marijuana has since been found at the site, and there are definite drug trafficking ties, as the ranch lies along one of the numerous well-known migration and trafficking corridors that dominate southern Arizona.

Robert Krentz

I think when the history books are written, this guy’s shooting will be a real inflection point you can point to as leading to this kind of legislation. The sentiment for structured amnesty or some other kind of reform almost completely disappeared after a few similar incidents. Violent, often fatal crime near the border is literally making it a physical hazard to be down here.

Want more proof? Look no further than the concealed carry legislation that also just passed. It isn’t that we’re all a bunch of friggin psychos, it’s that we’re honest-to-god scared of being shot in our homes or out in the desert. I know I sure as hell wouldn’t go walking around out there when even the border patrol is worried about some parts of the desert even just an hour away.

4. Sure, the economy has something to do with it, absolutely. Hell, our economy is worse off than California’s both in percentage terms and per capita: http://www.bizjournals.com/phoenix/stories/2008/02/25/daily29.html

The major public universities in the state are struggling for dollars to keep classes going, there are mandatory furloughs everywhere, and we’re paying for the rest in fees and still not going to break even. Hell yes, the economy has something to do with the perception that illegals are partly responsible. (However true or untrue that actually may be, since personally I’d wager Mexican migrant labor probably has a net positive effect on the local economy; let’s be honest, profiling them as lazy people really *is* racism.)

So there are a few good arguments I don’t really feel have been emphasized enough online, anywhere. Sit around and discuss the finer points of constitutional law and whether this is “racial profiling” if you like; honestly, that debate has been beaten and played out enough already.

Meanwhile, the problems down here are getting worse, and worse, and worse, and the very real drug war raging in the desert just continues to get scarier. I think this will be a very interesting and potentially huge states’ rights issue. In the meantime, some of the points I touched on (I hope) are good food for thought if you think that Arizona suddenly just decided to “go insane” or “lose our collective shit.”

I promise you, it isn’t the case.

iPhone OS 4.0 – How close was I?

A few months ago, I made a post about what changes I would love to see in iPhone OS 4.0 when it rolled around, if it ever rolled around. Flash forward to today, where iPhone OS 4.0 is an officially announced, almost ready for release platform update. In the spirit of closure, let’s see how much of what I wanted actually made it into the update:

1 – Google Voice Integration: No Go

This still remains a no-show. Apple and Google relations have only continued to sour, despite the Steve-Eric coffee shop PR stunt meeting that was hugely popularized a few weeks ago. In fact, because Apple has repeatedly demonstrated no motivation to popularize any Google services anymore, it’ll likely never happen. This is yet another unfortunate artifact of the ongoing Google and Apple divorce process, and it just ends up stifling innovation. Apple and Google both give the end-user focused experience an awful lot of lip service, right up to the point where they have to integrate with a competitor’s offerings.

Google Voice is just one such example, but there are others. Mail on the iPhone still lacks support for Google’s unique organizational methods, and by the same token, Google refuses to this day to make its own iPhone OS Gmail client. It works both ways, and both are equally guilty.

Back to that lip service I was talking about: you can really see just how far that philosophy goes from both companies’ actions – and actions still speak louder than words. As an end user, I don’t care about corporate bickering or what the political reasons are for Google not making a Gmail app for the iPhone, or Apple not integrating Google Voice – I just want the best experience.

2 – Google Latitude: Maybe

I’m not sure how to mark this one down. On one hand, there is indeed multitasking present in the operating system, as well as the ability to have certain applications periodically get location through location services. Thus, it’s finally possible for some enterprising third party developer to make their own Google Latitude updater, or for Google themselves to do it. We’ll probably see the former much earlier than the latter, for the reasons I mentioned in part 1.

Of course, the software to do continual scheduled Google Latitude position updating already exists through the Cydia store. It’s called Longitude, and it works fabulously. I’m relatively puzzled by Apple’s claims that getting a full GPS fix requires too much battery, since I already run Longitude on a 15 minute update interval and have experienced negligible battery degradation. In fact, even with updating set on a 10 minute schedule, there was no perceptible difference in battery life.

Longitude - You know, opposite of Latitude

I really have to wonder whether location services through Skyhook without using AGPS (e.g. only WiFi trilateration augmented with cellular positioning data) will be accurate enough for services like Foursquare. Time will tell, and arguably GPS won’t solve everything, since users that are already inside those locations can’t get a GPS fix anyways.

3 – Better Gmail Integration: Sort of

So the Mail application is getting a definite overhaul in this new revision of iPhone OS – more than one Exchange account, faster switching between inboxes, a unified inbox, and support for threaded conversations. These are long overdue features that the competition has had almost forever. webOS has had them, BlackBerry is famous for them, Android has them alongside a Gmail-specific client, and even Windows Mobile had multiple Exchange accounts and fast inbox switching.

So it’s nice to see everything finally getting revamped. Apple’s interface is still minimalist, though; there are no font settings or style options to be found.

4 – Notification Overhaul: Nope

This is probably the most sorely lacking area, and simultaneously the most inexplicably neglected. Every single other mobile platform has better notifications than iPhone OS. Every one of them, even old and exiled Windows Mobile. In fact, during the Stevenote today Apple showed off some local application notifications (from applications running in the background) that still resulted in annoying centered blue bubbles – and touted them as being a good thing!

I don’t know what more there is to say here other than that a more robust multitasking framework needs to come with a better notification framework. The two go hand in hand: if the hardware can run more than one thing at once but the screen lacks the real estate to show more than one thing at a time, you still need to get information to the user effectively. That shouldn’t equate to pausing and interrupting the current interaction with a gigantic blue popup that needs to be dismissed before anything else can continue.

5- Background apps done right: Yes

Apple needed to nail this one, and they did. There’s no arguing that the multitasking framework they’ve demoed is the way things should be. I’ve argued a few times with developers that the best way to deliver multitasking without sacrificing performance is to open APIs for the most common use scenarios. Apple enumerated all of them: music in the background, task completion, location-specific scenarios (turn by turn GPS, Google Latitude, etc.), and a few others. This is effectively what I’ve heard described as a secondary “lite” binary running the core services in the background, using fewer resources and a few background-specific APIs the OS can manage. That way, the background experience is consistent across use scenarios.

I think that this will work really well in the long run. In fact, Apple really had little choice but to adopt a scheme employing lite binaries; they’re limited to 256 MB of RAM on the 3GS and iPad. Steve Jobs gets it – giving the user a task manager might appeal in the short term for how much control it offers, but it’s just too much. If the user is honestly expected to micromanage application launches and closes, they’ll eventually forget and nuke the battery. It just happens.

6- Better App organization: Yes

Thank goodness this is finally being addressed. I’ve almost reached the 180 application limit of iPhone OS 3.x’s page-based organization scheme, and getting to applications on pages at the very end is as frustrating as it is time consuming. Finally getting some high level organization into the picture, even if it isn’t forward thinking, revolutionary, or new, is still valuable.

Folders - Kind of ad-hoc, but better than nothing

7- Better power management: Nope

Definite no; in fact, we’ll probably never see this, at least on iPhone OS. This particular platform is all about lowest common denominator usability – it’s simultaneously what makes the platform so alluring and magical, and the subject of so much griping. You can’t build something a baby can use and then expect that same user to understand how to manage power.

At the same time however, the option should be there for those of us that are knowledgeable about it. I realize I’m asking too much, but it’d be amazingly cool to see hardware reports on projected battery longevity, current draw from individual hardware components, and a trend of power use.

Conclusions: 4/7 ~ 57% Nailed

So Apple implemented 4 out of the 7 things I outlined, if we’re pretty generous about our criteria. You know, on the whole, 57% isn’t bad, but it simultaneously isn’t a slam dunk on my part.

In fact, that’s what makes this industry so interesting. Unlike the desktop, we haven’t yet settled on a single user interaction paradigm – each major platform is actively innovating and evolving, and it’s happening rapidly. Even in the last two years, we’ve seen Android go from being an iPhone OS wannabe to a seriously polished, worthy competitor. We’ve seen that cross-carrier availability is hugely important for success (people just don’t want to switch, and they’ll convince themselves that their network is best). We’ve seen that none of the platforms have it all worked out. Apple’s iPhone OS platform is too closed, while Android’s might be just too open (a la Windows Mobile). It’s a rapidly evolving market out there, folks; I’m enjoying scrutinizing every bit of it.

AT&T 3G MicroCell Review

In case you missed it early, early this morning, my AT&T 3G MicroCell review is up and live at AnandTech here.

I played around with the product all last week and finally think I know all there is to be gleaned about it – undoubtedly the handover performance (which is pretty abysmal) will improve in time. It’s something I talk about a lot in the article itself, but the issue exists across all the major femtocells, and in T-Mobile’s implementation of UMA. From a technical standpoint, the problem seems to be that the phone almost treats the femtocell like a roaming tower – implicitly disabling soft handovers to the public network. It’s most likely handled this way for billing segmentation reasons, but that’s unclear.

I learned in the comments that there are enterprise picocells, although I’m not sure what kind of carrier interaction is required for installation. I’d really like to investigate those in the future. Whatever the case, if you’re interested, definitely give it a read!

First AnandTech Story

I’ve been working on it for a while now, but I’m excited that my first AnandTech story is now up and live on the AT website here: http://anandtech.com/gadgets/showdoc.aspx?i=3749

It’s an analysis piece on Windows Phone 7 Series, and I expect more to develop as MIX10 creeps closer. There’s a lot more coming, and I’m definitely excited to write more reviews and posts. Just wanted to make note of it here.

I’ll definitely keep writing here as well!

My Wall got Pwned

Yesterday, just about everyone’s minds were on the iPad. Love it or hate it, what a ride that hype machine was, and what a launch too. But for me, my musings (or rather those of my roommate) were rudely interrupted by the loud boom of a car crashing through the retaining wall surrounding my house.

Apparently, an inebriated woman was proceeding northbound on Euclid in a white Infiniti when she struck a midsize black Mercedes SUV, and flew up, into, and through the cinder block wall surrounding my house. The force must have been pretty awesome, since the hole is sizable. Definitely a lot of momentum (and a resulting few meganewtons of force) went into that smash, since there’s shattered cinder block in my yard now.

They managed to destroy a lot of cactus, blocks, and the sign on the corner in the process. I’d like to point out the irony of an Infiniti with license plate ‘finiti’ crashing through my wall on Euclid (as in, the fabled “father of geometry”). ::shrug::

I know nothing about the occupants’ statuses or health, but hope they’re faring ok. Lesson learned? Don’t drink and drive, kids.

Mobile Phone Signal Bars – Thoughts

Something that’s bugged me for a long time is how crude and arbitrary signal bars on mobile phones are. With a few limited exceptions, virtually every phone has the exact same design: four or five bars in ascending order by height, which correspond roughly to the perceived signal strength of the radio stack.

Or does it? Let me just start by saying this is an absolutely horrible way to present a quality metric, and I’m shocked that years later it still is essentially the de-facto standard. Let me convince you.

It isn’t 1990 anymore…

Let’s start from the beginning. The signal bar analogy is a throwback to times when screens were expensive, physically small, monochromatic if not 8 shades of grey, and anything over 100×100 pixels was outrageously lavish. Displaying the actual RSSI (Received Signal Strength Indicator) number would’ve been difficult and confusing for consumers, varying between 8 already similar shades of grey would have been hard to read, and making one bar breathe in size could have sacrificed too much screen real estate.

It made sense in that context to abstract the signal quality visualization into something that was both simple, and readable. Thus, the “bars” metaphor was born.

Since then, there have been few if any deviations away from that design. In fact, the only major departure thus far has been Nokia, which has steadfastly adhered to a visualization that makes sense:

Ascending Strength Bars

Another Example

Namely, their display metaphor is vertically ascending bars that mirror call quality/strength. This makes sense, because it’s an optimal balance between screen use and communicating the quality in an easy to understand fashion. Moreover, they have 8 levels of signal, 0-7 bars showing. Nokia should be applauded for largely adhering to this vertical format. (In fact, you could argue that the reason nobody has adopted a similar metaphor is because Nokia has patented it, but I haven’t searched around)

It’s 2010, and the granularity of the quality metric on most phones is arbitrarily limited to 4 or 5 levels at best.

Better designs?

Thus, an optimal design balances understandability with level of detail. On one hand, you could arguably simply display the RSSI in dB, or on the other hand sacrifice all information reporting and simply report something boolean, “Can Call” Yes/No.

Personally, I’m waiting for something that either leverages color (by sweeping through a variety of colors corresponding to signal strength) or utilizes every pixel of length for displaying the signal strength in a much more analogue way.

Excuse the crudity of this rendering

Green and red are obvious choices for color, given their nearly universal meaning for OK and OH NOES, respectively. Something that literally takes advantage of every pixel by breathing around instead of arbitrarily limiting itself to just 4 or 5 levels also wouldn’t be hard to understand.
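
As a rough illustration of what I mean (a toy sketch; the RSSI endpoints are placeholders I picked, not anything a carrier or standard publishes), something like this maps signal onto both a bar length and a color that sweeps from red (weak) to green (strong), instead of 4 or 5 discrete steps:

def signal_indicator(rssi_dbm, width_px=100, floor=-113.0, ceiling=-51.0):
    # Clamp RSSI into an assumed usable window and normalize to 0..1.
    frac = (min(max(rssi_dbm, floor), ceiling) - floor) / (ceiling - floor)
    bar_px = int(round(frac * width_px))                  # use every pixel of width
    color = (int((1 - frac) * 255), int(frac * 255), 0)   # sweep red -> green
    return bar_px, color

print(signal_indicator(-85))  # a mid-strength signal: roughly half a bar, yellowish color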

Fundamentally, however, the bars still have completely arbitrary meaning. What constitutes maximum “bars” on one network and device has a totally different meaning on another device or carrier. Even worse, comparing the same visual indicator across devices on the same network can often be misleading. For example, the past few months I’ve made a habit of switching between the actual RSSI and the resulting visualization, and I’ve noticed that the iPhone seems to have a very optimistic reporting algorithm.

There’s an important distinction to be made between the way signal is reported for WCDMA versus GSM as well:

First off one needs to understand that WCDMA (3G) is not the same thing as GSM (2G) and the bars or even the signal strength can not be compared in the same way, you are not comparing apples to apples. The RSCP values or the signal strength in WCDMA is not the most important value when dealing to the quality of the call from a radio point of view, it’s actually the signal quality (or the parameter Ec/No) that needs also to be taken into account. Source

That said, the cutoff for 4 bars on WCDMA seems to be relatively low, around -100 dB or lower. 3 bars seems around -103 dB, 2 bars around -107 dB, and 1 bar anything there and below. Even then, I’ve noticed that the iPhone seems to run a weighted average, preferring to gradually decrease the report instead of allowing for sharp declines, as is most usually the case.
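
Put another way, the mapping I’ve observed boils down to something about this crude (the thresholds are just my own estimates from watching the 3GS, not anything official):

def iphone_wcdma_bars(rssi_db):
    # Rough cutoffs observed on my 3GS; the whole useful range is squeezed into ~13 dB.
    if rssi_db >= -100:
        return 4
    elif rssi_db >= -103:
        return 3
    elif rssi_db >= -107:
        return 2
    else:
        return 1

print([iphone_wcdma_bars(x) for x in (-95, -102, -106, -110)])  # [4, 3, 2, 1]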

Signal in dB

Use dB if you’re not averse to math

What you’re reading isn’t really dBm, dBmV, or anything truly physical, but rather a quality metric that also happens to be reported in dB. For whatever reason, most people are averse to understanding dB; however, the most important thing to remember is that 3 dB corresponds to a factor of 2. Thus, a change of -3 dB means that your signal has halved in power/quality.

The notation dBm is referenced to 1 mW. Strictly speaking, to convert to dBm given a signal in mW:

Signal_{dBm}=10\log\left(\frac{Signal_{mW}}{1\, mW}\right)

Likewise, to convert a signal from dBm back to mW:

Signal_{mW}=10^{\left(\frac{Signal_{dBm}}{10}\right)}
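
If it helps, here are the same two conversions as code, along with the halving rule from earlier (a quick sketch; dBm referenced to 1 mW as above):

import math

def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw / 1.0)   # referenced to 1 mW

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

print(mw_to_dbm(1.0))     # 0.0 -> 1 mW is 0 dBm
print(dbm_to_mw(-3.0))    # ~0.5 mW -> a -3 dB change halves the power
print(dbm_to_mw(-100.0))  # 1e-10 mW -> the sort of level these phones report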

But even directly considering the received power strength or the quality metric from SNR isn’t the full picture.

In fact, most of the time, complaints that center around iPhones failing to make calls properly stem from overloaded signaling channels used to set up calls, or situations where, even though the phone is in a completely acceptable signal area, the node is too overloaded. So, as an end user, you’re left without the quality metrics you need to completely judge whether you should or should not be able to make a data/voice transaction. The quality you experience isn’t entirely a function of client-tower proximity, but also of node congestion.

Carriers have a lot to gain from making sure their users are properly informed about network conditions; both so they can make educated decisions about what to expect in their locale, as well as to properly diagnose what’s going on when the worst happens. Worse, perhaps, carriers have even more to gain from misreporting or misrepresenting signal as being better than reality. Arguably, the cutoffs I’ve seen on my iPhone 3GS are overly optimistic and compressed into ~13 dB. From my perspective, as soon as you’re below about -105 dB, connection quality is going to suffer on WCDMA, however, that shows up as a misleading 3-4 bars.

Conclusions:

What we need is simple:

  1. Transparency and standardization of reporting – Standardize a certain visualization that is uniform across technology and devices. Choose something that makes sense, so customers can compare hardware in the same area and diagnose issues.
  2. Advanced modes – For those of us that can read and understand the meaning of dB and real quality metrics from the hardware, give the opportunity to display it. Hell, perhaps you’ll even encourage some people to delve deeper and become RF engineers in the future. It’s annoying to have to launch a Field Trial application every time we want to know why something is the way it is.
  3. Leverage recent advances in displays - Limiting display granularity to 4 or 5 levels doesn’t make sense anymore; we aren’t constrained by tiny monochromatic screens.
  4. Tower load reporting - Be honest with subscribers and have the tower report some sort of quality metric/received SNR of its own so we know which path of the link is messed up. If a node is congested, tell the user. Again, people are more likely to be happy if they’re at least made aware of the link quality rather than left in the dark.

My ZoneMinder Configuration

Why Home Security?

In recent months, home security and monitoring has become a matter of increasing concern across the country. Whether the reason is a local spike in crime brought on by the downturn, or just peace of mind, the price and difficulty of setting up an enterprise-level security system at home is lower than ever.

That said, the variety of hardware, open and closed source monitoring software, and configuration options makes it a bit daunting to jump right into. I’ve worked and experimented with a number of configurations and finally settled on one that I think works best (at least for my needs).

Camera Hardware

Camera 1 – Linksys WVC54GCA

Wireless-G Internet Home Monitoring Camera

I originally started out with just one Linksys WVC54GCA. It’s a 640×480, wired/wireless 802.11b/g network camera with a built in web server for stills and video, and some simple motion detection and alert functionality. The reason for choosing it was simple: price. It’s Linksys’ primary network camera offering, and you can find it as of this writing for $89 at Newegg. In addition, there’s a newer camera with 802.11n, the WVC80N.

However, it isn’t perfect. To quote the cons of my newegg review:

Cons: Wireless range isn’t excellent; I have a very powerful wireless AP with a 12 dBi omnidirectional antenna and a 6 dBi directional antenna, and I had to reposition it so the camera could send video back at a decent bitrate (around 2 megabits is where it sits).

An important thing to note is that the latest .24 firmware breaks WPA/WPA2 support. Mine shipped with .24 and I had to downgrade back to .22 for it to work. A bit disappointing, but hopefully a future firmware will fix this glaring problem. The Linksys forums have the link to a custom built .22 (oddly enough with German selected as the default language, but don’t worry, all the menus are still English).

Motion detection isn’t perfect, sometimes false positives will get annoying. I have sensitivity set all the way down and still get a few random videos of nothing going on.

More recently, I discovered that the software (despite being open source and a *nix derivative) locks up after anywhere from 6 to 24 hours when the camera is connected wirelessly. This is fixable (in a haphazard sort of way) by calling an internal page that reboots the camera every 3 hours through a schedule in my Tomato router:

Tomato Scheduler Screenshot
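
Incidentally, if you don’t have a router that can schedule HTTP requests, the same workaround can run from any always-on box on the LAN. Here’s a minimal sketch that hits the camera’s reboot page (documented further down) on the same 3 hour schedule; the address and credentials are placeholders:

import time
import requests  # third-party: pip install requests

CAMERA_REBOOT_URL = "http://192.168.1.50/adm/reboot.cgi"  # placeholder address
CREDENTIALS = ("USER", "PASS")                            # placeholder camera login

while True:
    try:
        requests.get(CAMERA_REBOOT_URL, auth=CREDENTIALS, timeout=10)  # ask the camera to reboot
        print("camera reboot requested")
    except requests.RequestException as exc:
        print("reboot request failed:", exc)
    time.sleep(3 * 60 * 60)  # every 3 hours, matching the Tomato schedule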

Thus far, the scheduled reboot has proven a robust fix and makes the cameras entirely usable. I’ve notified Linksys and even had an online chat with a higher level tech who passed my findings on to a firmware engineer. They’ve recently released an update which purports to fix the stability issues:

Version v1.1.00 build 02, Jun 15, 2008
- Support of Setup Wizard is temporarily disabled to address security issue
- Fix security issues
- Fix Camera stability issues

I have yet to fully test it. As an aside, the cameras are actually embedded x86 inside, sporting an AMD Geode SC1100 processor, 32? MB of SDRAM (2x TSOP marked PSC A2V28S40CTP), and Ralink 802.11b/g/a(?) chipset (RT2561T) as pictured.

AMD Geode SC1100

Camera visible, other SoC

RaLink MiniPCI card

Image quality is a little above average but nothing wonderful due to the relatively tiny plastic fixed focus lens system. Low light sensitivity is ok, but nothing stellar; you still need moonlight or ambient street lighting to get usable results at night. If you don’t mind those caveats, you’ve basically got the beginnings of a very robust (and cheap) network camera.

Night versus day performance

There are a number of relevant pages that are undocumented on the camera itself:

Reboot: http://USER:PASS@ADDRESS/adm/reboot.cgi

MJPEG stream: http://USER:PASS@ADDRESS/img/mjpeg.jpg

JPEG still: http://USER:PASS@ADDRESS/img/snapshot.cgi?size=3 (3 – 640×480, 2 – 320×240, 1 – 160×120)
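
For example, grabbing a still via the snapshot path above takes only a few lines (a sketch, not gospel – swap in your own camera address and login):

import requests  # third-party: pip install requests

# Pull a full-size (size=3, 640x480) still from the WVC54GCA's snapshot page.
resp = requests.get("http://192.168.1.50/img/snapshot.cgi",
                    params={"size": 3},
                    auth=("USER", "PASS"),
                    timeout=10)
resp.raise_for_status()
with open("snapshot.jpg", "wb") as f:
    f.write(resp.content)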

The options offered in the camera’s internal setup pages aren’t very robust, but offer just enough for you to do almost everything you’d want to.

Image Settings

Camera 2 – AXIS 207W and MW

After acquiring another Linksys camera for myself (and another 3 for my parents), trudging my way through the reboot issue, and putting up with reasonable but not stellar image quality, I decided I was ready for something more. Axis seems to have very good support, choice, and performance, in addition to heaps more customization and options for the camera itself. The catch? Price.

I decided to start off with Axis’ cheapest offering, the 207 series of network cameras. I managed to snag an Axis 207MW that had been used just once at a trade show and was being sold used on eBay, and my dad went ahead and purchased a 207W outright from Newegg. The distinction between the 207W and MW is that the 207MW has a 1.3 megapixel sensor supporting resolutions of up to 1280×1024, whereas the 207W is just 640×480. They’re both 802.11b/g so you can move them throughout the house, and they have almost identical setup and configuration pages. Of course, like the Linksys WVC54GCA, there’s optional Ethernet support as well. Virtually all the other features are the same between the 207W and 207MW.

Axis 207MW (the 207W looks identical)

As of this writing, the 207MW is $328 at Amazon, and the 207W is $286 at Amazon.

Right off the bat, you can tell this camera is much different. It’s got an actual glass lens system, a focusing ring, and a compact form factor with a longer cable. In addition, the antenna is external and swivels and snaps out so you can position it for the best signal. There are a variety of status LEDs on the back that make troubleshooting limited wireless connectivity simpler. The front clear ring is actually a large lightguide for four LEDs that can be either green or amber depending on the status of the camera. These can be disabled as well.

Day versus night performance on the 207MW

Image quality is also much better on the Axis 207MW than the Linksys. Originally, I had a WVC54GCA mounted where the Axis is now inside the garage. The Axis is both much more stable, and also gets better wireless reception outside in an otherwise difficult to reach trouble spot.

Among other things, the Axis offers many more configuration options within its internal administrative pages, as well as (if you’re interested in running it) many more options for built in motion detection. One of the more important things I’ve come across is the ability to change exposure prioritization so the otherwise very well lit driveway doesn’t come out a homogeneous white from pixels saturating as often. This kind of exposure prioritization can be done on the Axis, but not on the Linksys as shown:

Exposure settings

There is just a wealth of options that really make the Axis shine over the cheaper Linksys if you delve deeper. I could write pages about the difference the extra nearly $200 makes (if you can afford it). Both cameras offer the ability to upload images and 5-10 second video clips of motion detection events to an FTP share, or attach them to an email. Detailing the differences between the two (and the ultimate shortcomings of both) is another article in and of itself.

At the end of the day, I found motion detection somewhat unreliable on both the Axis and the Linksys; either I wound up with far too many motion event video clips or nearly nothing. Even worse, downloading and then watching hundreds if not thousands of false positives is a grueling task. If you’re a basic user or just interested in having a camera for temporary purposes while you’re away on a trip, perhaps the in-camera features are enough. However, if you’re looking for something more robust for a number of permanent cameras, with much better motion detection, keep reading. At the end of the day, I use both types of camera just as inputs for ZoneMinder, as you’ll see later on.

ZoneMinder Setup

ZoneMinder is a GPL’d, LAMP-based web tool for managing and monitoring virtually every kind of video source imaginable. Its supported sources span everything from cheap USB Logitech webcams, to network security cameras with built in web servers (like the two I’ve covered), to traditional video sources through a capture card. The documentation is a bit overly complicated (you can get to the supported hardware list here), but at the end of the day you’re going to need either local Linux driver support (and a path to video like you’d expect for a webcam/TV tuner), or a path to a JPEG, MJPEG, or other MPEG-4 stream.

The aim of ZoneMinder is to do all motion detection, video archiving, and image processing in one centralized place; simplifying use and making it easier to keep track of new events as they happen. Of course, the only downside to this is that all that motion detection and video capture requires a relatively powerful computer. Official documentation claims that even an ancient Pentium II should be able to do motion detection and capture for one camera at 25 FPS.

On the old computer I’ve configured (with a Pentium 4 Northwood 2.8 GHz and 2 GB of RAM), I’ve found that adding an 8 FPS VGA network camera and doing motion detection and capture adds between 10-20% CPU load.

Installation

Luckily, ZoneMinder is relatively easy to set up if you’ve ever been near a modern Linux distro with aptitude. As I noted earlier, ZoneMinder should ideally be run on a LAMP or similar web server; however, they claim that distro, web server, and SQL database support is actually quite diverse. I performed my installation on a fresh install of Ubuntu 9.10 Karmic by following instructions similar to Linux Screw’s:

sudo apt-get install zoneminder apache2 php5-mysql libapache2-mod-php5 mysql-server ffmpeg

Once that was finished, the following links ZoneMinder’s Apache configuration into place and reloads Apache:

sudo ln -s /etc/zm/apache.conf /etc/apache2/conf.d/zoneminder.conf
sudo /etc/init.d/apache2 force-reload

Then point your browser at:

http://_YOUR WEBSERVER ADDRESS_/zm/

At this point, I’d encourage you to enable user authentication in Options -> System -> ZM_OPT_USE_AUTH. Ticking this box and saving will enable another tab, Users. I generally configure one  admin for making changes and a less privileged “User” account for simply viewing the cameras and motion detection events as shown:

Users configuration

The remaining options are largely fine at their defaults; the only major thing you should be concerned with are the paths, if you care about which disks get used. I’ve recorded almost 800 events so far at VGA resolution and have used up an additional 1% of the meager 80 GB HDD in the system.

Adding Sources

Now, to add some video sources. This is where you really have to know the path to either JPEG stills or an MJPEG stream on the camera.

Click “Add New Monitor” in the bottom right. Now, if you have an Axis or Panasonic network camera (arguably the two de-facto industry standards) there are some presets that are worth checking out that you can use by clicking “Presets” in the top right. Change the source type to Remote (if it isn’t already from using a preset), and name it appropriately. I also usually set the FPS to either 5 or 8; from what I’ve seen, higher really isn’t feasible.

Now click the “Source” tab. This is where if you used a preset, your life is much easier, as the remote host path is already filled in. If not, it’s still simple, you just have to know how to get the relevant data from your source. In my example, I have the following:

Adding Axis network camera

From the above screenshot:

Remote Host Name: user:pass@_PATH TO YOUR CAMERA_

Remote Host Port: 80 (Or something else, if you’ve changed it)

Remote Host Path: /axis-cgi/jpg/image.cgi?resolution=640x480

Note that this is for the Axis 207 series cameras, although in general all the Axis cameras follow the same syntax (nice, isn’t it?). You can’t use whatever resolution you want, however, all the major and obvious choices work. You’ll notice I’ve just used VGA instead of the full 1.3 MP 1280×1024 resolution image from the 207MW. This is because using full resolution does seem to generate too much network traffic for my 802.11g network (despite my best efforts, the garage remains a dead zone thanks to chicken wire stucco construction), and FPS takes a large hit. No doubt that if the connection were wired, higher would be feasible. However, VGA is more than enough for now.

Adding the Linksys sources is just as easy given the paths I outlined previously. I haven’t added any internal sources personally, but I imagine that configuration is the same if not easier; it requires knowing the path and setting a few additional constraints so you don’t overload your server.

Configuration

Now that you’ve added sources, you can configure their function.

Video source functions

You would think these options would be intuitive; however, they caused me a bit of confusion at first due to their shortened names. They are as follows, from the ZM documentation:

  • None – The monitor is currently disabled and no streams can be viewed or events generated.
  • Monitor – The monitor will only stream feeds; no image analysis is done and so no alarms or events will be generated.
  • Modect – or MOtion DEteCTion. All captured images will be analysed and events generated where motion is detected.
  • Record – In this case continuous events of a fixed length are generated regardless of motion, which is analogous to a conventional time-lapse video recorder. No motion detection takes place in this mode.
  • Mocord – This is a hybrid of Modect and Record and results in both fixed length events being recorded and also any motion being highlighted within those events.
  • Nodect – or No DEteCTion. This is a special mode designed to be used with external triggers. In Nodect no motion detection takes place, but events are recorded if external triggers require it.

In practice, my cameras are generally set to Modect, unless I have an indoor camera with particularly high traffic, in which case Monitor makes more sense, since all the motion detection events would just be me moving about (take it from experience, you see yourself doing some pretty strange things). This is also a nice way of judging how much load each camera adds, as the effect of the setting is pretty immediate if you’re watching htop.

You should now have a ZoneMinder console similar to mine:

ZM Main Console

I’ve noted in particular how offline hosts appear red, while online hosts appear green or orange depending on their function.

Perhaps the last area of configuration is the zones themselves (finally, the Zone in ZoneMinder!). These define the regions of interest in each video source that will be used for event detection. Clicking on “1” under Zones will allow you to modify the default zone for the video source. This is where you’re given a lot of control, far beyond anything the in-camera detection offers, even on the Axis. You can add points and create polygons, as well as tweak sensitivity, on an interface that looks like this:

Here, I've rejected a region that gives false positives; cars going by on the road.

Returning to the main “Console” view, the rest of the interface itself is relatively self explanatory.

Daily Use/Monitoring

If you’re interested in viewing all of the video sources currently enabled, clicking “Montage” should give a view similar to the following:

Montage view, including author

You’re also given the FPS below each camera, which is handy. Lower resolution cameras (e.g. the 320×240 source I’ve blacked out) are somewhat intelligently tiled as well, using the available space pretty well.

Clicking on any of the events back on the console page should bring up something similar to the following, showing details about all the motion detection events from the given source (or all sources):

Hope nobody studies this in too much detail...

Clicking on any of the event IDs or names pulls up a window where you can review the event, from a few seconds before, to just after. You can also click anywhere below on the scrub bar to jump ahead or back, as expected. It isn’t perfect, but does a surprisingly good job:

A car enters my driveway at night...

Perhaps the coolest is the “Timeline” view, a high-level plot of motion detection activity across all cameras overlaid on a timeline. This gives you an at-a-glance overview of whether the same events were being detected across all cameras at the same time, or lets you quickly pick out what time of day generates the most activity. In this view, mousing over times and activity demarcated in red on the plot refreshes the thumbnail appropriately, with the detection region highlighted in red.

My timeline for all zones

It isn’t always the most useful way to review events, but perhaps one of the more unique. I’ve found it useful for reviewing a few days or weeks at a glance when I’m gone. There’s also certainly a nice pattern that emerges over time, at least for me.

Mobile

ZoneMinder supposedly has a nice mobile view available; however, I’ve had relatively little experience with it and had difficulty enabling it on my iPhone 3GS. Viewing the normal ZM site works fine, but motion detection playback doesn’t work all the time.

In the meantime, I continue to use IP Vision for monitoring all my MJPEG sources:

IP Vision on the iPhone. Streaming MJPEG!

If you’re interested in me detailing this, just let me know. Setup is again straightforward and merely involves knowing the correct paths and forwarding a few ports in your router. Also, it’s a great way to quickly consume tons of 3G bandwidth!

Conclusions

Setting up a robust, nearly commercial-level reliability home video surveillance system is now easier than ever thanks to the huge variety of video hardware and open source software available. I’ve moved from one single camera with in-camera motion detection sending alert emails with 5 second video clips to a gmail account (which quickly filled to the limit), to a secure and expandable motion detection suite monitoring at times 7 cameras that is accessible virtually anywhere I can get online.

If you’re only interested in home monitoring during a vacation or time away, setting up a system like this might not be the best solution, provided you’re willing to sift through either an FTP dump full of videos and stills or a Gmail account chock-full of videos. However, if you’re serious about having a manageable system with a number of fixed (or PTZ!) cameras that you need constantly monitored, ZoneMinder makes sense and gets the job done. In the future, for serious users, it can even be hosted commercially, or the event cache can simply be stored on a network share elsewhere to prevent physical tampering or theft.

First Post!

Well, it’s the new year (and on that note, I’d like to wish you a belated Happy New Year). With it, I’ve decided to finally overhaul my old, semi-static brianklug.org site and get it moving with a blog, as many of my esteemed peers have suggested.

I owe my friends a lot of credit for the concept, the idea (especially the tagline), and support. I’ll keep this going with a myriad of things:

  • Documents, notes, sheets that have been created by me for classes, perhaps a few things typeset in \LaTeX
  • Projects and how-to’s that I’ve been working on
  • Reviews of some of the gadgets that I use
  • Observations and other things

Now that I’ve got everything set up (including comment verification with reCAPTCHA and Akismet, SEO with Google Analytics for WordPress and the Google XML Sitemap generator, plus enhancements from the Mystique theme, and LaTeX parsing from the amazing WP-LaTeX plugin), here we go!