Archive for February, 2010

First AnandTech Story

I’ve been working on it for a while now, but I’m excited that my first AnandTech story is now up and live on the AT website here: http://anandtech.com/gadgets/showdoc.aspx?i=3749

It’s an analysis piece on Windows Phone 7 Series, and I expect more to develop as MIX10 creeps closer. There’s a lot more coming, and I’m definitely excited to write more reviews and posts. Just wanted to make note of it here.

I’ll definitely keep writing here as well!

AT&T Observations and Bandwidth

Bandwidth and Latency Data

I’ve always kind of been obsessed with bandwidth. I find myself constantly testing latency, bandwidth, and connection quality (mostly, in fact, through smokeping). Needless to say, that same obsession carries over to my mobile habit, especially given AT&T’s reputation for congestion.

It sounds weird, but the two most-run applications on my iPhone are Speedtest.net Speed Test and Xtreme Labs SpeedTest. The Xtreme Labs test used to be my favorite, largely because of its superior accuracy and stability. As great as Speedtest.net’s website is for testing, the iPhone app continually fell short: tests ended before throughput stabilized, measurement often started a second after the transfer began (skewing the average), or the app would simply crash. I could go on about the myriad problems I saw, which no doubt contributed negatively to my perception of network performance.

A few months ago, I wrote a big review and threw it up on the App Store. In the review, I noted that being able to export data would be an amazing feature. At the time, I had emailed Xtreme Labs and asked whether I could get a sample of my speed test results for analysis (I have yet to hear back). On Feb. 2nd, Ookla finally got around to releasing an update to the Speedtest.net app; it included the ability to export data as CSV.

Since then, I’ve been using it exclusively. I’ve gathered a bit of data, and thought it relevant to finally go over some of it. This is all from my iPhone 3GS in the Tucson, AZ market, largely in the central area. I’ve gathered a relatively modest 76 data points. Stats follow:

Gathered Statistics

            Downstream (kbps)   Upstream (kbps)   Latency (ms)
Average                1880.3            263.3         1029.2
St. Dev.               1179.6            101.6         1140.2
Max                    4279.0            356.0         6011.0
Min                      82.0             18.0          366.0
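
If you want to double-check numbers like these, they’re trivial to reproduce from the exported CSV. Here’s a minimal sketch in Python, with the caveat that the “Download”, “Upload”, and “Latency” column names are my assumption and may not match the headers in Ookla’s export exactly:

import csv
import statistics

# Column names are assumptions; check the header row of your own export.
FIELDS = ["Download", "Upload", "Latency"]

with open("speedtest_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for field in FIELDS:
    values = [float(r[field]) for r in rows if r.get(field)]
    print(field, "avg %.1f  stdev %.1f  max %.1f  min %.1f" %
          (statistics.mean(values), statistics.stdev(values), max(values), min(values)))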

These stats really mirror my perceptions. Speeds on UMTS/HSPA vary from extremely fast (over 4.2 megabits/s!) to as slow as 82 kilobits/s, but generally hang out around 1.2 megabits/s. On the whole, this is much faster than the average 600 kilobits/s I used to see when I was on Sprint across 3 different HTC phones.

Next, I became curious whether there was any correlation between time of day and down/up speeds. Given the sensitivity of cellular data networks to user congestion (through cell breathing, strain on backhaul, and of course the air link itself), I expected to see a strong correlation. I decided to plot my data per hour, and got the following:

Downstream and Upstream Bandwidth

Some interesting trends appear…

  1. I apparently sample at roughly the same time each day (given the large vertical lines that are evident if you squint hard enough). Makes sense because I habitually test after class, while walking to the next.
  2. There is a relatively large variation per day for those regular samples, sometimes upwards of a megabit.
  3. There does appear to be a rough correlation between time of day and bandwidth, but the fact that I’m moving around from cell to cell during the day makes it difficult to gauge.
  4. Upstream bandwidth is extremely regular, and relatively fast at that.
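
Incidentally, the per-hour binning behind that plot is easy to reproduce from the same CSV. A rough sketch, again assuming “Date” and “Download” column names and a particular timestamp format, neither of which I’d take as gospel:

import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Group downstream results by hour of day; column names and date format are assumptions.
by_hour = defaultdict(list)
with open("speedtest_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        when = datetime.strptime(row["Date"], "%m/%d/%Y %H:%M")
        by_hour[when.hour].append(float(row["Download"]))

for hour in sorted(by_hour):
    samples = by_hour[hour]
    print("%02d:00  avg %.0f kbps  (%d samples)" % (hour, mean(samples), len(samples)))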

I’m still mentally processing what to make of the whole dataset. Obviously, I’m going to continue testing and gathering more data, and hopefully more trends will emerge. You can grab the data here in Excel form. I’ve redacted my latitude and longitude, just because my daily trends would be pretty easily extracted from those points, and that’s just creepy.

3G Bands – Where is the 850?

Lately I’ve been getting a surprising number of search hits regarding AT&T’s 850/1900 MHz coverage here in Tucson.

To be honest, I’ve read a number of different things: everything from certainty that our market has migrated HSPA (3G) to 850 MHz, to claims that AT&T doesn’t even hold a license for that band in Arizona. For those of you who don’t know, migrating 3G to the 850 MHz band is favorable because lower frequencies propagate through walls and buildings better than the 1900 MHz band does. In general, there’s an industry-wide trend to move 3G to lower frequencies for just that reason.

I’ve been personally interested in this for some time, and finally decided to take the time to look it up.

Maps, maps, maps…

The data I’ve found is conflicting. Cellularmaps.com shows the following on this page:

AT&T 1900 MHz

AT&T 850 MHz

Note that the entire state of Arizona doesn’t have 850 MHz coverage/licensing.

However, the GSM authority over at GSM World shows three very different maps:

HSPA 3G Coverage (yellow)

AT&T 850 MHz coverage

AT&T 1900 MHz coverage

Note that the 3G data coverage map is labeled ambiguously; HSPA coverage exists, but it could be on either 1900 or 850. What we do glean, though, is that (at least according to GSM World) there is equal 850 and 1900 MHz coverage in Tucson and the surrounding area. This contradicts the earlier map.

Then you have maps like these, which are relatively difficult to decipher but supposedly show regions of 800-band coverage from Cingular and AT&T before the merger:

Cingular 800, AT&T 850

Finally, you have websites such as these that claim Arizona is only 1900 MHz.

So what’s the reality? Uncertain at this point.

The map given by cellularmaps.com is sourced from 2008, whereas the GSM World maps are undated and ostensibly current. The other maps are also undated, but the majority consensus is that AT&T isn’t licensed to use 850 MHz in this market.

If anyone knows about some better resources or information, I’d love to see it.

Update – 3/24/2010

I finally spoke with someone at AT&T, and it turns out that my initial suspicions were correct – Arizona does not have the 850 MHz UMTS Band 5. It’s as simple as that.

Oh well, at least we know now!

Even Google doesn’t use “I’m Feeling Lucky”

Big News Today!

Whether you like it or not, the big news today wasn’t the outcome of “The Big Game,” the 2010 Toyota Prius Recall, or the fact that Verizon is “deliberately” blocking 4Chan for wireless customers (though those last two are admonishable attempts by the respective companies to submarine news).

It was the fact that today, Google advertised its core search product on TV in a $2.6 million Super Bowl ad. Wait, did I just say Super Bowl? I meant “Big Game.”

Hell proverbially froze over, by CEO Eric Schmidt’s own admission.

But if you watch the video closely, you’ll notice that very little of the advertisement focuses on the search experience itself. It spends so much effort building trite emotional appeal that it completely neglects at least half of the front-facing search experience. What it disregards is a feature so neglected that even I didn’t realize it had been passed over until I watched a parody.

First, watch the “Parisian Love” ad itself:

Now watch the brilliant parody “Is Tiger Feeling Lucky Today” by Slate:

Setting aside the message, the search terms, and whatever the so-called “story” was, did you notice how differently Google advertised its own product compared to how Slate did? Slate used “I’m Feeling Lucky.” Google? Not once. In fact, doing so could have been absolutely brilliant in the context of the ad’s cheesy romance theme. Imagine “will she marry me” -> I’m Feeling Lucky.

So what?

What that communicates is that even Google doesn’t know what the heck “I’m Feeling Lucky” is doing there. Ask yourself: when is the last time you actually used it? Is it easily accessible? Is it part of that seamless, effortless Google experience they talk about? Is it so essential a part of the search experience that, were it missing, some part of your being would be irrevocably changed forever?

You get the point. It isn’t.

No love for I'm Feeling Lucky

There’s nothing easy about using “I’m Feeling Lucky”; you can’t get to it with shift-enter or any other keyboard shortcut. It isn’t natural; everyone’s so used to just hitting enter or using the browser search bar. I ask, then, what purpose it’s serving.

For my answer, I googled. I didn’t use “I’m Feeling Lucky”:

The “I’m Feeling Lucky™” button on the Google search page takes you directly to the first webpage that returns for your query. When you click this button, you won’t see the other search results at all. An “I’m Feeling Lucky” search means you spend less time searching for web pages and more time looking at them. -Link

Oh really? That’s, you know, awesome, but isn’t diving headfirst into the first result of some search query just as dangerous as using link shorteners? As blindly opening links in email? As bad as everything we’ve always taught people not to do? Moreover, isn’t randomly guessing kind of a bad algorithm for mentally sorting through search results? I mean, if you use “I’m Feeling Lucky” and the first result isn’t what you wanted, you’re going to have to come all the way back out to the front page to re-submit your query. What’s elegant, beautiful, or simple about that?

Take a step back and think about the name of that button as well. What does “I’m Feeling Lucky” imply? Why the need for obscurity? Why not just call it “First Result” or “Dive In Blindly!™” or something else that’s approachable and friendly?

Years ago, the first time I clicked this, I half expected to be taken to some sort of contest entry form.

Of Simplicity and Sacred Cows

We’ve all read a lot, and I mean a lot, about how much time, effort, and money Google pours into keeping their famously lightweight homepage simple. They’ve evolved the design. They’ve removed things. They make it fade in slowly so those of us challenged by reading aren’t scared or overwhelmed. They count, and have sleepless nights over, the number of words on it!

Oh, I know what you’ll say: it’s part of their “corporate identity,” part of their “product,” part of what makes Google, Google. Nonsense; that’s the kind of talk that turns innovation into stagnation for the sake of consistency. My high school English teacher would be proud, because two of his favorite quotes apply directly to the kind of idiotic allegiance they have to that worthless button:

  • A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. -Ralph Waldo Emerson
  • Consistency is the last refuge of the unimaginative. -Oscar Wilde

For all of Google’s engineering talent, all that time, all those fancy positions, titles, and critical thought, they don’t realize that their biggest Sacred Cow is staring them in the face. That Sacred Cow is “I’m Feeling Lucky.”

C’mon Google, even you don’t use it or know why it’s there.

Zoneminder MJPEG and Shared Memory Settings

If you’ve read my big post on the Zoneminder configuration I have at home, you’ll notice that I favored capture of JPEG stills over using MJPEG during initial configuration.

At the time, the reason was simple: I couldn’t make MJPEG work. I’ve now succeeded in doing so, and I understand why it didn’t work the first time.

Memory Settings

I remembered reading something in the ZoneMinder documentation about a shared memory setting causing capture at higher resolutions to fail. When I first encountered the problem, I assumed I had simply gotten the path to the .mjpeg streams on the cameras wrong, since I was more familiar with capturing JPEG stills from prior scripting.

However, I stumbled across some documentation here from another tinkerer, which also pointed to the shared memory issue.

The problem is that the buffer of frames (usually between 50 and 100 per camera) must be held in shared memory for processing. If the size of that buffer, roughly

buffer size × image width × image height × 3 (bytes per pixel, for 24-bit color) + overhead

exceeds the kernel’s shared memory maximum, you’ll run into errors or see the camera status go to yellow/orange instead of green. (It can get pretty confusing trying to troubleshoot based on those status colors unless you’re checking the logs… /doh)

In fact, the problem I was seeing was likely a direct result of the large capture image size of my Axis 207mW, which the documentation cites directly:

Note that with Megapixel cameras like the Axis 207mw becoming cheaper and more attractive, the above memory settings are not adequate. To get Zoneminder working with a full 1280×1024 resolution camera in full colour, increase 134217728 to, for example, 268424446

/facepalm. I really wish I had come across this the first time around. Either way, you’re ultimately going to run into this problem with higher frame rates, color capture, or higher resolutions.
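
To put rough numbers to that formula, here’s a quick back-of-the-envelope check in Python; the 50-frame buffer and the ~10% overhead figure are assumptions on my part, not ZoneMinder’s exact accounting:

# Rough estimate of shared memory needed for one monitor; figures are illustrative.
width, height = 1280, 1024          # Axis 207mW at full resolution
bytes_per_pixel = 3                 # 24-bit color
buffer_frames = 50                  # assumed ring buffer size

needed = width * height * bytes_per_pixel * buffer_frames * 1.1   # ~10% overhead
print("~%.0f MB for this one camera" % (needed / 2**20))          # roughly 206 MB

One megapixel camera in full color already blows well past a 128 MB shmmax, which is exactly what the documentation warns about.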

I followed the tips here, but doubled them since the machine I’m running ZM on has a pretty good chunk of memory available.

The process is simple. You’re going to have to edit /etc/sysctl.conf to include the following somewhere:

# Shared memory settings for ZoneMinder
# kernel.shmmax is in bytes (512 MB here); kernel.shmall is counted in pages
kernel.shmall = 33554432
kernel.shmmax = 536870912

Now, apply the settings with

sysctl -p

Which forces a reload of that file. Next, you can check that the memory parameters have been changed:

brian@brian-desktop:~$ cat /proc/sys/kernel/shmall
33554432
brian@brian-desktop:~$ cat /proc/sys/kernel/shmmax
536870912

Which is successful. You can also check with ipcs -l. Now restart ZoneMinder and you shouldn’t have any problems.

Motion JPEG Time!

Having made these changes, I was finally ready to explore whether MJPEG works! I decided to use the MJPEG streams from my two types of cameras in place of the static JPEG links. These are:

Linksys WVC54GCA: http://YOURIPADDY/img/video.mjpeg

Axis 207mW: http://YOURIPADDY/axis-cgi/mjpg/video.cgi?resolution=640x480&clock=0&date=0&text=0
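
Before pointing ZoneMinder at either URL, it’s worth sanity-checking that the camera really is serving Motion JPEG. A quick sketch using Python and the requests library (the address below is a placeholder; substitute your own camera’s IP and path):

import requests

# Hypothetical camera address; substitute your own.
url = "http://192.168.1.50/img/video.mjpeg"

# MJPEG streams are served as multipart/x-mixed-replace with JPEG frames
# separated by a boundary, so the Content-Type header is a good first check.
resp = requests.get(url, stream=True, timeout=5)
print(resp.status_code, resp.headers.get("Content-Type"))
resp.close()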

I also discovered (by reading the manual) that there’s a utility on the Axis config page (under Live Video Config -> HTML Examples -> Motion JPEG) which generates the proper URL from a handy configuration tool where you can select size, compression, and other options:

Handy config page

The idle load on the system has increased, as expected, but that’s partly from my raising the FPS limit to 10 (which seems reasonable) and enabling continual recording with motion detection (mocord).

Nice, 10 FPS! (still a gaping hole though...)

I’m making a lot of tweaks as I get ready to transition everything onto a VM on a faster computer with much more disk space (on the order of 8 TB). If you’re interested in reading more about the Linux kernel shared memory settings, I found some good documentation:

  • IBM/RedHat settings: here
  • Configuring shared memory: here

Photosynthing everything

These past couple of days, I’ve finally gotten some time to work through the tremendous backlog of photos I have sitting around from a number of trips. Among them are sets of photos numbering in the hundreds, destined for Photosynth. A number of my friends have expressed interest in what the software is, what it does, how it works, and how to take photos best suited for processing. I think now is a great opportunity to go over the basics.

What Photosynth Does

First of all, what Photosynth does is create a 3D point-cloud representation of an object or scene from a set of photos. The number of photos might be in the tens, or in the hundreds for sufficiently complicated scenes; it all depends on the subject and how much time you have on your hands.

Perhaps the best way to explain it is to see it. The following is a synth of the Pantheon that I recently finished processing, constructed from photos my brother and I took with a D80 and a D90:

How it does it

The software uses feature extraction to identify similar-looking textured regions across images, then tries to fit the corresponding features from each image together to create a perspective-correct view. The process is extremely computationally intensive, but it only needs to be run once on the initial set of images to determine where each photo was taken from. The beauty, of course, is that this process requires no human input to reconstruct the scene; it’s entirely computationally derived.

I won’t claim to be the most qualified to talk about it, but it does rely on feature extraction and some fancy fitting. An important note is that the software works on unique features in texture, not necessarily on structure. This is why synths of subjects with lots of unique patterns turn out extremely well, while others don’t.
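
Just to give a flavor of what feature matching looks like in practice, here’s an illustrative sketch using OpenCV’s ORB detector on two overlapping photos. This is not Photosynth’s actual pipeline, and the filenames are placeholders:

import cv2

# Two overlapping photos of the same subject; paths are placeholders.
img1 = cv2.imread("pantheon_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("pantheon_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)             # detect distinctive texture features
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)              # pair up features that look alike
print(len(matches), "candidate correspondences between the two photos")

The more distinctive, overlapping texture your shots share, the more correspondences a matcher like this finds, which is exactly why busy, patterned subjects synth so much better than flat, featureless ones.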

Creation

Creating the actual synth is the easiest step: just create an account, install the software, add your photos, and go.

The real work is writing proper tags and descriptions, and then adding geotag data, either from the photos themselves or later on in the web interface. Doing so is a great way to get your synth noticed.

How to take the best shots

There’s a great how-to on the official Photosynth website that goes over how to take pictures optimally, but I’d like to share some tips of my own.

If I’m taking photos of a single object, something like this column for example, I’ll try to stay an equal distance away from it and take photos in a steady progression around the subject.

The important thing to keep in mind is that although Photosynth can extrapolate the point cloud from features, it still cannot extrapolate images that you haven’t given it. Simply put, if you want to get the nice scrubber bar to circle around an object, you’ll need to take the requisite photos to make it. I find that pacing steadily around while taking photos at regular intervals is the best way.

  1. Take photos of subjects from a variety of angles; if you can, from every possible angle, evenly spaced.
  2. Take photos from a single perspective, pointing in multiple directions. I find that spinning around and shooting from each corner of the room works marvelously well, even though you look slightly special in the process.
  3. The most important thing to keep in mind is that quantity is generally on your side, so long as there is both variety and overlap in the shots.
  4. Choose subjects that have a variety of textures and features. Things like the Sistine Chapel synth really well because unique texture is everywhere, while cars generally don’t because of their solid color.
  5. Take wide angle shots of the entire room from the four corners first, then one from the center. Afterwards, take photos of objects/features close up. These are things that people will want to focus on when viewing later; a good example are pictures on the wall in a museum or specific fresco sections on a large wall.

Shameless plug

Some of my favorite Photosynth creations are:

  • Piazza San Marco: Link
  • Sistine Chapel: Link
  • Artemision Bronze: Link
  • Salpointe Graduation 2009: Link
  • Replica of Equestrian Statue of Marcus Aurelius: Link
  • Library of Celsus: Link
  • Pantheon: Link

For comparison, here are some that didn’t turn out so well:

  • Parthenon: Link
  • Olympic Grounds: Link
  • Port of Dubrovnik: Link