Archive for January, 2010
Yesterday, just about everyone's mind was on the iPad. Love it or hate it, what a ride that hype machine was, and what a launch too. But my musings (or rather those of my roommate) were rudely interrupted by the loud boom of a car crashing through the retaining wall surrounding my house.
Apparently, an inebriated woman was proceeding northbound on Euclid in a white Infiniti when she struck a midsize black Mercedes SUV, then flew up, into, and through the cinder block wall surrounding my house. The impact must have been tremendous, judging by the size of the hole; a lot of momentum (and a correspondingly large impact force) went into that smash, since there's shattered cinder block all over my yard now.
They managed to destroy a lot of cactus, blocks, and the sign on the corner in the process. I'd like to point out the irony of an Infiniti with license plate 'finiti' crashing through my wall on Euclid (as in, the fabled "father of geometry"). ::shrug::
I know nothing about the occupants' condition, but hope they're faring OK. Lesson learned? Don't drink and drive, kids.
Tomorrow's big unveiling will likely focus on the much-hyped, still-unseen tablet (whose name nobody even knows), but a large part of its launch will be iPhone OS 4.0.
Much debate has taken place over whether the tablet will run OS X, iPhone OS, or something in between. Thanks in large part to an errant comment by the McGraw-Hill CEO on CNBC, I think it's safe to assume that iPhone OS was the right guess all along.
Or was it? It's likely that instead of releasing the tablet running the 3.x OS line, Apple will launch a 4.x fork to bridge the handheld iPhone/iPod Touch experience with that of the tablet, and in so doing bring the mobile OS closer to its desktop counterpart. It makes sense, considering that keeping two disparate app stores running could cause a colossal flustercuck.
Since its unveiling in March 2009, OS 3.x has begun showing some signs of age, especially compared to Android 2.x. Here's a list of what I think OS 4.0 needs to really keep the platform competitive:
1- Google Voice integration
Although Google just launched an HTML5 version of the Google Voice interface (no doubt specifically targeted at the iPhone and WebOS platforms), it still pales in comparison with how seamless Google Voice integration is on Android. Users of that platform can completely transition to their new number.
Plus, let's be honest, using a web-based version is just hackety compared to being able to use a much more responsive app without having to jailbreak. Until that day comes, I'll stick with using the still-banned GV Mobile.
2- Google Latitude integration
This is the one that actually started the whole Google-Apple divorce, in case you have forgotten. It'd be amazing to finally see Latitude integrated into the Maps app, the way it should have been the month Latitude launched.
Even better, the Maps application (maintained by Google) on BlackBerry OS and Android allows for seamless background position updating. As it stands, iPhone OS users have to visit an HTML5-based version of the same application to update their position, or jailbreak and use a solution like Longitude (some screenshots/info here) and have it done on a schedule by a persistent background process. This is the solution I ultimately decided on.
Perhaps this functionality isn't allowed because of "duplication of functionality" with MobileMe? Whatever.
3- Better Gmail integration in the Mail app
Let's just come out and say it: the Mail app on the iPhone is extremely barebones. Coming from Windows Mobile, I was kind of shocked at just how barebones, in fact. No ability to change fonts, underline, bold, italicize, or do anything regarding formatting. As it stands, the best you can do is some copy and paste.
Ars Technica did a good job highlighting a number of subtleties I've also noticed in their article here. The most annoying is that folders aren't fully synchronized until you go into them. For example, opening the sent folder will cause all the sent emails to load chronologically. This can get frustrating if you've sent a lot and just want to look at one; instead, you'll have to wait for all of them to load. I can do without a unified inbox or unified messaging app, because honestly I view that as more of a nightmare to be avoided than a feature.
But those aren't my main gripe; that would be the lack of a Gmail app (like what Android has) that supports labels, stars, or any of the features that make Gmail integration with email clients over IMAP or Exchange difficult. It's that whole decision they made to use labels instead of "folders" that drives me crazy; to this day, I'm lucky if I can find any sent email in my Google Apps account.
Forget about background and push, just fix the email client.
4- Notification customizations for SMS, scheduling
Even though the platform has good customization for ringtones, the alert sounds for system events such as email and new SMSes are surprisingly limited. In fact, at first, I assumed I was “doing it wrong” and failing at finding the proper way to load them. Nope, turns out, what you have is what you have, and what you will have forever.
That default “Tri-tone” sound is what everyone uses, and it’s annoying as hell to have it go off in a crowded room and watch 8 people all go for their phones (myself included). Allow some variety, without the need to jailbreak.
A lot of the other platforms also have alert profile scheduling. Namely, you can specify whether you should be alerted audibly, with vibration, or not at all, on a time schedule throughout the day. I’ve defaulted to always leaving my phone on vibrate simply because this is missing.
5- Background apps done right
This is probably everyone’s #1 wish for OS 4.0. Multitasking done right. Sure, you can jailbreak and do it, but it doesn’t lend itself to having a nice task-switcher. Instead, you’re left using what amounts to a task manager, which is completely the wrong way to do it.
Every other platform has it, but only one (WebOS) has done it right so far. Can you, Apple?
6- Better App organization
If you're like me, you have 9 pages of applications that you've tediously organized. But sometimes, logical categories don't come in sets of 16 (how many you can fit on one 'page'). The real solution is to allow some sort of grouping, be that folders, a menu, or something else.
Also, there's no reason people should be limited to 4 apps on the bottom row just for aesthetics when there's room for 5. I couldn't live without having 5 anymore.
7- Better power management – centralized reporting, on/off, scheduling
Something I think Android really executed properly is the centralized power management screen, which HTC has added to virtually every device in recent memory as well: one place for centralized management of radio hardware and other large current draws.
This is something that, if executed properly, could also be a selling point for making hardware “green.” Hell, as a potential EE, I’d be absolutely in love with a screen showing current consumption from all the chipsets in the hardware that report it, plots of use vs. time, and more intelligent prediction of how much life I’ll get out of the device with current use.
But on a more basic level, what we really need is a feature that allows users to schedule the hardware itself. Imagine you’re on a trip without your charger; odds are, you don’t need the radio hardware on while you’re sleeping, but you do need the device on so the alarm works. Allowing users to schedule power events lets you balance use ahead of time.
But a feature I think is really needed is a so-called "last legs" setting. Basically, after the battery has crossed a user-defined threshold (say 15-25%), the software automatically does everything it can to preserve battery life: WiFi is turned off, 3G is turned off in favor of EDGE, screen brightness is reduced to 20%, push services are put on hold, email fetch intervals are doubled or quadrupled, and background processes are killed.
The hardware and software essentially would work together to squeeze every last minute of use out of the hardware when battery gets low. This is especially important for when you cross the threshold while the phone is in your pocket, when you probably don’t even know it’s dangerously close to death.
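For the curious, here's a minimal sketch of what such a policy might look like. This is purely illustrative Python; the RadioState type and every threshold and action in it are my assumptions, not any real mobile OS API.

```python
from dataclasses import dataclass

@dataclass
class RadioState:
    """Hypothetical stand-in for the device's big current draws."""
    wifi_on: bool = True
    network: str = "3G"           # "3G" or "EDGE"
    brightness: float = 1.0       # fraction of maximum
    push_enabled: bool = True
    fetch_interval_min: int = 15  # email fetch interval

LAST_LEGS_THRESHOLD = 0.20        # user-defined, anywhere in 15-25%

def apply_last_legs(battery_level: float, state: RadioState) -> RadioState:
    """Shed the big current draws once battery crosses the threshold."""
    if battery_level <= LAST_LEGS_THRESHOLD:
        state.wifi_on = False             # WiFi off
        state.network = "EDGE"            # 3G off in favor of EDGE
        state.brightness = 0.20           # dim the screen
        state.push_enabled = False        # hold push services
        state.fetch_interval_min *= 4     # fetch email far less often
    return state

print(apply_last_legs(0.15, RadioState()))
```

The point is how simple the policy is: one threshold check, and everything expensive gets shed at once.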
Historically, Apple delivers products with extremely polished, working features. Essentially, they err on the side of only releasing features that always work and work well, instead of shipping features that are flaky or unpolished.
That said, a lot of the market has caught up since 2009. It’s time to address all of those gripes, and I’m hoping OS 4.0 fills some of the glaring holes in the feature set tomorrow. We’ll find out soon.
Something that’s bugged me for a long time is how crude and arbitrary signal bars on mobile phones are. With a few limited exceptions, virtually every phone has the exact same design: four or five bars in ascending order by height, which correspond roughly to the perceived signal strength of the radio stack.
Or do they? Let me just start by saying this is an absolutely horrible way to present a quality metric, and I'm shocked that, years later, it's still essentially the de facto standard. Let me convince you.
It isn’t 1990 anymore…
Let's start from the beginning. The signal bar metaphor is a throwback to times when screens were expensive, physically small, and monochromatic (if not 8 shades of grey), and anything over 100×100 pixels was outrageously lavish. Displaying the actual RSSI (Received Signal Strength Indicator) number would've been difficult and confusing for consumers, the 8 shades of grey were already hard to distinguish, and making one bar breathe in size would have sacrificed too much screen real estate.
It made sense in that context to abstract the signal quality visualization into something both simple and readable. Thus, the "bars" metaphor was born.
Since then, there have been few if any deviations from that design. In fact, the only major departure thus far has been Nokia, which has steadfastly adhered to a visualization that makes sense:
Namely, their display metaphor is vertically ascending bars that mirror call quality/strength. This makes sense, because it's an optimal balance between screen use and communicating the quality in an easy-to-understand fashion. Moreover, they have 8 levels of signal, with 0-7 bars showing. Nokia should be applauded for largely adhering to this vertical format. (In fact, you could argue that the reason nobody has adopted a similar metaphor is that Nokia has patented it, but I haven't checked.)
It’s 2010, and the granularity of the quality metric on most phones is arbitrarily limited to 4 or 5 levels at best.
Thus, an optimal design balances understandability with level of detail. On one extreme, you could simply display the RSSI in dB; on the other, you could sacrifice all granularity and report something boolean: "Can Call," Yes/No.
Personally, I’m waiting for something that either leverages color (by sweeping through a variety of colors corresponding to signal strength) or utilizes every pixel of length for displaying the signal strength in a much more analogue way.
Green and red are obvious choices for color, given their nearly universal meanings of OK and OH NOES, respectively. Something that literally takes advantage of every pixel by breathing around instead of arbitrarily limiting itself to just 4 or 5 levels also wouldn't be hard to understand.
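As a rough sketch of the color idea, here's how a linear red-to-green sweep could map RSSI onto a gradient. The -113 to -51 dBm endpoints are borrowed from the common AT+CSQ RSSI reporting scale and are just an assumption for illustration:

```python
def rssi_to_rgb(rssi_dbm, weak=-113.0, strong=-51.0):
    """Map an RSSI reading onto a red (weak) to green (strong) gradient."""
    t = (rssi_dbm - weak) / (strong - weak)  # 0.0 = weak, 1.0 = strong
    t = max(0.0, min(1.0, t))                # clamp to the valid range
    return (int(255 * (1 - t)), int(255 * t), 0)  # (R, G, B)

for level in (-51, -85, -105, -113):
    print(level, "dBm ->", rssi_to_rgb(level))
```

An indicator driven this way uses the full color range instead of quantizing everything down to 4 or 5 steps.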
Fundamentally, however, the bars still have completely arbitrary meaning. What constitutes maximum “bars” on one network and device has a totally different meaning on another device or carrier. Even worse, comparing the same visual indicator across devices on the same network can often be misleading. For example, the past few months I’ve made a habit of switching between the actual RSSI and the resulting visualization, and I’ve noticed that the iPhone seems to have a very optimistic reporting algorithm.
There’s an important distinction to be made between the way signal is reported for WCDMA versus GSM as well:
First off, one needs to understand that WCDMA (3G) is not the same thing as GSM (2G), and the bars, or even the signal strength, cannot be compared in the same way; you are not comparing apples to apples. The RSCP value, or signal strength, in WCDMA is not the most important value when it comes to call quality from a radio point of view; the signal quality (the parameter Ec/No) also needs to be taken into account. Source
That said, the cutoff for 4 bars on WCDMA seems to be set relatively low, at around -100 dB; 3 bars seems to be around -103 dB, 2 bars around -107 dB, and 1 bar anything below that. Even then, I've noticed that the iPhone seems to run a weighted average, preferring to gradually decrease the reported level rather than allow the sharp declines that usually reflect reality.
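To make that concrete, here's a small sketch of the reporting behavior described above. The cutoffs are my observed estimates, and the exponential moving average merely stands in for whatever weighted average Apple actually uses; both are assumptions.

```python
CUTOFFS = [(-100, 4), (-103, 3), (-107, 2)]  # (minimum RSSI in dB, bars)

def bars_from_rssi(rssi_db):
    """Quantize an RSSI reading into 1-4 bars using the observed cutoffs."""
    for cutoff, bars in CUTOFFS:
        if rssi_db >= cutoff:
            return bars
    return 1

class SmoothedSignal:
    """Gradual decline: an exponential moving average over raw readings."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = None

    def update(self, rssi_db):
        if self.value is None:
            self.value = rssi_db
        else:
            self.value += self.alpha * (rssi_db - self.value)
        return bars_from_rssi(self.value)

sig = SmoothedSignal()
for reading in (-95, -95, -112, -112, -112):
    print(reading, "dB ->", sig.update(reading), "bars")  # declines slowly
```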
Use dB if you’re not averse to math
What you're reading isn't really dBm, dBmV, or anything truly physical, but rather a quality metric that happens to be reported in dB. For whatever reason, most people are averse to understanding dB; the most important thing to remember is that 3 dB corresponds to a factor of 2. Thus, a change of -3 dB means your signal has halved in power/quality.
The notation dBm is referenced to 1 mW. Strictly speaking, to convert a power P in mW to dBm:

P(dBm) = 10 · log10( P(mW) / 1 mW )

Likewise, to convert a signal from dBm back to mW:

P(mW) = 1 mW · 10^( P(dBm) / 10 )
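Those two conversions as code; these are straight textbook definitions, nothing phone-specific:

```python
import math

def mw_to_dbm(p_mw):
    """Power in mW -> dBm (referenced to 1 mW)."""
    return 10 * math.log10(p_mw / 1.0)

def dbm_to_mw(p_dbm):
    """dBm -> power in mW."""
    return 10 ** (p_dbm / 10.0)

print(mw_to_dbm(2.0))   # ~3.01 dBm: doubling power adds ~3 dB
print(dbm_to_mw(-3.0))  # ~0.5 mW: -3 dB halves the power
```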
But even directly considering the received power strength or the quality metric from SNR isn’t the full picture.
In fact, most of the time, complaints about iPhones failing to make calls stem from overloaded signaling channels used to set up calls, or situations where the phone is in a completely acceptable signal area but the node is simply too loaded. So, as an end user, you're left without the quality metrics you need to judge whether you should or shouldn't be able to make a data/voice transaction. The signal quality metric isn't purely a function of client-tower proximity; node congestion matters just as much.
Carriers have a lot to gain from making sure their users are properly informed about network conditions, both so they can make educated decisions about what to expect in their locale and so they can properly diagnose what's going on when the worst happens. Worse, perhaps, carriers have even more to gain from misreporting or misrepresenting signal as being better than reality. Arguably, the cutoffs I've seen on my iPhone 3GS are overly optimistic and compressed into ~13 dB. From my perspective, as soon as you're below about -105 dB, connection quality is going to suffer on WCDMA; however, that shows up as a misleading 3-4 bars.
What we need is simple:
- Transparency and standardization of reporting – Standardize a certain visualization that is uniform across technology and devices. Choose something that makes sense, so customers can compare hardware in the same area and diagnose issues.
- Advanced modes – For those of us who can read and understand the meaning of dB and real quality metrics from the hardware, give us the option to display them. Hell, perhaps you'll even encourage some people to delve deeper and become RF engineers in the future. It's annoying to have to launch a Field Trial application every time we want to know why something is the way it is.
- Leverage recent advances in displays - Limiting display granularity to 4 or 5 levels doesn’t make sense anymore; we aren’t constrained by tiny monochromatic screens.
- Tower load reporting - Be honest with subscribers and have the tower report some sort of quality metric/received SNR of its own so we know which path of the link is messed up. If a node is congested, tell the user. Again, people are more likely to be happy if they’re at least made aware of the link quality rather than left in the dark.
Why Scan Books?
With the prevalence of eBook readers like the Nook, Kindle, and Spring Design Alex comes the necessity of building and maintaining a vast digital library. There are more resources online than one can easily list for purchasing (and downloading) books in a suite of electronic formats, from PDF to DjVu, but what if you already own a book of the traditional dead-tree sort? What if you aren't willing to purchase it again just for the convenience and ease of reading it on your brand new eBook reader?
Scanning becomes your only option.
I'll be honest: the process isn't easy, quick, or glamorous. But it beats spending a day craning over your flatbed scanner or cutting the spine off your expensive book to feed it through an equally expensive loose-leaf scanner (speaking of which, what the heck is up with how expensive those are?!). If the book is expensive enough, the few hours required from start to finish quickly become an economical prospect.
I’m not going to address the legal/ethical/moral considerations. You could argue that making a PDF copy for yourself constitutes Fair Use, but the law being what it is, who the heck knows? Regardless, just exercise some moral introspection and decide for yourself.
In general terms, here's what you'll need:
- A relatively decent digital SLR with a wide-to-normal focal length lens
- Large sheet of black construction paper
- Tripod/Monopod and a way to hold the camera
- Snapter or other image processing software
- Adobe Acrobat/other PDF creating utility
- 2-4 hours of your time, depending on the book complexity
The specific equipment I use is:
- Nikon D80 with Tamron 17-50 f/2.8 lens
- Nikon SB600 flash (optional)
- Nikon remote shutter release (IR)
- Large piece of black construction paper from Michael’s
- Monopod, table, and a copy of my CRC Handbook (more on that weird combo later)
- Snapter for processing images
- Adobe Acrobat 9 Pro for making PDF and OCR
I've already mentioned Snapter twice, and although it's commercial software (with a very generous 15-day free trial that gives you all the functionality of the paid version), don't let that put you off. I've had a lot of success with it just because of how easy and functional it's been in my experience; so much so that I went ahead and bought the paid version.
That said, there are a few open source alternatives that do a pretty good job and are worth mentioning:
Scan Tailor is pretty good, has a nice GUI, and is very actively developed. Unpaper doesn't have a GUI but offers a lot for a command line tool. Both OSS solutions also offer the advantage that you can code or propose functionality changes with the active developers.
Another relevant article with tips comes from /., posted, ironically, the week after I had already embarked on scanning with a digital camera and discovered its ins and outs myself.
My setup is simple: I mount the camera on the monopod, stick it on the table, and balance it there with my trusty CRC handbook and some other heavy books.
You might be wondering why I didn't just use a tripod. The reason is that it's much more challenging to carefully tilt and balance a tripod so the camera is completely perpendicular to the book's surface. For the best photo quality, the book should be as close to parallel with the camera sensor as possible; otherwise, it's harder to get the whole book in focus (depth of field comes into play) and harder to flatten the pages in software.
I generally tape the black paper down to the floor, snap photos of the cover and back cover, and then tape those down as well. More on positioning later.
The whole thing looks like the following:
I have the flash set to bounce from the ceiling, just because in practice this yields the most readable photos. I also use all the light I can from the room itself.
A difficult consideration is that sometimes the print/copy itself has glare. This seems a lot more common with newer books than older ones; it’s almost like the print has a layer of varnish atop it. Just make sure you preview a few images and can actually read the copy.
Positioning the book is the tricky part; it’s difficult to balance between filling the frame with the book (so you have good resolution), and leaving enough space at the edges so that your software can do edge detection. Leave too little space around, and you’ll have a nightmarish time trying to field flatten. Leave too much, and you’ll be throwing away a ton of your image. Even worse, if you don’t tape the book down, it will gradually creep out of the frame.
Another big consideration is rotation. I've discovered that Snapter doesn't account well for material that has even subtle rotation; you end up with slight skew in the resulting images. A little skew isn't a big problem, but any real rotation will immediately cause you headaches.
I usually go for something like this:
You could zoom in a bit more in this case if you wanted; in practice you’ll discover for yourself what works best.
I set the camera to use a relatively big F/# (in this case F/5.6) so there’s as much depth of field as possible. You want the whole book in focus.
Now just snap away
This is the grueling part: capturing images of every page. Snag a friend, as having two people makes this process go much faster. One can turn the pages and crease stubborn ones into place, and the other can trigger the shutter with the remote and make sure the book isn't creeping out of the frame.
I find this can take anywhere from half an hour to much longer, depending on how much trouble the book gives you. The most challenging parts are the very beginning and the end, where the pages have the most curve to them, sometimes sticking up. This is where creasing them down or using some tape on the stubborn ones can make or break your day.
Eventually, you'll have a directory full of images that need processing.
At this point, you can use whatever tool suits your fancy, but if you’re using Snapter, read on.
Click Book, grab all your photos, and go make yourself a drink while you wait for it to do initial edge detection and processing. Nothing is being changed yet; it's just generating the initial traces around the book edges it finds.
After this is done comes the only other bothersome part. It’s very worthwhile to manually go through each page and make sure you’re happy with the edge detection. Frequently, pages that have black or dark color at the edge cause headaches. Drag the handles around until they match closer. This can be grueling, but it’s important.
Click Input, change the background color to black (since we’re using a black piece of paper, or at least I did). Under Output, I also generally turn cropping each page off since I’d rather deal with a spread. Grayscale output will save on space later, and I keep the DPI the same since I’ll compress and downsample later in Acrobat. Now, you can click process and have yourself another drink.
After this is done, you can preview the results on the right. If everything is right, click Save and wait a little longer.
Now you should have a directory full of images waiting to be made into a PDF.
You can use whatever you’d like to make the PDF from the resulting JPEGs, however, I’ve had luck just using Acrobat.
Click Create -> Merge Files into a Single PDF, and then grab all those images you have.
Combine them, and you should now have a huge PDF. Save it, but you aren’t done yet. At this point, I generally take a look at the PDF Optimizer under the Advanced tab, and click Audit Space Usage. Yeah, it should be pretty huge.
If you absolutely need color, just skip this. If your book is black and white, converting is going to save you a ton of space.
To convert pages to grayscale, under Advanced click Print Production -> Convert Colors. Check “Convert Colors to Output Intent” and select “Gray Gamma 1.8.” I usually then exclude the front and back covers from the page range, unless you don’t care about that pretty color you’ll be missing out on.
This process will also take some time. Acrobat is multithreaded, but still doesn't use all 8 logical cores on my i7 920. Just be patient.
After this finishes, you should now see a dramatic difference under the space audit report for Images. There might be a lot of document overhead, however. Don’t worry, this is normal.
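As an aside, if you'd rather do the grayscale conversion before the images ever reach Acrobat, a minimal sketch using the Pillow imaging library would look something like this (the directory names are hypothetical):

```python
from pathlib import Path
from PIL import Image

src = Path("scans")        # JPEGs out of Snapter
dst = Path("scans_gray")   # grayscale copies for PDF assembly
dst.mkdir(exist_ok=True)

for jpg in sorted(src.glob("*.jpg")):
    # "L" is Pillow's 8-bit grayscale mode
    Image.open(jpg).convert("L").save(dst / jpg.name, quality=85)
```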
At this point, it usually makes the most sense to do some OCR if you want, just to make the document searchable. Document -> OCR Text Recognition -> Recognize Text Using OCR does the trick.
Click Edit and select Searchable Image (Exact). This won’t resize your images or do compression; we’ll do that later. Now, wait a long time while it consumes CPU cycles and hopefully makes your document so much more powerful and useful.
After this finishes, you’re ready to do some compression and hopefully make your document small enough to not be an embarrassment, you storage hog, you. I usually downsample to around 300 DPI, leave monochromatic images alone (since we don’t have any), and opt for JPEG2000. Check everything in the Discard Objects, Discard User Data, and Clean Up tabs.
Click OK, and now be prepared to wait the longest you have yet. Even on my rig, this takes an hour or two.
Check the space audit once more, and you should now have a reasonably sized, fully searchable, readable PDF, ready for your enjoyment.
Like many others yesterday, I eagerly awaited the Microsoft CES keynote and the chance to see Steve Ballmer once again have a Developers Developers Developers moment on stage. Although it was initially marred by a power outage which delayed the conference some 20 minutes and damaged a Media Center TV and an ASUS eeeTV demo, what really made me pull the plug was what Microsoft did to the live stream itself.
Initially it was plagued with audio problems. The stream started too quiet, then suddenly lost the left channel, then the left channel came back but killed the right channel. At one point I’m certain there was some sort of loop in a volume normalization system, as gain increased continually for at least an entire minute. Of course, these issues are technical and completely understandable given the fact that nearly everything needed to be restarted after the power outage.
So imagine my disgust, and the disgust of others, when during the Microsoft Xbox 360 part of the keynote, the following comes up right as they prepare to show the Halo Reach trailer:
Absolutely incredible, censoring a live keynote because of IP concerns from the very company throwing the keynote. Even better, apparently the Xbox team wasn’t made aware that there was any problem at all with what was going to be shown:
Sorry that had to black that out….I did not know :( -Major Nelson
Even more strange, the content that was shown wasn’t new, in spite of the fact that the announcer lead-up to the video made it sound like it was going to be. It was nothing more than the Halo Reach trailer released over a month ago.
It’s a video…not a #haloreach demo. -Major Nelson
Why then did this content merit censoring the live stream for nearly 3 minutes? Is Microsoft not comfortable with using the public spectacle and attention that is CES to promote its own products and games? Is it honestly concerned that showing a trailer for a game in a live video stream constitutes some sort of breach in IP? What?
That by itself wouldn't be noteworthy; it was what followed that really iced the proverbial cake for the keynote.
Yes. They did it again. If you're so inclined, the video is here for everyone to view, now that we've all been made to feel like children.
There is seriously so much wrong with doing something like this to the thousands of people watching the live stream who aren't at CES but are still interested that I don't even know where to begin. In fact, I don't even have to, because so much of it is obvious. But not, apparently, to Microsoft. Shortly after that, I stopped watching.
Nice of Microsoft to leave hardworking, end-user-facing employees like Major Nelson to pick up all the pieces:
Reagarding[sic] the Reach blackout on the stream…..I am going to talk to some folks about that #notcool -Major Nelson
Ok, I need to take a walk and have a little chat with some folks. -Major Nelson
Fast Forward to Today
Imagine how shocked I was today, when during Paul Otellini’s Intel CES keynote the following popped up on the livecast:
I’m still not entirely certain whether, once again, the stream had been interrupted due to intellectual property concerns, DRM, or simply because they didn’t want to show more 3D parallax (despite having done so just minutes before).
Whatever the case, this seriously needs to stop.
Although I couldn't make it to CES this year, I have been following it pretty closely through liveblogs, news items, press releases, and of course live webcasts. Unsurprisingly, the main highlight of the conference this year is the popularization of 3D media: displays, cameras, movies, and all the compute power to render, edit, and distribute it.
What's become immediately obvious, however, are the challenges this new format will face before becoming widespread. The most glaring of which is that all the 3D I've been able to see so far amounts to this:
No, not Bono, or how content providers hope this will sell yet another copy of media we already have, or how 3D is somehow the end of movie piracy (this time in a 3D format). Parallax.
Of course, parallax is fundamental to how 3D displays work; you present a different image to each eye, with the subject shifted in proportion to how much depth should be perceived. The chief problem I think adoption will face is that, ultimately, you need to see an example of 3D to become a fan of 3D. In essence, it's impossible to convey what 3D displays look and feel like (especially in print or on 2D monitors) until you already have one.
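To put a number on "shifted in proportion to depth": the standard stereo relation gives an on-screen parallax of p = E * (Z - D) / Z for eye separation E, screen distance D, and object distance Z. A toy sketch (the 65 mm eye separation and 2 m viewing distance are illustrative assumptions):

```python
EYE_SEPARATION_MM = 65.0   # typical interocular distance (assumed)
SCREEN_DISTANCE_M = 2.0    # viewer-to-screen distance (assumed)

def screen_parallax_mm(z_m):
    """On-screen shift between left- and right-eye images, in mm."""
    return EYE_SEPARATION_MM * (z_m - SCREEN_DISTANCE_M) / z_m

for z in (1.0, 2.0, 4.0, 100.0):
    print(f"object at {z:5.1f} m -> parallax {screen_parallax_mm(z):+6.1f} mm")
```

Negative parallax (crossed images) pops the object out of the screen, zero puts it at the screen plane, and positive parallax approaching the full eye separation pushes it to infinity.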
Forget the primary hurdle to 3D, the glasses (unless you have a very special 3D monitor that doesn't require them because it uses voxels or a surface pattern to create the parallax). It seems to me that, already, you're going to have to either go see a 3D movie or find a very lucky friend who has a 3D monitor to make an educated decision about it yourself. And although the industry seems to have already decided this is the next big trend, consumers must first be convinced it's the way to go.
Today, Google finally announced the much-hyped, finally-official Googlephone at their Mountain View office. Admittedly, there wasn't much about the actual announcement that wasn't previously known; specs, photos, and even an entire review had already leaked out before the official announcement. But the announcement marks Google's first real step into distributing Google-branded hardware directly to consumers, the entry of another Google-blessed flagship for the Android platform, and a different, long-needed business model for selling phones and wireless contracts.
It's been a year, 2 months, and 14 days since the launch of the T-Mobile G1, and Android has matured into a serious contender since its beginnings as an aspiring platform in a market dominated by Windows Mobile, Symbian, and iPhone OS. While the G1 seemed awkward in a kind of adolescent manner, with its chin a strange design 'feature,' its storage ultimately limited, and its design-by-committee UI/navigation (anyone remember "how many clocks does it take for Google engineers to tell time?"), the Nexus One is finally enough to make Android a real iPhone contender alongside the Droid.
That said, the platform does still suffer from a number of notable shortcomings:
- Application storage limited to 512 MB partition
- (Which Google says it will fix soon by allowing applications to reside on an encrypted partition on removable storage)
- No multitouch within official applications
- (This strange choice likely stems from legal and patent concerns over Google using Apple IP/prior art. Google likely doesn't want the whole board-member-sharing fiasco to undergo any more scrutiny than it already has)
- Slower web browsing
- (Flash? more on that in a second)
- No support for CDMA (Verizon) until spring, AT&T 3G band support unclear
- (Rumors abound that Google is working on an AT&T version, but only a “dozen or so employees have access to this hardware”)
I'm going to expand on numbers 2 and 3, since these are what I personally find most immediately interesting.
Engadget did a rather thorough review of the Nexus One, pre-empting its announcement (karma for the way Google tried to pre-empt CES?), and notably included a quick and dirty comparison of page load speeds in the browsers on the iPhone 3GS, Nexus One, and Droid. Qualitatively, the winners and losers are painfully obvious from the video, but I took down some times and came up with the following:
- iPhone 3GS – 17 seconds
- Nexus One – 71 seconds
- Motorola Droid – 82 seconds
What immediately sticks out is that it took both of the Android 2.x platforms roughly 4 times longer to load the same website as it did the 3GS.
Initially, this doesn't make sense. The 3GS sports a relatively recent ARM Cortex A8 underclocked from 800 MHz to 600 MHz, while the Nexus One runs the latest (also much-hyped) Qualcomm Snapdragon SoC at 1 GHz (Qualcomm QSD8250, according to Google). The Snapdragon core is architecturally very similar to the Cortex A8, so comparing clock speeds is roughly applicable. Why, then, is a newer-generation chipset clocked 66% faster somehow 4 times slower? Heck, the Android browser even runs lean-and-mean WebKit at its core, same as Mobile Safari on the iPhone, and Chrome and Safari on the desktop. Why is it so much slower?
My theory? Flash.
The same multimedia platform hogging resources on the desktop is now hogging resources and slowing down browsing on mobile devices. Excellent. Sure, there's a lot of useful content out there driven by Flash: videos, games, navigation, photo websites, fancy UI. But what's the biggest use? Advertising. Adobe knows it; just watch their video on mobile Flash and notice what they highlight. Advertising.
Up until now, just about everything has been usable without adblock on all the modern smartphone browsers I've used (Mobile Safari, Opera Mobile, Opera Mini, IE Mobile), primarily because they didn't support Flash. Unless mobile browsers begin allowing plugins like Adblock (as mobile Firefox, aka Fennec, has begun doing), Flash is something I'd rather see disabled than enabled. How much of a feature is it to waste not just performance but ever-critical battery on a mobile device, primarily to show animated and intrusive advertising?
No AT&T 3G, Just T-Mobile
A lot of what Google was really trying to do with the Open Handset Alliance, with launching a phone customers can buy almost directly from HTC (the Nexus One's OEM), and with introducing an essentially new mobile business model, was to abstract the carrier away from the device.
That is, handsets are increasingly moving towards being carrier-agnostic. In reality, this is the way it was intended to be (and moreover, should have been, at least in GSM-land). In fact, while this business model is entirely new to the US, the portability of handsets using SIMs is nothing new to customers in Europe, who frequently purchase handsets unlocked and bring plans (and SIMs) with them. (As an aside, much of the reason CDMA became dominant in the US was because this same flexibility wasn’t part of the specification; the unique identifier is built into the phone itself in the form of an ESN/MEID.)
It follows, then, that if Google truly wanted to create a splash in the industry and achieve its goal of creating and directly selling the ultimate flagship device that’s totally carrier agnostic, they would have made absolutely certain that HTC built in either multiple radios for UMTS and CDMA, or some modern, hybrid UMTS/CDMA chipset similar to the Qualcomm MSM6200 (PDF link) rumored to be at the core of the next-gen iPhone.
Whatever the case, the decision to launch hardware that at present restricts the device to T-Mobile for 3G, EDGE on AT&T, and no CDMA functionality at all severely limits its ultimate impact. Moreover, it means that HTC and Google are going to have to support 3 sets of unique hardware under the "Nexus One" name: a CDMA version for Verizon/Sprint, the current incarnation for T-Mobile, and a final version supporting AT&T's 3G frequencies. Perhaps even more if they eventually move to support additional carriers in Europe and Asia.
Personally, I find it heartily ironic that rumors abound that Apple is using a hybrid UMTS/CDMA chipset in the next-gen iPhone. If so, that would make the same iPhone so many complain about because of its AT&T exclusivity the most open.
More open, in fact, than the Nexus One.
Back on the old brianklug.org I had a number of documents, which I'm preserving here in this legacy post. These are things that, while still relevant, don't really merit a whole new individual post per document.
- A presentation I put together for an IEEE student contest. It details (in a high-level fashion) the installation, benefits, and driver modification required to install an MTRON 7000 SSD in an Intel 965 platform. PDF: SSD Installation in Intel 965 platform for IEEE student contest
- The original documentation prepared for the Imaging Technology Laboratory detailing installation of the MTRON 7000 SSD. It details all the driver modification necessary to enable AHCI and subsequent full throughput of the SSD on the ICH8 platform: Mtron_SSD
- Curving a CCD, Overview and Goals: Technical Note 1. A basic overview of the considerations, challenges, and state of progress regarding field-matching CCD curvature. This document should serve as a primer for why this line of work is both relevant, and important: Curving_CCD_Report
- Twitter Exploration for AI Lab: Feasibility study for spidering, social network analysis, and further research. A primer for what the service is, how it works, how it looks, and how we can leverage it for business intelligence. PPTX: Twitter Exploration, Twitter Update
- ECE 372 (microprocessor organization and design) final project. This is another presentation compiled with the Beamer class, detailing the design and construction of a network appliance monitor and restarter: Final Report Presentation