I’ve been meaning for a while now to write about a set of interesting new rechargeable AA batteries. Last year (wow, has it really been that long?) I came across a review on Engadget of some PowerGenix NiZn (Nickel Zinc) rechargeable batteries which promised better performance than NiMH: higher voltage and greater capacity. I was compelled to invest in some otherwise experimental new rechargeables for a few reasons:
Doing indoor photography with my girlfriend – especially weddings – makes it apparent just how quickly you can go through AAs. So many that it’s expensive and prohibitive to keep up a supply and carry all those batteries around. They’re pricey, and they just don’t last long enough – one or two hundred shots apiece, if I recall correctly.
Anyhow, right after getting them and charging them, I decided to shoot a wedding with my SB600 flash and the NiZn batteries. I was immediately floored at how fast the flash recharged and how performance never seemed to fade the way it does with alkalines. Usually, flash performance falls off exponentially with generic alkaline batteries – eventually the recycle time gets so long you can’t take photos of anything. What was so useful about the NiZn cells was that hugely fast recharge time.
That’s also… the problem. While shooting that wedding, I managed to somehow completely blow out the flash. The thing was under 2 months old, used at a few other weddings, without what I’d consider very many activations at all. The SB600 apparently has no thermal cutoff at all, allowing the whole thing to overheat. Whatever the case, while shotgunning some photos of the dance floor in low light, it stopped working. The flash didn’t feel notably hot, but it showed an error on the screen and wouldn’t fire from then on. I shipped the flash back to Nikon and had a replacement about a month later, but the point is that I’m now far too scared to repeat the “experiment” again.
It seems that two things are possible:
- The SB600 lacks adequate/any thermal protection preventing the flash from overheating or being fired too quickly
- The SB600 may implicitly rely on alkaline AAs being unable to deliver current quickly enough to overheat the flash
For what it’s worth, the NiZn PowerGenix cells are nominally 1.6 volts (as opposed to the 1.5 standard for alkaline, and 1.2 for NiMH). Even so, there should definitely be regulation of some kind preventing failure.
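Since the worry hinges on that extra tenth of a volt per cell, here’s a quick back-of-the-envelope sketch of what a four-cell flash pack sees. The per-cell values are textbook nominal voltages, not measurements of these particular cells:

```python
# Nominal per-cell voltages for common AA chemistries (textbook values,
# not measurements of the PowerGenix cells themselves).
NOMINAL_VOLTS = {"NiZn": 1.6, "alkaline": 1.5, "NiMH": 1.2}
CELLS = 4  # an SB600-style flash takes four AAs in series

def pack_voltage(chemistry: str, cells: int = CELLS) -> float:
    """Nominal voltage of `cells` identical cells in series."""
    return NOMINAL_VOLTS[chemistry] * cells

for chem in NOMINAL_VOLTS:
    v = pack_voltage(chem)
    pct = (v / pack_voltage("alkaline") - 1) * 100
    print(f"{chem:8s}: {v:.1f} V ({pct:+.0f}% vs alkaline)")
```

So a NiZn pack sits only about 7% above an alkaline pack at nominal voltage – modest on paper, which suggests the real culprit is more likely sustained current delivery than the voltage itself.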
The batteries themselves are remarkable in their performance, but it’s that which scares me out of using them in the flash where they’re needed most.
I purchased the NiZn batteries after your initial review and was super stoked when they came. I’m an avid digital photographer, and replacing flash batteries at a wedding actually gets expensive enough to make buying a bunch of rechargeables worthwhile.
That said, I had a brand new SB600 (just like yours) burn out with no warning while shooting with the NiZn batteries. I had to ship the whole thing in and get it replaced. I browsed the Fred Miranda forums some time later and found a bunch of people with the same issue – the SB600 relies on alkaline batteries simply not being able to deliver power quickly enough, when shotgunning the flash, to avoid burning out. There isn’t any thermal safeguard.
So be warned, even though you’re testing on an SB600, if you actually do go out and abuse the batteries like you would at a big event firing the flash a lot, you WILL nuke your stuff. I’m too scared to use my NiZn batteries now.
That Fred Miranda forum thread I mentioned is here.
Earlier today, I was reading yet another Digg article on Arizona’s immigration bill. For the most part, the articles and comments I’ve been reading have accused Arizonans of either being gun-toting crazies or racist white elites. I’m sure (read: certain) there’s some demographic of the state that probably is, but the entire state, people? What a way to typecast.
Anyhow, something about what I was reading there finally compelled me to write a bit, and what started small quickly ballooned to a huge comment I left on the post. I’m reproducing it below:
Epic Long Post:
It’s time we settle this illegal immigration dispute once and for all, honestly. I’m a native Arizonan, and I can honestly attest to how completely out of hand this situation is getting, and how completely misunderstood and misconstrued the current state of affairs is down here.
First of all, the majority of Arizonans support this legislation. Now, before you write us all off as being racially insensitive bigots and crazies, ask yourself what the rational reasons could be for passing such a sweeping piece of legislation. I’m shocked at the fact that this discussion is almost entirely centered around racial profiling (do you not show your ID for everything else already? Being pulled over? Getting on a plane? Buying something?) and the economy (albeit very superficially). The problem has gotten so immense that it literally has effects on almost every major issue.
To be honest, I don’t know how I feel about the bill, I just think it’s time this issue gets the serious attention it’s been sorely lacking for the greater part of two decades now. If nothing else, Brewer should be applauded for finally getting the border states in the limelight and *some* debate going, even if it’s entirely misplaced.
So just bear with me, put aside your misconceptions about the issue (because odds are, you don’t live here, you don’t follow the issue, and you’re probably not aware of the scope of the problem), and think.
1. The environmental aspect is being completely downplayed. This is something that has even the most liberal of the liberals supporting drastic immigration reform down here in the Sonoran Desert; the long and short of it is that Mexicans and drug traffickers are literally shitting all over the desert. The sheer volume of people crossing through these corridors in the desert, and the trash they bring with them, is absolutely stunning.
Don’t believe me? Look: http://www.tucsonweekly.com/tucson/trashing-arizona/Content?oid=1168857 Some of the locations in here are barely a 10 minute drive from where I sit now. Talk to me about the environment, and then look at the kind of mess being left out there. I don’t care what the solution is, this kind of dumping/shedding of stuff/massive ecological disaster cannot continue. It can’t. It’s literally devastating.
2. Drug trafficking. Has anyone even talked about this? It isn’t just about arresting working Mexican families, it’s about combating the completely out-of-control drug trafficking going on in our backyards. In fact, I’d say the main catalyst has to do with security rather than economic drain – there’s no arguing that the Mexicans living here are probably helping the local economy with their labor and efforts, rather than draining it.
In case you haven’t been following, the drug cartels in Mexico are now nearly out of control – a problem that’s of more immediate concern to us down here (in terms of security) than terrorism. Frankly, screw terrorism; I’m more worried about my family being shot or killed in the crossfire of this ongoing drug battle than about some terrorist setting a bomb off. Read about how insane this is: http://online.wsj.com/article/SB123518102536038463.html
“The U.S. Justice Department considers the Mexican drug cartels as the greatest organized crime threat to the United States.” http://en.wikipedia.org/wiki/Mexican_Drug_War You better believe it. People are being killed in Juarez, Nogales, everywhere. This is literally next door, folks! Not a continent away! Full scale political unrest! Talk about a threat to national security.
3. The murder of Rob Krentz has galvanized support for serious, strong, kid-gloves-off reform in the state. If you aren’t familiar, this super high profile incident involved the murder of a well-liked, peaceful Arizona rancher on his own property some weeks ago. http://azstarnet.com/news/local/crime/article_db544bc6-3b5b-11df-843b-001cc4c03286.html Marijuana was later found at the site, and there are definite drug trafficking ties, as the ranch lies along one of the numerous well-known migration and trafficking corridors that dominate southern Arizona.
I think when the history books are written, this guy’s shooting will be a real inflection point you can point to as leading to this kind of legislation. The sentiment for structured amnesty or some other kind of reform almost completely disappeared after a few similar incidents. Violent, often fatal crime near the border is literally making it a physical hazard to be down here.
Want more proof? Look no further than the concealed carry legislation that also just passed. It isn’t that we’re all a bunch of friggin psychos, it’s that we’re honest-to-god scared of being shot in our homes or out in the desert. I know I sure as hell wouldn’t go walking around out there when even the border patrol is worried about some parts of the desert even just an hour away.
4. Sure, the economy has something to do with it, absolutely. Hell, our economy is worse off than California’s, both by percentage and per capita: http://www.bizjournals.com/phoenix/stories/2008/02/25/daily29.html
The major public universities in the state are struggling for dollars to keep classes going, there are mandatory furloughs everywhere, and we’re paying for the rest in fees while still not breaking even. Hell yes, the economy has something to do with the perception that illegals are partly responsible (however true or untrue that may actually be – personally, I’d wager Mexican migrant labor has a net positive effect on the local economy; let’s be honest, profiling them as lazy people really *is* racism).
So those are a few arguments I don’t feel have been emphasized enough online, anywhere. Sit around and discuss the finer points of constitutional law and whether this is “racial profiling” if you like; honestly, that debate has been played out enough already.
Meanwhile, the problems down here are getting worse, and worse, and worse, and the very real drug war raging in the desert just continues to get scarier. I think this will be a very interesting and potentially huge states rights issue. In the meantime, some of the points I touched on (I hope) are good food for thought if you think that Arizona suddenly just decided to “go insane” or “lose our collective shit.”
I promise you, it isn’t the case.
If there’s one rule on the internet I find truer than all others, it’s the one on trolling. If you haven’t heard it before, it’s simple: don’t feed the trolls. It’s almost a hands-off approach to argument or discussion resolution, but for the sweeping majority of internet disputes, it’s the only higher-road way to approach those topics.
That said, what I’m going to do here by acknowledging and refuting something (that I consider trolling) directly breaks that rule. But bear with me.
Fishing for publicity
There’s been a lot of that going on this week, but what really started the week out for me was movie critic Roger Ebert’s second assertion that “games are not art,” and later that games can never be “high art.” It’s another attempt to fish for publicity and incite a wave of semantic debate stemming from a completely incorrect premise of his. He did it five years ago too, albeit in a very small blurb on his website:
I did indeed consider video games inherently inferior to film and literature. There is a structural reason for that: Video games by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control.
I am prepared to believe that video games can be elegant, subtle, sophisticated, challenging and visually wonderful. But I believe the nature of the medium prevents it from moving beyond craftsmanship to the stature of art. To my knowledge, no one in or out of the field has ever been able to cite a game worthy of comparison with the great dramatists, poets, filmmakers, novelists and composers. That a game can aspire to artistic importance as a visual experience, I accept. But for most gamers, video games represent a loss of those precious hours we have available to make ourselves more cultured, civilized and empathetic.
That was in 2005. Flash forward to 2010, and what does he decide to address just a month or so after his return to work from battling cancer? That very topic. Except this time, his assertion is even more definite: video games can never be art. But never is a long time, so he hedges later by noting that games will never be art within our lifetime. Perhaps he refers to his own, admittedly shorter lifetime (seeing as the median gamer is around 30 years old), but the clarification is nevertheless a “you know, just in case” type of cop-out.
First of all, the timing of his post is telling; it’s obviously a publicity stunt to draw more attention to his recent return to movie reviewing. But I think it’s a relevant discussion to have at this point, whatever Ebert’s possibly misguided, possibly earnest motives for bringing up such an academic issue again.
Ebert seems confused about which argument he wants to use to do battle with gamers. In fact, if you break his position down, there seem to be three.
- His claim in 2005 was that nothing truly interactive can be artistic like “serious film and literature.” He makes this claim again in his 2010 piece, in so many words, by claiming that games can never be art in principle because gamers actively participate in the outcome. He argues that art is, because it guides one’s experience through a singular, common vision or experience crafted by the artist. It doesn’t rely on decision making or input from the user to convey its message; a novel or painting appears – at a superficial level – the same way to everyone. It’s static and timeless, not interactive.
- The purely academic side of him wants to see formal, academic citations and comparison, like a critical essay or critique. He challenges, “no one in or out of the field has ever been able to cite a game worthy of comparison with the great poets, filmmakers, novelists and poets.” Perhaps Ebert is waiting for someone to write a really good critical essay with numerous citations and academic form. I’ll show in a few moments how absurd this point is.
- He takes issue with videogames as a money-making industry, noting that art cannot be commercial. He quips, “I allow Santiago the last word. Toward the end of her presentation, she shows a visual with six circles, which represent, I gather, the components now forming for her brave new world of video games as art. The circles are labeled: Development, Finance, Publishing, Marketing, Education, and Executive Management. I rest my case.” Apparently, art isn’t art if it requires the infrastructure a business does.
These are Ebert’s primary theses.
I’m going to tear them down.
Argument 1: The folly of a straw man
Ebert builds a classic straw man in his first argument. He claims art needs to be static to be art. For example, The Return of the Native ends the same way regardless of what the reader does; you can’t change the outcome, you can’t win, you can’t lose, you can’t stop Eustacia from drowning, and you can’t influence the characters’ outcomes. You could tear pages out of the book, write a critical essay about how different Thomas Hardy’s message might have been if several small events had happened differently, or even burn the novel and mail him the ashes – but the novel’s story, plot, and message are immutable and common. It isn’t interactive. In much the same way, painting, photography, and sculpture – other perceptual art forms – don’t superficially rely on user interaction. They simply exist however the artist finished them, and remain that way. They aren’t interactive either.
But are they?
In fact, I would argue that interaction and perception are among the bases of art. The unique interpretation of an art form – the ability for something to be interpreted by a viewer in a singular, personal way – is what makes art powerful. Sure, superficial, first-order art matters, but what makes good, high-level artwork powerful is that it can be interpreted in different, meaningful ways by different viewers. The message can indeed be timeless, but art doesn’t exist in a vacuum – there will always be context, and that context ensures that the mere act of viewing makes real art an interactive experience.
I’m confused why Ebert, or any critic, would want to hold up an experience that isn’t engaging as desirable.
What decade are you living in, Ebert?
But all of this is beside the point! Ebert is so out of tune that what he considers videogames are the primitive constructs that existed, perhaps at latest, in the late 1980s. The vision of videogames he’s stuck on is one where games existed solely to be won or lost, where perhaps the only secondary reward was some arbitrary high score. If you don’t believe me, consider this: Ebert constantly uses someone playing a game of chess as an example. Yes, chess. Or mahjong. Or other basic board games. He’s so out of touch that at one point he even brings up sports. Yes, as in athletes and commercial competitive activities. This couldn’t be further from what gaming is or is about today. He’s arguing about an entirely different kind of video game than what people are thinking about – videogames that are little more than evolutionary progressions of board games, things like Frogger, Tetris, or Pac-Man. They exist to be played, not experienced. This is where Ebert built his straw man. But videogames have changed completely since Ebert stopped paying attention and formed his misguided conclusions.
The irony, of course, is that telling a common, linear story for each player is actually a negative characteristic of a modern videogame. Titles like Crysis, for example, were lauded early on for allowing players to experience a game world in a unique and distinct way. Players can decide to charge enemies head-first and be destroyed in the uphill battle, or flank through the jungle, surprising an entrenched enemy force from the side, or perhaps snipe, miss the shot, and find themselves flanked from both land and sea (the entrenched force having since radioed their buddies for help).
But at the end of the day, the game’s story remains the same. The outcome of the level is guided, and although the player’s decisions might be unique at a small scale, they still arrive at the same inevitable outcome. If you’re so inclined, it’s like a path-independent line integral – it doesn’t matter which path you take between two points, the difference is still the same. Within each level, the beginning, middle, and end remain the same. These levels then (hopefully) tie together into a story that gives the game some purpose.
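For the mathematically inclined, the analogy above can be made precise with the gradient theorem: the line integral of a gradient (conservative) field depends only on the endpoints, never on the path taken between them.

```latex
% Gradient theorem: integrating \nabla f along any curve C from
% point a to point b yields the same value, the endpoint difference,
% regardless of the path taken:
\int_{C} \nabla f \cdot d\mathbf{r} = f(\mathbf{b}) - f(\mathbf{a})
```

Every route a player carves through a level is a different curve C; the scripted beginning and ending are the fixed endpoints.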
To put it simply, modern games have a plot, a story – a beginning, middle, and end. In fact, you can extend the novel analogy further: Game levels are essentially chapters. The game itself is like a novel. There are even trilogies or series – look no further than Halo, Metal Gear Solid, Final Fantasy, Bioshock or hundreds, perhaps thousands of other spectacularly well put together, coherent game franchises.
Argument 2: It isn’t recognized by academia
It’s obvious to me that Ebert is living in the past, in a world at least 20 years gone. If he were willing to do some research, read an academic paper, or just use Google Scholar, he would’ve heard of ludology by now. If you’re going to write about something, guy, do your research, or you get caught with your pants down looking patently ignorant. Here, Ebert looks just that: confused, outdated, and misguided. Nobody has engaged you, Ebert, with serious arguments about why videogames constitute a true artistic form – probably because they’re too busy out making their art. Even more likely because, until now, you wouldn’t entertain the dialogue. You’ve picked your battles against a straw man, and against titles nobody has ever heard of or cares about – titles modern videogamers have never heard of, don’t identify as art, or that are so crude and basic they’re a mockery. They aren’t mainstream. They aren’t good.
I could go out, shoot a 20 minute movie of the sky, write a couple hundred words about how it lacked story, purpose, plot, or thought – and then claim that all movies aren’t art. That they’re for children. But you know what, I wouldn’t, because that’s not a valid argument. Just like movies, games span the gamut when it comes to quality. It doesn’t do justice to make claims about any medium solely on the basis of a few bad examples.
So you know what, fine, I’ll give you a list of some games worthy of comparison “with the great poets, filmmakers, novelists and poets.” (poets must be awesome in Ebert’s 1980s world, because he mentioned ‘em twice. Freudian slip?)
- Bioshock is gaming’s Citizen Kane. It’s as simple as that. It’s literally a game that retells Ayn Rand’s objectivist philosophy in a reimagined form.
- Zork is often cited as one of the first text-based interactive video games. It’s an immersive, engaging story without flashy graphics or artwork. The interaction and story are the game.
- Half Life is a narrative epic famous for being told almost entirely through the first-person perspective of the player. It’s a classic that’s nearly unrivaled in its genre. Doom, Quake, Return to Castle Wolfenstein – these are equally as epic.
- System Shock. This is a classic. System Shock is “the benchmark for intelligent first-person gaming”, “[it] kick-start[ed] the revolution which … has influenced the design of countless other games.”
- Fallout is an RPG staple that I would cite largely as an example of great storytelling that isn’t linear. In fact, that’s the point of an RPG – the story is almost your own.
- Knights of the Old Republic, the Jedi Knight series, X-Wing, TIE Fighter, X-Wing vs. TIE Fighter, and X-Wing Alliance are some of the best examples of games crafted from a movie world. Ebert seems to only consider video games that are awful adaptations of movies. Perhaps he’s under the impression every video game adapted from a movie is as godawful as the Back to the Future game, which has nothing to do with the movie. Or perhaps his mind closed with the E.T. video game, which many seriously cite as the tipping point that led to the video game crash of 1983.
- Mass Effect is literally what Bioware (who developed Knights of the Old Republic) wanted to make when set loose. It’s a fully developed future universe, complete with characters, environments, races, and plots that are fully immersive. It’s amazing that movies like Avatar can be lauded over and over again for being so comprehensive in their vision, when games like Mass Effect have been doing the same for nearly a decade. Anything less would be half-baked.
- Heavy Rain is literally a game-novella for adults. It’s an absolutely amazing experience that plays like a movie, much the same way Metal Gear Solid IV plays just like a movie.
You want the modern equivalent of poets? The authors of prose presented in modern form? Look no further than:
- John Carmack
- Shigeru Miyamoto
- Cliff Bleszinski
- Casey Hudson
- Stieg Hedlund
- Sid Meier
- Yuji Naka
- Gabe Newell
- Will Wright
This list is only a tenth of the numerous acclaimed videogame designers, writers, programmers, and visionaries. Perhaps Ebert never ventured to the Wikipedia page of famous videogame visionaries, or looked at any of those titles. Almost every single one is art. He’s paid to write these pieces; why does it take me to find acclaimed examples that rival his acknowledged novelists, directors, and poets? Is it honestly beyond him to do research before writing? Or is his expertise so limitless that it needs no foundation?
Argument 3: Videogaming is an industry
Wait, and film-making isn’t? This point is so pleasantly confusing, conflicted, and wrong that anyone with a room-temperature IQ can see right through it.
- There are indie film-makers.
- There are indie video game designers. (look no further than the App Store, Xbox marketplace, Android marketplace, or homebrew communities)
- There are independent movies.
- There are independent game developers.
- There are huge companies making movies.
- There are huge companies making videogames. (Electronic Arts, Rockstar, Blizzard, etc.)
- The bar of entry for making movies is as low as having a computer and a video camera.
- The bar of entry for making videogames is as low as having a computer and possibly a console, iPod Touch/iPhone/Android Phone.
- There are cult classic movies.
- There are cult classic videogames.
See what I did there? What the heck kind of argument is it that videogames are an industry? Last I checked, movie-making is also a huge industry. In fact, both are subsets of the “entertainment” industry. So is music, yet nobody is blue in the face or 1,400+ comments deep into an argument about those mediums.
If running a successful industry (and thus “Development, Finance, Publishing, Marketing, Education, and Executive Management”) makes an entire medium not art, then movies, music, and videogames are all not art. This is the classic argument fine-art photographers use to exclude photojournalism from being true art. Are we really going to have this argument again?
I guess what I find alarming really isn’t that Ebert doesn’t get it, it’s that he’s seriously vehemently engaged in killing the perception of videogames as an art form.
For the longest time, movies, cinema, film, whatever you want to call it (it’s all the same thing – images presented in rapid succession to give the impression of continuous motion, perhaps accompanied by audio), fought an uphill battle to be considered art. In fact, early cinema seemed no more an art form than perception itself – it merely existed. No doubt some of the very first directors and cinema visionaries fought and argued with the established critics of their time for the respect and recognition as an art form that Ebert now takes for granted.
Take a step backwards, and think about how long and how hard painters and photographers argued about which form was truer art. Which form was better. Is something captured, rather than crafted, artistic? Or can anything be crafted? Aren’t photographs taken with some form? Which one is art?
Go backwards again and consider the evolution from one media to another time and time again. Was realist art a truer art form than impressionism? Is modernism, pop art, or contemporary art less of an art form than photography? Than sculpture? Than cinema?
What makes art, art? Wikipedia defines art eloquently, as:
Art is the process or product of deliberately arranging elements in a way to affect the senses or emotions. It encompasses a diverse range of human activities, creations, and modes of expression, including music, literature, film, sculpture, and paintings. – Wikipedia
So how is a videogame – which arguably is what you’d get if you convolved all of these together – not art? Modern videogames include all of those, well, arts. Musicians for directing soundtracks, scores, and guiding the player’s emotions and the game’s feel. Literature for the game’s story and plot, tying it into a cohesive experience. Film for cutscenes, directing, posing, and when the player isn’t in direct control. Sculpture for the artists crafting player models, environment models, scenes, levels, and objects. Painting for those creating textures of virtually everything mapped on the models. All of these aren’t just part of the process, they are the process. So isn’t the sum of a videogame greater than its parts?
It’s a rhetorical question, of course. My theory? Ebert is jealous. Video games are already dwarfing all of the other entertainment forms. Music, literature, and cinema. Look no further than Call of Duty 4: Modern Warfare. Halo. Fallout. Unreal. Half-Life. All huge blockbuster titles dwarfing mere cinema.
But you know what? It doesn’t matter. At the end of the day, it’s semantic. What is “art?” I’d argue, art is that feeling left stirring in you after you’ve left the experience behind. After you’ve put down the controller, left the museum, closed the novel, or exited the theater. It’s that nagging presence you can’t ignore after you’ve been presented with something compelling. It’s obvious, really. Maybe video games aren’t art to Ebert – fair enough, but don’t presuppose that they can’t be art for everyone else, that art is something only you are equipped to appreciate, unless you really are arrogant.
Pundits can argue for all eternity; artists will still be out there practicing their trade – and real connoisseurs will appreciate their work, whatever the medium.
While I was in Las Vegas for MIX10, I couldn’t suppress my inexplicable urge to run as many speedtests as I could muster. Of course, I was packing the usual iPhone 3GS on AT&T. Sadly, for nearly the entire visit, speeds were barely 250 kilobits/s down and 220 kilobits/s up – if I could even get the speedtest.net application to run. Take a look at the following:
This data is from 13 tests taken during my 3-day stay. They were run over 3G UMTS when it worked, and GSM EDGE when it didn’t – which was virtually the entire time. 3G was either slow or didn’t work at all; switching to EDGE was the only way to do anything.
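To put those numbers in perspective, here’s a small sketch of what such rates mean in practice. The 5 MB file size is my own illustrative assumption, not a figure from the tests:

```python
# Convert speedtest-style link rates into transfer times.
# speedtest.net reports kilobits per second; photos are sized in megabytes.
def transfer_seconds(size_megabytes: float, rate_kilobits_per_s: float) -> float:
    """Seconds to move a file of the given size at the given link rate
    (decimal units: 1 MB = 8,000 kilobits)."""
    return size_megabytes * 8_000 / rate_kilobits_per_s

# A hypothetical 5 MB photo at the rates observed in Vegas:
down = transfer_seconds(5, 250)  # ~250 kbit/s down
up = transfer_seconds(5, 220)    # ~220 kbit/s up
print(f"download: {down:.0f} s, upload: {up:.0f} s")
```

Roughly three minutes in either direction for a single photo, which squares with the barely-usable experience described above.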
How is this possible?
Now, it’s fair to say that some of this is sampling bias and the fact that I was at a conference, but even then, there’s no excuse. This is a city used to a huge flux of visitors in a short time for trade conferences. Frankly, I can only begin to imagine how overloaded networks are during major conferences like E3.
Take a look at the following plot of the average speeds for each day:
Can you spot which three days are the ones I’m talking about? Note that on the 16th, I couldn’t even get a test to run to completion; it just didn’t work. There’s nothing more to say about the issue than simply how bad this is. If this is the kind of performance AT&T users see and complain so vocally about in the San Francisco Bay Area and Manhattan, I can completely understand. Frankly, I can see no reason for that kind of performance degradation other than congestion.
Big News Today!
Whether you like it or not, the big news today wasn’t the outcome of “The Big Game,” the 2010 Toyota Prius recall, or the fact that Verizon is “deliberately” blocking 4chan for wireless customers (though those last two are shameless attempts by the respective companies to submarine news).
It was the fact that today, Google advertised its core search product on TV in a $2.6 million Super Bowl ad. Wait, did I just say Super Bowl? I meant “Big Game.”
Hell proverbially froze over, by CEO Eric Schmidt’s own admission.
But if you actually watch the video, and watch closely, you’ll notice that very little of the advertisement focuses on the search experience itself. It spends so much effort building trite emotional appeal that it completely neglects at least half of the front-facing search experience. What it disregards is a feature so neglected that even I didn’t realize it had been passed over until I watched a parody.
First, watch the “Parisian Love” ad itself:
Now watch the brilliant parody “Is Tiger Feeling Lucky Today” by Slate:
Disregarding the message entirely – the search terms, what the so-called “story” was – did you notice how differently Google advertised their own product compared to how well Slate did? Slate used “I’m Feeling Lucky.” Google? Not once. In fact, doing so could have been absolutely brilliant in the context of the ad’s cheesy romance theme. Imagine typing “will she marry me” and hitting I’m Feeling Lucky.
So what that communicates is that even Google doesn’t know what the heck “I’m Feeling Lucky” is doing there. Ask yourself, when is the last time you actually used it? Is it easily accessible? Is it part of that seamless, effortless Google experience they talk about? Is it so essential a part of the search experience that if it were missing, some part of your being would be irrevocably changed forever?
You get the point. It isn’t.
There’s nothing easy about using “I’m Feeling Lucky”; you can’t get to it with shift-enter or any other keyboard shortcut. It isn’t natural; everyone’s so used to just hitting enter or using the browser search bar. I ask then what purpose it’s serving.
For my answer, I googled. I didn’t use “I’m Feeling Lucky”:
The “I’m Feeling Lucky™” button on the Google search page takes you directly to the first webpage that returns for your query. When you click this button, you won’t see the other search results at all. An “I’m Feeling Lucky” search means you spend less time searching for web pages and more time looking at them. -Link
Oh really? That’s, you know, awesome, but isn’t diving head first into the first result of some search query just as dangerous as using link shorteners? As opening links in email blindly? As bad as everything we’ve always taught people not to do? Moreover, isn’t randomly guessing kind of a bad algorithm for mentally sorting through search results? I mean, if you use “I’m Feeling Lucky,” you’re going to have to come all the way back out to the front to re-submit your query. What’s elegant, beautiful, or simple about that?
Take a step back and think about the name of that button as well. What does “I’m Feeling Lucky” imply? Why the need for obscurity? Why not just call it “First Result” or “Dive In Blindly!™” or something else that’s approachable and friendly?
Years ago, the first time I clicked this, I half expected to be taken to some sort of contest entry form.
Of Simplicity and Sacred Cows
We’ve all read a lot, and I mean a lot, about how much time, effort, and money Google pours into keeping their famously lightweight homepage simple. They’ve evolved the design. They’ve removed things. They make it fade in slowly so those of us challenged by reading aren’t scared or overwhelmed. They count and have sleepless nights over the number of words on it!
Oh, I know what you’ll say, it’s part of their “corporate identity,” part of their “product,” part of what makes Google, Google. Nonsense; that’s the kind of talk that turns innovation into stagnation for the sake of consistency. My high school English teacher would be proud, because two of his favorite quotes apply directly to the kind of idiotic allegiance they have to that worthless button:
- A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. -Ralph Waldo Emerson
- Consistency is the last refuge of the unimaginative. -Oscar Wilde
For all of Google’s engineering talent, all that time, all those fancy positions, titles, and critical thought, they don’t realize that their biggest Sacred Cow is staring them in the face. That “Sacred Cow” is “I’m Feeling Lucky.”
C’mon Google, even you don’t use it or know why it’s there.
Yesterday, just about everyone’s minds were on the iPad. Love it or hate it, what a ride that hype machine was, and what a launch too. But for me, my musings (or rather those of my roommate) were rudely interrupted by the loud boom of a car crashing through the retaining wall surrounding my house.
Apparently, an inebriated woman was heading northbound on Euclid in a white Infiniti when she struck a midsize black Mercedes SUV, and flew up, into, and through the cinder block wall surrounding my house. The force must have been pretty awesome, since the hole is sizable. Definitely a lot of momentum (and a resulting few meganewtons of force) went into that smash, since there’s shattered cinder block in my yard now.
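The “few meganewtons” figure actually holds up to a back-of-envelope check. Here’s a rough sketch using the work-energy theorem; every input (car mass, impact speed, crush distance) is an assumption on my part, not a measured value:

```python
# Back-of-envelope estimate of the average force needed to stop a car
# over a short crush distance. All inputs are assumed, not measured.
mass = 1800.0      # kg, rough mass of a midsize sedan (assumed)
speed = 20.0       # m/s, roughly 45 mph impact speed (assumed)
stop_dist = 0.3    # m, crush distance through the cinder block (assumed)

# Work-energy theorem: F_avg * d = (1/2) * m * v^2
force = mass * speed**2 / (2 * stop_dist)
print(f"Average impact force: {force / 1e6:.1f} MN")  # ~1.2 MN
```

Even with conservative inputs the average force lands around a meganewton, so the order of magnitude checks out.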
They managed to destroy a lot of cactus, blocks, and the sign on the corner in the process. I’d like to point out the irony of an Infiniti with license plate ‘finiti’ crashing through my wall on Euclid (as in, the fabled “father of geometry”). ::shrug::
I know nothing about the occupants’ condition, but hope they’re faring OK. Lesson learned? Don’t drink and drive, kids.
Like many others yesterday, I eagerly awaited the Microsoft CES keynote and the chance to see Steve Ballmer once again have a Developers Developers Developers moment on stage. Although it was initially marred by a power outage which delayed the conference some 20 minutes and damaged a Media Center TV and an ASUS eeeTV demo, what really made me pull the plug was what Microsoft did to the live stream itself.
Initially it was plagued with audio problems. The stream started too quiet, then suddenly lost the left channel, then the left channel came back but killed the right channel. At one point I’m certain there was some sort of loop in a volume normalization system, as gain increased continually for at least an entire minute. Of course, these issues are technical and completely understandable given the fact that nearly everything needed to be restarted after the power outage.
So imagine my disgust, and the disgust of others, when during the Microsoft Xbox 360 part of the keynote, the following comes up right as they prepare to show the Halo Reach trailer:
Absolutely incredible, censoring a live keynote because of IP concerns from the very company throwing the keynote. Even better, apparently the Xbox team wasn’t made aware that there was any problem at all with what was going to be shown:
Sorry that had to black that out….I did not know :( -Major Nelson
Even more strange, the content that was shown wasn’t new, in spite of the fact that the announcer lead-up to the video made it sound like it was going to be. It was nothing more than the Halo Reach trailer released over a month ago.
It’s a video…not a #haloreach demo. -Major Nelson
Why then did this content merit censoring the live stream for nearly 3 minutes? Is Microsoft not comfortable with using the public spectacle and attention that is CES to promote its own products and games? Is it honestly concerned that showing a trailer for a game in a live video stream constitutes some sort of breach in IP? What?
That by itself wouldn’t be noteworthy; it was what followed that really iced the proverbial cake for the keynote.
Yes. They did it again. If you’re so inclined, the video is here for everyone to view, now that we’ve all been made to feel like children.
There is seriously so much wrong with doing something like this to the thousands of people watching the live stream who aren’t at CES but are still interested that I don’t even know where to begin. In fact, I don’t have to, because so much of it is obvious. But not, apparently, to Microsoft. Shortly after that, I stopped watching.
Nice of Microsoft to leave end-user-facing employees that work and try hard like Major Nelson to pick up all the pieces:
Reagarding[sic] the Reach blackout on the stream…..I am going to talk to some folks about that #notcool -Major Nelson
Ok, I need to take a walk and have a little chat with some folks. -Major Nelson
Fast Forward to Today
Imagine how shocked I was today, when during Paul Otellini’s Intel CES keynote the following popped up on the livecast:
I’m still not entirely certain whether, once again, the stream had been interrupted due to intellectual property concerns, DRM, or simply because they didn’t want to show more 3D parallax (despite having done so just minutes before).
Whatever the case, this seriously needs to stop.
Today, Google finally announced the much-hyped, finally-official googlephone at their Mountain View office. Admittedly, there wasn’t much about the actual announcement that wasn’t previously known; specs, photos, and even an entire review had already leaked out before the official announcement. But the announcement marks Google’s first real step into distributing Google-branded hardware directly to consumers, the entry of another Google-blessed flagship for the Android platform, and a different, long-needed business model for selling phones and wireless contracts.
It’s been a year, 2 months, and 14 days since the launch of the T-Mobile G1, and Android has matured into a serious contender since its beginnings as an aspiring platform in a market dominated by Windows Mobile, Symbian, and the iPhone OS. While the G1 seemed awkward in a kind of adolescent manner, with its chin a strange design ‘feature,’ its storage ultimately limited, and its UI/navigation designed by committee (anyone remember “how many clocks does it take for Google engineers to tell time?”), the Nexus One is finally enough to make Android a real iPhone contender alongside the Droid.
That said, the platform does still suffer from a number of notable shortcomings:
- Application storage limited to 512 MB partition
- (Which google says it will fix by allowing applications to reside on an encrypted partition on removable storage, soon)
- No multitouch within official applications
- (This strange choice likely stems from legal and patent concerns between Google and Apple over IP and prior art. Google likely doesn’t want the whole board-member-sharing fiasco to undergo any more scrutiny than it already has)
- Slower web browsing
- (Flash? more on that in a second)
- No support for CDMA (Verizon) until spring, AT&T 3G band support unclear
- (Rumors abound that Google is working on an AT&T version, but only a “dozen or so employees have access to this hardware”)
I’m going to expand on the third and fourth points, since personally I find these the most immediately interesting.
Engadget did a rather thorough review of the Nexus One, pre-empting its announcement (karma for the same way Google tried to pre-empt CES?), and notably included a quick and dirty comparison of browser loading speed on the iPhone 3GS, Nexus One, and Droid. Qualitatively, the winners and losers are painfully obvious from the video, but I took down some times and came up with the following:
- iPhone 3GS – 17 seconds
- Nexus One – 71 seconds
- Motorola Droid – 82 seconds
What immediately sticks out is that it took both of the Android 2.x platforms roughly 4 times longer to load the same website as it did the 3GS.
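The ratios can be checked quickly. The times below are my readings off Engadget’s video, so treat them as approximate:

```python
# Load-time ratios relative to the iPhone 3GS (times in seconds,
# read approximately off Engadget's comparison video).
times = {"iPhone 3GS": 17, "Nexus One": 71, "Motorola Droid": 82}

baseline = times["iPhone 3GS"]
for phone, t in times.items():
    print(f"{phone}: {t} s ({t / baseline:.1f}x the 3GS)")
# Nexus One comes out ~4.2x slower, the Droid ~4.8x slower.

# Clock-speed gap: Snapdragon at 1 GHz vs. the 3GS's Cortex A8 at 600 MHz
speedup_pct = (1000 - 600) / 600 * 100
print(f"Snapdragon is clocked ~{speedup_pct:.0f}% faster")  # ~67%
```

So “roughly 4 times longer” is, if anything, slightly generous to the Android handsets.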
Initially, this doesn’t make sense. The 3GS sports a relatively recent ARM Cortex A8 underclocked from 800 MHz to 600 MHz, while the Nexus One runs the latest (also much-hyped) Qualcomm Snapdragon SoC at 1 GHz (a Qualcomm QSD 8250, according to Google). The Snapdragon is architecturally very similar to the Cortex A8, so comparing clock speeds is roughly applicable. Why, then, is a newer-generation chipset clocked 66% faster some 4 times slower? Heck, the Android browser even runs lean and mean WebKit at its core, the same as Mobile Safari on the iPhone, and Chrome and Safari on the desktop. Why then is it so much slower?
My theory? Flash.
The same multimedia platform hogging resources on the desktop is now hogging resources and slowing down browsing on mobile devices. Excellent. Sure, there’s a lot of content out there that’s driven by Flash that’s useful: videos, games, navigation, photo websites, fancy UI. But what’s the biggest reason? Advertising. Adobe knows it, I mean, just watch their video on mobile flash and notice what they highlight. Advertising.
Up until now, browsing just about everything on all the modern smartphone browsers I’ve used (Mobile Safari, Opera Mobile, Opera Mini, IE Mobile) has been usable without Adblock, primarily because they didn’t support Flash. Unless mobile browsers begin allowing extensions such as Adblock (as mobile Firefox, aka Fennec, has begun doing), Flash is something I’d rather see disabled than enabled. How much of a feature is it to waste not just performance, but ever-critical battery life on a mobile device, primarily to show animated and intrusive advertising?
No AT&T 3G, Just T-Mobile
A lot of what Google was really trying to do with the Open Handset Alliance, launching a phone that customers can buy almost directly from HTC (the Nexus One OEM), and introducing essentially a new mobile business model at the same time, was abstracting the carrier away from the device.
That is, handsets are increasingly moving towards being carrier-agnostic. In reality, this is the way it was intended to be (and moreover, should have been, at least in GSM-land). In fact, while this business model is entirely new to the US, the portability of handsets using SIMs is nothing new to customers in Europe, who frequently purchase handsets unlocked and bring plans (and SIMs) with them. (As an aside, much of the reason CDMA became dominant in the US was because this same flexibility wasn’t part of the specification; the unique identifier is built into the phone itself in the form of an ESN/MEID.)
It follows, then, that if Google truly wanted to create a splash in the industry and achieve its goal of creating and directly selling the ultimate flagship device that’s totally carrier agnostic, they would have made absolutely certain that HTC built in either multiple radios for UMTS and CDMA, or some modern, hybrid UMTS/CDMA chipset similar to the Qualcomm MSM6200 (PDF link) rumored to be at the core of the next-gen iPhone.
Whatever the case, the decision to launch hardware that at present restricts the device to T-Mobile for 3G, EDGE on AT&T, and no CDMA functionality at all severely limits its ultimate impact. Moreover, it means that HTC and Google are going to have to support three sets of unique hardware under the “Nexus One” name: one CDMA-stack version for Verizon/Sprint, the current incarnation for T-Mobile, and a final version supporting AT&T’s 3G frequencies. Perhaps even more if they eventually move to support additional carriers in Europe and Asia.
Personally, I find it richly ironic that rumors abound that Apple is using a hybrid UMTS/CDMA chipset in the next-gen iPhone. If so, that would make the same iPhone so many complain about because of its AT&T exclusivity the most open.
More open, in fact, than the Nexus One.