Desert of My Real Life











Anyone who knows anything about me knows that I am an agnostic.  So it might be a surprise to learn that this post was inspired by a sign on a church.  I was out on my bike this afternoon, which is where I do some of my best thinking.  I was reflecting on the challenges that have faced me and my close friends over the last year or two and how we have supported each other through some very difficult times.  Near the end of my ride, the sign on the church in the center of Campton said, “If you’re headed in the wrong direction, God allows U-turns.”

People who know me also know that I am interested in games.  Any game, anywhere, any time.  That is, I’ll play any game, anywhere, any time.  But I’m also interested in studying them as an academic topic.  One of the game scholars that I ask my students to read is Greg Costikyan.  He wrote an article in 1994 (long before game studies was recognized as a legitimate area of academic interest) in which he tried to provide a definition of “game.”  The article, called I Have No Words and I Must Design, identifies six elements that an activity must have in order to be considered a game.  I don’t completely agree with everything Costikyan says in this article, but I think that’s because I have the benefit of having read tons of articles that analyze games and “gameness.”  So even though I disagree on some points, I respect and admire most of what he says.

Costikyan says an activity must have six elements in order to be considered a game.  If it is missing any of the six elements, it is something other than a game, perhaps some other kind of play, but not a game.  His six elements are: tokens, goal(s), opposition, decision-making, information and managing resources.  By the way, here comes the geeky part of the post.  If you aren’t a geek and are just interested in the philosophical part, jump ahead six or seven paragraphs.

A game must have game tokens.  He means that there must be something within the game that represents the player and the player’s status within the game.  In Monopoly, for example, the player’s piece (top hat, race car, horse, and so on) is a game token because it represents the player.  But the cards with the various properties that the player owns (Boardwalk, Marvin Gardens, Illinois Ave, and so on) are also game tokens because they also represent the player’s status within the game.  In addition, the fake money that a player has represents how wealthy or poor the player is and is, therefore, a game token.  In some games, like basketball, the player’s body is his/her game token.

A game must have a goal, something the player is striving for.  In Monopoly, for example, the goal is to be the last player with money or, in other words, to bankrupt all the other players.  In War, the goal is to obtain all of the cards in the deck.  This is an item that makes some activities that we normally consider to be games not games in Costikyan’s point of view.  For example, SimCity and The Sims are not games according to Costikyan because they don’t have goals that are set by the game.  The player can create a goal to strive for but the game doesn’t impose that on the player.

A game must have opposition, something that gets in the way of the player reaching his/her goal.  This is a simple, yet profound, statement.  By entering into the realm of the game, the player agrees to try to reach the goal of the game in a kind of circuitous manner.  The participant in the game of War will not just grab all of the cards in the deck but will instead abide by the rules of the game and attempt to overcome the obstacles that the rules place in his/her way.  The opposition in a game typically comes from the rules of the game as well as any opponents who are trying to achieve the same goal.

A game must have decision-making.  This is perhaps the most important characteristic of a game.  A player must be presented with a series of choices, each of which affects his/her chances of reaching the goal before his/her opposition.  In fact, Costikyan would not consider the card game War a game because there is no decision-making.  In War, a player simply flips an unknown card from the top of his/her deck and hopes for the best.  Decision-making allows the player to control his/her destiny (to an extent).  Through decision-making, the player expresses a personality, a strategy for how to win the game.

In order to make good decisions, a player must be presented with some information about those decisions.  To understand this concept, think about the game of War (which, again, Costikyan would not consider a game).  The player in this game is not presented with any decision-making opportunities.  He/she simply flips a card and hopes for the best.  Many of my students, presented with the challenge of adding decision-making to the game, suggest that the player’s deck of cards be split into two decks and the player must decide which deck to flip a card from.  If the cards are all face down, this clearly does not add a decision to the game, because the player has no information about the contents of each deck.  That information, in other words, is hidden from the player.  So in order to make a good decision, the player must be presented with SOME information.  In Chess, the player is presented with perfect information, that is, nothing is hidden from the player.  In a game like Texas Hold ‘Em, on the other hand, the player is presented with imperfect, or mixed, information, that is, some of the information is known to the player while some is unknown.  The player must use the known information wisely and make an informed guess at the unknown information in order to make the best decision possible.

Finally, the player must manage his/her resources.  A resource is something the player uses in order to achieve the goal of the game.  For example, in Monopoly, one of the player’s resources is the space s/he lands on.  If the space has not yet been purchased, the player can use the information s/he has about who owns what, the price of each property, whether the purchase will complete a monopoly, and so on, to determine whether to purchase the property.  The relationship between decision-making, information and the management of resources is an intimate one, one that is difficult to pull apart.

So what does all of this have to do with philosophy and my bike ride?  On my ride, I was thinking about my life and the lives of my close friends.  Nearly all of us have dealt with major life changes and difficulties in the last year or two.  We have been an incredibly supportive community for each other while we make difficult decisions about our lives.  This made me think about games and decision-making.  I can only speak for myself, but my guess is that we are all seeking a particular goal in our lives.  For me personally, that goal is to be happy.  I want to live a happy, healthy life.  But many obstacles (my opposition) have been placed in my way.  So I have had to use my resources (which include these very friends that I am writing about) and the information given to me at a particular moment in time to make the best decisions possible to try to achieve my goal.  The information piece of things was really interesting to me as I thought about this.  In our lives, our information is always imperfect.  We never know everything as we’re trying to make decisions about the direction of our lives.  We must use the information we have and make educated guesses about the information we don’t have in order to make the best decisions we can.

I was thinking about all of this as I rode, thinking about the decisions I’ve made in the last year or so and how I came to those decisions.  And I was thinking about my friends and the decisions that they are being presented with and how they will go about deciding.  It’s all very game-like, especially if you think about our goal in life being to achieve happiness.  There are probably other goals but that’s what I was thinking about today.  And then I rode past this sign that said, “If you are headed in the wrong direction, God allows U-turns.”  And suddenly, it was like everything came together in my head (even though I don’t believe in God).

Here’s the connection.  Given imperfect information, we all make decisions about our lives that push us in a particular direction.  As we move forward, more information presents itself to us.  We use this new information to make new decisions about where to proceed.  At various points in our lives, we may figure out that at some fork in the road in our past, we made the wrong decision.  We are now using the wrong strategy, headed down the wrong path to achieve our goal of happiness.  But these are our lives.  It’s not too late to make a correction.  The fact that we have headed down a particular path for some amount of time in our lives does not condemn us to continue down that particular path, to use that particular strategy, for the rest of our lives.  We can reverse course and revisit our decisions.  It’s allowed.  We can use all of the information presented to us to try to achieve happiness.  It’s allowed.

Is that a philosophy of life?  I don’t know but I feel like I’ve lived it for the last year.  It’s been difficult but I think I’m on the right path now.  At least according to the information that has been presented to me up to this point.  That’s the best I can do.



{June 1, 2010}   Facebook Profile Pictures

The BBC is sponsoring a very cool project to get people excited about science.  It’s called So You Want To Be a Scientist.  Over 1300 people submitted ideas for scientific questions to be answered and a panel of experts reviewed the submissions, looking for the most interesting, promising ideas.  The selected finalists, with the help of a professional scientist in a related field, will now design an experiment to address their question and will then perform the experiment, collect data, and prepare the results for presentation at the British Science Festival in September.  Judges will then choose a winner.

There are four finalists.  The experiment that I found most interesting was submitted by Nina Jones, a 17-year-old high school student from England.  She hypothesizes that adults and teens use their Facebook profile pictures differently.  Adults seem to use a photo of a significant event from their lives as their profile picture while teens tend to use a photo that shows them having a good time with their friends.  Nina will examine profile photos to determine whether this hypothesis is true and if it is, why it occurs.  She’s looking for volunteers to allow their profile pictures to be examined–she needs permission because profile pictures are one of the last items on Facebook that are private by default.  Once she has a bunch of volunteers, she will use statistical sampling techniques to choose the pictures she will examine.  If you want to volunteer, go to the Facebook fan page for her experiment and “like” it.  She will choose from among the people who like the page.  Here’s a link: http://www.facebook.com/profile.php?id=1383680154#!/BBC.picture.experiment?v=info
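For the curious, “statistical sampling” here could be as simple as drawing a random subset of the volunteer pool. Here’s a small sketch in Python of what that might look like; this is my own illustration (the volunteer names and sample size are made up), not Nina’s actual method:

```python
import random

# Hypothetical pool of volunteers who "liked" the experiment's fan page.
volunteers = [f"volunteer_{n}" for n in range(1, 501)]

# Draw a simple random sample of 50 profiles to examine; every volunteer
# has an equal chance of being chosen, and no one is chosen twice.
random.seed(2010)  # fixed seed so the draw is repeatable
sample = random.sample(volunteers, 50)
print(len(sample))  # 50
```

A simple random sample like this lets her make claims about the whole pool of volunteers without examining every single profile picture.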

I think it’s an interesting experiment but I’m not sure her hypothesis will hold up.  My anecdotal experience with my friends (who mostly fall into the “adult” category) doesn’t seem to indicate that they overwhelmingly use pictures from their significant life events.  What do you think?



One of my favorite shows on NPR is On The Media.  Each week, the hosts examine a variety of topics related to the media, mostly in the US.  I hear the show on Sunday mornings on New Hampshire Public Radio.  On February 26, 2010, the show aired a story called “The Watchers.”  It brought me back to my graduate school days and my academic roots in computer science, specifically in pattern recognition and machine learning.

The story was about the value of the massive amounts of data that each of us leave behind as we go about our daily electronic lives.  In particular, John Poindexter, convicted of numerous felonies in the early 1990’s for his role in the Iran-Contra scandal (reversed on appeal), had the idea that the US government could use computers to troll through this data, looking for patterns.  When I was in graduate school, deficit hawks were interested in this idea as a way to find people who were scamming the welfare system and credit card companies were interested in using it to ferret out credit card fraud.  Then George Bush became president and 9/11 occurred.  Suddenly, Poindexter’s ideas became hot within the defense department.

In 2002, Bush appointed Poindexter as the head of the Information Awareness Office, part of DARPA, and Poindexter pushed the agenda of “total information awareness,” a plan to use software to monitor the wide variety of electronic data that we each leave behind with our purchases and web browsing and cell phone calls and all of our other modern behaviors.  The idea was that by monitoring this data, the software would be able to alert us to potential terrorist activity.  In other words, the software would be able to detect the activities of terrorists as they plan their next attack.

The On The Media story described the problems with this program, problems that we knew about way back when I was in graduate school in the early 1990’s.  The biggest problem is that the software is overwhelmed by the sheer volume of data that is currently being collected.  This problem is similar to the problem of information overload in humans.  The software can’t make sense of so much data.  “Making sense” of the data is a prerequisite for being able to find patterns within the data.

Why do we care about this issue?  There are a couple of reasons.  The first is that we’re spending a lot of money on this software.  In a time when resources are scarce, it seems crazy to me that we’re wasting time and money on a program that isn’t working.  The second reason is that data about all of us is needlessly being collected and so our privacy is potentially being invaded (if anyone or any software happens to look at the data).  Poindexter’s original idea was that the data would be “scrubbed” so that identifying information was removed unless a problematic pattern was identified.  This particular requirement has been forgotten so that our identifying information is attached to each piece of data as it is collected.  But I think the main reason we should care about this wasted program is because it is another example of security theater, which I’ve written about before.  It does nothing to make us actually safer but is instead a way of pretending that we are safer.

When I was in graduate school, I would never have thought that we would still be talking about this idea all these years later.  Learning from the past isn’t something we do well.



{February 28, 2010}   Impressive Perform

Ann said I should write a post about this latest comment to my blog.  It was posted on my iPad and Education entry but I think it’s spam.  What do you think?  “Impressive perform on your own send. Hold up using the specatacular work.”

The website that it came from is: free-music-downloads2.tumblr.com  Don’t put that website into your browser.  It’s spam.  But the poetry of the comment is priceless, almost as good as the classic: “All your base are belong to us.”

So why didn’t my spam filter catch this comment as spam?  It’s not clear to me since it seems so obvious that it’s spam.  How did the authors bypass the spam filter?  I have no idea.  But I have my blog set up so that I have to moderate any comments from new posters.  So I was able to mark this particular comment as spam and not have it actually post as a comment.

I do, however, like “Impressive perform on your own send.”  Praise, even nonsensical praise, makes me feel good about myself.



{November 2, 2009}   Differences in Media

I’ve been thinking lately about the differences between media types.  This thinking was inspired by the new movie Disgrace based on J. M. Coetzee’s novel of the same name.  I will definitely see this movie (if it is ever released throughout the US) but I’m worried about the choices that the filmmakers have made.  I thought Coetzee’s novel was brilliant because it was told from the point of view of a character who is somewhat reprehensible.  But, of course, his reprehensibility must only be hinted at since he himself wouldn’t think he was reprehensible.  The subtlety of the novel is difficult to convey in a film.  And so the filmmakers have made choices that reduce the brilliant ambiguity of the novel.  And that makes me wonder whether I’m interested enough in the plot of the novel to enjoy the movie.

As I’ve mentioned before, I’ve been watching Battlestar Galactica on DVD.  The original series aired on television and so the commercial breaks are obvious on the DVD.  In the most recent episode that I watched, a character is in a room with a spiritual advisor, discussing a recurring dream.  At a dramatic moment in the telling of the dream, the screen goes black, clearly a commercial break.  When we return to the story (without having to watch a commercial, which is why we like Netflix), we enter the story at exactly the same point that we left it.  We left the story at a tension point so that we would be sure to come back after the commercial.  This technique works well in television.

The same technique does not work well at all in novels.  I read and hated Dan Brown’s novel, The Da Vinci Code.  I really wanted to like this novel.  Dan Brown, after all, is from New Hampshire, and the premise of the story is intriguing.  But I couldn’t get past the poor craftsmanship of the novel.  The characters were two-dimensional and indistinguishable from each other.  I figured out the “secret” of the novel (which I won’t spoil here) about half-way through.  But my biggest problem was the chapter breaks.  Dan Brown writes really short chapters, some of which are a page long.  And often it is completely unclear why these chapter breaks occur.  Why have a chapter that is one page long and then have the next chapter start right where the action of that really short chapter ended?  I felt as though Brown should have moved these two chapters apart, away from each other, in order to build tension, in much the same way that Battlestar Galactica’s breaks for commercials build tension.  A good editor could have made sure these two chapters did not appear one right after the other, unlike the two scenes with the commercial break between them.  These examples remind me that different media require different production techniques just as they require different analysis techniques.

On a side note, the use of the made-up word “fracking” on Battlestar Galactica is getting on my nerves.



{October 25, 2009}   Making Sense of All Things Digital

For the last few years, I have been volunteering my time at a local senior center, teaching computing skills.  One of the struggles is to explain the subtle cues that the computer provides to us as its users to let us know what we can do at that particular time.  What do I mean by “cues”?  This Friday, I talked about how you know when you can type text in a particular spot.  Think about it.  You look for your cursor to change to a straight vertical line that blinks.  Wherever it blinks is where your text will appear if you press the keys on your keyboard.  We all know this, right?  The problem is that there are thousands of these items.  Each appears to be a small thing, without much consequence.  And yet, by paying attention to these small, visual cues, we all know what we can do and when we can do it.  It’s challenging to teach people who aren’t used to paying attention to, much less deciphering, these subtle cues.  I love it but I’m constantly struggling to explain why things are as they are on PCs, to help make sense of the virtual world.

One of the things I have never been able to explain is why sometimes you need to click and why sometimes you need to double-click.  I would like to be able to articulate a rule about when to engage in each action but I have not yet been able to do so.  Instead, I tell the students in this class that they should first try to click on something and if nothing happens, they should double-click.  This explanation feels wholly unsatisfactory to me because I want to believe that computers are logical.  But deep in my heart, I know they aren’t.  They are just as subject to whims of culture-making as any other artifact of our culture.  And now I have proof of that.  Tim Berners-Lee (that’s SIR Berners-Lee to you) recently admitted that he regrets the double-slash.

Sir Berners-Lee invented the World Wide Web.  The author of the article I linked to says he is considered a father of the Internet but that’s not true.  There is much confusion about the difference between the Internet and the World Wide Web.  In fact, most people consider them to be the same thing.  But they are not.  The Internet is the hardware that the World Wide Web (which is comprised of information) resides on.  The Internet was created in the early 1970’s.  The World Wide Web was conceived of by Berners-Lee in the early 1990’s.  Berners-Lee’s achievement is monumental.  We don’t have to give him credit for the entire Internet.  He’s still an amazing guy.

The World Wide Web is comprised of web pages and social networking sites and blogs and such rather than the actual machines that hold all of that information.  When we browse the World Wide Web, we typically use a web browser like Internet Explorer or Firefox.  If you look in the address box of the web browser you’re using, you will see that the address there contains a number of pieces.  The first part of the address is the protocol that your computer is using to communicate with the computer that contains the information you want to see.  A protocol is simply a set of rules that both computers agree to abide by in their communication.  You can think of a protocol as a language that the computers agree to use in their communication.  Typically, the protocol these days for web browsing is http (hypertext transfer protocol) or https (hypertext transfer protocol secure).  Much of the text of the address of the web site you’re looking at specifies the name of a computer and the name of some space on that computer.  The thing that Berners-Lee regrets is the set of characters he chose to separate the protocol from the rest of the address.  He chose “://”.  He doesn’t regret the colon; it’s a piece of punctuation that represents a separation.  He does regret the double slash, which is superfluous, unnecessary.  This whole conversation makes me feel better about teaching the senior citizens who choose to take my class.  Some digital things are not logical.  They are whims.  Just ask Tim Berners-Lee.
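If you like to tinker, you can see these pieces of an address for yourself. Here’s a small example using Python’s standard library (the example address is made up; any web address will do):

```python
# Split a web address into its pieces using Python's standard library.
from urllib.parse import urlsplit

parts = urlsplit("https://www.example.com/some/space/page.html")

print(parts.scheme)  # the protocol: 'https'
print(parts.netloc)  # the name of the computer: 'www.example.com'
print(parts.path)    # the space on that computer: '/some/space/page.html'
```

Notice that the library quietly throws away the “://” separator entirely; the colon and the double slash carry no information of their own, which is exactly why Berners-Lee calls the slashes superfluous.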



{October 11, 2009}   Corpus Libris

Interesting “ongoing photo essay on books and the bodies that love them” at Corpus Libris.  I like this photo a lot.



{September 29, 2009}   The Ambassador of Semiotics

I heard Madeleine Albright this morning on Morning Edition, the fifth or sixth interview I’ve heard with her since Sunday morning.  Albright just released a new book and the ensuing media blitz has brought attention to the unusual tactics she used while pursuing her diplomatic duties in the Clinton administration.  In the new book, called Read My Pins, Albright discusses her tactic of using costume jewelry, brooches in particular, to send messages about the state of negotiations in which she was involved.  Albright’s articulation of her use of jewelry in this manner is an example of semiotics in action.

Semiotics is the study of signs and symbols in communication.  Albright first began using her brooches to send messages in diplomatic meetings when Saddam Hussein called her a serpent.  Consequently, whenever she dealt with Iraq, she would wear an antique snake pin on her left shoulder.  She then wore all sorts of pins to signal how she was feeling about diplomatic negotiations.

Semiotics is concerned with signification, the process of using symbols to encode messages.  Communication of a message requires a second step, the decoding of the message by the receiver.  Albright’s audiences learned that they could gauge her feelings by looking at the brooch she was wearing.  Vladimir Putin told Bill Clinton that he could tell what the tone of a meeting with Albright was going to be by looking at her left shoulder. 

Semiotics doesn’t make it to the mainstream very often.  Albright’s deliberate use of the field is a reminder that the safety of the world might in fact depend upon diplomats being good semioticians, being able to correctly read the symbols and signs in front of them.



{September 28, 2009}   Human Pain

I’ve been watching Battlestar Galactica on DVD.  One of the roles of science fiction, I think, is to raise controversial issues, to help us understand what it means to be human.  Although the original 1970’s miniseries was cheesy and not very interesting, a few changes to the original idea make the recent TV show one of the best when it comes to asking difficult questions and making us think about things in a new way.

The basic plot of the show is that humans created machines which then evolved into autonomous, intelligent beings called Cylons.  Humans colonized twelve planets and after years of relative peace, the Cylons attacked the humans, destroying much of the human population of the colonies.  The survivors, including those aboard a number of space ships, are now on the run from the Cylons, struggling to survive a war with a superior enemy.

One of the major changes from the miniseries to the TV show is in the look of the Cylons.  In the miniseries, the Cylons were one of the cheesiest parts of the show, looking like robots made primarily of cardboard.  In the new show, some of the Cylons look like machines but now they are computer-generated and sophisticated.  But the most interesting change comes from the fact that Cylons can look and act just like humans.  They bleed and sweat and some of them are even programmed to think that they are human, leading to what appear to be emotional responses such as love.  Human-looking Cylons allow the writers to raise questions about civil rights and justice and faith.

For example, season one of the show, which aired in 2004 and 2005, raised issues about terrorism and torture and justice at a time when the Abu Ghraib scandal was fresh in the news.  The humans on the ship called Galactica discover a human-looking Cylon in their midst.  Their instinct is to kill the Cylon by putting it into space (because human-looking Cylons breathe oxygen just as humans do) but the Cylon claims that there are several bombs planted throughout the fleet, scheduled to go off in a short amount of time.  Sensing an opportunity to prevent these bomb attacks, the military commander sends the best human pilot, Starbuck, to question the Cylon (ok–so the plots are always completely logical).  The Cylon messes with Starbuck’s head, telling her lies containing just enough truth to make her wonder what’s true and what isn’t.  But he won’t tell her where the bombs are.  Starbuck notices that the Cylon sweats and reasons that if he sweats, he must feel fear and pain.  So she and her colleagues begin to torture the Cylon.

One of the most thought-provoking exchanges during this torture comes when Starbuck tells the Cylon that she recognizes the dilemma he is in.  He wants to be human because being human is better than being a machine.  But while he is being tortured, every instinct must be telling him to turn off his pain software.  But if he turns it off, he won’t be human anymore because the defining characteristic of being human is the capacity to feel pain.   I don’t know if I think that’s true or not but the conversation reminded me of research in machine learning that postulates that in order to really learn about the world, a robot must have a body. 

The importance of embodiment to learning comes from the observation that human knowledge, especially that most basic knowledge that makes up our “common sense”, is gained through perception, through the interaction of our bodies with the physical world.  Not all AI researchers believe embodiment is necessary for learning.  Cyc is probably the most famous example of an attempt to codify all of human knowledge without the use of embodied machines.  The project was started in 1984 and has yet to be completed because of the difficulty of articulating all human knowledge.  Imagine trying to put all human knowledge into a computer by writing statements such as “Bill Clinton was a President”, “All trees are plants” and “Abraham Lincoln is dead.”  Each night, after spending the day coding statements like this, the researchers run some software (called an inference engine) which allows the computer to infer new statements about the world.  Each morning, the researchers look at what the computer has inferred.  The inference process is somewhat flawed and the researchers find themselves having to correct some of the computer’s logic, encoding such bizarre facts as “If a person is dead, her left foot is also dead.”  Because of the difficulty of encoding these kinds of facts, many researchers now believe that embodiment and direct experience of the world is a more efficient way to teach a machine about common sense knowledge.  So perhaps feeling pain is a necessary requirement for being human.
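To make the nightly inference step concrete, here is a toy sketch in Python of how an inference engine derives new statements from hand-coded ones. This is my own illustration of the general idea (a forward-chaining engine run to a fixed point); Cyc itself encodes knowledge in a far richer logic language and this example is nothing like its real scale:

```python
# A toy forward-chaining inference engine: start with hand-coded facts,
# apply a rule repeatedly until no new facts can be derived.
facts = {
    ("is_a", "Bill Clinton", "person"),
    ("is_a", "oak", "tree"),
    ("subclass", "tree", "plant"),
}

def subclass_rule(facts):
    """If X is_a C and C is a subclass of D, infer that X is_a D."""
    inferred = set()
    for (rel1, x, c) in facts:
        if rel1 != "is_a":
            continue
        for (rel2, c2, d) in facts:
            if rel2 == "subclass" and c2 == c:
                inferred.add(("is_a", x, d))
    return inferred

# Run the engine to a fixed point, as the researchers did each night.
while True:
    new_facts = subclass_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

# The computer now "knows" something no one typed in directly.
print(("is_a", "oak", "plant") in facts)  # True
```

The “bizarre facts” problem shows up immediately in a sketch like this: the engine happily chains rules together with no sense of which conclusions are obvious, which are useful, and which are absurd, which is exactly what the researchers had to correct each morning.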

The same episode that contains this interesting conversation about the nature of humanity also contains a conversation about the purpose and effectiveness of torture.  After many hours of torturing the Cylon, Starbuck and her colleagues are visited by the President of the colonies who asks Starbuck whether she knows where the bombs are yet.  When Starbuck says no, the President asks why she has been torturing this man for eighteen hours, what makes her think she will get him to talk.  Starbuck replies that the Cylon is not a man which she seems to think justifies the torture.  The President orders that the torture be stopped since it has clearly not been effective.  The President later shows that this is not a sentimental choice, one that has been made because she is soft on the Cylons.  After getting the information she needs from the Cylon, she orders that he be placed in the airlock and sucked out into space so that he will no longer pose a threat.  The implication is that she ordered that the torture be stopped so that the humans would remain human, that the torture was damaging to the torturers and their humanity.

Themes of faith and love and treatment of outsiders and many other of the most interesting, controversial debates in our society run throughout this series.  I agree with Diane Winston, who said on Speaking of Faith that shows like Battlestar Galactica represent the great literature of our time, that people will come back to shows like this over and over, just as they read great books over and over.



{June 28, 2009}   The Decision Engine

I’ve seen a couple of commercials on TV for Microsoft’s newest product, Bing.  Microsoft claims that Bing is a “decision engine.”   What exactly is a “decision engine”?  According to a press release from Microsoft, a decision engine “goes beyond search to help customers deal with information overload.”  In other words, information is no longer power.  Products like Google (Microsoft’s competitor) present too much information in response to searches and humans now need help (more help than Google can give) to be able to make sense of it all.  And Microsoft steps in with Bing.

The traditional search engine does a good job of helping people find information, according to Microsoft’s press release, but the explosion of information means that people have difficulty actually using that information to make informed decisions.  So Bing will actually help us make decisions!  That seems like a bold claim to me, especially since improvements to search engines are typically incremental rather than revolutionary.  Is Bing as revolutionary as the phrase “decision engine” implies?  It’s difficult to say at this point but even Microsoft’s own promotional materials make me doubt it.

According to the press release, Microsoft did some research about the kinds of things that people search for and found that lots of people are interested in four areas when they search the web: “making a purchase decision, planning a trip, researching a health condition or finding a local business.”  Ok, so there’s the first way that Bing is not really a “decision engine.”  The tool will be optimized to deal with searches that are related to these four areas and the press release makes no mention of whether the tool will help me make other kinds of decisions.

The optimization strategy for dealing with these four areas also doesn’t seem particularly revolutionary to me.  The press release gives a bit of detail about the focus of the strategy.  In particular, Bing provides “great search results”, an “organized search experience”, and it simplifies tasks and provides insights.  What do these things mean?

“Great search results” simply means that Microsoft’s research found that only 25% of searches provide information that satisfies the searcher.  So in creating Bing, they tried to increase this percentage.  No details about how they’ve done this, however.  But don’t all search engine manufacturers try to provide results that are as relevant as possible?  So this is not a revolutionary strategy. 

Microsoft also did some research and found that people want the results of their searching to be organized.  So they added some organizational features to Bing.  These features include “Explore Pane, a dynamically relevant set of navigation and search tools on the left side of the page; Web Groups, which groups results in intuitive ways both on the Explore Pane and in the actual results; and Related Searches and Quick Tabs, which is essentially a table of contents for different categories of search results.”  When Microsoft uses the words “relevant” and “intuitive”, I am skeptical.  Remember “Clippy”, the paper clip cartoon character that was supposed to help us when we used Office?  Or how about the fact that Microsoft claims that they changed the menu structure in the Office suite for Vista so that the menus would be more “intuitive”?  There are too many examples that show that what Microsoft considers “relevant” and “intuitive” doesn’t match what most people consider “relevant” and “intuitive”.  So this statement from the press release doesn’t convince me that the claim that Bing is a “decision engine” is anything more than hype.

Finally, Microsoft claims that they use the strategy of simplifying tasks and providing insight.  Again, most search engine manufacturers probably want to do this so the strategy itself is probably not revolutionary.  But the fact that Bing focuses only on four primary areas of searching might mean that the tool can be optimized to simplify tasks and provide insights into these four types of searches. 

I haven’t yet used Bing.  The only way to know whether it really is a “decision engine” that will revolutionize the way we use the information provided on the Web is to use the tool.  Microsoft has had a search engine tool for a long time (quick–do you know what it’s called?).  It was called Live Search before it was upgraded and renamed to Bing.  But the fact that you probably didn’t know that name is an indication that the old tool was probably not very good, certainly not better than Google.  Given Microsoft’s record with upgrades, I feel pretty sure that calling Bing a “decision engine” is nothing more than hype.


