Desert of My Real Life











{June 11, 2012}   Interaction Design

I’m reading an interesting book by Janet Murray called Inventing the Medium: Principles of Interaction Design as a Cultural Practice. She is articulating things that I’ve thought for a long time but is also surprising me a lot, making me think about things in new ways. The book is about the digital medium and how objects that we use in this medium influence the way we think about the world. She argues that technological change is happening so quickly that our design for the medium hasn’t kept up. Designers use the conventions that work well in one environment in a different environment without really thinking about whether those conventions make sense in that second environment. As a result we get user interfaces (which is a term she doesn’t like but which I’ll use because most people interested in these things have a pretty good idea of what we mean by the term) that are far too complex and difficult to understand.

One idea that strikes me as particularly important and useful is Murray’s argument that designers create problems when they separate “content” from the technology on which the “content” is viewed. Like McLuhan, Murray believes that “the medium is the message,” by which she means “there is no such thing as content without form.” She goes on to explain, “When the technical layer changes, the possibilities for meaning making change as well.” In other words, if you change the device through which you deliver the content, the tools needed to help consumers understand that content should probably also change. My favorite personal example of the failure of this idea is the Kindle, Amazon’s e-reader. I’ve owned my Kindle for about three years and I mostly love it. One thing that feels problematic to me, however, is the reporting of where you are in the book that you’re reading. Printed books are divided into chapters and pages and it is easy to see how much further the reader has to go to the end of the book. Readers trying to read the same book might have difficulty if they are using different editions because page numbers won’t match up, but the divisions into chapters should still be the same. Page numbers don’t really make sense in e-books, mainly because the reader can change the size of the font so that more or less text fits on the screen at a given time. This means that the Kindle doesn’t have page numbers. But readers probably want to be able to jump around e-books just as they do in physical books. And they want to know how much progress they’ve made in an e-book just as they do in a physical book. So Amazon introduced the idea of a “location” in their e-books. The problem with a “location,” however, is that I have no idea what it corresponds to in terms of the length of the book, so locations don’t give me a sense of where I am in the book. For that purpose, the Kindle will tell me the percentage of the book that I’ve currently read. I think the problem with these solutions is that the designers of the Kindle have taken the idea of pages, changed it only slightly and unimaginatively, and the result isn’t as informative in the digital medium as it is in a physical book. I don’t know what the solution is but Murray suggests that the e-reader designers should think about the difference between “content” and “information” in their design.
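To make my complaint concrete: a location is a number without context, while a percentage carries its own context. Here is a tiny sketch of the difference (mine, not Amazon’s; the function name and the numbers are invented):

```python
# A toy illustration (not Amazon's actual scheme): a bare "location" is
# uninformative on its own, while a percentage is self-contained.
def progress_percent(location: int, total_locations: int) -> float:
    """Convert an e-book 'location' into a percent-read figure."""
    return 100.0 * location / total_locations

print(progress_percent(3457, 9800))  # ~35.3% -- tells me where I am
print(3457)                          # a bare location tells me nothing
                                     # unless I already know the total
```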

Murray distinguishes between “content” and “information” and thinks that device designers have problematically tried to separate content from the technology on which this content will be viewed. So the designers of the Kindle see the text of the book as the content, something they don’t have to really think about in designing their device. Instead, Murray suggests that they focus on information design, where the content, which in this case is the text, and the device, in this case the Kindle, cannot be separated. The designers should think about the affordances provided by the device in helping to design the information, which is meaningful content, with which the reader will interact.

Another example appeared in my Facebook timeline last week, posted there by one of my friends pointing out the fact that the Mitt Romney campaign is insensitive at best and hostile at worst to women. The post is a video of Romney’s senior campaign advisor Eric Fehrnstrom, appearing on This Week with George Stephanopoulos a week ago, calling women’s concerns “shiny objects of distraction.” Watching it, I was annoyed and horrified by what I was supposed to be annoyed and horrified by. But I also noticed the ticker-tape Twitter feed at the bottom of the video. The headline-type feeds at the bottom of the screen on television news have become commonplace, despite the fact that they don’t work particularly well (in my opinion). I’ve always felt that the news producers must know that the news they are presenting is boring if they feel they have to give us headlines in addition to the discussion of the news anchors. But in the video of Romney’s aide, the rolling text at the bottom of the screen is not news headlines but a Twitter feed. So the producers of This Week have decided that while the “conversation” of the show is going on, they want to present the “conversation” that is simultaneously happening on Twitter about the show. There are several problems with this idea, not least of which is that most of the tweets that are shown in the video are not very interesting. In addition, the tweets refer to parts of the program that have already gone by. And finally, the biggest problem is that the Twitter feed recycles. In other words, it’s not a live feed. They show the same few comments several times. Someone must have thought that it would be cool to show the Twitter conversation at the same time as the show’s conversation but they didn’t bother to think carefully about the design of that information or even which information might be useful to viewers. Instead, they simply used the conventions from other environments and contexts in a not very useful or interesting way.

Another of Murray’s ideas that strikes me as useful is the idea of focusing on designing transparent interfaces rather than intuitive interfaces. Intuition requires the user to already understand the metaphor being used. In other words, the user has to understand how an object in the real world relates to whatever is happening on the computer screen. This is not particularly “intuitive,” especially for people who don’t use computers. I’ve been thinking about the idea of intuitive interfaces since I started teaching computing skills to senior citizens. For them, it is not “intuitive” that the first screen you see on a Windows computer is your desktop. And once they know that, it isn’t “intuitive” to them what they should do next because it’s all new to them and so they don’t have a sense of what they CAN do. For example, they can put a piece of paper down on a real desktop. Metaphorically, you can put a piece of paper (a file) down on the Windows desktop but the manner in which you do that is not “intuitive.” The first question I always get when I talk about this is: How do I create a piece of paper to be put on the desktop? Of course, that’s not the way they ask the question. They say, “How do I create a letter?” That’s a reasonable question, right? But the answer depends on lots of things, including the software that’s installed on the computer you’re using. So the metaphor only goes so far. And the limitations of the metaphor make the use of the device not particularly intuitive.

Murray argues that focusing on “intuition” is not what designers should do. Instead, designers should focus on “transparency,” which is the idea that when the user does something to the interface, the change should be immediately apparent and clear to the user. This allows the user to develop what we have typically called “intuition” as she uses the interface. In fact, lack of transparency is what makes many software programs feel complex and difficult to use. Moodle, the class management system that my University uses, is a perfect example of non-transparent software. When I create the gradebook, for example, there are many, many options available for how to aggregate and calculate grades. Moodle’s help modules are not actually very helpful but if the software were transparent, that wouldn’t matter. I would be able to make a choice and immediately see how it changed what I was trying to do. That makes perfect sense to me as a way to design software.

This book is full of illuminating observations and has already helped me to think more clearly about the technology that I encounter.



{June 5, 2012}   Magical Thinking

You probably haven’t noticed that I’ve been away for a while. But I have. In fact, this is my first post of 2012. I have no excuse other than to say that being the chair of an academic department is a time sink. Despite my absence, there have been a number of things over the last five months that have caught my attention and that I thought, “I should write a blog entry about that.” I’m sure I’ll get to many of those topics as I renew my resolve to write this blog regularly. But today, I encountered a topic so important, so unbelievable, so ludicrous, that I have to write about it.

One of my friends posted a link to Stephen Colbert’s The Word segment from last night. Go watch it. It’s smart and funny but incredibly scary for its implications. For those of you who don’t watch it, I’ll summarize for you. The word is “Sink or Swim” (and yes, I’m sure Colbert knows that isn’t a word–he’s ironic). Colbert is commenting in this segment on the fact that North Carolina legislators want to write a law that scientists can only compute predicted sea level rises based on historical data and historical rates of change rather than using all data available. In other words, scientists are not allowed to predict future rates of change in sea levels, only future sea levels. They cannot use the data that they have available that show that the rate of change itself is increasing dramatically. Instead, they can only predict the sea level based on how fast it has risen in the past. Colbert has a great analogy for this. He suggests that his life insurance company should only be able to use historical data in predicting when he will die. Historical evidence shows that he has never died. Therefore, his life insurance company can only use that evidence in setting his life insurance rates. Never mind the fact that there is strong evidence from elsewhere that suggest it is highly likely that he will die at some point in the future. The analogy is not perfect but I think it illustrates the idea.

Using all evidence, scientists are predicting sea levels will rise by about a meter (Colbert makes a funny comment that no one understands what this means because it’s in metric–that’s the subject of another post) before the end of the 21st century. If this is true, anyone who develops property along the coast will see their property underwater in a relatively short amount of time. Insurance rates for such properties will probably be astronomical and it might even be impossible for such development to occur because, without insurance, loans may be impossible to secure. That’s not good for business. In what can only be called “magical thinking,” the North Carolina legislature is putting into law a requirement that climate change models can only use historical rates of sea level rise to make predictions about future sea levels. Such models ignore the data suggesting that the rate of rise in sea levels is itself increasing. This will make the historical rates of increase look incredibly slow. In fact, the bill actually says, “These rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly … .” So despite evidence that sea levels are rising in a non-linear manner (because the rates of increase are actually increasing), predictions cannot take this fact into account. When scientists use a linear rate of increase, the models predict that sea levels will rise by “only” 8 inches by the end of the century. I think even these rates are scary, especially for coastal development projects, but scientists are pretty sure they vastly underestimate the extent of the danger. It’s as though these legislators think they can simply wish away climate change.
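For the quantitatively inclined, here is a toy model of why the restriction matters. The numbers are synthetic, invented only to show the shape of the problem (a rate of rise that itself increases); they are not real tide-gauge data:

```python
# Toy comparison of linear extrapolation (what the bill mandates) with a
# model that allows acceleration. Synthetic data, for illustration only.
import numpy as np

t = np.arange(0, 121, 10)            # years since 1900
level = 1.5 * t + 0.01 * t ** 2      # mm of rise, accelerating over time

linear = np.polyfit(t, level, 1)     # a straight line through the history
quadratic = np.polyfit(t, level, 2)  # allows the rate itself to increase

for name, coeffs in (("linear", linear), ("quadratic", quadratic)):
    print(f"{name} model predicts {np.polyval(coeffs, 200):.0f} mm by 2100")
# The linear fit predicts roughly 520 mm; the quadratic, 700 mm. Mandating
# the first model doesn't change which curve the water will follow.
```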

We live in a society where saying something is so is often as good as it being so. Is Barack Obama a citizen of the US? Evidence indicates that he actually is but critics persist in saying that he isn’t. As recently as 2010, 25% of survey respondents believed that he was born in another country and so isn’t eligible to be president. Were the 9/11 attackers from Iraq? Despite the objective evidence, 44% of the American public believe that several of them were Iraqis, which would then presumably be justification for the war in Iraq. Is global warming caused by humans? Despite overwhelming scientific opinion that it is, only 47% of the American public believe it is. Why do people believe these erroneous claims? Because the media (or at least parts of the media) advocate such positions. And because we are guilty of magical thinking. Say something is true and it will be true.

Scott Huler of Scientific American says it better than I can: “North Carolina legislators are now tossing around bills that not only protect themselves from concepts that make them uncomfortable, they’re DETERMINING HOW WE MEASURE REALITY.” Meanwhile, sea levels rise non-linearly, no matter what the North Carolina legislature legislates. And because we refuse to accept reality, we lose valuable time for an effort to reverse or at least to slow down this scary trend. So I have a tip for you: don’t buy any coastal property.



{April 24, 2011}   Games and Lessons for Life

I am a sucker for stories about the relationship between games and life.  When I was a graduate student, a story in the Tallahassee Democrat about the life of Warrick Dunn, a star football player whose police officer mother was killed in the line of duty while he was in high school, brought me to tears.  I love movies like Seabiscuit and Brian’s Song.  I have myself written blog entries ruminating about what we can learn about life from playing games.

So you would think a story that I heard on NPR this morning would be right up my alley.  Weekend Edition Sunday host Liane Hansen interviewed Dan Barry, author of a new book called Bottom of the 33rd about the longest game ever played in the history of US men’s professional baseball.  This particular game was played in 1981, between the Pawtucket Red Sox and the Rochester Red Wings, farm teams of the Boston Red Sox and the Baltimore Orioles, respectively.  The teams played 32 innings in 8.5 hours before the president of the league called the umpires to tell them to halt the game.  That was at 4 in the morning on Easter Sunday and there were 19 people left in the chilly stands in Pawtucket, RI.  When the teams reunited 2 months later to finish the game, nearly 6000 fans showed up and over 140 reporters from all over the world came to cover it.  Pawtucket won the game in the bottom of the 33rd inning, a mere 18 minutes after the game resumed.

The subtitle of Dan Barry’s book is Hope, Redemption and Baseball’s Longest Game.  I expected the interview on NPR to touch on hope and redemption and perhaps something about how this longest game can teach us something about perseverance.  Instead, the interview focused on the facts of the game, including the fact that Cal Ripken, Jr., who went on to set the record for consecutive games played in Major League Baseball, played all 33 innings and that Wade Boggs, future Hall of Famer, tied it up for Pawtucket in the twenty-first inning. Barry also told us that the original 19 fans who stuck it out for those 32 innings in April were annoyed that nearly 6000 people could now say they saw history being made when they really only had seen the last inning of that historic game.

But nothing in the interview touched on hope or redemption.  Or perseverance.  Or anything of importance.  Which annoyed me.  Not every sports story is a story about life, about issues larger than the game itself.  A book about a particular game that is the longest in professional history is probably of interest to baseball fanatics.  The fact that NPR picks the author of that book as someone deserving of an interview implies there is more to the story, something that we can all learn from.  As far as I can tell, that is not the case with this particular game or this particular book, the hyperbole of its subtitle notwithstanding.  Adding the words “hope” and “redemption” to the subtitle of a book will not make that book interesting for a general audience.  I realize I’m judging the book by its interview.  Maybe that’s not fair.  But neither is it fair to promise us a discussion of what a game can tell us about hope and redemption and instead waste our time with the facts and statistics of a particular game.  Come on, NPR.  With all the real, inspiring sports stories out there, we deserve better.  Did you choose to tell us about this book simply because the game went into the wee hours of Easter morning, 1981, which happens to be 30 years ago today?  That coincidence also doesn’t make this story interesting for the general reader.



{October 22, 2010}   Original Research–Good or Bad?

I recently rewatched Julia, the 1977 film starring Jane Fonda and Vanessa Redgrave.  It is based on a chapter in Lillian Hellman’s memoir, Pentimento: A Book of Portraits.  That chapter tells the (probably fictional) story of Hellman’s longtime friendship with Julia, a girl from a wealthy family who grows up to fight fascism in Europe in the 1930s.  I loved this book when I read it in high school and I went on to read nearly all of Hellman’s other work as well as several biographies.

As I watched the movie, several questions occurred to me and so, being a modern media consumer, I immediately searched for answers online.  This search led me to Wikipedia, which for me is a fine source of answers to the kinds of questions I had.  In fact, I use Wikipedia all the time for this sort of thing.  I was surprised then to find the following qualifying statement on the entry for Pentimento:

This section may contain original research.  Please improve it by verifying the claims made and adding references. Statements consisting only of original research may be removed.

As I said, I use Wikipedia a lot.  And I have never seen this qualifying statement before.  I think this statement implies that original research is somehow bad.  I don’t think that’s what the folks at Wikipedia mean.  At least, I hope it’s not what they mean.  So I decided to look into the statement a little more deeply.  There are a couple of parts of the statement that are interesting.   

First, the words “may contain” are in bold.  I think that’s supposed to indicate that the section may or may not contain original research.  It’s clear that articles in Wikipedia should NOT contain original research but it isn’t clear why. 

I then checked to see how “original research” is defined by Wikipedia and found this on their policy pages: “The term ‘original research’ refers to material—such as facts, allegations, ideas, and stories—not already published by reliable sources.”  How would one determine whether a particular section contained “original research” or not?  Probably by looking for references to “reliable sources” in the section.  Therefore, if a section doesn’t contain references (or contains too few), it might be difficult to determine whether that’s because the author simply didn’t include references to available sources, because the work is based on “original research,” or because the work is completely fabricated.  Or, I guess, it could be some combination of the three.  So I guess that’s why “may contain” is in bold.  The lack of references could mean any number of things.

The next part of the qualifying statement is even more interesting to me.  “Please improve it by verifying the claims made and adding references.”  This statement implies that “original research” is somehow less valid than work that has been taken from another source.  Again, I doubt that’s what the Wikipedia folks mean. 

So I continued to investigate their policies and found this: “Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”  Because of this policy against publishing original thought, to add references to an article or section of an article does indeed “improve” it by making it conform more closely to Wikipedia’s standards for what makes a good article.

This policy against publishing original thought explains the rest of the qualifying statement.  My investigations into Wikipedia’s policies found policies about what it means to “verify” statements in an article.  This is important because Wikipedia says that included articles must be verifiable (which is not the same as “true”), that is, users of Wikipedia must be able to find all material in Wikipedia elsewhere, in reliable, published sources.  And yes, Wikipedia explains what they mean by “reliable.”  That discussion is not easily summarized (and isn’t the point of this post) so anyone who is interested can look here.

My surprise concerning the qualifying statement boils down to wording and I think the wording of the statement needs to be changed.  Currently, it implies that original research is bad.  But through my investigation, I’ve decided that Wikipedia probably means that articles should not contain unverified, unsourced statements.  Such statements could come from author sloppiness, original research or outright fabrication.  In any case, they should not be part of Wikipedia’s articles. 

Of course, I haven’t discussed whether the policy of not publishing original thought is an appropriate policy or not.  I have mixed feelings about this.  But that’s a subject for another post.



Anyone who knows anything about me knows that I am an agnostic.  So it might be a surprise to learn that this post was inspired by a sign on a church.  I was out on my bike this afternoon, a place where I do some of my best thinking.  I was reflecting on the challenges that have faced me and my close friends over the last year or two and how we have supported each other through some very difficult times.  Near the end of my ride, the sign on the church in the center of Campton said, “If you’re headed in the wrong direction, God allows U-turns.”

People who know me also know that I am interested in games.  Any game, anywhere, any time.  That is, I’ll play any game, anywhere, any time.  But I’m also interested in studying them as an academic topic.  One of the game scholars that I ask my students to read is Greg Costikyan.  He wrote an article in 1994 (long before game studies was recognized as a legitimate area of academic interest) in which he tried to provide a definition of “game.”  The article, called I Have No Words and I Must Design, identifies six elements that an activity must have in order to be considered a game.  I don’t completely agree with Costikyan in his efforts in this article but I think that’s because I have the benefit of having read tons of articles that analyze games and “gameness.”   So even though I disagree on some points, I respect and admire most of what he says.

Costikyan says an activity must have six elements in order to be considered a game.  If it is missing any of the six elements, it is something other than a game, perhaps some other kind of play, but not a game.  His six elements are: tokens, goal(s), opposition, decision-making, information and managing resources.  By the way, here comes the geeky part of the post.  If you aren’t a geek and are just interested in the philosophical part, jump ahead six or seven paragraphs.

A game must have game tokens.  He means that there must be something within the game that represents the player and the player’s status within the game.  In Monopoly, for example, the player’s piece (top hat, race car, horse, and so on) is a game token because it represents the player.  But the cards with the various properties that the player owns (Boardwalk, Marvin Gardens, Illinois Avenue, and so on) are also game tokens because they also represent the player’s status within the game.  In addition, the fake money that a player has represents how wealthy or poor the player is and is, therefore, also a game token.  In some games, like basketball, the player’s body is his/her game token.

A game must have a goal, something the player is striving for.  In Monopoly, for example, the goal is to be the last player with money or, in other words, to bankrupt all the other players.  In War, the goal is to obtain all of the cards in the deck.  This is the element that makes some activities we normally consider to be games not games, from Costikyan’s point of view.  For example, SimCity and The Sims are not games according to Costikyan because they don’t have goals that are set by the game.  The player can create a goal to strive for but the game doesn’t impose one on the player.

A game must have opposition, something that gets in the way of the player reaching his/her goal.  This is a simple, yet profound, statement.  By entering into the realm of the game, the player agrees to try to reach the goal of the game in a kind of circuitous manner.  The participant in the game of War will not just grab all of the cards in the deck but will instead abide by the rules of the game and attempt to overcome the obstacles that the rules place in his/her way.  The opposition in a game typically comes from the rules of the game as well as any opponents who are trying to achieve the same goal.

A game must have decision-making.  This is perhaps the most important characteristic of a game.  A player must be presented with a series of choices, each of which impacts his/her chances of reaching the goal before his/her opposition does.  In fact, Costikyan would not consider the card game War a game because there is no decision-making.  In War, a player simply flips an unknown card at random from his/her deck and hopes for the best.  Decision-making allows the player to control his/her destiny (to an extent).  Through decision-making, the player expresses a personality, a strategy for how to win the game.

In order to make good decisions, a player must be presented with some information about those decisions.  To understand this concept, think about the game of War (which, again, Costikyan would not consider a game).  The player in this game is not presented with any decision-making opportunities.  He/she simply flips a card and hopes for the best.  Many of my students, presented with the challenge of adding decision-making to the game, suggest that the player’s deck of cards be split into two decks and the player must decide the deck from which to flip a card.  If the cards are all face down, this clearly does not add a decision to the game, because the player has no information about the contents of each deck.  That information, in other words, is hidden from the player.  So in order to make a good decision, the player must be presented with SOME information.  In Chess, the player is presented with perfect information, that is, nothing is hidden from the player.  In a game like Texas Hold ‘Em, on the other hand, the player is presented with imperfect, or mixed, information, that is, some of the information is known to the player while some is unknown.  The player must use the known information wisely and make an informed guess at the unknown information in order to make the best decision possible.

Finally, the player must manage his/her resources.  A resource is something the player uses in order to achieve the goal of the game.  For example, in Monopoly, one of the player’s resources is the space s/he lands on.  If the space has not already been purchased, the player can use the information s/he has about who owns what, the price of each property, whether s/he will make a monopoly, and so on, to determine whether to purchase the property.  The relationship between decision-making, information and the management of resources is an intimate one, one that is difficult to pull apart.
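Since we’re already in the geeky part of the post, here is one way to encode Costikyan’s six elements as a data structure.  This is my own illustrative encoding, not anything from his article, and all of the field names and entries are invented:

```python
# A sketch of Costikyan's six elements as a checklist-style data structure.
from dataclasses import dataclass

@dataclass
class GameAnalysis:
    tokens: list[str]        # what represents the player and his/her status
    goals: list[str]         # what the game itself asks players to strive for
    opposition: list[str]    # rules and opponents standing in the way
    decisions: list[str]     # meaningful choices open to the player
    information: str         # "perfect", "imperfect", or "none"
    resources: list[str]     # things managed in pursuit of the goal

monopoly = GameAnalysis(
    tokens=["playing piece", "property cards", "money"],
    goals=["bankrupt all the other players"],
    opposition=["other players", "rent", "taxes", "jail"],
    decisions=["buy or pass", "build houses", "trade"],
    information="imperfect",  # the card decks and dice rolls are hidden
    resources=["money", "properties", "board position"],
)

war = GameAnalysis(
    tokens=["card deck"], goals=["win all the cards"], opposition=["opponent"],
    decisions=[], information="none", resources=["cards"],
)

# War fails Costikyan's test: no decisions to make, no information to act on.
print("War is a game by this test:", bool(war.decisions))
```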

So what does all of this have to do with philosophy and my bike ride?  On my ride, I was thinking about my life and the lives of my close friends.  Nearly all of us have dealt with major life changes and difficulties in the last year or two.  We have been an incredibly supportive community for each other while we make difficult decisions about our lives.  This made me think about games and decision-making.  I can only speak for myself, but my guess is that we are all seeking a particular goal in our lives.  For me personally, that goal is to be happy.  I want to live a happy, healthy life.  But many obstacles (my opposition) have been placed in my way.  So I have had to use my resources (which include these very friends that I am writing about) and the information given to me at a particular moment in time to make the best decisions possible to try to achieve my goal.  The information piece of things was really interesting to me as I thought about this.  In our lives, our information is always imperfect.  We never know everything as we’re trying to make decisions about the direction of our lives.  We must use the information we have and make educated guesses about the information we don’t have in order to make the best decisions we can.

I was thinking about all of this as I rode, thinking about the decisions I’ve made in the last year or so and how I came to those decisions.  And I was thinking about my friends and the decisions that they are being presented with and how they will go about deciding.  It’s all very game-like, especially if you think about our goal in life being to achieve happiness.  There are probably other goals but that’s what I was thinking about today.  And then I rode past this sign that said, “If you’re headed in the wrong direction, God allows U-turns.”  And suddenly, it was like everything came together in my head (even though I don’t believe in God). 

Here’s the connection.  Given imperfect information, we all make decisions about our lives that push us in a particular direction.  As we move forward, more information presents itself to us.  We use this new information to make new decisions about where to proceed.  At various points in our lives, we may figure out that at some fork in the road in our past, we made the wrong decision.  We are now using the wrong strategy, headed down the wrong path to achieve our goal of happiness.  But these are our lives.  It’s not too late to make a correction.  The fact that we have headed down a particular path for some amount of time in our lives does not condemn us to continue down that particular path, to use that particular strategy, for the rest of our lives.  We can reverse course and revisit our decisions.  It’s allowed.  We can use all of the information presented to us to try to achieve happiness.  It’s allowed.

Is that a philosophy of life?  I don’t know but I feel like I’ve lived it for the last year.  It’s been difficult but I think I’m on the right path now.  At least according to the information that has been presented to me up to this point.  That’s the best I can do.



{September 28, 2009}   Human Pain

I’ve been watching Battlestar Galactica on DVD.  One of the roles of science fiction, I think, is to raise controversial issues, to help us understand what it means to be human.  Although the original 1970s miniseries was cheesy and not very interesting, a few changes to the original idea make the recent TV show one of the best when it comes to asking difficult questions and making us think about things in a new way.

The basic plot of the show is that humans created machines which then evolved into autonomous, intelligent beings called Cylons.  Humans colonized twelve planets and after years of relative peace, the Cylons attacked the humans, destroying much of the human population of the colonies.  The survivors, including those aboard a number of space ships, are now on the run from the Cylons, struggling to survive a war with a superior enemy.

One of the major changes from the miniseries to the TV show is in the look of the Cylons.  In the miniseries, the Cylons were one of the cheesiest parts of the show, looking like robots made primarily of cardboard.  In the new show, some of the Cylons look like machines but now they are computer-generated and sophisticated.  But the most interesting change comes from the fact that Cylons can look and act just like humans.  They bleed and sweat and some of them are even programmed to think that they are human, leading to what appear to be emotional responses such as love.  Human-looking Cylons allow the writers to raise questions about civil rights and justice and faith. 

For example, season one of the show, which aired in 2004 and 2005, raised issues about terrorism and torture and justice at a time when the Abu Ghraib scandal was fresh in the news.  The humans on the ship called Galactica discover a human-looking Cylon in their midst.  Their instinct is to kill the Cylon by putting it into space (because human-looking Cylons breathe oxygen just as humans do) but the Cylon claims that there are several bombs planted throughout the fleet, scheduled to go off in a short amount of time.  Sensing an opportunity to prevent these bomb attacks, the military commander sends the best human pilot, Starbuck, to question the Cylon (ok–so the plots are always completely logical).  The Cylon messes with Starbuck’s head, telling her lies containing just enough truth to make her wonder what’s true and what isn’t.  But he won’t tell her where the bombs are.  Starbuck notices that the Cylon sweats and reasons that if he sweats, he must feel fear and pain.  So she and her colleagues begin to torture the Cylon.

One of the most thought-provoking exchanges during this torture comes when Starbuck tells the Cylon that she recognizes the dilemma he is in.  He wants to be human because being human is better than being a machine.  But while he is being tortured, every instinct must be telling him to turn off his pain software.  But if he turns it off, he won’t be human anymore because the defining characteristic of being human is the capacity to feel pain.   I don’t know if I think that’s true or not but the conversation reminded me of research in machine learning that postulates that in order to really learn about the world, a robot must have a body. 

The importance of embodiment to learning comes from the observation that human knowledge, especially that most basic knowledge that makes up our “common sense”, is gained via perception, through the interaction of our bodies with the physical world.  Not all AI researchers believe embodiment is necessary for learning.  Cyc is probably the most famous example of an attempt to codify all of human knowledge without the use of embodied machines.  The project was started in 1984 and has yet to be completed because of the difficulty of articulating all human knowledge.  Imagine trying to put all human knowledge into a computer by writing statements such as “Bill Clinton was a President”, “All trees are plants” and “Abraham Lincoln is dead.”  Each night, after spending the day coding statements like this, the researchers run some software (called an inference engine) which allows the computer to infer new statements about the world.  Each morning, the researchers look at what the computer has inferred.  The inference process is somewhat flawed and the researchers find themselves having to correct some of the computer’s logic, encoding such bizarre facts as “If a person is dead, her left foot is also dead.”  Because of the difficulty of encoding these kinds of facts, many researchers now believe that embodiment and direct experience of the world are a more efficient way to teach a machine about common sense knowledge.  So perhaps feeling pain is a necessary requirement for being human.
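For the geeks, here is a toy version of that nightly inference pass.  It is my own sketch, not Cyc’s actual engine or representation; the facts and rules are hypothetical illustrations of the kind of thing the researchers encode:

```python
# A minimal forward-chaining sketch in the spirit of the Cyc anecdote.
facts = {
    ("president", "Bill Clinton"),
    ("tree", "oak"),
    ("person", "Abraham Lincoln"),
    ("dead", "Abraham Lincoln"),
}

def step(facts):
    """Derive whatever the current rules can add to the fact base."""
    new = set()
    for pred, x in facts:
        if pred == "tree":  # "All trees are plants."
            new.add(("plant", x))
        if pred == "person" and ("dead", x) in facts:
            # The hand-coded correction from the anecdote: a dead person's
            # left foot is also dead. Guarded by "person" so the engine
            # doesn't go on to derive feet of feet forever.
            new.add(("dead", x + "'s left foot"))
    return new - facts

def infer(facts):
    """Run the rules to a fixed point, like the researchers' nightly pass."""
    while new := step(facts):
        facts |= new
    return facts

for fact in sorted(infer(facts)):
    print(fact)  # includes ('dead', "Abraham Lincoln's left foot")
```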

The same episode that contains this interesting conversation about the nature of humanity also contains a conversation about the purpose and effectiveness of torture.  After many hours of torturing the Cylon, Starbuck and her colleagues are visited by the President of the colonies who asks Starbuck whether she knows where the bombs are yet.  When Starbuck says no, the President asks why she has been torturing this man for eighteen hours, what makes her think she will get him to talk.  Starbuck replies that the Cylon is not a man which she seems to think justifies the torture.  The President orders that the torture be stopped since it has clearly not been effective.  The President later shows that this is not a sentimental choice, one that has been made because she is soft on the Cylons.  After getting the information she needs from the Cylon, she orders that he be placed in the airlock and sucked out into space so that he will no longer pose a threat.  The implication is that she ordered that the torture be stopped so that the humans would remain human, that the torture was damaging to the torturers and their humanity.

Themes of faith and love and treatment of outsiders and many other of the most interesting, controversial debates in our society run throughout this series.  I agree with Diane Winston, who said on Speaking of Faith that shows like Battlestar Galactica represent the great literature of our time, that people will come back to shows like this over and over, just as they read great books over and over.



{July 24, 2008}   Philosophy of Jokes

On Word of Mouth today on NHPR, I heard an interview with Jim Holt, the author of Stop Me if You’ve Heard This: A History and Philosophy of Jokes. His book is short, only 160 pages, for a history of jokes. The fact that it contains history and philosophy makes me even more suspicious about how much coverage he could possibly have included in the book. But I was intrigued by the philosophy part of what he had to say. He discussed several theories about why people find some jokes funny and I think these theories can illuminate why people find some things fun.

The description of the interview on the Word of Mouth web site says, “Humor lives in the moment and the more you take it apart, the less humorous it becomes.” I think I disagree with this statement–the reason I say “I think I disagree” is that I haven’t thought about this kind of comment in relation to humor but I have heard similar comments about other phenomena and I disagree with those comments. An acquaintance once said to me that she doesn’t want to know too much about astronomy because that knowledge would take away from the beauty of the stars. I completely disagree with this statement. In fact, Richard Dawkins wrote a book called Unweaving the Rainbow: Science, Delusion, and the Appetite for Wonder in which he argues that the more you know about how the world works, the more wondrous the world becomes. In other words, ignorance is NOT bliss! I also hear comments like this when it comes to talking about games. Students often want to begin and end their analysis of a game with the statement: “It was (or wasn’t) fun.” But if I press them to articulate why it was fun, they often complain that doing so takes the fun out of the game. Clearly, I disagree with that idea. Otherwise, I wouldn’t teach game design and analysis. So I think I disagree with the statement that trying to figure out how humor works makes the humor disappear. But I do acknowledge that it is possible that humor is somehow different from these other phenomena.

Anyway, Holt discussed several theories about what we find funny and why. I think at least one of these theories can help us to understand what we find fun and why. Humans have probably been telling jokes since before they could speak (if you consider slapstick a kind of joke). The oldest known joke book is called the Philogelos, or Laughter Lover, a Greek anthology from the fourth or fifth century A.D.

The theory of why we laugh at jokes that I found most interesting (and useful for thinking about game design) is the incongruity theory (which is actually about the resolution of incongruity). We perceive an inconsistency of some sort–two things that don’t normally go together or a sentence that seems irrelevant to the story being told. Inconsistencies heighten our attention–an inconsistency in our world might signal the presence of a predator and if we don’t pay attention, we might end up as someone’s lunch. With our attention (and anxieties) heightened, we try to resolve the incongruity. When the resolution finally comes, we realize that the incongruity was actually harmless and we laugh with relief.

The incongruity theory is useful for us in trying to understand why games are fun. Like Michael Shermer, I believe that we are “pattern-seeking” animals. We have evolved to look for patterns in our world–those ancestors who were good at finding patterns were good at seeing the predator hiding in the trees and so survived while those that missed such patterns ended up as lunch. As Steven Johnson reported in Discover magazine, when we find a pattern, we get a little jolt of pleasure in our brains. Games present patterns for us to discover and it’s pleasurable for us to find those patterns. I’m sure it’s one of the reasons I’m addicted to Dr Mario Online Rx. I get a little jolt of pleasure every time I resolve the inconsistency of the active viruses by manipulating the falling pills. So at least one way to put fun into games is to focus on patterns. The patterns have to present an incongruity that can be resolved by the player, but not too easily. If the player doesn’t recognize the incongruity or the incongruity cannot be resolved in a recognizable way, the player will be frustrated. If the incongruity is too easily resolved, the player will be bored.

The use of incongruities in games is related to what the authors of the text that I use in Creating Games (Game Design Workshop) call challenge. In order to make a game more fun, the authors say, a game designer can focus on the dramatic elements of the game, one of which is challenge. By focusing on making the challenge appropriate to the level of skill of the player, the game designer can avoid frustration and boredom, both of which are antithetical to fun. When the level of challenge presented to the player closely matches her skill level, she enters a state called flow, in which the player “is fully immersed in what he or she is doing by a feeling of energized focus, full involvement, and success in the process of the activity.” Entering the flow state, being completely immersed in what you’re doing, is pleasurable. As game designers, our ultimate goal is to allow someone to enter the flow state while playing our games.
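A toy way to see the flow idea in code: keep the challenge near the player’s skill, nudging it up when the player would be bored and down when the player would be frustrated. The thresholds and the update rule below are invented for illustration; they are not from Game Design Workshop:

```python
# A toy dynamic-difficulty sketch of the flow channel: challenge too far
# above skill frustrates, too far below bores. All numbers are invented.
def adjust_challenge(challenge: float, skill: float) -> float:
    """Nudge challenge toward the player's current skill level."""
    gap = challenge - skill
    if gap > 0.2:        # too hard -> frustration: ease off
        return challenge - 0.1
    if gap < -0.2:       # too easy -> boredom: ramp up
        return challenge + 0.1
    return challenge     # inside the flow channel: leave it alone

challenge, skill = 0.3, 0.8
for _ in range(6):
    challenge = adjust_challenge(challenge, skill)
    print(f"challenge={challenge:.1f}  skill={skill:.1f}")
```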



Because so many of my friends are completely addicted to FaceBook (and threatening not to be friends with me anymore), I decided to join two days ago (less than 36 hours ago). In keeping with the entire Web 2.0 movement, I feel that I should share my impressions of FaceBook immediately, before I’ve had too much time to reflect on the experience.

The first strange experience I had on FaceBook involved the status update feature. This is a feature that allows the user to tell her friends what she’s currently doing. One of the options was “Cathie is sleeping” and so when I went to bed, I changed my status to that. Yesterday morning, I logged in for further exploration and Liz was online (and of course, by “online”, I mean “on FaceBook”). I had forgotten to change my status when I logged in and so the first thing Liz said to me was “You aren’t sleeping.” She was right, of course. I was freaked out by the fact that my un-updated status was immediately noticed and commented upon. So I changed my status to “Cathie is freaked out by the status update feature.” This was immediately commented on by the two Robins, both of whom said something like: “It’s how we track your every movement.” Which, of course, freaked me out even more.

The second strange experience is one that Ian Bogost calls “collapsed time.” After I filled out my profile on FaceBook (entering things like where I went to high school, college and so on), the first person the site suggested that I add as a FaceBook friend is someone who actually is a friend of mine, Amy Briggs. I’ve known Amy since I was in seventh grade and she was in sixth. We went to high school together and then went to Dartmouth College together, where we were two of the very few women majoring in Computer Science in the mid-1980s. We both went on to get PhDs in Computer Science and we’re now both faculty members at small New England colleges (although she has gone over to the dark side and is Middlebury’s Acting Dean of Curriculum). Because of the similarities in our backgrounds, it was probably a no-brainer to suggest that I add her as a friend. And, of course, I did. To complete the friendship relationship in FaceBook, however, the second party must agree to the friendship. So I went to bed Monday night without Amy in my FaceBook friend list. By the time I logged into FB Tuesday morning, however, Amy had accepted my request for friendship. What’s strange to me is how FB reported this to me. It said, “Cathie and Amy Briggs are now friends.” Now we’re friends? Despite the fact that we’ve known each other for more than 30 years, now we’re friends? As Bogost has pointed out, FB collapses time to this moment. Now is the only time that matters. This freaks me out just a little bit.

The third strange experience happened this morning. I have been on FB for just more than 36 hours so I have only dabbled in exploring the many features available. For example, I have uploaded only one picture, mostly just to see how the upload feature works. It’s a picture of Ann and me taken at a baby shower this winter. (Despite the fact that I have just joined FB, quite a few other pictures of me are there because of the addicted friends I mentioned earlier. It’s another interesting and freaky aspect of these social networks that you can “exist” on the network without even knowing it.) A friend teased me via a comment on my wall (a public space on which FB members can post comments for and about you), implying that I need to get more photos out there. Although I know the comment was meant in jest, I think it illustrates an issue concerning “immediacy,” in which users expect stuff to happen immediately. The immediacy issue is related to the issue of collapsed time in that they are both about an emphasis on now. And on FB, stuff does happen immediately. And then all your friends are immediately notified about it. Freaky.

And that leads me to the last of my current impressions about FB. I’m having significant information overload. As a user, you can control the kinds of things you are notified about via email. By default, you are notified about everything. So when someone accepts your offer of friendship, you get an email about it. When a friend changes her status, you get an email about it. When a friend writes on your wall, you get an email about it. When a friend adds a photo to her page, you get an email about it. And so on. Like I said, you can change these settings but as a new user, it’s difficult to decide what you want to get an email about and what you don’t. I’m finding it challenging to keep up with it all. This brings to mind Sturgeon’s Law, which says: “Ninety percent of everything is crap.” Since I’m writing these impressions without having thought them through, that’s also what I’m thinking about this blog entry.



{June 14, 2008}   The Real No Longer Exists

We went to Alpine Adventures in Lincoln, NH yesterday with a group of friends to ride the ziplines that they have set up in the woods on Barron Mountain. We had a great time. Traveling through the woods at speeds of about 25-30mph suspended by a harness from a steel cable is an awesome way to spend an early summer afternoon.

So what does ziplining have to do with technology and society? Sometimes I tend to think of technology pretty narrowly, thinking only of computing technology. But there is an amazing amount of engineering involved in setting up a canopy tour that will allow a wide variety of tourists to move safely from tree top to tree top. But that’s not the technology connection that interested me most about yesterday’s adventure.

We went out for a drink after the adventure and we learned that one of the women in our group is deathly afraid of heights. In fact, her family was doubtful that she would be able to jump off the platforms to do the ziplining. Someone said to her, “It’s a good thing we took lots of pictures because otherwise your family wouldn’t believe that you did it.” It was her response that I found most interesting. She said, “I’m glad we took pictures because otherwise I wouldn’t believe I did it.” In other words, the pictures will serve as proof to herself that she experienced her own experiences.

This is an example of what Jean Baudrillard meant when he said that the real no longer exists. (Thanks to Ann for helping me to understand Simulacra and Simulation–in fact, she dragged me through that book). What this provocative statement means is that in contemporary society, the copy has replaced the original in importance. So in order to experience the zip line, the woman who was so deathly afraid actually needs to see the copies of the experience (the images) because they are more real than reality. Baudrillard would say that they are hyperreal.

The hyperreal, the need for a (re)mediation of an experience in order for the experience to feel “real”, is something that I’ve encountered in my own experiences. For example, the day that New Hampshire’s famous Old Man of the Mountain fell off Cannon Mountain, Liz, Evelyn and I were driving to have breakfast at Polly’s Pancake Parlor in Sugar Hill. As we drove through Franconia Notch, I looked for the Old Man and never found it. We joked that perhaps it had finally succumbed to gravity but didn’t believe, despite the evidence before us, that this could actually be true. When we got to Polly’s, we heard that it had indeed fallen. On the way home, heading south through the Notch, I couldn’t believe my eyes–no granite face, police cars and helicopters everywhere. I remember saying that we needed to watch the news to be sure it really had fallen. I needed the experience to be mediated, copied, simulated, in order for it to feel “real” to me.



{April 29, 2008}   Interdisciplinarity?

Ian Bogost’s keynote presentation at this year’s Game Developers Conference is about the notion of interdisciplinarity. Interesting stuff.


