Desert of My Real Life











{May 22, 2009}   NeMLA 2010

There have been quite a few stories that have captured my attention in the nearly six month break that I’ve taken from writing entries in this blog.  I will be sharing several of those stories in the next few days.  In the meantime, I recently had a panel proposal accepted for the Northeast Modern Language Association conference that will be held in Montreal in April 2010.  Here’s the call for papers for my panel:

Playing Web 2.0: Intertextuality, Narrative and Identity in New Media

 

41st Anniversary Convention, Northeast Modern Language Association (NeMLA)

April 7-11, 2010

Montreal, Quebec – Hilton Bonaventure

 

A recent Facebook spoof of Hamlet by Sarah Schmelling illustrates the current proliferation of experiments in narrative form and intertextuality found in new media.  Web 2.0 tools, such as wikis, blogs and social networking sites, allow the average web user to actively participate in online life.  Given our societal bent toward postmodernism, it is not surprising that much of this online participation is characterized by a proclivity to challenge and play with traditional conventions.  This panel will examine play, defined in the broadest sense by Salen and Zimmerman as “free movement within a more rigid structure”, using Web 2.0 tools and new media.  Some questions of interest to the panel include:  Are there particular attributes of new media technologies that encourage play?  How is new media play different from/similar to play found elsewhere?  What impact do new media technologies have on our notions of play?  What are the motivations of those who engage in play via new media technologies?  Some example topics for the panel include: experimentation with new literary forms using social networking conventions (such as the 140-character status update); creation of online identities using text-based tools such as blogs; development of fictional worlds by fans of popular culture narratives using wikis and blogging tools; the use of casual online games to influence attitudes and behaviors concerning issues of social importance.

Submit 250-word abstracts to cleblanc@plymouth.edu.

 

Deadline:  September 30, 2009

 

Please include with your abstract:

 

Name and Affiliation

Email address

Postal address

Telephone number

A/V requirements (if any; $10 handling fee)

 

The 41st Annual Convention will feature approximately 350 sessions, as well as dynamic speakers and cultural events.  Details and the complete Call for Papers for the 2010 Convention will be posted in June: http://nemla.org/.

 

Interested participants may submit abstracts to more than one NeMLA session; however, panelists can present only one paper (panel or seminar).  Convention participants may present a paper at a panel and also present at a creative session or participate in a roundtable.

 

Travel to Canada now requires a passport for U.S. citizens.  Please get your passport application in early.



{December 31, 2008}   Failed Predictions

Predicting the future is a notoriously difficult endeavor and yet there is never a shortage of people willing to play the game, especially at the end of a year. 

Many of the predictions for 2009 seem to involve world politics.  For example, over at Psychic World, Craig and Jane Hamilton-Parker predict that an assassination attempt on Barack Obama will occur in 2009.  They posted this prediction on October 9, 2008 and then updated the entry on October 27, 2008 (in red font, just so we know that it’s an important update).  The update tells us (and I can almost hear the breathlessness with which this important information is stated) that this prediction already came true!  Apparently, the vague assassination “plot” by two neo-Nazis thwarted by the ATF in October constitutes an assassination “attempt”.  The fact that these men did not actually begin to implement the plot, which involved first shooting over 100 black people in Tennessee and following that spree up with the assassination of then-Senator Obama, doesn’t matter to the psychics who made this prediction.  It still counts as a success for their ability to predict the future.  An even bigger issue for me is the fact that they predicted the assassination attempt would take place in 2009.  Clearly, this plot was discovered in 2008.  The psychics never discuss how useful it is for a prediction to be that far off in its timing and details.

As amusing as I find the predictions of psychics who claim to be able to “foresee” the future, the predictions that I’m most interested in are the ones made by those who examine trends and then predict where those trends will take us.  People who make these kinds of predictions are called “futurists” or “futurologists” and, unlike psychics, claim no mysticism in coming to their predictions.  Instead, according to Wikipedia, futurology involves studying “yesterday’s and today’s changes, and aggregating and analyzing both lay and professional strategies, and opinions with respect to tomorrow. It includes analyzing the sources, patterns, and causes of change and stability in the attempt to develop foresight and to map possible futures.”  Although futurologists make predictions about many different fields, I’m particularly interested in the area of technology, especially because technological change is so rapid and far-reaching.  I think technology shows that, despite their claims to scientific methodologies, the predictions of futurologists are typically as wrong as the predictions made by those claiming to have a mystical insight into the future.

The technological futurologist who has gotten the most attention in the US in recent years is Ray Kurzweil, the author of a number of books that have captured the popular imagination.  Kurzweil is a computer scientist from a time when computer scientists were rare.  When he was just a teenager, long before computers were widespread and common, he created computer software that wrote impressive musical compositions using the patterns it discovered analyzing great masterworks.  He also developed the first optical character recognition software, which led to his invention, in 1976, of The Reading Machine, which read written text out loud for blind people.  Since that time, he’s invented musical synthesizers, speech recognition devices, computer technology for use in education, and a whole host of other useful tools.  He’s obviously a smart, creative guy who knows a lot about technology and how to use it to benefit humans.  Kurzweil’s faith in technology is so great that he considers himself to be a transhumanist, advocating the use of technology to “overcome what it regards as undesirable and unnecessary aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death,” according to Wikipedia.  It is in this area that many of his predictions fail.

In his 1999 book, The Age of Spiritual Machines, about the impact of artificial intelligence on human consciousness, Ray Kurzweil made a number of predictions about the state of technology at the end of 2009, 2019, 2029, and 2099.  Since we are just about to begin the year 2009, I thought it might be interesting to consider how likely it is that Kurzweil’s predictions will come true in the next year.  Chapter 9 of the book, which makes predictions for 2009, can be read online here.

The chapter is divided into sections called The Computer Itself, Education, Disabilities, Communication, Business and Economics, Politics and Society, The Arts, Warfare, Health and Medicine, and Philosophy.  Although some of Kurzweil’s predictions have indeed come to be reality, the vast majority of them are still far off into the future.  In fact, some involve technological tangents that seemed interesting in 1999 but that our society has chosen not to pursue.

Kurzweil predicted that the computer itself would be much more ubiquitous than it actually is and that computers would be smaller than they actually are.  Because computers are so ubiquitous and small today, it’s difficult to imagine how someone might have overestimated these trends just ten years ago.  But that’s the problem with Kurzweil.  He is such a technology evangelist that he tends to go too far.  In the case of the computer itself, he predicted that the average person would have a dozen computers on and around her body, which would communicate with each other using a wireless body local area network (LAN).  These computers would monitor bodily functions and provide automated identity verification for financial transactions and for entry into secure areas.  The technology he describes is nearly available now in the form of radio frequency identification (RFID) chips, which are common in some warehouses and which are now part of every US passport.  Most of these RFID chips are passive devices, however, which means that they can only be read by an external device and do not provide computing power themselves.  In addition, there has been something of an uproar over the increased use of these chips.  For example, I recently received a new ATM/credit card from my bank that had an RFID chip embedded in it to make using the card easier.  I would no longer need to swipe the card to use it.  Instead, I could simply tap it against any reader.  But because it doesn’t have to be swiped, anyone who got close enough to me with a reader could read the chip.  I didn’t see the advantage of having such a chip in my credit card, saw many disadvantages, and so I returned it, making a special request to get a card without the chip.  I suspect there are others out there who have similar concerns.  Kurzweil did predict that privacy issues would be a concern in 2009, but I’ll talk about that later.

Some of the other things about the computer itself that Kurzweil got seriously wrong involve the way in which we interact with our computers.  He predicted that most text would be created using continuous speech recognition software–in other words, we would speak to our computers and they would transcribe our speech into text.  This is clearly not going to become the norm in the next year and I’m not sure we would want it to become the norm.  As I sit typing this blog entry, for example, I have the television on (because multi-tasking is the prevalent way of interacting with the world–something that Kurzweil does not mention) and Evelyn is sitting next to me interacting with her own computer.  Neither of us would want the other to be talking to her computer at this moment.  This might be an example of a place where a cool technology would actually be an obstacle to the way most users interact with their computers.  But Kurzweil did not stop there.  He also predicted that we would wear glasses that allowed us to see the regular visual world in front of us but with a virtual world superimposed on it using tiny lasers.  Such glasses do exist but they are novelties, used only in experimental situations.  And I think most people would find such a superimposition to be a distraction.  Until some benefit can be shown for this technology and how it allows us to interact with the world, I think it will remain a novelty.

Another area where Kurzweil’s predictions have not come to fruition (yet) is disability.  It is in this area that Kurzweil betrays his transhumanist biases.  He predicted that by the end of 2009, disabilities such as blindness and deafness could be dealt with using computing technologies to the extent that they would no longer be considered handicaps but instead mere inconveniences.  Although significant progress has been made in augmenting such conditions with computing technologies, we are nowhere close to where Kurzweil predicted we would be.  Kurzweil’s zeal for the advancement of technology once again led him to overestimate the progress that we would be able to make in ten years.  The history of technology is filled with such zeal and overestimation.

I won’t detail every area where Kurzweil gets things wrong, but I do want to touch on politics and society.  The Obama campaign rode its unprecedented use of technology to a presidential victory, but in ways that Kurzweil did not predict.  Kurzweil predicted that privacy would be a primary political issue, and although there are groups of people who are very concerned with privacy in our society today (both because of technical issues and because of political issues involved with the War on Terror), I don’t think too many people would say that privacy is a primary political issue in our society, although I, for one, wish it were a bigger issue for most people.

I’m curious to see which of Kurzweil’s predictions do eventually come to pass.  My guess is that anyone who pays close attention to technological issues could attain the same level of accuracy that he does.  At least he doesn’t claim to have some mystical connection to what the future will bring.



{September 7, 2008}   We ARE Telling Stories

As I suggested in a previous post, I don’t understand why FaceBook calls each status update a story. I said that if we were to consider each update a plot point in a longer story, then I could understand the use of the word. Clive Thompson, in a New York Times article, explains that part of the reason these status updates (no matter how banal they might seem individually) are compelling is precisely that, taken together, they tell us a story of our friends’ daily lives that we wouldn’t otherwise have. It’s a fascinating article. Thanks to Liz for pointing it out to me.

I can now be found on Twitter. I look forward to reading 140-character installments of your life story there.



{August 10, 2008}   FaceBook Revisited

In honor of the recent release of the remake of Brideshead Revisited, I thought it might be interesting to revisit FaceBook.  I’ve been using FaceBook for nearly a month now and my feelings about it have evolved just as Charles Ryder’s feelings about Brideshead evolved.  (Don’t think too much about the analogy between Brideshead and FaceBook–it doesn’t really fit very well.)

You may recall that my initial reactions to FaceBook were all about freaking out.  I was especially overwhelmed by the amount of information that FaceBook was sending me via email.  I knew that I had the option to turn some of those emails off but as a new user, I was unsure about which ones it made sense to turn off.  I ended up turning them all off.  So I no longer receive any notifications about FaceBook in my email inbox.  Instead, I just receive the notifications of various updates within FaceBook itself.  I guess as a new user I had been worried about missing something but I realized that I wouldn’t miss anything if I got notified within FaceBook.  Since I visit FaceBook less often than I check my email, my notification of FaceBook happenings is not as immediate as if I were getting email updates.  But I don’t want immediate notification of what’s going on in FaceBook.  Instead, I want to be able to control when I receive those notifications.  In other words, I want to receive them when I’m interested in knowing what’s going on in FaceBook.  That is, I want to know what’s happening in FaceBook when I visit FaceBook!  Perfect.

Although I do visit FaceBook less often than I check my email, I have been visiting FaceBook several times per week.  This surprises me because my initial reaction to the social environment was not a particularly positive one.  But now that I am not being overwhelmed by information from FaceBook, I have mostly enjoyed using it.  In fact, I find it to be somewhat addictive.  I’ve been thinking a lot about why, and although I don’t have any answers to that question, I do have some observations.

I currently have 43 “friends” on FaceBook.  Of these, there are probably 20 who are quite active, posting something or interacting with me several times a week.  I am most interested in the activities and communications of about 8 of these 20 active friends.  I think it’s because of these 8 that I visit FaceBook as often as I do.  What do these people have in common?  These are all people that I actually am good friends with in real life or that I could imagine being good friends with if our real life circumstances were to change.  Even though I still find the use of the word “friend” problematic in FaceBook, the way we understand the word in real life is similar to the way it actually plays out in my use of FaceBook.  

One of the most interesting aspects of FaceBook so far has been the way in which I “communicate” with most of my friends.  Very little of our interaction is directly targeted at each other.  That is, most of my friends do not post communications that are meant for me in particular.  Instead, they update some part of their FaceBook profile (such as their status) to tell all of their friends what they are currently doing.  I then read that information and find it interesting because I then know a little bit more about their daily lives.  It’s a way of touching base that would not happen without FaceBook and as a result, we get to know each other a little bit better.  And because I already like them in real life, I want to get to know them a little bit better.  In other words, the immediacy (the focus on “now”) of FaceBook, which felt so problematic when I first joined, is actually something I enjoy and look forward to.  What’s different between when I first joined and now that makes me enjoy the immediacy?  I think the main difference is that I have now gotten my FaceBook life “caught up” with my real life.  What do I mean by “caught up”? 

The rhetoric of FaceBook assumes that life begins when you join the social network.  So you are “now” friends with someone you’ve known for a long time simply because FaceBook “now” knows about that relationship.  Each time you add some detail about your life to FaceBook, the rhetoric reminds you that your life has “now” begun, that everything before either didn’t exist or was somehow not quite “real”.  The feeling that your FaceBook life is more “real” than your BFB (Before FaceBook) life is disconcerting.  But once you get the details into your profile, FaceBook has “caught up” to your actual life and so the things that you do in FaceBook really are happening “now”.  So for me, the rhetoric no longer feels like a mismatch with my “reality”.  Now that my FaceBook life is more closely aligned with my real life, I appreciate the “nowness” of FaceBook.  The “nowness” means I’m learning current tidbits about these friends of mine.

Although most of my friends and I interact in this indirect manner, reading each other’s general updates, there is one friend with whom I have had an ongoing direct conversation.  This friend is an ex-partner of mine with whom I have maintained inconsistent email contact for the past 15+ years (since our break-up).  Now that we are both on FaceBook, we have been using its messaging system to engage in a long, intimate conversation.  The messaging system is similar to email but because it is embedded in FaceBook, I also get to see the frequent (or infrequent, depending on the friend) updates that my friends make to their profiles.  And so when a friend posts a new photo or a link she finds interesting, I can see those things which contextualizes our FaceBook messages in a way that isn’t easily accomplished via email.  So far, this long conversation with my ex has been the most surprising aspect of FaceBook for me.  Until I experienced how different this kind of direct contextualized communication via FaceBook is compared to regular email, I wouldn’t have believed that it would matter so much.  The other interesting thing about this aspect of FaceBook is that although I’ve enjoyed our online communication, I am not tempted to meet in real life for a face-to-face conversation about the break-up or about our current lives (both of which are topics in our online conversation).  FaceBook provides a useful buffer, or maybe it’s a cover, without which I’m not sure I would be comfortable enough to keep the conversation going.

Another thing that I’ve been thinking about is why FaceBook has captured my attention in a way that the other social networking environments I’ve joined (MySpace and LinkedIn, for example) have not.  My nephew is on MySpace and so I’ve spent some time communicating with him there.  But I find these other environments far less compelling than FaceBook.  One reason, I’m sure, is that most of my friends, the ones I’m interested in communicating with, are using FaceBook rather than these other environments.  But I think the main reason is that FaceBook makes it extraordinarily easy to find and communicate with people you know.  When I joined FaceBook, it immediately suggested some people that I might know.  Once I was friends with some of those people, it used their friends to suggest other people I might know.  In contrast, on MySpace, I had to think about who I might know there, coming up with their names out of the blue.  In addition, when I tried to find my nephew on MySpace, I had to weed through several pages of people with the same name, despite the fact that his friends are mostly from Goffstown, NH (where he lives) and the fact that I went to Goffstown High School.  It seems like it would be a simple matter to do some sort of matching to determine which Kyle LeBlanc I might be interested in connecting with.  This is actually somewhat of a problem in FaceBook as well, although my nephew was at least on the first page of many pages of Kyle LeBlancs.  He should, I think, have been the first Kyle LeBlanc shown to me in both MySpace and FaceBook.
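Here is a minimal sketch of the kind of matching that seems reasonable.  Everything in it is invented for illustration (the profile fields, the weights, and the rank_candidates function are my own, not anything MySpace or FaceBook actually does): score each profile that matches the searched-for name by how much it overlaps with the searcher’s own hometown, schools, and friend list, and show the highest scorer first.

```python
# Hypothetical sketch: rank same-named profiles by overlap with the searcher's
# own profile. Field names and weights are invented for illustration; this is
# not how MySpace or FaceBook actually ranks search results.

def rank_candidates(searcher, candidates):
    """Return candidates sorted so the most likely match comes first."""
    def score(candidate):
        s = 0
        if candidate["hometown"] == searcher["hometown"]:
            s += 3                                                          # same town: strong signal
        s += 2 * len(set(candidate["schools"]) & set(searcher["schools"]))  # shared schools
        s += len(set(candidate["friends"]) & set(searcher["friends"]))      # mutual friends
        return s
    return sorted(candidates, key=score, reverse=True)

me = {"hometown": "Goffstown, NH", "schools": {"Goffstown High School"},
      "friends": {"Ann", "Liz"}}
kyles = [
    {"name": "Kyle LeBlanc", "hometown": "Houston, TX", "schools": set(), "friends": set()},
    {"name": "Kyle LeBlanc", "hometown": "Goffstown, NH",
     "schools": {"Goffstown High School"}, "friends": {"Ann"}},
]
print(rank_candidates(me, kyles)[0]["hometown"])   # the Goffstown Kyle comes out on top
```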

I also think it’s easier to communicate with your friends in a way that feels most comfortable and appropriate on FaceBook than it is on the other social networks.  For example, my nephew and I were both on MySpace at the same time last night.  I wanted to chat with him but in order to do so, I had to install a separate application (MySpace IM with Skype).  On the other hand, the chat facility is built into the basic FaceBook interface so there’s no extra installation required.  I appreciate that extra ease of use in FaceBook.

I still think there are some interesting problems with FaceBook but overall, I have been happy with my experience there.  Time will tell whether it’s the newness of the tool that keeps me going back or whether it will become something I will wonder how I could have ever lived without.



{August 5, 2008}   We’re Not Telling Stories

One of the aspects of game analysis that my students struggle with has to do with the dramatic elements of a game. According to the text that I use in my Creating Games class (Game Design Workshop by Tracy Fullerton, Chris Swain and Steven Hoffman), dramatic elements are those elements of the game that help to keep the players engaged in the game. There are five dramatic elements but the ones that my students struggle with are premise and story.

The text says that premise sets the stage for the action that will propel the game forward while story has to do with actual plot points in a narrative being told by the game. For example, the premise in Monopoly is that the player is a real estate mogul. But there is no story in Monopoly because there is no action that moves from point to point as determined by the author of the game. Fullerton and her co-authors say, “Plays, movies, and television are all media that involve storytelling and linear narratives. When an audience participates in these media, they experience a story that progresses from one point to the next as determined by an author. The audience is not an interactive participant in these media and cannot change the outcome of the story.” They go on to say that games are different in that the audience (the game player) interacts with a game and can (in fact, must be able to) change the outcome of the game. So traditional storytelling methods will not work in a game system because the player will not have enough of a sense of control if she cannot change the outcome of the game. The game will feel “fixed” or “random” which will result in an unsatisfying game-playing experience. Because of this need to have the player feel that she is in control of the progress of the game, very few games incorporate story as a dramatic element. Instead, most games use some sort of premise. Of course, some premises are more elaborate than others. When a premise gets to be very elaborate, it is called a backstory. Confused yet? Obviously, the boundary between premise and story is blurry.

I’ve been thinking about the word story a lot lately because of FaceBook. Every change that is made to your profile is called a story. So, for example, every time I change my status, a story is posted to my mini-feed as well as to the newsfeed for each of my friends. And then I can look at all my status stories. In fact, here are my status stories:

Saying that each of these items is a story is confusing to me. If you were to say that together these items make a story, each item being a plot point developed by me, the author, then I would understand why the word story is used. But how is each of these items a story by itself? I think this is yet another example of FaceBook coopting a word and changing its meaning.



{August 3, 2008}   A Lipshitz by Any Other Name

Here is one of the most amazing stories about an unthinking reliance on technology that I’ve heard in a long time.  Verizon does not allow the setting up of accounts that have profanity in the name of the account.  This might sound reasonable on its face since you can imagine a young English major deciding it would be cool to have her email address set up to be fuckuahl@verizon.net as an homage to her favorite faculty member at PSU (although I’m not sure why anyone would really care if that was indeed someone’s email address, but apparently Verizon does care and I suppose that is their right).  So the problem is not that someone at Verizon thought it would be good to have automated checks for such things.  The problem arises when there is an unthinking application of the rule so that legitimate requests are denied.

And that’s exactly what happened to Dr. Herman Lipshitz when he tried to set up an Internet account with Verizon.  He was told that because his name contains the word shit, he could not use it as his username for the account.  Like any good customer with a legitimate complaint, he asked to speak to a supervisor.  When the supervisor insisted that the rule must stand (and that perhaps the good doctor should misspell his name in order to get around the rule), Dr. Lipshitz called the billing department and spoke to another supervisor.  That supervisor said that the only person who could deal with it was someone in Tampa who would have to call India to have the computer code changed to allow an exception for this account.  The person from Tampa would call him back.  No one called, but eventually Dr. Lipshitz received a letter telling him that his name could not be used for his email account because it violates Verizon’s policy for allowable usernames.  So Dr. Lipshitz called the Philadelphia Inquirer and after Daniel Rubin published an article about the incident, Verizon relented, saying, “As a general rule (since 2005) Verizon doesn’t allow questionable language in e-mail addresses, but we can, and do, make exceptions based on reasonable requests.”  Dr. Lipshitz points out that he gets phone service from Verizon, is listed as Lipshitz in Verizon’s phone book and, perhaps most importantly, Verizon regularly cashes his checks with the name Lipshitz prominently displayed on them.
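The failure mode is easy to reproduce.  The sketch below is only a guess at the kind of naive check involved (Verizon’s actual code is obviously not public): a bare substring test flags “lipshitz” because “shit” appears inside it, and a one-line exception list is all it would take to fix it.

```python
# A guess at the kind of naive filter that rejects "lipshitz": a bare substring
# test with no word-boundary check and no exception list.
# (Illustrative only; Verizon's real rules are not public.)

BLOCKED_WORDS = ["shit", "fuck"]        # abbreviated list for illustration

def naive_username_ok(username):
    name = username.lower()
    return not any(word in name for word in BLOCKED_WORDS)

def username_ok_with_exceptions(username, allowed=("lipshitz",)):
    name = username.lower()
    if name in allowed:                 # the kind of exception Verizon eventually granted
        return True
    return naive_username_ok(username)

print(naive_username_ok("Lipshitz"))             # False: the doctor gets rejected
print(username_ok_with_exceptions("Lipshitz"))   # True
```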

About 15 years ago, I purchased something at a grocery store.  The total came to $2.37.  I gave the cashier $5.37 but she had already put $5.00 into the cash register which told her the change should be $2.63.  Despite five minutes of arguing, I could not convince her that it would be ok to take my $5.37 and give me $3.00 in change.  Until now, that had been my best story of an overreliance on technology.  That has now changed.
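For the record, the arithmetic the register was standing in for is about as small as arithmetic gets; this little sketch just restates what the story already says.

```python
total = 2.37
register_change = round(5.00 - total, 2)   # what the register told her to give back: 2.63
fair_change = round(5.37 - total, 2)       # what handing over $5.37 actually calls for: 3.00
print(register_change, fair_change)        # 2.63 3.0
```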



{July 24, 2008}   Philosophy of Jokes

On Word of Mouth today on NHPR, I heard an interview with Jim Holt, the author of Stop Me if You’ve Heard This: A History and Philosophy of Jokes. His book is short, only 160 pages, for a history of jokes. The fact that it contains history and philosophy makes me even more suspicious about how much coverage he could possibly have included in the book. But I was intrigued by the philosophy part of what he had to say. He discussed several theories about why people find some jokes funny and I think these theories can illuminate why people find some things fun.

The description of the interview on the Word of Mouth web site says, “Humor lives in the moment and the more you take it apart, the less humorous it becomes.” I think I disagree with this statement–the reason I say “I think I disagree” is because I haven’t thought about this kind of comment in relation to humor but I have heard similar comments about other phenomena and I disagree with those comments. An acquaintance once said to me that she doesn’t want to know too much about astronomy because that knowledge would take away from the beauty of the stars. I completely disagree with this statement. In fact, Richard Dawkins wrote a book called Unweaving the Rainbow: Science, Delusion, and the Appetite for Wonder in which he argues that the more you know about how the world works, the more wondrous the world becomes. In other words, ignorance is NOT bliss! I also hear comments like this when it comes to talking about games. Students often want to begin and end their analysis of a game with the statement: “It was (or wasn’t) fun.” But if I press them to articulate why it was fun, they often complain that doing so takes the fun out of the game. Clearly, I disagree with that idea. Otherwise, I wouldn’t teach game design and analysis. So I think I disagree with the statement that trying to figure out how humor works makes the humor disappear. But I do acknowledge that it is possible that humor is somehow different than these other phenomena.

Anyway, Holt discussed several theories about what we find funny and why. I think at least one of these theories can help us to understand what we find fun and why. Humans have probably been telling jokes since before they could speak (if you consider slapstick a kind of joke). The oldest known joke book is The Philogelos, or Laughter-Lover, a Greek anthology from the fourth or fifth century A.D.

The theory of why we laugh at jokes that I found most interesting (and useful for thinking about game design) is the incongruity theory (which is actually about the resolution of incongruity). We perceive an inconsistency of some sort–two things that don’t normally go together or a sentence that seems irrelevant to the story being told. Inconsistencies heighten our attention–an inconsistency in our world might signal the presence of a predator and if we don’t pay attention, we might end up as someone’s lunch. With our attention (and anxieties) heightened, we try to resolve the incongruity. When the resolution finally comes, we realize that the incongruity was actually harmless and we laugh with relief.

The incongruity theory is useful for us in trying to understand why games are fun. Like Michael Shermer, I believe that we are “pattern-seeking” animals. We have evolved to look for patterns in our world–those ancestors who were good at finding patterns were good at seeing the predator hiding in the trees and so survived, while those who missed such patterns ended up as lunch. As Steven Johnson reported in Discover magazine, when we find a pattern, we get a little jolt of pleasure in our brains. Games present patterns for us to discover and it’s pleasurable for us to find those patterns. I’m sure it’s one of the reasons I’m addicted to Dr Mario Online Rx. I get a little jolt of pleasure every time I resolve the inconsistency of the active viruses by manipulating the falling pills. So at least one way to put fun into games is to focus on patterns. The patterns have to present an incongruity that can be resolved by the player, but not too easily. If the player doesn’t recognize the incongruity or the incongruity cannot be resolved in a recognizable way, the player will be frustrated. If the incongruity is too easily resolved, the player will be bored.

The use of incongruities in games is related to what the authors of the text that I use in Creating Games (Game Design Workshop) call challenge. In order to make a game more fun, the authors say, a game designer can focus on the dramatic elements of the game, one of which is challenge. By focusing on making the challenge appropriate to the level of skill of the player, the game designer can avoid frustration and boredom, both of which are antithetical to fun. When the level of challenge presented to the player closely matches her skill level, she enters a state called flow, in which the player “is fully immersed in what he or she is doing by a feeling of energized focus, full involvement, and success in the process of the activity.” Entering the flow state, being completely immersed in what you’re doing, is pleasurable. As game designers, our ultimate goal is to allow someone to enter the flow state while playing our games.
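As a toy illustration (my own sketch, not something from Game Design Workshop or from the flow literature), a designer can think of this as keeping the next challenge inside a narrow band around an estimate of the player’s current skill, nudging the estimate up after successes and down after failures.

```python
# Toy model (not from the book): keep each new challenge inside a "flow band"
# just above an estimate of the player's skill, and adjust the estimate as the
# player succeeds or fails. All numbers are invented for illustration.

def next_challenge(skill, band=0.15):
    """Pick a challenge level slightly above the current skill estimate."""
    return min(1.0, skill + band / 2)

def update_skill(skill, succeeded, step=0.05):
    """Nudge the skill estimate up on success, down on failure."""
    return min(1.0, skill + step) if succeeded else max(0.0, skill - step)

skill = 0.30
for succeeded in [True, True, False, True]:          # a pretend play session
    challenge = next_challenge(skill)
    print(f"skill={skill:.2f}  challenge={challenge:.2f}")
    skill = update_skill(skill, succeeded)
```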



Because so many of my friends are completely addicted to FaceBook (and threatening not to be friends with me anymore), I decided to join two days ago (less than 36 hours ago). In keeping with the entire Web 2.0 movement, I feel that I should share my impressions of FaceBook immediately, before I’ve had too much time to reflect on the experience.

The first strange experience I had on FaceBook involved the status update feature. This is a feature that allows the user to tell her friends what she’s currently doing. One of the options was “Cathie is sleeping” and so when I went to bed, I changed my status to that. Yesterday morning, I logged in for further exploration and Liz was online (and of course, by “online”, I mean “on FaceBook”). I had forgotten to change my status when I logged in and so the first thing Liz said to me was “You aren’t sleeping.” She was right, of course. I was freaked out by the fact that my un-updated status was immediately noticed and commented upon. So I changed my status to “Cathie is freaked out by the status update feature.” This was immediately commented on by the two Robins, both of whom said something like: “It’s how we track your every movement.” Which, of course, freaked me out even more.

The second strange experience is one that Ian Bogost calls “collapsed time.” After I filled out my profile on FaceBook (entering things like where I went to high school, college and so on), the first person the site suggested that I add as a FaceBook friend is someone who actually is a friend of mine, Amy Briggs. I’ve known Amy since I was in seventh grade and she was in sixth. We went to high school together and then went to Dartmouth College together, where we were two of the very few women majoring in Computer Science in the mid-1980s. We both went on to get PhDs in Computer Science and we’re now both faculty members at small New England colleges (although she has gone over to the dark side and is Middlebury’s Acting Dean of Curriculum). Because of the similarities in our backgrounds, it was probably a no-brainer to suggest that I add her as a friend. And, of course, I did. To complete the friendship relationship in FaceBook, however, the second party must agree to the friendship. So I went to bed Monday night without Amy in my FaceBook friend list. By the time I logged into FB Tuesday morning, however, Amy had accepted my request for friendship. What’s strange to me is how FB reported this to me. It said, “Cathie and Amy Briggs are now friends.” Now we’re friends? Despite the fact that we’ve known each other for more than 30 years, now we’re friends? As Bogost has pointed out, FB collapses time to this moment. Now is the only time that matters. This freaks me out just a little bit.

The third strange experience happened this morning. I have been on FB for just more than 36 hours so I have only dabbled in exploring the many features available. For example, I have uploaded only one picture, mostly just to see how the upload feature works. It’s a picture of Ann and me taken at a baby shower this winter. (Despite the fact that I have just joined FB, quite a few other pictures of me are there because of the addicted friends I mentioned earlier. It’s another interesting and freaky aspect of these social networks that you can “exist” on the network without even knowing it.) A friend teased me via a comment on my wall (a public space on which FB members can post comments for and about you), implying that I need to get more photos out there. Although I know the comment was meant in jest, I think it illustrates an issue concerning “immediacy,” in which users expect stuff to happen immediately. The immediacy issue is related to the issue of collapsed time in that they are both about an emphasis on now. And on FB, stuff does happen immediately. And then all your friends are immediately notified about it. Freaky.

And that leads me to the last of my current impressions about FB. I’m having significant information overload. As a user, you can control the kinds of things you are notified about via email. By default, you are notified about everything. So when someone accepts your offer of friendship, you get an email about it. When a friend changes her status, you get an email about it. When a friend writes on your wall, you get an email about it. When a friend adds a photo to her page, you get an email about it. And so on. Like I said, you can change these settings but as a new user, it’s difficult to decide what you want to get an email about and what you don’t. I’m finding it challenging to keep up with it all. This brings to mind Sturgeon’s Law, which says: “Ninety percent of everything is crap.” Since I’m writing these impressions without having thought them through, that’s probably a fair assessment of this blog entry too.



{July 14, 2008}   Game or Sport?

While we were in Barcelona, I picked up the European (Summer Journey Double Issue) edition of Time magazine because it’s about the games that people play around the world. A number of the articles are fascinating, describing activities that I had never heard of.

For example, one article describes parkour like this: “It’s not quite a sport, and it is certainly no game. But for sheer athleticism, the French-born extreme activity is unmatched as a spectacular thrill.” The article goes on to describe parkour as part gymnastics and part tai chi. It involves moving through an urban landscape as quickly (running) and as efficiently (leaping over obstacles such as walls and gaps between buildings) as possible. Clearly, it requires considerable skill to not get hurt. We saw some young men engaging in this activity while we were in Spain and would have had no idea what they were doing had I not read the article. It’s difficult to imagine without seeing someone do it (pictures, video). But the thing that I found most interesting about this article is that it was about an activity that is “certainly no game.” If this special issue is about games that people play, why would parkour be included?

The question came up for me again in another article about competitive computer gaming in South Korea. Apparently, however, to call this activity computer gaming is to commit a faux pas. Instead, the activity is called e-sports. Gaming doesn’t engender the same respect that sport does and the professional gamers in South Korea definitely want respect for what they do.

So this got me to thinking about what distinguishes game from sport. And why does one activity command respect while the other doesn’t? I’ve had a similar conversation with Liz and Ann about art vs craft. I think it’s human nature to want to categorize things and so there are furious debates about what is art and what is craft. Apparently, lots of people have also argued about the difference between game and sport. Until I read the Time magazine articles, I hadn’t given serious thought to what is sport and what is game. In fact, the only reason I think this is an interesting conversation is because of the respect that seems to be accorded to one and not the other.

According to Dictionary.com, sport is:

1. an athletic activity requiring skill or physical prowess and often of a competitive nature, as racing, baseball, tennis, golf, bowling, wrestling, boxing, hunting, fishing, etc.
2. a particular form of this, esp. in the out of doors.
3. diversion; recreation; pleasant pastime.

And a game is:

1. an amusement or pastime
2. the material or equipment used in playing certain games
3. a competitive activity involving skill, chance, or endurance on the part of two or more persons who play according to a set of rules, usually for their own amusement or for that of spectators.

Each word has about 15 or 20 other definitions that are not quite related to this discussion. For example, someone can be a good sport or be in the real estate game. I’ll ignore those possibilities.

These two sets of definitions are very similar. Both a game and a sport are a “competitive” “pastime” involving “skill”. One difference seems to be that sport involves “skill or physical prowess” while physical prowess doesn’t seem to be part of the definition of a game. Instead, games involve “skill, chance or endurance.” But that makes me wonder why ESPN, which considers itself to be “the worldwide leader in sports”, shows the World Series of Poker (WSOP). One could argue that because winning the WSOP requires days and days of poker-playing, it requires physical endurance (which makes it a game) but there is no way to argue that it requires physical prowess.

Wikipedia says: “Sport is an activity that is governed by a set of rules or customs and often engaged in competitively. Sports commonly refer to activities where the physical capabilities of the competitor are the sole or primary determiner of the outcome (winning or losing), but the term is also used to include activities such as mind sports (a common name for some card games and board games with little to no element of chance) and motor sports where mental acuity or equipment quality are major factors.”

Once again, physical prowess appears to be an important factor. But this definition might help us a little in understanding why poker is shown on ESPN. According to this definition, some games, those with “little to no element of chance”, are also sports (presumably even though the games do not involve physical prowess). The role of chance in poker has a name–we call it “a bad beat” when someone should win a hand but chance intervenes to make her lose. While chance might play a role in a particular hand or even entire game of poker, in the long run (perhaps over a series of games), the poker player with the better abilities will come out ahead of the lesser player. So perhaps we can say that poker is on ESPN because chance plays only a small role in determining the outcome. And perhaps that’s also why the South Korean gamers insist that they play e-sports. My guess is that most of the computer games they’re playing leave very little to chance and the very best gamers win these competitions.
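That “long run” claim is easy to sanity-check with a simulation. In the hypothetical sketch below (all numbers invented), each hand is mostly luck, but one player has a small edge in skill; a single hand can go either way, while over a hundred thousand hands the edge reliably decides who comes out ahead.

```python
# Hypothetical simulation: each hand is mostly luck, but player A has a small
# skill edge. One hand can go either way (the "bad beat"); over many hands the
# edge dominates. Numbers are invented for illustration.

import random

random.seed(1)

def hand_result(edge=0.02):
    """A's winnings for one hand: +1 or -1 chip, with a slight bias toward A."""
    return 1 if random.random() < 0.5 + edge else -1

single_hand = hand_result()                          # either +1 or -1; luck dominates one hand
long_run = sum(hand_result() for _ in range(100_000))

print("one hand:", single_hand)
print("after 100,000 hands:", long_run)              # reliably positive for the better player
```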

I don’t think the distinction between games and sports is important except to the extent that the playing of games is considered “kid stuff” and accorded little respect. In Everything Bad is Good for You, Steven Johnson argues that today’s popular culture, including video games and television, is making us smarter because of the complexity presented to us through these media. I would argue that games (including non-video games) have made us smarter throughout all of history (not just today). Through the playing of games, we practice and develop mental and physical skills in a safe space, a space that Johan Huizinga called “the magic circle”, where the stakes are lower than they are in “real life.” In other words, the magic circle is a learning space. Rather than demanding that our game activities be called sports (as in e-sports), we should be proud to play games since doing so shows we are engaged in lifelong learning.



{June 8, 2008}   Wii Weaknesses

A recent positive experience I had with the Wii exposes a couple of weaknesses in Wii Tennis.

Because they enjoyed playing with our Wii so much, Ann and Greg have purchased their very own Wii. We went over to their house with our Wii remotes to play. It was amazingly fun playing Wii Tennis with four people, 2 against 2. In fact, it was much more fun playing 2 on 2 than it has ever been playing either 1 on 1 or against the computer. I was thinking about why the four person game is more fun and I think the reasons expose some problems with the way the game was designed.

Whenever you play Wii Tennis, you are playing doubles. What this means is: if you are playing 1 on 1 or against the computer, you are controlling two characters (usually two copies of your own Mii) with one remote. I think the decision to always have tennis be doubles was a mistake on the part of the designers of the game. One of the reasons that the Wii is so popular is because of its unique (and innovative) input mechanism. By using the Wii remote, a player is able to interact with the in-game characters in a way that feels like interacting with the real world. Rather than mashing keys on a remote, the player moves an arm to hit the ball in tennis, for example. This more realistic interaction with the game has appealed to many non-gamers and is truly what has made the Wii the phenomenon it has become. But the decision to have a single remote control multiple characters in the tennis game means that we lose some of the realism of the interaction. When a player moves an arm to hit the ball, two characters in the game swing their rackets, which is a little disconcerting. It would feel more realistic and be more engaging (and more fun) to be able to play singles if you are playing against only one other player or against the computer. Then your one remote would control a single character within the game.

Of course, I understand why the designers made this choice. Within Wii Tennis, there is no way to control where your character moves. The only thing you can control is when the racket is swung and at what angle. The movement of the characters is controlled by the game itself. By allowing a single remote to control two characters, the game then only needs to control horizontal movement of the characters (they move left and right depending on where the ball is) and does not need to control vertical (forward and back) movement of the characters. Instead, one character plays the front and the other plays the back. This, however, is another weakness of the game. Because you can’t control the movement of your character, there are some shots that are impossible to defend against. For example, Greg has perfected a shot off a serve to his forehand. If the serve is a regular serve (that is, not one of the ones that is really fast), Greg will return it with a cross-court shot in a spot where the front character cannot get to it and the back character (whose left and right movement is controlled by the game) does not start moving fast enough to be able to return the shot. So as a player, there is nothing you can do to return this shot. You’re inhibited by the limitations of the game implementation.
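A toy model of the control split described above (my own reading of the game’s behavior, not actual Wii Sports code) makes the problem concrete: the player supplies only the swing, while the game slides each character left and right toward the ball at a capped speed, with one character fixed at the net and one at the baseline. A sharp enough cross-court shot simply outruns that capped speed.

```python
# Toy model of the apparent Wii Tennis control split (my inference, not real
# Wii Sports code): the player only times and angles the swing; the game slides
# the characters horizontally toward the ball, one fixed at the net and one at
# the baseline, at a capped speed.

NET_DEPTH, BASELINE_DEPTH = 0.25, 0.90   # fixed front/back positions (court fractions)
MAX_SLIDE = 0.08                         # how far a character can move per frame

def step(characters, ball_x):
    """Game-controlled movement: each character drifts toward the ball's x position."""
    for c in characters:
        dx = ball_x - c["x"]
        c["x"] += max(-MAX_SLIDE, min(MAX_SLIDE, dx))   # capped speed is why some
                                                        # cross-court shots are unreachable
    return characters

team = [{"x": 0.5, "y": NET_DEPTH}, {"x": 0.5, "y": BASELINE_DEPTH}]
for ball_x in [0.90, 0.95, 1.00]:                       # a sharp cross-court return
    team = step(team, ball_x)

print([round(c["x"], 2) for c in team])                 # both characters lag behind the ball
```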

This second weakness, the lack of control over the movement of the in-game characters, exists even when you play 2 on 2 with a separate remote controlling each of the four characters in the game.  But the first weakness is not there, so the 2-on-2 game feels like a more natural interaction, even if other flaws exist.  I think this is a lesson for how to design engaging games.  The more realistic the interaction, and the more closely the in-game characterization represents the real world, the more engaging (and the more fun) the game is.


