Desert of My Real Life

{June 19, 2013}   Software Controls Users

I’m often surprised that some of the most valuable lessons I learned back in the late 1980s have not become standard practice in software development. Back then, I worked for a small software development company in Western Massachusetts called The Geary Corporation. The co-founder and owner of the company was Dave Geary, a guy I feel so fortunate to have learned so much from at a formative stage in my career. He was truly ahead of his time in the way that he viewed software development. In fact, my experience shows that he is ahead of our current time, as most software developers have not caught up with his ideas even today. I’ve written about these experiences before because I can’t help but view today’s software through the lens that Dave helped me to develop. A couple of incidents recently have me thinking about Dave again.

I was talking to my mother the other day about the … With Friends games from Zynga. You know those games: Words With Friends, Scramble With Friends, Hanging With Friends, and so on. They’re rip-offs of other, more familiar games: Scrabble, Boggle, Hang Man, and so on. She was saying that she stopped playing Hanging With Friends because the game displayed the words that she failed to guess in such a small font on her Kindle Fire and so quickly that she couldn’t read them. Think about that. Zynga lost a user because they failed to satisfy her need to know the words that she failed to guess. This is such a simple user interface issue. I’m sure Zynga would explain that there is a way to go back and look for those words if you are unable to read them when they flash by so quickly. But a user like my mother is not interested in extra steps like that. And frankly, why should she be? She’s playing for fun and any additional hassle is just an excuse to stop playing. The thing that surprises me about this, though, is that it would be SO easy for Zynga to fix. A little bit of interface testing with real users would have told them that the font and the speed at which they displayed the correct, unguessed word were too small and too fast for a key demographic of the game.

My university is currently implementing an amazingly useful piece of software, DegreeWorks, to help us with advising students. I can’t even tell you how excited I am that we are going to be able to use this software in the near future. It is going to make my advising life so much better and I think students will be extremely happy to be able to use the software to keep track of their progress toward graduation and get advice about classes to think about taking in the future. I have been an effusive cheerleader for the move to this software. There is, however, a major annoyance in the user interface for this software. On the first screen, when selecting a student, an advisor must know that student’s ID number. If the ID number is unknown, there is no way to search by other student attributes, such as last name, without clicking on a Search button and opening another window. This might seem like a minor annoyance but my problem with this is that I NEVER know the student’s ID number. Our students rarely know their own ID number. So EVERY SINGLE time I use this software, I have to make that extra click to open that extra window. I’m so excited about the advantages that I will get by using this software that I am willing to overlook this annoyance. But it is far from minor. The developers clearly didn’t test their interface with real users to understand the work flow at a typical campus. From a technical standpoint, it is such an easy thing to fix. That’s why it is such an annoyance to me. There is absolutely no reason for this particular problem to exist in this software other than a lack of interface testing. Because the software is otherwise so useful, I will use it, mostly happily. But if it weren’t so useful otherwise, I would abandon it, just as my mother abandoned Hanging With Friends. 
When I complained about this extra click (that I will have to make EVERY time I use the software), our staff person responsible for implementation told me that eventually that extra click will become second nature. In other words, eventually I will mindlessly conform to the requirements that the technology has placed on me.
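To be clear about how trivial the fix would be: a single search box can accept either an ID number or a name, with no extra window. This is only my own sketch of the idea; the function, the student records, and the matching rules are all invented for illustration and have nothing to do with how DegreeWorks actually works.

```python
# Hypothetical sketch: one search box that accepts either a student ID
# (exact match) or a partial last name (prefix match), so the advisor
# never needs to open a second window.
def find_students(query, students):
    """Return matching students by ID or by last-name prefix."""
    q = query.strip().lower()
    if q.isdigit():  # the query looks like an ID number
        return [s for s in students if s["id"] == q]
    return [s for s in students if s["last"].lower().startswith(q)]

students = [
    {"id": "100234", "last": "Rivera", "first": "Ana"},
    {"id": "100567", "last": "Riley", "first": "Tom"},
]
print(find_students("ri", students))      # both Rivera and Riley match
print(find_students("100234", students))  # exact ID match: Rivera only
```

The point is not the particular matching rules but that deciding between the two kinds of lookup is a few lines of code, not a separate screen.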

Dave Geary taught me that when you develop software, you get the actual users of that software involved early and often in the design and testing. Don’t just test it within your development group. Don’t test it with middle management. Get the actual users involved. Make sure that the software supports the work of those actual users. Don’t make them conform to the software. Make the software conform to the users. Otherwise, software that costs millions of dollars to develop is unlikely to be embraced. Dave’s philosophy was that technology is here to help us with our work and play. It should conform to us rather than forcing us to conform to it. Unfortunately, many software developers don’t have the user at the forefront of their minds as they are developing their products. The result is that we continue to allow such software to control and manipulate our behavior in ways that are arbitrary and stupid. Or we abandon software that has cost millions of dollars to develop, wasting valuable time and financial resources.

This seems like such an easy lesson from nearly thirty years ago. I really don’t understand why it continues to be a pervasive problem in the world of software.

{June 11, 2012}   Interaction Design

I’m reading an interesting book by Janet Murray called Inventing the Medium: Principles of Interaction Design as a Cultural Practice. She is articulating things that I’ve thought for a long time but is also surprising me a lot, making me think about things in new ways. The book is about the digital medium and how objects that we use in this medium influence the way we think about the world. She argues that technological change is happening so quickly that our design for the medium hasn’t kept up. Designers use the conventions that work well in one environment in a different environment without really thinking about whether those conventions make sense in that second environment. As a result we get user interfaces (which is a term she doesn’t like but which I’ll use because most people interested in these things have a pretty good idea of what we mean by the term) that are far too complex and difficult to understand.

One idea that strikes me as particularly important and useful is Murray’s argument that designers create problems when they separate “content” from the technology on which the “content” is viewed. Like McLuhan, Murray believes that “the medium is the message,” by which she means “there is no such thing as content without form.” She goes on to explain, “When the technical layer changes, the possibilities for meaning making change as well.” In other words, if you change the device through which you deliver the content, the tools needed to help consumers understand that content should probably also change. My favorite personal example of the failure of this idea is the Kindle, Amazon’s e-reader. I’ve owned my Kindle for about three years and I mostly love it. One thing that feels problematic to me, however, is the reporting of where you are in the book that you’re reading. Printed books are divided into chapters and pages and it is easy to see how much further the reader has to go to the end of the book. Readers trying to read the same book might have difficulty if they are using different editions because page numbers won’t match up but the divisions into chapters should still be the same. If a page of text in a physical book corresponds to a screenful of text on an e-reader, page numbers don’t really make sense in e-books, mainly because the reader can change the size of the font so that more or less text fits on the screen at a given time. This means that the Kindle doesn’t have page numbers. But readers probably want to be able to jump around e-books just as they do in physical books. And they want to know how much progress they’ve made in an e-book just as they do in a physical book. So Amazon introduced the idea of a “location” in their e-books. The problem with a “location,” however, is that I have no idea what it corresponds to in terms of the length of the book so using locations doesn’t give me a sense of where I am in the book.
For that purpose, the Kindle will tell me the percentage of the book that I’ve currently read. I think the problem with these solutions is that the designers of the Kindle have pretty much taken the idea of pages, changed it only slightly and unimaginatively, and the result isn’t as informative in the digital medium as pages are with a physical book. I don’t know what the solution is but Murray suggests that the e-reader designers should think about the difference between “content” and “information” in their design.
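As I understand it, a “location” is essentially an offset into the text, so the percentage is just that offset divided by the book’s total number of locations. The arithmetic is trivial, which is part of why it feels unimaginative; a toy sketch (the numbers and the function are invented, not anything Amazon publishes):

```python
# Hypothetical sketch: if a "location" is an offset into the text, then
# reported progress is just the ratio of current offset to total length.
def progress_percent(current_location, total_locations):
    """Percentage of the book read, rounded to the nearest whole percent."""
    if total_locations <= 0:
        raise ValueError("total_locations must be positive")
    return round(100 * current_location / total_locations)

print(progress_percent(1500, 6000))  # 25
```

Unlike a page number, neither the location nor the percentage gives the reader any sense of the book’s structure, which is exactly the complaint.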

Murray distinguishes between “content” and “information” and thinks that device designers have problematically tried to separate content from the technology on which this content will be viewed. So the designers of the Kindle see the text of the book as the content, something they don’t have to really think about in designing their device. Instead, Murray suggests that they focus on information design, where the content, which in this case is the text, and the device, in this case the Kindle, cannot be separated. The designers should think about the affordances provided by the device in helping to design the information, which is meaningful content, with which the reader will interact.

Another example appeared in my Facebook timeline last week, posted there by one of my friends pointing out the fact that the Mitt Romney campaign is insensitive at best and hostile at worst to women. The post is a video of Romney’s senior campaign advisor Eric Fehrnstrom, appearing on This Week with George Stephanopoulos a week ago, calling women’s concerns “shiny objects of distraction.” Watching it, I was annoyed and horrified by what I was supposed to be annoyed and horrified by. But I also noticed the ticker-tape Twitter feed at the bottom of the video. The headline-type feeds at the bottom of the screen on television news have become commonplace, despite the fact that they don’t work particularly well (in my opinion). I’ve always felt that the news producers must know that the news they are presenting is boring if they feel they have to give us headlines in addition to the discussion of the news anchors. But in the video of Romney’s aide, the rolling text at the bottom of the screen is not news headlines but a Twitter feed. So the producers of This Week have decided that while the “conversation” of the show is going on, they want to present the “conversation” that is simultaneously happening on Twitter about the show. There are several problems with this idea, not least of which is that most of the tweets that are shown in the video are not very interesting. In addition, the tweets refer to parts of the program that have already gone by. And finally, the biggest problem is that the Twitter feed recycles. In other words, it’s not a live feed. They show the same few comments several times. Someone must have thought that it would be cool to show the Twitter conversation at the same time as the show’s conversation but they didn’t bother to think carefully about the design of that information or even which information might be useful to viewers. Instead, they simply used the conventions from other environments and contexts in a not very useful or interesting way.

Another of Murray’s ideas that strikes me as useful is the idea of focusing on designing transparent interfaces rather than intuitive interfaces. Intuition requires the user to already understand the metaphor being used. In other words, the user has to understand how an object in the real world relates to whatever is happening on the computer screen. This is not particularly “intuitive,” especially for people who don’t use computers. I’ve been thinking about the idea of intuitive interfaces since I started teaching computing skills to senior citizens. For them, it is not “intuitive” that the first screen you see on a Windows computer is your desktop. And once they know that, it isn’t “intuitive” to them what they should do next because it’s all new to them and so they don’t have a sense of what they CAN do. For example, they can put a piece of paper down on a real desktop. Metaphorically, you can put a piece of paper (a file) down on the Windows desktop but the manner in which you do that is not “intuitive.” The first question I always get when I talk about this is: How do I create a piece of paper to be put on the desktop? Of course, that’s not the way they ask the question. They say, “How do I create a letter?” That’s a reasonable question, right? But the answer depends on lots of things, including the software that’s installed on the computer you’re using. So the metaphor only goes so far. And the limitations of the metaphor make the use of the device not particularly intuitive.

Murray argues that focusing on “intuition” is not what designers should do. Instead, designers should focus on “transparency,” which is the idea that when the user does something to the interface, the change should be immediately apparent and clear to the user. This allows the user to develop what we have typically called “intuition” as she uses the interface. In fact, lack of transparency is what makes many software programs feel complex and difficult to use. Moodle, the class management system that my university uses, is a perfect example of non-transparent software. When I create the gradebook, for example, there are many, many options available for how to aggregate and calculate grades. Moodle’s help modules are not actually very helpful but if the software were transparent, that wouldn’t matter. I would be able to make a choice and immediately see how it changed what I was trying to do. That makes perfect sense to me as a way to design software.

This book is full of illuminating observations and has already helped me to think more clearly about the technology that I encounter.

{June 15, 2011}   Tumblr Review–Part 2

It has taken me more than a month and a half to write the second part of this review.  I think it’s because I said in my last post that I would write about THIS topic in my next post.  Since that promise (or threat–take your pick) seems to have stymied me for a while, you can bet that I will never do that again.

I’ve been looking for a long time for a tool that would make it easy for me to implement a web site that looks the way I want it to and organizes information in the way I want it to.  When I first came across Tumblr, I thought I had found a tool that was pretty close to what I wanted.  As I read what the site promises, I realized that it wasn’t exactly what I wanted.  And then as I started to use the site, I realized that the developers of Tumblr hadn’t delivered on what they said Tumblr was going to be and so the tool is even further away from what I’m looking for than I realized.  The first part of my review of the tool focused on the things they promised but didn’t deliver.  I should point out that Tumblr no longer offers the options that I complained about in the first part of my review.  And despite my extensive contact with the technical folks at the company, no one has contacted me about how they’ve decided to resolve these issues. Perhaps it would be difficult to contact a customer (even a non-paying one) to tell them that their complaints prompted you to remove options rather than fix them. In any case, I think my dissatisfaction with Tumblr arises from my overall dissatisfaction with Web 2.0 in general and the values embraced by the people who develop tools for this environment.  So in this second part of my review, I’m going to focus on the main difficulty I have with Tumblr.  I should point out, however, that I am critiquing Tumblr for not doing something they have never promised to do.  I just wish the tool worked differently.

I am one of the few people my age who actually grew up with computer technology.  I started to develop computer software in 1978 when I was a sophomore in high school.  Although the Internet existed then, the World Wide Web did not (trivia: the birth year of the World Wide Web is debated depending on which event you use to mark its birth but it was sometime between 1990 and 1992).  Developing new tools and content for the World Wide Web was somewhat challenging and required a deep knowledge of how it all worked as well as significant programming skills. In other words, I have been producing content since the days of fairly difficult content production.  In those days, the line between content production and content consumption (viewing of that content) was pretty clear.

Gradually, however, tools were developed to allow the creation of content by more and more people. Together, these tools (things like blogging software, photo sharing sites, wikis and so on) make up Web 2.0.  I personally believe that the addition of these new, less technical content producers is a positive thing, leading to more diversity of content on the Web.  But when all of these new, easier-to-use tools entered the marketplace, I recognized that the underlying values of the tools were changing.  I’m only now beginning to fully understand the implications of these changing values.

One of the new underlying values involves a changing understanding of the word production.  I have always thought of production as the creation of new content.  Increasingly, I have come to understand that in Web 2.0 content consumption is in itself a kind of production.  In fact, this is the primary underlying value of Tumblr.  As a user browses the Web, she will inevitably find content that she finds interesting and wants to share with her online friends.  Tumblr makes sharing incredibly easy.  In fact, my unscientific review of Tumblr sites suggests that the vast majority of them are sites where the owner reposts content that she has found elsewhere on the Web.  In other words, the Tumblr owner is producing a new site that is idiosyncratically hers.  Her unique Web content consumption results in the production of a mashup, a site made of pieces of other sites.  For example, this Tumblr reposts items from around the Web that the owner finds “the most entertaining.”  None of the individual items is created by the owner of the Tumblr.  Instead, the owner produces the unique combination of these individual items.  This understanding of production by combining sites is very different than what I had been looking for when I found Tumblr.  Because I wanted to combine my various sites of production (on which I produce the individual items) into a single site, I was looking for something that would automatically grab content from those various sites of production.  Because Tumblr is designed for a human to make qualitative decisions about which content to include (from sites owned by a variety of people), the automatic grabbing of content is not as critical to Tumblr’s designers as it is to me.  As an aside, I am really interested in how this idea of consumption as production is affecting my students and their understanding of things like research and citations and intellectual property and originality.  
It’s difficult to know whether changing attitudes about these issues are driving changes in technology or vice versa.  In any case, this difference in understanding of the word production is the main reason I am dissatisfied with Tumblr.  What would I be satisfied with?

I would like a tool that automatically consolidates all of my other production sites while also allowing me to easily share Web content produced by others that I find interesting.  And I would like to be able to fully customize the layout of the site into what I will call “channels.”  That is, I’d like a “channel” that shows the content from this blog, another “channel” that shows my Flickr feed and so on, and I’d like to be able to arrange the “channels” on the page in a variety of ways.  And finally, I’d like the tool to allow me to customize how items appear in the various channels.  Another of Web 2.0’s underlying values is the privileging of recency.  That is, the most recent items on a site are the most important and, therefore, appear first.  I’ve written about my concerns about this value before.  Some sites, such as Twitter, take this focus on recency to extremes by deleting any tweets that are more than a few weeks old, which, of course, makes it really difficult to go back at a later time to find tweets that you found interesting in the past.  Therefore, I would like a site that allows me to override the default order of items and to provide my characterization of what is most important.  This last requirement leads me into an entirely new discussion about information organization that I think is an unsolved research problem for the technical world to tackle.  But I want my next blog entry to take me less than a month and a half to write so I won’t promise that that discussion will appear in my next entry.
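To make the “channels” idea concrete, here is a toy sketch of the shape of the tool I’m imagining. Nothing here is a real product’s API; the classes, the data, and the ordering rules are all my own invention, purely to show what overriding recency with a user-defined sense of importance could look like.

```python
# Hypothetical sketch: a "channel" wraps one production site's items and
# can override the Web 2.0 default of recency with its own ordering.
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    date: str          # ISO format, so string order == chronological order
    importance: int = 0

@dataclass
class Channel:
    name: str
    items: list = field(default_factory=list)
    order_by: str = "date"   # the usual default: most recent first

    def ordered(self):
        if self.order_by == "importance":
            key = lambda item: item.importance
        else:
            key = lambda item: item.date
        return sorted(self.items, key=key, reverse=True)

# A channel for this blog, ordered by my own sense of what matters
# rather than by what happens to be most recent.
blog = Channel("blog",
               [Item("Old favorite", "2009-03-01", importance=10),
                Item("New post", "2011-06-15", importance=1)],
               order_by="importance")
print([item.title for item in blog.ordered()])  # ['Old favorite', 'New post']
```

The design choice worth noticing is that ordering lives in the channel, not in the platform, which is precisely the control that recency-first tools withhold from their users.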

{April 11, 2011}   We Are STILL Playing a Game

I recently wrote a blog entry in response to Caroline Bender‘s question about the Scrabble game we were playing online.  Since we have different motivations for playing Scrabble, Ms. Bender asked whether we were actually playing a “game.”  My short response: yes.  After reading that response, Scott commented about the difference for him between playing Go and Scrabble on FaceBook.  He observed that Go is a more interesting game for him and he tried to explain why.  His reasons were:

1. He plays LOTS of Scrabble and so it has become less exciting for him.
2. Scrabble on FaceBook has a built-in dictionary and doesn’t allow you to play a word that is not in the dictionary, so the game is less about vocabulary and more about the strategy of how to place words for maximum score and of blocking his opponent’s potential moves.
3. Scrabble has an element of luck while Go is all about skill, which means that in Scrabble, luck can sometimes overcome superior strategy and skill.
4. Go allows for deception.
5. Each move in Go is very clearly part of a larger battle, so each move has both short-term and long-term consequences, which makes it feel like every move has high stakes attached to it.
6. Finally, Go has a long history with significant implications in East Asian philosophy, society and politics, so that when he plays Go, he recognizes that it is more than “just” a board game.

He then goes on to ask how these elements fit into Costikyan‘s six elements that every game must have.  In particular, Scott wants to know whether the historical and cultural context of a game is important.  He makes some interesting points and asks a very good question.

Before I discuss Go and Scrabble in particular, I need to explain a bit about Costikyan’s article that may not have been clear in my previous blog entries where I’ve used his framework to analyze a game.  Costikyan wrote his article for game designers.  That is, he intended his framework as a tool for game designers to use when they have created a game that is pretty good (or maybe even pretty bad) and they want to figure out how to make the game great.  And so he spends a lot of time in the article discussing the importance of decision-making and how that relates to management of resources and the type of information given to the player.  For example, in Go, the player has perfect information which means that there is no information hidden from the player.  The player doesn’t have to worry about chance or any hidden resources that her opponent might have.  In contrast, a Scrabble player has imperfect information which, in this case, means that some information about the game state is known to the player while other information is hidden from the player.  In particular, the letters that the opponent has are hidden from the player.  In addition, there is the element of chance in Scrabble coming from the random draw of letters.  If a player happens to get all vowels or all consonants, for example, it may be quite difficult for the player to make any word so she may need to trade in her tiles which amounts to skipping an opportunity for scoring points.  The different information structures in the two games significantly affect the kind of decision-making in the game.  In Go, the better player will always win (unless she makes a stupid mistake) because there is no element of chance and no hidden information.  Chance and hidden information give the inferior Scrabble player more of a chance to win.  I believe this is part of the reason that Scott prefers Go to Scrabble.
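To put a rough number on that element of chance, here is a quick simulation, entirely my own sketch rather than anything from Costikyan or from Scrabble’s publishers, of how often a random seven-tile rack is all vowels or all consonants, the situation where trading in tiles and skipping a scoring turn becomes tempting. I’m treating the two blank tiles as consonants for simplicity.

```python
# Estimate how often a random 7-tile Scrabble rack has no vowels or no
# consonants, using the standard 100-tile English tile distribution.
import random

# Standard distribution; the two blanks ("?") count as consonants here.
TILES = ("E" * 12 + "A" * 9 + "I" * 9 + "O" * 8 + "U" * 4 +
         "N" * 6 + "R" * 6 + "T" * 6 + "L" * 4 + "S" * 4 + "D" * 4 +
         "G" * 3 + "B" * 2 + "C" * 2 + "M" * 2 + "P" * 2 + "F" * 2 +
         "H" * 2 + "V" * 2 + "W" * 2 + "Y" * 2 +
         "K" + "J" + "X" + "Q" + "Z" + "?" * 2)
VOWELS = set("AEIOU")

def awkward_rack_rate(trials=100_000, seed=0):
    """Fraction of random racks that are all vowels or all consonants."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        rack = rng.sample(TILES, 7)  # draw 7 tiles without replacement
        vowel_count = sum(1 for tile in rack if tile in VOWELS)
        if vowel_count in (0, 7):
            bad += 1
    return bad / trials

print(f"{awkward_rack_rate():.1%}")  # roughly 2% of racks
```

A couple of percent sounds small, but over a full game of many draws it means the unlucky rack happens regularly, and there is simply no analogue of it in Go.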

There is a large section of Costikyan’s article that I rarely talk about in these blog posts but which we discuss in detail in my classes.  After specifying the six elements that every game MUST have, Costikyan discusses many more elements that a game may or may not have.  In this section of the article, he is writing to the game designer who has created a good game that needs something extra to make it great.  Interestingly, one of the things that Costikyan suggests the game designer consider adding is more chance.  It’s one of the suggestions that is problematic in using this article with beginning game designers–their games often have too much chance so that the decisions the player makes do not feel significant or meaningful to the player.  Adding more chance to such a game makes the game worse, not better.  Another thing that Costikyan suggests the game designer pay attention to in order to make her game great is narrative tension.  I think this is what Scott is talking about when he says that in Go, he feels like there are mini-battles that make a difference in the larger war that is the game.  Every single move matters in this situation.  No single move can work alone to capture the opponent’s stones.  This idea of narrative tension is why Scott and I each sometimes just want to throw in the towel on a game of Go.  We both know who has won the game and so there is no more narrative tension.  We sometimes continue to play, however, because the mini-battles can themselves be interesting and allow for a sense of tension.  When I’m losing a game, I get great satisfaction from playing and winning one of these mini-battles, even when it won’t make a difference in the larger outcome of the game.  Ultimately, I think Scott understands his game-playing preferences pretty well and he’s done a great job analyzing why he prefers Go over Scrabble.

I find his final question really interesting.  He asks about the tradition of Go, wondering what Costikyan would say about this sense that the game is more than “just” a game, that it is an expression of a larger, mystical tradition.  I don’t think Costikyan really has much to say about this particular topic.  But I recently took Ann‘s Postcolonial Literature course and I think a lot of what we read in that class relates to Scott’s comments about the mysticism of Go.  Go really is an ancient game–Wikipedia tells us that the game is more than 2000 years old.  But the sense of mysticism that we in the West associate with the East and with artifacts of the East (like Go) stems from Orientalism, a set of assumptions that stereotype the East in a way that Edward Said finds damaging because those stereotypes allow us to think of Asians as “other.”  That is, these stereotypes allow us to think of Asians as somehow fundamentally different than us, the white, Western majority.  As a comparison, we can think of Chess, a game that is nearly as old as Go.  We in the West don’t ascribe the same kind of mysticism to Chess as we do to Go.  Both games are ancient games of perfect information that require significant study and play to master.  But Go is viewed with a sense of awe that is rarely present when Chess is discussed.

This discussion of the history and tradition does, however, make me think of something that is important for game designers to understand.  A game designer can never control what a player brings to the game.  In other words, if a particular game taps into some aspect of player psychology that is completely external to the game itself, the game may or may not be successful on that basis alone.  This particular aspect is completely outside of the game designer’s control.  I think remembering this probably will help a game designer not take the reception of her game too personally.  And it helps us understand that, like many things, there is some “je ne sais quoi” in the art of game design that helps to keep it perpetually interesting.

{December 27, 2010}   Popular Culture and TIA

I just finished watching the five episodes of the BBC miniseries The Last Enemy.  Ann had recommended it because it is about computers and privacy and also because Benedict Cumberbatch (of recent Sherlock Holmes fame) is the star.  I mostly liked the series but there were a couple of things that really bothered me about it.

The plot begins when Stephen Ezard (played by Cumberbatch) returns home to England after living in China for four years.  He’s coming home to attend the funeral of his brother Michael, an aid worker who was killed in a mine explosion in some Middle Eastern desert.  Ezard is a mathematical genius who went to China to be able to work without all the distractions of life in England.  He is a germaphobe (at least in the first episode–that particular personality trait disappears once the plot no longer needs it) who is horrified by the SARS-like infections that seem to be running rampant on the plane and throughout London.  After his brother’s funeral, Stephen goes to Michael’s apartment and discovers that Michael was married to a woman who was not at the funeral and who appears to be in hiding.  She’s a doctor who is taking care of a woman who is dying from some SARS-like infection–and that woman is in Michael’s apartment.  Despite his germaphobia, Stephen immediately has sex (in this germ-infected apartment) with his brother’s widow.

Meanwhile, Stephen’s ex-girlfriend is an MP who is trying to push through legislation that would allow the use of a program called Total Information Awareness (TIA).  TIA is already largely in place but the people of England are not happy about it.  So Ezard is recruited as a “famous” apolitical mathematician who will look at the program and sell it to the public.  What is TIA?  It’s a big database that collects all kinds of electronic information.  Every credit card purchase, building entry with an id card, video from street cameras, and so on is stored in this database.  The idea is that by sifting through this information, looking for certain patterns, English authorities will be able to find terrorists before they strike.  The interesting thing about this idea is that it isn’t fiction.   In 2002, the US government created the Information Awareness Office in an attempt to create a TIA system.  The project was defunded in 2003 because of the public outcry.  At the time, I was concerned about the project both as a citizen with rights that would potentially be threatened and as a computer scientist critical of the idea that we could actually find the patterns necessary to stop terrorism.

This is where the plot of The Last Enemy became problematic for me.  Michael’s widow, Yassim, who is now Stephen’s lover, disappears.  Stephen takes the job as spokesperson for TIA primarily so he’ll have access to a system that will allow him to track Yassim.  We see many scenes of him sitting for hours and hours wading through data with the help of the TIA computer system.  At one point, he tracks the car that Yassim had been riding in by looking for video footage taken by street surveillance cameras and finding the license plate of the car in the video.  This is completely unrealistic and one of the main reasons that, with our current technology, a TIA system will never work.  We don’t yet have the tools to wade through the massive amounts of irrelevant data to find only the data we’re interested in.  And when that data comes in the form of photos or video, we don’t really have quick, efficient electronic means of searching the visual data for useful information.  Since so much of the plot of The Last Enemy hinges on Stephen finding these “needles in a haystack” in a timely manner, I had a difficult time suspending my disbelief.  The problem is that it is very difficult to find relevant information in the midst of huge amounts of irrelevant information.  Making this kind of meaning is one of the open problems of current information technology research.

The second major problem that I had with the plot of this series has to do with Stephen as a brilliant mathematician and computer expert not understanding that his electronic tracks within the system would be easy to follow.  He makes no attempt to cover those tracks and so as soon as he logs off, his pursuers log on behind him and look at everything he looked at.  And many major plot points hinge on his pursuers knowing what he knows.  He doesn’t even take minimal steps to cover his tracks and then he seems surprised that others have followed him.  This is completely unrealistic if he really is the brilliant computer expert he would need to be in order for the government to hire him in this capacity.

I won’t ruin the surprises of the rest of the plot of this series.  But let’s say that much of the premise seems pretty realistic to me, like we’re not too far off from some of these issues coming up for consideration soon.  For that reason, I recommend the series, despite the problems I saw and despite the unbelievable melodrama that arises as a result of Stephen’s relationship with his brother’s widow.  There is a particularly laughable scene between the two of them when she tries to teach him how to draw blood by allowing him to practice on her.  It’s supposed to be erotic, which is weird enough given the danger they’re in at that point, but the dialog is so bad that I laughed out loud.  Despite these problems, the series explores enough interesting questions that I kept watching, wanting to know how the ethical questions would be resolved.

{December 10, 2010}   Zero Views

Recently, my favorite NPR show, On the Media, had a story about an interesting blog called Zero Views.  The blog celebrates “the best of the bottom of the barrel” by posting the funniest YouTube videos that no one (NO ONE–hence the name “Zero Views”) has watched.  I found several things about this story that are worth commenting on. 

First, this is the kind of meta-site on the Web that I love.  It’s a site that highlights content from another site.  But here’s the thing.  As soon as this site focuses on a video that has zero views, it is HIGHLY likely that the video will no longer have zero views.  And in fact, if the Zero Views blog is at all popular (and my sense is that it is fairly popular), any video that it talks about is likely to go viral and become incredibly popular with thousands of views.  That, to me, is a really interesting phenomenon.

The second thing that I find interesting about this story is an underlying issue about popularity.  This is something that I’ve been thinking about for a while.  What makes a blog, a site, a video “popular?”  The easy answer has to do with numbers of views.  But that somehow feels unsatisfying to me.  I’ve watched many videos and traveled to many links that were recommended to me, only to feel…dissatisfied with what I’ve seen.  This makes me think that popularity must have something to do with “likeability” or some other related concept.  How would we measure “likeability” and surely, the fact that someone “recommended” a particular site, blog, video to me must have some relationship to “likeability,” right?

There are sites such as Technorati that try to measure “popularity” by measuring the number of links that each site has to it.  That is, the more other sites link to your site, the higher you rank in Technorati’s popularity rankings.  There are many problems with this idea of “popularity,” the most obvious of which is that more tech-literate folks are more likely to link to other sites.  So if you are “popular” among less tech-literate folks, you are less likely to be linked to so you will be ranked as less “popular.”
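The link-counting idea behind rankings like Technorati’s is simple enough to sketch in a few lines.  Here’s a toy version in Python (the link graph is hypothetical, and real rankings do more, such as weighting links by the linking site’s own rank):

```python
# Rank sites by the number of other sites that link to them --
# a toy version of link-count "popularity".
from collections import Counter

# Hypothetical link graph: site -> sites it links out to
links = {
    "techblog.example": ["gadgets.example", "knitting.example"],
    "gadgets.example": ["techblog.example"],
    "news.example": ["techblog.example", "gadgets.example"],
    "knitting.example": [],  # popular with readers, but its readers don't link
}

# Count inbound links for each site
inbound = Counter(target for targets in links.values() for target in targets)

# Most-linked-to sites first
ranking = sorted(links, key=lambda site: inbound[site], reverse=True)
print(ranking)
```

Notice how this measure builds in exactly the bias described above: knitting.example may have many devoted readers, but because they don’t run sites that link to it, it ranks near the bottom.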

I don’t actually know how to measure “popularity” of websites, blogs, videos and so on.  The proliferation of “top 100” or “top 10” shows on TV makes me think that “popularity” is a cultural phenomenon, something we are interested in as a culture.  But I’m curious about what various groups of people mean when they use the word “popular” when it comes to online content.  What do you think?  I’m also really interested in the kinds of activities and behaviors that can affect the “popularity” of online content.  What do you think about that?

{October 22, 2010}   Original Research–Good or Bad?

I recently rewatched Julia, the 1977 film starring Jane Fonda and Vanessa Redgrave.  It is based on a chapter in Lillian Hellman‘s memoir, Pentimento: A Book of Portraits.  That chapter tells the (probably fictional) story of Hellman’s longtime friendship with Julia, a girl from a wealthy family who grows up to fight fascism in Europe in the 1930s.  I loved this book when I read it in high school and I went on to read nearly all of Hellman’s other work as well as several biographies.

As I watched the movie, several questions occurred to me and so, being a modern media consumer, I immediately searched for answers online.  This search led me to Wikipedia, which for me is a fine source of answers to the kinds of questions I had.  In fact, I use Wikipedia all the time for this sort of thing.  I was surprised then to find the following qualifying statement on the entry for Pentimento:

This section may contain original research.  Please improve it by verifying the claims made and adding references. Statements consisting only of original research may be removed.

As I said, I use Wikipedia a lot.  And I have never seen this qualifying statement before.  I think this statement implies that original research is somehow bad.  I don’t think that’s what the folks at Wikipedia mean.  At least, I hope it’s not what they mean.  So I decided to look into the statement a little more deeply.  There are a couple of parts of the statement that are interesting.   

First, the words “may contain” are in bold.  I think that’s supposed to indicate that the section may or may not contain original research.  It’s clear that articles in Wikipedia should NOT contain original research but it isn’t clear why. 

I then checked to see how “original research” is defined by Wikipedia and found this on their policy pages: “The term ‘original research’ refers to material—such as facts, allegations, ideas, and stories—not already published by reliable sources.”  How would one determine whether a particular section contained “original research” or not?  Probably by looking for references to “reliable sources” in the section.  Therefore, if a section doesn’t contain references (or not enough references), it might be difficult to determine whether that’s because the author simply didn’t include references to other available sources, because the work is based on “original research,” or because the work is completely fabricated.  Or, I guess, it could be some combination of the three reasons.  So I guess that’s why “may contain” is in bold.  The lack of references could mean any number of things.

The next part of the qualifying statement is even more interesting to me.  “Please improve it by verifying the claims made and adding references.”  This statement implies that “original research” is somehow less valid than work that has been taken from another source.  Again, I doubt that’s what the Wikipedia folks mean. 

So I continued to investigate their policies and found this: “Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”  Because of this policy against publishing original thought, to add references to an article or section of an article does indeed “improve” it by making it conform more closely to Wikipedia’s standards for what makes a good article.

This policy against publishing original thought explains the rest of the qualifying statement.  My investigation into Wikipedia’s policies also turned up pages about what it means to “verify” statements in an article.  This is important because Wikipedia says that included articles must be verifiable (which is not the same as “true”), that is, users of Wikipedia must be able to find all material in Wikipedia elsewhere, in reliable, published sources.  And yes, Wikipedia explains what they mean by “reliable.”  That discussion is not easily summarized (and isn’t the point of this post) so anyone who is interested can look here.

My surprise concerning the qualifying statement boils down to wording and I think the wording of the statement needs to be changed.  Currently, it implies that original research is bad.  But through my investigation, I’ve decided that Wikipedia probably means that articles should not contain unverified, unsourced statements.  Such statements could come from author sloppiness, original research or outright fabrication.  In any case, they should not be part of Wikipedia’s articles. 

Of course, I haven’t discussed whether the policy of not publishing original thought is an appropriate policy or not.  I have mixed feelings about this.  But that’s a subject for another post.

The latest salvo in the “games good for you” vs. “games bad for you” debate has been fired.  For now, it seems that games are good for you.

Researchers at the University of Rochester chose 26 subjects who had never played action-packed first person shooter games like “Call of Duty” and “Unreal Tournament.”  Over a period of months, 13 subjects played these action games while the other 13 subjects played calmer, strategy-based games like “The Sims 2” (which is probably not really a game but that’s another post).  The researchers then tested the players’ ability to make quick decisions in a variety of situations involving visual and auditory perception.  Those who had played the action games were able to make good decisions based on the information presented 25% more quickly than those who had played the strategy games.  In addition, the action game players improved their skills at playing the games more quickly than the strategy game players. 

The theory behind this study involves the use of probabilistic inference, which is an intuitive form of the more formal tool called Bayesian inference.  Bayesian inference is used in all kinds of artificial intelligence problems to make good decisions based on evidence.  Our brains are constantly taking in visual and auditory information as we move through the world.  Using this information, we make inferences based on the probabilities of certain events occurring.  For example, when we drive, we use our perceptions to make decisions such as when to brake or make evasive movements and so on.  That is, we make inferences based on the probabilities that we are constantly calculating based on information presented to us.  People who can do this more quickly and more accurately will make better decisions than people who are slower or less accurate.
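The Bayesian update at the heart of this kind of inference is easy to state.  Here’s a minimal driving-style sketch in Python; the probabilities are illustrative numbers I made up, not figures from the study:

```python
# Bayes' rule: P(hazard | cue) = P(cue | hazard) * P(hazard) / P(cue)
# Example: seeing brake lights ahead and deciding whether a hazard is present.
# All probabilities below are illustrative, not from the Rochester study.

prior_hazard = 0.1            # base rate: a hazard is present 10% of the time
p_cue_given_hazard = 0.9      # chance of seeing brake lights given a real hazard
p_cue_given_no_hazard = 0.2   # chance of seeing brake lights with no hazard

# Total probability of seeing the cue at all
p_cue = (p_cue_given_hazard * prior_hazard
         + p_cue_given_no_hazard * (1 - prior_hazard))

# Updated belief that a hazard is present, given the cue
posterior = p_cue_given_hazard * prior_hazard / p_cue

print(round(posterior, 3))  # 0.333: one cue raises the estimate from 10% to ~33%
```

The point of the study, in these terms, is that action-game players seem to run this kind of update faster without losing accuracy.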

This latest study suggests that playing a certain type of video game can train our brains to evaluate information quickly and make accurate judgments about the appropriate action to take in a particular situation.  So it appears that game playing can be beneficial and not just a waste of time.  At least that’s the logic I used to justify playing an hour of Dr. Mario Rx today.

{September 8, 2010}   Mad Men Mixed Media

Mad Men is one of the most interesting shows on television right now.  The characters continue to reveal layers of complexity into the show’s third season.  The early 1960’s setting is rife with tension–between women and men, blacks and whites, old and young, between the staid 1950’s and the revolutionary 1960’s–that we know is going to explode any minute now.  And the story lines focused on advertising provide hints and clues as to how we became the media- and celebrity-obsessed culture that we are today.  It is fascinating to watch.

I just finished watching Season Three on DVD from Netflix.  There were a couple of episodes in this season that made me think about the ways in which the show crosses boundaries. 

First, it crosses a boundary between different media.  In particular, I think the show combines photography with television in ways that I haven’t seen before.  This is particularly appropriate since the show is about advertising in an age when photography was of paramount importance.  The episode that made me think about this was the one in which Betty’s father, Gene, dies.  The last scene of the episode shows Sally in the foreground lying on the floor watching television (so appropriate), her face illuminated by the light of the TV.  To her left, in the background, the adults sit around the kitchen table, lit by an overhead (presumably fluorescent) light, drinking cocktails and smoking cigarettes.  They’ve been telling stories and laughing about Gene, celebrating his life without dismissing their grief at his death.  Sally doesn’t like the laughing, doesn’t understand that laughter and celebration are a great way to honor someone who has just died.  The tableau of a grief-stricken Sally in the foreground and the laughing adults in the background is reminiscent of great photography.  No, it IS great photography.  The show is full of these tableaux.  It is beautiful to watch.  That is one of the things that makes this a great show.

The second interesting boundary that the show crosses is between fiction and non-fiction.  In Season Two, there was, for example, an episode in which Bert Cooper bought a painting by Mark Rothko, who, at the time, was a relative unknown.  The characters in the episode have discussions about the nature of painting in the face of the abstraction of this particular painting.  The episode gives us a glimpse into the kinds of discussions that were occurring at the time.  The discussions give the episode a sense of reality and groundedness.  But the last two episodes of Season Three are outstanding in their examination of the ways in which real life events impact the lives of these fictional characters.  And this is a spoiler alert.  If you haven’t watched these episodes of the show, skip the next paragraph.

President Kennedy is assassinated in the next to the last episode of Season Three.  The nation is shaken.  Even the Republicans are upset.  There are many tableaux in this episode.  It is beautiful to watch.  But the impact of the real-life assassination of President Kennedy on the lives of these fictional characters is moving, and, I suspect, realistic in a way that illuminates what this event meant to real people of the time.  The episode features a wedding that occurs a few days after the assassination, before Kennedy’s funeral.  It is a touching nightmare.  But it is the last episode of the season, when people have moved on but the impact of the assassination is still being felt, that moved me most.  In this last episode of Season Three, a number of characters have been moved to make major changes in their lives, at least in part because of this major event on the national stage.  Don makes a pitch to Peggy for her to join him in his new ad agency.  She is resisting in uncharacteristic ways, in ways that we, the audience, celebrate.  She wants to know why he wants her.  He tells her that, unlike most people, she sees Kennedy’s assassination in a way that is real.  She sees that in this huge tragedy, people have lost their identities, a sense of themselves.  The tragedy has made them question who they are, who they thought they were.  As someone who has had a terrible thing happen recently (even if it was my “choice”), I understood this.  I recognized this as true.  As “truth.”  Tragic events make you question who you think you are.  This was illuminated for me by this last episode of Season Three of Mad Men.  Isn’t that the definition of great fiction?

When dramatic events occur, people question who they are.  And this episode of Mad Men made me remember this or maybe made me realize this for the first time.  This crossing of the boundary between fiction and non-fiction illuminated for me a truth that helped me understand my actual life.  It helped me understand who I am, why I feel the things I feel.  What more could I ask of a TV show?

{July 15, 2010}   New Ways of Thinking

A year or two ago, in one of my classes at the InterLakes Senior Center, a man asked me how he could get a copy of the information he finds on web sites.  I explained to him how to add the site to his list of favorites so that he could come back to it later.  When I finished with this explanation, he asked me how to put that in his file cabinet.  Only at this point did I realize that he was asking me how to print the contents of a web site so that he could put a piece of paper in his physical file cabinet.  I tried to explain why people don’t do that but instead just save electronic links to electronic material.  He remained unconvinced so I explained to him how to print a web page.  I suspect he now has a file cabinet full of paper taken from the web, badly formatted and rarely read.

I guess I thought that because I’ve been involved in software development and online culture for as long as I can remember, I would be immune to the difficulty that one encounters when faced with new technology and the new ways in which it sometimes requires you to think.  I’ve adopted and adapted to all kinds of new technology in my many years of studying, creating and working with various types of software and hardware systems. 

And yet today I found myself in conflict with Flickr and the way it presents information.  I went to England and France for a few weeks and now want to share photos from the trip with my friends and family.  I’ve done this before and struggled with Flickr but figured that since it’s two years later, surely the problems I had must be fixed by now.  But they aren’t.  They’re still there.  A huge part of me thinks the problem is with Flickr, that the creators and managers of Flickr are wrong in the way they’re thinking about things.  But then I remembered the guy from my class and thought that maybe the problem is me and my thinking.  Maybe I’m just wanting to create a file cabinet full of paper in a world where file cabinets full of paper are unnecessary.

I think the problem, where I come into conflict with Flickr, is the “photostream.”  This is the main page where my photos will be displayed.  The underlying notion of the photostream is that immediacy is of the utmost importance.  That is, whatever has happened most recently is what is most important.  So when I upload my photos, the ones taken most recently are displayed first by default.  This means that if I upload the images from a trip, those from the end of the trip appear first.  Of course, when I upload images from a trip, I want to tell a story to my viewers, the story of my trip.  This means that I want the images to be displayed in the reverse order from the default in the photostream.  But there is no way to change the order of the images in the photostream. 

The solution appears to be in Flickr’s use of “sets.”  I can put the photos into a set and then order the pictures so that they tell the story of my trip.  This is easy to do and works very well.  The problem is that there is no way to get the sets displayed in place of my photostream.  Instead, the set sits off to the side and the visitor has to click on it to view it.  But when the visitor clicks on the set, the images are displayed as thumbnails by default and the visitor must then click a tab called “detail” in order to see the images in a size that can be easily viewed.  Most people don’t know this and so even if I give them a link directly to the set, they will not be able to easily view the images.  Another problem I have with Flickr is that when I look at full size images, there is no easy way to go to the next full size image.  There is no “next” button.  All of these problems lead me to believe that the folks at Flickr do not think of viewing photos as a linear, possibly narrative, process.  Instead, like much of Web 2.0, whatever has happened most recently is most important.  And whatever has happened most recently is unconnected (narratively) to whatever happened right before that.
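The conflict between the two orderings is easy to state in code.  Here’s a minimal sketch in Python using hypothetical photo records (this is not Flickr’s API, just an illustration of the two sort orders):

```python
# Two orderings of the same photos: the "photostream" view (newest first)
# versus the narrative view (oldest first, i.e., the order of the trip).
# The photo records below are hypothetical.
from datetime import datetime

photos = [
    {"title": "Eiffel Tower", "taken": datetime(2010, 6, 28)},
    {"title": "Heathrow arrival", "taken": datetime(2010, 6, 14)},
    {"title": "Stonehenge", "taken": datetime(2010, 6, 17)},
]

# What Flickr's photostream shows: most recent photo first
photostream = sorted(photos, key=lambda p: p["taken"], reverse=True)

# What a trip narrative needs: earliest photo first
narrative = sorted(photos, key=lambda p: p["taken"])

print([p["title"] for p in narrative])
# ['Heathrow arrival', 'Stonehenge', 'Eiffel Tower']
```

The fix is one `reverse=True` flag away, which is what makes the photostream’s rigidity so frustrating: the design choice, not the technology, is what rules out telling the story in order.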

My first thought is that this is a mistake on Flickr’s part.  But perhaps I’m the one who is mistaken?  Maybe I’m thinking of time-based linearity in a world that has moved past such ways to organize experience?  Is linear narrative akin to the file cabinet?  From my perspective, it’s difficult to believe this is true.  It seems illogical.  But my student in the senior citizen class thought his way of thinking about things was perfectly logical too.  How can I tell?  I can rationalize the need for linear narrative but is it just a rationalization that I use to try to preserve a way of thinking that is no longer necessary?
