Desert of My Real Life

{August 9, 2013}   When is Failure Really Failure?

If you are involved in higher education in any way, you have heard about Massive Open Online Courses (MOOCs). I first heard of them back in the Fall of 2011 when I was one of 160,000 students to sign up for an online class in Artificial Intelligence (AI). I have a PhD in Computer Science and my area of specialization was Machine Learning, a branch of AI. I have taught AI at the undergraduate level. So I wasn’t signing up for the course because I wanted to learn the content. Instead, I wanted to understand the delivery method of the course. In particular, I wanted to figure out how two preeminent AI researchers, Sebastian Thrun and Peter Norvig, would teach a class to over 150,000 students. I spent a couple of weeks working through online video lectures and problem sets, commenting on my fellow students’ answers and reading their comments on my answers. I stopped participating after a few weeks as other responsibilities ate up my time. It isn’t clear to me how many other people signed up for the class for reasons similar to mine. 23,000 people finished the course and received a certificate of completion. Based on the “success” of this first offering, Thrun left his tenured faculty position at Stanford University and founded a start-up company called Udacity.

There has been a lot of hype about MOOCs in general and Udacity in particular. It’s interesting to me that many of these MOOCs seem to employ pedagogical techniques that are highly criticized in face-to-face classrooms. In this advertisement, for example, Udacity makes a big deal about the prestige and reputations of the people involved in talking at students about various topics. Want to build a blog? Listen to Reddit co-founder Steve Huffman talk about building blogs. In other words, these classes rely heavily on video lectures. The lecture format for face-to-face classrooms, for example, is much maligned as being ineffective for student learning and mastery of course content. Why, then, do we think online courses which use video lectures (from people who have little training in how to effectively structure a lecture) will be effective? The ad also makes a big deal about the fact that the average length of their video lectures is 1 minute. Is there any evidence that shorter lectures are more effective? It depends on what else students are asked to do. The ad makes bold claims about the interactivity, the hands-on nature of these courses. But how interactivity is implemented is unclear from the ad.

Several people have written thoughtful reviews of Udacity courses based on participating in those courses. Robert Talbert, for example, wrote about his mostly positive experiences in an introductory programming class in The Chronicle of Higher Education. Interestingly, his list of positive pedagogical elements looks like a list of game elements. The course has clear goals, both short- and long-term. There is immediate feedback on student learning as students complete frequent quizzes which are graded immediately by scripts. There is a balance between challenge presented and ability level so that as the student becomes more proficient as a programmer, the challenges presented become appropriately more difficult. And the participants in the course feel a sense of control over their activities. This is classic gamification and should result in motivated students.

So why are so many participants having trouble with these courses now? Earlier this year, Udacity set up a partnership with San Jose State to offer courses for credit for a nominal fee (typically $150 per course). After just two semesters, San Jose State put the project on hold earlier this week because of alarmingly high failure rates. The courses were offered to a mix of students, some of whom were San Jose State students and some of whom were not. The failure rate for San Jose State students was between 49 and 71 percent. The failure rate for non-San Jose State students was between 55 and 88 percent. Certainly, in a face-to-face class or in a non-MOOC class, such high failure rates would cause us to at least entertain the possibility that there was something wrong with the course itself. And so it makes sense that San Jose State wants to take a closer look at what’s going on in these courses.

One article about the Udacity/San Jose State project made me laugh because of its lack of logic: this one from Forbes magazine. The title of the article is “Udacity’s High Failure Rate at San Jose State Might Not Be a Sign of Failure.” Huh? What the author means is that a bunch of students failing a class doesn’t mean there’s something wrong with the class itself. He believes that the purpose of higher education is to sort people into two categories: those smart enough to get a degree and those not smart enough. So, his logic goes, those who failed this class are simply not smart enough to get a degree. I would argue with his understanding of the purpose of higher education but let’s grant him his premise. What proof do we have from this set of MOOCs that they accurately sort people into these two categories? Absolutely none. So it still makes sense to me that San Jose State would want to investigate the MOOCs and what happened with them. Technology is rarely a panacea for our problems. I think the MOOC experiment is likely to teach us some interesting things about effective online teaching. But I doubt MOOCs are going to fix what ails higher education.

{July 31, 2013}   Whistle-blowers

Two whistle-blowers are in the news today: Bradley Manning and Edward Snowden. Manning is the Army soldier who was convicted yesterday of 17 of the 22 counts against him. He leaked top secret documents to Wikileaks and was convicted of espionage and theft although found innocent of aiding the enemy. He is now awaiting sentencing. Edward Snowden is the contractor working for the National Security Agency who revealed details of several surveillance programs to the press. He is currently on the run from charges of espionage and theft but is continuing to make headlines with further revelations. Some see these two as heroes and others see them as traitors. I think history will judge which they are. What interests me most are the ways these two cases are being discussed.

We already know that Bradley Manning has been found guilty of most of the charges against him. The prosecutor in the case has said that Manning is not a whistle-blower but is instead a traitor looking for attention via a high-profile leak to Wikileaks. Manning’s defense attorney countered by saying that Manning is naive and well-intentioned and wants to inform the American public. “His motive was to spark reform – to spark change.” Why is his motive important? Since when is intent important in determining whether someone committed a crime or not? Next time I get stopped for a traffic infraction, I’m going to try saying “I didn’t intend to break the law” to the officer. What do you think my chances of getting off will be? I also find it interesting that the prosecutor seems to think that Manning is not a whistle-blower because he believes that Manning wanted attention. A whistle-blower is “a person who exposes misconduct, alleged dishonest or illegal activity occurring in an organization.” Manning might not be a whistle-blower because the activity he revealed was not misconduct, was not dishonest or illegal. But to argue that he’s not a whistle-blower because he didn’t have the proper intentions seems to lead us as a society down a dangerous path. Of course, the Zimmerman verdict might have already sent us down that path.

The Snowden situation is more recent than the Manning case so we don’t know what Snowden will be found guilty of. He’s accused of disclosing details about some secret surveillance programs being conducted by the National Security Agency (NSA) in the United States. The NSA is supposed to gather information about foreign entities strictly outside of US boundaries. Edward Snowden revealed the existence of several NSA surveillance programs focused on domestic as well as foreign communications. He then fled the country with several laptops “that enable him to gain access to some of the US government’s most highly classified secrets.” The question that interests me most about this case is how a contractor, an employee of a private company, an employee who probably should have failed his background check on the grounds that his resume contained discrepancies, was able to gain access to such secret information. “Among the questions is how a contract employee at a distant NSA satellite office was able to obtain a copy of an order from the Foreign Intelligence Surveillance Court, a highly classified document that would presumably be sealed from most employees and of little use to someone in his position.” Yes, that IS among the most important questions to answer. The NSA director, Keith Alexander, has said that the security system didn’t work as it should have to prevent someone like Snowden from gathering the sensitive information that he did. Snowden claims that he was authorized to access this information. The NSA claims that he was not authorized. Why does the NSA think it’s preferable that an unauthorized person gained access to its information?

I’m going to pause here to say that I’ve been reading a lot of speculation about how Snowden gained access to this information that he shouldn’t have had access to. There may be some people who know how he gained this access but in the dross of the Internet, the methods aren’t yet clear. From a technical standpoint, however, I find it incredibly disturbing that someone with Snowden’s computer security background (which appears to be rather mundane–he was no genius computer hacker) was able to gain access to all of this sensitive information within the agency that is supposed to be most expert in the security game. No matter what you think of Snowden and his intentions, I think you have to be concerned about the ease with which someone was able to gain access to these “secrets.” Having now read a whole bunch of information about this case, I feel it is similar to those cases in which a high school student is punished by the school’s IT staff for pointing out how weak the school’s computer security setup is. Perhaps we should be focused on the (lack of) security around this information rather than the fact that it has been disclosed.

In the Senior Seminar that I teach, we often discuss whistle-blowing. If I use the term “whistle-blowing,” my students generally feel that the person doing the disclosing is doing a service to society. If, instead, I say that the employee is revealing corporate secrets, my students generally feel that the person is betraying his/her employer. The cases of Manning and Snowden are more complex than I can easily comprehend but I guess I generally feel that shedding light on situations is better than trying to maintain security by secrecy, by obscuring the facts. In a democracy, sunshine is a good thing.

{July 23, 2013}   Unconventional Play

My aunt sent me a link to a Forbes Magazine online article about Words With Friends, Zynga‘s Scrabble rip-off. (An aside: I got sidetracked by the fact that Zynga’s business model seems based on cloning other people’s ideas for games and found that it is really difficult to protect against such clones. I’ll write about this in a future post.) The author of the Forbes article, Jeff Bercovici, quit playing Words With Friends a while ago because the game allows players to try out combinations of letters with no penalty. That is, a player can guess at high-scoring words until s/he finds one and not suffer a penalty for doing so. The rules of the Scrabble board game prohibit this by allowing an opponent to challenge a word; if that word is not found in the dictionary, the player who played it loses his/her turn. Bercovici would like Words With Friends to enforce the rules of the Scrabble board game and prohibit random guessing of words because it isn’t fun for him to play against someone who engages in that behavior.

One person’s flaw truly is another person’s feature. Bercovici is particularly annoyed that the authors of Words With Friends refuse to say that this is a “flaw” in the game but instead insist that it is a “feature,” something they designed into the game from the beginning. And Bercovici is apparently not the only one infuriated by this “flaw.” Penny Arcade calls it “The Brute Force Method.” John Hodgman calls it “Spamming the Engine.” I would call it “Playing the Game.”

Although we call Words With Friends a “clone” of Scrabble, it actually differs in a number of ways. The size of the board is different. The placement of special spaces such as Triple Word and Double Letter scores is different. The way the game starts is different–in Scrabble, the score for the first word played is always double the face value of the letters while in Words With Friends, the first word score is not doubled unless one of its letters covers a Double Word space. For Bercovici, there is something special about changing the rules so that the player doesn’t have to know about the existence of a word before playing, something that goes against the spirit of the game in a way that the other rules changes do not.

In the mid-90s, Richard Bartle published an article laying out a simple taxonomy of MUD player types. The most important point of the article for play activities other than MUDs is that players have a variety of motivations for playing particular games. In other words, not everyone is playing for the same reason or to get the same experience from the activity. For Bercovici, randomly trying letter combinations until the game accepts one violates his idea of what the game should be about. He personally would not get pleasure from playing like that and he finds it infuriating to play against others who do. He wants the game to stop his opponents from playing like that, to enforce his idea of what the conventions of the game should be. Unconventional players frustrate him to the point of giving up the game. Bercovici goes on to tell us a story in which he consistently beat a “better” tennis player by using “junk” shots. The other player was annoyed and frustrated because Bercovici wasn’t playing conventionally and was therefore difficult to beat using conventional skills and strategies. It wasn’t until Bercovici tried to develop those conventional skills and strategies himself that he understood his opponent’s frustration. Bercovici tells this story to explain why the fact that Words With Friends allows this unconventional behavior is a flaw and not a feature.

I think Bercovici should indeed stop playing any game that is causing more frustration than pleasure. But that doesn’t mean there is something wrong with the game. Texas Hold ’em is a great example of how unconventional, “junk” play can improve everyone’s game. The popularity of Texas Hold ’em exploded in 2003 when then-amateur Chris Moneymaker won the World Series of Poker. Suddenly, everyone was playing Texas Hold ’em. And just as suddenly, amateurs were beating professionals in all kinds of tournaments. Many of these amateurs violated the conventions about the “right” way to play the game. They were gambling on hands that professionals would have folded. They were making bets that made no sense given the conventional wisdom of how to play the game. Sometimes those professionals behaved very badly as they were getting beat by these unconventional amateurs because they didn’t like how the amateurs were playing. Has the influx of amateurs playing unconventionally been bad for the game? Some might say yes but I think it’s good to mix things up, to have different people playing in different ways and for different reasons.

All that said, I think Bercovici shouldn’t blame Words With Friends for not being Scrabble. Why not go play Scrabble instead? I would note that the version of Scrabble on Facebook by Electronic Arts also allows random guessing of words with no penalty. And there’s a dictionary built right into the game for the players to use.

{June 30, 2013}   Gamification and Education

A trending buzzword in today’s digital culture is gamification. According to most sources on the Internet, the term was coined in 2004 by Nick Pelling, a UK-based business consultant who promises to help manufacturers make their electronic devices more fun. Since then, the business world has jumped on the gamification bandwagon with fervor. Most definitions of the term look something like this: “Gamification is a business strategy which applies game design techniques to non-game experiences to drive user behavior.” The idea is that a business will add game elements to its interactions with consumers so that consumers will become more loyal and spend more time and money with the business.

We can see examples of gamification all over the place. Lots of apps give badges for participation and completion of various goals. Many also provide leader boards to allow users to compare their progress toward various goals against the progress of other people. Airlines and credit card companies give points that can be redeemed for various rewards. Grocery stores and drugstores give discounts on purchases to holders of loyalty cards. Businesses of all types have added simple game elements like goals, points, badges, rewards and feedback about progress to compel the consumer to continuously engage with the business.

This type of gamification is so ubiquitous (and shallow, transparent, self-serving) that a number of prominent thinkers have decried the trend. My favorite is the condemnation written by the game scholar Ian Bogost. Bogost writes, “Game developers and players have critiqued gamification on the grounds that it gets games wrong, mistaking incidental properties like points and levels for primary features like interactions with behavioral complexity.” In other words, gamification efforts focus on superficial elements of games rather than those elements of games that make games powerful, mysterious, and compelling. Those superficial elements are easy to adapt to other contexts, requiring little thought or effort, allowing the marketers “to clock out at 5pm.” The superficial elements are deployed in a way that affirms existing corporate practices, rather than offering something new and different. Bogost goes on to say, “I realize that using games earnestly would mean changing the very operation of most businesses.” It’s this last statement that most interests me. What would “using games earnestly” look like?

Since 2007, I have been teaching a class called Creating Games, which fulfills a Creative Thought general education requirement at my university. The class focuses on game design principles by engaging students in the design and development of card and board games. And because of that content, I thought it would be a natural environment to test out some ideas about gamification and its role in education. So I made a number of changes to the course starting in the Fall of 2010. I added some of the more superficial elements of games to the class to help support the gamification effort. In addition and more importantly, I added some game elements which I think start to change the very operation of the classroom. I think these deeper changes involving “behavioral complexity” are motivational for students, resulting in a more thorough learning of the content of the class.

To determine what to change about my class, I started with Greg Costikyan’s definition of a game, which he articulated in the article called I Have No Words & I Must Design. Costikyan says, “A game is a form of art in which participants, termed players, make decisions in order to manage resources through game tokens in the pursuit of a goal.” If we use this definition, then to gamify an activity, we would add players, decisions, resources to manage, game tokens and/or a goal.

I thought about whether and how to add each of these game elements to my class and decided to add a clearly articulated goal, similar to the kinds of goals that are present in typical games. I focused the goal on points, which we call Experience Points (EP). So at the start of the semester, students are told that they will need to earn 1100 EP in order to get an A in the course, 1000 EP for a B, 900 EP for a C, 800 for a D, and anything less than 800 would result in an F in the course. Students can then choose the letter grade that they want to earn and strive to achieve the appropriate number of points to do so. I added a series of levels so that students could set shorter term goals as they progressed toward the larger goal of reaching the specific grade they wanted. All students start the class at level 1 and as they earn EP, they progress through the levels. The highest level is level 15, which requires 1100 EP to achieve and corresponds to earning an A in the class. The number of points between the levels increases as the levels increase so that early in the class, students are making fairly quick progress but as they gain proficiency, they must work harder to reach the next level. So, for example, the difference between levels 1 and 2 is 30 EP while the difference between levels 14 and 15 is 100 EP. Costikyan also mentioned game tokens as a mechanism for players to monitor their status in the game. I added a weekly leader board to my class so that students would be able to determine how the number of EP they’ve earned compares to that of their classmates. These superficial elements of games were easy to add, just as Bogost suggested. I then started to think about how I might add game elements “earnestly” in a way that creates something new and different for the students.
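Since the scheme above amounts to a simple threshold lookup, it can be sketched in a few lines of Python. The grade cutoffs (1100/1000/900/800 EP) come straight from the course description; the intermediate level thresholds are hypothetical, chosen only to illustrate the widening gaps between levels (30 EP between levels 1 and 2, 100 EP between levels 14 and 15).

```python
# Grade cutoffs as described: 1100 EP = A, 1000 = B, 900 = C, 800 = D, else F.
GRADE_CUTOFFS = [(1100, "A"), (1000, "B"), (900, "C"), (800, "D")]

# Hypothetical level thresholds: everyone starts at level 1 (0 EP), and the
# gaps widen from 30 EP (levels 1 -> 2) to 100 EP (levels 14 -> 15),
# topping out at 1100 EP for level 15.
LEVEL_THRESHOLDS = [0, 30, 70, 120, 180, 250, 330, 415,
                    505, 600, 700, 800, 900, 1000, 1100]

def grade(ep):
    """Return the letter grade earned for a given EP total."""
    for cutoff, letter in GRADE_CUTOFFS:
        if ep >= cutoff:
            return letter
    return "F"

def level(ep):
    """Return the highest level whose threshold the EP total meets."""
    return sum(1 for threshold in LEVEL_THRESHOLDS if ep >= threshold)
```

With these (made-up) intermediate thresholds, a student with 950 EP would sit at level 13 with a C, and could decide whether the extra 50 EP needed for a B is worth pursuing.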

In 1987, Malone and Lepper published a study called “Making Learning Fun: A Taxonomy of Intrinsic Motivations for Learning.” Their taxonomy includes four basic kinds of motivations for game players to continue to play games and they suggest that educators think about ways to use these motivations for classroom learning. The four categories focus on challenge, control, curiosity and fantasy. Adding points, levels and a leader board all relate to the category called challenge, which involves clear goal statements, feedback on progress toward goals, short-term goals and goals of varying levels of difficulty. I then focused on the category of motivations called control.

According to Malone and Lepper, control involves players making decisions that have consequences that produce results with significant outcomes with those outcomes being uncertain. In fact, Costikyan says, “The thing that makes a game a game is the need to make decisions.” So for him, decision-making is the most important element of a game. It shouldn’t be surprising, then, that Malone and Lepper found control to be an important motivational factor in games. But in most classrooms, students make few decisions about their learning and have no control over their own activities. I decided that my gamification effort would focus on adding decision-making to the class. Therefore, no activity in the class is required. Students get to decide which activities from a large (and growing) array of activities they would like to engage in. I give them the entire list of activities (with due dates and rubrics just like in other classes) at the start of the semester and students get to decide which of the activities they would like to complete. As a student moves through the class, if she thinks of a new activity not currently on the list that she would like to work on, she can work with me to formalize the idea and it will be added to the list of possibilities for the rest of the students and will become a permanent part of the course for future offerings. The first time I taught the course, I had thought of 1350 EP worth of activities and required 1100 to earn an A. The latest offering of the course had 1800 EP worth of activities and still required 1100 to earn an A. I also have made a semantic shift in the way I talk about points in the class. In most classrooms, when a student earns a 75 on an exam worth 100 points, the top of the paper will show a -25 to signify the number of points the student lost. I never talk about points lost but rather focus on the EP that has been earned. One of the nice consequences of this flipping of the focus is that students understand that if earning 75 points on an exam does not bring them close enough to their next goal, they will have to engage in additional activity in order to earn additional points. They could also choose not to engage in any additional activity. There are significant consequences either way but the important point is that the student is in control and can make decisions about the best way to achieve his/her goal.

Student comments on my course evaluations suggest that students initially find it difficult to understand this grading system (because it is so different from what they are used to in their other classes) but once they understand it, they love it. They enjoy being able to decide whether to take an exam, for example. One way to determine whether students are learning the content of the class is to look at final course grades. Here is a comparison of a random Fall semester section of the course before I made this change to a random Fall semester section after I made the change:

Fall 2009 (before this change): A: 4, B: 6, C: 8, D: 4, F: 6

Fall 2011 (after the change): A: 18, B: 4, C: 2

On average, the students in the Fall 2011 section of the course did more work and engaged more often with the course content than did the students in the Fall 2009 section. And as I said earlier, students often think about the material independently to come up with their own assignments that are added to the course for everyone to choose from.

I wouldn’t suggest that I have changed “the very operation” of education. But I do think that an earnest focus on giving students more control over their own learning is a huge step in the right direction and moves us away from the bullshit that Bogost rightly complains about.

{June 19, 2013}   Software Controls Users

I’m often surprised that some of the most valuable lessons I learned back in the late 1980’s have not become standard practice in software development. Back then, I worked for a small software development company in Western Massachusetts called The Geary Corporation. The co-founder and owner of the company was Dave Geary, a guy I feel so fortunate to have learned so much from at a formative stage in my career. He was truly ahead of his time in the way that he viewed software development. In fact, my experience shows that he is ahead of our current time as most software developers have not caught up with his ideas even today. I’ve written about these experiences before because I can’t help but view today’s software through the lens that Dave helped me to develop. A couple of incidents recently have me thinking about Dave again.

I was talking to my mother the other day about the … With Friends games from Zynga. You know those games: Words With Friends, Scramble With Friends, Hanging With Friends, and so on. They’re rip-offs of other, more familiar games: Scrabble, Boggle, Hangman, and so on. She was saying that she stopped playing Hanging With Friends because the game displayed the words that she failed to guess in such a small font on her Kindle Fire and so quickly that she couldn’t read them. Think about that. Zynga lost a user because they failed to satisfy her need to know the words that she failed to guess. This is such a simple user interface issue. I’m sure Zynga would explain that there is a way to go back and look for those words if you are unable to read them when they flash by so quickly. But a user like my mother is not interested in extra steps like that. And frankly, why should she be? She’s playing for fun and any additional hassle is just an excuse to stop playing. The thing that surprises me about this, though, is that it would be SO easy for Zynga to fix. A little bit of interface testing with real users would have told them that the font and speed at which they displayed the correct, unguessed word was too small and too fast for a key demographic of the game.

My university is currently implementing an amazingly useful piece of software, DegreeWorks, to help us with advising students. I can’t even tell you how excited I am that we are going to be able to use this software in the near future. It is going to make my advising life so much better and I think students will be extremely happy to be able to use the software to keep track of their progress toward graduation and get advice about classes to think about taking in the future. I have been an effusive cheerleader for the move to this software. There is, however, a major annoyance in the user interface for this software. On the first screen, when selecting a student, an advisor must know that student’s ID number. If the ID number is unknown, there is no way to search by other student attributes, such as last name, without clicking on a Search button and opening another window. This might seem like a minor annoyance but my problem with this is that I NEVER know the student’s ID number. Our students rarely know their own ID number. So EVERY SINGLE time I use this software, I have to make that extra click to open that extra window. I’m so excited about the advantages that I will get by using this software that I am willing to overlook this annoyance. But it is far from minor. The developers clearly didn’t test their interface with real users to understand the work flow at a typical campus. From a technical standpoint, it is such an easy thing to fix. That’s why it is such an annoyance to me. There is absolutely no reason for this particular problem to exist in this software other than a lack of interface testing. Because the software is otherwise so useful, I will use it, mostly happily. But if it weren’t so useful otherwise, I would abandon it, just as my mother abandoned Hanging With Friends. 
When I complained about this extra click (that I will have to make EVERY time I use the software), our staff person responsible for implementation told me that eventually that extra click will become second nature. In other words, eventually I will mindlessly conform to the requirements that the technology has placed on me.

Dave Geary taught me that when you develop software, you get the actual users of that software involved early and often in the design and testing. Don’t just test it within your development group. Don’t test it with middle management. Get the actual users involved. Make sure that the software supports the work of those actual users. Don’t make them conform to the software. Make the software conform to the users. Otherwise, software that costs millions of dollars to develop is unlikely to be embraced. Dave’s philosophy was that technology is here to help us with our work and play. It should conform to us rather than forcing us to conform to it. Unfortunately, many software developers don’t have the user at the forefront of their minds as they are developing their products. The result is that we continue to allow such software to control and manipulate our behavior in ways that are arbitrary and stupid. Or we abandon software that has cost millions of dollars to develop, wasting valuable time and financial resources.

This seems like such an easy lesson from nearly thirty years ago. I really don’t understand why it continues to be a pervasive problem in the world of software.

{June 12, 2013}   A Possible Return

I have been away for an entire academic year. It was my intention this year to find time to regularly write entries about various technology and society issues. But it didn’t happen. I blame the fact that I’ve been a department chair for two years now. The increased administrative tasks that come with being chair leave me fairly mentally exhausted so that the only scholarly activities that I’ve engaged in are things that result in actual presentations or publications. That doesn’t mean that I haven’t thought about blog topics, however.

So I’m declaring it here as a way to hold myself accountable. My goal for the upcoming academic year is to write at least one blog entry per week. It feels daunting but so worthwhile since I like thinking about technology issues way more than I like doing administrative tasks. I need to make the time for this.

{September 9, 2012}   Communicating for Change

The university where I work, like most universities, is facing significant challenges on multiple fronts. To meet these challenges, we’re finding that we need to change the way we do business. The question that I’ve been pondering is how to get people on board with change, especially when that change means an increase in work load. In the last year and a half, I have been persuaded that some efforts that I had not originally supported were good changes for the University. Proponents of other efforts, however, have been unable to persuade me that the extra work required for implementation would be worth the effort. There is currently a change on the table and the proponents of the change have done a poor job of communicating the benefits of the extra work involved in making the change. So I’ve been thinking about how that group could have done better in getting the community to commit to making the suggested change. Here’s what I think you need to do to gain the support of people whose (work) lives are affected by a change you are proposing:

1. Clearly identify the problem you’re trying to solve. Make sure your stakeholders understand why the problem is a problem for them or for groups that they care about. This step is also important so that you can later determine whether the change you are proposing actually solves the problem you’ve identified. Saying “we need to do better” is not a clear articulation of a problem. What do we need to do better? Why do we need to do better? What are the negative consequences of the way we’re currently doing things? Who thinks we need to do something better? Try to figure out how not doing better negatively impacts the various stakeholders. How could their lives be better if we changed the way we’re doing things?

2. Initiate an inclusive process for generating solutions to that problem. You and your group can sit in a room and think up solutions to your now clearly identified problem but you’re all likely looking at the problem from a similar perspective. Identify other groups to explain the problem to and ask them to generate some solutions. Send out surveys, run focus groups, attend meetings of a variety of stakeholders. Ask for feedback in a bunch of different ways. Keep track of all of the possible solutions generated, even the ones that seem kind of crazy at first.

3. For each solution, identify pros and cons and the overall impact of those pros and cons. There may be some solutions whose cons are so great that they create bigger problems than the original problem you’re trying to solve. Make sure you understand how these solutions will impact each group of stakeholders.

4. Choose the solution that solves the biggest portion of the problem but that also generates the fewest additional problems. Try to think about unwanted, unintended consequences. There’s no sense in solving a problem only to create larger, worse problems. Go back to the groups who generated your list of solutions and ask them what they think about the solution that you think is best. Ask them what the consequences will be. And don’t ignore any of the feedback you receive. You can use the feedback to anticipate objections to the solution when you propose it to the larger community.

5. Develop an implementation plan that acknowledges the difficulties with implementing any significant change. Be sure to weigh whether those difficulties are worth the effort given the original problem that you are trying to solve.

6. Share the entire process that you’ve gone through to develop a solution with the people who will be affected by the change. Listen to their feedback and try to address as many of their concerns as possible, either by making them go away (by changing the solution or the implementation) or by acknowledging the concern while explaining why the solution will improve their overall lives enough that the concern is dwarfed by the relief of having solved the original problem.

7. Although you will never be able to please everyone, only implement solutions that actually solve the problem identified. If you can’t articulate how the solution solves the problem in a way that gets people to understand what you’re doing and why, perhaps the solution is not a good one.

The group that is currently proposing a change has not done any of these steps. They have proposed a solution to a problem that they have not clearly articulated. The solution was generated by their group alone and when they brought the idea to another group that I’m a part of, they got feedback that the proposed change had a lot of problems, including some probable unintended, unwanted consequences. But they have since ignored that feedback and told us that they are implementing the change anyway, without even acknowledging that they got any negative feedback at all.

I’m hoping that I can use my better understanding of what I think should happen for buy-in to occur to explain to the group why what they’re doing is problematic, so that they’ll go back to the drawing board and reexamine the issue. And I hope I can keep this lesson in mind the next time I’m part of a group that wants to initiate change.

The 2012 Summer Olympics are nearly over. I haven’t watched them much, mostly because I can’t stand the way they are covered by NBC and its affiliates, especially in prime time, when I’m most likely to be watching. I don’t think this video aired on national television but it sums up NBC’s attitude about the Olympics–it’s only marginally about the sports and performances. The main focus is on disembodied female athlete body parts moving in slow motion, sometimes during the execution of an athletic move but often just as the athlete moves around the playing area. It’s soft core porn. Interestingly, I watched the video earlier today on the NBC Olympics page but now it’s gone. I guess someone at NBC came to their senses and realized that it’s inappropriate to focus on female Olympians’ bodies without emphasizing their athleticism. But anyway, sexism in the coverage isn’t what I was planning to write about tonight.

I wish NBC would focus more on the performances of the athletes. An athletic performance can be interesting and amazing even if the athlete hasn’t overcome significant life difficulties to be an Olympic athlete. Each of those athletes, even the ones who have had fairly mundane lives outside of their athletic pursuits, has overcome incredible odds to make it to the Olympics at all. For every athlete that makes it to the Olympics, there are probably thousands of others who tried and didn’t make it.

That said, one athlete that caught my attention for overcoming incredible odds to make it to the Olympics is Oscar Pistorius. He is the sprinter from South Africa who has a double below-the-knee amputation but who has now competed in the Olympics using prostheses, earning him the nickname “The Blade Runner.” His participation in the Olympics has been controversial. Some have claimed that the prostheses he uses give him an advantage over other athletes and, as a result, in 2008, the IAAF banned their use, which meant that Pistorius would not be able to compete with able-bodied athletes. Although the ban was overturned that same year in time for Pistorius to participate in the 2008 Summer Olympics, he failed to qualify for the South African team. But this year, he was on that team and ran in both the 400 meter individual race and the 400 meter relay. I saw his heat in the 400 meter individual race and although he came in last, it was an inspirational moment.

Pistorius’ historic run reminded me that over time science fiction often becomes science fact. Remember The Bionic Woman? I loved that show when I was about 13 years old. Jaime Sommers was beautiful, brave and bionic. She nearly died in a skydiving accident but she was lucky to be the girlfriend of Steve Austin, aka The Six Million Dollar Man, who had had his own life-threatening accident a number of years earlier. He loved her so much that he begged his boss to save her by giving her bionic legs, a bionic arm and a bionic ear to replace her damaged parts. Unlike Pistorius’ legs, Jaime’s clearly were “better” than human legs, allowing her to run more than 60mph. Her bionic arm was far stronger than a human arm, allowing her to bend steel with her hand. I always loved her bionic ear, which allowed her to hear things that no human could possibly hear, but only if she pushed hair out of the way first.

Speaking of hearing, I love the story about the technology that is being used to make the Olympics sound like the Olympics to home viewers. The Olympic games have a sound designer named Dennis Baxter. He is the reason we can hear the arrow swoosh through the air in the archery competition. This is a sound that folks at the event probably can’t hear. And yet, Baxter sets up microphones so that we, the television viewing audience, can actually hear that arrow move through the air. Baxter claims that this technology makes the event seem more “real” to the viewing audience.

This raises such interesting questions about augmented reality. We can never directly experience the “real.” It will always be mediated by at least our senses. We know for a fact that our brains fill in holes in our visual perception. Our brains augment what we perceive via our senses. When we perceive an Olympic event via transmission technology (like television or the Internet), are we witnessing the “real” event? Is it still “real” when technology augments some aspect of our sensory perception, like when Baxter adds microphones to allow us to hear things we wouldn’t hear even if we were attending the event? When does technological augmentation become unreality? Where do we draw the line? And most importantly, does it matter? Do we care whether we’re experiencing something “real”?

Quite a lot of people hate “Obamacare,” which is otherwise known as the Patient Protection and Affordable Care Act. And there are indeed things to hate about the law. For example, I am a proponent of single payer health insurance and so the “individual mandate,” where people are required to purchase insurance on their own or pay a “tax” or a “fee” or whatever you want to call it, is problematic to me. I would prefer that we be completely up front about things and build the payment for health care into our tax law. Yes, I know that makes me a “socialist” but I think health care is kind of like fire fighting. Do we want to go back to the days of private fire fighters, where you had to pay up front or the fire fighters wouldn’t show up at your house? Fire fighting is something that we should all contribute to via our tax dollars and then when we need it, the service is provided. If that’s “socialism,” then yes, I am for socialized medicine.

As I said, I believe there are things to complain about and criticize in the Affordable Care Act. But it was quite surprising to me that one of my FB friends posted a link to a video claiming that the Affordable Care Act mandates that we all be implanted with RFID chips with our health information by March 23, 2013. I had not heard of this mandate, despite the fact that I have been paying pretty close attention to the debate. I would have serious problems with such a mandate but there were several things about the claim that immediately made me suspect it was a figment of someone’s imagination. If you can bear to watch the video, here’s a short version of it. But for those of you who can’t bear to watch the video, I’ll describe it.

The video begins with an advertisement from a company that makes implantable radio frequency identification (RFID) chips. These are chips that many of us already possess on our ATM cards or passports. The chips contain information of some sort that can be read with a special device that picks up the radio signals emitted by the chip on the card. There are companies that make versions of these chips that can be implanted under the skin of a human or an animal. Some pet owners may have implanted them into their dogs or cats in case the pet gets lost. In any case, the video starts with an ad for these implantable chips and then claims that the Affordable Care Act requires that everyone in the US be implanted with one of these by March 23, 2013. The evidence? The narrator reads a passage (claiming it comes from the law itself) that discusses the creation of a government database to keep track of devices that have been implanted into humans. Then the narrator reads another passage that mandates the creation of a system within hospitals and doctors’ offices that will allow medical information to be stored on and read from RFIDs. These passages say that these two systems must be in place within 36 months of the passage of the law. That’s where the narrator gets March 23, 2013–that is 36 months after the passage of the law.

The thing to notice about these passages is that they say nothing about forcing the implantation of RFID chips. A database to keep track of devices that have been implanted in humans would keep track of things like pace-makers and hip replacements and all kinds of devices that are implanted voluntarily and for the improvement of someone’s life. And we already use RFIDs to keep track of personal information, such as financial information or passport information. These RFIDs are embedded in cards that we carry around with us and the passage that the narrator reads simply suggests that we need a system that would allow medical information to be stored on RFIDs, presumably embedded in cards similar to a credit card or a passport. There is nothing about mandating the implantation of an RFID. Here’s what Snopes has to say about this particular conspiracy theory–note that their evaluation is that there is no truth to the claim.

When there are real things to criticize in this law, why would someone make up a threat such as this? I think it’s because it works. It plays on an emotional response in ways that the real issues do not. And so you get lots more people to care about what is admittedly a scary idea than you would ever get to care about the real problems with the law. So people who would probably not pay attention to the health care debate otherwise are now vehemently against the government intruding on our medical privacy in this way, despite the fact that there is no evidence that the government plans to intrude in this way. So lots of people who would actually benefit from the provisions of the Affordable Care Act are vehemently opposed to the law for reasons that have nothing to do with the reality of the law. And no amount of debunking will make these untruths go away. Just ask the American public whether the US ever found evidence that Saddam Hussein had weapons of mass destruction.

{July 1, 2012}   Email: Buried Alive

I became the chair of my department a little over a year ago and within a few months, I found myself completely overwhelmed by email. Emails started to get buried in my inbox, either read and then forgotten or never read at all. I realized that I needed to use part of the summer break from teaching to develop a new system for dealing with the volume of emails that I receive in this position.

I have been using email since the 1980s and have used the same process this entire time to deal with emails. I would keep emails in my inbox that I wanted to pay attention to for some reason (interesting content or information I might need in the future were the two major reasons) and if the email contained a task that I needed to complete in the future, I would mark it as unread. A few years ago, I started to use a system of folders for emails with interesting content or useful information. I maintained my habit of marking future task-oriented emails as unread. This system worked for years for me. Every summer, I spent a couple of hours cleaning up folders and my inbox. It was completely manageable.

As department chair, however, the number of emails that I received increased dramatically. The number of emails with interesting content, useful information or future task information also increased dramatically. But I think the thing that started to bury me is that the number of interruptions that occurred through the course of a day also increased dramatically. What that meant was that I might be in the middle of reading email when someone would come into my office and I would immediately give them my attention. If I was in the middle of reading an email, I might (and often did) forget to complete the process of dealing with the email. So emails with important task information might not get marked as unread or emails with interesting content or useful information might not get filed into the appropriate folders. Or I might forget where in the list of emails I had gotten to in my reading so that some messages were marked unread because I truly had not read them.

I soon found myself with over 2000 emails in my inbox, over 650 of which were marked as unread. A big problem with the unread messages is that I had no way of determining whether they were unread because I really hadn’t read them or because they contained important future task-related information. I was using that category for two very different purposes. I had no idea what those unread emails contained. Organizing my inbox began to feel like an insurmountable task. I began to have anxiety about the idea that I might actually have 650+ tasks that I needed to deal with. And we all know that we don’t work best when we feel overwhelmed and anxious. I knew I had to figure out some other way of dealing with my email.

My book club buddy and I read Time Management for Department Chairs by Christian Hansen. I attended a workshop that he presented at the Academic Chairs Conference that I attended in February in Orlando and although I found much of what he said about time management incredibly useful, I ironically didn’t have time during the Spring semester to implement very many of the ideas he presented. He has a couple of interesting things to say about managing the email deluge that I wanted to try to implement but I really needed to get my email under control first.

Here’s what I did and what I plan to do to keep things organized.

First, I needed to clean up my inbox. I began by reorganizing my folders. I did my normal summer clean up of the folders and then added a folder called “Defer” which I’ll come back to. Then I started on the inbox itself, reading the emails to determine what I was going to do with each one. I had four choices, which Hansen calls “the four D’s.” I could “delete,” “do,” “delegate,” or “defer.” I spent over 10 hours one Sunday deleting emails which needed no response from me or doing whatever task was required by an email if I could deal with it immediately. Doing whatever I needed to do sometimes meant delegating the task to someone else so I wrote a bunch of emails asking others to do things. Other times, “doing” meant answering questions. And still other times, it meant filing the email in one of my email folders. And finally, if dealing with an email required more time than I had available to me that day or required information that I didn’t currently have or required someone else to do something before I could do what I needed to do, I put it into the “Defer” folder that I mentioned earlier. I can’t explain the elation I felt when I finally had 0 emails in my inbox. What was more amazing than having 0 emails in my inbox was that I only had 9 emails in my “Defer” folder! I had been SO worried about what I wasn’t dealing with and it was such a relief to find that there were only 9 emails that I couldn’t deal with that day.
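For readers who think in code, the four D’s amount to a simple decision procedure. Here’s a minimal sketch of how I understand Hansen’s triage rules; the `Email` class, its fields, and the ten-minute threshold are all my own hypothetical illustrations, not anything from the book or any real mail program.

```python
# A sketch of the "four D's" triage. The Email class and the
# time_budget threshold are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Email:
    subject: str
    needs_response: bool = False      # does it require any action at all?
    minutes_to_handle: int = 0        # rough estimate of the work involved
    someone_else_can_do_it: bool = False

def triage(msg: Email, time_budget: int = 10) -> str:
    """Return one of the four D's for a single message."""
    if not msg.needs_response:
        return "delete"               # no response needed: get rid of it
    if msg.someone_else_can_do_it:
        return "delegate"             # write the email asking someone else
    if msg.minutes_to_handle <= time_budget:
        return "do"                   # handle it immediately
    return "defer"                    # file it in the "Defer" folder

inbox = [
    Email("Newsletter"),                                          # no action
    Email("Quick question", True, 2),                             # answer now
    Email("Schedule rooms", True, 30, someone_else_can_do_it=True),
    Email("Draft assessment plan", True, 120),                    # needs a block of time
]
print([triage(m) for m in inbox])
# → ['delete', 'do', 'delegate', 'defer']
```

The payoff of making the rules explicit is the same one I felt that Sunday: every message gets exactly one disposition, so nothing lingers in the inbox doing double duty.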

So that’s how I cleaned up my inbox. Now I have to maintain it and that means implementing a different system for email. Hansen suggests only looking at email at designated times during the day, times when you are unlikely to be interrupted. And the four D’s should be the practice every time you look at your email. I think I can manage this part of the process although it’s difficult to tell in the middle of summer when email only trickles in. The part that might be more difficult for me involves a larger picture time management strategy.

Hansen suggests that we should all abandon the daily to do list. It often leaves us in crisis mode because each day we’re only dealing with the things that HAVE to be done on that day. Instead, we should create a master to do list that contains the things that absolutely must be done by a particular day but should also contain things that we’d LIKE to do, things that are not critical but that will help us to be more productive in the long run. A great example of this kind of thing is planning. Many of us would like to develop plans for our departments (or our lives) but that kind of work always gets put on the back burner, to be done when we “have time.” Ironically, not planning often takes more time in the long run as we have to deal with things when we’re in crisis mode rather than ahead of time when we’re thinking clearly. Hansen also suggests that when we’re creating our schedules for the week or the month or the semester, we should put these kinds of tasks on the schedule and actually do them when we schedule them. What does this have to do with the “Defer” email folder? We need to regularly put time in our schedule to deal with the tasks in that folder. In fact, we need to schedule time to review the tasks that are in the folder so that we can then put the tasks on the calendar. It’s this bit that I’m worried about. I worry that there will be crises and I will be unable to resist putting off the “Defer” folder review and planning. But I’m going to really try to implement this step. I think it’s the only way the entire system will work.

One follow-up: In the 10 hours that I spent deleting and otherwise dealing with emails, I clearly didn’t read them all carefully. Just this past week, I got an email from one of the administrators at my University about a student who claimed to have sent me email a week earlier and that I had not responded to. I have no recollection of the email whatsoever but I also don’t doubt that the student sent the email and I simply deleted it unread. When I shared that story with a friend, she said that was her biggest fear in deleting emails, that she will miss something important. And although I acknowledge the risk (especially since it actually happened to me), I still think cleaning up my inbox was worth that risk. If I had not cleaned up my email, that student message would likely have remained buried in my inbox for the week and the student would have complained to the administrator anyway. So I would have had to deal with that issue either way. The difference is that I now feel pretty confident that future student emails (or other emails) will not get buried and I will no longer have this problem. In addition, my anxiety level about my emails is currently at zero which I think makes me more productive. That alone is worth the effort.

I’m curious about how other people deal with the email deluge.
