Desert of My Real Life

{September 22, 2013}   Social Media Round Up

Now that the craziness of the start of the semester has begun to slow down, I thought I’d do a quick round-up of the social media topics I’ve been thinking about over the last few weeks but haven’t yet found the time to write about.

A few weeks ago, Twitter updated its rules to make clear that abuse would not be tolerated. The events that prompted the update included specific bomb threats and rape threats sent to women journalists and politicians. Many commenters on the articles covering this story argued that it was improper for Tony Wang, the head of Twitter UK, to apologize to the women in question. Others suggested that it’s pointless to try to police these kinds of threats because doing so won’t make a difference. Still others suggested that unless someone breaks the law, Twitter should not “censor” tweets. My main response to these comments is that making direct, specific threats against a particular individual is in fact against the law. It doesn’t seem like a terrible thing to me that Wang chose to apologize to individuals who had crimes committed against them using his product. In fact, that seems like good business sense. And I agree that rules alone won’t change the tone of discourse on Twitter; there has to be enforcement of those rules as well. So I hope Twitter will follow through on its promises to make reporting abuse easier and to hire more people to deal with such reports so that they can be handled more quickly. Twitter didn’t handle this issue particularly well, in my opinion, but it is taking some first steps to fix the problem.

I use a variety of social networking sites at varying levels of activity. For example, I’m pretty active on Facebook, regularly posting status updates, photos and links to stories that I think my friends will find interesting. I am far less active on LinkedIn, although I have many contacts in my network, mostly current and former students who use the network professionally. I try to keep up with the various networks that are available, so I recently decided to check out Google+. I’ve been using Google Calendar and Gmail for years, so setting up a profile and getting started with Google+ felt like a natural step. So far I’ve found that it is much more like Facebook than like LinkedIn, but with a bit of Twitter thrown in. It’s like Facebook in that you have a stream very much like Facebook’s newsfeed: you share status updates, photos and so on, and you can even “like” posts by others (called +1 in Google+). But like Twitter, Google+ has an option that allows you to follow people and organizations. In Facebook, friendships are bidirectional: both parties must agree to the relationship. In Twitter, you can follow someone to see their public tweets and they do not have to follow you back; a relationship requires only a uni-directional connection. Google+ also requires only this uni-directional connection. So in Google+, we get the sharing features of Facebook combined with the relationship features of Twitter. But Google+ also offers another feature that I think is pretty cool. One of the problems with Facebook is that all friends are treated equally on the network even if they aren’t equal in real life, which has caused problems for lots of people. Google+ instead allows the user to create different “circles” of connections, which makes it easy to manage what the people in a particular circle will see, just like in “real life.” Another interesting aspect of Google+ is the “hangout” concept, although I haven’t played with joining or creating hangouts yet. Perhaps that will be the subject of a future post. The main problem with Google+, however, is that so few of the people I care about are using it. That’s the draw of Facebook: many of the people I care about in “real life” are posting really interesting (and not so interesting) things there, so I keep going back. Until more people migrate to Google+ in a meaningful way, I probably won’t participate much myself. Google faces a classic chicken-and-egg problem here.
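For the programmers reading along, the structural difference between the two relationship models is easy to see in code. Here’s a toy sketch in Python; the names and data structures are mine and obviously bear no resemblance to how Facebook or Google actually implement any of this:

```python
# A sketch of the two relationship models as I understand them:
# a Facebook friendship is one mutual edge; a Google+ circle is a
# one-way edge that the other person never has to reciprocate.

facebook_friends = set()   # unordered pairs: both parties agreed
googleplus_circles = {}    # owner -> {circle name -> set of people}

def befriend(a, b):
    facebook_friends.add(frozenset((a, b)))   # one edge, both directions

def add_to_circle(owner, circle, person):
    # one-way: 'person' need not follow back, and never sees the circle name
    googleplus_circles.setdefault(owner, {}).setdefault(circle, set()).add(person)

def can_see_post(author, viewer, circle=None):
    """Mutual friends see Facebook posts; only circle members see circled posts."""
    if circle is None:
        return frozenset((author, viewer)) in facebook_friends
    return viewer in googleplus_circles.get(author, {}).get(circle, set())

befriend("me", "alice")
add_to_circle("me", "colleagues", "bob")    # bob doesn't have to add me back

print(can_see_post("me", "alice"))                      # True: mutual friends
print(can_see_post("me", "bob", circle="colleagues"))   # True: in the circle
print(can_see_post("me", "alice", circle="colleagues")) # False: not in that circle
```

The point is just that a friendship is a single mutual edge while a circle is a one-way edge, which is how Google+ can offer Twitter-style following and Facebook-style sharing at the same time.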

I regularly check out new social media tools, just to see what they’re about. Some become part of my repertoire (Tumblr, Flickr) while others do not or, at least, haven’t yet (Klout, Medium). One tool that intrigued me when I first looked at it but then kind of disappointed me was Storify. It’s a tool designed to let people curate social media artifacts to tell a story. I wrote one story ten months ago and then forgot about it. As I was thinking about what I wanted to include in this round-up of my social media activity, I remembered that story and went back to see what had been going on in that corner of the social media world. I was surprised to find that my story had 56 views. That may not sound like much activity for ten months, but I had done nothing to bring attention to the story and none of my friends (as far as I know) are members of that community. I have no idea how many people read each of these blog posts, but I’m guessing it is far fewer than 56. So Storify is back on my radar, although I’m not sure how I might use it yet.

It’s difficult to keep up with what’s going on in the world of social media. I would like a tool that helps me keep up with what’s available and puts it all together in a way that makes sense.



{June 19, 2013}   Software Controls Users

I’m often surprised that some of the most valuable lessons I learned back in the late 1980’s have not become standard practice in software development. Back then, I worked for a small software development company in Western Massachusetts called The Geary Corporation. The co-founder and owner of the company was Dave Geary, a guy I feel so fortunate to have learned so much from at a formative stage in my career. He was truly ahead of his time in the way he viewed software development. In fact, my experience suggests he is ahead of our current time as well, since most software developers have not caught up with his ideas even today. I’ve written about these experiences before because I can’t help but view today’s software through the lens that Dave helped me develop. A couple of recent incidents have me thinking about Dave again.

I was talking to my mother the other day about the … With Friends games from Zynga. You know those games: Words With Friends, Scramble With Friends, Hanging With Friends, and so on. They’re rip-offs of other, more familiar games: Scrabble, Boggle, Hangman, and so on. She was saying that she stopped playing Hanging With Friends because the game displayed the words that she failed to guess in such a small font on her Kindle Fire, and so quickly, that she couldn’t read them. Think about that. Zynga lost a user because they failed to satisfy her need to know the words she had failed to guess. This is such a simple user interface issue. I’m sure Zynga would explain that there is a way to go back and look for those words if you are unable to read them when they flash by so quickly. But a user like my mother is not interested in extra steps like that. And frankly, why should she be? She’s playing for fun, and any additional hassle is just an excuse to stop playing. The thing that surprises me, though, is that it would be SO easy for Zynga to fix. A little bit of interface testing with real users would have told them that the font in which they displayed the correct, unguessed word was too small, and the speed at which they displayed it too fast, for a key demographic of the game.

My university is currently implementing an amazingly useful piece of software, DegreeWorks, to help us with advising students. I can’t even tell you how excited I am that we are going to be able to use this software in the near future. It is going to make my advising life so much better, and I think students will be extremely happy to be able to use the software to keep track of their progress toward graduation and to get advice about classes to consider taking in the future. I have been an effusive cheerleader for the move to this software. There is, however, a major annoyance in its user interface. On the first screen, when selecting a student, an advisor must know that student’s ID number. If the ID number is unknown, there is no way to search by other student attributes, such as last name, without clicking on a Search button and opening another window. This might seem like a minor annoyance, but the problem is that I NEVER know the student’s ID number. Our students rarely know their own ID numbers. So EVERY SINGLE time I use this software, I have to make that extra click to open that extra window. I’m so excited about the advantages I will get from this software that I am willing to overlook the annoyance. But it is far from minor. The developers clearly didn’t test their interface with real users to understand the workflow at a typical campus. From a technical standpoint, it is such an easy thing to fix. That’s why it is such an annoyance to me. There is absolutely no reason for this particular problem to exist in this software other than a lack of interface testing. Because the software is otherwise so useful, I will use it, mostly happily. But if it weren’t so useful otherwise, I would abandon it, just as my mother abandoned Hanging With Friends. When I complained about this extra click (which I will have to make EVERY time I use the software), our staff person responsible for implementation told me that eventually the extra click will become second nature. In other words, eventually I will mindlessly conform to the requirements that the technology has placed on me.
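To show just how small a fix I’m asking for, here’s a hypothetical sketch of a single lookup box that accepts either an ID number or a last name. Every name in it is invented; this is what I wish the first screen did, not anything from DegreeWorks’ actual code:

```python
# Hypothetical sketch: one search field that accepts either a student ID
# or a last name, so the advisor never needs an extra click or window.

def lookup_students(query, roster):
    """Return matching students whether the query is an ID or a name."""
    query = query.strip()
    if query.isdigit():   # looks like an ID number
        return [s for s in roster if s["id"] == query]
    # otherwise fall back to a case-insensitive last-name match
    return [s for s in roster if s["last_name"].lower() == query.lower()]

roster = [
    {"id": "001234", "last_name": "Smith", "first_name": "Jane"},
    {"id": "005678", "last_name": "LeBlanc", "first_name": "Marc"},
]

print(lookup_students("001234", roster))    # advisor happens to know the ID
print(lookup_students("leblanc", roster))   # the far more common case
```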

Dave Geary taught me that when you develop software, you get the actual users of that software involved early and often in the design and testing. Don’t just test it within your development group. Don’t test it with middle management. Get the actual users involved. Make sure that the software supports the work of those actual users. Don’t make them conform to the software. Make the software conform to the users. Otherwise, software that costs millions of dollars to develop is unlikely to be embraced. Dave’s philosophy was that technology is here to help us with our work and play. It should conform to us rather than forcing us to conform to it. Unfortunately, many software developers don’t have the user at the forefront of their minds as they develop their products. The result is that we continue to allow such software to control and manipulate our behavior in ways that are arbitrary and stupid. Or we abandon software that cost millions of dollars to develop, wasting valuable time and financial resources.

This seems like such an easy lesson from nearly thirty years ago. I really don’t understand why it continues to be a pervasive problem in the world of software.



The 2012 Summer Olympics are nearly over. I haven’t watched them much, mostly because I can’t stand the way they are covered by NBC and its affiliates, especially in prime time, when I’m most likely to be watching. I don’t think this video aired on national television, but it sums up NBC’s attitude about the Olympics: the coverage is only marginally about the sports and performances. The main focus is on disembodied female athlete body parts moving in slow motion, sometimes during the execution of an athletic move but often just as the athlete moves around the playing area. It’s soft-core porn. Interestingly, I watched the video earlier today on the NBC Olympics page but now it’s gone. I guess someone at NBC came to their senses and realized that it’s inappropriate to focus on female Olympians’ bodies without emphasizing their athleticism. But anyway, sexism in the coverage isn’t what I was planning to write about tonight.

I wish NBC would focus more on the performances of the athletes. An athletic performance can be interesting and amazing even if the athlete hasn’t overcome significant life difficulties to get there. Each of those athletes, even the ones who have had fairly mundane lives outside of their athletic pursuits, has overcome incredible odds to make it to the Olympics at all. For every athlete who makes it to the Olympics, there are probably thousands of others who tried and didn’t make it.

That said, one athlete who caught my attention for overcoming incredible odds to make it to the Olympics is Oscar Pistorius. He is the sprinter from South Africa who has a double below-the-knee amputation but who has now competed in the Olympics using prostheses, earning him the nickname “The Blade Runner.” His participation in the Olympics has been controversial. Some have claimed that the prostheses he uses give him an advantage over other athletes and, as a result, in 2008 the IAAF banned their use, which meant that Pistorius would not be able to compete with able-bodied athletes. Although the ban was overturned that same year, in time for Pistorius to try out for the 2008 Summer Olympics, he failed to qualify for the South African team. But this year, he was on that team and ran in both the 400 meter individual race and the 4×400 meter relay. I saw his heat in the 400 meter individual race and although he came in last, it was an inspirational moment.

Pistorius’ historic run reminded me that over time science fiction often becomes science fact. Remember The Bionic Woman? I loved that show when I was about 13 years old. Jaime Sommers was beautiful, brave and bionic. She nearly died in a skydiving accident but she was lucky to be the girlfriend of Steve Austin, aka The Six Million Dollar Man, who had had his own life-threatening accident a number of years earlier. He loved her so much that he begged his boss to save her by giving her bionic legs, a bionic arm and a bionic ear to replace her damaged parts. Unlike Pistorius’ legs, Jaime’s clearly were “better” than human legs, allowing her to run more than 60 mph. Her bionic arm was far stronger than a human arm, allowing her to bend steel with her hand. I always loved her bionic ear, which allowed her to hear things that no human could possibly hear, but only if she pushed her hair out of the way first.

Speaking of hearing, I love the story about the technology that is being used to make the Olympics sound like the Olympics to home viewers. The Olympic games have a sound designer named Dennis Baxter. He is the reason we can hear the arrow swoosh through the air in the archery competition. This is a sound that folks at the event probably can’t hear. And yet, Baxter sets up microphones so that we, the television viewing audience, can actually hear that arrow move through the air. Baxter claims that this technology makes the event seem more “real” to the viewing audience.

This raises such interesting questions about augmented reality. We can never directly experience the “real.” It will always be mediated by at least our senses. We know for a fact that our brains fill in holes in our visual perception. Our brains augment what we perceive via our senses. When we perceive an Olympic event via transmission technology (like television or the Internet), are we witnessing the “real” event? Is it still “real” when technology augments some aspect of our sensory perception, like when Baxter adds microphones to allow us to hear things we wouldn’t hear even if we were attending the event? When does technological augmentation become unreality? Where do we draw the line? And most importantly, does it matter? Do we care whether we’re experiencing something “real”?



Quite a lot of people hate “Obamacare,” which is otherwise known as the Patient Protection and Affordable Care Act. And there are indeed things to hate about the law. For example, I am a proponent of single-payer health insurance, so the “individual mandate,” where people are required to purchase insurance on their own or pay a “tax” or a “fee” or whatever you want to call it, is problematic to me. I would prefer that we be completely up front about things and build the payment for health care into our tax law. Yes, I know that makes me a “socialist,” but I think health care is kind of like fire fighting. Do we want to go back to the days of private fire fighters, where you had to pay up front or the fire fighters wouldn’t show up at your house? Fire fighting is something that we should all contribute to via our tax dollars so that the service is provided when we need it. If that’s “socialism,” then yes, I am for socialized medicine.

As I said, I believe there are things to complain about and criticize in the Affordable Care Act. But it was quite surprising to me that one of my FB friends posted a link to a video claiming that the Affordable Care Act mandates that we all be implanted with RFID chips containing our health information by March 23, 2013. I had not heard of this mandate, despite the fact that I have been paying pretty close attention to the debate. I would have serious problems with such a mandate, but there were several things about the claim that immediately made me suspect it was a figment of someone’s imagination. If you can bear to watch the video, here’s a short version of it. But for those of you who can’t bear to watch, I’ll describe it.

The video begins with an advertisement from a company that makes implantable radio frequency identification (RFID) chips. These are chips that many of us already possess on our ATM cards or passports. The chips contain information of some sort that can be read with a special device that picks up the radio signals emitted by the chip on the card. There are companies that make versions of these chips that can be implanted under the skin of a human or an animal. Some pet owners may have implanted them into their dogs or cats in case the pet gets lost. In any case, the video starts with an ad for these implantable chips and then claims that the Affordable Care Act requires that everyone in the US be implanted with one of these by March 23, 2013. The evidence? The narrator reads a passage (claiming it comes from the law itself) that discusses the creation of a government database to keep track of devices that have been implanted into humans. Then the narrator reads another passage that mandates the creation of a system within hospitals and doctors’ offices that will allow medical information to be stored on and read from RFIDs. These passages say that these two systems must be in place within 36 months of the passage of the law. That’s where the narrator gets March 23, 2013: the law was signed on March 23, 2010, and 36 months later is March 23, 2013.

The thing to notice about these passages is that they say nothing about forcing the implantation of RFID chips. A database to keep track of devices that have been implanted in humans would keep track of things like pace-makers and hip replacements and all kinds of devices that are implanted voluntarily and for the improvement of someone’s life. And we already use RFIDs to keep track of personal information, such as financial information or passport information. These RFIDs are embedded in cards that we carry around with us and the passage that the narrator reads simply suggests that we need a system that would allow medical information to be stored on RFIDs, presumably embedded in cards similar to a credit card or a passport. There is nothing about mandating the implantation of an RFID. Here’s what Snopes has to say about this particular conspiracy theory–note that their evaluation is that there is no truth to the claim.

When there are real things to criticize in this law, why would someone make up a threat such as this? I think it’s because it works. It plays on an emotional response in ways that the real issues do not. And so you get lots more people to care about what is admittedly a scary idea than you would ever get to care about the real problems with the law. So people who would probably not pay attention to the health care debate otherwise are now vehemently against the government intruding on our medical privacy in this way, despite the fact that there is no evidence that the government plans to intrude in this way. So lots of people who would actually benefit from the provisions of the Affordable Care Act are vehemently opposed to the law for reasons that have nothing to do with the reality of the law. And no amount of debunking will make these untruths go away. Just ask the American public whether the US ever found evidence that Saddam Hussein had weapons of mass destruction.



{July 1, 2012}   Email: Buried Alive

I became the chair of my department a little over a year ago and within a few months, I found myself completely overwhelmed by email. Emails started to get buried in my inbox, either read and then forgotten or never read at all. I realized that I needed to use part of the summer break from teaching to develop a new system for dealing with the volume of emails that I receive in this position.

I have been using email since the 1980’s and have used the same process this entire time to deal with emails. I would keep emails in my inbox that I wanted to pay attention to for some reason (interesting content or information I might need in the future were the two major reasons) and if the email contained a task that I needed to complete in the future, I would mark it as unread. A few years ago, I started to use a system of folders for emails with interesting content or useful information. I maintained my habit of marking future task-oriented emails as unread. This system worked for years for me. Every summer, I spent a couple of hours cleaning up folders and my inbox. It was completely manageable.

As department chair, however, the number of emails that I received increased dramatically. The number of emails with interesting content, useful information or future task information also increased dramatically. But I think the thing that started to bury me is that the number of interruptions that occurred through the course of a day also increased dramatically. What that meant was that I might be in the middle of reading email when someone would come into my office and I would immediately give them my attention. If I was in the middle of reading an email, I might (and often did) forget to complete the process of dealing with the email. So emails with important task information might not get marked as unread or emails with interesting content or useful information might not get filed into the appropriate folders. Or I might forget where in the list of emails I had gotten to in my reading so that some messages were marked unread because I truly had not read them.

I soon found myself with over 2000 emails in my inbox, over 650 of which were marked as unread. A big problem with the unread messages is that I had no way of determining whether they were unread because I really hadn’t read them or because they contained important future task-related information. I was using that category for two very different purposes. I had no idea what those unread emails contained. Organizing my inbox began to feel like an insurmountable task. I began to have anxiety about the idea that I might actually have 650+ tasks that I needed to deal with. And we all know that we don’t work best when we feel overwhelmed and anxious. I knew I had to figure out some other way of dealing with my email.

My book club buddy and I read Time Management for Department Chairs by Christian Hansen. I went to a workshop that he presented at the Academic Chairs Conference in Orlando in February and although I found much of what he said about time management incredibly useful, I ironically didn’t have time during the Spring semester to implement many of the ideas he presented. He has a couple of interesting things to say about managing the email deluge that I wanted to try to implement, but I really needed to get my email under control first.

Here’s what I did and what I plan to do to keep things organized.

First, I needed to clean up my inbox. I began by reorganizing my folders. I did my normal summer clean up of the folders and then added a folder called “Defer,” which I’ll come back to. Then I started on the inbox itself, reading the emails to determine what I was going to do with each one. I had four choices, which Hansen calls “the four D’s”: I could “delete,” “do,” “delegate,” or “defer.” I spent over 10 hours one Sunday deleting emails which needed no response from me or doing whatever task was required by an email if I could deal with it immediately. Doing what needed to be done sometimes meant delegating the task to someone else, so I wrote a bunch of emails asking others to do things. Other times, “doing” meant answering questions. And still other times, it meant filing the email in one of my email folders. And finally, if dealing with an email required more time than I had available that day, or required information that I didn’t yet have, or required someone else to do something before I could act, I put it into the “Defer” folder that I mentioned earlier. I can’t explain the elation I felt when I finally had 0 emails in my inbox. What was even more amazing than having 0 emails in my inbox was that I had only 9 emails in my “Defer” folder! I had been SO worried about what I wasn’t dealing with and it was such a relief to find that there were only 9 emails that I couldn’t deal with that day.
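Being a computer scientist, I can’t resist pointing out that the triage step amounts to a tiny decision procedure. Here’s a toy sketch; the specific rules are my own paraphrase of how I applied the four D’s, not anything Hansen prescribes:

```python
# A toy sketch of my version of the "four D's" triage.
# The thresholds and flags are my own stand-ins, not Hansen's.

def triage(email):
    """Return 'delete', 'delegate', 'do' or 'defer' for an email."""
    if not email["needs_action"]:
        return "delete"        # no response required: out it goes
    if email["someone_else_can_do_it"]:
        return "delegate"      # write the hand-off email now
    if email["minutes_to_handle"] <= 10 and not email["blocked"]:
        return "do"            # handle it immediately
    return "defer"             # file it in the Defer folder, schedule it later

inbox = [
    {"subject": "FYI: parking closure", "needs_action": False,
     "someone_else_can_do_it": False, "minutes_to_handle": 0, "blocked": False},
    {"subject": "Advising question", "needs_action": True,
     "someone_else_can_do_it": False, "minutes_to_handle": 5, "blocked": False},
    {"subject": "Fall schedule review", "needs_action": True,
     "someone_else_can_do_it": False, "minutes_to_handle": 60, "blocked": True},
]

for msg in inbox:
    print(msg["subject"], "->", triage(msg))
```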

So that’s how I cleaned up my inbox. Now I have to maintain it, and that means implementing a different system for email. Hansen suggests looking at email only at designated times during the day, times when you are unlikely to be interrupted. And the four D’s should be the practice every time you look at your email. I think I can manage this part of the process, although it’s difficult to tell in the middle of summer when email only trickles in. The part that might be more difficult for me involves a larger-picture time management strategy.

Hansen suggests that we should all abandon the daily to-do list. It leads us to operate in near-constant crisis mode because each day we’re dealing only with the things that HAVE to be done that day. Instead, we should create a master to-do list that contains the things that absolutely must be done by a particular day but also the things we’d LIKE to do, things that are not critical but that will help us be more productive in the long run. A great example of this kind of thing is planning. Many of us would like to develop plans for our departments (or our lives) but that kind of work always gets put on the back burner, to be done when we “have time.” Ironically, not planning often takes more time in the long run because we end up dealing with things in crisis mode rather than ahead of time, when we’re thinking clearly. Hansen also suggests that when we’re creating our schedules for the week or the month or the semester, we should put these kinds of tasks on the schedule and actually do them when we’ve scheduled them. What does this have to do with the “Defer” email folder? We need to regularly put time in our schedules to deal with the tasks in that folder. In fact, we need to schedule time to review the tasks in the folder so that we can then put those tasks on the calendar. It’s this bit that I’m worried about. I worry that there will be crises and I will be unable to resist putting off the “Defer” folder review and planning. But I’m going to really try to implement this step. I think it’s the only way the entire system will work.

One follow-up: In the 10 hours that I spent deleting and otherwise dealing with emails, I clearly didn’t read them all carefully. Just this past week, I got an email from one of the administrators at my University about a student who claimed to have sent me an email a week earlier that I had not responded to. I have no recollection of the email whatsoever, but I also don’t doubt that the student sent it and I simply deleted it unread. When I shared that story with a friend, she said that was her biggest fear in deleting emails: that she will miss something important. And although I acknowledge the risk (especially since it actually happened to me), I still think cleaning up my inbox was worth it. If I had not cleaned up my email, that student message would likely have remained buried in my inbox for the week and the student would have complained to the administrator anyway. So I would have had to deal with the issue either way. The difference is that I now feel pretty confident that future student emails (and other emails) will not get buried and I will no longer have this problem. In addition, my anxiety level about my email is currently at zero, which I think makes me more productive. That alone is worth the effort.

I’m curious about how other people deal with the email deluge.



{June 5, 2012}   Magical Thinking

You probably haven’t noticed that I’ve been away for a while. But I have. In fact, this is my first post of 2012. I have no excuse other than to say that being the chair of an academic department is a time sink. Despite my absence, there have been a number of things over the last five months that have caught my attention and made me think, “I should write a blog entry about that.” I’m sure I’ll get to many of those topics as I renew my resolve to write this blog regularly. But today, I encountered a topic so important, so unbelievable, so ludicrous, that I have to write about it.

One of my friends posted a link to Stephen Colbert’s The Word segment from last night. Go watch it. It’s smart and funny but incredibly scary in its implications. For those of you who don’t watch it, I’ll summarize. The word is “Sink or Swim” (and yes, I’m sure Colbert knows that isn’t a word; he’s being ironic). Colbert is commenting on the fact that North Carolina legislators want to write a law requiring that scientists compute predicted sea level rise based only on historical data and historical rates of change rather than using all available data. In other words, scientists would not be allowed to predict future rates of change in sea levels, only future sea levels. They could not use the data they have that show the rate of change itself is increasing dramatically. Instead, they could only predict the sea level based on how fast it has risen in the past. Colbert has a great analogy for this. He suggests that his life insurance company should only be able to use historical data in predicting when he will die. Historical evidence shows that he has never died. Therefore, his life insurance company can only use that evidence in setting his life insurance rates. Never mind the strong evidence from elsewhere suggesting that he is highly likely to die at some point in the future. The analogy is not perfect but I think it illustrates the idea.

Using all the evidence, scientists are predicting that sea levels will rise by about a meter (Colbert makes a funny comment that no one understands what this means because it’s in metric; that’s the subject of another post) before the end of the 21st century. If this is true, anyone who develops property along the coast will see that property underwater in a relatively short amount of time. Insurance rates for such properties will probably be astronomical, and development might even become impossible, because without insurance, loans cannot be secured. That’s not good for business. In what can only be called “magical thinking,” the North Carolina legislature is putting into law that climate change models can only use historical rates of sea level rise to make predictions about future sea levels. Such models ignore the data suggesting that the rate of rise is itself increasing, which makes the historical rates of increase look incredibly slow. In fact, the bill actually says, “These rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly … .” So despite evidence that sea levels are rising in a non-linear manner (because the rates of increase are themselves increasing), predictions cannot use this fact. When scientists use a linear rate of increase, the models predict that sea levels will rise by “only” 8 inches by the end of the century. I think even these rates are scary, especially for coastal development projects, but scientists are pretty sure they vastly underestimate the extent of the danger. It’s as though these legislators think they can simply wish away climate change.
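For the quantitatively inclined, here’s a back-of-the-envelope sketch of why the restriction matters. The numbers are invented for illustration (they are not real tide-gauge data), but the shape of the problem is the same: fit an accelerating record two ways and the mandated linear fit badly understates where the curve is headed.

```python
# Made-up record in which the *rate* of sea level rise is itself increasing.
import numpy as np

t = np.arange(0, 113)                   # years since 1900, through 2012
level_mm = 1.2 * t + 0.008 * t ** 2     # invented accelerating rise, in mm

# Linear fit to the historical record, as the NC bill mandates
slope, intercept = np.polyfit(t, level_mm, 1)
linear_2100 = slope * 200 + intercept   # extrapolate to the year 2100 (t = 200)

# Quadratic fit, which lets the rate of rise increase over time
a, b, c = np.polyfit(t, level_mm, 2)
quad_2100 = a * 200 ** 2 + b * 200 + c

print(f"linear-only extrapolation for 2100:  {linear_2100:.0f} mm")
print(f"accelerating extrapolation for 2100: {quad_2100:.0f} mm")
```

With these made-up numbers, the mandated linear fit predicts roughly 400 mm of rise by 2100 while the accelerating fit predicts about 560 mm. The bill, in effect, legislates the smaller answer regardless of what the data say.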

We live in a society where saying something is so is often as good as it being so. Is Barack Obama a citizen of the US? Evidence indicates that he actually is but critics persist in saying that he isn’t. As recently as 2010, 25% of survey respondents believed that he was born in another country and so isn’t eligible to be president. Were the 9/11 attackers from Iraq? Despite the objective evidence, 44% of the American public believe that several of them were Iraqis, which would then presumably be justification for the war in Iraq. Is global warming caused by humans? Despite overwhelming scientific opinion that it is, only 47% of the American public believe it is. Why do people believe these erroneous claims? Because the media (or at least parts of the media) advocate such positions. And because we are guilty of magical thinking. Say something is true and it will be true.

Scott Huler of Scientific American says it better than I can: “North Carolina legislators are now tossing around bills that not only protect themselves from concepts that make them uncomfortable, they’re DETERMINING HOW WE MEASURE REALITY.” Meanwhile, sea levels rise non-linearly, no matter what the North Carolina legislature legislates. And because we refuse to accept reality, we lose valuable time for an effort to reverse or at least to slow down this scary trend. So I have a tip for you: don’t buy any coastal property.



I don’t think anyone would accuse me of being a Luddite. I began to learn to program in the late 1970’s when I was in high school, majored in computer science, worked as a software developer and got a PhD in computer science. I love my tech toys (er, tools) and think that overall, we are better off with the technology we have today than we were before it was available. But I am often a skeptic when it comes to educational technology.

I was reminded of my skepticism about a month ago when I came across this photo and caption. For those of you who won’t click through, I’ll describe it. It is a photo of a classroom smart board being used as a bulletin board, with large sheets of paper taped to it, completely covering the smart board itself. The poster of the photo asks a number of questions, including whether the teacher who uses the equipment in this manner should be reprimanded for educational malpractice. The comments on the photo imply that the teacher’s use of the equipment in this way is evidence of resistance to using it appropriately. I was happy to see that the poster of the photo also asked about reasons a teacher might use the equipment this way, such as not having had enough training. But I think the real issue is that the teacher has not had the right kind of training, and the probable reason for that is that the promoters of educational technology are almost always focused on the technology itself and not on the education that the technology can provide.

The fact that someone would consider reprimanding a teacher for using technology in this (admittedly inappropriate) way is part of the problem that I see in all corners of educational technology. When we engage in technology training for teachers, we almost always focus on how and not why. That is, we focus on how to use the technology and don’t engage in meaningful discussion of the pedagogical advantages of using the technology in the classroom. The impression then is that we want to wow our students with this new technology, to do something flashy because the flashiness will capture the attention of the students. I see several problems with this idea. First, if students are using similar technology in all of their classes, the newness of the technology wears off and the flashiness disappears. Second, we should be in the business of getting students to actually learn something and if we don’t have proof that a particular technology (used appropriately) improves learning, perhaps we shouldn’t be investing in such high-priced items. In other words, I do not see technology as a panacea to our educational problems.

I’ll give my own example of how this has played out in my own teaching. A few years ago, my University purchased a bunch of clickers. I went to several training sessions for the clickers, hoping to hear a pedagogical explanation of why using them might improve student learning. I heard a lot about how to use the clickers (technical details) as well as the cool things I could do to survey my students to see where their misunderstandings are. But even this last point didn’t convince me that the technology was worth the cost or the effort, because I already have ways to survey my students to see where their misunderstandings are. In fact, I’ve been developing those kinds of techniques for years, without the use of technology. So what I wanted to know was how the technology would improve on those techniques so that my students learn better. And no one could provide me with those answers. This summer, however, I went to a technology institute for faculty in the University System of New Hampshire. One of our presenters told us about a learning framework which might help us think about technology use in the classroom. He cited several studies that sought to identify why individual tutoring is so effective at improving student learning. The results show that students learn best when they get immediate feedback about their learning (the more immediate the better), can engage in conversation about their learning (that is, when they have to try to explain what they learned to someone else) and have learning activities that are customized to their needs (so that they are not wasting time going over material they already understand). What technology can do, he argued, is help us provide individual-tutoring-like learning experiences for large numbers of students cost-effectively. Therefore, we can use clickers not to provide the teacher with information about student learning but rather to provide the students themselves with information about their own learning. That is, the clickers allow us to ask questions of the class and have all the students answer simultaneously; when we reveal the answer(s), each student can see how he fared compared to his classmates and compared to the correct answer(s). This immediate feedback provides an individual-tutoring-like experience, but only if it is done with an eye toward making sure students understand what they are supposed to get out of the use of the clickers. Too often, clickers are used in the classroom simply because they are cool, and new, and innovative.

So back to the question of whether the teacher who used the smart board inappropriately should be reprimanded. If, instead of having students write on big pieces of paper which she taped onto the smart board, the teacher had the students type their items into a computer and then she had displayed them on the smart board in the “appropriate” manner, we would not be having this discussion. But in neither case have we asked what her pedagogical motivations were for the exercise that the students engaged in. That to me is the important question and the one that would determine whether she has committed “educational malpractice.” And before we spend tons of money on smart boards and iPads and clickers and and and…, I think we should focus on the learning improvements that might be gained from the use of such technology. In most cases, I don’t think we have a whole lot of evidence that it does improve learning. And I definitely don’t think we’re training teachers to use it in a way that takes advantage of the ways that it might improve learning.



{November 13, 2011}   Software Development and the User

I’ve been thinking about software usability a lot lately, mostly because I encounter so much software that isn’t particularly usable. There are two pieces of software that I use a lot right now which drive me crazy for their lack of usability. And yet, I still use them. Perhaps that’s why the usability doesn’t improve. Anyway, here are some thoughts.

The software development company that I worked for right out of college was The Geary Corporation, founded by Dave Geary in Pittsfield, Massachusetts. It doesn’t exist anymore because Dave died of MS and, not too long after, his family sold it to Keane, Inc. But while it was around, Geary was an awesome company to work for, and one of the things that distinguished it from other companies at the time (and, apparently, companies now) was its focus on the user. We did a lot of development for Fortune 500 companies, which have a lot of middle management type people. Dave would not deal with those folks as we developed our software, and this is a lesson I learned well. He would make the contract with the folks at the top of the decision chain and then he would go straight to the users. We might deal some with the users’ direct supervisor, but all decisions about how the software needed to work were passed by the users on the front line, tested by them and approved by them. I learned this lesson so well that it is a central tenet of the software engineering textbook that I co-wrote.

I think about this a lot when I’m using Facebook. It’s a great tool for social networking but as time has gone on, I think the folks at Facebook have forgotten the user. The latest example is their recent upgrade of the Newsfeed so that it is no longer presented chronologically. Instead, Facebook decides what to show you. The Facebook site explains that this is for people who don’t visit Facebook very often, so Facebook tries to predict what will be most interesting to them so they don’t have to wade through a lot of minutia. That’s fine, but did Facebook test this out with folks who use Facebook every day or multiple times a day? Given the subsequent uproar, I would guess not. To their credit, Facebook recently announced that they’ll be rolling out another update to give users an option concerning how they want their Newsfeed to appear. I keep using Facebook because the advantages outweigh the disadvantages (so far) but I have installed a cool app that gives me more control over my experience with the site. The app is called Social Fixer (it used to be Better Facebook) and although it doesn’t work perfectly, given that it’s created by one guy in his living room, it’s awesome.
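To make the design difference concrete, here’s a toy sketch of a chronological feed versus a ranked one. The “relevance” formula is a made-up stand-in, not Facebook’s actual algorithm:

```python
# Chronological versus "relevance"-ranked feeds, with an invented scoring rule.
from datetime import datetime, timedelta

now = datetime(2011, 11, 13)
posts = [
    {"author": "close friend", "likes": 2,  "when": now - timedelta(hours=1)},
    {"author": "acquaintance", "likes": 90, "when": now - timedelta(hours=20)},
    {"author": "close friend", "likes": 15, "when": now - timedelta(hours=5)},
]

# Chronological: newest first, no guessing about what I care about
chronological = sorted(posts, key=lambda p: p["when"], reverse=True)

# "Relevance": popularity discounted by age; the feed decides for me
def score(post):
    hours_old = (now - post["when"]).total_seconds() / 3600
    return post["likes"] / (1 + hours_old)

ranked = sorted(posts, key=score, reverse=True)

print([p["author"] for p in chronological])  # my friend's newest post first
print([p["author"] for p in ranked])         # the popular acquaintance wins
```

In the ranked version, my close friend’s hour-old post gets buried beneath a day-old popular post from an acquaintance, which is exactly the behavior that daily users were upset about.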

The other piece of software that is giving me fits is the tool that we use at PSU to search for courses. It’s always been ugly and clunky and not easy to understand but we have such a shortage of IT folks to help fix these things that I’ve never officially complained about it. We recently decided to stop printing a paper list of our courses which forces everyone to use this search tool. And so someone recently decided to upgrade the tool. To do the same search now requires more clicks and more scrolling than before. That’s a sign to me that whoever did the upgrade didn’t talk to faculty about how they use it. I suspect that they also didn’t talk to students. What a horribly inefficient use of time–why would you spend time upgrading a tool so that the result is less usable? If someone had come to talk to me for ten minutes, I would have explained, for example, that searching for courses by department is not an “advanced” use of the software and so I don’t want to have to click an extra time to get to that option.

None of this probably seems like a huge deal. But when you think of the amount of time we spend developing software and then using that software, it seems crazy to me that we would not take a few minutes early on to get user input as to how the tool can be most efficient and effective.



A student from my Creating Games class came to my office today to talk about the keynote speech from a conference he had recently attended.  The speaker was lamenting the fact that kindergarten has become increasingly focused on “preparing children for first grade” rather than socialization through play activities.  Because we talk a lot about play and its importance in life (even adult life), he wanted to know what I thought about this.

We had a great conversation and in the middle of it, I had an epiphany that many of our society’s ills stem from the very philosophy that encourages (or even requires) kindergarten classrooms to be structured around preparation for first grade.  I think the philosophy comes from capitalistic tendencies to focus on “efficiency,” “productivity,” and “progress,” all of which are defined in a very narrow sense.  And the more I think about this, the more I see it everywhere in our society.

My original thought was that we are forgetting the importance of play because we are so focused on short-term, immediate, measurable outcomes.  We have few resources and so we need to use them efficiently in order to make progress toward some short-term goal.  Any “unproductive” use of resources is discouraged as wasteful.  That is, if we can’t see the immediate consequence of the use of those resources, the resources have been wasted.  So children engaging in unstructured, “unproductive” play in kindergarten is wasteful because they aren’t learning to read, something they must know how to do when they enter first grade.  We need to test our students regularly (using standardized tests) to measure their “progress” and if they aren’t all making the same “progress,” someone must be punished (with loss of funding or firing). So we eliminate art programs and physical education and other extra subjects so we can focus our resources on getting students to perform well on our measurement tools.

As I thought more about this, I started to see this idea everywhere. Because money is the only measurement tool that matters for the stock market, if a company is not making adequate “progress” (which means increasing profits every quarter–profits which stay the same are not “progressing”), it will be punished by shareholders leaving them (well, maybe not in this particular economic climate). So companies engage in practices which make (or save) money in the short-term but which might not make sense if we had a longer view.  And mathematicians and fund managers design financial products that will increase in profits every quarter. If we had a longer view, we would recognize the risk of these products and wouldn’t allow them to take down our entire economy with their collapse. We won’t fund basic research and development because it isn’t immediately clear what the benefits are. And so we won’t learn more about how the universe and the world works just for the sake of learning those things today but which tomorrow might lead to amazing technological advances. I could go on and on.

This kind of thinking is the root of many of our societal problems. Kids engaging in unstructured, unsupervised play is important to teach them skills that can’t be easily measured and whose benefits may not be visible for years. They will learn to entertain themselves. They will learn to focus on an activity for more than a half hour at a time. They will use their imaginations. They will learn to navigate the world on their own, without some external force guiding them to the next “correct” step. These things may take years to learn and are definitely not easily measured. But it seems to me that those are not valid reasons to give up on them. Yet, I think we have largely given up on them. Just as we’ve given up on many of the things in my list above.

I realize I probably sound like a curmudgeon longing for “the good old days.” Or that I think we shouldn’t measure anything in the short-term. But that isn’t my point at all. My point is simply that our societal focus on ONLY measurable, short-term outcomes has consequences. And I would argue that those consequences are mostly bad. They lead to less creativity and fewer workers prepared to adapt to the ever-changing world and economic collapses and fewer technological advances and and and. Focusing on these other things, these things we can’t measure or see the results of immediately, is risky. We might “waste” some resources. But sometimes, what seems like a “waste” today turns out to be life-changing, society-changing, at a point in the unknowable future. And the really sad thing is that if we don’t invest in these “wastes,” we’ll never even know what we might be missing.



{October 10, 2011}   Qwikster Part II

Although it is not a personal email written by Netflix co-founder and CEO Reed Hastings, I was happy to receive the following email this morning.  Perhaps “the Netflix team” will start doing some market research before they make big announcements.  But at least they listened to the overwhelming majority of their subscribers.

Dear Cathie,

It is clear that for many of our members two websites would make things more difficult, so we are going to keep Netflix as one place to go for streaming and DVDs.

This means no change: one website, one account, one password…in other words, no Qwikster.

While the July price change was necessary, we are now done with price changes.

We’re constantly improving our streaming selection. We’ve recently added hundreds of movies from Paramount, Sony, Universal, Fox, Warner Bros., Lionsgate, MGM and Miramax. Plus, in the last couple of weeks alone, we’ve added over 3,500 TV episodes from ABC, NBC, FOX, CBS, USA, E!, Nickelodeon, Disney Channel, ABC Family, Discovery Channel, TLC, SyFy, A&E, History, and PBS.

We value you as a member, and we are committed to making Netflix the best place to get your movies & TV shows.

Respectfully,

The Netflix Team


