Desert of My Real Life











{June 29, 2016}   Information Archives

I’ve been spending Wednesday mornings in the library this summer working on my Freaks and Geeks project in the company of other academics working on their own projects. One of the frustrating things about today’s session is that I’m trying to find a particular advertisement that NBC created for Freaks and Geeks, one that used the tagline “What high school was like for the rest of us.” And I can’t find it. This made me start thinking about all of the cultural ephemera that we have lost because we don’t pay attention, at the start of a project, to archiving its materials.

As I’ve said before, I’m also working on a project this summer (and into the next academic year) that will transform my university’s structure around interdisciplinary clusters. No other university has attempted such a vast overhaul of the way it does things and so we are being watched by people all over the higher education landscape. I am serving as a guide for the project (I’m not always completely sure what that means). A group of us guides decided last week that we should be documenting the process of change as it occurs and no one is going to do that documentation if we don’t. So we’re working on a proposal describing how to do that. In the meantime, some of us have started our own personal documentation using various social media platforms. We don’t know exactly what will happen with the materials that we create and collect or how we will end up using them but we hope that we will be able to provide lessons (both positive and negative) to other universities that are thinking about major transformation initiatives.

Once again, I see connections between my two major projects this summer, even though they seem very different from each other on the surface. This idea of connections also got me thinking about how I do my research for the Freaks and Geeks project (which is no different from the way most people do research). I sometimes find that a path of inquiry has led me somewhere very different from where I thought I was going. For example, I was researching other television shows related to high school. The TV show James at 15 is on that list. It was only on for two seasons and I was just a year younger than the title character. I loved that show! It was another “realistic” look at high school kids, but with less comedy than Freaks and Geeks. I haven’t seen (or really even thought about) the show since its original airing. I wanted to know if it was as good as I remembered and so I did a bit of research, starting with Wikipedia. I discovered that Kim Richards played James’ sister in the show. That name seemed familiar but I couldn’t remember why. So then I researched her. She was Prudence in the show Nanny and the Professor, which I also loved when I was a really little kid. It turns out that Richards was also one of the original cast members of The Real Housewives of Beverly Hills, a show I have never seen. The interconnectedness of knowledge and information would make an interesting premise for a blog called “Rabbit Hole,” in which the author describes their wanderings around the Internet, clicking links and seeing where they end up.

Interconnectedness of knowledge, TV shows about high school, information archives. I get to think about all the fun things.



{February 20, 2016}   Apple vs. The FBI

I’ve been reading a lot about the controversy surrounding the court order compelling Apple to help the FBI break into the phone used by one of the San Bernardino killers, Syed Farook. I think at this point, I mostly understand the technical issues although the legal issues still confound me. And there’s a significant question that I’m not seeing many people discuss but that would help me understand the situation better.

Here’s what the case is about. The iPhone used by one of the killers is owned by his employer, San Bernardino County. The FBI sought and received a court order to confiscate the phone with the intention of gathering the data stored on it. The County willingly turned the phone over. As an aside, there is currently a controversy, with the FBI saying that a County employee, working on his own, reset the iCloud password associated with the phone after giving it to the FBI, which means one possible method for retrieving the data (triggering a fresh automatic iCloud backup) is no longer available. The County claims that its employee reset the password under the direction of the FBI. Somebody is lying. If the FBI really did direct the employee to reset the password, they need to hire more adept technologists. The news stories about this controversy neglect to mention that the method in question would only have worked if Farook had not changed his password after he turned off the automatic iCloud backup. I think that’s pretty unlikely.

So, the FBI has physical access to the iPhone but the problem is that the phone has two layers of security. The first is that it will automatically delete all of its data if someone enters an incorrect password 10 times. The second is that the data on the phone is encrypted which means that it can’t be read unless the password is entered. The FBI sought and received a court order to require Apple to “bypass or disable” the feature that wipes the phone clean. Doing so would then allow the FBI an unlimited number of password attempts to decrypt the data stored on the phone. Apple’s response to the court order is that to comply would be to put the data of every iPhone user in jeopardy.
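
To see why the auto-wipe feature is the crux, it helps to do the arithmetic on a brute-force attack. Here’s a minimal sketch: the roughly 80 milliseconds per guess is the figure Apple has published for its hardware-calibrated key derivation, the assumption that no extra retry delays apply matches what the court order asked for, and the rest is multiplication.

```python
# Back-of-the-envelope brute-force arithmetic, assuming the 10-attempt
# wipe is bypassed and iOS's escalating retry delays are removed (both
# things the court order asked for). The ~80 ms floor per guess comes
# from Apple's published key-derivation figure; treat this as a sketch.
SECONDS_PER_ATTEMPT = 0.08

for digits in (4, 6):
    attempts = 10 ** digits                    # every numeric passcode
    hours = attempts * SECONDS_PER_ATTEMPT / 3600
    print(f"{digits}-digit passcode: {attempts:,} guesses, "
          f"at most {hours:.1f} hours")
```

A four-digit passcode falls in under a quarter of an hour and a six-digit one in about a day, which is why the auto-wipe layer, not the encryption math, is what actually stands between the FBI and the data.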

One of the things that confused me about this story was that I kept hearing and reading reports about Apple helping law enforcement to unlock iPhones many times in the past. The folks over at TechCrunch helpfully explained that Apple’s current response is not hypocritical. For iPhones running the operating system iOS 7 (and previous versions of iOS), Apple had the ability to extract data from the phones. And so it complied with court orders requiring it to extract data from iPhones. For iPhones running iOS 8 and later, Apple removed that capability. Apple has stated that the company wants to protect its users’ data even from Apple. The iPhone in question is running iOS 9, so Apple does not currently have the capability to extract data from the phone in the ways that it has in past cases. In order to comply with the court order, Apple would need to write some new software, a version of iOS with the phone-wiping feature disabled, and then install it on the iPhone in question. The court order requires Apple to provide “reasonable technical assistance.” Is writing new software “reasonable technical assistance”?

But here’s the question that I haven’t found an answer for. Is there a precedent for the government compelling a person (remember: corporations are people, so Apple is a person, right?) to build something that doesn’t already exist? The case that’s being cited as a precedent seems to me (admittedly not a lawyer) to be pretty different. In that case, the Supreme Court said that the government could compel the New York Telephone Company to put a pen register (a monitoring device) on a phone line. But the telephone company already had the technology to monitor phone lines, so it wasn’t as though they were being compelled to create a new technology. Apple is being asked to write a new piece of software, to build something that doesn’t already exist. This diversion of resources is one of their grounds for objecting to the court order. John McAfee, meanwhile, has offered to write the software for free. It isn’t clear, however, that writing the software is enough, since iPhones will only run software that has been signed by Apple. Even if McAfee were successful, the government would still need Apple’s cooperation. And that’s unlikely, since Apple’s philosophy is that their products should provide their customers as much data security as possible.
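
The signing requirement is worth a concrete illustration. Here’s a minimal sketch of vendor code signing in general (not Apple’s actual mechanism), using the third-party cryptography package: the device ships with the vendor’s public key and refuses any image, McAfee’s included, that wasn’t signed with the matching private key.

```python
# Minimal sketch of vendor code signing (illustrative, not Apple's scheme).
# Requires the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()     # held only by the vendor
firmware = b"hypothetical OS image with the auto-wipe feature disabled"
signature = vendor_key.sign(firmware)         # produced by the vendor

device_trusted_key = vendor_key.public_key()  # baked into the device
try:
    device_trusted_key.verify(signature, firmware)
    print("signature checks out: device would accept this image")
except InvalidSignature:
    print("unsigned or tampered image: device refuses to install it")
```

Without the private key, no amount of volunteer programming produces an image the device will accept; that is the whole point of the design.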

Ultimately, I agree with Bruce Schneier that the American public is best served if Apple does not comply with the government’s order. The government says that this request would be a one-time thing, that they would not ask for such assistance again. I don’t believe that. Even if I did believe that the government would not ask again, I don’t think we can keep such software, once it exists, out of the hands of the many, many hackers who want to steal your data. That is a threat to our everyday lives that far outweighs the threat of terrorism.

Addendum (2/21/16): I’ve read some articles that take issue with Apple CEO Tim Cook’s “slippery slope” argument. His argument has been that if Apple complies with this order to circumvent the iPhone feature that wipes the phone clean after 10 incorrect password attempts, they will have no basis to refuse to do so in the future. Every time the US government asks them to circumvent the feature, they will have to do so. Government lawyers have said that this request is about this phone only and that they won’t ask in other cases. Tell that to Cyrus Vance, Jr., the district attorney in Manhattan. On Weekend Edition this morning, Vance argued that Apple should comply with the order because they are blocking law enforcement’s ability to view the data on more than 175 phones related to criminal investigations. If this software is available for use by law enforcement officials, it will be available for use by the “bad guys.” That puts everyone’s data in jeopardy. Apple is protecting your ability to keep your data out of the hands of hackers (whether they work for the government or not).



{June 19, 2014}   HCF Redux

Three episodes into Halt and Catch Fire and I still can’t make up my mind about whether it is an interesting show or not. I really want to like this show. I love that it isn’t afraid to be confusing about the underlying geeky details of computing. The show almost relishes those moments when characters articulate what they’re thinking about the technology, without speaking down to its audience. On the other hand, the motivations and actions of the characters outside the realm of technology are the stuff of melodrama and really cheapen our engagement with the pseudo-historical story of developing a new technology that is very different from all that has come before.

Spoiler alert: I’m going to discuss one major plot twist below, so if you haven’t yet watched the first three episodes of this show, you might want to stop reading.

One of the reasons that this show has intrigued me is that Cameron Howe, the (genius) developer of the BIOS of the new personal computer in the show, is a woman. She is androgynous in her name and her appearance, and she is brilliant and defiant. All of that intrigues me in a story set in the early 1980s. She is focused on developing the base-level machine code without which the hardware cannot succeed. So psyched that a woman is central to the success of this new machine. On the other hand, she is the only character who is shown shopping for new clothes because, of course, in the middle of trying to revolutionize computing, she would be concerned that her clothing isn’t feminine enough. Annoying.

Another woman in the show, Donna Clark, is portrayed as both the nagging wife of our hardware genius, Gordon, and the unacknowledged originator of the chip-layering idea that we already know will be the thing that allows our new computer to be light enough to be portable. I might appreciate the complexity of this character if it weren’t done in such a shallow, obvious manner. Donna seems to be the inhibitor of Gordon’s real genius because she keeps reminding him that he has children and they might need a little bit of his attention. The bird that shows up in episode three was a bit much for me, especially when Donna was the one who had to be practical and kill it with a shovel. Metaphor, anyone?

Lee Pace’s portrayal of Joe MacMillan has been particularly annoying. His single emotion seems to be anger. The story line about the scars on his chest is only interesting if the creators take advantage of the inconsistencies that Cameron pointed out in his telling of how he got them. I get it. He’s angry. With EVERYONE. So let’s start explaining some of the past events that have so far only been alluded to. And here’s the big spoiler: what is up with the sex scene with LouLu’s boy toy? That was a plot twist that surprised me. But I don’t think Lee Pace is great in this role because he seems to think that playing a genius means constantly displaying arrogant anger. I think it would have been much more interesting if he had played that sex scene more tenderly.

So where do I currently fall in regard to this show? I still like that the show doesn’t sugarcoat the technicalities of what this group of people is trying to achieve. I want the show to succeed in telling that story. On the other hand, I think the layering of the interpersonal relationships has been a bit heavy-handed and has taken away from what might be a powerful story.



{June 5, 2014}   HCF

I just watched the pilot episode of AMC’s new show, Halt and Catch Fire, which airs in Mad Men‘s Sunday 10pm slot. I was pretty intrigued by the slew of previews I saw while watching this spring’s half season of Mad Men (and by the way, since when does a season start in the spring of one year, take a nearly year-long hiatus, and then end in the spring of the following year?). I definitely recognize that a show about building a new computer in the early 1980s has the potential to be incredibly boring. There was a lot of good stuff in the pilot as well as some potentially bad stuff, but I definitely wasn’t bored.

One of the annoying things about the show is the arrogant genius behaving badly trope. Lee Pace plays the first arrogant genius, Joe MacMillan. When Joe is introduced to us, he is driving his Porsche very, very fast and runs over an armadillo, which is our first clue that he’s in Texas. Joe makes speeches full of the vision thing and gets annoyed when his fellow computer salesman, Gordon, tries to talk about mundane details like free installation. He is a master manipulator, which I found annoying, but he has some mystery in his background, which I found intriguing. I look forward to finding out what he’s been doing since his disappearance from his IBM job a year prior to the events of the show. The second arrogant genius is Cameron Howe, a woman who is a senior at an engineering school where, for some unknown reason, Joe is a guest speaker. She is the misunderstood genius that no one pays attention to because she is so far ahead of her time. As Mackenzie Davis portrays her, Cameron reminds me of Watts, the Mary Stuart Masterson character in Some Kind of Wonderful, complete with anger at the world and a punk soundtrack playing on her Walkman. But she’s a genius, so we forgive her her quirks. The final genius is not so much arrogant as depressed. Gordon Clark, played by Scoot McNairy, is the inventor of a failed computer and has been reduced to selling other people’s computers. When we first meet him, he is drunk and his wife has brought their kids to the jailhouse to bail him out. He drunkenly reminisces about the failure of his computer–when they tried to turn it on to demo it, it wouldn’t turn on. But he is also a visionary, having written an article for Byte magazine about open architectures for CPUs. Joe quotes that article to convince Gordon to come work with him on his new project.

Although I found the genius trope annoying and over the top, there was a lot about the show that I enjoyed. I really enjoyed the history of the show. Even though it’s fictional, it reminded me of a lot of things that I haven’t thought about in years. Byte magazine is one of those things. I loved that magazine and was a regular reader in the 1980s. It seemed completely believable to me that someone might have written an article for the magazine that inspired someone else to take a big chance on trying something new and different. Other mentions in the show that brought back memories: CP/M, SCP, the dominance of IBM (International Business Machines) in the computer industry of the day and the joys of playing Centipede at the arcade. I also liked the reverse engineering scene although I can understand that if you don’t have a tech background, that scene might have been confusing or boring or both. That’s probably why it’s kind of glossed over. Most viewers probably won’t be too excited about watching guys using an oscilloscope to record pin voltages and then recording the contents of 64K of ROM to get the BIOS instructions in assembly language. Just writing that sentence makes me smile. It’s a very cool scene.
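
For the technically curious, the ROM-dump half of that scene boils down to capturing 65,536 bytes and looking at them in hex before a disassembler turns them back into assembly. Here’s a minimal sketch that synthesizes a fake image so it runs standalone; a real clean-room effort would of course use the bytes captured from the actual chip.

```python
# Hex-dump the start of a 64K ROM image, the raw material a disassembler
# turns back into BIOS assembly. The ROM contents here are synthesized
# so the sketch is self-contained; nothing below is real BIOS code.
import random

random.seed(1983)
rom = bytes(random.randrange(256) for _ in range(64 * 1024))

print(f"ROM size: {len(rom)} bytes ({len(rom) // 1024}K)")
for offset in range(0, 64, 16):                  # first four 16-byte rows
    row = rom[offset:offset + 16]
    print(f"{offset:04X}  " + " ".join(f"{b:02X}" for b in row))
```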

I am a little torn about the title of the show. On the one hand, I think it’s cool that the title refers to an assembly language instruction, HCF. Assembly is a low-level computer language, meaning it offers very little abstraction: the programmer is working very close to binary, the zeroes and ones that the computer understands. It is really geeky to program in assembly these days, as most software is written in languages whose instructions sit at a higher level of abstraction from binary. HCF is an instruction that halts the computer by locking it into repeating the same operation over and over. The “catch fire” part of the name comes from the story (myth?) that some of the wiring in an old computer was heated up so much by this repetition that it actually caught fire. Nice. On the other hand, “halt and catch fire” seems like an obvious metaphor for the fact that sometimes the best-laid plans blow up in your face. Bleh. In fact, metaphor in this show is pretty obvious. At one point, for example, when it looks like Gordon won’t work with him, Joe pulls out a bat with the inscription “Swing for the fences” and so he does, literally, hitting a ball over and over until he breaks a window. Not so subtle.
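
For anyone who has never seen such an instruction, a toy fetch-execute loop makes the “same operation over and over” part concrete. This models a made-up three-instruction machine, not any real processor’s HCF.

```python
# Toy illustration of a halt-and-catch-fire style instruction: a branch
# whose target is its own address, so the processor fetches and executes
# the same instruction forever. Entirely made up; no real ISA modeled.
program = {0: "NOP", 1: "HCF", 2: "HALT"}  # HALT is never reached

pc, fetches = 0, 0
while fetches < 1_000_000:                 # cap so the demo terminates
    instruction = program[pc]
    fetches += 1
    if instruction == "NOP":
        pc += 1
    elif instruction == "HCF":
        pc = pc                            # jump to self: the tight loop
    elif instruction == "HALT":
        break
print(f"still stuck at address {pc} after {fetches:,} fetches")
```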

A couple of other things made me roll my eyes as well. Most of the bonding/conflict stuff between Cameron and Joe, for example. The trick quarter, the conversation about VLSI, and the stupid sex scene all seemed too superficial and lazy. But I understand that first episodes are tricky. The characters have to be introduced and established quickly and so shortcuts are often taken. I just hope the show relies more on the cool stuff once the story is established. I will keep watching to see what they do with this fairly promising start.



{September 22, 2013}   Social Media Round Up

Now that the craziness of the start of the semester has begun to slow down, I thought I’d do a quick hit on a variety of social media topics that I’ve been thinking about in the last few weeks but have not yet found the time to write about.

A few weeks ago, Twitter updated its rules to make it clear that abuse would not be tolerated. The events that prompted the rule updating included specific bomb threats and threats of rape sent to women journalists and politicians. Many of the commenters on the articles covering this story think that it was improper for the head of Twitter UK, Tony Wang, to apologize to the women in question. Other commenters suggest that it’s stupid to try to police these kinds of threats because doing so won’t make a difference. Still others suggest that unless someone breaks the law, Twitter should not “censor” tweets. My main response to these comments is that making direct and specific threats against a particular individual is indeed against the law. It doesn’t seem to be a terrible thing to me that Wang chose to apologize to individuals who had crimes committed against them using his product. In fact, that seems to make good business sense. And I agree that rules alone won’t make a difference in changing the tone of discourse on Twitter. There has to be enforcement of those rules as well. So I hope Twitter will follow up on its promises to make reporting abuse easier and to hire more people to deal with such reports so that they can be handled more quickly. Twitter didn’t handle this issue particularly well, in my opinion, but they are taking some first steps to fix the issue.

I use a variety of social networking sites at varying levels of activity. For example, I’m pretty active on Facebook, regularly posting status updates, photos and links to stories that I think my friends will be interested in. I am far less active on LinkedIn although I have many contacts in my network, mostly current and former students who are using the network professionally. I try to keep up with the various networks that are available so I decided recently to check out Google+. I’ve been using Google Calendar and Gmail for years so it felt like a natural step to set up a profile and get started with Google+. I’ve found so far that it is much more like Facebook than like LinkedIn but there’s a bit of Twitter thrown in. It’s like Facebook in that you have a stream very much like Facebook‘s newsfeed. You also share status updates, photos, etc. just like on Facebook. You can even “like” posts by others (called +1 in Google+). But like Twitter, Google+ has an option that allows you to follow people and organizations. In Facebook, your friendships are bidirectional in that both parties must agree to the relationship. In Twitter, you can follow someone to be able to see their public tweets and they do not have to follow you back. In other words, a relationship requires only a uni-directional connection. Google+ also requires only this uni-directional connection. So, in Google+, we get the sharing features of Facebook combined with the relationship features of Twitter. But Google+ also offers another feature that I think is pretty cool. One of the problems with Facebook is that all friends are treated equally on the network even if they aren’t equal in real life. That has caused problems for lots of people. So Google+ allows the user to create different “circles” for their connections, which allows the user to easily manage the kinds of material people in a particular circle will see–just like in “real life.” Another interesting aspect of Google+ is the “hangout” concept although I haven’t played with entering or creating hangouts yet. Perhaps that will be the subject of a future post. The main problem with Google+, however, is that so few of the people I care about are using it. That’s the draw of Facebook–many of the people I care about in “real life” are posting really interesting (and not so interesting) things on Facebook so I keep going back. Until more people migrate to Google+ in a meaningful way, I probably won’t participate very much myself. Google faces a classic chicken-and-egg problem here.
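
For the programmers in the audience, the friendship-versus-follow distinction is just the difference between undirected and directed edges in a graph. A quick sketch with made-up names:

```python
# Friendships are symmetric (one undirected edge); follows are one-way
# (one directed edge). Names are made up for illustration.
friendships = set()          # frozenset pairs: both parties agreed
follows = set()              # (follower, followed) tuples: one side's choice

def befriend(a, b):          # Facebook-style: requires both sides
    friendships.add(frozenset((a, b)))

def follow(a, b):            # Twitter/Google+-style: a's choice alone
    follows.add((a, b))

befriend("alice", "bob")
follow("alice", "nasa")      # nasa never has to follow alice back

print(frozenset(("bob", "alice")) in friendships)  # True: symmetric
print(("nasa", "alice") in follows)                # False: one-way
```

Circles then layer an audience filter on top of those edges, which is exactly the “not all friends are equal” problem Facebook’s flat friend list ignores.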

I regularly check out new social media tools, just to see what they’re about. Some of the tools become part of my repertoire (Tumblr, Flickr) while some do not or, at least, haven’t yet (Klout, Medium). One tool that was quite intriguing to me when I first looked at it but then kind of disappointed me was Storify. It’s a tool that is designed to allow people to curate social media artifacts to tell a story. I wrote one story ten months ago and then forgot about it. As I was thinking about the things I wanted to write about in this round up of my social media activity, I remembered that I had written that one story and went back to check what’s been going on in that social media world. I was surprised to find that my story had 56 views. That may not sound like much activity for 10 months, but I had done nothing to bring attention to the story and none of my friends (as far as I know) are members of that community. I have no idea how many people read each one of these blog posts but I’m guessing it is far fewer than 56 people. So Storify is back on my radar although I’m not sure how I might use it yet.

It’s difficult to keep up with what’s going on in the world of social media. I would like a tool that helps me keep up with what’s available and puts it all together in a way that makes sense.



{August 9, 2013}   When is Failure Really Failure?

If you are involved in higher education in any way, you have heard about Massive Open Online Courses (MOOCs). I first heard of them back in the Fall of 2011 when I was one of 160,000 students to sign up for an online class in Artificial Intelligence (AI). I have a PhD in Computer Science and my area of specialization was Machine Learning, a branch of AI. I have taught AI at the undergraduate level. So I wasn’t signing up for the course because I wanted to learn the content. Instead, I wanted to understand the delivery method of the course. In particular, I wanted to figure out how two preeminent AI researchers, Sebastian Thrun and Peter Norvig, would teach a class to over 150,000 students. I spent a couple of weeks working through online video lectures and problem sets, commenting on my fellow students’ answers and reading their comments on my answers. I stopped participating after a few weeks as other responsibilities ate up my time. It isn’t clear to me how many other people signed up for the class for reasons similar to mine. 23,000 people finished the course and received a certificate of completion. Based on the “success” of this first offering, Thrun left his tenured faculty position at Stanford University and founded a start-up company called Udacity.

There has been a lot of hype about MOOCs in general and Udacity in particular. It’s interesting to me that many of these MOOCs seem to employ pedagogical techniques that are highly criticized in face-to-face classrooms. In this advertisement, for example, Udacity makes a big deal about the prestige and reputations of the people involved in talking at students about various topics. Want to build a blog? Listen to Reddit co-founder Steve Huffman talk about building blogs. In other words, these classes rely heavily on video lectures. The lecture format for face-to-face classrooms, for example, is much maligned as being ineffective for student learning and mastery of course content. Why, then, do we think online courses which use video lectures (from people who have little training in how to effectively structure a lecture) will be effective? The ad also makes a big deal about the fact that the average length of their video lectures is 1 minute. Is there any evidence that shorter lectures are more effective? It depends on what else students are asked to do. The ad makes bold claims about the interactivity, the hands-on nature of these courses. But how interactivity is implemented is unclear from the ad.

Several people have written thoughtful reviews of Udacity courses based on participating in those courses. Robert Talbert, for example, wrote about his mostly positive experiences in an introductory programming class in The Chronicle of Higher Education. Interestingly, his list of positive pedagogical elements looks like a list of game elements. The course has clear goals, both short and long-term. There is immediate feedback on student learning as they complete frequent quizzes which are graded immediately by scripts. There is a balance between challenge presented and ability level so that as the student becomes more proficient as a programmer, the challenges presented become appropriately more difficult. And the participants in the course feel a sense of control over their activities. This is classic gamification and should result in motivated students.
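
The immediate-feedback element is the piece that auto-grading scripts handle best, and it’s easy to picture. A minimal sketch with a made-up answer key:

```python
# Minimal sketch of script-graded quiz feedback (questions are made up).
# The point is the loop: submit, get graded instantly, try again.
answer_key = {"q1": "O(n log n)", "q2": "stack"}

def grade(submission):
    results = {q: submission.get(q) == a for q, a in answer_key.items()}
    score = sum(results.values()) / len(answer_key)
    return results, score

results, score = grade({"q1": "O(n log n)", "q2": "queue"})
for question, correct in results.items():
    print(f"{question}: {'correct' if correct else 'try again'}")
print(f"score: {score:.0%}")   # feedback arrives seconds after submission
```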

So why are so many participants having trouble with these courses now? Earlier this year, Udacity set up a partnership with San Jose State to offer courses for credit for a nominal fee (typically $150 per course). After just two semesters, San Jose State put the project on hold earlier this week because of alarmingly high failure rates. The courses were offered to a mix of students, some of whom were San Jose State students and some of whom were not. The failure rate for San Jose State students was between 49 and 71 percent. The failure rate for non-San Jose State students was between 55 and 88 percent. Certainly, in a face-to-face class or in a non-MOOC class, such high failure rates would cause us to at least entertain the possibility that there was something wrong with the course itself. And so it makes sense that San Jose State wants to take a closer look at what’s going on in these courses.

One article about the Udacity/San Jose State project that made me laugh because of its lack of logic is this one from Forbes magazine. The title of the article is “Udacity’s High Failure Rate at San Jose State Might Not Be a Sign of Failure.” Huh? What the author means is that a bunch of students failing a class doesn’t mean there’s something wrong with the class itself. He believes that the purpose of higher education is to sort people into two categories–those smart enough to get a degree and those not smart enough. So, his logic goes, those who failed this class are simply not smart enough to get a degree. I would argue with his understanding of the purpose of higher education but let’s grant him his premise. What proof do we have from this set of MOOCs that they accurately sort people into these two categories? Absolutely none. So it still makes sense to me that San Jose State would want to investigate the MOOCs and what happened with them. Technology is rarely a panacea for our problems. I think the MOOC experiment is likely to teach us some interesting things about effective online teaching. But I doubt they are going to fix what ails higher education.



{July 31, 2013}   Whistle-blowers

Two whistle-blowers are in the news today: Bradley Manning and Edward Snowden. Manning is the Army soldier who was convicted yesterday of 17 of the 22 counts against him. He leaked top secret documents to Wikileaks and was convicted of espionage and theft although found innocent of aiding the enemy. He is now awaiting sentencing. Edward Snowden is the contractor working for the National Security Agency who revealed details of several surveillance programs to the press. He is currently on the run from charges of espionage and theft but is continuing to make headlines with further revelations. Some see these two as heroes and others see them as traitors. I think history will judge which they are. What interests me most are the ways these two cases are being discussed.

We already know that Bradley Manning has been found guilty of most of the charges against him. The prosecutor in the case has said that Manning is not a whistle-blower but is instead a traitor looking for attention via a high-profile leak to Wikileaks. Manning’s defense attorney countered by saying that Manning is naive and well-intentioned and wants to inform the American public. “His motive was to spark reform – to spark change.” Why is his motive important? Since when is motive important in determining whether someone committed a crime or not? Next time I get stopped for a traffic infraction, I’m going to try saying “I didn’t intend to break the law” to the officer. What do you think my chances of getting off will be? I also find it interesting that the prosecutor seems to think that Manning is not a whistle-blower because he believes that Manning wanted attention. A whistle-blower is “a person who exposes misconduct, alleged dishonest or illegal activity occurring in an organization.” Manning might not be a whistle-blower because the activity he revealed was not misconduct, dishonest, or illegal. But to argue that he’s not a whistle-blower because he didn’t have the proper intentions seems to lead us as a society down a dangerous path. Of course, the Zimmerman verdict might have already sent us down that path.

The Snowden situation is more recent than the Manning case, so we don’t know what Snowden will be found guilty of. He’s accused of disclosing details about some secret surveillance programs being conducted by the National Security Agency (NSA) in the United States. The NSA is supposed to gather information about foreign entities strictly outside of US boundaries. Edward Snowden revealed the existence of several NSA surveillance programs focused on domestic as well as foreign communications. He then fled the country with several laptops “that enable him to gain access to some of the US government’s most highly classified secrets.” The question that interests me most about this case is how a contractor, an employee of a private company, an employee who probably should have failed his background check on the grounds that his resume contained discrepancies, was able to gain access to such secret information. “Among the questions is how a contract employee at a distant NSA satellite office was able to obtain a copy of an order from the Foreign Intelligence Surveillance Court, a highly classified document that would presumably be sealed from most employees and of little use to someone in his position.” Yes, that IS among the most important questions to answer. The NSA director, Keith Alexander, has said that the security system didn’t work as it should have to prevent someone like Snowden from gathering the sensitive information that he did. Snowden claims that he was authorized to access this information. The NSA claims that he was not. Why does the NSA think it is preferable to claim that an unauthorized person gained access to its information?

I’m going to pause here to say that I’ve been reading a lot of speculation about how Snowden gained access to this information that he shouldn’t have had access to. There may be some people who know how he gained this access, but amid the dross of the Internet, the methods aren’t yet clear. From a technical standpoint, however, I find it incredibly disturbing that someone with Snowden’s computer security background (which appears to be rather mundane–he was no genius computer hacker) was able to gain access to all of this sensitive information within the agency that is supposed to be most expert in the security game. No matter what you think of Snowden and his intentions, I think you have to be concerned about the ease with which someone was able to gain access to these “secrets.” Having now read a whole bunch of information about this case, I feel like it is similar to those stories in which a high school student is punished by the school’s IT staff for pointing out how weak the school’s computer security setup is. Perhaps we should be focused on the (lack of) security around this information rather than on the fact that it has been disclosed.

In the Senior Seminar that I teach, we often discuss whistle-blowing. If I use the term “whistle-blowing,” my students generally feel that the person doing the disclosing is doing a service to society. If, instead, I say that the employee is revealing corporate secrets, my students generally feel that the person is betraying his/her employer. The cases of Manning and Snowden are more complex than I can easily comprehend but I guess I generally feel that shedding light on situations is better than trying to maintain security by secrecy, by obscuring the facts. In a democracy, sunshine is a good thing.



{June 19, 2013}   Software Controls Users

I’m often surprised that some of the most valuable lessons I learned back in the late 1980s have not become standard practice in software development. Back then, I worked for a small software development company in Western Massachusetts called The Geary Corporation. The co-founder and owner of the company was Dave Geary, a guy I feel so fortunate to have learned so much from at a formative stage in my career. He was truly ahead of his time in the way that he viewed software development. In fact, my experience shows that he is ahead of our current time, as most software developers have not caught up with his ideas even today. I’ve written about these experiences before because I can’t help but view today’s software through the lens that Dave helped me to develop. A couple of recent incidents have me thinking about Dave again.

I was talking to my mother the other day about the … With Friends games from Zynga. You know those games: Words With Friends, Scramble With Friends, Hanging With Friends, and so on. They’re rip-offs of other, more familiar games: Scrabble, Boggle, Hangman, and so on. She was saying that she stopped playing Hanging With Friends because the game displayed the words that she failed to guess in such a small font on her Kindle Fire, and so quickly, that she couldn’t read them. Think about that. Zynga lost a user because they failed to satisfy her need to know the words that she failed to guess. This is such a simple user interface issue. I’m sure Zynga would explain that there is a way to go back and look for those words if you are unable to read them when they flash by so quickly. But a user like my mother is not interested in extra steps like that. And frankly, why should she be? She’s playing for fun, and any additional hassle is just an excuse to stop playing. The thing that surprises me about this, though, is that it would be SO easy for Zynga to fix. A little bit of interface testing with real users would have told them that the correct, unguessed words were displayed in a font too small and at a speed too fast for a key demographic of the game.

My university is currently implementing an amazingly useful piece of software, DegreeWorks, to help us with advising students. I can’t even tell you how excited I am that we are going to be able to use this software in the near future. It is going to make my advising life so much better and I think students will be extremely happy to be able to use the software to keep track of their progress toward graduation and get advice about classes to think about taking in the future. I have been an effusive cheerleader for the move to this software. There is, however, a major annoyance in the user interface for this software. On the first screen, when selecting a student, an advisor must know that student’s ID number. If the ID number is unknown, there is no way to search by other student attributes, such as last name, without clicking on a Search button and opening another window. This might seem like a minor annoyance but my problem with this is that I NEVER know the student’s ID number. Our students rarely know their own ID number. So EVERY SINGLE time I use this software, I have to make that extra click to open that extra window. I’m so excited about the advantages that I will get by using this software that I am willing to overlook this annoyance. But it is far from minor. The developers clearly didn’t test their interface with real users to understand the work flow at a typical campus. From a technical standpoint, it is such an easy thing to fix. That’s why it is such an annoyance to me. There is absolutely no reason for this particular problem to exist in this software other than a lack of interface testing. Because the software is otherwise so useful, I will use it, mostly happily. But if it weren’t so useful otherwise, I would abandon it, just as my mother abandoned Hanging With Friends. When I complained about this extra click (that I will have to make EVERY time I use the software), our staff person responsible for implementation told me that eventually that extra click will become second nature. In other words, eventually I will mindlessly conform to the requirements that the technology has placed on me.
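
From a purely technical standpoint, here’s how small the fix could be: one search box that accepts either an ID or a name. This is a hypothetical sketch, not DegreeWorks’ actual code or data model.

```python
# Hypothetical sketch of a single search box that takes an ID or a name.
# Nothing here is DegreeWorks' actual code; it just shows how small the
# fix is from a purely technical standpoint.
students = [
    {"id": "900123456", "last": "Ramirez", "first": "Ana"},
    {"id": "900654321", "last": "Chen", "first": "Wei"},
]

def find_students(query):
    q = query.strip().lower()
    if q.isdigit():                       # looks like an ID number
        return [s for s in students if s["id"] == q]
    return [s for s in students           # otherwise match on last name
            if s["last"].lower().startswith(q)]

print(find_students("900123456"))        # advisor knows the ID (rarely)
print(find_students("chen"))             # advisor knows the name (always)
```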

Dave Geary taught me that when you develop software, you get the actual users of that software involved early and often in the design and testing. Don’t just test it within your development group. Don’t test it with middle management. Get the actual users involved. Make sure that the software supports the work of those actual users. Don’t make them conform to the software. Make the software conform to the users. Otherwise, software that costs millions of dollars to develop is unlikely to be embraced. Dave’s philosophy was that technology is here to help us with our work and play. It should conform to us rather than forcing us to conform to it. Unfortunately, many software developers don’t have the user at the forefront of their minds as they are developing their products. The result is that we continue to allow such software to control and manipulate our behavior in ways that are arbitrary and stupid. Or we abandon software that has cost millions of dollars to develop, wasting valuable time and financial resources.

This seems like such an easy lesson from nearly thirty years ago. I really don’t understand why it continues to be a pervasive problem in the world of software.



The 2012 Summer Olympics are nearly over. I haven’t watched them much, mostly because I can’t stand the way they are covered by NBC and its affiliates, especially in prime time, when I’m most likely to be watching. I don’t think this video aired on national television but it sums up NBC’s attitude about the Olympics–it’s only marginally about the sports and performances. The main focus is on disembodied female athlete body parts moving in slow motion, sometimes during the execution of an athletic move but often just as the athlete moves around the playing area. It’s soft-core porn. Interestingly, I watched the video earlier today on the NBC Olympics page but now it’s gone. I guess someone at NBC came to their senses and realized that it’s inappropriate to focus on female Olympians’ bodies without emphasizing their athleticism. But anyway, sexism in the coverage isn’t what I was planning to write about tonight.

I wish NBC would focus more on the performances of the athletes. An athletic performance can be interesting and amazing even if the athlete hasn’t overcome significant life difficulties to become an Olympic athlete. Each of those athletes, even the ones who have had fairly mundane lives outside of their athletic pursuits, has overcome incredible odds just to make it to the Olympics at all. For every athlete who makes it to the Olympics, there are probably thousands of others who tried and didn’t make it.

That said, one athlete who caught my attention for overcoming incredible odds to make it to the Olympics is Oscar Pistorius. He is the sprinter from South Africa who has a double below-the-knee amputation but who has now competed in the Olympics using prostheses, earning him the nickname “The Blade Runner.” His participation in the Olympics has been controversial. Some have claimed that the prostheses he uses give him an advantage over other athletes and, as a result, in 2008, the IAAF banned their use, which meant that Pistorius would not be able to compete with able-bodied athletes. Although the ban was overturned that same year in time for Pistorius to participate in the 2008 Summer Olympics, he failed to qualify for the South African team. But this year, he was on that team, running in both the individual 400 meters and the 4×400 meter relay. I saw his heat in the individual 400 meters and although he came in last, it was an inspirational moment.

Pistorius’ historic run reminded me that over time science fiction often becomes science fact. Remember The Bionic Woman? I loved that show when I was about 13 years old. Jaime Sommers was beautiful, brave and bionic. She nearly died in a skydiving accident but she was lucky to be the girlfriend of Steve Austin, aka The Six Million Dollar Man, who had had his own life-threatening accident a number of years earlier. He loved her so much that he begged his boss to save her by giving her bionic legs, a bionic arm and a bionic ear to replace her damaged parts. Unlike Pistorius’ legs, Jaime’s clearly were “better” than human legs, allowing her to run more than 60mph. Her bionic arm was far stronger than a human arm, allowing her to bend steel with her hand. I always loved her bionic ear, which allowed her to hear things that no human could possibly hear, but only if she pushed her hair out of the way first.

Speaking of hearing, I love the story about the technology that is being used to make the Olympics sound like the Olympics to home viewers. The Olympic games have a sound designer named Dennis Baxter. He is the reason we can hear the arrow swoosh through the air in the archery competition. This is a sound that folks at the event probably can’t hear. And yet, Baxter sets up microphones so that we, the television viewing audience, can actually hear that arrow move through the air. Baxter claims that this technology makes the event seem more “real” to the viewing audience.

This raises such interesting questions about augmented reality. We can never directly experience the “real.” It will always be mediated by at least our senses. We know for a fact that our brains fill in holes in our visual perception. Our brains augment what we perceive via our senses. When we perceive an Olympic event via transmission technology (like television or the Internet), are we witnessing the “real” event? Is it still “real” when technology augments some aspect of our sensory perception, like when Baxter adds microphones to allow us to hear things we wouldn’t hear even if we were attending the event? When does technological augmentation become unreality? Where do we draw the line? And most importantly, does it matter? Do we care whether we’re experiencing something “real”?



{June 11, 2012}   Interaction Design

I’m reading an interesting book by Janet Murray called Inventing the Medium: Principles of Interaction Design as a Cultural Practice. She is articulating things that I’ve thought for a long time but is also surprising me a lot, making me think about things in new ways. The book is about the digital medium and how objects that we use in this medium influence the way we think about the world. She argues that technological change is happening so quickly that our design for the medium hasn’t kept up. Designers use the conventions that work well in one environment in a different environment without really thinking about whether those conventions make sense in that second environment. As a result we get user interfaces (which is a term she doesn’t like but which I’ll use because most people interested in these things have a pretty good idea of what we mean by the term) that are far too complex and difficult to understand.

One idea that strikes me as particularly important and useful is Murray’s argument that designers create problems when they separate “content” from the technology on which the “content” is viewed. Like McLuhan, Murray believes that “the medium is the message,” by which she means “there is no such thing as content without form.” She goes on to explain, “When the technical layer changes, the possibilities for meaning making change as well.” In other words, if you change the device through which you deliver the content, the tools needed to help consumers understand that content should probably also change. My favorite personal example of the failure of this idea is the Kindle, Amazon‘s e-reader. I’ve owned my Kindle for about three years and I mostly love it. One thing that feels problematic to me, however, is the reporting of where you are in the book that you’re reading. Printed books are divided into chapters and pages and it is easy to see how much further the reader has to go to the end of the book. Readers trying to read the same book might have difficulty if they are using different editions because page numbers won’t match up but the divisions into chapters should still be the same. If a page of text in a physical book corresponds to a screenful of text on an e-reader, page numbers don’t really make sense in e-books, mainly because the reader can change the size of the font so that more or less text fits on the screen at a given time. This means that the Kindle doesn’t have page numbers. But readers probably want to be able to jump around e-books just as they do in physical books. And they want to know how much progress they’ve made in an e-book just as they do in a physical book. So Amazon introduced the idea of a “location” in its e-books. The problem with a “location,” however, is that I have no idea what it corresponds to in terms of the length of the book, so locations don’t give me a sense of where I am in the book. For that purpose, the Kindle will tell me the percentage of the book that I’ve currently read. I think the problem with these solutions is that the designers of the Kindle have taken the idea of pages, changed it only slightly and unimaginatively, and produced something that isn’t as informative in the digital medium as it is with a physical book. I don’t know what the solution is but Murray suggests that the e-reader designers should think about the difference between “content” and “information” in their design.
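
As best I can tell, a “location” is just a fixed-size slice of the underlying file (figures around 128 bytes per location are commonly cited, though I’ll treat that as an assumption), which is exactly why the number carries no sense of scale. A percentage computed from the same offset at least means something:

```python
# Sketch of position indicators that don't depend on font size.
# The 128-bytes-per-location figure is a commonly cited approximation
# of Kindle's scheme, used here as an assumption, not a spec.
BYTES_PER_LOCATION = 128

def position(offset_bytes, book_size_bytes):
    location = offset_bytes // BYTES_PER_LOCATION + 1
    percent = 100 * offset_bytes / book_size_bytes
    return location, percent

loc, pct = position(offset_bytes=300_000, book_size_bytes=1_200_000)
print(f"Location {loc}")        # meaningless without knowing the book size
print(f"{pct:.0f}% read")       # meaningful at a glance, like a page count
```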

Murray distinguishes between “content” and “information” and thinks that device designers have problematically tried to separate content from the technology on which that content will be viewed. So the designers of the Kindle see the text of the book as the content, something they don’t have to really think about in designing their device. Instead, Murray suggests that they focus on information design, where the content (in this case, the text) and the device (in this case, the Kindle) cannot be separated. The designers should think about the affordances provided by the device when designing the information, the meaningful content, with which the reader will interact.

Another example appeared in my Facebook timeline last week, posted there by one of my friends to point out that the Mitt Romney campaign is insensitive at best and hostile at worst to women. The post is a video of Romney’s senior campaign advisor Eric Fehrnstrom, appearing on This Week with George Stephanopoulos a week ago, calling women’s concerns “shiny objects of distraction.” Watching it, I was annoyed and horrified by what I was supposed to be annoyed and horrified by. But I also noticed the ticker-tape Twitter feed at the bottom of the video. The headline-type feeds at the bottom of the screen on television news have become commonplace, despite the fact that they don’t work particularly well (in my opinion). I’ve always felt that the news producers must know that the news they are presenting is boring if they feel they have to give us headlines in addition to the discussion of the news anchors. But in the video of Romney’s aide, the rolling text at the bottom of the screen is not news headlines but a Twitter feed. So the producers of This Week have decided that while the “conversation” of the show is going on, they want to present the “conversation” that is simultaneously happening on Twitter about the show. There are several problems with this idea, not least of which is that most of the tweets shown in the video are not very interesting. In addition, the tweets refer to parts of the program that have already gone by. And finally, the biggest problem is that the Twitter feed recycles. In other words, it’s not a live feed. They show the same few comments several times. Someone must have thought that it would be cool to show the Twitter conversation at the same time as the show’s conversation, but they didn’t bother to think carefully about the design of that information or even about which information might be useful to viewers. Instead, they simply used the conventions from other environments and contexts in a not very useful or interesting way.

Another of Murray’s ideas that strikes me as useful is the idea of focusing on designing transparent interfaces rather than intuitive interfaces. Intuition requires the user to already understand the metaphor being used. In other words, the user has to understand how an object in the real world relates to whatever is happening on the computer screen. This is not particularly “intuitive,” especially for people who don’t use computers. I’ve been thinking about the idea of intuitive interfaces since I started teaching computing skills to senior citizens. For them, it is not “intuitive” that the first screen you see on a Windows computer is your desktop. And once they know that, it isn’t “intuitive” to them what they should do next, because it’s all new to them and so they don’t have a sense of what they CAN do. For example, they can put a piece of paper down on a real desktop. Metaphorically, you can put a piece of paper (a file) down on the Windows desktop, but the manner in which you do that is not “intuitive.” The first question I always get when I talk about this is: How do I create a piece of paper to put on the desktop? Of course, that’s not the way they ask the question. They say, “How do I create a letter?” That’s a reasonable question, right? But the answer depends on lots of things, including the software that’s installed on the computer you’re using. So the metaphor only goes so far. And the limitations of the metaphor make the use of the device not particularly intuitive.

Murray argues that focusing on “intuition” is not what designers should do. Instead, designers should focus on “transparency,” the idea that when the user does something to the interface, the change should be immediately apparent and clear to the user. This allows the user to develop what we have typically called “intuition” as she uses the interface. In fact, lack of transparency is what makes many software programs feel complex and difficult to use. Moodle, the class management system that my university uses, is a perfect example of non-transparent software. When I create the gradebook, for example, there are many, many options available for how to aggregate and calculate grades. Moodle’s help modules are not actually very helpful, but if the software were transparent, that wouldn’t matter. I would be able to make a choice and immediately see how it changed what I was trying to do. That makes perfect sense to me as a way to design software.
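
Transparency in this sense is cheap to prototype: every time the user changes an aggregation setting, immediately recompute and show the resulting grade. A minimal sketch with made-up scores, not Moodle’s actual gradebook logic:

```python
# Minimal sketch of transparent settings: changing the aggregation method
# instantly re-displays the resulting grade. Scores are made up; this is
# not Moodle's actual gradebook code.
scores = {"quizzes": 85, "essays": 72, "final": 90}
weights = {"quizzes": 0.3, "essays": 0.3, "final": 0.4}

def aggregate(method):
    if method == "simple mean":
        return sum(scores.values()) / len(scores)
    if method == "weighted mean":
        return sum(scores[k] * weights[k] for k in scores)
    raise ValueError(method)

for method in ("simple mean", "weighted mean"):   # the instant preview
    print(f"{method}: course grade = {aggregate(method):.1f}")
```

Seeing the grade change from 82.3 to 83.1 the moment the setting changes is what lets the user build the “intuition” Murray says interfaces shouldn’t presume.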

This book is full of illuminating observations and has already helped me to think more clearly about the technology that I encounter.


