Desert of My Real Life

{July 2, 2016}   Ten Months In

Last August, our new president, Don Birx, spoke to eager Plymouth State University employees about his vision for the campus to be reorganized around strategic clusters and open labs. Even though we are still at the beginning of implementing this vision, I think there are some lessons to be learned now that we are ten months into this process.

As a community, we have been figuring out what strategic clusters and open labs are. We have been working to implement these things even while we figure out what we mean by these terms. I think we might have moved more quickly to implementation if we had taken the time to really figure out what we mean by strategic cluster, open lab, and so on. In the rush to implementation, we actually floundered a bit last academic year in a way that I don’t think we necessarily needed to. So that’s my first lesson. The community needs to understand a bit about what we’re trying to implement before we actually start implementing it. Despite that early misstep, I think we’re coming to some concrete understandings of these terms.

As a cluster guide, someone charged with moving the initiative forward over the next academic year (including this summer), I have been fortunate to discuss and shape our understandings of these terms. We're still working on them and will engage in discussions with faculty and staff when we all return to campus in August to really solidify the definitions. But here's my current understanding of what we're doing. A cluster is an affinity group composed of programs and the resources, including people, attached to those programs. A cluster differs from a department or a college because of intention. We bring these resources together into a cluster with the intention of working across our individual disciplines in some way: through projects, through curriculum, through teaching and pedagogy, through open labs, through service. Open labs are spaces where this working together might happen. So a cluster can be thought of, broadly speaking, as "who" comes together, and open labs as "where" the coming together happens, a space of potential since we won't always know what will arise when we come together. Cluster projects and other cluster activities are "what" we are working on. The projects and other activities will focus on work that is useful beyond the class that a student is currently taking, giving the student "real world" experience. These definitions are maybe necessarily a little slippery. But I think we (the group of cluster guides) are beginning to have common understandings of the terms.

There are also lots of questions about why we are engaging in this change and what the benefits will be. The president has said that he sees seven drivers for doing this. The first is related to the increasing fragmentation of knowledge that he believes characterizes the higher education landscape. As society learns more and more about the world and how it works, individuals know less and less because of their areas of specialization, their fragmented disciplinary knowledge. Strategic clusters are a way of trying to organize the university so that we bring individuals (faculty, staff, students, and external partners) with different disciplinary knowledge and perspectives together to work on large problems that cannot be solved by a single disciplinary approach. Students are then exposed to a variety of ways of looking at the world while getting hands-on experience. They will understand how what they're learning can be applied and integrated with what other people know. We already provide some such experiences for students. But the cluster initiative pushes us to provide multiple such experiences for a larger percentage (ideally, all) of our students.

The president hasn’t yet written the blog posts that lay out his six other drivers of the strategic cluster initiative so I can’t report on those yet. I hope he posts those sooner rather than later so that we can all think about them as we implement his vision. But even without those, I think the new vision of the university is exciting and will provide each student with an excellent education that will serve them well as they move into an unknown future. Plymouth State University will become known for this innovative approach to education, drawing students to us because they want exactly this kind of experience. Strategic clusters and open labs will represent a unique identity for Plymouth State University, distinguishing us from other institutions. All of that is exciting to me.

Our process so far has not been perfect. As I said, I wish we had taken time to discuss definitions before we tried to begin implementation of the vision. There are other issues with the specifics of the implementation structure we’ve put in place (how guides were chosen and putting programs into clusters as a first step, to name two that come to my mind) that I wish had gone differently. But it feels like we are overcoming those issues. We need to make mistakes and then learn from and overcome them.

The biggest lesson that I’ve learned so far in this process, however, has to do with responsibility. Until late this Spring, I kept waiting for someone to tell me things–tell me the definition of a strategic cluster, tell me how we will implement open labs, etc. But then I realized that there is no one to tell me those things. We are doing something really different here. So I am responsible for figuring those things out. That responsibility doesn’t come because I am a cluster guide (although that fact adds some urgency to my sense of responsibility). I am responsible because I am a member of the Plymouth State University community. We all need to figure this stuff out together. We have to engage in this process with curiosity and skepticism and with a sense of trying to move the initiative forward. I know it sounds corny but I really believe that the survival of higher education is in our hands. We are responsible. All of us.

{June 29, 2016}   Information Archives

I’ve been spending Wednesday mornings in the library this summer working on my Freaks and Geeks project in the company of other academics working on their own projects. One of the frustrating things about today’s session is that I’m trying to find a particular advertisement that NBC created for Freaks and Geeks that used the tagline “What high school was like for the rest of us.” And I can’t find it. This made me start thinking about all of the cultural ephemera that we have lost because we don’t pay attention at the start of a project to archiving the materials of the project.

As I’ve said before, I’m also working on a project this summer (and into the next academic year) that will transform my university’s structure around interdisciplinary clusters. No other university has attempted such a vast overhaul of the way it does things and so we are being watched by people all over the higher education landscape. I am serving as a guide for the project (I’m not always completely sure what that means). A group of us guides decided last week that we should be documenting the process of change as it occurs and no one is going to do that documentation if we don’t. So we’re working on a proposal describing how to do that. In the meantime, some of us have started our own personal documentation using various social media platforms. We don’t know exactly what will happen with the materials that we create and collect or how we will end up using them but we hope that we will be able to provide lessons (both positive and negative) to other universities that are thinking about major transformation initiatives.

Once again, I see connections between my two major projects this summer, even though they seem very different from each other on the surface. This idea of connections also got me thinking about how I do my research for the Freaks and Geeks project (which is no different than the way most people do research). I sometimes find myself having followed paths of inquiry that have led me in very different directions compared to where I thought I was going. For example, I was researching other television shows related to high school. The TV show James at 15 is on that list. It was only on for two seasons and I was just a year younger than the title character. I loved that show! It was another “realistic” look at high school kids, but with less comedy than Freaks and Geeks. I haven’t seen (or really even thought about) the show since its original airing. I wanted to know if it was as good as I remembered and so I did a bit of research, starting with Wikipedia. I discovered that Kim Richards played James’ sister in the show. That name seemed familiar but I couldn’t remember why. So then I researched her. She was Prudence in the show Nanny and the Professor, which I also loved when I was a really little kid. It turns out that Richards also was one of the original members of the cast of The Real Housewives of Beverly Hills, which is a show that I have never seen. The interconnectedness of knowledge and information would be an interesting premise for a blog called “Rabbit Hole” in which the author described their wanderings around the Internet that happen simply by clicking links and seeing where they end up.

Interconnectedness of knowledge, TV shows about high school, information archives. I get to think about all the fun things.

I have two major projects that I’m working on this summer. One is related to the television show Freaks and Geeks which I love and use in my Analyzing Television class every spring. The other is related to the development of strategic clusters at my University. In doing research for the Freaks and Geeks project, I discovered a comment by one of the stars of the show that made me think about our approach to strategic clusters. I love these kinds of connections that bring together the various aspects of my work.

If you don’t know Freaks and Geeks, it was a show that lasted only one season on NBC, airing in 1999-2000. It was created by Paul Feig, produced by Judd Apatow, and (mostly) directed by Jake Kasdan. Each of them had worked on other things before the show but, despite its short lifespan, the show brought them to prominence. One of the things that is remarkable about the show is that it launched the careers of some of our most successful young actors today. Many of those actors have gone on to be well-known in a variety of creative endeavors. Linda Cardellini was a 24-year-old actress when she was cast as Lindsay Weir, the leading role on Freaks and Geeks. She has since gone on to success in both television and films, winning a TVLand award for her role in ER in 2009. James Franco, who portrayed Daniel Desario in his first major role, has starred in blockbuster movies and television shows, taken smaller roles in critically-acclaimed films, hosted the Oscars, published poetry and short stories, written and directed documentaries and docudramas, and starred on Broadway. Jason Segel, who portrayed Nick Andropolis, starred in the hit television show, How I Met Your Mother, and has achieved commercial and critical success in his film career. Seth Rogen, who portrayed Ken Miller, was nominated for an Emmy as a staff writer for Da Ali G Show, and has written, directed and starred in many movies. John Francis Daley, who starred as Sam Weir, also starred in the hit show Bones and co-wrote the movie Horrible Bosses, among other accomplishments. Creators Feig and Apatow are clearly very good at identifying young talent.

Based on some comments by the cast members, however, I would argue that Feig and Apatow were also very good at nurturing young talent. For example, Segel, Rogen, and Franco, who at the time were 19, 16, and 20 respectively, would get a script written by someone else on Friday and then get together on Sunday to “improve” it. Rogen has said, “We felt if we made the scenes better on the weekend, if we came in with better jokes, they would film it. And they would! And we didn’t know it at the time, but that was completely un-indicative of probably every other show that was on television.” Reflecting on the experience, co-star Busy Philipps comments, “I don’t think it’s surprising that 8 or 10 of us that were on the show have successfully written and produced our own things. … Judd and Paul and Jake and all of the writers made us feel like all our ideas were worth something, when so many other people were telling me that basically I was a talking prop.”

These comments make me think about my University’s current effort to move to a strategic cluster orientation for our academic experience. Strategic clusters are a way of organizing a university around discipline-based affinity groups. The idea is that faculty, staff, students, and outside partners with similar interests work on problems, tasks, events, and so on across disciplines, each bringing their unique disciplinary knowledge and perspective to the endeavor. The reorganization of a university into clusters is a huge project but one that is likely to have many benefits. The benefit that I’m most excited about is that students, as part of their regular academic experience rather than as an add-on, will engage in work that will be useful outside of the classroom. I think students take the work more seriously when they perceive that there is an audience for it beyond the instructor of the course and a use for the work beyond the existence of the course. For example, the student blogs for my Analyzing Television class are more insightful and of a higher quality than the papers students used to write for the class that only I would read. I think that’s because the blogs are public and I work to make them known to a larger community of readers who will give the students feedback on what they’ve written.

The comments from the Freaks and Geeks cast members make me think of another benefit of strategic clusters. If student ideas are taken seriously on these “real-world” projects, they will see their participation as more important than just being “a talking prop.” Encouraging student ideas and actually using their ideas on these projects will benefit both the projects that the students are currently working on and the students themselves in the long term as they see themselves as vital, valuable contributors to their disciplines. If the creators of a very public television show can use the work of a group of college-aged people in a serious way, so can we. And so should we.

{February 21, 2016}   Prometheus or Misogyny on a Blog

I haven’t paid any attention to this blog for about a year and a half. That’s what being a department chair does to you. I was inspired yesterday to write about the Apple vs. the FBI controversy and so logged on for the first time in a long while. I have my comments set up so that I have to approve them before they get posted to the blog. I was surprised to discover that I had one comment waiting for approval. It was posted to my About the Author page on February 2, 2016, by an anonymous poster. I don’t allow anonymous comments but this person made up a name (“Mother”) and an email address that even has a made up domain name.

Here’s what “Mother” says in the comment: “Just read your Prometheus ‘review’ – Bottom line. Everything to you is Misogynistic.”

I wrote my review of the movie Prometheus in June 2012, and titled it “Prometheus or Misogyny on a Space Ship.” I’m so curious about what brought “Mother” to a review on a fairly defunct blog more than three and a half years after the movie’s release. I have no idea what that’s about. I’m also curious about the reasons for “Mother” choosing to place his (almost surely “his”) comment on the About the Author page rather than on the movie review post itself. I would guess that “Mother” meant his comment to be directed at me as a person rather than engaging with the ideas that I presented in the actual review. Finally, I’m curious about the decision by “Mother” to post the comment anonymously with a made up email address. Clearly, “Mother” doesn’t want to engage in any real conversation about the merits of the movie.

Despite the fact that “Mother” doesn’t want to engage with me, I will respond to him by saying this: your statement is demonstrably false. Do a search for “misogyny” on my blog and you will see that this is the only post that uses the word. I discuss many movies throughout my blog and have not identified any others as misogynistic. For example, I also didn’t like the movie Disgrace but my reasons for that dislike have nothing to do with seeing the movie as misogynistic. There is a huge difference between identifying one movie as being misogynistic and identifying “everything” as misogynistic.

Do you really not see this difference? If you don’t see the difference, you aren’t very smart. If you do see the difference, I don’t understand the point you’re trying to make. Especially when you behave like a coward and don’t identify yourself.

{February 20, 2016}   Apple vs. The FBI

I’ve been reading a lot about the controversy surrounding the court order compelling Apple to help the FBI break into the phone used by one of the San Bernardino killers, Syed Farook. I think at this point, I mostly understand the technical issues although the legal issues still confound me. And there’s a significant question that I’m not seeing many people discuss but would help me to understand the situation better.

Here’s what the case is about. The iPhone used by one of the killers is owned by his employer, San Bernardino County. The FBI sought and received a court order to confiscate the phone with the intention of gathering the data stored on it. The County willingly turned the phone over. As an aside, there is currently a controversy in which the FBI says that a County employee, working on his own, reset the iCloud password associated with the phone after turning it over, which means one possible method for retrieving the data from the phone is no longer available. The County claims that its employee reset the password under the direction of the FBI. Somebody is lying. If the FBI really did direct the employee to reset the password, they need to hire more adept technologists. The news stories about this controversy neglect to mention that the method in question would only have worked if Farook had not changed his password after he turned off the automatic iCloud backup. I think that’s pretty unlikely.

So, the FBI has physical access to the iPhone but the problem is that the phone has two layers of security. The first is that it will automatically delete all of its data if someone enters an incorrect password 10 times. The second is that the data on the phone is encrypted which means that it can’t be read unless the password is entered. The FBI sought and received a court order to require Apple to “bypass or disable” the feature that wipes the phone clean. Doing so would then allow the FBI an unlimited number of password attempts to decrypt the data stored on the phone. Apple’s response to the court order is that to comply would be to put the data of every iPhone user in jeopardy.
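To make concrete why the wipe feature is the crux of the order, here's a minimal sketch (hypothetical code, nothing like Apple's actual implementation): a four-digit passcode has only 10,000 possibilities, so it's the 10-attempt limit, not the passcode itself, that defeats a brute-force search.

```python
# Toy illustration: brute-forcing a 4-digit passcode. With the wipe
# limit in place, the search dies after 10 tries; with the limit
# removed, exhausting all 10,000 combinations is trivial.

def brute_force(check_passcode, max_attempts=None):
    """Try every 4-digit passcode; give up if an attempt limit is hit."""
    for attempt, guess in enumerate(f"{i:04d}" for i in range(10_000)):
        if max_attempts is not None and attempt >= max_attempts:
            return None  # locked out (or wiped) before finding it
        if check_passcode(guess):
            return guess
    return None

secret = "7294"  # hypothetical passcode
check = lambda guess: guess == secret

print(brute_force(check, max_attempts=10))  # None: the wipe limit wins
print(brute_force(check))                   # '7294': unlimited tries win
```

Real iPhone passcodes can be longer and alphanumeric, and the hardware adds escalating delays between attempts, but the principle is the same: the attempt limit is what turns a small search space into a meaningful defense.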

One of the things that confused me about this story was that I kept hearing and reading reports about Apple helping law enforcement to unlock iPhones many times in the past. The folks over at Tech Crunch helpfully explained that Apple’s current response is not hypocritical. For iPhones running the operating system iOS 7 (and previous versions of iOS), Apple had the ability to extract data from the phones. And so it complied with court orders requiring it to extract data from iPhones. For iPhones running iOS 8 and later, Apple removed that capability. Apple has stated that the company wants to protect its users’ data even from Apple. The iPhone in question is running iOS 9. So Apple does not currently have the capability to extract data from the phone in the ways that it has in past cases. In order to comply with the court order, Apple would need to write some new software, a version of iOS with the phone wiping feature disabled, and then install it on the iPhone in question. The court order requires Apple to provide “reasonable technical assistance.” Is writing new software “reasonable technical assistance”?

But here’s the question that I haven’t found an answer for. Is there a precedent for the government compelling a person (remember: corporations are people so Apple is a person, right?) to build something that doesn’t already exist? The case that’s being cited as a precedent seems to me (admittedly, not a lawyer) to be pretty different. In that case, the Supreme Court said that the government could compel The New York Telephone Company to put a pen register (a monitoring device) on a phone line. But the telephone company already had the technology to monitor phone lines so it wasn’t as though they were being compelled to create a new technology. Apple is being asked to write a new piece of software, to build something that doesn’t already exist. This diversion of resources is one of their grounds for objecting to the court order. So, John McAfee has offered to write the software for free. It isn’t clear, however, that writing the software is enough since iPhones will only work with software that has been signed by Apple. Even if McAfee was successful, the government would still need Apple’s cooperation. And that’s unlikely since Apple’s philosophy is that their products should provide their customers as much data security as possible.
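To illustrate why McAfee's offer doesn't get around Apple, here's a toy sketch of code signing using textbook RSA with tiny numbers and a made-up "digest" function (this illustrates the concept only; it is not real cryptography and not Apple's actual scheme): the device checks a signature against the maker's public key before booting, so anyone can write replacement firmware, but only the private-key holder can produce firmware the phone will accept.

```python
# Toy code-signing sketch (textbook RSA, tiny numbers, NOT secure).
# The device refuses to boot firmware whose signature doesn't verify
# against the manufacturer's public key.

n, e, d = 3233, 17, 2753   # toy RSA key pair (public: n, e; private: d)

def digest(firmware: bytes) -> int:
    # Stand-in for a real cryptographic hash.
    return sum(firmware) % n

def sign(firmware: bytes) -> int:
    # Only the private-key holder (here, "Apple") can do this.
    return pow(digest(firmware), d, n)

def device_will_boot(firmware: bytes, signature: int) -> bool:
    # The device verifies the signature with the public key before booting.
    return pow(signature, e, n) == digest(firmware)

official = b"OFFICIAL"   # hypothetical Apple-signed build, wipe intact
modified = b"MODIFIED"   # hypothetical third-party build, wipe disabled

sig = sign(official)
print(device_will_boot(official, sig))   # True
print(device_will_boot(modified, sig))   # False: no valid signature for it
```

Writing the modified build is the easy part; without the private key, there is no way to produce a signature the device will accept, which is why the government would still need Apple's cooperation even if McAfee delivered working software.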

Ultimately, I agree with Bruce Schneier that the American public is best served if Apple does not comply with the government’s order. The government says that this request would be a one time thing, that they would not ask for such assistance again. I don’t believe that. Even if I did believe that the government would not ask again, I don’t think we can keep such software, once it exists, out of the hands of the many, many hackers who want to steal your data. That is a threat to our everyday lives that far outweighs the threat of terrorism.

Addendum (2/21/16): I’ve read some articles that take issue with Apple CEO Tim Cook’s “slippery slope” argument. His argument has been that if Apple complies with this order to circumvent the iPhone feature that wipes the phone clean after 10 incorrect password attempts, they will have no basis to refuse to do so in the future. Every time the US government asks them to circumvent the feature, they will have to do so. Government lawyers have said that this request is about this phone only and that they won’t ask in other cases. Tell that to Cyrus Vance, Jr., the district attorney in Manhattan. On Weekend Edition this morning, Vance argued that Apple should comply with the order because they are circumventing law enforcement’s ability to view the data on more than 175 phones related to criminal investigations. If this software is available for use by law enforcement officials, it will be available for use by the “bad guys.” That puts everyone’s data in jeopardy. Apple is protecting your ability to keep your data out of the hands of hackers (whether they work for the government or not).

I went to a conference sponsored by UBTech a few weeks ago. It was advertised as a “national summit on technology and leadership in higher education” and had a tag line of “Your future is being determined. Be there.” Not surprisingly, many of the sessions were focused on “disruptive innovation,” the idea that new technologies are disrupting the traditional university.

The keynote speaker on the second day of the conference was Richard Baraniuk, a professor at Rice University and founder of OpenStax, a company dedicated to developing text books that are given to students for free. The company has so far published two online text books, most notably, one for a standard College Physics class. Baraniuk talked a lot about disruption and what that will mean for colleges and universities and suggested that we all go out and read the work of the creator of the theory of disruptive innovation, Clayton Christensen. Looking at Christensen’s own summary of disruptive innovation, I was struck by the fact that Baraniuk neglected to discuss one of the major components of the theory. That is, Baraniuk never mentioned the fact that disruptive innovation happens at the “bottom of the market.” While companies are focused on providing quality products or services at high cost to their most sophisticated customers (sustaining innovations), other companies come into the market to provide cheaper, lower quality products or services to customers who traditionally have not been able to afford those products or services (disruptive innovations). It’s an interesting omission since when I’ve heard proponents of these ideas talk about their application to higher education, they have insisted that quality will not suffer. Another interesting insight from Baraniuk’s presentation came when someone asked about the cost of creating these “free” text books and Baraniuk answered that each one costs about a million dollars to develop. When I looked into the business model of OpenStax, I found that their initial funding comes from a series of foundations (the Bill and Melinda Gates Foundation, the William and Flora Hewlett Foundation, etc.). Of course, that isn’t sustainable so OpenStax also provides premium services and content to supplement the free versions of their text books.
For example, they have created an iPad version of their College Physics text that costs $4.95. That, of course, is significantly cheaper than a traditional text book but the question remains whether students will pay even that small amount when a free version of the text exists. Other “free” text book publishers, such as Flat World Knowledge, have stopped providing a free version and instead, have had to focus on low-cost texts. This model of text book delivery would still be much more affordable for the student than the traditional model but I think it remains to be seen whether it will be financially viable. It seems to me that to make it work, someone will have to come up with a disruptive innovation in funding models, perhaps something like crowd-funding?

Less than a week after I came back from the conference, The New Yorker published historian Jill Lepore’s critique of the theory of disruptive innovation. The gist of her critique is that the theory has been applied to industries that are very different than the manufacturing industries that Christensen initially studied. And, she says, the theory doesn’t even work particularly well with those manufacturing industries because Christensen’s methods were lacking–he cherry-picked examples, he ignored potential complexities in causation, he arbitrarily chose time frames that would artificially support his claims, and so on. In other words, she says the theory doesn’t work very well to explain how businesses succeed and fail. I am most interested in Christensen’s claims that his theory has predictive value, that is, that it can help us to determine which companies will succeed and which will fail. As an educator, this would help me to figure out how to deal with the disruptive innovations that higher education is facing. Unfortunately for me, the record is pretty clear that this theory hasn’t been very useful as a predictive tool so far. For example, in 2007, he predicted that Apple would fail with the iPhone. We know that this prediction was incredibly wrong. In addition, in 2000, he started the Disruptive Growth Fund, a fund which used his theory to determine which companies to invest in. The fund was liquidated a year later because it lost significantly more money than the NASDAQ average during that time. A Tribune writer quipped that “the only thing the fund ever disrupted was the financial security of its investors.” Ouch.

Christensen hasn’t written a formal response to Lepore’s article but he did give this really weird interview to Business Week. I was particularly interested in his response to the lack of predictive value of the theory. First, he says that he was not advising the guy who was running the Disruptive Growth Fund so you can’t claim that it is a failure of the theory. Regarding the iPhone, he says that he labeled it as a sustaining innovation (doomed to fail) against Nokia’s smart phone instead of labeling it as a disruptive innovation (destined to succeed) against the laptop. That definitely sounds like predicting the future after it has already happened. But this explanation brings up one of the things that has confused me most about the theory. One of his main examples involves the hard disk drive industry. I don’t know anything about the business of that industry but I certainly know something about the technology. It seems weird to me that he would label the reduction in size of hard drives as “new technology.” It seems to me that to go from a 3.5-inch hard drive to a 2.5-inch hard drive is an incremental improvement in the technology, a tweak, (a sustaining innovation?) rather than a disruptive innovation. Perhaps he explains his methodology for categorizing innovations in his books, which I have not read, but I haven’t been able to find such an explanation so I sort of doubt it exists. If we don’t have a method for this categorization, and we categorize innovations after the fact, how can the theory help us to understand what has happened in the past or what will happen in the future?

And that leads me back to higher education. One of the hot topics at the conference was MOOCs, those massive open online courses that gained popularity a few years ago, especially after the publication of The Innovative University, Christensen’s co-authored book applying disruptive innovation theory to higher education. The idea of a MOOC is that an expert in a particular field records a bunch of lectures about her field of expertise and makes that content available online for all who want to take the course. There are assignments that may be evaluated by others taking the course. There may be credentials (certificates, badges, etc.) that are given to those who successfully complete the course. Sometimes, the student has to pay to receive the credentials but the cost is minimal. Some of these MOOCs have had thousands, even hundreds of thousands, of students. I was one of the 58,000 people who signed up for the Introduction to Artificial Intelligence course offered by Sebastian Thrun and Peter Norvig a few years ago. Like most of those who started the course, I didn’t finish it. I just wanted to see how it worked. There was a lot of text to read. There were some recorded lectures. There were assignments to complete on my own. There were discussion boards where I could discuss the assignments with other students. Other than this interaction with the other students, it felt like a correspondence course from the 1970s. Christensen has labeled this kind of online delivery a disruptive innovation. And perhaps it is. But lecture-based education (“sage on the stage”) has been criticized as old-fashioned by many. It seems to me that the very idea of a MOOC relies on the idea of a “sage on the stage.” In fact, much online education is developed and delivered in that manner. Why would this kind of education be cutting edge if it’s delivered via the Internet and old-fashioned if it’s delivered face-to-face?

I don’t know where any of this will lead us in higher education. But the conference was interesting because it prompted some new thinking for me in the area of disruptive innovation as well as in several other areas. I’m looking forward to continuing these conversations at PSU during Faculty Week in August when Ann McClellan and I will lead a discussion on the ideas and technologies we heard about at the conference.

{June 19, 2014}   HCF Redux

Three episodes into Halt and Catch Fire and I still can’t make up my mind about whether it is an interesting show or not. I really want to like this show. I love that it isn’t afraid to be confusing about the underlying geeky details of computing. The show almost relishes those moments when characters articulate what they’re thinking about the technology without speaking down to its audience. On the other hand, the motivations and actions of the characters outside the realm of technology are the stuff of melodrama and really cheapen the engagement we might have in the pseudo-historical story of developing a new technology that is very different than all that has come before.

Spoiler alert: I'm going to discuss one major plot twist below, so if you haven't yet watched the first three episodes of this show, you might want to stop reading.

One of the reasons that this show has intrigued me is that Cameron Howe, the (genius) developer of the BIOS of the new personal computer in the show, is a woman. She is androgynous in her name and her appearance and she is brilliant and defiant. All of that intrigues me when the story takes place in the early 1980s. She is focused on developing this really base-level machine code without which the hardware cannot succeed. So psyched that a woman is central to the success of this new machine. On the other hand, she is the only character who is shown shopping for new clothes because, of course, in the middle of trying to revolutionize computing, she would be concerned that her clothing isn't feminine enough. Annoying.

Another woman in the show, Donna Clark, is portrayed as both the nagging wife of our hardware genius, Gordon, and the unacknowledged originator of the chip layering idea that we already know will be the thing that allows our new computer to be light enough to be portable. I might appreciate the complexity of this character if it wasn't done in such a shallow, obvious manner. Donna seems to be the inhibitor of Gordon's real genius because she keeps reminding him that he has children and they might need a little bit of his attention. The bird that shows up in episode three was a bit much for me, especially when Donna was the one who had to be practical and kill it with a shovel. Metaphor, anyone?

Lee Pace's portrayal of Joe MacMillan has been particularly annoying. His single emotion seems to be anger. The story line about the scars on his chest is only interesting if the creators take advantage of the inconsistencies that Cameron pointed out in his telling of how he got them. I get it. He's angry. With EVERYONE. So let's start explaining some of the past events that have so far been alluded to. And here's the big spoiler: what is up with the sex scene with LouLu's boy toy? That was a plot twist that surprised me. But I don't think Lee Pace is great in this role because he seems to think that playing a genius means constantly displaying arrogant anger. I think it would have been much more interesting if he had played that sex scene more tenderly.

So where do I currently fall in regards to this show? I still like that the show doesn't sugarcoat the technicalities of what this group of people is trying to achieve. I want the show to succeed in telling that story. On the other hand, I think the layering of the interpersonal relationships has been a bit heavy-handed and has taken away from what might be a powerful story.

{June 5, 2014}   HCF

I just watched the pilot episode of AMC's new show, Halt and Catch Fire, which airs in Mad Men‘s Sunday 10pm slot. I was pretty intrigued by the slew of previews I saw while watching this spring's half season of Mad Men (and by the way, since when does a season start in the spring of one year, take nearly a year hiatus, and then end in the spring of the following year?). I definitely recognize that a show about building a new computer in the early 1980s has the potential to be incredibly boring. There was a lot of good stuff in the pilot as well as some potentially bad stuff, but I definitely wasn't bored.

One of the annoying things about the show is the arrogant genius behaving badly trope. Lee Pace plays the first arrogant genius, Joe MacMillan. When Joe is introduced to us, he is driving his Porsche very, very fast and runs over an armadillo, which is our first clue that he's in Texas. Joe makes speeches full of the vision thing and gets annoyed when his fellow computer salesman, Gordon, tries to talk about mundane details like free installation. He is a master manipulator, which I found annoying, but he has some mystery in his background, which I found intriguing. I look forward to finding out what he's been doing since his disappearance from his IBM job a year prior to the events of the show. The second arrogant genius is Cameron Howe, a woman who is a senior at an engineering school, where, for some unknown reason, Joe is a guest speaker. She is the misunderstood genius that no one pays attention to because she is so far ahead of her time. As Mackenzie Davis portrays her, Cameron reminds me of Watts, the Mary Stuart Masterson character in Some Kind of Wonderful, complete with anger at the world and a punk soundtrack playing on her Walkman. But she's a genius so we forgive her her quirks. The final genius is not as arrogant as he is depressed. Gordon Clark, played by Scoot McNairy, was the inventor of a failed computer who has been reduced to selling other people's computers. When we first meet him, he is drunk and his wife has brought their kids to the jailhouse to bail him out. He drunkenly reminisces about the failure of his computer–when they tried to turn it on to demo it, it wouldn't turn on. But he is also a visionary, having written an article for Byte magazine about open architectures for CPUs. Joe quotes that article to convince Gordon to come work with him on his new project.

Although I found the genius trope annoying and over the top, there was a lot about the show that I enjoyed. I really enjoyed the history of the show. Even though it’s fictional, it reminded me of a lot of things that I haven’t thought about in years. Byte magazine is one of those things. I loved that magazine and was a regular reader in the 1980s. It seemed completely believable to me that someone might have written an article for the magazine that inspired someone else to take a big chance on trying something new and different. Other mentions in the show that brought back memories: CP/M, SCP, the dominance of IBM (International Business Machines) in the computer industry of the day and the joys of playing Centipede at the arcade. I also liked the reverse engineering scene although I can understand that if you don’t have a tech background, that scene might have been confusing or boring or both. That’s probably why it’s kind of glossed over. Most viewers probably won’t be too excited about watching guys using an oscilloscope to record pin voltages and then recording the contents of 64K of ROM to get the BIOS instructions in assembly language. Just writing that sentence makes me smile. It’s a very cool scene.
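The idea behind that reverse-engineering scene can be sketched in a few lines of code. This is purely illustrative, with hypothetical names; the show (of course) doesn't specify any actual code. Once the pin-voltage captures tell you which byte is stored at each ROM address, the first step toward disassembly is just dumping the image as hex:

```python
# Illustrative sketch only: once logic-analyzer/oscilloscope captures
# give you the byte stored at each ROM address, you can dump the 64K
# image as address-prefixed hex lines, ready for a disassembler.

def hex_dump(rom_bytes, width=16):
    """Format captured ROM bytes as address-prefixed hex lines."""
    lines = []
    for offset in range(0, len(rom_bytes), width):
        chunk = rom_bytes[offset:offset + width]
        hex_part = " ".join(f"{b:02X}" for b in chunk)
        lines.append(f"{offset:04X}: {hex_part}")
    return "\n".join(lines)

# A tiny stand-in for a captured image (a real 64K BIOS dump is 65,536 bytes).
captured = bytes([0xF4, 0xEB, 0xFE, 0x90] * 4)
print(hex_dump(captured))
```

Turning those hex bytes back into readable assembly instructions is the disassembler's job; the dump is just the raw material.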

I am a little torn by the title of the show. On the one hand, I think it’s cool that the title refers to an assembly language instruction, HCF. Assembly is a low-level computer language which means that there is a very low level of abstraction which means the programmer is very close to writing code in binary, the zeroes and ones that the computer understands. It is really geeky to program in assembly these days as most software is written in languages that contain instructions at a higher level of abstraction from binary. HCF is an instruction that halted operation of the computer by instructing it to repeat the same operation over and over. The “catch fire” part of the instruction comes from the story (myth?) that some of the wiring in an old computer heated up so much by this repetition that it actually caught fire. Nice. On the other hand, “halt and catch fire” seems like an obvious metaphor that sometimes the best laid plans blow up in your face. Bleh. In fact, metaphor in this show is pretty obvious. At one point, for example, when it looks like Gordon won’t work with him, Joe pulls out a bat that has the inscription “Swing for the fences” and so he does, literally, by hitting a ball over and over until he breaks a window. Not so subtle.
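The mechanics of a "halt" instruction like HCF can be illustrated with a toy simulation. This is not real assembly for any actual chip, just a sketch of the idea described above: the program counter never advances past the offending instruction, so the CPU repeats the same operation forever (or, per the legend, until something catches fire).

```python
# Toy simulation of an HCF-style instruction: a "jump to self" leaves
# the program counter unchanged, so the fake CPU repeats the same
# operation forever. Not real assembly for any actual chip.

def run(program, max_steps=100):
    """Step a tiny fake CPU over a list of opcodes; return the final pc."""
    pc = 0
    for _ in range(max_steps):      # cap steps so the demo terminates
        op = program[pc]
        if op == "JMP_SELF":        # our stand-in for HCF's tight loop
            pass                    # pc is left unchanged: stuck forever
        else:
            pc += 1                 # normal instructions advance the pc
        if pc >= len(program):
            break
    return pc

print(run(["NOP", "NOP", "JMP_SELF", "NOP"]))  # pc sticks at 2, never reaching the last NOP
```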

A couple of other things made me roll my eyes as well. Most of the bonding/conflict stuff between Cameron and Joe, for example. The trick quarter, the conversation about VLSI, and the stupid sex scene all seemed too superficial and lazy. But I understand that first episodes are tricky. The characters have to be introduced and established quickly and so shortcuts are often taken. I just hope the show relies more on the cool stuff once the story is established. I will keep watching to see what they do with this fairly promising start.

{September 22, 2013}   Social Media Round Up

Now that the craziness of the start of the semester has begun to slow down, I thought I’d do a quick hit on a variety of social media topics that I’ve been thinking about in the last few weeks but have not yet found the time to write about.

A few weeks ago, Twitter updated its rules to make it clear that abuse would not be tolerated. The events that prompted the rule updating included specific bomb threats and threats of rape sent to women journalists and politicians. Many of the commenters on the articles covering this story think that it was improper for Twitter CEO Tony Wang to apologize to the women in question. Other comments suggest that it's stupid to try to police these kinds of threats because it's not going to make a difference. Still other comments suggest that unless someone breaks the law, Twitter should not "censor" tweets. My main response to these comments is that making direct and specific threats against a particular individual is indeed against the law. It doesn't seem to be a terrible thing to me that Wang chose to apologize to individuals who had crimes committed against them using his product. In fact, that seems to make good business sense. And I agree that rules alone won't make a difference in changing the tone of discourse on Twitter. There has to be enforcement of those rules as well. So I hope Twitter will follow up on its promises to make reporting abuse easier and to hire more people to deal with such reports so that they can be handled more quickly. Twitter didn't handle this issue particularly well, in my opinion, but they are taking some first steps to fix the issue.

I use a variety of social networking sites at varying levels of activity. For example, I'm pretty active on Facebook, regularly posting status updates, photos and links to stories that I think my friends will be interested in. I am far less active on LinkedIn although I have many contacts in my network, mostly current and former students who are using the network professionally. I try to keep up with the various networks that are available so I decided recently to check out Google+. I've been using Google Calendar and Gmail for years so it felt like a natural step to set up a profile and get started with Google+. I've found so far that it is much more like Facebook than like LinkedIn but there's a bit of Twitter thrown in. It's like Facebook in that you have a stream very much like Facebook‘s newsfeed. You also share status updates, photos, etc. just like on Facebook. You can even "like" posts by others (called +1 in Google+). But like Twitter, Google+ has an option that allows you to follow people and organizations. In Facebook, your friendships are bidirectional in that both parties must agree to the relationship. In Twitter, you can follow someone to be able to see their public tweets and they do not have to follow you back. In other words, a relationship requires only a uni-directional connection. Google+ also only requires this uni-directional connection. So, in Google+, we get the sharing features of Facebook combined with the relationship features of Twitter. But Google+ also offers another feature that I think is pretty cool. One of the problems with Facebook is that all friends are treated equally on the network even if they aren't equal in real life. That has caused problems for lots of people. So Google+ allows the user to create different "circles" for their connections which will allow the user to easily manage the kinds of material people in a particular circle will see–just like in "real life." Another interesting aspect of Google+ is the "hangout" concept although I haven't played with entering them or creating them yet. Perhaps that will be the subject of a future post. The main problem with Google+, however, is that so few of the people I care about are using it. That's the draw of Facebook–many of the people I care about in "real life" are posting really interesting (and not so interesting) things on Facebook so I keep going back. Until more people migrate to Google+ in a meaningful way, I probably won't participate very much myself. Google faces a classic chicken and egg kind of problem here.
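The difference between these relationship models boils down to undirected versus directed edges in a graph. Here's a minimal sketch of that distinction, with hypothetical names and functions (none of these networks actually expose anything like this): a Facebook-style friendship is one mutual edge that both parties agree to, while a Twitter- or Google+-style follow is a one-way edge.

```python
# Sketch of the two relationship models: friendships as undirected
# (mutual) edges, follows as directed (one-way) edges. All names are
# illustrative, not any network's real API.

friendships = set()   # undirected: store each pair once, in sorted order
follows = set()       # directed: (follower, followed) tuples

def befriend(a, b):
    """Both parties agree, so one symmetric edge covers both directions."""
    friendships.add(tuple(sorted((a, b))))

def follow(follower, followed):
    """One-way: no agreement needed from the person being followed."""
    follows.add((follower, followed))

befriend("ann", "bea")
follow("cam", "ann")   # cam sees ann's public posts...

print(tuple(sorted(("bea", "ann"))) in friendships)  # True: mutual either way
print(("ann", "cam") in follows)                     # False: ann doesn't follow cam back
```

Circles then layer on top of this: instead of one undifferentiated friend list, each connection can be placed in one or more named groups that control what that group sees.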

I regularly check out new social media tools, just to see what they’re about. Some of the tools become part of my repertoire (Tumblr, Flickr) while some do not or, at least, haven’t yet (Klout, Medium). One tool that was quite intriguing to me when I first looked at it but then kind of disappointed me was Storify. It’s a tool that is designed to allow people to curate social media artifacts to tell a story. I wrote one story ten months ago and then forgot about it. As I was thinking about the things I wanted to write about in this round up of my social media activity, I remembered that I had written that one story and went back to check what’s been going on in that social media world. I was surprised to find that my story had 56 views. That may not sound like much activity for 10 months, but I had done nothing to bring attention to the story and none of my friends (as far as I know) are members of that community. I have no idea how many people read each one of these blog posts but I’m guessing it is far fewer than 56 people. So Storify is back on my radar although I’m not sure how I might use it yet.

It's difficult to keep up with what's going on in the world of social media. I would like a tool that helps me keep up with what's available and helps to put it all together in a way that makes sense.



{August 9, 2013}   When is Failure Really Failure?

If you are involved in higher education in any way, you have heard about Massive Open Online Courses (MOOCs). I first heard of them back in the Fall of 2011 when I was one of 160,000 students to sign up for an online class in Artificial Intelligence (AI). I have a PhD in Computer Science and my area of specialization was Machine Learning, a branch of AI. I have taught AI at the undergraduate level. So I wasn't signing up for the course because I wanted to learn the content. Instead, I wanted to understand the delivery method of the course. In particular, I wanted to figure out how two preeminent AI researchers, Sebastian Thrun and Peter Norvig, would teach a class to over 150,000 students. I spent a couple of weeks working through online video lectures and problem sets, commenting on my fellow students' answers and reading their comments on my answers. I stopped participating after a few weeks as other responsibilities ate up my time. It isn't clear to me how many other people signed up for the class for reasons similar to mine. 23,000 people finished the course and received a certificate of completion. Based on the "success" of this first offering, Thrun left his tenured faculty position at Stanford University and founded a start up company called Udacity.

There has been a lot of hype about MOOCs in general and Udacity in particular. It’s interesting to me that many of these MOOCs seem to employ pedagogical techniques that are highly criticized in face-to-face classrooms. In this advertisement, for example, Udacity makes a big deal about the prestige and reputations of the people involved in talking at students about various topics. Want to build a blog? Listen to Reddit co-founder Steve Huffman talk about building blogs. In other words, these classes rely heavily on video lectures. The lecture format for face-to-face classrooms, for example, is much maligned as being ineffective for student learning and mastery of course content. Why, then, do we think online courses which use video lectures (from people who have little training in how to effectively structure a lecture) will be effective? The ad also makes a big deal about the fact that the average length of their video lectures is 1 minute. Is there any evidence that shorter lectures are more effective? It depends on what else students are asked to do. The ad makes bold claims about the interactivity, the hands-on nature of these courses. But how interactivity is implemented is unclear from the ad.

Several people have written thoughtful reviews of Udacity courses based on participating in those courses. Robert Talbert, for example, wrote about his mostly positive experiences in an introductory programming class in The Chronicle of Higher Education. Interestingly, his list of positive pedagogical elements looks like a list of game elements. The course has clear goals, both short and long-term. There is immediate feedback on student learning as they complete frequent quizzes which are graded immediately by scripts. There is a balance between challenge presented and ability level so that as the student becomes more proficient as a programmer, the challenges presented become appropriately more difficult. And the participants in the course feel a sense of control over their activities. This is classic gamification and should result in motivated students.

So why are so many participants having trouble with these courses now? Earlier this year, Udacity set up a partnership with San Jose State to offer courses for credit for a nominal fee (typically $150 per course). After just two semesters, San Jose State put the project on hold earlier this week because of alarmingly high failure rates. The courses were offered to a mix of students, some of whom were San Jose State students and some of whom were not. The failure rate for San Jose State students was between 49 and 71 percent. The failure rate for non-San Jose State students was between 55 and 88 percent. Certainly, in a face-to-face class or in a non-MOOC class, such high failure rates would cause us to at least entertain the possibility that there was something wrong with the course itself. And so it makes sense that San Jose State wants to take a closer look at what’s going on in these courses.

One article about the Udacity/San Jose State project that made me laugh because of its lack of logic is this one from Forbes magazine. The title of the article is "Udacity's High Failure Rate at San Jose State Might Not Be a Sign of Failure." Huh? What the author means is that a bunch of students failing a class doesn't mean there's something wrong with the class itself. He believes that the purpose of higher education is to sort people into two categories–those smart enough to get a degree and those not smart enough. So, his logic goes, those who failed this class are simply not smart enough to get a degree. I would argue with his understanding of the purpose of higher education but let's grant him his premise. What proof do we have from this set of MOOCs that they accurately sort people into these two categories? Absolutely none. So it still makes sense to me that San Jose State would want to investigate the MOOCs and what happened with them. Technology is rarely a panacea for our problems. I think the MOOC experiment is likely to teach us some interesting things about effective online teaching. But I doubt they are going to fix what ails higher education.
