Desert of My Real Life











I went to a conference sponsored by UBTech a few weeks ago. It was advertised as a “national summit on technology and leadership in higher education” and had a tag line of “Your future is being determined. Be there.” Not surprisingly, many of the sessions were focused on “disruptive innovation,” the idea that new technologies are disrupting the traditional university.

The keynote speaker on the second day of the conference was Richard Baraniuk, a professor at Rice University and founder of OpenStax, a company dedicated to developing textbooks that are given to students for free. The company has so far published two online textbooks, most notably one for a standard College Physics class. Baraniuk talked a lot about disruption and what it will mean for colleges and universities, and he suggested that we all go out and read the work of the creator of the theory of disruptive innovation, Clayton Christensen. Looking at Christensen’s own summary of disruptive innovation, I was struck by the fact that Baraniuk neglected to discuss one of the major components of the theory: disruptive innovation happens at the “bottom of the market.” While established companies focus on providing quality products or services at high cost to their most sophisticated customers (sustaining innovations), other companies come into the market to provide cheaper, lower quality products or services to customers who traditionally have not been able to afford them (disruptive innovations). It’s an interesting omission, since when I’ve heard proponents of these ideas talk about their application to higher education, they have insisted that quality will not suffer.

Another interesting insight from Baraniuk’s presentation came when someone asked about the cost of creating these “free” textbooks and Baraniuk answered that each one costs about a million dollars to develop. When I looked into the business model of OpenStax, I found that their initial funding comes from a series of foundations (the Bill and Melinda Gates Foundation, the William and Flora Hewlett Foundation, etc.). Of course, that isn’t sustainable, so OpenStax also provides premium services and content to supplement the free versions of their textbooks. For example, they have created an iPad version of their College Physics text that costs $4.95. That, of course, is significantly cheaper than a traditional textbook, but the question remains whether students will pay even that small amount when a free version of the text exists. Other “free” textbook publishers, such as Flat World Knowledge, have stopped providing a free version and instead have had to focus on low-cost texts. This model of textbook delivery would still be much more affordable for students than the traditional model, but I think it remains to be seen whether it will be financially viable. It seems to me that to make it work, someone will have to come up with a disruptive innovation in funding models, perhaps something like crowd-funding?

Less than a week after I came back from the conference, The New Yorker published historian Jill Lepore’s critique of the theory of disruptive innovation. The gist of her critique is that the theory has been applied to industries that are very different from the manufacturing industries that Christensen initially studied. And, she says, the theory doesn’t even work particularly well for those manufacturing industries because Christensen’s methods were lacking: he cherry-picked examples, ignored potential complexities in causation, arbitrarily chose time frames that would artificially support his claims, and so on. In other words, she says the theory doesn’t work very well to explain how businesses succeed and fail. I am most interested in Christensen’s claim that his theory has predictive value, that is, that it can help us determine which companies will succeed and which will fail. As an educator, that would help me figure out how to deal with the disruptive innovations that higher education is facing. Unfortunately for me, the record is pretty clear that the theory hasn’t been very useful as a predictive tool so far. For example, in 2007, Christensen predicted that Apple would fail with the iPhone. We know that this prediction was incredibly wrong. In addition, in 2000, he started the Disruptive Growth Fund, a fund that used his theory to determine which companies to invest in. The fund was liquidated a year later because it lost significantly more money than the NASDAQ average during that time. A Tribune writer quipped that “the only thing the fund ever disrupted was the financial security of its investors.” Ouch.

Christensen hasn’t written a formal response to Lepore’s article, but he did give this really weird interview to Businessweek. I was particularly interested in his response to questions about the lack of predictive value of the theory. First, he says that he was not advising the guy who was running the Disruptive Growth Fund, so you can’t claim that the fund is a failure of the theory. Regarding the iPhone, he says that he labeled it as a sustaining innovation (doomed to fail) against Nokia’s smart phone rather than as a disruptive innovation (destined to succeed) against the laptop. That definitely sounds like predicting the future after it has already happened. But this explanation brings up one of the things that has confused me most about the theory. One of his main examples involves the hard disk drive industry. I don’t know anything about the business of that industry but I certainly know something about the technology. It seems weird to me that he would label the reduction in size of hard drives as “new technology.” It seems to me that going from a 3.5-inch hard drive to a 2.5-inch hard drive is an incremental improvement in the technology, a tweak (a sustaining innovation?), rather than a disruptive innovation. Perhaps he explains his methodology for categorizing innovations in his books, which I have not read, but I haven’t been able to find such an explanation so I sort of doubt it exists. If we don’t have a method for this categorization, and we categorize innovations only after the fact, how can the theory help us understand what has happened in the past or what will happen in the future?

And that leads me back to higher education. One of the hot topics at the conference was MOOCs, those massive open online courses that gained popularity a few years ago, especially after the publication of The Innovative University, Christensen’s co-authored book applying disruptive innovation theory to higher education. The idea of a MOOC is that an expert in a particular field records a bunch of lectures about her field of expertise and makes that content available online for all who want to take the course. There are assignments that may be evaluated by others taking the course. There may be credentials (certificates, badges, etc.) that are given to those who successfully complete the course. Sometimes, the student has to pay to receive the credentials but the cost is minimal. Some of these MOOCs have had thousands, even hundreds of thousands, of students. I was one of the 58,000 people who signed up for the Introduction to Artificial Intelligence course offered by Sebastian Thrun and Peter Norvig a few years ago. Like most of those who started the course, I didn’t finish it. I just wanted to see how it worked. There was a lot of text to read. There were some recorded lectures. There were assignments to complete on my own. There were discussion boards where I could discuss the assignments with other students. Other than this interaction with the other students, it felt like a correspondence course from the 1970s. Christensen has labeled this kind of online delivery a disruptive innovation. And perhaps it is. But lecture-based education (“sage on the stage”) has been criticized as old-fashioned by many. It seems to me that the very idea of a MOOC relies on the idea of a “sage on the stage.” In fact, much online education is developed and delivered in that manner. Why would this kind of education be cutting edge if it’s delivered via the Internet and old-fashioned if it’s delivered face-to-face?

I don’t know where any of this will lead us in higher education. But the conference was interesting because it prompted some new thinking for me in the area of disruptive innovation as well as in several other areas. I’m looking forward to continuing these conversations at PSU during Faculty Week in August when Ann McClellan and I will lead a discussion on the ideas and technologies we heard about at the conference.



{August 9, 2013}   When is Failure Really Failure?

If you are involved in higher education in any way, you have heard about Massive Open Online Courses (MOOCs). I first heard of them back in the Fall of 2011 when I was one of 160,000 students to sign up for an online class in Artificial Intelligence (AI). I have a PhD in Computer Science and my area of specialization was Machine Learning, a branch of AI. I have taught AI at the undergraduate level. So I wasn’t signing up for the course because I wanted to learn the content. Instead, I wanted to understand the delivery method of the course. In particular, I wanted to figure out how two preeminent AI researchers, Sebastian Thrun and Peter Norvig, would teach a class to over 150,000 students. I spent a couple of weeks working through online video lectures and problem sets, commenting on my fellow students’ answers and reading their comments on my answers. I stopped participating after a few weeks as other responsibilities ate up my time. It isn’t clear to me how many other people signed up for the class for reasons similar to mine. 23,000 people finished the course and received a certificate of completion. Based on the “success” of this first offering, Thrun left his tenured faculty position at Stanford University and founded a start-up company called Udacity.

There has been a lot of hype about MOOCs in general and Udacity in particular. It’s interesting to me that many of these MOOCs seem to employ pedagogical techniques that are highly criticized in face-to-face classrooms. In this advertisement, for example, Udacity makes a big deal about the prestige and reputations of the people who talk at students about various topics. Want to build a blog? Listen to Reddit co-founder Steve Huffman talk about building blogs. In other words, these classes rely heavily on video lectures. Yet the lecture format in face-to-face classrooms is much maligned as ineffective for student learning and mastery of course content. Why, then, do we think online courses that use video lectures (from people who have little training in how to effectively structure a lecture) will be effective? The ad also makes a big deal about the fact that the average length of their video lectures is one minute. Is there any evidence that shorter lectures are more effective? It depends on what else students are asked to do. The ad makes bold claims about the interactivity, the hands-on nature, of these courses. But how that interactivity is implemented is unclear from the ad.

Several people have written thoughtful reviews of Udacity courses based on participating in those courses. Robert Talbert, for example, wrote about his mostly positive experiences in an introductory programming class in The Chronicle of Higher Education. Interestingly, his list of positive pedagogical elements looks like a list of game elements. The course has clear goals, both short- and long-term. There is immediate feedback on student learning as students complete frequent quizzes that are graded immediately by scripts. There is a balance between the challenge presented and the student’s ability level, so that as the student becomes more proficient as a programmer, the challenges become appropriately more difficult. And the participants in the course feel a sense of control over their activities. This is classic gamification and should result in motivated students.

So why are so many participants having trouble with these courses now? Earlier this year, Udacity set up a partnership with San Jose State to offer courses for credit for a nominal fee (typically $150 per course). After just two semesters, San Jose State put the project on hold earlier this week because of alarmingly high failure rates. The courses were offered to a mix of students, some of whom were San Jose State students and some of whom were not. The failure rate for San Jose State students was between 49 and 71 percent. The failure rate for non-San Jose State students was between 55 and 88 percent. Certainly, in a face-to-face class or in a non-MOOC class, such high failure rates would cause us to at least entertain the possibility that there was something wrong with the course itself. And so it makes sense that San Jose State wants to take a closer look at what’s going on in these courses.

One article about the Udacity/San Jose State project that made me laugh because of its lack of logic is this one from Forbes magazine. The title of the article is “Udacity’s High Failure Rate at San Jose State Might Not Be a Sign of Failure.” Huh? What the author means is that a bunch of students failing a class doesn’t mean there’s something wrong with the class itself. He believes that the purpose of higher education is to sort people into two categories: those smart enough to get a degree and those not smart enough. So, his logic goes, those who failed this class are simply not smart enough to get a degree. I would argue with his understanding of the purpose of higher education, but let’s grant him his premise. What proof do we have from this set of MOOCs that they accurately sort people into these two categories? Absolutely none. So it still makes sense to me that San Jose State would want to investigate the MOOCs and what happened with them. Technology is rarely a panacea for our problems. I think the MOOC experiment is likely to teach us some interesting things about effective online teaching. But I doubt MOOCs are going to fix what ails higher education.



{June 30, 2013}   Gamification and Education

A trending buzzword in today’s digital culture is gamification. According to most sources on the Internet, the term was coined in 2004 by Nick Pelling, a UK-based business consultant who promises to help manufacturers make their electronic devices more fun. Since then, the business world has jumped on the gamification bandwagon with fervor. Most definitions of the term look something like this: “Gamification is a business strategy which applies game design techniques to non-game experiences to drive user behavior.” The idea is that a business will add game elements to its interactions with consumers so that consumers will become more loyal and spend more time and money with the business.

We can see examples of gamification all over the place. Lots of apps give badges for participation and completion of various goals.  Many also provide leader boards to allow users to compare their progress toward various goals against the progress of other people. Airlines and credit card companies give points that can be redeemed for various rewards. Grocery stores and drugstores give discounts on purchases to holders of loyalty cards. Businesses of all types have added simple game elements like goals, points, badges, rewards and feedback about progress to compel the consumer to continuously engage with the business.

This type of gamification is so ubiquitous (and shallow, transparent, self-serving) that a number of prominent thinkers have decried the trend. My favorite is the condemnation written by the game scholar Ian Bogost. Bogost writes, “Game developers and players have critiqued gamification on the grounds that it gets games wrong, mistaking incidental properties like points and levels for primary features like interactions with behavioral complexity.” In other words, gamification efforts focus on superficial elements of games rather than those elements of games that make games powerful, mysterious, and compelling. Those superficial elements are easy to adapt to other contexts, requiring little thought or effort, allowing the marketers “to clock out at 5pm.” The superficial elements are deployed in a way that affirms existing corporate practices, rather than offering something new and different. Bogost goes on to say, “I realize that using games earnestly would mean changing the very operation of most businesses.” It’s this last statement that most interests me. What would “using games earnestly” look like?

Since 2007, I have been teaching a class called Creating Games, which fulfills a Creative Thought general education requirement at my university. The class focuses on game design principles by engaging students in the design and development of card and board games. Because of that content, I thought it would be a natural environment in which to test out some ideas about gamification and its role in education. So I made a number of changes to the course starting in the Fall of 2010. I added some of the more superficial elements of games to the class to help support the gamification effort. In addition, and more importantly, I added some game elements that I think start to change the very operation of the classroom. I think these deeper changes, involving “behavioral complexity,” are motivational for students, resulting in a more thorough learning of the content of the class.

To determine what to change about my class, I started with Greg Costikyan’s definition of a game, which he articulated in the article called I Have No Words & I Must Design. Costikyan says, “A game is a form of art in which participants, termed players, make decisions in order to manage resources through game tokens in the pursuit of a goal.” If we use this definition, then to gamify an activity, we would add players, decisions, resources to manage, game tokens, and/or a goal.

I thought about whether and how to add each of these game elements to my class and decided to add a clearly articulated goal, similar to the kinds of goals that are present in typical games. I focused the goal on points, which we call Experience Points (EP). At the start of the semester, students are told that they will need to earn 1100 EP in order to get an A in the course, 1000 EP for a B, 900 EP for a C, and 800 EP for a D; anything less than 800 EP results in an F. Students can then choose the letter grade that they want to earn and strive to achieve the appropriate number of points. I added a series of levels so that students could set shorter-term goals as they progressed toward the larger goal of reaching the specific grade they wanted. All students start the class at level 1 and, as they earn EP, they progress through the levels. The highest level is level 15, which requires 1100 EP to achieve and corresponds to earning an A in the class. The number of points between levels increases as the levels increase, so that early in the class students make fairly quick progress, but as they gain proficiency, they must work harder to reach the next level. For example, the difference between levels 1 and 2 is 30 EP, while the difference between levels 14 and 15 is 100 EP.

Costikyan also mentioned game tokens as a mechanism for players to monitor their status in the game. I added a weekly leader board to my class so that students would be able to see how the number of EP they’ve earned compares to their classmates’. These superficial elements of games were easy to add, just as Bogost suggested. I then started to think about how I might add game elements “earnestly,” in a way that creates something new and different for the students.
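For the programmers reading this, here is a rough Python sketch of how the EP bookkeeping could work. The grade cutoffs and the stated level thresholds (level 1 at 0 EP, level 2 at 30 EP, level 14 at 1000 EP, level 15 at 1100 EP) come from the description above; the thresholds in between are made up for illustration.

    # A sketch of the EP/level/grade bookkeeping, not my actual gradebook code.
    # Grade cutoffs and the level 1/2/14/15 thresholds come from the course
    # description; the thresholds in between are illustrative assumptions.
    from bisect import bisect_right

    # Cumulative EP needed to reach each level; index 0 is level 1.
    LEVEL_THRESHOLDS = [0, 30, 105, 181, 258, 336, 415, 495,
                        576, 658, 741, 825, 910, 1000, 1100]

    GRADE_CUTOFFS = [(1100, "A"), (1000, "B"), (900, "C"), (800, "D")]

    def level_for(ep):
        """Return the level (1 through 15) reached with `ep` experience points."""
        return bisect_right(LEVEL_THRESHOLDS, ep)

    def grade_for(ep):
        """Return the letter grade earned with `ep` experience points."""
        for cutoff, grade in GRADE_CUTOFFS:
            if ep >= cutoff:
                return grade
        return "F"

    # Example: a student with 930 EP has reached level 13 and currently holds a C.
    print(level_for(930), grade_for(930))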

In 1987, Malone and Lepper published a study called “Making Learning Fun: A Taxonomy of Intrinsic Motivations for Learning.” Their taxonomy includes four basic kinds of motivations for game players to continue to play games, and they suggest that educators think about ways to use these motivations for classroom learning. The four categories focus on challenge, control, curiosity, and fantasy. Adding points, levels, and a leader board all relate to the category called challenge, which involves clear goal statements, feedback on progress toward goals, short-term goals, and goals of varying levels of difficulty. I then focused on the category of motivations called control.

According to Malone and Lepper, control involves players making decisions whose consequences are significant but whose outcomes are uncertain. In fact, Costikyan says, “The thing that makes a game a game is the need to make decisions.” So for him, decision-making is the most important element of a game. It shouldn’t be surprising, then, that Malone and Lepper found control to be an important motivational factor in games. But in most classrooms, students make few decisions about their learning and have no control over their own activities. I decided that my gamification effort would focus on adding decision-making to the class.

Therefore, no activity in the class is required. Students decide which activities, from a large (and growing) array of activities, they would like to engage in. I give them the entire list of activities (with due dates and rubrics, just like in other classes) at the start of the semester, and students decide which of the activities they would like to complete. As a student moves through the class, if she thinks of a new activity not currently on the list that she would like to work on, she can work with me to formalize the idea; it will then be added to the list of possibilities for the rest of the students and will become a permanent part of the course for future offerings. The first time I taught the course, I had thought of 1350 EP worth of activities and required 1100 to earn an A. The latest offering of the course had 1800 EP worth of activities and still required 1100 to earn an A.

I have also made a semantic shift in the way I talk about points in the class. In most classrooms, when a student earns a 75 on an exam worth 100 points, the top of the paper will show a -25 to signify the number of points the student lost. I never talk about points lost but rather focus on the EP that has been earned. One of the nice consequences of this flipping of the focus is that students understand that if earning 75 points on an exam does not bring them close enough to their next goal, they will have to engage in additional activity in order to earn additional points. They could also choose not to engage in any additional activity. There are significant consequences either way, but the important point is that the student is in control and can make decisions about the best way to achieve his or her goal.
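Here is a similarly rough sketch of that “earned, never lost” bookkeeping from the student’s side. The activity names and point values are invented for illustration; only the grade cutoffs come from the course description.

    # Report progress toward a chosen grade goal: points are only ever earned.
    # Activity names and EP values are invented; the cutoffs are from the course.
    GRADE_GOALS = {"A": 1100, "B": 1000, "C": 900, "D": 800}

    def progress_report(earned, goal):
        """Summarize EP earned so far and EP still needed for the chosen grade."""
        total = sum(earned.values())
        needed = max(GRADE_GOALS[goal] - total, 0)
        lines = [f"{activity}: +{ep} EP" for activity, ep in earned.items()]
        lines.append(f"Total earned: {total} EP")
        lines.append(f"Still needed for a grade of {goal}: {needed} EP")
        return "\n".join(lines)

    # A hypothetical student aiming for a B: a 75 on a 100-point exam is
    # recorded as +75 EP, never as -25.
    print(progress_report({"Exam 1": 75, "Game prototype": 150, "Design journal": 40}, "B"))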

Student comments on my course evaluations suggest that students initially find it difficult to understand this grading system (because it is so different from what they are used to in their other classes) but once they understand it, they love it. They enjoy being able to decide whether to take an exam, for example. One way to determine whether students are learning the content of the class is to look at final course grades. Here is a comparison of a random Fall semester section of the course before I made this change to a random Fall semester section after I made the change:

Fall 2009 (before this change): A: 4, B: 6, C: 8, D: 4, F: 6
Fall 2011 (after the change): A: 18, B: 4, C: 2
On average, the students in the Fall 2011 section of the course did more work and engaged more often with the course content than did the students in the Fall 2009 section. And as I said earlier, students often think about the material independently to come up with their own assignments that are added to the course for everyone to choose from.

I wouldn’t suggest that I have changed “the very operation” of education. But I do think that an earnest focus on giving students more control over their own learning is a huge step in the right direction and moves us away from the bullshit that Bogost rightly complains about.


I don’t think anyone would accuse me of being a Luddite. I began to learn to program in the late 1970s when I was in high school, majored in computer science, worked as a software developer, and got a PhD in computer science. I love my tech toys and tools and think that, overall, we are better off with the technology we have today than we were before it was available. But I am often a skeptic when it comes to educational technology.

I was reminded of my skepticism about a month ago when I came across this photo and caption. For those of you who won’t click through, I’ll describe it. It is a photo of a classroom smart board being used as a bulletin board, with large sheets of paper taped to it, completely covering the smart board itself. The poster of the photo asks a number of questions, including whether the teacher who uses the equipment in this manner should be reprimanded for educational malpractice. The comments on the photo imply that the teacher’s use of the equipment in this way is evidence that she is resistant to using it appropriately. I was happy to see that the poster of the photo also asked about reasons a teacher might use the equipment in this way, such as not having had enough training. But I think the real issue is that the teacher has not had the right kind of training, and the probable reason for that is that the promoters of educational technology are almost always focused on the technology itself and not on the education that the technology can provide.

The fact that someone would consider reprimanding a teacher for using technology in this (admittedly inappropriate) way is part of the problem that I see in all corners of educational technology. When we engage in technology training for teachers, we almost always focus on how and not why. That is, we focus on how to use the technology and don’t engage in meaningful discussion of the pedagogical advantages of using the technology in the classroom. The impression, then, is that we want to wow our students with this new technology, to do something flashy because the flashiness will capture the attention of the students. I see several problems with this idea. First, if students are using similar technology in all of their classes, the newness of the technology wears off and the flashiness disappears. Second, we should be in the business of getting students to actually learn something, and if we don’t have proof that a particular technology (used appropriately) improves learning, perhaps we shouldn’t be investing in such high-priced items. In other words, I do not see technology as a panacea for our educational problems.

I’ll give an example of how this has played out in my own teaching. A few years ago, my university purchased a bunch of clickers. I went to several training sessions for the clickers, hoping to hear a pedagogical explanation for why the use of the clickers might improve student learning. I heard a lot about how to use the clickers (technical details) as well as the cool things I could do to survey my students to see where their misunderstandings are. But even this last point didn’t convince me that the technology was worth the cost or the effort to use it, because I already have ways to survey my students to see where their misunderstandings are. In fact, I’ve been developing those kinds of techniques for years, without the use of technology. What I wanted to know was how the technology would improve on those techniques so that my students learn better. And no one could provide me with those answers.

This summer, however, I went to a technology institute for faculty in the University System of New Hampshire. One of our presenters told us about a learning framework that might help us think about technology use in the classroom. He cited several studies that sought to identify why individual tutoring of students is so effective at improving student learning. The results show that students learn best when they get immediate feedback about their learning (the more immediate the better), can engage in conversation about their learning (that is, when they have to try to explain what they learned to someone else), and have learning activities that are customized to their needs (so that they are not wasting their time going over material that they already understand). What technology can do, he argued, is help us provide individual-tutoring learning experiences for large numbers of students cost-effectively.

Therefore, we can use clickers not to provide the teacher with information about student learning but rather to provide the students themselves with information about their own learning. That is, the clickers allow us to ask questions of the class, have all the students answer simultaneously, and then, when we reveal the answer(s), each student can see how he or she fared compared to the rest of the class and compared to the correct answer(s). This immediate feedback provides an individual-tutoring type of experience, but only if it is done with an eye toward making sure students understand what they are supposed to get out of the use of the clickers. Too often, clickers are used in the classroom simply because they are cool, and new, and innovative.
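To make that concrete, here is a small Python sketch of the feedback step I have in mind: tally the class’s responses and show every student the distribution alongside the correct answer. The question, options, and responses are invented; this is an illustration, not any particular clicker system’s software.

    # Tally clicker responses and show the class-wide distribution next to the
    # correct answer, so the feedback goes to the learners, not just the teacher.
    # The responses below are invented for illustration.
    from collections import Counter

    def feedback(responses, correct):
        """Return a summary of how the class answered, marking the correct option."""
        counts = Counter(responses)
        total = len(responses)
        lines = []
        for option in sorted(counts):
            share = 100 * counts[option] / total
            marker = "  <-- correct" if option == correct else ""
            lines.append(f"{option}: {counts[option]:3d} ({share:.0f}%){marker}")
        return "\n".join(lines)

    # A hypothetical four-option question answered by 100 students.
    print(feedback(["A"] * 12 + ["B"] * 55 + ["C"] * 8 + ["D"] * 25, correct="B"))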

So back to the question of whether the teacher who used the smart board inappropriately should be reprimanded. If, instead of having students write on big pieces of paper which she taped onto the smart board, the teacher had the students type their items into a computer and then she had displayed them on the smart board in the “appropriate” manner, we would not be having this discussion. But in neither case have we asked what her pedagogical motivations were for the exercise that the students engaged in. That to me is the important question and the one that would determine whether she has committed “educational malpractice.” And before we spend tons of money on smart boards and iPads and clickers and and and…, I think we should focus on the learning improvements that might be gained from the use of such technology. In most cases, I don’t think we have a whole lot of evidence that it does improve learning. And I definitely don’t think we’re training teachers to use it in a way that takes advantage of the ways that it might improve learning.



{June 25, 2011}   Technology in Education

I just got back from a three-day workshop on academic technology.  As a computer scientist, I was intrigued by the idea of this workshop, but I was worried that it would be a disappointment because so many of these workshops focus on what I consider to be the wrong things.  I am so glad I attended the workshop because I learned a lot and was inspired by a lot of what I heard.

The reason I’m often disappointed by technology workshops and technology training for educators is that they are often led by people whose focus is on the technology and on teaching the participants how to use that technology.  This is definitely an important task but it is one that I typically find tedious because I’m comfortable with technology and want to go faster than these workshops usually go.  And I want to have conversations about more than “how” to use the technology.  I want to talk about “why” we should use the technology.  We discussed this topic quite a bit (more than I ever have) at this technology workshop.

My big take-away from the workshop concerning “why” we should use technology came from the Day 2 keynote speaker, Michael Caulfield, an instructional designer at Keene State College.  He presented research showing that average students become exemplary students if they can have conversations about the topic they are learning, can have instruction that is customized to them and to what they are not understanding, and can receive immediate feedback about their learning.  Basically, if every student can have a full-time, one-on-one tutor, she can move from being an average student to being an exemplary student.  Sounds great, but who wants to pay for that (especially in this economic climate)?  So, Caulfield explained, we really need to figure out how to provide “tutoring at scale.”  That is, we need to figure out how to provide each student with conversation, customization, and feedback in classrooms that have more than one student.  Caulfield then discussed various uses of instructional technology (which was called “rich media” at this workshop, a phrase that I’m still processing and deciding whether I like) and how to leverage technology to provide “tutoring at scale.”  Caulfield’s talk gave me a great perspective through which to view all of the activities we engaged in during the workshop.

My one critique of the workshop (and it is a small one) is that we didn’t sufficiently separate faculty development of “rich media” artifacts for use in providing “tutoring at scale” from faculty development of assignments that require students to create their own “rich media” artifacts.  It feels like the issues are related to each other but are also quite separate, with different things for the faculty member to consider.

I would strongly encourage my PSU colleagues to apply to and attend next year’s Academic Technology Institute.  It is well worth the time!



{March 4, 2011}   Game Design Education

I belong to the International Game Developers Association (IGDA) which has a fairly active listserv.  The most recent discussion on the listserv was prompted by Brenda Brathwaite‘s rant at the most recent Game Developers Conference, which ends today.  Brathwaite is a well-known game designer, educator, IGDA board member, and author.  She wrote one of my favorite game design books, Challenges for Game Designers.  So people pay attention to what she has to say.  And what she had to say in this latest rant has been quite controversial.

The title of her rant is Built on a Foundation of Code.  Her basic point is this: “Game design programs must be firmly rooted in a foundation of code.”  What she means is that students graduating from a game design program must be good programmers.  They must learn to create digital games from scratch.  Code is the tool of the trade and if we game educators do not teach our students to program, we are doing them a huge disservice.  She makes this point as a game designer who started in industry, went to academia, and is now back in industry.  She sees thousands of resumes and wants us all to know that she will not hire entry-level game designers who have not created their own digital games.  That is, she will not hire game designers who can’t code. 

I’ve heard this kind of argument before but it usually comes from computer scientists who think that their discipline is the most important one for the multidisciplinary field of game development.  But Brathwaite is not a computer scientist and so her argument is a bit surprising.  And it’s also why no one is simply dismissing what she is saying–she’s not saying MY discipline is the most important. 

At the risk of sounding discipline-centric, as a computer scientist, I think that the training that computer scientists go through is extremely important for anyone who wants to create any sort of procedural content.  What do I mean by that?

Procedural content is any artifact that is executed by a computer, any artifact that is comprised of a series of instructions to be run by a computer.  For example, this blog entry is digital content but not procedural content: it does not contain instructions for the computer to execute.  The blog software that I’m using (WordPress) IS procedural content: it is comprised of instructions that are executed by the computer as I write my blog entry.  Creating procedural content requires a particular way of thinking about that content.  It also requires the development of debugging skills because no one writes procedural content that works perfectly the first time.  Making this content work properly can be tedious and frustrating, and the developer needs to be persistent and detail-oriented while also being able to step away from the content to think about the obstacles in new ways.  It takes practice to implement this cycle of creating the content, testing to find bugs, planning a fix for the bugs, implementing the new content, testing to find bugs, planning a fix, and so on.  And the ability to think in a way that allows you to go through this cycle over and over seems important for anyone who wants to work in game development.
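To illustrate the cycle, here is a tiny, hypothetical Python example (mine, not anything from Brathwaite’s rant): a function whose first draft crashed on an empty list, along with the tests that exposed the bug and now confirm the fix.

    # A tiny illustration of the create-test-debug cycle. The first draft of this
    # function divided by len(scores) unconditionally; running the tests exposed
    # a crash on an empty list, and the fix below handles that case.

    def average(scores):
        """Return the mean of a list of scores, or 0.0 for an empty list."""
        if not scores:  # the fix discovered through testing
            return 0.0
        return sum(scores) / len(scores)

    # The tests that drive the cycle: create, test, find a bug, plan a fix, repeat.
    assert average([80, 90, 100]) == 90
    assert average([]) == 0.0
    print("all tests pass")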

Notice that I’m saying something a bit different from Brathwaite.  She says she wants all game developers to be able to code.  I’m saying I think game developers need to be able to think like coders.  But perhaps it boils down to the same thing; perhaps the only way to teach someone to think like a coder is to teach them to code.  In any case, I think this is an interesting question, one that I’ve thought about quite a bit as I’ve tried to teach game design and development to non-computer science majors.  I’m still trying to figure out the best way to teach this kind of thinking.



{October 22, 2010}   Original Research–Good or Bad?

I recently rewatched Julia, the 1977 film starring Jane Fonda and Vanessa Redgrave.  It is based on a chapter in Lillian Hellman‘s memoir, Pentimento: A Book of Portraits.  That chapter tells the (probably fictional) story of Hellman’s longtime friendship with Julia, a girl from a wealthy family who grows up to fight fascism in Europe in the 1930s.  I loved this book when I read it in high school and I went on to read nearly all of Hellman’s other work as well as several biographies.

As I watched the movie, several questions occurred to me and so, being a modern media consumer, I immediately searched for answers online.  This search led me to Wikipedia, which for me is a fine source of answers to the kinds of questions I had.  In fact, I use Wikipedia all the time for this sort of thing.  I was surprised then to find the following qualifying statement on the entry for Pentimento:

This section may contain original research.  Please improve it by verifying the claims made and adding references. Statements consisting only of original research may be removed.

As I said, I use Wikipedia a lot.  And I have never seen this qualifying statement before.  I think this statement implies that original research is somehow bad.  I don’t think that’s what the folks at Wikipedia mean.  At least, I hope it’s not what they mean.  So I decided to look into the statement a little more deeply.  There are a couple of parts of the statement that are interesting.   

First, the words “may contain” are in bold.  I think that’s supposed to indicate that the section may or may not contain original research.  It’s clear that articles in Wikipedia should NOT contain original research but it isn’t clear why.

I then checked to see how “original research” is defined by Wikipedia and found this on their policy pages: “The term ‘original research’ refers to material—such as facts, allegations, ideas, and stories—not already published by reliable sources.”  How would one determine whether a particular section contained “original research” or not?  Probably by looking for references to “reliable sources” in the section.  Therefore, if a section doesn’t contain references (or doesn’t contain enough of them), it might be difficult to determine whether that’s because the author simply didn’t include references to other available sources, because the work is based on “original research,” or because the work is completely fabricated.  Or, I guess, it could be some combination of the three reasons.  So I guess that’s why “may contain” is in bold.  The lack of references could mean any number of things.

The next part of the qualifying statement is even more interesting to me.  “Please improve it by verifying the claims made and adding references.”  This statement implies that “original research” is somehow less valid than work that has been taken from another source.  Again, I doubt that’s what the Wikipedia folks mean. 

So I continued to investigate their policies and found this: “Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”  Because of this policy against publishing original thought, to add references to an article or section of an article does indeed “improve” it by making it conform more closely to Wikipedia’s standards for what makes a good article.

This policy against publishing original thought explains the rest of the qualifying statement.  My investigation into Wikipedia’s policies also turned up an explanation of what it means to “verify” statements in an article.  This is important because Wikipedia says that included articles must be verifiable (which is not the same as “true”); that is, users of Wikipedia must be able to find all material in Wikipedia elsewhere, in reliable, published sources.  And yes, Wikipedia explains what they mean by “reliable.”  That discussion is not easily summarized (and isn’t the point of this post) so anyone who is interested can look here.

My surprise concerning the qualifying statement boils down to wording and I think the wording of the statement needs to be changed.  Currently, it implies that original research is bad.  But through my investigation, I’ve decided that Wikipedia probably means that articles should not contain unverified, unsourced statements.  Such statements could come from author sloppiness, original research or outright fabrication.  In any case, they should not be part of Wikipedia’s articles. 

Of course, I haven’t discussed whether the policy of not publishing original thought is an appropriate policy or not.  I have mixed feelings about this.  But that’s a subject for another post.



{February 8, 2010}   The iPad and Education

David Parry recently wrote a very interesting post on ProfHacker regarding the impact that the iPad is likely to have on education.  Parry is an assistant professor of emerging media and technology (what a cool title) at the University of Texas at Dallas and the author of academHack, one of my favorite blogs about technology and education (see my Blogroll).  Parry, who is an avid Apple consumer, thinks the iPad is far from the panacea for education that its proponents claim it will be.  For those of you who won’t follow the link to his post, I’ll summarize his main points.

Many are saying that the iPad will do for education (and textbooks) what the iPod did for music.  Parry points out that the iPod itself is not revolutionary.  It didn’t change the way we consume music.  Instead, it was the development of iTunes that changed the way we consume music.  The change in distribution channels, rather than a change in consumption platform, is what mattered in changing the way we consume music.  We can now purchase individual songs for only 99 cents (a price point that makes the inconvenience of illegal downloading not worthwhile) and create playlists from those individual songs.  In order for there to be a similar impact on our consumption of textbooks, the cost would need to drop a lot and we would have to be able to assemble new textbooks from individual chapters (and perhaps even individual fragments of text) from existing textbooks.  No one in the textbook business is talking about an iTunes-like experience for textbooks.

Parry’s second major point is, for me, even more important for those of us who are involved in higher education.  He points out that the iPad is designed to be a media consumption device.  But he wants his students (and I want mine) to be more than media consumers.  To be successful citizens in the digital age, students need to be critical consumers and creators of media.  With its lack of camera, lack of microphone, and lack of multitasking ability, the iPad teaches people how to be passive consumers of media.  Such a device is bad for educating the active, critically questioning citizen for today’s (and tomorrow’s) digital world.

Parry raises many additional issues and explains the two I mention here much more articulately than I have.  Go read his post.



{December 26, 2009}   Whose Property Is It?

When I was in graduate school more than 12 years ago, a new company opened up in Tallahassee that caught the attention of many students (and probably faculty members) at Florida State University.  I don’t remember the name of the company but I do remember its business purpose.  The company would pay students to take notes in their classes and then would sell those lecture notes to other students in those same classes.  This service seems like a waste of money to me since any student already paying tuition for the class could simply create his or her own version of the lecture notes.  If they went to class, that is.  But I suppose the prime target of this company could be those students who haven’t yet learned to take good notes themselves.  In any case, that business all those years ago in Tallahassee appeared to do very well in the face of some concerns expressed by various factions in the academic community.

I hadn’t thought about this company in years.  Recently, however, this kind of business is much in the news.  In 2008, Michael Moulton, a faculty member at the University of Florida, filed a lawsuit against a company called Einstein’s Notes, which sells what they call “study kits” for classes at UF.  Moulton, and the company that publishes the textbook that he has written, claim that the material in Moulton’s lectures is copyrighted and therefore, by publishing student lecture notes without his permission, Einstein’s Notes is violating that copyright.  The issue is a difficult one, especially because it is the material created by the student that is being sold by Einstein’s Notes rather than any written material created by the faculty member.

Copyright provides the author of a work the exclusive right to control the publication, distribution, and adaptation of that work.  An idea cannot be copyrighted.  Instead, copyright extends only to “any expressible form of an idea or information that is substantive and discrete and fixed in a medium.”  This is key to these lawsuits, since the gray area seems to lie in whether the lecture itself is “fixed in a medium.”  In Moulton’s case, it just might be.  Moulton has published two textbooks based on his lectures and uses them in his classes.  In addition, his publisher sells its own version of lecture notes for his classes.  So when a student takes notes in a class based on the lecture, although those notes are not a “copy” of the professor’s lecture, they are derivative of the lecture.  That is, those notes are a kind of adaptation of the professor’s lecture.

Of course, I’m not a lawyer but this is how I understand the issues in the Moulton case.  I think things get murkier when a faculty member has not “published” anything related to his or her lectures, however.  Moulton’s lawyer doesn’t seem to think so.  He says that if a faculty member were to write out the high points of the lecture on a transparency and display them to the class via overhead projector, that fixes the material in a medium.  If a student then bases her lecture notes on that transparency, her notes are a derivative of material that is copyrighted and therefore, is not eligible to be sold without the faculty member’s permission.  The lawyer doesn’t say anything about whether material written on the chalkboard is fixed in a medium.

As an academic at a public university, I believe that education should be available as cheaply as possible for as wide an audience as possible.  For example, I teach a computer literacy class for free for senior citizens and get enormous pleasure from seeing them learn.  I would, however, have a problem if someone took my “lecture notes” from that class and sold them on the Internet without my permission.  The material that I teach in that class is basic information, available in a variety of forms from a variety of sources.  There’s nothing in the content that could be considered new information.  What is original about the class is the way the material is organized and presented.  Many of the senior citizens tell me stories about taking beginning computer classes elsewhere and feeling overwhelmed, lost and discouraged.  This class, they tell me, is the first time they’ve felt as though they actually could learn to use a computer to send and receive email and to search the Internet.  So there is definitely something unique and original about the way I’m presenting the information.  Why would I have a problem if this material was made available through a company like Einstein’s Notes?  It isn’t because I don’t want the material to be made available.  Instead, it’s because I don’t think Einstein’s Notes should make money from my work without getting my permission and without compensating me.

Moulton’s lawyer points out that Einstein’s Notes puts a copyright notice on the lecture notes that they sell.  In other words, the company sells the lecture notes but then attempts to prevent those notes from being copied.  They are claiming copyright on material that they played no part in creating.  In what world does that make sense?



{October 25, 2009}   Making Sense of All Things Digital

For the last few years, I have been volunteering my time at a local senior center, teaching computing skills.  One of the struggles is to explain the subtle cues that the computer provides to us as its users to let us know what we can do at that particular time.  What do I mean by “cues”?  This Friday, I talked about how you know when you can type text in a particular spot.  Think about it.  You look for your cursor to change to a straight vertical line that blinks.  Wherever it blinks is where your text will appear if you press the keys on your keyboard.  We all know this, right?  The problem is that there are thousands of these items.  Each appears to be a small thing, without much consequence.  And yet, by paying attention to these small, visual cues, we all know what we can do and when we can do it.  It’s challenging to teach people who aren’t used to paying attention to, much less deciphering, these subtle cues.  I love it but I’m constantly struggling to explain why things are as they are on PCs, to help make sense of the virtual world.

One of the things I have never been able to explain is why sometimes you need to click and why sometimes you need to double-click.  I would like to be able to articulate a rule about when to engage in each action but I have not yet been able to do so.  Instead, I tell the students in this class that they should first try to click on something and, if nothing happens, they should double-click.  This explanation feels wholly unsatisfactory to me because I want to believe that computers are logical.  But deep in my heart, I know they aren’t.  They are just as subject to the whims of culture-making as any other artifact of our culture.  And now I have proof of that.  Tim Berners-Lee (that’s SIR Berners-Lee to you) recently admitted that he regrets the double-slash.

Sir Berners-Lee invented the World Wide Web.  The author of the article I linked to says he is considered a father of the Internet but that’s not true.  There is much confusion about the difference between the Internet and the World Wide Web.  In fact, most people consider them to be the same thing.  But they are not.  The Internet is the hardware that the World Wide Web (which is comprised of information) resides on.  The Internet was created in the early 1970s.  The World Wide Web was conceived of by Berners-Lee in the early 1990s.  Berners-Lee’s achievement is monumental.  We don’t have to give him credit for the entire Internet.  He’s still an amazing guy.

The World Wide Web is comprised of web pages and social networking sites and blogs and such, rather than the actual machines that hold all of that information.  When we browse the World Wide Web, we typically use a web browser like Internet Explorer or Firefox.  If you look in the address box of the web browser you’re using, you will see that the address there contains a number of pieces.  The first part of the address is the protocol that your computer is using to communicate with the computer that contains the information you want to see.  A protocol is simply a set of rules that both computers agree to abide by in their communication; you can think of it as a language that the computers agree to use.  Typically, the protocol for web browsing these days is http (hypertext transfer protocol) or https (hypertext transfer protocol secure).  Much of the rest of the address specifies the name of a computer and the name of some space on that computer.

The thing that Berners-Lee regrets is the set of characters he chose to separate the protocol from the rest of the address.  He chose “://”.  He doesn’t regret the colon; it’s a piece of punctuation that represents a separation.  He does regret the “//”.  It’s superfluous, unnecessary.  This whole conversation makes me feel better about teaching the senior citizens who choose to take my class.  Some digital things are not logical.  They are whims.  Just ask Tim Berners-Lee.
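For the curious, here is a small Python sketch that pulls a (made-up) web address apart into the pieces I just described, using the standard library’s urllib.

    # Split a web address into the protocol, the name of the computer, and the
    # space on that computer. The example address is made up.
    from urllib.parse import urlparse

    parts = urlparse("https://www.example.edu/courses/creating-games/syllabus.html")

    print(parts.scheme)  # 'https' -> the protocol, the part the '://' separates off
    print(parts.netloc)  # 'www.example.edu' -> the name of the computer
    print(parts.path)    # '/courses/creating-games/syllabus.html' -> the space on that computer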


