Desert of My Real Life











{July 2, 2016}   Ten Months In

Last August, our new president, Don Birx, spoke to eager Plymouth State University employees about his vision for the campus to be reorganized around strategic clusters and open labs. Even though we are still at the beginning of implementing this vision, I think there are some lessons to be learned now that we are ten months into this process.

As a community, we have been figuring out what strategic clusters and open labs are. We have been working to implement these things even while we figure out what we mean by these terms. I think we might have moved more quickly to implementation if we had taken the time to really figure out what we mean by strategic cluster, open lab, and so on. In the rush to implementation, we actually floundered a bit last academic year in a way that I don’t think we necessarily needed to. So that’s my first lesson. The community needs to understand a bit about what we’re trying to implement before we actually start implementing it. Despite that early misstep, I think we’re coming to some concrete understandings of these terms.

As a cluster guide, someone charged with moving the initiative forward over the next academic year (including this summer), I have been fortunate to discuss and shape our understandings of these terms. We’re still working on them and will engage in discussions with faculty and staff when we all return to campus in August to really solidify the definitions. But here’s my current understanding of what we’re doing. A cluster is an affinity group composed of programs and the resources, including people, attached to those programs. A cluster differs from a department or a college because of intention. We bring these resources together into a cluster with the intention of working across our individual disciplines in some way…through projects, through curriculum, through teaching and pedagogy, through open labs, through service. Open labs are spaces where this working together might happen. So a cluster can be thought of, broadly speaking, as “who” comes together, and open labs as “where” the coming together happens, a space of potential since we won’t always know what will arise when we come together. Cluster projects and other cluster activities are “what” we are working on. The projects and other activities will focus on work that is useful beyond the class a student is currently taking, giving the student “real world” experience. These definitions are, perhaps necessarily, a little slippery. But I think we (the group of cluster guides) are beginning to have common understandings of the terms.

There are also lots of questions about why we are engaging in this change, what the benefits will be. The president has said that he sees seven drivers for doing this. The first is related to the increasing fragmentation of knowledge that he believes characterizes the higher education landscape. As society learns more and more about the world and how it works, individuals know less and less because of their areas of specialization, their fragmented disciplinary knowledge. Strategic clusters are a way of trying to organize the university so that we bring individuals (faculty, staff, students, and external partners) with different disciplinary knowledge and perspectives together to work on large problems that will not be able to be solved by a single disciplinary approach. Students then are exposed to a variety of ways of looking at the world while getting hands-on experience. They will understand how what they’re learning can be applied and integrated with what other people know. We already provide some such experiences for students. But the cluster initiative pushes us to provide multiple such experiences for a larger percentage (ideally, all) of our students.

The president hasn’t yet written the blog posts that lay out his six other drivers of the strategic cluster initiative so I can’t report on those yet. I hope he posts those sooner rather than later so that we can all think about them as we implement his vision.  But even without those, I think the new vision of the university is exciting and will provide each student with an excellent education that will serve them well as they move into an unknown future. Plymouth State University will become known for this innovative approach to education, drawing students to us because they want exactly this kind of experience. Strategic clusters and open labs will represent a unique identity for Plymouth State University, distinguishing us from other institutions. All of that is exciting to me.

Our process so far has not been perfect. As I said, I wish we had taken time to discuss definitions before we tried to begin implementation of the vision. There are other issues with the specifics of the implementation structure we’ve put in place (how guides were chosen, and putting programs into clusters as a first step, to name two that come to mind) that I wish had gone differently. But it feels like we are overcoming those issues. We need to make mistakes and then learn from and overcome them.

The biggest lesson that I’ve learned so far in this process, however, has to do with responsibility. Until late this Spring, I kept waiting for someone to tell me things–tell me the definition of a strategic cluster, tell me how we will implement open labs, etc. But then I realized that there is no one to tell me those things. We are doing something really different here. So I am responsible for figuring those things out. That responsibility doesn’t come because I am a cluster guide (although that fact adds some urgency to my sense of responsibility). I am responsible because I am a member of the Plymouth State University community. We all need to figure this stuff out together. We have to engage in this process with curiosity and skepticism and with a sense of trying to move the initiative forward. I know it sounds corny but I really believe that the survival of higher education is in our hands. We are responsible. All of us.



I went to a conference sponsored by UBTech a few weeks ago. It was advertised as a “national summit on technology and leadership in higher education” and had a tag line of “Your future is being determined. Be there.” Not surprisingly, many of the sessions were focused on “disruptive innovation,” the idea that new technologies are disrupting the traditional university.

The keynote speaker on the second day of the conference was Richard Baraniuk, a professor at Rice University and founder of OpenStax, a company dedicated to developing textbooks that are given to students for free. The company has so far published two online textbooks, most notably one for a standard College Physics class. Baraniuk talked a lot about disruption and what that will mean for colleges and universities and suggested that we all go out and read the work of the creator of the theory of disruptive innovation, Clayton Christensen. Looking at Christensen’s own summary of disruptive innovation, I was struck by the fact that Baraniuk neglected to discuss one of the major components of the theory. That is, Baraniuk never mentioned the fact that disruptive innovation happens at the “bottom of the market.” While companies are focused on providing quality products or services at high cost to their most sophisticated customers (sustaining innovations), other companies come into the market to provide cheaper, lower quality products or services to customers who traditionally have not been able to afford those products or services (disruptive innovations). It’s an interesting omission since, when I’ve heard proponents of these ideas talk about their application to higher education, they have insisted that quality will not suffer. Another interesting insight from Baraniuk’s presentation came when someone asked about the cost of creating these “free” textbooks and Baraniuk answered that each one costs about a million dollars to develop. When I looked into the business model of OpenStax, I found that their initial funding comes from a series of foundations (the Bill and Melinda Gates Foundation, the William and Flora Hewlett Foundation, etc.). Of course, that isn’t sustainable, so OpenStax also provides premium services and content to supplement the free versions of their textbooks.
For example, they have created an iPad version of their College Physics text that costs $4.95. That, of course, is significantly cheaper than a traditional textbook, but the question remains whether students will pay even that small amount when a free version of the text exists. Other “free” textbook publishers, such as Flat World Knowledge, have stopped providing a free version and instead have had to focus on low-cost texts. This model of textbook delivery would still be much more affordable for the student than the traditional model, but I think it remains to be seen whether it will be financially viable. It seems to me that to make it work, someone will have to come up with a disruptive innovation in funding models, perhaps something like crowd-funding?

Less than a week after I came back from the conference, The New Yorker published historian Jill Lepore’s critique of the theory of disruptive innovation. The gist of her critique is that the theory has been applied to industries that are very different from the manufacturing industries that Christensen initially studied. And, she says, the theory doesn’t even work particularly well with those manufacturing industries because Christensen’s methods were lacking–he cherry-picked examples, he ignored potential complexities in causation, he arbitrarily chose time frames that would artificially support his claims, and so on. In other words, she says the theory doesn’t work very well to explain how businesses succeed and fail. I am most interested in Christensen’s claims that his theory has predictive value, that is, that it can help us to determine which companies will succeed and which will fail. As an educator, I would find such predictions helpful in figuring out how to deal with the disruptive innovations that higher education is facing. Unfortunately for me, the record is pretty clear that this theory hasn’t been very useful as a predictive tool so far. For example, in 2007, he predicted that Apple would fail with the iPhone. We know that this prediction was incredibly wrong. In addition, in 2000, he started the Disruptive Growth Fund, a fund which used his theory to determine which companies to invest in. The fund was liquidated a year later because it lost significantly more money than the NASDAQ average during that time. A Tribune writer quipped that “the only thing the fund ever disrupted was the financial security of its investors.” Ouch.

Christensen hasn’t written a formal response to Lepore’s article, but he did give a really weird interview to Businessweek. I was particularly interested in his response to the charge that the theory lacks predictive value. First, he says that he was not advising the guy who was running the Disruptive Growth Fund, so you can’t claim that it is a failure of the theory. Regarding the iPhone, he says that he labeled it as a sustaining innovation (doomed to fail) against Nokia’s smart phone instead of labeling it as a disruptive innovation (destined to succeed) against the laptop. That definitely sounds like predicting the future after it has already happened. But this explanation brings up one of the things that has confused me most about the theory. One of his main examples involves the hard disk drive industry. I don’t know anything about the business of that industry but I certainly know something about the technology. It seems weird to me that he would label the reduction in size of hard drives as “new technology.” It seems to me that to go from a 3.5-inch hard drive to a 2.5-inch hard drive is an incremental improvement in the technology, a tweak (a sustaining innovation?) rather than a disruptive innovation. Perhaps he explains his methodology for categorizing innovations in his books, which I have not read, but I haven’t been able to find such an explanation so I sort of doubt it exists. If we don’t have a method for this categorization, and we categorize innovations after the fact, how can the theory help us to understand what has happened in the past or what will happen in the future?

And that leads me back to higher education. One of the hot topics at the conference was MOOCs, those massive open online courses that gained popularity a few years ago, especially after the publication of The Innovative University, Christensen’s co-authored book applying disruptive innovation theory to higher education. The idea of a MOOC is that an expert in a particular field records a bunch of lectures about her field of expertise and makes that content available online for all who want to take the course. There are assignments that may be evaluated by others taking the course. There may be credentials (certificates, badges, etc.) that are given to those who successfully complete the course. Sometimes, the student has to pay to receive the credentials but the cost is minimal. Some of these MOOCs have had thousands, even hundreds of thousands, of students. I was one of the 58,000 people who signed up for the Introduction to Artificial Intelligence course offered by Sebastian Thrun and Peter Norvig a few years ago. Like most of those who started the course, I didn’t finish it. I just wanted to see how it worked. There was a lot of text to read. There were some recorded lectures. There were assignments to complete on my own. There were discussion boards where I could discuss the assignments with other students. Other than this interaction with the other students, it felt like a correspondence course from the 1970s. Christensen has labeled this kind of online delivery a disruptive innovation. And perhaps it is. But lecture-based education (“sage on the stage”) has been criticized as old-fashioned by many. It seems to me that the very idea of a MOOC relies on the idea of a “sage on the stage.” In fact, much online education is developed and delivered in that manner. Why would this kind of education be cutting edge if it’s delivered via the Internet and old-fashioned if it’s delivered face-to-face?

I don’t know where any of this will lead us in higher education. But the conference was interesting because it prompted some new thinking for me in the area of disruptive innovation as well as in several other areas. I’m looking forward to continuing these conversations at PSU during Faculty Week in August when Ann McClellan and I will lead a discussion on the ideas and technologies we heard about at the conference.



{June 12, 2013}   A Possible Return

I have been away for an entire academic year. It was my intention this year to find time to regularly write entries about various technology and society issues. But it didn’t happen. I blame the fact that I’ve been a department chair for two years now. The increased administrative tasks that come with being chair leave me fairly mentally exhausted so that the only scholarly activities that I’ve engaged in are things that result in actual presentations or publications. That doesn’t mean that I haven’t thought about blog topics, however.

So I’m declaring it here as a way to hold myself accountable. My goal for the upcoming academic year is to write at least one blog entry per week. It feels daunting but so worthwhile since I like thinking about technology issues way more than I like doing administrative tasks. I need to make the time for this.



{July 1, 2012}   Email: Buried Alive

I became the chair of my department a little over a year ago and within a few months, I found myself completely overwhelmed by email. Emails started to get buried in my inbox, either read and then forgotten or never read at all. I realized that I needed to use part of the summer break from teaching to develop a new system for dealing with the volume of emails that I receive in this position.

I have been using email since the 1980s and have used the same process this entire time to deal with emails. I would keep emails in my inbox that I wanted to pay attention to for some reason (interesting content or information I might need in the future were the two major reasons) and if the email contained a task that I needed to complete in the future, I would mark it as unread. A few years ago, I started to use a system of folders for emails with interesting content or useful information. I maintained my habit of marking future task-oriented emails as unread. This system worked for years for me. Every summer, I spent a couple of hours cleaning up folders and my inbox. It was completely manageable.

As department chair, however, the number of emails that I received increased dramatically. The number of emails with interesting content, useful information or future task information also increased dramatically. But I think the thing that started to bury me is that the number of interruptions that occurred through the course of a day also increased dramatically. What that meant was that I might be in the middle of reading email when someone would come into my office and I would immediately give them my attention. If I was in the middle of reading an email, I might (and often did) forget to complete the process of dealing with the email. So emails with important task information might not get marked as unread or emails with interesting content or useful information might not get filed into the appropriate folders. Or I might forget where in the list of emails I had gotten to in my reading so that some messages were marked unread because I truly had not read them.

I soon found myself with over 2000 emails in my inbox, over 650 of which were marked as unread. A big problem with the unread messages is that I had no way of determining whether they were unread because I really hadn’t read them or because they contained important future task-related information. I was using that category for two very different purposes. I had no idea what those unread emails contained. Organizing my inbox began to feel like an insurmountable task. I began to have anxiety about the idea that I might actually have 650+ tasks that I needed to deal with. And we all know that we don’t work best when we feel overwhelmed and anxious. I knew I had to figure out some other way of dealing with my email.

My book club buddy and I read Time Management for Department Chairs by Christian Hansen. I attended a workshop he presented at the Academic Chairs Conference in Orlando in February and, although I found much of what he said about time management incredibly useful, I ironically didn’t have time during the Spring semester to implement many of his ideas. He has a couple of interesting things to say about managing the email deluge that I wanted to try to implement, but I really needed to get my email under control first.

Here’s what I did and what I plan to do to keep things organized.

First, I needed to clean up my inbox. I began by reorganizing my folders. I did my normal summer clean up of the folders and then added a folder called “Defer” which I’ll come back to. Then I started on the inbox itself, reading the emails to determine what I was going to do with each one. I had four choices, which Hansen calls “the four D’s.” I could “delete,” “do,” “delegate,” or “defer.” I spent over 10 hours one Sunday deleting emails that needed no response from me or doing whatever task was required by an email if I could deal with it immediately. Doing whatever I needed to do sometimes meant delegating the task to someone else, so I wrote a bunch of emails asking others to do things. Other times, “doing” meant answering questions. And still other times, it meant filing the email in one of my email folders. And finally, if dealing with an email required more time than I had available to me that day, or required information that I didn’t currently have, or required someone else to do something before I could do what I needed to do, I put it into the “Defer” folder that I mentioned earlier. I can’t explain the elation I felt when I finally had 0 emails in my inbox. What was more amazing than having 0 emails in my inbox was that I only had 9 emails in my “Defer” folder! I had been SO worried about what I wasn’t dealing with and it was such a relief to find that there were only 9 emails that I couldn’t deal with that day.
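The four D’s amount to a simple decision procedure, and it can be sketched in code. This is purely my own illustration, not anything Hansen provides; the boolean flags stand in for the judgment calls described above and are invented for the example.

```python
def triage(email):
    """Apply the 'four D's' (delete, delegate, do, defer) to one email.

    `email` is a dict whose boolean flags are placeholders for the
    judgment calls a real reader makes about each message.
    """
    if email.get("needs_no_response"):
        return "delete"      # nothing required of me: out of the inbox
    if email.get("someone_elses_task"):
        return "delegate"    # write to the person who should handle it
    if email.get("doable_today"):
        return "do"          # answer, act, or file it in a folder now
    return "defer"           # park it in the 'Defer' folder to schedule later

# One pass over an inbox leaves only the deferred items behind.
inbox = [
    {"subject": "FYI: meeting minutes", "needs_no_response": True},
    {"subject": "Room request", "someone_elses_task": True},
    {"subject": "Quick question", "doable_today": True},
    {"subject": "Curriculum proposal"},  # needs a scheduled block of time
]
deferred = [e for e in inbox if triage(e) == "defer"]
print([e["subject"] for e in deferred])  # prints ['Curriculum proposal']
```

The key property of the procedure is that every message gets exactly one disposition, which is what finally empties the inbox.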

So that’s how I cleaned up my inbox. Now I have to maintain it and that means implementing a different system for email. Hansen suggests only looking at email at designated times during the day, times when you are unlikely to be interrupted. And the four D’s should be the practice every time you look at your email. I think I can manage this part of the process, although it’s difficult to tell in the middle of summer when email only trickles in. The part that might be more difficult for me involves a larger picture time management strategy.

Hansen suggests that we should all abandon the daily to do list. It leads us to be often in crisis because each day we’re only dealing with the things that HAVE to be done on that day. Instead, we should create a master to do list that contains the things that absolutely must be done by a particular day but should also contain things that we’d LIKE to do, things that are not critical but that will help us to be more productive in the long run. A great example of this kind of thing is planning. Many of us would like to develop plans for our departments (or our lives) but that kind of work always gets put on the back burner, to be done when we “have time.” Ironically, not planning often takes more time in the long run as we have to deal with things when we’re in crisis mode rather than ahead of time when we’re thinking clearly. Hansen also suggests that when we’re creating our schedules for the week or the month or the semester, we should put these kinds of tasks on the schedule and actually do them when we schedule them. What does this have to do with the “Defer” email folder? We need to regularly put time in our schedule to deal with the tasks in that folder. In fact, we need to schedule time to review the tasks that are in the folder so that we can then put the tasks on the calendar. It’s this bit that I’m worried about. I worry that there will be crises and I will be unable to resist putting off the “Defer” folder review and planning. But I’m going to really try to implement this step. I think it’s the only way the entire system will work.

One follow-up: In the 10 hours that I spent deleting and otherwise dealing with emails, I clearly didn’t read them all carefully. Just this past week, I got an email from one of the administrators at my University about a student who claimed to have sent me email a week earlier and that I had not responded to. I have no recollection of the email whatsoever but I also don’t doubt that the student sent the email and I simply deleted it unread. When I shared that story with a friend, she said that was her biggest fear in deleting emails, that she will miss something important. And although I acknowledge the risk (especially since it actually happened to me), I still think cleaning up my inbox was worth that risk. If I had not cleaned up my email, that student message would likely have remained buried in my inbox for the week and the student would have complained to the administrator anyway. So I would have had to deal with that issue either way. The difference is that I now feel pretty confident that future student emails (or other emails) will not get buried and I will no longer have this problem. In addition, my anxiety level about my emails is currently at zero which I think makes me more productive. That alone is worth the effort.

I’m curious about how other people deal with the email deluge.



{March 4, 2011}   Game Design Education

I belong to the International Game Developers Association (IGDA) which has a fairly active listserv.  The most recent discussion on the listserv was prompted by Brenda Brathwaite‘s rant at the most recent Game Developers Conference, which ends today.  Brathwaite is a well-known game designer, educator, IGDA board member, and author.  She wrote one of my favorite game design books, Challenges for Game Designers.  So people pay attention to what she has to say.  And what she had to say in this latest rant has been quite controversial.

The title of her rant is Built on a Foundation of Code.  Her basic point is this: “Game design programs must be firmly rooted in a foundation of code.”  What she means is that students graduating from a game design program must be good programmers.  They must learn to create digital games from scratch.  Code is the tool of the trade and if we game educators do not teach our students to program, we are doing them a huge disservice.  She makes this point as a game designer who started in industry, went to academia, and is now back in industry.  She sees thousands of resumes and wants us all to know that she will not hire entry-level game designers who have not created their own digital games.  That is, she will not hire game designers who can’t code. 

I’ve heard this kind of argument before but it usually comes from computer scientists who think that their discipline is the most important one for the multidisciplinary field of game development.  But Brathwaite is not a computer scientist and so her argument is a bit surprising.  And it’s also why no one is simply dismissing what she is saying–she’s not saying MY discipline is the most important. 

At the risk of sounding discipline-centric, as a computer scientist, I think that the training that computer scientists go through is extremely important for anyone who wants to create any sort of procedural content.  What do I mean by that?

Procedural content is any artifact that is executed by a computer, any artifact that is composed of a series of instructions that are to be run by a computer.  For example, this blog entry is digital content but not procedural content–it does not contain instructions for the computer to execute.  The blog software that I’m using (WordPress) IS procedural content–it is composed of instructions that are executed by the computer as I write my blog entry.  Creating procedural content requires a particular way of thinking about that content.  Creating procedural content also requires the development of debugging skills because no one writes procedural content that works perfectly the first time.  Making this content work properly can be tedious and frustrating and the developer needs to be persistent and detail-oriented, while also being able to take a step away from the content to think about the obstacles in new ways.  It takes practice to implement this cycle of creating the content, testing to find bugs, planning a fix for the bugs, implementing the new content, testing to find bugs, planning a fix, and so on.  And the ability to think in a way that allows you to go through this cycle over and over seems important for anyone who wants to work in game development.
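To make the distinction concrete, here is a minimal sketch of my own (in Python, chosen only for illustration). The string is digital content the computer merely stores; the function is procedural content, a series of instructions the computer actually executes.

```python
# Digital but NOT procedural: a fixed piece of content the computer stores.
blog_entry = "Procedural content is any artifact executed by a computer."

# Procedural: instructions the computer runs. Getting even a function this
# small right is where the create-test-debug cycle described above kicks in.
def word_count(text):
    """Count the whitespace-separated words in a piece of digital content."""
    return len(text.split())

print(word_count(blog_entry))  # prints 9
```

A first draft of `word_count` might forget that `split()` with no arguments already handles runs of spaces; noticing and fixing that kind of thing is exactly the tedious, detail-oriented cycle the paragraph describes.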

Notice that I’m saying something a bit different than Brathwaite.  She says she wants all game developers to be able to code.  I’m saying I think game developers need to be able to think like coders.  But perhaps it boils down to the same thing, perhaps the only way to teach someone to think like a coder is to teach them to code.  In any case, I think this is an interesting question, one that I’ve thought about quite a bit as I’ve tried to teach game design and development to non-computer science majors.  I’m still trying to figure out the best way to teach this kind of thinking.



{October 22, 2010}   Original Research–Good or Bad?

I recently rewatched Julia, the 1977 film starring Jane Fonda and Vanessa Redgrave.  It is based on a chapter in Lillian Hellman‘s memoir, Pentimento: A Book of Portraits.  That chapter tells the (probably fictional) story of Hellman’s longtime friendship with Julia, a girl from a wealthy family who grows up to fight fascism in Europe in the 1930s.  I loved this book when I read it in high school and I went on to read nearly all of Hellman’s other work as well as several biographies.

As I watched the movie, several questions occurred to me and so, being a modern media consumer, I immediately searched for answers online.  This search led me to Wikipedia, which for me is a fine source of answers to the kinds of questions I had.  In fact, I use Wikipedia all the time for this sort of thing.  I was surprised then to find the following qualifying statement on the entry for Pentimento:

This section may contain original research.  Please improve it by verifying the claims made and adding references. Statements consisting only of original research may be removed.

As I said, I use Wikipedia a lot.  And I have never seen this qualifying statement before.  I think this statement implies that original research is somehow bad.  I don’t think that’s what the folks at Wikipedia mean.  At least, I hope it’s not what they mean.  So I decided to look into the statement a little more deeply.  There are a couple of parts of the statement that are interesting.   

First, the words “may contain” are in bold.  I think that’s supposed to indicate that the section may or may not contain original research.  It’s clear that articles in Wikipedia should NOT contain original research but it isn’t clear why. 

I then checked to see how “original research” is defined by Wikipedia and found this on their policy pages: “The term ‘original research’ refers to material—such as facts, allegations, ideas, and stories—not already published by reliable sources.”  How would one determine whether a particular section contained “original research” or not?  Probably by looking for references to “reliable sources” in the section.  Therefore, if a section doesn’t contain references (or not enough references), it might be difficult to determine whether that’s because the author simply didn’t include references to available sources, because the work is based on “original research,” or because the work is completely fabricated.  Or, I guess, it could be some combination of the three reasons.  So I guess that’s why “may contain” is in bold.  The lack of references could mean any number of things.

The next part of the qualifying statement is even more interesting to me.  “Please improve it by verifying the claims made and adding references.”  This statement implies that “original research” is somehow less valid than work that has been taken from another source.  Again, I doubt that’s what the Wikipedia folks mean. 

So I continued to investigate their policies and found this: “Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”  Because of this policy against publishing original thought, to add references to an article or section of an article does indeed “improve” it by making it conform more closely to Wikipedia’s standards for what makes a good article.

This policy against publishing original thought explains the rest of the qualifying statement.  My investigations into Wikipedia’s policies found policies about what it means to “verify” statements in an article.  This is important because Wikipedia says that included articles must be verifiable (which is not the same as “true”), that is, users of Wikipedia must be able to find all material in Wikipedia elsewhere, in reliable, published sources.  And yes, Wikipedia explains what they mean by “reliable.”  That discussion is not easily summarized (and isn’t the point of this post) so anyone who is interested can look here.

My surprise concerning the qualifying statement boils down to wording and I think the wording of the statement needs to be changed.  Currently, it implies that original research is bad.  But through my investigation, I’ve decided that Wikipedia probably means that articles should not contain unverified, unsourced statements.  Such statements could come from author sloppiness, original research or outright fabrication.  In any case, they should not be part of Wikipedia’s articles. 

Of course, I haven’t discussed whether the policy of not publishing original thought is an appropriate policy or not.  I have mixed feelings about this.  But that’s a subject for another post.



{February 8, 2010}   The iPad and Education

David Parry recently wrote a very interesting post on ProfHacker regarding the impact that the iPad is likely to have on education.  Parry is an assistant professor of emerging media and technology (what a cool title) at the University of Texas-Dallas and the author of academHack, one of my favorite blogs about technology and education (see my Blogroll).  Parry, who is an avid Apple consumer, thinks the iPad is far from the panacea for education that its proponents claim it will be.  For those of you who won’t follow the link to his post, I’ll summarize his main points.

Many are saying that the iPad will do for education (and textbooks) what the iPod did for music.  Parry points out that the iPod is not revolutionary.  It didn’t change the way we consume music.  Instead, it was the development of iTunes that changed the way we consume music.  The change in distribution channels rather than a change in consumption platform is what was important to changing the way we consume music.  We can now purchase individual songs for only 99 cents (which is a price point that makes the inconvenience of illegal downloading not worthwhile) and create playlists from those individual songs.  In order for there to be an impact in our consumption of textbooks, the cost would need to drop a lot and we would have to be able to assemble new textbooks from individual chapters (and perhaps even individual fragments of text) from existing textbooks.  No one in the textbook business is talking about an iTunes-like experience for textbooks.

Parry’s second major point is, for me, even more important for those of us who are involved in higher education.  He points out that the iPad is designed to be a media consumption device.  But he, like me, wants students to be more than media consumers.  To be successful citizens in the digital age, students need to be critical consumers and creators of media.  With its lack of a camera, a microphone, and multitasking ability, the iPad teaches people to be passive consumers of media.  Such a device is bad for educating the active, critically questioning citizen for today’s (and tomorrow’s) digital world.

Parry raises many additional issues and explains the two I mention here much more articulately than I have.  Go read his post.



{December 26, 2009}   Whose Property Is It?

When I was in graduate school more than 12 years ago, a new company opened up in Tallahassee that caught the attention of many students (and probably faculty members) at Florida State University.  I don’t remember the name of the company but I do remember its business purpose.  The company would pay students to take notes in their classes and then would sell those lecture notes to other students in those same classes.  This service seems like a waste of money to me since any student already paying tuition for the class could simply create his or her own version of the lecture notes.  If they went to class, that is.  But I suppose the prime target of this company could be those students who haven’t yet learned to take good notes themselves.  In any case, that business all those years ago in Tallahassee appeared to do very well in the face of some concerns expressed by various factions in the academic community.

I hadn’t thought about this company in years.  Recently, however, this kind of business is much in the news.  In 2008, Michael Moulton, a faculty member at the University of Florida, filed a lawsuit against a company called Einstein’s Notes, which sells what they call “study kits” for classes at UF.  Moulton, and the company that publishes the textbook that he has written, claim that the material in Moulton’s lectures is copyrighted and therefore, by publishing student lecture notes without his permission, Einstein’s Notes is violating that copyright.  The issue is a difficult one, especially because it is the material created by the student that is being sold by Einstein’s Notes rather than any written material created by the faculty member.

Copyright provides the author of a work the exclusive right to control the publication, distribution, and adaptation of that work.  An idea cannot be copyrighted.  Instead, copyright extends only to “any expressible form of an idea or information that is substantive and discrete and fixed in a medium.”  This is key to these lawsuits, since the gray area seems to lie in whether the lecture itself is “fixed in a medium.”  In Moulton’s case, it just might be.  Moulton has published two textbooks based on his lectures and uses them in his classes.  In addition, his publisher sells its own version of lecture notes for his classes.  So when a student takes notes in a class based on the lecture, although those notes are not a “copy” of the professor’s lecture, they are derivative of the lecture.  That is, those notes are a kind of adaptation of the professor’s lecture.

Of course, I’m not a lawyer but this is how I understand the issues in the Moulton case.  I think things get murkier when a faculty member has not “published” anything related to his or her lectures, however.  Moulton’s lawyer doesn’t seem to think so.  He says that if a faculty member were to write out the high points of the lecture on a transparency and display them to the class via overhead projector, that fixes the material in a medium.  If a student then bases her lecture notes on that transparency, her notes are a derivative of copyrighted material and therefore are not eligible to be sold without the faculty member’s permission.  The lawyer doesn’t say anything about whether material written on the chalkboard is fixed in a medium.

As an academic at a public university, I believe that education should be available as cheaply as possible for as wide an audience as possible.  For example, I teach a computer literacy class for free for senior citizens and get enormous pleasure from seeing them learn.  I would, however, have a problem if someone took my “lecture notes” from that class and sold them on the Internet without my permission.  The material that I teach in that class is basic information, available in a variety of forms from a variety of sources.  There’s nothing in the content that could be considered new information.  What is original about the class is the way the material is organized and presented.  Many of the senior citizens tell me stories about taking beginning computer classes elsewhere and feeling overwhelmed, lost and discouraged.  This class, they tell me, is the first time they’ve felt as though they actually could learn to use a computer to send and receive email and to search the Internet.  So there is definitely something unique and original about the way I’m presenting the information.  Why would I have a problem if this material were made available through a company like Einstein’s Notes?  It isn’t because I don’t want the material to be made available.  Instead, it’s because I don’t think Einstein’s Notes should make money from my work without getting my permission and without compensating me.

Moulton’s lawyer points out that Einstein’s Notes puts a copyright notice on the lecture notes that they sell.  In other words, the company sells the lecture notes but then attempts to prevent those notes from being copied.  They are claiming copyright on material that they played no part in creating.  In what world does that make sense?



Ian Schreiber, co-author of Challenges for Game Designers, is undertaking an interesting experiment in online education this summer.  He is offering an online course called Game Design Concepts via Web 2.0 tools (a blog, a wiki, a discussion board, Twitter and so on).  None of this is revolutionary.  What makes this experiment interesting is that Ian is offering the course entirely for free and allowing an unlimited number of people to register (or not) for the course.  Registration closed yesterday (June 29th) with 1402 registrants.  Many, many more people (myself included) will probably follow the course informally without registering for it.

One thing that I wondered was why Ian would decide to do this.  In his own words, here’s why:

I have many motivations for starting this project, some selfish and some altruistic. Best to be up front about it:

  • Game design is my passion, and I love to share it with anyone and everyone.
  • I have taught some classes in a traditional classroom and others online, and I want to experiment with alternate methods of teaching.
  • By exposing my course content and viewing the comments and discussions, I can improve the course when I teach it for money.
  • It is a career move. If this course is successful, it gives me greater exposure in my field and promotes my name as a brand.

The reason that I find most interesting is the last one. I wonder how he knows that a successful course will lead to “greater exposure” and branding his name.  But let’s assume that success will mean that he gets these things.  I also wonder how he will determine whether the course is “successful.”

Most of the materials that Ian is providing come in the form of text–twice-weekly blog posts, a wiki and so on.  These items are not really much different from the book that he requires the students to buy for the course.  In other words, so far this sounds like a correspondence course.  But online education differs from other types of correspondence courses in its ability to allow interaction between a faculty member and a student as well as between students.  With 1402 students, I don’t think Ian will have much time to interact with the students individually.  He puts the students into online groups and so they should have the opportunity to interact with each other.  Of course, the quality of the experience that one has in such a situation is likely to depend on the other students in one’s group.  It could be a great, meaningful experience if there is a critical number of students in the group who engage in thoughtful online discussions and group project work.  It’s unclear at this point how many of the 1402 students will have this experience.

In explaining that there is a textbook required for participation in the course (but no other expense), Ian says, “It’s still cheaper than a college education.”  He’s absolutely right.  The idea of getting together a group of people who are interested in learning the same thing is nothing new.  I participate in a two-person academic book club and in a teaching reflective practice group to accomplish something similar to what Ian is trying to do via this class, and I find both to be among my most rewarding activities.  The difference between my book club and Ian’s class, however, is that the class has a single person (a teacher) who structures the experience, while in the book club, we both take responsibility for structuring it.  This shared responsibility ensures that we are both serious about the work we do in our book club meetings.  But if enough of the people in Ian’s class are serious about the work, I think this will be a “successful” experience.



{May 22, 2009}   NeMLA 2010

There have been quite a few stories that have captured my attention in the nearly six month break that I’ve taken from writing entries in this blog.  I will be sharing several of those stories in the next few days.  In the meantime, I recently had a panel proposal accepted for the Northeast Modern Language Association conference that will be held in Montreal in April 2010.  Here’s the call for papers for my panel:

Playing Web 2.0: Intertextuality, Narrative and Identity in New Media


41st Anniversary Convention, Northeast Modern Language Association (NeMLA)

April 7-11, 2010

Montreal, Quebec – Hilton Bonaventure


A recent Facebook spoof of Hamlet by Sarah Schmelling illustrates the current proliferation of experiments in narrative form and intertextuality found in new media.  Web 2.0 tools, such as wikis, blogs and social networking sites, allow the average web user to actively participate in online life.  Given our societal bent toward postmodernism, it is not surprising that much of this online participation is characterized by a proclivity to challenge and play with traditional conventions.  This panel will examine play, defined in the broadest sense by Salen and Zimmerman as “free movement within a more rigid structure”, using Web 2.0 tools and new media.  Some questions of interest to the panel include:  Are there particular attributes of new media technologies that encourage play?  How is new media play different from/similar to play found elsewhere?  What impact do new media technologies have on our notions of play?  What are the motivations of those who engage in play via new media technologies?  Some example topics for the panel include: experimentation with new literary forms using social networking conventions (such as the 140-character status update); creation of online identities using text-based tools such as blogs; development of fictional worlds by fans of popular culture narratives using wikis and blogging tools; the use of casual online games to influence attitudes and behaviors concerning issues of social importance.

Submit 250-word abstracts to cleblanc@plymouth.edu.


Deadline:  September 30, 2009


Please include with your abstract:


Name and Affiliation

Email address

Postal address

Telephone number

A/V requirements (if any; $10 handling fee)


The 41st Annual Convention will feature approximately 350 sessions, as well as dynamic speakers and cultural events.  Details and the complete Call for Papers for the 2010 Convention will be posted in June: http://nemla.org/.


Interested participants may submit abstracts to more than one NeMLA session; however panelists can only present one paper (panel or seminar).  Convention participants may present a paper at a panel and also present at a creative session or participate in a roundtable.


Travel to Canada now requires a passport for U.S. citizens.  Please get your passport application in early.


