Desert of My Real Life











{April 24, 2011}   Games and Lessons for Life

I am a sucker for stories about the relationship between games and life.  When I was a graduate student, a story in the Tallahassee Democrat about the life of Warrick Dunn, a star football player whose police officer mother was killed in the line of duty while he was in high school, brought me to tears.  I love movies like Seabiscuit and Brian’s Song.  I have myself written blog entries ruminating on what playing games can teach us about life.

So you would think a story that I heard on NPR this morning would be right up my alley.  Weekend Edition Sunday host Liane Hansen interviewed Dan Barry, author of a new book called Bottom of the 33rd about the longest game in the history of US men’s professional baseball.  This particular game was played in 1981, between the Pawtucket Red Sox and the Rochester Red Wings, farm teams of the Boston Red Sox and the Baltimore Orioles, respectively.  The teams played 32 innings over 8.5 hours before the president of the league called the umpires to tell them to halt the game.  That was at 4 in the morning on Easter Sunday, and there were 19 people left in the chilly stands in Pawtucket, RI.  When the teams reconvened 2 months later to finish the game, nearly 6000 fans showed up and over 140 reporters from all over the world came to cover it.  Pawtucket won the game in the bottom of the 33rd inning, a mere 18 minutes after play resumed.

The subtitle of Dan Barry’s book is Hope, Redemption and Baseball’s Longest Game.  I expected the interview on NPR to touch on hope and redemption and perhaps something about how this longest game can teach us about perseverance.  Instead, the interview focused on the facts of the game, including the fact that Cal Ripken, Jr., who went on to set the record for consecutive games played in the major leagues, played all 33 innings, and that Wade Boggs, future Hall of Famer, tied it up for Pawtucket in the twenty-first inning.  Barry also told us that the original 19 fans who stuck it out for those 32 innings in April were annoyed that nearly 6000 people could now say they saw history being made when they had really only seen the last inning of that historic game.

But nothing in the interview touched on hope or redemption.  Or perseverance.  Or anything of importance.  Which annoyed me.  Not every sports story is a story about life, about issues larger than the game itself.  A book about the longest game in professional history is probably of interest to baseball fanatics.  But the fact that NPR picks the author of that book as someone deserving of an interview implies there is more to the story, something that we can all learn from.  As far as I can tell, that is not the case with this particular game or this particular book, the hyperbole of its subtitle notwithstanding.  Adding the words “hope” and “redemption” to the subtitle of a book will not make that book interesting for a general audience.  I realize I’m judging the book by its interview.  Maybe that’s not fair.  But neither is it fair to promise us a discussion of what a game can tell us about hope and redemption and instead waste our time with the facts and statistics of a particular game.  Come on, NPR.  With all the real, inspiring sports stories out there, we deserve better.  Did you choose to tell us about this book simply because the game went into the wee hours of Easter morning, 1981, which happens to be 30 years ago today?  That coincidence doesn’t make this story interesting for the general reader, either.



There is a huge controversy raging in NH this year involving the Northern Pass Project.  According to the project’s web site, the Northern Pass is “a transmission project designed to deliver up to 1,200 megawatts of low-carbon, renewable energy (predominantly hydropower) from Québec to New England’s power grid.”  Despite the apparent “greenness” of the project, many people in the state (including many environmentalists) are fighting it.

I’ve been having some difficulty separating hype from truth when talking to people and reading newspaper articles about this topic.  So I decided to do some additional research to see what I think before voting on a resolution about the project tomorrow, on election day.

Here is the proposed path of the power line.  You can see that it goes right through Groveton, Lancaster, Lincoln, Campton, Plymouth, Ashland and Bristol.  These are towns that depend heavily on tourist dollars for their economic vitality.  And much of the argument against the project focuses on its impact on tourism.  According to the project’s own web site, the towers along the project’s path will stand between 80 and 135 feet in the air.  The web site compares these towers to a typical cell phone tower, which stands 180 feet tall.  This seems to me to be an irrelevant comparison since cell phone towers are typically singular whereas the criticism of the project’s towers is that there will be 140 miles of them.  These towers will run through some of the most scenic areas of the state and the fear is that they will detract from the state’s beauty, meaning that tourists will no longer want to vacation here.

Another criticism of the project is that the electricity originates in Quebec, which means that we will be purchasing this power from Canada.  I was in a local business recently where the owner was expressing his discontent about the project to one of its officials.  I overheard him say that the project represents a “wholesale invasion of New Hampshire by Canada.”  That seems a bit overblown to me, but the project FAQ’s answer to the question of why we should buy power from Canada seems to be a non-answer.  It says that the New England states must buy renewable energy in as cost-effective a manner as possible, but nothing in the answer explains why this project is the most cost-effective option.  The answers in the FAQ do, however, make it very clear that we are indeed buying this electricity from Hydro-Quebec.  We are still relying on foreign energy.  That is not necessarily bad, but I don’t really see how it helps New Hampshire.

Another of the arguments in favor of the project is that it will create jobs in the North Country of New Hampshire.  But if you read between the lines, it’s clear that these jobs are construction jobs.  Once the transmission lines are built, those jobs disappear.  So this is a very short-term benefit with a long-term negative impact.

So far I have relied only on the information provided by the people involved in the Northern Pass project, and they really have not convinced me that this is good for the people of New Hampshire.  I haven’t even spent any time reading the web pages of the project’s critics.  The project plans to deliver this electricity to the southern part of New Hampshire and south of that (Massachusetts, Connecticut and Rhode Island), where the largest population base is.  And yet it seems that the largest negative impact will be on the people of northern and central New Hampshire.  How is that fair?  Unless someone comments with a compelling argument, I am going to have to vote in favor of the resolution against this project.  What do you think?



{December 27, 2010}   Popular Culture and TIA

I just finished watching the five episodes of the BBC miniseries The Last Enemy.  Ann had recommended it because it is about computers and privacy and also because Benedict Cumberbatch (of recent Sherlock Holmes fame) is the star.  I mostly liked the series but there were a couple of things that really bothered me about it.

The plot begins when Stephen Ezard (played by Cumberbatch) returns home to England after living in China for four years.  He’s coming home to attend the funeral of his brother Michael, an aid worker who was killed in a mine explosion in some Middle Eastern desert.  Ezard is a mathematical genius who went to China to be able to work without all the distractions of life in England.  He is a germaphobe (at least in the first episode–that particular personality trait disappears once the plot no longer needs it) who is horrified by the SARS-like infections that seem to be running rampant on the plane and throughout London.  After his brother’s funeral, Stephen goes to Michael’s apartment and discovers that Michael was married to a woman who was not at the funeral and who appears to be in hiding.  She’s a doctor who is taking care of a woman who is dying from some SARS-like infection–and that woman is in Michael’s apartment.  Despite his germaphobia, Stephen immediately has sex (in this germ-infected apartment) with his brother’s widow.

Meanwhile, Stephen’s ex-girlfriend is an MP who is trying to push through legislation that would allow the use of a program called Total Information Awareness (TIA).  TIA is already largely in place but the people of England are not happy about it.  So Ezard is recruited as a “famous” apolitical mathematician who will look at the program and sell it to the public.  What is TIA?  It’s a big database that collects all kinds of electronic information.  Every credit card purchase, every building entry with an ID card, every frame of video from street cameras, and so on is stored in this database.  The idea is that by sifting through this information, looking for certain patterns, English authorities will be able to find terrorists before they strike.  The interesting thing about this idea is that it isn’t fiction.  In 2002, the US government created the Information Awareness Office in an attempt to build a TIA system.  The project was defunded in 2003 because of the public outcry.  At the time, I was concerned about the project both as a citizen whose rights would potentially be threatened and as a computer scientist critical of the idea that we could actually find the patterns necessary to stop terrorism.
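
To make the pattern-searching idea concrete, here is a toy sketch (in Python) of what a query against a TIA-style event database might look like.  Everything here is hypothetical–the table, the fields, the “suspicious” pattern–and a real system would hold billions of records from thousands of sources:

    import sqlite3

    # A toy TIA-style event store (all names and records invented).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (person TEXT, kind TEXT, place TEXT, t INTEGER)")
    db.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
        ("alice", "purchase", "hardware store", 100),
        ("alice", "entry", "chemical plant", 160),
        ("bob", "purchase", "grocery store", 105),
    ])

    # A made-up "pattern": anyone who buys at a hardware store and then
    # enters a chemical plant shortly afterward.
    hits = db.execute("""
        SELECT DISTINCT a.person
        FROM events a JOIN events b ON a.person = b.person
        WHERE a.kind = 'purchase' AND a.place = 'hardware store'
          AND b.kind = 'entry' AND b.place = 'chemical plant'
          AND b.t BETWEEN a.t AND a.t + 100
    """).fetchall()
    print(hits)  # [('alice',)]

The storage and the querying are the easy parts; deciding which patterns actually mean anything is not.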

This is where the plot of The Last Enemy became problematic for me.  Michael’s widow, Yassim, who is now Stephen’s lover, disappears.  Stephen takes the job as spokesperson for TIA primarily so he’ll have access to a system that will allow him to track Yassim.  We see many scenes of him sitting for hours and hours, wading through data with the help of the TIA computer system.  At one point, he tracks the car that Yassim had been riding in by searching video footage from street surveillance cameras for the car’s license plate.  This is completely unrealistic, and it is one of the main reasons that, with our current technology, a TIA system will never work.  We don’t yet have the tools to wade through massive amounts of irrelevant data to find only the data we’re interested in.  And when that data comes in the form of photos or video, we don’t have quick, efficient electronic means of searching it for useful information.  Since so much of the plot of The Last Enemy hinges on Stephen finding these “needles in a haystack” in a timely manner, I had a difficult time suspending my disbelief.  Making meaning out of huge amounts of mostly irrelevant information is one of the open problems of current information technology research.
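
A little back-of-the-envelope arithmetic shows the scale of the problem.  All of the numbers below are my own illustrative guesses, not real statistics, but even generous guesses make the point:

    # Hypothetical figures for one day of city-wide camera footage.
    cameras = 10_000                    # suppose a city has 10,000 street cameras
    frames = cameras * 24 * 3600 * 25   # running at 25 frames per second
    print(f"{frames:,} frames")         # 21,600,000,000 frames

    # Suppose license-plate detection takes 50 ms of CPU time per frame.
    cpu_seconds = frames * 0.050
    print(f"{cpu_seconds / (3600 * 24 * 365):,.0f} CPU-years")  # ~34 CPU-years

Even a large computing cluster would need days to scan a single day of footage, and that assumes the detection software finds the plate reliably in the first place–nothing like the few on-screen minutes the plot allows Stephen.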

The second major problem that I had with the plot of this series has to do with Stephen, a brilliant mathematician and computer expert, not understanding that his electronic tracks within the system would be easy to follow.  He makes no attempt to cover those tracks, so as soon as he logs off, his pursuers log on behind him and look at everything he looked at.  And many major plot points hinge on his pursuers knowing what he knows.  Yet he seems surprised every time that others have followed him.  This is completely unrealistic if he really is the brilliant computer expert he would need to be in order for the government to hire him in this capacity.

I won’t ruin the surprises of the rest of the plot of this series.  But let’s just say that much of the premise seems pretty realistic to me, as though we’re not too far off from having to confront some of these issues ourselves.  For that reason, I recommend the series, despite the problems I saw and despite the unbelievable melodrama that arises as a result of Stephen’s relationship with his brother’s widow.  There is a particularly laughable scene between the two of them in which she tries to teach him how to draw blood by letting him practice on her.  It’s supposed to be erotic, which is weird enough given the danger they’re in at that point, but the dialog is so bad that I laughed out loud.  Despite these problems, the series explores enough interesting questions that I kept watching, wanting to know how the ethical questions would be resolved.



{December 26, 2010}   More About Net Neutrality

This entry was inspired by Meg, who asked some great questions after I posted my last entry.  In that entry, I explained what the net neutrality debate is about and why consumers should care about the FCC’s recent ruling requiring that traditional ISPs not discriminate among the traffic they carry over their wires.  This is a good thing for consumers (IMHO).  Near the end of the post, I also suggested that the ruling didn’t go far enough because it didn’t apply the same rules to wireless providers.  I didn’t explain what I meant by that, which prompted Meg’s questions.  So here’s a further investigation of the FCC ruling as it applies to wireless providers.

An article from Wired summarizes the three rules that the FCC passed for wired ISPs: they must be “transparent about how they handle network congestion”; they cannot block any particular traffic on wired networks; and they cannot “unreasonably” discriminate on those networks.  This last rule means that the speed of data transmission must be the same regardless of the source of that data.  So Time Warner (as an ISP) cannot make your connection to Netflix‘s online video service slower than the connection to Time Warner‘s own online video service (if it had one).

Despite these consumer protections, the ruling is being trashed because it does not apply these rules to wireless providers of Internet access.  What does that mean?  It means that if you access the Internet on your phone, your phone company can charge you different rates to access different sites.  If Facebook is particularly popular, for example, your phone company can charge you more to access it than it charges to access MySpace.  Or worse, if your phone company creates its own social networking site, it can charge you more to access competitors’ sites than to access its own.  Or even worse yet, it can prevent you from using its wireless network to access the competitors’ sites at all.  This is clearly not in the best interest of consumers.  It’s also not in the best interest of innovation, since most innovation does not come from the biggest companies and small companies could get squeezed out if no one is able to access their sites.

Right now, these (non)rules concerning wireless providers apply mostly to cell phone companies that provide Internet access.  Most other access is wired access.  Even when we have wireless networks in our homes and places of work, the access comes into the building over a wire and then we set up a local wireless network.  So the ISP isn’t providing the Internet access wirelessly, and it is therefore governed by the stricter rules imposed by the FCC ruling.  But that may not always be the case.  In the future, more and more ISPs may figure out ways to effectively and efficiently provide wireless access into our homes and businesses.  And if that happens, those new networks will be governed by the softer rules.  This seems short-sighted to me.  And it seems to have happened because the folks at the FCC are not tech people and so don’t really understand which differences between technologies matter and which don’t.  Let’s hope that changes.



The debate about net neutrality has been around for a while.  I taught my students about it back when I was still in the Computer Science Department, during the Bush administration.  Today, finally, we’ve gotten a ruling from the Federal Communications Commission about this “controversial” subject.  But to understand the FCC ruling, we first have to understand the debate.  And that means that we have to understand what the Internet actually is.

So, what is the debate?  It’s about your access to the Internet.  The Internet was founded as a decentralized network of computers.  That’s right.  The Internet is  a network of computers.  Each of these computers provides some service.  So when you connect to the “Internet,” you are connecting to a bunch of computers.  And you ask those computers to provide you with some sort of service.  Like viewing a web page.  Or looking at your email.  Or listening to music.  Or watching a movie.  Each of these services involves sending your computer data in the form of a bunch of zeroes and ones that your computer then translates into something that you (as a human) recognize.  Some of these services involve a few zeroes and ones while others involve MANY zeroes and ones.  The Internet was founded on the idea that zeroes and ones are zeroes and ones.  That is, we should not make any distinction between THIS set of zeroes and ones and THAT set of zeroes and ones.  That’s the idea of net neutrality.

How does this relate to you and your everyday, online life?  It means that when you use your Internet Service Provider (Time Warner Cable or Netzero or Verizon or whoever) to connect to Google (or Microsoft or LL Bean or YouTube or Hulu or whoever), the zeroes and ones are not discriminated against.  All zeroes and ones are treated equally.  So, for example, Time Warner cannot make a deal with Microsoft to make Bing (Microsoft’s search engine) run faster than Google (Bing’s direct competitor).  And Time Warner cannot make a deal with Microsoft to charge you more to access Google than to access Bing.  AND Time Warner cannot make a deal with Microsoft to completely block your access to Google so that you MUST use Bing as a search engine.  THAT is net neutrality.

So the issue has been whether to consider the Internet to be more like a communication network or an entertainment provider.  If the Internet is about communication, then it should be regulated in the same ways that phone communication has been regulated.  Phone companies must carry all phone calls, and their rates can vary only by distance.  In other words, they can charge you more to call California than to call the town next to you, but they can’t charge you more to call Business A than to call Business B based solely on the fact that Business A is different from Business B.  And they can’t block your call to any place.  They must carry all calls.  On the other hand, if the Internet is about entertainment, then ISPs should be able to make deals like your cable company makes deals.  For example, my cable company, Time Warner, recently failed to come to an agreement with an ABC affiliate out of Vermont.  As a result, I no longer get that channel in my cable lineup–I cannot access that channel no matter what I do (unless I change to a cable or satellite provider that gives me that access–but, of course, most cable companies have monopoly access in the towns where they provide service).  In addition, if I want access to certain channels, my cable company may charge me more.  I have access to The Sundance Channel but not to the Independent Film Channel because I pay at the level that gives me Sundance but not at the level that gives me IFC.

So the question has been, is the Internet a communication network (like phones) or an entertainment network (like cable TV)?  Another way to ask this question is: should Internet service provision be regulated to prevent differential access to certain sites?   Many Republicans have argued that deregulation, allowing companies to do whatever they want, promotes competition and is therefore good for consumers.  And so they have argued that we should allow Internet Service Providers to charge different amounts for different kinds of access and to actually block access to certain sites.  I generally believe that consumers are best served by rules that promote net neutrality.  So I have argued for a long time that the FCC should make rules that prevent situations such as what happened with my ABC affiliate and my cable TV provider.

So today, the FCC ruled in favor of net neutrality.  THIS is a good thing (IMHO) for consumers–and THAT is why you should care about this.  Some Republicans have called this ruling “regulatory hubris.”  Many on the other side of the debate have also decried this ruling because it doesn’t go far enough in its regulations.  The ruling explicitly singles out cell phone operating systems, such as Android, as the reason that the FCC softened its rules for net neutrality on wireless networks.  This is definitely something that consumers need to pay attention to.



{December 10, 2010}   Zero Views

Recently, my favorite NPR show, On the Media, had a story about an interesting blog called Zero Views.  The blog celebrates “the best of the bottom of the barrel” by posting the funniest YouTube videos that no one (NO ONE–hence the name “Zero Views”) has watched.  I found several things about this story that are worth commenting on. 

First, this is the kind of meta-site on the Web that I love.  It’s a site that highlights content from another site.  But here’s the thing.  As soon as this site focuses on a video that has zero views, it is HIGHLY likely that the video will no longer have zero views.  And in fact, if the Zero Views blog is at all popular (and my sense is that it is fairly popular), any video that it features is likely to go viral and become incredibly popular, with thousands of views.  That, to me, is a really interesting phenomenon.

The second thing that I find interesting about this story is an underlying issue about popularity.  This is something that I’ve been thinking about for a while.  What makes a blog, a site, a video “popular?”  The easy answer has to do with numbers of views.  But that somehow feels unsatisfying to me.  I’ve watched many videos and traveled to many links that were recommended to me, only to feel…dissatisfied with what I’ve seen.  This makes me think that popularity must have something to do with “likeability” or some other related concept.  But how would we measure “likeability”?  And surely the fact that someone “recommended” a particular site, blog, or video to me must have some relationship to its “likeability,” right?

There are sites such as Technorati that try to measure “popularity” by counting the number of links to each site.  That is, the more other sites link to your site, the higher you rank in Technorati’s popularity rankings.  There are many problems with this idea of “popularity,” the most obvious of which is that more tech-literate folks are more likely to link to other sites.  So if you are “popular” among less tech-literate folks, you are less likely to be linked to and will therefore be ranked as less “popular.”
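
The counting itself, for what it’s worth, is trivial–here is a minimal sketch over a made-up link graph.  The hard questions are the ones above, about what the counts actually measure:

    # A hypothetical link graph: each site maps to the sites it links to.
    links = {
        "blog-a": ["blog-b", "news-site"],
        "blog-b": ["news-site"],
        "news-site": [],
    }

    # Technorati-style "popularity": count inbound links per site.
    inbound = {}
    for source, targets in links.items():
        for target in targets:
            inbound[target] = inbound.get(target, 0) + 1

    for site, count in sorted(inbound.items(), key=lambda kv: -kv[1]):
        print(site, count)  # news-site 2, blog-b 1

Notice that blog-a never appears in the ranking at all: no one links to it, however likeable it might be.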

I don’t actually know how to measure “popularity” of websites, blogs, videos and so on.  The proliferation of “top 100” or “top 10” shows on TV makes me think that “popularity” is a cultural phenomenon, something we are interested in as a culture.  But I’m curious about what various groups of people mean when they use the word “popular” when it comes to online content.  What do you think?  I’m also really interested in the kinds of activities and behaviors that can affect the “popularity” of online content.  What do you think about that?



{November 11, 2010}   Google and Privacy

A story about Google and privacy on NPR last week caught my attention because it seemed so strange.  And now that I know what the real story is, it still seems really strange to me.

Google Maps’ Street View function is very cool.  It provides street-level camera views of many locations.  In Boston, for example, you could type “Prudential Center” into the Google Maps tool, choose “Street View” and then stand virtually in front of the Prudential Center and look around, as though you were actually standing at that spot.  You can then (virtually) move in any direction along the street, as though you were traveling in a car.  I’ve used the function before visiting new places, trying to find new addresses, to get a sense of what I’ll see when I’m actually there.

To create these street-level views, Google sends people in cars to drive around, videotaping the view at various locations.  To coordinate the video with actual addresses, the people in the car use mobile computing technology to gather GPS information that is then attached to the video.  The software that Google used in this project apparently had a feature that captured other kinds of data from the airwaves in addition to the data needed to create the street views.  In particular, this software sniffed out unsecured wireless networks and captured data such as email addresses, passwords, and IP addresses.  After denying that it was capturing such data, Google finally admitted that it was “inadvertently” capturing it but that the data was never used for any purpose.  The data capture was inadvertent because the company was using software that had been developed for other purposes and simply didn’t realize this capability remained intact.

In Britain, such data capture is illegal.  So the story I heard was about the British government deciding whether to fine Google for the “data breach” or not.  Instead of fining Google, the British government sought written assurance from Google that they would not engage in such practices again.  In addition, the government would like to conduct an audit of Google’s data protection practices.  And that, apparently, will be the end of the incident.

I think there are two interesting parts to this story that have not been discussed. 

First, there are a ton of wireless networks that are unsecured.  What this means is that people set up a wireless network in their house or their business and they don’t encrypt the data that is sent via that network.  So all information that is sent on the network can be read by anyone.  If you put in a password, it is transmitted in plain text, so anyone (with a sniffer–another type of program, readily available–that’s another post) can read it.  If you put in your bank account number, it is transmitted in plain text and anyone (with a sniffer) can read it.  In other words, it is a really bad idea to set up an unsecured, unencrypted wireless network.  When you buy a wireless router, the setup instructions make it pretty easy to set up a secure, encrypted network.  But many people choose not to, and I’m not sure why.  Of course, it still makes sense to me that it would be illegal to gather private information from unsecured networks.  If someone doesn’t lock the door to their apartment, we still think it’s a crime for someone to steal things out of that apartment.  It’s the same situation with an unsecured wireless network.
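
To give a sense of how little effort “reading” an unencrypted network takes, here is a sketch of a passive sniffer written with the scapy library (it needs root privileges, it only sees traffic visible to your own machine, and it should of course only ever be run on a network you own).  Old-style unencrypted web traffic on port 80 arrives as plain text:

    from scapy.all import sniff, Raw, TCP

    def show_plaintext(pkt):
        # On an unencrypted network, web traffic is readable as-is;
        # a password typed into a non-HTTPS form shows up verbatim.
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = pkt[Raw].load
            if b"password=" in payload:
                print(payload[:200])

    sniff(filter="tcp port 80", prn=show_plaintext, store=False)

The point is not this particular script; it’s that tools like it are trivial to write and freely available, which is exactly why encrypting your network matters.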

The second thing that I think is interesting about this story is the fact that Google’s software contained functionality left over from some previous project that was unrelated to the current project.  This might not seem like a big deal but I’ve seen this in other pieces of software and it is indeed a big deal.  A few years ago, Microsoft’s Excel was a hog, using huge amounts of memory and CPU time, far beyond what you would expect given its functionality.  I discovered (via the Internet, of course) that the Microsoft programmers had inserted a huge chunk of Microsoft’s Flight Simulator into the Excel code.  So if you pressed a bizarre sequence of keys while you were in Excel, you would suddenly find yourself flying a simulated plane, with some of the most realistic graphics available at the time.  This is called an “Easter egg.”  And here are some instructions for how to get to the Flight Simulator from within Excel. (By the way, I was unable to get this to work on Vista but you can go to Wikipedia to find some documentation of various Easter eggs in Microsoft products.)  It was a cool discovery.  Most Excel users never knew this functionality existed.  And it shouldn’t have existed because it was completely unrelated to spreadsheets.  It was (probably) the major reason that Excel was bloated, taking more memory and CPU time than necessary.

So although the story about Google’s privacy breaches is strange, it contains a couple of lessons for the average computer user as well as for software developers.  Average user–secure your wireless network!  Software developer–resist the temptation to play around as you develop your software.



{October 22, 2010}   Original Research–Good or Bad?

I recently rewatched Julia, the 1977 film starring Jane Fonda and Vanessa Redgrave.  It is based on a chapter in Lillian Hellman‘s memoir, Pentimento: A Book of Portraits.  That chapter tells the (probably fictional) story of Hellman’s longtime friendship with Julia, a girl from a wealthy family who grows up to fight fascism in Europe in the 1930s.  I loved this book when I read it in high school and I went on to read nearly all of Hellman’s other work as well as several biographies.

As I watched the movie, several questions occurred to me and so, being a modern media consumer, I immediately searched for answers online.  This search led me to Wikipedia, which for me is a fine source of answers to the kinds of questions I had.  In fact, I use Wikipedia all the time for this sort of thing.  I was surprised then to find the following qualifying statement on the entry for Pentimento:

This section may contain original research.  Please improve it by verifying the claims made and adding references. Statements consisting only of original research may be removed.

As I said, I use Wikipedia a lot.  And I have never seen this qualifying statement before.  I think this statement implies that original research is somehow bad.  I don’t think that’s what the folks at Wikipedia mean.  At least, I hope it’s not what they mean.  So I decided to look into the statement a little more deeply.  There are a couple of parts of the statement that are interesting.   

First, the words “may contain” are in bold.  I think that’s supposed to indicate that the section may or may not contain original research.  It’s clear that articles in Wikipedia should NOT contain original research but it isn’t clear why. 

I then checked to see how “original research” is defined by Wikipedia and found this on their policy pages: “The term ‘original research’ refers to material—such as facts, allegations, ideas, and stories—not already published by reliable sources.”  How would one determine whether a particular section contained “original research” or not?  Probably by looking for references to “reliable sources” in the section.  Therefore, if a section doesn’t contain references (or contains too few), it might be difficult to determine whether that’s because the author simply didn’t cite available sources, the work is based on “original research,” or the work is completely fabricated.  Or, I guess, it could be some combination of the three.  So I guess that’s why “may contain” is in bold.  The lack of references could mean any number of things.

The next part of the qualifying statement is even more interesting to me.  “Please improve it by verifying the claims made and adding references.”  This statement implies that “original research” is somehow less valid than work that has been taken from another source.  Again, I doubt that’s what the Wikipedia folks mean. 

So I continued to investigate their policies and found this: “Wikipedia does not publish original thought: all material in Wikipedia must be attributable to a reliable, published source. Articles may not contain any new analysis or synthesis of published material that serves to advance a position not clearly advanced by the sources.”  Because of this policy against publishing original thought, to add references to an article or section of an article does indeed “improve” it by making it conform more closely to Wikipedia’s standards for what makes a good article.

This policy against publishing original thought explains the rest of the qualifying statement.  My investigations into Wikipedia’s policies turned up an explanation of what it means to “verify” statements in an article.  This is important because Wikipedia says that included articles must be verifiable (which is not the same as “true”); that is, users of Wikipedia must be able to find all material in Wikipedia elsewhere, in reliable, published sources.  And yes, Wikipedia explains what they mean by “reliable.”  That discussion is not easily summarized (and isn’t the point of this post) so anyone who is interested can look here.

My surprise concerning the qualifying statement boils down to wording and I think the wording of the statement needs to be changed.  Currently, it implies that original research is bad.  But through my investigation, I’ve decided that Wikipedia probably means that articles should not contain unverified, unsourced statements.  Such statements could come from author sloppiness, original research or outright fabrication.  In any case, they should not be part of Wikipedia’s articles. 

Of course, I haven’t discussed whether the policy of not publishing original thought is an appropriate policy or not.  I have mixed feelings about this.  But that’s a subject for another post.



{October 17, 2010}   News Media Not Doing Its Job

As I drove to the airport in late September, I listened, as usual, to New Hampshire Public Radio in my car.   Election season is upon us so much of the coverage that morning was about state politics.  Two candidates are running for governor in NH, the incumbent John Lynch, a Democrat, and the Republican challenger, John Stephen.  They had debated the issues the day before and the reporter, Dan Gorenstein, was covering that debate.

Early in the report, Gorenstein quoted a voter who said, “They probably don’t agree on what day it is.”  That’s not a surprise for two politicians from opposite ends of the political spectrum.  Gorenstein went on to say that the two candidates presented very different numbers during the debate concerning the budget.  Lynch claims that, under his leadership, General Fund spending has gone down 7%.  Stephen claims that under Lynch’s leadership, budget appropriations have gone up 24%.  Gorenstein then told us that both numbers are accurate “as long as you cut the numbers the right way.”

I’m writing about this report because of what happened next.  I expected Gorenstein to explain to us the differences in the way the two candidates “cut the numbers.”  But that’s not what this report was about.  Gorenstein simply told us that the numbers are confusing, that voters are right to be confused by the numbers, and that most voters will probably not take the time to figure out the differences.  How is that news?  How is that helpful to anyone listening to the report?

I spent the rest of my drive south thinking about this report, about how NHPR (my news source of choice) failed me and the other voters of New Hampshire in this report.  Why hadn’t they delved into the numbers for me?  Why didn’t they explain to me how both sets of numbers could be accurate (and, as Gorenstein also said, “arguably misleading”)?  What was the point of “covering” the debate in this manner? I was (and continue to be) significantly disappointed in NHPR.  And so I planned this blog entry in my head.

As I began to write this entry, I became even MORE disappointed in NHPR.  If you check out the link that I provided to Gorenstein’s report, you’ll see that within the transcript there is a link to an earlier, related story, also reported by Gorenstein.  On August 12, 2010, Gorenstein reported on the widely different budget numbers that were being touted by the two candidates.  And he explained why they are different and how they can both be considered accurate!  WHAT?  He had already done the research and yet made NO mention of it in his coverage of the debate.  The really surprising thing to me is that the explanation is not even very hard to understand.  Very disappointing.  And a lesson to aspiring journalists about how NOT to report the news.

So that I don’t commit the same error for which I’m criticizing Gorenstein (even though it isn’t my JOB to inform the public), here’s the explanation for why the numbers are so different and can both be considered accurate.  John Lynch is correct when he says that spending from the General Fund has gone down 7% in the last two years.  But notice the words “General Fund.”  Many items have been moved to their own, separate budgets.  For example, the Liquor Commission (which runs all of the state liquor stores and deals with liquor licenses) budget is no longer part of the General Fund.  I tried to determine whether the Liquor Commission is self-sustaining, that is, whether it takes in enough money in sales and fees to cover what it spends.  I was unable to find that information, however.  So it isn’t clear to me what moving these items out of the General Fund means for the numbers.  John Stephen’s numbers take into account ALL of the money in the state budget, not just the General Fund.  If you do that, you’ll see that our state budget increased 24%, in large part because of federal stimulus funds, money that the state received from the federal government to undertake specific projects such as bridge repair.  It isn’t clear to me that it’s helpful to include these funds when judging whether John Lynch is a good fiscal manager; passing up these funds would have been problematic (in my opinion).  In addition, Stephen’s number is about appropriations rather than money actually spent.  In other words, he’s looking at how much was budgeted to be spent, rather than how much was actually spent.  In 2009, the state budgeted 12% more in spending than it actually spent.  Lynch, on the other hand, is talking about the amount of money actually spent.  In other words, the two candidates are talking about apples and oranges.
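
To see just how apples-and-oranges the two claims are, here is the same arithmetic with invented round numbers (in millions of dollars–these are NOT New Hampshire’s actual figures):

    # Lynch's measure: money actually SPENT from the General Fund.
    gf_spent_before, gf_spent_now = 1_500, 1_395
    print(f"General Fund spending: {gf_spent_now / gf_spent_before - 1:+.0%}")  # -7%

    # Stephen's measure: TOTAL money APPROPRIATED across all funds,
    # including items moved out of the General Fund, federal stimulus
    # money, and appropriations that were never actually spent.
    total_approp_before, total_approp_now = 10_000, 12_400
    print(f"Total appropriations: {total_approp_now / total_approp_before - 1:+.0%}")  # +24%

Both computations are “accurate”; they simply measure different pools of money in different ways.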

That wasn’t so difficult to understand, was it?  I think Gorenstein could have explained this in his report.  Or, if he didn’t have time, he could have simply said that interested voters could go online and find his report about this to understand the differences between the numbers.  Then he’d be doing his job.



{September 23, 2010}   New Definition of “Friend”

One of the ways that I first knew that Facebook was having a major impact on our society was that I heard my friends in the real world, many of whom are English professors, using the word “friend” as a verb.  Before Facebook, “friend” was a noun.  Before Facebook, the verb form of “friend” was “befriend.”  But now, it is common to use “friend” as a verb, as in “He wants to friend me” or “She friended me.”  Of course, this use of the word refers to the creation of a symmetrical relationship between two Facebook accounts in which each acknowledges the relationship in a way that allows the owner of each account to see the content posted by the owner of the other account.  At least, that has been how we Facebook users have used the word from 2004 (when Facebook was founded) until this week.

And that’s because Facebook is once again changing the definition of the word.  Until this week, when someone made a request to be my friend, that would appear on my Facebook page with two options.  I could either accept this friend request or I could ignore it.  I’m not sure why I wasn’t able to outright REJECT such requests but ignoring them certainly appealed to my ever-shrinking nice side.  In any case, in anticipation of the new Facebook movie (The Social Network) and the “real” Facebook movie (Catfish), Facebook has made a change.  We no longer get the options of accepting or ignoring friend requests.  Instead, we can either accept the friend request or we can say “Not Now.”

So what does “Not Now” mean?  When you click “Not Now,” you are putting that particular friend request into a pending state, indicating that you want to deal with it later.  While the request is pending, the person who sent it will still see the same “Awaiting Friend Confirmation” message on your profile that they would have seen before you dealt with their request at all.  In other words, they will have no idea that you have put them into this pending state.

Meanwhile, if you look at the right side of your main Facebook page and scroll down, you’ll see a “Requests” section and the friend request will appear there.  If you then click on it, you will be given the option at that point to either confirm the friend request or delete it.  By the way, THIS is how you really say you don’t want to be friends with someone.

But there are some other important points to keep in mind.  First, remember that you have to pay attention to your privacy settings.  For example, I make the majority of my information available to “Friends Only,” which means that only my friends can see my information.  Another of the options is that “Everyone” can see your information.  If that is the choice you have made, you might be interested in this new change made by Facebook concerning friend requests.  If you have some of your settings set to “Everyone,” then anyone whose friend request you have answered with “Not Now” and have not yet deleted from your Requests menu will get your status updates in their Newsfeed.  As though they had been approved as your friend.  Even though you have put them into this “pending” status.

So I think there are a couple of important things to pay attention to here.  The first is that “Everyone” is always a dangerous setting for privacy.  So think carefully about whether you want something to be set to “Everyone.”  The only things I have set to “Everyone” are “Send me Friend Requests” and “Send me messages.”  In other words, everyone can request to be my friend.  And everyone can send me a message.  I set this to “Everyone” because I wanted people who were requesting to be my friends to send me a message about why I should accept their friend request.  But since I don’t have “Search for me on Facebook” set to “Everyone,” I feel pretty safe here.  I have that set to “Friends and Networks.”

Now that the logistics of these settings are out of the way, it might be interesting to consider why Facebook would be making these changes.  Why would Facebook be changing the way friend requests work?  I think Facebook wants to change the way we think about the word “friend” so that we will be prepared for some additional changes in the future.  Currently, I think most people think of a “friend” relationship as a reciprocal, two-way relationship between two people.  By allowing this “pending” state for friends, Facebook is trying to get us to believe that friendship need not be reciprocal, need not be two-way.  If you put someone in this pending state (and you haven’t set your privacy settings correctly), then they will see things about you in their newsfeed that “non-friends” won’t see.

Why would Facebook want to change the definition of “friend?”  I think it’s all about money.  More specifically, I think it’s all about advertising.  I think Facebook is trying to push the envelope in terms of the definition of “friend” so that we increasingly accept things from our “friends” (even those in a pending status) as somehow more valid than “real” advertising.  Somehow Facebook will make money from our acceptance of non-friends as friends of some type, even if that type is “pending.”  Facebook doesn’t want us to think too much about this.  They just want us to accept.  Or at least say “Not Now.”


