Desert of My Real Life











{November 23, 2009}   Digital Rights Management Part 2

I’ve written about digital rights management before.  My latest encounter with these technologies came this weekend, and I’m reminded that the “rights” in question are not the rights of the consumer.

A couple of weeks ago, I realized my car is equipped with a jack that allows me to plug in an MP3 player to play music through my stereo.  Yes, my car is more than three years old, but my excuse for only just finding this jack is that it is located in the center console, beneath the parking brake.  Who would have looked in such an unlikely spot for an MP3 player jack?  In any case, once I found it, I thought that I should buy a cable that would allow my cheapo, no-name MP3 player to be connected.  Liz and I found those cables this weekend in Concord at Pitchfork Records for $5. 

I decided that I would augment my newly useful MP3 player with some new music.  Rather than purchase entire CDs which I would then rip and put on the player, I went in search of a legal, online source for music.  I checked out a few options and thought that I would include iTunes music store in my research.  So I downloaded the iTunes player and started to browse the music.  They have a great selection and you can listen to a sample of each song before you purchase it.  That’s a nice feature.  Finally, I made my selection, entered my credit card information and completed the purchase. 

Then I tried to download the song to my non-iPod MP3 player.  Unfortunately, Apple doesn’t want me to be able to play my legally purchased song on a non-Apple device.  And so it is saved on my computer in a proprietary format that cannot be played on any device that is not some version of an iPod.  So I have now purchased a song that I can listen to on my computer but not on my MP3 player.  Unless, of course, I buy an iPod. 

This kind of digital rights management is all about the rights of the company that sold me this song.  I have no rights in this situation.  Through some research, I’ve learned that I can save the song to a CD in the proprietary format, change some settings in my iTunes player so that all ripped songs are ripped as MP3 files, and then rip the song from the CD.  If I do that, I will apparently have an MP3 version of this song that I have purchased legally.  I haven’t tried this method yet.  If I only have a couple of songs that I need to go through this process with, it is certainly manageable.  But it’s still a pain. 

I understand that companies want and need to protect their digital rights.  This particular practice, however, strikes me as monopolistic and against the underlying ideas of capitalism, that the market should prevail.  Why isn’t anyone suing Apple for these anti-capitalist practices?



{November 2, 2009}   Differences in Media

I’ve been thinking lately about the differences between media types.  This thinking was inspired by the new movie Disgrace based on J. M. Coetzee’s novel of the same name.  I will definitely see this movie (if it is ever released throughout the US) but I’m worried about the choices that the filmmakers have made.  I thought Coetzee’s novel was brilliant because it was told from the point of view of a character who is somewhat reprehensible.  But, of course, his reprehensibility must only be hinted at since he himself wouldn’t think he was reprehensible.  The subtlety of the novel is difficult to convey in a film.  And so the filmmakers have made choices that reduce the brilliant ambiguity of the novel.  And that makes me wonder whether I’m interested enough in the plot of the novel to enjoy the movie.

As I’ve mentioned before, I’ve been watching Battlestar Galactica on DVD.  The original series aired on television and so the commercial breaks are obvious on the DVD.  In the most recent episode that I watched, a character is in a room with a spiritual advisor, discussing a recurring dream.  At a dramatic moment in the telling of the dream, the screen goes black, clearly a commercial break.  When we return to the story (without having to watch a commercial, which is why we like Netflix), we enter the story at exactly the same point that we left it.  We left the story at a tension point so that we would be sure to come back after the commercial.  This technique works well in television.

The same technique does not work well at all in novels.  I read and hated Dan Brown’s novel, The DaVinci Code.  I really wanted to like this novel.  Dan Brown, after all, is from New Hampshire, and the premise of the story is intriguing.  But I couldn’t get past the poor craftsmanship of the novel.  The characters were two-dimensional and indistinguishable from each other.  I figured out the “secret” of the novel (which I won’t spoil here) about half-way through.  But my biggest problem was the chapter breaks.  Dan Brown writes really short chapters, some of which are a page long.  And often it is completely unclear why these chapter breaks occur.  Why have a chapter that is one page long and then have the next chapter start right where the action of that really short chapter ended?  I felt as though Brown had thought about moving these two chapters around, away from each other, in order to build tension, in much the same way that Battlestar Galactica’s breaks for commercials build tension.  A good editor could have made sure these two chapters did not appear one right after the other, unlike the two scenes with the commercial break between them.  These examples remind me that different media require different production techniques just as they require different analysis techniques.

On a side note, the use of the made-up word “fracking” on Battlestar Galactica is getting on my nerves.



{July 18, 2009}   New Kindle Developments

I was talking to my dad tonight about the Kindle.  He’s a fan and wants one, but feels as though he doesn’t read enough to justify the expense.  I’ve written about the Kindle before and have said that I have a problem with Amazon’s high pricing of electronic books.  Now Amazon has screwed up in another way and I have mixed feelings about that.

Recently, Amazon removed all traces of the digital versions of two of George Orwell’s classic novels, 1984 and Animal Farm, from their web site so that Kindle users can no longer purchase them.  That action is not controversial.  Amazon’s other actions, however, are controversial.  Amazon also removed all digital traces of the novels from the Kindle devices of users who had purchased them.  It turns out that the publisher who sold Amazon the rights to distribute the novels did not actually own the copyrights (in the US) for them.  When Amazon determined that they were illegally selling the digital versions of the novels, they stopped selling them.  But they also retroactively removed the digital versions of the novels from the devices of those who had purchased them.  People in the blogosphere writing about this issue have conflicting ideas concerning Amazon’s reaction.  Some are outraged while others think Amazon did the right thing.

The difference in these two points of view comes down to values.  Those who think Amazon did the right thing liken this to the police confiscating a stolen car from your driveway.  You never had the right to own the item, whether you purchased it knowing it was stolen or not.  Those who think Amazon did the wrong thing believe that the users had purchased the item in what they thought was a legal manner and, therefore, Amazon should have left well enough alone.  In fact, many argue that situations such as this are a case against digital distribution of content since the ownership of digital content is so ephemeral.  The truth seems to be somewhere in between these two extremes, I think.  There are two reasons that this is not the same as the police confiscating a stolen car.  First, Amazon had a duty to determine that they were selling a legal product.  They failed in this duty and should be held liable in some way for that failure.  Second, once Amazon discovered their error in illegally selling the product, they were less than forthcoming about the remedy.  They did refund the purchase price of the novels, but they didn’t clearly explain what had happened or clearly notify those who had purchased the novels that they were being removed.  Instead, Amazon surreptitiously removed the novels from the Kindle devices.  That’s wrong.  On the other hand, Amazon is not the devil in this situation.  They honored the copyright of the novels and, most importantly, they refunded the purchase price.  They tried to do the right thing.

As in so many situations, the real issue here seems to be about Amazon’s lack of forthrightness about the issue once it was discovered.  The cover-up of the crime once again turns out to be worse than the crime itself.  Did we learn nothing from Watergate?



{May 22, 2009}   NeMLA 2010

There have been quite a few stories that have captured my attention in the nearly six-month break that I’ve taken from writing entries in this blog.  I will be sharing several of those stories in the next few days.  In the meantime, I recently had a panel proposal accepted for the Northeast Modern Language Association conference that will be held in Montreal in April 2010.  Here’s the call for papers for my panel:

Playing Web 2.0: Intertextuality, Narrative and Identity in New Media

 

41st Anniversary Convention, Northeast Modern Language Association (NeMLA)

April 7-11, 2010

Montreal, Quebec – Hilton Bonaventure

 

A recent Facebook spoof of Hamlet by Sarah Schmelling illustrates the current proliferation of experiments in narrative form and intertextuality found in new media.  Web 2.0 tools, such as wikis, blogs and social networking sites, allow the average web user to actively participate in online life.  Given our societal bent toward postmodernism, it is not surprising that much of this online participation is characterized by a proclivity to challenge and play with traditional conventions.  This panel will examine play, defined in the broadest sense by Salen and Zimmerman as “free movement within a more rigid structure”, using Web 2.0 tools and new media.  Some questions of interest to the panel include:  Are there particular attributes of new media technologies that encourage play?  How is new media play different from/similar to play found elsewhere?  What impact do new media technologies have on our notions of play?  What are the motivations of those who engage in play via new media technologies?  Some example topics for the panel include: experimentation with new literary forms using social networking conventions (such as the 140-character status update); creation of online identities using text-based tools such as blogs; development of fictional worlds by fans of popular culture narratives using wikis and blogging tools; the use of casual online games to influence attitudes and behaviors concerning issues of social importance.

Submit 250-word abstracts to cleblanc@plymouth.edu.

 

Deadline:  September 30, 2009

 

Please include with your abstract:

 

Name and Affiliation

Email address

Postal address

Telephone number

A/V requirements (if any; $10 handling fee)

 

The 41st Annual Convention will feature approximately 350 sessions, as well as dynamic speakers and cultural events.  Details and the complete Call for Papers for the 2010 Convention will be posted in June: http://nemla.org/.

 

Interested participants may submit abstracts to more than one NeMLA session; however, panelists can only present one paper (panel or seminar).  Convention participants may present a paper at a panel and also present at a creative session or participate in a roundtable.

 

Travel to Canada now requires a passport for U.S. citizens.  Please get your passport application in early.



{December 31, 2008}   Failed Predictions

Predicting the future is a notoriously difficult endeavor and yet there is never a shortage of people willing to play the game, especially at the end of a year. 

Many of the predictions for 2009 seem to involve world politics.  For example, over at Psychic World, Craig and Jane Hamilton-Parker predict that an assassination attempt on Barack Obama will occur in 2009.  They posted this prediction on October 9, 2008 and then updated the entry on October 27, 2008 (in red font, just so we know that it’s an important update).  The update tells us (and I can almost hear the breathlessness with which this important information is stated) that this prediction already came true!  Apparently, the vague assassination “plot” by two neo-Nazis thwarted by the ATF in October constitutes an assassination “attempt”.  The fact that these men did not actually begin to implement the plot, which involved first shooting over 100 black people in Tennessee and following that spree up with the assassination of then-Senator Obama, doesn’t matter to the psychics who made this prediction.  It still counts as a success for their ability to predict the future.  An even bigger issue for me is the fact that they predicted the assassination attempt would take place in 2009.  Clearly, this plot was discovered in 2008.  The psychics never discuss how useful it is for a prediction to be that far off in its timing and details.

As amusing as I find the predictions of psychics who claim to be able to “foresee” the future, the predictions that I’m most interested in are the ones made by those who examine trends and then predict where those trends will take us.  People who make these kinds of predictions are called “futurists” or “futurologists” and, unlike psychics, claim no mysticism in coming to their predictions.  Instead, according to Wikipedia, futurologists study “yesterday’s and today’s changes, and aggregating and analyzing both lay and professional strategies, and opinions with respect to tomorrow. It includes analyzing the sources, patterns, and causes of change and stability in the attempt to develop foresight and to map possible futures.”  Although futurologists make predictions about many different fields, I’m particularly interested in the area of technology, especially because technological change is very rapid and vast.  I think the history of technology shows that despite their claims to scientific methodologies, the predictions of futurologists are typically as wrong as the predictions made by those claiming to have a mystical insight into the future. 

The technological futurologist who has gotten the most attention in the US in recent years is Ray Kurzweil, the author of a number of books that have captured the popular imagination.  Kurzweil is a computer scientist from a time when computer scientists were rare.  When he was just a teenager, long before computers were widespread and common, he created computer software that wrote impressive musical compositions using the patterns it discovered analyzing great masterworks.  He also developed the first optical character recognition software which led to his invention, in 1976, of The Reading Machine, which read written text out loud for blind people.  Since that time, he’s invented musical synthesizers, speech recognition devices, computer technology for use in education, and a whole host of other useful tools.  He’s obviously a smart, creative guy who knows a lot about technology and how to use it to benefit humans.  Kurzweil’s faith in technology is so great that he considers himself to be a transhumanist, advocating the use of technology to “overcome what it regards as undesirable and unnecessary aspects of the human condition, such as disability, suffering, disease, aging, and involuntary death,” according to Wikipedia.  It is in this area that many of his predictions fail.

In his 1999 book, The Age of Spiritual Machines, about the impact of artificial intelligence on human consciousness, Ray Kurzweil made a number of predictions about technology at the end of 2009, 2019, 2029, and 2099.  Since we are just about to begin the year 2009, I thought it might be interesting to consider how likely it is that Kurzweil’s predictions can come true in the next year.  Chapter 9 of the book, which makes predictions for 2009, can be read online here.

The chapter is divided into sections called The Computer Itself, Education, Disabilities, Communication, Business and Economics, Politics and Society, The Arts, Warfare, Health and Medicine, and Philosophy.  Although some of Kurzweil’s predictions have indeed come to be reality, the vast majority of them are still far off into the future.  In fact, some involve technological tangents that seemed interesting in 1999 but that our society has chosen not to pursue.

Kurzweil predicted that computers would be much more ubiquitous than they actually are and that they would be smaller than they actually are.  Because computers are so ubiquitous and small today, it’s difficult to imagine how someone might have overestimated these trends just ten years ago.  But that’s the problem with Kurzweil.  He is such a technology evangelist that he tends to go too far.  In the case of the computer itself, he predicted that the average person would have a dozen computers on and around her body which would communicate with each other using a wireless body local area network (LAN).  These computers would monitor bodily functions and provide automated identity verification for financial transactions and for entry into secure areas.  The technology he describes is nearly available now in the form of radio frequency identification (RFID) chips which are common in some warehouses and which are now part of every US passport.  Most of these RFID chips are passive devices, however, which means that they can only be read by an external device and do not provide computing power themselves.  In addition, there has been something of an uproar over the increased use of these chips.  For example, I recently received a new ATM/credit card from my bank that had an RFID chip embedded in it to make using the card easier.  I would no longer need to swipe the card to use it.  Instead, I could simply tap it against any reader.  But because it doesn’t have to be swiped, anyone who got close enough to me with a reader could read the chip.  I didn’t see the advantage of having such a chip in my credit card and saw many disadvantages, so I returned it, making a special request to get a card without the chip.  I suspect there are others out there who have similar concerns.  Kurzweil did predict that privacy issues would be a concern in 2009 but I’ll talk about that later.

Some of the other things about the computer itself that Kurzweil got seriously wrong involve the way in which we interact with our computers.  He predicted that most text would be created using continuous speech recognition software–in other words, we would speak to our computers and they would transcribe our speech into text.  This is clearly not going to become the norm in the next year and I’m not sure we would want it to become the norm.  As I sit typing this blog entry, for example, I have the television on (because multi-tasking is the prevalent way of interacting with the world–something that Kurzweil does not mention) and Evelyn is sitting next to me interacting with her own computer.  Neither of us would want the other to be talking to her computer at this moment.  This might be an example of a place where a cool technology would actually be an obstacle to the way most users interact with their computers.  But Kurzweil did not stop there.  He also predicted that we would wear glasses that allowed us to see the regular visual world in front of us but with a virtual world superimposed on it using tiny lasers.  Such glasses do exist but they are novelties, used only in experimental situations.  And I think most people would find such a superimposition to be a distraction.  Until some benefit can be shown for this technology and how it allows us to interact with the world, I think it will remain a novelty.

Another area where Kurzweil’s predictions have not come to fruition (yet) is the area of disability.  It is in this area that Kurzweil betrays his transhuman biases.  He predicted that by the end of 2009, disabilities such as blindness and deafness could be dealt with using computing technologies to the extent that such disabilities are no longer considered handicaps but are instead mere inconveniences.  Although significant progress has been made in the area of augmenting such situations using computing technologies, we are nowhere close to where Kurzweil predicted we would be.  Kurzweil’s zeal in the advancement of technology once again led him to overestimate the progress that we would be able to make in ten years.  The history of technology is filled with such zeal and overestimation.

I won’t detail every area that Kurzweil gets things wrong but I do want to touch on the area of politics and society.  The Obama campaign rode its unprecedented use of technology to a presidential victory, but in ways that were not predicted by Kurzweil.  Kurzweil predicted that privacy issues would be a primary political issue.  Although there are groups of people who are very concerned with privacy in our society today (both because of technical issues and because of political issues involved with the War on Terror), I don’t think many people would say that privacy is a primary political issue in our society, although I, for one, wish it were a bigger issue for most people. 

I’m curious to see which of Kurzweil’s predictions do eventually come to pass.  My guess is that anyone who pays close attention to technological issues could attain the same level of accuracy that he does.  At least he doesn’t claim to have some mystical connection to what the future will bring.



{November 10, 2008}   Missing in Action

Clearly, I have been missing in action.  The only thing I have to say about that is that my real life (as opposed to this, online, unreal life) has been crazy.  I have thought about several posts but have not had time to actually write any of them.  I’m hoping that things will settle down in a few weeks.  In the meantime, here’s an excellent post from Drawing In starring one of my favorite actors, Jodie Foster.  And it’s about a kind of technology that we don’t see much any more.  Enjoy!



{September 13, 2008}   Digital Rights Management

Back in the late 1980’s, I worked as a volunteer on a running race. Because I had a background in computer science, one of my tasks was to set up a database of all the race entrants and then to enter their finishing positions after the race so that we could publish the results in the local paper. A friend of mine had an Apple II computer with a database management program on it. I think the database program was AppleWorks. In any case, the database management program had a primitive copy protection mechanism, a scheme for ensuring that users of the software did not give copies of it to their friends. Each time I started the program, I had to answer a question from the user manual. The question might be something like: On page 37 of the manual, what is the fourth word in the third paragraph? This was in the days before copy machines were widely available so the thinking was that the software would not be very useful if you didn’t also have a copy of the user manual. It was a very primitive way of trying to prevent users from giving the software to all of their friends, of trying to protect what the software developers felt was their right to limit the copying of their software. Of course, this mechanism would not work today since it’s extremely easy to copy user manuals. But even back then, the critique of this protection mechanism was that it was easy to circumvent if you were determined to do so but it was simply an inconvenience for legitimate users. What if you lost your user manual, for example?
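The manual-lookup scheme described above amounts to a simple challenge-response check. Here is a minimal sketch of how such a check might have worked; the page numbers and manual text are invented for illustration, not taken from the actual AppleWorks program:

```python
import random

# A stand-in for the printed manual's text, keyed by page number.
# The real program presumably indexed its own published manual.
MANUAL = {
    37: ["The", "database", "stores", "records", "in", "fixed-length", "fields."],
    52: ["Press", "Open-Apple-S", "to", "save", "the", "current", "file."],
}

def challenge():
    """Pick a random page and word position; return the prompt and answer."""
    page = random.choice(list(MANUAL))
    word_index = random.randrange(len(MANUAL[page]))
    expected = MANUAL[page][word_index]
    prompt = f"On page {page} of the manual, what is word number {word_index + 1}?"
    return prompt, expected

def verify(expected, answer):
    # A lenient, case-insensitive comparison.
    return answer.strip().lower() == expected.lower()

prompt, expected = challenge()
print(prompt)
```

The weakness is exactly the one noted above: the check only proves possession of the manual's text, so a photocopy (or today, a scan) defeats it entirely, while a legitimate user who misplaces the manual is locked out.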

Since that time, digital rights management has come of age. DRM is a hot topic with owners of digital content claiming that their rights cover all sorts of things, allowing them to do all sorts of things to our computers without our consent. And yet, it is virtually impossible to use technology to prevent the copying of software and other digital content. So DRM is typically criticized for not actually protecting against illegitimate copying while making the lives of legitimate users very difficult. A number of stories about DRM have been in the news recently.

What is digital rights management? According to Wikipedia, it is a generic term that refers to any scheme that a hardware manufacturer or copyright holder implements to prevent illegitimate use of their hardware or copyrighted materials. In 1998, the United States passed the Digital Millennium Copyright Act (DMCA) which, among other things, made the circumvention of any digital rights management mechanism a crime. In other words, if a company used the DRM mechanism that I described above (asking users to answer questions from a user manual), then copying the manual and giving it to a friend would violate the DMCA. But the situation for users of digital content is even more dire than that. DRM mechanisms today are wide-ranging, claiming all kinds of rights for the owners of digital copyrights, at the expense of your right to control what happens on your own computer.

I have been thinking about the DMCA since its passage because of its immediate impact on the research of computer scientists. Soon after the passage of the DMCA, the Secure Digital Music Initiative (SDMI) ran a contest that challenged researchers to break their latest digital watermarking scheme. Edward Felten, a computer scientist at Princeton, chose not to sign any of the confidentiality agreements that would qualify him for the monetary prize of the contest. Within three weeks, he and his team had broken the watermarking scheme and wrote a scientific paper that described the techniques they used. When the SDMI and the Recording Industry Association of America (RIAA) found out that the team was planning to present this paper at a conference, they threatened to sue, citing violation of the DMCA, specifically the portion of the act that makes it illegal to circumvent DRM schemes (of which the digital watermarking scheme was one). Felten withdrew the paper but also sued the SDMI and the RIAA, seeking a ruling that presenting the original paper would have been lawful. Because Felten had not actually been sued and therefore had not been harmed, his case against the SDMI and the RIAA was dismissed on the grounds that he lacked standing to sue. Since then, the Justice Department has said that any threatened legal action against researchers such as Felten under the DMCA is invalid. But this judgment has not yet been tested in a court of law. And in the meantime, content providers have gotten bolder in their uses of DRM technologies.

In early 2007, Sony BMG Music Entertainment agreed to settle with the Federal Trade Commission after it was discovered that music CDs from the company contained software that was secretly installed on any computer on which the CDs were played. This software “limited the devices on which the music could be played, restricted the number of copies that could be made, and contained technology that monitored their listening habits to send them marketing messages.” Because the software gave access to users’ computers to Sony BMG, it also opened up holes on those computers to any intruder who knew about them. In addition, the software, once discovered, was unreasonably difficult to remove. The Federal Trade Commission said that this secret installation of software violated federal law. The settlement was a financial and public relations disaster for Sony BMG and should have put that kind of DRM technology out of business forever.

But the long-awaited release of Will Wright’s new game, Spore, from Electronic Arts earlier this month shows that DRM is alive and kicking. The reviews on Amazon are overwhelmingly negative due to the existence of SecuROM, a particularly nasty implementation of DRM. This software, developed by Sony DADC, does not announce that it is installing itself, limits the user to three installations of the game, and is very difficult to remove, even after the game itself is uninstalled. It remains to be seen what kinds of security risks are opened up on the computers that have SecuROM on them. The biggest complaint seems to be about the limit of three installations because of how strict this limit is. Apparently, changes in hardware make the software believe that a new installation has occurred. So if a user upgrades her video card, she may use up one of her Spore installations. This software sounds very similar to the software that Sony BMG got slapped down for using so I can only imagine what is going to happen as these thousands of disgruntled gamers make their dissatisfaction known. Of course, the developers of Spore claim they are just trying to stop piracy. The problem with this argument is that the DRM scheme was broken before the game was released so anyone intent on pirating the game will be able to do so. Only legitimate users of the game will be harmed by SecuROM.

Legitimate users of Yahoo Music recently learned the lesson that purchasing DRM-protected content is actually like renting, rather than purchasing, that content. Yahoo Music Store will close its virtual doors at the end of this month. If you are one of the unlucky legitimate customers who bought your music through this store, you will no longer have access to your music because of Yahoo’s DRM scheme. When the store closes, the DRM license key servers will shut down. If you can’t get a DRM license key, you can no longer listen to music that you legitimately purchased. Meanwhile, those who pirated that same music will continue to enjoy what they pirated.
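The Yahoo Music situation comes down to a simple dependency: playback of the encrypted file requires a key from a server the vendor controls. A toy sketch of that dependency (the class and track names are invented for illustration; this is not Yahoo's actual protocol):

```python
class LicenseServer:
    """Stand-in for a vendor's DRM license key server."""
    def __init__(self):
        self.online = True
        self.keys = {"song-123": "decryption-key"}

    def get_key(self, track_id):
        if not self.online:
            raise ConnectionError("license server has shut down")
        return self.keys[track_id]

def play(track_id, server):
    # The encrypted file on the customer's disk is useless without
    # the key, even though the customer "owns" the file.
    key = server.get_key(track_id)
    return f"playing {track_id}, decrypted with {key}"

server = LicenseServer()
print(play("song-123", server))   # works while the store is open

server.online = False             # the store closes; servers go dark
try:
    play("song-123", server)
except ConnectionError as err:
    print("purchase now unplayable:", err)
```

The customer's files never change; only the server's availability does. That is what makes DRM-protected "purchases" effectively rentals.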

Content providers need to stop creating roadblocks for their legitimate users. These roadblocks do nothing to protect content.



{September 9, 2008}   Fringe

I am watching a new show on Fox tonight.  It is called Fringe and it’s from JJ Abrams, the guy who developed Lost which is one of my favorite shows of all time.  It’s a cross between X-Files, Lost and Heroes, all of which I have really enjoyed.  But here’s the thing.  One of the major plot developments in the pilot involves a Harvard faculty member who was institutionalized 17 years ago in a psychiatric hospital.  He is released from the hospital and goes back to the Kresge building at Harvard.  His lab had been in the basement of that building which has been used as storage for the last 17 years.  Lo and behold, his lab is pretty much intact, with microscopes and lab equipment just as he left it all those years ago.

This series is science fiction but this particular plot point was the most difficult one for me to swallow.  If Harvard is like most institutions of higher education (including my own), space is at a premium.  There is no way that such a large space would be left alone for 17 years while the primary user of that space languishes in a mental institution!  It’s funny to me that it’s this mundane detail rather than the many sci fi potentialities that makes me question the reality of this show.



{September 7, 2008}   We ARE Telling Stories

As I suggested in a previous post I don’t understand why FaceBook calls each status update a story. I said that if we were to consider each update a plot point in a longer story, then I could understand the use of the word story. Clive Thompson, in a New York Times article, explains that part of the reason these status updates (no matter how banal they might seem individually) are compelling is precisely because taken together, they tell us a story of our friends’ daily lives that we wouldn’t otherwise have. It’s a fascinating article. Thanks to Liz for pointing it out to me.

I can now be found on Twitter. I look forward to reading 140-character installments of your life story there.



{July 21, 2008}   Non-FaceBook Friends, Beware

Pat sent me this article from the New York Times. So if you’re a non-FaceBook friend of mine and I don’t return your phone call, I hope you’ll understand why.

Hey, Friend, Do I Know You?

By DAVID CARR
Published: July 21, 2008

Not that long ago, I needed some advice on the book business and thought to ask my friend Buzz Bissinger, the author of “Friday Night Lights” and “A Prayer for the City.” The only sticking point was, we’d never met.

Although he used to be a reporter, we are not what I would call peers. He wrote one of the greatest sports books ever, and oh, one of the best books about city government ever. “Friday Night Lights” became a movie and then a television series and apart from me being a hopeless fanboy of the show, we have nothing in common.

Other than Facebook, of course, where we are “friends,” after he was referred by our mutual friend Vernon Loeb of The Philadelphia Inquirer. Taking that supplied noun as a permission, I sent Mr. Bissinger a message on Facebook and asked for advice. We got on the phone and I found out exactly, precisely what I wanted to know from, as they say in the Web world, a highly trusted source.

Isn’t “friendship” wonderful?

Facebook, which I had always thought of as a guilty diversion just a step up from Funny or Die, does have its social — and business — prerogatives. The network is on a tear right now, achieving numerical parity with MySpace in global reach.

Last month, according to comScore, Facebook had 123.9 million unique visitors and 50.6 billion page views worldwide while MySpace had 114.6 million unique visitors and 45.4 billion page views. MySpace still dominates in the United States, but if my page is any indication, a lot of people who aren’t texting OMG about the guy sitting in the next booth feel a need to opt in to social media.

According to company executives, Facebook, which has over 80 million subscribers worldwide, doubled the number of subscribers under 35 last year, but it tripled the number of subscribers between 35 and 54. Early adopters of Facebook, which was the province of students until 2006, must wonder who let all the old guys in. Sometime in the next day or so, Facebook will unveil a major new design for the site, which users can opt-in to.

As we speak, my Facebook page, a couple of months old, is crawling past 200 friends. There are people on there whom I have known since they wore skinny ties and distressed sport coats, and there are others whom I would not know if they walked up with name tags on the size of sandwich boards. But we have friends in common, and in the parlance of social media, we are connected.

Skeptics slag Facebook and its ilk as e-mail with pictures, but do not underestimate the value of a photo — oh, he seems nice — along with a referral. If the person pinging you is friends with five friends of yours, shouldn’t that person be your friend?

Once you jack in, Facebook creates its own imperatives. Why am I uploading pictures of my last family trip to the lake in the Adirondacks at 11:45 p.m.? Because I want someone, anyone, to see them. But from a business perspective, it creates some more complications. Say the head of a media company that I occasionally cover wants to “friend” me. Seems O.K., but should he really know what I look like with my shirt off? (Trust me, don’t let the image linger. I shower with my shirt on.)

I neither want to be strategic in my postings nor selective in my friending, but I should probably be doing one or the other. I am also not religious in maintaining my profile, in part because I have no personal assistant to update my page, as one executive I know told me he does.

Its viral effects can be profound. Peter Shankman, founder of Help a Reporter Out (gee, wonder how I found that?), began that mission on his Facebook page in November of 2007. The endeavor outgrew its Facebook nest and was reborn a few months later as www.helpareporter.com, which offers sources to reporters who post. It started out with 694 members; now it has more than 15,000.

I think of Facebook as a middle ground between business and pleasure, sort of MySpace for post-adolescents or LinkedIn for professional late adopters like me. Facebook, developed by Harvard kids to keep track of each other, was unleashed on the world in 2004 and has become an Ivy League at large for the land-grant set, a place where it’s not whom you know, it’s whom you kind of know.

But some people want no business mixed with their pleasure.

“Web sites are similar to TV and radio stations: people expect some form of programming format,” wrote Tyler DeAngelo, interactive creative director at DeVito/Verdi. “I don’t want to hear country music on a rock station, so why do I want to hear you talk about financial reports in the same place I discuss who I have been hooking up with?” (Besides, he adds, “I’m not sure you want Tom from accounting checking out your hot daughter in her bikini last summer.”)

Chamath Palihapitiya, vice president of Facebook, noted that the site is not primarily a business tool. “We are not going to help you close a deal, but Facebook is a social utility that is relevant in many contexts, including business,” he said. “As you get older, there is this huge tapestry of your life, with many inflection points from where you went to school and the jobs you had, and as more and more people connect with you, it rapidly increases the utility.”

But at some point, you lose utility as well. As Simon Dumenco noted in Portfolio, Bill Gates dropped the habit after getting 8,000 requests a day to be friends. Some social truisms — the rich will always be popular — still hold in the supposedly flat world.

When a new media winner like Facebook comes over the horizon, who loses? In my case, it’s probably my real actual friends. As a reporter, I learn to hate the telephone during the day, but at night I feel somewhat social again and step out onto the porch to call buddies for a little nocturnal quality time. Now I am too busy checking their status updates to actually speak to them.
