Desert of My Real Life











{February 20, 2016}   Apple vs. The FBI

I’ve been reading a lot about the controversy surrounding the court order compelling Apple to help the FBI break into the phone used by one of the San Bernardino killers, Syed Farook. I think at this point, I mostly understand the technical issues although the legal issues still confound me. And there’s a significant question that I’m not seeing many people discuss but that would help me to understand the situation better.

Here’s what the case is about. The iPhone used by one of the killers is owned by his employer, San Bernardino County. The FBI sought and received a court order to confiscate the phone with the intention of gathering the data stored on it. The County willingly turned the phone over. As an aside, there is currently a controversy in which the FBI says that a County employee, working on his own, reset the password for the phone after giving it to the FBI, which means one possible method for retrieving the data from the phone is no longer available. The County claims that its employee reset the password under the direction of the FBI. Somebody is lying. If the FBI really did direct the employee to reset the password, they need to hire more adept technologists. The news stories about this controversy neglect to mention that the method in question would only have worked if Farook had not changed his password after he turned off the automatic iCloud backup. I think it’s pretty unlikely that he left it unchanged.

So, the FBI has physical access to the iPhone but the problem is that the phone has two layers of security. The first is that it will automatically delete all of its data if someone enters an incorrect password 10 times. The second is that the data on the phone is encrypted, which means that it can’t be read unless the password is entered. The FBI sought and received a court order to require Apple to “bypass or disable” the feature that wipes the phone clean. Doing so would then allow the FBI an unlimited number of password attempts to decrypt the data stored on the phone. Apple’s response to the court order is that to comply would be to put the data of every iPhone user in jeopardy.
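
To see why that wipe feature is the whole ballgame, consider the arithmetic. Here is a minimal sketch (my own illustration, assuming a 4-digit passcode, which was the iPhone default at the time) of why removing the 10-attempt limit makes a brute-force attack trivial:

```python
# Illustrative sketch only, not Apple's actual mechanism: a 4-digit
# passcode has just 10**4 = 10,000 possibilities. With the auto-wipe
# disabled, trying them all is trivial; with it enabled, an attacker
# gets only 10 guesses before the data is destroyed.

from itertools import product

def guess_space(digits=4):
    """Generate every possible passcode of the given length."""
    for combo in product("0123456789", repeat=digits):
        yield "".join(combo)

total = sum(1 for _ in guess_space(4))
print(f"4-digit passcodes to try: {total}")              # 10,000
print(f"Chance of success in 10 tries: {10/total:.2%}")  # 0.10%
```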

One of the things that confused me about this story was that I kept hearing and reading reports about Apple helping law enforcement to unlock iPhones many times in the past. The folks over at TechCrunch helpfully explained that Apple’s current response is not hypocritical. For iPhones running the operating system iOS 7 (and previous versions of iOS), Apple had the ability to extract data from the phones. And so it complied with court orders requiring it to extract data from iPhones. For iPhones running iOS 8 and later, Apple removed that capability. Apple has stated that the company wants to protect its users’ data even from Apple. The iPhone in question is running iOS 9. So Apple does not currently have the capability to extract data from the phone in the ways that it has in past cases. In order to comply with the court order, Apple would need to write some new software, a version of iOS with the phone-wiping feature disabled, and then install it on the iPhone in question. The court order requires Apple to provide “reasonable technical assistance.” Is writing new software “reasonable technical assistance”?

But here’s the question that I haven’t found an answer for. Is there a precedent for the government compelling a person (remember: corporations are people so Apple is a person, right?) to build something that doesn’t already exist? The case that’s being cited as a precedent, United States v. New York Telephone Co. (1977), seems to me (admittedly, not a lawyer) to be pretty different. In that case, the Supreme Court said that the government could compel the New York Telephone Company to put a pen register (a monitoring device) on a phone line. But the telephone company already had the technology to monitor phone lines so it wasn’t as though they were being compelled to create a new technology. Apple is being asked to write a new piece of software, to build something that doesn’t already exist. This diversion of resources is one of its grounds for objecting to the court order. So, John McAfee has offered to write the software for free. It isn’t clear, however, that writing the software is enough since iPhones will only run software that has been signed by Apple. Even if McAfee were successful, the government would still need Apple’s cooperation. And that’s unlikely since Apple’s philosophy is that their products should provide their customers with as much data security as possible.
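
The code-signing point is worth unpacking. Here is a minimal sketch of the general principle (my own illustration using Python’s third-party cryptography package; Apple’s actual signing scheme is more elaborate than this): the device trusts only a public key baked in by the vendor, so software from anyone else fails verification no matter how skillfully it was written.

```python
# Minimal sketch of the idea behind code signing, using the third-party
# "cryptography" package (pip install cryptography). NOT Apple's actual
# scheme, just the principle: the phone holds the vendor's public key
# and refuses any software whose signature doesn't verify against it.

from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The vendor holds the private key; the device ships with only the
# public half.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_trusted_key = vendor_key.public_key()

firmware = b"custom OS build with auto-wipe disabled"
signature = vendor_key.sign(firmware, padding.PKCS1v15(), hashes.SHA256())

def device_will_install(image: bytes, sig: bytes) -> bool:
    """The device installs an image only if the vendor's signature checks out."""
    try:
        device_trusted_key.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(device_will_install(firmware, signature))                     # True
print(device_will_install(b"someone else's unsigned build", signature))  # False
```

This is why a third party writing the software for free doesn’t resolve anything: without the vendor’s private key, the phone simply won’t run it.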

Ultimately, I agree with Bruce Schneier that the American public is best served if Apple does not comply with the government’s order. The government says that this request would be a one-time thing, that they would not ask for such assistance again. I don’t believe that. Even if I did believe that the government would not ask again, I don’t think we can keep such software, once it exists, out of the hands of the many, many hackers who want to steal your data. That is a threat to our everyday lives that far outweighs the threat of terrorism.

Addendum (2/21/16): I’ve read some articles that take issue with Apple CEO Tim Cook’s “slippery slope” argument. His argument has been that if Apple complies with this order to circumvent the iPhone feature that wipes the phone clean after 10 incorrect password attempts, it will have no basis to refuse to do so in the future. Every time the US government asks Apple to circumvent the feature, it will have to do so. Government lawyers have said that this request is about this phone only and that they won’t ask in other cases. Tell that to Cyrus Vance, Jr., the district attorney in Manhattan. On Weekend Edition this morning, Vance argued that Apple should comply with the order because its encryption is blocking law enforcement’s access to the data on more than 175 phones related to criminal investigations. If this software is available for use by law enforcement officials, it will be available for use by the “bad guys.” That puts everyone’s data in jeopardy. Apple is protecting your ability to keep your data out of the hands of hackers (whether they work for the government or not).



Quite a lot of people hate “Obamacare,” which is otherwise known as the Patient Protection and Affordable Care Act. And there are indeed things to hate about the law. For example, I am a proponent of single-payer health insurance and so the “individual mandate,” where people are required to purchase insurance on their own or pay a “tax” or a “fee” or whatever you want to call it, is problematic to me. I would prefer that we be completely up front about things and build the payment for health care into our tax law. Yes, I know that makes me a “socialist” but I think health care is kind of like fire fighting. Do we want to go back to the days of private fire fighters, where you had to pay up front or the fire fighters wouldn’t show up at your house? Fire fighting is something that we should all contribute to via our tax dollars and then when we need it, the service is provided. If that’s “socialism,” then yes, I am for socialized medicine.

As I said, I believe there are things to complain about and criticize in the Affordable Care Act. But it was quite surprising to me that one of my FB friends posted a link to a video claiming that the Affordable Care Act mandates that we all be implanted with RFID chips containing our health information by March 23, 2013. I had not heard of this mandate, despite the fact that I have been paying pretty close attention to the debate. I would have serious problems with such a mandate but there were several things about the claim that immediately made me suspect it was a figment of someone’s imagination. If you can bear to watch the video, here’s a short version of it. But for those of you who can’t bear to watch the video, I’ll describe it.

The video begins with an advertisement from a company that makes implantable radio frequency identification (RFID) chips. These are chips that many of us already possess on our ATM cards or passports. The chips contain information of some sort that can be read with a special device that picks up the radio signals emitted by the chip on the card. There are companies that make versions of these chips that can be implanted under the skin of a human or an animal. Some pet owners may have implanted them into their dogs or cats in case the pet gets lost. In any case, the video starts with an ad for these implantable chips and then claims that the Affordable Care Act requires that everyone in the US be implanted with one of these by March 23, 2013. The evidence? The narrator reads a passage (claiming it comes from the law itself) that discusses the creation of a government database to keep track of devices that have been implanted into humans. Then the narrator reads another passage that mandates the creation of a system within hospitals and doctors’ offices that will allow medical information to be stored on and read from RFIDs. These passages say that these two systems must be in place within 36 months of the passage of the law. That’s where the narrator gets March 23, 2013: 36 months after the law was signed on March 23, 2010.

The thing to notice about these passages is that they say nothing about forcing the implantation of RFID chips. A database to keep track of devices that have been implanted in humans would keep track of things like pacemakers and hip replacements and all kinds of devices that are implanted voluntarily and for the improvement of someone’s life. And we already use RFIDs to keep track of personal information, such as financial information or passport information. These RFIDs are embedded in cards that we carry around with us and the passage that the narrator reads simply suggests that we need a system that would allow medical information to be stored on RFIDs, presumably embedded in cards similar to a credit card or a passport. There is nothing about mandating the implantation of an RFID. Here’s what Snopes has to say about this particular conspiracy theory; note that their evaluation is that there is no truth to the claim.

When there are real things to criticize in this law, why would someone make up a threat such as this? I think it’s because it works. It plays on an emotional response in ways that the real issues do not. And so you get lots more people to care about what is admittedly a scary idea than you would ever get to care about the real problems with the law. So people who would probably not pay attention to the health care debate otherwise are now vehemently against the government intruding on our medical privacy in this way, despite the fact that there is no evidence that the government plans to intrude in this way. So lots of people who would actually benefit from the provisions of the Affordable Care Act are vehemently opposed to the law for reasons that have nothing to do with the reality of the law. And no amount of debunking will make these untruths go away. Just ask the American public whether the US ever found evidence that Saddam Hussein had weapons of mass destruction.



{June 5, 2012}   Magical Thinking

You probably haven’t noticed that I’ve been away for a while. But I have. In fact, this is my first post of 2012. I have no excuse other than to say that being the chair of an academic department is a time sink. Despite my absence, there have been a number of things over the last five months that have caught my attention and made me think, “I should write a blog entry about that.” I’m sure I’ll get to many of those topics as I renew my resolve to write this blog regularly. But today, I encountered a topic so important, so unbelievable, so ludicrous, that I have to write about it.

One of my friends posted a link to Stephen Colbert’s The Word segment from last night. Go watch it. It’s smart and funny but incredibly scary for its implications. For those of you who don’t watch it, I’ll summarize. The word is “Sink or Swim” (and yes, I’m sure Colbert knows that isn’t a word; he’s being ironic). Colbert is commenting on the fact that North Carolina legislators want to write a law requiring that scientists compute predicted sea-level rise using only historical data and historical rates of change rather than all available data. In other words, scientists are not allowed to predict future rates of change in sea levels, only future sea levels. They cannot use the data they have that show the rate of change itself is increasing dramatically. Instead, they can only predict the sea level based on how fast it has risen in the past. Colbert has a great analogy for this. He suggests that his life insurance company should only be able to use historical data in predicting when he will die. Historical evidence shows that he has never died. Therefore, his life insurance company can only use that evidence in setting his life insurance rates. Never mind the fact that there is strong evidence from elsewhere suggesting it is highly likely that he will die at some point in the future. The analogy is not perfect but I think it illustrates the idea.

Using all evidence, scientists are predicting sea levels will rise by about a meter (Colbert makes a funny comment that no one understands what this means because it’s in metric; that’s the subject of another post) before the end of the 21st century. If this is true, anyone who develops property along the coast will see their property underwater in a relatively short amount of time. Insurance rates for such properties will probably be astronomical and it might even be impossible for such development to occur because without insurance, loans may be impossible to secure. That’s not good for business. In what can only be called “magical thinking,” the North Carolina legislature is putting it into law that climate change models can only use historical rates of sea-level rise to make predictions about future sea levels. Such models ignore the data suggesting that the rate of rise in sea levels is increasing. This will make the historical rates of increase look incredibly slow. In fact, the bill actually says, “These rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly … .” So despite evidence that sea levels are rising in a non-linear manner (because the rates of increase are actually increasing), predictions cannot use this fact. When scientists use a linear rate of increase, the models predict that sea levels will rise by “only” 8 inches by the end of the century. I think even these rates are scary, especially for coastal development projects, but scientists are pretty sure they vastly underestimate the extent of the danger. It’s as though these legislators think they can simply wish away climate change.
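
A quick sketch makes the difference concrete. The numbers below are mine, invented purely for illustration (not the actual North Carolina data), but they show how a linear extrapolation from a historical rate underestimates an accelerating trend:

```python
# Illustrative sketch with made-up numbers: when the underlying trend is
# accelerating, extrapolating linearly from the historical rate badly
# underestimates the future value.

def linear_projection(rate_mm_per_year, years):
    """Extrapolate at a constant historical rate."""
    return rate_mm_per_year * years

def accelerating_projection(rate_mm_per_year, accel_mm_per_year2, years):
    """Extrapolate with the rate itself increasing (constant acceleration)."""
    return rate_mm_per_year * years + 0.5 * accel_mm_per_year2 * years**2

years = 90  # roughly 2010 to 2100
linear = linear_projection(2.0, years)             # ~2 mm/yr historical rate
accel = accelerating_projection(2.0, 0.08, years)  # the rate itself creeping up

print(f"Linear:       {linear/25.4:.0f} inches")   # ~7 inches
print(f"Accelerating: {accel/25.4:.0f} inches")    # ~20 inches, about half a meter
```

Same starting rate, wildly different endpoints. The legislature is mandating the first calculation and outlawing the second.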

We live in a society where saying something is so is often as good as it being so. Is Barack Obama a citizen of the US? Evidence indicates that he actually is but critics persist in saying that he isn’t. As recently as 2010, 25% of survey respondents believed that he was born in another country and so isn’t eligible to be president. Were the 9/11 attackers from Iraq? Despite the objective evidence, 44% of the American public believe that several of them were Iraqis, which would then presumably be justification for the war in Iraq. Is global warming caused by humans? Despite overwhelming scientific opinion that it is, only 47% of the American public believe it is. Why do people believe these erroneous claims? Because the media (or at least parts of the media) advocate such positions. And because we are guilty of magical thinking. Say something is true and it will be true.

Scott Huler of Scientific American says it better than I can: “North Carolina legislators are now tossing around bills that not only protect themselves from concepts that make them uncomfortable, they’re DETERMINING HOW WE MEASURE REALITY.” Meanwhile, sea levels rise non-linearly, no matter what the North Carolina legislature legislates. And because we refuse to accept reality, we lose valuable time for an effort to reverse or at least to slow down this scary trend. So I have a tip for you: don’t buy any coastal property.



I don’t think anyone would accuse me of being a Luddite. I began to learn to program in the late 1970s when I was in high school, majored in computer science, worked as a software developer and got a PhD in computer science. I love my tech toys, er, tools, and think that overall, we are better off with the technology we have today than we were before it was available. But I am often a skeptic when it comes to educational technology.

I was reminded of my skepticism about a month ago when I came across this photo and caption. For those of you who won’t click through, I’ll describe it. It is a photo of a classroom smart board being used as a bulletin board, with large sheets of paper taped to it, completely covering the smart board itself. The poster of the photo asks a number of questions, including whether the teacher who uses the equipment in this manner should be reprimanded for educational malpractice. The comments on the photo imply that the teacher’s use of the equipment in this way is evidence that she is resistant to using it appropriately. I was happy to see that the poster of the photo also asked some questions about why a teacher might use the equipment in this way, such as a lack of training. But I think the issue really is that the teacher has not had the right kind of training, and the probable reason for that is that the promoters of educational technology are almost always focused on the technology itself and not on the education that the technology can provide.

The fact that someone would consider reprimanding a teacher for using technology in this (admittedly inappropriate) way is part of the problem that I see in all corners of educational technology. When we engage in technology training for teachers, we almost always focus on how and not why. That is, we focus on how to use the technology and don’t engage in meaningful discussion of the pedagogical advantages of using the technology in the classroom. The impression, then, is that we want to wow our students with this new technology, to do something flashy because the flashiness will capture the attention of the students. I see several problems with this idea. First, if students are using similar technology in all of their classes, the newness of the technology wears off and the flashiness disappears. Second, we should be in the business of getting students to actually learn something, and if we don’t have proof that a particular technology (used appropriately) improves learning, perhaps we shouldn’t be investing in such high-priced items. In other words, I do not see technology as a panacea for our educational problems.

I’ll give an example of how this has played out in my own teaching. A few years ago, my University purchased a bunch of clickers. I went to several training sessions for the clickers, hoping to hear a pedagogical explanation for why the use of the clickers might improve student learning. I heard a lot about how to use the clickers (technical details) as well as the cool things I could do to survey my students to see where their misunderstandings are. But even this last point didn’t convince me that the technology was worth the cost or the effort to use it because I already have ways to survey my students to see where their misunderstandings are. In fact, I’ve been developing those kinds of techniques for years, without the use of technology. So what I wanted to know was how the technology would improve on those techniques so that my students learn better. And no one could provide me with those answers.

This summer, however, I went to a technology institute for faculty in the University System of New Hampshire. One of our presenters told us about a learning framework which might help us think about technology use in the classroom. He cited several studies that sought to identify why individual tutoring of students is so effective at improving student learning. The results show that students learn best when they get immediate feedback about their learning (the more immediate the better), can engage in conversation about their learning (that is, when they have to try to explain what they learned to someone else) and have learning activities that are customized to their needs (so that they are not wasting their time going over material that they already understand). What technology can do, he argued, is help us provide individual-tutoring learning experiences for large numbers of students cost-effectively. Therefore, we can use clickers not to provide the teacher with information about student learning but rather to provide the students themselves with information about their own learning. That is, the clickers allow us to ask questions of the class, have all the students answer simultaneously and then, when we reveal the answer(s), each student can see how he fared compared to his classmates and compared to the correct answer(s). This immediate feedback provides an individual-tutoring type of experience only if it is done with an eye toward making sure students understand what they are supposed to get out of the use of the clickers. But too often, clickers are used in the classroom because they are cool, and new, and innovative.
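
The mechanics of that feedback loop are simple enough to sketch. Here is a minimal illustration (the responses are hypothetical, my own invention) of the collect-then-reveal pattern: everyone answers, then the distribution and the correct answer are shown together so each student immediately sees where they stand:

```python
# Sketch of the clicker feedback loop with hypothetical data: collect
# everyone's answer, then reveal the distribution alongside the correct
# answer so each student instantly sees how they fared.

from collections import Counter

responses = ["A", "C", "B", "C", "C", "A", "D", "C", "B", "C"]
correct = "C"

tally = Counter(responses)
total = len(responses)

for choice in "ABCD":
    count = tally.get(choice, 0)
    bar = "#" * count
    marker = " <- correct" if choice == correct else ""
    print(f"{choice}: {bar:<10} {count/total:>4.0%}{marker}")
```

The point of the studies the presenter cited is that this display is for the students, not the teacher: the value lies in each student comparing their own answer to the reveal, immediately.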

So back to the question of whether the teacher who used the smart board inappropriately should be reprimanded. If, instead of having students write on big pieces of paper which she taped onto the smart board, the teacher had the students type their items into a computer and then she had displayed them on the smart board in the “appropriate” manner, we would not be having this discussion. But in neither case have we asked what her pedagogical motivations were for the exercise that the students engaged in. That to me is the important question and the one that would determine whether she has committed “educational malpractice.” And before we spend tons of money on smart boards and iPads and clickers and and and…, I think we should focus on the learning improvements that might be gained from the use of such technology. In most cases, I don’t think we have a whole lot of evidence that it does improve learning. And I definitely don’t think we’re training teachers to use it in a way that takes advantage of the ways that it might improve learning.



A student from my Creating Games class came to my office today to talk about the keynote speech from a conference he had recently attended.  The speaker was lamenting the fact that kindergarten has become increasingly focused on “preparing children for first grade” rather than socialization through play activities.  Because we talk a lot about play and its importance in life (even adult life), he wanted to know what I thought about this.

We had a great conversation and in the middle of it, I had an epiphany that many of our society’s ills stem from the very philosophy that encourages (or even requires) kindergarten classrooms to be structured around preparation for first grade.  I think the philosophy comes from capitalistic tendencies to focus on “efficiency,” “productivity,” and “progress,” all of which are defined in a very narrow sense.  And the more I think about this, the more I see it everywhere in our society.

My original thought was that we are forgetting the importance of play because we are so focused on short-term, immediate, measurable outcomes.  We have few resources and so we need to use them efficiently in order to make progress toward some short-term goal.  Any “unproductive” use of resources is discouraged as wasteful.  That is, if we can’t see the immediate consequence of the use of those resources, the resources have been wasted.  So children engaging in unstructured, “unproductive” play in kindergarten is wasteful because they aren’t learning to read, something they must know how to do when they enter first grade.  We need to test our students regularly (using standardized tests) to measure their “progress” and if they aren’t all making the same “progress,” someone must be punished (with loss of funding or firing). So we eliminate art programs and physical education and other extra subjects so we can focus our resources on getting students to perform well on our measurement tools.

As I thought more about this, I started to see this idea everywhere. Because money is the only measurement tool that matters for the stock market, if a company is not making adequate “progress” (which means increasing profits every quarter; profits that stay the same are not “progressing”), it will be punished by shareholders leaving it (well, maybe not in this particular economic climate). So companies engage in practices which make (or save) money in the short term but which might not make sense if we had a longer view.  And mathematicians and fund managers design financial products intended to increase in profits every quarter. If we had a longer view, we would recognize the risk of these products and wouldn’t allow them to take down our entire economy with their collapse. We won’t fund basic research and development because it isn’t immediately clear what the benefits are. And so we won’t learn more about how the universe and the world work just for the sake of learning those things today, even though tomorrow that knowledge might lead to amazing technological advances. I could go on and on.

This kind of thinking is the root of many of our societal problems. Unstructured, unsupervised play teaches kids skills that can’t be easily measured and whose benefits may not be visible for years. They will learn to entertain themselves. They will learn to focus on an activity for more than a half hour at a time. They will use their imaginations. They will learn to navigate the world on their own, without some external force guiding them to the next “correct” step. These things may take years to learn and are definitely not easily measured. But it seems to me that those are not valid reasons to give up on them. Yet, I think we have largely given up on them. Just as we’ve given up on many of the things in my list above.

I realize I probably sound like a curmudgeon longing for “the good old days.” Or that I think we shouldn’t measure anything in the short-term. But that isn’t my point at all. My point is simply that our societal focus on ONLY measurable, short-term outcomes has consequences. And I would argue that those consequences are mostly bad. They lead to less creativity and fewer workers prepared to adapt to the ever-changing world and economic collapses and fewer technological advances and and and. Focusing on these other things, these things we can’t measure or see the results of immediately, is risky. We might “waste” some resources. But sometimes, what seems like a “waste” today turns out to be life-changing, society-changing, at a point in the unknowable future. And the really sad thing is that if we don’t invest in these “wastes,” we’ll never even know what we might be missing.



{February 4, 2011}   Facebook Security

Robin pointed out an article about Facebook security today that made me think about some things that everyone who browses the web should know about but which the article unfortunately neglects to discuss.  The article is about the fact that, until today, Facebook has been available only through the hypertext transfer protocol (“HTTP”) and not through the encrypted hypertext transfer protocol secure (“HTTPS”).  That sounds a bit technical and boring but if you ever use Facebook on an open wireless network (in a cybercafe, for example), you probably want to pay attention to this particular issue.  If you don’t care about the details of how this works, at least read the next-to-last paragraph, where I explain all the steps (including one not mentioned in the original article) to keep yourself secure when using Facebook.

When you use your browser (Internet Explorer or Firefox are two of many, many examples) to browse the web, you are making connections from your computer to computers all over the world.  That is, when you put an address in the address box or you click a link on a page, you are sending a message from your computer to a computer out on the Internet, requesting some sort of service.  These computers all over the Internet come from many different hardware manufacturers and run many different operating systems.  To make sure that your computer can communicate with that computer out on the Internet, your browser must specify the protocol to use.  A protocol is simply a set of rules that specify a kind of language that the two computers agree to communicate in.  HTTP is one of these sets of rules while HTTPS is a different set of rules.  The difference between these two protocols has to do with security.  If your computer communicates using HTTP, every request for service is sent as plain text which means that if someone can listen to your request (by grabbing your messages from the wireless network, for example), that request can be read.  If, on the other hand, your computer communicates using HTTPS, your request is encrypted which means that someone listening to your request (other than the computer that you’re making the request of) will hear gibberish.
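
To make that concrete, here is a sketch of roughly what a login request looks like as it crosses the network over plain HTTP (the field names are hypothetical, but the plain-text nature is the point):

```python
# What an eavesdropper on an open network sees. Over plain HTTP, the
# request is literally these bytes on the wire (field names hypothetical):

body = "email=me%40example.com&pass=hunter2"
request = (
    "POST /login.php HTTP/1.1\r\n"
    "Host: www.facebook.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)
print(request)  # username and password in perfectly readable text

# Over HTTPS, the packet carrying this same request is ciphertext:
# without the session key, a captured copy is just gibberish bytes.
```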

What do protocols have to do with you and Facebook?  Up until today, Facebook has only allowed communication to occur in plain text.  So if someone on the same wireless network as you listened in on your communication with the Facebook computers, they would be able to read everything that you sent, including your username and password.  So anytime you used a wireless network in a cybercafe to check your Facebook account, anyone else within that cafe (who had a bit of technical skill) would be able to capture your username and password.  This vulnerability is nothing unusual within computing circles.  And the fact that Facebook has ignored it until now is pretty unconscionable.  A Seattle programmer named Eric Butler decided to push the issue and created a browser extension called Firesheep that made it extremely easy for anyone to capture unencrypted HTTP traffic on public networks.  In response, Facebook has finally allowed HTTPS (encrypted) communication to its computers.

There are two things you need to do in order to use Facebook securely.  First, you need to change your account settings within Facebook.  The original article that Robin posted explains how to do this.  Go to Account Settings (under the Account menu in the upper right corner) and scroll down to the third-to-last item in the list, which is called Account Security.  Choose change and check the box that says “Browse Facebook on a secure connection (https) whenever possible.”  But it is really important that you also take a second step in order to be secure when you are browsing on an open network.  Up until today, whenever any of us has started to communicate with Facebook’s computers, we have typed in (or clicked a link to) the following address: http://www.facebook.com  Notice the letters before the colon: HTTP.  We begin our communication with Facebook’s computers in an insecure way.  We then enter our usernames and passwords in an insecure way.  When Facebook then realizes that this is an account that has requested secure communication, it switches the two computers over to communicating via HTTPS.  The problem is that we have already sent our username and password in an insecure way.  So the second step you have to take is that when you type in Facebook’s address, you MUST type: https://www.facebook.com so that the communication begins securely.  This second step is the one that the original article neglects to mention.

I set up my account to communicate securely with Facebook whenever possible.  Unfortunately, many applications on Facebook cannot use a secure connection.  That is, every time I play Scrabble or Go, for example, I have to change to an insecure connection.  So for now, I’m leaving my settings so that I communicate via HTTP rather than HTTPS.  I guess I’ll just have to remember to change my security settings before I leave home to use any computer (including my own) on an open public network.  That’s my only option because I’m definitely not going to stop playing my games.



{December 27, 2010}   Popular Culture and TIA

I just finished watching the five episodes of the BBC miniseries The Last Enemy.  Ann had recommended it because it is about computers and privacy and also because Benedict Cumberbatch (of recent Sherlock Holmes fame) is the star.  I mostly liked the series but there were a couple of things that really bothered me about it.

The plot begins when Stephen Ezard (played by Cumberbatch) returns home to England after living in China for four years.  He’s coming home to attend the funeral of his brother Michael, an aid worker who was killed in a mine explosion in some Middle Eastern desert.  Ezard is a mathematical genius who went to China to be able to work without all the distractions of life in England.  He is a germaphobe (at least in the first episode–that particular personality trait disappears once the plot no longer needs it) who is horrified by the SARS-like infections that seem to be running rampant on the plane and throughout London.  After his brother’s funeral, Stephen goes to Michael’s apartment and discovers that Michael was married to a woman who was not at the funeral and who appears to be in hiding.  She’s a doctor who is taking care of a woman who is dying from some SARS-like infection–and that woman is in Michael’s apartment.  Despite his germaphobia, Stephen immediately has sex (in this germ-infected apartment) with his brother’s widow.

Meanwhile, Stephen’s ex-girlfriend is an MP who is trying to push through legislation that would allow the use of a program called Total Information Awareness (TIA).  TIA is already largely in place but the people of England are not happy about it.  So Ezard is recruited as a “famous” apolitical mathematician who will look at the program and sell it to the public.  What is TIA?  It’s a big database that collects all kinds of electronic information.  Every credit card purchase, building entry with an id card, video from street cameras, and so on is stored in this database.  The idea is that by sifting through this information, looking for certain patterns, English authorities will be able to find terrorists before they strike.  The interesting thing about this idea is that it isn’t fiction.   In 2002, the US government created the Information Awareness Office in an attempt to create a TIA system.  The project was defunded in 2003 because of the public outcry.  At the time, I was concerned about the project both as a citizen with rights that would potentially be threatened and as a computer scientist critical of the idea that we could actually find the patterns necessary to stop terrorism.

This is where the plot of The Last Enemy became problematic for me.  Michael’s widow, Yassim, who is now Stephen’s lover, disappears.  Stephen takes the job as spokesperson for TIA primarily so he’ll have access to a system that will allow him to track Yassim.  We see many scenes of him sitting for hours and hours wading through data with the help of the TIA computer system.  At one point, he tracks the car that Yassim had been riding in by looking for video footage taken by street surveillance cameras and finding the license plate of the car in the video.  This is completely unrealistic and one of the main reasons that, with our current technology, a TIA system will never work.  We don’t yet have the tools to wade through the massive amounts of irrelevant data to find only the data we’re interested in.  And when that data comes in the form of photos or video, we don’t really have quick, efficient electronic means of searching the visual data for useful information.  Since so much of the plot of The Last Enemy hinges on Stephen finding these “needles in a haystack” in a timely manner, I had a difficult time suspending my disbelief.  The problem is that it is very difficult to find relevant information in the midst of huge amounts of irrelevant information.  Making this kind of meaning is one of the open problems of current information technology research.
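
Some rough arithmetic shows the scale of the problem. All of the numbers below are assumptions I’ve made up for illustration, but even under generous ones, the search Stephen performs in an afternoon is implausible:

```python
# Back-of-envelope arithmetic (all numbers assumed for illustration):
# even a modest slice of London's camera network produces far more video
# than anyone could plausibly brute-force for one license plate.

cameras = 10_000          # a fraction of London's CCTV cameras
hours_of_interest = 24    # one day's window for the search
frames_per_second = 25    # PAL video

frames = cameras * hours_of_interest * 3600 * frames_per_second
print(f"Frames to examine: {frames:,}")  # 21,600,000,000

# At an (optimistic, for the era) 100 frames per second of automated
# plate recognition on one machine:
seconds = frames / 100
print(f"Machine-time required: {seconds / 86_400 / 365:.1f} years")  # ~6.8
```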

The second major problem that I had with the plot of this series has to do with Stephen as a brilliant mathematician and computer expert not understanding that his electronic tracks within the system would be easy to follow.  He makes no attempt to cover those tracks and so as soon as he logs off, his pursuers log on behind him and look at everything he looked at.  And many major plot points hinge on his pursuers knowing what he knows.  He doesn’t even take minimal steps to cover his tracks and then he seems surprised that others have followed him.  This is completely unrealistic if he really is the brilliant computer expert he would need to be in order for the government to hire him in this capacity.

I won’t ruin the surprises of the rest of the plot of this series.  But I will say that much of the premise seems pretty realistic to me, as though we’re not too far off from some of these issues coming up for consideration soon.  For that reason, I recommend the series, despite the problems I saw and despite the unbelievable melodrama that arises as a result of Stephen’s relationship with his brother’s widow.  There is a particularly laughable scene between the two of them when she tries to teach him how to draw blood by allowing him to practice on her.  It’s supposed to be erotic, which is weird enough given the danger they’re in at that point, but the dialog is so bad that I laughed out loud.  Despite these problems, the series explores enough interesting questions that I kept watching, wanting to know how the ethical questions would be resolved.



{November 11, 2010}   Google and Privacy

A story about Google and privacy on NPR last week caught my attention because it seemed so strange.  And now that I know what the real story is, it still seems really strange to me.

Google Maps’ Street View function is very cool.  It provides street-level camera views of many locations.  In Boston, for example, you could type “Prudential Center” in the Google Maps tool, choose “Street View” and then stand virtually in front of the Prudential Center and look around, as though you were actually standing at that spot.  You can then (virtually) move in any direction along the street, as though you were traveling in a car.  I’ve used the function before visiting new places, trying to find new addresses, to get a sense of what I’ll see when I’m actually there.

To create these street-level views, Google sends people in cars to drive around, videotaping the view at various locations.  To facilitate the coordination of the video with actual addresses, the people in the car use mobile computing technology to gather GPS information that is then attached to the video.  The software that Google used in this project apparently had a feature that captured other kinds of data from the airwaves in addition to the data needed to create the street views.  In particular, this software sniffed out unsecured wireless networks and captured data such as email addresses, passwords, and IP addresses.  After denying that it was capturing such data, Google finally admitted that it was “inadvertently” capturing it but that the data was never used for any purpose.  The data capture was inadvertent because the company was using software that had been developed for other purposes and simply didn’t realize this capability remained intact.

In Britain, such data capture is illegal.  So the story I heard was about the British government deciding whether to fine Google for the “data breach” or not.  Instead of fining Google, the British government sought written assurance from Google that they would not engage in such practices again.  In addition, the government would like to conduct an audit of Google’s data protection practices.  And that, apparently, will be the end of the incident.

I think there are two interesting parts to this story that have not been discussed. 

First, there are a ton of wireless networks that are unsecured.  What this means is that people set up a wireless network in their house or their business and they don’t encrypt the data that is sent via that network.  So all information that is sent on the network can be read by anyone.  If you put in a password, it is transmitted in plain text, so anyone with a sniffer (another type of readily available program; that’s another post) can read it.  If you put in your bank account number, it is transmitted in plain text and anyone with a sniffer can read it.  In other words, it is a really bad idea to set up an unsecured, unencrypted wireless network.  When you buy a wireless router, the setup instructions make it pretty easy to set up a secure, encrypted network.  But many people choose not to.  I’m not sure why.  Of course, it still makes sense to me that it would be illegal to gather private information from unsecured networks.  If someone doesn’t lock the door to their apartment, we still think it’s a crime for someone to steal things out of that apartment.  It’s the same situation with an unsecured wireless network.

The second thing that I think is interesting about this story is the fact that Google’s software contained functionality left over from some previous project that was unrelated to the current project.  This might not seem like a big deal but I’ve seen this in other pieces of software and it is indeed a big deal.  A few years ago, Microsoft’s Excel was a hog, using huge amounts of memory and CPU time, far beyond what you would expect given its functionality.  I discovered (via the Internet, of course) that the Microsoft programmers had inserted a huge chunk of Microsoft’s Flight Simulator into the Excel code.  So if you pressed a bizarre sequence of keys while you were in Excel, you would suddenly find yourself flying a simulated plane, with some of the most realistic graphics available at the time.  This is called an “Easter egg.”  And here are some instructions for how to get to the Flight Simulator from within Excel. (By the way, I was unable to get this to work on Vista but you can go to Wikipedia to find some documentation of various Easter eggs in Microsoft products.)  It was a cool discovery.  Most Excel users never knew this functionality existed.  And it shouldn’t have existed because it was completely unrelated to spreadsheets.  It was (probably) the major reason that Excel was bloated, taking more memory and CPU time than necessary.
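
Here’s a toy sketch of how this kind of leftover functionality happens (the names and structure are entirely hypothetical, not Google’s actual code): a module written for one project carries a capability whose default no one revisits when the module is reused.

```python
# Toy sketch (hypothetical names, not Google's actual code) of leftover
# functionality surviving reuse: a module written for an earlier Wi-Fi
# survey project logged raw payloads, and the new project reused it
# without ever turning that capability off.

class WifiScanner:
    def __init__(self, log_payloads=True):  # old default, never revisited
        self.log_payloads = log_payloads

    def record(self, frame):
        entry = {"ssid": frame["ssid"], "gps": frame["gps"]}  # all Street View needed
        if self.log_payloads:                    # leftover from the old project
            entry["payload"] = frame["payload"]  # emails, passwords, etc.
        return entry

# The new project only wanted network names and coordinates, but because
# the default was never changed, payload capture came along for the ride.
scanner = WifiScanner()
print(scanner.record({"ssid": "homenet", "gps": (51.5, -0.1),
                      "payload": "user=me&pass=secret"}))
```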

So although the story about Google’s privacy breaches is strange, it contains a couple of lessons for the average computer user as well as for software developers.  Average user–secure your wireless network!  Software developer–resist the temptation to play around as you develop your software.



{September 23, 2010}   New Definition of “Friend”

One of the ways that I first knew that Facebook was having a major impact on our society was that I heard my friends in the real world, many of whom are English professors, using the word “friend” as a verb.  Before Facebook, “friend” was a noun.  Before Facebook, the verb form of “friend” was “befriend.”  But now, it is common to use “friend” as a verb, as in “He wants to friend me” or “She friended me.”  Of course, this use of the word refers to the creation of a symmetrical relationship between two Facebook accounts in which each acknowledges the relationship in a way that allows the owner of each account to see the content posted by the owner of the other account.  At least, that has been how we Facebook users have used the word from 2004 (when Facebook was founded) until this week.

And that’s because Facebook is once again changing the definition of the word.  Until this week, when someone made a request to be my friend, that would appear on my Facebook page with two options.  I could either accept this friend request or I could ignore it.  I’m not sure why I wasn’t able to outright REJECT such requests but ignoring them certainly appealed to my ever-shrinking nice side.  In any case, in anticipation of the new Facebook movie (The Social Network) and the “real” Facebook movie (Catfish), Facebook has made a change.  We no longer get the options of accepting or ignoring friend requests.  Instead, we can either accept the friend request or we can say “Not Now.”

So what does “Not Now” mean?  When you click “Not Now,” you are putting that particular friend request into a pending state, indicating that you want to deal with it later.  While this friend request is in the pending state, the person who did the requesting, when looking at your profile, will see the “Awaiting Friend Confirmation” message that they would have previously seen before you dealt with their request at all.  In other words, they will have no idea that you have put them into this pending state. 

Meanwhile, if you look at the right side of your main Facebook page and scroll down, you’ll see a “Requests” section and the friend request will appear there.  If you then click on it, you will be given the option at that point to either confirm the friend request or delete it.  By the way, THIS is how you really say you don’t want to be friends with someone.

But there are some other important points to keep in mind.  First, remember that you have to pay attention to your privacy settings.  For example, I make the majority of my information available to “Friends Only,” which means that only my friends can see my information.  Another of the options is that “Everyone” can see your information.  If that is the choice you have made, you might be interested in this new change made by Facebook concerning friend requests.  If you have some of your settings set to “Everyone,” then anyone whose friend request you have answered with “Not Now” and have not yet deleted from your Requests menu will get your status updates in their News Feed.  As though they had been approved as your friend.  Even though you have put them into this “pending” status.

So I think there are a couple of important things to pay attention to here.  The first is that “Everyone” is always a dangerous setting for privacy.  So think carefully about whether you want something to be set to “Everyone.”  The only things I have set to “Everyone” are “Send me Friend Requests” and “Send me messages.”  In other words, everyone can request to be my friend.  And everyone can send me a message.  I set this to “Everyone” because I wanted people who were requesting to be my friends to send me a message about why I should accept their friend request.  But since I don’t have “Search for me on Facebook” set to “Everyone,” I feel pretty safe here.  I have that set to “Friends and Networks.”

Now that the logistics of these settings are out of the way, it might be interesting to consider why Facebook would be making these changes.  Why would Facebook be changing the way friend requests work?  I think Facebook wants to change the way we think about the word “friend” so that we will be prepared for some additional changes in the future.  Currently, I think most people think of a “friend” relationship as a reciprocal relationship, a two-way relationship between two people.  By allowing this “pending” state for friends, Facebook is trying to get us to believe that friendship may not be reciprocal, may not be two-way.  If you put someone in this pending state (and you haven’t set your privacy settings correctly), then they will have things put in their News Feed about you that “non-friends” won’t have.
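
In data-modeling terms, the shift is from a symmetric edge to an asymmetric one. Here is a toy model (my own, hypothetical; certainly not Facebook’s actual implementation) of how a one-way pending edge plus a loose “Everyone” setting delivers updates to someone you never confirmed:

```python
# Toy model (hypothetical, not Facebook's actual data model): classic
# friendship is a mutual edge, but a "Not Now" pending request is a
# one-way edge that, under an "Everyone" setting, still delivers your
# updates to the requester.

friends = {("alice", "bob")}    # mutual: both sides confirmed
pending = {("carol", "alice")}  # carol asked alice; alice said "Not Now"

def sees_updates(viewer, poster, poster_setting="Friends Only"):
    mutual = (viewer, poster) in friends or (poster, viewer) in friends
    if mutual:
        return True
    if poster_setting == "Everyone":
        # pending requesters (and anyone else) get the updates anyway
        return True
    return False

print(sees_updates("bob", "alice"))                                # True
print(sees_updates("carol", "alice"))                              # False
print(sees_updates("carol", "alice", poster_setting="Everyone"))   # True: the leak
```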

Why would Facebook want to change the definition of “friend”?  I think it’s all about money.  More specifically, I think it’s all about advertising.  I think Facebook is trying to push the envelope in terms of the definition of “friend” so that we increasingly accept things from our “friends” (even those in a pending status) as somehow more valid than “real” advertising.  Somehow Facebook will make money from our acceptance of non-friends as friends of some type, even if that type is “pending.”  Facebook doesn’t want us to think too much about this.  They just want us to accept.  Or at least say “Not Now.”



{August 30, 2010}   Facebook Places

Here’s the status update of one of my friends on Facebook today (August 29): “IMPORTANT!!!   Facebook launched Facebook Places yesterday. Anyone can find out where you are when you are logged in. It gives the actual address & map location of where you are as you use Facebook. Make sure your kids know.  Go to ‘Account’, ‘Account Settings’, ‘Notifications’, then scroll down to ‘Places’ and uncheck the… 2 boxes. Make sure to SAVE changes and re-post this!”

I had heard something about this particular feature but, to be honest, until I saw this status update, I really hadn’t paid much attention to it.  But this status update felt so dire that I decided I really needed to check out what this feature is all about.  It turns out that this feature was released on August 18, nearly two weeks ago.

I checked the help section of Facebook and found that Places is a “feature that allows you to see where your friends are and share your location in the real world. When you use Places, you’ll be able to see if any of your friends are currently checked in nearby and connect with them easily. You can check into nearby Places to tell your friends where you are, tag your friends in the Places you visit, and view comments your friends have made about the Places you visit.”  So it seems that Facebook is trying to move its network into the real world in a new way.  In fact, they tell us that we can “Use Places to experience connecting with people on Facebook in a completely new way.”  They seem to see Places as a way to connect the real with the online in a way that hasn’t really been possible in the past.

Like many changes to the way Facebook works, this particular feature has raised privacy concerns.  People have worried that this feature can be used to track a Facebook user’s movements.  I think this is a valid concern but it’s one that is easily ameliorated.  The Places feature is currently only available to those users in the United States who access Facebook via their iPhone or via touch.Facebook.com, which is Facebook’s website for touchscreen mobile devices.  Although I haven’t been able to confirm this, I think your mobile device would need to have GPS capabilities so those of us who use the iPod Touch don’t need to worry about this feature (at least, not yet).

Some of the privacy concerns seem to be a bit misplaced, however.  Although I haven’t checked it out, Facebook assures us that no user’s location would be shared unless that user “checks in” with their location.  In other words, the feature requires active participation on the part of the user.  Which is a good thing, it seems to me.  No location sharing happens without the user explicitly allowing it.  So maybe the feature isn’t as dangerous as my friend’s status update would lead us to believe.
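
Here is a toy sketch of the opt-in design as I understand it (hypothetical code, not Facebook’s implementation): location data simply doesn’t exist in the system until the user performs an explicit check-in.

```python
# Toy sketch (hypothetical, not Facebook's implementation) of the opt-in
# design described above: a user's location exists only after an explicit
# check-in, so there is nothing for others to see by default.

locations = {}  # user -> last checked-in place

def check_in(user, place):
    """Explicit, user-initiated action: the only way location data appears."""
    locations[user] = place

def where_is(user):
    return locations.get(user, "no location shared")

print(where_is("cathie"))        # "no location shared" (the default)
check_in("cathie", "Prudential Center, Boston")
print(where_is("cathie"))        # visible only after the explicit opt-in
```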

In my opinion, privacy is about choice.  Privacy is not necessarily about secrecy.  Instead, it’s about giving the owner of information the choice as to whether and with whom she will share that information.  Although Facebook has made some problematic privacy decisions in the past, from what I can see so far, the Places feature does not jeopardize the privacy of Facebook users.  I don’t quite understand yet the feature where your friends can tag you at a location so perhaps that’s an area of concern.  I’ll be curious about whether anyone else knows more about that.

Regarding the instructions given in my friend’s status update that I reference in the first paragraph of this post–those instructions are about notifications.  They specify whether you will be notified if someone tags you at a place.  If you uncheck the box (as the instructions tell you to do), you will not be notified of such a tagging.  Unchecking the box does not prevent someone from tagging you.  So I think you probably don’t want to follow those instructions–especially if you are concerned about the information that is out there about you.


