Articles  |  April 10, 2010

Philosophy, privacy, and pervasive computing

Article by Diane P. Michelfelder.
Published in AI & Society.


Philosophers and others concerned with the moral good of personal privacy most often see the threats to privacy raised by the development of pervasive computing as primarily threats of losing control over personal information. Two reasons in particular lend this approach plausibility. One is that the parallels between pervasive computing and ordinary networked computing, where everyday transactions over the Internet raise concerns about personal information privacy, appear stronger than their differences. Another is that the individual devices which can become linked in a pervasive computing environment (PDAs, GPS sensors, RFID chips/readers, publicly located video surveillance cameras, Internet-enabled mobile phones, and the like) each raise threats to individual privacy. Without discounting the value of this approach, this paper aims to propose an alternative and, by recasting the threat that pervasive computing poses to individual privacy, to identify other, deeper moral goods it puts at risk that might otherwise remain concealed. In particular, I argue that pervasive computing threatens to compromise what I call existential autonomy: the right to decide for ourselves at least some of the existential conditions under which we form and develop our ways of life, including our relations to information technology. From this perspective, some of the moral goods at stake in protecting privacy in an environment of pervasive computing have less to do with furthering human well-being through the promotion of self-identity and subjectivity than with stimulating curiosity, receptivity to difference, and, most broadly, openness to the world.