Making Drones Civic

Link to Conference Paper

graeff-matias_makingdronescivic_isa2015

Abstract

Can drones be fully accepted as civic technologies? Are there values embodied by drones that undermine their ability to perform in a civic capacity? What design principles might make drones more civic? Where does responsibility lie between civil society actors, drone designers, and policymakers in pursuing this goal while balancing privacy, security, and innovation? Although drones have several proposed civic use cases, particularly involving practices described as monitorial citizenship, drones are different from other civic technologies. Civic technologies are about shifting power away from corrupt actors and toward virtuous ones, and a motivating concept and ethic for civic technologies, whether they are used for interacting with governments or against them, is participatory practice. If we aspire to a definition of civic action that is fundamentally participatory, and we hope for our civic technologies to embody that value of participatory practice, we must investigate whether drones can be fully accepted as civic technologies. This paper addresses these questions and issues, problematizing the use of drones for civic purposes by defining a set of values and design principles for civic technologies, by showing where drones may play a role, and by situating contemporary cases among relevant political and ethical questions.

Challenges for Personal Behavior Change Research on Information Diversity

Link

http://personalizedchange.weebly.com/1/post/2014/03/challenges-for-personal-behavior-change-research-on-information-diversity.html

Abstract

Researchers have tested a variety of personal informatics systems to encourage diversity in the political leaning, geography, and demographics of information sources, often with a belief in the normative value of exposure to diverse information sources. Methods attempted include information labeling of media sources, personalized metrics of reading behavior, personalized visualization of social media behavior, recommendation systems, and social introductions. Although some of these systems demonstrate positive results for the metrics they define, substantial questions remain about the interpretation of these results and their implications for future design. We identify challenges in defining normative values of diversity, potential algorithmic exclusion of some groups, and the role of personal tracking as surveillance. Furthermore, we outline challenges in evaluating systems and defining meaningful social impact for information diversity systems operating at scale.

What We Should Do Before the Social Bots Take Over

Link

http://web.mit.edu/comm-forum/mit8/papers/Graeff-SocialBotsPrivacy-MIT8.pdf

Award

Won the 2014 Benjamin Siegel Prize in Science, Technology, and Society at MIT

Abstract

Direct interactions between humans and bots generally conjure up images from science fiction of Terminator robots or artificial intelligence gone rogue, like 2001's HAL or The Matrix. In reality, AI is still far from that sophistication, yet we already face the ethical and legal ramifications of bots in our everyday lives. Drones are being used for collecting military intelligence and for bombing runs. U.S. states have passed laws to address self-driving cars on public roads. And nearer the subject of this paper, the legality of search engine bots has been openly questioned on grounds of intellectual property protection and trespassing. Bots inspire fear because they represent a loss of control. These fears are in some ways justified, particularly on grounds of privacy invasion. Online privacy protection is already a fraught space, comprising varied and strongly held positions, and existing laws and regulations have been rendered antiquated many times over by the rapid growth and innovation of the internet in recent decades. The emergence of social bots, as means of entertainment, research, and commercial activity, poses an additional complication to online privacy protection by way of information asymmetry and failures to provide informed consent. In the U.S., the lack of an explicit right to privacy and the federal government's predilection for laissez-faire corporate regulation expose users to a risk of privacy invasion and unfair treatment when they provide personal data to websites and online services, especially those in the form of social bots. This paper argues for legislation that defines a general right to privacy for all U.S. citizens, addressing issues of both access to and control of personal information and serving as the foundation for auditable industry design standards that inherently value and honor users' rights to privacy.

Slides

https://www.scribd.com/embeds/163387938/content?start_page=1&view_mode=scroll&access_key=key-2bgbkx2arvndxy7mv916&show_recommendations=true