47th Symposium on International Relations
Social Media: Global Impact on Political Engagement, Youth & Privacy
Sponsored by the League of Women Voters of Connecticut Education Fund, Inc.
Co-sponsored by PIER and the Councils of African and Middle East Studies at The Whitney and Betty MacMillan Center for International and Area Studies at Yale and the Yale Peabody Museum of Natural History.
Pat Sabosik, President, Elm City Consulting—advises companies on new digital strategies
Carolyn A. Lin, Professor, University of Connecticut Dept. of Communication (Impact on Political Engagement)
Erhardt Graeff, Fellow, Berkman Center for Internet and Society, Harvard (Impact on Youth)
Lauren Henry Scholz, Postdoctoral Associate in Law, Information Society Project, Yale Law School (Impact on Privacy)
Nancy Ruther, Visiting Fellow, the Whitney and Betty MacMillan Center for International and Area Studies
Laws, norms, policies, and institutions have failed to keep up with advances in artificial intelligence. Popularly, we still think of governance of these systems using quotes and metaphors from science fiction authors. Public awareness of the sophistication and capabilities of current systems is also skewed, often toward extremes: predicting robot warfare and mind control, or suffering from complete naivete.
The reality is that intelligent systems are embedded in more and more everyday products and services. The so-called “internet of things” represents a kind of ubiquitous computing that anticipates our needs and provides us information or adjusts the room temperature based on usage patterns. Smarter algorithms power seemingly neutral services like Google’s search engine or Facebook’s news feed.
This panel brings together domain experts researching the impact of intelligent systems in a variety of arenas including household products, civics, and cyberwarfare. The panel will explore gaps in our existing framework of regulation around these technologies, identify challenges common to the deployment of different intelligent systems in a broad range of contexts, and suggest a common set of research goals to advance the cause of effective governance, mapping out the role different constituencies can play in this effort.
Won the 2014 Benjamin Siegel Prize in Science, Technology, and Society at MIT
Direct interactions between humans and bots generally conjure up images from science fiction of Terminator robots or artificial intelligence gone rogue, like 2001's HAL or The Matrix. In reality, AI is still far from that level of sophistication, yet we already face the ethical and legal ramifications of bots in our everyday lives. Drones are being used for collecting military intelligence and for bombing runs. U.S. states have passed laws to address self-driving cars on public roads. And nearer the subject of this paper, the legality of search engine bots has been openly questioned on grounds of intellectual property protection and trespassing.

Bots inspire fear because they represent a loss of control. These fears are in some ways justified, particularly on grounds of privacy invasion. Online privacy protection is already a fraught space, marked by varied and entrenched positions and by existing laws and regulations rendered antiquated many times over by the rapid growth and innovation of the internet in recent decades. The emergence of social bots, as means of entertainment, research, and commercial activity, poses an additional complication to online privacy protection by way of information asymmetry and failures to provide informed consent. In the U.S., the lack of an explicit right to privacy and the federal government's predilection for laissez-faire corporate regulation expose users to a risk of privacy invasion and unfair treatment when they provide personal data to websites and online services, especially those in the form of social bots.

This paper argues for legislation that defines a general right to privacy for all U.S. citizens, addressing issues of both access to and control of personal information and serving as the foundation for auditable industry design standards that inherently value and honor users' rights to privacy.
From SmarterChild to the Low Orbit Ion Cannon to Horse_ebooks, humans have relationships of varying quality with bots. Mostly it's commercial spam. But sometimes it's less benign: for instance, in the 2012 Mexican elections, thousands of Twitter bots deployed by one candidate's side denounced the opposition with a flood of messages. There are countless examples of bots used for nefarious purposes, in America, Iran, and elsewhere. What would a future look like where instead we see a proliferation of bots for positive civic engagement? Could we automate the distribution of civic information and education? Manipulate information flows to improve our welfare? Engineer reverse Distributed-Denial-of-Service attacks? Should we? This panel takes a critical look at the discourse around, and architecture of, information overload to facilitate an important and timely debate around the engineering, usefulness, and ethics of bots for civic engagement.