Tweens, Cyberbullying, and Moral Reasoning

Link

http://www.emeraldinsight.com/doi/full/10.1108/S2050-206020140000008016

Abstract

Purpose

To inform policy, curricula, and future research on cyberbullying through an exploration of the moral reasoning of digitally active 10–14-year-olds (tweens) when they witness digital abuse.

Methodology/approach

I conducted interviews with 41 tweens, asking participants to react as witnesses to two hypothetical scenarios of digital abuse. Through thematic analysis of the interviews, I developed and applied a new typology for classifying “upstanders” and “bystanders” to cyberbullying.

Findings

I identified three types of upstander and five types of bystander, along with five thinking processes that led participants to react in those different ways. Upstanders were more likely than bystanders to think through a scenario using higher-order moral reasoning processes such as disinterested perspective-taking. Moral reasoning, emotions, and contextual factors, as well as participant gender and home school district, all appeared to play a role in how participants responded to the cyberbullying scenarios.

Research limitations/implications

Hypothetical scenarios posed in interviews cannot substitute for case studies of real events, but this qualitative analysis has produced a framework for classifying upstanding and bystanding behavior that can inform future studies and approaches to digital ethics education.

Originality

This study contributes to the literature on cyberbullying and moral reasoning through in-depth interviews with tweens that record the complexity and context-dependency of thinking processes like perspective-taking among an understudied but critical age group.

Governing the Ungovernable

Announcement

http://www.tprcweb.com/tprc42-friday-panel-sessions/

Description

Laws, norms, policies, and institutions have failed to keep up with advances in artificial intelligence. Popularly, we still discuss the governance of these systems using quotes and metaphors from science fiction authors. Public awareness of the sophistication and capabilities of current systems is also skewed, often toward extremes: predicting robot warfare and mind control, or reflecting complete naivete.

The reality is that intelligent systems are embedded in more and more everyday products and services. The so-called “internet of things” represents a kind of ubiquitous computing that anticipates our needs, providing us with information or adjusting the room temperature based on usage patterns. Increasingly sophisticated algorithms power seemingly neutral services like Google’s search engine and Facebook’s news feed.

This panel brings together domain experts researching the impact of intelligent systems in arenas including household products, civics, and cyberwarfare. The panel will explore gaps in our existing regulatory framework for these technologies, identify challenges common to deploying intelligent systems across a broad range of contexts, and suggest a common set of research goals to advance effective governance, mapping out the role different constituencies can play in this effort.

Challenges for Personal Behavior Change Research on Information Diversity

Link

http://personalizedchange.weebly.com/1/post/2014/03/challenges-for-personal-behavior-change-research-on-information-diversity.html

Abstract

Researchers have tested a variety of personal informatics systems designed to encourage diversity in the political leaning, geography, and demographics of information sources, often out of a belief in the normative value of exposure to diverse information. Methods have included information labeling of media sources, personalized metrics of reading behavior, personalized visualizations of social media behavior, recommendation systems, and social introductions. Although some of these systems demonstrate positive results on the metrics they define, substantial questions remain about how to interpret those results and what they imply for future design. We identify challenges in defining normative values of diversity, the potential for algorithmic exclusion of some groups, and the role of personal tracking as surveillance. Furthermore, we outline challenges for evaluating such systems and for defining meaningful social impact when information diversity systems operate at scale.