(via the MIT Center for Civic Media)
This session looks beyond platforms to explore the concept of media ecosystems. How do we understand, map, visualize, and ultimately shape the flow of texts across an increasingly diverse and complex media ecosystem? What are the relationships between professional and citizen media, and between participatory and broadcast media? How do we understand what people are encountering, both in terms of supply (tools like Media Cloud that examine what’s published) and demand (tracking/logging efforts that look at individual or group consumption)?
- Mapping Media Ecosystems at Center for Civic Media – My heart’s in Accra
- Civic Media Session: ‘Mapping Media Ecosystems’ – MIT TechTV
This article details the networked production and dissemination of news on Twitter during snapshots of the 2011 Tunisian and Egyptian Revolutions as seen through information flows—sets of near-duplicate tweets—across activists, bloggers, journalists, mainstream media outlets, and other engaged participants. We differentiate between these user types and analyze patterns of sourcing and routing information among them. We describe the symbiotic relationship between media outlets and individuals and the distinct roles particular user types appear to play. Using this analysis, we discuss how Twitter plays a key role in amplifying and spreading timely information across the globe.
- 111,741 tweets about Afghanistan and its presidential election posted between August 11, 2009 and September 9, 2009
- 11,255 tweets on August 20, 2009, the day of the election
- 29,642 users talked about Afghanistan in our dataset
- Top 10% of tweeters contributed 65% of tweets (same as Iran Election)
- Number of retweets for a user was not correlated with their tweeting volume (same as Iran Election)
- 483 hashtags were used at least 3 times
- No single, dominant hashtag (differs from Iran Election)
- 3 most used hashtags: #Afghan09, #Afghanistan, and #AfghanElection
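The “top 10% contributed 65%” figure above is a simple concentration measure over per-user tweet counts. As a sketch of how such a share can be computed (the counts below are made up for illustration, not the Afghanistan dataset):

```ruby
# Share of all tweets contributed by the most active decile of users.
# Input is an array of per-user tweet counts; values here are illustrative.
def top_decile_share(counts_per_user)
  sorted = counts_per_user.sort.reverse
  decile = [sorted.size / 10, 1].max      # at least one user in the top decile
  sorted.first(decile).sum.to_f / sorted.sum
end

# With ten users, the top decile is the single most active user:
top_decile_share([50, 20, 5, 4, 3, 3, 2, 2, 1, 1])
```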
Michael Jackson’s death created an emotional outpouring of unprecedented magnitude on Twitter. In this report, we examine 1,860,427 tweets about Jackson’s death in order to test various methods of sentiment analysis and gain insights into how people express emotion on Twitter.
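The report compared several sentiment-analysis methods; as a toy illustration only (not the report’s actual approach, and with invented word lists), the simplest baseline is a lexicon count:

```ruby
# Toy lexicon-based sentiment scorer, for illustration only.
# The report evaluated more sophisticated methods; these word lists are invented.
SAD_WORDS   = %w[sad crying rip tears miss heartbroken].freeze
HAPPY_WORDS = %w[love celebrate joy smile thank].freeze

# Returns a score: negative means sadder, positive means happier.
def sentiment_score(tweet)
  words = tweet.downcase.scan(/[a-z']+/)
  words.count { |w| HAPPY_WORDS.include?(w) } -
    words.count { |w| SAD_WORDS.include?(w) }
end
```

A baseline like this is easy to beat, which is exactly why comparing methods against 1.8 million tweets is informative.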
Rewrote the Web Ecology Project’s Ruby Twitter scraper for efficiency, and designed and implemented a normalized database schema for storing tweets on the project’s server.
The original Twitter scraper prototype was written in Perl by Ethan Zuckerman (mentioned in a blog post on Apr 13, 2009). The script used Twitter’s URL-based search API (since changed) to scrape tweets matching a simple search query on a particular term, such as a hashtag. Web Ecology Project member Dave Fisher ported the script to Ruby and set up the initial database.
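A minimal sketch of what that URL-based scrape loop looked like. The `search.twitter.com` endpoint was retired long ago, and the parameter names (`q`, `rpp`, `page`) are recalled from the old API, so treat them as assumptions rather than a working spec:

```ruby
require 'json'
require 'cgi'

# Hypothetical reconstruction of the retired URL-based search API.
OLD_SEARCH_URL = 'http://search.twitter.com/search.json'

# Build the query URL for one page of results for a search term.
def search_url(query, page: 1, per_page: 100)
  "#{OLD_SEARCH_URL}?q=#{CGI.escape(query)}&rpp=#{per_page}&page=#{page}"
end

# Parse one page of the old API's JSON body into simple tweet hashes.
def parse_results(json_body)
  JSON.parse(json_body).fetch('results', []).map do |t|
    { id: t['id'], user: t['from_user'], text: t['text'], created_at: t['created_at'] }
  end
end
```

The actual scraper would fetch `search_url(...)` page by page with `Net::HTTP` and feed each body through `parse_results` before writing to the database.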
I rewrote Dave’s code to make the scraper more efficient, both in how it handled the initial scraping of tweets and in how it wrote to the database. I also designed a database schema that organized each tweet’s metadata into specific tables and columns that could be indexed for faster and easier queries across the Web Ecology Project’s growing dataset.
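To illustrate what normalization buys here: metadata that repeats across tweets (users, hashtags) is split into its own tables and referenced by key, so each fact is stored once and can be indexed. The table and column names below are invented for this sketch, not the actual Web Ecology Project schema:

```ruby
# Hypothetical DDL for a normalized tweet store (MySQL-style inline indexes).
SCHEMA = <<~SQL
  CREATE TABLE users    (id BIGINT PRIMARY KEY, screen_name VARCHAR(64));
  CREATE TABLE tweets   (id BIGINT PRIMARY KEY, user_id BIGINT,
                         text VARCHAR(140), created_at DATETIME,
                         INDEX (user_id), INDEX (created_at));
  CREATE TABLE hashtags (tweet_id BIGINT, tag VARCHAR(140), INDEX (tag));
SQL

# Split one raw tweet hash into per-table rows ready for insertion.
def normalize(tweet)
  {
    users:    [{ id: tweet[:user_id], screen_name: tweet[:user] }],
    tweets:   [{ id: tweet[:id], user_id: tweet[:user_id],
                 text: tweet[:text], created_at: tweet[:created_at] }],
    hashtags: tweet[:text].scan(/#(\w+)/).flatten
                          .map { |tag| { tweet_id: tweet[:id], tag: tag } }
  }
end
```

Indexing `hashtags.tag` and `tweets.created_at` is what makes queries like “all tweets with #Afghan09 on election day” cheap as the dataset grows.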
My code was used to collect the tweets used in two studies I co-authored, Detecting Sadness in 140 Characters and Afghanistan and its Election on Twitter, and in another study authored by my Web Ecology Project colleagues, The Influentials.
“Our field poses two simple questions to researchers:
- ‘Where have studies about the web failed?’ and,
- ‘How can we do better?’
“The emerging field of Web Ecology is an attempt to unify contemporary research and practice under a common focus, set of principles, and general approach to promote new insights and more fruitful forms of exchange in this space. We believe that these lay the groundwork for a more vibrant, more dynamic, and more useful field of research and community of researchers.”