Controversy Mapper

Research Assistant at the MIT Center for Civic Media, in partnership with the Berkman Center for Internet & Society at Harvard, studying how a major media controversy changes over time and through the involvement of different actors in its media ecosystem. December 2009 – March 2012.

Website

Controversy Mapper at civic.mit.edu

Details of Work

  • Lead-authored a case study of the Trayvon Martin controversy in spring 2012
  • Advanced the Controversy Mapper network research methodology, using the HITS algorithm to score the authority of media sources (see the sketch after this list)
  • Normalized and visualized multiple disparate sources of media content along a time series to chart the ebb and flow of a story
  • Presented findings in multiple venues
  • Prepared slides for the PI's presentations of the findings on multiple occasions
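
The HITS scoring mentioned above works by iteratively assigning each source a hub score (how well it points to authorities) and an authority score (how much it is pointed to by good hubs). Here is a minimal sketch in Ruby; the link graph below is invented for illustration, not project data:

    # Hypothetical link graph: source => list of sources it links to.
    links = {
      "blog_a" => ["nyt", "cnn"],
      "blog_b" => ["nyt", "blog_a"],
      "cnn"    => ["nyt"],
      "nyt"    => []
    }

    nodes = links.keys
    hubs  = Hash[nodes.map { |n| [n, 1.0] }]
    auths = Hash[nodes.map { |n| [n, 1.0] }]

    20.times do
      # Authority: sum of the hub scores of the sources linking to you.
      auths = Hash[nodes.map { |n|
        [n, links.sum { |src, dsts| dsts.include?(n) ? hubs[src] : 0.0 }]
      }]
      # Hub: sum of the authority scores of the sources you link to.
      hubs = Hash[nodes.map { |n| [n, links[n].sum { |dst| auths[dst] }] }]
      # Normalize so scores converge instead of growing without bound.
      [auths, hubs].each do |scores|
        norm = Math.sqrt(scores.values.sum { |v| v * v })
        scores.each_key { |k| scores[k] /= norm } if norm > 0
      end
    end

    # Print sources from most to least authoritative.
    auths.sort_by { |_, a| -a }.each { |n, a| puts format("%-8s %.3f", n, a) }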

Web Ecology Project Twitter Scraper and Database Schema

Rewrote the Ruby Twitter scraper to be more efficient, and designed and implemented a normalized database schema for storing tweets on the Web Ecology Project's server.

Revision History

The original Twitter scraper prototype was coded in Perl by Ethan Zuckerman (mentioned in a blog post on Apr 13, 2009). The script used Twitter's URL-based search API (since changed) to scrape tweets matching a simple search query on a particular term, such as a hashtag. The script was ported to Ruby by Web Ecology Project member Dave Fisher, who also set up the initial database.
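
For context, that old search API took the query as URL parameters and returned JSON. A minimal sketch of such a scraper in Ruby; the endpoint has long since been retired, so this is illustrative only:

    require "net/http"
    require "json"
    require "uri"

    # Fetch one page of results from Twitter's old public search API
    # (retired); "rpp" was results-per-page, capped at 100.
    def scrape(term, page = 1)
      uri = URI("http://search.twitter.com/search.json")
      uri.query = URI.encode_www_form("q" => term, "page" => page, "rpp" => 100)
      JSON.parse(Net::HTTP.get(uri))["results"] || []
    end

    scrape("#iranelection").each do |tweet|
      puts "#{tweet['from_user']}: #{tweet['text']}"
    end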

I rewrote Dave’s code to make the scraper more efficient both in the initial scraping of tweets and in writing to the database. I also designed a database schema that organized the various metadata attached to each tweet into specific tables and columns that could be indexed for faster and easier queries across the Web Ecology Project’s growing dataset.
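
To show the shape of such a normalized layout, here is a sketch using the sqlite3 gem; the table and column names are examples of mine, not the project's actual schema:

    require "sqlite3"

    db = SQLite3::Database.new("tweets_example.db")
    db.execute_batch <<~SQL
      -- Users are stored once and referenced by id, instead of
      -- repeating screen names on every tweet row.
      CREATE TABLE IF NOT EXISTS users (
        id          INTEGER PRIMARY KEY,
        screen_name TEXT NOT NULL UNIQUE
      );

      CREATE TABLE IF NOT EXISTS tweets (
        id         INTEGER PRIMARY KEY,   -- the tweet's status id
        user_id    INTEGER NOT NULL REFERENCES users(id),
        text       TEXT    NOT NULL,
        created_at TEXT    NOT NULL       -- ISO 8601 timestamp
      );

      -- One row per (tweet, hashtag) pair, so a tag can be queried
      -- across the whole dataset without parsing tweet text.
      CREATE TABLE IF NOT EXISTS hashtags (
        tweet_id INTEGER NOT NULL REFERENCES tweets(id),
        tag      TEXT    NOT NULL
      );

      -- Indexes on the columns most queries filter or join on.
      CREATE INDEX IF NOT EXISTS idx_tweets_user    ON tweets(user_id);
      CREATE INDEX IF NOT EXISTS idx_tweets_created ON tweets(created_at);
      CREATE INDEX IF NOT EXISTS idx_hashtags_tag   ON hashtags(tag);
    SQL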

Use

My code was used to collect the tweets analyzed in two studies I co-authored, “Detecting Sadness in 140 Characters” and “Afghanistan and its Election on Twitter,” and in another study authored by my Web Ecology Project colleagues, “The Influentials.”

Aptivate’s Web Design Guidelines for Low Bandwidth

Summer 2008 internship at Aptivate, an international IT development firm.

Overview

My primary project for the summer was re-deploying Aptivate’s online manual, Web Design Guidelines for Low Bandwidth. I updated the Guidelines’ content and rewrote the HTML and CSS to be cleaner and more efficient. Simultaneously, I maintained the Guidelines’ bug tracker, developed a marketing plan for the re-deployed Guidelines, and outlined a possible book version.

Secondarily, I worked on the CSS for a website for Aptivate’s client Helvetas.

End of Internship Presentation

http://www.scribd.com/embeds/76918474/content?start_page=1&view_mode=slideshow&access_key=key-1nvrgi44nm6ji2urkfex&secret_password=2iq8m8wt2apvqvhihoig

RIT Information Technology Honors Thesis

I won an undergraduate research grant to build a prototype of a web platform in Ruby on Rails that would enable users to add personal annotations to government documents and share them with other users. I continued work on the prototype and wrote up the documentation in spring 2006 for my honors thesis.

Documentation

graeff-2006-honorsthesisdocumentation

Abstract

This undergraduate Honors capstone project involves the creation of a novel web application called .GOVernator. The purpose of this social software tool is to allow users to “markup” government documents, like the Bill of Rights, with XML tags using an interface that does not require in-depth knowledge of XML. The application is programmed using the Ruby on Rails framework with JavaScript and its implementation in Asynchronous JavaScript and XML (AJAX), XHTML, CSS, XML, and XSLT. The resulting program is a prototype that allows users to create an account via registration, navigate their own home page where they can select government documents to mark up (a.k.a. “scrutinize”), and then use a browser-based interface for adding XML tags to government documents, storing tag names to a database for later analysis. All program functionality, except for the critical markup functionality, is in place. Help documents are also still needed to guide the user through interaction with the application’s interface. The future of the project includes further development (programming) of the .GOVernator web application, a case study involving 10-30 users, and a proposal to the Lab for Social Computing to adopt the project for the school year following the publication of this paper.
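
For a sense of how the tag-storing piece could look in Rails of that era, here is a hypothetical sketch; the class, column, and parameter names are mine, not from the thesis:

    # Hypothetical sketch of persisting user-applied tags; names are
    # illustrative, not taken from the .GOVernator codebase.
    class Markup < ActiveRecord::Base
      belongs_to :user
      belongs_to :document   # the government document being scrutinized
    end

    class MarkupsController < ApplicationController
      # Called via AJAX when the user wraps a selected passage in a tag;
      # stores the tag name and span offsets for later analysis.
      def create
        Markup.create(
          :user_id     => session[:user_id],
          :document_id => params[:document_id],
          :tag_name    => params[:tag_name],
          :start_pos   => params[:start_pos],
          :end_pos     => params[:end_pos]
        )
        render :nothing => true
      end
    end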