Researchers Examine Digital Experimentation at CODECon


When it comes to public-facing websites, issues like privacy and audience experimentation are front of mind for both development teams and policymakers. Consider the public outcry this summer over the revelation that Facebook had been experimenting with its users' emotions. Ripples from that revelation (and the accompanying research paper) are still filtering through technology, policy, and academic circles as stakeholders grapple with new realities in digital experimentation. Over the weekend, top researchers from these disciplines presented a variety of new papers at MIT's CODE Conference.

The research presented was a heady mix, ranging from how to get users to buy more virtual trinkets inside games to how to crisis-map more efficiently during natural disasters. Researchers also took on digital experimentation gospel: the A/B test. (Spoiler: it's probably not as informative or effective as your resident social media guru thinks it is.)
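To make that critique concrete, here is a minimal sketch, not drawn from any of the conference papers and purely illustrative, of one well-known way A/B tests can mislead: if an experimenter repeatedly "peeks" at the results and declares a winner as soon as the difference looks significant, the false-positive rate climbs far above the nominal 5 percent, even when the two variants are identical. All function names and parameters below are hypothetical.

    import math
    import random

    def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test; returns the two-sided p-value."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        if se == 0:
            return 1.0
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal CDF.
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    def peeking_experiment(true_rate=0.05, batch=100, checks=20, alpha=0.05):
        """Run one A/A test (both arms identical) with repeated peeking.

        Returns True if any interim look falsely declares significance.
        """
        conv_a = conv_b = n = 0
        for _ in range(checks):
            conv_a += sum(random.random() < true_rate for _ in range(batch))
            conv_b += sum(random.random() < true_rate for _ in range(batch))
            n += batch
            if z_test_two_proportions(conv_a, n, conv_b, n) < alpha:
                return True  # "winner" declared despite no real difference
        return False

    random.seed(42)
    trials = 1000
    false_positives = sum(peeking_experiment() for _ in range(trials))
    # With 20 interim looks, the false-positive rate lands well above 5%.
    print(f"False positive rate with peeking: {false_positives / trials:.1%}")

Running this A/A simulation typically flags a "winner" in well over 5 percent of trials, the kind of pitfall that may underlie the presenters' skepticism about naive A/B testing.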

Other presenters looked at ways to increase participation in online labs for mass experiments. The website VolunteerScience.com is one such outlet for online experimentation. Dr. David Lazer, principal investigator and professor of political science and computer science at Northeastern University and the developer of Volunteer Science, presented on how the platform works and how gamifying experiments can increase participation.

Volunteer Science also touches on an issue at the heart of the Facebook outrage: consent. Users on the Volunteer Science platform can try out experiments before consenting. There is also a high level of transparency throughout each experiment, along with a no-deception rule. This runs counter to much of the digital experimentation done on websites like Facebook, which rely on lengthy, vague, and often-ignored terms-of-service agreements.

Panelists weighed in on this debate during the “Fireside Chat” panel, which included Esther Dyson (EDventure), Michelle N. Meyer (Mt. Sinai), Leslie Meltzer (Univ. of Maryland), Duncan Watts (Microsoft), and Jonathan Zittrain (Harvard Univ.). The panelists were divided on what constitutes appropriate consent for experiments like the one Facebook ran on its users. Meltzer opened by arguing that the Facebook experiment violated what’s known as the Common Rule, a set of guidelines requiring institutional review of experiments involving human subjects. Other panelists were less convinced: Facebook is a private company, they noted, which creates a gray area around compliance with academic research standards. Zittrain argued that it may be worthwhile to consider a baseline scenario for private companies that would require them to agree to a few bright lines around research. As one example, he proposed a requirement that private companies not attempt to throw a public election by changing the messaging in individual news feeds.

These debates are notable for policymakers (and civic engagement folks) setting the rules of the road for private companies that now manage research teams rivaling those of many of our best educational institutions. How far is too far when it comes to private companies using their customer bases as a research pool? Some states are already supporting significant public-private research efforts in Big Data and predictive analytics, which is essentially what the Facebook experiment was (some 700,000 users were part of the emotional manipulation study). These partnerships typically examine healthcare population management, law enforcement, or cybersecurity, but defining research protocols around these public-private spaces would be worthwhile across disciplines. Where do you think policymakers should draw the line? Leave your thoughts in the comments.

The full agenda for the CODE Conference, including presentation topics, is available here; some quick Googling on a topic of interest should get you to the research.