The Ethics of Social Honeypots

29 Dec 2012 · David Dittrich · botnets, ethics, honeypots, irb, social-honeypots, social-networks, the-menlo-report

For the last few years, I have been participating in a Department of Homeland Security-sponsored effort to develop ethical principles and applications for the evaluation of information and communication technology (ICT) research. If you are not familiar with the Menlo Report, you can find a description in Michael Bailey, David Dittrich, Erin Kenneally, and Douglas Maughan, "The Menlo Report," IEEE Security & Privacy, 10(2):71–75, March/April 2012.

Two of my Menlo colleagues – Wendy Vischer and Erin Kenneally – and I recently taught a didactic course at the PRIM&R Advancing Ethical Research (AER) conference in San Diego. (PRIM&R is the professional organization for Institutional Review Board, or IRB, professionals, and its annual AER conference draws thousands of attendees.) Our course primarily described the Menlo Report process to date, but we concluded with a mock IRB committee review of a fictional research project in which researchers develop countermeasures to malicious botnets on social network platforms like Facebook, using deception to build a social network of over 1 million users and then deploying “good bots” to infiltrate the “bad bots”. (Just so you know, I have served since 2009 as an affiliated scientist full member on one of the University of Washington’s IRB committees. I lend my expertise in data security to investigators designing their research protocols and to committee discussions of research studies associated with the UW. I highly encourage other computer security researchers to do the same at their own institutions’ IRBs.)

The paper I wrote to provide the background, a synthetic case study description, and a mock IRB application can be found as David Dittrich, "The Ethics of Social Honeypots," available at SSRN: http://ssrn.com/abstract=2184997, 2012. If a researcher asked how to approach the UW’s IRB for such a study, this is the guidance I would provide. I would be very interested in feedback from researchers who study malicious software, botnets, and social networks, especially those with experience interacting with their own IRBs, as to whether this paper is helpful, in what ways, and how it could be modified to be more helpful.

P.S. At the PRIM&R conference, Stuart Schechter from Microsoft pointed out to me that Facebook provides a back-end data access service for researchers that would obviate the need for deception to obtain the information described in this paper as necessary to detect malicious botnets. I was not aware of this service, and possibly many other infosec researchers are not either. If you are familiar with it, please let me know your experiences or opinions. Regardless, I think the discussion of how to address the use of deception in ICT and computer security research is still an important one. I also want to pass along something else Stuart mentioned: Microsoft Research has a service that assists researchers who use deception in their studies, which sounded very interesting and useful to me. I am interested in feedback on that service as well.