Monthly Archives: February 2012

This post is motivated by a recent discussion on ethics and standards for the protection of people in crisis mapping. Some important arguments and counter-arguments have been brought up, but in my opinion, one of the biggest threats to the security of crisis mapping sources on the ground cannot be disarmed by the proposed measures, as I will show.

The starting point for my train of thought was a recent blog post from the Satellite Sentinel Project. Their main argument is:

For the evolution of this digital toolbox [crisis mapping]– part crowd sourcing, part field reporting, part social media, part digital cartography, and part data mining – has to date outpaced the development of widely accepted doctrine for responsible use thereof. And crisis mappers, who already commonly use a set of digital platforms and tools, now urgently need a shared set of ethical and technical standards for how to use these safely and strategically.

It is an acknowledged fact that the use of social media for increased situational awareness, by crowdsourcing both data collection and data curation, faces critical issues of data quality. There is a lot of ongoing research on the subject, as well as practical guidelines used, for example, by the Stand-By Task Force (SBTF). However, considering the recent deployments of the SBTF in the Libyan crisis and the use of social media in the Arab Spring, a new challenge for crisis mappers has certainly arisen:

It [crisis mapping] is increasingly about going head-to-head with hostile intelligence and security services intent on obstructing, co-opting and distorting the data that crisis mappers gather.

This development not only increases the risk of maliciously introduced “wrong” data. So far, crisis mapping has been mostly concerned with the credibility and reliability of its sources – now it has to consider the security of those sources as well.

I think that an authoritarian regime’s security apparatus has four main attack vectors:

  1. Injecting false information.
  2. Shutting down telecommunications infrastructures.
  3. Using the publicly available crowdsourced information as valuable (counter-)intelligence.
  4. Using meta-data (IP addresses, phone numbers, user profile information, …) to identify insurgent sources.

The issue of #1 is not to be underestimated, but it is covered by ongoing work on assuring data quality. #2 hurts the authoritarian regime as well, and the events in Egypt, for example, have shown that it does not work well. #3 and #4 are the most dangerous and still under-explored ways to use social media and crisis mapping against dissidents and protesters.
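To make vector #4 more concrete: any piece of metadata that travels with a report can identify its source. Here is a minimal, purely illustrative Python sketch of stripping source-identifying metadata before a report goes public. The field names and the `scrub_report` helper are my own assumptions for illustration, not taken from any real crisis mapping platform:

```python
# Hypothetical sketch: removing source-identifying metadata from a
# crowdsourced report before it is published on a public map.
# Field names are illustrative, not from any real platform.

SENSITIVE_FIELDS = {"ip_address", "phone_number", "user_id", "email", "device_id"}

def scrub_report(report: dict) -> dict:
    """Return a copy of the report with source-identifying fields removed."""
    return {k: v for k, v in report.items() if k not in SENSITIVE_FIELDS}

report = {
    "text": "Roadblock reported near the central square",
    "timestamp": "2012-02-20T14:32:00Z",
    "phone_number": "+218-91-555-0123",   # identifies the source
    "ip_address": "41.208.0.17",          # identifies the source
}

public_report = scrub_report(report)
```

Of course, scrubbing explicit fields is only the easy part – the report text itself, or its timing, can still give a source away, which is exactly why vectors #3 and #4 deserve more attention.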

The SSP blog post accordingly argues that:

Crisis mappers are this era’s first responders, as it were, but they operate – it must be conceded – without the benefits of standardized training, technical benchmarks, field-tested equipment or peer-reviewed codes of ethics.

The blog post drew some criticism, because it did not mention that there is already a large number of initiatives doing just that: developing standardized training, codes of ethics, and best practices for crisis mapping. A non-exhaustive list includes

In another blog post, Patrick Meier shows that the crisis mappers community is breaking new ground with its work on the privacy, security, and ethics of crowdsourced mapping: the Data Protection Manual of the International Organization for Migration does not mention social media once, and its principles “are not easily customizable for the context in which the SBTF operates.”

The SSP has responded to the criticism in yet another blog post, stating that they were aware of all those initiatives, yet:

These efforts are laudable, much needed, and constructive. They are also by themselves insufficient to address the challenges that our field and those we seek to assist face as a result of the work we all do.

The authors continue to propose a course of action:

1. We will seek to convene a diverse and inclusive meeting of stakeholders from across the crisis mapping community in 2012 to articulate a process for developing a comprehensive and binding code of ethics and technical standards for our field.

2. We will immediately form an ethics and standards task force that is representative of the multiple individuals, communities, and groups that contribute to our field.

3. We will seek out and convene experts from other, related disciplines such as ethicists in the field of human subjects research, the international humanitarian law community, and other professions to advise us in this process and share lessons learned from other, similar efforts in different contexts.

4. We will commit to ensuring that the input and voices of those we seek to assist are enfranchised as we define and carry out this enterprise.

This proposal is a welcome and indispensable course of action. I fully agree with the authors that the crisis mapping community must take every possible step to ensure the security and protection of its sources, and to avoid causing damage by publishing wrong or harmful information.

However, I must admit that I have been (and still am) skeptical of the utility of very detailed and exhaustive guidelines and codes of conduct, especially since the contexts of crisis mapping can vary enormously, and the participation of (new) volunteers is part of the concept.

For example, in a natural disaster context, preventing harm to the people on the ground may require foregoing privacy concerns altogether: grabbing as much data as possible through an opportunistic sensing approach, processing it automatically, and putting it on a public map as quickly as possible. During a humanitarian crisis or an insurgency against an authoritarian regime, however, a more cautious approach is essential – one that relies on participatory sensing to ensure the implicit or explicit consent of the sources, and that avoids the public dissemination of sensitive information, since authoritarian regimes can monitor public social media platforms and aggregation services just as easily as the public can.
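One simple precaution under the cautious approach would be to coarsen report coordinates before anything reaches a public map, so that a published point cannot be traced back to a specific household. The sketch below is a minimal illustration under my own assumptions (the grid size, the snapping method, and the `generalize_location` name are all invented here); a real deployment would need far more careful threat modeling:

```python
import math

def generalize_location(lat: float, lon: float, cell_deg: float = 0.05) -> tuple:
    """Snap coordinates to the center of a coarse grid cell.

    A 0.05-degree cell is roughly 5 km across at the equator, so the
    published point only reveals a neighborhood-scale area, not an address.
    """
    def snap(x: float) -> float:
        return math.floor(x / cell_deg) * cell_deg + cell_deg / 2
    return (round(snap(lat), 6), round(snap(lon), 6))

# A (fictitious) report location near Tripoli, coarsened before publication:
public_point = generalize_location(32.8872, 13.1913)
```

Snapping to a fixed grid, rather than adding random noise, has the advantage that repeated reports from the same place always map to the same published point, so an observer cannot average many noisy points to recover the true location.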

Further, it is important to acknowledge the responsibility that the sources on the ground bear for their own security, and to inform or educate them about the possible dangers of breaches of their anonymity. This corresponds roughly to item #4 in the SSP’s proposed course of action above, which, in my humble opinion, needs more emphasis.

The above-mentioned guidelines from the SBTF are a step in the right direction and provide detailed information on the technical aspects of security. Yet they only hint at the fact that the greatest opportunity social media offer is also the greatest challenge: the openness and ease of connecting with others and sharing information. Social engineering is relatively easy to carry out against both the volunteer force curating the information and the dissident network itself. While authoritarian regimes are often not the most tech-savvy, they usually have a large security apparatus with plenty of informants on the ground and experience in manipulating people.

Ultimately, let’s not forget that it is the responsibility of the empowered people in democratic countries to demand that their governments act on the crisis at hand. Otherwise, the courageous efforts of those demonstrating for freedom from oppression might be in vain.

A final note: keep in mind that all of the above are my highly subjective ramblings, interpretations, and selective quotations. It therefore goes without saying that, before you criticize any of the mentioned organizations or persons, you should read the primary sources I linked.