Since the 2010 earthquake in Haiti, the media have produced a significant number of articles on crowdsourced information curation in the context of crisis management. I have tried to follow these developments closely, since in our research we are trying to harness the power of the crowd to improve the situational awareness of the public and of decision makers during a crisis event. The motivation for our work has always been the conviction that crowdsourcing efforts will reach a stage at which the organizational effort needed to coordinate the actions of the crowd outweighs the advantages gained; in other words, the deluge of data and information might become too much even for a crowd of volunteers to handle. As the experiences of the Standby Task Force (SBTF) show, you have to give crowd volunteers a minimal amount of training, otherwise their effort will not produce results reliable enough for disaster relief. But there seems to be another catch besides curating (or mining) the data for useful information.

Having expressed my admiration for and confidence in the work of the crisis mapping community several times on this blog, it is about time to take a more critical look at its utility for relief work on the ground during the immediate response phase. Such doubts are expressed in this blog post on MobileActive. It is a very interesting post that has sparked a lively discussion, and I recommend it highly. The most important arguments made by the author seem to be:

1. You cannot be sure that the crowd is there when you need it. Depending on other concurrent events, or on the geographic area affected, there might not be a sufficient number of volunteers.

2. The basic needs of most members of the affected communities are always the same: food, water, shelter, and basic medical aid.

3. The uncertain reliability of crowdsourced reports means that information has to be re-checked, and the information is of a very transitory nature.

These arguments, which are difficult to refute even under optimal conditions, mean that relief forces on the ground will have to gather and verify intelligence about the situation themselves, irrespective of any crowdsourced information.

However, I still see positive aspects: crowdsourced disaster information can provide the general public and decision makers with useful information. The public, because they might find information extremely relevant to them that they would not have obtained through traditional means of information gathering (i.e., top-down authoritative broadcasting). Decision makers, because they could get a better overall view of where the hotspots are and where to send relief workers. The experts on the ground will always have to decide on the spot where to engage first.

[Figure from the paper] Top left: Protestant (blue) and Catholic (yellow) demographics; top right: Swiss cantons; lower left: risk of violence without accounting for canton borders; lower right: risk of violence with administrative borders included. Image: Rutherford et al./arXiv

I recently stumbled upon a research article that made me wonder about the visibility of several decades of Geographic Information Analysis/Science research outside its disciplinary boundaries. In this paper (Good Fences: The Importance of Setting Boundaries for Peaceful Coexistence; a flawed summary can be found at Wired), the authors describe how they used a wavelet filter to analyze differences in the spatial distribution of languages and religions in Switzerland, adjusting the process for natural and political boundaries and trying to predict areas of tension. In other words, they investigated the geographic distribution of two variables and looked for patterns of significant heterogeneity in close spatial proximity.

Never mind that there is not a single reference to any research on geographic analysis; I also wonder how explanatory their results are. The authors never mention how they determined the critical parameter of the wavelet diameter, nor do they sufficiently consider how their variables develop over time. The model correctly “predicts” the violence in the Jura region using data from after the formation of a new canton, and we cannot know how much calibration was necessary to achieve this result. In other words, the validation seems vague. Maybe I am too harsh in my verdict here, and I welcome any comments proving me wrong.
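
To make the parameter issue concrete, here is a minimal, hypothetical sketch in Python with NumPy/SciPy. It is not the authors’ actual pipeline (which also adjusts for natural and political boundaries); the grid, noise level, and threshold are my own illustration. The Mexican-hat wavelet is, up to sign, the Laplacian of a Gaussian, so SciPy’s gaussian_laplace gives the filter response at a chosen scale sigma, which plays the role of the wavelet diameter I question above:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic 100x100 grid: each cell holds the local fraction of one group
# (0 = all group A, 1 = all group B), with a sharp east-west divide plus
# some measurement noise. Real inputs would be gridded census data.
rng = np.random.default_rng(0)
grid = np.zeros((100, 100))
grid[:, 50:] = 1.0
grid += rng.normal(scale=0.05, size=grid.shape)

# The critical free parameter: the wavelet scale ("diameter"). The paper
# does not say how this was chosen, which is the calibration concern.
sigma = 5.0

# Mexican-hat (negative Laplacian-of-Gaussian) response at scale sigma;
# sharp transitions between homogeneous regions produce strong responses.
response = -gaussian_laplace(grid, sigma)

# Flag cells above a (likewise arbitrary) threshold as candidate
# boundary/tension zones.
threshold = 0.5 * np.abs(response).max()
boundary = np.abs(response) > threshold
print(f"sigma={sigma}: {boundary.mean():.1%} of cells flagged as boundary zones")
```

Re-running this toy example with, say, sigma = 2 or sigma = 10 flags noticeably different sets of cells. That is exactly the problem: without a principled, reported way of choosing the scale, it is hard to tell how much of a reported fit is an artifact of tuning.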