Monthly Archives: July 2011

We all make mistakes. Sometimes in private, sometimes in public. Sometimes, we even publish them. So what do you do when you’ve spread misinformation (accidentally, of course…)? Do you keep still and hope no one notices, do you try to erase the traces of it, or do you go public and try to correct the error? I’ve stumbled upon this interesting article, which investigates the issue. It’s based on personal, anecdotal evidence, but it is well argued, and I tend to agree with the conclusion:

However, the article falls short on one account: It mostly addresses those working as (web) journalists, (web) publishers, researchers, or others who have something original to share. But what about the majority of social media users? Let’s face it: most of us link to or comment on articles published by others which we find interesting and worth sharing. This is great, because I have discovered a lot of fascinating stuff I would never have learned about without social media. Yet it also means that each and every one of us has a responsibility: the responsibility to check what we re-tweet, re-post, link, or comment on, before we pass it on. Spreading rumours was a bad enough habit in pre-digital times. Nowadays, it has the potential to cause even more damage. We should therefore not pass on news based solely on the trustworthiness of the source, but should always check the credibility of the message itself. While this may seem a trivial observation, and it certainly held true in pre-digital times as well, I am convinced that it has become more important, and that we don’t yet act accordingly.


I’ve been following the development of the Swiftriver initiative with great interest over the past year. For those who don’t know what it’s about: they are tackling the same problem that I have to deal with in my research, i.e. the increasing amounts of social media data and information, and the need to filter, tag, and validate it in order to be able to use it. While crowdmapping platforms such as Ushahidi currently rely on enthusiastic volunteers to curate the data (such as the Stand By Task Force), this approach will (in my humble opinion) break down in the foreseeable future. We need some automatic pre-selection and tagging, so that only the difficult (ambiguous) cases are curated by humans.

That being said, the Swiftriver approach shows immense promise. However, I wonder what will happen to the project now that Director Jon Gosier and Lead Developer Matthew Griffiths are both moving on to new projects (here’s the Ushahidi blog post, and here’s Jon’s). I have no inside knowledge on this, since I have only tried out part of the platform (the Sweeper app) and engaged in some interesting discussions with Jon and Matthew on the Swiftriver Google group. Given my limited knowledge, I am afraid I cannot share Jon’s optimism for the future of Swiftriver. While what has been achieved already is certainly impressive and fascinating, there is still a lot of work to do before the platform is close to being operational. I sincerely hope that the community behind Swiftriver and the new partnerships mentioned by Jon are indeed already strong enough to support the project in the future.