
So I got into a (friendly) debate with someone on Twitter after sending an intentionally controversial Tweet.

This subsequently turned into a discussion about the role of bias in community engagement, and whether online surveys are a good method for community consultation.

Craig, who runs Delib Australia (who “[provide] awesome digital democracy apps and consultation software to facilitate online policy-making and consultation”), subsequently wrote a blog post about the topic, emphasising that what matters is identifying goals, not tools.

Which is absolutely correct. Anyone doing anything (social media, marketing, consultation, or otherwise) without clear goals in mind is likely to waste time and/or money, in any context. The same goes for tools, and it is the reason Dialogue has always remained 100% vendor-independent, receiving no bonuses or kickbacks from social media monitoring/engagement companies or other affiliate programs. Otherwise, we feel we wouldn’t be able to recommend the best tool for our clients.

However, the area we disagree on here is the role of online surveys. It is my perspective and experience (perhaps different to his, of course!) that online surveys are the default, go-to ‘solution’ for community engagement. As with most things, there is more than one way to skin a cat: you could probably meet your consultation goals using MySpace if you wanted to, it’s just that your success would be limited and your sample extremely biased. Sample bias is a vital consideration in any consultation or market research project, and Mary has written previously on this subject, so I won’t go into it in detail here. As always, it’s important to make sure that the data you’re collecting comes from the correct, representative sample; otherwise you’re back to the classic problem of garbage in, garbage out.
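To make that concrete, here is a rough sketch, purely for illustration, of the kind of representativeness check I mean (the age bands and every number below are made up, not from any real project): compare who actually answered your survey against known population proportions before you trust the results.

```python
# Purely illustrative: compare survey respondents against known population
# proportions (e.g. census age bands) to spot an unrepresentative sample.
# All figures below are invented for the sake of the example.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed population mix
respondent_counts = {"18-34": 120, "35-54": 310, "55+": 570}    # hypothetical survey returns

total = sum(respondent_counts.values())
for band, pop in population_share.items():
    sample = respondent_counts[band] / total
    label = "over" if sample > pop else "under"
    print(f"{band}: {sample:.0%} of sample vs {pop:.0%} of population ({label}-represented)")
```

If the older bands dominate the sample the way they do in these invented numbers, the ‘community view’ you report is really the view of one segment of the community.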

Having conducted a wide range of market and academic research in the past (including publications in peer-reviewed journals), I feel I am pretty well across what the various platforms offer researchers. And community engagement is, ultimately, a form of market research.

Sure, you can have all kinds of features in your survey: pretty maps, drag-and-drop interfaces, whatever you’d like. But the big problem I see in government and elsewhere is what I term “Survey overload”: almost every organisation (government, NFP, corporate) now asks you to complete a survey after almost every interaction. Regardless of what features these surveys incorporate, the more we ask people to spend time on them, the harder we make it to ask again in the future. I certainly don’t fill out every survey I see, and I am very much swayed by a good promotion or incentive (everyone wants to win $10,000!).

A good example I saw recently comes from an infographic about community consultation, produced by a consultation platform provider who will remain nameless [however I will explicitly say it wasn’t Craig/Delib!]:

Incorrect. Surveys can provide both qualitative and quantitative data, and also shouldn’t be an “if you want x, use y” type of option.

Similarly, being ‘innovative’, ‘government 2.0’, a ‘social business’ or whatever buzzword you’d like to grab does not mean simply doing stuff online. An online form does not an innovative government make, in the same way that a Facebook Page does not a social business create.

The best thing that I believe has come from social media’s “revolution”, for want of a better word, is the return to customer/user centricity that it has emphasised. And that’s not a unique or recent idea — you just have to look at the history of ‘solution selling’ to see that the idea of becoming user or customer-centric has been around for quite some time.

As part of this user-centric model, we need to go where the users are. While Craig (and others) would say that all platforms should be treated equally, my perspective has always been to start where people already are. And last time I checked, people don’t usually hang out on SurveyMonkey waiting for the next consultation. Instead, we try to push them from the channels they use actively (email/Facebook/Twitter/etc.) to another platform (SurveyMonkey/owned communities/proprietary platforms) for the sake of our convenience. That ultimately means your data is instantly biased toward the people who are willing to take that extra step.
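For a sense of scale, here is a back-of-the-envelope sketch of what that push from one platform to another does to your numbers; every rate in it is an invented assumption, purely to illustrate the funnel.

```python
# Hypothetical funnel: pushing people from the channel they already use
# (email/Facebook/Twitter) across to a separate survey platform.
# Every figure here is an assumption for illustration only.

audience = 10_000          # assumed reach on the channel people already use
see_the_call = 0.20        # assumed: fraction who actually see the call to action
click_through = 0.10       # assumed: fraction of those who follow the link
complete_survey = 0.40     # assumed: fraction of those who finish the survey

completions = audience * see_the_call * click_through * complete_survey
print(f"Completed responses: {completions:.0f} of {audience:,} "
      f"({completions / audience:.1%} of the original audience)")
# With these made-up rates: roughly 80 responses, i.e. under 1% of the audience,
# and a self-selected 1% at that.
```

The exact figures don’t matter; the point is that each extra hop both shrinks the pool and skews it toward the most motivated respondents.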

Now, I believe that my ‘bias’ (as people like Craig might describe it) isn’t really about bias at all. It’s about professional experience: knowledge of the limitations of these tools, and of the disconnect between those limitations and how commonly they are used within the consultation/research sector, compared with other available methods that may be harder, take more effort, or be the road less travelled. Given that, I am very aware of the strong response bias that usually occurs when pushing a user from one platform to another.

I am very happy to say that for our clients, an online survey or external consultation that requires users to go somewhere other than where they already are is the last option we reach for. Having said that, we still commonly recommend it: being the last option doesn’t make it any less likely to be used, and surveys definitely have their place in the mix. But they shouldn’t be the starting point, where the attitude is ‘just stick up a SurveyMonkey and get some feedback’. Regardless of whether you label the problem there as one of goals, users or tools, it isn’t the correct approach: it simply results in poor-quality data, annoyed users, and the perpetuation of the idea that surveys are the ‘best practice’ solution.

I look forward to the day when the online survey is a space for detailed, primarily qualitative information, and the majority of data is collected, in an ad-hoc manner, in the places users already are.

Let the debate continue!
