Case Study: Conducting an Online Bulletin Board amongst Advisers

26 February 2018 | Research

Here we take a look at our experience of using an online bulletin board to research a number of proposed advertising executions aimed at advisers.  Limitations on the available sample meant that we were going to struggle to pull together face to face groups of advisers.

Introduction

Below we review a recent qualitative research project we conducted online, to give a flavour of how the method actually works in practice – what the benefits and trade-offs of using the approach were.

The subject matter was the current pension advice landscape post Pension Freedoms, and some related advertising targeted specifically at advisers.

The main challenge for this project was the limited sample available – the sample lists had been used in the past, and we also knew we had an important future project for which they would be required again.  If we conducted face to face groups, we would have had to focus on major conurbations – reducing the urban sample available for planned future groups and, in effect, ignoring the sample located further afield.

To relieve this impending burden on the densest sample clusters and to open up all of the sample to research, we opted for online research and decided on a bulletin board approach.

Recruitment

In the recruitment we gained the following benefits:

  • It was an easier and more straightforward recruitment task than for a group discussion. People said quickly whether they wanted to take part, and were not put off by the format
  • We were able to recruit from geographically scattered sample (instead of drawing on ever-decreasing pools of sample clustered around group venues)
  • We recruited a greater range of people from areas that do not normally get asked, so the sample was more geographically representative – a way of involving advisers who are usually too far from a central venue to be invited to groups
  • This reflected well on our client – the ‘reach’ was greater and extended to relatively ‘under-researched’ advisers, who appreciated being asked to be involved and felt that our client had ‘listened’ to their views

All were very amenable to taking part, and happy to commit to logging on once a day for around 15 minutes over the course of a week.

Topic guide

In structuring the online topic guide, we grouped the questions into broad areas, and focused on one of these areas on each of the five days of the ‘fieldwork’.  Each day the participants were asked three questions on that day’s area, each consisting of a ‘main’ question supported by further explanation/ probes indicating the sorts of things we were looking for under each topic.

This meant that, whilst the questions were more ‘directive’ than in a personal discussion, they retained their essential qualitative nature.

Respondents knew the area that we were interested in, and we provided enough context to prevent them from wandering too far off the topic – but the style of the questions remained ‘open’, retained flexibility and encouraged respondents to give full responses.

Moderating

The moderating task is very different from a face to face group – it takes significantly more time and ‘organisation’ in terms of keeping tabs on different respondents and their previous answers.

Type of question: The bulletin board allows questions to be set up as ‘open’ (respondents can see each other’s answers before giving their own) or ‘private’ (each respondent must give their own views before seeing what other people have said).  This level of control is very different from a face to face group, where (unless specific techniques are used) questions are posed openly within the group.

In practice there are different benefits to each approach.  Where questions were ‘open’, respondents did take the time to read previous comments and clearly incorporated them into their thinking.  Their responses were informed by what others had said – and there were frequent references to agreement/ disagreement with previous posts.  As a result, the question thread felt quite ‘interactive’ and iterative.

However, where questions were ‘private’ – i.e. a respondent had to give their answer before seeing anything else – we did get ‘untainted’ responses to the question posed, and there was more diversity of opinion and expression.  But the level of interactivity definitely dropped off, and although all answers became visible once an individual had posted their own, it was unclear whether they reviewed these and rethought their position.

Our view is that the interactivity of the open approach generated richer insight and helped the group ‘gel’ – the benefits, and the relationships formed, carried through into the subsequent questions.

While there are clearly instances where individual responses are advantageous, there is a trade-off to be made.  The design of the questions needs to account for this – perhaps by summarising previous responses and asking follow-up questions later in the process.

Timing: Some people started late – this meant that early questions were getting new answers (and replies to follow-up probes) at the same time as later questions were getting their first wave of answers.  However, the ‘structure’ of the platform made this easy to manage – it was always clear which question was being answered, and the system highlighted new responses to each question, including ‘late’ responses to earlier questions.

Answers kept coming in until the (real) Sunday cut-off after the (nominal) Friday finish date.

Catering for respondents to ‘over-run’ the theoretical end-time is an important factor to bear in mind during the initial project set-up.

Involvement: We asked all the respondents follow-up questions and were careful to ‘spread’ these over time and among different respondents.  This was partly to ensure that we were not overburdening individual respondents, but also to indicate to everyone that we were paying close attention to what they said.  This approach worked well – everyone who was asked a follow-up question responded to it and expanded on their views.

It also felt as though a ‘connection’ was made between the respondents and the moderator, which served to enhance the respondents’ engagement with the research process.

Respondents ‘talked’ directly to the moderator (and occasionally to other participants), and over the course of the project a real rapport was established.

Stimulus: Stimulus worked well and was easy to manage online, including handling rotations to control for order effects.  Any stimulus needs to be carefully labelled, and clear instructions given to respondents, so that it is always clear what they are referring to in their answers.  If this is done, stimulus management (and later analysis) is more straightforward online than face to face.

However, the ability to follow up with specific probes about individual pieces of stimulus was more constrained than would be the case in face to face groups.  This constraint can be mitigated somewhat by careful planning – for example, if there is a range of alternatives (e.g. alternative advertising executions), these can be whittled down early on, with the ‘winning’ set re-presented later in the week and more specific questions asked about it.

Whilst easy to do, this needs thinking through in advance to ensure that there is enough ‘space’ in the overall question structure to accommodate it.

Content and analysis

The online approach worked well overall in providing rich feedback on the questions asked, and in generating quotable verbatims to include in the debrief.

Questions needed to be more focused and more carefully worded than in a face to face group, in order to ensure that they were clear and unambiguous (and so easier for respondents to answer without a great deal of head-scratching).  This meant that the responses were also clearer and more focused than is often the case in a face to face conversation.

The best way to include probes was to build them into the original question as sub-points, rather than posing them as a separate follow-up.  This meant that they were sometimes ignored – but then not everyone answers every question in a face to face group either.

We did not want to put follow-up questions to the full group, as this would have increased their workload beyond what they initially thought they were agreeing to, so we focused our follow-up questions on individuals.  Other participants could see their replies, so we were able to reference them in follow-ups with other people.

We needed to do this because respondents were a little reticent about proactively ‘butting in’ to the discussion of other people’s replies to individually targeted questions.

Responses to the questions were longer as well as more focused and more concise than in conversation – they were written as opposed to spoken answers.  This meant they were always on topic, with no irrelevant asides, and often more eloquently expressed than is typical in traditional groups. They were less ‘off the cuff’, and in all likelihood included some second thoughts as well as instant responses.

The online approach also provided more detail in the answers – for example, more illustrations of helpful tools or resources than we tend to hear in comparable face to face groups.  In groups, examples are sometimes mentioned in asides (easy to miss) or are more generic (e.g. a provider is named, but not their tool), and generally there are fewer answers (once a few examples have been proffered across the group, other respondents do not feel the need to contribute).

In contrast, in the bulletin board each respondent was able to give their personal preferences and examples, and was able to think through which to talk about – the responses were more considered and overall a ‘broader’ picture was obtained.

In comparison with face to face groups, the feedback on the stimulus/ different advertising alternatives gave a clearer picture of preferences than could be obtained in a traditional group without doing hand counts, as everyone expressed a view on every execution.

The drivers of these preferences were also expressed clearly and cogently. However, there was less scope for inviting suggestions for minor – but possibly significant – copy improvements than might be expected in a face to face group.

By the very nature of the methodology, we had immediate access to complete transcripts as soon as the bulletin board was closed.  We also knew who had said what, and the answers were all organised by subject area.  This really helped to speed up the analysis while still allowing full access to all the ‘raw material’ (no need for the typed-up transcripts or edited analysis of audio recordings that face to face groups would require).

Clients were also able to review the ‘transcripts’ (either in their entirety or specific question areas) as and when required.  This meant that they were able to get a sense of the conversation and ask further questions as the fieldwork progressed.  However, client feedback suggested that this is an onerous task – following the written conversations of 50 or so individuals (a number based on four online groups) is challenging and time consuming – more so than simply listening to a ‘live’ conversation, where it is easier to gain an immediate overall sense of feeling/ views.
