Monday 16 August 2010

Analytics + survey data = customer insight [1+1=3]

It's very easy to get consumed by the sheer volume of web analytics data, so all you probably want to hear from me is, "have you considered measuring this?" or "just try another data source".


What's more important is getting the balance between data sources right and knowing when to use each, depending on what it is you want to improve...there's definitely another post on that topic in the future.


Up until recently I'd always used quantitative data, the only exception being my degree dissertation (some five years ago now), for which I conducted interviews on my chosen subject, "What are the barriers to growth in mobile music download business models?", and turned those interviews into something meaningful by doing the following:


Having collected the primary data (email responses received and recorded telephone interviews transcribed), a process of ‘abstracting and comparing’ was used to analyse it. The responses to the questions were studied in detail one at a time, highlighting the most relevant parts of the data and noting any additional thoughts during the process. This was akin to coding and memoing, although the exact definitions of those terms are themselves debatable:
On the one hand coding is analysis. On the other hand, coding is the specific and concrete activity which starts the analysis. Both are correct, in the sense that coding both begins the analysis, and also goes on at different levels throughout the analysis (Punch, 1998, p.204).
Therefore the term ‘abstracting and comparing’ was chosen to best describe the method of data analysis undertaken, although within the process some coding was carried out to turn raw data into higher-level concepts. The abstracted data was listed in a table, compared, grouped into like indicators, and then the higher-order concepts noted, such that an overriding result for each question could be determined. The process of comparing was ‘essential to identifying and categorising concepts’ (Strauss and Corbin, 1990, p.84) and meant that the more abstract concepts could be straightforwardly developed. This process was repeated for each question, and a tabular format was designed to show the results, representing the evolution and logical relationships between data, indicator, concept and result.
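The table described above (data → indicator → concept → result) can be sketched in a few lines of Python. This is just an illustrative toy, not the dissertation's actual tooling, and all the response excerpts, indicators and concepts below are invented for the example:

```python
# A minimal sketch of the 'abstracting and comparing' table:
# raw data -> indicator -> higher-level concept -> overriding result.
# All excerpts/indicators/concepts here are made up for illustration.
from collections import defaultdict

# Hypothetical coded responses to one interview question:
# each tuple is (raw data excerpt, indicator, higher-level concept)
coded_responses = [
    ("DRM makes tracks hard to move between phones",
     "format lock-in", "technology barriers"),
    ("Per-track prices felt too high versus CDs",
     "perceived overpricing", "pricing barriers"),
    ("Downloads failed on slow mobile connections",
     "unreliable delivery", "technology barriers"),
]

# Group like indicators under their higher-order concepts,
# mirroring one row-group of the comparison table
by_concept = defaultdict(list)
for data, indicator, concept in coded_responses:
    by_concept[concept].append(indicator)

# Derive an overriding result for the question:
# the concept supported by the most indicators
result = max(by_concept, key=lambda c: len(by_concept[c]))
print(result)  # technology barriers
```

In practice the grouping and abstraction steps are a human judgement call; the point of the sketch is only the shape of the evolution from raw data to result.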

Little did I know it at the time, but this methodology for analysing qualitative data has recently proved very useful. Trying to understand a site visitor's intent, expectations and wider issues using only web analytics is never easy: we always have to infer, create segments, trend, and so on. It is only through well-placed surveys (on and/or off-site) that we can know directly what our customers are feeling about us: sometimes joy, as a service has exceeded their expectations; sometimes unexpected pain, as parts of a product or service (perhaps ones we aren't even in direct control of) degrade our website's experience. Either way, a qualitative source of data about a visitor's experience has given me a whole new level of insight into both the expectations of our customers and what we need to improve.


The hard part, once you start collecting a large amount of qualitative data (often in the form of pure text comments), is how the hell to analyse it! Yes, there are text-analyser tools out there, but will they ever know the intricacies of your business? I still can't get away from the true value of reading every comment: that's the real gold, and only then do you either feel the pain or share the love of your customers. Depending on how a survey is structured before any freeform text field, you should be able to categorise the issues to some extent, which helps with prioritising; but I am thankful to my degree dissertation for teaching me how to analyse the customer comments we get, using coding and memoing to pick out what we need to improve.
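As a first pass before reading every comment, even a crude keyword-based coding can bucket comments for prioritising. A minimal sketch, assuming made-up categories and comments (any real scheme would come from your own coding and memoing, not from a fixed keyword list):

```python
# A hedged sketch of lightweight first-pass 'coding' for freeform
# survey comments: simple keyword buckets before the close reading.
# The category names, keywords and comments are invented examples.
CODES = {
    "checkout": ["checkout", "payment", "basket"],
    "performance": ["slow", "timeout", "loading"],
    "praise": ["love", "great", "easy"],
}

def code_comment(text):
    """Return the category codes whose keywords appear in the comment."""
    lowered = text.lower()
    return [code for code, keywords in CODES.items()
            if any(word in lowered for word in keywords)]

comments = [
    "Checkout was slow and payment failed",
    "Love the new design, so easy to use",
]

for c in comments:
    print(c, "->", code_comment(c))
```

A comment can land in several buckets at once, which is usually what you want; the buckets give you rough volumes per issue, while the comments themselves still need reading for the nuance no keyword list will catch.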
