
For July 2017, the Data Scientist's Challenge is: Data-driven web design with A/B testing and experimentation.

Jordan Read and the Data Science team from the Office of Water Information help us think about how we can gather quantitative information to improve science communication through our websites and apps.

This is a very timely topic, since it relates to the user needs and user experience issues for our sites and apps that came up at our CDI Workshop.

"Use data, not opinion!"

Comment below to continue the discussion; we'll come back to share the results at a future monthly meeting.

  • How do you decide how to add a feature?

  • What metrics do you use to judge feature success?

  • What types of applications/websites are high priorities for testing?

  • Does anyone have a current need for data-driven decisions for an application or website?

3 Comments

  1. Question on the call from Fran Lightsom: Is there a way to know what outcomes people are trying to accomplish when they use our websites?

    A: It depends on the website: some are for the general public, others for the scientific community. Sometimes we hold user group sessions to gather requirements and to see whether the users have the same idea as we do about what the outcome should be. Sometimes we test given workflows and see whether users make it all the way through; for example, can they actually get to downloading the file, or do they give up halfway through?
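
    To make the workflow idea concrete, here is a minimal Python sketch (the step names, event-log format, and session counts are hypothetical, not from any USGS site) that counts how many sessions make it through each step of a defined workflow, such as reaching the download:

    # Minimal sketch: count how many sessions reach each step of a workflow.
    # The step names and (session_id, step) event format are assumptions.
    from collections import defaultdict

    WORKFLOW = ["visit", "search", "view_results", "download"]

    def funnel_counts(events):
        """events: iterable of (session_id, step_name) tuples."""
        steps_by_session = defaultdict(set)
        for session_id, step in events:
            steps_by_session[session_id].add(step)
        counts = []
        for i in range(len(WORKFLOW)):
            # A session "reaches" step i only if it hit that step and every earlier one.
            required = set(WORKFLOW[: i + 1])
            counts.append(sum(1 for steps in steps_by_session.values() if required <= steps))
        return counts

    # Example: three sessions, and only one makes it all the way to the download.
    events = [
        ("s1", "visit"), ("s1", "search"), ("s1", "view_results"), ("s1", "download"),
        ("s2", "visit"), ("s2", "search"),
        ("s3", "visit"), ("s3", "search"), ("s3", "view_results"),
    ]
    for step, n in zip(WORKFLOW, funnel_counts(events)):
        print(f"{step}: {n} of {len(set(s for s, _ in events))} sessions")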

  2. Are there results from previous A/B testing that could be used to inform those of us who haven't done it before?

    Do you test the A/B scenarios serially in two different time periods or in parallel? 

    How many data points do you need?

    It would be interesting to follow up and learn more about the background/theory behind A/B testing and other user experience topics! If you have any suggestions of good resources for that, please let us know.

    1. We are still gathering information about this topic, so we do not have all the answers, but we are really interested in learning more. To my knowledge, the USGS has not done A/B testing, but we are hoping to hear from anyone who knows of a current or past effort. My understanding is that A/B scenarios are run in parallel: when someone comes to your site, they are automatically directed to either the A or the B version. I think the sample size depends on many things; two that come to mind are 1) whether the results would drive a significant UX/UI change, and 2) how many users the site typically gets.
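
      To make the "directed to the A or B version" part concrete, here is a minimal sketch of one common approach, deterministic hash-based assignment (the experiment name and 50/50 split are made-up examples, not an existing USGS setup). Hashing a stable visitor ID means each returning visitor keeps seeing the same variant while incoming traffic is split between A and B in parallel:

      # Minimal sketch: deterministic 50/50 assignment of visitors to variant A or B.
      # "visitor_id" is assumed to be a stable, anonymous identifier (e.g. a cookie value).
      import hashlib

      def assign_variant(visitor_id, experiment="homepage-layout", split=0.5):
          """Return 'A' or 'B' for this visitor, consistently across visits."""
          digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
          # Map the first 8 hex digits to a number in [0, 1] and compare to the split point.
          bucket = int(digest[:8], 16) / 0xFFFFFFFF
          return "A" if bucket < split else "B"

      # The same visitor always lands in the same bucket, so both versions
      # can be served side by side during the same time period.
      print(assign_variant("cookie-1234"))  # prints 'A' or 'B'
      print(assign_variant("cookie-1234"))  # same answer on every visit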

      A/B testing is well known in the business/marketing community, so there are plenty of resources available online. The challenge is going to be tailoring it to the goals of our websites and applications, which likely differ a great deal from those of businesses. Most things I read online talk about a "conversion rate," which is the thing you are measuring during your testing, i.e., "what percent of users do X." For businesses, that can be how many people purchased a product, how many people signed up for an account, etc.

      The conversion rate metric we choose for USGS will depend heavily on our goals for that page or application. For some, the goal might be to have people learn something new; for others, it might be to provide a specific link or piece of information; and for a lot of public-facing websites, the goal is to highlight the USGS brand and our relevance/importance in science. Translating these into potential conversion rate metrics:

      - percent of people who continued to additional info pages
      - percent of visits where the time to click a specific link was less than X
      - percent of returning visitors
      - percent of visits with time on the page greater than X
      - percent of people who "shared" the page through social media, if that option exists
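
      On the statistics side, here is a minimal sketch of comparing a conversion rate between the two variants (the download counts below are made up, and the formulas are the standard two-proportion z-test plus a common rule-of-thumb sample-size estimate, not an established USGS procedure):

      # Minimal sketch: compare a "conversion rate" (percent of users doing X)
      # between variants A and B, and roughly estimate how many visitors are needed.
      from math import sqrt, erfc

      def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
          """Return (rate_a, rate_b, two-sided p-value) for H0: the two rates are equal."""
          p_a, p_b = conv_a / n_a, conv_b / n_b
          pooled = (conv_a + conv_b) / (n_a + n_b)
          se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
          z = (p_a - p_b) / se
          return p_a, p_b, erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

      def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_power=0.84):
          """Rough visitors per variant to detect p_base -> p_target (alpha=0.05, power=0.8)."""
          variance = p_base * (1 - p_base) + p_target * (1 - p_target)
          return int((z_alpha + z_power) ** 2 * variance / (p_base - p_target) ** 2) + 1

      # Example: 120 of 2,000 visitors downloaded the file on version A vs 150 of 2,000 on B.
      rate_a, rate_b, p_value = two_proportion_ztest(120, 2000, 150, 2000)
      print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p_value:.3f}")
      print("visitors needed per variant:", sample_size_per_variant(0.06, 0.075))

      With these made-up numbers the p-value comes out just above the conventional 0.05 threshold, which is a good illustration of why "how many users does the site typically get" matters: low-traffic pages may need to run a test for a long time before a real difference becomes clear.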

      Also, it's worth saying that while we are interested in data-driven web design, A/B testing is not the only method we could use to accomplish it.

      Here are some resources I've found that illuminate the topic:

      - Defining A/B testing (http://usabilitygeek.com/introduction-a-b-testing/)
      - An article outlining the statistics of A/B testing + comparing this approach to the scientific method (https://blog.kissmetrics.com/your-ab-tests-are-illusory/)
      - Blog post generally describing A/B testing with use-cases for Netflix (https://uxdesign.cc/how-netflix-does-a-b-testing-87df9f9bf57c)