How to Develop a Data-driven Strategy for Digital Engagement (that puts you in the driver’s seat)


Abstract

This how-to session will focus on developing a data-driven strategy for digital engagement, explored through a case study of the Natural History Museum of Utah’s Research Quest, a free collections- and classroom-based program for developing critical thinkers. Working with outside partners, the Museum developed a new data-driven strategy for digital engagement to expand the reach and impact of Research Quest. Based on personas, journey maps, experience maps, expanded success measures, and rich data analytics, the Museum was able to make surgically precise design decisions, track the efficacy of marketing campaigns, launch a new national community of Research Quest boosters, and more.

 

This presentation is designed to walk the audience through what it means to go beyond “one’s gut” and create steady streams of actionable data to learn if one’s programs are actually hitting the mark. Using the Museum as the example, the presentation will begin with what inspired NHMU to take on this new direction. We will then explore each of the steps the Museum took along its journey: defining success measures (KPIs, metrics, benchmarks, etc.), qualitative components (like personas), and quantitative components (expanding beyond Google Analytics to tools like Heap and other data visualization tools).

 

At each step of the journey, the presentation will turn interactive, guiding the participants to explore how they might apply the same tools within their own programs. Examples from the audience will be shared from the podium.

 

Next, the Museum’s journey from collecting data to turning it into actionable analytics will be explored, in areas like site design and outreach campaigns.

 

The session will end with suggested resources for how attendees can get started once they return home, followed by a Q&A.

 

Participants will leave inspired, informed, and with a sense that “Hey, I can do this… and on a budget.”

 

The primary audience is anyone responsible for the success of museum programming (education, public programming, web-based, mobile, online courses, etc.) who is looking for effective ways to define and measure that success.

 

Participants will workshop each step of the process being described in the case study. For example, they will pick one area to create a success measure and then sketch out a KPI, related metrics and benchmarks. We’ll do the same for tools like personas and experience maps. They will be engaged throughout and leave with a basic framework and feeling “Hey, I can do this.”

Authors: Madlyn Larson, Barry Joseph, and Tim Halstead

 

ABSTRACT

As the museum community increases the amount of high-quality digital content available for learning, developing a data-driven strategy for digital engagement with those resources is a key element that is too often left unaddressed. Our how-to session was designed to explore the need and approach for this work through a case study of the Natural History Museum of Utah’s (NHMU) Research Quest: online investigations designed for middle school teachers and their students. Research Quest (RQ) is a free collections- and classroom-based program for developing critical thinkers. Working with outside partners, the Museum developed a new data-driven strategy for digital engagement to expand the reach and impact of RQ. Based on personas, journey maps, experience maps, expanded success measures, and rich data analytics, the Museum was able to make surgically precise design decisions, track the efficacy of marketing campaigns, launch a new national community of RQ boosters, and more. While a paper cannot instruct in the same manner as a hands-on workshop, we hope this paper can walk you through what it means to go beyond “one’s gut” and create steady streams of actionable data to learn if one’s programs are hitting their intended mark.

 

KEYWORDS

digital engagement, data-driven decision-making, program analytics, marketing, audience development, data-informed design

 

INTRODUCTION

Historically, museums have invested a great deal of financial and human resources into creating programs and resources for learning. And, while these programs and resources are typically mission-driven and well received by the community, their overall reach is limited by the lack of a strategy for growing impact using the power of data and automation. Too often, this limits the associated funding for the program, leaving an otherwise outstanding set of resources to languish in a museum’s web archives. Further, implementing a data-driven strategy for digital engagement not only improves a museum’s ability to effectively attract an audience and understand their experience with the museum’s digital programs and resources, but it can also drive significant growth and, as a result, attract new sources of program funding.

 

Using the Museum as an example, we will review what inspired NHMU to take on the development of a digital engagement strategy. Then, we will explore each of the steps the Museum took along its journey: defining success measures (KPIs, metrics, benchmarks, etc.), qualitative components (like personas), and quantitative components (expanding beyond Google Analytics to tools like Heap and other data visualization tools).

 

Next, we will examine the Museum’s journey from collecting data to turning it into actionable analytics in areas like site design and outreach campaigns. And we will wrap up with practical steps other institutions can take to dive into their own development of a digital engagement strategy that maximizes their program’s impact and growth.

 

BACKGROUND

Ten years ago, the Natural History Museum of Utah (NHMU) was challenged by a close funding partner to develop an innovative approach to enhancing critical thinking skills in young people. They were guided by a list of criteria that included being able to scale to a broad audience, have measurable impact, employ new technologies, build on NHMU’s strengths, and successfully compete for other sources of funding. Fast forward seven years, and the Museum was nailing all these criteria through the program that grew out of this challenge: Research Quest (https://researchquest.org). Still, being able to scale to a broad audience wasn’t the same as reaching and retaining a broad audience. That would take more work. The Museum realized that it had to move beyond traditional, ad-hoc marketing efforts to spread the word about Research Quest. A strategy was needed to take this work to the next level: to go from reaching a few thousand students each year to a million!

 

To do this they needed a thought partner, one with a track record of developing digital strategies that could inform both program design and audience growth. This partner, Barry Joseph Consulting (BJC), working with a data-analytics team from Mutually Human, helped the Museum rethink how they defined success, who their users were, what experience those users were having, and how to leverage this knowledge to make data-driven decisions about engagement activities.

 

DEFINING SUCCESS

Originally, NHMU defined success with Research Quest (RQ) based on whether we measurably “moved the needle on critical thinking” and the number of users we were engaging. We were not working with a rich or dynamic data set. These measures were all summative and were not available until the end of the school year. 

 

Working with BJC, we developed measurement goals that were more mission-driven and aligned with our overall strategic plan. These goals took our audiences’ needs into account, something we’d always done in educational design but had neglected when it came to measuring success.

 

One of the first things we did was to have a conversation about what success really meant for RQ. It was a process of unpacking our assumptions and needs. We were able to distill it all down to three goals: one for educators (meet their demand for high quality resources), one for students (strengthen their abilities to think critically using evidence-based reasoning), and one for our department (drive the Education Department’s efforts to advance the Museum’s strategic goals). (Figure 1) This, however, was not the end of the story; this was just the beginning.

 

Figure 1: Program Goals

 

For each of those three goals we developed objectives, which were clear articulations of what measurable actions we planned to take to achieve those goals. For example, “effectively communicate the results of our work to internal and external audiences” (such as through this paper!).

 

We put all of this in a spreadsheet to organize the process. (Figure 2) The first column held our goals, and the second column their related objectives. Next, we added a column for KPIs (key performance indicators), which are the things we can measure to learn if we have achieved our objectives. For example, for the objective mentioned above, one KPI is “professional publications.”

 

Next to KPIs come metrics, which in this case is the number of professional publications generated each year. Finally, we have benchmarks, which are how we determine whether, based on the metrics, we have succeeded or failed to achieve our defined goals. For example, is just one publication a year enough to meet our goal, or do we need ten? (We decided two a year was our target.) (Figure 3)
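To make the hierarchy concrete, here is a minimal Python sketch of how a single row of that spreadsheet could be modeled and checked against its benchmark. The class, field names, and helper method are our own illustration for this paper, not part of the Museum’s actual tooling; the values come from the publications example above.

from dataclasses import dataclass

@dataclass
class SuccessMeasure:
    # One row of the spreadsheet: goal -> objective -> KPI -> metric -> benchmark
    goal: str
    objective: str
    kpi: str
    metric_value: float   # what we actually measured this year
    benchmark: float      # the target that defines success

    def met_benchmark(self) -> bool:
        return self.metric_value >= self.benchmark

publications = SuccessMeasure(
    goal="Drive the Education Department's efforts to advance the Museum's strategic goals",
    objective="Effectively communicate the results of our work to internal and external audiences",
    kpi="Professional publications",
    metric_value=1,   # e.g., one publication so far this year
    benchmark=2,      # our target: two publications a year
)

print(publications.met_benchmark())  # False: one publication falls short of the two-per-year target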

 

When we were done, we had replaced two general, summative success measures (movement on critical thinking and number of users) with three dozen points of data directly tied to actions we needed to take that could provide critical feedback throughout the year.

 

“Could” is the operative word in that sentence, as this was our ideal state. To implement a tool like this we needed to identify the gap between what metrics we needed to measure our success and the data we were currently collecting. And, to do that we needed to gain a better understanding of who our users were and how they were interacting with Research Quest.

 

Figure 2: Success Measures

 

Figure 3: Success Metrics

 

USER EXPERIENCE TOOLS

Parallel to developing our success metrics, we began work to develop a deeper understanding of our users and their experience with RQ. To do that, we developed and tested user personas, a user journey, and an experience map. Collectively, we were able to use this new information to model our user funnel. (Figures 4-7) Once we had these tools in place, we were able to use them to evaluate which metrics we were already gathering, and where, and where we needed to set up additional data collection. (Figure 8)

 

Figure 4: Educator Persona

 

Figure 5: Educator User Journey

Figure 6: Experience Map

 

Figure 7: Educator Funnel

 

Figure 8: Data Mapping

 

UNDER THE HOOD

After identifying the data missing for measuring progress on our goals, we spent time evaluating the most appropriate methods for gathering the missing metrics. Figure 9 illustrates (in white) the tools and sources of the data we were already collecting, while the tools we added are shown in gold.

 

Unpacking the decision-making behind these may be the subject of future discussion; however, we can say that some of the choices were driven by the fact that several of these tools were already approved for use within our university, the University of Utah.

 

Figure 9: Data Engine

 

 

MEASURING SUCCESS

Scorecards

The next step was designing a reporting dashboard that would give the Museum a centralized, dynamic way to monitor progress on their goals. We needed a robust scorecard that could pull in data across multiple sources, including Google Analytics, Google Sheets, and a SQL database. Common business intelligence tools used for such data aggregation and visualization include Tableau, Power BI, and Google Data Studio (now Looker Studio). While it lacks some of the advanced analytics capabilities of Tableau and Power BI, Looker Studio had no cost of entry and allowed users and developers free access to view and edit the scorecard.

 

The first step within Looker Studio was connecting to the various data sources powering our success measures. Connecting to Google Analytics offered live access to the raw website traffic that Research Quest was generating, allowing us to track metrics such as page views, sessions, and the number of users visiting the site. Connecting to Google Sheets brought in repositories of Research Quest user survey results, as well as other event-related data entered by Research Quest staff. The result was a series of scorecards that allowed the Museum to evaluate their performance in real time and facilitated just-in-time decision-making about where to focus their engagement efforts. Figures 10-13 map directly to the success metrics referenced in Figures 2 and 3.
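As a rough sketch of what a scorecard does behind the scenes, the Python below rolls hypothetical exports from two of those sources up to a monthly grain and flags one metric against an illustrative benchmark. The file names, columns, and the 5,000-sessions threshold are assumptions made for this example; the real scorecard connects to Google Analytics, Google Sheets, and the SQL database directly inside Looker Studio.

import pandas as pd

# Hypothetical exports standing in for the live connectors:
#   traffic.csv: date, pageviews, sessions, users         (from Google Analytics)
#   surveys.csv: date, respondent_id, satisfaction_score  (from Google Sheets)
traffic = pd.read_csv("traffic.csv", parse_dates=["date"])
surveys = pd.read_csv("surveys.csv", parse_dates=["date"])

# Roll both sources up to a monthly grain so they can sit side by side on one scorecard.
monthly_traffic = traffic.groupby(traffic["date"].dt.to_period("M"))[["pageviews", "sessions", "users"]].sum()
monthly_satisfaction = surveys.groupby(surveys["date"].dt.to_period("M"))["satisfaction_score"].mean()

scorecard = monthly_traffic.join(monthly_satisfaction)

# Flag each month against an illustrative benchmark (e.g., 5,000 sessions per month).
scorecard["sessions_on_target"] = scorecard["sessions"] >= 5000
print(scorecard.tail())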

 

Figure 10: Scorecard Overview

 

Figure 11: Goal 1 Objective 2b

 

Figure 12: Goal 2 Objective 1a

 

Figure 13: Goal 2 Objective 2

 

Beyond scorecards to user insight tools

With the reporting fundamentals in place, we worked to identify additional ways to provide business intelligence for data-driven decision-making. We experimented with a variety of visualizations that could help the Museum deepen their understanding of everything from where their top-of-funnel activity was coming from to how users were engaging with the investigations.

 

For example, we created a date-based visualization (Figure 14) that allows NHMU to see where their account sign-up activity is coming from. This has been helpful in tracking the reach of their marketing campaigns and will help them determine which campaigns are reaching their intended geographic audiences and where they should focus more attention.

 

Figure 14: Zip Code Visualization

 

We also built a tool called active investigation monitoring. It allows NHMU to look at each individual investigation over any specified time frame to see which investigations are most active. For example, Figure 15 shows that in September 2022, their archaeology investigation, Range Creek, was receiving the most usage. Looking at November 2022, we see that both Ceratopsian investigations were most active. But looking at the right-hand side, neither one was the most popular across the board; that was Ceramics. So, this data visualization offers different ways of looking at their usage information.

 

This same tool provides reporting from a variety of other viewpoints. Figure 16 shows investigation usage by time on page. The circles on the left represent each investigation; the bigger the circle, the greater the time on page. On the right is every page within all of the investigations shown on the left. By mousing over any of these circles, you can glean additional details. (Figure 17) For example, hovering over the orange Range Creek circle, we can see students have spent more than 2,000 hours in that investigation and racked up more than 116,000 pageviews.

 

One can dig even deeper. In Figure 18, we selected the orange Range Creek circle. Everything else on the left is now grayed out, and everything on the right has shifted to show only the pages associated with this investigation. This gives us the ability to see on which pages of the investigation students are spending the most time. This is just one way this monitoring report can be used; it can help NHMU’s instructional designers understand which parts of the investigation are getting the most use, identify where users may be missing key instructional supports, and inform design changes that may help users get more out of this resource.
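For readers curious about the mechanics, here is a minimal sketch of the aggregation and drill-down behind a view like this, written in Python with pandas. The file and column names are hypothetical stand-ins for the page-level analytics data the tool actually reads.

import pandas as pd

# Hypothetical page-level export: one row per investigation page
#   columns: investigation, page, pageviews, time_on_page_hours
pages = pd.read_csv("investigation_pages.csv")

# One row per investigation, mirroring the circles on the left of the visualization.
by_investigation = (
    pages.groupby("investigation")[["pageviews", "time_on_page_hours"]]
    .sum()
    .sort_values("time_on_page_hours", ascending=False)
)
print(by_investigation.head())

# "Selecting" one investigation is just a filter, mirroring the drill-down on the right.
range_creek = pages[pages["investigation"] == "Range Creek"]
print(range_creek.sort_values("time_on_page_hours", ascending=False).head())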

 

In short, there are a variety of metrics we can choose to explore and play with. Sometimes what you need is a scorecard that says: here is exactly the information you’re looking for, based on your KPIs. Sometimes it’s creating tools to explore the data. Letting your intuition and your curiosity guide you creates opportunities to uncover new insights you can apply to your engagement-related decision-making. Together, these provide the information NHMU needs to grow its RQ audience.

 

Figure 15: Active Investigation Monitoring

 

Figure 16: Time on page visualization

 

Figure 17: Mouseover Data Example

 

Figure 18: Deeper dive into an investigation

 

EXPANDING AUDIENCE

So, what’s next? How will these tools help us reach our goal of one million users? For one, we’ve implemented UTM parameters to track campaign traffic within our user funnels. When new users come to our website, these tools give us a means of examining their engagement. As such, we can evaluate our audience-building activities to determine which are providing the most meaningful ROI. Prior to this, we had to rely on hunches as campaigns played out: we could see how much increased traffic was coming to the site and how many new accounts were signing up, but we couldn’t tell which efforts were driving that activity.
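UTM parameters are simply standard query-string tags appended to the links we share, which analytics tools then use to attribute traffic and sign-ups back to a specific campaign. Here is a minimal Python sketch; the source, medium, and campaign names are hypothetical examples, not our actual campaigns.

from urllib.parse import urlencode

def utm_url(base_url: str, source: str, medium: str, campaign: str) -> str:
    # Append the standard UTM parameters so analytics can attribute activity to a campaign.
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return f"{base_url}?{params}"

# Hypothetical example: a link shared in a teacher-newsletter campaign.
print(utm_url("https://researchquest.org", "newsletter", "email", "fall_teacher_outreach"))
# -> https://researchquest.org?utm_source=newsletter&utm_medium=email&utm_campaign=fall_teacher_outreach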

 

We’ve also built an advocates program, Ed Corps, that rewards users who promote RQ to their peers. Giving users an easy way to share RQ, report their activity, and earn points helps us leverage one of our very best advantages: satisfied users who advocate for these high-quality resources and encourage others to use them.

 

Another significant audience-building activity we are ramping up is a series of email drip campaigns that are tied to our user journeys, have clear calls to action, and are sent out based on specific triggers. This means we can provide just-in-time support for exploring and using RQ in the classroom.
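As a purely illustrative sketch of how a trigger-based drip rule works (the trigger condition, timing, and message name below are hypothetical, not NHMU’s actual campaign logic):

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class User:
    email: str
    signed_up: datetime
    started_first_investigation: bool

def pick_drip_email(user: User, now: datetime) -> Optional[str]:
    # Hypothetical trigger: signed up more than a week ago but never started an investigation.
    if not user.started_first_investigation and now - user.signed_up > timedelta(days=7):
        return "getting_started_in_the_classroom"
    return None  # no trigger applies; send nothing

user = User("teacher@example.org", signed_up=datetime(2023, 9, 1), started_first_investigation=False)
print(pick_drip_email(user, now=datetime(2023, 9, 15)))  # -> getting_started_in_the_classroom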

 

Each of these engagement efforts can be measured in real time and over the lifetime of the lead, which will help us maximize the time and resources we put into any given campaign. And our reporting tools, including our user funnels, will help us monitor where we are making progress with conversion rates, from discovery of RQ through retention and reuse of RQ by our users.

 

ACKNOWLEDGEMENTS

We’d like to acknowledge the team of experts and partners who make Research Quest go, with special gratitude to the Joseph and Evelyn Rosenblatt Charitable Fund and the I.J. and Jeanné Wagner Foundation for their unwavering support of this work over the last ten years.

 


