Beyond the Last Mile: Measuring Data Science Project Impact

Chapter 1: Understanding the Last Mile Problem

Recently, during a team meeting, I observed that some members were overly focused on the intricate details of project development and deployment. As they elaborated on their tasks, they struggled to articulate how these efforts aligned with the organization's objectives. When I probed one individual about the project's end use, their response was vague at best. This raised a significant concern for me, which I later discussed with the team leader. My worry extended beyond mere context; if the team lacks clarity on the tool's purpose, how can we assess its success or measure its impact?

The "last mile" problem in data science refers to the challenges of moving a project into production and deriving meaningful insights from it. The concept is borrowed from software development and shows up across industries. While implementation and the conclusions drawn from it are vital, the last mile also demands stronger project management and planning, especially in organizations where data science plays a supporting role rather than being the primary product.

Nevertheless, work must continue beyond the last mile to evaluate whether a project achieves its intended outcomes. Planning for this evaluation should commence during the initial stages of project conceptualization.

Section 1.1: The Importance of Evaluation

A crucial yet often overlooked step in many organizations is the evaluation of projects. This process, sometimes referred to as an audit or return on investment calculation, is especially relevant in data science, where human behavior frequently comes into play. My focus will be on the practice of evaluation itself.

What does evaluation entail?

The evaluation field can be quite intricate. Some professionals prefer to be seen as wizards, conjuring conclusions that rely on the audience's trust. In contrast, others embrace more inclusive methods, engaging the entire organization—or at least those most involved—in the evaluation process to deepen their understanding of the project.

At its core, evaluation combines project management, measurement, scientific methods, and organizational science. A competent evaluator possesses a profound understanding of ontology (the study of existence) and epistemology (the theory of knowledge)—knowledge that informs the design of studies leading to valid conclusions.

Section 1.2: Types of Evaluations

Various types and purposes of evaluations exist, and it's essential to consider these when assessing your project's needs. For instance, if you wish to ensure that your machine learning program functions as intended, you would conduct an accountability and compliance evaluation. This requires comparing the current situation to the planned state and adhering to rules, laws, and other requirements.

When it comes to understanding the impact of a data science project, two evaluation types come into play: implementation evaluation (formative) and merit evaluation (summative).

Subsection 1.2.1: Formative Evaluation—Implementation

An implementation evaluation is essential for nearly every data science project deployment. You need to assess how well the project aligns with the rollout plan. Too often, teams mistakenly equate "deployment" with success. For instance, one of my teams proudly announced the rollout of a complex dashboard they had developed over a year. Yet, when I asked, "Who is using it? Are the intended users engaging with it as expected?" they were left without answers.

This disconnect arises because they measured success through the lens of development and project management cycles. Evaluating project implementation is critical to understanding whether you've successfully delivered your intended outcomes. While you may already be performing parts of this evaluation during deployment, formalizing the process ensures that you ask the right questions and gather pertinent data to determine whether your deployment has succeeded.
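
To make this concrete, here is a minimal sketch in Python of what a formative check might look like, assuming dashboard view events can be exported as (user, timestamp) pairs; the user names, event format, and 30-day window are all hypothetical.

```python
from datetime import datetime, timedelta
from collections import Counter

intended_users = {"alice", "bob", "carol", "dave"}   # audience named in the rollout plan (hypothetical)
view_events = [                                       # exported dashboard view log (hypothetical)
    ("alice", datetime(2024, 5, 2)),
    ("alice", datetime(2024, 5, 9)),
    ("bob",   datetime(2024, 5, 3)),
]

def implementation_report(events, audience, as_of, window_days=30):
    """Compare actual usage against the planned audience over a recent window."""
    cutoff = as_of - timedelta(days=window_days)
    recent = [(user, ts) for user, ts in events if ts >= cutoff]
    active = {user for user, _ in recent} & audience
    views = Counter(user for user, _ in recent if user in audience)
    return {
        "adoption_rate": len(active) / len(audience),
        "non_adopters": sorted(audience - active),
        "views_per_active_user": dict(views),
    }

print(implementation_report(view_events, intended_users, as_of=datetime(2024, 5, 10)))
```

Even a report this simple answers the questions the team could not: who is using the tool, who is not, and how that compares to the rollout plan.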

Subsection 1.2.2: Summative Evaluation—Impact

I know that some team members dread my presence in meetings, as I tend to ask challenging questions. For instance, if you tell me that your dashboard is deployed and 95% of the intended audience is using it, I still want more information. Why? Because I need to ascertain whether successful implementation translates to the desired outcomes.

Frequently, I observe data science teams confusing project deployment and implementation with actual success. Yet, for most organizations, these projects merely serve as tools to achieve something more significant. Dashboards exist not for their own sake but to provide leaders with improved visibility into production rates and quality.

"Great, our C-suite is pleased with the dashboard."

But this is merely an output, not the ultimate goal. We must ensure that the insights and information lead to enhanced efficiencies and reduced waste. Are we tracking these aspects as part of our evaluation plan? If not, we lack the means to ascertain whether our dashboard has any impact.

Some may argue, particularly in the data science team, that this perspective is unfair. "We can't control everything. We can't compel leadership to heed the dashboard or make decisions based on the displayed information!"

While it's true that organizations may have limited control over certain factors, if they are invested in achieving specific outcomes, there should be a plan to facilitate these results and an evaluation framework in place to gauge the outcomes. Otherwise, we risk creating an attractive dashboard that merely provides visualization and analytical capabilities, ultimately wasting the organization’s resources.
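
As a rough illustration of what tracking those outcomes might look like, the sketch below compares a metric the dashboard is supposed to move, weekly scrap rate, before and after rollout. The figures are invented, and a real evaluation would account for seasonality, volume changes, and other confounders before claiming impact.

```python
from statistics import mean, stdev

# Weekly scrap rate (% of production) -- all figures invented for illustration.
scrap_rate_before = [4.1, 3.9, 4.4, 4.0, 4.2, 3.8]   # weeks before the dashboard rollout
scrap_rate_after  = [3.6, 3.4, 3.9, 3.5, 3.3, 3.7]   # weeks after the rollout

# A naive pre/post difference; a credible design would also rule out
# competing explanations before attributing the change to the dashboard.
diff = mean(scrap_rate_after) - mean(scrap_rate_before)
print(f"Mean change in scrap rate: {diff:+.2f} percentage points")
print(f"Spread before: {stdev(scrap_rate_before):.2f}, after: {stdev(scrap_rate_after):.2f}")
```

The point is not the arithmetic; it is that without a plan to collect the "before" data, this comparison is impossible after the fact.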

Chapter 2: Planning for Evaluation

Designing an evaluation plan doesn't have to be complicated. It should begin during the project's design phase. Simply put, a solid project should follow a causal chain that starts with the goals or outcomes you aim to achieve and works backward to hypothesize how the data science project will aid in reaching those objectives.

Seeking to lower costs? How does your dashboard contribute to that?

"It provides real-time data and insights to key organizational roles, enabling them to make informed decisions that reduce costs."

Notice the implicit theory of change here. First, we assume that key individuals currently lack timely access to essential information. Second, we presume they will comprehend the information and use it to make decisions that lead to cost reductions.

Almost every data science project I have participated in has depended on causal linkages like these to achieve results beyond the project itself. Those links shape the questions you should ask and the data you need to collect for evaluation. If you monitor the project's outputs, you can be reasonably assured of successful implementation; to claim impact, you also need to track the outcomes those outputs are supposed to influence.
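
One lightweight way to keep those linkages visible is to write the causal chain down alongside the assumption each link rests on and the evidence that would test it. The sketch below is illustrative only; the fields and example entries are assumptions, not a formal evaluation framework.

```python
# Each link in the hypothesized chain, the assumption it rests on, and the
# evidence that would test it. Fields and entries are illustrative only.
causal_chain = [
    {
        "link": "dashboard deployed -> key roles see timely data",
        "assumption": "those roles currently lack timely access to this information",
        "evidence": "usage logs for the intended audience",
    },
    {
        "link": "timely data -> better operational decisions",
        "assumption": "users understand the metrics and act on them",
        "evidence": "decision records, interviews with the intended users",
    },
    {
        "link": "better decisions -> lower costs",
        "assumption": "the targeted decisions materially drive cost",
        "evidence": "cost per unit before vs. after rollout",
    },
]

for step in causal_chain:
    print(f"{step['link']}\n  assumes: {step['assumption']}\n  check:   {step['evidence']}\n")
```

Each row doubles as an evaluation question: if you cannot name the evidence that would test a link, you have found a gap in the plan.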

Section 2.1: Benefits of an Evaluation Plan

Beyond simply measuring and understanding the actual impact of your data science project, maintaining an evaluation plan throughout the design and development phases enhances the project. This approach compels the team to consider the broader organizational goals and explicitly plan for how to gauge their project's contributions toward these objectives.

It encourages a shift from day-to-day development cycles to a larger perspective. Additionally, it equips you to troubleshoot the data science project, its deployment, and the necessary organizational adjustments to ensure the project makes a meaningful difference.

This video titled "Solving the Last Mile Problem of Foundation Models with Data-Centric AI" discusses how to effectively address challenges in deploying foundation models, providing insights into data-centric strategies.

The second video, "Data Science First Mile and Last Mile Problems," delves into the critical aspects of data science projects and the importance of addressing both initial and final challenges for successful implementation.
