
Dashboards and Scorecards
Explaining their essential differences

by Wayne Eckerson, Monday, August 13, 2007

By Hugh J. Watson

I attended the Executive Summit at the TDWI conference in Las Vegas last winter, an event for senior BI and data warehousing managers and their business sponsors. One of the sessions explored “what’s hot” in BI, and the participants voted on what was most important to them. Not surprisingly, dashboards and scorecards topped the list. This outcome is consistent with the results of other recent surveys.

Despite the attention given to scorecards and dashboards, it is my experience that the similarities and differences between these two approaches to business performance management (BPM) are little understood. I’ll give you my take on the topic in this article. I’ll begin with their precursors, and then I’ll discuss the technology currently available for implementing BPM based on the products I saw in the exhibit hall.

The Emergence of Dashboards and Scorecards

Even though dashboards and scorecards may seem relatively new, they are actually evolutionary developments. In the late 1970s, Jack Rockart (1979) introduced and popularized the critical success factors (CSFs) concept, which identifies and monitors what companies, business units, departments, and individuals must do well in order to be successful. Once identified and tracked, CSFs help organizations communicate about, and focus on, those factors: market share, number of new product introductions, the per-unit cost of production—whatever factors are most important.

As executive information systems (EISs) became popular in the 1980s, CSFs and key performance indicators (KPIs) were important components. Experts recommended that companies identify and monitor CSFs at the industry, company, work unit, and individual level (Watson, Houdeshel, and Rainer, 1997).

In many ways, today’s dashboards and scorecards are the EISs of yesterday because of their focus on key performance metrics. Consider two examples.

Reading Rehabilitation Hospital

This hospital serves patients recovering from serious injury, illness, and surgery. In the 1980s, the hospital had an initiative to improve the quality of patient care. After considerable study, 90 metrics related to patient care were identified. Among them was the percentage of patient charts with incorrect entries by the doctors, nurses, or rehabilitation therapists. For each group, a chart showed how the hospital was doing over time and against goals.
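To make this kind of metric concrete, here is a minimal sketch of how such a KPI might be computed and tracked against a goal. This is not from the original case study: the language (Python with pandas), the column names, the sample figures, and the 5% target are all illustrative assumptions.

```python
# Hypothetical sketch: "% of patient charts with incorrect entries" by group and month.
# Data, column names, and the 5% goal are illustrative assumptions, not from the article.
import pandas as pd

charts = pd.DataFrame({
    "month": ["1988-01", "1988-01", "1988-01", "1988-02", "1988-02", "1988-02"],
    "group": ["doctor", "nurse", "therapist", "doctor", "nurse", "therapist"],
    "charts_reviewed": [120, 150, 80, 118, 149, 82],
    "charts_with_errors": [18, 9, 6, 10, 8, 5],
})

GOAL_PCT = 5.0  # hypothetical target: no more than 5% of charts with errors

kpi = (
    charts
    .assign(error_pct=lambda df: 100 * df["charts_with_errors"] / df["charts_reviewed"])
    .assign(meets_goal=lambda df: df["error_pct"] <= GOAL_PCT)
)

# Show each group's trend over time, the same view a posted chart would give
print(kpi.pivot(index="month", columns="group", values="error_pct").round(1))
```

A posted chart or dashboard panel would simply plot these per-group percentages over time with the goal line overlaid, which is what made the results so visible in the hospital's case.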

The most interesting part of the story is that, in addition to putting the charts in a computer-based system, the hospital posted the results on a dashboard in the cafeteria for everyone to see. The impact was dramatic. Within a week, the doctors (who initially had the worst record) improved significantly and were no longer at the bottom of the list. Over time, there was a spiral effect as each group improved its performance. This example illustrates the adage, “That which gets watched gets done.”

This article excerpt appears courtesy of TDWI and originally appeared in TDWI’s Business Intelligence Journal, a Member-only publication. To learn more about Membership and how to access additional articles, please visit www.tdwi.org.
