Improving Operations Using PERC

Introduction

Improving operations and gaining competitive advantage are goals of every business.  In line with this goal, Quanta Analytics (QA) offers its Performance Evaluation Report Card (PERC) Methodology, an eight-step approach to improving business operations described in terms of strategic business objectives, data mining techniques, solid quantitative business analyses, and “best business practices”.  The approach QA describes is straightforward and designed for rapid implementation.

In today’s Information Age, probably more than in any previous period of human progress, competitive advantage lies with the individuals and businesses that learn to manage the vast array of information at their fingertips most effectively.

QA acknowledges that our new information technology capabilities are wonderful; even so, when it comes to improving operations QA argues that the most effective use of these new technologies still requires a good, solid application of common sense.  QA’s philosophy towards the data mining and analysis effort discussed herein is easy to understand: QA believes in adding complexity only after consideration and only as necessary.  In other words, QA believes in the old KISS principle, but with an addendum, KISS + COAN (Keep It Simple, Stupid, and add Complexity Only As Necessary).  QA believes that if what is produced with its methodology cannot be understood or clearly explained, then its value is of questionable business use.

Typical Application Scenario

QA’s eight-step PERC methodology is generic in nature and can be applied to a wide variety of business applications and entities.  The PERC methodology is typically applied to business or government entities that have several operating units—all of which function in some generally homogeneous manner or with the same basic operating objectives and goals.  The PERC methodology is designed to help accomplish specific operational goals (e.g., cutting costs; increasing productivity, sales, and margins; reducing risk; increasing market share) throughout the entire business entity.

A few examples of business or government entities with multiple operating units should suffice to visualize a typical application of the approach discussed here:

  • Ford, General Motors—and their automotive dealerships;
  • McDonald’s, YUM—or almost any other franchiser with multiple franchisees;
  • Coca Cola—with large bottling and distribution units globally;
  • United States Postal Service—and their multiple mail servicing centers;
  • Federal Housing Administration—with multiple mortgage lenders/servicers and security originators;
  • Department of Education—with multiple school institutions and guaranty agencies.

Again, QA uses these examples only to help the reader visualize the applicability of its methodology.  QA makes no claim about the degree to which these example entities already use their data resources for purposes similar to those discussed herein.  It is safe to say, however, that every one of these entities operates in a data-rich environment.  And, whether business or government entity, each faces increasing pressure to improve efficiency and stockholder or taxpayer value.

QA believes that opportunities exist for these and similarly structured entities to improve operations through better information management and technology, using an approach or methodology similar to the one outlined here.  QA believes its PERC methodology can be used to improve productivity, move units towards “best practices”, gain competitive advantage, increase shareholder/stakeholder value, and more.

Improving Operations—Driving Towards Best Practices

The PERC methodology places great importance on fairly analyzing and quantifying operational performance across the similar operating units within a business.  Operating units within a business are not created equal—different units have different levels of management expertise, different ways of implementing operating efficiencies, different competitive and environmental factors to deal with, and so on.  That is a fact.  For this reason QA feels it is extremely important to develop operational performance measures that allow fair comparisons between similar operating units.  By developing a methodology based upon fairly applied business factors, you can distinguish good performers from poor performers while establishing a normal or standard level of performance based upon the entire population of business units.

In the beginning, the performance gap between good performers and poor performers is at its widest point.  However, as time moves forward and pressure is placed on the poor performers, changes are made, efficiencies are gained and the average or standard level of performance improves as everyone moves toward the level of “best practices”.

Simple in principle, yet it works.  Ever since receiving our first report card in grade school, most of us would acknowledge that periodic, fair feedback is a motivating tool.  If poor grades didn’t cause us to self-motivate from our own internally generated pressure, then more often than not our parents would apply externally generated pressure to help us get motivated.  We may or may not have liked our days of reckoning; however, most of us would agree that some kind of feedback system was important for our development.

QA’s PERC methodology, based upon good corporate data and a logical, quantitatively fair rating system, works in the same manner as these earlier educational report cards, and probably even better.  QA would argue that with the magnitude of corporate data available today and a good statistically based data mining effort, it has never been easier to generate timely, periodic feedback reports that provide a fair comparative measurement of operating performance between similar business units.  In addition, QA believes it has never been easier to measure and monitor improvements over time on an ongoing basis with consistently applied trend analysis.

Eight Steps Towards Improving Operations

QA was tempted to extend its methodology for improving operations using data mining techniques to a twelve-step approach so that it could tip its hat to earlier twelve-step programs; however, consistent with the KISS principle, QA decided it was better to keep it simple and stay with the eight-step approach that follows:

  1. Obtain executive endorsement and form project team;
  2. Define clear business objectives and identify data sources;
  3. Evaluate available data sources for comprehensiveness, integrity and history;
  4. Develop “smart” rating scheme for monitoring business unit performance;
  5. Formulate concept and design for “smart” monitoring system;
  6. Develop the “smart” monitoring system and user support materials;
  7. Implement system, evaluate, and obtain operational feedback; and
  8. Refine or revise “smart” system and update appropriately.

QA believes the goal for a typical fast-track implementation of the entire methodology should be to complete all steps within a three- to six-month time frame.  The length of time it takes to complete full implementation is driven primarily by the magnitude of the data sources that are applicable and available for the business analysis effort and by the complexity of the operating entities’ business.

There is one critical point during the implementation process where a decision can or should be made to stop or to move forward—the end of Step 3.  If the available data sources lack a sufficient level of data integrity or prove ineffective for developing useful performance measurements, there is no reason to move forward through the other steps until these problems are resolved.  In this day and age, however, QA believes that stopping at Step 3 should be a rare occurrence.  In developing fair performance measurements, the data does not need to be perfect or without flaw; it does, however, need to be in generally good condition, free of major flaws.

The success of most data mining projects depends as much upon organizational culture as it does upon technical and analytical acumen.  Defining clear, meaningful, mission-oriented objectives and obtaining organizational buy-in up front will start the project on the right track.  In fact, the first two steps of QA’s methodology are as critical to the success of the project as the actual data mining effort itself.

Step 1 – Obtain Executive Endorsement and Form Project Team

A strong endorsement of the methodology to drive toward “best practices” by the chief executive or senior operations officer sets the stage.  Constructing a well-qualified team of individuals who will guide and monitor the project is critical.  Team members should have a business-oriented focus with relevant systems knowledge.  QA believes both business unit line staff and corporate staff-level personnel should be on the team, bringing both operational and executive-level considerations to bear.  The project team should have decision-making authority and an enthusiasm for rapid development projects.  A group of quantitative business analysts who are familiar with key business principles and who have strong statistical software skills should be made available to support the project team.  Once an executive-level decision is made and the project team is formed, the effort can begin.

Step 2 – Define Clear Business Objectives and Identify Data Sources

Once the project team is formed, it should meet to establish the business objectives of the project.  The business objectives in effect define the mission of the project—and like any good mission statement—they should be clear and concise while encompassing the full breadth of the effort.  It is typically better to err on the side of broadly defined objectives rather than narrowly defined ones.  For example, it would be better to have an objective to increase revenues, margins, and customer satisfaction in every region or sector of the business than simply to have an objective to increase revenues by 5%.  The data mining effort performed in subsequent steps would be entirely different for each of these two objectives—and QA would argue that the broader, more encompassing business objective would lead to better results.

QA is aware of a case where a project’s business objective was broadened from simply making sure bills were paid correctly to making sure bills were paid correctly and investigating areas for cost reduction.  Adding that extra phrase enlarged the viewpoint of the project team and the focus of the analytical effort, which subsequently produced annual cost savings, documented by trend analysis, of more than $40 million, a reduction of more than five percent.  That type of change improves competitive advantage.

Once the project objectives are defined, the team should pool its knowledge of available data sources that can be used to establish a set of operating benchmarks and performance measurements for the objectives.  Readily accessible corporate data (financial, accounting, and processing data, with history) should be used as the primary source as much as possible.  Even so, readily available external data sources (e.g., census, economic, demographic data) can sometimes add insight, fairness, and perspective in establishing benchmarks and should be considered.

Another decision should be made during this step: is a pilot effort best, working with only a sample set of information or a pilot group, or is it just as well to work with the entire population and fully implement without a pilot?  Considering the computing power that exists in most organizations today, QA tends to favor full implementation from the start.  There is little additional cost associated with full implementation, and the benefits of the rapid full-implementation approach generally outweigh those costs significantly.

It is very important for the project team to inform the business units of the general intent of the project early on and to update them periodically as it becomes clear that the project is moving forward successfully.  The better informed the business units are kept as the effort progresses, the easier the implementation process will be later on.  The business unit line staff on the project team should play a critical role here.

QA believes the effort in Step 2 should be performed in 1-2 days, depending upon the scope of the project.  Setting the tone for the project, with its focus on objectives and rapid implementation, is important during this step.  Sometime during this first set of meetings, the project team should choose a project leader whose responsibility is to ensure that subsequent meetings and the project itself stay focused on the business objectives and on track.

There may be a time gap between the completion of Step 2 and the beginning of Step 3 as the project team obtains authorized access to the appropriate corporate databases.  With the executive or senior level endorsement of the project obtained in Step 1 this time gap should be somewhat reduced.

Step 3 – Evaluate Available Data Sources for Accomplishing Objectives

Once access to the data is obtained the fun begins, or at least the analytical fun.  In Step 3 the data sources identified in the previous step should be evaluated for their comprehensiveness, value, and integrity.  How is this done?  Consistent with the KISS principle, QA recommends an approach that is simple, logical, and quick to perform.  QA acknowledges that any approach, including PERC, must remain flexible because every data source has its own idiosyncrasies; even so, there are several common steps that QA generally recommends.

The first thing QA recommends is to run frequency distributions on every coded and date field within the data sources.  This quickly tells the analysts and project team whether the coded fields are complete and consistent with their intended purpose.  Good, clean data typically has a value in every coded field, with no unfamiliar or unspecified codes.  The magnitude of missing values and/or unspecified codes should be critically reviewed and an explanation obtained.  In addition, the frequency distribution of date fields (e.g., by year, by year-month) should be evaluated to give the project team a quick sense of the degree of history available for trend analysis.  If you want to improve operations you must establish benchmarks and monitor trends, and the date information tells you what is possible.  Historical information gives the project team a starting point from which to build.  There is much to be learned from these frequency distributions, and there is nothing difficult about obtaining them; what is learned here will help direct the remaining steps of this evaluation phase.
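The mechanics of this first pass are simple.  Below is a minimal sketch in Python using the pandas library, assuming the corporate extract has been pulled into a flat file; the file name and field names are hypothetical illustrations, not part of QA’s methodology.

    import pandas as pd

    # Load the source extract; the file name and field names are hypothetical.
    df = pd.read_csv("unit_transactions.csv", parse_dates=["transaction_date"])

    # Frequency distribution for every coded field, counting missing values too,
    # so unfamiliar or unspecified codes stand out immediately.
    coded_fields = ["region_code", "product_code", "status_code"]
    for field in coded_fields:
        print(f"--- {field} ---")
        print(df[field].value_counts(dropna=False))

    # Frequency distribution of the date field by year-month, which shows at a
    # glance how much history is available for benchmarks and trend analysis.
    print(df["transaction_date"].dt.to_period("M").value_counts().sort_index())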

If there are several files that can be merged through some key field common to them all, then one master file should be created so the analysts can work with all of the data at the same time.  QA recommends obtaining sums and other general statistics (e.g., minimums, maximums, variances, missing-data counts, means, modes) on every numeric field within the data sources.  Potential outliers (extremely abnormal values) within the numeric fields should be identified so they can be edited out and do not inappropriately skew subsequent analysis.  As a result of these efforts, the analysts should establish a control set of numbers that can be verified against corporately known facts.
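As a rough illustration of this consolidation and screening step, again in pandas and again with hypothetical file and field names:

    import pandas as pd

    # Hypothetical source files joined on a common key field into one master file.
    units = pd.read_csv("business_units.csv")            # one row per business unit
    financials = pd.read_csv("monthly_financials.csv")   # monthly figures per unit
    master = financials.merge(units, on="unit_id", how="left")

    # General statistics on every numeric field: count, mean, std, min, max, quartiles.
    numeric = master.select_dtypes("number").drop(columns=["unit_id"])
    print(numeric.describe().T)
    print(numeric.isna().sum())

    # Flag potential outliers -- here, values more than three standard deviations
    # from the mean -- so they can be reviewed and, if warranted, edited out.
    z_scores = (numeric - numeric.mean()) / numeric.std()
    print(master[(z_scores.abs() > 3).any(axis=1)])

    # Establish a control total that can be verified against corporately known facts.
    print("Total revenue in extract:", master["revenue"].sum())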

Once these steps are completed, which typically takes less than a week, the analysts should begin to slice and dice the key numeric fields by codes, by logical peer groups (e.g., size, product type), and by time period.  Building upon this process, the project team and its supporting analysts should consider creating new analytical variables from the data at hand that further the objectives of the project.  As the project team builds its knowledge of, and hopefully respect for, the data, it should begin to focus its attention on potential candidates for operational performance indicators.
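A short sketch of this slicing and dicing, again in pandas; the field names, peer-group thresholds, and derived variables are hypothetical examples only.

    import pandas as pd

    # Hypothetical master file assembled in the previous step.
    master = pd.read_csv("master_file.csv", parse_dates=["period_end"])

    # New analytical variables derived from existing fields to further the
    # objectives of the project.
    master["sales_per_employee"] = master["revenue"] / master["employees"]
    master["gross_margin_pct"] = master["gross_profit"] / master["revenue"]

    # Simple size-based peer groups; the revenue thresholds are illustrative only.
    master["peer_group"] = pd.cut(
        master["revenue"],
        bins=[0, 1_000_000, 10_000_000, float("inf")],
        labels=["small", "medium", "large"],
    )

    # Slice the key measures by peer group and by month to build familiarity with
    # the data and to surface candidate performance indicators.
    by_group_and_month = master.groupby(
        ["peer_group", master["period_end"].dt.to_period("M")], observed=True
    )[["sales_per_employee", "gross_margin_pct"]].agg(["mean", "median", "std"])
    print(by_group_and_month)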

At the end of this evaluation phase (a 2-4 week effort), QA believes strongly in putting a package together so the project team can step back and determine:

  • Does it all make sense?
  • Are key operating variables available for analyses?
  • Can benchmarks be established for measuring performance?
  • What are the appropriate characteristics that determine peer group divisions?

The package should present findings that include graphics of key variables by meaningful program codes, by time interval (e.g., week, month/year, year), and by logical peer group, while providing new and creative perspectives for consideration.  Based upon this information, the project team should be in a position to move forward or to stop right here, regroup, and reconsider.  If this evaluation phase goes well, the project team should have a high degree of confidence in the future success of the project.  On the other hand, if this evaluation phase leaves things unclear, it may not be worth moving forward with the project until or unless increased clarity is obtained.  But as QA stated earlier, with the information available to most organizations today, QA would expect most project teams to be able to move forward.

Step 4 – Develop Rating Scheme for Monitoring Performance

The objective of this step in the methodology is to develop a set of performance indicators and a weighting mechanism that can be used to evaluate and monitor business unit performance going forward.  QA believes this phase of the project can be completed within a 4-6 week time period and can be started during the previous step if it becomes clear right away in Step 3 that the quality and comprehensiveness of the data is very good.

Before starting the analysis in this step, the project team should try to identify any business units known to be good performers and any known to be poor performers.  The team does not want to predispose the analysis, but if certain business units are known for their successes and others for their failures, they can be categorized into conceptual good/bad control groups.  These control groups can help in justifying and fine-tuning the performance rating mechanism developed during this phase.

Generally, business units should be segregated into peer groups with similar characteristics (e.g., size, population density served, products) and evaluated separately against their peers.  If it is possible to begin the analysis with the conceptual good/bad control groups mentioned above, the performance indicators and weighting system can be tweaked until the results from the rating system are most consistent with the perceived control groups.  Even if conceptual control groups cannot be identified initially, establishing common-sense business factors (e.g., net income, sales per employee, gross margin) and a simple weighting system should still separate good performers from poor performers.

Each performance indicator should be evaluated independently, building an understanding of its statistical norms, variances, correlations, and outliers.  Indicators should be judged on their ability to be applied comprehensively and on their lack of hidden biases.  There may be reasons (e.g., missing or erroneous data) why a particular performance indicator cannot be applied to a specific business unit; in this case, the weighting scheme for that business unit needs to be adjusted to keep the rating mechanism on the same scale.  It is difficult, initially, to weed out all hidden biases within a performance indicator; however, during the feedback phase of Step 7 these should become clearer and can then be corrected as part of the revision phase of Step 8.
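To make the weight-adjustment idea concrete, here is a minimal sketch of one way such a rating could be computed; the indicators, weights, and unit values are hypothetical and chosen purely for illustration.

    import pandas as pd

    # Hypothetical indicator values per business unit (higher is better for all
    # three), with one unit missing an indicator due to erroneous source data.
    units = pd.DataFrame(
        {
            "net_income_margin": [0.12, 0.05, 0.09, None],
            "sales_per_employee": [310.0, 220.0, 260.0, 290.0],
            "customer_satisfaction": [4.2, 3.1, 3.8, 4.0],
        },
        index=["Unit A", "Unit B", "Unit C", "Unit D"],
    )

    # Hypothetical weights chosen by the project team; they sum to 1.0.
    weights = pd.Series(
        {"net_income_margin": 0.5, "sales_per_employee": 0.3, "customer_satisfaction": 0.2}
    )

    # Put every indicator on a comparable 0-1 scale across the population.
    scaled = (units - units.min()) / (units.max() - units.min())

    def weighted_score(row):
        # Re-normalize the weights for any unit missing an indicator so that
        # every unit is still rated on the same 0-1 scale.
        available = row.dropna().index
        w = weights[available] / weights[available].sum()
        return (row[available] * w).sum()

    print(scaled.apply(weighted_score, axis=1).sort_values(ascending=False))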

QA is a true believer in developing multiple indicators to evaluate performance.  Multiple indicators reduce the influence of any single indicator.  Rating mechanisms that show business units flowing to the top (or bottom) on several different criteria are far superior to a rating mechanism based upon a single criterion alone.  In addition, multiple-criteria rating systems are easier to defend in terms of fairness.

There are several optional approaches (e.g., linear, normalized, step-functioned) that can be used when applying a scoring or ranking methodology to the weighting scheme and the performance indicators.  Each optional approach has its pros and cons.  QA believes, however, that even though individual unit rankings may vary from one methodology to another, from a bigger picture point of view, the “best practice” and “poor performers” should float to their appropriate respective ends of the scoring system regardless of which methodology is chosen—especially when multiple performance indicators are used.
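The differences between these scoring options are easiest to see side by side.  A small sketch using one hypothetical indicator for five hypothetical units:

    import pandas as pd

    # One hypothetical indicator (sales per employee) for a handful of units.
    values = pd.Series({"A": 310.0, "B": 220.0, "C": 260.0, "D": 290.0, "E": 150.0})

    # Linear (min-max) scaling: preserves the relative distances between units.
    linear = (values - values.min()) / (values.max() - values.min())

    # Normalized (z-score) scaling: expresses each unit relative to the population
    # mean in units of standard deviation.
    z_score = (values - values.mean()) / values.std()

    # Step-function (quartile bucket) scoring: coarse grades that are robust to
    # outliers but hide differences within a bucket.
    quartile = pd.qcut(values, q=4, labels=[1, 2, 3, 4])

    comparison = pd.DataFrame(
        {"raw": values, "linear": linear, "z_score": z_score, "quartile": quartile}
    ).sort_values("raw", ascending=False)
    print(comparison)

Note that all three options place the same units at the top and bottom of the ordering; they differ mainly in how much credit they give for the distance between units.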

If the general principles that QA has discussed so far are followed, QA believes a good rating scheme can be developed in this stage, which then can be subsequently fine-tuned to correct or adjust for new insights gained during the feedback periods within Step 7 and Step 8 of the methodology.

A good practical example that follows the principles of QA’s general methodology can be found in Business Week magazine’s annual rating of the S&P 500 companies.  Business Week ranks all 500 companies using eight key criteria for financial success, including total return, sales growth, profit growth, net margin and return on equity over different 1-3 year time periods.  It ranks all 500 companies as a whole and also separately within industry peer groups (e.g., banks, health care, telecommunications).

Such an application may not be foolproof for investing purposes or as an indicator of future financial success; however, QA applauds Business Week for its fundamental and logical regimen in developing its rating mechanism.  QA knows it is easy to attack measurement systems like this and like those that QA is proposing.  The question arises, however: what is the alternative?  Bumbling along blindly is not a winning strategy in our information age.

Step 5 – Formulate Concept and Design for “Smart” Monitoring System

Once the project group chooses a good set of performance indicators and a logical weighting mechanism, it must decide how this new knowledge should be implemented.  Decisions need to be made regarding issues such as:

  • Security control of the information—who gets to see what?
  • Periodicity of rating evaluation—daily, weekly, monthly, annually?
  • Mandatory, experimental, pilot-group, or entire-population implementation?
  • Feedback and follow-up procedures?
  • Relative measurement ranking system or absolute measurement ranking system?

Successful implementation of the “smart” monitoring system depends heavily upon what is decided during this phase.  If everything has gone favorably in the previous steps, a good start has been made to implementing a fair and equitable monitoring system.  But if the system is to succeed in improving operations throughout the organization, it must be viewed as a fair tool for measuring performance by all of the business units.  In addition, the business units must feel that management oversight of this tool will be fair, including listening to their feedback.

Many of the decisions made in this phase will depend upon such things as organizational culture, the nature of the operations, and cost.  For example, the more frequent the rating feedback (e.g., weekly vs. monthly, or monthly vs. annually), the quicker operations can react to the rating in a positive manner.  On the other hand, the cost is likely greater, and it may not make sense to provide weekly monitoring if operations work and report on a monthly schedule.  A general rule of thumb is that the periodicity of new rating evaluations should allow for new information to be processed, while allowing time for business units to react to previous reports.

The project team should formalize its decisions on any outstanding issues from Step 4 in this phase.  Choices among alternative performance indicators and optional weighting mechanisms should be finalized here.  Making these issues clear allows the subsequent work of developing documentation and manuals to be completed with clarity.

The actual design layouts for business unit reporting should be well thought out.  Careful consideration should be given to making the layouts user-friendly, including the possible use of textual language (e.g., good, average, poor; high, medium, low), standard grading methods (A, B, C, D, F), and color coding to highlight good performance (green) and poor performance (red).  The relative weighting of the different performance indicators should be made clear, and wherever possible trend information and/or graphics should be included.
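As one illustration of this kind of user-friendly presentation logic, the short sketch below maps a composite score onto letter grades and traffic-light colors; the unit scores and grading cutoffs are hypothetical.

    import pandas as pd

    # Hypothetical composite scores on a 0-100 scale for a set of business units.
    scores = pd.Series({"Unit A": 92.0, "Unit B": 74.0, "Unit C": 61.0, "Unit D": 38.0})

    def letter_grade(score):
        # Standard grading bands; the cutoffs are illustrative only.
        for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
            if score >= cutoff:
                return letter
        return "F"

    def color_flag(score):
        # Traffic-light coding: green for good performance, red for poor.
        if score >= 80:
            return "green"
        if score >= 60:
            return "yellow"
        return "red"

    report = pd.DataFrame(
        {"score": scores, "grade": scores.map(letter_grade), "flag": scores.map(color_flag)}
    ).sort_values("score", ascending=False)
    print(report)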

The goal in this phase of the project is to develop the concept and design for implementing the rating mechanism in such a way that the business units themselves see the mechanism as desirable and valuable.  Impossible you say?  Then, think back to our earlier example relating to grades in school.  Very few of us would have chosen to go through our school years—high school or college—without having some idea where we stood amongst our peers.  QA would argue that managers of business units feel the same way about the operations they run.  QA would argue that the aggregation of corporate information on all business units provides an opportunity to establish a baseline or set of normal standards that no individual business unit would know otherwise.

Although this is a critical phase in the overall methodology, QA believes the issues relating to concept and design can be resolved within a 2-3 week period by the project team and its supporting analysts.

Step 6 – Develop the “Smart” Monitoring System and User Support Materials

Once the concept and design for the system has been decided, probably the easiest step in the methodology to perform is the actual development of the monitoring system.  Depending upon the complexity of the concept design, QA believes that the development of most monitoring systems could be completed within a 3-6 week time period.

During the development phase of the project it is extremely important to thoroughly test the system to ensure that the performance indicators and the algorithms associated with the weighting mechanism work according to the design concept.  In addition, the project team should make sure that a good set of documentation is created which clearly explains the program logic used in the system and the procedures to follow for future system updates.  As the system is implemented and users provide feedback, the logic used to rate or evaluate the business units may require some adjustment to account for biases that were not clear from the outset.  By having a good, clear set of system documentation, any such adjustments should be easy to incorporate for future updates.

Business unit documentation and user support materials should be developed during this phase.  This documentation should clearly explain the business objectives behind the monitoring mechanism.  It should provide a good description of the performance measurements and mechanism for rating performance, while providing a process for business unit feedback.  Nothing should be hidden; business units need to understand how they are being measured and they need to be able to respond to perceived inadequacies in the monitoring mechanism.  Good business unit feedback will enable the monitoring system to be improved over time, which in turn, should improve the perceived fairness and recognized value of the mechanism throughout the organization.

Step 7 – Implement System, Evaluate and Obtain Business Unit Feedback

Earlier, QA stressed the importance of the project team keeping the business units informed of the intent and progress of the project.  Now, when it comes time actually to implement the system, much of this upfront dialogue should pay off.  If the earlier groundwork has been completed with consideration given to the business units’ issues and concerns, the system developed in the previous step should be easier to implement.

The key to the implementation phase is to be upfront with the business units and to remain accessible for their questions and inquiries.  It is important to openly provide the business units with supporting material that explains the objectives of the monitoring mechanism and clearly describes the performance indicators and the weighting system used for business unit evaluation.  This is what the business units will critique.  Business unit feedback should be encouraged through training sessions, networks, hotlines, or some other vehicle.  Just as importantly, attention must be paid to that feedback.

The implementation phase should also include a normalized historical set of benchmarks for the performance measurements.  These should be provided to the business units as part of the implementation, setting the standard and setting up the test.  Future system updates should compare new normalized values for the performance measurements against those of previous periods.  Periodically rechecking where you are and understanding the general trend of where you are going are two very important pieces of information for a business.
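A minimal sketch of this kind of benchmark-and-trend comparison, using a hypothetical measurement series and a hypothetical benchmark value:

    import pandas as pd

    # Hypothetical monthly values of one performance measurement for a unit,
    # together with the normalized historical benchmark set at implementation.
    history = pd.Series(
        [245.0, 252.0, 249.0, 261.0, 270.0, 268.0],
        index=pd.period_range("2024-01", periods=6, freq="M"),
        name="sales_per_employee",
    )
    benchmark = 250.0

    # Express each period relative to the benchmark and track the period-to-period
    # change so both position and direction of travel are visible.
    report = pd.DataFrame(
        {
            "value": history,
            "vs_benchmark": history / benchmark,
            "change": history.diff(),
        }
    )
    print(report)
    print("Average monthly change:", round(history.diff().mean(), 1))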

Step 8 – Refine and Revise System and Update Appropriately

Every monitoring system needs updating to remain effective.  The periodicity of the required updates depends upon several factors, including the actual time flow or availability of data, a business unit’s need for the information and its ability to respond to it, and cost.  It is conceivable that some monitoring systems should be updated daily, while others should be updated weekly, monthly, or even annually.  QA will caution, however, that the tendency is to want more rapid updates than are usually required.  For example, it may be more effective to respond operationally to one month’s worth of summary performance results than to try to respond to a much more variable set of daily results.  Less frequent updates are less costly, and they smooth out the larger variances of shorter reporting periods.

If the work QA described earlier is done well, QA believes that the monitoring system that is initially implemented will gain ready acceptance, requiring some but not extensive revisions.  System revisions will most likely be a result of hidden biases in the application of some of the performance indicators that the rapid development effort did not foresee initially and that the business units point out.  These can usually be addressed quickly and the monitoring system enhanced in time for subsequent updates.

Summary

Good monitoring systems help business entities improve their operations.  In today’s world, advances in our information technology capabilities increase our ability to process and distribute important performance information to front line units quickly and effectively.  The better the operational feedback to these units, the better chance there is to improve overall operations.  Herein, QA has presented an eight-step methodology that is designed to provide good, timely operational feedback to business units.  QA believes that this methodology, if followed, can help move business units towards “best practice” goals, thus improving overall operations and increasing shareholder and stakeholder value.