Glaring Problems with Survey Data
Mark Van Clieaf is a Managing Director of MVC Associates International
After reviewing a number of recent confidential
executive compensation reviews prepared for clients by brand-name
compensation consultants, it appears that this data does not give
boards sufficient input to properly distinguish equitable from
excessive compensation or to properly tie pay to performance.
This conclusion rests on four observations:
- Lack of Accountability Definitions
- The accountabilities of the roles being benchmarked by
compensation consultants have not been clearly defined; there is
no executive position analysis to establish which roles within the
companies being compared (either internally or externally) are
actually comparable.
As we all know,
the accountabilities attached to a particular title (CEO, COO,
President) can vary widely from company to company. Revenue,
market cap size, and reporting structures do not in themselves drive
position complexity.
Rather, position complexity determines the Organization Value Added (OVA™) of a
job - not the size of the business, budgets, or team.
In the surveys that I
have reviewed, there are no job factors disclosed to confirm that
similar roles are being compared, nor are there any factor analyses
to calibrate positions with the same title but different levels of
complexity.
- Lack of Consideration for Company Complexity
- In the surveys that I have reviewed, the choice
of benchmark peers (from which percentiles, averages, and medians
are developed) appears to be focused on companies in the same
industry, with no regard to the complexity of the enterprises and
roles being benchmarked as distinct from their size.
Yet size (revenue, market cap, etc.)
and complexity (the distinct number of businesses operating on two or
more continents) are not the same. Thus, the peer group is
really not a true peer group for the purposes of position
comparison.
- Too Much Reliance on Titles -
Although the survey and proxy comparator data in the surveys
that I have reviewed show the revenue and market cap of
each comparator company, it appears the only factor used to match the
compensation data (base, total cash, total direct) with the
position being evaluated is the title of similar
positions held at benchmark companies.
These practices produce
compensation percentiles and averages that are not defensible
comparisons and lead to decision making that drives the widespread
ratcheting effect in compensation.
- Different Peer Companies Used for
Different Purposes - In surveys that I have reviewed, there
was no 3- to 5-year business (NOPAT, ROE, ROIC) or stock market
(TSR) performance data tied to the companies being
benchmarked for compensation purposes. There wasn’t even a
comparison of the performance metrics that were disclosed in the
proxy statements filed by the benchmark companies.
Companies can fall
into one of four categories based on multi-year performance of
Market Value Added (total enterprise value as determined by the equity
markets minus all capital invested) and Economic Profit (net
operating profit after tax minus the cost of capital). The following is the
percentage distribution of the top 700 U.S. companies by value category over
the five years of performance ending in 2002.
| Value Category | MVA | Economic Profit |
| --- | --- | --- |
| Value Builder – 24% | Positive | Positive |
| Value Myth – 14% | Positive | Negative |
| Hidden Value – 17% | Negative | Positive |
| Value Destroyer – 45% | Negative | Negative |
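To make the classification concrete, the sketch below is an illustrative example only, using hypothetical company figures; it simply applies the signs of MVA and Economic Profit, as defined above, to place a company in one of the four value categories.

```python
def value_category(mva: float, economic_profit: float) -> str:
    """Classify a company by the signs of its Market Value Added and Economic Profit.

    MVA = total enterprise value (per the equity markets) minus all capital invested.
    Economic Profit = net operating profit after tax (NOPAT) minus the cost of capital.
    """
    if mva >= 0 and economic_profit >= 0:
        return "Value Builder"
    if mva >= 0:
        return "Value Myth"
    if economic_profit >= 0:
        return "Hidden Value"
    return "Value Destroyer"


# Hypothetical figures (in $ millions), for illustration only.
examples = {
    "Company A": (1_500, 120),   # valued above capital invested, earns above its cost of capital
    "Company B": (900, -40),     # valued above capital invested, but earns below its cost of capital
    "Company C": (-300, 60),     # earns an economic profit the market has not rewarded
    "Company D": (-700, -150),   # negative on both measures
}

for name, (mva, ep) in examples.items():
    print(f"{name}: {value_category(mva, ep)}")
```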
Today, the surveys used by compensation
consultants do not distinguish among these categories of 3- to
5-year true value creation when linking pay with performance
(value creation).
Value Builders should be paid more
than companies that fall within the other value categories. Mixing them
with the other categories for compensation benchmarking
"pulls up" compensation averages and percentiles, allowing
poor-performing companies to justify compensation levels as
"competitive," even though there is no demonstrable
link to business operating performance or equity market
performance.