It is 2200 and you are zombie-tapping keys on your keyboard, debating whether you should spell out Navy and Marine Corps Achievement Medal or just type NAM and save the extra writing space. Surely the people on the board will know what NAM stands for? Or maybe . . . no, better to type it all out. It is now 2231. You have typed, erased, and retyped the same sentence thirteen times. Using an acronym here might get you promoted this year. At least, it feels that way. This is why you are spending all this time on the version of NAVFIT98 that you have unceremoniously tainted your home computer with.
How much time does the Navy spend every year on its evaluation system? The man-hours spent writing, editing, ranking, routing, and ultimately submitting are staggering on their own. But are we really hitting the mark? Well, let us talk about what the mark is.
Most sailors view the eval system as a means of communicating a person’s eligibility for promotion to a selection board. Some view it as a snapshot of an individual’s performance for the entire year, while others view it as a metric of that individual’s current standing among their peers. The annual evaluation has turned into a behemoth of a program with influence over regular promotions, selection boards, and even Sailor of the Quarter boards. But over the past five to ten years there has been a paradigm shift within the private sector, away from an annual evaluation with a forced ranking structure and toward a continuous feedback process. Some Fortune 500 companies, such as Adobe, Dell, and Microsoft, have abandoned the annual review altogether, while others, such as General Electric and Deloitte, have adopted a “hybrid” of continuous feedback coupled with an annual evaluation, albeit with a significantly different approach.
According to a worldwide survey conducted by Deloitte, 58 percent of HR executives considered reviews a waste of time. Within the Navy itself, the evaluation system is biased toward recent performance, often not weighing accomplishments from an early quarter as heavily as more recent events. There is also the five-point bell curve (really a three-point scale) coupled with a forced ranking structure, which pushes commands to consider things like “showing progression” and to place sailors at a trait average that inaccurately reflects their performance, with the logic that next year they can be ranked higher. Additionally, under the current EP/MP distribution system, there is room to overinflate an average-performing sailor and undervalue a high-performing one based on an arbitrary number of EP/MP recommendations. Though commands have the option not to use the entire allotment of EPs and MPs, many shy away from that choice because of the detrimental impact on morale. In fact, many of those same HR executives surveyed by Deloitte felt that annual reviews forced unhealthy and unproductive competition, spurred animosity within departments, and destroyed their talent-retention efforts.
With that in mind, the Navy should employ an evaluation process that focuses on three major elements: one, encouraging growth among sailors through continuous informal feedback at the supervisor level; two, measuring success at more frequent intervals, perhaps with a quarterly “snapshot” of how someone has performed in recent months; and three, targeting sailors for promotion with a significantly pared-down annual evaluation based on quantifiable metrics.
Continuous Feedback
Turns out, this one is nothing new. There is even a name for it: The Mentorship Program. Not very catchy, but it gets the point across. As an aircraft maintainer, I can tell you that the idea behind this program is nice, but the reality of it never connected with me on a productive level. Not because I lacked mentors, but because I never needed a “contact contract” to have a conversation with a supervisor about my career goals. The comedian Mitch Hedberg explains this best when talking about being given a receipt for purchasing a donut: “We do not need to bring ink and paper into this. I just could not imagine a scenario where I would have to prove that I bought a donut.” For the purposes of an evaluation program, the Navy should promote mentorship through frequent and informal check-in conversations and leave it at that.
Quarterly Snapshots
This idea was honed by Deloitte, a giant in the business-solutions industry. A study published in the Journal of Applied Psychology in 2000 revealed that performance evaluations were subject to variance based on the rater and often said more about the rater than the ratee. Deloitte set out to find the best way to evaluate performance without the data being skewed by rater subjectivity, and discovered that most of the information could be captured with four simple questions worded in a very specific way:
- Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus (measures overall performance and unique value to the organization on a five-point scale from “strongly agree” to “strongly disagree”).
- Given what I know of this person’s performance, I would always want him or her on my team (measures ability to work well with others on the same five-point scale).
- This person is at risk for low performance (identifies problems that might harm the customer or the team on a yes-or-no basis).
- This person is ready for promotion today (measures potential on a yes-or-no basis).
While these questions are effectively written and succinct, they were tailored to that organization and would certainly need tweaking for the Navy to use. Ideally, the immediate supervisor would provide this feedback. At Deloitte, the data were used to build a graphical representation of how employees compared with one another: no forced ranking, just a baseline view of where most employees sit, which also makes it easier to identify high- and low-performing outliers. This tool can help sailors gain perspective on their strengths and weaknesses, as well as help individuals focus on what they need to do to break out.
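To make the mechanics concrete, here is a minimal sketch in Python of how the two five-point items could be pooled across a command to show where each sailor sits relative to the group. Everything in it is an illustration: the names, scores, and the z-score threshold for flagging outliers are invented assumptions, not Deloitte’s actual method.

```python
# Hypothetical sketch: pooling quarterly "snapshot" scores across a command.
# All names, scores, and the 1.3 z-score threshold are illustrative only.
from statistics import mean, stdev

# Each entry: (sailor, average of the two five-point items,
#              where 1 = strongly disagree and 5 = strongly agree)
snapshots = [
    ("AT2 Smith", 4.5), ("AM1 Jones", 3.0), ("AE3 Lee", 3.5),
    ("AD2 Brown", 2.0), ("AT1 Davis", 5.0), ("AM2 Garcia", 3.5),
]

scores = [score for _, score in snapshots]
mu, sigma = mean(scores), stdev(scores)

for name, score in snapshots:
    z = (score - mu) / sigma  # distance from the command's baseline
    flag = "high outlier" if z > 1.3 else "low outlier" if z < -1.3 else ""
    print(f"{name:11s} score={score:.1f} z={z:+.2f} {flag}")
```

Note there is no forced curve here: each score stands on its own, and the distribution itself does the comparing.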
The Annual Review (2.0)
Despite the resentment toward it from both senior and junior sailors, there is still a need and use for the annual evaluation. The Navy’s goal in this area should be to remove rater bias so there is a clear picture of who should and should not be promoted. To start, Block 43 should be scrapped.
Yes, the whole thing. It is not necessary. Instead, the Navy could make an evaluation tailored to the rate and based on actual data. Instead of a forced ranking structure, rate-specific metrics should be expressed as a percentile. Think of taking a newborn to the doctor: they measure his head, and the doctor tells you that your kid’s head is in the 50th percentile, meaning half of all kids have a smaller head than yours. How would this look on an evaluation?
Using my own rate as an example, a common metric is how many maintenance action forms (MAFs) a sailor has inspected. Say I look through historical records and determine I have signed off 1,000 MAFs. I enter this number into my evaluation, and it spits back a percentile based on data from within my own command. Maybe I rank in the 5th percentile; that means 95 percent of all other collateral duty inspectors signed off more than 1,000 MAFs. I did not do too well there, did I? We should also factor in things like our PRT score, not to be mean, but because it reinforces the healthy culture we are already trying to build.
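The arithmetic behind such a percentile is simple enough to sketch. Below is a minimal illustration in Python, assuming a made-up command of twenty collateral duty inspectors and a “percent strictly below” definition of percentile rank; none of the numbers are real data.

```python
# Hypothetical sketch: converting a raw metric (MAFs inspected) into a
# within-command percentile. The MAF counts are fabricated for illustration,
# and "percent strictly below" is one common definition of percentile rank.
def percentile_rank(value: int, population: list[int]) -> float:
    """Percent of the population with a strictly smaller value."""
    below = sum(1 for v in population if v < value)
    return 100.0 * below / len(population)

# MAF counts for 20 collateral duty inspectors in one command (made up)
command_mafs = [
    900, 1000, 1450, 1600, 1700, 1800, 1900, 2000, 2100, 2200,
    2300, 2400, 2500, 2600, 2700, 2800, 2900, 3000, 3100, 3200,
]

my_mafs = 1000
rank = percentile_rank(my_mafs, command_mafs)
print(f"{my_mafs} MAFs places me in the {rank:.0f}th percentile")
# -> "1000 MAFs places me in the 5th percentile": 95 percent of the
#    command's inspectors signed off more than I did.
```

The point is not this particular formula but that the number comes from data rather than from a rater’s adjectives.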
So, is a hybrid continuous-feedback annual evaluation process the answer? That is hard to say, but there must be a reason the annual eval process is being scrutinized by high-performing businesses around the world. We are an ever-changing Navy with a talented workforce that we should have a vested interest in retaining. If the Navy wants to remain competitive, its leadership must look inward at these processes, because I guarantee you cannot capture a person’s value in 18 lines.