I think everyone has ideas on how we can make our FITREPs better – mine have been the same since the latest version came out in the Clinton Administration – but there is one aspect that I never really had an issue with: rankings.

It had always made sense that you had to make the call; not everyone gets a trophy, and someone must be #1, #2, etc. It seemed rough, but needed in order to help others read the entrails of our opaque system and divine who our best players are.

Is there something wrong with this part of our FITREPs that may, by its very nature, be destructive to fostering an environment of innovation and progress? Is this one of the sources of our problem of focusing more on loyalty to individuals vice loyalty to institutions?

Our buddy Chap sent along a short but devastating piece from Vanity Fair on Microsoft’s Lost Decade. A little close to home?

…. a management system known as “stack ranking”—a program that forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor—effectively crippled Microsoft’s ability to innovate. “Every current and former Microsoft employee I interviewed—every one—cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees,” Eichenwald writes. “If you were on a team of 10 people, you walked in the first day knowing that, no matter how good everyone was, 2 people were going to get a great review, 7 were going to get mediocre reviews, and 1 was going to get a terrible review,” says a former software developer. “It leads to employees focusing on competing with each other rather than competing with other companies.”
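
To make the mechanics concrete, here is a minimal sketch in Python of the forced split the quote describes; the function name and the assumption that the team arrives already ranked are illustrative, not anything from Eichenwald’s reporting:

    # A toy illustration of the quota described above: no matter how good
    # all ten people are, the 2/7/1 split is imposed.
    def stack_rank(ranked_team):
        assert len(ranked_team) == 10, "the quote's example assumes a team of 10"
        return {
            "great": ranked_team[:2],      # 2 get a great review
            "mediocre": ranked_team[2:9],  # 7 get mediocre reviews
            "terrible": ranked_team[9:],   # 1 gets a terrible review
        }

    team = ["dev%d" % i for i in range(1, 11)]  # even if all ten are strong
    print(stack_rank(team)["terrible"])         # ['dev10']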

Not just Microsoft in the “don’t be like them” category; ponder back some more.

“I see Microsoft as technology’s answer to Sears,” said Kurt Massey, a former senior marketing manager. “In the 40s, 50s, and 60s, Sears had it nailed. It was top-notch, but now it’s just a barren wasteland.”

That rolled right into the last line of the article, which reminded me of how we used to make fun of the Soviet Navy back in the day.

“They used to point their finger at IBM and laugh,” said Bill Hill, a former Microsoft manager. “Now they’ve become the thing they despised.”

How do those in the Royal Navy see our seamanship? How do the Japanese see our PMS and maintenance practices? How do the Dutch, Danish, and Norwegian shipyards see our methods?

If we want our future Navy to think and be nimble – perhaps we could start a conversation about what organizational cultures we like, and see how they recognize and grow talent internally. That could at least be a good way to kick things off.





  • Andy (JADAA)

    Once we force our way past the “it’s not broken, after all I made GOFO using the system” inertia, I think the biggest difficulty will be an over-swing of the pendulum. By necessity, the maritime services are hierarchical, not Silicon Valley “chill, dude.” I do however agree that especially for LCDR and below, rankings can cause the late-bloomers and the “not-quite-like-us” to be lost and discarded too early. A great deal of judiciousness will need to be exercised in developing a system that recognizes excellence, competence and those who frankly merit the comment “of all the officers in this command, this officer is one of them.”

  • Jim Dolbow

    Great post, Sal! This one post sums up all that is wrong with our US Navy. Keep on fighting the good fight!

  • Robert_K

    I would agree with you on your assessment of the value of rankings if the ranking were for the reporting period in question. Too often the process is hijacked by a number of outside influences, and the result is that a top performer during a reporting period gets overlooked because of TIG, how long they have been at the command, or how many times they have been passed over. Ranking sessions often evolve into mini-promotion boards and are not an evaluation of performance during a designated period.

    FITREPs are a portion of the problem related to inhibiting innovation. The larger issue is incentivizing innovation. Of course, if you want innovation, you must be willing to accept failure – often (a 300:1 ratio in the private sector). How do you protect the careers of innovators who don’t hit a home run every time? Conversely, how do you prevent a person with a single good idea from having his/her ticket punched for an entire career (the one-trick ponies)?

  • Michael Smith

    Interesting questions. It’s been more than 20 years since I left as an O-4 in the Submarine Force, so obviously my experience is not current, but I don’t believe that the USN’s problem is the same as Microsoft’s. You can have rankings without declaring that someone is average or poor. Yes, I know each branch likes to think they are the best of the best, but that isn’t accurate and is most likely one of the causes of FITREP inflation. But, even more than in industry, we need a cohesive team that functions well together yet recognizes that hierarchy is necessary and has a purpose.

  • http://tobeortodo.com J. Scott Shipman

    The USN could use a little sunlight. Like most of DOD, the politically correct drives everything at the expense of honesty. Perhaps we should have an “honesty standdown.”

  • http://cdrsalamander.blogspot.com CDR Salamander

    +1 to RK.

  • Ken Adams

    My employer uses a ranking/rating system similar to the military’s, but with some notable differences – the impact from a ranking is much more immediate and personal, and it is managed at a lower level.

    In our system, employees are racked and stacked by their individual front-line managers (15-30 people each), who then meet with their senior manager (~150 people) to mesh the whole group. The forced distribution of ratings based on rankings happens at this level, with the bulk of people receiving a “successful contributor” evaluation. It’s a 5-point system, with maybe 10% getting the highest rating, 15-20% the second level, 5-10% a step below fully successful, and very few rated unsuccessful. The immediate impact comes with the distribution of merit raises, which are directly linked to the ratings. The middle pack gets the “merit pool” percentage, the two upper ratings get about 1 and 2 percent more, and the two lower ratings typically get no raise or something really minimal.

    Contrast this with the military system, where the only sure thing is that someone who is consistently ranked “1 of x” (x not equal to 1) is very likely to get promoted. The relative rankings the COs assign each period only get considered when (a) it’s time to figure out someone’s next assignment, or (b) a promotion/screening board happens. It can take up to several years for the impact of a rating to be noticed, which in my mind is severely detrimental to individual development. I think the rating/ranking system can be used effectively at the retail level, but the wholesale version leaves a lot to be desired.
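
    For concreteness, a toy sketch of the rating-to-raise linkage Ken describes; the 3% merit pool and the exact lower-tier amounts are assumptions, not his company’s figures:

      # Toy mapping of the 5-point ratings to merit raises. The pool size
      # and lower-tier amounts are guesses; only the structure is Ken's.
      MERIT_POOL = 3.0  # assumed merit pool, in percent

      RAISE_BY_RATING = {
          5: MERIT_POOL + 2.0,  # highest rating (~10% of people)
          4: MERIT_POOL + 1.0,  # second level (15-20%)
          3: MERIT_POOL,        # "successful contributor" (the bulk)
          2: 0.5,               # a step below fully successful: minimal
          1: 0.0,               # unsuccessful: no raise
      }

      def merit_raise(rating):
          """Percentage raise tied directly to the annual rating."""
          return RAISE_BY_RATING[rating]

      print(merit_raise(3))  # 3.0 -> the middle pack gets the merit pool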

  • Grandpa Bluewater

    It might be interesting to try dropping the weakest FITREP for everyone. The young Adonis with nothing but perfection evah would just get one pulled at random.

    For the truly screwed up (cruelty to a subordinate, for example), a quiet general discharge after an in camera court of inquiry should suffice, for a limited range of non-criminal offenses not involving death or serious injury due to gross negligence. The FITREP is not the tool to purge bullies, playboys, con artists, and thugs.

    The real problem is 20 years of a shrinking navy as the norm. Somebody has to go, so any black mark is indelible, and anything less than perfect becomes one. A B-plus is a black mark, not slightly above the center of the pack.

    But I’m old and looong (and happily) out of the loop, so I could be wrong.

  • YN3(SCW) Pawlikowski

    Funny,
    I was reading this article on Ars Technica the other day and thinking the exact same thing. A lot of parallels.

  • Steel City

    I would venture that every Navy officer has heard words uttered to the effect that “if the heat is on LT X, it isn’t on me.” What is implied is that the observing officer isn’t going to do a thing to help LT X, lest the senior officer sending the heat look elsewhere for possible issues/defects. That example is true Navy teamwork in far too many Navy commands.

    • FoilHatWearer

      Yep. Being an engineer, it’s what we call the 3rd Law of Thermodynamics: “Heat applied to somebody else is heat that is not applied to me!” It’s a crappy way to do business. Let’s let this guy twist in the wind so that the rest of us can sit back, chill, and maintain the status quo. This happens way too often in the military.

  • Rich B

    The problem for me is the artificial “quotas.” Right now in the fleet, groups of department heads are wasting an hour or two arguing for their JOs to fit a CO’s “targeted average.”

    At times I really wish we followed more of a Merchant Marine construct, with greater evaluation weight on testing and achievement.

    It could be workable to weight qualifications that matter, and perhaps give JOs a “credit score” of sorts. You could combine setting a bar based upon minimum experience level, passing your rank exam, and CO recommendation. Let the FITREP be just that: a report reflecting why an individual is or is not qualified for his position. Take the pressure off maintaining an average and reflect more on the individual’s stated strengths or weaknesses.

    Points could be tweaked for platforms, but it might look something like this:

    OOD at sea 10pts
    OOD inport 3pts
    EOOW 9pts
    EOOW cold iron 5pts
    TAO 15pts
    VBSS BO 5pts
    SUWC 5pts
    ASWC 5pts
    AAWC 5pts
    IA 20pts
    FITREP up to 5pts

    Would it show who is or isn’t training their people?

    No system is without problems; and some platforms will offer more opportunity. So choose wisely, is all I can say. However, don’t we want to encourage sustained superior performance?

    I personally loved seeing my JOs pursuing qualifications on their own time.

    But right now we are advancing some young officers because of what we have invested in them, not based upon their ability to continue as an officer. A point structure “could” show the poor developers and be grounds for separation if minimums were not met, without the gross subjectivity of the current system. Did or didn’t the CO give the young officer a fair chance? When someone is a 5-point officer and failing his LTJG exam after 2 years, it may help clear things up.

    Regardless of the point structure, it would provide a measurable construct of performance when combined with the applicable “Rank Exam.” It would give a snapshot of someone’s development and perhaps encourage learning/cross-training.
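
    A rough sketch of how Rich B’s “credit score” might be tallied; the point values are from his table, while the function and the example officer are purely illustrative:

      # Hypothetical tally of the qualification "credit score" proposed above.
      QUAL_POINTS = {
          "OOD at sea": 10, "OOD inport": 3,
          "EOOW": 9, "EOOW cold iron": 5,
          "TAO": 15, "VBSS BO": 5,
          "SUWC": 5, "ASWC": 5, "AAWC": 5,
          "IA": 20,
      }
      FITREP_MAX = 5  # "FITREP up to 5pts"

      def credit_score(quals, fitrep_points=0):
          """Sum points for earned qualifications plus a capped FITREP input."""
          unknown = set(quals) - set(QUAL_POINTS)
          if unknown:
              raise ValueError("unrecognized qualifications: %s" % unknown)
          return sum(QUAL_POINTS[q] for q in quals) + min(fitrep_points, FITREP_MAX)

      # Example: OOD at sea + EOOW + TAO quals and a 4-point FITREP -> 38
      print(credit_score(["OOD at sea", "EOOW", "TAO"], fitrep_points=4))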

  • Jeffrey A Wendel

    About that FITREP – I concur with everyone on this subject; most of us have many sources and have been around the block a few times, so the format, development, and outcome are nearly the same. The only exception to the rule is looking at the senior’s format and methods. One subject that always seems to cross the path is peers who don’t give themselves enough credit for accomplishments and achievements, and thus submit a poorly defined or inadequate FITREP, thinking the evaluator will clean it up. Well folks, we must adhere to the standards and delegation of authority: assure the member’s FITREP is clean and ready for signature before it comes to your hands for either the first look or final look. This will certainly help many meet the deadlines.
    As for your own individual FITREP, the only advice I can provide is to track your accomplishments, both military and civilian, and set goals to achieve, which enables you to track and report. You are only as good as you are at taking care of yourself, but always take care of your young; mentor and guide them, never chew them up and spit them out, as what goes around will come around, shipmates… Best of luck.

  • Robert_K

    Rich B,

    The examination is an interesting concept. The Marine Corps had a Battle Skills Test for company-grade officers and enlisted – the results were insignificant and had no effect on a career.
    Having been involved in both the Navy and Marine Corps performance evaluation systems, one thing I noticed was that the navy placed (places?) far greater emphasis on “break out” factors – college education, quals, community involvement, chief’s mess participation, etc. – than did the Marines. For the most part, there was an underlying assumption in the navy that the individual was actually proficient at his/her job because of their rank.

    From what I observed, a sailor/officer who is excellent at their assigned duties, takes care of the troops, etc., but does not go “above and beyond” is normally rated below a ticket puncher, regardless of actual job performance. I could see an established point system only making this situation worse. If an individual earned numerous quals at year 1 of a rank – say LT – would the points follow them for the next 3 eval cycles (allowing them to coast), or would the points apply only to the year the qual was earned?

  • UltimaRatioReg

    Robert K,

    Would love to know your experience with the USMC PES.

    As for the Battle Skills Test, it was assumed that a company-grade officer in ground combat arms should max about everything. Most of us did. However, it is a bit of an apples/oranges debate, as almost all of our events were tactical or technique driven, not the mastery of a highly technical system of systems – though many of us did master and maintain ground combat systems like SP howitzers, tanks, LAVs, LVT-7s, etc., in the artillery.

    Woe betide the Lieutenant or Captain that did poorly, however, or who had his NCOs/SNCOs do poorly. I did see THAT affect careers, and with what seemed pretty good reason.

    Salamander asks one of the key questions that every military organization eventually must grapple with. Which shipping channel do we head for? One where we produce large numbers of generally competent leaders with exceedingly few innovators? Or one where we find and cultivate the superstars and pay the requisite cost for doing so?

  • Robert_K

    URR,

    I didn’t think the PES was a bad tool. (It’s been several years since I’ve used it, so my terminology may be out of date.) Performance indicators were relevant and clearly defined. ROs were forced to add some meat to the bones for higher performance marks, and HQMC actually read each FITREP. I had several returned for using the words “however,” “but,” or “although,” as they could be perceived as hidden negative comments. The Christmas tree and the future-potential section were good for the RS.

    I could request my RO data sheet to see how I ranked all my Marines by rank/grade. I found this was a good counseling tool at debriefing time to help explain how an individual’s grades fell out in comparison to the peers I rated.

    The only flaw I recall – and this may have been corrected – was that there was no score between A and B. If a Marine needed to improve, there was no way to indicate that: A was negative and B was performing up to par.

    It’s been a few years since I’ve used it so things may have changed.

    One other difference that I recall between the services was the Marine Corps emphasized the RO was doing a duty for the Service – i.e. impartially rating the Marine for the benefit of the Corps, while the navy seemingly emphasized this was a tool to reward the Sailor/Officer and help them to advance.

  • UltimaRatioReg

    Robert K,

    PES isn’t perfect, but it was an improvement over the previous FITREP methods, and is far more comprehensive/complete. Honesty of the RS and REVO is still essential, and that is not always forthcoming, though it generally is. Being able to rate your reporting senior’s grading tendencies evened out a lot of bumps.

    I imagine then, that you had to write on Marines as their reporting senior or REVO?

  • Robert_K

    I think you are correct – RS was the first line; RO was second. If that’s the case, primarily RS.

    “Being able to rate your reporting senior’s grading tendencies evened out a lot of bumps.”
    Absolutely!

    I also think being evaluated on the fairness in conducting evaluations (or something like that) was an incentive to play by the rules and not try to game the system.

  • Mittleschmerz

    Ya know…since the current system uses the concept of a reporting senior’s cumulative average (RS CUM), we could just totally do away with forced distribution and use a +/- standard deviation from the RS CUM to indicate a record’s “health”.

    ‘Course, that might mean we have to forcibly educate board members on what a standard deviation is…
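
    A minimal sketch of the “+/- standard deviation from the RS CUM” idea; it assumes a board could see a reporting senior’s history of trait averages, and the numbers are invented:

      # Grade a record's "health" as deviation from the RS cumulative average.
      from statistics import mean, stdev

      def record_health(member_avg, rs_trait_averages):
          """Standard deviations above (+) or below (-) the RS CUM."""
          rs_cum = mean(rs_trait_averages)
          return (member_avg - rs_cum) / stdev(rs_trait_averages)

      # Invented history of one reporting senior's trait averages (RS CUM = 3.8):
      history = [3.6, 3.8, 4.0, 3.7, 3.9, 3.8, 4.1, 3.5]
      print("%+.2f sigma" % record_health(4.2, history))  # +2.00 sigma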

  • UltimaRatioReg

    I suppose that they are counting on you guys to be too mature to giggle at that.

  • Bill Rogers

    I agree that ranking peers has an adverse impact on the climate and cohesion of a wardroom. However, the larger problem is that we are using short term quantitative measures for what should be a qualitative process. Here are three suggestions for creating a better officer evaluation/promotion system.

    First, officer evaluations should be used to accurately describe an officer’s responsibilities and how the officer contributed to accomplishing the unit’s mission (think resume). Second, the period of performance needs to be long enough that the results of an officer’s leadership and technical skills can be seen. This will make it more likely that officers who succeed by demanding unsustainable levels of effort from their subordinates will be identified. It will also result in a natural ranking as top performers are given increased responsibilities in the unit (as one of my COs used to tell us, “the reward for doing well is never that you have less to do”). Third, the emphasis during promotion boards needs to shift to identifying reasons to promote an officer, rather than searching for flaws that can be used to justify non-selection. I’m not suggesting that documented poor performance be overlooked; however, one “Oh sh_t” shouldn’t cancel 100 “attaboys.”

  • Herbal

    Someone recently told a large group of us, “It’s okay to make mistakes as long as they are original mistakes.” There are mistakes we can learn from companies like Microsoft, Sears, HP, or Research in Motion before they become Harvard Business School case studies. They are not original mistakes.

    Agree that standard deviation from the RS CUM could be helpful (http://bit.ly/ieegtR). Our fitrep system was written for a non-networked paper record/microfiche era and doesn’t take full advantage of the information that’s available today. With the high importance and attention we place on picking the right commanding officers, there’s no better time to modernize our performance evaluation system.

  • Steamer

    I don’t see stack rankings as stifling innovation – aren’t people in the pool SUPPOSED to compete with one another?

    What I saw in my career (which included significant time in the Navy’s Science & Technology community) was that innovators were usually crummy at communicating their value to seniors (seniors who often didn’t understand the potential value of the innovations being pursued).

    Reporting seniors will use what the system gives them at the high end and the low end of the ranking scales. There are way too many EPs assigned because the quota is there to fill, in comparison to the number of people performing at the EP level.

  • http://disruptivethinker.blogspot.com Ben Kohlmann

    Another aspect to consider is that not all commands are equal — I’ll use Naval Aviation as an example.

    At the end of our JO fleet tour, 99 percent of us from the TACAIR side go on to production: Top Gun, Test Pilot School, Fleet Replacement Squadron instructor, training command instructor. While the bureau has tried to convince us that all options are equally competitive, and boards tend to agree, in reality some are better than others in the quality of people they attract.

    Is a #1 MP at Top Gun really worse than the #3 EP at a training command billet? Is he even worse than the #1 EP at the training command? A senior officer I respect greatly had a 3 of 3 O-4 select FITREP while at Top Gun…a career killer. The two guys in front of him re-wrote the entirety of air-to-air tactics, still used to this day.

    Had he been anywhere else, he would’ve been an EP player, and likely made command. But he wanted to be in a high-octane job, and didn’t break out. He acknowledges his choice, and to this day works his tail off. But he is one of those late bloomers – and most of us JOs are very disappointed that a guy like that got eaten up by the system – he lacked “sustained superior timing” as opposed to “sustained superior performance.”

    Which gets into the whole timing thing – even though the rankings are supposed to prevent the “everybody gets a trophy” mentality, rankings are often arranged, and people are moved out of squadrons early or extended, to get as many people as possible under the EP wicket. What if a guy is your top performer within 6 months of getting there? He almost always has to “wait his turn.” Or, if he is rewarded, he is shuttled out of his command up to a year early to afford others the opportunity to get “their” EP.

    In general, the right people get promoted. But there is an incredible amount of waste in the whole process, and I know lots of guys who are rock stars that don’t want to roll the dice on getting to the wrong squadron at the wrong time, and not making command because of things unrelated to pure performance. So they leave.

  • Cap’n Bill

    The concept of rank order is foul. Start with the assumption that everyone being reported on is damn near perfect; all less-than-mighty-damn-good would have been sent home. It then falls on the reporting senior to use the proper language, with the required degree of accuracy, to describe the strong and weak points of this most excellent person. Let the exact English words, backed up by specific examples, describe the person. Forget about numerical scores accurate to five digits. Now you have a useful tool to use as one etches out a high-performance group of people who show the greatest abilities in certain specific and necessary types of work and duty.

    I’m sure glad that I don’t have to bother about such stuff anymore!

  • Sperrwaffe

    Cap’n Bill
    I like your approach. I do, of course, lack detailed knowledge about the structure of your FITREPs.
    But in our system the so-called “free description” of the individual is often even more important than the technical description of the skills based on numbers – a horror for statistics-based classification and the guys who prefer it.
    And to categorize with numbers is easier than having to make up your mind and write something about the person you are evaluating.
    When the process itself and measurable numbers are more important than the contents of a proper language description, you end up in the situation CDR Sal brought up.

    However, writing a text about your juniors/crew/team is something you have to learn over a long time. When I was on my first boat, my CO (that old SOB) really pushed me into this, besides kicking me to learn to drive that good old vessel built in the ’60s (men of steel drive boats of wood). And today I am very thankful that he did so. But I am getting OT.

    The detail which really struck me after having read this post at least three times is this small sentence:
    “It leads to employees focusing on competing with each other rather than competing with other companies.”
    This competition creates a lot of tension. People are more concerned about their next FITREP and promotion than about the group, the team, the squad, you name it. Eric brought this item up over at the discussion about Jeannette’s post, and I challenged him about it with regard to this thread.
    In many posts that challenged Jeannette I can see this competition.
    The competition about promotion, having to do more because of sabbaticals, competition about careers. Competition of men and women with the fear of losing a career. What career? Do we all have to become the best?
    “The best of the best of the best, with honors…” MIB

    Aren’t we already “the best” (philosophically speaking, not to lower other achievements)? We have passed the acceptance tests, endured basic training, special training, crew training, you name it. And we are still here. Something to be proud of. Sometimes we get promoted, advance, become CO and so on. But still we are a member of the Proud.
    And on the other hand there is the reluctance to speak freely about problems, about issues which have an impact on the team, the group, the military, in order to protect your career. Disruptive thinking, for instance.
    To say no! when it endangers your team – speaking, for example, from an equipment point of view (wrong combat gear procured; on a bigger scale, LCS and so on). I am not referring to combat situations where you execute your mission and your training in order to bring everybody home.
    It’s not that we don’t have the same problems in our Bundeswehr.
    We certainly do. And it’s also a society issue.

    You are now authorized to open fire and try to sink me :)

  • Dave Schwind

    During my doctoral classwork, I spent quite a bit of time stewing over the “right” way to do FITREPs, even enlisting the assistance of a Marine O-9 to help formulate some of the Marine reporting concepts into a workable solution for the Navy. I’ve since moved on to other projects (maybe my ADD is kicking in…) so my FITREP-fixing project is on the backburner…for now, at least.

    I had a plethora of thoughts dancing in my head for ideas on how to improve FITREPs; some of them from my experience as a civilian, others from what I’ve seen from other services as well as things that I simply KNOW need fixing on FITREPs. Here are a couple of factors that I mulled over for a while:

    1. I really like the way my company does its annual reporting. It’s not a quick or easy process; it involves a lot of writing to justify any rankings given. The rankings are a 1, 2, or 3. If given a “1”, you might as well be submitting your resume elsewhere. A “2” is considered “doing your job”, and “3” is reserved for the top 10%, organization-wide. In other words, as the supervisor of a remote project, I may or may not have anyone who is a “3”, even if I really think my guys are hard workers… they simply might not break out across the pack. In order to get a “3” ranking, there has to be no-kidding hard documentation of what was done to deserve that ranking.

    Let me back up for a second… At the end of the reporting cycle, each individual project is evaluated based on its performance. This includes not just the amount of production, but also hitting all of the other “wickets”, to include a certain number of innovation submissions. The latter aren’t just encouraged: if you want a top-scoring project, you have to have submitted “X” number of organization-wide innovations. You also cannot have any government citations (to include speeding tickets in company vehicles, OSHA violations, etc.), and you must be 100% compliant with all health, safety, and training requirements for the year. Failure to complete any of those wickets prevents a project from being top-ranked and thus makes it impossible for that project manager to be in the competition for a “3” ranking.

    The rankings and project performance are the two factors that go into the creation of bonuses for the employees each year, so it is incumbent on everyone to work together to ensure they remain compliant with everything in order to get the top rankings and thus get their full bonuses. And everyone’s involvement counts: if my employees aren’t eligible for their full bonus, that means I’m not either… and that also means my boss isn’t either. I know that in the Navy something like this would mean serious micromanagement, but in the environment I work in, it means that everyone helps each other to ensure success… because without everyone being successful, no one is.

    Could this be applied directly to the Navy? In some ways yes, others, no. But there are some good concepts that could be taken away.

    2. Looking at reports and counseling forms from the other services, one that really got my mind going involved writing a paragraph to justify every ranking given to a subordinate. This way, each ranking wasn’t simply given out to make the numbers equal the reporting senior’s cumulative average or some such nonsense; the subordinate was graded and the grades were individually justified. Yes, that would require a bit more work on the part of the reporting senior, but it would actually make people THINK about the rankings vice putting “X”s in boxes to make sure the RS average works out right.

    3. How about revising some of the performance traits to make them more specific to the expectations of their designator and paygrade? This is actually multi-part: First, it makes no sense to grade an Ensign in a training pipeline on the same form that would be used to grade a Captain on the Joint Staff. The Ensign is expected to learn and be a good student; the Captain is expected to be a technical expert, know how to manage people and budgets and the like, and make decisions “for the good of the Navy”. Aren’t these different skill sets? Looking at it further, aren’t the expectations of a division officer different than that of a department head or an executive officer? So why do we grade everyone on the same form? Why not make a form to grade O-3 and below, and then make a separate one for O-4 through O-6? Or even better, separate forms for O-1-O-2, O-3-O-4, and then O-5-O-6? I don’t see how difficult that would be; we’ve already been doing it with enlisted forms for decades, not to mention the O-7 and above forms.

    This way, specific performance traits can be tailored for each paygrade. For example, as an O-1/O-2, the expectation is that the officer is a “student”…whether that’s learning to fly, sitting in nuke school, or standing U/I watches on the bridge of a ship. O-3s and O-4s are the “middle management”…senior division officers and department heads. They are expected to be tactically savvy, know their weapons systems intimately, and be a student of mid-career schooling. O-5s and O-6s are the leaders and mentors…but also the ones who become experts at budgets, upper-level management, command, and so on. Make the FITREP applicable for each paygrade so the officer can be effectively evaluated on paygrade-specific requirements that would exhibit their potential for future advancement.

    And what about URL and RL specific FITREPs? Obviously, a PAO or JAG O-2 is going to have a different skill set to exhibit as compared to their SWO or NFO counterparts. This may sound incredibly complex, having three separate forms and having RL and URL specific forms (a grand total of 6 forms that would cover all O-1 to O-6 paygrade officers), but at the same time, isn’t the point of writing FITREPs to evaluate an officer’s performance and show their potential for future advancement? So why not tailor the reporting form to match the requirements of the paygrade and job?

    4. What about including a Reviewing Officer after the Reporting Senior? Particularly for department heads and above… the FITREPs written for them are the ones that will be used to select them for command. Would it not be worth spending an hour or two as an O-6 (or above) to review the comments of the Reporting Senior on the FITREP to ensure they are what they should be? Let’s be realistic… not every O-5 is a master of writing FITREPs. So why does the Navy put complete and absolute faith in them to select the future leaders of the Navy? Once again… that doesn’t make sense.

    5. What about 360-degree rankings? I know those seem “faddish” to many, but studies have shown they are extremely effective for leadership pipeline development. The CNO mandated 360-degree evaluations for SWO department heads in OPNAVINST 1412.14. That’s a good first step, but it falls woefully short of what should be done. A page can be taken from the succession planning of major successful corporations: conduct 360-degree reviews and have the results routed up that person’s chain of command for review. What good does a 360-degree review do if it’s only reviewed by the person being reported on? The comments and rankings can be rationalized (e.g. “that guy was just pissed off at me because I made him late going on liberty”) and in the end, what effect will it have? Very little to none (sorry to burst anyone’s bubble…).

    On the topic of corporate succession plans, what about psychological testing for command-screened officers? Many corporations do it to ensure they are selecting the right leaders, not just those that look good on paper. If it’s good enough for corporations so they can maintain their profit and loss lines, perhaps it would be suitable for the Navy, who puts officers in charge of billion-plus dollar warships with hundreds (if not thousands) of lives onboard? I know that’s a bit off topic…but it’s worth thinking about.

    Anyway, these are a few of the concepts and thoughts I mulled over during the course of my project. Perhaps now would be a good time to jump start it again and see if the Navy really can make a marked change for the better in the promotion and selection of the right people – not just those who look good on paper.

    • FoilHatWearer

      I like your option #1. I have a strong suspicion that most organizations rack & stack their people like that anyway, although they’re not allowed to actually document it that way.

      I have a co-worker whose previous company based its evaluations most heavily on what your co-workers think of your performance. Each person was rated by several of their co-workers. It wasn’t just clicking boxes; they had to write a paragraph on why they thought the person was good, mediocre, or bad in several different areas. That system makes the most sense to me. Your co-workers see what you do every day; your boss only sees small snapshots.

      I’ve seen the front office mistakenly get ticked off at a guy and complain to his supervisor about it. Great, guess who’s getting screwed on the next eval? At the same time, I’ve seen guys who are real hamburgers fall assbackwards into doing something good in front of the big bosses, who tell the supervisor. So this loser gets kudos, an award, and great evals when 98% of his work is pretty shoddy. I saw that kind of crap a lot in the military. What’s also bad is that a lot of bosses think that no news is good news: if nobody is complaining about my guy, he must be doing great!

      Peer reviews take a lot of the guesswork out of ratings. (You want it so that a person is rated by multiple people. That way, a guy’s buddy can’t lift him into the stratosphere and somebody who doesn’t like you can’t drop you into the basement.)

  • Pony Boy

    Many of these points are relevant to senior enlisted evals as well. Some thoughts:

    1. The practice of ranking across the CPO Mess can undermine the camaraderie/networking power that the CPO Mess brings to the command. Rather than working together towards command goals and ensuring each Chief’s individual effectiveness, rankings can lead to an unhealthy sense of competition. Enlisted Sailors do compete for advancement within rate, so any ranking/breakout should be limited to those within the same rating. It’s hard to compare an MMC to an OSC or YNC who have different job demands that result in different levels of command contribution, none necessarily better or worse than the others, just different. It is much more valuable and appropriate to rank like-rated Chiefs in the same command; that’s where you want to note who’s the best, since one of those Chiefs has the potential to be selected and lead the other Chiefs. Retired CPOs that I talk with say they saw a change in CPO Mess behavior when CPO Mess rankings were implemented.

    2. The selection board process is what drives eval behavior. When our current FITREP/EVAL system was implemented, rankings were prohibited. I would offer that selection board “laziness” was a factor in changing that. It is much easier for a board member to simply look for a command breakout and recommendation line than to read each eval thoroughly for merit/achievement. As a result, rankings rule, and the individual trait marks and trait average are driven to support the command ranking rather than the eval reflecting the performance of the individual against the trait standard. This puts the eval debriefer in an awkward situation when explaining why a certain trait is graded as such when the actual performance is not congruent with it. We should focus on how well the person performs to the standard vice against each other on a broad scale. The individual traits drive the trait average, which in turn drives any ranking.

    3. Current CPOEVAL needs to be revised. For example, how do you objectively evaluate “Sense of Heritage”? I think we would be best served by evaluating on behaviors expected of the Chief.

  • Chris A

    The line from Millington for as long as I can remember is that we promote FITREPs, not people. The current system fails the navy through the ranking process and the weight given to collateral duties in ranking personnel.

    The FITREP system incorrectly assumes talent is evenly distributed throughout the navy. A command with several top performers is going to have to do a disservice to some because of the distribution requirements while in another command, less talented individuals may earn higher rankings because the local competitive pool is shallow. The system has no way to resolve those differences and that failure produces some perverse results when promotion boards meet.

    Another source of perversion is the weight given to collateral duties when making ranking decisions. The route to breaking out from the pack is not professional excellence but collateral duties. CDR Salamander asks how others see our seamanship, maintenance systems, and shipyard project management. The question we should be asking is: how does performance in those areas get captured by FITREPs? Less than it should, is my observation. It’s too easy to lump everyone’s professional skill into the same group and make distinctions based on other factors. Would we have as many ship-handling incidents if there were a way to show that an officer who aspires to command is lacking in that area? What about the toxic leader who gets results at the expense of their people?
