Tags: results-based training, training
If we want to get serious about putting Warfighting First and Reducing Administrative Distractions, we can start with how we assess training on ships. Our current system is process-based: superior commands issue detailed instructions for the administration of shipboard training and qualification, then assess compliance by auditing the ships’ records. There is usually a results-based component of assessment (observed drills), which is combined with the audits to produce an overall score; commands with weak drill performance might be saved by fantastic recordkeeping.
The process-based approach suffers from two flawed assumptions:
Assumption #1: Performance is the result of directed training processes. I’ll illustrate this assumption with an anecdote from my previous command, when I had just become responsible for the Torpedo Division. I observed divisional training conducted by the Leading First, complete with a PowerPoint presentation and testable objectives in compliance with the Continuing Training and Qualification Manual. The topic, also in compliance with said manual, was the characteristics of various weapon classes, many of which were not employed by our ship.
It was the end of the work day, but the men were breaking out tools. I asked the Chief why he wasn’t cutting the men loose for the day. His reply:
“Well, we suck at [weapons] handling, so we’re going to do some training.”
“Didn’t we just do training?” I asked.
“Oh, that?” replied the Chief. “That was for the books.”
Assumption #2: The directed process is the best possible method. If a Commanding Officer is required to adhere to a directed list of requirements, then the only way to innovate is to pile new requirements onto the crew. Given a painfully finite budget of time and manpower, prudence demands that COs spend their crews’ efforts carefully. Such an environment stifles innovation: COs can’t reasonably accomplish much more than the bare minimum, because there just isn’t enough time.
One of my favorite bosses taught me that the best way to demonstrate effectiveness is to have it. Carrying that thought forward, I submit that the best way to assess effectiveness is to observe it. I’m going to propose something radical here: let’s stop assessing administrative compliance, and shift our focus instead to aggressively assessing performance. With commanders under constant threat of failing a drill but given the freedom to manage training at their level, you’d effectively see the combined intellectual firepower of the entire fleet directed at developing the most effective training program.
This proposal sounds crazy for several reasons. First among them is that assessing results is exceptionally difficult, whereas auditing paperwork is exceptionally easy. Drills and simulations are time- and labor-intensive, and they carry some risk of accidents when a drill goes wrong. Shifting to results-based training would require far greater frequency, variety, and creativity in drilling and simulation than we have today, and it would increase demands on shore-side training assessors.
Another obstacle to results-based training (and an obstacle to innovation in general) is organizational parochialism: if multiple organizations impose requirements on a ship and one of them relaxes its requirements, all of the ship’s attention will shift to the requirements still enforced by the others. Higher commands stake their claim on the time and energy of crews by generating and enforcing requirements. If you want to see this principle in action, step aboard any American submarine today and compare the disproportionate training attention devoted to nuclear power against literally all other warfare areas combined.
A final obstacle to results-based training is that it would invite significant organizational risk. Imagine, for example, that a security watchstander unjustly opened fire on a recreational boater, and it was later discovered that the ship had no qualification process for its guards. In such a case, holding the Commanding Officer accountable would not be enough; public outcry would demand a top-down solution from Navy leadership, who would be taken to task for the CO’s folly.
If the obstacles to results-based training sound too hard, I ask the reader the following questions: If results-based training were legitimately embraced by the Navy, would the ships we operate be better or worse at the conduct of naval warfare? Would the leaders we promote and the culture they create be more or less likely to prevail in a war at sea?
What percentage of the Navy’s effort should be budgeted toward effectiveness in the conduct of naval warfare? How much career risk should its leaders be willing to incur to this end?
Innovation is trending in the Navy today. Lasers, railguns, UUVs and 3D-printing have demonstrated fascinating potential to improve our ability to wage war, but sometimes the most important evolutions are organizational in nature. The trouble with organizational change is that while it is relatively inexpensive in dollar terms, its price comes primarily in the form of political risk.
If we want a culture of innovation to be anything more than a passing fad, we’re going to have to take some risks. I propose that the best risk our leadership could take on today is to return some trust to our Commanding Officers. Allowing sufficient latitude to try new things would be a fantastic first step to putting Warfighting First.