Archive for the 'Cyber' Category

There are a lot of people who are convinced that unmanned aircraft, ships, and subsurface craft are the future of warfare. Not just surveillance and reconnaissance, but full spectrum combat.

At least for the Western democracies, my initial push-back has always been that regardless of how good your AI gets, the legal/ROE issues will get in the way if you cut away the man-in-the-loop such as we have now in the TLAM-to-Reaper spectrum of autonomy.

Other parts of the world? Not everyone has the niceties that we are used to when it comes to moral or safety considerations.

You cannot classify math, and what is cutting edge for one generation is old and primitive for another. North Korea's building of nuclear weapons is a case in point.

There is, of course, the usual reply from the AI advocates that AI will be the next big thing in military affairs, etc. etc. etc.

What if we are scope-locked in our AI discussion? What if we simply do not get the big picture of what is going on?

Author Sam Harris is having me rethink all of my previous assumptions about the direction we are going with AI and how we think about it.

The question we should be asking isn’t so much, “will it meet the promise,” as “should we even let it get close.”

“We.” Unfortunately, there is no international “we” with the force to keep a genie in a bottle, is there?

You need to watch the full video from his TED Talk below, but in it he outlines three assumptions you need to hoist onboard in order to fully understand what the real challenge of AI is.

1. Intelligence is the product of information processing. General consciousness will eventually be built into our intelligent machines.

2. We will continue to improve our intelligent machines.

3. We are not near the peak of intelligence.

Agree with the above? Well, you may not want your AI air superiority fighter anytime soon.


Please join us at 5pm EDT on 2 Oct 2016 for Midrats Episode 352: Building Resilience in the Face of Man Made & Natural Threats

At the height of hurricane season, people think of the impact such storms can have on the security, economy, and even the political direction of places hit by huge events such as Katrina.

As we saw in the attacks in New York City in 2001, terrorists are trying to create those same effects, along with a few more. With a global economy, local events can have international impact.

How do you best prevent, prepare for, and recover from natural events – and, at the high end, from terrorist attacks that go beyond explosions to the next level of chemical, biological, or even nuclear weapons?

Our guest to discuss this and related concerns for the full hour will be J. Michael Barrett, Director of the Center for Homeland Security & Resilience (CHSR) and of Diligent Innovations.

Mike’s previous experience includes serving as the Director of Strategy for the White House Homeland Security Council, Intelligence Officer for the Office of the Secretary of Defense, and Senior Analyst for the Chairman of the Joint Chiefs of Staff.

A former Fulbright Scholar to Turkey, Mike has served as a Homeland Security Fellow at the Manhattan Institute, an Olin Foundation Fellow at Johns Hopkins University, and a research analyst at the Center for Strategic and International Studies.

Mike received an M.A. in Strategic Studies and International Economics from the Johns Hopkins University School of Advanced International Studies (SAIS) in Washington, D.C., and an M.B.A. from the Australian School of Business in Sydney, Australia. He also was graduated cum laude with a B.A. in International Relations from the University of Pennsylvania and is an Occasional Guest Lecturer at National Defense University, Georgetown University, and Joint Special Operations University.

Join us live if you can (or pick the show up later) by clicking here. Or you can get the show later from our iTunes page here or from Stitcher here.

General Robert Neller has always been regarded as a tough, no-nonsense Marine, and as Commandant of the Marine Corps he has also emerged as a genuine visionary. He deeply understands the future military environment and how his service must prepare for it. At the 2016 U.S. Naval Institute/AFCEA West Conference, the general provided critical insight into his vision, which closely aligns with that of Chief of Naval Operations (CNO) Admiral John Richardson, on the direction of leadership development the Naval Services should take.

According to the 37th Commandant:

I think the training systems we have as far as simulators and simulation are pretty good for individual task/condition/standard, for air crew, for drivers, for even firing individual weapons, gunnery, things like that. I think the thing that we’re looking for is, where’s the equivalent of our Holodeck, where a fleet commander or division commander or air wing commander can go in and get a rep. Right now that almost requires an actual provision of the real stuff, which is really expensive . . . . Where’s our Ender’s Game battle lab kind of thing where we can not just give our leadership reps, but we can actually find out who the really good leaders are.

General Neller’s comments compel us to further analysis. He invokes aspects of popular science fiction to paint a picture for how leaders will be trained, evaluated, and readied for operational challenges in the not-so-distant future. He identifies critical gaps in today’s approach to leadership development, where mid- and senior-grade officers have few opportunities to experiment with novel operational concepts, using multiple units, in a risk-tolerant environment. He also places cognitive development, or military decision-making, on par with the physical fitness that has long been a hallmark of Marine Corps officers. Finally, Neller highlights the problem of assessing the true quality of leadership in today’s ranks, where a significant portion of an officer’s career is spent in non-operational assignments.

One Army study of the novel Ender’s Game describes the “battle lab” (or school) in this way:

Using virtual training environments, the children go head-to-head on an individual level against computers that simulate Formic battle tactics to gain the knowledge and abilities required to defeat the enemy. The children can then compete against one another in the virtual environments to further develop their strategies. The next phase involves live collective training. Divided into armies, the soldiers must learn to function as a single unit to accomplish a mission objective in the battleroom. With enough skill, soldiers can become commanders of their armies and must learn to lead them effectively. By merging these individual and collective training components, the soldiers’ knowledge, skills, and abilities can translate into operational readiness.

While the concept of an Ender’s Game battle lab may seem like pure fantasy to some, the technology to build it may be right around the corner. In order to turn Neller’s vision into reality, several organizational changes must occur.


Gen. Robert Neller, the 37th Commandant of the Marine Corps, speaks to participants at the Marine Corps Warfighting Lab’s Force Development 25 Innovation Symposium at Marine Corps Base Quantico, Virginia, Feb. 23, 2016

Harnessing advances in several emergent fields is critical for creating a naval battle lab, but we must exercise prudence in our approach. We must take full advantage of better private sector platforms and systems, and make using them our first choice, rather than taking the more expensive approach of designing our own systems. Reinventing the wheel, and the resulting exorbitant costs, will be the death knell of a naval battle lab long before the project would get underway in earnest.

As the current Pokémon Go craze clearly demonstrates, working augmented reality is now widely available to the public at virtually no cost. Had such a system been built from scratch using the defense acquisition process, its cost surely would have rendered it unaffordable. In fiscally constrained times, the DON must adopt new business practices and modernize outdated IT policies to capitalize on these types of commercial initiatives. Senior leaders and acquisition professionals need to consider open source software (OSS) services, such as GitHub, as the new norm for software procurement. OSS services allow users to take available code and modify it for a specific use at potentially a much lower cost than developing their own version from scratch or purchasing a commercial software license.

Another form of technological advancement needing consideration is the rise of machine learning and “bot” technology. Sophisticated software algorithms show great utility in modern computer networks, with their ability to monitor computer systems, offer data access, and check network activity while adapting themselves to varying conditions without human direction. This capability is being used commercially to improve customer service and to monitor network activity, among other private sector functions. Such advanced machine learning tools will be critical for creating virtual exercise controllers or simulated adversaries, using their adaptable artificial intelligence to challenge military tacticians based on their level of expertise.
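As a rough illustration only, the sketch below shows one way an adaptive simulated adversary of the kind described above might scale scenario difficulty to a trainee's recent results. Every name and number in it is hypothetical; it is not drawn from any DoD or DON system.

```python
# Minimal, hypothetical sketch of an adaptive adversary "bot" that tightens or
# eases the tactical problem based on a trainee's recent win rate.
from collections import deque


class AdaptiveAdversary:
    """Scales its aggressiveness toward a target trainee win rate."""

    def __init__(self, target_win_rate=0.5, window=10):
        self.target_win_rate = target_win_rate
        self.results = deque(maxlen=window)   # recent outcomes (1 = trainee won)
        self.difficulty = 0.5                 # 0.0 = passive, 1.0 = maximally aggressive

    def record_result(self, trainee_won: bool) -> None:
        self.results.append(1 if trainee_won else 0)
        win_rate = sum(self.results) / len(self.results)
        # Winning too often? Tighten the problem. Losing too often? Ease off.
        self.difficulty += 0.1 * (win_rate - self.target_win_rate)
        self.difficulty = min(1.0, max(0.0, self.difficulty))

    def next_scenario(self) -> dict:
        # Difficulty maps onto scenario knobs (numbers are illustrative only).
        return {
            "hostile_contacts": 2 + int(self.difficulty * 8),
            "reaction_time_s": 60 - int(self.difficulty * 40),
        }


if __name__ == "__main__":
    bot = AdaptiveAdversary()
    for outcome in [True, True, True, False, True]:
        bot.record_result(outcome)
    print(bot.next_scenario())
```

A real exercise controller would use far richer models, but the feedback loop of observing outcomes and adjusting the opposing problem is the core idea.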

Mobility will be an important enabler for leadership development in the future. It is difficult to find a naval officer today who does not own a smart phone. We must take advantage of these powerful tools by providing our people with appropriate network access and software to enable them to participate in scalable leadership exercises alone or as members of a networked team. Such access will allow them to develop professionally wherever they are. In short, we must make cognitive development as accessible as doing a set of push-ups. Leveraging commercial technology, however, is only one part of the changes required to implement General Neller’s vision.

The naval services have led in wargaming for decades. Over the past few years, improvements to analytical methods have resulted in game outcomes informing organizational decision-making processes. However, we must not lose sight of the fact that wargaming, and gameplay in general, serves as an excellent leadership development tool. In essence, a traditional wargame is a turn-based competition among participants built around a scenario. Wargames make people think and solve problems. This same process is easily replicated, repeated, and expanded by using a virtual environment.

Virtual wargaming offers many advantages over traditional simulations. Consider popular online games such as World of Warcraft or Call of Duty. These games are played by millions of networked participants around the world every day. Fundamentally, they are designed to pose tactical problems to players who have a set of options from which to select. This interaction presents an incredible opportunity both to learn and collect useful data on military decision making.

In the future, for example, tactical problem X could be posed to a large and diverse group of naval officers in a virtual game format. From their answers, it would be possible to determine that a certain percentage would choose option Y, while others would choose option Z. This data could then influence policy changes or improve training and education programs to address any observed shortfalls. Further, if this virtual environment is shared with other services and coalition partners, it will be possible to determine the effect service and national culture have on tactical decision making.
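To make the data-collection idea concrete, here is a minimal sketch of how responses from such a virtual game could be tallied by course of action and by service. The records and field names are invented for illustration.

```python
# Hypothetical aggregation of decisions captured from a virtual wargame:
# which course of action did officers pick, and does it vary by service?
from collections import Counter

responses = [
    {"officer": "A", "service": "USN", "choice": "Y"},
    {"officer": "B", "service": "USMC", "choice": "Z"},
    {"officer": "C", "service": "USN", "choice": "Y"},
    {"officer": "D", "service": "USN", "choice": "Z"},
]

totals = Counter(r["choice"] for r in responses)
n = len(responses)
for option, count in totals.items():
    print(f"Option {option}: {count / n:.0%} of respondents")

# Splitting by service (or nationality) shows whether culture shifts the choices.
by_service = Counter((r["service"], r["choice"]) for r in responses)
print(by_service)
```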

Another advantage of virtual gaming is its ability to draw upon the expertise of the crowd to solve challenging problems. This is contrary to the norm of giving only a few elite players the opportunity to participate in large-scale events. Virtual environments are also more accommodating to various personality types and better for overcoming the power dynamics and hierarchies associated with the traditional approach to military wargaming.

The DON is at the forefront of crowdsourcing in the Department of Defense through its use of online platforms such as MMOWGLI, The Hatch, and the Marine Corps Innovation Challenge. Each of these forums provides Sailors, Marines, and DON civilians the opportunity to participate in virtual problem solving challenges. The lessons from these nascent systems could influence operational planning in the future, as the multitude of options available to our adversaries could be given to a network of operational planners, rather than myopically focusing on one or two likely courses of action. History has shown the current approach to planning often results in failure to anticipate our adversaries’ actions, an inflexibility we must remedy.

Virtual games are only as good as the environment in which they are conducted. Commercial gaming technology, geographic information systems, intelligence collection sensors, and repositories of global societal data are constantly improving. Much work remains to integrate these various sources of data in order to develop virtual environments of sufficient quality to enable realistic decision-making exercises. Excessive emphasis on environmental fidelity can often become an expensive distraction, however.

Virtual environments may be used to represent complex, networked, “wicked problems” better, as well as to demonstrate the impact of our actions within, for example, complex civilian population centers. In short, virtual environments can present a different set of decision-making problems and feedback mechanisms not available in live training exercises or traditional war games. This is yet another advantage offered by new forms of simulation.

The term “game” often connotes a recreational activity. If gameplay in the battle lab of the future is to become an effective tool for assessing the tactical decision making of naval leaders, proper incentives must be put in place so these exercises are taken as seriously as time on the rifle range. The emerging concept of gamification rests upon rewards or meaningful status upgrades to reinforce positive behavior, while penalizing negative behavior. Performance in the naval battle lab consequently must be incorporated into annual performance assessments and ultimately influence career decisions.


Wargaming at the Naval War College

In an examination of military innovation, Dima Adamsky notes a significant difference between the US and Soviet militaries during the Cold War in their approaches to technological adaptation. The Soviets would develop concepts and strategy for use ahead of delivering a technology, whereas the US military usually had the technology and then often took a decade to figure out how to turn it into an operational advantage. To prevent this problem in the future, DARPA and ONR could insert the latest weapons technology into the battle lab years ahead of its actual fielding. This would give future naval leaders the opportunity to experiment with weapons of the future, then speedily integrate them into their decision-making cycle as soon as the new systems arrive in the operating forces.

The DON’s Task Force Innovation comprised over 150 naval innovators from across the operating forces. Improving wargaming and expanding virtual environments were identified as important tools to promote innovative thinking. As a result, Secretary Mabus issued two policy memos to emphasize these two issues and take an integrated naval approach when possible. Great progress is being made as a result of these directives, but these two areas will ultimately form the foundation for a naval battle lab and must proceed in parallel and complement one another.

To operationalize this concept, the numerous stakeholders from across the naval enterprise must work towards a common vision. Developing the functional system as described here will require strong leadership and collaboration across numerous DON organizations. As we have seen, this topic is of great interest to the SECNAV, CNO and CMC. Therefore the current bureaucratic environment may be optimal to make meaningful progress.

There are many technical, fiscal, and organizational barriers which must be overcome to fully operationalize the naval battle lab concept. The most significant obstacle, however, will be cultural. Ultimately our leaders must see that the lessons learned from traditional leadership tasks and day-to-day decision-making in an operational environment are invaluable and cannot be supplanted. As cognitive decision-making emerges as a critical capability on the battlefield of the future, we must leverage every opportunity to build the most tactically and operationally proficient naval officers possible. As we see in every aspect of society, technology will play a vital role. If a battle-hardened infantry Marine like General Neller, who entered military service long before personal computing became part of our daily lives, recognizes the potential of a naval battle lab for building and testing naval leaders, others must take notice too.


The Cyber Dragon

July 2016


An excerpt of this article was published in the July issue of Proceedings. The full article is provided here for further context and explanation. This article does not reflect the views of the Department of Defense, Department of the Navy or U.S. Cyber Command.

China and the United States appear to be engaged in a long-term competition, and one area of particular concern is cyberspace. What used to be considered a significant, overwhelming advantage of U.S. military capabilities relative to the rest of the world, including China, has recently been called into question. Recent Chinese military writings confirm the centrality of cyberspace operations to the People’s Liberation Army (PLA) concepts of “informationized warfare.” This paper examines Chinese writing on these concepts. It proposes that China has been actively seeking to position its sources of information power to enable it ideally to “win without fighting” or, if necessary, to win a short, overwhelming victory for Chinese forces. It concludes with some recommendations for how the U.S. might counter China’s informationized war strategy.

Chinese Strategic Thinking and “Informationized War”

There’s a war out there, old friend. A world war. And it’s not about who’s got the most bullets. It’s about who controls the information. What we see and hear, how we work, what we think… it’s all about the information!

-Cosmo, from the movie “Sneakers”, 1992

You may not be interested in war, but war is interested in you.

-Leon Trotsky (1879-1940)

Chinese military and strategic thought is markedly different from Western tradition. Fundamentally, China views the natural state of the world as one of “conflict and competition” rather than peace and cooperation. The goal of Chinese strategy is to “impose order through hierarchy.”[1] The natural conclusion is that due to this state, the world needs global powers, perhaps even a super power, to manage the conflict and competition and bring harmony. Timothy Thomas has identified several components to Chinese military thinking, to include: [2]

  1. A more broad and analytic framework that holistically incorporates information-age strategy;
  2. While remaining prominently Marxist, it “examines the strategic environment through the lens of objective reality and applies subjective judgment to manipulate that environment to one’s advantage”;
  3. The use of stratagems integrated with technological innovation, creating a hybrid combination targeting the adversary’s decision-making process to induce the enemy to make decisions China wants;
  4. The constant search for shi, or strategic advantage. Shi is thought to be everywhere, “whether it be with the use of forces, electrons, or some other aspect of the strategic environment”; and
  5. The object of “deceptively making someone do something ostensibly for himself, when he is actually doing it for you.”

Shi is the “concept born of disposition … of a process that can evolve to our advantage if we make opportune use of its propensity.” Chinese military thought seems to differ from Clausewitz, becoming focused on shi where Clausewitz finds “ends” and “means” as the most important. Shi aims to use “every possible means to influence the potential inherent in the forces at play” to its own advantage, before any engagement or battle takes place. Therefore, the engagement never actually constitutes the decisive battle that Clausewitz envisions, because it has already been won.[3]

Chinese military writing contemplates war transitioning to an “informationized” state “in which informationized operations is the main operation form and information is the leading factor in gaining victory.” Information is a resource to be harvested and exploited, as well as denied to the enemy or manipulated for advantage. Nations and militaries “can be wealthy or poor in this resource. Overall wealth in information is what will ultimately matter most in peacetime competitions, crises or military conflicts.” [4]

China considers herself at an information disadvantage, so her use of information harvesting and exploitation in cyberspace aligns with her strategic intention. Thomas likens it to three faces of a “cyber dragon”: peace activist, spook, and attacker. The peace activist is the face of the dragon concerned with internal and external soft power (improving China’s image, respect, and perhaps fear or awe of China abroad, while remaining on guard internally against a Chinese version of an “Arab Spring” or “Orange Revolution”). The spook is the use of cyber techniques not only to acquire information but also to reconnoiter adversary information systems, perhaps laying the groundwork for future attack or deterrence capabilities. The attacker face uses offensive capabilities and concepts to deter, or if necessary, paralyze the information capabilities of the adversary. The goal is that these three faces “work in harmony to achieve dominance over any potential adversary.”[5]

People’s Liberation Army (PLA) books such as the Academy of Military Sciences’ Science of Military Strategy and Ye Zheng’s Lecture on the Science of Information Operations “reflect a consensus among Chinese strategists that modern war cannot be won without first controlling the network domain.” This tracks with current U.S. doctrine that emphasizes dominance in the network domain as “central to deterring Chinese forces and protecting U.S. interests in the event of crisis or conflict.”[6]

Importantly, PLA writers emphasize first strike and first mover advantage in the network domain to “degrade or destroy the adversary’s information support infrastructure and lessen their ability to retaliate.” This creates strong incentive to strike in the network domain just prior to the formal onset of hostilities.[7] China’s lines of effort in support of this strategy include:

  1. Gaining information through reconnaissance of cyber systems, and manipulating or influencing Western or American perception and technology to establish strategic advantage;
  2. Using that reconnaissance information to position its forces, to locate vulnerabilities, and be in a position to conduct system sabotage;
  3. In a crisis, using system sabotage to either render information technology systems impotent, or expose strategic cyber geography to establish offensive cyber deterrence.[8]

Chinese writers publicly state that China lacks the ability to successfully launch a first strike at the present time. This is because they believe that Chinese networks are constantly penetrated by adversaries, and because of U.S./western control of most of the Internet’s core architecture. PLA writers do recognize the vulnerabilities of relying on Western technology supply chains for hardware and software operating systems.[9]

Chinese writings suggest information is the bonding agent for strategic action from which China will be able to amass enough power that it will be unnecessary for her to use military force to accomplish her objectives. If force is necessary, China will be in such an advantageous position that the military conflict will be a foregone conclusion. Consider the game of chess. Andrew Marshall, former Director of the Office of Net Assessment, noted that “most of the game is not directly aimed at checkmating the opponent’s king. Instead, the early and middle parts of the contest are about building a more advantageous position from which checkmating the opponent almost plays itself out.”[10] Indeed, this is why most competitive games of chess end not in checkmate, but rather in concession or a draw. The player on the losing end knows that he or she will lose, perhaps in a finite number of moves.

Recently, the Chinese political and military leadership established a new unit within the PLA to enhance its cyber operations capabilities, space operations and cyber espionage. This new unit, called the “Strategic Support Force,” is part of a larger military reorganization program. In some ways, it might be seen as a counter to the establishment in the United States of U.S. Cyber Command. Along with hoped-for improvements to China’s already formidable cyber offensive and defensive capabilities, the unit will also focus on space assets and global positioning services, as well as interference with radar and communications.[11] This is a clear sign of the importance that the leadership places on fighting and winning in the information domain.

Beyond its military activities, China’s information control system remains critical to ensuring regime survival. However, understanding this system is made more difficult by the fact that the PRC goes to great lengths to “deliberately and systematically attempt to control how China is understood by both foreigners and Chinese alike,” according to Christopher Ford.[12] He goes on to note:

The modern Chinese information space remains a controlled one, subject to pervasive government monitoring and censorship, widespread and increasingly sophisticated methods of media-savvy opinion management, and the ever-present possibility that the citizenry will face penalties for venturing too far beyond the bounds of the CCP’s official line.[13]

Diplomatic and international policies are also built around giving China maneuvering room to interpret norms, rules and standards to serve domestic needs, principally through the primacy of state sovereignty. China must constantly seek to balance economic growth with maintaining the Party’s grip on power. Not only is Internet usage controlled and censored, but it is also a tool for state propaganda.[14] Chinese “journalists” are, to a large degree, arms of the Chinese propaganda system, transmitting the official “party line” to the population, while at the same time providing feedback “to the leaders on the public’s feelings and behavior.”[15]

Chinese authorities use a number of techniques to control the flow of information. All Internet traffic from the outside world must pass through one of three large computer centers in Beijing, Shanghai and Guangzhou – the so-called “Great Firewall of China.” Inbound traffic can be intercepted and compared to a regularly updated list of forbidden keywords and websites and the data blocked.[16]
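As a purely illustrative sketch, and not a description of the actual Chinese filtering system, the keyword-and-site blocking described above reduces to a simple check of destination and content against a blocklist. All terms and hostnames below are placeholders.

```python
# Hypothetical, simplified keyword/site filter: drop a request if its
# destination or content matches a regularly updated blocklist.
FORBIDDEN_KEYWORDS = {"example-banned-term", "another-banned-term"}
BLOCKED_SITES = {"blocked.example.org"}


def allow_request(host: str, payload: str) -> bool:
    """Return False if the destination or content matches the blocklist."""
    if host in BLOCKED_SITES:
        return False
    text = payload.lower()
    return not any(keyword in text for keyword in FORBIDDEN_KEYWORDS)


if __name__ == "__main__":
    print(allow_request("news.example.com", "ordinary article text"))  # True
    print(allow_request("blocked.example.org", "anything"))            # False
```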

Within China, the government heavily regulates and monitors Internet service providers, cafes and university bulletin board systems. It requires registration of websites and blogs, and has conducted a number of high profile arrests and crackdowns on both dissidents and Internet service providers. This “selective targeting” has created an “undercurrent of fear and promoted self-censorship.” The government employs thousands of people who monitor and censor Internet activity as well as promote CCP propaganda.[17]

While the CCP retains the ability to shut down entire parts of the information system, to include Internet, cell phone, text messaging and long-distance communication, it truly prefers to “prevent such incidents from occurring in the first place. And here lies the real strength of the system.”[18] The “self-censorship that the government promotes among individuals and domestic Internet providers is now the primary regulating and control method over cyberspace and has experienced great success.”[19]

China has long been rightfully accused of being a state sponsor of cybercrime and intellectual property theft. This has led to a high level of domestic cybercrime “due in large part to rampant use and distribution of pirated technology,” which creates vulnerabilities. It is estimated that 54.9 percent of computers in China are infected with viruses, and that 1,367 out of 2,714 government portals examined in 2013 “reported security loopholes.”[20] Chinese networks themselves, by virtue of their size and scope, may represent a gaping vulnerability.

Options for the U.S.

Both the 2015 National Security Strategy and 2015 DoD Cyber Strategy state that the U.S. desires to “deter” or “prevent” China from using cyberspace to conduct malicious activity. To do so, the United States may want to consider strategies which have the following desired outcomes:

  1. Build up Chinese confidence that they are achieving their goals, so that they devote resources to attacking networks where the United States wants them to be;
  2. Increase ambiguity in China’s understanding of the information they are able to acquire;
  3. Introduce doubt into China’s belief that it has the ability to disrupt American information networks; and
  4. Force China to expend more resources focused inward on controlling information within China that threatens Communist Party control.

Unlike the other domains, cyberspace is entirely man-made and the physical properties which characterize it can be altered, almost at will and instantaneously. Traditional geographic constraints do not apply, and we can alter the cyber strategic geography to reinforce American competitive advantages that can aid in achieving some of the goals mentioned above.

For example, many American networks that interest Chinese cyber forces reside on public and commercial Internet service provider (ISP) backbones, such as those owned by Verizon and AT&T, and use commercially available equipment, like Cisco routers. We like to think of “cyberspace” or “the Internet” as being a “global commons,” (see the 2015 NSS), but in reality, nearly all the physical infrastructure and equipment is privately owned and subject to manipulation. The information itself travels on electrons, which can also be manipulated.

The U.S. might develop alternative information pathways and networks, perhaps solely owned and operated by the government or military and not connected to the public ISP backbone. By keeping the existence of a separate network a secret, China may continue to devote resources to attacking and exploiting existing government networks residing on public ISP’s. Alternatively, the U.S. could permit China to acquire access to this surreptitious network in order to feed it deceptive information. In either case, the Chinese regime’s confidence in its ability to disrupt or deceive U.S. information networks could be placed in doubt at a time of our choosing.

Existing information networks could be made more resilient. Peter Singer recommends that we think about resilience in terms of both systems and organizations. He identifies three elements underpinning resiliency: the capacity to work under degraded conditions, the ability to recover quickly if disrupted, and the ability to “learn lessons to better deal with future threats.”[21]

The DoD can also play a role by establishing more consistent network security standards. Cleared defense contractors (CDC), such as Lockheed Martin, Northrop Grumman, and Boeing, are priority targets for espionage. The DoD can leverage its buying power to mandate accountability, not only for the products developed by the contractors, but also for the security of the information networks they use. It can work to bring “transparency and accountability to the supply chain,” to include using agreed-upon standards, independent evaluation, and accreditation and certification of trusted delivery systems. It should extend supply chain risk mitigation best practices to all contracting companies and to the Department itself.[22] Resiliency, risk mitigation, and security can reduce China’s confidence that it can successfully execute system sabotage or offensive deterrence.

Another strategy might be to develop capabilities that permit the U.S. to execute cyber blockades or create cyber exclusion zones. A cyber blockade is a “situation rendered by an attack on cyber infrastructure or systems that prevents a state from accessing cyberspace, thus preventing the transmission (ingress/egress) of data beyond a geographical boundary.” Alison Lawlor Russell has researched the potential of blockades, carefully examining case studies of the attacks on Estonia in 2007 and Georgia in 2008, and comparing them to more traditional maritime blockades and “no fly zones.” She notes that a cyber blockade is a “legitimate tool of international statecraft … consistent with other types of blockades” and can be, though not always, considered an act of war.[23] Cyber exclusion zones seek to deny a specific area of cyberspace to the adversary, sometimes as a form of self-defense.[24]

As previously stated, China’s information strategy is designed foremost to ensure regime survival. It has erected a massive information control system for the purpose of monitoring, filtering and controlling information within China and between China and the world. The Chinese Communist Party spends more money and resources on domestic security and surveillance than on the PLA.[25] Clearly, in the minds of the Chinese Communist Party, information control is a critical vulnerability. Therefore strategies which seek to keep China focused inward may be advantageous. The U.S. might invest in technologies which can be easily inserted into the Chinese market that encrypt communication or permit Chinese users to bypass government monitors. Targeting China’s information control regime should align with current and historic cultural proclivities. For example, environmental degradation, corruption, and an urban-rural divide are areas of concern for the Chinese people. Sophisticated highlighting of these issues puts pressure on the Communist Party.

The U.S. will not be as successful if it does not address the modern, “informationized” concept of war. This should not be taken as a call to change our understanding of war or its nature. War remains violent and brutal, and should be avoided when possible. But the use of information to exploit the adversary and achieve strategic advantage is not being addressed by strategic and military planners as well as it might be. Information capabilities in the electromagnetic spectrum, cyberspace, and elsewhere remain stove-piped and walled off from planners. The Department of Defense (and the U.S. government) continues to treat information as a separate, compartmented capability rather than treat it holistically – as a resource that supports our national security.

The 2015 DoD Cyber Strategy does make mention of force planning, to include the training and equipping of cyber forces. However, cyberspace is just one part of the information domain. We need to better integrate the growth in advanced technology into planning, not just acquisition. We need to consider the impact of dual-use technology and its proliferation worldwide, not just to China. We must consider the implications of Chinese information technology companies providing goods and services in the U.S. – especially to the U.S. government. The DoD should develop human capital investment strategies that leverage America’s strengths, and consider new ways to recruit, train, and keep the best and brightest in the military, intelligence, and national security communities. Just as the “space race” of the Cold War ushered in the modern “Information Age,” this competition could drive the next wave of technological advances.


China’s use of cyberspace operations to support her strategic goals is like the canary in the coal mine. While the U.S. maintains several competitive advantages, it is clear that China is investing large amounts of time, energy, people and resources to achieve her strategic desires, probably within our lifetime. Yet there is reason for the U.S. to be hopeful. It engaged in a long-term competition with the Soviet Union, and was ultimately victorious. This competition was not so long ago, and America has a wealth of talented veterans in the military, civilian and academic worlds who know what it takes to engage in a long-term competition with a rival while trying to avoid a shooting war.


[1] Jacqueline N. Deal, “Chinese Concepts of Deterrence and Their Practical Implications for the United States,” (Washington, DC: Long Term Strategy Group, 2014).

[2] Timothy L. Thomas, “China’s Concept of Military Strategy,” Parameters 44, no. 4 (2014-15).

[3] Francois Jullien, The Propensity of Things: Toward a History of Efficacy in China (New York: Zone Books, 1999). p. 34-38.

[4] Barry D. Watts, “Countering Enemy Informationized Operations in Peace and War,” (Washington, DC: Center for Strategic and Budgetary Assessments, 2014).

[5] Timothy L. Thomas, Three Faces of the Cyber Dragon: Cyber Peace Activist, Spook, Attacker (Ft. Leavenworth: Foreign Military Studies Office, 2012).

[6] Joe McReynolds et al., “Termite Electron: Chinese Military Computer Network Warfare Theory and Practice,” (Vienna, VA: Center for Intelligence Research and Analysis, 2015).

[7] Ibid.

[8] Timothy L. Thomas. China’s Cyber Incursions. Fort Leavenworth: Foreign Military Studies Office, 2013.

[9] Ibid.

[10] Watts, “Countering Enemy Informationized Operations in Peace and War.”

[11] (Rajagopalan 2016)

[12] Christopher A. Ford, China Looks at the West: Identity, Global Ambitions, and the Future of Sino-American Relations (Lexington: University of Kentucky Press, 2015). p. 13-14

[13] Ibid.

[14] Rebecca MacKinnon, “Flatter World and Thicker Walls? Blogs, Censorship and Civic Discourse in China,” Public Choice 134 (2008): 31-46.

[15] Ford, p. 19-21.

[16] Michael Wines, Sharon LaFraniere, and Jonathan Ansfield. “China’s Censors Tackle and Trip Over the Internet.” The New York Times. April 7, 2010.

[17] Thomas Lum, Patricia Moloney Figliola, and Matthew C. Weed, China, Internet Freedom, and U.S. Policy, Report for Congress (Washington, D.C.: Congressional Research Service, 2013).

[18] Ford, p. 32.

[19] Ibid., p. 38.

[20] Amy Chang. Warring State: China’s Cybersecurity Strategy. Washington, D.C.: Center for a New American Security, 2014.

[21] P.W. Singer and Allan Friedman, Cybersecurity and Cyberwar: What Everyone Needs to Know (New York: Oxford University Press, 2014). p. 170-171

[22] Ibid., p. 202-205.

[23] Alison Lawlor Russell, Cyber Blockades (Washington DC: Georgetown University Press, 2014). p. 144-145.

[24] Ibid., p. 146-147.

[25] Chang.


U.S. aircraft deliver relief supplies to the congested airport in
Port-au-Prince, Haiti in January 2010. UN Photo/Logan Abassi

During the earthquake tragedy in Haiti, American aid planes often circled Haiti’s sole open runway for hours. How is this possible for a nation on an island? Would a rapid revival of the seaplane capabilities perfected by the United States decades ago materially improve such situations? And could seaplane technology be a force multiplier aligned with advances in stealthy, electrically powered “E-Planes,” some of which could be airborne almost indefinitely?

In an era which prizes cost-effectiveness, emphasis on the coastal and littoral, and the innovative use of smaller, lighter forces, perhaps seaplane usage merits a review. Today, other maritime nations, and nations with maritime aspirations – such as Russia, China, Japan, Germany and Canada – each have impressive seaplane or amphibious aircraft programs underway. Even Iran has displayed maneuvers with numerous small indigenous military seaplanes, although their capabilities are uncertain.

For humanitarian and political situations such as Haiti and Japan, seaplanes could be uniquely capable of delivering large amounts of aid to earthquake, hurricane and tsunami victims, as well as rescuing survivors. This would be “showing the flag” in a very productive way, and most importantly, delivering help speedily and efficiently. For purely military considerations, seaplanes can address urgent needs in coastal warfare, port security, maritime patrol, cyber warfare and decentralized “swarm” defense and attack.


Martin P6M SeaMaster testing beaching gear in Baltimore, Maryland in 1958. Naval Institute photo archive

A seaplane future is not merely hypothetical; many components were tangibly produced by the late 1950s, and some of the planes were in early series production and operational.[1] The main flying components of that force were the Martin SeaMaster strike aircraft, the Convair Tradewind transport and tanker, and the Convair Sea Dart fighter. In addition to Navy use, both the Air Force and Coast Guard had admirable records employing seaplanes after WWII. Airplanes such as the Grumman HU-16 Albatross were not only “tri-service” but sometimes “tri-phibian,” with land, sea, and “frozen-sea” – i.e. ski – versions.

By the late 1960s, however, these and other major U.S. seaplane programs were canceled, and the seaplane sank without a trace from U.S. Navy service. And so the era ended. But should it have? Recent advances in computerized design and composite aircraft construction, and discussions of rising sea levels, again pose the question – is there room in U.S. military and civilian doctrine and budget for a small but effective force of multi-role, long-range seaplanes?

Seaplanes, “E-planes,” and submarines may in fact be powerful cross-multipliers of force. The modern submarine’s almost unlimited capability for electrical generation and water electrolysis could provide indefinite fuel for the stealthy electrical or fuel cell engines of manned or unmanned seaplanes and drones. Similarly, high-persistence seaplanes could be the disposable, semi-autonomous eyes, ears, and delivery/retrieval platforms of submarines submerged many miles away. Perhaps most importantly, seaplanes could augment the recent increased national emphasis on cyber defense. Standing patrols would help address not just domestic cyber threats per se, but the entire spectrum of offshore cyber, radio, electronic, and electromagnetic threats. And they could ensure that such defense is not merely optimized for the Navy’s own networks and systems – vital as this is – but that it can efficiently protect American civilian assets with an effective deterrence and response – keeping these electronic and tangible “rogue waves” far from our shorelines.


Japanese ShinMaywa US-2 seaplane used for air-sea rescue. Image courtesy Wikipedia/Toshiro Aoki

In hindsight, the incremental costs and risks of a re-invigorated seaplane program can be expected to be a small fraction of the $40 billion spent on the V-22, with equal or greater benefits and aircraft survivability. And – as a counterpoint to the US/EU tanker acquisition spat – an American buy of a small quantity of, say, ShinMaywa US-2s or Bombardier 415s may aid inter-country collaboration with our important allies. Perhaps a low-cost, high-impact, rapidly effective plan could include such a buy until the United States’ own seaplane capability again “ramps up.”

We have spent hundreds of billions over the last few years guarding our vital sea lanes. We now need a judicious, cost-effective strategy for the Navy to help protect our “E lanes” – including not only tangible military action over the oceans, but domestic cyber assets, radio-frequency and electromagnetic activities. Hopefully, the next humanitarian crisis or military challenge will be aided both literally and littorally by seaplane technologies which are not “if only we still had” but rather “already here and available”.

[1] Trimble, William F. Attack from the Sea – A History of the U.S. Navy’s Seaplane Striking Force. Annapolis, Maryland: Naval Institute Press, 2005.

The Decision Process for Littoral Warfare

Our Navy expects to retain open ocean dominance by superior “shooting” with sufficient weapon reach and accuracy using manned or unmanned aircraft and missiles, and with an adequate set of anti-scouting, Command and Control (C2) countermeasures, and counterforce measures. Our present network of continuous but electronically detectable systems needs only to be kept secure from enemy C2 countermeasures to continue our blue water dominance with carrier battle groups, surface action groups, and expeditionary strike groups. The Navy calls the capability “network-centric warfare.”

In this piece, however, we concentrate on the dangerous environment close to a coastline, where the full range of our sensors and weapons cannot be exploited. The threat of sudden, short range attack is a constant concern. We wish to describe an effective mesh network to fight in combat environments like San Carlos Water in the Falklands War, the coast of Israel in the 1973 War, and other waters that have seen sudden surprise attacks on ships at relatively short range, like the missile attacks on USS Stark (FFG-31), HMS Sheffield, the British supply ship Atlantic Conveyor, the many missile attacks in the Gulf “Tanker War” of 1982-1989, and most recently against the Israeli missile ship, INS Hanit, off the Lebanon coast.

The littoral environment is cluttered with islands, coastal traffic, fishing boats, oil rigs and electromagnetic emissions. It is further complicated by shoal waters and inlets that offer concealment, and by the threats to our warships posed by land-to-sea missile batteries. In littoral waters the tactics are dominated by the need to be as undetected as possible with ships and aircraft that are small in size but large in numbers. Offensive tactics are achieved not by dominance at longer ranges but by covert, sudden surprise attacks and anti-scouting techniques. The mesh network we will introduce is resilient, agile and self-healing, employing intermittent and hard-to-detect communications to support offensive strikes as its foremost operational and tactical advantage.

The development of a mesh network that enables us to Attack Effectively First with a distributed lethal force in the littorals is essential to the full spectrum of future naval operations and tactics.

Command and Control Structures

All networks for Navy Command and Control must function within the context of twelve fundamental tactical processes. The mesh network we describe below fundamentally is intended to achieve what the late VADM Arthur Cebrowski espoused: a command system that is a network of people and things to perform three processes:

  • Sense (detect, track, and target enemy units)
  • Decide (make tactical command decisions and execute them with a communications system for control)
  • Act (which for simplicity we will treat as the acts of combat maneuvering and shooting at something to good effect. Other purposes include antipiracy, defeating drug runners, or conducting humanitarian operations, each of which requires other forms of action.)

What is the purpose of the sense-decide-shoot sequence?[1] Keeping to basics, the purpose in naval tactics and in this paper is to Attack Effectively First. Now we see why there are not three but twelve elements of tactical decision making. With the above examples in mind, it is clear that to Attack Effectively First a Tactical Commander must perform his three processes better than the enemy, who simultaneously is performing his own Sense, Decide, and Shoot processes. Furthermore, each side is trying to interfere with his enemy’s processes, stopping or slowing them enough so that he can act (shoot) first. In Fleet Tactics and Coastal Combat, Hughes calls these network-supported actions: anti-scouting, command and control countermeasures, and counterforce.[2]

Table of the Twelve Processes

Each commander governs only six of the twelve processes with his network. He does his best to interfere with the enemy’s activities and network but he can’t control them. A complete discussion of what comprises the combat actions and what measures help achieve an advantage—to attack effectively first—can be found again in Fleet Tactics and Coastal Combat.[3]

Also observe that timeliness is an essential ingredient of the tactical commander’s networked decision process. Rarely is it possible for him to wait for a complete picture before acting. The Battle of Midway, the night surface battles in the Solomons, and the 1973 Yom Kippur War’s sea battles all demonstrate the extreme pressure on leadership and the genius by which a victorious tactical commander chooses the right moment to launch his attack while mentally assimilating twelve interacting processes.

What Is a Mesh Network?

The definition of mesh originates in graph theory language describing flexible self-forming, self-healing, and eventually self-organizing networks. From a pure mathematical standpoint, mesh network topology is described as a complete or fully interconnected graph. For a system of N nodes the mesh topology is represented by N(N-1)/2 links, in which every node is connected to all the others. From the computer and information networking standpoint, mesh networking could take place at every critical layer of network functionality, which is typically structured through the 7-layered hierarchy of cyberspace. At the lowest physical layer, populated by moving assets such as platforms and their antennas, it could be viewed as a directional or physical network of highly dynamic components. Here advances in computing technology, signal processing, and transmission open up new opportunities we are exploring at the Naval Postgraduate School.
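The arithmetic is easy to verify. Assuming a handful of hypothetical node names, the short sketch below enumerates every pairwise link of a fully connected mesh and confirms the N(N-1)/2 count:

```python
# Illustration of a fully connected mesh: for N nodes, every pair is linked,
# giving N*(N-1)/2 links. Node names are hypothetical.
from itertools import combinations

nodes = ["LCS-1", "LCS-2", "Patrol-1", "UAV-1", "USV-1"]  # N = 5 example nodes

links = list(combinations(nodes, 2))    # every unordered pair of nodes
n = len(nodes)
assert len(links) == n * (n - 1) // 2   # 5 * 4 / 2 = 10 links

for a, b in links:
    print(f"{a} <-> {b}")
```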

Altogether, across the layers of cyber-physical space the mesh network of LCS nodes could be implemented as an interacting set of Hubs and Relays (physical layer, layer 1) interconnected by Bridges (link layer, layer 2) and governed by Routers (IP space, layer 3 and above). The set is assisted by Gateways (application layers 5-7) that interface with other networks, for example those of other Services and nations that use different protocols. In the Navy application the network is a Decision Support System for efficient but intermittent, hard-to-detect transmission of information (processed when desirable), complex orders, and compact commands, in order to conduct almost undetectable actions by the force components in the network. A key advantage of a mesh network is its mobility in (a) physical, (b) cyber, and (c) functional domains simultaneously, to enhance our command-and-control (or decision-execution) process and to degrade an enemy’s attempts to interfere with our command and control.

Mesh Networking Effects on the Decision Process in the Littorals: C2 Migration to Cyber-Physical Space

The Littoral Combat Ship (LCS) was designed to operate in the global littorals. Today’s LCS configuration with its sea frame and mission module capabilities provides a set of defensive surface, anti-submarine, and mine warfare capabilities. Plans under way to boost the LCS to frigate-like offensive capabilities presume survivability in contested waters.

The LCS already is a multimodal networking platform that carries small, deployable manned and unmanned components. Adding dynamic short lifetime mesh nodes will enable the LCS to operate in time and space with intermittent transmissions. We describe an extremely dynamic mesh which doesn’t rely on time-space continuity but instead executes the Sense-Decide-Act (S-D-A) C2 cycle in highly discrete moments in time and space.

In a mesh network the Sense, Decide, and Act processes operate in both the cyber and physical domains. The C2 correspondence between the S-D-A phase in physical space and similar S-D-A steps in cyberspace can be exploited to create new options for concealment and surprise. For example, by turning on the Sense mine countermeasures (MCM) component, we start collecting surveillance feeds from organic unmanned vehicles and other fixed or aerial-surface mobile assets in the physical space. Then the LCS commander will repeat the D and A steps in cyberspace. It could be as simple as prioritizing the sensor feeds or turning the situational awareness views “on” and “off” to save on bandwidth that is shared with many partner boats. Or the MCM mesh capability can be as complex as switching all assets feeding data to the LCS from on or over an island to strictly directional peer-to-peer links, meshed in a less detectable non-line-of-sight (N-LOS) mode. The physical “Sense” capability meshes with multiple, nested “D-A” steps performed in the cyber domain.
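One of those cyberspace "D-A" steps, prioritizing feeds and shedding low-priority views when shared bandwidth is tight, could look something like the minimal sketch below. The feed names, priorities, and bandwidth figures are all hypothetical.

```python
# Hypothetical sketch: rank incoming sensor feeds by priority and switch
# low-priority situational awareness views "off" when the shared link is tight.
feeds = [
    {"name": "UUV-sonar", "priority": 1, "kbps": 256},
    {"name": "Patrol-boat-radar", "priority": 2, "kbps": 128},
    {"name": "UAV-video", "priority": 3, "kbps": 2000},
]

BUDGET_KBPS = 512  # illustrative share of the shared link available to this node

used = 0
for feed in sorted(feeds, key=lambda f: f["priority"]):
    keep_on = used + feed["kbps"] <= BUDGET_KBPS
    if keep_on:
        used += feed["kbps"]
    print(f'{feed["name"]}: {"ON" if keep_on else "OFF"}')
```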

On the other hand, suppose we are fusing feeds on a peer’s activity in the LCS, which is physically N-LOS to the peer concealed behind an island. Suppose as well that the radar or optical sensor feeds come from a patrol boat that is in view of the site only intermittently. Now it becomes a priority data feed. The LCS commander shoots a projectile (a physical space action) with a miniature wireless hub in its payload. The projectile’s compact communications unit reads the data from the boat sensor during the descent and sends it to the LCS while in the line of sight. It is a process of a few seconds carried out in physical space, while the C2 process improves on the patrol boat’s cyberspace data feed. Meanwhile, if the adversary is able to observe the act he is unable to decide whether it is threatening or not. There are other opportunities as we approach an enemy coast while we are establishing all-domain access with a mesh network. The littorals are where the complexities of warfare all converge and where access to all domains will be required, often simultaneously. The Naval Postgraduate School is exploring the complexities and experimenting with these technologies.

By serving as critical nodes in a littoral mesh network, the LCSs and other vessels and aircraft, both manned and unmanned, can take on new operational roles. The configuration of information networks—well described in (Comer, 2011), with their decision-making variants described in (Bordetsky, Dolk, Mullins, 2015)—will typically be guided by the presence and usage of four major types of critical networking nodes: the Hubs, the Bridges, the Routers, and the Gateways, arranged in a hierarchy of protocol layers, of which the Open Systems Interconnection (OSI) seven-layered model is the most popular. In such a unified picture, stratified nodes perform across a scaled mesh of links, and Hubs are the connectors of the physical layer (OSI layer 1). Bridges (or switches) operate one layer above, becoming the main connectors for clusters of nodes which share the same type of medium and use the same rules for intermittent or on-demand listening to each other. In information technology vernacular these clusters are known as local area networks. The Routers take packets of data from a local network and “navigate” them from cluster to cluster as layer 3 main connectors.

In this mesh network the LCS’s function is critical: as a connector carrying the Sense-Decide-Act information flow to local clusters of manned and unmanned nodes, it supports the mission. LCSs could naturally become C2 flow Hubs, Bridges, and Routers. This contrasts with the usual information network, in which Bridges connect separate nodes and communicate with easily detected transmissions.

The LCS’s self-forming mesh networks are unique because their mobile nodes perform as Hubs, Bridges, and Routers all together. Any Router could operate as a Bridge and a Hub, as those become sub-functions of node-layered operations. A Gateway includes the Router function. A special significance of this is that the LCS now becomes essential for reconciling the different protocols of partner nations’ vessels and teams. Because of the LCS modular mission architecture, we can map these fundamental connector roles into the LCS C2 mesh network. Each LCS could be a Gateway, a Router, a Bridge, or a Hub, based on rapid Mission Module switching, or it could delegate some of these roles to nearby or remote vessels, depending on the situation. There will be constant reconfiguration of Mission Module functions onboard the LCS as well as reconfigured connections across the littoral mesh.
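A toy model of the role switching just described might look like the sketch below. The names are invented and it makes no claim to represent any fielded architecture; it simply ties each connector role to its nominal OSI layer and shows a node changing roles.

```python
# Hypothetical sketch of a mesh node that can be reconfigured among the
# Hub, Bridge, Router, and Gateway roles described above.
from enum import Enum


class Role(Enum):
    HUB = 1        # physical layer connector (OSI layer 1)
    BRIDGE = 2     # link-layer connector for a local cluster (OSI layer 2)
    ROUTER = 3     # moves packets between clusters (OSI layer 3)
    GATEWAY = 7    # reconciles different protocols at the application layers (5-7)


class MeshNode:
    def __init__(self, name: str, role: Role):
        self.name = name
        self.role = role

    def reconfigure(self, new_role: Role) -> None:
        # In the concept above this would correspond to a Mission Module swap
        # or to delegating the old role to a nearby vessel.
        print(f"{self.name}: {self.role.name} -> {new_role.name}")
        self.role = new_role


if __name__ == "__main__":
    lcs = MeshNode("LCS-1", Role.BRIDGE)
    lcs.reconfigure(Role.GATEWAY)  # e.g., when partnering with a coalition vessel
```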

A Maneuvering Littoral Mesh Network

One of the most remarkable changes that an LCS-based littoral mesh network brings is in redefining the component of “Act” to include “Maneuver” (Hughes, 2000). COL A. T. Ball’s concept of manned-unmanned teaming, which he devised in designing the ODIN Task Force for fighting IED threats (Task Force ODIN 2009), is similar in performance to an LCS serving as a flexible Hub, Bridge, Router, and Gateway in an LCS-centered, manned-unmanned force.

Such an LCS force operating in cyber-physical space will combine physical and cyber “maneuvering”. The goal of maneuvering is not only to achieve better attack or defensive positions but also to compose a better network within the LCS modular architecture. Here are two options:

  • Directionality of physical links in the cluttered littorals. Ship-to-ship networking is now dominated, for the most part, by omnidirectional communications. On a littoral battlefield, where intentional enemy attack or unintentional interference from neutral or friendly forces is highly probable, the use of highly directional, quickly switching links, from laser to 1.2-5.8 GHz mobile ad hoc network (MANET) radio platforms, could make the difference between success and failure. It is physical space maneuvering: getting “close enough” electronically through fast switching of highly directional links (a minimal selection sketch follows this list).
  • Relatively swift physical movement by an LCS with its manned-unmanned vehicles to different locations. This is a traditional type of maneuver that creates a non-traditional function: an additional set of virtually undetectable relays and new links that plug supporting vessels into the critical attack/defense data exchanges. It includes nested directional links to extend reach to one-hop neighbors and deceive the adversary. Within a few minutes the physical configuration changes, and the force confuses the adversary by suddenly appearing at a new location, seemingly as a new threat. Fast movement and grouping in tight clusters creates a temporary high-data-rate cluster in which scouting and firing data can be shared, or alternatively can create cyberspace honeypots that deceive the adversary’s countermeasures and foil a cyber-attack on our assets.
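As promised above, here is a minimal sketch, with hypothetical link names and numbers, of how a node might switch quickly among candidate directional links, preferring the least detectable link that is unobstructed and still meets the required data rate. It is an illustration of the idea, not an implementation of any fielded system.

```python
# Illustrative only: fast selection among candidate directional links
# (laser or 1.2-5.8 GHz MANET radio). Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DirectionalLink:
    name: str
    data_rate_mbps: float   # achievable throughput on this bearing
    detectability: float    # 0.0 (hard to intercept) .. 1.0 (easy to intercept)
    line_of_sight: bool     # is the narrow beam currently unobstructed?

def pick_link(candidates, required_mbps):
    """Choose the least detectable link that is usable and fast enough."""
    usable = [c for c in candidates
              if c.line_of_sight and c.data_rate_mbps >= required_mbps]
    return min(usable, key=lambda c: c.detectability) if usable else None

links = [
    DirectionalLink("laser-to-USV", 500.0, 0.05, line_of_sight=True),
    DirectionalLink("5.8GHz-to-patrol-boat", 50.0, 0.30, line_of_sight=True),
    DirectionalLink("1.2GHz-to-helo", 10.0, 0.50, line_of_sight=False),
]
best = pick_link(links, required_mbps=25.0)
print(best.name if best else "no usable directional link")  # -> laser-to-USV
```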


We have described warfare as a twelve-function process in which our aim is to attack the enemy effectively before he can attack us. We have shown that the interactions of all twelve functions going on simultaneously are especially dangerous when one must fight and win in the confined, cluttered waters off a coast. Defense of ships there is much harder than on the open sea, where defense in depth is possible and where the relatively uncluttered ocean has been the focus of the U.S. Navy’s successful campaign planning for decades. On the other hand, physical and electromagnetic concealment is easier in cluttered coastal waters. With practice, and aided by mesh networking, the U.S. Navy can learn to take advantage of the unique aspects of the littoral environment and take the offensive against enemy ships and aircraft.

We propose to shift Navy thinking from projection of power from a safe sea sanctuary to a new and different emphasis on offensive operations that forces the enemy to defend his warships and commercial vessels against our surprise attacks. We propose an operational and tactical concealment that compels the enemy to be ever-ready for our surprise attacks from above, on, or below the coastal sea surface at times and places of our choosing.

We then assert that the command and control process is the central one: it does the most to coordinate the six processes our commander controls while he simultaneously attempts to confound the six processes under enemy cognizance. We wish to enhance our power of command and control with a mesh network that is hard for the enemy to detect and act against. We have illustrated some specific ways to do that, all of which are ready for experimentation at sea.

Our fundamental conclusion is that until we deploy and become proficient with technologies that permit mesh networking, the U.S. Navy will not be ready to fight successfully in the cluttered waters off enemy coasts. We urge that the Navy advance quickly from experimentation with mesh network technologies to new combat doctrine, and then to training and proficiency, in order to restore our ability to go wherever and whenever needed against any 21st Century enemy who is aided by precision tracking and targeting, and has also practiced stealthy surprise attacks at sea. We urge a perspective that takes distributed lethality to sea with offensive tactics to force the enemy to respond to attacks when the choice of time and place is not his, but ours.


Hughes, W., Capt USN (Retired), (2000) Fleet Tactics and Coastal Combat, Naval Institute Press, Annapolis, MD.

Bordetsky, A., Dolk, D., and Mullins, S. (2015) Network Decision Support Systems: A Conceptual Model for Network Decision Support in the Era of Social and Mobile Computing, Decision Support Systems (in review).

Bordetsky, A. (2015) Networks That Don’t Exist, CALCALIST Newsletter.

Bordetsky, A. and Dolk, D. (2013) A conceptual model for network decision support systems. Proceedings of the 46th Hawaii International Conference on System Sciences, (CD-ROM), IEEE Computer Society Press.

Bordetsky, A. (2012) “Patterns of Tactical Networking Services,” in: Anil Aggarwal (Ed.) Cloud Computing Service and Deployment Model: Layers and Management, IGI, 2012.

Comer, D. (2014) Computer Networks and Internets, Sixth Edition.

Ball, A. Task Force ODIN,

TNT MIO After Action Report (2005-2010), Naval Postgraduate School, Monterey, CA.

Shrivathsan, S., Balakrishnan, N., and Iyenger, S (2009) Scalability in Wireless Mesh Networks, In: Sudip Misra (Ed.) Guide to Wireless Mesh Networks, Springer Publishing Co.

[1] Some readers will be reminded of John Boyd’s famous OODA loop. It is a useful benchmark for those who are familiar with it.

[2] W.P. Hughes, Jr., Fleet Tactics and Coastal Combat, 1999, Naval Institute Press, pp 174-177.

[3] Ibid pp 40-44; pp 180-202.

Today’s cyber world is getting more complex. For those charged with ensuring information systems remain secure, the question persists: how can we be certain we are taking the right actions when we continually hear of systems penetrated, information stolen, and resources plundered by nefarious cyber actors? Is our confidence in our cybersecurity efforts based on reality or something else? In Thinking, Fast and Slow, Nobel Prize winner Professor Daniel Kahneman explores the manner in which we think. To ensure cybersecurity efforts will be successful, we must first understand how we think, and how the way we think affects our ability to bring about real cybersecurity improvements.

Students from the Center for Information Dominance (CID) Corry Station (U.S. Navy photo by Gary Nichols/Released)

Thinking, Fast and Slow Concepts

In his book, Professor Kahneman addresses the two ways we think. Thinking Fast, identified as System 1, is how we quickly and easily put limited information together to tell a coherent story. Thinking fast is hardwired into our DNA. It’s what gives us the gut feeling that keeps us safe in some instances. Thinking Fast is what we are doing when we breeze quickly through new articles, like this one, looking for information that is familiar, instead of trying to figure out whether the concept really applies to us.

Thinking Slow, identified as System 2, takes serious mental effort. Thinking slow enables us to be factual, challenging accepted beliefs with all available evidence. Thinking slow is what gives us self-control, like not indulging in too much chocolate. Thinking slow takes real effort, which is why it is difficult to do all the time, or when we are fatigued. Thinking slow is what is necessary to grasp new concepts.

The unfortunate reality is we are all “lazy thinkers.” We rely on fast thinking for the large majority of activities in our lives. In many instances that is perfectly acceptable. In familiar situations, where we have a lot of experience, thinking fast usually works fine. However, in unfamiliar areas, thinking slow is what is needed in order to succeed. The complex and challenging world of cybersecurity is just such an area where it is critical to understand how our thinking could mean the difference between success and failure.

Two concepts brought forth in the book are critical in identifying where fast thinking can lead us astray. Those concepts are What You See Is All There Is and Cognitive Ease.

What You See Is All There Is (WYSIATI)

“System 1 (fast thinking) is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.” When we are thinking fast we tell ourselves a story that supports a specific belief. In creating this story, we grab whatever information will support a belief and don’t consider anything that may refute it. We are content with What You See Is All There Is (WYSIATI). Our ignorance of other evidence, which may be of greater quality, allows us to remain in bliss. “Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.” WYSIATI is fast thinking, and in the world of cybersecurity, this fast thinking can result in having faith in actions that do little to improve cybersecurity. Unfortunately, WYSIATI has a fast thinking partner in crime that also conspires to keep us ignorant. That partner is Cognitive Ease.

Cognitive Ease

Cognitive Ease is simply how easy it is to retrieve a thought from memory. Something we have heard or thought on many occasions will be retrieved more easily from memory. The more easily something is retrieved from memory, the greater our confidence that the belief is true, although the reality may be the exact opposite. For example, you could be performing a certain “best practice,” like patching software or upgrading operating systems. Labeling something a “best practice” can make you think this practice has been shown through data and analysis to result in significant improvements. However, if the initial conditions are different than those considered when developing the “best practice,” this “best practice” may only result in wasted resources. Regardless of the reality, the more you recall the “best practice” from memory, along with the story that you are performing it to improve cybersecurity, the greater your confidence will be that the best practice will improve cybersecurity. WYSIATI and Cognitive Ease are truly super villains. The super hero with an “S” on its chest that can save the day is Slow Thinking.

Slow Thinking to the Rescue

Slow thinking is what is necessary to end storytelling and discover the truth. Slow thinking is about reframing the problem in order to find information that can challenge existing beliefs. As slow thinking uncovers new and better information, Cognitive Ease will remind you of your confidence in prior beliefs. Your gut will be telling you that no additional information is necessary (WYSIATI). Slow thinking is what will give you the self-control to fairly assess the new information you have discovered.

Fortunately, the Department of Defense has leaders who encourage slow thinking. The Department of Defense Cybersecurity Culture and Compliance Initiative (DC3I) was signed in September 2015 by Secretary Carter and General Dempsey. The DC3I is based on “five operational excellence principles – Integrity, Level of Knowledge, Procedural Compliance, Formality and Backup, and a Questioning Attitude.” Similarly, in his Principles of Better Buying Power, Secretary Kendall instructs us that, “Critical thinking is necessary for success,” and we should “have the courage to challenge bad policy.” These three DOD leaders are asking us to think slowly. This article will examine three separate areas (Cybersecurity Training, Our Cyber Adversaries, and the Certification and Accreditation Process) to illustrate how slow thinking can lead to improved cybersecurity.

Cybersecurity Training

In order to utilize slow thinking to improve cybersecurity, we must first be able to recognize where we are thinking fast. Cybersecurity training is an area that can clearly illustrate the difference between fast and slow thinking.

A typical approach to training on cybersecurity is to track the percentage of people trained in a particular cybersecurity area. As the percentage of people trained goes up, the cybersecurity readiness of the workforce is assumed to be improving. This is a perfect illustration of WYSIATI. Limited information has been put together to tell a coherent story. To determine whether the story is fact or fiction, slow thinking must be used to actively look for information that can confirm or deny the assertion that training is improving cyber readiness.

Unfortunately, there are a number of potential flaws in the assertion that training is improving cyber readiness. The training could be incorrect or inadequate. The training may not actually provide the workforce with the skills required to improve cybersecurity. The workforce may not take the training seriously and may not actually learn what the training covers. In some cases, knowing what to do isn’t enough to ensure the correct actions are taken. In the area of spear phishing, which is still the most common way malicious software enters information systems, a person must first be able to recognize a spear phishing attempt before they can take the appropriate actions. Even if spear phishing training provides a number of examples of spear phishing attempts, when people are tired, in a rush, or simply don’t believe they will be spear phished, the chances of them taking the correct actions are not good.

Now, compare training on spear phishing to actively spear phishing your employees. If your employees know they will be spear phished, and held accountable for their performance, then they will be more on the lookout for suspicious emails, whether they are actual or training spear phishing attempts. By actively testing your employees with quality spear phishing attempts, you will compile real data on how the workforce is responding to this threat, and be able to provide additional training for those who aren’t. Training on spear phishing is like reading a book on running. Actively spear phishing employees would be like timing your employees for a run around a track. One is a Fast Thinking story. The other is Slow Thinking reality. Unfortunately, as illustrated by Professor Kahneman’s book, our default response in most situations is fast thinking. This can be especially true in circumstances where we have a problem that we are desperate to solve. We look for information that supports our success, and fail to look for, or disregard, information that would tell us we aren’t improving.
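As a rough illustration of the difference between the two metrics, here is a minimal sketch with invented numbers: the “percent trained” figure tells the fast-thinking story, while the measured click rate from an internal phishing exercise supplies the slow-thinking data.

```python
# Illustrative only: the two metrics discussed above, with invented numbers.
# "Percent trained" is the fast-thinking story; the measured click rate from
# an internal phishing exercise is the slow-thinking data.
def percent_trained(completed: int, workforce: int) -> float:
    return 100.0 * completed / workforce

def click_rate(clicked_links: int, test_emails_sent: int) -> float:
    """Share of test phishing emails whose links were actually clicked."""
    return 100.0 * clicked_links / test_emails_sent

workforce = 1_000
print(f"Trained: {percent_trained(970, workforce):.0f}%")  # story: 97% "ready"
print(f"Clicked: {click_rate(180, 1_200):.0f}%")           # data: 15% still click
```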

Outside Secretary Kendall’s door is a sign that states, “In God We Trust; All Others Must Bring Data.” One of his Better Buying Principles is “Data should drive policy.” In this circumstance, the data that we seek isn’t the simple, fast thinking question of how many people have been trained; it is the more difficult, slow thinking question: are our cybersecurity training efforts improving cybersecurity readiness? Only through slow thinking will we obtain meaningful data to drive policy and our cybersecurity efforts.

Our Cyber Adversaries

The Sony attack, the OPM breach, the Target theft, Edward Snowden, Private Manning – all involve information destroyed or stolen, resulting in losses of millions of dollars. The cyber threat is certainly real, as the incidents above all attest. Unfortunately, the above incidents, and the press coverage that brings these threats repeatedly to mind, can lead to the perception that any system can be exploited by our adversaries at any time. As we learned previously, thoughts that are repeatedly brought to mind are more easily remembered, which Professor Kahneman describes as Cognitive Ease. In the world of cybersecurity, Cognitive Ease can make us quite confident that every single system can easily be exploited by any random hacker. With limited time and resources to address every system, it is critical to gain a clear understanding of how vulnerable systems are, and the impacts that can result if systems are exploited. If we attribute capabilities to adversaries that they don’t have, or install unnecessary protections in systems that aren’t at risk, we not only waste resources, but we continue to remain ignorant of the actual threat to our systems. Let’s see if we can do some slow thinking on the challenges faced by our cyber adversaries.

Eliminating the Fog of War

Cybersecurity firms often demonstrate the damage that could be done to information systems if hackers got control of them. What needs to be recognized is that the people performing these demonstrations have full access to system documentation and the system itself, and can run tests repeatedly until they get a desired effect. These demonstrations are a perfect example of WYSIATI. The people performing them would have you believe (and often believe themselves) that if these demonstrations can be done, then surely our cyber adversaries can do the same thing.

The problem with demonstrations like these is that they eliminate the Fog of War, the uncertainty that is pervasive in almost every aspect of warfare. For our adversaries the challenge is much greater. System software and hardware configurations are constantly changing, so even if adversaries have system documentation, that information is often very perishable. How will our adversaries know if that configuration is still in the Fleet? How will they locate a system with that specific configuration so that they can test whether their cyber-attack will work? How will they conduct the test in a manner that won’t tip off their adversary (us) about a potential vulnerability? How will they gain the necessary access to test the attack? If they are able to locate the system and attempt their attack, how will they get the feedback needed to understand why a test may have failed?

These cybersecurity demonstrations show what is possible, with perfect knowledge, perfect access, and perfect conditions. What they don’t address is what is probable. Every step in the enemy kill chain is assumed to be perfect, which can then, of course, generate extremely significant consequences. Under those conditions, tremendous damage can be caused in non-cyber areas as well. For instance, any of our fighter planes could cause an amazing amount of damage if it were crashed into a carrier by an insider-threat pilot. While everyone would admit that is certainly possible, we all recognize that the probability of that occurring is extremely low, so we don’t waste valuable resources trying to create technical systems that could stop a rogue pilot from crashing their plane. In order to obtain value from our cybersecurity efforts we must understand all the challenges our adversaries must overcome. We must not focus on what is possible and then try to fix every associated vulnerability. We must use slow thinking and improve our understanding of what is probable in order to best utilize limited resources.
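A simple worked example, with invented step probabilities, shows why a kill chain that looks easy in a demonstration may still be improbable in practice: if every step must succeed, the overall probability is the product of the step probabilities.

```python
# Illustrative only: the steps and probabilities below are invented to show
# how a multi-step kill chain compounds. If every step must succeed, the
# overall probability is the product of the individual step probabilities.
from math import prod

steps = {
    "obtain current system documentation": 0.5,
    "locate a fleet unit with that exact configuration": 0.4,
    "gain access to test the exploit undetected": 0.3,
    "get feedback on a failed test and retry": 0.3,
    "execute at the operationally useful moment": 0.5,
}
p_overall = prod(steps.values())
print(f"Overall probability: {p_overall:.1%}")  # -> 0.9% despite "possible" steps
```

Even generous odds at each individual step compound into a small overall probability, which is the distinction between possible and probable drawn above.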

The Certification and Accreditation Process

The Department of the Navy spends a lot of time and effort certifying and accrediting information systems to ensure they have a certain level of cybersecurity. The WYSIATI approach to certification and accreditation is simply that, by using this process and tracking the correction of system vulnerabilities, information systems will become more secure, and that systems that are certified and accredited are better off in terms of cybersecurity than systems that aren’t.

Once again we have a fast thinking coherent story that seems to make sense. Let’s now willingly look for information that can compete with this story. In his book, Professor Kahneman describes an approach to enable Slow Thinking called a Pre-Mortem. The Pre-Mortem is an intellectual exercise, done prior to committing to a major initiative, that challenges people to think about how the initiative might fail or make things worse.

A pre-mortem for the certification and accreditation process might predict that the process could fail by taking such a long time that it significantly delays the implementation of cybersecurity capabilities. The pre-mortem could predict that due to unclear requirements and untrained personnel the certification and accreditation process might generate very little improvement in cybersecurity, wasting precious resources on something that is primarily a paperwork drill. In this situation, since the C&A process has been in place for a number of years, we can look for indications that support these predictions.

Little Value for the Effort

The Naval Surface Warfare Center (NSWC) at Dahlgren, Virginia is just one of the Navy’s centers for innovation. In 1920, only 17 years after the Wright Brothers flew at Kitty Hawk, engineers at Dahlgren launched the first remote control airplane. The plane crashed, but the boldness of such an effort, so soon after the first manned flight, is striking. Innovation remains a constant pursuit by the men and women who serve at Dahlgren NSWC today.

Recently, four of Dahlgren’s engineers, with more than 100 years of combined experience, noted their concern with the certification and accreditation (C&A) process. Over the course of 18 months they examined the resources and time required to get 43 information systems through the C&A process. These packages took 33,000 hours of work at a cost of $3.5M, and in the end all of the information system packages were certified. Yet all that administrative work generated only one minor technical issue that needed to be corrected. $3.5 million worth of time and effort generated almost no changes to the systems in question, and it took talented engineers away from the innovation, research, and development our country needs them to be doing.
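For scale, a quick back-of-the-envelope calculation using only the figures reported above works out to roughly 770 hours and about $81,000 per package:

```python
# The per-package arithmetic behind the figures above (43 packages,
# 33,000 hours, $3.5M). A quick check using only the reported numbers.
packages = 43
hours = 33_000
cost = 3_500_000

print(f"{hours / packages:,.0f} hours per package")  # ~767 hours each
print(f"${cost / packages:,.0f} per package")         # ~$81,400 each
print(f"${cost / hours:,.0f} per labor hour")         # ~$106 per hour
```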

Forgetting the Commander in Situ

“Commander in Situ,” meaning the commander in the situation, is a military term recognizing that the commander actually on scene, or in the situation, has the best understanding of what is going on and what needs to be done. This principle has been invoked over the years after horrible mistakes were made by those far from the scene who tried to dictate what must be done with imperfect knowledge of the situation. “Commander in Situ” is all about decentralized control: leaving control to those with the best information.

Unfortunately, the C&A process is a very slow, centralized process that pushes information system packages through to one approving authority. What should be recognized is that the farther the approval chain gets from the system requiring certification, the less knowledge and understanding decision makers have about the system in question. In many cases, the people who make the final approval decisions have no technical expertise on the systems they are approving. System experts have to educate those who give final approval of their system. In cases such as this, decisions that could be made, literally, in minutes by the local experts have taken over a year to run through the certification and accreditation process. The lack of local authority for cybersecurity matters is quite stunning. For example, the Dahlgren Naval Surface Warfare Center is one of the few organizations in the United States with the authority to handle anthrax. Dahlgren can also handle and detonate ordnance up to 10,000-pound bombs. Yet if engineers at Dahlgren want to connect a new microscope to a standalone laptop, that requires a process that can take over six months and involves routing paperwork through four other organizations to gain the necessary permission.

The Illusion of Authority to Operate

When an information system successfully completes the certification and accreditation process it is granted an Authority to Operate (ATO). The ATO authorizes a particular information system for operations, normally for a period of three years. So at two years and 364 days from the date the ATO is granted the system is still good, yet two days later it is no longer acceptable for operation. In some instances, when a system is deemed to be at higher risk, an Interim ATO is granted for a period of six months or less. How the lengths of these ATO periods are linked to reality is not clear. These information systems are being treated like cartons of milk with expiration dates. While we know the science behind why milk goes bad, there is no science behind why an information system should have an ATO of three years, two years, or six months. This is just a story we have been telling ourselves.

Disregarding Design Thinking

The movie The Imitation Game details the United Kingdom’s efforts to break the Enigma machine, the encryption machine the Germans used during WWII to send messages. The movie pits Professor Alan Turing against a group of mathematicians and code breakers. Each day, the mathematicians and code breakers scribbled furiously on paper trying to break the code, and each day they failed. Professor Turing was an early practitioner of design thinking. He realized he needed to design a solution that matched the problem at hand. Professor Turing eventually defeated the Enigma machine by creating a machine to do it. Unfortunately, like the mathematicians and code breakers in The Imitation Game, our certification and accreditation process is a slow, centralized, and bureaucratic solution, unfit for the very fast, decentralized problem of cybersecurity.

The examples and concerns I have brought forth above are not intended to blame or criticize, but instead to engage in the type of critical thinking that DoD leadership has encouraged us to do. In our efforts to address current cyber challenges we are all on the same team. The examples above are meant to illustrate the concepts of fast and slow thinking in order to best address these significant cyber issues. A fast thinking response to these concerns would be to dismiss them or dispute them. A slow thinking approach would be to willingly investigate them and try to confirm them. New processes should be developed for those concerns that are confirmed.

High Velocity Learning

Recognizing that we must respond to a changing global environment, in January 2016 the Navy issued A Design for Maintaining Maritime Superiority. In the document four lines of effort are established, one of which is to “Achieve High Velocity Learning at Every Level.” The objective of this effort is to “Apply the best concepts, techniques and technologies to accelerate learning as individuals, teams and organizations.” Our Chief of Naval Operations, Admiral John Richardson, has made it clear that the US Navy will be a learning organization. But to accelerate our learning we must first understand how we think. In the end, we should recognize that what we need to effectively address our cyber challenges, as well as achieve high velocity learning, is slow thinking.

The above views are solely my own and have not been endorsed by the Navy. All quotes are from Thinking, Fast and Slow by Daniel Kahneman, a tremendous book that I highly recommend.


To: IRGC Commander Mohammad Ali Jafari
CC: High Council of Cyberspace
From: IRGC Cyber Army Major General Esmail Madani
Subject: Operation Cyrus
Date: Oct. 25th, 2021

 How to Defeat America and Win Back the Persian Gulf: Operation Cyrus

America’s military center of gravity is, and has always been, public support for its endless wars. America’s enemies in Vietnam, Iraq, and Afghanistan understood this well. They drained public support by killing Americans and their puppets in hit-and-run attacks, forcing them to spend ever more blood and treasure to accomplish their ill-defined political goals. The public became fatigued and sought an end to the losses. They then elected politicians who promised to bring American forces home and end the wars of the day. Once the American troops left, their puppets collapsed. With our new cyber operational capabilities that can target America, we can now employ a variation on this well-proven strategy with far less risk to consolidate our control of the Persian Gulf: Operation Cyrus.

The Time To Strike is Now

President Trump is considering sending troops to support the Kingdom of Saudi Arabia after Daesh’s recent victory over Saudi forces around Jeddah. Signs of an imminent Saudi collapse are everywhere. Local Shiite militias have seized Bahrain and eastern Arabia. The other Gulf monarchies are shutting their borders and using their troops to impose martial law. Hundreds of thousands of Sunni refugees are crossing the Red Sea into Egypt on their way to Europe. While our enemies are weak and divided, we must seize the opportunity to annex Bahrain and eastern Arabia. We must immediately deploy IRGC units to take control of these areas. The only thing holding us back is the potential intervention of the United States.

The Means: Operation Cyrus

After decades of studying cyber weaponry and tactics in response to the Stuxnet catastrophe and the Chinese hack of OPM, our Cyber Army has found a way to gain control of the Social Security Administration’s records, which are tied to hundreds of billions of dollars in payments to almost 50 million seniors. Once Operation Cyrus is approved, our Cyber Army will shut off all payments to seniors. This will cause widespread panic amongst a huge voting bloc and reveal a previously unknown vulnerability. Once the government realizes what is happening, we will send a private message to President Trump’s administration that we are prepared to let them regain access to their records once their military ships and airplanes have withdrawn from the area.

If they do not relent, we will begin destroying their records. Trump will have to deal with millions of angry voters or embarrassingly admit that Iran now controls one of their most important data systems, stoking public fear of follow-on attacks. If this does not bring enough pressure on Trump’s administration, our Cyber Army is well prepared to target the IRS or Medicare next, significantly impairing the functioning of their government and society.

Why This Will Work

Our cyber attacks can accomplish the same economic and political disruption as a strategic bombing campaign. Our models show that this, unlike ballistic missiles or martyrdom operations, will not provoke a confrontation with the American military. The attack will shock Americans’ trust in their government to an unprecedented degree, yet it will not produce mass casualties or provide images of burning buildings or ships that might raise the ire of the American people and drive them to demand war. Also, President Trump is obsessed with his poll ratings and will do anything to avoid unpopularity. His victory in the 2020 election was based on his criticism of President Hillary Clinton’s poor handling of the Syrian and Libyan interventions, indicating the public’s reluctance to enter another Middle Eastern war. The American people have never experienced the massive and prolonged disruptions and deprivations of a war on their homeland. The threat of indefinite hardships without a clear casus belli will deter the American public and political leadership from going to war. To deconflict Operation Cyrus with any ongoing Chinese and Russian operations, the IRGC representative to our Cyberspace Shared Interests Working Group will notify all parties.

Why Other Plans Will Not Work

Some in the Supreme National Security Council say we should launch martyrdom operations on their homeland or target US business interests or embassies abroad. My friends misunderstand the fundamental nature of the American people. Pearl Harbor and 9/11 demonstrate that mass casualty events only prompt the American people to support politicians who want war, the very thing we are trying to avoid.

Guerrilla strategies have proven effective against American forces abroad in the past, but Operation Cyrus is not without risks or costs. If the Great Satan rises to make war, the United States has a potentially inexhaustible supply of men, women, and materiel to throw at any adversary. Destroying their will to fight, without targeting their means to fight, is the only way to achieve victory. Operation Cyrus can accomplish this with much less cost and risk than other strategies.

Avoid Their Strengths, Strike Their Weaknesses

All of warfare is an effort to maneuver and strike the enemy at his center of gravity. Operation Cyrus gives us a means to avoid battle in the air, sea, or land, where the Americans are strongest, while striking them where they are most vulnerable.

The modern age has not just made the world flat, it has made it transparent. Just as the internet lowered the barriers to entry for reporting and commentary on everything from pet cats to grand strategy, so too has it changed the ability of military forces to move with any degree of confidence that they can control who knows where they are and when.

A great example of this real-time OSINT has been around for centuries, but what has changed is that the timeline has moved from weeks to instantaneous. If there is an IP connection, you are being watched in real time. There is no way to block it; you just have to live with it.

For Mr Guvenc, 51, and a group of four friends, the parade of military hardware through their city is irresistible. Sipping coffee from a stunning balcony with a panoramic view of the channel, they explain that the photographs they share online are pored over by military strategists and analysts around the world.

“Usually these ships are out of sight. We don’t know what they are doing,” explains Devrim Yaylali, 45, an economist who has been spotting ships for nearly 30 years. “The Bosphorus or the port is the only place you can see them.”

His friend Yoruk Isik, 45, an international affairs consultant, chips in: “Here, you can be in Starbucks with an espresso and a ship is literally 250 metres away.” The sharp bends and strong currents in the channel means that the boats must slow right down to manoeuvre, making them easy to photograph. “There’s no other place on earth where you can capture them so well.”

That is the easiest place to do it, but the phenomenon applies worldwide. Sure, you can sneak a B-2 into Diego Garcia if you want to. Are you going to be able to hide a large-scale movement anywhere else?

Real time video is no longer a competitive advantage that we have. It is quickly becoming a global commodity.

Smartphones, drones, commercial imaging satellites, and simply the byproduct of a much more populated, connected, and vigilant world: we are all under the all-seeing eye. That doesn’t have to be a bad thing.

There are some positives to this. You can have more effect with well-positioned presence ops, feints, or, for the highly sophisticated, “gaslighting” the enemy so their OODA loop becomes a Mobius strip.

That last bit is the fun part. What isn’t fun is knowing that you have to assume that wherever you are, if you can see something besides water, even a small terrorist cell may know right where you are in real time. Regardless of what you may or may not see, you may be seen anyway.

Please join us at 5pm EST on 6 March 2016 for Midrats Episode 322: Radical Extremism, Visual Propaganda, and The Long War:

In the mid-1930s, Leni Riefenstahl showed the power of the latest communication technology of her time to move opinion, build support, and intimidate potential opponents. The last quarter century’s work of Moore’s Law on the ability to distribute visual data worldwide in an instant has completely changed the ability of even the smallest groups with the most threadbare budgets to create significant influence effects well inside traditional nation states’ OODA loops. How are radical extremists using modern technology, especially in the visual arena, to advance their goals, who are their audiences, and how do you counter it?

Using as starting points the Strategic Studies Institute and U.S. Army War College Press publications “Visual Propaganda and Extremism in the Online Environment” and “YouTube War: Fighting in a World of Cameras in Every Cell Phone and Photoshop on Every Computer,” and the Small Wars Journal’s “ISIS and the Family Man” and “ISIS and the Hollywood Visual Style,” our guests will be Dr. Cori E. Dauber, Professor of Communication at the University of North Carolina at Chapel Hill, and Mark Robinson, the Director of the Multimedia Laboratory at the University of North Carolina at Chapel Hill.

You can join us live or listen later by clicking here or pick the show up later from our iTunes page.
