Kevin Mitnick, the infamous hacker and social engineer turned security consultant, gave a presentation today at the History Conference at the Naval Academy. He gave numerous examples of extracting information from people and companies by using their own trust and knowledge against them. His demonstrations likely startled many audience members with the range of methodologies and, more importantly, the success rate.

Some may look at the seemingly endless list of ways attackers can obtain what they’re looking for and throw their hands up in despair. It’s important to take a step back and consider some important factors in responding to, and hopefully mitigating, attack vectors.

Technology alone won’t save you. If you fight technology with technology, you’ll lose. All the firewalls and intrusion detection systems in the world can’t guarantee that networks won’t be breached. There’s no such thing as an impenetrable system, and no such thing as bug-free software. Kevin’s demonstration of exploiting vulnerabilities in widely used commercial software proves this. Moreover, this isn’t just software used in the private sector; many of the exploits he demonstrated take advantage of software that has become an integral part of the way the military handles its information. As if this weren’t enough, the files used to carry out every successful exploit passed antivirus scanning without incident, and were run on fully patched, up-to-date systems.

That’s not to say technological security measures are pointless; far from it. Strong passwords, multi-factor authentication, limited access permissions, and strict data management are as important now as they’ve ever been. Placing full faith in their protection, however, is misguided.

Legislation and policy alone won’t save you. The first instinct of most government and private agencies is to react to new threats with new rules. Congress will propose laws, companies will write new usage regulations, and in the end they’ll do little to stem attacks. Punitive action will deter the low-level players for whom it isn’t worth the risk of fines or prison, and employees will perhaps comply with increased restrictions on their behavior. Those with the determination and skill will get what they’re seeking, and many of them won’t be caught.

In fact, regulations have an unintended consequence: complacency. In an interview with Online Editor Sam Lagrone, Mitnick pointed out that PCI DSS, for example, gives corporations a checklist for protecting client financial data, allowing them to achieve legal compliance while avoiding the larger expense of comprehensive security. Companies will spend only as much money as is necessary, and regulations spell out that exact necessity, whether it’s comprehensive or not. Companies feel secure in following the rules, and when the threat evolves beyond those rules a company lies vulnerable because it didn’t remain vigilant. Similar risks exist inside military structures, where policies restrict certain behaviors but can’t account for new and inventive attacks, resulting in training that focuses on symptoms rather than targeting the root problems.

Again, this isn’t to say legislation is pointless. It is very useful in punishing those who are caught, and it’s a good incentive for organizations to take measures to protect data that they may otherwise be doing little to protect. Yet rules are inflexible, slow to change, and expensive to enforce. The attacks against which they are designed to protect are anything but.

The military needs to take security training seriously. Anyone who’s currently serving in the military or works for the Department of Defense has probably gone through basic computer security education, often consisting of nothing more than a one-hour self-guided online course once a year. Nobody can reasonably deny that the military is effective in training its people to execute their missions – they train hard, they train continuously, and the result is a force for whom reacting to threats becomes instinct. Yet when it comes to protecting computer systems and preventing data leakage, security appears to be treated as more of an afterthought than a real training regimen. With nearly all the information the military handles stored digitally, every servicemember should be trained continually and tested on their response to threats on a regular basis.

Kevin Mitnick proposed this type of approach as part of his presentation (typically given to corporate managers and executives), and this component of his talk is especially germane to military operations. It doesn’t have to be especially complex: an email intentionally crafted with inaccurate details that should raise a red flag for trained users, containing a link that, if they fall for the attack, informs them of their mistake; or a random phone call from someone posing as a superior and requesting details about a mission or personnel, to verify that the proper procedures for confirming identity are followed and suspicious requests are flagged. It would cost more money, but it’s a crucial part of OPSEC and information assurance that isn’t being given due consideration.
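The email variant of such an exercise can be sketched in a few lines. This is a hypothetical illustration, not anything Mitnick presented: the names, domains, and link below are invented, the message is only composed (never sent), and a real program would hand it to a mail server and log who clicked the training link.

```python
# Hypothetical sketch of a phishing-awareness test email. The message is
# built but never sent; every name, address, and domain here is made up.
from email.message import EmailMessage

def build_training_phish(recipient: str, tracking_id: str) -> EmailMessage:
    """Compose a simulated phishing email with deliberate red flags."""
    msg = EmailMessage()
    # Red flag 1: the "from" domain is a near-miss of a legitimate one
    # (note the digit 1 in place of the letter l).
    msg["From"] = "IT Support <support@examp1e-corp.net>"
    msg["To"] = recipient
    # Red flag 2: manufactured urgency in the subject line.
    msg["Subject"] = "URGENT: Your account will be locked in 24 hours"
    # The link leads to an internal training page that reveals the exercise
    # and records which user clicked, via the tracking ID.
    link = f"https://training.example.org/gotcha?id={tracking_id}"
    msg.set_content(
        "Dear user,\n\n"
        "We detected unusual activity on your account. "
        f"Verify your credentials immediately: {link}\n\n"
        "IT Support"
    )
    return msg

msg = build_training_phish("jsmith@example.org", "exercise-042")
print(msg["Subject"])
```

Users who click learn immediately that they fell for a simulation; users who report the message instead demonstrate exactly the instinct the training is meant to build.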

The bottom line: all hope is not lost. There’s plenty that can be done to preserve military networks and defend against data leakage both from the outside and from the inside. The weakest part of any computer security strategy is always the user, and we should be putting more emphasis on doing everything we can to strengthen it.

Posted by BM2 Christiaan Conover, USCGR in Cyber, Naval Institute


  • “Nobody can reasonably deny that the military is ineffective in training its people”

    (Take out one of those negatives.)

  • TheMightyQ

    As usual, REAL training, not formulaic, check-in-the-box, TORIS T-FOM-style training is the answer. Mr. Conover hits the nail on the head. Real training is difficult, annoying, and has consequences, and must be prioritized for it to have any real effect. For example, during Exercise Solid Curtain/Citadel Shield, massive traffic backups were noted outside of military bases that were running a real-deal force protection exercise (bases going to and maintaining FPCON C for 24-28 hrs). Some may see this as detrimental – contractors trying to get on base for a repair job, for instance. However, for those two days, the bases were given over to conduct the exercise. ADM Harvey made a statement with that mandate, and it was well received by those who actually take security seriously. This sort of prioritized training should be held up as an example to follow in all aspects of the Navy.

  • BJ Armstrong

    It wasn’t a history conference.

  • A very good post. The most important step of hacking, and the first step, is reconnaissance…gathering intel…and you can’t use technology to control all the info out there…especially with this wonderful tool called the Internet which basically allows companies to outsource our jobs overseas and hackers to link servers all across the world to bring down your corporate website. LOL

    Got to be careful what you post on the Internet. Hackers routinely scour the web using Google and other search engines to gather relevant info about their target.

    The safest practice is to not post any personal info online nor share too many details about your work, etc., otherwise hackers can use it with Google to look for a way into a system you are connected with, or even put software on your PC to use as a middle man to attack someplace else.

  • Bob-RJ Burkhart

    Seems like another case of NO NEW “Lessons Learned” from prior Anti-Social Engineering “lessons learned” … For example, review:

  • Bob-RJ Burkhart

    >> The weakest part of any computer security strategy is always the user, and we should be putting more emphasis on doing everything we can to strengthen it << (aka PeopleWare?) … Also see:

  • eric feinberg

    Using a social engineering technique EyeOn we have been able thru Facebook to identify Chinese Enterprises that are creating Fake Facebook Accounts and Paid Sponsored Ads Linked to Websites Selling Counterfeit Merchandise that Load Malware