Words like “panic,” “disaster,” “terrible,” and “irresponsible” are being thrown around like confetti.
Do I agree with the commanding officer’s decision to take matters into his own hands? No. He was one man acting on his own intuition rather than taking part in a concerted effort with proper executive notification. In an organization as large as the US military, no test should be conducted without extensive feedback and forethought.
It was also unfair to involve the Thrift Savings Plan in an attack its administrators knew nothing about, and then leave them to clean up the messy backlash.
But let’s get down to brass tacks here: we can’t necessarily call the commander’s actions “irresponsible” just because some folks panicked or felt like guinea pigs.
Now, certainly a key issue in this story is the confusion and alarm that a phishing awareness test can cause. It is in people’s nature to become concerned when money or benefits are involved.
But testing cannot be as simple as “Let’s do whatever will upset people the least.” Going into any security awareness analysis with that attitude is unlikely to produce valid, realistic results.
It’s the classic struggle: employee vs. management, little guy vs. “Big ol’ DoD.”
Those who will be tested say that when a test is done with false motivators, it can lower people’s confidence in real programs and systems. People don’t like to feel used, and they want to have faith in the systems that protect their assets. As Matthew Biggs said in the article, “The big government bullies are just pushing us around and using us as guinea pigs.”
Yet the testers say that security awareness evaluators need to use testing methods that are representative of what malicious actors would actually do. After all, social engineers don’t pull punches. Tests that don’t use real-world scenarios do not give a true picture of a company’s security.
It really comes down to the severity of the threat. Before each security test, the powers that be must ask, “How serious are our threats, and what underhanded methods will they use to hurt us?”
We have experienced numerous situations where our testing methods caused minor panic or upset the people we were impersonating—but it was necessary to discover security holes through which social engineers could gain access to extremely sensitive data or other high-risk assets.
Deceptive testing and alarming claims to motivate users might not always be required, but sometimes they are. When the threat is high and the risk is great, ruffling a few feathers is sometimes the only way to get the job done.