Misconceptions, Battle Scars, & Growth

Tim MalcomVetter
10 min read · Jan 18, 2019

I’ve been doing InfoSec stuff for ~20 years now (warning: time sneaks up on you!) and every 3–5 years I arrive at a better understanding of the subject. Just when I think I’ve got security all figured out, some new serendipitous moment reminds me that I do not, but that I am a little closer.

20 years ago, I thought “perfect computer security” was possible if you just figured out the correct “recipe” of stuff for the technical problem you were trying to solve. If this is you, I’m sorry for the spoiler: it’s not. It didn’t take long for me to shatter that misconception. Fortunately, there were some already-jaded people out there at that time to open my eyes. That, and the never-ending stream of bugs, followed by the bizarro world of side-channel attacks that remind you there never was a spoon.

Probably ~17 years ago, I thought a little crypto (e.g. SSH/VPNs) and a firewall put you in a good spot. Now my red team regularly abuses those types of services with ease, and entire debriefs are dedicated to the nuances of why details matter in that space.

13 years ago or so, Gary McGraw’s book series convinced me that “network security” is really just “software security.” After all, “appliances” (kids: that’s what we called “the cloud” back then) were just hardware with a Von Neumann architecture (yes, all that Comp Sci theory has a place; if you’re interested, read more here: https://en.wikipedia.org/wiki/Von_Neumann_architecture), i.e. CPUs, RAM, and IO with a software stack. I remember having knock-down-drag-out debates with colleagues who thought there was something magical about an Intel-based 1U pizza-box “appliance” with a special badge from a security startup on it. To them, it was “hardware” and somehow immune to all the conventional security vulnerabilities of the day. But to me (because Gary McGraw stole my blissful ignorance), it was just a Linux host with a custom “pile of software” that its developers probably couldn’t have fully comprehended — inherited vulnerabilities and all. This “software security” mindset was absolutely eye-opening for me and still remains one of the most important shifts in my thinking (thank you, Gary).

About 11 years ago, Mikko Hyppönen (back when he looked … EXACTLY the same as he looks today — I don’t think he ages at all) at F-Secure published graphs of how many new virus signatures were developed each year, and the year-over-year growth looked exponential, if not asymptotic. I was immediately convinced signature-based AV (blacklisting) was dead and that whitelisting was the only scalable future — and I told all my colleagues just as much (sorry?). Ironically, more than a decade later, enterprises still struggle to scale whitelisting across very many endpoints.
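To make the contrast concrete, here is a minimal sketch of the whitelisting model (my illustration, not any vendor’s product; the Approved set and MayExecute helper are hypothetical names). A blacklist has to enumerate every bad hash ever seen, while a whitelist only enumerates the comparatively small, slow-changing set of binaries you actually run:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

class AllowListDemo
{
    // Hypothetical allowlist: SHA-256 hashes of the binaries we approve.
    // In a real deployment this would come from a signed, managed policy,
    // not a hard-coded literal.
    static readonly HashSet<string> Approved = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
    {
        // SHA-256 of an empty file, used here only as a placeholder entry.
        "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855"
    };

    static string Sha256Of(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
    }

    // Default-deny: anything not explicitly approved is blocked, so brand-new
    // malware fails without anyone writing a signature for it. A blacklist
    // inverts this logic and has to chase every new sample forever.
    static bool MayExecute(string path) => Approved.Contains(Sha256Of(path));

    static void Main(string[] args) =>
        Console.WriteLine(MayExecute(args[0]) ? "allow" : "deny");
}
```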

Despite the failure of AV signatures (a prevention control), I was still convinced that security programs should focus on PREVENTION over DETECTION or RESPONSE. After all, detection and response are equivalent to a preventative control failing, and failure isn’t an option. Who wants to admit defeat? (We’ll come back to that one in a second.)

Right about 10 years ago, I discovered the importance of INCENTIVES for an enterprise security program. I discovered I couldn’t “move the needle” to improve security from a policy/program perspective until I figured out how to eliminate the pain points for stakeholders, or at least align my goals with theirs. It’s so obvious now, but it wasn’t before then.

Back then I really wanted to focus on the AppSec problem. I wasn’t the first to see attacks moving away from exploiting vulnerabilities in commodity products (like Operating Systems) toward web apps, but I was definitely an early adopter of the concept — again, probably due to Gary McGraw’s “software security” influence over my thinking. Around that same time, commodity attacks were starting to become less frequent, thanks to secure development initiatives like the famous one kicked off at Microsoft by a Bill Gates memo. But outside the tech industry, nobody wanted AppSec yet. They could barely handle patching commodity products, and most of those, except Microsoft Windows, did not even have an update mechanism, much less an enterprise-ready one. What people wanted was fewer passwords to remember and faster onboarding for new personnel. So I moved to what they wanted as the lever to create the most positive change: identity management.

Remember, this was back in the Stone Age (10 years ago), when it was common for an average enterprise “information worker” to have 10+ user accounts and passwords for various systems just to get on the network, access email, the web, and the typical business applications for their jobs. Very few Identity and Access Management vendor solutions to streamline this access existed back then. The ones that did exist each cost a kajillion dollars per year, and I couldn’t get anyone to budget for that, despite everyone wanting fewer passwords. I saw a business problem: every year at benefits enrollment, the company had an influx of password reset calls to the HR helpdesk, and the business knew PRECISELY how much it cost them. So I partnered with them and built a homegrown IAM system, offering to reduce their password reset problem if they would help solve the problem of terminated personnel still having active user accounts. Then our incentives aligned. HR pushed real-time new hires, terminations, and job changes to my homegrown app. Within a few weeks, we had the onboarding and termination problem solved. After about a year of directory consolidation projects, we had everyone down from 10+ passwords to 1. Win, win, and I learned the power of incentives.
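I no longer have that code, but the core of the system was conceptually about this simple. A hypothetical sketch: the HrEvent shape and the IdentityStore helpers are stand-ins for the real HR feed and directory plumbing, not the actual implementation:

```csharp
using System;

// Hypothetical shape of the events HR pushed to the homegrown IAM app.
enum HrEventType { NewHire, Termination, JobChange }

class HrEvent
{
    public HrEventType Type;
    public string EmployeeId;
    public string NewRole; // only meaningful for JobChange
}

// Stand-in for the consolidated directory; in a real system these calls
// would hit LDAP/Active Directory and the downstream business apps.
static class IdentityStore
{
    public static void Provision(string id) => Console.WriteLine($"provision accounts for {id}");
    public static void Disable(string id) => Console.WriteLine($"disable accounts for {id}");
    public static void Reassign(string id, string role) => Console.WriteLine($"entitlements for {id} -> {role}");
}

class IamProcessor
{
    // The incentive alignment in one function: HR's password-reset pain
    // funds the loop, and security gets same-day deprovisioning out of it.
    public static void Handle(HrEvent e)
    {
        switch (e.Type)
        {
            case HrEventType.NewHire:
                IdentityStore.Provision(e.EmployeeId);   // day-one access, fewer helpdesk calls
                break;
            case HrEventType.Termination:
                IdentityStore.Disable(e.EmployeeId);     // no active accounts for departed staff
                break;
            case HrEventType.JobChange:
                IdentityStore.Reassign(e.EmployeeId, e.NewRole); // access follows the job
                break;
        }
    }

    static void Main() =>
        Handle(new HrEvent { Type = HrEventType.Termination, EmployeeId = "E12345" });
}
```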

Coincidentally, that event is also when I saw how valuable development skills are in InfoSec. Back then, most InfoSec people came into the field from network administration jobs, and development was limited to scripting (if they developed anything at all). Today, it’s common for a large portion of big InfoSec programs to be developer roles.

About 8 years ago, my “software security” bias led me to go into full-on opposition to “network penetration testing.” I would have told you “there’s no such thing” since it’s really just attacking services, and a service is an “application” that just happens to be written by the vendor of a commodity product like an Operating System. While technically true, I threw the baby out with the bathwater.

Probably about 6 years ago, I was in the throes of believing AppSec was the tip of the spear of the InfoSec conflict. Pay no attention to the reality that real attacks choose easier paths, like phishing. And definitely do not acknowledge how vulnerability discovery against a live, production (non-commodity) web application is actually quite noisy, with all of those unexpected extra HTTP requests. I placed enormous value on the super-complicated, puzzle-solving exploits that chained the exploitation of multiple vulnerabilities together to achieve a certain level of access. This bias also kept me from considering the operational aspects of an adversary actually moving laterally toward a target, persisting within the environment, and exfiltrating valuable data out of it. I ignored the fact that this is rarely the process by which enterprises are breached — at best, it’s just the initial access phase.
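To picture that noise from the defender’s side, here is a hypothetical sketch (the threshold and log shape are made up purely for illustration): even a crude counter over web logs lights up when a scanner starts probing for pages that don’t exist.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ScannerNoise
{
    // Crude, purely illustrative heuristic: a browsing human generates a
    // handful of 404s; a scanner guessing at paths generates hundreds.
    static IEnumerable<string> NoisyClients(
        IEnumerable<(string Ip, int Status)> requests, int threshold = 100) =>
        requests.Where(r => r.Status == 404)
                .GroupBy(r => r.Ip)
                .Where(g => g.Count() >= threshold)
                .Select(g => g.Key);

    static void Main()
    {
        // Simulated access log: one normal user, one scanner.
        var log = new List<(string, int)> { ("10.0.0.5", 200), ("10.0.0.5", 404) };
        log.AddRange(Enumerable.Repeat(("203.0.113.9", 404), 500));

        foreach (var ip in NoisyClients(log))
            Console.WriteLine($"scanner-like noise from {ip}");
    }
}
```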

I still see this bias in others in the security industry, as recently as the Equifax breach, where gangs of Monday Morning Quarterbacks nitpicked the missing patches for Apache Struts while completely forgetting that it was only the initial access vector — the attackers didn’t simply throw the exploit at a web server so that a few milliseconds later PII would rain out of the app like quarters from a Vegas jackpot. After they gained access, they had to follow the OODA loop. Observe: “Where am I?” Orient: “Where is the target?” Decide: “How do I get closer to the target?” Act: actually execute the step to get closer to the data, collect it, and exfiltrate it from the environment.

Another realization has happened since moving into Red Teaming full-time 3 years ago: the value of a good detection and response program has MASSIVELY grown on me. So much so that my new (current) bias is that I might very well tell you that if your security program does not have good Detection or Response, you do NOT (yet) have a security program at all. I would not have said that years ago.

To be clear, if you were to ask me 10 years ago, I most definitely would have said “yes, of course detection and response are important” … but I would not have comprehended just how important. I didn’t fully “get it” back then.

No organization can eliminate all vulnerabilities. It’s impossible. In fact, next year’s 0day vulnerabilities are probably already deployed in your environment — some directly facing the Internet. We just haven’t discovered them yet. You cannot eliminate vulnerabilities, but you can most certainly reduce their value to effectively zero.

Dan Geer, one of my early inspirations (go find his old talks if you can), taught me this a dozen years ago, but I failed to fully comprehend his warning. (If you’re a mere mortal like me, you’ll have to listen to Dr. Geer talk twice: once to hear the words, and a second time to replay them and partially comprehend what he just said. I would rarely say [x] is important for all security professionals to know, but if I did, Dan’s talks would be near the top of the list.) In one of those talks, I’ll never forget that he said (paraphrasing from memory here):

“There are 2 knobs in security: 1 to prevent failures and 1 to make failures meaningless.”

His example was how, eons ago in the physical security realm, banks moved from armed guards (an obvious “prevention” control) to silent alarms (“detection”), as well as specially prepared bags of money with electronic trackers, exploding ink canisters, and dollar bills with known serial numbers (each of those is a “response” control that makes the theft much less meaningful).

I heard Dan Geer say all of that probably more than 12 years ago, but it never resonated as strongly with me until I got into Red Teaming. It resonates so much now that I use his “2 knobs” analogy regularly during debriefs (my colleagues may remember me doing this twice in the past 2 weeks).

To continue the pattern: if there WAS a false bias that needed shattering at every step so far, it’s reasonable to suspect there probably IS one now as well.

What I think I’m observing as the next phase is the realization of just how out of touch the pentest/red team community (self included) has been with the most common enterprise infiltration method: malware. Yes, red teamers typically acknowledge that phishing “just works,” and we have shared a lot of overlap with malware in terms of initial execution techniques (i.e. methods for constructing maldocs), but everything after that point is different.

If we consider that enterprises’ security perimeters are breached essentially daily (hourly?) by commodity malware playing the signature-evasion game to stay ahead of Anti-Virus, then we have to acknowledge that commodity malware is the single most successful attack strategy in operation today. When red teamers think about this malware, we typically think of it as a software artifact coded to execute a limited set of tasks (some of them basic) in a very automated way. We do not consider it to be connected to a live human operator on the other side of the Command-and-Control connection; we consider it a bot operating algorithmically as a child of a mostly automated parent. When red teamers compare that malware to themselves or other penetration testers, we tend to consider ourselves more formidable and flexible, and the malware limited and “just” a nuisance.

As a result, I find that very few pentest/red team people can rattle off current malware family/variant names and discuss their TTPs. They can definitely talk your ears off about Cobalt Strike or escalating privileges in Active Directory — but not about the tools of the most common adversaries (the ones we are supposed to be simulating or even emulating). One psychological explanation might be that a red teamer’s gain from hacking is the joy of manually manipulating the target host, whereas crime-ware authors treat this like a paycheck — they don’t want to babysit their access, they want it to collect monetizable data in a way that generates an illicit revenue stream for them. At scale, malware is a much bigger problem than the lone human operator (or small team of operators), even if a detected human operator is a scary, high-fidelity event for the defender/responder.

What I believe we are seeing now, however, is the awakening of a sleeping giant: the move from PowerShell to C# is encouraging many members of the offensive side of security to rethink the importance of development skills. For years, pentesters and operators have been encouraged to have at least scripting skills, commonly visible in the form of bash and python, with a recent influx of PowerShell over the last 3–4 years. With many EDR vendors moving to detect malicious PowerShell, many of these “combat coders” have begun the transition to C#, which uses the same .NET namespaces as PowerShell under the hood, but requires real development tools (Visual Studio) and comes with all the features of a richer development language. So the maturity of the tooling is increasing. Now, with Microsoft building AMSI support for all .NET languages (and since C# code can simply be reflected out of .NET binaries), I predict many (but not all; some will not put in the effort or discipline required to learn the skills) will eventually make the hardcore move over to C/C++.
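A benign illustration of what “the same namespaces under the hood” means: the PowerShell one-liner an operator would type and its C# equivalent both land on the exact same .NET class.

```csharp
using System;
using System.Net;

class SameNamespaces
{
    static void Main()
    {
        // In PowerShell, an operator would type:
        //   [System.Net.Dns]::GetHostEntry("example.com").AddressList
        // The C# below resolves to a call on the exact same .NET class;
        // only the tooling changes (a console vs. Visual Studio and csc).
        IPHostEntry entry = Dns.GetHostEntry("example.com");
        foreach (IPAddress address in entry.AddressList)
            Console.WriteLine(address);
    }
}
```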

As they move to less forgiving languages like C++, I think the offense community may start to recognize and respect the tradecraft in malware families that have been living in this space for 20+ years (if for no other reason than that you may simply want to borrow their code). As a result, we may see more red teamers who can rattle off malware variant/family names and recite how they operate, with the same reverence I see from the top blue teamers. This will be a sign of maturity — some of my colleagues at various orgs already do this.

TL;DR: the take-away for newcomers is that there is always something more to learn. Do not let your prior biases limit your future perspective, but also recognize that your prior experience may be unique and give you an advantage over your peers. In all things, find the balance, and never stop learning.

Also, many of the things the current “thought leaders” say were actually solved a long time ago. I cited just two examples in Dan Geer and Gary McGraw, both of whom you can listen to for free if you search — the talks may be old, but you’ll still learn things that apply today. There are others as well — what’s old will be new again. Security doesn’t change, but our understanding of it just might — and that just might not be a bad thing.
