Sunday, July 6, 2014

Dissecting /etc/passwd

Discussing local file inclusion with @pacohope and @kseniadmitrieva, I realized I have more than a few non-obvious things to say about the value of /etc/passwd in the hands of an opportunistic adversary. It is available on most (all?) Unixes, yet it is rich in idiosyncrasy. I threw together my thoughts and put them here.
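
For a taste of why the file is interesting, here is a quick illustrative Python sketch, not anything official, that walks the seven colon-separated fields passwd(5) defines and prints the accounts that still have a real login shell. The list of no-login shells is only the common set, not exhaustive.

# Illustrative sketch: the seven colon-separated fields of /etc/passwd,
# named per passwd(5).
FIELDS = ["username", "password", "uid", "gid", "gecos", "home", "shell"]

def parse_passwd(path="/etc/passwd"):
    """Yield one dict per account, skipping blank and malformed lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(":")
            if len(parts) != 7:
                continue  # malformed entry
            yield dict(zip(FIELDS, parts))

if __name__ == "__main__":
    # Accounts with a real login shell are the first thing an attacker looks for.
    for entry in parse_passwd():
        if entry["shell"] not in ("/usr/sbin/nologin", "/sbin/nologin", "/bin/false"):
            print(entry["username"], entry["uid"], entry["home"], entry["shell"])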

If you know of more interesting details, drop me a comment.

Saturday, May 10, 2014

Sometimes Nmap Takes Down a Datacenter

"We don't perform denial of service testing," I say to my client.
Those words exit my mouth with an alkaline sting. Lies, all lies! We detect DoS by looking at version strings in server banners. There's no deep kung fu in this; Nessus has plugins for it. And if I see that my client's server could be brought down with killapache.pl, I will let them know. The truth is that our tests can occasionally identify DoS conditions safely, without actually denying any service.
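For the curious, the banner check itself is trivial. The sketch below is not what Nessus runs, just the shape of the idea: grab the Server header without sending anything hostile and compare the version string against a known advisory. The Apache 2.2 match at the end is a simplified, hypothetical check for the vulnerability killapache.pl exploits.

import socket

def http_server_banner(host, port=80, timeout=5):
    """Grab the Server header from an HTTP response without sending any payload."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        response = s.recv(4096).decode("latin-1", errors="replace")
    for line in response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

# Hypothetical, simplified check: flag Apache 2.2.x builds by banner alone.
banner = http_server_banner("www.example.com")
if banner and banner.startswith("Apache/2.2."):
    print("Potentially DoS-able (version check only, no packets in anger):", banner)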
But even this is only a half-truth. The whole truth is that even safe, industry accepted pen testing practices can wreak havoc on a network.
I was an information security officer for a respectable organization during a short-lived run on the IPv4 address space market. Fortunately, mere weeks before network managers started jumping off the roofs of their data centers, one of my network guys snagged a few Class-Cs and pointed them at our DR site for safekeeping. Fat, dumb, and happy, we were the doomsday preppers of the 32-bit address space apocalypse.
Time went by and we started gearing up for our annual external penetration test. There had been lots of changes in our network that year, and I thought it would be best to scope the assessment by looking up our public IP assignments in ARIN and cross-referencing them against a scripted mass dig(1) of all the externally accessible DNS names in our asset management system. Everything checked out, a few externally hosted marketing sites notwithstanding.
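If you want to reproduce that scoping exercise, the cross-reference is only a few lines of scripting. Here's a sketch with placeholder inputs: a file of hostnames exported from your asset system, plus your assigned blocks. The addresses below are documentation ranges, not anyone's real assignments.

import ipaddress, socket

# Placeholder values: substitute your own ARIN assignments and asset inventory.
assigned_blocks = [ipaddress.ip_network(b) for b in ("192.0.2.0/24", "198.51.100.0/24")]

def in_scope(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in assigned_blocks)

with open("external_hostnames.txt") as f:   # one DNS name per line
    for name in (l.strip() for l in f if l.strip()):
        try:
            addr = socket.gethostbyname(name)
        except socket.gaierror:
            print(f"{name}: does not resolve")
            continue
        marker = "in scope" if in_scope(addr) else "OUTSIDE our assignments"
        print(f"{name} -> {addr} ({marker})")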
The first night of testing went off without a hitch.
At eleven o'clock on the second night I got a call from the on-call system administrator, a bit miffed because the DR site had been taken down. The entire data center had gone dark! I quickly phoned my tester and had him pull the plug. Before you could say Ctrl-C, I had hung up and put the call back in to my system admin. Everything was back up.
I didn't really expect that our pen tester had pulled down our site. I only had him pull the plug for political reasons. If something went wrong, the networking team would blame the pen testers. Even when no tests were scheduled, they'd call me up to ask if there were any unscheduled scans going on. It's a defense mechanism they'd learned from years of being the infrastructure whipping boy. For years, every time a tablespace filled up, a process fell into an infinite loop, or a memory board smoked, people ran to the networking team.
At that place and time though, penetration testing was the new black magic. No one understood it; everyone feared it. If the lights flickered or the cafeteria ran out of chicken tikka masala, pen testing was probably the reason. So I had to put a stop to the testing before the real incident response could begin. I didn't expect that my lone tester could bring down a whole data center by himself though. The site coming back online mere seconds after he stopped scanning was too much of a coincidence. I called him back up and explained the synchronicity.
"I don't understand. All I was doing was an Nmap."
"Send me the command you ran." This breaks protocol a little. As a paying customer it is my responsibility to wag my finger across the phone at the tester and demand an assurance that this will never happen again. He would respond by categorically denying any wrong doing. Then he'd spend the rest of the engagement running a single thread discovery scan. The end result would be a blank report and a mysteriously fragile network.
Instead, I like to treat these nightmare-inducing outages as a chance to learn something about the world. His Nmap scan seemed legit. No crazy plugins, no elaborate timing. It was a simple TCP connect scan. There were no payloads. I had him run it again while I sniffed the network at our border.

In front of our firewall was a border router whose only job was to be the demarcation point between us and our ISP. It was a 1U implementation of a very simple bit of logic:
for each packet
  if packet.destination.ip_address ∈ $respectable_org.ip_addresses
  then
    send packet to $respectable_org
  else
    send packet to $isp
  end if
end for
Thing is, we had not yet added our new Class-Cs to $respectable_org.ip_addresses. But our ISP's router implemented similar logic, and they had included the subnets in their list of our IP addresses. And of course I had included them in our tester's scope.
So what happened? Our tester started scanning our DR site on the second night (the first night he had scanned production). Our ISP sent each of his probes to our border router. Not recognizing the destination address as one of ours, the router sent each packet back out the port it came in on, to our ISP. Our ISP recognized the destination address as belonging to us and sent the probe right back out the port it came in on, back to us. This two-node routing loop lasted until the time-to-live on each packet decremented all the way to zero--about thirty trips across that link per packet. And since the loop was as tight as the next hop away, it hit the link much harder than simply running thirty times as many threads would have.
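If the arithmetic seems hand-wavy, a toy simulation makes the amplification easy to see. The starting TTL here is an assumption (I never captured the exact value), and the scan rate is invented, but it shows the shape of the problem.

def loop_traversals(arriving_ttl):
    """Count how many times one probe crosses the ISP<->border link when
    neither side's routes claim the destination: each router decrements the
    TTL and hands the packet straight back until the TTL hits zero."""
    ttl = arriving_ttl
    crossings = 0
    while ttl > 0:
        crossings += 1   # the packet crosses the link one more time
        ttl -= 1         # the receiving router decrements TTL before bouncing it back
    return crossings

# A probe arriving with roughly 30 hops left crosses that one link roughly 30
# times, so each scanning thread loads the link like thirty threads would.
probes_per_second = 100   # hypothetical scan rate, not the real engagement's
print(loop_traversals(30))
print(probes_per_second * loop_traversals(30), "looped packets per second on the link")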
The penetration tester was exonerated, the appropriate routes were put in, and testing resumed. Ironically, it was the networking team's fault this time, though I'd hardly blame them.
The truth is that non-destructive means can reveal weaknesses that destructive attackers can exploit, that non-destructive means can come to destructive ends given the most innocuous configuration problems, and that there's always the risk of something bad happening.
And to tell the whole truth, my client doesn't want to hear all of that. They want the assurance that their business won't suffer for having hired me. If I can't assure them of that with fewer than a dozen words, I can't assure them of that. So I'm stuck dragging out that old chestnut,
"We don't perform denial of service testing."

Saturday, April 5, 2014

Proactive Unix System Maintenance for Security

I have an article up over at WServerNews, an online newsletter for Windows sysadmins. My article is a collection of anecdotes, each with a little moral lesson at the end.

Lest anyone think I'm a Windows bigot, I wanted to follow up that article with some quick prescriptive guidance for Unix system administrators.

Opportunity 1: Apply Software Updates

Ever since Windows for Workgroups, Windows SAs have been leagues ahead of Unix SAs in patching. This is born out of necessity: Windows admins have to manage floors of buggy workstations that catch viruses daily. Your Unix systems don't give you as mature a patching tool set as Windows admins have.

All good system administrators know how to use rpm, pkgadd, apt-get, or whatever to bring a system up to date. Figure out (and document) how to roll back an update for a system or even just a package. Write a script that reports the last date a system was patched and incorporate it into your system monitoring solution.
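
If you want a starting point for the reporting script, here's a rough sketch for an RPM-based box. It leans on rpm's INSTALLTIME tag; a dpkg or pkginfo system would need the equivalent query, and the alert threshold is yours to pick.

import subprocess, datetime

def last_patch_date():
    """Return the most recent package install/update time on an RPM-based system."""
    out = subprocess.run(
        ["rpm", "-qa", "--queryformat", "%{INSTALLTIME}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    newest = max(int(t) for t in out.split() if t.isdigit())
    return datetime.datetime.fromtimestamp(newest)

if __name__ == "__main__":
    stamp = last_patch_date()
    age = (datetime.datetime.now() - stamp).days
    # Feed this into your monitoring system; alert if the box hasn't been touched in a while.
    print(f"Last package change: {stamp:%Y-%m-%d} ({age} days ago)")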

You might even want to learn how to use Puppet or cfengine to automate configuration changes.

Opportunity 2: Review Logs

Unix admins have grep, and grep goes a long way, but when grep doesn't cut it anymore you will want to stand up a log server like Splunk or a SIEM like OSSIM.
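
In the meantime, a few lines of scripting over the raw logs go a long way too. Here's a sketch that tallies failed SSH logins by source address from a syslog-style auth log; the path and message format vary by distribution, so treat it as illustrative.

import re
from collections import Counter

# /var/log/auth.log is the Debian/Ubuntu convention; /var/log/secure is Red Hat's.
LOG = "/var/log/auth.log"
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

per_source = Counter()
with open(LOG, errors="replace") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            user, source = m.groups()
            per_source[source] += 1

for source, count in per_source.most_common(10):
    print(f"{count:6d} failed logins from {source}")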

Opportunity 3: Change Default Passwords

This is pretty universal. Read your software docs and check the password lists here and here.

Saturday, February 15, 2014

Consider the Costs

My previous two posts covered two dysfunctions I see when people try to fix their security problems: ignoring the low-severity issues, and trying to fix massive high-severity issues only to hit a wall. Not trying to fix a low should be the polar opposite of trying in vain to fix a high, but they share a common cause: they both ignore the cost of action.
Information security professionals have different methods for determining and describing the severity of a finding. When we use NIST 800-30, we eyeball a high/medium/low value based on our perception of impact and likelihood. When we use CVSS we think about the vulnerability in terms of exploitability and impact, dividing each into different factors to which we apply weights and modifiers. Adam Shostack has suggested that security professionals describe risk purely in terms of dollars. When we use severity in this way, we’re really trying to describe the benefit of remediation.
The process of remediation takes time. In most cases, money too. These are scarce resources. The science of deciding how to use scarce resources to achieve desirable ends is called economics, and it has a tool we can use to help decide whether to remediate vulnerabilities: cost-benefit analysis.
Simply put, the process for cost-benefit analysis is as follows: find out all the costs, find out all the benefits, and compare. If the cost is greater than the benefit, don’t do it. You can expound upon this as you like, accounting for stakeholders and externalities, comparing alternate projects by their cost-benefit ratios. But even bad CBA is better than none.
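In the dollars-only framing Shostack suggests, the comparison fits in a few lines. The remediation candidates and figures below are invented purely for illustration.

# Hypothetical remediation candidates: expected annual loss avoided (benefit)
# versus the one-year cost of fixing. All figures are invented for illustration.
candidates = {
    "patch internet-facing CMS":        {"benefit": 120_000, "cost": 15_000},
    "re-architect legacy mainframe":    {"benefit": 200_000, "cost": 900_000},
    "rotate service account passwords": {"benefit": 30_000,  "cost": 8_000},
}

# Rank by benefit-to-cost ratio; anything below 1.0 costs more than it saves.
for name, c in sorted(candidates.items(),
                      key=lambda kv: kv[1]["benefit"] / kv[1]["cost"],
                      reverse=True):
    ratio = c["benefit"] / c["cost"]
    verdict = "do it" if ratio > 1 else "don't"
    print(f"{name}: benefit/cost = {ratio:.1f} -> {verdict}")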
Enterprise IT is usually very good at cost-benefit analysis. If you have a PMO in your organization, ask them about it.
If you are responsible for the security of an organization, keep the costs in mind. If you are an assessor, you can help by understanding your client's business as best you can—especially their change processes—and looking for cost-effective solutions.