Thursday, August 02, 2007

So it appears the entire Rutkowska-Matasano thing is not over yet. I probably should not harp on about this in my current mood, but since I am missing out on the fun in Vegas, I'll be an armchair athlete and toss some unqualified comments from the sidelines. Just think of me as the grumpy old man with a big gut and a can of beer yelling at some football players on television that they should quit being lazy and run faster.

First point: The blue chicken defense outlined in the linked article is not a valid defense for a rootkit. The purpose of a rootkit is to hide data on the machine from someone looking for it. If a rootkit de-installs itself to hide from timing attacks, the data it was hiding either has to be removed or is no longer hidden. Either way the purpose of the rootkit is defeated: to hide data and to provide access to the compromised machine.
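
For the spectators: the timing attacks at issue boil down to measuring how long an instruction that a hypervisor must intercept takes to execute. A minimal user-mode sketch (my own illustration, not Matasano's actual code; the loop count is arbitrary, and a real detector would calibrate against a known-clean baseline for the same CPU):

    #include <stdint.h>
    #include <stdio.h>

    /* Read the CPU's timestamp counter (GCC, x86). */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t start, total = 0;
        int i;

        /* CPUID is unconditionally intercepted by a hardware hypervisor,
           so each iteration pays the VM-exit round trip if one is present. */
        for (i = 0; i < 1000; i++) {
            uint32_t eax = 0, ebx, ecx, edx;
            start = rdtsc();
            __asm__ __volatile__ ("cpuid"
                                  : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
            total += rdtsc() - start;
        }

        /* A suspiciously high average suggests someone is trapping CPUID. */
        printf("average cycles per cpuid: %llu\n",
               (unsigned long long)(total / 1000));
        return 0;
    }

Of course the hypervisor can also lie about the timestamp counter, which is exactly the cat-and-mouse game I get to below.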

Second point: What would happen if a boxer who claims he can defeat anyone in the world rejected every challenger unless they paid 250 million for him to fight? Could he claim victory by telling the press that he "tried out all his opponents' punches, and they don't work, because you can duck them like this and parry them like that"?
I think not.

I am not saying it's impossible to build a rootkit that goes undetected by Matasano's methods. But given access to the code of a rootkit and sufficient time, it will be possible to build a detector for it. Of course you can then change the rootkit again. And then the other side changes the detection. And this goes on for a few decades.

Could we please move on to more fruitful fields of discussion already?

8 comments:

abc said...

Can we also impose bans on presentations involving material that takes anyone with a brain only a few hours to construct? Last year it was OllyBone with the copy+pasted code from Shadow Walker; this year some guys from offensivecomputing reused the OllyBone code (retaining all of its bugs and problems: no PAE, no NX, no SMP, problems inside a VM, swap issues, etc.) to generate dumps at what "may" be the OEP -- the dumps themselves not being usable against anything but 1992AD technology-based packers, which already have generic unpackers. And they call this "amazing"? Come on. It's clear from their paper (see such choice quotes as "page-fault trap handler") that these guys don't have a basic understanding of the Windows kernel, the Intel architecture, or post-1992AD packing/protecting technology.

When is someone going to do some *real* research and tackle the VMs of Oreans' CodeVirtualizer/Themida or VMProtect? That at least would be impressive and innovative, unlike these half-baked and over-hyped presentations on low-hanging fruit, using techniques that have been obvious for years but that people somehow keep re-"inventing".

Maybe people are so concerned with being the first to do something, that they don't bother themselves with being the first to do that something properly.

As for "blue chicken," if it goes into effect once it determines someone's trying to detect it -- that sure seems like a good place for the detector to disable interrupts and start looking for this timer event that will re-enable the rootkit. It's nothing more than a red herring: "Oh timing detection? Doesn't work due to blue chicken."

Ptacek & Company's presentation though was unimpressive and redundant. Edgar Barbosa's BluePill detection presentation was much more comprehensive and technical.

Joanna's presentation style is interesting. There is definitely a lot of technical content there, and a lot more work seems to have gone into it than into Ptacek's presentation; but there's also a lot of hand-waving and selective argument involved.

I think that when a presentation involves so much technical information (which, frankly, whether people admit it or not, the majority of BlackHat/Defcon attendees don't understand whatsoever), people tend to accept it and ignore any inconsistencies or hand-waving involved, which they're ill-equipped to question. The assumption is that since the presenter included solid technical information in other areas of the presentation, the fuzzy areas must be equally well founded.

This is why Joanna had the "last laugh", as the article erroneously puts it -- the reporter simply lacks the knowledge necessary to discern whether it was a "last laugh" or a desperate attempt at remaining relevant.

-spender

abc said...

Also, on the technical side, slide 112 is incorrect regarding the problems with using private PTEs for scanning physical memory to detect BluePill. Though the detector can't initially know how BluePill decided to mark its private page table entries (since the cr3 value is unknown, and thus it doesn't know where the PDEs/PTEs reside in physical memory yet), it can look up the PTEs of running processes and ensure their consistency (even across multiple processors) by holding the PFN database lock. Thus it knows which physical pages are being pointed to by the guest PTEs, and also which pages in the PFN database are marked as in-use yet are pointed to by no one.

The problem here is that for BluePill's code to be persistent in memory, it has to be marked as in-use in the PFN database, but to remain hidden from virtual memory scanners, it can't be referenced in the PDEs/PTEs. Thus your generic method of detection.

Pages of code found through their private page tables, or through heuristics for identifying x86 code, yet referenced by no PTE, should be reliably identifiable as BluePill -- regardless of whether they're linear, and regardless of whether BluePill zeroes out some structures people may be looking for.
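
In (hypothetical) code, the cross-check looks roughly like this -- every helper stands in for version-specific Windows kernel internals, and the whole thing would run while holding the PFN database lock:

    #include <stdint.h>
    #include <string.h>

    #define MAX_PFN (1u << 20)            /* 4 GB of 4 KB frames */

    /* Invented stand-ins for kernel internals. */
    extern int      pfn_is_in_use(uint32_t pfn);   /* PFN database state */
    extern uint32_t next_referenced_pfn(void);     /* iterates all frames
                                                      pointed to by guest
                                                      PTEs; ~0u when done */
    extern void     flag_hidden_frame(uint32_t pfn);

    static uint8_t referenced[MAX_PFN / 8];  /* bitmap: frame seen in a PTE */

    void scan_for_hidden_frames(void)
    {
        uint32_t pfn;

        memset(referenced, 0, sizeof referenced);

        /* Pass 1: mark every frame some PTE points at. */
        while ((pfn = next_referenced_pfn()) != ~0u)
            referenced[pfn / 8] |= 1u << (pfn % 8);

        /* Pass 2: frames in use but referenced by nobody are
           candidates for hidden (BluePill-style) code. */
        for (pfn = 0; pfn < MAX_PFN; pfn++)
            if (pfn_is_in_use(pfn) &&
                !(referenced[pfn / 8] & (1u << (pfn % 8))))
                flag_hidden_frame(pfn);
    }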

Though the slide may come off to others as "why private PTEs for scanning physical memory won't work to detect BluePill", it's really more a list of "problems for a lame detector."

-spender

halvar.flake said...

Hey Spender,

I guess one of the reasons why tackling virtualizers in general hasn't happened is that it is simply not doable in the general case, which throws out a lot of people interested in academically 'strong' results. At the same time, specific solutions require a very significant investment of time and resources that people can't or won't shoulder in their free time.

Perhaps this is all economics: we get the security research that the security industry is willing to pay for :-)

I didn't see the OffensiveComputing talk, but the sheer fact that there are no publicly available 'generic' unpackers that handle the low-hanging fruit (which constitutes ~80+% of the packers in the wild) IMO justifies giving such a talk. Dealing with the low-hanging fruit is a valid contribution -- it might not excite those who deal with the last 20%, but that doesn't invalidate the talk.
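
For the spectators, the low-hanging-fruit trick these tools implement is write-then-execute bookkeeping: watch which pages the unpacking stub writes to, and the first branch into a previously written page is your OEP candidate. A minimal sketch (names invented, not the OllyBone code; how you actually get the write/execute callbacks -- split TLB, NX, single-stepping -- is exactly the contentious part):

    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define MAX_PAGES  (1u << 20)

    static uint8_t written[MAX_PAGES / 8];  /* bitmap of dirtied pages */

    extern void dump_process_at(uintptr_t oep);   /* hypothetical */

    /* Called by the instrumentation for every memory write. */
    void on_write(uintptr_t addr)
    {
        uint32_t page = (uint32_t)(addr >> PAGE_SHIFT);
        written[page / 8] |= 1u << (page % 8);
    }

    /* Called by the instrumentation for every branch target. */
    int on_execute(uintptr_t addr)
    {
        uint32_t page = (uint32_t)(addr >> PAGE_SHIFT);
        if (written[page / 8] & (1u << (page % 8))) {
            dump_process_at(addr);   /* candidate OEP reached: dump */
            return 1;
        }
        return 0;
    }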

Caveat: I have neither read the paper nor attended the talk. I also do not place much value on 'rebuilding' dumps into valid executables -- for me, the core purpose of a memory dump is to recover as much control flow information from the original executable as possible.

What is the talk about?

abc said...

It's not new. There have been a number of these released -- not at BlackHat, of course, but that doesn't mean they don't exist. Just to name a few: there's QuickUnpack, and a generic unpacker from deroko (at http://deroko.phearless.org/rce.html) using the same technique as the offensivecomputing people. Deroko's unpacker is, in addition, more advanced and original than the offensivecomputing one, which again makes me wonder why someone who copy+pastes some code gets to present on it and claim that their "amazing" results are somehow better than what one can obtain with these generic unpackers, which far surpass them in quality.

Also, working binaries may not matter for your particular case (which I believe to be a fringe use of unpackers), but they are very important in malware analysis, which is what the offensivecomputing people claim to be generating these binaries for.

An "unpacked" binary in which no imports are resolved, where any redirected imports are therefore impossible to resolve -- is useless to an analyst. Maybe it doesn't matter to an "analyst" whose work consists of looking at the strings in a binary (and we know there are a ton of those), but for anyone any good at fully reversing malware this is necessary.

Also, I disagree with your 80/20 statistic. I don't think it's correct in the general sense if you're talking about how many of the packers/protectors out there their code will actually work against (you may need to check tuts4you.com and peidforums.has.it again to keep up to date on the techniques being used). Most of the new packers/crypters (which are becoming more common) use the unpack-into-a-separate-process trick, which this does nothing about.

Beyond that, the 80/20 statistic means even less: even if it applies to the huge sample of files a particular vendor sees, that doesn't necessarily mean much to a malware analyst dealing with specific threats, who is the one who would actually be using such an unpacking tool.

abc said...

Also, now that I remember, the presentation makes unfair comparisons in an effort to make itself look better. It exaggerates the difficulty of fully unpacking a binary "manually" using the various tools out there, like LordPE, ImpRec, and OllyDbg scripts, by dragging it out over a dozen or so slides, yet their code really does nothing more than OllyAdvanced + OllyBone + OllyDump can do for you.

-spender

Unknown said...

BluePill should be there for reference only: something that shows "new possibilities for a new computer chip". Not more.

[As a sports spectator] I have learned that Joanna is unsharp but can come up with something; that Ptacek's qualities do not transcend statements [of the obvious]; and that only Edgar Barbosa has been above the obvious.

Further, whoever speaks of using BluePill in the wild is [just] more naive than Joanna.


In closing: a sports spectator comments not before, not after reading or understanding. He comments only...

understanding "rutkowska-matasano thing" is out of question.

--t

halvar.flake said...

Hey Spender,

(sorry this is going to be a rushed comment, need to run in 2 mins):

I won't call into doubt the need to resolve imports properly. Generating a valid PE file is something that I happen not to need -- I'll go for any representation from which I can deduce the control flow (including flow into libraries) and which compresses well enough to not be much bigger than 10-15 megs per binary.

I think this is a longer discussion though, and this (rushed) comment is not a very useful contribution. I'll write more later (or we can discuss in some real-time medium).

One thing I learned over the last year or so of following crypto: the average quality of publications seems to be almost the same in CompSec as in crypto, from which I conjecture that it is the same in any field and falls under the 90/10 rule. I think in most fields you could cut the publications each year down to the top 10% without losing more than 10% of the useful contributions. No need to be angry about it (although it happens to me sometimes too ;)

Unknown said...

I guess I mostly agree with Spender here: having been in the unpacking game for ages, I have seen more people trying to reproduce existing things (sometimes in a highly inferior flavour) than truly innovate (even the UIs are blatant rip-offs), sometimes botching the implementation completely. I guess it's easier to follow than to lead, and with today's expectations that's enough to brag (hype) and think one has achieved something. Halvar might be on the money in thinking that innovation is driven only by what security companies put money into, but I don't think that can be the case for everything.

Now to be more specific:

- Unless someone redoes OllyBone (or a clone) using NX, it will never work correctly in a VM (TLB emulation isn't perfect); the other problems mentioned come from a total lack of knowledge, as Spender says. I must admit I find it unbelievable that this hasn't been done yet (publicly, that is). (See the first sketch after this list.)

- I don't personally believe that dealing with a VM-based protection specifically (as in, an implementation-specific solution) is such a good idea. Being a big supporter of generic solutions, I'd rather be able to "isolate" the VM and graft it back onto a "dumped" version, if that's enough for what you want; or analyse the system impact of executing a given VM "branch" and generate some replay; or whatever you fancy. Of course, some people might prefer a specific solution for a perfect result, but I'd rather avoid wasting the time on it.

- By the same token, an executable without resolved imports is not necessarily useless. I remember looking into some dumps and clearly understanding what was going on, simply because you can, in some cases, deduce what a call is by its runtime "signature": typical API byte sequences, parameters, and so on (see the second sketch after this list). It clearly depends on the situation, and it's clear that for Halvar a dump is enough. You might have to be a bit more resourceful, and/or have a special toolkit ready for such cases, but in no way is it useless. In fact, if you were working on building such an unpacker, this is all you would have in your hands to do it ;). Of course, a fully rebuilt binary is a lot easier to read, that's for sure. Anyway, there is no "generic" statement that works for everyone's needs.
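
On the NX point: instead of desynchronizing the TLB a la Shadow Walker/OllyBone, you can set the XD bit in the PTEs of the pages you want to trap and catch the resulting instruction-fetch page fault. A hypothetical sketch (invented helpers, not working code; it requires PAE and EFER.NXE, which ties back to Spender's "no PAE, no NX" complaint):

    #include <stdint.h>

    #define PTE_XD     (1ULL << 63)   /* execute-disable bit (PAE PTE) */
    #define PF_ERR_ID  (1u << 4)      /* #PF error code: instruction fetch */

    /* Invented stand-ins for the paging plumbing. */
    extern uint64_t *pte_for(uintptr_t va);
    extern void      flush_tlb_page(uintptr_t va);
    extern void      on_execute_trap(uintptr_t va);

    void arm_execute_trap(uintptr_t va)
    {
        *pte_for(va) |= PTE_XD;       /* any fetch from this page faults */
        flush_tlb_page(va);
    }

    /* Called from the page-fault handler. */
    int handle_pf(uintptr_t fault_va, uint32_t error_code)
    {
        if (!(error_code & PF_ERR_ID))
            return 0;                 /* a data access, not an execute */
        *pte_for(fault_va) &= ~PTE_XD;  /* let it run from here on */
        flush_tlb_page(fault_va);
        on_execute_trap(fault_va);    /* e.g. candidate OEP */
        return 1;
    }

And on identifying calls without an import table: a sketch of the byte-signature approach (the signature below is a placeholder, not any real API's bytes; a real matcher would also look at argument patterns and call sequences):

    #include <stdint.h>
    #include <string.h>

    struct api_sig {
        const char    *name;
        const uint8_t *bytes;
        size_t         len;
    };

    /* Placeholder signature for illustration only. */
    static const uint8_t sig_a[] = { 0x8b, 0xff, 0x55, 0x8b, 0xec };

    static const struct api_sig sigs[] = {
        { "some_api_stub", sig_a, sizeof sig_a },   /* hypothetical entry */
    };

    /* Compare the bytes at an unresolved call target against known
       API prologues/bodies; NULL means "fall back to hand analysis". */
    const char *identify_call_target(const uint8_t *target)
    {
        for (size_t i = 0; i < sizeof sigs / sizeof sigs[0]; i++)
            if (memcmp(target, sigs[i].bytes, sigs[i].len) == 0)
                return sigs[i].name;
        return NULL;
    }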

Bottom line: rehashing the same technique and presenting on it should indeed not be the primary goal; we can leave that to companies catching up with their competitors' products. And even if a technique is not fully adequate or complete for everyone's needs, it shouldn't be discarded entirely. But a total lack of knowledge and a quick run for fame shouldn't be an excuse for a botched implementation, at least not in the long run (early mistakes are okay). At least, that's what I have always believed. Realistically, though, seeing how things have gone over the last couple of years, I'm afraid this is not going to change any time soon, if at all.

G-RoM