It seems the new trend for successfully exploited weak web applications is that they only show their seedy content to Google and other search engines.

The injected pages are only triggered when they are accessed by Googlebot from a Google IP address (yeah, they are getting that specific).
When you click through to a page that Google says is full of Viagra spam, you won't see anything. It's tricky, VERY frustrating, and hard to troubleshoot.
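One quick way to check what a crawler is being served is to request the page with a spoofed Googlebot user-agent string, something like the line below. Fair warning: if the malware is also checking the source IP (and some of it is), this won't trigger the spam content, and the 'Fetch as Googlebot' tool in Google Webmaster Tools is a better bet.

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://<your site here>/<suspect page>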

So far, the common signs I've seen of successful exploits have been:

1. .bak files (installed as WordPress plugins; you have to scour the 'active_plugins' field in the database, see the queries after this list)
2. .pngg, .giff, .jpgg, and .old files, used to upload malicious PHP and get around insecure uploaders
3. use of the base64_decode PHP function; while there are legitimate uses for this function, it can be a sign of a baddie
4. use of the 'eval' function in PHP; again, legitimate uses are out there, but I've seen it used for the dark side of the force
5. a rogue 'WordPress' user in your WordPress users table (see the queries after this list)
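For numbers 1 and 5, here's a rough sketch of the queries I mean (assuming the default 'wp_' table prefix, and substituting your own database name and user). The first dumps the serialized active plugin list, the second dumps the user table so you can spot anything that shouldn't be there.

mysql -u <your db user> -p <your db name> -e "SELECT option_value FROM wp_options WHERE option_name = 'active_plugins';"
mysql -u <your db user> -p <your db name> -e "SELECT ID, user_login, user_registered FROM wp_users;"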

If you want to scan a *nix system for the file names I've found to be 'bad', use the following commands.
find . -name "*_old.php*"
find . -name "*.php.jpgg"
find . -name "*.php.giff"
find . -name "*.php.pngg"
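If you'd rather do it in one pass (and catch the .bak files from the list above at the same time), something along these lines should do it:

find . -type f \( -name "*_old.php*" -o -name "*.php.jpgg" -o -name "*.php.giff" -o -name "*.php.pngg" -o -name "*.bak" \)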

To look for those functions I talked about, you can use your friend 'grep':

grep -inrH "eval(base64_decode(" <your dir here>
grep -inrH "gzinflate(base64_decode(" <your dir here>

For anyone interested: I've recently installed mod_security with its core rule set on our Apache web server, and after tweaking the config files and creating some white-lists I have been able to ward off a number of baddies and exploit attempts.

http://www.modsecurity.org/

It's worth the hassle of setting it up. It also has a 'detection only' mode which does a great job of letting you know what you have running and lets you tweak the rules before it starts blocking requests.
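For reference, detection-only mode is a single directive in the ModSecurity config (the exact file depends on your distro and how you pulled in the core rules); flip it to On once your white-lists are in place.

SecRuleEngine DetectionOnly
# change to "SecRuleEngine On" once you're happy with the rules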
