You Don't Need the Ultimate Pen-testing Framework!

Mon, 23 Feb 2009 12:50:02 GMT
by pdp

You've already got it! It is lying on your PC and it is called the "shell". The shell was designed to start/stop and control processes with ease, so why do we need yet another universal pen-testing framework which does what a tool we already have by default is doing for us? In this post we are going to delve into the world of advanced shell programming for penetration-testing purposes.

The shell is the de facto interface to your operating system. Over the years it has turned into very powerful machinery, heavily used by programmers, hackers and system designers around the world. It is simply the ultimate environment. There are plenty of tools to support it. It is remotely accessible. It is simple, yet extremely powerful.

Because I am quite aware of the power the shell provides, every time I see another pen-testing framework which implements its own shell (obviously a lot less powerful in nature) or anything else shell-incompatible, I shake my head in disapproval. Where are my pipes? Should I ignore the plethora of good pen-testing tools sitting on my box just to use your custom shell? Obviously not!

Penetration-testing frameworks today turn into unmaintainable monsters: abstractions and deep inheritance all over the place, dependency nightmares, and monolithic cores which no longer interact with the shell so nicely. They try to be the ultimate framework but fail immensely, as they cannot be what the shell and the OS already are (i.e. a framework), simply because far fewer man-hours are put into them and they are far less diverse in terms of code and originality.

Over the last couple of days I was busy putting together a small set of command-line utilities in the spirit of my two previous experiments in the same sphere of study: Infocrobes and Bashitsu. The toolkit (also known as Jeriko) currently resides within our random source code repository, which contains code that hasn't fully materialized yet.

I wrote it because it was fun, but most of all I wanted to showcase that many advanced things can be achieved with a few bash scripts wrapping around common pen-testing tools. For the rest of this article we will explore some of the features of this toolkit and discuss how it can be extended (it is nowhere near complete, but it is a good start, imho) and used in various basic pen-testing scenarios.

Let's begin with the common stuff: automation of port scanning and vulnerability assessment. We start by adding some targets:

$ mkdir pen-test
$ cd pen-test
$ targets-add target.com
$ targets-add << EOF
more-targets.com
10.10.10.10
10.10.20.0/24
EOF

Now we have a bunch of targets. You can also remove targets by executing the targets-rem script, whose usage is exactly like that of targets-add.
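Under the hood these scripts can be trivially simple. The following is a minimal sketch of targets-add, assuming targets are kept one per line in a ./targets file; the real script lives in the Jeriko repository and may differ:

# targets-add (sketch): append targets given as arguments or on stdin.
TARGETS_FILE=./targets

if [ $# -gt 0 ]; then
    printf '%s\n' "$@" >> "$TARGETS_FILE"    # targets-add target.com
else
    cat >> "$TARGETS_FILE"                   # heredoc/pipe mode
fi

sort -u "$TARGETS_FILE" -o "$TARGETS_FILE"   # keep the list deduplicated

targets-rem would be the mirror image: grep -v the named targets out of the file and write it back.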

Once we have a bunch of targets we might want to expand them into usable IP addresses/ranges. Keep in mind that our targets list is a mixture of domain names, IP addresses and CIDR ranges. We are going to use another tool from the collection to convert all of this into something we can use:

generate-ip-batch

This tool actually wraps around nmap and prints everything to the screen, which is not useful unless we pipe it into something. We are going to use another script for that: generate-scan-batch. This script executes generate-ip-batch and pipes out a list of commands for performing the basic penetration tests. The list will look something like this (a minimal sketch of both scripts follows the list):

scan-ports-tcp-full [ip]
scan-ports-udp-full [ip]
scan-vulnerabilities [ip]
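Both scripts can be surprisingly small. Here is a minimal sketch of the idea; the nmap flags are real, but the ./targets file location and the exact output parsing are assumptions about the actual Jeriko code:

# generate-ip-batch (sketch): expand names and CIDR ranges from
# ./targets into individual addresses using nmap's list scan (-sL),
# which sends no packets to the targets.
nmap -sL -iL ./targets 2>/dev/null \
    | awk '/Nmap scan report for/ { ip = $NF; gsub(/[()]/, "", ip); print ip }'

# generate-scan-batch (sketch): emit one command per tool per address.
generate-ip-batch | while read -r ip; do
    echo "scan-ports-tcp-full $ip"
    echo "scan-ports-udp-full $ip"
    echo "scan-vulnerabilities $ip"
done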

OK, this list of commands can now be piped into our run-in-parallel tool, which, as the name suggests, runs things in parallel in order to speed up the process. This is how we do it:

$ generate-scan-batch | run-in-parallel

We can customize the run-in-parallel script either by modifying the .jerikorc resource file or by going the bash way, like this:

$ generate-scan-batch | RUN_IN_PARALLEL_MAX_PROCESS=32 run-in-parallel

Luckily for us, we can also supply this information as a command-line argument, like this:

$ generate-scan-batch | run-in-parallel 32

If you read the source of the run-in-parallel script you will see that it is no more than 40 lines of code packed with quite a lot of power. This is what the shell gives you: an excellent way to manage processes.
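For reference, here is a minimal sketch of how such a script could look; the real one also takes care of cleanup and logging, so treat the details below as assumptions:

# run-in-parallel (sketch): read commands from stdin and keep at most
# MAX of them running at once. The argument overriding the
# RUN_IN_PARALLEL_MAX_PROCESS variable matches the examples above;
# the default of 8 is an assumption.

MAX=${1:-${RUN_IN_PARALLEL_MAX_PROCESS:-8}}

while read -r cmd; do
    while [ "$(jobs -rp | wc -l)" -ge "$MAX" ]; do
        sleep 1                  # all slots busy: wait for a free one
    done
    eval "$cmd" </dev/null &     # background the next command without
                                 # letting it steal our stdin
done

wait                             # block until every job has finished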

Once all tasks are completed, we should have several work files in our directory. We can now parse them with a set of basic command-line utilities prefixed with extract-. In order to extract open services from .gnmap and .nbe files we use extract-services-gnmap and extract-services-nbe respectively, like this:

$ cat *.nbe | extract-services-nbe
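NBE is Nessus' pipe-delimited report format, so the extractor can be little more than an awk one-liner. A sketch, assuming field 3 holds the host and field 4 the service (e.g. "https (443/tcp)"), printing host,port pairs as the later examples expect:

# extract-services-nbe (sketch): print host,port pairs from .nbe input.
awk -F'|' '$1 == "results" && match($4, /\([0-9]+\//) {
    print $3 "," substr($4, RSTART + 1, RLENGTH - 2)
}' | sort -u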

How about extracting all services that have something to do with SSL? This is how we do it:

$ cat *.nbe | grep -i ssl | extract-services-nbe

That was easy, but we might also want to correlate all results, which is just as simple once you know shell scripting. The following script does it for us:

$ extract-services
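A correlating script like this only has to glue the format-specific extractors together. A sketch, under the same assumptions as above:

# extract-services (sketch): merge nmap and nessus results from the
# current directory into one deduplicated host,port list.
{
    cat *.gnmap 2>/dev/null | extract-services-gnmap
    cat *.nbe   2>/dev/null | extract-services-nbe
} | sort -u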

Basic indeed! Let's now mirror the front page of all HTTP servers so we can do some analysis on the results. This is how we do it:

$ cat *.nbe | grep -i http | extract-services-nbe | awk -F, '{ print "http://"$1":"$2 }' | scan-urls

This will mirror only the front page, but we can do a lot more. How about making a copy of the first 10 link levels? This is how we do it:

$ cat *.nbe | grep -i http | extract-services-nbe | awk -F, '{ print "http://"$1":"$2 }' | WGET_URL_SCAN_METHOD="-l10" scan-urls
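scan-urls itself only needs to hand each URL over to wget. A minimal sketch; the WGET_URL_SCAN_METHOD variable comes from the examples above, but its default value and the remaining wget flags are assumptions:

# scan-urls (sketch): mirror each URL read from stdin with wget.
while read -r url; do
    wget -r ${WGET_URL_SCAN_METHOD:--l1} -np -e robots=off "$url"
done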

Alright! Now we have mirrored all HTTP servers. Let's analyze them:

$ find ./ -type f -exec cat '{}' ';' | extract-emails

This will give us all email addresses that we have encountered. How about retrieving everything that looks like an IP address, which we can then add to our targets list:

$ find ./ -type f -exec cat '{}' ';' | extract-ips
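Both extractors can be as small as a grep one-liner over stdin; the exact regular expressions Jeriko uses may differ, so treat these as sketches:

# extract-emails (sketch): naive but practical email matcher.
grep -Eoi '[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}' | sort -u

# extract-ips (sketch): dotted quads; this will also match strings that
# merely look like IPs, which is fine for building a candidate list.
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u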

There are many more utilities which can extract things from files. We can even look for strings that look like names or titles and feed them to our whois scripts in order to find out more about the organization we are pen-testing. This is how we do it:

$ find ./ -type f -exec cat '{}' ';' | extract-names | scan-whois
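scan-whois is the simplest wrapper of them all; a sketch (the delay between lookups is an assumption, added to avoid hammering the whois servers):

# scan-whois (sketch): one whois lookup per line of input.
while read -r name; do
    whois "$name"
    sleep 2
done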

Easy! Once we've done the basic analysis, identified several issues, and obtained permission to go further, we can autopwn all targets. This is how we do it:

$ autopwn-services

This tool simply wraps around Metasploit's msfconsole. However, because msfconsole is yet another shell, we might want to send the entire process into a session from which we can detach. This is useful for many reasons, and this is how we do it:

$ session-start autopwn-services

If we press CTRL+A D we can detach and continue with our normal pen-testing tasks until all services are fully exploited. Then we can resume by doing the following:

$ session-list
$ session-resume [name]
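These three scripts map almost one-to-one onto GNU screen, which is also where the CTRL+A D detach sequence comes from. A sketch of the mapping; the exact wrapping in Jeriko is an assumption:

# session-start (sketch): run a command inside a named screen session.
screen -S "$1" "$@"    # starts attached; CTRL+A D detaches

# session-list (sketch): show running sessions.
screen -ls

# session-resume (sketch): reattach to a named session.
screen -r "$1"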

The output of the autopwn session will be saved, which is great, as we might want to do further parsing and later-stage analysis on the data.

There is room for a lot more tools to be written. For example, we can quite easily put ettercap/tcpdump to use capturing browser cookies off the air and feeding all the information into a simple command-line tool which switches us to a different browser session of our choice. We don't need to write yet another framework for this. Most features already come by default and can be used if you know how.
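To illustrate, the capturing side could be a single tcpdump pipeline; the interface name and filter below are assumptions, and the session-switching tool is left as an exercise:

# Sketch: dump HTTP traffic in ASCII and keep only the Cookie headers.
tcpdump -i eth0 -A -s0 'tcp port 80' 2>/dev/null | grep -ai '^Cookie: '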

Always keep in mind the following: don't write something someone else has already written for you, unless the other product is complete crap and needs replacement. Also, think about whether your tool integrates nicely with other tools. The better integrated it is, the more it will be used in combination with others. And this is quite important.

So yes! You don't need to write everything from scratch. You don't need to mimic screen, script, wget or any other common tool unless you have no other choice. The ultimate pen-testing framework already exists within the most basic components of your operating system.

Archived Comments

rvdh
Good post PDP. True, most of this stuff already resides in the C libraries you already have. Same with nmap, which uses those C libs, like the sockets API et al. Same with sniffing a network: most of it is provided in C libs as well, and some parts are already accessible through the command line; it takes a few commands to start capturing packets in your console in real time. Absolutely no need for a wireshark at all.
pdp
actually I quite like wireshark :) but I was trying to encourage people to develop for the command line, as it proves over time to be the simplest and fastest way to do things, and given that you understand bash, you can do tons of good hacks.
Pento
It's like window managers and DEs in Linux. You can live, for example, in a minimal WM like fluxbox or even dwm and find and use some small application for each purpose. Or you can install gnome|kde and spend that time on work instead. I think nmap + GNU Core Utilities + metasploit is the best choice.
pdp
imho, metasploit would have been a lot better if it were just an exploitation framework, good for writing and running exploits only. The auxiliary modules are a bit redundant; perhaps the useful stuff like the BailiWicked auxiliary modules should have been turned into exploit modules. and if only you could make the framework run faster :) that would have been great. other than that, it is one fine framework.
pdp
same with nmap. although nmap's script scan option is quite powerful indeed, it just turns the tool into something which it is not: a vulnerability scanner. coding in lua is no fun either.
postmodern
Why still use shell scripting for this task, when it's far easier in a Python, Jython, Ruby, JRuby or even a NetBeans shell? Instead of using primitive commands, you could use a general purpose programming language with a syntax that is conducive towards one-liners. Taking it one step further, you could even load in various libraries or frameworks (there's more than just Metasploit) into your interpretive language's shell. You can get the same effect with cleaner syntax.
pdp
bash is pretty much a general-purpose programming language. it has sockets, ways to interface with C libs, etc. the reason I chose bash to implement most of the functionalities of this toolkit is because it comes by default, and it also does a better job when it comes to managing processes and tasks than most general-purpose languages. it is also quite fast compared to ruby, for example. for sure python, ruby and perl can be used to do similar things, but because they are too general-purpose you will end up with much larger source code which will quickly turn into a framework, imho. my argument is that the framework is already written for you. we do not need yet another abstraction layer on top of the standard shell, which already contains the majority of the functionalities that we need. the rest of the functionalities are provided by various standard utilities which are well integrated with the particular distribution you are running and its package management system. some of the standard shell utilities also provide features to interact with 3rd-party components in the most simplistic way (expect, for example), while it would be a lot harder to do something similar with a general-purpose language, or the missing functionalities rely on a 3rd-party lib which is not easy to install or requires a fair few dependencies. another example is metasploit's session management feature. it is very useful, indeed, to start several sessions to the targets and switch between them on the go. though, this is a metasploit-specific feature. however, screen and script provide similar functionalities which happen to be a lot better and work for all utilities, including metasploit. therefore, I would rather rely on the toolkit that comes by default. at the end of the day Jeriko is not a framework. it is a toolkit which is designed to run from the command line. all it does is save you some effort typing long commands. it also ensures that no orphan processes are left behind when exiting different tasks. :)
hartog
@pdp: great post, proves the power of the shell once more :->. Some nice constructions in there as well :-> @postmodern: since when are sed, awk, grep, wget and many, many others primitive? Most of them are/have (micro) expression/programming languages and they can be piped together. Throw in some Perl one-liners to become really powerful. *grrrrr* ;->
sid77
Hi, I've written a small patch against Jeriko-r31 to add some functionality: + option to choose whether to run nessus or openvas + option to choose which metasploit db plugin to load + a bigger jerikorc + fixed a small typo in scan-vulnerabilities. The patch is hosted here: http://sid77.slackware.it/jeriko/ ciao
pdp
excellent, I will have a look :)
pagvac
i agree that most of the stuff we need is in the shell already. pentesting frameworks are like the new security-testing hype: first we had hundreds of port scanners, then hundreds of webapp MiTM proxies, then hundreds of fuzzers, then hundreds of SQL injectors, and now it's pentesting frameworks :) Knowing a few scripting tricks is extremely powerful, as already-available tools are sometimes not customized enough for our tasks. furthermore, sometimes learning how to do something with a publicly-available tool can be MORE time-consuming than writing your own bash script to do so.
JC
Love this to bits - it just works. Have you tried it with OpenVAS? I am not sure that OpenVAS-client is plug-compatible with nessus on the command line.
pdp
My humble opinion is that we should have scrapped nessus altogether and started something better from scratch, but this is my humble opinion only.
enigma
A lot of useful information, and I totally agree with some of the issues raised. I've never installed nessus; it's great for security testing, but only if the server you're scanning has nessus on the backend, which half of them don't!
Jerry
I totally agree with your article. Furthermore, I would say that you can skip frameworks and their pre-made exploits and payloads altogether. Exploits have to be tuned each time to be effective, and payloads are almost all caught by AV when touching the disk, even if you encode them. They still have some chance if they run entirely in memory, but I had the most success with unique non-staged binary payloads. I also agree that with the CLI you can handle everything, even multiple sessions, better and faster than in msfconsole. For example, using the cli command 'socket' you can create a listening server which forks on a unix socket on each new connection, and then you can connect to each background process. This way you're handling unlimited sessions fast and in one line.