Properly encoding and escaping for the web

encoding

When processing untrusted user input in (web) applications, filter the input and encode the output. That is the most widely given advice for preventing (server-side) injections. Yet it can be deceptively difficult to properly encode (user) input. Encoding depends on the type of output, which means that a string that will be used in a JavaScript variable, for example, should be treated (encoded) differently than a string that will be used in plain HTML.

When outputting untrusted user input, one should encode or escape based on the context: the location where the output will end up.

And what's the difference between escaping and encoding?

Encoding is transforming data from one format into another format.

Escaping is a subset of encoding, where not all characters need to be encoded: only certain characters are transformed, by using an escape character.
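A quick shell illustration of this context dependence (a sketch: the sed expressions cover only a handful of HTML entities, and the percent-encoding relies on jq's @uri filter being available):

# The same untrusted string, encoded for two different output contexts
input='<script>alert("x&y")</script>'

# HTML context: replace &, <, >, " with HTML entities (& must come first)
printf '%s\n' "$input" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' \
  -e 's/>/\&gt;/g' -e 's/"/\&quot;/g'

# URL context: percent-encode reserved characters
printf '%s' "$input" | jq -sRr @uri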

There are quite a number of encoding mechanisms, which make this more difficult than it might look at first glance.

URL encoding

URL encoding is a method to encode information in a Uniform Resource Identifier. There's a set of reserved characters, which have special meaning, and unreserved, or safe characters, which are safe to use. If a character is reserved, then the …

more ...

Verify security vulnerabilities: A collection of Bash one-liners

security test one-liners
There's manual pentesting and writing reports, and there is blindly copying the output of automated scan tools. I am fortunate enough to write and review a lot of pentest reports, as well as read pentest reports of other companies.
Nothing looks as bad as "vulnerabilities" in a report that haven't been verified as such. This really degrades the quality of the report.

Below are a number of simple one-liners that can help with verifying vulnerabilities. All examples can be run in a basic shell (Bash, zsh), where the TARGET variable contains the hostname of the target that needs to be verified (without protocol).
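For example (www.example.com is a placeholder):

TARGET=www.example.com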

SSL/TLS: BREACH

for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: $compression" https://$TARGET | \
    grep -i "content-encoding: $compression"
done
*Might* be vulnerable when: one or more compression methods are shown.

SSL/TLS: Client-Initiated Secure Renegotiation

echo "R\nQ" | timeout 10 openssl s_client -connect ${TARGET}:443
Vulnerable when: Renegotiation is successful (exit code == 0)
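A sketch that wraps the one-liner and applies that exit-code rule automatically:

# Report based on the exit status (0 = renegotiation completed)
if printf 'R\nQ\n' | timeout 10 openssl s_client -connect ${TARGET}:443 >/dev/null 2>&1; then
  echo "${TARGET}: client-initiated renegotiation accepted - possibly vulnerable"
else
  echo "${TARGET}: renegotiation refused or connection failed"
fi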

HTTP: TRACE enabled

for proto in http https; do
  echo testing ${proto}://${TARGET}
  curl -qskIX TRACE ${proto}://${TARGET} | grep -i TRACE
done
Vulnerable when: the verb TRACE is shown

HTTP: Open …

more ...

Verifying webserver compression - BREACH attack

BREACH attack

A few lines of Bash script let you check which compression methods are supported by an SSL/TLS-enabled webserver.

target=URL-OF-TARGET
for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: ${compression}" https://${target} | grep -i ${compression}
done
If you see any output (meaning the server supports one of these compression algorithms), the site might be vulnerable to a BREACH attack. Might, because an attacker has to 'inject' content into the output (and have some control over it): this is called a chosen plaintext attack.
By carefully injecting certain content into the page, an attacker is able to deduce (parts of) the page content by merely looking at the response size (or speed). An attacker therefore also has to be able to observe the server's responses. A third prerequisite is that the secret (that an attacker wants to steal) is contained in the response body, and not 'just' in the response headers. Cookies are therefore out of scope.

The easiest mitigation is to disable HTTP compression completely. Other less practical mitigations are adding random content to each page, which changes the compressed size per page request, rate limiting the …

more ...

Automating OAuth 2.0 / JWT token retrieval for pentests

OAuth 2.0

Recently I was pentesting a complex API which used the OAuth 2.0 framework for authentication. Each API call needed an Authorization: Bearer header, containing a valid JSON Web Token (JWT).

To access the API I needed a lot of JWT tokens, as the tokens had a very short expiry time. To facilitate the quick generation of tokens I created a basic script that automates the OAuth authorization: it logs on to a domain, requests an authorization code, and exchanges that code for an access token.
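The core of that last step is a standard OAuth 2.0 token request. A minimal curl sketch (the endpoint, client credentials and redirect URI are placeholders for whatever the authorization server under test uses):

# Exchange an authorization code for an access token
curl -s https://AUTH-SERVER/oauth/token \
  -d grant_type=authorization_code \
  -d code="${AUTH_CODE}" \
  -d redirect_uri="${REDIRECT_URI}" \
  -d client_id="${CLIENT_ID}" \
  -d client_secret="${CLIENT_SECRET}"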

One or more of these steps can be skipped through command line options (e.g. by specifying valid cookies), to speed up the process.

Another feature of the script is that it automatically performs GET, POST, PUT and DELETE requests with valid tokens against a list of API endpoints (URLs). This preloads all API calls into an (attacking) proxy, which sped up the pentest tremendously.
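A sketch of that idea, assuming an intercepting proxy listening on 127.0.0.1:8080, a valid token in TOKEN, and a file endpoints.txt with one URL per line:

# Replay every endpoint with each verb through the proxy, so all
# requests and responses are captured for further manual testing
while read -r url; do
  for verb in GET POST PUT DELETE; do
    curl -skx http://127.0.0.1:8080 -X ${verb} \
      -H "Authorization: Bearer ${TOKEN}" "${url}" -o /dev/null
  done
done < endpoints.txt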

JSON Web Tokens

A JSON Web Token (JWT) is basically a string, representing a collection of one or more claims. Claims are name/value pairs which state information about a user or subject. The claims are either signed using a JSON Web Signature (JWS) or encrypted using JSON …

more ...

Getting to know your pentesting tools - curl and the HTTP/1.0 protocol

Tooling is important

When pentesting, it's always handy to have a bunch of automated scanners do the grunt work for you. Using automated tools saves time and can help in spotting potential vulnerabilities. Usually I run analyze_hosts.py, a wrapper around the open source tools droopescan, nmap, nikto, Wappalyzer and WPScan, with a bit of intelligence built in.

Recently I had a pentesting engagement where nikto flagged an IIS server as leaking its internal IP address (see https://support.microsoft.com/en-us/kb/218180 for more information).

This is a very common issue with older, unhardened IIS servers. The issue is triggered when an HTTP/1.0 request is made to the server without supplying a Host header. The resulting Content-Location header will contain the server's (private) IP address, thereby leaking information which can subsequently be used for other attacks.
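A raw request makes the trigger easy to see (a sketch, assuming nc is available; HTTP/1.0 allows requests without a Host header):

# Send a bare HTTP/1.0 request without a Host header
printf 'GET / HTTP/1.0\r\n\r\n' | nc ${TARGET} 80 | grep -i content-location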

Example of a server's partial response:

HTTP/1.1 200 OK
Content-Location: http://1.1.1.1/Default.htm

I ran into an interesting observation when I needed to reproduce this using curl. Curl is a great tool to do all kinds of HTTP requests on the fly, and it's very well suited for scripting. It has flags to specify the protocol (e.g …

more ...

Open secure redirect

left or right

Aren't those RFC docs amazing? Reading up on standards?

I spent plenty of time on them, as I encountered some interesting issues. As it turned out, some websites / load balancers are overly optimistic in encrypting all the things - actually, in redirecting all the things.

TL;DR

Never trust HTTP(S) clients, and be careful when setting up redirection rules. A non-RFC-compliant client can trigger a (difficult to exploit) open redirect vulnerability, due to a non-RFC-compliant server.

This vulnerability can be tested using

analyze_hosts.py --http TARGET

See https://github.com/PeterMosmans/security-scripts/ for the latest version of analyze_hosts.py

Be warned, long post ahead: A while ago I came across some servers that, when sent insecure requests, responded with a redirect to the secure version.

Request:

% curl -sI http://VICTIM/

Response:

HTTP/1.1 301 Moved Permanently
Connection: close
Location: https://VICTIM/

So far so good, nothing fancy going on here. In fact, this is excellent behaviour. Insecure requests are immediately upgraded to secure requests.

However, the server seemed to be overly happy to redirect, as it listened to the client-supplied Host header:

% curl -s -I -H "Host: MALICIOUS" http://VICTIM/

And the server responded with:

HTTP/1 …
more ...

FREAK!

As you probably read somewhere else, and in another place, and another... on March 3rd 2015, another attack on SSL/TLS was published. Following the tradition of BEAST, CRIME, Heartbleed, LUCKY13 and POODLE, this one also has a catchy name: FREAK (Factoring RSA Export Keys).

It's a man-in-the-middle attack, where an attacker who can intercept the traffic forces a downgrade to export-grade cryptography and can subsequently decrypt the SSL/TLS connection between a client and a server.

FREAK

Vulnerable *servers* are servers that accept export-grade ciphers (RSA-EXPORT). Checking whether a server is vulnerable can be done in many ways.

analyze_hosts --ssl HOST

If you see any EXPort ciphers, the server is vulnerable.

cipherscan HOST:443

If you see any EXPort ciphers, the server is vulnerable.

Yet another way is by using nmap:

nmap --script ssl-enum-ciphers -p443 HOST

If you see any EXPort ciphers, the server is vulnerable.

You get the idea...

Mitigate this vulnerability server-side by making sure that your server doesn't allow export ciphers: add the following expression to the OpenSSL cipher string in your server configuration

!EXP
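To see what that expression excludes, you can query the cipher list with the openssl command line tool (a sketch; modern OpenSSL builds have removed export ciphers altogether, in which case the first command simply returns an error):

# List the export-grade ciphers known to this OpenSSL build (if any)
openssl ciphers -v 'EXP'
# Verify that a cipher string containing !EXP no longer matches any of them
openssl ciphers 'ALL:!EXP' | tr ':' '\n' | grep '^EXP' || echo "no export ciphers"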

There are also vulnerable clients...

Clients using OpenSSL are not vulnerable if they were built against a version that contains the fix for CVE-2015-0204.

The …

more ...

Creating and verifying digital signatures of files

Digital signatures can be used to establish the authenticity and integrity of a (binary) file. These signatures can also be used for non-repudiation purposes, but that's usually not the intention when you're distributing or receiving files. (Note: non-repudiation means that the signer cannot deny having created the signature; it establishes beyond a doubt that the signer's key has been used to create that signature.)

The easiest and most secure way of creating and verifying digital signatures is by using PGP. The following commands assume that you have downloaded and configured GPG, the free and complete implementation of the OpenPGP standard.

Create a digital signature of FILENAME

gpg --armor --detach-sig --output FILENAME.sig FILENAME
--armor makes sure that the signature file is ASCII-armored (Radix-64 encoded)
--detach-sig creates a separate signature file
--output specifies the name of the signature file

Paranoid options

--no-version don't show which software version has been used to create the signature
--no-comments don't include comments that show which software has been used to create the signature
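Combined, a fully 'paranoid' signature command would look like this (a sketch; recent GnuPG versions omit the version line by default, and exact flag names can differ between GnuPG 1.x and 2.x):

gpg --armor --no-version --no-comments --detach-sig --output FILENAME.sig FILENAME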

Verify a digital signature

gpg --verify FILENAME.sig

This command assumes that the original file is FILENAME and resides in the same location as the signature file FILENAME.sig. To verify a signature you also need the signer's public key. If …

more ...

analyze_hosts

If you're like me, you don't want to spend your precious memory on remembering awkward command line parameters. However, lots of tools require exactly that: awkward command line parameters.

To simplify scanning of hosts for network vulnerabilities I wrote a simple wrapper script around several open source security tools. The script lets you analyze one or more hosts for common misconfigurations, vulnerabilities and weaknesses.
My main objectives in writing the script were to make it as easy as possible to perform generic security tests without any heavy prerequisites, to make the output as informative as possible, and to use open source tools.

Note that the latest version is the Python version - please use that one.

How to install

Clone the git archive using the command

git clone https://github.com/PeterMosmans/security-scripts.git

Needed

Linux, and nmap

Optional

  • curl
    for fingerprinting and to test for TRACE
  • dig
    to test for recursive DNS servers
  • git
    to update the script
  • nikto
    for webscanning
  • testssl.sh
    to check the SSL configuration

Usage

Oh irony - the command line parameters for the tool:

usage: analyze_hosts.sh [OPTION]... [HOST]

Scanning options:
 -a, --all perform all basic scans
 --max perform all advanced scans (more thorough)
 -b, --basic …
more ...

securing AMFPHP

I regularly run into Flash applications when I perform a web application penetration test. One of the most widely used server frameworks for communicating with a Flash object is AMFPHP.

Unfortunately the default installation of AMFPHP is insecure. A system administrator or developer actively has to secure the installation, which is often forgotten.

There are some tips lying around on the Internet on how to secure an AMFPHP installation. The summary (scripted after this list):
In the root of your AMFPHP deployment,
  • delete the DiscoveryService.php file
  • delete the browser folder and its contents
  • edit gateway.php and set the PRODUCTION_SERVER property to true
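A sketch of those steps as shell commands, run from the AMFPHP root (file locations and the exact contents of gateway.php differ per AMFPHP version, so verify before running):

# Harden a default AMFPHP installation (paths/contents vary per version)
rm -f DiscoveryService.php   # remove the service-discovery endpoint
rm -rf browser/              # remove the service browser and its contents
# flip PRODUCTION_SERVER to true (assumes a define() line in gateway.php)
sed -i 's/define("PRODUCTION_SERVER", *false)/define("PRODUCTION_SERVER", true)/' gateway.php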

Of course it's at least as important to write secure code, harden your server and implement proper patch and maintenance procedures.

more ...