Rebase OpenSSL 1.0.2-chacha to use TLS 1.3

the-road-ahead

Since its inception in 2014, the OpenSSL 1.0.2-chacha fork [1] has been used as the standard OpenSSL distribution for numerous SSL/TLS pentesting tools. Compared with the vanilla 1.0.2 branch, it includes default support for ciphers that are deemed insecure, and has extensive STARTTLS support.

However, even though 1.0.2 is deemed a Long Term Support (LTS) version, no new ciphers or functionality will be added to it.

The initial reason to start the fork was a lack of ChaCha20 / Poly1305 support in the 1.0.2 branch. After that, more and more features and insecure ciphers were added or ported back in from other branches.

As ChaCha20 / Poly1305 support has been added to the 1.1.1 branch, which also contains (preliminary) TLS 1.3 support, it might be time for the insecure OpenSSL version to be rebased onto a new branch. The initial goals will still be the same:

  • Add as many ciphers and as much functionality as possible
  • Keep the source aligned as much as possible to the vanilla version
  • Keep the patches atomic, transparent and maintainable
  • Write as little custom code as possible

This will be quite the challenge, as the architecture and …

more ...

Properly encoding and escaping for the web

encoding

When processing untrusted user input for (web) applications, filter the input, and encode the output. That is the most widely given advice to prevent (server-side) injections. Yet it can be deceptively difficult to properly encode (user) input. Encoding depends on the type of output - which means that, for example, a string that will be used in a JavaScript variable should be treated (encoded) differently than a string that will be used in plain HTML.

When outputting untrusted user input, one should encode or escape it based on the context: the location of the output.

And what's the difference between escaping and encoding?

Encoding is transforming data from one format into another format.

Escaping is a subset of encoding, where not all characters need to be transformed: only some characters are encoded (by using an escape character).
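To illustrate the context dependence, the sketch below encodes the same untrusted string for two different output contexts. It shells out to Python's standard library purely as a convenient encoder; the payload string is a made-up example.

```shell
s='<script>alert(1)</script>&'
# HTML body context: escape the characters with special meaning in HTML
python3 -c 'import html, sys; print(html.escape(sys.argv[1]))' "$s"
# prints &lt;script&gt;alert(1)&lt;/script&gt;&amp;
# URL context: percent-encode everything that is not an unreserved character
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$s"
# prints %3Cscript%3Ealert%281%29%3C%2Fscript%3E%26
```

Note that the two results are completely different: output that is safe in one context can still be dangerous in the other.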

There are quite a number of encoding mechanisms, which make this more difficult than it might look at first glance.

URL encoding

URL encoding is a method to encode information in a Uniform Resource Identifier. There's a set of reserved characters, which have special meaning, and unreserved, or safe characters, which are safe to use. If a character is reserved, then the …

more ...

Security through obscurity means better operational security

Are YOU OPSEC?
What I personally like so much about being a penetration tester is that (I'd like to think that) we make the world a safer place. Better security means better privacy means more democracy.
It's not about telling people what "they" did wrong. On the contrary, it's a learning process for all of us. No single application, network or system is the same. Each company has its own risk model, which means that there often is no one-size-fits-all solution.

It's about "how can we improve the security" for everybody. That's why I think it's so important that penetration testers lead by example, and apply proper operational security procedures themselves.

Recently my first Pluralsight course was published: Operational Security for Penetration Testers. It deals with what opsec is, and how to apply it to your penetration testing workflow. The trailer of the course can be found at https://www.youtube.com/watch?v=DSF6XbCxYGY; the course itself on Pluralsight's site, https://www.pluralsight.com/courses/opsec-penetration-testers

As beautifully stated by the third law of OPSEC: "If you are not protecting it, the adversary wins".

more ...

Use Emacs to create OAuth 2.0 UML sequence diagrams

OAuth 2.0 abstract protocol flow

It seems that the OAuth 2.0 protocol is being used more and more by web (and mobile) applications. Great!

Although the protocol itself is not that complex, there are a number of different use-cases, flows and implementations to choose from. As with most things in life, the devil is in the detail.

When reviewing OAuth 2.0 implementations or writing penetration testing reports I like to draw UML diagrams. That makes it easier to understand what's going on, and to spot potential issues. After all, a picture is worth a thousand words.

This can be done extremely easily using the GPL-licensed open source Emacs editor, in conjunction with the GPL-licensed open source tool PlantUML (and optionally Graphviz, licensed under the Eclipse Public License).

Emacs is the world's most versatile editor. In this case, it's used to edit the text and automatically convert that text to an image. PlantUML is a tool which lets you write UML in human-readable text, and does the actual conversion. Graphviz is visualization software; in this case, it's optionally used by PlantUML to render certain diagram types.

Download the compiled PlantUML jar file, Emacs and optionally download and install Graphviz.
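As a taste of PlantUML's human-readable syntax, the snippet below writes a minimal sequence diagram of the OAuth 2.0 abstract flow to a file; the participant names and messages are my own simplification of the flow described in RFC 6749.

```shell
# Create a minimal OAuth 2.0 abstract-flow sequence diagram in PlantUML syntax
cat > oauth-flow.puml <<'EOF'
@startuml
participant Client
participant "Authorization Server" as AS
participant "Resource Server" as RS
Client -> AS: authorization request
AS --> Client: authorization grant
Client -> AS: authorization grant
AS --> Client: access token
Client -> RS: access token
RS --> Client: protected resource
@enduml
EOF
# Render it to oauth-flow.png (assumes plantuml.jar in the current directory):
# java -jar plantuml.jar oauth-flow.puml
```

The same text can be edited and previewed from within Emacs, which is what the rest of this post is about.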

Once you have Emacs installed and running …

more ...

Why Sharing Improves Us

why sharing improves us

If you're a perfectionist, it's difficult to release a product, whether that's source code, a pentest report or a blog post. It's always a work in progress, and never finished.

That's why I like open sourcing code, for example: releasing it for everybody to see. Knowing beforehand that the code - your work - will be read by others (while you're working on it) forces you to think longer, deeper and harder about variable names, structures, function names and coding style.

I'm the lead pentester for a company where we allow the customer to peek over our shoulder while we're working. The customer can see everything that we try, do and find out during the pentest. This improves the relationship with the customer, as s/he sees what we're doing and can even think along with us.

It also improves customer satisfaction, as they know exactly what they're getting. And it improves the mutual respect. Instead of becoming a classical us-versus-them pentest (the pentesters versus the developers), it becomes a 'let's improve the overall security together' exercise.

Judging by all the positive feedback we're receiving, we're onto something here. A win-win.

Sunlight is not only a great disinfectant, it's also …

more ...

Verify security vulnerabilities: A collection of Bash one-liners

security test one-liners
There's manual pentesting and writing reports, and there is blindly copying the output of automated scan tools. I am fortunate enough to write and review a lot of pentest reports, as well as read pentest reports of other companies.
Nothing looks as bad as "vulnerabilities" in a report that haven't been verified as such. This really degrades the quality of the report.

Below are a number of simple one-liners that can help with verifying vulnerabilities. All examples can be run in a basic shell (Bash, zsh), where the TARGET variable contains the hostname of the target that needs to be verified (without protocol).

SSL/TLS: BREACH

for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: ${compression}" https://${TARGET} |
    grep -i "content-encoding: ${compression}"
done
*Might* be vulnerable when: one or more compression methods are shown.

SSL/TLS: Client-Initiated Secure Renegotiation

printf 'R\nQ\n' | timeout 10 openssl s_client -connect ${TARGET}:443
Vulnerable when: Renegotiation is successful (exit code == 0)

HTTP: TRACE enabled

for proto in http https; do echo "testing ${proto}://${TARGET}";
  curl -qskIX TRACE ${proto}://${TARGET} | grep -i TRACE; done
Vulnerable when: the verb TRACE is shown

HTTP: Open …

more ...

Verifying webserver compression - BREACH attack

BREACH attack

A few lines of Bash script let you check which compression methods are supported by an SSL/TLS-enabled webserver.

target=URL-OF-TARGET
for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: ${compression}" https://${target} | grep -i ${compression}
done
If you see any output (and the server supports one of these compression algorithms), the site might be vulnerable to a BREACH attack. Might, because an attacker has to 'inject' content into the output (and have some control over it): this is called a chosen plaintext attack.
By carefully injecting certain content into the page, an attacker is able to deduce (parts of) the page content by merely looking at the response size (or speed). An attacker therefore also has to be able to observe the server's responses. A third prerequisite is that the secret (that an attacker wants to steal) is contained in the response's body, and not 'just' in the response's headers. Cookies are therefore out of scope.

The easiest mitigation is to disable HTTP compression completely. Other less practical mitigations are adding random content to each page, which changes the compressed size per page request, rate limiting the …

more ...

Automating OAuth 2.0 / JWT token retrieval for pentests

OAuth 2.0

Recently I was pentesting a complex API which used the OAuth 2.0 framework for authentication. Each API call needed an Authorization: Bearer header, containing a valid JSON Web Token (JWT).

To access the API I needed a lot of JWT tokens, as the tokens had a very short expiry time. To facilitate the quick generation of tokens I created a basic script that automates the OAuth authorization: it logs on to a domain, requests an authorization code, and exchanges that code for an access token.

One or more of these steps can be skipped via command-line options (e.g. by specifying valid cookies), to speed up the process.
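A minimal sketch of the final step, the code-for-token exchange. The endpoint, client id and authorization code below are made-up placeholders (the code value is borrowed from RFC 6749's examples):

```shell
AUTH_CODE=SplxlOBeZQQYbYS6WxSbIA   # placeholder authorization code
CLIENT_ID=pentest-client           # placeholder client id
# Assemble the request body for the token endpoint
body="grant_type=authorization_code&code=${AUTH_CODE}&client_id=${CLIENT_ID}"
printf '%s\n' "${body}"
# The actual request would then be something like:
# curl -s -d "${body}" -d redirect_uri=https://app.example.com/callback \
#   https://auth.example.com/oauth/token
```

The token endpoint's JSON response contains the access token used in the Authorization: Bearer header.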

Another feature of the script is that it automatically performs GET, POST, PUT and DELETE requests with valid tokens against a list of API endpoints (URLs). This preloads all API calls into an (attacking) proxy, which sped up the pentest tremendously.

JSON Web Tokens

A JWT is basically a string representing a collection of one or more claims. Claims are name/value pairs which state information about a user or subject. The claims are either signed (JSON Web Signature, JWS) or encrypted (JSON Web Encryption, JWE). JWTs serve …
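The three dot-separated, base64url-encoded segments (header, claims, signature) can be inspected straight from the shell. The token below is a made-up example; real-world tokens may need the base64url characters '-' and '_' translated to '/' and '+' (plus padding) before base64 -d accepts them.

```shell
# header.payload.signature - a made-up token with the claim {"sub":"admin"}
JWT='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiJ9.c2lnbmF0dXJl'
# Show the claims: take the second segment and base64-decode it
printf '%s' "${JWT}" | cut -d. -f2 | base64 -d
# prints {"sub":"admin"}
```

Note that the claims are merely encoded, not encrypted: anyone holding a JWS-type token can read them.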

more ...

Getting to know your pentesting tools - curl and the HTTP/1.0 protocol

Tooling is important

When pentesting, it's always handy to have a bunch of automated scanners do the grunt work for you. Using automated tools saves time and can help in spotting potential vulnerabilities. Usually I run analyze_hosts.py, a wrapper around the open source tools droopescan, nmap, nikto, Wappalyzer and WPscan, with a bit of intelligence built in.

Recently I had a pentesting engagement where nikto flagged an IIS server as leaking the internal IP address (see https://support.microsoft.com/en-us/kb/218180 for more information).

This is a very common issue with older, unhardened IIS servers. The issue is triggered when an HTTP/1.0 request is made to the server without supplying a Host header. The resulting Content-Location header will contain the server's (private) IP address, thereby leaking information which can subsequently be used in other attacks.

Example of a server's partial response:

HTTP/1.1 200 OK
Content-Location: http://1.1.1.1/Default.htm
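One way to reproduce such a request (a hedged sketch; TARGET is a placeholder): curl's -0 flag forces HTTP/1.0, and passing an empty 'Host:' header makes curl omit the Host header it normally adds. The grep demonstrates the matching step against a canned response with a made-up RFC 1918 address.

```shell
# Live check against a target (placeholder hostname):
#   curl -0 -sI -H 'Host:' "http://${TARGET}/" | grep -i '^Content-Location:'
# Matching a private address in a (canned) vulnerable response:
printf 'HTTP/1.1 200 OK\nContent-Location: http://10.0.0.5/Default.htm\n' |
  grep -Ei '^Content-Location: http://(10|192\.168|172\.(1[6-9]|2[0-9]|3[01]))\.'
# prints Content-Location: http://10.0.0.5/Default.htm
```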

I ran into an interesting observation when I needed to reproduce this using curl. Curl is a great tool to do all kinds of HTTP requests on the fly, and it's very well suited for scripting. It has flags to specify the protocol (e.g …

more ...

Open secure redirect

left or right

Aren't those RFC docs amazing? Reading up on standards?

I needed plenty of time for them, as I encountered some interesting issues. As it turned out, some websites / load balancers are overly optimistic in encrypting all the things - actually, in redirecting all the things.

TL;DR

Never trust HTTP(s) clients, and be careful when setting up redirection rules. A non-RFC compliant client can trigger a (difficult to exploit) open redirect vulnerability, due to a non-RFC compliant server.

This vulnerability can be tested using

analyze_hosts.py --http TARGET

See https://github.com/PeterMosmans/security-scripts/ for the latest version of analyze_hosts.py

Be warned, long post ahead: a while ago I came across some servers that, when sent insecure requests, responded with a redirect to the secure version.

Request:

% curl -sI http://VICTIM/

Response:

HTTP/1.1 301 Moved Permanently
Connection: close
Location: https://VICTIM/

So far so good, nothing fancy going on here. In fact, this is excellent behaviour. Insecure requests are immediately upgraded to secure requests.

However, the server seemed to be overly happy to redirect, as it listened to the client-supplied Host header:

% curl -s -I -H "Host: MALICIOUS" http://VICTIM/

And the server responded with:

HTTP/1 …
more ...