It's about the journey: Compiling 64-bit Unison / GTK2 on Windows


The excellent MSYS2 (mingw64 and mingw32) subsystem makes compiling native Windows binaries "as easy as compilation can be". However, as with everything in life, trying to do one thing (compile a program) sometimes means chasing vaguely related issues (one exotic compile error after another).

For synchronizing files between servers and workstations I use the open source, GPLv3-licensed Unison File Synchronizer [1]. Although the text-interface version of Unison compiles straight out of the box on mingw64, the GTK2 interface proved to be a bit more cumbersome.

To compile Unison with the GTK2 interface, you need lablgtk [2], an OCaml interface to GTK.

So, the journey began with firing up a shell in a fresh mingw64 environment, and installing the build prerequisites:

pacman -Sy --noconfirm base-devel git \
mingw-w64-x86_64-{glib2,gtk2,ocaml,pango,toolchain}

After downloading the latest source (2.18.5 [3]) and running

./configure --prefix=/mingw64 --disable-gtktest

the first attempt to compile (using make) fails with an error:

mingw64/include/gtk-2.0/gdk/gdkwin32.h:40:36: fatal error: gdk/win32/gdkwin32keys.h: No such file or directory

It seems that GTK2 version 2.24.31 contains an error [4], and incorrectly still references the file …
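One possible workaround for this class of missing-header errors is to remove the offending reference from the installed gdkwin32.h. Treat the following one-liner as a sketch, assuming the rest of the build does not actually need that header:

# Delete the line that references the missing header from the installed copy
sed -i '/gdkwin32keys\.h/d' /mingw64/include/gtk-2.0/gdk/gdkwin32.h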

more ...

Why Sharing Improves Us


If you're a perfectionist, it's difficult to release a product, whether that's source code, a pentest report or a blog post. It's always a work in progress, and never finished.

That's why I like open sourcing code, for example: releasing it for everybody to see. Knowing beforehand that the code, your work, will be read by others (while you're working on it) forces you to think longer, deeper and harder about variable names, structures, function names and coding style.

I'm the lead pentester for a company where we allow the customer to peek over our shoulder while we're working. The customer can see everything that we try, do and find out during the pentest. This improves the relationship with the customer, as s/he sees what we're doing and can even think along with us.

It also improves customer satisfaction, as they know exactly what they're getting. And it improves mutual respect: instead of a classical us-versus-them pentest (the pentesters versus the developers), it becomes a 'let's improve the overall security together' exercise.

Judging by all the positive feedback we're receiving, we're onto something here. A win-win.

Sunlight is not only a great disinfectant, it's also …

more ...

Verify security vulnerabilities: A collection of Bash one-liners

There's manual pentesting and writing reports, and then there's blindly copying the output of automated scanning tools. I'm fortunate enough to write and review a lot of pentest reports, as well as read pentest reports of other companies.

Nothing looks as bad as "vulnerabilities" in a report that haven't been verified as such. This really degrades the quality of the report.

Below are a number of simple one-liners that can help with verifying vulnerabilities. All examples can be run in a basic shell (Bash, zsh), where the TARGET variable contains the hostname of the target that needs to be verified (without protocol).
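For example, with a hypothetical hostname:

TARGET=www.example.com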

SSL/TLS: BREACH

for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: ${compression}" https://${TARGET} |
    grep -i "content-encoding: ${compression}"
done
*Might* be vulnerable when: one or more compression methods are shown.

SSL/TLS: Client-Initiated Secure Renegotiation

printf "R\nQ\n" | timeout 10 openssl s_client -connect ${TARGET}:443
Vulnerable when: Renegotiation is successful (exit code == 0)

HTTP: TRACE enabled

for proto in http https; do
  echo testing ${proto}://${TARGET}
  curl -qskIX TRACE ${proto}://${TARGET} | grep -i TRACE
done
Vulnerable when: the verb TRACE is shown

HTTP: Open …

more ...

Zen provisioning: Bootstrap the installation of Ansible using Vagrant


I'm a big fan of the DevOps attitude of "cattle" versus "pets": machines should be built in a repeatable, automated and consistent way. If there's something wrong, don't be afraid to replace a sick "cow" instead of trying to revive your "pet".

This Zen mindset also helps when preparing for demos, trainings and workshops: usually I need a number of machines, and what better way to create them than by using automation? For that I'm using the tools Ansible, Packer, Vagrant and VirtualBox - they are all open source and can be used on a number of platforms (e.g. Windows, Linux and Mac OS X).

Ansible is a tool for managing systems and deploying applications, licensed under the GNU General Public License version 3 (my personal favorite).

Vagrant is a tool for managing virtual machines and is licensed under the MIT license.

VirtualBox is a virtualization environment for local use, licensed under the GNU General Public License version 2.

Packer creates machine images by installing an operating system, and supports a multitude of local and cloud platforms: for example VMware and VirtualBox, as well as Docker, Amazon EC2 and DigitalOcean. Packer is licensed under the Mozilla Public License Version 2.0.
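The workflow these tools enable can be sketched in a few shell commands (the template, box and output names below are assumptions, not part of an actual recipe):

# Build a VirtualBox image from a Packer template
packer build -only=virtualbox-iso template.json
# Register the resulting box with Vagrant and boot it
vagrant box add --name demo output-virtualbox-iso/demo.box
vagrant up --provider virtualbox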

How …

more ...

Compile Emacs for Windows using MSYS2 and mingw64


Emacs 25.1 was officially released on September 17th, 2016. The excellent MSYS2 subsystem and the open source gcc compiler make it super-easy to build binaries on/for Windows (7, 8, 10). In three easy steps from source to binaries:

1: Install and prepare the MSYS2 subsystem

Download and run the installer from http://repo.msys2.org/distrib/msys2-x86_64-latest.exe. After installing, run MSYS2 64bit, which drops you into a Bash shell. Update all packages using the following command:

pacman -Syuu

Sometimes updates of the runtime/filesystem can cause update errors. This is no cause for panic - kill and restart the terminal. For building 64-bit Windows binaries, always use mingw64.exe to start the terminal.

Install all packages necessary for building:

pacman -Sy --noconfirm base-devel git \
mingw-w64-x86_64-{giflib,gnutls,jbigkit,lib{jpeg-turbo,png,rsvg,tiff,xml2},toolchain,xpm-nox}

2: Clone the Emacs source

To simplify building, you can define the environment variables BUILDDIR (where the binaries are built), INSTALLDIR (where the binaries will be installed to), and SOURCEDIR (where the source lives, the git repository). Note that since you're in the MSYS2 subsystem, paths are Unix-style, using forward slashes. This command creates SOURCEDIR if it doesn't exist yet, clones the …
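As a sketch of that setup (the directory values are assumptions; the clone URL is the upstream Emacs repository, which may differ from the one the post uses):

# Where the source lives, where to build, and where to install
export SOURCEDIR=/c/emacs/source
export BUILDDIR=/c/emacs/build
export INSTALLDIR=/c/emacs/emacs-25.1
# Create the source directory and clone Emacs into it
mkdir -p ${SOURCEDIR}
git clone https://git.savannah.gnu.org/git/emacs.git ${SOURCEDIR}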

more ...

Are you a PenTexter? Now at BSidesLV


If you're around in Las Vegas during BlackHat / BSidesLV / DefCon, I'll be presenting at BSidesLV 2016 together with Melanie Rieback on an open source pentest reporting framework.

The talk will announce a new OWASP project: PenText, a fully open-sourced, XML-based pentest document automation system. The PenText system is a document automation framework that supports the entire pentesting lifecycle: from the initial inquiry, through pentest scoping, quotations, pentesting and reporting, to the final invoice.

During this talk, Melanie and I will demonstrate the OWASP PenText system live, in the context of our larger Pentesting ChatOps infrastructure (RocketChat, Hubot, and Gitlab). We will describe the basics of how the OWASP PenText system is architected (XML, XSLT, XSL-FO), and show how the system can be used to manage the entire lifecycle of pentesting data, including the automatic generation of documentation at various points in the process (including quotations, pentest reports, and invoices).
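As a generic illustration of that XML/XSLT/XSL-FO shape (using xsltproc and Apache FOP as stand-ins; this is not the actual PenText build):

# Transform the pentest XML into XSL-FO, then render the FO to PDF
# (stylesheet and document names are assumptions)
xsltproc report.xsl pentest.xml > report.fo
fop -fo report.fo -pdf report.pdf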

Lots of ChatOps in combination with pentesting and open source.

See https://bsideslv2016.sched.org/event/7azc/are-you-a-pentexter-open-sourcing-pentest-reporting-and-automation

See you there?

more ...

Verifying webserver compression - BREACH attack


A few lines of Bash script let you check which compression methods are supported by an SSL/TLS-enabled webserver.

target=URL-OF-TARGET
for compression in compress deflate exi gzip identity pack200-gzip br \
  bzip2 lzma peerdist sdch xpress xz; do
  curl -ksI -H "Accept-Encoding: ${compression}" https://${target} | grep -i ${compression}
done
If you see any output (and the server supports one of these compression algorithms), the site might be vulnerable to a BREACH attack. Might, because an attacker has to 'inject' content into the output (and have some control over it): this is called a chosen-plaintext attack.

By carefully injecting certain content into the page, an attacker is able to deduce (parts of) the page content by merely looking at the response size (or speed). An attacker therefore also has to be able to observe the server's responses. A third prerequisite is that the secret (that an attacker wants to steal) is contained in the body of the server's response, and not 'just' in the response headers. Cookies are therefore out of scope.

The easiest mitigation is to disable HTTP compression completely. Other less practical mitigations are adding random content to each page, which changes the compressed size per page request, rate limiting the …
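As a concrete example of the first mitigation, on nginx (assuming nginx; other webservers have equivalent switches) compression can be disabled with a single directive:

# nginx: disable response compression
gzip off;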

more ...

Automating OAuth 2.0 / JWT token retrieval for pentests


Recently I was pentesting a complex API which used the OAuth 2.0 framework for authentication. Each API call needed an Authorization: Bearer header, containing a valid JSON Web Token (JWT).

To access the API I needed a lot of JWT tokens, as the tokens had a very short expiry time. To facilitate the quick generation of tokens, I created a basic script that automates the OAuth authorization flow: it logs on to a domain, requests an authorization code, and exchanges that code for an authorization token.

One or more of these steps can be skipped via command-line options (e.g. by specifying valid cookies), to speed up the process.

Another feature of the script is that it automatically performs GETs, POSTs, PUTs and DELETEs with valid tokens against a list of API endpoints (URLs). This preloads all API calls into an (attacking) proxy, which sped up the pentest tremendously.
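That replay step can be sketched in a few lines of shell (the proxy address, endpoints file and TOKEN variable are assumptions, not the script's actual interface):

# Replay every endpoint with each verb through an intercepting proxy,
# attaching a valid JWT
while read -r url; do
  for verb in GET POST PUT DELETE; do
    curl -ks -x http://127.0.0.1:8080 -X ${verb} \
      -H "Authorization: Bearer ${TOKEN}" "${url}" -o /dev/null
  done
done < endpoints.txt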

JSON Web Tokens

A JSON Web Token (JWT) is basically a string representing a collection of one or more claims. Claims are name/value pairs stating information about a user or subject. The claims are either signed using a JSON Web Signature (JWS) or encrypted using JSON …
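The parts of a JWT are base64url-encoded and separated by dots, so a token's claims can be inspected from the shell (a sketch: assumes a token in the JWT variable, and glosses over base64url padding):

# Print the claims (the second dot-separated part) of a JWT
echo ${JWT} | cut -d. -f2 | base64 -d 2>/dev/null; echo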

more ...

Getting to know your pentesting tools - curl and the HTTP/1.0 protocol


When pentesting, it's always handy to have a bunch of automated scanners do the grunt work for you. Using automated tools saves time and can help in spotting potential vulnerabilities. Usually I run analyze_hosts.py, a wrapper around the open source tools droopescan, nmap, nikto, Wappalyzer and WPscan, with a bit of intelligence built in.

Recently I had a pentesting engagement where nikto flagged an IIS server as leaking its internal IP address (see https://support.microsoft.com/en-us/kb/218180 for more information).

This is a very common issue with older, unhardened IIS servers. The issue is triggered when an HTTP/1.0 request is made to the server without supplying a Host header. The resulting Content-Location header will contain the server's (private) IP address, thereby leaking information which can subsequently be used for other attacks.

Example of a server's partial response:

HTTP/1.1 200 OK
Content-Location: http://1.1.1.1/Default.htm
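Reproducing this with curl boils down to forcing HTTP/1.0 and blanking the Host header that curl would otherwise add (a sketch; TARGET is a placeholder hostname):

# Make an HTTP/1.0 request without a Host header and inspect the response headers
curl -sI --http1.0 -H 'Host:' http://${TARGET}/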

I ran into an interesting observation when I needed to reproduce this using curl. Curl is a great tool to do all kinds of HTTP requests on the fly, and it's very well suited for scripting. It has flags to specify the protocol (e.g …

more ...

Automating repetitive git / setup tasks


Imagine you work on a large number of projects. Each of those projects has its own git repository and accompanying notes file outside of the repo. Each git repository has its own githooks and custom setup.

Imagine having to work with multiple namespaces on different remote servers. Imagine setting up these projects by hand, multiple times a week.

Automation to the rescue! Where I usually use Bash shell scripts to automate workflows, I'm moving more and more towards Python: it's cross-platform and sometimes easier to work with, as you have a large number of libraries at your disposal.

I wrote a simple Python script that automates all of those things. Feed the script the name of the repository you want to clone, and optionally namespace, patchfile and templatefile variables (either on the command line or using a configuration file). The script will then (a hypothetical invocation is sketched after the list):

  • clone the repository
  • modify the repository (e.g. apply githooks)
  • optionally modify the repository based on given variables
  • create a new notes file from a template
  • optionally modify the notes file based on given variables
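
Such an invocation might look like this (the script name and flags are made up for illustration; the actual interface may differ):

./setup_project.py my-repo --namespace customer-a \
  --patchfile hooks.patch --templatefile notes.tmpl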

The advantage is that you can use a configuration file containing the location of the remote git repository, the patchfile …

more ...