Bash vs Python (dependency hell)

For a number of years I maintained a small collection of open source security scripts, written in Bash. The main purpose of these scripts was to act as a wrapper around other open source tools. Why try to remember long and awkward command-line parameters when you can ask a script to do that for you?

Bash was chosen because it is distribution-independent: it works almost everywhere (although OS X support is sometimes troublesome, due to outdated Bash versions).

After more and more (requested) features crept in, the analyze_hosts.sh Bash script became more and more complex. That's why I decided to port the script to Python. In my experience it's at least as portable, and the use of third-party (pip) packages means that less time is spent on reinventing the wheel, and more on the actual functionality.

Yes, people sometimes talk about Python's dependency hell, and in some cases the use of third-party packages does mean you have to be careful about what you're doing.
However, when using virtual environments, each Python script and its dependencies can be safely separated from the 'main' Python installation. For example, the following commands create a separate virtual environment for the security scripts repo …
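As a sketch of what such commands typically look like (the directory name and the presence of a requirements.txt file are illustrative assumptions, not necessarily the post's actual commands):

# Create an isolated virtual environment (the path is an assumption)
python3 -m venv ~/venv/security-scripts
# Activate it, so pip installs into the environment rather than the system Python
source ~/venv/security-scripts/bin/activate
# Install the script's dependencies inside the environment only
pip install -r requirements.txt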
more ...

Open secure redirect


Aren't those RFC docs amazing? Reading up on standards?

I needed plenty of time for them, as I encountered some interesting issues. As it turned out, some websites / load balancers are overly optimistic in encrypting all the things - or rather, in redirecting all the things.

TL;DR

Never trust HTTP(S) clients, and be careful when setting up redirection rules. A non-RFC-compliant client can trigger a (difficult to exploit) open redirect vulnerability, due to a non-RFC-compliant server.

This vulnerability can be tested using:

analyze_hosts.py --http TARGET

See https://github.com/PeterMosmans/security-scripts/ for the latest version of analyze_hosts.py

Be warned, long post ahead: a while ago I came across some servers that, when sent insecure requests, responded with a redirect to the secure version.

Request:

% curl -sI http://VICTIM/

Response:

HTTP/1.1 301 Moved Permanently
Connection: close
Location: https://VICTIM/

So far so good, nothing fancy going on here. In fact, this is excellent behaviour. Insecure requests are immediately upgraded to secure requests.

However, the server seemed overly happy to redirect, as it blindly used the client-supplied Host header:

% curl -s -I -H "Host: MALICIOUS" http://VICTIM/

And the server responded with:

HTTP/1 …
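To check for yourself whether a server reflects the client-supplied Host header in its redirect, a minimal sketch (VICTIM and MALICIOUS are the same placeholders as above):

# Send a forged Host header and show only the redirect target;
# if MALICIOUS is echoed back, the redirect is built from client input
curl -sI -H "Host: MALICIOUS" http://VICTIM/ | grep -i '^Location:'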
more ...

Automatic XML validation when using git


Recently I worked on a project which involved manually editing a bunch of XML files. Emacs is my favorite ~operating system~ editor, and it has XML validation built in (using nXML mode), highlighting validation errors while you type. Unfortunately, even with Emacs showing potential issues in RED, I managed to commit a number of broken XML files to my local git repository. Subsequently, when I pushed my errors to the remote 'origin' git repository, they broke builds.

Of course this can be completely prevented by using local pre-commit hooks. If your local git repository validates XML files before you can commit them, and rejects invalid XML files, then one part of the problem is solved.

A pre-receive hook on the receiving server side can do the same as a local pre-commit hook: validate XML files before letting somebody push a commit that can break the build process.

I looked around the Internet but couldn't find a lightweight script that did exactly and only that. That's why I whipped up a basic pre-commit and pre-receive hook, written in Python.
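To illustrate the idea - note that the actual hooks in the repository below are written in Python; this is merely a minimal shell sketch, assuming xmllint is installed:

#!/usr/bin/env bash
# Minimal pre-commit sketch (illustration only): validate every staged
# XML file and abort the commit as soon as one fails to parse
for file in $(git diff --cached --name-only --diff-filter=ACM -- '*.xml'); do
    if ! xmllint --noout "$file"; then
        echo "Invalid XML in $file - commit aborted" >&2
        exit 1
    fi
done

Saved as .git/hooks/pre-commit and made executable, this rejects any commit containing unparseable XML before it ever reaches the repository.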

You can find the very basic and rough code at https://github.com/PeterMosmans/git-utilities. By changing the …

more ...

Preparing your team for a CTF competition - Defcon style


Playing Capture The Flag with a team on location is something completely different from performing penetration tests, security assessments or even trying to solve CTF challenges over the Internet.

At Defcon 23 I joined a team of really knowledgeable, nice and friendly people for the OpenCTF competition. It was an exhilarating ride, from setting up all the equipment to the glorious finish. Playing Capture The Flag at Defcon was educational, but foremost fun, fun and fun.

So why would you spend a good chunk of 48 hours sitting in a chair behind a screen while there is so much more to see and experience at Defcon? In one word: the indescribably exciting atmosphere of playing during a conference, of competing against all these bright people from all over the world, desperately trying to solve the challenges.

Here are some of my personal notes on how to get the most out of competing in an OpenCTF competition with a team:

  • Allow plenty of time before the competition to set up (and harden - don't be a fool like me) your machine. Make sure you have all necessary tools and notes.
  • Make sure beforehand that all team members have one communication channel (e.g. IRC …
more ...

Defcon 23 was great - people are great

For quite a while now I have been working in the security industry. One of the things I do is provide security advice to companies on all sorts of guidelines, policies and hardening. Web penetration tests are also something I do very regularly. In other words, a disclaimer before you read on: I should have known better...

VirtualBox, Packer, Vagrant and Ansible are tools that I use a lot. These four tools make virtualizing and provisioning really easy. You can create new machines, experiment with them and test different setups in a repeatable and automated way.

As I sometimes organize pentesting workshops, I have several virtual machines with Kali (a penetration testing distribution) installed on them readily available.

So, I connected my laptop to the network of the 23rd Defcon conference in Las Vegas while one of these standard Kali virtual machines was (still) running as a guest on my machine. Not only was Kali running, the guest was also configured to run in bridged networking mode. This means that Kali got its own network IP address assigned.

What I hadn't changed on that machine was Kali's default root password. To make matters worse, what I had changed was the ssh server …
more ...

The future is here: HTTP/2

Last month I held a number of presentations on the latest and greatest HTTP/2 protocol. It's an area where there's currently a lot of demand for knowledge and practical tips. Most people are surprised to find out that they're actually already using it on a daily basis.

If you're interested, you could check out an Ansible role which installs a number of client-side and server-side tools, all HTTP/2-enabled:

  • curl - A data transfer tool with HTTP/2 support
  • h2load - A benchmarking tool for HTTP/2 and SPDY servers
  • nghttp - An HTTP/2 client with SPDY support
  • nghttpd - An HTTP/2 server with SPDY support
  • nghttpx - A transparent HTTP/2 proxy with SPDY support
  • openssl - A cryptographic library with ALPN support (1.0.2-chacha)

The following libraries will be installed:

  • libcrypto - OpenSSL
  • libcurl - CURL library
  • libnghttp2 - An HTTP/2 and HPACK C library
  • libspdylay - A SPDY library
  • libssl - OpenSSL

You can find the role at https://github.com/PeterMosmans/ansible-role-http2
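Once the role has run, here is a quick sketch of checking a server for HTTP/2 support (example.com is a placeholder; this assumes the tools above ended up on your PATH):

# Verbose HTTP/2 client: shows the negotiated protocol and frames
nghttp -nv https://example.com
# curl over HTTP/2 (requires a curl built against libnghttp2)
curl --http2 -sI https://example.com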

more ...

Safely storing Ansible playbook secrets


More and more organizations use dedicated software to safely handle the creation and management of secrets (for example SSL certificate keys, private variables and passwords). Three 'well known' solutions are Square's Keywhiz, HashiCorp's Vault, and crypt in combination with etcd or consul.

As with all security solutions, the roll-out can be quite cumbersome. Correctly implementing any one of these solutions (think key management, think audit trails, think key recovery) is difficult. And difficult means that most people won't use it, at least not right away (remember SELinux?).

There are a number of tools available to encrypt secrets within (Ansible) repositories. One of them, for instance, is Ansible Vault (look here for a more in-depth review). Although the idea of selectively encrypting data is a good one, text-oriented version control systems like git or Subversion aren't meant to store binary blobs of encrypted data. Moreover, you still run the risk of accidentally uploading or sharing unencrypted files. Mitigations like adding the filenames of unencrypted secrets to a .gitignore file are error-prone.

How do you enable developers and system operators to store secrets in a safe place, outside the repositories where Ansible playbooks and configuration files are kept? One possible direction is sketched below.
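As a sketch of one possible approach (not necessarily the one this article goes on to describe; the path is an illustrative assumption), you can keep the secrets file entirely outside the repository and point Ansible at it explicitly at run time:

# Keep secrets outside the repo and pass them in when running the playbook
# (the path is an illustrative assumption)
ansible-playbook site.yml --extra-vars "@$HOME/secrets/production.yml"

Since the secrets file never lives inside the repository, there is nothing to accidentally commit or push.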

This article describes a …

more ...

OWASP AppSecEU 2015 review - more and more DevOps


This year, the European edition of the OWASP AppSec conference was held in Amsterdam, The Netherlands.

One of the things I really like about OWASP conferences is the atmosphere. Usually it consists of a nice blend of IT people from literally all over the world, and this conference didn't disappoint. One of the added values of visiting such a conference is that you hear stories from the trenches from peers and like-minded people. It makes it easier to (try to) spot trends in the security world.

Some observations:

DevOps

I'm a big fan of the DevOps movement and what it means for security. More cooperation plus more automated testing means more secure systems. Thankfully there were a lot of presentations that focused on how to integrate automated security testing into the continuous deployment pipeline. As the O in OWASP stands for open, mainly open source testing tools were covered, like OWASP ZAP, Arachni and the Gauntlt framework. Some tools still need quite some tweaking to be successful, but the landscape surely is promising.

Dev is running faster than Ops

I'm still under the impression that the DevOps movement is mainly led by developers. The tools that are improving faster are the …

more ...

OpenSSL the Ansible vault... using PBKDF2


Ansible is a popular open-source software platform for configuring and managing computers. It helps sysadmins provision new servers in a reliable and repeatable way, and helps developers who want to push their code out as fast as possible. It takes scripts (playbooks) as input, which a lot of people can and do share with each other. The beauty of open source. Playbooks can contain sensitive data like passwords and SSL keys - stuff that you don't want to share, or accidentally upload to GitHub.

Last year Ansible added a tool to its arsenal to easily encrypt structured data files (containing sensitive data), called Ansible Vault. You can specify a key or keyfile when running a playbook, which decrypts the data on the fly. Encrypted data can still be edited.
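The basic workflow looks like this (file and keyfile names are illustrative):

# Encrypt a structured data file containing sensitive variables
ansible-vault encrypt group_vars/all/secrets.yml
# Edit the encrypted file in place; it is only decrypted while editing
ansible-vault edit group_vars/all/secrets.yml
# Decrypt on the fly while running a playbook, using a keyfile
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt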

I love it when people make it easier to use encryption. The easier it becomes, the more people will use it, the safer everybody will be.

Another beauty of open source is that you can inspect the code. And modify it! I wanted to be able to encrypt and decrypt the data where or when you cannot use Ansible Vault, by using other tools and languages like OpenSSL and Bash.

Under the hood Ansible vault …

more ...

Replacing ChaCha20/Poly1305: a new owner

A post back I wrote about the 'design goals' of the 1.0.2-chacha fork of OpenSSL - see https://www.onwebsecurity.com/openssl/the-work-flow-of-the-full-featured-openssl-fork-chacha20poly1305.

A new owner

The ChaCha20 / Poly1305 code in the 1.0.2-chacha fork originally comes from the OpenSSL repository, but has since been abandoned there. BoringSSL became its new home, where it's actively maintained by Google (primarily Adam Langley and David Benjamin). Over time I applied several patches that BoringSSL made to the ChaCha20 / Poly1305 code, to keep it as up to date as possible.

The issue now is that BoringSSL diverges more and more from the OpenSSL code, which makes it more difficult (and error-prone) to maintain and, more importantly, makes the fork itself diverge too much from OpenSSL.

That's why it's my intention to replace the current ChaCha20 / Poly1305 code in 1.0.2-chacha with more recent contributions that align better with the official OpenSSL code. As far as I understand, the official OpenSSL distribution will add ChaCha20 / Poly1305 at some point in the future, which of course would be the best possible outcome: official support.

Until that time I will do my best to maintain the 1.0.2-chacha branch.

more ...