Tooling is important

When pentesting, it's always handy to have a bunch of automated scanners do the grunt work for you. Using automated tools saves time and can help in spotting potential vulnerabilities. Usually I run a wrapper around the open source tools droopescan, nmap, nikto, Wappalyzer and WPScan, with a bit of intelligence built in.

Recently I had a pentesting engagement where nikto flagged an IIS server as leaking its internal IP address.

This is a very common issue with older, unhardened IIS servers. The issue is triggered when an HTTP/1.0 request is made to the server without supplying a Host header. The resulting Content-Location header will contain the server's (private) IP address, thereby leaking information that can subsequently be used for other attacks.

Example of a server's partial response:

HTTP/1.1 200 OK
Content-Location:
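For reference, the minimal request that triggers this behaviour can be crafted by hand. The sketch below only builds the raw request bytes; the netcat invocation in the comment shows how one might send them (target.example is a placeholder, not a host from the engagement):

```shell
# Build a strictly minimal HTTP/1.0 request: only the Request-Line and the
# terminating blank line; no Host header, no other headers.
request='GET / HTTP/1.0\r\n\r\n'

# Send it to the target, e.g. with netcat ('target.example' is a placeholder):
#   printf "$request" | nc target.example 80
# A vulnerable IIS responds with a Content-Location header containing its
# internal (private) IP address.
printf "$request"
```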

I ran into an interesting observation when I needed to reproduce this using curl. Curl is a great tool to do all kinds of HTTP requests on the fly, and it's very well suited for scripting. It has flags to specify the protocol version (e.g. --http1.0, --http1.1 and --http2).

However, this only changes the protocol version in the Request-Line (the first line of the HTTP request). It doesn't change anything about the request itself. That means that if you used curl with the HTTP/1.0 flag to verify the vulnerability, it would fail: you wouldn't find the issue, because curl does something different than you would expect.

Although RFC 1945 doesn't define the request headers Accept, Connection and Host, curl still sends them when you ask it to use the HTTP/1.0 protocol. This means that, strictly speaking, the request is not HTTP/1.0 compliant, which is not what you might expect when using the --http1.0 flag.

So when verifying the IIS internal IP disclosure vulnerability, one cannot 'simply' use curl by specifying the HTTP/1.0 flag (-0 or --http1.0). Curl doesn't follow the HTTP/1.0 protocol to the letter here...

The command

curl --http1.0
will result in the following request:

GET / HTTP/1.0
Host:
User-Agent: curl/7.48.0
Accept: */*
Connection: close

Of the headers curl sends here, User-Agent is actually the only one that is defined in the official HTTP/1.0 RFC. About 'additional' headers the RFC has the following statement:

"Request-Header field names can be extended reliably only in
combination with a change in the protocol version. However, new or
experimental header fields may be given the semantics of request
header fields if all parties in the communication recognize them to
be request header fields. Unrecognized header fields are treated as
Entity-Header fields."

Even though Accept, Connection and Host are not officially part of the HTTP/1.0 specification, they are very common. To make a curl request almost completely HTTP/1.0 compliant, one has to override the non-HTTP/1.0 fields. (I say almost, as it seems impossible with curl to override/remove the Connection header; this field was introduced in HTTP/1.1, RFC 2068.)

curl --http1.0 --header 'Accept:' --header 'Connection:' --header 'Host:'

This will result in the following request:

GET / HTTP/1.0
User-Agent: curl/7.48.0
Connection: close

And indeed, with this command one can reproduce the IIS vulnerability mentioned earlier.
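To make this check scriptable, the curl invocation can be wrapped in a small shell helper. This is only a sketch: is_internal_ip is a hypothetical helper name, and the pattern matches the RFC 1918 private address ranges:

```shell
# Return success if the given header line contains an RFC 1918 (private)
# IPv4 address, i.e. 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16.
is_internal_ip() {
  echo "$1" | grep -qE '(^|[^0-9.])(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)'
}

# Usage against a live target ('target.example' is a placeholder):
#   curl -s -D - -o /dev/null --http1.0 --header 'Accept:' --header 'Host:' \
#     http://target.example/ | grep -i '^Content-Location:' \
#     | while read -r header; do
#         is_internal_ip "$header" && echo "Leaked internal IP: $header"
#       done
```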

Lesson learned: spending time on getting to know (the quirks of) your tools is time well spent...
