Improving cross-subsystem git workflow: The different git configuration files

Cross-platform

Git configuration settings can be stored in three different files: The system configuration file, the global configuration file and the repository's local configuration file. See git on Windows - location of configuration files [1] for their locations.
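
To verify which file a given setting comes from, git can print the origin of every configured value:

git config --list --show-origin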

When you use multiple subsystems on Windows (like MSYS2, Cygwin or any of the Windows Subsystem for Linux distributions), it can be a chore to keep the git configurations synchronized. In other words: The fewer configuration files to maintain, the better.

Whether it's git for Windows, or one of the subsystem-specific git binaries:

Each of the git binaries that runs on Windows expands the tilde ( ~ ) to the home directory, and the path separator is always a slash ( / ).

These features can be used to our advantage to simplify the git configuration files across all subsystems.
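
For example, a path setting written with a tilde and forward slashes resolves correctly under git for Windows, MSYS2, Cygwin and WSL alike (the file name ~/.gitignore_global is just an illustration):

git config --global core.excludesFile '~/.gitignore_global'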

Re-defining the system

The system configuration file is meant to store all system-specific configuration settings, which will be applied to all users and git repositories on the system.

If you're the only user of your workstation, it makes sense to re-define system as subsystem:

All subsystem-dependent git configuration settings should be set in the system git configuration file.
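
As a minimal sketch (core.editor is an arbitrary example of such a subsystem-dependent setting), it would be written with the --system flag:

git config --system core.editor vim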

This means that settings depending on underlying binaries, like …

more ...

When to sharpen, and when to cut

Cut down a tree

When performing a task for the first time, I consider whether it's a one-off, or whether it will become a recurring thing. Python scripts, for example, can be developed blazingly fast, and a little bit of automation can go a long way.

However...

...sometimes developing an automated solution that looked so simple beforehand becomes a wild ride from one rabbit hole into the next. Missing dependencies, compile errors, functions that don't lend themselves very well to automation; everything that can go wrong will go wrong.

That's why I like The Pomodoro Technique [1] so much, where you work in discrete time chunks of, say, 25 or 30 minutes. You decide upon the maximum cost of the implementation beforehand. Given the expected return, what is a sane investment? If the time is up, then it's back to the original task at hand.

I have learned the hard way to always budget some time for documenting the (partial) solution, so that at least there's the profit of knowledge gained. Or, another record of a failed attempt...

[1] https://francescocirillo.com/pages/pomodoro-technique
more ...

Rebase OpenSSL 1.0.2-chacha to use TLS 1.3

the-road-ahead

Since its inception in 2014, the OpenSSL 1.0.2-chacha fork [1] has been used as the standard OpenSSL distribution for numerous SSL/TLS pentesting tools. In comparison with the vanilla 1.0.2 branch, it includes default support for ciphers that are deemed insecure, and has extensive STARTTLS support.

However, even though 1.0.2 is a Long Term Support (LTS) version, no new ciphers or functionality will be added to it.

The initial reason to start the fork was the lack of ChaCha20 / Poly1305 support in the 1.0.2 branch. After that, more and more features and insecure ciphers were added, or ported back from other branches.

As ChaCha20 / Poly1305 support has been added to the 1.1.1 branch, which also contains (preliminary) TLS 1.3 support, it might be time to rebase the insecure OpenSSL fork onto a newer branch. The initial goals will remain the same:

  • Add as many ciphers and as much functionality as possible
  • Keep the source aligned as much as possible to the vanilla version
  • Keep the patches atomic, transparent and maintainable
  • Write as little custom code as possible

This will be quite the challenge, as the architecture and …

more ...

Tools for setting, tracking and achieving long term goals

planner2018

Immediately after reading an article on David Allen and his brainchild Getting Things Done, I started implementing his methodology. I loved it. I still love it - especially the Getting Things Done concepts of inbox ZERO, maintaining lists, and periodic reviews.

Inbox ZERO for me is not so much about having empty email inboxes as about making sure that input is collected from multiple locations and stored in one dedicated location. An inbox can also be a notebook, or note-taking software like Google Keep.

Electronically stored lists have the benefit of being available on a multitude of devices, the ability to synchronize between them, backups, and - their biggest advantage - dynamic views.

emacs

Both tools that I have used so far for maintaining lists of actionable items and projects (the open source Java application ThinkingRock [1], and Emacs in Org mode [2]) were great in that respect. Using those tools for periodic reviews was a different story. After trying numerous configurations, I never got the hang of using ThinkingRock or Emacs for that purpose. Items became abstract letters on a screen. Views never fully captured what was important, or which project served which goal.

Periodically reviewing projects and …

more ...

Diff binary files like docx, odt and pdf with git

conversion_tools

Working with binary file types like the Microsoft Word XML Format Document docx, the OpenDocument Text odt format and the Portable Document Format pdf in combination with git has its difficulties. Out of the box, git only provides diffing for plain text formats. Comparing binary files in textual format is not supported.

With a simple configuration change and some open source, cross-platform tools, git can be adapted to diff those formats as well.

Installing the tools

First, one needs tools which can convert the binary files to plain text formats. For most formats, like docx and odt, the open source tool Pandoc [1] will do the trick. It can even export those files to Markdown format, or (my personal choice) reStructuredText [2]. A markup language like reStructuredText makes it possible to make a detailed comparison between structured documents, for instance when a heading level changes.
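
As a sketch of that configuration change (the driver name pandoc and the reStructuredText output format are just one possible choice), a diff driver can be declared in .gitattributes and given a textconv command that prints the converted file to stdout:

echo '*.docx diff=pandoc' >> .gitattributes
git config diff.pandoc.textconv 'pandoc --from=docx --to=rst'

From then on, git diff on a docx file compares the reStructuredText renderings instead of the raw binary; odt files would get an analogous driver with --from=odt.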

For PDF, there's the open source tool pdftotext, which is part of the Poppler [3] utils package and is available for (almost) all operating systems. It can convert a PDF file to plain text.
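
In its simplest form (file names are illustrative), the conversion is a single command:

pdftotext document.pdf document.txt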

There's a tiny catch with pdftotext, as it has issues using stdout as output instead of writing to files. This is …

more ...


Properly encoding and escaping for the web

encoding

When processing untrusted user input for (web) applications, filter the input, and encode the output. That is the most widely given advice for preventing (server-side) injections. Yet it can be deceptively difficult to properly encode (user) input. Encoding is dependent on the type of output - which means that, for example, a string that will be used in a JavaScript variable should be treated (encoded) differently than a string that will be used in plain HTML.

When outputting untrusted user input, one should encode or escape based on the context: the location of the output.

And what's the difference between escaping and encoding?

Encoding is transforming data from one format into another format.

Escaping is a subset of encoding, where not all characters need to be encoded. Only some characters are encoded (by using an escape character).
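
As a quick illustration (using Python's standard library from the shell; the sample string is arbitrary), HTML escaping replaces only the handful of characters that are special in HTML and leaves the rest untouched:

python3 -c 'import html; print(html.escape("Tom & <script>"))'

This prints Tom &amp; &lt;script&gt; - the letters pass through unchanged.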

There are quite a number of encoding mechanisms, which make this more difficult than it might look at first glance.

URL encoding

URL encoding is a method to encode information in a Uniform Resource Identifier. There's a set of reserved characters, which have special meaning, and unreserved (or safe) characters, which can be used freely. If a character is reserved, then the …
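
A quick demonstration (again via Python's standard library; its quote function treats the slash as safe by default):

python3 -c 'import urllib.parse; print(urllib.parse.quote("/path/with spaces&more"))'

The space and the ampersand come out percent-encoded as %20 and %26, while the unreserved characters pass through unchanged.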

more ...

Hacker Summer Camp: BSides Las Vegas and DEF CON 2017 review

BSides Las Vegas 2017

The 2017 edition of Hacker Summer Camp is over... Black Hat, BSides and DEF CON: Arguably the best security conferences in the world, held during one week in Las Vegas. And wow, what an amazing edition it was this time.

I tried to learn, network, enjoy and soak up as much as possible - which unfortunately means not seeing each and every talk, and (probably) missing out on amazing content. That's why I'm so glad that recordings and slide decks are being released by BSides and DEF CON, so that you can see where you should have been - after the fact.

The biggest draw for me personally to BSides and DEF CON is that you can immerse yourself in fields and interests that are outside of your daily work or routine. Car hacking, lockpicking, the Internet of Things, this year even voting machines: It's all there. You can learn from and play with everything.

As with playing Capture the Flag, it's a great way to touch a lot of surfaces in a short amount of time.

Josh Corman's BSides Las Vegas keynote was amazing - each time I hear him speak, he manages to get everybody even more enthusiastic about cooperation, about personal …

more ...

Generate list of used content tags for Pelican

If your Pelican-generated site uses lots of different tags for articles, it can be difficult to remember tag names or use them consistently. Therefore I needed a quick method to print the (comma-separated) unique tags that were stored in the text files.
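
In the reStructuredText source files, Pelican stores this metadata as fields at the top of each article (the tag names below are made up):

:tags: git, pelican, shell
:category: Software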

This shell one-liner, run from within the content directory, will sort and show all tags from reStructuredText ( *.rst ) files:

grep -h '^:tags:' *.rst | sed -e 's/^:tags:\s*//;s/\s*,\s*/\n/g' | sort -u

First, grep will filter on the :tags: property and only print the matching lines (without filenames, thanks to the -h flag).

Then sed will remove the :tags: keyword (and the whitespace directly after it), and split the comma-separated tags onto separate lines using newline characters.

Finally, sort takes care of sorting and only printing unique entries.

Analogously, one can do the same for categories:

grep -h '^:category:' *.rst | sed -e 's/^:category:\s*//' | sort -u

As Pelican only allows one category, this is somewhat simpler.

For maximum readability, tr can convert the newlines into spaces, so that the output is one big line:

grep -h '^:tags:' *.rst | sed -e 's/^:tags:\s*//;s/\s*,\s*/\n/g' | sort -u | tr '\n' ' '; echo

The last echo is meant to end …

more ...

Convert WordPress to static site generator Pelican

Pelican

After a number of years using WordPress as blogging software, I converted the site to a static site generator: Pelican.

Pelican converts reStructuredText into static HTML. No more PHP, no more databases - just plain static HTML.
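
Generating the site is then a single command (paths assume Pelican's default project layout):

pelican content -s pelicanconf.py -o output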

The process of converting the site was relatively painless. The conversion tool did a great job of converting an XML export of WordPress into reStructuredText pages.

What needed (and still needs) some manual care are the code blocks in articles (the biggest reason for the move from WordPress to Pelican) and the escaping of variables. WordPress gets pretty complex once you try to use it for code snippets and console output. reStructuredText is much more flexible and allows you to edit the site using any text editor. There are tools to do that with WordPress and its API, but it always felt like a difficult workaround.

I thought about keeping the URLs as-is: Over the years the number of visitors to the site has steadily risen, as has the level of indexing by search engines. You don't want dead links - but on the other hand, a transition to another content management system would be the perfect moment to 'clean up' the category …

more ...