“smart” quotes in Outlook on Mac

Spent ages trying to figure out how to stop Outlook 2011 converting my straight "quotes" into “smart” quotes (curly, UTF-8, and they don’t render well on Windows machines).

Turns out it’s not an Outlook setting at all but a system-wide OS X setting in the Keyboard preferences, which controls how all text-input boxes work:

[Screenshot: the OS X Keyboard preferences pane with the smart quotes setting]

Quick and dirty: delete all top-level files in a directory not owned by any user with a currently running process

Handy for shared systems where you want to reap files in, for example, /tmp without affecting currently running processes. There are some unhandled exceptions, such as files disappearing after the file list is built but before the file is deleted (which will throw an uncaught file-not-found exception), but as a quick and dirty first attempt I think it’s not too shabby.

#!/usr/bin/env python

import collections
import logging
import getopt
import os
import shutil
import sys

DEBUG = False

def usage():
        sys.stderr.write("Usage: {scriptname} [[-h | -? | --help] | [-d | --debug] path [path1 [path2...]]]\n".format(scriptname=sys.argv[0]))

def log(path):
        # Debug action: report what would be removed without touching anything.
        logging.info("Would remove %s", path)

def remove(path):
        # Directories need a recursive delete; everything else is a plain unlink.
        if os.path.isdir(path):
                shutil.rmtree(path)
        else:
                os.remove(path)

def reap(directory):

        # Get a list of all files in the directory to consider for reaping, and group them by owner's uid
        user_files = collections.defaultdict(list)
        map(lambda file: user_files[os.lstat(os.path.join(directory, file)).st_uid].append(file), os.listdir(directory))

        # Get a list of users who have processes running on the box
        users_with_processes = [ os.lstat('/proc/{proc}'.format(proc=proc)).st_uid for proc in os.listdir('/proc') if proc.isdigit() ]

        # Now find the users who do not have running processes, as these are the users whose files we are going to reap (always skip root)
        users_to_reap = [ user for user in user_files.keys() if user != 0 and user not in users_with_processes ]

        # Remove the files (or, in debug mode, just log what would be removed)
        if DEBUG:
                action = log
        else:
                action = remove
        map(action, [ os.path.join(directory, file) for file in [ file for user in users_to_reap for file in user_files[user] ] ])

try:
        optlist, args = getopt.getopt(sys.argv[1:], 'dh?', ['debug', 'help'])
except getopt.GetoptError as err:
        sys.stderr.write(str(err) + "\n")
        usage()
        sys.exit(2)

for opt, value in optlist:
        if opt in ('-d', '--debug'):
                DEBUG = True
                logging.basicConfig(level=logging.INFO)
        elif opt in ('-h', '-?', '--help'):
                usage()
                sys.exit(0)
        else:
                sys.stderr.write("Unhandled option: {opt}\n".format(opt=opt))

if len(args) == 0:
        usage()
        sys.exit(1)

for dir in args:
        reap(dir)
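
If the vanishing-file race mentioned above ever becomes a problem, one fix would be to have remove() swallow "file not found" errors; a minimal sketch (untested):

import errno

def remove(path):
        # As above, but ignore files which vanish between listing and removal.
        try:
                if os.path.isdir(path):
                        shutil.rmtree(path)
                else:
                        os.remove(path)
        except OSError as e:
                if e.errno != errno.ENOENT:
                        raise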

Preventing git commits as root

This is a quick pre-commit git hook to prevent committing anonymously as root – it refuses to allow root to commit directly and insists that --author is given if a user is committing via sudo.



#!/bin/bash
# Check that if committing as root the author is set, as far as possible.
abort=0
if [ "$UID" -eq 0 ]
then
	echo "Warning: Committing as root." >&2
	if [ -n "$SUDO_USER" ]
	then
		if ! echo $SUDO_COMMAND | grep -q -e '--author'
		then
			cat - >&2 <<EOF
When committing as root via sudo, please identify yourself with:
git commit --author='Your Name <your@email.address>'
or (if you have committed before - see 'man git-commit'):
git commit --author=some_pattern_that_matches_your_name

Previous authors in repository:
$( git log --all --format='%an <%ae>' | sort -u )
EOF
			abort=1
		fi
	else
		echo "Committing as root, without using sudo. Please do not do this." >&2
		abort=1
	fi
fi

if [ "$abort" -ne 0 ]
then
	echo -e "\n\ncommit aborted\n" >&2
	exit 1
fi

Version controlling server configuration with GIT

Often I want to version control certain, usually critical, system configuration files. In the past I’ve either set this up on a directory-by-directory basis or not bothered (which results in me creating a lot of superfluous files by doing a ‘cp config_file config_file.bak_`date +%F`’ before editing).

I, with a colleague, have come up with this solution that, while not necessarily perfect, is a lot more manageable than previous alternatives I have used and has the benefit of creating a centralised repository on a given machine as well as not polluting system directories with ‘.svn’ or ‘.git’ directories. It also avoids having to play with nested repositories (i.e. directories with a ‘.git’ dir under another that also has a ‘.git’ dir), which are unlikely if you’re just version controlling /etc but more common if controlling /home/$USER. I think it’s quite neat and keeps the revision control itself away from the core system files.

  1. The first step is to create a suitable directory to host the version control. We won’t actually be storing files here but it does need to be a “normal” repository (i.e. not a bare repository) as it does represent a working copy of the repository.

    mkdir /root/vc # Calling it 'vc' for 'Version Control'
    cd /root/vc

  2. Step two is to initialise our repository:

    git init

  3. Now, and this is the clever bit, we need to configure the repository to use ‘/’ as the base of its working tree (so we end up with a repository, with all its revision-control files, in /root/vc but the files under ‘/’ are what is actually under revision control). I also excluded everything by default (so git does not list everything as uncontrolled and we can cherry-pick the files we actually care about). UPDATE: I’ve since found the config variable ‘status.showUntrackedFiles’ (reading man-pages FTW!), which achieves the same end in a much saner manner.

    git config core.worktree /
    echo '*' >> .git/info/exclude
    git config status.showUntrackedFiles no

…and that’s it. Just use the ‘/root/vc’ directory as a normal git repository, but working on the files under ‘/’. Simple, eh?

There is a drawback, however. Since I have excluded everything, files have to be added to the repository with a ‘-f’ (force) flag:

git add -f /etc/ssh/sshd_config

This also applies when using ‘git add’ to stage modified files; however ‘git commit -a’, which stages modified & deleted files and commits in one step, does not require ‘-f’.
UPDATE: This is no longer an issue when using ‘status.showUntrackedFiles’ to disable showing untracked files by default. There may be other issues with this approach but I’ve not spotted them in my five minutes (!) of testing/experimentation.

Scientific Linux 6 (and, by extension, RHEL6) authentication woes

Since no-one seems willing to fork out the cash for RHEL, management insist on using CentOS as “our skills are with RedHat”, and there is no sign yet of CentOS6, I have been doing some experimenting with Scientific Linux 6 (which is also a RHEL rebuild).

Configuring the new sssd daemon to do AD authentication seemed straightforward enough until I hit an interesting problem. It would appear that, beginning with RHEL6, RedHat has split /etc/pam.d/system-auth into /etc/pam.d/system-auth and /etc/pam.d/password-auth. In and of itself this is not a problem. HOWEVER, I have also discovered that, out of the box, GDM uses the /etc/pam.d/gdm-password stack (which includes password-auth) and gnome-screensaver uses the /etc/pam.d/gnome-screensaver stack (which includes system-auth). The result is that if I configure system-auth only (which is what the RHEL6 deployment guide says to do[0]) then I cannot log in to GDM. If I set up password-auth only (described in the migration guide[1] as being for “remote services”) I can log in to GDM, but once my session is locked with gnome-screensaver (either manually or by the screensaver timeout, which is on and locks by default) I cannot unlock it from the prompt which appears when I click the mouse or touch a key (although I can click “switch user” to get back to GDM, where I can unlock it). Setting both up seems counter-intuitive if I only want to configure local access, given ‘password-auth’ is supposedly for remote services.

[0] http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/chap-SSSD_User_Guide-Setting_Up_SSSD.html
[1] http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/ch07s05.html

even more Python vs Perl performance

Following my last two posts (http://blog.entek.org.uk/?p=106 and http://blog.entek.org.uk/?p=112) I took some profilers to my code.

First up my revised Python implementation:

> python -m cProfile checkmail2
         16225093 function calls in 8.663 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    8.663    8.663 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 UserDict.py:17(__getitem__)
        1    5.906    5.906    8.662    8.662 checkmail2:3(<module>)
        1    0.000    0.000    8.663    8.663 {execfile}
       37    0.000    0.000    0.000    0.000 {method '__enter__' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
       22    0.000    0.000    0.000    0.000 {method 'get' of 'dict' objects}
        1    0.000    0.000    0.000    0.000 {method 'iteritems' of 'dict' objects}
 16224990    2.755    0.000    2.755    0.000 {method 'startswith' of 'str' objects}
       37    0.001    0.000    0.001    0.000 {open}
        1    0.000    0.000    0.000    0.000 {posix.listdir}

Quite clearly there is a huge number (16 million!) of calls to startswith, which is the biggest time-sink outside the main script itself.

Comparing the Perl implementation:

> perl -d:DProf checkmail3
> dprofpp
Total Elapsed Time = 3.426896 Seconds
  User+System Time = 3.401494 Seconds
Exclusive Times
%Time ExclSec CumulS #Calls sec/call Csec/c  Name
 0.12   0.004  0.007      4   0.0010 0.0016  main::BEGIN
 0.03   0.001  0.001      5   0.0002 0.0002  File::Basename::BEGIN
 0.03   0.001  0.001      1   0.0009 0.0009  warnings::BEGIN
 0.03   0.001  0.001     37   0.0000 0.0000  File::Basename::_strip_trailing_sep
 0.03   0.001  0.001     37   0.0000 0.0000  File::Basename::fileparse
 0.00   0.000  0.002     37   0.0000 0.0000  File::Basename::basename
 0.00   0.000  0.000      1   0.0003 0.0003  File::Glob::doglob
 0.00   0.000  0.000      1   0.0001 0.0001  DynaLoader::dl_load_file
 0.00   0.000  0.000      1   0.0001 0.0003  XSLoader::load
 0.00   0.000  0.000      1   0.0001 0.0001  File::Basename::fileparse_set_fstype
 0.00   0.000  0.000      1   0.0001 0.0001  Exporter::import
 0.00   0.000  0.000      2   0.0000 0.0000  warnings::import
 0.00   0.000  0.000      3   0.0000 0.0000  strict::import
 0.00   0.000  0.000      1   0.0000 0.0003  File::Glob::csh_glob
 0.00   0.000  0.000      1   0.0000 0.0000  strict::bits

Ignoring the actual times, which are not directly comparable due to the profiling overheads, we can clearly see Perl benefiting hugely from its built-in regex engine: there are no function calls at all associated with each line check.

I did replace the ‘str.startswith’ implementation of the Python script with a version which used ‘re’ regex objects, but this showed even worse performance:

> time python checkmail4
python checkmail4 8.42s user 0.33s system 99% cpu 8.765 total

Profiling this one, we see the overhead of using ‘re.match’ was about double that of ‘str.startswith’ and, obviously, the number of calls remained the same. On top of this I introduced the additional overhead of two calls to ‘re.compile’ at the start of the script, which the profiler showed incurred a not-insignificant number of function calls of their own:

> python -m cProfile checkmail4
         16225312 function calls in 12.416 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   12.416   12.416 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 UserDict.py:17(__getitem__)
        1    6.662    6.662   12.415   12.415 checkmail4:3(<module>)
        2    0.000    0.000    0.000    0.000 re.py:186(compile)
        2    0.000    0.000    0.000    0.000 re.py:227(_compile)
        2    0.000    0.000    0.000    0.000 sre_compile.py:367(_compile_info)
        2    0.000    0.000    0.000    0.000 sre_compile.py:38(_compile)
        4    0.000    0.000    0.000    0.000 sre_compile.py:480(isstring)
        2    0.000    0.000    0.000    0.000 sre_compile.py:486(_code)
        2    0.000    0.000    0.000    0.000 sre_compile.py:501(compile)
       15    0.000    0.000    0.000    0.000 sre_parse.py:144(append)
        2    0.000    0.000    0.000    0.000 sre_parse.py:146(getwidth)
        2    0.000    0.000    0.000    0.000 sre_parse.py:184(__init__)
       21    0.000    0.000    0.000    0.000 sre_parse.py:188(__next)
        2    0.000    0.000    0.000    0.000 sre_parse.py:201(match)
       19    0.000    0.000    0.000    0.000 sre_parse.py:207(get)
        2    0.000    0.000    0.000    0.000 sre_parse.py:307(_parse_sub)
        2    0.000    0.000    0.000    0.000 sre_parse.py:385(_parse)
        2    0.000    0.000    0.000    0.000 sre_parse.py:669(parse)
        2    0.000    0.000    0.000    0.000 sre_parse.py:73(__init__)
        2    0.000    0.000    0.000    0.000 sre_parse.py:96(__init__)
        2    0.000    0.000    0.000    0.000 {_sre.compile}
 16224990    5.751    0.000    5.751    0.000 {built-in method match}
        1    0.000    0.000   12.416   12.416 {execfile}
        6    0.000    0.000    0.000    0.000 {isinstance}
       44    0.000    0.000    0.000    0.000 {len}
       37    0.000    0.000    0.000    0.000 {method '__enter__' of 'file' objects}
       59    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
       24    0.000    0.000    0.000    0.000 {method 'get' of 'dict' objects}
        2    0.000    0.000    0.000    0.000 {method 'items' of 'dict' objects}
        1    0.000    0.000    0.000    0.000 {method 'iteritems' of 'dict' objects}
        4    0.000    0.000    0.000    0.000 {min}
       37    0.002    0.000    0.002    0.000 {open}
       13    0.000    0.000    0.000    0.000 {ord}
        1    0.000    0.000    0.000    0.000 {posix.listdir}

Quite clearly from the profiler output, each call to either ‘str.startswith’ or ‘re.match’ uses a very small amount of processor time (too small to be output) but the cumulative effect of 16 million calls is where the big slowdown was occurring. To get around this I tried implementing the ‘str.startswith’ version using string slicing (i.e. line[:5] == 'From ' rather than line.startswith('From ')) and the result was dramatic:

> time python checkmail4
python checkmail4 3.86s user 0.31s system 99% cpu 4.186 total

The profiler output for this version shows that the number of function calls is now on a par with the Perl implementation:

> python -m cProfile checkmail4
         110 function calls in 4.311 CPU seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    4.311    4.311 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 UserDict.py:17(__getitem__)
        1    4.308    4.308    4.311    4.311 checkmail4:3(<module>)
        1    0.000    0.000    4.311    4.311 {execfile}
       37    0.000    0.000    0.000    0.000 {method '__enter__' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
       29    0.000    0.000    0.000    0.000 {method 'get' of 'dict' objects}
        1    0.000    0.000    0.000    0.000 {method 'iteritems' of 'dict' objects}
       37    0.003    0.000    0.003    0.000 {open}
        1    0.000    0.000    0.000    0.000 {posix.listdir}

This puts the Python version within 0.6s of the Perl version, which is close enough for me, especially considering this is effectively comparing Perl to Python on Perl’s home turf of text matching.

I think Perl would probably still outperform Python if I wanted to do something more fancy involving regex substitutions, but Python’s performance issues, in this case, seem to be purely down to function-call overheads, which Perl sidesteps by incorporating the regex engine into the core language.
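
For anyone wanting to reproduce the startswith vs slicing difference in isolation, a quick timeit comparison along these lines should show it (a rough sketch; the sample line is made up):

# Compare str.startswith against string slicing for a simple prefix test.
import timeit

setup = "line = 'From someone@example.com Sat Jan  1 00:00:00 2011'"
print timeit.timeit("line.startswith('From ')", setup=setup)
print timeit.timeit("line[:5] == 'From '", setup=setup)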

Perl vs Python speed cont’d

Following my post yesterday I decided to both slightly refine my implementation and see if I could improve the speed of the Python version of my script. All that is really required is a simple search of an mbox file looking for messages with no ‘Status’ header (which is added by mutt when it has seen the message). Both the Perl and Python scripts I was using yesterday were doing far more than necessary, as both (at least partially) parsed the messages, which is not required with the mboxes I have. I have therefore re-implemented the Python version as a simple loop over each line in the mbox files which counts the number of messages with no Status header:

#!/usr/bin/env python

from os import environ, listdir

MAILHOME=environ['HOME'] + '/Mail'

new_mailboxes = {}
for file in listdir(MAILHOME):
	no_status = False
	with open('%s/%s' % (MAILHOME, file)) as f:
		for line in f:
			if line.startswith('From '):
				# New message: count the previous one if it had no Status header.
				if no_status:
					new_mailboxes[file]=new_mailboxes.get(file, 0) + 1
				no_status = True
			elif line.startswith('Status: '):
				no_status = False
	# Loop ended, make sure we count the last message if we need to.
	if no_status:
		new_mailboxes[file]=new_mailboxes.get(file, 0) + 1

for box, count in new_mailboxes.iteritems():
	print "%s (%d)" % (box, count)

This revised script completes in a much more respectable time, substantially quicker than the original Perl script I was comparing my first attempt to:

> time python checkmail2
python checkmail2 5.30s user 0.32s system 99% cpu 5.625 total

Curiosity got the better of me and I implemented the exact same algorithm in Perl and timed that:

#!/usr/bin/env perl

use strict;
use warnings;

use File::Basename;

my %new_mailboxes;
for my $file (glob("$ENV{HOME}/Mail/*")) {
	my $no_status = 0;
	my $basename = basename($file);
	open INPUT, $file;
	while(<INPUT>) {
		if (/^From /) {
			# New message: count the previous one if it had no Status header.
			$new_mailboxes{$basename} += 1 if $no_status;
			$no_status = 1;
		} elsif ( /^Status: / ) {
			$no_status = 0;
		}
	}
	close INPUT;
	$new_mailboxes{$basename} += 1 if $no_status;
}

print $_, ' (', $new_mailboxes{$_}, ")\n" for keys(%new_mailboxes);

> time perl checkmail3
perl checkmail3 2.96s user 0.42s system 97% cpu 3.465 total

Interestingly Perl was still faster by quite some way (the Python version took around 1.6 times as long to run). The question is: is this purely down to the overhead of object-orientated vs procedural code, or is Perl faster at IO and/or pattern matching (although the Python version should have been quicker here as it was not using a full-blown regex engine to match the start of strings)?
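
One way to start narrowing that down would be to time the IO on its own, with no matching at all, against the same mboxes; a minimal sketch (untested):

#!/usr/bin/env python

# Read every line of every mbox without any matching, to isolate the IO cost.
from os import environ, listdir

MAILHOME = environ['HOME'] + '/Mail'

lines = 0
for file in listdir(MAILHOME):
	with open('%s/%s' % (MAILHOME, file)) as f:
		for line in f:
			lines += 1
print lines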

Perl vs Python speed

I needed to write a quick script to find which mailboxes in “~/Mail” had unread messages in them. I decided to knock it up in Python, but the script was not performing very well:

> time ./checkmail
./checkmail 56.17s user 1.86s system 98% cpu 59.181 total

A quick google found a Perl program which did pretty much the same thing (at http://www.perlmonks.org/?node_id=552218 if you’re interested) and which, run unaltered on the exact same files, performed significantly better:

> time perl checkmail2
perl checkmail2 16.66s user 1.27s system 99% cpu 18.043 total

Not only did the Perl version take about 1/3 of the time of the Python implementation, but it also counted the number of unread messages and displayed it, while my simple Python script break’d on the first match to avoid needlessly looping over the rest of the messages. The Perl version manages not to fully decode every message through its use of the Mail::MboxParser library; however, I could not find a way to achieve the same result in a straightforward manner with the standard Python libraries. Indeed, looking at the documented examples in the Python docs, http://docs.python.org/library/mailbox.html#examples, it appears this is the suggested way of doing it (in essence all I need to do is examine the ‘Status’ header, and the example uses a very similar loop to examine just the ‘subject’ header).

My Python script is here:

#!/usr/bin/env python

import mailbox
from os import listdir, environ

MAILHOME=environ['HOME'] + '/Mail'

new_mail_mailboxes = []
for file in listdir(MAILHOME):
	for message in mailbox.mbox(MAILHOME + '/' + file):
		if not message['status']:
			# No Status header means unread; no need to look any further.
			new_mail_mailboxes.append(file)
			break
print "\n".join(new_mail_mailboxes)

The problem with ‘middleware’

Both Python’s WSGI and Perl’s PSGI (and presumably Ruby’s Rack, but I have no experience of that) have a concept of ‘middleware’: a part of a web application which sits between the server interface (WSGI or PSGI) and the application itself. This middleware can act as a filter or manipulate the environment (to use PSGI’s terminology) before the application sees it. This makes it great for implementing features such as authorisation and sessions, and indeed there are pre-built middlewares for both platforms which will do this.

The problem is that there is no standard for what parts of the environment get set by these useful middlewares, which means (for the most part) they cannot be instantly swapped out for an alternative. I think what is needed is a simple definition of the bare-minimum API for a given object (i.e. what Java would term an interface) and a defined location within the environment where the object will be found. Obviously objects could implement additional methods to provide bells and whistles specific to the implementation, which applications can then use at the cost of no longer being able to do a straight swap-out of the middleware.

For example, a ‘session’ object might implement ‘get(key)’ and ‘set(key, value)’ and be found under ‘session’ in the environment hash. A ‘user’ object (as part of a larger authentication middleware) might implement ‘login_id’ and ‘roles’ attributes and be found under ‘auth.user’ in the environment hash.
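
To make that concrete, here is a minimal sketch of a WSGI session middleware conforming to such a standard (the ‘session’ environ key and the get/set method names are just the hypothetical interface proposed above, not any existing convention):

class DictSession(object):
	# Minimal session object exposing only the proposed get/set interface.
	def __init__(self):
		self._data = {}
	def get(self, key):
		return self._data.get(key)
	def set(self, key, value):
		self._data[key] = value

class SessionMiddleware(object):
	# Publishes a session object under the agreed key before calling the app.
	def __init__(self, app):
		self.app = app
		self.session = DictSession()  # a real middleware would persist this per user
	def __call__(self, environ, start_response):
		environ['session'] = self.session
		return self.app(environ, start_response)

Any other middleware exposing the same two methods under ‘session’ could then be swapped in without the application noticing.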

An application developer would then be able to choose between just using the published interface standard or using some of the specific bells and whistles of a middleware. Even with the use of the extra features, the amount of refactoring involved in switching between middlewares would be limited to just where the extra features had been used. It would make switching from a generic authentication middleware to an in-house single sign-on solution very straightforward, for example.

Comments, suggestions?

Python fail

Rather annoyingly the pycrypto website is down, which means easy_install cannot download the code, which means I cannot build my code, which means I cannot deploy the bugfix I’ve just incorporated. For all its failings, at least with CPAN the code is always available from CPAN’s servers so even if the project’s own site is down you can still get hold of the modules you’ve used in your code.

Python--, Perl++

Searching for pycrypto>=1.9
Reading http://cheeseshop.python.org/pypi/pycrypto/
Reading http://cheeseshop.python.org/pypi/pycrypto/2.2
Reading http://www.pycrypto.org/
error: Download error: (104, 'Connection reset by peer')