Scientific Linux 6 (and, by extension, RHEL6) authentication woes

Since no-one seems willing to fork out the cash for RHEL, management insist on using CentOS as “our skills are with RedHat”, and there is no sign yet of CentOS 6, I have been experimenting with Scientific Linux 6 (which, like CentOS, is a RHEL rebuild).

Configuring the new sssd daemon to do AD authentication seemed straightforward enough until I hit an interesting problem. It appears that, beginning with RHEL6, RedHat has split /etc/pam.d/system-auth into /etc/pam.d/system-auth and /etc/pam.d/password-auth. In and of itself this is not a problem. HOWEVER, I have also discovered that, out of the box, GDM uses the /etc/pam.d/gdm-password stack (which includes password-auth) while gnome-screensaver uses the /etc/pam.d/gnome-screensaver stack (which includes system-auth). The result is that if I configure system-auth only (which is what the RHEL6 deployment guide says to do[0]) I cannot log in to GDM. If I set up password-auth only (described in the migration guide[1] as being for “remote services”) I can log in to GDM, but once my session is locked with gnome-screensaver (either manually or by the screensaver timeout, which is on and locks by default) I cannot unlock it from the prompt which appears when I click the mouse or touch a key (although I can click “switch user” to get back to GDM, where I can unlock it). Setting both up seems counter-intuitive if I only want to configure local access and ‘password-auth’ is for remote services.

[0] http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/chap-SSSD_User_Guide-Setting_Up_SSSD.html
[1] http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Migration_Planning_Guide/ch07s05.html
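For reference, the sssd hook-in itself looks the same in both stacks. Here is a minimal sketch of the pam_sss lines authconfig adds to each file (written from memory, so treat the exact options as assumptions rather than gospel):

    auth        sufficient    pam_sss.so use_first_pass
    account     [default=bad success=ok user_unknown=ignore] pam_sss.so
    password    sufficient    pam_sss.so use_authtok
    session     optional      pam_sss.so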

Python fail

Rather annoyingly the pycrypto website is down, which means easy_install cannot download the code, which means I cannot build my code, which means I cannot deploy the bugfix I’ve just incorporated. For all its failings, at least with CPAN the code is always available from CPAN’s own servers, so even if a project’s site is down you can still get hold of the modules you’ve used in your code.

Python–, Perl++

Searching for pycrypto>=1.9
Reading http://cheeseshop.python.org/pypi/pycrypto/
Reading http://cheeseshop.python.org/pypi/pycrypto/2.2
Reading http://www.pycrypto.org/
error: Download error: (104, 'Connection reset by peer')
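One mitigation (a sketch, assuming you keep copies of the source tarballs you depend on in a local directory – the path here is made up) is to point easy_install at that local cache, so builds no longer depend on upstream sites being up:

    # Use a local directory of previously downloaded sdists as the
    # download source instead of the (currently dead) project site:
    easy_install --find-links=file:///var/cache/pydist 'pycrypto>=1.9'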

The UK Government strikes again

Since my previous rant on the UK Government’s inability to understand how technology works, it would appear that their understanding has still not advanced.

Apparently they are going to force ISPs to record the time, to and from details of all emails. Aside from failing to see how this will possibly prevent terrorism, one has to ask: “what about those of us who do not use ISPs to send email?” Am I going to be marked as a possible terrorist simply because I run my own mail server, rather than use my [parent’s] ISP’s?

Also, what information are they using as the from and to? If they use the IP addresses of the sender and receiver then they will neither be able to readily determine the actual identity of either party, nor be able to record the receiver’s IP address until the mail is collected from the mail server by POP, IMAP or webmail. If, alternatively, they record the from/to email addresses then the information will be useless because of how trivial it is to forge from addresses (as anyone who has received spam claiming to be from their bank can testify). If they record the IP of the sender and the email address of the receiver (probably the most sensible combination) then they will still be unable to easily determine who sent the email, since a single IP may have many computers behind it (thanks to NAT routers) and, especially with ISPs which dynamically allocate IPs, addresses are constantly being re-allocated.
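To illustrate how little the “from” information means, here is a sketch in Python (every host and address below is made up) of sending a mail whose envelope sender and From: header are both simply whatever the sender chooses to type – plain SMTP verifies neither:

    # Illustrative only: all names here are hypothetical.
    import smtplib

    message = (
        "From: Your Bank <alerts@bank.example>\r\n"
        "To: victim@example.com\r\n"
        "Subject: Important notice\r\n"
        "\r\n"
        "The From: header above is whatever the sender typed.\r\n"
    )

    server = smtplib.SMTP("mail.example.com")   # hypothetical relay
    server.sendmail("anyone@anywhere.example",  # forged envelope sender
                    ["victim@example.com"], message)
    server.quit()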

Yet another brilliant, and useless, idea from our illustrious leaders.
</rant>

Vista has got to go.

I’ve finally reached the end of my tether with Vista and it has got to go. Why? Not because of UAC, the constant RAM usage of 900MB with nothing else running, the rebooting without prompting me to save whenever Windows Update feels like it, or the fact that a third of my PC games won’t run on it. It’s going because I played two games of Minesweeper last night and both times Minesweeper crashed (“Minesweeper is not responding”) before I got to the end of the game, in response to me doing nothing more than right-clicking on a square to mark it as a mine (and yes, I did let it send the crash reports off to MS).

On a not entirely unrelated note, I spent Monday afternoon slipstreaming SP3 and last Thursday’s emergency hotfix into an OEM XP Pro CD using nLite (having discovered, after taking my boss’s laptop home to reinstall it, that I didn’t have an OEM disk at home). It was surprisingly easy, and having a CD into which I only have to type the product key and the owner information to get a properly localised British XP install was a very nice experience. One thing I don’t understand is why Microsoft cannot supply localised install disks in the first place. I appreciate that there would be additional cost in producing the different disks, but Microsoft is large enough, and should be profitable enough, to be able to do that. Failing that, they could always re-write the installer so that once I tell it I’m in the UK it automatically sets the keyboard, timezone and language preferences (like most Linux distributions do), rather than me having to change them in 5 different places.

Oh, incidentally, I am still alive ;) .

Why it is technologically impossible to block modern filesharing.

Yet again, our country’s government has demonstrated a complete inability to grasp the fundamentals of a technology-related issue. I’m talking about the recent news story that the British government plans to introduce legal sanctions against ISPs who do not take “concrete steps to curb illegal downloads” (BBC News, 22 Feb 08). In order to avoid these sanctions, what could the ISPs do? Here is a list of a few suggestions, and the reasons why each is completely ineffective:

  • Packet sniffing – This has two major failings:
    1. Any kind of packet sniffing on the amount of data ISPs are constantly shuffling around is going to require some hefty (and expensive) equipment.
    2. It will be instantly circumvented by the introduction of encryption in the filesharing protocols (in fact, according to Wikipedia, BitTorrent already uses encryption to circumvent some ISPs’ throttling).
  • Block filesharing ports – This is a non-starter as most filesharing applications seem to use a range of ports (see the sketch after this list). If the ISPs start down this route, the filesharers will simply move to different ports (port 80, anyone?). Sooner or later the ISPs will be blocking higher-numbered ports which people and businesses care about – blocking all ports above 2000, for example, would mean that I could not access my VPS’ admin console, my place of work could not update their website and I could not access my svn repository, preventing me from doing my job.
  • Block access to websites which facilitate filesharing – This is the simplest (and perhaps least effective!) method of trying to control filesharing. This month, the Danish authorities ordered one ISP to block access to a website called “The Pirate Bay”. According to one source this action resulted in a 12% rise in traffic to the site from Denmark. Clearly censoring the web in this way is not going to work (as well as being of questionable legality – anyone remember free speech?).
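On the port-blocking point, here is a sketch of the sort of rule an ISP might deploy on a Linux router (the 6881–6889 range is BitTorrent’s traditional default; the chain and direction are assumptions):

    # Drop traffic to BitTorrent's traditional default port range.
    # Trivially defeated by a client configured to listen on port 80,
    # or any other port the ISP cannot afford to block.
    iptables -A FORWARD -p tcp --dport 6881:6889 -j DROP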

Even if the ISPs do find an effective means to block the current popular filesharing software, the users will just start using something else, potentially a brand-new protocol, and this whole saga will start again. And what of the people who use filesharing technology to distribute legitimate files? I have used BitTorrent software to download DVD images of some Linux distributions, perfectly legally (they are free to distribute and re-distribute), as the distributions in question do not, and cannot, provide any other means of downloading such large images (usually in excess of 4GB). How will the ISPs be able to block illegal sharing and still protect the interests of organisations which legitimately provide software for free?

Clearly the corporations which are pushing the government for this action would be better off looking at themselves, and trying to understand why people seem to prefer piracy to the legitimate routes of acquiring products. One BBC article (whose link I have misplaced, sorry) sums it up quite neatly, saying that the quality of pirated downloads is superior to that of purchased downloads (aside: the poor quality of downloaded music is why I still buy CDs and refuse to buy from the likes of iTunes) and that one of the major reasons piracy is so prevalent in the UK is the huge gap between the showing of headline programmes in the US and in the UK.

/rant.

SELinux

One of my colleagues gave me a VMware image to use to test authenticating Linux (CentOS in this case) against Active Directory. Unfortunately the image in question is about 10GB and, after the existing images on the machine, there was not enough space for it in /var (a 17GB partition on a 20GB disk). As I could not find any more space on the existing drive I clearly needed to add another disk to the machine. Three dead disks later I finally found a hard disk which no-one was using (250GB! – effectively winning the hard-drive lottery). Now just to move /var to the new disk.

I partitioned and formatted an 80GB partition, mounted it and copied the existing contents of /var across. One edit to fstab later, I rebooted. The disk mounted fine, but various services refused to initialise, giving “permission denied” errors on /var. I checked the permissions against the old /var and they appeared to be identical. Some head-scratching later I decided to go and ask the advice of one of my colleagues. He was equally bemused, but suggested that I tar up the old /var and untar it over the new partition in case the copy had not preserved the permissions (even though it had been told to, and they appeared to be correct). I did this; it had no effect. When I returned to my colleague’s office, another colleague was talking to the first, and the first suggested that the second take a look. He had a quick glance at the problem and asked if SELinux was enabled. It was. One quick `restorecon -R /var` later, everything worked. Colleague #2 then treated us to a rant about how Fedora and RHEL now ship with SELinux in enforcing mode by default, whereas it used to just warn by default – which was better in a production environment, where it needs to run in warning-only mode for a while so you can check that nothing legitimate is being blocked. Still, it is all good fun.
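For the record, the diagnosis and fix look something like this (/mnt/newvar is a stand-in for wherever the new partition was mounted; newer SELinux-aware versions of GNU cp can apparently also preserve contexts at copy time via --preserve=context, though I have not checked which coreutils versions support that):

    # Compare the security contexts of the old and new copies:
    ls -Z /var /mnt/newvar

    # Relabel the new /var according to the system's SELinux policy:
    restorecon -R /var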

Windows FTS

Windows just helpfully rebooted itself (in order to install updates) because I did not spot that the “Rebooting in 5 mins…” dialogue had appeared in the background. The last time it showed (10 minutes before) I only caught it 3 seconds before it restarted the system. I find a number of things wrong with this:

1. A dialogue which will cause something to happen unless immediate action is taken to prevent it (a bad idea for a dialogue in the first place) should draw as much attention to itself as is (sensibly) possible. Being always-on-top would be a good start, as would flashing the taskbar. That way it could not be ignored, and the user would have to dismiss the dialogue by selecting the “reboot now” or “postpone” button. (Note I said “on top”, not “focused” – focus stealing is evil, and even a critical dialogue like this should not practise that method of grabbing the user’s attention.)

2. Rebooting the system should require elevated privileges (in my opinion – and if Windows Update is already running with elevated privileges, why am I not prompted to allow it to have them? I am when most Linux distros wish to install updates). What is the point in preventing programs from installing software, altering critical system files, etc. without my being harassed by UAC, if a process can reboot the system on a whim?

3. For the love of bob, can IE please prompt me to save the tabs when a reboot is being forced upon me, in the same way as it does when I click on the big red ‘X’ in the top right? I not only lost the tabs I was still reading through, but also the contents of my shopping basket on an e-commerce site. As I now do not have the time to rebuild the contents of the basket (due to venting my frustration at my blog, which is far more fun ;) ) the site in question has lost a sale (for a few pounds shy of £100), thanks to Microsoft.
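For what it’s worth, there is a well-known policy setting which tells Automatic Updates not to auto-restart while anyone is logged on. A sketch, to be run from an elevated prompt – and, as ever with the registry, at your own risk:

    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f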

On a completely different note: I am currently debating which UI convention (specifically related to button placement in dialogues, at the moment) to follow for a new web app I am thinking about developing in my own time. The choice boils down to:

1. Follow MS Windows’ convention, which is likely to be most familiar to the user

or

2. Follow the GNOME-style convention, which makes logical sense and (following the button-labelling guidelines, as well as placement) significantly reduces the likelihood of dialogues in which it is ambiguous which button will perform which action.

The fact that this is a web app means that following a non-MS convention may be more easily accepted by unskilled (strictly in the sense of computer use) workers than if it were a stand-alone app designed to be used within a Windows environment. On the flip-side, following MS’ convention would probably flatten the learning curve, due to users’ existing familiarity with Windows-style dialogues.

…and then there were two (posts)

Having survived another day at work, I’ve now gotten round to writing the final few things I missed off this morning’s blog post.

One thing I forgot to mention this morning was that, although MSSQL deleted over 1,000 records from a table via a cascaded delete, the output said “4 rows affected”, as only four were deleted from the first table. If a higher number had been reported anywhere in the output, it might have alerted us to the problem earlier than the customer calling support because their site no longer functioned correctly.
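To illustrate (the table and column names here are made up): the count reported is only for the table named in the DELETE statement, not for anything removed by the cascade:

    -- Hypothetical schema: each order has many items, with a cascading FK.
    ALTER TABLE OrderItems
      ADD CONSTRAINT FK_OrderItems_Orders
      FOREIGN KEY (OrderID) REFERENCES Orders(OrderID)
      ON DELETE CASCADE;

    DELETE FROM Orders WHERE CustomerID = 42;
    -- Reports "(4 row(s) affected)" even if the cascade silently removed
    -- 1,000+ rows from OrderItems.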

Rant aside: since my last blog post (in May – this is just an extension of this morning’s), my Grandfather, who was formerly a Commando and then a coal miner, died. He’d been ill for some time, but we did not expect him to die quite so suddenly. Fortunately he died peacefully, in A&E, where he’d been taken after coughing up some blood at home.

Yesterday Pete wrote about a document on maintainable code he found at work. The document makes some very good points about writing “maintainable code”. However, I would dispute the suggestion that “every function should be at most 20 lines of code”. The rule where I work is that a function should be the length necessary to perform its given task, no more and no less. Usually this means that the function will fall well within the suggested 20-line limit, but it is not uncommon for a complex function which performs a very specific task (such as manipulating the contents of a particular manufacturer’s input file to fit the database schema) to be 100 or more lines long. Setting a hard-and-fast limit on the length of a region of code – be it an if block, a function/method, a class, etc. – is not, in my opinion, conducive to maintainable code.

Another interesting item I saw noted on Planet Compsoc was this BBC article about Lenovo (who made my wonderful T60) preparing to sell laptops with Linux pre-installed. At the bottom of the article it says “Analysts believe that approximately 6% of computer users run Linux, similar to the numbers choosing Apple Macs”. I find this extremely interesting, as the company I previously worked for in the holidays had a statistics analyser for their web logs (which I installed) which showed that approximately 6% of visitors to their site used Linux. The Mac share of visitors was significantly smaller, however, and a full 90% of visitors used Windows XP. Another random fact I found interesting was that IE7 and IE6 were evenly split at 45% each among the site’s visitors. It makes me wonder how many have IE7 simply because Windows Automatic Updates installed it for them, and how many of the IE6 users only have that because they never run Automatic Updates.

Finally: at Christmas I undertook the task of re-writing the stock management system I had previously written for my then employer. The re-write was necessary as the system had started out as a very small and simple thing, which had then had bits and pieces botched onto it as and when my boss decided that it would be nifty to have feature X (or Y or, more commonly, X, Y and Z. By lunchtime.). The result, as always with projects which develop like this, was a hideous mess which, for some reason, worked. Until it stopped working. And then something would hit the fan and land on my desk.

As a result I decided to dump the hacked-to-death PHP code and re-write the system using an MVC framework. I settled on Rails, as it promised great productivity by letting the developer concentrate on writing functionality while the framework worries about the nitty-gritty, such as interfacing with the database. I completely re-wrote, in 3 months, a system which had taken over 2 years to develop, so Rails did deliver on its promises. Since I stuck to the (somewhat enforced) MVC separation of the Rails framework, adding functionality is a doddle, as is maintaining the code. I have, however, found a small flaw in my approach.

The Rails URL scheme operates on the pattern ‘[controller]/[action]/[id]’, where controller is the name of the controller (duh!), action is the method within that controller which is being called (and is also the name of the view), and id is an identifier (intended for identifying a db record, for example). I am aware this can be hacked somewhat via the Rails configuration, but deviating from the intended path of such frameworks often leads to problems down the line, when the framework developers decide to fundamentally change the framework such that these hacks no longer work as intended. Anyway, back to the URL scheme. This is all fine and dandy when I have a stock management system with a ‘browse’ controller, which has actions such as ‘list’, ‘view’, ‘pdflist’ and so on, and an ‘edit’ controller which (also) has ‘list’, ‘edit’, ‘uploadimages’, ‘uploadpdf’, etc. (I know it looks like the two list actions violate the DRY (Don’t Repeat Yourself) philosophy, but they operate in fundamentally different ways; the browse one only operates on a specific subset of the database, limited, among other things, to what is in stock.)

My problem is that, although this is fine for a stock management system, I also need to integrate the old parts management system as well (on the old system this was a HORRIFIC kludge). There are two obvious solutions, neither of which I’m keen on. One is to create a ‘parts’ controller in the existing app, containing ‘editlist’, ‘viewlist’, ‘edit’, ‘view’, ‘uploadphotos’, etc. This could possibly be extended by moving all of the stock stuff into a ‘stock’ controller. I do not like this because a) it feels too much like bolting the thing on, like the old mess which I’m obviously keen to avoid recreating, and b) the controllers would then get very large, and the maintainability gained by separating out these systems would vanish. The second alternative is to create a separate Rails app to do the parts management. As I mentioned, I’m trying to integrate these systems, so creating a separate app seems like a step away from that goal. It would also mean hacking the Rails config to not assume the app is at the root URL, and setting up the webserver to rewrite URLs. It is all hassle I’d like to avoid.

I’m now wondering if I should have used Django instead, where a project (or site) is supposed to be a collection of apps; I suspect that, as a result, the integrated stock and parts management system would be a lot easier to realise. So I’m back in the realm of trying to justify, either way, another rewrite of the system. I will add that Rails has given me some major performance headaches, and I’ve had to re-write portions of my code to avoid the Rails helper functions in order to achieve something approaching acceptable performance. I view this as bad, as my code now relies on certain aspects of the Rails framework not changing, whereas the helper functions should (I would hope) be updated to reflect any changes made to the framework in the future.
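For comparison, a sketch of how the two systems might sit side by side as apps in a single Django project (the project and app names are made up, and this uses the Django 0.96-era URL syntax):

    # urls.py at the project root: each app keeps its own views and its
    # own urls.py, so 'stock' and 'parts' stay cleanly separated.
    from django.conf.urls.defaults import *

    urlpatterns = patterns('',
        (r'^stock/', include('stockmanager.stock.urls')),
        (r'^parts/', include('stockmanager.parts.urls')),
    )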

People suck…

About two months ago a coworker of mine suggested that I implement a Palm-based data entry program for our stock database system (which I wrote). As I had no experience at all of writing applications for anything more (physically) portable than a laptop, I was not overly keen on the idea, and a few hours of me being less than enthusiastic seemed to have kept him quiet. At least until a few weeks later, when I was presented with a brand new Palm (the company’s, of course! – and now in the less-than-safe hands of one of my other coworkers) and told to write said application.

Last week I had finished implementing a storage and retrieval system and was just starting work on the facility to edit stored data (a fairly trivial task once it was possible to save a new record and view existing ones). The coworker who initially requested the system duly demanded a progress report, which I gave. I was promptly told that editing existing data was not needed, so I deployed the application. Fast forward to yesterday, when another coworker (the one who actually uses the shiny new Palm, and my app) said: “You know what’d be really useful? The facility to store part of the data and to come back and edit it and fill in the blanks later, without returning to the office to use the PC-based frontend.” (or words to that effect). People suck, in this case because they don’t know what they want (or, rather, they think they know what they want, and then demand that you provide what you thought they wanted after they told you they did not want it (still with me?)).

On a completely unrelated topic, Vista is still going strong on my laptop, with only 2 major gripes at the moment. The first is that it uses well over 1.5GB of memory (of which less than half seems to be accounted for by Task Manager’s process list – and yes, that is running with administrator rights; my user alone seems to be using only ~200MB, although IE7 doubles that when running), which means doing anything (from loading an application to compiling a test build of a program) involves waiting about 3 minutes for Vista to swap enough stuff to disk (the laptop has 1GB of physical RAM) to perform the task it was asked to do. The second is that I cannot seem to lay my hands on a decent free archiver which works with Vista: my usual choice (IZArc) has major issues, as do 7-Zip and several other ones I’d never heard of before but tried. At the moment I’m using the WinRAR trial, and hopefully IZArc’s issues will be resolved before the trial expires. I have not yet had a chance to play about with getting WMP11 or MPC to play my music collection, due to work, coursework, a sister in hospital and the other bits and pieces I have to do to survive and pass my degree. I will probably have to activate Vista soon too; hopefully it will require less effort than the 5 calls to Microsoft it took to activate (pre-installed!) XP on my old laptop.