…and then there were two (posts)

Having survived another day at work, I’ve now gotten round to writing the final few things I missed off this morning’s blog post.

One thing I forgot to mention this morning was that, although MSSQL deleted over 1,000 records from a table via a cascaded delete, the output said “4 rows affected”, as only four rows were deleted from the first table. If a higher number had been reported anywhere in the output, it might have alerted us to the problem earlier than the customer calling support because their site no longer functioned correctly.

Rant aside, since my last blog post (in May; this is just an extension of this morning’s) my Grandfather, who was formerly a Commando and then a coal miner, died. He’d been ill for some time, but we did not expect him to die quite so suddenly. Fortunately he died peacefully, in A&E, where he’d been taken after coughing up some blood at home.

Yesterday Pete wrote about a document on maintainable code he found at work. The document makes some very good points about writing “maintainable code”. However, I would dispute the suggestion that “Every function should be at most 20 lines of code”. The rule where I work is that a function should be the length necessary to perform its given task, no more and no less. Usually this means the function will fall well within the suggested 20-line limit; however, it is not uncommon for a complex function which performs a very specific task (such as manipulating the contents of a particular manufacturer’s input file to fit the database schema) to be 100 or more lines in length. Setting a hard-and-fast limit on the length of a region of code, be it an if block, a function/method, a class, etc., is not, in my opinion, conducive to maintainable code.

Another interesting item I saw noted on Planet Compsoc was this BBC article about Lenovo (who made my wonderful T60) preparing to sell laptops with Linux pre-installed. At the bottom of the article it says “Analysts believe that approximately 6% of computer users run Linux, similar to the numbers choosing Apple Macs”. I find this extremely interesting, as the company I previously worked for in the holidays had a statistics analyser (which I installed) for their web logs, which showed approximately 6% of visitors to the site used Linux. The Mac proportion of visitors was significantly less than that, however, and a full 90% of visitors used Windows XP. Another random fact I found interesting was that use of IE 7 and IE 6 to visit the site was evenly split at 45% each. It makes me wonder how many of those have IE 7 simply because Windows Automatic Updates installed it for them, and how many of the IE 6 users only have that because they never run Automatic Updates.

Finally: at Christmas I undertook the task of re-writing the stock management system I had previously written for my then employer. The re-write was necessary as the system had started out as a very small and simple thing, which had then had bits and pieces botched onto it as and when my boss decided it would be nifty to have feature X (or Y or, more commonly, X, Y and Z. By lunchtime.). The result, as always with projects which develop like this, was a hideous mess which, for some reason, worked. Until it stopped working. And then something would hit the fan and land on my desk.

As a result I decided to dump the hacked-to-death PHP code and re-write it using an MVC framework. I settled on Rails, as it promised great productivity by letting the developer concentrate on writing functionality while the framework worried about the nitty-gritty, such as interfacing with the database. I completely re-wrote, in 3 months, a system which had taken over 2 years to develop, so Rails did deliver on its promises. Since I’ve stuck to the (somewhat enforced) MVC separation of the Rails framework, adding functionality is a doddle, as is maintaining the code. I have, however, found a small flaw in my approach.

The Rails URL scheme operates on the pattern ‘[controller]/[action]/[id]’, where the controller is the name of the controller (duh!), the action is the method within that controller which is being called (and is also the name of the view), and the id is an identifier (intended, for example, for identifying a db record). I am aware this can be hacked somewhat with the Rails configuration, but deviating from the intended path of such frameworks often leads to problems down the line, when the framework developers decide to fundamentally change the framework such that these hacks no longer work as intended. Anyway, back to the URL scheme. This is all fine and dandy when I have a stock management system with a ‘browse’ controller, which has such actions as ‘list’, ‘view’, ‘pdflist’ and so on, and an ‘edit’ controller which (also) has ‘list’, ‘edit’, ‘uploadimages’, ‘uploadpdf’ etc. (I know it looks like the two list actions violate the DRY (Don’t Repeat Yourself) philosophy, but they operate in fundamentally different ways; the browse one only operates on a specific subset of the database, limited, among other things, to just what is in stock.)
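For anyone unfamiliar with Rails, the default scheme boils down to a very simple decomposition of the path. Here’s a little sketch of the mapping (in Python rather than Ruby, purely as an illustration; this is not Rails code, and the fallback behaviour shown is just the gist of the defaults):

```python
# Illustrative sketch only: how a path like 'browse/view/42'
# decomposes under the '[controller]/[action]/[id]' scheme.

def route(path):
    """Split a Rails-style path into (controller, action, id).

    A missing action falls back to a default, and the id is
    optional -- roughly the behaviour described above.
    """
    parts = [p for p in path.strip("/").split("/") if p]
    controller = parts[0] if len(parts) > 0 else None
    action = parts[1] if len(parts) > 1 else "index"
    record_id = parts[2] if len(parts) > 2 else None
    return controller, action, record_id

print(route("browse/view/42"))  # ('browse', 'view', '42')
print(route("edit/list"))       # ('edit', 'list', None)
```

The point is that the first path segment is doing all the namespacing work, which is exactly where my integration problem below comes from.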

My problem is that, although this is fine for a stock management system, I also need to integrate the old parts management system (on the old system this was a HORRIFIC kludge). There are two obvious solutions, neither of which I’m keen on. One is to create a ‘parts’ controller in the existing app, which contains ‘editlist’, ‘viewlist’, ‘edit’, ‘view’, ‘uploadphotos’ etc. This could possibly be extended to move all of the stock stuff into a ‘stock’ controller. I do not like this as (a) it feels too much like bolting the thing on, like the old mess which I’m obviously keen to avoid recreating, and (b) the controllers would then get very large, and the maintainability provided by separating out these systems would vanish. The second alternative is to create a separate Rails app to do the parts management. As I mentioned, I’m trying to integrate these systems, so creating a separate app seems like a bad move towards that end. It would also mean hacking the Rails config to not assume it is at the root URL, and setting up the webserver to rewrite URLs. It is all hassle I’d like to avoid.

I’m now wondering if I should have used Django instead, where a project (or site) is supposed to be a collection of apps; I suspect that, as a result, the integrated stock and parts management system would be a lot easier to realise. I’m now back in the realm of trying to justify, either way, another rewrite of the system. I will add that Rails has given me some major performance headaches, and to achieve anything like acceptable performance I’ve had to re-write portions of my code to avoid the Rails helper functions. I view this as bad, because my code now relies on certain aspects of the Rails framework not changing, whereas the helper functions should (I would hope) be updated to reflect changes made in the future.
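To show why the Django model appeals to me, here’s the idea in miniature (plain Python standing in for Django’s URLconf machinery; the app names and view strings are made up for illustration): each app owns its own URL table, and the project just mounts them under prefixes, so ‘stock’ and ‘parts’ stay self-contained instead of one controller ballooning.

```python
# Sketch of the Django 'project = collection of apps' idea.
# Not real Django code; just the dispatch shape it gives you.

# Each app owns its own action -> view mapping.
stock_urls = {"list": "stock list view", "edit": "stock edit view"}
parts_urls = {"list": "parts list view", "edit": "parts edit view"}

# The project-level URLconf merely mounts each app under a prefix.
project_urls = {"stock": stock_urls, "parts": parts_urls}

def resolve(path):
    """Resolve 'app/action' by delegating to the app's own table,
    the way a project URLconf includes per-app URLconfs."""
    app, _, action = path.partition("/")
    return project_urls[app][action]

print(resolve("stock/list"))  # 'stock list view'
print(resolve("parts/edit"))  # 'parts edit view'
```

Adding a third app would just be another entry in the project table, with no changes to the existing apps, which is precisely the property my two Rails options lack.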

IE is lame

This is an old draft which I never properly wrote up. Since I’m unlikely to find the time in the near future to do it, I’ve decided to just post it as-is. At some point I may edit it to make it a proper post.
The plan:

The bug:

The spec:

Why SVN sucks:

The solution:

More bugs in IE (5):

Language, Language, Language

I’ve yet to settle on a final language to write the core of my Bot in. This is both an advantage and a hindrance. It means that I’ve not committed myself to a language which at a later stage proves to be insufficient, or extremely difficult to use, for the intended task. Conversely it means that, at the moment, I cannot write any final code, and I’m rewriting portions of the same code in various languages while I try out the options.

Initially I looked at Python, as the inbuilt structures (lists, dictionaries etc.) lend themselves to easy storage of socket information and trivial implementation of buffers. More recently I’ve been having a go at rewriting it in C++, which is proving to be much harder than it first looked.
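To give a flavour of what I mean about Python making this trivial, here’s a minimal sketch of the dictionary-per-connection buffering (the names and the newline-delimited framing are my own assumptions for the example, not the Bot’s actual protocol):

```python
# Sketch: per-connection receive buffers kept in a dict, with
# complete newline-terminated messages peeled off as data arrives.
# The connection keys here stand in for real socket objects.

buffers = {}  # maps a connection key to its pending bytes

def feed(conn, data):
    """Append received data to conn's buffer and return any
    complete newline-terminated messages."""
    buffers[conn] = buffers.get(conn, b"") + data
    *lines, buffers[conn] = buffers[conn].split(b"\n")
    return lines

# Data arriving in awkward chunks still yields whole messages:
print(feed("client-1", b"PING :ser"))         # []
print(feed("client-1", b"ver\nNOTICE hi\n"))  # [b'PING :server', b'NOTICE hi']
```

Doing the equivalent in C++ means hand-rolling the map, the growable buffers and the splitting, which is a fair chunk of why the rewrite is harder than it first looked.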

While randomly browsing the web, I began looking at various languages’ benchmarks on the great language shootout. I observed that (as you’d expect) Python, Perl and $other_interpreted_language < Java, Mono (C#) and $semi_compiled_language (by which I mean it’s not compiled to native code, but rather runs through a VM or similar) < C, C++ and $compiled_language. No surprises there. The core (and security module) of the Bot are going to spend most of their time pushing data (strings) between the various sockets. Ideally they want to do this as quickly as possible to keep the overall performance of the Bot as good as possible (hence why I was looking at C++). While I was (virtually) pitting various languages against each other I found that D is actually comparable, performance-wise, to C and C++. This impressed me; I was expecting it to be slower, more comparable to C# and Java, due to the garbage collection and other features it shares with them.

After browsing the D website and other community sites linked from there, I found that D aims to be, essentially, C++ with garbage collection and dynamic memory allocation. The Wiki4D has many more useful links to community websites. This D tutorials page also looks like a good place to start with learning the language.

To quote its website, “Mango is a collection of D packages with an orientation toward server-side programming”. It includes, among a lot of other stuff, a socket class. This could be very useful, at least initially while poking at the D language, even if I drop D in favour of something else or end up writing my own socket library.

Well, at least I’ve got something else to procrastinate over for a few more days ;)

Shiny hardware and IBM Recovery Disks

My new hardware turned up yesterday. I’ve installed the RAM and case fans in my PC, although one of the fans isn’t quite fitted properly – it appears to be the only (normal) case fan I have ever seen which is not reversible (i.e. designed to be mounted either way round, depending on whether I want to suck air in or blow it out). Still, some blistered fingers and brute force later, it’s in and working.

Also in my hardware order I bought a 512MB USB pen drive. Although I own a 64MB Creative Mu-vo, it just doesn’t have the capacity for anything in addition to music (it holds just under a CD’s worth of 128Kbps MP3s). Having bought this drive, and copied all of the lecture notes for the upcoming exams onto it, I decided I would have a go at putting some useful apps onto it.

Last night, after returning from the LUG, I also copied the contents of the 5 IBM recovery discs onto a hard drive in my desktop. This morning I burnt the files to a DVD, and once I’ve backed up the current contents of my laptop to an external hard disk I’m going to test it. If it works, it’ll be a fantastic improvement on having to swap the CDs backwards and forwards as the recovery program demands.

In the meantime, revision is called for, with tomorrow’s exam being dangerously close.

Planet 3YP

Chris Lamb has set up a website, called Planet 3YP, which is “an aggregate feed of University of Warwick Computer Science third-year project blogs”. It looks good, although it only has 2 projects on there at the moment. Hopefully some more people will sign up (I’ve just sent off my project’s details to him) and we will get some interesting reading about each other’s projects.

The shiny hardware I ordered hasn’t been delivered yet today but, with yesterday having been a bank holiday, it may not come until tomorrow. I hope it arrives before I return to Coventry tomorrow evening, even though I probably won’t have an opportunity to actually install it all until after the exams.

Rofl, revision and stuff.

This entry from Dave-Miller.com is one of the funniest things I’ve read in a while. It quite literally had me laughing out loud:

Important announcement from UK Department of Transport

There is concern over the current driving standards in England, so the Department of Transport have devised a scheme to identify poor and dangerous drivers.

This system will allow all road users to recognise the potentially hazardous and dangerous ones, or those with limited driving skills.

From the middle of May 2006 all those drivers who are found to be a potential hazard to all other road users will be issued with a white flag, bearing a red cross.

This flag clearly indicates their inability to drive properly.

These flags must be clipped to a door of the car and be visible to all other drivers and pedestrians.

Those drivers who have shown particularly poor driving skills will have to display two flags:

One on each side of the car to indicate an even greater lack of skill and limited driving intelligence.

Please circulate this to as many other motorists as you can, in order that drivers and pedestrians will be aware of the meaning of these flags.

Thank you for your co-operation.

Department of Transport.

Back to revision. The first exam we have is ‘Introduction to Software Engineering’ on Friday. A while ago I downloaded all of the ISE notes onto my laptop so I could revise without an internet connection (say, at the RAF’s formation-flying pig display…). At the time, one of the lecture notes was unavailable due to the familiar 404 error. I checked again today, and it’s /still/ a 404. How the hell are we meant to revise if the (examinable) lecture notes are returning 404s?!?

Ututo, SuSE 10.1 & Debian and the evil KDE

Last weekend was a Warwick CompSoc LAN party weekend, which for me meant the customary ‘trying the latest Linux releases’ as well as playing the odd game. (Incidentally, Fred introduced me to TO:Crossfire, which is a great mod for UT2004 and looks to be better than Tactical Ops for UT.)

A friend of mine (and shameless free-as-in-speech-software advocate and HURD user), Tim, asked me to try Ututo for him, so I tried to install it on my laptop first. For some reason it failed to boot past the ‘ISOLINUX’ stage. Suspecting a possibly duff CD-R, I used the downloaded ISO image to install it in a virtual machine. It successfully installed in the VM, but failed to boot. Declaring Ututo an unmitigated failure, I moved on…

Next on the list was SuSE 10.1, which had been released on Thursday. SuSE 10.1 seems a lot more stable than 9.3 and 10.0 were; I only had one KDE application crash on me, whereas on 9.3 and 10.0 this was a regular occurrence. The knetworkmanager application is a fantastic addition to the KDE desktop, and automatically picked up the university access point which was within range at the LAN. For Linux newbies, and people who want a ‘just works’ distro, SuSE 10.1 has to be one of the best. Shame I don’t fall into one of those groups ;)

…so back to Debian. After re-installing it I decided to install KDE, since it is a nice desktop (despite the fact that, by default, konqueror, konsole and konversation all use different shortcuts for switching between tabs: Alt+Left/Right, Shift+Left/Right and Ctrl+,/. iirc). For a distro which has a reputation for being Gnome-oriented when it comes to a full desktop environment, its KDE packages are great. Everything just works as expected, and I’m very pleased with it. I wonder how long it’ll last until I decide to go back to something more minimal (e.g. evilwm).