Accessing Debian Sid software from stable

So, I needed to access a file which had been created on my old Mac laptop using a newer version of an open-source application (installed via Homebrew on the Mac) than was currently packaged in Debian stable.

To my mind, I had 3 obvious choices:

  1. Download the source, build it by hand and use it
  2. Download the Debian source package, rebuild it on stable, install and use it
  3. Create a Debian unstable (Sid) chroot, install it there and use it

I decided to go with option 3, which has a number of advantages over the other two:

  • apt/the developers have already handled any dependencies (and version differences) needed by the new version
  • I don’t pollute my root filesystem with a version other than the one packaged and tested in stable (option 2)
  • I don’t have different versions of the same software installed on the path, in /usr and /usr/local (option 1)
  • If this were more than a one-off I could use apt in the chroot to track and keep the software updated with the current version in Debian unstable
  • I can install other software in the chroot, once it’s set up, direct from the repository

Alternatives I didn’t look at were installing unstable in a virtual machine or a container. I need this for a quick and dirty one-time task (install the new version, convert the file to the old format, throw away the chroot and use the version installed with stable from now on), so either of these would be more effort than required (writing this blog post took longer than the actual task, below!).

To get this working, firstly we need debootstrap installed:
apt-get install debootstrap

Make a directory for the chroot:
mkdir /tmp/sid-chroot

Install Debian into the chroot:
debootstrap unstable /tmp/sid-chroot http://ftp.uk.debian.org/debian/

Change into the chroot:
chroot /tmp/sid-chroot

Update software:
apt-get update

Finally, install and use the new version of the software within the chroot.

This was a quick-and-dirty solution to a temporary problem (once opened and saved in the older format, I can use my file with the old version).

The Debian wiki recommends making these configuration changes to a chroot, which I’ve not bothered to do (as it was going to last all of 5 minutes):

  1. Create a /usr/sbin/policy-rc.d file IN THE CHROOT so that dpkg won’t start daemons unless desired. This example prevents all daemons from being started in the chroot.

    cat > ./usr/sbin/policy-rc.d <<EOF
    #!/bin/sh
    exit 101
    EOF
    chmod a+x ./usr/sbin/policy-rc.d
  2. The ischroot command is buggy and does not detect that it is running in a chroot (685034). Several packages depend upon ischroot for determining correct behavior in a chroot and will operate incorrectly during upgrades if it is not fixed. The easiest way to fix it is to replace ischroot with the /bin/true command.

    dpkg-divert --divert /usr/bin/ischroot.debianutils --rename /usr/bin/ischroot
    ln -s /bin/true /usr/bin/ischroot

A more complete chroot solution would involve the use of schroot to manage it, which I’ve done before to get an old ruby-on-rails application working on newer versions of Debian.
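For reference, a minimal schroot.conf entry for a chroot like the one above might look something like this (the section name and users line are illustrative, and depending on the schroot version the path key is `directory` or `location`):

```ini
[sid]
description=Debian unstable (sid) chroot
directory=/tmp/sid-chroot
users=myuser
```

With that in place, `schroot -c sid` drops you into the chroot without needing root every time.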

Migrating from fastcgi to cgi

I’ve just migrated all of the sites on my VPS (including this blog) from running on a fastcgi backend to a cgi backend. Why, you may ask. Well, although fastcgi is substantially more responsive than plain old cgi (since it keeps the processes running between requests, so the process start-up and tear-down times are removed), it consumes much more memory (due to keeping those processes around). Nowadays this is not normally a problem, but on a virtual machine with only 128MB available and no swap, memory usage becomes a big issue.

By moving from fastcgi to “normal” cgi and tweaking my mysql config I have increased the free memory when the machine is idle from 4MB to 80MB, and now have lots of headroom if a single process (e.g. DenyHosts, which is idling at 20MB) decides it wants loads of RAM before the Linux low-memory reaper starts killing processes.
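The switch itself is just a web-server configuration change. As a sketch (assuming lighttpd, which I use for my sites, and a hypothetical PHP handler; the paths are illustrative, not my actual config):

```
# before: fastcgi keeps worker processes resident between requests
fastcgi.server = ( ".php" => ((
    "bin-path" => "/usr/bin/php-cgi",
    "socket"   => "/tmp/php.sock"
)))

# after: plain cgi spawns a fresh process per request -- slower,
# but nothing sits in memory while the server is idle
cgi.assign = ( ".php" => "/usr/bin/php-cgi" )
```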

As a slight aside, the biggest memory hogs on my box are Python programs (DenyHosts [~20M] and rss2email [~45M]). I don’t know if this is a fault with Python or with the way the scripts are written (or both; not being intimately familiar with Python, I don’t know whether it encourages conservative use of memory or not).

Coping when languages change their API between minor versions

I have a Rails application (yes, I know, I’ve been regretting it for some time) which I wrote over two years ago using the (then) stable Rails version, 1.2.3, and whatever Ruby version was around at the time (Debian Sarge was the OS of choice for the server). Then Debian Etch was released, so I dist-upgraded to that, including the new version of Ruby (1.8.5), without any major headaches. Now Lenny is the stable version, but the version of Rails I used originally does not work with the current version of Ruby (1.8.7) because, apparently, `[]` is no longer a valid method for `Enumerable::Enumerator`. This error is thrown in the Rails libraries themselves, not my code.

There are two obvious solutions: upgrade the version of Rails (which involves rewriting large portions of my code due to API changes in Rails; Rails is now at v2.3 (http://rubyonrails.org/)), or stick with the old version of Rails and install an older version of Ruby.

I did the latter. Originally I did this by <cringe>holding back the version of Ruby, and its dependencies, when doing the dist-upgrade from Etch to Lenny</cringe>. (I apologise for the kittens that were inevitably killed by me doing this!) This did, despite the horribleness (is that a word?) of the method, work.

Today I am installing a new server. Not only am I installing Lenny, which makes manually fetching the old versions a pain, but the new box is “amd64” (it’s an Intel, actually, so x86_64 is more accurate, but Debian refers to it as amd64), so I can’t just steal the packages from the cache on the old box. Thankfully all this means that I have been forced to install the old version in some sort of sane manner, by installing Etch in a chroot and calling the old Rails app from there. Here are the steps I took:
(prerequisites: debootstrap dchroot)

# mkdir -p /var/chroot/etch # Make the new chroot directory
# debootstrap --arch amd64 etch /var/chroot/etch http://ftp.uk.debian.org/debian/
# mkdir -p /var/chroot/etch/var/rails # Where the rails app is going to live (well, it'll actually live outside the chroot and be mounted here)
# mkdir -p /var/chroot/etch/XXXXXXX # removed to protect the innocent
# mkdir -p /var/chroot/etch/var/run/mysqld # this will be bound outside the chroot so that rails can access the mysql instance on the server

I added the following to /etc/fstab, and mounted them:

# Etch chroot (for rails)
/proc /var/chroot/etch/proc none rw,bind 0 0
/tmp /var/chroot/etch/tmp none rw,bind 0 0
/dev /var/chroot/etch/dev none rw,bind 0 0
/var/rails /var/chroot/etch/var/rails none rw,bind 0 0
XXXXXXX /var/chroot/etch/XXXXXXX none rw,bind 0 0
/var/run/mysqld /var/chroot/etch/var/run/mysqld none rw,bind 0 0

From here I could enter the chroot and install some needed applications:

# chroot /var/chroot/etch
# apt-get install ruby
# apt-get install rubygems
# apt-get install libfcgi-ruby
# apt-get install rake
# gem install -y -v=1.2.3 rails
# gem install -y pdf-writer

Then I can configure dchroot by adding this to /etc/schroot/schroot.conf:

[etch]
description=Debian etch (oldstable) for rails
location=/var/chroot/etch
groups=www-data

And finally a quick change to the lighttpd config which runs the fcgi program:
Old:

"bin-path" => "/var/rails/$app/public/dispatch.fcgi",

New:

"bin-path" => "/usr/bin/dchroot -c etch -d -q /var/rails/$app/public/dispatch.fcgi",

and it all works quite nicely. Now I have a stable Lenny system which I can keep up to date and an etch chroot for the legacy code.

It’s been a while…

I’ve not posted to my blog since the end of May, so after two-and-a-bit months it’s high time I wrote something.

Whilst I’ve not been writing, I’ve also not been checking the comments. Due to the amount of spam, I require all comments to be approved by me before appearing on the site, so apologies to all the people who had comments stuck in moderation.

I’ve now been working in my new job for two months and it is generally okay. Windows, Visual Studio (2003) and SourceSafe are all colluding to slowly drive me insane, but for the time being I’m keeping the urge to take a Linux LiveCD into work at bay with healthy doses of Ruby and Debian in the evenings.

The one major cock-up I’ve made at work was an MS-SQL script to delete four rows from a table. Another, related, table had been corrupted so that every row pointed to the same (one) record in the first table. I had written a script to delete the four faulty records and then fix the data in the associated table. Since I was deleting data I, as I make a point of always doing, only used the primary key column of the table I was deleting from, to ensure only the specific records which needed deleting were dropped. Unfortunately I was not aware of SQL Server’s ability to cascade-delete records, nor was I aware that this feature was in use on the tables in question. As a result the related table ended up with nothing in it. Whoops! We are waiting for the backup tape to be sent from Derby to Nottingham in order to restore the data to a point before the script was run. Fortunately all scripts which are run on live database servers have to be peer-reviewed, both for syntactic correctness and for performing the task intended, before they are run, so I have someone to share the blame with. As the script writer I am ultimately responsible for this mistake (through my own ignorance), but my colleague who reviewed the script should have been aware of the cascade delete, and he did not spot the potential problem either. Never mind.
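For the curious, the trap looks something like this (hypothetical table and column names, not the real schema; the point is the ON DELETE CASCADE clause on the foreign key, which I didn’t know was there):

```sql
-- child rows are removed automatically whenever their parent row goes
CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
    id INT PRIMARY KEY,
    parent_id INT REFERENCES parent(id) ON DELETE CASCADE
);

-- with every child row (wrongly) pointing at parent 1, this single,
-- carefully keyed delete also silently empties the child table
DELETE FROM parent WHERE id = 1;
```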

For the past week I have also been shadowing another colleague, who left the company yesterday, to learn about the systems where he was the only person with any knowledge. Last night hosting services, in their infinite wisdom, decided to move all of the servers involved in these systems from one location to an entirely different part of the country. So the one thing that could possibly break everything was performed the very night after the last day of the only person who knew these systems! Go go gadget forward planning.
I have a number of other things to write about, but I have to go to work early today in order to be there should the server move cause any problems. Maybe I’ll find time to write some more tonight (I wouldn’t hold your breath, though).

Samba lameness

While playing with my new remote server from Tektonic I installed and set up virtual domain hosting for various subdomains of entek.org.uk (including blog.entek.org.uk :) ). Additionally, with the help of Tim, I set up Subversion and Trac and created repositories for my config files and for my third-year project. I’ve even already got some tickets filed against my third-year project!

Since I have this new server, which I have complete control over and which is accessible from anywhere with an internet connection, I decided that it would be a good idea to set up duplicates of the websites I work on when employed during the holidays. Additionally I will be able to use svn for revision control of them, so no more developing on the live system, and easy reverting to old versions.

I then decided to dig out a backup of work’s websites I happened to have lying around (although it’s from December, so just a “little” out of date). Since I was feeling lazy, and it was convenient at the time, I decided to plug the USB storage device the backups were stored on into my desktop (running Windows XP) and use Samba to get the files onto my laptop (running Debian). Upon connecting to my desktop I found that smbclient just quit with an error, ‘tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES’. Google appeared to be not terribly forthcoming with a solution, but some refinement of the search found a Tips and Tricks page with a fix. Apparently Windows was being speshul (surprise, surprise!) and by (creating and) editing the DWORD registry value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters\IRPStackSize” to be higher than the XP default (15, or 0xf) everything magically started working. I used the value suggested by the website of 17 (or 0x11). Just remember that the value you’re putting in is in hex, or you could get a nice surprise ;).
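As a .reg file (my own reconstruction, not copied from the Tips and Tricks page), the change looks like this; note that the value really is in hex, so 0x11 is the 17 mentioned above:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters]
"IRPStackSize"=dword:00000011
```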

Courier-imap-ssl woes

In order to be able to resize RAID5 arrays in my mailserver, I upgraded from Debian stable to testing, as that broke less than trying to manually install the relevant packages needed from experimental and unstable. In order to resize RAID5, according to Steinar H. Gunderson, you need a 2.6.17-rc* kernel and mdadm tools >= 2.4.1. Thankfully the updated mdadm tools are in unstable, so installing them on testing was trivial. linux-image-2.6.17-rc3 is in experimental, so installing that was also straightforward: just a case of adding an experimental source, aptitude update, aptitude install linux-image-2.6.17-rc3, and removing the experimental source.

After the stable->testing upgrade everything seemed to be working fine. My mail was still being fetched and delivered locally, mutt was working fine, apache2 was still running and the imaps daemon was still going. This morning I tried to access my email through the copy of SquirrelMail I have installed for easy access without having to ssh into the box. It failed to login with the message:

Error connecting to IMAP server: tls://localhost.
115 : Operation now in progress

To see if the courier-imap-ssl daemon was just not accepting connections I fired up Thunderbird (which I haven’t used in some time, since setting up my mailserver). Thunderbird connected successfully and happily talked to the mailserver, fetched my current inbox and allowed me to poke my emails, although it didn’t seem to like some locally created emails with attachments (it just refused to show the attachments). Starting the non-ssl daemon and telling SquirrelMail to use that instead worked, but it should be able to use the ssl daemon. It was working fine under stable!

According to the DirectAdmin Knowledge Base the error is caused by a bug in PHP. Their solution seems to be to rebuild everything from source. I think I’ll try some less drastic solutions first, such as downgrading SquirrelMail to the version in stable, and if that doesn’t work, downgrading PHP too. Or I could try installing PHP5 (I assume it’s still using 4.something atm).

Anyway, I have two exams in the next 24 hours, so more pokeage of this will have to wait until the weekend.

**UPDATE**
Following some interesting reading on php.net, freebsd.org and bugs.debian.org on the matter, I decided to try installing PHP5 (as those seem to indicate that, on Debian, the problem is an openssl<->php incompatibility). After installing PHP5 it all worked as expected. Hurray! Now for some revision, honest.

Ututo, SuSE 10.1 & Debian and the evil KDE

Last weekend was a Warwick CompSoc LAN party weekend, which for me meant the customary ‘trying the latest Linux releases’ as well as playing the odd game. (Incidentally, Fred introduced me to TO:Crossfire, which is a great mod for UT2004 and looks to be better than Tactical Ops for UT.)

A friend of mine (and shameless free-as-in-speech-software advocate and HURD user), Tim, asked me to try Ututo for him, so I tried to install it on my laptop first. For some reason it failed to boot past the ‘ISOLINUX’ stage. Suspecting a possibly duff CD-R, I used the downloaded ISO image to install it in a virtual machine. It successfully installed in the VM, but failed to boot. Declaring Ututo an unmitigated failure, I moved on…

Next on the list was SuSE 10.1, which had been released on Thursday. SuSE 10.1 seems a lot stabler than 9.3 and 10.0 were. I only had one KDE application crash on me, whereas on 9.3 and 10.0 this was a regular occurrence. The knetworkmanager application is a fantastic addition to the KDE desktop, and automatically picked up the university access point which was within range at the LAN. For Linux newbies, and people who want a ‘just works’ distro, SuSE 10.1 has to be one of the best. Shame I don’t fall into one of those groups ;)

…so back to Debian. After re-installing it I decided to install KDE, since it is a nice desktop (despite the fact that, by default, konqueror, konsole and konversation all use different shortcuts for switching between tabs: Alt+Left/Right, Shift+Left/Right and Ctrl+,/. iirc). For a distro which has a reputation for being Gnome-oriented when using a full desktop environment, its KDE packages are great. Everything just works as expected, and I’m very pleased with it. I wonder how long it’ll last until I decide to go back to something more minimal (e.g. evilwm).

Some interesting links

Here’s some random URLs I thought might be interesting:

Browse Happy
A website explaining why Internet Explorer is unsafe for use on the web. Unlike most other websites of its kind it does not favour any particular ‘alternative’ (read: broadly safe) browser, but instead provides a list of alternatives and a positive description of each.

sorttable: Make all your tables sortable
This site has a nifty looking piece of JavaScript which instantly allows any table on a web page to be sorted by any column, simply by defining it to be of the class ‘sortable’. Since this is JavaScript the sorting is done client-side, so there is no need to resubmit the page for re-ordering, nor will it hammer the server it’s running on with lots of needless (at least as far as serving web pages is concerned) sorts.

apt-get.org: Unofficial APT repositories
A place to share useful (unofficial) repositories for Debian.

Simple PHP Blog
The software my original blog was using – no SQL needed, it’s all stored as text files. Easy to configure and update – just decompress and go. Fantastic! My only gripe is that most of the themes are fixed-width, and the only non-fixed-width theme is not configurable wrt colours. Creating themes doesn’t appear to be very straightforward either, unfortunately. Maybe I’ll have to write my own blog software which only uses CSS for theming, so creating a new theme simply means modifying a CSS file… hmm, yet another project I’ll probably never finish.

Weighted wired & wireless network (and ifplugd)

I was looking for a way of avoiding the wait for DHCP to time out when booting with no network cable attached (actually I was looking for the correct parameter to make the timeout much shorter, but the solution I eventually found was much neater). Most of this comes from an article on the CLUG Wiki about roaming between wireless and wired networks.

First of all I installed ifplugd. Under Debian this was easy: I configured eth0 (my inbuilt wired card) as the only static interface, and ath0 (my inbuilt wireless card, using madwifi) as a dynamic interface, so I could bring the wireless up and down using ifup and ifdown rather than have it connect to any in-range network without my permission (I’m not paranoid, I know they’re coming to get me ;) ).

# apt-get install ifplugd

I also changed -d10 to -d1 in ifplugd’s arguments (in /etc/default/ifplugd on Debian) so that the interface goes down 1 second after the cable comes out instead of 10, as suggested on the CLUG Wiki (link above).

I then edited ‘/etc/network/interfaces’ to ensure that no ‘auto’ lines pointed to eth0 or ath0.

Starting ifplugd was then just a case of:

# ifdown eth0
# /etc/init.d/ifplugd restart

The install was tested by unplugging and re-plugging the network cable and listening for the ‘beep’s that ifplugd emits when it detects these changes.

I continued following the instructions on the CLUG Wiki to setup the priorities of the interfaces so that if both a wired and wireless connection were available it would use the wired one in preference to the wireless.

First I installed iproute:

# apt-get install iproute

Next I modified ‘/etc/network/interfaces’ to look like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
noauto eth0
iface eth0 inet dhcp
up /usr/local/sbin/route-prios $IFACE 1

# Wireless
noauto ath0
iface ath0 inet dhcp
wireless-essid omitted for security
wireless-key omitted for security
up /usr/local/sbin/route-prios $IFACE 10
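The route-prios script itself comes from the CLUG Wiki and isn’t reproduced here. As a rough sketch of what it does (my own reconstruction, with the behaviour assumed: re-insert the interface’s default route with the given metric, so the lower-metric wired route wins when both interfaces are up), it might look something like this:

```shell
#!/bin/sh
# Hypothetical sketch of /usr/local/sbin/route-prios <iface> <metric>,
# NOT the CLUG Wiki original.

# Extract the gateway address from `ip route show dev <iface>` output,
# e.g. "default via 192.168.1.1 proto dhcp" -> "192.168.1.1"
default_gw() {
  printf '%s\n' "$1" | awk '/^default/ { print $3; exit }'
}

# Re-add the interface's default route with the requested metric so the
# kernel prefers whichever interface has the lower metric.
route_prios() {
  iface="$1"
  metric="$2"
  gw=$(default_gw "$(ip route show dev "$iface")")
  [ -n "$gw" ] || return 1
  ip route replace default via "$gw" dev "$iface" metric "$metric"
}

# Called from /etc/network/interfaces as: route-prios $IFACE <metric>
# route_prios "$1" "$2"
```

With eth0 at metric 1 and ath0 at metric 10, traffic goes over the wire whenever the cable is in, and falls back to wireless when it isn’t.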

Debian, madwifi-ng and module-assistant

I installed Debian GNU/Linux on my laptop (over Arch Linux) last week, and used module-assistant to install the madwifi driver for my Atheros-based wireless card using The Debian Way(TM).

Here is just a quick note of the commands needed to install madwifi using module-assistant under Debian GNU/Linux:

# apt-get install madwifi-source madwifi-tools module-assistant
# m-a update
# m-a prepare
# m-a a-i madwifi
# modprobe ath_pci

…and that’s all there is to it. Not quite as easy as ‘emerge madwifi-driver’ or ‘pacman -S madwifi-ng’, but still fairly straightforward.