Thursday 22 March 2007

Dear lazyweb I am looking for an IRC proxy...

I know I have seen somebody blogging about IRC proxies, and I remember someone saying there was a better alternative to dircproxy, but I can't find that article, although I know I flagged it...

So, dear lazyweb, do you know of a good irc proxy which is better than dircproxy and is packaged in Debian (Etch)? I am especially interested in one that can connect to multiple networks and which doesn't force me to change my IRC client (so I don't care about screen+some_console_irc_client). Oh, yes, it must not need a degree in rocket science to configure (dircproxy is ok).

Tuesday 20 March 2007

commit mails, made simple OR integration of services

Collab-maint never ceases to amaze me. And Subversion, too (all DVCS fans, stop now! I know, I use darcs, it's cool). I keep finding more and more interesting things about Subversion.

And no, this is not about Subversion or some VCS; it is about integration of services. We need more of that, generally speaking. I am also thinking about the look and feel of different services (BTW, I have found a nice pretext to justify my halt on the work for the Debian wiki theme :-D - see the current state of the work, if you care).

Why did I write this? I had this conversation (trimmed for the sake of brevity):
goneri wrote:
> On Tue, Mar 20, 2007 at 03:42:56AM +0200, Eddy Petrișor wrote:
>> wrote:
>>> -use lib "/usr/share/svn-buildpackage";
>>> +#use lib "/usr/share/svn-buildpackage";
> Yes, you're right.
> How do you have the commit messages?

I subscribed via the PTS - see "Subscription - Package Tracking System": choose "Advanced Mode"
instead of "Subscribe" and check the corresponding check boxes ;-) .

/me will blog about this.

Friday 16 March 2007

sleep broken, again; thanks to xorg? gnome-core?

Today is the second day on which my laptop does not resume properly from sleep, and I am starting to wonder why. None of the old problems: blank screen, no response...

This time it just loses the GNOME session. I saw two segfaults on the console on resume: one from iceweasel and one from another app. Then gdm comes up.

I think it is xorg's fault, or gnome-core's. Why? Well, it didn't happen 4 days ago, and I have these entries in dpkg.log (grepped for upgrades only):

2007-03-12 00:19:04 upgrade xorg 1:7.1.0-13 1:7.1.0-15
2007-03-12 00:19:04 upgrade xvnc4viewer 4.1.1+X4.3.0-20 4.1.1+X4.3.0-21
2007-03-13 03:10:34 upgrade libeel2-data 2.14.3-3 2.14.3-4
2007-03-13 03:10:35 upgrade libnautilus-burn3 2.14.3-8 2.14.3-8+b1
2007-03-13 03:10:35 upgrade libnautilus-extension1 2.14.3-9 2.14.3-11+b1
2007-03-13 03:10:36 upgrade nautilus 2.14.3-9 2.14.3-11+b1
2007-03-13 03:10:37 upgrade nautilus-data 2.14.3-9 2.14.3-11
2007-03-13 03:10:51 upgrade nautilus-cd-burner 2.14.3-8 2.14.3-8+b1
2007-03-13 03:11:00 upgrade gnome-mount 0.5-2 0.5-3
2007-03-13 03:11:24 upgrade libxine1 1.1.2+dfsg-2 1.1.2+dfsg-3
2007-03-13 03:11:25 upgrade lintian 1.23.27 1.23.28
2007-03-13 03:11:26 upgrade unionfs-tools 1.4+debian-3 1.4+debian-4
2007-03-14 09:46:32 upgrade foomatic-filters 3.0.2-20061031-1.1 3.0.2-20061031-1.2
2007-03-14 09:46:32 upgrade libntfs-3g0 1:0.0.0+20061031-6 1:0.0.0+20061031-6+b1
2007-03-14 09:46:33 upgrade ntfs-3g 1:0.0.0+20061031-6 1:0.0.0+20061031-6+b1
2007-03-14 09:46:33 upgrade ekiga 2.0.3-4 2.0.3-5
2007-03-14 09:46:48 upgrade gnome-desktop-environment 1: 1:
2007-03-14 09:46:48 upgrade gnome-core 1: 1:
2007-03-14 09:46:48 upgrade gnomemeeting 2.0.3-4 2.0.3-5
2007-03-14 09:46:48 upgrade libgraphicsmagick1 1.1.7-12 1.1.7-13
2007-03-14 09:46:49 upgrade graphicsmagick 1.1.7-12 1.1.7-13
2007-03-14 09:46:50 upgrade obexfs 0.10-3 0.10-3+b1
2007-03-14 09:46:51 upgrade graphicsmagick-imagemagick-compat 1.1.7-12 1.1.7-13
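Incidentally, the list above is easy to regenerate; a minimal sketch of the filtering, using a fabricated sample file here instead of the real /var/log/dpkg.log:

```shell
# Fabricated two-line sample of a dpkg log; the real one is /var/log/dpkg.log
cat > /tmp/dpkg.log.sample <<'EOF'
2007-03-12 00:19:04 upgrade xorg 1:7.1.0-13 1:7.1.0-15
2007-03-12 00:19:05 status installed xorg 1:7.1.0-15
EOF

# Keep only the upgrade lines, dropping the status/install/remove noise
grep ' upgrade ' /tmp/dpkg.log.sample
```

On a real system, `grep ' upgrade ' /var/log/dpkg.log` does the same job.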

I'll turn back time by downgrading these and see if I get my resume back...

  • I use Etch, almost no packages from Sid or Experimental...
  • no, I didn't submit a bug yet; although this is not RC from Debian's PoV, it sure feels like this kind of regression shouldn't happen...

Wednesday 14 March 2007

ext2online is not always online

If you are running a system with ext3/ext2 on top of an LVM partition, or you have just made the partition bigger, you obviously want the file system to grow, too.

Easy, ext2online is the answer:

ext2online /dev/mapper/hdapool-homefilesystem

Unless you run into an error like:
ext2online v1.1.19 - 2001/03/18 for EXT2FS 0.5b
ext2online: group 0, block 2 not reserved

ext2online: unable to resize /dev/mapper/hdapool-homefilesystem

That's when you start googling and find that ext2online is not that online after all: it needs the file system to be prepared while unmounted, so first you need to run:

ext2prepare /dev/mapper/hdapool-homefilesystem 23G

Note that the size there is the desired new target for the filesystem.
So you log off, go to a console, umount the file system (unlucky you if it is the root file system; in that case you'll need rescue mode - does D-I have ext2prepare?) and run the command.

Now fsck.ext3 the device, mount it and try ext2online again. Now it should work.
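To recap the steps above as one sequence (a sketch only, not to be run blindly: the device name and target size come from the example, the mount point is an assumption you must adapt, and iirc ext2prepare ships with the ext2resize package):

```shell
# Sketch of the offline-prepare-then-online-grow dance; adapt before use.
umount /home                                        # fs must be unmounted here
ext2prepare /dev/mapper/hdapool-homefilesystem 23G  # 23G = desired new size
fsck.ext3 -f /dev/mapper/hdapool-homefilesystem     # check the prepared fs
mount /home
ext2online /dev/mapper/hdapool-homefilesystem       # now the resize should work
```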

Getting big changes in Debian

Russell Coker reinforces Erich's point by pointing out another disease that affects Debian: large-scale changes in packages are hard to accomplish, and we all know it:
  • kFreeBSD people know it, as their patches often rot in the BTS
  • SELinux people know it, since Debian could have had SELinux support a long time ago, as Russell explains
  • the community knows it, as proven by the question in the DPL debate about a single VCS
  • Stefano knows it and pointed it out when he proposed the XS-VCS field
Sadly, we are still far from this goal, and others are getting ahead of us partly because of it.

Tuesday 13 March 2007

Wishlist for lenny... or why debian packaging is considered hard

(Update: Yes, Gunnar, the policy says that the debian/rules file must be a makefile, so if you do something else, you have a serious bug because of the policy violation. Of course, from a technical PoV there is probably nothing stopping one from using, let's say, bash, or Python, or Perl.)

Erich has a wish list for Lenny.

What drew my attention was the very first item: "define and deploy a standard way of packaging software, that can be used for like 90% of packages"; later he mentions the rationale: "other distributions (e.g. *BSD ports, Gentoo?) are ahead of us in this respect, having a standard way of packaging and building things and keeping track of changes." These two quotes contain the real essence of Gentoo's success in the area of fast development.

About half a year ago I did an experiment: packaging an rpm, a deb and an ebuild for the same piece of software. What I observed back then made me think a lot about the entry-level knowledge a person needs in order to create a decent package for each of those packaging systems.

At the risk of being flamed by many people who will miss my point, I will tell you from the start that the deb would have been the most painful if I had had to start from scratch. Second would have been the rpm, due to the lack of good/updated documentation, and the clear winner in the lack of pain points was the ebuild. In reality, the deb and the rpm switched places.

And before I start, please understand my point: the build system in Debian is showing its age and we should do something about it. This is not about pointing people to documentation; it is about making things behave more sanely by default, and not about having to read for a whole day to understand why the package fails to install a file you just added, only to find that you had to add the destination path to a debian/dirs file, etc...

The deb would have been the most difficult. Not because I hadn't done it before, or because it was hard, but because there are a lot of things that have to be done by hand, since the packaging tools are not smart enough to do them by default (e.g. there is no class inheritance, as there is for ebuilds, or macros, as for rpms).

The Debian approach is backwards; it is reversed with respect to the others, and with respect to what the default should be in order to allow easier understanding of the packaging. You can, by default, make a package that is tweaked a lot, but there is no concept (if you don't count cdbs, which is booed a lot by some people) or possibility to just say "make this package a standard Linux app with the regular ./configure && make && make install recipe".

And I don't count the dh_make-based templates as such a recipe; they are way too verbose. A new maintainer wannabe has difficulties understanding why those lines are in there, what they do and, most importantly, why that order is correct and no other. Another legitimate question is: "is that the minimum, or can it be trimmed?" (Follow debian-mentors and see how many people leave the .ex files in just because they are afraid they will not know how to generate them again.)
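To make the verbosity concrete, here is an abridged, hypothetical sketch of the kind of debian/rules file dh_make generates for a plain ./configure && make && make install package (the real template is considerably longer, and the exact dh_* sequence varies):

```makefile
#!/usr/bin/make -f
# Abridged sketch of a classic debhelper-based debian/rules;
# the real dh_make template adds many more dh_* calls and comments.

build: build-stamp
build-stamp:
	./configure --prefix=/usr
	$(MAKE)
	touch build-stamp

clean:
	dh_testdir
	dh_testroot
	rm -f build-stamp
	-$(MAKE) distclean
	dh_clean

install: build
	dh_testdir
	dh_testroot
	dh_clean -k
	$(MAKE) install DESTDIR=$(CURDIR)/debian/tmp

binary: binary-arch
binary-arch: install
	dh_installdocs
	dh_installchangelogs
	dh_compress
	dh_fixperms
	dh_installdeb
	dh_gencontrol
	dh_md5sums
	dh_builddeb

.PHONY: build clean install binary binary-arch
```

All of this make scaffolding just to express "this is a standard autotools application".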

Even rpm has a concept of the standard things to do for a package; there are macros that implement them. And that is more logical for an untainted mind (as in: not yet debianized) than what we have in the dh_make template or, worse, a clean new debian/rules file. Why should one have to learn the ways and philosophy of make just to make a small package?

Let me tell you something that will feel like a knife stab to many people in our community: the clear winner was the ebuild, not because I was smart or anything, but because:
  • it was dead simple to make a working one
  • the documentation is really verbose and complete
  • there is a default behaviour for every ebuild, even for a minimalist ebuild
  • if your ebuild has a special need, the ebuild can override the default behaviour by redefining the corresponding function (a target, in debian/rules-speak), but in most cases you don't need to
  • there are some already-defined classes which can be inherited to obtain a tweaked default behaviour (imagine a typical cmake build system, a typical autotools build system, a typical KDE build system - and IIRC, this can include build dependencies)
  • all the information about the package is centralized in one file which is human-readable from the start; what a field/function/variable means is either obvious or quick to understand, without continuously digging through the documentation

I did the ebuild in one day's work (it was less, but some will not believe me, so I'd say 1 day is reasonable enough), starting from absolutely zero knowledge about the packaging system (if you don't count knowing how to emerge a package as knowledge).

What the portage system does properly is provide default classes which your ebuild can inherit in order to behave like a certain type of package (base - a generic recipe; gnome; kde; games; sourceforge - these download tarballs directly, if needed; ...).
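To illustrate, a complete working ebuild for a well-behaved autotools application can be this small (a hypothetical sketch: the name, URI and exact variable set are made up, and defaults may differ between portage versions):

```shell
# foo-1.0.ebuild - hypothetical example; every default phase applies:
# unpacking, ./configure, make and make install all come from the
# inherited default behaviour, so no functions need to be defined.
DESCRIPTION="An example autotools-based application"
HOMEPAGE="http://example.org/foo"
SRC_URI="http://example.org/${P}.tar.gz"
LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~x86"
IUSE=""
```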

Why is this not possible for deb right now? Simple: the policy rules that the debian/rules file must be a makefile. So even if one implemented a class-like model for deb packages, you'd still have debian/rules as a makefile...

Gentoo initially implemented portage's default behaviour (and, as a consequence, the "classes") in shell; later they migrated some parts to Python, and the whole of portage is implemented in Python now (while the classes are implemented in bash). I am not saying that we should put Python in essential, God forbid - the emdebian guys would probably stab me :-) - I am saying that possibilities exist and alternatives should be possible.

I know, some will just say "show me the code", "making such a change over the entire archive is insane" or "the makefile is a really good way to do things like installing and building; after all, that's what it was designed for". Yes, all of these miss the point, which is: having do-nothing as the default behaviour for a package is a bad policy. An empty/default package should use a standard recipe to build itself, and that recipe should be implemented in the build system.

And if you still don't believe me, just look at a few ebuilds and tell me you would not be able to write one a quarter of an hour after seeing one for the first time and skimming the docs, and/or knowing what a regular Gentoo user knows (what a keyword is, what a slot is). Also compare with the Debian counterparts, and don't forget the different packaging aids that we have.

One more thing: ebuilds mostly do not need to be updated when an upstream release occurs; they just need to be copied to a different filename.

Wednesday 7 March 2007

"I forgot the PIN of my card" and what results from that

I forgot the PIN of my card recently. I dug into the "safe place where the PIN is written" and found that... (let's say) I don't have it there anymore.

I went to the bank determined to reset my PIN and go on with my life. Here is a short version of the dialog:

- Hello. Your ATM has blocked my card. I would like it back.
- Ok, here you have it, but first, please tell me the secret word you wrote on the form when you made the card.
- Of course... [blabla]
- Ok. The card is unblocked, you will be able to use it in about 15 min.
- Well, yes. I would like to reset my PIN number since I forgot it.
- What kind of card do you have?
- Maestro
- I am sorry, for this type of card, we can't reset the PIN number. We will have to make another card.
- I am sorry, I think there is a misunderstanding here. Are you telling me that I have to make another card just because I want to reset my PIN?
- Yes, for this type of card it is not possible to reset the PIN. We have to make another one. We don't have the list of PIN numbers for this kind of card, so we can't reset it.

[That sounds like "for other types of cards, we have the PIN numbers for every card out there listed on a tally sheet, but we don't for the type of card you have", put in another form. Spooky, huh?]

... If you had the PIN you could change it at the ATM, but we can't do that since we don't have the PIN number.

- Didn't you think that resetting the PIN is a feature useful enough to have?
- [here the broken record model enters the scene] For your type of card we can't reset the PIN, we will have to....

[more questions from my side, trying to understand why it is not possible to just ignore the old PIN and overwrite the information on the card, were, of course, fruitless]

[after some time, I gave up]
- OK, how much will it take for the new card to be created?
- Two weeks.
- What? Anyway... how much will that cost?
- 7 RON (approx. 2€ or 2.5$)
- OK. Can I withdraw money from my account now?
- Yes
- OK, good, at least that. I would like 500RON. Also, I would like to start the card change procedure.
- Of course.
[The guy gives me the money and starts looking into the system to start the "new card" procedure. Before he gave me the money, I asked if he needed my ID or something, but I am not sure he would have asked me himself if I hadn't...]
- There is a small problem...
- Oh, really? Let me guess, it is not possible to start the procedure from this sales point and I will have to go to another one...
- Yes. This card was created at the sales point in Calea Dorobanţilor, through a corporate contract, and I don't have access to that contract from here... Actually, now that I think of it... your employer made that card for you, didn't they?
- Yes.
- In that case you will have to go to your employer and ask for a new card...


On the positive side, at least I can withdraw money from my account... OTOH, I didn't ask whether having the card with me is necessary to withdraw money, so I am not sure whether, in case I have to give the card to my employer, I will still be able to get any cash from the bank...


Christian is talking about problems with suspend-to-RAM and suspend-to-disk, the major cause of his issues being the non-free Conexant HSF modem driver.

Since I got my new laptop, I have had my share of issues with suspend-to-*.

At the beginning everything worked fine, out of the box. Then (I think) I realized that my video card (damned ATI) was not working in accelerated mode, so I decided to fix that, and since the card is too new to even be listed as supported on the radeon driver's support page, I had to use fglrx.

Then I made DRI work, but suspend-to-RAM ceased to work (after a few testing->experimental and experimental->upstream updates, both ways); I started using the experimental driver (8.31.5-1), which proved to be stable enough (this time) for just suspend-to-disk if DRI was enabled, or suspend-to-RAM if it was not.

I even made a patch to help the maintainer update the package faster (I added a get-orig-source rule to debian/rules).
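For the curious, a get-orig-source rule is just one more target in debian/rules; a hypothetical sketch of the idea (the URL, file naming and version variable are made up for illustration - the real ATI download location differs):

```makefile
# Hypothetical get-orig-source rule; URL and naming are illustrative only.
UPSTREAM_VERSION := 8.33.6

get-orig-source:
	wget -O ../fglrx-driver_$(UPSTREAM_VERSION).orig.tar.gz \
	    http://example.com/ati-driver-installer-$(UPSTREAM_VERSION).tar.gz

.PHONY: get-orig-source
```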

Now I have ported the patches to the Debian package, updated it to 8.33.6, and I am running that. Remaining problems: the consoles are not accessible, but suspend-to-RAM and DRI work. I will submit a bug with the patch to update the fglrx package.

I expect I'll soon try to update to 8.34.8, in the hope of getting everything working: suspend-to-*, consoles, DRI.

The game Oolite was liberated

Yet another liberated game: Oolite, a clone of Elite.

I am the maintainer of the package; it was introduced into Debian as a non-free game, but a week ago upstream announced that the game is now dual-licensed under the GPL and cc-nc-sa 2.0 (the old license). The change applies retroactively to the 1.65 release, while further releases will be licensed GPL only for the code and GPL/cc-nc-sa 3.0 for the data.

I have prepared the packages (oolite 1.65-5 and oolite-data 1.65-2) and Simon Richter agreed to make the honorary upload :-) tomorrow morning.

I hope he doesn't forget to build the packages with the '-sa' parameter passed to dpkg-buildpackage :-) and that these packages will reach Etch just in time for the release.
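For reference, -sa is passed through to dpkg-genchanges and forces the inclusion of the original source tarball in the upload - needed here since the .orig.tar.gz itself is new to the archive:

```shell
# -sa: force inclusion of the .orig.tar.gz in the .changes upload
dpkg-buildpackage -sa
```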

As a bonus to this change, the game will be autobuilt on amd64, too.

(BTW, why isn't there any amd64 buildd in the non-free buildd network? Any technical reason?)

Update: it seems the amd64 build was lagging behind in January, when I installed the game on my new laptop. 1.65-4 seems to have been built only a month ago, much later than on the other arches.

Update: the package seems to be on a positive popularity trend: a constant rise in the number of oolite package installations is visible...

Monday 5 March 2007

about stable and oldstable (not sarge and etch)

I was wondering whether it would make sense to allow updates to stable packages (more specifically, to maintainer scripts) just to allow clean upgrades from the current stable to a new stable release (as in: allow changes in Sarge to make upgrades to Etch smoother).

I have already seen many packages for which a small change in the old version would be better than some kludges in the new postinst.