
Monday, 6 April 2015

HOWTO: Dnsmasq server for network booting using TFTP and DHCP

Dnsmasq is a very lightweight server that, besides the expected DNS caching functionality, also offers DHCP and TFTP functionality in a single binary.

This makes it very useful if one needs to network boot a system, since the TFTP and DHCP parts of the setup are done easily, leaving only NFS to add for a complete network boot.

One extra nice thing dnsmasq has is that it can mark specific hosts, addresses or ranges with internal markers, then use those markers as symbolic names to apply settings to whole classes of devices.

In the configuration snippet below, there is a rule I set up to make sure I would apply the 'netbsd' label to any system connecting through specific ethernet interfaces (one is the interface of the system, the other is a USB NIC I use from time to time):
#You will need a range for static IPs in the main file
dhcp-range=192.168.77.250,192.168.77.254,static

# give the name 'kinder' to any machine connecting through the given ethernet nics and apply 'netbsd' label
dhcp-host=00:1a:70:99:60:BB,00:06:4F:0D:B1:95, kinder, 192.168.77.251, set:netbsd

# Machines tagged 'netbsd' shall use the given NFS root path
dhcp-option=tag:netbsd, option:root-path,/export/netbsd-nslu2/root
# Enable dnsmasq's built-in TFTP server
enable-tftp

# Set the root directory for files available via TFTP.
tftp-root=/srv/tftp
Saving this configuration file as /etc/dnsmasq.d/kinder-netboot will make dnsmasq pick it up, provided this line is present in /etc/dnsmasq.conf:
conf-dir=/etc/dnsmasq.d
Commenting it out will easily disable the netbsd part.
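For an actual network boot, clients typically also need to be told which file to fetch over TFTP; dnsmasq does this with dhcp-boot. A minimal sketch (the file names below are assumptions, adjust them to your bootloader and platform):

# hypothetical boot file for PXE clients, served from tftp-root
dhcp-boot=pxelinux.0
# or, only for the machines tagged 'netbsd'
dhcp-boot=tag:netbsd,netbsd-boot.bin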

Monday, 2 September 2013

Integrating Beyond Compare with Semanticmerge

Note: This post will probably not be to the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I mention here. I am not going to detail where the open source alternatives fall short of the commercial tools; I'll leave that for the readers or for another article.



Semanticmerge is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and currently supports Java and C#. Just today its creators started working on support for C.

Recently they added Debian packages, so I installed it on my system. For open source development Codice Software, the creators of Semanticmerge, offer free licenses, so I decided to ask for one today and, although it is Sunday, I already received an answer and will get my license on Monday.

When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and can pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples are using kdiff3 as the text-based merge tool, which is nice, but I don't use kdiff3, I use Meld, another open source visual tool for merges and comparisons.


OTOH, Beyond Compare is a merge and compare tool made by Scooter Software which provides a very good text based 3-way merge with a 3 sources + 1 result pane, and can compare both files and directories. Two of its killer features are the ability to split differences into important and non-important ones according to the syntax of the compared/merged files, and the ability to easily change or extend the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming or other trivial code-wide changes, which lets the developer focus on the important changes/differences during merges or code reviews.

Syntax support for usual file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing) and new file types with their syntaxes can be added via the GUI from scratch or based on existing rules.

I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for the people in our department.


Having these two tools separate is good, but having them integrated with each other would be even better, so I decided to see how it could be done. I installed Beyond Compare on my system, too, and looked through the examples.


The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called by the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' starting scripts invoked semanticmergetool:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""
Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user's preferred text-based merge tool and exports the issue to the SCM, at the price of overcomplicating the command line. I already mentioned this issue in my license request mail and added it, with my suggested fix, to their feature voting system.

The upside was that by comparing the kdiff3 invocations, the kdiff3 documentation and the Beyond Compare SCM integration information, I could deduce the command line necessary for Semanticmerge to use Beyond Compare as an external merge and diff tool.

The -edt, -emt and -e2mt options specify how the external diff tool, external 3-way merge tool and external 2-way merge tool are to be called. Once I understood that, I split the problem into its obvious parts: each invocation had to be mapped from kdiff3 options to Beyond Compare options, adding the occasional bell and whistle, if possible.

The parts to figure out, ordered by complexity, were:

  1. -edt="kdiff3 \"#sourcefile\" \"#destinationfile\""

  2. -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""

  3. -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\""

Semanticmerge integrates with kdiff3 in diff mode via the -edt option. This was easy to map to Beyond Compare; I just replaced kdiff3 with bcompare:
-edt="bcompare \"#sourcefile\" \"#destinationfile\""
Integration for 2-way merges was also quite easy; the mapping to Beyond Compare was:
-e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\""
For the 3-way merge I was a little confused, because the Beyond Compare documentation and options are inconsistent between Windows and Linux. On Windows, for some of the SCMs, the options that set the titles for the panes are '/title1', '/title2', '/title3' and '/title4' (way too descriptive for my taste /sarcasm), but for some others they are '/lefttitle', '/centertitle', '/righttitle', '/outputtitle', while on Linux the options are of the more explicit kind, but with a '-' instead of a '/'.

The basic things were easy: ordering the parameters as 'source, destination, base, output' instead of kdiff3's 'base, source, destination, -o output'. Then I wanted to add the bells and whistles, since it really makes more sense for the developer to see something like 'Destination: [method] readOptions' instead of '/tmp/tmp4327687242.tmp', and that is exactly what Semanticmerge needs when merging methods, since on conflicts the various versions of the functions are placed in temporary files whose names don't mean anything.

So, after some digging into the examples from Beyond Compare and kdiff3 documentation, I ended up with:
-emt="bcompare  \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"

Sadly, I wasn't able to identify the symbolic name for the output, so I hard coded 'merge result'. If the Codice people would like to help with this information (if it exists), I would be more than willing to update the information and make the necessary changes.

Then I added the bells and whistles for the -edt and -e2mt options, ending up with an even more complicated command line. The end result was this monstrosity:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'" -emt="bcompare  \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'" -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
So when I 3-way merge a function I get something like this (sorry for the high resolution, lower resolutions don't do justice to the tools):



I don't expect this post to remain relevant for too long, because after I sent my feedback to Codice they were open to my suggestion to have persistent settings for the external tool integration, so in the future the command line could probably be as simple as:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java
And the integration could be done via the GUI, while the command line can become a way to override the defaults.
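Until then, a small wrapper script can hide the monstrosity; a sketch based on the mappings above (the script name and location are my own choice, not something Semanticmerge provides):

#!/bin/sh
# semanticmergetool-bc: call semanticmergetool with Beyond Compare set up
# as the external diff, 3-way merge and 2-way merge tool
EDT="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
EMT="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"
E2MT="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
exec semanticmergetool "$@" -edt="$EDT" -emt="$EMT" -e2mt="$E2MT"

It would be invoked as, for instance: semanticmergetool-bc -s=src.java -b=base.java -d=dst.java -r=result.java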

Thursday, 18 July 2013

HOWTO: git - remove those pesky remotes/origin/

Often people create branches to fix various bugs or implement various features, and publish those to a public repository to be pulled in the official repository (e.g. pull requests in the github model).

The problem is that, after the fix is merged, the local repo still has the branch that tracks the origin (your public repository).

Removing the local branch is simple:

git branch -d feature_x

To remove the actual branch in the remote repository:

git push origin --delete feature_x

But the remote-tracking branches still remain, those pesky 'remotes/origin/feature_x' refs. Here is an example from one of my repos:

$ git branch -a
  activestate
  master
  next
* py.test
  sysplatformtest
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/appauthor-fallback
  remotes/origin/linux-fixes
  remotes/origin/master
  remotes/origin/next
  remotes/origin/py.test
  remotes/origin/root-directories
  remotes/origin/sysplatformtest
  remotes/origin/wrapper-defaults

All of those can be removed with this command:

git remote prune origin

 And the output for my appdirs repository is:

Pruning origin
URL: git@github.com:eddyp/appdirs.git
 * [pruned] origin/appauthor-fallback
 * [pruned] origin/linux-fixes
 * [pruned] origin/next
 * [pruned] origin/root-directories
 * [pruned] origin/sysplatformtest
 * [pruned] origin/wrapper-defaults
Which finally removes those damned refs:

$ git branch -a
  activestate
* master
  py.test
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/py.test
Great!

Update 2013-07-24: Other solutions have also been proposed in the comments section. I am quoting them here for better visibility.

Besides the solution I proposed, people commented about other alternatives for pruning remote branches: using git fetch --prune or git-push's --prune option.
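For reference, those alternatives look like this:

# fetch from origin and prune stale remote-tracking branches in one go
git fetch --prune origin
# the push variant deletes remote branches with no local counterpart - use with care
git push --prune origin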

Brice provided a method to remove specific branches one by one with git branch -dr command:
git branch -dr origin/foo
What is ironic is that I have used this form in the past, and before writing the original article I was looking for it but couldn't find it easily. I hope this article will help some other poor soul, or myself, in the future.

Sunday, 30 December 2012

Looking for a backup application for a small home network

I have been looking at backup solutions for the last few days and I am stuck, so I am asking for pointers or suggestions.

Since all backup solutions are appropriate to various needs, here are my requirements for my home network (two laptops backing up to a very low powered server with a 3TiB HDD):
  • Free software/open source cross-platform solution - must be able to backup both Linux and Windows clients
  • network backup (to a Debian server, storage on HDD)
  • very low CPU and memory needs on the server side (server is a de-underclocked Linksys NSLU2 running Debian armel)
  • automatic backups with low maintenance cost and easy setup and recovery (setup once, forget about it)
  • easily accessible filesystem based storage
  • clients should be smart enough to detect when they aren't in the home network and not try to backup when away
  • available in Debian
These are the basic requirements, and bonus-points requirements include:
  • Windows clients don't need Cygwin
  • optional encryption (for storage)
  • default sanity checking for stored files (detection/correction of corrupt backed-up files)
  • deduplication (if present, sanity checking is mandatory)
  • logarithmic storage is a plus
  • Web/nice interface for both client and server is a plus
According to the info I found, candidates matching these criteria are Amanda and BackupPC. I like the rsnapshot idea of deduplication and incremental backups, but it's not available for Windows. BackupPC seems OK-ish, but it sounds too much like a bunch of Perl scripts, Windows support sounds like an afterthought, and I dislike the idea of installing Cygwin just to provide some *nix tools.

I discarded Bacula because the general impression I got from what I read is that it is hard to set up, has its own storage format and has heavy needs for both clients and server. It sounds like overkill.

So what do other people recommend for my setup? Is Amanda OK? Did I get the wrong impression regarding BackupPC?

Monday, 15 October 2012

HOWTO: sudo + cowbuilder (+git-buildpackage)

In case you tried to use git-buildpackage with cowbuilder as the builder, you might have run into the error

sudo: sorry, you are not allowed to preserve the environment

This is due to a change in sudo's default configuration in version 1.7.4p4-2 (I know, this has been true since 2010) which disallows executing commands via sudo with the '-E' parameter, which means 'setenv'.

The news item even explicitly states pbuilder can be affected by this, because it wants to carry the HOME environment variable over into the pbuilder environment, and suggests using

Defaults env_keep += HOME

But adding such a line to your /etc/sudoers.d/01_pbuilders file (/etc/sudoers is recommended to be touched only by the package) will do the same for all commands and users run via sudo, which is, for my taste, too permissive.

The irony is that running git-buildpackage --git-pbuilder will invoke sudo -E cowbuilder, so the suggested env_keep fix alone will not work: for -E to be allowed, setenv needs to be set in sudoers. Doing that for all commands would defeat the purpose of env_reset, but we can do a better job by allowing this change only for the cowbuilder and pbuilder commands. You do have the explicitly allowed commands stated individually, don't you?

On my system I have made a group especially for people allowed to do packaging work, the group is called 'pack'. The only account in that group is my own.

Also, I have defined a command alias named PBUILDERS which looks something like this:

Cmnd_Alias PBUILDERS = /usr/sbin/pbuilder, /usr/sbin/cowbuilder

Running the PBUILDERS commands is already restricted to the pack group. Here is an example that requires a password on each run:

%pack ALL=PBUILDERS

So all that needs to be done is to allow setenv for the PBUILDERS commands. Reading through the sudoers manpage, and after some trial and error (use visudo for editing sudoers files: visudo -f /etc/sudoers.d/01_pbuilders), I found out that the symbol that distinguishes between commands, users and groups in a 'Defaults' line needs to be right next to the 'Defaults' word. For commands the sign is an exclamation mark '!' (for user lists it's ':'), so since we want to tie the exception to the command, not the user list, we'll use 'Defaults!':

Defaults! PBUILDERS setenv

Assuming you also have a PBUILDERS command alias, this is all you need to be able to use git-buildpackage in conjunction with cowbuilder and sudo.
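Putting the pieces together, the whole /etc/sudoers.d/01_pbuilders ends up looking roughly like this (the group name and the password policy are from my setup, adapt them to yours):

Cmnd_Alias PBUILDERS = /usr/sbin/pbuilder, /usr/sbin/cowbuilder
%pack ALL=PBUILDERS
Defaults! PBUILDERS setenv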

If there is a non-intrusive way to prevent sudo from using -E when invoking cowbuilder, please leave a comment, I would be interested to know it.

Saturday, 25 August 2012

HOWTO: Things to remember about cowbuilder

Here are a few things to remember about cowbuilder:

If you run cowbuilder through sudo, and you want to build a source package whose result should be available to the user who initiated the build, then

  • you should have "BUILDRESULTUID=the_user's_id" in ~/.pbuilderrc, and
  • you might want to invoke cowbuilder with

'sudo cowbuilder --build the_pack_to_build.dsc --buildresult destination_dir_for_build_results'
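For the first point, a minimal ~/.pbuilderrc sketch (the UID 1000 is an assumption, use your own user's id):

# hand the build results to the invoking user instead of root
BUILDRESULTUID=1000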

If you want to log in to a chroot environment in which you'd like to see part of your directory alongside the unpacked source tree of your package, then invoke cowbuilder with

sudo cowbuilder --login --bindmounts /path/to/the/dir/you/want

Your base.cow directory can be updated/changed manually with

sudo cowbuilder --login --save-after-login

You can update the base.cow directory with

sudo cowbuilder --update

 That's about it.

Also, a noteworthy tip: ccache might not work correctly in the cowbuilder chroot.

Sunday, 4 March 2012

HOWTO: Fix: Baobab opens directories in Totem/VLC (and some Xfce4 related things)

If you ever used filelight or baobab you probably know how useful they are. If you didn't, then you have missed a lot on how to spot (and fix) where your disk space is wasted.

With my recent attempt to upgrade to GNOME 3, which, because of its innate property of being useless and counter-productive, actually made me use Xfce4 with a mix of GNOME applications (since Xfce lacks a few functionalities here and there), I ran into all sorts of problems.

As a side note, Xfce4 is quite decent, but if you like some icons on your panels to be left aligned and some right aligned, you should know that you can add a Separator item to the panel, right click on it -> Properties and tick the Expand* check box. If you also set the Transparent style, it will look nice, too.

Back to the topic. With my mix of Xfce and GNOME apps, I configured my top panel to contain a Free Space Checker for my /home file system, and today it alerted me that I was low on disk space, so I started baobab to check what I could clean up.

When I found a possible suspect, I wanted to open the directory with a file browser but, instead, Totem started and began queuing all the files in the offending directory. The problem is that, one way or another, Totem (or VLC) was configured to be the default handler for directories instead of the file manager.

The solution is simple: open the file ~/.local/share/applications/mimeapps.list with an editor and search for the line starting with inode/directory= and you'll see something like:

inode/directory=nautilus-folder-handler.desktop;baobab-usercreated.desktop;vlc-usercustom.desktop;


Remove the offending part, vlc-usercustom.desktop;, save the file and try again to open that directory from baobab. If you are double-lucky :P and now it opens with Totem, you will have to remove a reference to 'totem-usercustom.desktop;' or something of that sort. Now, on my system, that line looks like this:

inode/directory=nautilus-folder-handler.desktop;baobab-usercreated.desktop;


And now it works as expected**
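The same cleanup can also be done from a terminal; a one-liner sketch (assuming the offending entry is vlc-usercustom.desktop):

sed -i 's/vlc-usercustom\.desktop;//' ~/.local/share/applications/mimeapps.list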

* I suppose it's called like that in English, I have my desktop in Romanian
** Except that I would like it to start my desktop preferred file manager, not Nautilus, but that's another issue.

Wednesday, 21 October 2009

Microblogging, let's try it...

I figured I might try some microblogging service, so I decided to use identi.ca because it is a free platform, as opposed to twitter.

I will probably post really occasionally, but since I have a client for identi.ca on my phone I expect to update it more often than the full blog:

Follow me at: http://identi.ca/eddyp

Saturday, 20 June 2009

MSI PR200WX-058EU sleep - 91a6c462b02d8dc02dbe95e5a407d78078a38d01 is first bad commit

Nailed it!

After a rocky start, I managed to find the commit that broke sleep for my laptop.


91a6c462b02d8dc02dbe95e5a407d78078a38d01 is first bad commit
commit 91a6c462b02d8dc02dbe95e5a407d78078a38d01
Author: H. Peter Anvin

Date: Wed Jul 11 12:18:57 2007 -0700

Use the new x86 setup code for x86-64; unify with i386

This unifies arch/*/boot (except arch/*/boot/compressed) between
i386 and x86-64, and uses the new x86 setup code for x86-64 as well.

Signed-off-by: H. Peter Anvin

Signed-off-by: Linus Torvalds



Simply reverting this commit wouldn't fix the problem entirely, since the screen was always blank after the successful resumes; OTOH, the script used for testing was supposed to do stuff after resume, stuff which would have visible effects on the hard disk, so on the next reboot it was visible whether the last sleep/resume cycle was successful or not.


Great. Oh, and git bisect rules!




Now, if you're interested in finding a regression in the kernel and you might be interested in how I automated the thing, here are some small scripts I used:

  • linux-build - a wrapper script around make-kpkg to build .deb packages of the linux kernels I build; I used it way before this bisect, but now I modified it in such a way that the kernels are clearly versioned and indicate the commit to which they correspond, too
  • sleepit - a script that automates the actions needed for a linux kernel to be tested; it is really trivial and highly specialized on sleep/resume debugging; it assumes it is run in the directory from which you'd later want to grab the dmesg outputs
  • sleeptest - a wrapper script that is smart enough to detect if the current kernel is a kernel to be tested or a stable (regular) one
    • if the kernel is a stable one:
      • it looks for the signs left by the last test kernel and, depending on them, marks the kernel bad or good in the bisect; this results in a new checkout to be processed or, if the bad commit was identified, the script stops
      • in the case of a new bisect step, the new checkout is cleaned up, patched and built, then the script installs the new linux-image .deb[1] and runs update-grub[2], leaving the reboot command at my discretion for the eventual case something went awry; a failure to compile the kernel in an automated fashion would drop me into an interactive console, meaning I had to do manually the steps necessary to be ready to boot into the next kernel
    • if the kernel is a test kernel, it runs the sleepit script
The main script is sleeptest, which is run as root to allow the sleep commands, the installation of the kernel and update-grub; the build itself is done via su as my regular user.

As a supplemental speed up, I configured libpam-usb to authenticate root and myself through a USB storage device, which is quite cool. I am still pondering whether I should keep this enabled or migrate to something like libpam-rsa[*].

Of course, the scripts contain stuff hard-coded into them (my user name, for one), but they can easily be modified to remove those limitations (generally they use variables).


linux-build


#!/bin/sh
# License: GPLv2+/MIT
# Author: Eddy Petrișor
#
# This script must be run from the kernel tree directory with:
# linux-build [--no-headers] [--rebuild]

FATTEMPT=../attempt

TARGETS="kernel-image kernel-headers modules_config modules"
[ "$1" = "--no-headers" ] && shift && TARGETS="$(echo $TARGETS | sed 's#kernel-headers ##')"

if [ -f $FATTEMPT ]
then
    ATT=`cat "${FATTEMPT}"`
    if [ $# -eq 0 ]
    then
        ATT=`expr $ATT + 1`
        make-kpkg clean
    else
        if [ $# -eq 1 ] && [ "$1" = '--rebuild' ]
        then
            # nothing to do, we are already set
            echo 'Preparing for rebuild'
        else
            echo 'Illegal parameters'
            exit 2
        fi
    fi
else
    ATT=1
fi

# no problem if is rewritten on rebuild
echo "$ATT" >$FATTEMPT

# must define MODULE_LOC for mol module compilation
DIR=`pwd`
cd ..
MODULE_LOC=$(pwd)/modules
# this didn't work
# export ALL_PATCH_DIR=$(pwd)/linux-patches
cd ${DIR}

echo "Modules should be here: ${MODULE_LOC}"
echo "Stop by ctrl+c, if the independent modules aren't there"

# press ctrl+c, if needed -- disabled for now
#read

export MODULE_LOC
export CONCURRENCY_LEVEL=$(grep -c 'processor' /proc/cpuinfo)

[ -d .git ] && PREFIX="g$(git log --pretty=oneline --max-count=1 | cut -c 1-8)-" || PREFIX=""
APPEND=$PREFIX$(hostname)

#time make-kpkg --rootcmd fakeroot --revision ${ATT} --stem linux --append-to-version -`hostname` --config menuconfig --initrd --uc --us kernel-image kernel-headers modules_config modules
#time make-kpkg --rootcmd fakeroot --revision ${ATT} --stem linux --append-to-version -`hostname` --added-patches 'ata_piix-ich8-fix-map-for-combined-mode.patch,ata_piix-ich8-fix-native-mode-slave-port.patch' --config silentoldconfig --initrd --uc --us kernel-image kernel-headers modules_config modules
time make-kpkg --rootcmd fakeroot --revision ${ATT} --stem linux --append-to-version -$APPEND --config silentoldconfig --initrd --uc --us $TARGETS


sleepit


#!/bin/sh

FAILEDRESUME=/failed-resume
RESUMED=/resumed

# load the Intel video driver and stop acpid so nothing else reacts to the power events
modprobe i915
invoke-rc.d acpid stop
# assume failure: if the machine never resumes, this marker survives the reboot
echo "$(uname -r)" > $FAILEDRESUME
dmesg >dmesg_before_$(uname -r); echo mem > /sys/power/state; dmesg >dmesg_after_$(uname -r); sync
# we got past the resume; record the success and remove the failure marker
echo 'resumed, oh my god' > resumed
echo "$(uname -r)" >> $RESUMED
rm -f $FAILEDRESUME
sync
sleep 10
reboot



sleeptest


#!/bin/sh

RESULTSDIR=/root/var/debug/sleep/regression
UNAMER="$(uname -r)"
FAILEDSLEEPFILE=/failed-resume
RESUMED=/resumed
SOURCEDIR=/home/eddy/usr/src/linux/linux-2.6

check_same_commit ()
{
local COMMIT
COMMIT=$(git log --pretty=oneline --max-count=1 | cut -c 1-8)
[ "$COMMIT" = "$1" ] && return 0 || return 1
}

get_rev_from_unamer ()
{
echo "$1" | sed 's#.*-g\([0-9a-f]*\)-heidi#\1#'
}

mark_bad ()
{
cd $SOURCEDIR
su -c 'git reset --hard HEAD' eddy
su -c 'git bisect bad' eddy
cd -
}

mark_good ()
{
cd $SOURCEDIR
su -c 'git reset --hard HEAD' eddy
su -c 'git bisect good' eddy
cd -
}

compile_next ()
{
cd $SOURCEDIR
if [ -f $FAILEDSLEEPFILE ] ; then
LKVER=$(cat $FAILEDSLEEPFILE)
else
LKVER=$(tail -n 1 $RESUMED)
fi
PREVCOMMIT=$(get_rev_from_unamer "$LKVER")

if check_same_commit "$PREVCOMMIT" ; then
echo "It looks like you got your result!"
exit 1337 # of course $? isn't 1337, but anyways
fi

# NOTE: the rest of the script was mangled by the blog's HTML escaping; below is a
# best-guess reconstruction - the patch file name and results-dir handling are assumptions
PATCHFILE=../local-fixes.patch
su -c "make clean && rm -fr debian && git reset --hard HEAD && patch -p1 < $PATCHFILE && linux-build" eddy
cd -
}

# main: is the running kernel a test kernel (carrying the -g<commit>-<hostname>
# suffix added by linux-build) or a stable one?
if [ "$(get_rev_from_unamer $UNAMER)" != "$UNAMER" ] ; then
# test kernel: run the sleep/resume test from the results directory
mkdir -p $RESULTSDIR
cd $RESULTSDIR
sleepit
else
# stable kernel: mark the previous test kernel good or bad, then build the next one
if [ -f $FAILEDSLEEPFILE ] ; then
LKVER="$(cat $FAILEDSLEEPFILE)"
echo "Marking >>> BAD <<< $LKVER ($(get_rev_from_unamer $LKVER))"
mark_bad
else
LKVER=$(tail -n 1 $RESUMED)
echo "Marking >>> good <<< $LKVER ($(get_rev_from_unamer $LKVER))"
mark_good
fi
compile_next && \
cd $SOURCEDIR/.. && \
echo 'Installing the linux-image and running update-grub && reboot' && \
dpkg -i $(ls linux-image-*_$(cat attempt)_*.deb) && \
update-grub
fi


You have my permission to use, modify and redistribute these scripts or modified versions based on these under the terms of the MIT license.


[*] because the libpam-rsa package seems to be unmaintained (especially upstream), while libpam-usb seems to be inactive (maybe it is considered finished by upstream?)

[1] I didn't automate the removal of the previous test kernel, but that could have been done easily

[2] I haven't made a custom grub section for the test kernels such that they would boot by default at the next reboot, since I considered that too cumbersome for the moment (although I had /vmlinuz symlinks) and it was simpler to select the kernel manually

Thursday, 18 June 2009

Howto: transitioning to grub2 from lilo (LVM)

Failing to boot my self compiled kernel, I came to the conclusion that, in order to debug the initramfs issues I might have, I needed an easy way to pass different break=* parameters on the kernel command line, to identify which part of the boot within the initramfs went awry.

Since I was stuck with lilo, and booting with different parameters is a pain when using it, I decided it was time to try to install and start using grub-pc, aka grub2.


Documentation on migrating from lilo (or even grub 1) is lacking, even more so for cases where you have the /boot directory on LVM, like I do. IMHO, one of the most important things during a migration is to never lose the ability to properly boot your system, which means it was a must not to write the grub code into the MBR unless I was 100% sure I would be able to boot with grub at the next reboot.


I tried quite a few approaches (installing grub in a partition, trying to create a real /boot partition), but they all failed one way or another, so I will not describe any of those failing methods.


What I found to work was to:
  • install the grub-pc package,
  • create its configuration and set it up (creates the module files in /boot/grub and /boot/grub/grub.cfg),
  • then install the boot code on an external USB stick, without writing anything else to the stick (so no data lost from the stick);
  • after successfully booting from the USB stick, install the boot image on the internal harddisk's MBR
  • reboot and be happy with grub2
Note that, although I run lenny, I decided to use sid's grub package, since the little information on grub2 that existed pointed to grub-mkconfig, which doesn't exist in lenny's grub-common package. Due to a bug I found earlier at work when trying to migrate my workstation, I knew the squeeze version wasn't good either, because it failed to properly boot systems with / on LVM. The earliest version that works is 1.96+20090603-2, and it installs without problems directly from sid on a lenny system.

Now let's go over the steps in more detail*.

Creating the configuration is a matter of creating the device.map file and grub.cfg:
update-grub
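If the device.map file is not generated automatically, grub-mkdevicemap can create it, or it can be written by hand; a typical sketch (the device names are assumptions matching a one- or two-disk system):

(hd0) /dev/sda
(hd1) /dev/sdb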


Create a core.img file with support for lvm:
grub-mkimage --output=/boot/grub/core.img ext2 chain pc gpt biosdisk lvm


Install on the MBR of the USB stick (making sure lvm will be visible):
grub-install --modules="pc ext2 biosdisk gpt chain lvm" /dev/sdb


Try to boot at least one kernel from the stick. If it fails, you probably didn't add the proper modules (see the possible module names in /usr/lib/grub/i386-pc and add the necessary ones to the modules list). If you're dropped into the GRUB rescue prompt, use ls and set to check whether the root and prefix variables are set correctly and whether the lvm root is visible.

If you manage to boot, you're ready to replace lilo or your old boot loader.


Install on the internal disk MBR:
grub-install --modules="pc ext2 biosdisk gpt chain lvm" /dev/sda


Reboot and be happy.



Please note that you might have added some optional parameters in your old lilo.conf which aren't present in the new /boot/grub/grub.cfg file, and you might want to add those. If you want to add some option to all the images, you'll probably want to append those parameters to the value of this variable in /etc/default/grub:

GRUB_CMDLINE_LINUX="quiet"

For instance, since I want to find when sleep stopped working for my machine, I changed mine into:

GRUB_CMDLINE_LINUX="acpi_sleep=beep ec_intr=0"


After that, run update-grub once more to propagate the changes into /boot/grub/grub.cfg.





* skipping package installation; just use aptitude's text user interface and install grub-pc and grub-common from sid; depending on the system you might need to install grub-efi, grub-ieee1275 or even grub-linuxbios instead of grub-pc

Friday, 16 January 2009

Symbian development on Linux (almost native) (Part 1)

After my previous failed attempt at installing Carbide C++[1] to allow me to develop stuff for my Nokia E71 (i.e. Symbian), I have recently been digging through the internet for information on how to develop directly on Linux.

After much digging and lots and lots of confusion, I actually managed to find some useful information which allowed me to install a cross compiler and the C++ SDK for my phone (S60 3rd Edition Feature Pack 1) on my MSI PR200 laptop running Debian Lenny (amd64). The compiler was built from source and its build arch is x86_64 linux, so it should be a little bit faster than the precompiled binaries for i686.


The key to the solution was the GnuPoc project and the fork that offers support for platforms newer than S60 v2, which is available at:

http://www.martin.st/symbian/


Besides the GnuPoc download, you'll need to install the appropriate toolchain and compiler:

EKA1 + a modified gcc release - for S60 v1 and v2
EKA2 + CodeSourcery's GCC (I chose the source variant) - for S60 v3 or newer and UIQ


The instructions on Martin's page are really good, so I won't repeat them here.



After this, there's the non-free part, the SDK installation; the SDK needs to be downloaded from forum.nokia.com (you need an account on Forum Nokia, but the download is free of charge). The installation procedure is also explained on Martin's page and it works properly (at least it worked for me).

By the way, there are two variants of the SDKs: a full SDK (Java, C++) and one only for C++. The instructions refer to the C++ version, but I don't know if they'd work with the full version.



I have done everything up to this point, but I haven't yet tested whether things work; I'll do that and update the information.


[1] actually I needed a tool chain, but having a graphical IDE was looking really good

Thursday, 14 August 2008

vga=what in lilo.conf for 1280x800 1440x1024

With wide screen laptops using 1280x800 or something else like 1440x1280 becoming more frequent, most people would want the console in vesa mode to use the entire screen, not only 1024x768.

I was in that situation and wanted to know the "secret code" needed for that console mode, since these modes for 1280x800 seem to be vendor specific and not a VESA standard. They are not present in the official documentation, but can be found with a simple command:

hwinfo --framebuffer | grep 1280x800

and pick one of those that makes sense for you. Add it to lilo.conf, run lilo (and run etckeeper commit, if you use etckeeper) and reboot. Enjoy.
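As an illustration, if hwinfo reports a mode such as 0x0361 for 1280x800 (the actual number is vendor specific, so treat this one as hypothetical), the lilo.conf entry would be:

vga=0x361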

Saturday, 26 July 2008

fast translation bug report via icedove

Here's a quick way to report po-debconf transalations:



Courtesy of Quicktext


Update: the image above is an animated gif and some viewers might not be able to render such images correctly. Apparently even on the blog the animation isn't visible unless you click on the image itself.

Wednesday, 5 March 2008

email harvesting prevention - simple and efficient

On this page I've seen this nice way of preventing email address harvesting (I have NoScript installed):



When javascript is enabled/permitted, it looks like this:


The code is really simple and could easily be applied automatically to pages (the original code obfuscates even more, but that is irrelevant - see the original source for the code):
<script language='JavaScript' type='text/javascript'>
<!--
var prefix = 'ma' + 'il' + 'to';
var path = 'hr' + 'ef' + '=';
var addy35780 = 'g.martinez' + '@';
addy35780 = addy35780 + 'pcbsd' + '.' + 'es';
document.write( '<a ' + path + '\'' + prefix + ':' + addy35780 + '\'>' );
document.write( addy35780 );
document.write( '<\/a>' );
//-->\n </script><script language='JavaScript' type='text/javascript'>
<!--
document.write( '<span style=\'display: none;\'>' );
//-->
</script>This e-mail address is being protected from spam bots, you need JavaScript enabled to view it
<script language='JavaScript' type='text/javascript'>
<!--
document.write( '</' );
document.write( 'span>' );
//-->
</script>


BTW, the PC-BSD site looks really nice, unlike Debian's.

Sunday, 2 March 2008

pbuilder - /etc/pbuilderrc no longer a conffile (333294)

I have just finished a completely functional patch for pbuilder's bug #333294. I started it during debconf7, to be more precise just after Junichi's talk about pbuilder. This patch makes pbuilder smarter about the information it places in the MIRRORSITE variable of /etc/pbuilderrc.

And now that file is no longer a conffile[1]; instead, some questions are asked via debconf. And since debconf is translatable, there is a Romanian translation, too.

I have published my changes in my pbuilder git repo on alioth.

Anyone interested in the changes can pull from that repo:

git://git.debian.org/git/users/eddyp-guest/pbuilder.git
or
http://git.debian.org/git/users/eddyp-guest/pbuilder.git
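That is, something like:

git clone git://git.debian.org/git/users/eddyp-guest/pbuilder.git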


[1] this was the main reason the patch wasn't good enough before, since modifying a conffile is an RC bug.

Saturday, 1 March 2008

TDebs - arch is not important, is it?

Updates:
  1. the arch issue raised by Frans was not the object of my post. Also, not all points are invalid/reiterated, just the ones iterated here. Thanks, Frans.
  2. added a note that there is an image proving that .mo endianness is irrelevant - useful for readers who might miss it



While looking at Neil Williams' talk (hi-res/lo-res) from FOSDEM about cross building and tdebs, I was STUNNED by how many (not all, thanks Frans) points were reiterated during that talk when they had already been answered.

Well, I guess you have to repeat yourself a lot to get your ideas through to the other side. So, let's do that.

All of the following (except the first) have already been answered one way or another on the TDebs wiki page.

Q: Are .mo files arch independent or not? Neil asks again on his blog.
A: It doesn't matter, does it?


0 eddy@bounty ~ $ file /usr/share/locale/ro/LC_MESSAGES/wormux.mo
/usr/share/locale/ro/LC_MESSAGES/wormux.mo: GNU message catalog (big endian), revision 0, 228 messages
0 eddy@bounty ~ $ uname -a
Linux bounty 2.6.24.2-bounty #1 SMP Wed Feb 13 02:34:09 EET 2008 x86_64 GNU/Linux
(there is a picture here which proves my point, if you can't see it, click here).



Q: The packages file will explode
A: No, that's addressed.

Q: How do you select languages?
A: Simply use what we have and expand on that.

Q: Do tdebs support anything else than gettext?
A: Yes.

Thursday, 28 February 2008

sarge + cowbuilder = love

Update:
A disclaimer/warning is needed: UNLESS you KNOW what you're doing, you are probably better off building packages for each of the supported Debian distros.


I know it sounds cheesy, but I couldn't find a better name.


At my workplace we package software to work on as many distros as possible at one time. As a consequence, the .deb packages we produce should work fine on all Debian distros: oldstable, stable, testing, sid.

To make sure we achieve that, we build them in an oldstable chroot. OTOH, using pbuilder is very demanding on I/O, especially when entering and exiting the chroot. That slows down the build system a lot and it is time to change that.


One reliable alternative is to use cowbuilder (part of the cowdancer package). What is interesting is that cowbuilder relies on the existence of /usr/bin/cow-shell inside the chroot, so the cowdancer package needs to be installed both in the chroot and on the host machine.

But cowdancer was not part of sarge, so if you try to create a new sarge cow directory with cowbuilder, it will fail at the end saying it didn't find the cowdancer package. Also, trying to log in via cowbuilder --login will complain that it can't find /usr/bin/cow-shell.


The fix for this is simple, once you figured it out and saw that cowdancer is available in sarge as a backport:
cowbuilder --create --distribution sarge --basepath /var/cache/pbuilder/sarge.cow --othermirror 'deb http://www.backports.org/debian sarge-backports main'
You may prefer to make sure that the only thing pulled from backports is cowdancer and use a more hands-on approach[1]. This also eliminates the need to correct the apt sources later to exclude backports[2].

Use the same command as before, but omit everything starting with --othermirror. Then install the package in the chroot by hand:
twix:/home/eddy# cowbuilder --create --distribution sarge --basepath /var/cache/pbuilder/sarge.cow
twix:/home/eddy# chroot /var/cache/pbuilder/sarge.cow
twix:/# wget http://ftp.de.debian.org/backports.org/pool/main/c/cowdancer/cowdancer_0.23~bpo.1_i386.deb
twix:/# dpkg -i cowdancer_0.23~bpo.1_i386.deb
twix:/# rm cowdancer_0.23~bpo.1_i386.deb
twix:/# exit
twix:/home/eddy# cowbuilder --update


Since after the bootstrap all .deb packages will still be in the apt cache inside the chroot, you might want to save some disk space:
twix:/home/eddy# cowbuilder --login --save-after-login
twix:/# aptitude clean
twix:/# exit

That's it!


If anything is broken somehow, missing or whatnot, cowbuilder --login --save-after-login is your friend.

[1] the initial bootstrap does not pull from backport anything else but cowdancer
[2] since you want to compile for plain sarge, not sarge+backports ;-)

Postfix - (almost) a satellite system

When installing postfix and configuring it to act as a "Satellite system", all mail, including messages to root, will go to the remote mail delivery system. Also, there might be local user accounts that don't exist at all on the remote MTA and whose mail should stay on the local machine; by default, this doesn't happen.

This is not good.

What is needed in such a case is virtual_alias_maps and a /etc/postfix/virtual file such as:

root root@localhost
root@fqdn.domain.name root@localhost
eddy eddy@localhost
eddy@fqdn.domain.name eddy@localhost
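For this map to be used, main.cf has to reference it; a minimal sketch of the relevant line:

# in /etc/postfix/main.cf
virtual_alias_maps = hash:/etc/postfix/virtual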

Don't forget to run postmap /etc/postfix/virtual after editing the file, and to reload postfix afterwards.


This tip is particularly useful on laptops.

Friday, 8 February 2008

forking privately in Debian

Joachim, the key is probably to implement patch support in srcinst and actually make something nice out of it, which I think has nice applications.

It might be nice to combine srcinst with debian-xcontrol to obtain also customizable dependencies (e.g. no arts support).

I guess you'll be the best person to take over srcinst since:
  1. John Goerzen doesn't want to / can't continue maintaining it
  2. it is written in Haskell and you know Haskell
  3. the source is available for grabs from http://darcs.complete.org/srcinst/

The basic idea is to have another command[1] wrap aptitude/apt-get/srcinst and make sure it does the proper job, depending on the local configuration.


[1] let's say, the debtoo command - srcinst should remain the tool for clear 'from source installs'