
Thursday, 4 July 2019

HOWTO: Rustup: Overriding the rustc compiler version just for some directory

If you need to use a specific version of the rustc compiler instead of the default, the rustup documentation tells you how to do that.


First install the desired version, e.g. nightly-2018-01-09

$ rustup install nightly-2018-01-09
info: syncing channel updates for 'nightly-2018-01-09-x86_64-pc-windows-msvc'
info: latest update on 2018-01-09, rust version 1.25.0-nightly (b5392f545 2018-01-08)
info: downloading component 'rustc'
info: downloading component 'rust-std'
info: downloading component 'cargo'
info: downloading component 'rust-docs'
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'

  nightly-2018-01-09-x86_64-pc-windows-msvc installed - rustc 1.25.0-nightly (b5392f545 2018-01-08)

info: checking for self-updates

Then override the default compiler with the desired one in the top directory of your choice:

$ rustup override set nightly-2018-01-09
info: using existing install for 'nightly-2018-01-09-x86_64-pc-windows-msvc'
info: override toolchain for 'C:\usr\src\rust\sbenitez-cs140e' set to 'nightly-2018-01-09-x86_64-pc-windows-msvc'

  nightly-2018-01-09-x86_64-pc-windows-msvc unchanged - rustc 1.25.0-nightly (b5392f545 2018-01-08)
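
To check later which toolchain is active in a directory, or to drop the override, rustup has dedicated subcommands:

$ rustup show
$ rustup override list
$ rustup override unset
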
That's it.

Saturday, 15 June 2019

How to generate a usable map file for Rust code - and related (f)rustrations

Intro


Cargo does not produce a .map file by default, and when you do get one, symbol mangling makes it barely usable. If you're searching for the TL;DR, read from "How to generate a map file" near the bottom of the article.

Motivation

As a person with experience in embedded programming I find it very useful to be able to look into the map file.

Scenarios where looking at the map file is important:
  • evaluate whether the code changes you made had the desired size impact, and no undesired ones; recently I saw a compiler optimize an array's zero-initialization for speed by placing long blocks of u8 arrays in the .rodata section
  • check if a particular symbol has landed in the appropriate memory section or region
  • make an initial evaluation of which functions/code could be changed to optimize either for code size or for more readability (if the size cost is acceptable)
  • check that particular symbols have the expected sizes and/or alignments

Rustrations 

Because these kinds of scenarios are quite frequent in my work and I am used to looking at the .map file, some "rustrations" I currently face are:
  1. No map file is generated by default via cargo, and information on how to generate one is sparse
  2. If generated, the symbols are mangled, and it seems each symbol ends up in a section of its own, making per-section (e.g. .rodata, .text, .bss, .data) or per-file analysis more difficult than it should be
  3. I haven't found a way to disable mangling globally without editing the Rust sources. I remember there is some tool to un-mangle the output map file, but I forgot its name, and I find the need to post-process suboptimal
  4. No default map file name or location; ideally it should be named after the crate or app, as specified in the .toml file.

How to generate a map file

Generating a map file for Linux (and possibly other OSes)

Unfortunately, not all architectures/targets use the same linker, and on some the preferred linker can change for various reasons.

Here is how I managed to generate a map file for an AMD64/X86_64 linux target where it seems the linker is GLD:

Create a .cargo/config file with the following content:

.cargo/config:
[build]
    rustflags = ["-Clink-args=-Wl,-Map=app.map"]

This should apply to all targets which use GLD as the linker, so I suspect it is not portable to Windows with the MSVC toolchain.

Generating a map file for thumbv7m with rust-lld


On baremetal targets such as Cortex M7 (thumbv7m), where you might want to use the LLVM-based rust-lld, more linker options might be necessary to prevent linking with the compiler-provided startup code or libraries, so the config would look something like this:
.cargo/config: 
[build]
target = "thumbv7m-none-eabi"
rustflags = ["-Clink-args=-Map=app.map"]
The thing I dislike about this is that the target is forced to thumbv7m-none-eabi, so unit tests or generic code which might run on the build machine would be harder to test.

Note: if using rustc directly, just pass the extra options on the command line.
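
For example, for the GLD case above (a sketch; main.rs is a hypothetical source file):

$ rustc -C link-args=-Wl,-Map=app.map main.rs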

Map file generation with some readable symbols

After the changes above are done, you'll get an app.map file with a predefined name (even if the crate is a lib). If anyone knows how to keep the crate name, or at least use lib.map for libs and app.map for apps when the original project name can't be used, please comment.

The problems with the generated map file are that:
  1. all symbol names are mangled, so you can't easily connect them back to the code; the alternative is to force the compiler not to mangle, by adding #[no_mangle] before the interesting symbols.
  2. each symbol seems to be put in its own subsection (e.g. an initialized array gets a subsection of its own under .data).

Dealing with mangling

For problem 1, the fix is to add in the source #[no_mangle] to symbols or functions, like this:

#[no_mangle]
pub fn sing(start: i32, end: i32) -> String {
    // code body follows
}

Dealing with mangling globally

I wasn't able to find a way to convince cargo to apply no_mangle to the entire project, so if you know how to, please comment. I was thinking that using #![no_mangle] to apply the attribute globally in a file would work, but it doesn't seem to work as expected: the subsection still contains the mangled name, while the symbol seems to be "namespaced":

Here is a section from the map file of the #![no_mangle] (global) version:
.text._ZN9beer_song5verse17h0d94ba819eb8952aE
                0x000000000004fa00      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004fa00                beer_song::verse
 
When the #[no_mangle] attribute is attached directly to the function, the subsection is not mangled and the symbol seems to be global:

.text.verse    0x000000000004f9c0      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004f9c0                verse
I would prefer to have a global cargo option to switch this for the entire project, so code changes would not be needed; comments welcome.

Each symbol in its section

The second issue is quite annoying. The fact that each symbol is in its own section can be useful to control every symbol's placement via the linker script, but I guess the fix is a custom linker script that redirects, say, all constant "subsections" into the .rodata section.

I haven't tried this, but it should work.
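
For reference, here is a minimal, untested sketch of such a linker script fragment; it assumes GNU ld syntax and the default per-symbol subsection naming (e.g. .rodata.<symbol>):

SECTIONS
{
    /* collect the per-symbol subsections back into the classic sections */
    .text   : { *(.text .text.*) }
    .rodata : { *(.rodata .rodata.*) }
    .data   : { *(.data .data.*) }
    .bss    : { *(.bss .bss.*) }
}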

Friday, 26 January 2018

Detecting binary files in the history of a git repository

Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding the storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been, and still is, in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's just say we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure no version of the stored objects is ever lost, Linus designed git in such a way that, knowing the exact hash of the tip/head of your git branch, you are guaranteed the whole history of that branch hasn't changed, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed by even one bit, all commit hashes and the history after that change will change as well. This is what people refer to when they say they rewrite the (git) history, most often in the context of a rebase.

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?

Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks, and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'
Then we can use the 'file' utility to keep only the non-text files:
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'
And if there are any such files, then the current git commit is one that needs our attention; otherwise, we're fine.
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Of course, we assume here that the work tree is clean.

Checking all commits in a branch

Since we want to make this an efficient process, we only care whether the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.
Making a new branch for such experiments is also a good idea to avoid losing history, in case we make some stupid mistake during the experiment.

Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:
git checkout -b test_bins
Git has many commands that facilitate automation, and in my case I basically want to run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
then (ab)use 'git rebase' to execute it for us on all commits:
git rebase --exec="sh ../check_file_text.sh" -i $startcommit
After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will look at all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'

Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).
Enjoy!

Thursday, 11 January 2018

Suppressing color output of the Google Repo tool

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides the fact that it has a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.
To avoid repeatedly digging through the code for this, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason auto does not do the right thing (tm) on Windows and garbage is shown with auto.

Thursday, 10 December 2015

HOWTO: Setting and inserting/using MS Word 2013 document properties in the body of the document

I wrote this so I won't forget it and for others to find, if confronted with the same issue.

I hate Microsoft Office in all its incarnations, but I have to use it at work for various stuff. One of those is maintaining some technical documentation. We now use Office 365 and Office 2013.

Since MS Office Word 2013 is not a technical documentation program, some of its support for this is clunky. For things such as version numbers or other strings that might repeat throughout the document, (advanced) document properties are the way to go.

To set them, select File > Info > Properties > Advanced Properties > Custom, then fill in 'Name:', 'Type:' and 'Value:', then press Add, then OK.

Once the properties are set, each one can be inserted in the document by selecting its name in the 'Property:' list from the menu: INSERT > Quick Parts > Field... > Categories: Document Information > DocProperty.

After updating the value of any property (from the Advanced Properties dialog), to update all the places where the properties are used in the document, press Ctrl+A, then right click > Update Field > Update entire table > OK.

And, yes, 'Update entire table' will update the values, although its name is stupid.

Saturday, 23 May 2015

HOWTO: No SSH logins SFTP only chrooted server configuration with OpenSSH

If you are in a situation where you want to set up an SFTP server in a more secure way, don't want to expose anything from the server via SFTP, and do not want to enable SSH login on the account allowed to sftp, you might find the information below useful.

What do we want to achieve:
  • SFTP server
  • only a specified account is allowed to connect to SFTP
  • nothing outside the SFTP directory is exposed
  • no SSH login is allowed
  • any extra security measures are welcome
To obtain all of the above, we will create a dedicated account which will be chroot-ed, and whose home will be stored on a removable/not always mounted drive (accessing SFTP will not work when the drive is not mounted).

Mount the removable drive which will hold the SFTP area (you might need to add some entry in fstab). 

Create the account to be used for SFTP access (on a Debian system this will do the trick):
# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp

This will create the sftp account with login disabled and /usr/sbin/nologin as its shell, and will create the home directory for this user.

Unfortunately, the default ownership of this user's home directory is incompatible with chroot-ing in SFTP (which prevents access to other files on the server). A message like the one below will be generated in this kind of case:
$ sftp -v sftp@localhost
[..]
sftp@localhost's password:
debug1: Authentication succeeded (password).
Authenticated to localhost ([::1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer
Also /var/log/auth.log will contain something like this:
fatal: bad ownership or modes for chroot directory "/media/Store/sftp"

The default permissions are visible using the 'namei -l' command on the sftp home directory:
# namei -l /media/Store/sftp
f: /media/Store/sftp
drwxr-xr-x root root    /
drwxr-xr-x root root    media
drwxr-xr-x root root    Store
drwxr-xr-x sftp nogroup sftp
We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:
# chown root:root /media/Store/sftp
# mkdir /media/Store/sftp/upload
# chown sftp /media/Store/sftp/upload

We isolate the sftp users from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:
# addgroup sftpusers
# adduser sftp sftpusers
Set a password for the sftp user so password authentication works:
# passwd sftp
Putting all the pieces together, we restrict access to the sftp user only, allowing it access via password authentication to SFTP, but not SSH (and disallowing tunneling, forwarding and empty passwords).

Here are the changes done in /etc/ssh/sshd_config:
PermitEmptyPasswords no
PasswordAuthentication yes
AllowUsers sftp
Subsystem sftp internal-sftp
Match Group sftpusers
        ChrootDirectory %h
        ForceCommand internal-sftp
        X11Forwarding no
        AllowTcpForwarding no
        PermitTunnel no
Reload the sshd configuration (I'm using systemd):
# systemctl reload ssh.service
Check sftp user can't login via SSH:
$ ssh sftp@localhost
sftp@localhost's password:
This service allows sftp connections only.
Connection to localhost closed.
But SFTP is working and is restricted to the SFTP area:
$ sftp sftp@localhost
sftp@localhost's password:
Connected to localhost.
sftp> ls
upload 
sftp> pwd
Remote working directory: /
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /netbsd-nfs.bin
remote open("/netbsd-nfs.bin"): Permission denied
sftp> cd upload
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin
netbsd-nfs.bin                                                              100% 3111KB   3.0MB/s   00:00
Now your system is ready to accept sftp connections; things can be uploaded into the upload directory, and whenever the external drive is unmounted, SFTP will NOT work.

Note: since we added 'AllowUsers sftp', you can verify that no other local user can log in via SSH. If you don't want to restrict access to the sftp user only, you can whitelist other users by adding them to the AllowUsers directive, or drop it entirely so all local users can SSH into the system.

Monday, 6 April 2015

HOWTO: Dnsmasq server for network booting using TFTP and DHCP

Dnsmasq is a very lightweight server that, besides the expected DNS caching functionality, also offers DHCP and TFTP functionality in a single binary.

This makes it very useful if one needs to network boot a system, since the TFTP and DHCP parts of the setup are done easily, and you only need to add NFS for a complete network boot.
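
For the NFS part, a single line in the server's /etc/exports can be enough. Here is a sketch matching the root-path option used below; the subnet and export options are assumptions to adjust to your setup:

/export/netbsd-nslu2/root 192.168.77.0/24(rw,no_root_squash,no_subtree_check)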

Add to that one extra nice thing dnsmasq has: it can mark specific hosts, addresses or ranges with internal markers, then use those markers as symbolic names to apply settings to classes of devices.

In the configuration snippet below, there is a rule I set up to make sure the 'netbsd' label is applied to any system connecting through specific ethernet interfaces (one is the interface of the system, the other is a USB NIC I use from time to time):
#You will need a range for static IPs in the main file
dhcp-range=192.168.77.250,192.168.77.254,static

# give the name 'kinder' to any machine connecting through the given ethernet nics and apply 'netbsd' label
dhcp-host=00:1a:70:99:60:BB,00:06:4F:0D:B1:95, kinder, 192.168.77.251, set:netbsd

# Machines tagged 'netbsd' shall use the given NFS root path
dhcp-option=tag:netbsd, option:root-path,/export/netbsd-nslu2/root
# Enable dnsmasq's built-in TFTP server
enable-tftp

# Set the root directory for files available via TFTP.
tftp-root=/srv/tftp
Saving this configuration file as /etc/dnsmasq.d/kinder-netboot will enable it to be used by dnsmasq if this line is present in /etc/dnsmasq.conf:
conf-dir=/etc/dnsmasq.d
Commenting it out will disable the netbsd part easily.

Sunday, 29 March 2015

HOWTO: Disassemble a big endian Arm raw memory dump with objdump

This is trivial and very useful for embedded code dumps, but in case somebody (including future me) needs this, here it goes:
arm-none-eabi-objdump -D -b binary -m arm -EB dump.bin | less
The options mean:
  • -D - disassemble
  • -b binary - input file is a raw file
  • -m arm - arm architecture
  • -EB - big endian
By default, endianness is assumed to be little endian, or at least that's what happened with my toolchain.
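
For completeness, a little endian dump would be disassembled by just swapping -EB for -EL:

arm-none-eabi-objdump -D -b binary -m arm -EL dump.bin | less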

Sunday, 1 September 2013

HOWTO add a shell script wrapper and preserve quoting for parameters

If you ever find yourself in the situation where you have to add a shell script wrapper over a command, but the parameters' quoting gets lost and you end up with the wrong parameters in the wrapped command/tool, you might want to read this post.

On my system I have some command line tools which are Windows-only and, in order to easily use the same build system on my Linux machine as on Windows, I added a wrapper script which invokes wine on the commands, and made symlinks to the wrapper named after the tools, but without the '.exe' suffix.
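
As an illustration, the setup looks something like this (hypothetical tool name; wine-wrapper is the script shown below):

$ ls
sometool.exe  wine-wrapper
$ ln -s wine-wrapper sometool
$ ./sometool --some --args    # runs: wine ./sometool.exe --some --args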

Of course, I wanted to properly pass the parameters through the wrapper to the tools, so I wrote (note the quoted "$@"):
#!/bin/sh
wine $0.exe "$@"
So the answer is: use "$@", quoted as in the code above, and the parameters will be passed correctly.




Update: stbuehler suggested using exec to replace the shell process with wine, with this construct:
Use:
#!/bin/sh
exec wine $0.exe "$@"

Thanks for the suggestion.

Wednesday, 24 July 2013

HOWTO: git - change branch without touching working copy (at all)

Did you ever have the need, in a git repository, to change to another branch without altering the working copy AT ALL, and wondered how that's done?

Usual use cases might be when you made some changes to the working copy thinking you were on another branch, or when you double-track in git a directory which is also tracked by another VCS (e.g. ClearCase).

What you need, in fact, is to update the index and not touch the working copy. The command that does that is

git read-tree otherbranch
If you also need to commit the state of your working tree to otherbranch, you also need to tell git to associate the current HEAD with the branch you just switched to:
git symbolic-ref HEAD refs/heads/otherbranch
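
Put together, the whole dance is just this (a quick sketch):

$ git read-tree otherbranch
$ git symbolic-ref HEAD refs/heads/otherbranch
$ git status    # working copy untouched; changes now show against otherbranch
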
I use this approach at my work place* to develop/experiment with possible code improvements on my machine before considering the merge into the official code.

* The preferred VCS is (Base) ClearCase, and I keep a git repository over the relevant part of the project in the ClearCase Dynamic View. For synchronisations, the files in the working copy are updated by ClearCase and I have to resync my git branch (clearcaseint) with the latest official code from time to time, so I can pull the clearcaseint branch into my local disk git repository and merge it with my experimental changes in my git feature branches.

If people are curious about how I work with ClearCase and git, I can expand on this in another post.

Friday, 25 January 2013

(Serial) console flooded with kernel messages?

(If you want to ignore the explanations and see how to stop the Linux kernel from flooding the console with low importance messages, go straight to the bottom of the article; it's the small bit at the end with a larger font.)

After connecting to the serial console on my Linksys WRT160NL router I was faced with the problem that the console was flooded with all sorts of messages such as:
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.U DST=178.156.183.255 PROTO=UDP SPT=137 DPT=137 LEN=58
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
ACCEPT IN=br0 OUT=eth1 SRC=a.b.c.d DST=69.171.246.16 PROTO=TCP SPT=3651 DPT=443
DROP IN=eth1 OUT= SRC=X.Y.Z.U DST=178.156.183.255 PROTO=UDP SPT=137 DPT=137 LEN=58
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=178.156.183.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=178.156.177.142 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=153
DROP IN=eth1 OUT= SRC=178.156.177.142 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=153

The serial console was working, but it was impossible to do anything practical in these conditions. I tried searching the net for 'linux stop console flooding' and similar terms, but didn't get far, except for finding out that the problem was the loglevel.

Here is the explanation of what this means (quote from Documentation/kernel-parameters.txt):

        loglevel=       All Kernel Messages with a loglevel smaller than the
                        console loglevel will be printed to the console. It can
                        also be changed with klogd or other programs. The
                        loglevels are defined as follows:

                        0 (KERN_EMERG)          system is unusable
                        1 (KERN_ALERT)          action must be taken immediately
                        2 (KERN_CRIT)           critical conditions
                        3 (KERN_ERR)            error conditions
                        4 (KERN_WARNING)        warning conditions
                        5 (KERN_NOTICE)         normal but significant condition
                        6 (KERN_INFO)           informational
                        7 (KERN_DEBUG)          debug-level messages


This was enough to send me grepping for loglevel through the Documentation directory of my local kernel git tree. That brought me to this interesting bit from Documentation/sysctl/kernel.txt:
==============================================================

printk:

The four values in printk denote: console_loglevel,
default_message_loglevel, minimum_console_loglevel and
default_console_loglevel respectively.

These values influence printk() behavior when printing or
logging error messages. See 'man 2 syslog' for more info on
the different loglevels.

- console_loglevel: messages with a higher priority than
  this will be printed to the console
- default_message_loglevel: messages without an explicit priority
  will be printed with this priority
- minimum_console_loglevel: minimum (highest) value to which
  console_loglevel can be set
- default_console_loglevel: default value for console_loglevel

==============================================================

So I ran 'cat /proc/sys/kernel/printk' and got (I managed to read it through the flood of messages from the firewall):

7       4       1       7
According to the explanations above, that meant that console_loglevel was too permissive (7, so everything up to KERN_INFO was printed), so to fix it I ran:
echo '2 4 1 7' > /proc/sys/kernel/printk
And, behold, the serial console was usable.
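
As a side note, the console loglevel can also be set with dmesg, assuming your dmesg build supports the -n option (the util-linux and busybox versions do):

# dmesg -n 2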

Monday, 15 October 2012

HOWTO: sudo + cowbuilder (+git-buildpackage)

In case you tried to use git-buildpackage and wanted to use cowbuilder as a builder, you might have run into the error

sudo: sorry, you are not allowed to preserve the environment

This is due to a change in the sudo default configuration in version 1.7.4p4-2 (I know, it has been true since 2010) which doesn't allow the execution of commands via sudo with the parameter '-E', which means 'setenv'.

The news item even explicitly states that pbuilder can be affected by this, because it wants to carry the HOME environment variable over to the pbuilder environment, and suggests using

Defaults env_keep += HOME

But adding such a line to your /etc/sudoers.d/01_pbuilders file (/etc/sudoers is recommended to be touched only by the package) will do the same for all commands and users run via sudo, which is, according to my preferences, too permissive.

The irony is that running git-buildpackage --git-pbuilder will invoke sudo -E cowbuilder, so the suggested env_keep fix will not work: for -E to be allowed, setenv needs to be set in sudoers. Doing that for all commands would defeat the purpose of env_reset, but we can do a better job if we allow this kind of change only for the cowbuilder and pbuilder commands. You do have the explicitly allowed commands stated individually, don't you?

On my system I have made a group especially for people allowed to do packaging work; the group is called 'pack'. The only account in that group is my own.

Also, I have defined a command alias named PBUILDERS which looks something like this:

Cmnd_Alias PBUILDERS = /usr/sbin/pbuilder, /usr/sbin/cowbuilder

Running PBUILDERS is already restricted to the pack group. Here is an example that requires password on run:

%pack ALL=PBUILDERS

So all that needs to be done is to allow setenv for the PBUILDERS commands. Reading through the sudoers manpage, and after some trial and error (use visudo for editing sudoers files: visudo -f /etc/sudoers.d/01_pbuilders), I found out that the symbol that distinguishes between commands, users and groups in a 'Defaults' line needs to be right next to the 'Defaults' word. For commands the sign is an exclamation mark '!' (for user lists it's ':'), so since we want to link the exception to the command, not the user list, we'll use 'Defaults!':

Defaults! PBUILDERS setenv

Assuming you also have a PBUILDERS command alias, this is all you need to be able to use git-buildpackage in conjunction with cowbuilder and sudo.
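
Putting the three pieces together, the complete /etc/sudoers.d/01_pbuilders would contain (edit it with visudo -f /etc/sudoers.d/01_pbuilders):

Cmnd_Alias PBUILDERS = /usr/sbin/pbuilder, /usr/sbin/cowbuilder
%pack ALL=PBUILDERS
Defaults! PBUILDERS setenv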

If there is a non-intrusive way to prevent sudo from using -E when invoking cowbuilder, please add a comment, I would be interested to know it.

Monday, 8 October 2012

HOWTO: fix git-buildpackage signing with the wrong signature

If you have two or more gpg keys available for the same identity (e.g. 'John Doe <john@doe.com>') that you might use to add entries to package changelogs, you might end up in situations where git-buildpackage or another similar tool wants to sign packages with the wrong key.

It seems debsign (the tool that actually does the signing) just picks up the first key that is still valid (I also have a revoked key) and matches the used identity.

There are many ways to fix this, but the one that will work for most cases is to run this command:

echo 'DEBSIGN_KEYID=0x0123ABCD' >> ~/.devscripts

Of course, you should replace 0x0123ABCD with the keyid which you prefer.
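
If you don't remember the keyid, gpg can list the candidate secret keys for you:

$ gpg --list-secret-keys --keyid-format 0xshort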

This will create a ~/.devscripts file (if it didn't exist) which will contain the DEBSIGN_KEYID variable with the desired value. This file is sourced by debsign before doing its actual work, so it will do the right thing when run.

There are also git-buildpackage specific fixes, like defining builder in the [DEFAULT] section of ~/.gbp.conf and passing '-k0x0123ABCD' to debuild, something like:

[DEFAULT]
builder = debuild -i -I -k0x0123ABCD
...

But that probably gets ugly if you use a custom builder.


P.S.: I have just deleted my old key 0xDD1F1F9F since I won't be using it anymore. I don't remember where I put the revocation certificate, but I'll revoke the key once, and if, I find it. Otherwise, it will expire in July next year.

From now on, I am going to use only the key 0xE3E083A1, to which I added a photo and some newer identities, and which I updated with a few signatures I got during DebConf 9.

Saturday, 25 August 2012

HOWTO: Things to remember about cowbuilder

Here are a few things to remember about cowbuilder:

If you run cowbuilder through sudo, and you want to build a source package whose result should be available to the user who initiated the build, then

  • you should have "BUILDRESULTUID=the_user's_id" in ~/.pbuilderrc, and
  • you might want to invoke cowbuilder with

'sudo cowbuilder --build the_pack_to_build.dsc --buildresult destination_dir_for_build_results'

If you want to log into a chroot environment in which you'd like to see part of your directory alongside the unpacked source tree of your package, then invoke cowbuilder with

sudo cowbuilder --login --bindmounts /path/to/the/dir/you/want

Your base.cow directory can be updated/changed manually with

sudo cowbuilder --login --save-after-login

You can update the base.cow directory with

sudo cowbuilder --update

 That's about it.

Also, a noteworthy tip: ccache might not work correctly in the cowbuilder chroot.

Sunday, 22 April 2012

xorg-macros in Debian

If you are looking for xorg-macros in Debian, you should install xutils-dev.

Sunday, 4 March 2012

HOWTO: Fix: Baobab opens directories in Totem/VLC (and some Xfce4 related things)

If you ever used filelight or baobab, you probably know how useful they are. If you didn't, then you are missing an easy way to spot (and fix) where your disk space is wasted.

With my recent attempt to upgrade to GNOME 3, which, because of its innate property of being useless and counter-productive, actually made me use Xfce4 with a mix of GNOME applications (since Xfce lacks a few functionalities here and there), I ran into all sorts of problems.

As a side note, Xfce4 is quite decent, but if you like some icons on your panels to be left aligned and some right aligned, you should know that you can add a Separator item to the panel, right click on it -> Properties, and tick the Expand* check box. If you also set the Transparent style, it will look nice, too.

Back to the topic. With my mix of Xfce and GNOME apps, I configured my top panel to contain a Free Space Checker for my /home file system, and today it alerted me that I was low on disk space, so I started baobab to check what I could clean up.

When I found a possible suspect, I wanted to open the directory with a file browser but, instead, Totem started up and tried to queue all the files in the offending directory. The problem is that, one way or another, Totem (or VLC) was configured to be the default handler for directories instead of the file manager.

The solution is simple: open the file ~/.local/share/applications/mimeapps.list with an editor, search for the line starting with inode/directory=, and you'll see something like:

inode/directory=nautilus-folder-handler.desktop;baobab-usercreated.desktop;vlc-usercustom.desktop;


Remove the offending part, vlc-usercustom.desktop;, save the file and try again to open that directory from baobab. If you are double-lucky :P and now it opens with Totem, you will have to remove a reference to a "totem-usercustom.desktop;" or something of that sort. Now, on my system, that line looks like this:

inode/directory=nautilus-folder-handler.desktop;baobab-usercreated.desktop;


And now it works as expected**

* I suppose it's called like that in English, I have my desktop in Romanian
** Except that I would like it to start my desktop preferred file manager, not Nautilus, but that's another issue.

Thursday, 16 February 2012

HOWTO: Git - reauthor/fix author and committer email and author name after a git cvsimport

You might find at some moment that your git repository imported from CVS does not contain all the correct names and email addresses of the commits which were once in CVS but are now part of your project history in your git repo. Or you might do a cvsimport which missed a few authors.

Let's suppose you first import the cvs repo into git, but then you realise you missed some authors.

Before being able to do a git cvsimport, you need a checkout of the module or cvs subdir that you want to turn into its own git repo.

For ease of use I defined CVSCMD as
cvs -z9 -d :pserver:my_cvs_id@cvs.server.com:/root_dir
You will need to replace the placeholder items according to your situation; more exactly, you need to define 'my_cvs_id', 'cvs.server.com' and 'root_dir'. If your access method to the server is not pserver, you should change that accordingly. This information should be available from your project admin or pages.


Check out the desired module, or even a subdir of a module, then run git cvsimport:

$CVSCMD checkout -d localdirname MODULE/path/to/subdir

git cvsimport -A ../authors -m -z 600 -C ../new-git-repo -R
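# options used: -A = authors map file, -m = detect merges, -z 600 = commit fuzz window in seconds, -C = target git repo, -R = record a cvs-revisions mapping file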

How to find out the commits which do need rewriting

The way to limit yourself only to the commits that had no cvs-to-git author and email information at git-cvsimport time is to use a filter like this:
git log -E --author='^[^@]*$' --pretty=format:%h
This tells git log to print only the abbreviated hashes (%h) of the commits that have NO '@' sign in the 'Author:' field, which happens when no cvs-user-id-to-git-author-and-email mapping was provided in the authors file at git cvsimport time.

We will use this command's output later to tell git filter-branch which commits need rewriting. *

But before that...

How do we find if our authors file is complete?

For this task we'll use a slightly modified form of the previous command and some shell script magic.
git log -E --author='^[^@]*$' --pretty=format:%an | sort -u > all-leftout-cvs-authors
And now in all-leftout-cvs-authors we'll have a sorted list of all the cvs ids which were not handled in the original git-cvsimport. In my case there are only 19 such ids:
$ wc -l all-leftout-cvs-authors
19 all-leftout-cvs-authors

Nice, that will be easy to fix. Now edit your all-leftout-cvs-authors file to add the relevant information in a format similar to this:
john = John van Code <john@code.temple.tld>
jimmy = Jimmy O'Document <jimmy@documenting.com>
In case you can't make a complete cvs-user-to-name-and-email map, you might want to use stubs of the following form, in order to be able to easily identify such commits later, if you prefer (or you could just leave them unaltered ;-):
cvsid=cvsid <cvsid@cvs.server.com>

How to actually do the filtering to fix history (using git-filter-branch and a script)

After this is done, we'll need just one more piece: the command to do the altering itself, which reads as follows (note that my final authors file is called new-authors and that I placed this in a script in order to be able to easily run it without trying to escape all spaces and such madness):

#!/bin/sh

[ "$authors_file" ] || export authors_file=$HOME/new-authors

#git filter-branch -f --remap-cvs --env-filter '
git filter-branch -f --env-filter '

get_name () {
grep "^$1=" "$authors_file" | sed "s/^.*=\(.*\)\ .*$/\1/"
}

get_email () {
grep "^$1=" "$authors_file" | sed "s/^.*\ <\(.*\)>$/\1/"
}

if grep -q "^$GIT_COMMITTER_NAME" "$authors_file" ; then
GIT_AUTHOR_NAME=$(get_name "$GIT_COMMITTER_NAME") &&
GIT_AUTHOR_EMAIL=$(get_email "$GIT_COMMITTER_NAME") &&
GIT_COMMITTER_NAME="$GIT_AUTHOR_NAME" &&
GIT_COMMITTER_EMAIL="$GIT_AUTHOR_EMAIL" &&
export GIT_AUTHOR_NAME GIT_AUTHOR_EMAIL &&
export GIT_COMMITTER_NAME GIT_COMMITTER_EMAIL
fi
' -- --all
You might wonder what's up with the commented git filter-branch line with the --remap-cvs option. This script will NOT work for you if you enable the --remap-cvs option while keeping the stock git-filter-branch script (/usr/lib/git-core/git-filter-branch) unpatched; once patched, that option will produce a file with the mappings from the old to the new commit ids. If you want that function too, you'll want to apply this patch to git-filter-branch:

diff --git a/git-filter-branch b/git-filter-branch
old mode 100644
new mode 100755
index ae602e3..d1f7ef6
--- a/git-filter-branch
+++ b/git-filter-branch
@@ -149,6 +149,11 @@ do
prune_empty=t
continue
;;
+ --remap-cvs)
+ shift
+ remap_cvs=t
+ continue
+ ;;
-*)
;;
*)
@@ -368,6 +373,33 @@ while read commit parents; do
die "could not write rewritten commit"
done <../revs

+# Rewrite the cvs-revisions file, if requested and the file exists
+
+ORIG_CVS_REVS_FILE="${GIT_DIR}/cvs-revisions"
+if [ -f "$ORIG_CVS_REVS_FILE" ]; then
+ if [ "$remap_cvs" ]; then
+ printf "CVS remapping requested\n"
+
+ CVS_REVS_FILE="$tempdir/cvs-revisions"
+ cp "$ORIG_CVS_REVS_FILE" "$CVS_REVS_FILE"
+ printf "\nFound $ORIG_CVS_REVS_FILE; will copy and alter it as $CVS_REVS_FILE\n"
+ cvs_remap__commit_count=0
+ newcommits="$(ls ../map/ | wc -l)"
+ for commit in ../map/* ; do
+ cvs_remap__commit_count=$(($cvs_remap__commit_count+1))
+ printf "\rRemap CVS commit $commit ($cvs_remap__commit_count/$newcommits)"
+
+ oldsha1="$(basename $commit)"
+ read newsha1 < $commit
+ sed -i "s@$oldsha1\$@$newsha1@" "$CVS_REVS_FILE"
+ done
+ else
+ warn "\nNo CVS remapping requested, but cvs-revisions file found. All CVS mappings will be lost.\n"
+ fi
+elif [ "$remap_cvs" ]; then
+ warn "\nWARNING: CVS remap was ignored, since no original cvs-revisions file was found\n"
+fi
+
# If we are filtering for paths, as in the case of a subdirectory
# filter, it is possible that a specified head is not in the set of
# rewritten commits, because it was pruned by the revision walker.
@@ -491,6 +523,11 @@ if [ "$filter_tag_name" ]; then
done
fi

+if [ "$remap_cvs" -a -f "$CVS_REVS_FILE" ]; then
+ mv "$ORIG_CVS_REVS_FILE" "$ORIG_CVS_REVS_FILE.original"
+ cp "$CVS_REVS_FILE" "$ORIG_CVS_REVS_FILE"
+fi
+
cd ../..
rm -rf "$tempdir"


Then, after running this script (let's call it filter), you should have a brand new git repo with the appropriate authors and their emails set.


P.S.: I started writing this post some time ago but stopped just before the last part, the one with the filter script. I realise I might be missing something in the explanation, but if you run into problems, please comment so I can help you fix them.

P.P.S.: * I realised in the filter script at some point I wanted to do something like:
for R in $(git log -E --author='^[^@]*$' --pretty=format:%H | head -n 2) ; do
[the same git filter branch command above but ending in ...]
' $R
done
But I think I remember that $R didn't work on the whole history, only on that one revision, or some other weirdness of that sort. I know I ended up not filtering those revisions explicitly, but the entire history. I hope this helps.

Wednesday, 1 February 2012

HOWTO: Windows, nmake, cygwin and path type detection

If you are using Windows, have cygwin installed and need to test if a path is absolute or relative in nmake (I know, how often does that happen?), here is the magic bit of code that manages to do just that:


cygwin_path = c:\cygwin\bin

echo = $(cygwin_path)\echo.exe
grep = $(cygwin_path)\grep.exe

#testpath = ..\..\rel\test\path
#testpath = rel\test\path
#testpath = \abs\test\path
testpath = c:\abs\test\path

!if ([ $(echo) '$(testpath)' | $(grep) -q -E '^^(\w:)?\\\\' ] == 0)
type = abs
!else
type = rel
!endif

test:
@$(echo) "testpath = $(testpath)"
@$(echo) "type = $(type)"


Probably there are other solutions, but this is the first one I came up with. Another solution would be to use GNU make :-).

Friday, 20 January 2012

HOWTO: GIMP - create a text-with-halo effect

A small tutorial I made about an effect I used in the previous interviews (in English); for consistency, I had to recreate it even after Kino was no longer available in Debian Wheezy.



I'll probably upload more videos like this about cinelerra, pitivi, gimp, audacity and other software I use for the work I do for our „Sceptici în România” / „Skeptics in Romania” podcast (the podcast is in Romanian, but we are also preparing a project for the international audience).

And in case you are wondering, yes, this podcast is part of the reasons I wasn't able to do any work for Debian lately!

Monday, 9 January 2012

Another Windows tip - How to store cvspass login for CVSNT

Since I am currently working on a Windows machine at work, I am looking for ways to make this thing work in a sane way. The latest insane thing is that I wasn't able to log in to a CVS server at work from WinCVS (which uses CVSNT) with my regular credentials, while the cached password in Cygwin did work with the Cygwin CVS.

So the obvious fix was to copy the .cvspass file from cygwin to wherever CVSNT keeps its cvspass file. Well, it isn't that easy, since CVSNT keeps such passwords in the Windows registry. But since I had no previous logins with CVSNT, I didn't know what to put in the registry.

I found out really easily that the key is under HKEY_CURRENT_USER\Software\cvsnt\cvspass, but how do I save it? Looking at my cygwin .cvspass I saw the line had the format:

/1 :pserver:username@our.cvs.server.net:/u S()meh4s'h00

I finally found out that I have to create a string value with the name ":pserver:username@our.cvs.server.net:/u" and the hash "S()meh4s'h00" as the value data, and plainly ignore the first field.
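
For future reference, the same thing expressed as an importable .reg file would look something like this (a sketch reusing the placeholder server and hash from above):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\cvsnt\cvspass]
":pserver:username@our.cvs.server.net:/u"="S()meh4s'h00"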


Stay tuned. The next article will be about what's common between Windows 7 and GNOME 3 / gnome-shell, since I upgraded my home laptop to wheezy (I really wanted to use pitivi 0.15) and my desktop at work to Windows 7.