Monday 28 October 2013

The stupidest trend in laptop design is...

... numpads on laptop keyboards.

Just because a very, very, very restricted segment of the population is into accounting or other jobs needing frequent numeric input, almost all laptop manufacturers feel the stupid urge to place a numpad on laptops with screens over 14''.

Reasons why a numpad on a laptop is a bad idea:
  • can lead to health issues: it forces the user into a bad position in front of the screen, which can affect the eyes and the spine; it's bad enough that many people have a bad posture in front of the computer as it is, no need to pump up scoliosis' position in the laundry list of modern-day health risks
The numpad forces eye focus and the user's hands way off center, and the touchpad looks as if it was thrown to the side by accident. Note how the designer tried to offset the misalignment forced by the numpad by "positioning" the touchpad closer to the center, but not in line with the space bar (and the keyboard).

  • ugly design: it simply looks ugly; why do you think Apple didn't jump on this stupid bandwagon? Because it's ugly design!
Without the numpad the positioning is almost perfect: the symmetry improves the design, and notice how the touchpad is also centered.



  • the numpad is useless for the vast majority of people, and those who need one already use it on a desktop keyboard or can buy a standalone numpad
  • it's more expensive to manufacture (more moving parts, more complex wiring and more expensive materials - plastic is cheaper than plastic+copper+rubber)
  • less resilient: more ways to fail, higher risk of getting liquids inside
  • bad mechanical design: wider keyboard means less distance from the edge to the keyboard, which can mean a more fragile case
  • makes the laptop heavier: the plastic or other material that covers the keys would be enough to cover the insides; the extra rubber, wiring, supports etc. add extra weight
  • it kills the opportunity to use the real estate for other useful things (or things which improve the overall design): speakers, volume keys, finger print readers, other special function keys, LEDs or other output devices
What's worse is that most resellers do not have filters for this particular mis-feature, so you can't easily exclude laptops corrupted by this horrendous idea.

So, if you're a laptop designer, please stop putting numpads on laptops.

It will make for better looking laptops and you'll have the opportunity to be the one stating the obvious: numpads are generally useless! Even on desktop keyboards!

Monday 2 September 2013

Integrating Beyond Compare with Semanticmerge

Note: This post will probably not be to the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I mention here. I am not going to detail where the open source alternatives are not up to the same level as the commercial tools; I'll leave that for the readers or for another article.



Semanticmerge is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and currently supports Java and C#. Just today, its creators started working on support for C.

Recently they added Debian packages, so I installed it on my system. For open source development, Codice Software, the creators of Semanticmerge, offers free licenses, so I decided to ask for one today and, although it is Sunday, I already received an answer and will get my license on Monday.

When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples use kdiff3 as the text based merge tool, which is nice, but I don't use kdiff3; I use Meld, another open source visual tool for merges and comparisons.


OTOH, Beyond Compare is a merge and compare tool made by Scooter Software which provides a very good text based 3-way merge with a 3 sources + 1 result pane, and which can compare both files and directories. Two of its killer features are the ability to split differences into important and non-important ones according to the syntax of the compared/merged files, and the ability to easily change or add to the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming or other trivial code-wide changes, which allows the developer to focus on the important changes/differences during merges or code reviews.

Syntax support for the usual file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing), and new file types with their syntaxes can be added via the GUI, either from scratch or based on existing rules.

I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for the people in our department.


Having these two tools separate is good, but having them integrated with each other would be even better, so I decided to see how it could be done. I installed Beyond Compare on my system, too, and looked through the examples.


The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called by the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' starting scripts invoked Semanticmerge:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java \
  -r=/tmp/semanticmergetoolresult.java \
  -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" \
  -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" \
  -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""
Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user's preferences with regards to the text based merge tool and exports the issue to the SCM, at the price of overcomplicating the command line. I already mentioned this issue in my license request mail and added it, together with my suggested fix, to their voting system for features to be implemented.

The upside was that by comparing the command lines of the kdiff3 invocations, the kdiff3 documentation and the Beyond Compare SCM integration information, I could deduce the command line necessary for Semanticmerge to use Beyond Compare as an external merge and diff tool.

The -edt, -emt and -e2mt options specify how the external diff tool, the external 3-way merge tool and the external 2-way merge tool are to be called. Once I understood that, I split the problem into its obvious parts: each invocation had to be mapped from kdiff3 options to Beyond Compare options, adding the occasional bell and whistle, if possible.

The parts to figure out, ordered by complexity, were:

  1. -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"

  2. -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""

  3. -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\""

Semanticmerge integrates with kdiff3 in diff mode via the -edt option. This was easy to map to Beyond Compare: I just replaced kdiff3 with bcompare:
-edt="bcompare \"#sourcefile\" \"#destinationfile\""
Integration for 2-way merges was also quite easy, the mapping to Beyond Compare was:
-e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\""
For the 3-way merge I was a little confused, because the Beyond Compare documentation and options are inconsistent between Windows and Linux. On Windows, for some of the SCMs, the options that set the titles of the panes are '/title1', '/title2', '/title3' and '/title4' (way too descriptive for my taste /sarcasm), while for others they are '/lefttitle', '/centertitle', '/righttitle' and '/outputtitle'; on Linux the options are of the more explicit kind, but with a '-' instead of a '/'.

The basic part was easy: order the parameters as 'source, destination, base, output' instead of kdiff3's 'base, source, destination, -o output'. Then I wanted to add the bells and whistles, since it really makes more sense for the developer to see something like 'Destination: [method] readOptions' instead of '/tmp/tmp4327687242.tmp', and that is exactly what Semanticmerge needs when merging methods, since on conflicts the various versions of the functions are placed in temporary files whose names don't mean anything.

So, after some digging into the examples from Beyond Compare and kdiff3 documentation, I ended up with:
-emt="bcompare  \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"

Sadly, I wasn't able to identify the symbolic name for the output, so I hard coded 'merge result'. If the Codice people would like to help me with this information (if it exists), I would be more than willing to update the information and make the necessary changes.

Then I added the bells and whistles for the -edt and -e2mt options, so I ended up with an even more complicated command line. The end result was this monstrosity:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java \
  -r=/tmp/semanticmergetoolresult.java \
  -edt="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'" \
  -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'" \
  -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
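Since nobody wants to retype that, the whole invocation can live in a small wrapper script. Here is a minimal sketch, with the repeated title options factored out into a variable (the paths are the ones from the sample; adjust -s/-b/-d/-r to whatever your SCM hands you):
#!/bin/sh
# call semanticmergetool with Beyond Compare as the external diff/merge tool
sm_dir=/tmp/sm-sample
titles="-lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"

semanticmergetool \
  -s="$sm_dir/src.java" \
  -b="$sm_dir/base.java" \
  -d="$sm_dir/dst.java" \
  -r=/tmp/semanticmergetoolresult.java \
  -edt="bcompare \"#sourcefile\" \"#destinationfile\" $titles" \
  -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" $titles -centertitle='#basesymbolic' -outputtitle='merge result'" \
  -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" $titles"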
So when I 3-way merge a function I get something like this (sorry for the high resolution; lower resolutions don't do justice to the tools):



I don't expect this post to remain relevant for too long because, after I sent my feedback to Codice, they were open to my suggestion of having persistent settings for the external tool integration, so in the future the command line could probably be as simple as:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java
And the integration could be done via the GUI, while the command line can become a way to override the defaults.

Sunday 1 September 2013

HOWTO add a shell script wrapper and preserve quoting for parameters

If you ever find yourself in the situation where you have to add a shell script wrapper over a command, but the parameters' quoting gets lost and you end up with the wrong parameters in the wrapped command/tool, you might want to read this post.

On my system I have some command line tools which are Windows-only and, in order to easily use the same build system on my Linux machine as on Windows, I added a wrapper script which invokes wine on the commands and made symlinks to the wrapper, named after the tools but without the '.exe' suffix.

Of course, I wanted to properly pass the parameters through the wrapper to the tools, so I wrote (note the quoting):
#!/bin/sh
wine "$0.exe" "$@"
So the answer is: use "$@", quoted exactly as in the code above, and the parameters will be passed correctly.
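For completeness, the symlink setup looks roughly like this (the tool names are made up; each symlink assumes the matching .exe sits next to it, since the wrapper runs wine $0.exe):
ln -s wrapper.sh ~/bin/sometool     # 'sometool ...' now runs 'wine ~/bin/sometool.exe ...'
ln -s wrapper.sh ~/bin/othertool
sometool -o "out dir/result" "input file with spaces"    # the quoting survives the wrapper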




Update: stbuehler suggested using exec, to replace the shell process with wine, with this construct:
#!/bin/sh
exec wine "$0.exe" "$@"

Thanks for the suggestion.

Wednesday 24 July 2013

HOWTO: git - change branch without touching working copy (at all)

Did you ever need, in a git repository, to change to another branch without altering the working copy AT ALL, and wonder how that's done?

Usual use cases might be when you made some changes to the working copy thinking you were on another branch, or when you double-track in git a directory which is also tracked by another VCS (e.g. ClearCase).

What you need, in fact, is to update the index and not touch the working copy. The command that does that is:

git read-tree otherbranch
If you also need to commit the state of your working tree to otherbranch, you also need to tell git to actually associate the current HEAD with the branch you just switched to:
git symbolic-ref HEAD refs/heads/otherbranch
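Putting it together, the whole switch is just (using the hypothetical branch name from above):
git read-tree otherbranch                       # update the index only; the working copy stays untouched
git symbolic-ref HEAD refs/heads/otherbranch    # make HEAD point at the new branch
git status                                      # changes are now reported relative to otherbranch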
I use this approach at my work place* to develop/experiment with possible code improvements on my machine before considering the merge into the official code.

* The preferred VCS is (Base) ClearCase, and I keep a git repository over the relevant part of the project in the ClearCase Dynamic View. For synchronisations, the files in the working copy are updated by ClearCase and I have to resync my git branch (clearcaseint) to the latest official code from time to time, so that I can pull the clearcaseint branch into the git repository on my local disk and merge it with my experimental changes in my git feature branches.

If people are curious about how I work with ClearCase and git, I can expand on this in another post.

Saturday 20 July 2013

(Not a) GNU Make quirk, or why logs should be provided

About two months ago I was writing about a quirk I found in GNU Make related to the $(patsubst) function.

I have just tried this on my Debian Wheezy laptop which has make 3.81, but I wasn't able to reproduce the issue with the version from Debian (3.81-8.2).

The makefile looks like this:
PATH := ../some/prefixCPU12suf/include
CPUINC := $(patsubst ../some/prefix%,%,$(PATH))
CPU := $(patsubst %/include,%,$(CPUINC))

default:
    @echo "PATH   = $(PATH)"
    @echo "CPUINC = $(CPUINC)"
    @echo "CPU    = $(CPU)"
And the result was correct:
0 eddy@heidi /tmp $ make
PATH   = ../some/prefixCPU12/include
CPUINC = CPU12/include
CPU    = CPU12
0 eddy@heidi /tmp $ make --version
GNU Make 3.81
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

This program built for x86_64-pc-linux-gnu
The worst part is that I knew I had tested this on 3.82, both on Cygwin and on Linux, and that it failed, but I wasn't able to remember how I did it. I started searching through the directory where I thought the test makefile could be, but I wasn't able to find it, until I remembered what I was trying to achieve.

From a path like ../some/prefixCPU12suf/include I wanted to use % to remove the parts 'some/prefix' and 'suf/include' because in the directory ../CPU12 there were some files that needed to be processed.

The actual issue is that GNU Make's '%' is not analogous to the shell's '*', which means code like the following does not work as I had assumed, because the 'pref' part is not used as an anchor:


PATH := ../some/prefCPU12suf/include
CPUINC := $(patsubst pref%,%,$(PATH))
CPU := $(patsubst %suf/include,%,$(CPUINC))

default:
    @echo "PATH   = $(PATH)"
    @echo "CPUINC = $(CPUINC)"
    @echo "CPU    = $(CPU)"
Which leads to these results, no matter the version:

0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ ./make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/prefCPU12suf/include
CPU    = ../some/prefCPU12
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ ./make --version
GNU Make 3.82
Built for x86_64-unknown-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/prefCPU12suf/include
CPU    = ../some/prefCPU12
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make --version
GNU Make 3.81
Copyright (C) 2006  Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.

This program built for x86_64-pc-linux-gnu
Not sure if this qualifies as a true bug, or if the behaviour I expected would just be a nice-to-have feature but, in any case, the behaviour is consistent, unlike my brain, which initially failed to identify the inconsistency in my code:

0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ grep patsubst /tmp/makefile
CPUINC := $(patsubst pref%,%,$(PATH))
CPU := $(patsubst %suf,%,$(CPUINC))
0 eddy@heidi ~/usr/src/make/make-profiler/make-3.82 $ make -f /tmp/makefile
PATH   = ../some/prefCPU12suf/include
CPUINC = ../some/prefCPU12suf/include
CPU    = ../some/prefCPU12suf/include
Note that this behaviour of patsubst is asymmetric to how subst works, as explained in the updated old post.

It took some extra effort to remember what the actual issue was, which shows why logs should be provided when reporting an issue, and why issues should be reported as soon as they are encountered: because human brains are faulty. (Yes, yours, too!)

Thursday 18 July 2013

HOWTO: git - remove those pesky remotes/origin/

Often people create branches to fix various bugs or implement various features, and publish those to a public repository to be pulled into the official repository (e.g. pull requests in the GitHub model).

The problem is that after the fix is merged the local repo still has the branch that tracks the origin (your public repository).

Removing the local branch is simple:

git branch -d feature_x

To remove the actual branch in the remote repository:

git push origin --delete feature_x

But still the remote-tracking branches remain, those pesky 'remotes/origin/feature_x' refs. Here is an example from one of my repos:

$ git branch -a
  activestate
  master
  next
* py.test
  sysplatformtest
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/appauthor-fallback
  remotes/origin/linux-fixes
  remotes/origin/master
  remotes/origin/next
  remotes/origin/py.test
  remotes/origin/root-directories
  remotes/origin/sysplatformtest
  remotes/origin/wrapper-defaults

All of those can be removed with this command:

git remote prune origin

And the output for my appdirs repository is:

Pruning origin
URL: git@github.com:eddyp/appdirs.git
 * [pruned] origin/appauthor-fallback
 * [pruned] origin/linux-fixes
 * [pruned] origin/next
 * [pruned] origin/root-directories
 * [pruned] origin/sysplatformtest
 * [pruned] origin/wrapper-defaults
Which finally removes those damned refs:

$ git branch -a
  activestate
* master
  py.test
  remotes/activestate/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/master
  remotes/origin/py.test
Great!

Update 2013-07-24: Other solutions have also been proposed in the comments section. I am quoting them here for better visibility.

Besides the solution I proposed, people commented about other alternatives for pruning remote-tracking branches: using git fetch --prune, or using git push's --prune option.
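For reference, the fetch variant is a one-liner:
git fetch --prune origin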

Brice provided a method to remove specific branches one by one, with the git branch -dr command:
git branch -dr origin/foo
What is ironic is that I have used this form in the past, and before writing the original article I was looking for it, but couldn't find it easily. I hope this article will help some other poor soul, or my future self.

Wednesday 29 May 2013

GNU Make quirk

(Updated 2013-07-20: Correctly describe the issue and provide examples)

Make has several useful built-in functions, among which $(patsubst pattern,replacer,string). The pattern can contain the character '%', a wildcard meaning any string, which can be referred to in the replacer parameter. Only the first occurrence of '%' is treated with this special meaning, just as the manual says.

What the manual doesn't say is that only the form %bla works, while the form bla% does NOT work as an anchor: GNU Make's '%' is not analogous to the shell's '*', so you can't expect to remove substrings from the middle of a string without knowing the exact string up to the part you want to remove.

This does not work if you want to obtain '../some/CPU12/include' in OTHERPATH:

PATH := ../some/prefCPU12/include
OTHERPATH := $(patsubst pref%,%,$(PATH))

default:
    @echo "OTHERPATH = $(OTHERPATH)
The result will be:
 OTHERPATH = ../some/prefCPU12/include
This behaviour is somewhat asymmetric to how subst behaves, and to how one would expect it to behave, considering '%' is explained as a glob-like pattern:

0 eddy@heidi /tmp $ cat makefile
PATH      := ../some/prefCPU12/include
OTHERPATH := $(patsubst pref%,%,$(PATH))
SPATH     := $(subst pref,,$(PATH))

default:
    @echo "PATH      = $(PATH)"
    @echo "OTHERPATH = $(OTHERPATH)"
    @echo "SPATH     = $(SPATH)"
0 eddy@heidi /tmp $ make
PATH      = ../some/prefCPU12/include
OTHERPATH = ../some/prefCPU12/include
SPATH     = ../some/CPU12/include
Not sure what the make developers would say about this, and I am not sure if having '%' work as a glob pattern, instead of invoking sed, would be better for performance, but I sure would like to have the option :-) .

Thursday 28 March 2013

The sorry state of the Android universe


Update: Corrected the code name: 4.2.2 is Jelly Bean, not Ice Cream Sandwich as I initially stated. I also described in the comments why the calendar application is also crap.

I recently bought a phone based on the Android platform (version 4.2.2 aka Jelly Bean). Before the purchase I had the wrong idea that this platform - Android - is the best thing since sliced bread. Let me tell you, that idea is so wrong, it's a shame anybody thinks or has ever thought that.

My previous phone was a Nokia E71 and, with its stock set of applications and in spite of its old and rusty Symbian OS, I still have a hard time even matching its basic functionality on the Android phone, even when using the most praised apps from the Play Store.

The dialer is crap, and the problems don't stop there:
  • there is no decent speed dialer, and the focus is on the apps instead of the phone functionality
  • the homescreen type-to-search-in-contacts functionality of the E71 is probably impossible, due to the retarded decision to forget you're using a phone
  • notifications, even important ones, are hidden, to the point that the ones needing attention can be missed (e.g. entering the PIN/confirmation for Bluetooth pairing must be searched for in the notification area)
  • when looking in the agenda there is no one-step way to edit a contact; you must jump to another application and do the edit there
  • treating SMS conversations as one-to-one instant messages works until you want to reply to multiple people at once

Volume for ringing, SMS notifications and the headset is controlled all together, so if during the last conversation you turned down the volume because it was too loud, you can miss a call because your phone will ring at a lower volume.

And these are only the broken things in the basic functionality (for a post-2008 cell phone).

Android phones are smart phones, so more advanced features are required: playing audio and video files, GPS related applications, podcasting support, email handling and web browsing are among the features that can be expected on such a phone.

Only web related functionality and simple media playing are at a reasonable level compared with my old E71, maybe due to Google being web oriented and the new phones having better screens than the old Nokia.

But I am an avid podcast listener, so I've been searching for an application that can match Nokia's stock Podcasts application, and I've come to the conclusion that Android users who love podcasts either have to wait for Nokia to develop for Android (which seems unlikely) or find one or two talented developers to create a decent application.

I don't understand how such an application can lack a playlist of the downloaded episodes, a 'download all new episodes' action, or a 'mark all/selected as listened' action. I have found applications that have at least one of these issues and still average over 4 out of 5 stars in Google Play. Poor users!

In the light of these issues, I'm having much difficulty coming up with an excuse for Nokia losing its position as a market leader, but it seems technical superiority is neither necessary nor sufficient to dominate the mobile phone market. Sadly, that says a lot about our species, and the words aren't nice.

Friday 22 March 2013

Finally the XDG fixes are in appdirs

Just a few hours ago my XDG fixes landed in the master branch of ActiveState appdirs.

https://github.com/ActiveState/appdirs/pull/17

They will be part of version 1.3.0, but the maintainers first want to create some tests, since the test framework was rewritten just recently.

Tuesday 12 March 2013

Herbalife, a detailed analysis

You might remember that a while ago I mentioned I got involved in skepticism to the point I even co-host a podcast, Skeptics in Romania (in Romanian).

Among the subjects we tackle are claims about various miracle fruits, shady dietary advice, various nonscientific health products, and even scams. We try to teach our listeners ways to identify such dangerous/fake products themselves, how to inform themselves about the claims they might encounter, and what questions they should ask before considering buying (into) such things.

One sensitive subject is the so called multilevel marketing, especially for people involved in such businesses.

This is a sensitive subject because many of these schemes are actually pyramid schemes, also known as Ponzi schemes. These are illegal in many countries, since they are, in fact, scams designed to lure people with promises of high profits for little work.

One such pyramidal system... well, you can judge for yourselves (note that the presentation contains many slides, but it's really captivating):

http://www.businessinsider.com/bill-ackmans-herbalife-presentation-2012-12?op=1j

Monday 4 March 2013

So, I did the responsible thing and fixed appdirs

In my previous post I expressed my frustration at the way a perfectly nice and fine idea, a portable way to get the standard configuration and data directory/files, was broken for Linux and BSD, because the authors of appdirs thought the XDG standard was "subject to some interpretation".

Although I said I decided not to use appdirs, I realised that wouldn't help anyone, so I fixed the code.

During the coding phase I discovered that the authors of appdirs had broken the XDG standard even more, this time ignoring XDG_DATA_DIRS while talking about XDG_CONFIG_DIRS. When I found this I became convinced the *nix part of the implementation was subject to continuous irony, since the comment next to this newly found breakage said "Perhaps should *use* that envvar", referring to XDG_CONFIG_DIRS, while the code below it hard coded:

/etc/xdg/<appname>

Sweet, isn't it?

If you want a fixed version, you can grab it from my repository, on the linux-fixes branch:

https://github.com/eddyp/appdirs/tree/linux-fixes

Tuesday 19 February 2013

Appdirs from pypi is retarded

The XDG standard says user-specific data files should be written in $XDG_DATA_HOME, which defaults to $HOME/.local/share.

Appdirs explicitly breaks the XDG_DATA_HOME specification and returns a path based on $XDG_CONFIG_HOME - ~/.config/<appname> instead of ~/.local/share/<appname> - for user_data_dir. Although the spec is as clear as day, the authors of appdirs say this is 'subject to some interpretation'.

No, it's not, read the damn spec!

http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
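For reference, the lookup the spec mandates is trivial; here it is rendered in shell ('myapp' stands in for an application name):
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"        # user-specific data files
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"         # user-specific config files
data_dirs="${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"    # system-wide data search path
config_dirs="${XDG_CONFIG_DIRS:-/etc/xdg}"                   # system-wide config search path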



The irony is that initially they got it right, but decided to break the standard some two years ago.

Thank you appdirs authors for ignoring a perfectly clear standard just because you *think* 'in practice, Linux apps tend to store their data in ~/.config/<appname>'.


Dear appdirs authors, if some people break the law it doesn't mean the law is broken. And if *you*think* some apps do not conform to a standard, it does not mean *at*all* that you shouldn't conform, either. More than that, if some applications decide on their own to treat as config data things you'd classify as 'application data', that's their business, and it's not your place to decide to break the standard for everybody. Standards exist for a reason.

BTW, your software is worthless to me now and I decided NOT to use it because of this idiotic decision of yours.

Friday 25 January 2013

(Serial) console flooded with kernel messages?

(If you want to skip the explanations and see how to stop the Linux kernel from flooding the console with low importance messages, go straight to the small bit at the bottom of the article.)

After connecting to the serial console on my Linksys WRT160NL router I was faced with the problem that the console was flooded with all sorts of messages such as:
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.U DST=178.156.183.255 PROTO=UDP SPT=137 DPT=137 LEN=58
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
ACCEPT IN=br0 OUT=eth1 SRC=a.b.c.d DST=69.171.246.16 PROTO=TCP SPT=3651 DPT=443
DROP IN=eth1 OUT= SRC=X.Y.Z.U DST=178.156.183.255 PROTO=UDP SPT=137 DPT=137 LEN=58
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=178.156.183.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=178.156.183.146 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=120
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=X.Y.Z.W DST=255.255.255.255 PROTO=UDP SPT=58488 DPT=2008 LEN=26
DROP IN=eth1 OUT= SRC=178.156.177.142 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=153
DROP IN=eth1 OUT= SRC=178.156.177.142 DST=255.255.255.255 PROTO=UDP SPT=17500 DPT=17500 LEN=153

The serial console was working, but it was impossible to do anything practical in these conditions. I tried searching the net for 'linux stop console flooding' and similar terms, but didn't get too far, except for learning that the problem was the loglevel.

Here is the explanation of what this means (quote from Documentation/kernel-parameters.txt):

        loglevel=       All Kernel Messages with a loglevel smaller than the
                        console loglevel will be printed to the console. It can
                        also be changed with klogd or other programs. The
                        loglevels are defined as follows:

                        0 (KERN_EMERG)          system is unusable
                        1 (KERN_ALERT)          action must be taken immediately
                        2 (KERN_CRIT)           critical conditions
                        3 (KERN_ERR)            error conditions
                        4 (KERN_WARNING)        warning conditions
                        5 (KERN_NOTICE)         normal but significant condition
                        6 (KERN_INFO)           informational
                        7 (KERN_DEBUG)          debug-level messages


This was enough of a lead to go through my local git tree of the kernel and grep for loglevel in the Documentation directory. That brought me to this interesting bit from Documentation/sysctl/kernel.txt:
==============================================================

printk:

The four values in printk denote: console_loglevel,
default_message_loglevel, minimum_console_loglevel and
default_console_loglevel respectively.

These values influence printk() behavior when printing or
logging error messages. See 'man 2 syslog' for more info on
the different loglevels.

- console_loglevel: messages with a higher priority than
  this will be printed to the console
- default_message_loglevel: messages without an explicit priority
  will be printed with this priority
- minimum_console_loglevel: minimum (highest) value to which
  console_loglevel can be set
- default_console_loglevel: default value for console_loglevel

==============================================================

So I ran 'cat /proc/sys/kernel/printk' and got (I managed to read it through the flood of messages from the firewall):

7       4       1       7
According to the explanations above, that meant the console_loglevel was too high (7, so everything below KERN_DEBUG was printed to the console), so to fix it I ran:
echo '2 4 1 7' > /proc/sys/kernel/printk
And, behold, the serial console was usable.
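For the record, the same effect can be obtained with sysctl or, if your dmesg supports -n (the util-linux one does), with dmesg; to make it persistent, use the loglevel= kernel parameter quoted above:
sysctl -w kernel.printk='2 4 1 7'    # equivalent to the echo above
dmesg -n 2                           # sets only the console_loglevel
# persistent variant: boot with 'loglevel=2' on the kernel command line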

Tuesday 22 January 2013

(Rsnapshot) backup and security - I see problems

In my previous post I asked for suggestions for backup solutions that would be open/free software, do backups over the network to a local HDD, be cross-platform to allow Windows and Linux clients, and not be too CPU/memory hungry (on the server).

Several people suggested rsnapshot, BackupPC, areca-backup, and rsync. Thank you for all your suggestions, you have been a tremendous help. I have decided to give rsnapshot a try, since I was told it would actually do what it is supposed to do for Windows clients, too (which had initially been my perceived show stopper for rsnapshot).

Still, when getting to the implementation, I was a little disappointed by the very permissive access that needs to be granted on the client machines, since the backup is initiated from the backup server. Even the supposedly more secure suggested solutions seem way too permissive for my taste, since losing control over the backup system basically means giving away total access to the data of all client machines, which is quite a big problem in my opinion.

The data-transfer mechanism employed by rsnapshot is simply (S is the backup server, C a client):
  1. S ==(connects and reads all data)==> C
  2. S stores data in the final storage area
Am I the only one seeing a problem with this idea? If the server can connect to all your client machines and read all areas as it pleases, even if you restrict it to some directories, the data is already compromised the moment the backup server is compromised (think .ssh private keys, files with wireless network passwords and so on; I won't even mention card information - you don't keep credit/debit card information on your computer, or at least not in plain text, do you?).

What I would consider a better alternative would be a server-initiated dialogue which goes a little like this (S is the server, C is the client, the '=' arrows represent connections via ssh):
  1. S ---(requests backup initiation procedure)---> C
  2. S waits for a defined period of time for C to connect back and send the (already encrypted) data; if the data doesn't arrive, S aborts
  3. S <===(sends encrypted data to be backed up)=== C
  4. S <-(signals the completion of data transfer)-- C
  5. S stores the data in the final storage area
This way, the server can allow access from designated clients only into designated areas (even a chroot is possible), and access can even be granted only after a port knocking procedure and only during the backup time frame (since the server initiates the negotiation, it can expect the knocks then, and only then), so the server is quite well secured. The connection to the server can even be done through an unprivileged account; it can even be one account per client machine, limited to an scponly shell, if you care for that level of security.

On the other hand, the client information is secure, since it can be encrypted directly on the client machine and sent only after encryption; the client machine can decide and control what it sends, while the backup server can only store what the client provides. Also, if the server is compromised, the clients' data and systems aren't compromised at all, since the data on the backup machine is encrypted with a key known only on the client (and a backup copy of that key can be stored somewhere safe).
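As a rough illustration of steps 3 and 4, the client-side push could be little more than an encrypted tarball sent over ssh; a sketch, with made-up host, account and paths:
#!/bin/sh
# client-side push (sketch): encrypt locally, so only ciphertext leaves the machine
BACKUP_SERVER=backup.example.com
DROP_DIR=/srv/backup/drop/myclient

tar -cf - /etc /home \
  | gpg --encrypt --recipient backup-key \
  | ssh backup@"$BACKUP_SERVER" "cat > $DROP_DIR/$(date +%F).tar.gpg"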

I am aware this approach can be problematic for permission preservation (user/group), but that's not a problem if there is a local <-> remote user mapping, or if simply the numeric IDs are kept.

I am also aware this means smarter clients, and that the Windows machines might not be able to implement this completely, but a little more security than "here is all my data" can still be achieved, can't it?

What do other people think? Am I insane or paranoid?

I think I can implement this type of protocol in some scripts (at least one for server and one for clients) and use the backup_script feature of rsnapshot to keep this clean and nice within rsnapshot.
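The rsnapshot side of such a hook would go into rsnapshot.conf via its backup_script directive; something along these lines (the script path is hypothetical, and remember that rsnapshot.conf fields must be separated by tabs):
backup_script	/usr/local/bin/request-client-backup.sh	myclient/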

What might prove problematic with this approach is that the rsync speedup is lost (might be?), because the copy is done to a temporary directory which, I assume, is empty, so tough luck. Another problem seems to be that every time the backup is done, the client has to encrypt each of the files to back up, which seems a real performance penalty, especially if the data to be backed up is quite large.

Is there an encryption layer that does this automatically at file level, in the same/similar manner that LUKS does for entire block devices? Having the right file names, but with scrambled/encrypted contents, seems the ideal solution from this PoV.

Thanks for reading, and for any suggestions you might point me to.

P.S.: I just thought of this: if there was an encryption layer implemented with FUSE, mounted in some directory on the client machine, the default rsnapshot mechanism could actually work, and this would mitigate both the data accessibility issue and the performance issue, since that file system could be contained within a chroot and the encryption/scrambling would be done transparently on the client, so no data would be plainly accessible. Does anybody know such a FUSE implementation that does on-the-fly file encryption?

P.P.S.: EncFS does exactly what I want with its --reverse option, which is designed exactly for this purpose:
Normally EncFS provides a plaintext view of data on demand. Normally it stores enciphered data and displays plaintext data. With --reverse it takes as source plaintext data and produces enciphered data on-demand. This can be useful for creating remote encrypted backups, where you do not wish to keep the local files unencrypted.
Great!
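In practice that could look like this (paths made up; note that, depending on its configuration, EncFS may encipher the file names as well as the contents):
encfs --reverse /home/user/data /tmp/encrypted-view
# point rsnapshot/rsync at /tmp/encrypted-view; only ciphertext is ever read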