
Tuesday, 22 January 2013

(Rsnapshot) backup and security - I see problems

In my previous post I asked for suggestions for a backup solution that would be open/free software, do backups over the network to a local HDD, be cross-platform to allow Windows and Linux clients, and not be too CPU/memory hungry (on the server).

Several people suggested rsnapshot, BackupPC, areca-backup, and rsync. Thank you for all your suggestions; you have been a tremendous help. I have decided to give rsnapshot a try, since it was suggested to me that it would actually do what it is supposed to do for Windows clients, too (which I had initially perceived as a show stopper for rsnapshot).

Still, when getting to the implementation, I was a little disappointed by the very permissive access that needs to be provided on the client machines, since the backup is initiated from the backup server. Even the so-called more secure suggested solutions seem way too permissive for my taste, since losing control over the backup system basically means giving total access to the data on all client machines, which is quite a big problem in my opinion.

The data-transfer mechanism employed by rsnapshot is simply:
  1. S ==(connects and reads all data)==> C
  2. S stores data in the final storage area
Am I the only one seeing a problem with this idea? If the server can connect to all your client machines and read all areas as it pleases, even if you restrict it to some directories, the data is already compromised when the backup server is compromised (think .ssh private keys, files with wireless network passwords and so on; I won't say card information - you don't keep credit/debit card information on your computer, or at least not in plain text, do you?).

What I would consider a better alternative would be a server-initiated dialogue which goes a little like this (S is the server, C is the client, '=' represents a connection via ssh):
  1. S ---(requests backup initiation procedure)---> C
  2. S waits for a defined period of time for C to connect back and send (already encrypted) data; if the data doesn't arrive, it aborts
  3. S <===(sends encrypted data to be backed up)=== C
  4. S <-(signals the completion of data transfer)-- C
  5. S stores the data in the final storage area
This way, the server can allow access only to designated areas (even a chroot is possible) and only from designated clients; access can even be granted only after a port knocking procedure and only during the backup time frame (since the server initiates the negotiation, it can expect the knocks then, and only then), so the server is quite well secured. The connection to the server can even be done through an unprivileged account, even one account per client machine, which can be limited to an scponly shell, if you care for that level of security.

On the other hand, the client information is secure since it can be encrypted directly on the client machine and sent only after encryption; the client machine decides and controls what it sends, while the backup server can only store what the client provides. Also, if the server is compromised, the clients' data and systems aren't compromised at all, since the data is on the backup machine but is encrypted with a key known only on the client (and a backup copy of it can be stored somewhere safe).

I am aware this approach can be problematic for permission preservation (user/group ownership), but that is not an issue if there is a local <-> remote user mapping or if the numeric IDs are simply kept.

I am also aware that this means smarter clients, and that Windows machines might not be able to implement this completely, but a little more security than "here is all my data" can still be achieved, can't it?

What do other people think? Am I insane or paranoid?

I think I can implement this type of protocol in some scripts (at least one for the server and one for the clients) and use the backup_script feature of rsnapshot to keep this clean and nice within rsnapshot.
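To make this more concrete, here is a rough sketch of what the server-side half could look like as an rsnapshot backup_script (as far as I understand, rsnapshot runs such a script in a scratch directory and snapshots whatever the script leaves there). All host names, paths and the client-side "start-backup" trigger are hypothetical, just to illustrate the flow:

#!/bin/sh
# ask the client to start its backup; the client decides what it sends
# (backup-trigger would be an unprivileged account with a forced command)
CLIENT=client1.example.org
DROP_DIR=/srv/backup-drop/client1   # the only area the client may write to (scponly/chroot)
TIMEOUT=3600                        # seconds to wait for the client to finish

ssh backup-trigger@"$CLIENT" start-backup || exit 1

# wait for the client to push its (already encrypted) data and signal completion
waited=0
while [ ! -f "$DROP_DIR/DONE" ]; do
    sleep 10
    waited=$((waited + 10))
    [ "$waited" -ge "$TIMEOUT" ] && { echo "client did not finish in time" >&2; exit 1; }
done
rm -f "$DROP_DIR/DONE"

# move the encrypted files into the current directory; rsnapshot then
# copies whatever is left here into the snapshot
cp -a "$DROP_DIR"/. .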

What might prove problematic with this approach is that the rsync speedup is (or might be?) lost, because the copy is done to a temporary directory which, I assume, is empty, so tough luck. Another problem seems to be that every time the backup is done, the client has to encrypt each of the files to back up, which looks like a real performance penalty, especially if the data to be backed up is quite large.

Is there an encryption layer that does this automatically at file level in the same/similar manner that LUKS does for entire block devices? Having the right file names, but with scrambled/encrypted contents seems to be the ideal solution from this PoV.

Thanks for reading and possible suggestions you might point me to.

P.S.: I just thought of this: if there were an encryption layer implemented with FUSE, mounted in some directory on the client machine, the default rsnapshot mechanism could actually work. This would mitigate the data accessibility issue and the performance issue, since that file system could be contained within a chroot and the encryption/scrambling would be done transparently on the client, so no data would be plainly accessible. Does anybody know of such a FUSE implementation that does on-the-fly file encryption?

P.P.S.: EncFS does exactly what I want with its --reverse option, which is designed exactly for this purpose:
Normally EncFS provides a plaintext view of data on demand. Normally it stores enciphered data and displays plaintext data. With --reverse it takes as source plaintext data and produces enciphered data on-demand. This can be useful for creating remote encrypted backups, where you do not wish to keep the local files unencrypted.
Great!
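For reference, the reverse mode seems to be as simple as exposing an encrypted view of the plaintext directory and pointing the backup at it (the paths here are just an example):

encfs --reverse /home/user /mnt/backup-view
# /mnt/backup-view now exposes the data in encrypted form, so rsnapshot/rsync
# on the backup server can read it without ever seeing the plaintext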

Thursday, 27 September 2012

Security, so easy to be done wrong

I just finished reading this article about data security and how easily it can be done wrong. A real eye-opener, if you're thinking about rolling your own solution. I highly recommend it, in spite of the initially weird feeling of reading something that looks like a script.

Thursday, 9 October 2008

Nokia has your data

As a beta tester for the new email application from Nokia, you will have to give them your mail account password:



and agree that they will get your data, too [1].




Really unimpressive, Nokia.


[1] I haven't checked the "Nokia privacy policy", but this is enough for me to say no, thanks, and to start wondering whether they do anything behind my back with the phones they sell.

Wednesday, 16 July 2008

OH MY GOD!

This can't be true! How can someone trust such a key?

Does anybody have a reasonable explanation for such an abomination?



Update: It seems it is unclear to some people what the problem with this key is. There are multiple identities (apparently different people) on the same key. So, in layman's terms, multiple different people are using exactly the same key to certify their identity.

I really don't understand how this could work in real life. Maybe it is a case of misunderstanding what gpg is about?

Tuesday, 27 November 2007

Updates: NSLU2, Andrew S. Tanenbaum in .ro

Last weekend was as hectic as my life has been lately: I have been trying to restore sanity to my NSLU2, I went to a lecture by Andrew S. Tanenbaum, and I made a 2.5 hour drive to my parents' in about 4 hours because of the fog.

First, my slug:
  • refuses to recognise the USB NIC I have been using until the latest incidents (it either says 'not accepting address, error -71' or 'device descriptor read/64, error -71')
  • sometimes reboots when I insert the USB NIC
  • either doesn't boot at all or boots really slowly when the USB NIC is inserted
  • (obviously) doesn't show the NIC in the lsusb listing when it is not recognised
Since the USB NIC works on my laptop, I suspect a hardware problem with the slug. Bummer!

Dear not-so-lazyweb, is there a way to install Debian on an ASUS WL-500G Premium router without losing the wireless capability? Or is there a way to make use of my USB NIC with the ASUS router?



Second, Andrew S. Tanenbaum visited Romania and lectured Friday at the University „Politehnica” Bucharest.

He presented Minix3's architecture and the advantages it has over monolithic OSes. I attended the lecture (although I am not a student anymore) and found it quite nice and well prepared, but I had the feeling that sometimes he was trying to avoid or to bash topics that did not put Minix in a good light or that challenged its title of being the first open[0] OS based on a micro-kernel architecture[1]. In spite of that, I found him to be a really good speaker and I liked the overall presentation, although I also expected some on-the-spot demos or at least some recordings.

The things that I remember:
  • 2.4 million subtle code alterations in drivers, with only 80,000 driver crashes (of course, no kernel crashes)
  • simulation of repeated network driver crashes at different time intervals and how they affect performance - a 30% degradation for crashes that occur once every second and an insignificant degradation for crashes occurring every 10 seconds
  • every driver has a set of rights assigned to it; it was difficult for them to define this - this sounds a lot like SELinux issues
  • messages have a fixed length
  • there is no dynamic memory allocation within the kernel
  • the kernel is 5000 lines of code (all drivers are in user space)
  • really secure system
  • there were performance comparisons with Minix2 and the hit was about 20%; still, it is said that L4 has only an approximate 2-5% performance hit due to the micro-kernel architecture
  • apparently the FreeBSD kernel has only 3 bugs /1000 lines of code
  • Minix uses a BSD license
I also got a Minix live CD (which is more like the Gentoo Linux install CD - just a console in the live system) and made an installation of Minix in a qemu machine[2]. Unfortunately, I don't think I'll have the time to delve into the source.

I was thinking, would it be worth the effort to try to make a GNU/Hurd/Minix system (i.e. replace Mach with Minix's micro-kernel)? BTW, is Debian GNU/Hurd now based on L4 or does it still use Mach?


Note: Some of my work colleagues suggested that the presentation was the same as one he made at linux.conf.au last year, but I can't confirm or deny that since I didn't see the recording.



I won't write about the "fog drive", but I'll just say it wasn't pleasant at all, and I felt I was driving in The Twilight Zone for the whole Friday evening.




[0] he gave credit to QNX
[1] For instance, I tried to ask him twice if he felt that GNU Hurd was violating the micro-kernel paradigm or if he could compare it to Minix's architecture. I had the impression that both times he avoided answering and started the usual Hurd bashing: "they have been developing it for 20+ years, but got nothing working", while "Minix is here". After the lecture/presentation somebody told me that AST briefly said that they "were similar, but different". I didn't catch that line.
[2] thanks to qemu-launcher it is trivial to create and manage multiple qemu virtual machines

Wednesday, 14 November 2007

Lesson relearned: when Linux networking weirdness occurs...

My relearned lesson for the day: when Linux networking weirdness occurs in a NAT environment, remember to try MTU clamping.

Thanks to the comments by Justin and Sesse, I was fast-tracked to the core of the problems I have been experiencing since Thursday: MTU issues. What's worse (from my PoV) is that I have encountered this issue before with the provider I had in Timișoara, but, since that ISP was using PPPoE and my current ISP in Bucharest doesn't, I never made the connection. I even had a commented-out iptables rule for MTU clamping in my firewall script.

The rule I am talking about looks like this:

# clamp the TCP MSS to the path MTU on SYN packets leaving the external interface
iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o $EXT_IF -j TCPMSS --clamp-mss-to-pmtu

or like the one I have been using (seems more logical to me):

# same idea, but applied to all forwarded TCP SYN packets regardless of interface
iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu


Note that this is not a fix, but a workaround; the real problem is over-zealous admins or weird setups[1] which think that blocking ICMP "fragmentation needed" messages (or the entire ICMP traffic), and thus breaking path MTU discovery, is a way to secure networks.
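As a side note, a quick way to check whether path MTU discovery is broken towards a given host (assuming the iputils ping found on Linux; the host name is just an example) is to send pings with the don't-fragment bit set and a payload that fills a full 1500-byte frame:

ping -M do -s 1472 www.example.org
# 1472 bytes of payload + 8 bytes ICMP header + 20 bytes IP header = 1500 bytes;
# if smaller payloads get replies but this one just times out without a
# "Frag needed" error, something on the path is dropping the ICMP feedback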


Once again, thanks to everybody who read and/or commented about my issue.

[1] Sesse told me that in his case there was a transparent proxy involved when he experienced MTU weirdness.

Sunday, 28 October 2007

gpg signatures sent

I finally managed to resend the signatures to the few people I had decided, a while back after debconf7, to send them to.

I actually resent all the signatures I thought I should send (if I didn't socialize with you at all during debconf or before, you shouldn't receive a signature from me).

So, please:
  • sorry if you get my signatures again; if so, just ignore them
  • don't be mad if you didn't receive a signed key from me; I probably don't consider that I know you well enough to do that yet ;-)
Now I can cross one more item on my long todo list. Yay!

This message has emerged thanks to: caff, dato, python's smtplib and rfc822, vi, gpg, exim, linksys, dell, todo (the application from openhand) and blogger :-)