Saturday, January 31, 2015

Facebook reverse phone search is much more dangerous than you think

Facebook's got this dangerous feature allowing anybody to search for people using only their phone number.

Many companies are now taking advantage of this to associate a number with a name and other information from the profile. This is incredibly bad for your privacy.

Facebook's bad decisions

Facebook made two particularly bad decisions.
First, they require a phone number for many of the site's functions.
Second, on your profile you can choose who can see your number; you can even select "Me only". But there is a different setting, enabled by default, that allows people to find your profile using only your phone number.

There's a saying in software design that default settings should be right for most people. I don't think that's the case here. The problem is made even worse by the fact that Facebook is used by teenagers (and older people too) who are not aware of the consequences a lack of privacy can have on their lives, while their Facebook profile contains everything there is to know about them, everything that can be used against them.

On a larger scale...

Maybe you are thinking: "Well, so someone knows a number and can find out who owns it. What's so bad about that? Looking up all those numbers would take people forever anyway*."
Then you are not aware of what can be done with computers:

  • Write a program to perform a brute-force search by trying every possible phone number out there, and build a database. Then sell that database.
  • You think Facebook would catch someone running that kind of search? If they did, attackers would simply use a different network path for each connection, as can be done with I2P.
  • I2P would be particularly slow, though, and you would need a Facebook profile to do the search. So botnets would be used instead. The people operating such networks have hundreds of thousands of "zombie" computers working for them (presumably with a cookie on each machine allowing the search to be performed), and these would use their owners' Facebook accounts to do the lookups. The attack could be done in minutes.
* Did you know there are people in India and elsewhere in the world currently solving CAPTCHAs by hand? Although there are advanced techniques to automatically solve many kinds of CAPTCHAs, for the time and money it takes an engineer to write such a program, low-paid workers from poor countries will have solved millions of these stupid images.

Friday, January 30, 2015

Microsoft and Snapchat vulnerabilities got publicly exposed.

Here's a post to thank the people who aren't afraid to publish information about security threats when software companies don't care even after they have been warned.

Microsoft, for instance, failed to correct serious issues within the 90 days after Google reported them, as documented on Google's Project Zero blog. I mean, come on! Microsoft is a giant company. 2,160 hours is plenty of time to correct a bug.

Snapchat isn't a good player either.
For those who don't know, Snapchat is a social media app that mostly focuses on the exchange of so-called "snaps": photos or short videos that delete themselves after being viewed.

The recipient can only see the video or photo once, and only while holding a finger on the screen. The "snaps" get deleted afterwards.
Well, not quite. The truth is, the snaps are merely marked for deletion: the files are renamed with a ".nomedia" extension and only really removed from the file system later, because, as you might have witnessed with the "Gallery" app, this operation is slow.

Many applications, which violate Snapchat's terms and conditions as well as Google Play's (and hence were removed from the store), did something very simple: copy the marked files elsewhere and rename them. Voilà, users could view the snaps for as long as they wanted and re-share them.
To prevent this, Snapchat used some encryption.

If you know a thing or two about encryption, you might be aware that the term covers a huge number of techniques for "hiding" data in one way or another. For instance, merely reversing the letters of a text could be called encryption.
As this very interesting and very well written article from GibsonSec will tell you, Snapchat uses AES/CBC with a single symmetric key. The decryption function in Python is only 8 "instructions" long, including two requests to a web service.

Snapchat's founder said he doesn't care about security; he wants his users to have fun with the app. That's obviously something a product design major would say, not an engineer.
Because the truth is that if they want their service to exist in the future and make money, they should take this issue very seriously. If people can cheat and anybody can save the snaps by downloading an app publicly available on the Play store, their whole business idea goes to the bottom of the sea.

Even if Google removes these apps from the store, Android users, fortunately, are free to download and install whatever they want on their devices. So simply removing the apps that exploit the encryption weakness is useless.

Snapchat played its cards very badly, as users got their credentials stolen because the company still considers security a minor issue...
There are plenty of websites featuring stolen photos and videos from Snapchat. What did the company say? It's because users installed third-party apps. Boo hoo. If the government did the same, who would you blame? The citizens? The hackers? Nope. You'd blame the government. So you should blame Snapchat.

Snapchat doesn't give a **** about your privacy.

Wednesday, January 28, 2015

ownCloud Client asking for password when starting a KDE session

UPDATE: On another machine I have, the fix described below is not enough. You also have to make sure that the KWallet daemon is enabled for the current user: System Settings => Account details => KWallet => Enable.

The ownCloud client is integrated with KDE's password manager (KWallet).
When it starts, it checks whether KWallet is unlocked and looks up the password there.

Yeah, that's how it should be, and that is very good software design. Unfortunately it has never worked, and the ownCloud client asks for the password every single time!

In the latest version the problem is actually explained in the dialog ("No keychain available"):



In my case, which seems to be the default, the problem is that kwalletd is not yet running when the ownCloud client starts. It's a timing issue.

My solution:


1. Open KDE's "Autostart" control module (a quick search from the start menu will get you there)
2. Change the ownCloud entry to point to a script of your own.
3. Click OK.
4. Write the following shell script:
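Something like this should work. This is only a minimal sketch: it assumes the wallet daemon binary is called kwalletd and the ownCloud client binary is owncloud; adjust the names and paths to your distribution.

#!/bin/sh
# Start the KWallet daemon first if it isn't running yet...
if ! pgrep -x kwalletd > /dev/null; then
    kwalletd &
fi
# ...give it a few seconds to come up...
sleep 5
# ...then launch the ownCloud client, which can now find the keychain.
owncloud &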



5. Make it executable.
6. Reboot (or kill kwalletd and try your script. Note that logging out doesn't kill the daemon.)

Of course if your KWallet is protected by a password, then you will be asked to provide it.

VBoxManage made simple

As you might know, VirtualBox machines can be managed from the command-line. However, I find the syntax of the VBoxManage command-line utility cumbersome and hard to remember.

Copy this in .bash_aliases:
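Here is a minimal sketch of what such a snippet could look like (assuming VBoxManage is in your PATH; the confirmation prompts and the completion word list below are just one way to get the behavior described next):

# Manage VirtualBox VMs with short, memorable commands.
vm-start()       { VBoxManage startvm "$1" --type headless; }
vm-savestate()   { VBoxManage controlvm "$1" savestate; }
vm-powerbutton() {
    read -p "Send ACPI power button to '$1'? [y/N] " answer
    [ "$answer" = "y" ] && VBoxManage controlvm "$1" acpipowerbutton
}
vm-poweroff()    {
    read -p "Pull the (virtual) power cord of '$1'? [y/N] " answer
    [ "$answer" = "y" ] && VBoxManage controlvm "$1" poweroff
}
vm-running()     { VBoxManage list runningvms; }

# Autocompletion: the VM list is captured when this file is sourced,
# hence the need to re-login (or re-source) after adding or removing a VM.
# Completion with -W splits on whitespace, so VM names with spaces won't work.
complete -W "$(VBoxManage list vms | awk -F'"' '{print $2}')" \
    vm-start vm-savestate vm-powerbutton vm-poweroff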



Now you can do the following:

vm-start starts a VM in headless mode
vm-savestate suspends a VM to disk
vm-powerbutton simulates a press on the power button
vm-poweroff simulates unplugging the power cord (some OS don't shutdown completely)
vm-running lists the running VMs

Autocompletion is enabled and dangerous actions will ask for confirmation.

(You will need to re-login or "source" .bash_aliases whenever you add / remove a VM.)
(The script doesn't support VM names with spaces.)

Sunday, January 25, 2015

Backing up your data with Amazon S3 Glacier and rsnapshot. A complete guide, Part 4.

Aletsch Glacier in Switzerland
Photo from John Fowler

Amazon Glacier

Here we go! In Part I, I wrote quite a lot about Glacier; now it's time to get our hands dirty.


Create an AWS account and a vault

Before starting you need an Amazon Web Services account. I realize there is no point in showing you screenshots or explaining the following process in detail, because it is well documented by Amazon and will probably be outdated by the time you read this. Using the web management console:

  1. Create an AWS account. Amazon will refer to it as the "root account".
  2. Create a group with permissions for Amazon Glacier.
  3. Create a user (IAM) and add it to the group you just created. Write down the access key and the secret key; we will need them in a minute.
  4. Switch to the datacenter you want your data to be stored in.
  5. Create a new vault in Glacier. Name it appropriately, for instance "yourHostname_backup".
  6. Configure the appropriate limit for the vault in the "Settings" panel from the Glacier page.

Glacier-Cmd

Amazon doesn't provide any command-line or graphical client. All they offer are wrappers for their REST API in many languages. The Java and .NET APIs offer high-level features that the others do not. But still, everybody needs to upload and download archives, so some people have developed clients. One of them is Glacier-Cmd.

As a regular user:

git clone https://github.com/uskudnik/amazon-glacier-cmd-interface.git
cd amazon-glacier-cmd-interface
sudo python setup.py install

At the time of writing there was a pending patch to support "eu-central-1", Amazon's latest datacenter located in Frankfurt am Main, Germany, Central Europe.

With the user that will send the archives to Amazon Glacier:

Edit ~/.glacier-cmd:

[aws]
access_key=YOUR_ACCESS_KEY
secret_key=YOUR_SECRET_KEY

[glacier]
region=eu-central-1
#logfile=~/glacier-cmd.log
#loglevel=INFO
#output=print

Change the keys and region accordingly. The rest is optional. Attention! The region is chosen in the AWS web console when you create the vault, not here; this setting merely has to match it.
The logging options don't seem to work.

Verify this is working:

glacier-cmd lsvault

By the way, you can create vaults with glacier-cmd.
The command-line is badly documented. Look at this page instead.

To upload an archive:
glacier-cmd -c ~/.glacier-cmd  upload --description "A description of the archive" your_vault_name archiveName.tar

Do I need to tell you to run this with Task Spooler or Screen?
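For instance (assuming Task Spooler's binary is installed as tsp, as on Debian; on other systems it may simply be called ts):

tsp glacier-cmd -c ~/.glacier-cmd upload --description "A description of the archive" your_vault_name archiveName.tar
# or, with screen, run it in a detached session you can re-attach to later:
screen -dmS glacier glacier-cmd -c ~/.glacier-cmd upload --description "A description of the archive" your_vault_name archiveName.tar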

I am not sure Glacier-Cmd fully supports resuming. But in case you get a timeout, try this:
  1. glacier-cmd listmultiparts your_vault_name
  2. Copy the upload ID
  3. Retry the upload with "--resume --uploadid the_copied_upload_id".
The CLI says it resumes something even though the documentation says it doesn't support the feature, so I'm a bit lost. Maybe that's because the doc is 2 years old...
See also this solution in case of timeouts. (In short: while true; do glacier-cmd ... --resume --partid 1234 --partsize multiple_of_two; sleep 600; done)

Alternative: Glacier CLI. It also offers an interesting mechanism to work with git-annex.
Alternative on Windows and Mac: I love CyberDuck. It might well be the most user-friendly GUI for many storage providers including S3 Glacier.


Things to know

  • Upload is free
  • Download is not
  • Asking for the "inventory" takes time.
  • Asking for a download link takes at least 4 hours.
  • Download links are available for 24 hours. (Does that mean we have 24 hours to download the entire archive?)
  • It takes about a day before deleted archives stop showing up in the "inventory".
  • Started but failed uploads are kept by Amazon; you must either resume them (see above) or remove them (glacier-cmd abortmultipart).

Automating

The next step is to automate the upload and the deletion of the archives created from rsnapshot.
Remember that you can only delete an archive for free 90 days after it has been uploaded.
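As a starting point, a small script along these lines could be run by cron (or from rsnapshot's cmd_postexec hook). This is only a sketch: the vault name, paths, key name and log file are assumptions to adapt, and it assumes the tar archive was already created as described in Part 3.

#!/bin/sh
VAULT="yourHostname_backup"                       # the vault you created earlier
ARCHIVE="/var/backups/backup-$(date +%Y%m%d).tar"

# Encrypt the archive (produces $ARCHIVE.gpg), then upload it to Glacier.
gpg -e -r NameOrEmail "$ARCHIVE"
glacier-cmd -c ~/.glacier-cmd upload \
    --description "backup $(date +%Y-%m-%d)" \
    "$VAULT" "$ARCHIVE.gpg" >> /var/log/glacier-uploads.log 2>&1

# Keep the log file: it contains the archive IDs you will need later to
# delete old archives (remember: deleting is only free after 90 days).
rm -f "$ARCHIVE" "$ARCHIVE.gpg"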

Last words: Testing

Voilà. We are done. One last piece of advice: test your backups. To do this, create a small directory, for instance with a single "Hello World" text file, and modify rsnapshot so it runs a script that will create an archive, encrypt it and send it to S3 Glacier. Then download the archive with Glacier-Cmd or another tool (try one of the GUIs like SAGU), decrypt it, extract it and check that you can retrieve the "Hello, World" text.
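In other words, something along these lines (a sketch; replace NameOrEmail and your_vault_name with your own values):

mkdir hello-test && echo "Hello, World" > hello-test/hello.txt
tar cf hello-test.tar hello-test/
gpg -e -r NameOrEmail hello-test.tar
glacier-cmd -c ~/.glacier-cmd upload --description "backup test" your_vault_name hello-test.tar.gpg
# ...request the archive, wait a few hours, download it, then:
gpg -d downloaded-copy.tar.gpg > restored.tar
mkdir restore && tar xf restored.tar -C restore
cat restore/hello-test/hello.txt    # should print "Hello, World"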

Backing up your data with Amazon S3 Glacier and rsnapshot. A complete guide, Part 3.


(Part I is here)

Remember when I told you it was a bad idea to back up everything automatically with rsnapshot if the files never change (e.g. photos from 5 years ago)? Still, you want to make one backup of these files.
That's what we'll do now. We'll create archives manually and encrypt them.

Creating archives from the rsnapshot folders and encrypting them is left as an exercise. It should be easy if you read the guide.

Creating an archive

The simplest way to create an archive of a folder is this:
tar cf somename.tar folderPath/

Then you could run "watch du somename.tar -h" to see how it progresses.
Luckily, there is a more complicated way!
tar cf - folderPath/ -P | pv -s $(du -sb folderPath/ | awk '{print $1}') | cat > somename.tar

This will display a nice progress bar. I suggest the use of screen for long-running jobs. You can also try Task Spooler, a barely known, badly documented yet very useful tool!
Note that the two commands above create an uncompressed archive. That's what you want if you are creating a backup of your music library, images, videos, ZIP files, ...

If you want compression:
tar zcf somename.tar.gz folderPath/
for a GZIP-compressed file. It's slow because gzip uses a single thread. There are multithreaded implementations of GZIP and BZIP2 (namely pigz and pbzip2) that speed compression up roughly linearly with the number of CPU cores.
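For example, assuming pigz is installed, you can simply pipe tar through it:

tar cf - folderPath/ | pigz > somename.tar.gz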

With the progress bar:
tar cf - folderPath/ -P | pv -s $(du -sb folderPath/ | awk '{print $1}') | gzip > somename.tar.gz

Note: Instead of TAR you might want to take a look at DAR. It offers many useful features.
Note 2: I've found that with default options, file sizes go like this: uncompressed > gzip > bzip2 > xz (~ 7zip).
Note 3: Here and in general: avoid ZIP, RAR and DMG files. Everything TAR-based is 100 % open source, while these are not, or might get you in trouble. Also, tar, gzip and bzip2 are available on every UN*X right after the first boot.

Encryption

OpenSSL can do some encryption, but it's not meant for large files, so we can't use it. We are left with S/MIME and PGP. Here we will use GnuPG / GPG (GNU Privacy Guard), an open-source alternative to the original, proprietary PGP (Pretty Good Privacy) software.

First we'll need to create a private and public key pair. I won't explain how PGP works, nor RSA or ElGamal... There are plenty of GUIs for all operating systems to create the keys, but you can also create them from the command line, as explained in this online guide:
gpg --gen-key

Make several copies of your private and public keys. Use a strong passphrase. Whenever possible, use 400 permissions for the private key. You can give your public key to anyone; in fact most people put theirs on a key server so other people can find it and send them encrypted e-mails.
PGP is great to encrypt files or e-mails so only designated recipients can read them. In this case, the entity encrypting the file and the recipient are the same person. Let's encrypt our archive:
gpg --encrypt --recipient 'Your Name or e-mail here' archivename.tar
or even shorter:
gpg -e -r NameOrEmail archivename.tar

You'll end up with a new file named "archivename.tar.gpg", encrypted. You can now delete the unencrypted version. 
Exercise: combine the archive creation, compression and encryption using pipes. Yes you can.
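One possible solution, for the curious (a sketch; adapt the folder and recipient names):

tar cf - folderPath/ | gzip | gpg -e -r NameOrEmail > somename.tar.gz.gpg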

I would advise checking this page from NASA, which explains how to use the AES256 cipher and GPG's compression flags.

Backing up your data with Amazon S3 Glacier and rsnapshot. A complete guide, Part 2.


(Part I is here)

Let's get our hands dirty!
It's time to make automated backups with rsnapshot.

Remember that rsnapshot gives you full backups while minimizing disk space, and lets you go back to older versions of your files.

Install rsnapshot

Step 1 is to install rsnapshot on your system.
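On Debian or Ubuntu, for instance, that should be as simple as:

sudo apt-get install rsnapshot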

Configure rsnapshot

rsnapshot can be configured to store files over the network and do pretty complicated stuff. It is in fact just a layer written in Perl on top of rsync and other common Linux commands.
The configuration file /etc/rsnapshot.conf will tell you plenty about how to configure the program. I just want you to pay attention to these points, which are not that clear in tutorials and hard to find in the documentation:
  • Use TABS, not spaces. If, like me, your default in Vim is to replace tabs with spaces, you can temporarily disable this behavior for the current session (or file?) by typing ":set noexpandtab". The alignment may look stupid when you "cat" the file; that's expected.
  • Folder paths must end with a slash. Always.
  • Look at the rsync man page for the exclusion patterns you can use.
  • The retain lines should be read as shown below. Do not try to interpret them any other way; you would get it wrong.

    retain hourly  4


    Keep only the four most recent versions of the job named "hourly". Few people know this, but "hourly" doesn't mean anything to rsnapshot. You could replace it with "darkvader" if you wanted to.
    Here are incorrect ways to read the "retain" lines:
    "4" is not the number of times per hour the backup must be done.
    "hourly 0.5" doesn't mean the job will be executed every two days.
  • The retain lines must be declared from the most to the least frequent. So: hourly, daily, weekly, monthly, yearly.
  • Again, the job name (e.g. "daily") doesn't mean anything. You can remove any of them. For instance you could have it configured to keep the last 4 "hourly" jobs and the last 2 "monthly" jobs without mentioning "daily" and "weekly".
  • I repeat for the third time: the job name has no meaning. So if you put "daily" before "hourly", then the folders named "daily" will actually contain the "hourly" backups.
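For example, a typical set of retain lines could look like this (the counts here are just an illustration; remember they go from most to least frequent and the fields must be separated by TABS):

retain	hourly	4
retain	daily	7
retain	weekly	4
retain	monthly	6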

Rsnapshot will create the output folder if it doesn't exist. On Debian, the default path is /var/cache/rsnapshot. The folder will be owned by root and inaccessible to anyone else.

First run

The very first time, invoke rsnapshot manually as root from the command line (preferably with screen), in verbose mode, and see what happens:

rsnapshot -v hourly
where "hourly" is the name of the first retain job in the configuration. The very first run will take much longer than all the other afterward because it has to make all the copies. The next runs are faster because only the modified files get copied.

Schedule rsnapshot to run every hour / day / week / month ...


If all went well, you can now create a few cron tasks to run rsnapshot automatically. Type "crontab -e" as root and enter something like this (I will explain it below):

# m h  dom mon dow   command
  0 1,7,13,19 * * * /usr/bin/rsnapshot hourly
  0 2  *   *   *    /usr/bin/rsnapshot daily
  0 6  *    *   1   /usr/bin/rsnapshot weekly
  0 11 1    *   *   /usr/bin/rsnapshot monthly

Quit the crontab editor.

hourly: I used explicit hours (1,7,13,19) instead of "*/6" because I didn't want the first backup to run between midnight and 1 am, when I know other cron jobs are already scheduled.
If you are keeping the last 4 "hourly" backups, you probably want to make one every 6 hours. Does that make sense to you?

daily: There is one big risk with these cron jobs: the hourly job might not be finished yet when the daily job is scheduled to start. In that case, the daily job will be cancelled. I am pretty sure you can configure rsnapshot to run two jobs in parallel, but I would advise against it. Your best bet is to leave enough time for the "hourly" job to complete.

weekly: Same remark. Funny story: the value of "dow" can be 0 - 7. Both "0" and "7" designate Sunday, for portability reasons. Here "1" is Monday. (In a corporate environment you should probably run the weekly job during the weekend.) In my case the job runs every Monday at 6 am.

monthly: Same remark regarding the hour (not too close to the other jobs). In my case the monthly job runs on the 1st day of every month at 11 am.

Trick question: how can you schedule a backup to run every 3 days instead of every day, and keep all of those backups from the past month, while still keeping the daily and weekly backups?

In /etc/rsnapshot.conf:
retain everyotherday 10 
where "everyotherday" could be "gogglydoe", and 10 is 30 days divided by 3 days.
The line must go between "daily" and "weekly".

In the crontab: 
# m h  dom mon dow   command
  0 0  */3 *   *    /usr/bin/rsnapshot everyotherday

Enjoy the power of full backups

You know what's nice about full backups (or sort-of full backups, since rsnapshot uses hard links to avoid duplication)?
You can browse the backup folders in /var/cache/rsnapshot just like the "live" folders!

Continue to Part III