Showing posts with label linux.

Monday, March 16, 2015

One more hour of sleep every night

Cheerful Sunset
(no copyright information)


It's 7 pm and the sky has this nice orange-red color.
But my computer screen does not. It's "shouting" a blue-ish tone at me. But not for long!

Everybody knows that watching screens (TV, computer or handheld devices) before going to bed is bad for your sleep. What you might not know is that if you do use screens at night, you'll sleep more and better if the screen is yellow-red instead of blue. Yes, blue! I won't restart the blue/black versus gold/white dress war, but it's true and you don't realize it: there is no such thing as white in nature, and color analysis is one of those very complex tasks your brain "computes", yet it is subjective in the end because your brain is not wired the same way as mine, so at some point you might see a gold dress where I see a blue one. So believe me, if you are looking at your screen right now, it's blue. If you think car headlights are white, park a car with regular headlights next to one with Xenon lamps. You will probably say one is yellow-ish and the other blue-ish.
But that's the beauty of it. If your screen goes very slowly and smoothly from blue to yellow - red, you won't notice it.

So, about that one hour of sleep. There was a study I cannot find anymore, in which people lay in bed doing nothing and the researchers measured how fast they fell asleep. Then they repeated the experiment with a more yellow / red light. People fell asleep faster with the second setup.

Redshift on Windows

I haven't tried it, but there is an experimental version of Redshift for Windows. f.lux is an older alternative and maybe it works better, I don't know. Tell me if you find out.

Installing and configuring Redshift on Kubuntu (or any Linux distribution really)

In this tutorial I'll assume you live in western Switzerland and that you use Kubuntu.
The procedure is similar on other systems. I am also assuming you are not changing time zones all the time. (If you are, the documentation covers that case.)

  1. Install Redshift:
    sudo apt-get install redshift
  2. Create and open the configuration file with your favorite editor:
    nano ~/.config/redshift.conf
  3. Paste this (Ctrl + Shift + V in Konsole):
    [redshift]
    transition=1
    location-provider=manual
    adjustment-method=randr
    [manual]
    lat=46.7
    lon=7.1

    Replace the last two lines with your own latitude and longitude (yes, go ahead, click on this link). You can keep all the digits if you want to. If your latitude reads South (Australia, New Zealand) or your longitude reads West (North America), use negative values where appropriate.

    If you can see "GNU nano" at the top left of the console window, press Ctrl+O to save the file and then Ctrl+X to close the editor when you are done.
  4. Start "redshift" from the terminal to check that it works. Your screen should turn a bit yellow within seconds if the sun is not up, and there should be no output on the console. If nothing happens, try adjusting the latitude and longitude or your computer clock. (You can also force the effect with the quick test shown after this list.)
  5. All well? Time to start Redshift automatically. Open the KDE menu and type "autostart". Select the entry that appears. Click "Add Program..." then type "redshift" (without quotes). Don't select anything just type "redshift" and click OK. Click OK again to close the window.
  6. Log out and back in. Your screen should be slightly yellow. Is it? Congratulations, you just bought yourself one more hour of sleep each night.
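
If you don't want to wait for sunset to check the effect, recent versions of Redshift can be forced to a given color temperature. Treat the exact flags below as an assumption and check redshift -h:

redshift -O 3500    # one-shot mode: set a warm 3500 K tone immediately
redshift -x         # reset the screen to normal
redshift -p         # print the computed location, period and temperature
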
Now I suggest you install a similar app, such as Twilight, on your phone.

Sunday, March 1, 2015

Netflix on Arch Linux (or Linux in general)

UPDATE: 
You don't need an unstable version of Chrome to play Netflix streams anymore. Simply download the latest stable version of Chrome.
Note that Chromium (exactly the same as Chrome, minus features such as syncing with a Google account and proprietary plugins) does not ship the Widevine plugin and thus cannot play Netflix content.

Original article:

Netflix has thrown away its Microsoft Silverlight interface and is now using HTML5 to stream its content.

Well, almost. Not surprisingly, the videos are still encrypted and DRM'd. It is done using a plugin developed by Widevine Technologies (a Google company).
Chromium is the open-source part of the Google Chrome browser (everything but the Google branding, the syncing mechanism with the Google account and a few plugins, is open source). So nowadays you'll often find the Chromium browser on Linux distributions.

The Widevine plugin is not included in Chromium for licensing reasons, yet you would expect to be able to install it anyway. There seem to be people working on that, but I couldn't make it work.

What worked for me is this:

1) Install google-chrome-dev from AUR. The reason you need the unstable version is that you need a very recent version (42+) and that you need a Google-branded browser including the Widevine plugin.
yaourt google-chrome-dev
2) Launch it with the google-chrome-unstable command.
3) Open Netflix and check if it works.
4) If it doesn't work, make sure the Widevine plugin is installed and working. Type chrome://plugins and possibly chrome://components in the address bar and check it is enabled.


Also, check out this cool browser extension Flix Plus from Lifehacker. I find it very useful!

I have to say I am impressed by this company from a business and technological point of view. Did you know they release a lot of their code with an open source license? Did you know they don't manage any hardware but rely heavily on Amazon for their computing and storage needs?
There is a trend nowadays, a few giants like Facebook aside, for companies to transition from managing their own hardware and storage to having it hosted by platforms such as Amazon AWS, Heroku, GitHub, Bitbucket, Microsoft Azure and other online services.
In many IT companies, there's no server room anymore, except for switches and routers, as file servers and other tools are physically located outside the company in data centers.

Saturday, February 28, 2015

Arch Linux and Arduino with your favorite IDE

You can edit your Arduino code from any IDE and compile it / upload it without using the Arduino IDE. To do that, the ino tool comes to the rescue.

If you are using Arch Linux you might be interested to know I uploaded a package named ino-1.6-git on AUR. It has been patched to work with the latest version of the Arduino SDK.

Installation

yaourt ino-1.6-git
sudo gpasswd -a $USER tty
sudo gpasswd -a $USER uucp
sudo gpasswd -a $USER lock

Log out and in again, or reboot.

Quick Start:

cd my-arduino-project
ino init
$EDITOR src/sketch.ino
ino build
ino upload
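
By default ino assumes an Arduino Uno and tries to auto-detect the serial port. If you have another board, the model and port can be given on the command line; treat the exact names below as assumptions and check ino list-models and ino --help:

ino list-models                            # list the board models ino knows about
ino build -m mega2560                      # build for an Arduino Mega 2560
ino upload -m mega2560 -p /dev/ttyACM0     # upload through a specific serial port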

Happy Arduino-ing!

Saturday, February 21, 2015

Remove audio from a video on Linux

If you are working on a computer vision project, there is a good chance that you have been recording videos with sound that you have no use for. To save space and processing power, you can strip the audio from the video just like this:

avconv -i input_video.mp4 -an output_video.mp4

(Replace avconv with ffmpeg if necessary.)
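
If you have a whole folder of recordings to process, a small shell loop will do. This is a sketch; the -c:v copy option (assuming a reasonably recent avconv / ffmpeg) copies the video stream instead of re-encoding it, which is much faster:

for f in *.mp4; do
    avconv -i "$f" -an -c:v copy "noaudio_$f"
done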

Just so you know, there is something dishonest about avconv: it makes you believe ffmpeg is dead or obsolete. In fact it's nothing like that. avconv is a fork of ffmpeg that has become the default on Ubuntu. So if you want to use a filter that isn't present in avconv, you need to get the latest version of ffmpeg from source or from a PPA. Read more on this subject here.

Rotate a video on Linux

When you record videos on your smartphone, the video itself is not rotated. The video recorder simply adds a tag telling the player how to rotate the video.
That's why mediainfo reports "Rotation: 180°" or other angles on videos recorded from my Samsung Galaxy S4.

Unfortunately unlike the video player on your phone, the vast majority of video players on desktop computers and TVs don't support this tag.

If the video is upside down:

avconv -i original_video.mp4 -vf "vflip,hflip" output_video.mp4
or
avconv -i original_video.mp4 -vf "transpose=2,transpose=2" output_video.mp4

(You can replace avconv with ffmpeg if avconv is not found on your system.)

Note that the first version is slightly faster (2-5 %).

If the video was recorded in portrait mode:

if the top of the image is on the left of the screen, it needs a +90° rotation:

avconv -i original_video.mp4 -vf "transpose=1" output_video.mp4

if the top of the image is on the right of the screen, it needs a -90° or +270° rotation:

avconv -i original_video.mp4 -vf "transpose=2" output_video.mp4

Sunday, February 15, 2015

Ubuntu: run a script when a USB device is plugged in

There's a lot of stuff you can do with udev. One of them is running a script when a USB device is plugged in.

It's as simple as:

1. figure out the vendor id and product id with lsusb
2. edit /etc/udev/rules.d/85-something.rules:

ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="04c5", ATTR{idProduct}=="11a2", RUN+="/absolute/path/to/your/script/that/will/be/executed/as/root"

3. Write the script mentioned above and make sure it's executable.
4. Restart udev: service udev restart.

You can do other things with udev such as managing /dev/ and the permissions on the device.
However, you should not use udev to mount USB disks; there are other tools for that.

To run a script when detaching a USB device, replace ACTION=="add" with ACTION=="remove". You can put several lines in the same ".rules" file.
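
For reference, a minimal hook script could just log the event (the log path below is an example); udev executes it as root:

#!/bin/sh
# /absolute/path/to/your/script: executed as root by udev when the device is plugged in
echo "$(date): USB device 04c5:11a2 plugged in" >> /var/log/usb-hook.log

To apply a modified rule without restarting the service, udevadm control --reload-rules usually suffices, and udevadm monitor --udev lets you watch events live while you plug the device in and out.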


Monday, February 9, 2015

Downloading subtitles

Subliminal is a command-line utility written in Python that downloads subtitles for TV episodes or movies.

While media players such as Kodi (formerly known as XBMC) offer this feature, it's a painful process to pause the video, open the subtitles menu, download the subtitles file, wait until the download completes and so on.
I stream videos directly from my NAS, so that wasn't even an option: the smart TV is unable to store the subtitles file anywhere, so it has to sit in the same location as the video.

With one little command, Subliminal fetches subtitles from various sources and finds what it thinks is the best subtitles track for the media files you throw at it.

Here's how to install and use it:

1. Try to find if you can install Subliminal directly from your package manager. If so, skip to step 4.
2. Install Python PIP.
3a. Recommended method: using virtualenv(wrapper): mkvirtualenv subtitles; pip install subliminal colorlog
3b. Alternate method: directly on the system: sudo pip install subliminal colorlog
4. Download the subtitles: subliminal -l en -s --color videoOrDirectory anotherVideoOrDirectory...
You can use other flags. These will download a subtitle track in English. The "-s" flag means the file will be named something.srt instead of something.en.srt. I use this option because some players are not able to associate the ".en.srt" subtitles file with the video. "--color" is just for eye candy. You can use "-v" for more messages.

To my knowledge subliminal won't browse directories recursively. You can achieve that effect with:
find . -type d -exec sh -c 'subliminal -l en -s  "{}"' \;
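
Alternatively, you can feed it the video files themselves (assuming .mkv / .mp4 / .avi extensions cover your collection):

find . -type f \( -name '*.mkv' -o -name '*.mp4' -o -name '*.avi' \) -exec subliminal -l en -s --color {} +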

Subliminal is smart enough to detect if the file is a video, and if it needs subtitles.
I was confused by the message "No video to download subtitles for". It is printed if there was no video in the path(s) you provided, or if the video(s) already contain a subtitles track. You can check that with mediainfo.

As with Kodi, you should follow certain naming conventions. It is best to include the full name of the show or movie in the title, and describe the episode as "S01E02" (Season 1 Episode 2) or "1x02". For best compatibility avoid spaces and non-ASCII characters. If you downloaded the video from the Internet, you should however try to keep the filename as it was, including the uploader's nickname and the video quality (HDTV, 720p, 1080p...) There might be a subtitles track for the exact video file you have.

Sunday, February 8, 2015

Ditch scp (SSH copy) and use rsync instead

That is a strong statement, I know.
You have been living happily, and using scp to transfer files between machines is a no-brainer. But here is the thing: HTTP, FTP, SMTP and SSH are bad file transfer protocols. Why, you ask? Because they offer no guarantee that you got yourself a carbon copy.

The solution? Checksums! And not MD5 or SHA1, which are both broken, but SHA256 or SHA512, both already available on your system (except on Windows...).

So you could sha256sum the file before transfer, transfer, then sha256sum the file after the transfer. But that's rather painful and you'd have to write a complicated script to automate this. There's a much simpler solution.

For some reason, people think rsync is a complicated tool. Maybe because it's got so many flags and options. But it's actually a very complete tool that does what you want without having to write complicated shell scripts.
Four amazing things it can do:

  • Syncing directories (great for backup), transferring only what needs to be transferred, deleting things that aren't present on one side if that's what you want
  • Resuming directory transfers where they left off, copying only what's needed
  • On-the-fly compression
  • Checksums
The last feature is a very interesting one, because you will know immediately if the transfer failed, whereas scp won't tell you anything and might act as if everything was fine. It's only six months later, when you try to restore a backup, that you realize all your data is lost forever...

So, replace this:

scp myfile.tar.gz remote-user@remote-host:/remote/directory/

with this:

rsync -avPe ssh myfile.tar.gz remote-user@remote-host:/remote/directory/

Of course you can transfer files the other way around. No additional flag is needed for directories, but you have to understand the importance of trailing slashes; they change what gets copied where.
Here I used the SSH protocol to perform the transfer, but there are other options, including the rsync protocol, which is commonly used to mirror distribution repositories across the world, among other things.
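
To make the trailing-slash behaviour concrete, here is a sketch of a directory sync (the --delete flag makes the destination an exact mirror, so use it carefully):

# Copy the *contents* of my-photos/ into /backup/my-photos/ on the remote host.
# Without the trailing slash on the source, you would end up with /backup/my-photos/my-photos/.
rsync -avP --delete -e ssh my-photos/ remote-user@remote-host:/backup/my-photos/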

Parallel Gzip with a progress bar

Everybody knows how to create a "tar.gz", probably using something like this:

tar zcvf foo.tar.gz bar-directory/

This is fine to compress something relatively small (a few megs).
However, if you have to archive hundreds of gigabytes, you probably want features such as:

  • Speeding things up
  • Displaying a progress bar
  • Displaying an ETA
This command* offers all of these:

tar cf - bar-directory/ -P | pv -s $(du -sb bar-directory/ | awk '{print $1}') | pigz > foo.tar.gz

You'll probably have to install pigz, which is a parallelized version of Gzip. You can substitute it with pbzip2 if you want a "tar.bz2" archive.

Supposing the CPU is the bottleneck and not the I/O (which is probably the case if you are working on the local filesystem), this will speed things up by roughly a factor of the number of CPU cores. For instance, instead of 1 hour with gzip, it took only 15 minutes with pigz on my machine with 4 CPU cores.
In fact it was almost as fast as copying the directory without archiving / compression.

As a bonus you get a nice progress bar in your terminal:


*On OS X and *BSD, du uses a different unit; adjust the awk part to convert the size into bytes.
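
For example, on OS X or FreeBSD a sketch of the same pipeline could use du -sk (kilobytes) and multiply by 1024:

tar cf - bar-directory/ | pv -s $(du -sk bar-directory/ | awk '{print $1*1024}') | pigz > foo.tar.gz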

Wednesday, January 28, 2015

ownCloud Client asking for password when starting a KDE session

UPDATE: On another machine I have, the fix described below is not enough. You have to make sure that the KWallet daemon is enabled for the current user. Enable with: System Settings => Account details => KWallet => Enable

The ownCloud client is integrated with KDE's password manager (KWallet).
When it starts, it checks if the kwallet is unlocked and looks for the password.

Yeah, that's how it should be, and that is very good software design. Unfortunately it has never worked, and the ownCloud client asks for the password every single time!

In the latest version the problem is actually explained in the dialog ("No keychain available"):



In my case, which seems to be the default, the problem is that kwalletd is not running when the ownCloud client starts. It's a timing issue.

My solution:


1. Open KDE's "Autostart" control module (a quick search from the start menu will get you there)
2. Change the ownCloud entry to point to a script of your own.
3. Click OK.
4. Write a small shell script that waits for kwalletd and then starts the ownCloud client (see the sketch after this list).



5. Make it executable.
6. Reboot (or kill kwalletd and try your script. Note that logging out doesn't kill the daemon.)
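
The script itself can be as small as this sketch (the 30-second timeout and the kwalletd / owncloud process names are assumptions for a typical KDE 4 setup):

#!/bin/sh
# Wait (up to ~30 seconds) for the KWallet daemon, then start the ownCloud client.
for i in $(seq 1 30); do
    pgrep -x kwalletd > /dev/null && break
    sleep 1
done
owncloud &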

Of course if your KWallet is protected by a password, then you will be asked to provide it.

VBoxManage made simple

As you might know, VirtualBox machines can be managed from the command-line. However, I find the syntax of the VBoxManage command-line utility cumbersome and hard to remember.

Copy this in .bash_aliases:
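
A stripped-down sketch of the idea (the full version also adds autocompletion and confirmation prompts for the dangerous actions) looks like this, with the VM name passed as the first argument:

# Thin wrappers around VBoxManage; pass the VM name as the first argument.
vm-start()       { VBoxManage startvm "$1" --type headless; }
vm-savestate()   { VBoxManage controlvm "$1" savestate; }
vm-powerbutton() { VBoxManage controlvm "$1" acpipowerbutton; }
vm-poweroff()    { VBoxManage controlvm "$1" poweroff; }
vm-running()     { VBoxManage list runningvms; }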



Now you can do the following:

vm-start starts a VM in headless mode
vm-savestate suspends a VM to disk
vm-powerbutton simulates a press on the power button
vm-poweroff simulates unplugging the power cord (some OS don't shutdown completely)
vm-running lists the running VMs

Autocompletion is enabled and dangerous actions will ask for confirmation.

(You will need to re-login or "source" .bash_aliases whenever you add / remove a VM.)
(The script doesn't support VM names with spaces.)

Sunday, January 25, 2015

Backing up your data with Amazon S3 Glacier and rsnapshot. A complete guide, Part 1.


In this first part I'll tell you when to consider Amazon Glacier or not, compare full backups to incremental backups, and explain why you shouldn't "put all files in the same basket".

When to consider Glacier, and when not

Glacier is a great storage solution offered by Amazon for about $0.012 per GB per month, supposing:

  • You want something cheap but reliable;
  • You understand that by "Glacier" Amazon means that your files are frozen: it takes a while to get to the glacier and heat up your data so you can retrieve it ;-)
  • You almost never need to access the data from the server (doing so will cost you something, and you will have to wait about 4 hours before getting a download link);
  • You already have some primary backup storage (a second disk will do) where you can restore data immediately if needed;
  • You understand that Glacier is only meant to protect your data in case of fire or other major events, not simply to restore a file deleted by mistake on the "live" storage;
  • You don't plan to delete your files less than 90 days after uploading them (otherwise it will cost something);
  • You are OK with the principle of storing and retrieving archives instead of single files.

With these considerations in mind, if the delay (~4 hours) to retrieve your data is unacceptable, you are looking at the wrong product: try regular Amazon S3 storage. It costs about 3 times as much but it's no slower than downloading this web page.
In fact there are plenty of use cases where Amazon Glacier is not the right solution, unless you are willing to accept its limitations.

Full backups and incremental backups explained

If you copy a folder preserving attributes and links (cp -a src dest), you are doing a full backup. If the source folder is 100 GB and you want to keep backups for the last 7 days, you will need 700 GB of storage, and each copy will take 20 to 25 minutes. If you have 1 TB, we are talking about 3 to 4 hours!

The nice thing about full backups is that you can browse the backup just like you would with the "live" copy because it's a plain old regular folder! There is no need to extract archives or to use the backup solution's command-line client.

But as you can see, full backups use a lot of storage and are not particularly quick. The alternative is incremental backups. Instead of making a whole copy of the source folder, you only do it the first time. The next time only the differences get saved. So if you add one character to a text file and that's all you did, the second backup is only 1 byte (I am simplifying but you get the idea). The technical term to describe this would be a "delta".
A good command-line program to make incremental backups is rdiff-backup. 
One big flaw of this system is that you can't access the files directly, because the content of a file is split across backups. You will need to rebuild it from all the small pieces.
What people who rely on incremental backups usually do to mitigate the problem is create a full backup every other week or so.

My personal preference is rsnapshot. It's probably the best of both worlds. It gives you full backup-like folders while saving only the files that changed. So yes, if you change one byte in the file, a complete copy is made. That's the price to pay.

The little magic trick that rsnapshot uses is hard links. You see, when you list the content of a folder or you type rm somefile, you are only dealing with a symbolic name pointing to a record (called an "inode") on the file system; the inode contains all sorts of metadata but not the name. This means two things: not only is nothing erased from the disk when you ask to "remove a file", but you can also have two filenames pointing at the same content on the disk. This is known as a "hard link".
A "symbolic link", on the other hand, is the UN*X equivalent of shortcuts on Microsoft Windows: the "shortcut" is a special inode that stores the path of the target. If the target is removed, the shortcut is broken.
This means that rsnapshot never stores the same unchanged file more than once, and it explains why the very first run of rsnapshot takes much longer than, say, the exact same command run one hour later. That's why it is advised to run the first backup manually instead of letting cron do it: you can verify it works like it should, and it takes a long time.
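
You can see hard links in action in a terminal (the -i flag of ls prints inode numbers):

echo hello > original.txt
ln original.txt hardlink.txt      # create a hard link (ln -s would create a symbolic link)
ls -li original.txt hardlink.txt  # both names show the same inode number
rm original.txt
cat hardlink.txt                  # still prints "hello": the content is not gone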

The dilemma

There is one problem with rsnapshot. If you make an archive of the latest snapshot folder, which is supposed to be only a few megabytes bigger than the folder from an hour ago, you end up with the full 100 GB backup. You can send it to Glacier and it will be great, because when the time comes you'll get a full copy requiring almost no more work than extracting it.
The bad news: you will pay to store the same files again and again.

Incremental backups are much less practical to store on Glacier. First you have to keep a log of some sort to know when you stored the first version of the file and where all the deltas are that you need to rebuild the version you are interested in. This is very complex and cannot be done by hand.

Not all files are born equal

I have 200 GB of data to back up. But here's the thing: you are probably like me, and 90% of it is made of files that never change yet take a lot of space, typically photos and videos. Incremental backups are useless on that kind of file.
You must be very picky when choosing the folders you want to back up automatically.
This way you don't make useless copies of files you know will never change, and you reduce your costs.

Stuff you are working on gets royal privileges

I've got two requirements regarding files related to projects I am currently working on: there must be at least two copies accessible immediately and they must be synchronized as often as possible.
This can be achieved with versioning systems such as Git if you are working on code, or with Dropbox, Copy, Box, OwnCloud, ... for everything else.
If anything happens to my laptop, I can open a browser on another computer and access my files in less than a minute.
You think that's excessive? Imagine you are in a rush and you have only a few (dozen) minutes to print a paper, a Master thesis, the e-ticket for your flight in 3 hours, the PowerPoint presentation that begins in 10 minutes...

There's a rule of thumb in the storage world:
The more often the data needs to be accessed, the faster the retrieval, and the higher the cost.

You should still save these files in the "slow backup system", because you shouldn't trust Dropbox and the like to keep multiple copies of your files in several locations, and they usually delete old versions after a few months.

Continue to Part II

Saturday, January 24, 2015

Linux firewall and DLNA server


MediaTomb is a DLNA server for Linux that is great for streaming movies and music from a NAS or any network storage to a TV or any compatible device.

The server magically appears on the TV, and from there you can browse the disk for media. Its one big flaw: there is no authentication. Anybody on the network can not only see that you have a DLNA server running, but also watch all your content.

There are many tutorials out there that explain how to set up MediaTomb (which is simple thanks to its web interface, plus one change in the XML configuration to enable some authentication on the web page).

What you don't find is how to prevent people from seeing the DLNA server and watching the content.
This can be done easily, supposing the IP address of the client (such as a TV) never changes.

Simply add the appropriate rules in Netfilter to allow the one client to access the server, and block traffic for everyone else:

In my configuration, the default policy for the INPUT chain is DROP. Rule number 3 allows anybody from the network to access services on the server (which is not that secure, but well...). You can look at the line numbers by typing iptables -vnL --line-numbers.

To let only 172.16.0.123 access MediaTomb (the client's address is matched as the source of the incoming packets):

iptables -I INPUT 3 -i eth1 -p udp --dport 1900 -s 172.16.0.123 -j ACCEPT
iptables -I INPUT 4 -i eth1 -p tcp --dport 1900 -s 172.16.0.123 -j ACCEPT
iptables -I INPUT 5 -i eth1 -p tcp --dport 1900 -j DROP
iptables -I INPUT 6 -i eth1 -p udp --dport 1900 -j DROP

You can do the same for the web configuration interface, but I didn't bother because a username / password can be set there. I leave this as an exercise for you. (The IP address will be your computer's.)

Note: There is probably a way to match a port for both TCP and UDP in the same rule, but I couldn't find it.
Also, eth1 is my LAN network interface; on this interface everything but DLNA is accepted. On the WAN interface, accepted traffic is the exception, so there was no need to write new rules.
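
One last practical note: rules added with iptables -I are gone after a reboot. On Debian / Ubuntu, one way to make them persistent is the iptables-persistent package (the rules file path below is the one that package uses):

sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'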

DHCP subnet based on vendor in MAC address


As a network administrator you are probably doing some network segmentation, where you have internal servers in one subnet, IP phones in another, and so on.
You should probably use VLANs if you don't want these devices to "see" each other. But in other cases you only need to put them in a separate subnet and/or dynamically assign them particular IP addresses.

As you might know, the first half of a MAC address identifies the hardware vendor. If you are running a DHCP server such as the ISC DHCP Server on *nix, you can have devices from one vendor use a particular subnet / IP range.

Edit /etc/dhcp/dhcpd.conf:
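
Something along these lines does the trick with the ISC DHCP server; the vendor prefix (00:15:65) and the IP ranges are just examples to adapt:

# Group devices by the vendor part (OUI) of their MAC address.
class "ip-phones" {
    # substring(hardware, 1, 3) skips the hardware-type byte and keeps the 3-byte OUI
    match if substring(hardware, 1, 3) = 00:15:65;
}

subnet 192.168.1.0 netmask 255.255.255.0 {
    pool {
        allow members of "ip-phones";
        range 192.168.1.200 192.168.1.220;
    }
    pool {
        deny members of "ip-phones";
        range 192.168.1.100 192.168.1.150;
    }
}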

Watch a folder and send files by e-mail


My printer is able to convert faxes to PDF documents and save these documents on a network folder.
From there I could do many things, but I need a way to watch for new documents appearing in the folder.

But first, let's set up the Windows / Samba network folder.

Shared folder with Samba
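
A share definition along these lines goes into /etc/samba/smb.conf (the share name and the comment are examples; the options are explained below):

[IncomingFax]
    comment = Faxes received by the printer
    path = /srv/shr/IncomingFax
    browseable = yes
    writeable = yes
    valid users = @sambausers
    create mask = 0770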



The comment is shown in the file explorer. browseable controls whether the folder is listed, and writeable is needed to write and (more interestingly in this case) delete files.
valid users can contain users or groups (prefixed with @) that have permission to read and (if applicable) write to the folder.
create mask is required because the folder is meant to be shared with a group instead of belonging to a single user. Use 0750 if you want only the creator of a file to be able to remove it.

I created a sambausers group on my machine and put the appropriate users in it. Remember that you need a system account for each user, and that each account has to be configured through smbpasswd. By the way, you can back Samba with a database or LDAP if you like.

You need to make a new directory, let the sambausers group own it, and give the group read / write / execute permissions (chmod g+rwx) on it. The execute permission is needed to enter the folder, and the write permission to create files in it.

Restart Samba. Check you can access the folder from your file manager with the appropriate credentials.

Watch directories on your file system

One great way to do this is the incron daemon. It lets you set up cron-like tasks that execute a shell script or any executable program whenever a change is detected in the folder.

Install incron on your system.

Log in as a user who has permission to read files from the shared folder on your Linux box. You can use sudo -i -u theUser.

Open man incrontab in one terminal window.
In another window, fire up incrontab -e to edit the tasks associated with the user.
I have mine configured with /srv/shr/IncomingFax IN_CREATE /home/myuser/somescript.sh
You can watch for other events, just read the manual you just opened!

Now, every time a file is created in the IncomingFax folder, the script will be executed.

E-mail the file just added

Here is an example of the shell script that I use. It might not be the smartest way to do what it does (in particular, my incrontab entry doesn't pass the name of the new file to the script, even though incron's $# placeholder could provide it).
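
The script goes roughly along these lines (a sketch: the recipient address, the paths and the 5-second delay are placeholders to adapt):

#!/bin/sh
# Rough sketch: watch folder, state file, log file and recipient are placeholders.
WATCHED_DIR=/srv/shr/IncomingFax
STATE_FILE="$HOME/.lastfile"
LOG_FILE="$HOME/fax-mailer.log"
RECIPIENT=someone@example.com

# Give the printer (hopefully) enough time to finish writing the file.
sleep 5

# Pick the most recent PDF. The filesystem is case-sensitive: ".PDF" won't match.
NEWFILE=$(ls -t "$WATCHED_DIR"/*.pdf 2>/dev/null | head -n 1)
[ -z "$NEWFILE" ] && exit 0

# incron sometimes fires several times for the same document; skip duplicates.
[ "$NEWFILE" = "$(cat "$STATE_FILE" 2>/dev/null)" ] && exit 0

echo "$(date): sending $NEWFILE to $RECIPIENT" >> "$LOG_FILE"
# Plain ASCII French on purpose, to avoid encoding issues.
echo "Nouveau fax recu, voir piece jointe." | mutt -s "Nouveau fax" -a "$NEWFILE" -- "$RECIPIENT"

echo "$NEWFILE" > "$STATE_FILE"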


The script holds its "state" by using a ".lastfile" because it might happen that the script is executed several times with the same document. I don't know why it does that, I think it's an issue with the printer. You might not need to do that.
Also I happen to have a log file to log what happens with the script. You might not want that either. What you might like though is to verify the extension of the file. Note that the filesystem is case-sensitive so ".PDF" files won't be matched.

Make sure you have mutt installed: sending attachments from the shell is a bit involved, and sendmail alone is not enough. Note that you can attach several files at once. The argument to the echo command is the message body, and what comes after "-s" is the message subject.
I purposely left this example in French as a reminder that it is safer to avoid any encoding other than pure ASCII. There has to be a way to cope with UTF-8, but I didn't have time to investigate that issue.

This script has (at least) one limitation. I should actually rewrite it to make it more robust. I am assuming it takes less than 5 seconds for the printer to transfer the file. After 5 seconds I send the file by e-mail. There would be ways to know if the transfer is finished:
  • Use a program to read the file. That program probably knows if it's valid.
  • Wait until the file size has been constant for some time. Then we can assume the transfer is finished.
  • If there are a lot of transfers, we can assume the previous file was completely transferred when we detect a new one.


There you go. We can now watch a folder, filter files by extension, and send new files by e-mail.