Backup speed is very slow

After a server failure, I decided to set up regular backups of the server to my new NAS (QNAP TS-453).

Yesterday I ran a test: a complete backup of 150 GB of data took 2 hours and 43 minutes… the mean speed was about 7-8 MB/s… It's very slow!
The NAS and the server both have gigabit connections, as do the switch and the router. If I transfer a file with copy/paste directly from a PC to the NAS, the speed is about 20-30 MB/s (with peaks of 50 MB/s).

Why is the backup process so slow?
Could it depend on the processor speed? My server has a 2.2 GHz dual core…

AFAIK a backup is not a simple copy of files… think about compression.

Here are some stats from the backup email. Could you share yours?

StartTime 1419120216.62 (Sun Dec 21 01:03:36 2014)
EndTime 1419138145.64 (Sun Dec 21 06:02:25 2014)
ElapsedTime 17929.02 (4 hours 58 minutes 49.02 seconds)
SourceFiles 2114002
SourceFileSize 142312982744 (133 GB)
NewFiles 2114002
NewFileSize 142312982744 (133 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2114002
RawDeltaSize 141226348598 (132 GB)
TotalDestinationSizeChange 92120882883 (85.8 GB)
Errors 0

This is the same thing I was thinking of… the tar compression/packaging process…
The documentation should warn the user that the backup requires a powerful CPU…
This backup process simply isn't suitable for low-end processors… most servers based on low-power CPUs are unfit (Atom, Celeron, Pentium G, or low-power AMD).
My servers are based on the AMD Turion II Neo N54L (2.2 GHz dual core), Intel Celeron G1610T (2.3 GHz dual core) and Pentium G (2.8 GHz dual core)… in my tests the backup process is very slow (from 7 to 11 MB/s).
With these CPUs, backing up about 1 TB of data never finishes, because it requires more than 24 hours!

Backing up a large amount of data stored/shared by the server requires a powerful CPU!

For the developers:
Does the backup compression/packaging take advantage of multi-core CPUs, or does it simply use one core?
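
A quick way to see whether a single compression stream is the bottleneck on a given CPU is a micro-benchmark like the sketch below (this assumes the backup compresses each volume with one single-threaded gzip stream; the data size and content are arbitrary choices):

```python
import gzip
import io
import os
import time

# Rough single-core compression micro-benchmark (a sketch, not duplicity itself).
# If the MB/s printed here is close to the observed backup speed, the backup
# is likely CPU-bound on one core's gzip throughput.

SIZE_MB = 256                    # amount of test data; arbitrary choice
block = os.urandom(1024 * 1024)  # 1 MB block; random data is a hard case for gzip

start = time.time()
sink = io.BytesIO()
with gzip.GzipFile(fileobj=sink, mode="wb", compresslevel=6) as gz:
    for _ in range(SIZE_MB):
        gz.write(block)
elapsed = time.time() - start

print(f"compressed {SIZE_MB} MB in {elapsed:.1f} s "
      f"-> {SIZE_MB / elapsed:.1f} MB/s on one core")
```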

In my experience I found NFS three times faster than SMB/CIFS.

The test was made on the same hardware (both server and NAS were the same); the NAS was a FreeNAS box.
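
If you want to reproduce this comparison on your own setup, a minimal write-throughput probe like the sketch below can help (the mount points are hypothetical examples; adjust them to where your NFS and CIFS shares are mounted):

```python
import os
import time

# Minimal write-throughput probe (a sketch): time writing the same amount of
# data to an NFS mount and to a CIFS mount of the same NAS.

MOUNTS = {"nfs": "/mnt/nas-nfs", "cifs": "/mnt/nas-cifs"}  # hypothetical paths
SIZE_MB = 512
block = b"\0" * (1024 * 1024)  # 1 MB of zeros; cheap to generate

for name, path in MOUNTS.items():
    target = os.path.join(path, "throughput-test.bin")
    start = time.time()
    with open(target, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reaches the NAS
    elapsed = time.time() - start
    print(f"{name}: {SIZE_MB / elapsed:.1f} MB/s")
    os.remove(target)
```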

Yes!
Doing some calculations, I noticed that even your server processes a backup at about 7.5 MB/s…
Very slow! :frowning:
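
(For anyone checking the arithmetic, the rate follows directly from the stats in the backup email above:)

```python
# Throughput from the log above: SourceFileSize / ElapsedTime.
size_bytes = 142312982744      # SourceFileSize
elapsed_s = 17929.02           # ElapsedTime in seconds

print(size_bytes / elapsed_s / 1e6)    # ~7.9 MB/s (decimal megabytes)
print(size_bytes / elapsed_s / 2**20)  # ~7.6 MiB/s, i.e. the ~7.5 figure
```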

For now I have some older logs from the old NAS… but the results are the same… When I get to the office I will post newer logs.

Here are two examples:
Server: HP MicroServer G7, NAS: QNAP TS-219 Pro
--------------[ Backup statistics ]--------------
StartTime 1426888970.06 (Fri Mar 20 23:02:50 2015)
EndTime 1426909859.51 (Sat Mar 21 04:50:59 2015)
ElapsedTime 20889.46 (5 hours 48 minutes 9.46 seconds)
SourceFiles 187977
SourceFileSize 139851160189 (130 GB)
NewFiles 187977
NewFileSize 139851160189 (130 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 187977
RawDeltaSize 139809639037 (130 GB)
TotalDestinationSizeChange 110998170558 (103 GB)
Errors 0

--------------[ Backup statistics ]--------------
StartTime 1427493753.41 (Fri Mar 27 23:02:33 2015)
EndTime 1427514749.14 (Sat Mar 28 04:52:29 2015)
ElapsedTime 20995.73 (5 hours 49 minutes 55.73 seconds)
SourceFiles 192060
SourceFileSize 140377354506 (131 GB)
NewFiles 192060
NewFileSize 140377354506 (131 GB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 192060
RawDeltaSize 140334395658 (131 GB)
TotalDestinationSizeChange 111317903766 (104 GB)
Errors 0

I can't use NFS. Everything has to be password-protected.

It can also depend on the size of the files that you want to back up.

As a matter of interest, with BackupPC and a P4 HT (1 Gb NIC) => higher than 55 °C during the job:

2.8 TB -> 1902 minutes -> 24.5 MB/s for large files (movies) over rsync
50 GB -> 50 minutes -> 16.9 MB/s for small files (odt, jpeg, pdf, png) over rsync

The Data Backup module of NethServer does not use rsync; it uses duplicity, which packs everything into tar archives.
Over rsync I would hope for more than 24.5 MB/s! With copy and paste to the NAS shared folder from my PC I get more speed (depending on file size).
Most of the files on my server are small (LibreOffice files, PDFs, TXT and the like, CAD files).

P.S. 1902 minutes is more than 24 hours… but a new backup starts every 24 hours… this means the second backup starts while the first is not finished yet… this makes the incremental backup impossible, because the complete backup (used for comparison) is not finished yet!

If the backup script is smart enough to check whether another backup task is running, you won't have two backups running at the same time…

I have a server with 4 TB synced with rsync to another machine… the first run was loooooong (3 days IIRC), but now the backup task runs in less than 30 minutes.

I repeat: I use duplicity, which is the basis of the NethServer data backup module.
Once a week it does a full backup, and on the other days it does an incremental backup.
I don't know whether duplicity is clever enough to check if a backup task is already running (but I think so).

Rsync is different: it synchronizes two directories (or the whole disk). Backup is a bit different.

Duplicity is not clever, but NethServer is. :smile:
The backup-data command checks whether a backup is already running and skips the run if so.

I usually start the full backup on Saturday, so if it hasn't ended by Sunday, it will skip Sunday's run.
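
For illustration, the "skip if already running" pattern usually boils down to a non-blocking lock, something like the sketch below (the lock file path and the run_backup stub are hypothetical; this is not NethServer's actual backup-data code):

```python
import fcntl
import sys

LOCK_PATH = "/var/run/backup-data.lock"  # hypothetical lock file location

def run_backup():
    print("running backup…")  # placeholder for the real duplicity invocation

lock_file = open(LOCK_PATH, "w")
try:
    # Non-blocking exclusive lock: fails immediately if another run holds it.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("Another backup is still running; skipping this run.")
    sys.exit(0)

run_backup()
# The lock is released automatically when the process exits.
```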

I like this thread about performance; I'd like to have more time to set up a test environment.
In the coming days I will collect more statistics and evaluate the different backup programs that were suggested here (obnam and attic).
Is there anybody using those tools who can share their experience?

@filippo: share your server specs.
We have to see what affects the backup performance: LAN link speed, CPU, RAM…

I don't have time to set up a test environment either; I'm too busy at work until the end of September. But I can share more backup logs…

Please share the logs and the server specifications, such as RAM, CPU (model, frequency, number of cores), LAN link speed…

AFAIK duplicity has --compress-algo and --bzip2-compress-level options…
Set the compression level to 0 (no compression) and see what changes…
Moreover, while the backup runs with compression, check your load (CPU, I/O).
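
A small load probe like the sketch below can run alongside the backup (this assumes the third-party psutil package; the sampling interval and duration are arbitrary). If one core sits near 100% while disk I/O stays far below the hardware's limits, compression is the likely bottleneck:

```python
import psutil  # third-party: pip install psutil

# Sample per-core CPU usage and cumulative disk I/O while the backup runs.
for _ in range(12):  # ~one minute of samples at 5 s each
    per_cpu = psutil.cpu_percent(interval=5, percpu=True)
    io = psutil.disk_io_counters()
    print("cpu per core %:", per_cpu)
    print("disk read/write MB:",
          io.read_bytes // 2**20, io.write_bytes // 2**20)
```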

@filippo_carletti: what are the default values? Are they hard-coded?

The backup destination is CIFS on a QNAP TS-439 Pro II, gigabit link, raw write speed around 60 MB/s.
NethServer is running on a 4-core i7 860 @ 2.8 GHz with 8 GB of RAM; the root fs is on RAID1 over a pair of WDC WD1003FBYX-0 drives (raw read speed above 110 MB/s).

I think that the most important slowness factor is the number of files:
1602954
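
For scale, the numbers from the first log in this thread work out to roughly a hundred files per second, i.e. several milliseconds of fixed overhead per file:

```python
# Per-file rate from the first backup email in this thread.
files = 2114002
elapsed_s = 17929.02

print(files / elapsed_s)         # ~118 files/s
print(elapsed_s / files * 1000)  # ~8.5 ms per file on average
```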

@zamboni, duplicity's defaults are probably optimized for remote cloud backups. I didn't check thoroughly, but I think the compression level can be adjusted only when encryption is used (and we don't support encryption yet).