After a server failure I decided to set up regular backups to my new NAS (QNAP TS-453).
Yesterday I ran a test: a complete backup of 150 GB of data took 2 hours and 43 minutes… the average speed is about 7-8 MB/s… It's very slow!!
The NAS and the server both have a gigabit connection, and so do the switch and the router. If I transfer a file with copy/paste directly from a PC to the NAS, the speed is about 20-30 MB/s (with peaks of 50 MB/s).
Why is the backup process so slow?
Could it depend on the processor "speed"? My server has a 2.2 GHz dual core…
This is the same thing I was thinking of… the tar "compression/packaging" process…
The documentation should warn the user that the backup requires a "powerful" CPU…
This backup process simply isn't suitable for "low-end" processors… most servers based on low-power CPUs are unsuitable (Atom, Celeron, Pentium G or low-power AMD).
My servers are based on an AMD Turion II Neo N54L (2.2 GHz dual core), an Intel Celeron G1610T (2.3 GHz dual core) and a Pentium G (2.8 GHz dual core)… after testing, the backup process is very slow (from 7 to 11 MB/s).
With these CPUs, backing up about 1 TB of data never finishes because it requires more than 24 hours!
Backing up a large amount of data contained/shared by the server requires a "powerful" CPU!
For the developers:
Does the backup compression/packaging take advantage of multi-core CPUs, or does it simply use one core?
The data backup module of NethServer does not use rsync; it uses duplicity, which packs everything into tar archives.
With rsync I would hope to get more than 24.5 MB/s! With copy and paste to the NAS shared folder from my PC I get higher speeds (depending on file size).
Most of the files on my server are small (LibreOffice files, PDFs, TXT and the like, CAD files).
P.S. 1902 minutes is more than 24 hours… but a new backup starts every 24 hours… this means that the second backup starts while the first is not finished yet… this makes the incremental backup impossible, because the complete backup (used for comparison) is not finished yet!
If the backup script is smart enough to check whether another backup task is running, you won't have 2 backups running at the same time…
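Something like this as a cron wrapper would do the trick (a minimal sketch; the lock path and the backup command are placeholders, not NethServer's actual ones):

```bash
#!/bin/bash
# Refuse to start if the previous backup still holds the lock.
# /run/my-backup.lock and backup-data.sh are hypothetical names.
exec 9>/run/my-backup.lock
if ! flock -n 9; then
    echo "Previous backup still running, skipping this run" >&2
    exit 1
fi
/usr/local/sbin/backup-data.sh
```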
I have a server with 4 TB synced with rsync to another machine… the first run was loooooong (3 days IIRC), but now the backup task runs in less than 30 minutes.
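Roughly this kind of rsync call, assuming an SSH-reachable destination (host and paths here are just examples, not my real ones):

```bash
# Mirror the data directory; after the first run only changed files are transferred.
# -a        preserve permissions, times, symlinks, etc.
# --delete  remove files on the destination that no longer exist on the source
rsync -a --delete /srv/data/ backuphost:/backup/data/
```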
I repeat: I use duplicity, which is the base of the NethServer data backup module.
Once a week it does a full backup, and on the other days it does an incremental backup.
I don't know if duplicity is clever enough to check whether a backup task is already running (but I think it is).
Rsync is different: it synchronizes two directories (or the whole disk). A backup is a bit different.
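Roughly, the underlying duplicity calls could look something like this (just a sketch; the paths and options are placeholders, I don't know the module's exact command line):

```bash
# Hypothetical weekly full backup to a CIFS mount of the NAS.
# --no-encryption: store unencrypted (gzip-compressed) tar volumes, no gpg
# --volsize 250:   split the archive into ~250 MB tar volumes
duplicity full --no-encryption --volsize 250 \
    /var/lib/mydata file:///mnt/nas-backup

# Hypothetical daily incremental run: only data changed since the previous
# run is packed, but the signatures from earlier runs still have to be
# read and compared against the current files.
duplicity incremental --no-encryption \
    /var/lib/mydata file:///mnt/nas-backup
```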
I like this thread about performance; I'd like to have more time to set up a test environment.
In the coming days I will collect more statistics and evaluate the different backup programs that were suggested here (Obnam and Attic).
Is there anybody using those tools who can share their experience?
AFAIK duplicity has --compress-algo and --bzip2-compress-level flags…
Set the compression level to 0 (no compression) and see what changes…
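For a quick test one might try passing the compression options through to gpg, something like this (an untested sketch; it assumes an encrypted run, since the options are handed to gpg via duplicity's --gpg-options, and the paths are placeholders):

```bash
# Hypothetical test run: tell gpg to skip compression entirely.
# Only meaningful when the backup is gpg-encrypted.
duplicity full --gpg-options "--compress-algo none" \
    /var/lib/mydata file:///mnt/nas-backup
```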
Moreover, during a backup with compression enabled, check your load (CPU, I/O).
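E.g. in a second terminal while the backup runs (standard tools, nothing NethServer-specific):

```bash
# CPU usage of the duplicity/gpg/tar processes:
top
# Disk utilization and wait times, refreshed every 5 seconds (from the sysstat package):
iostat -x 5
```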
@filippo_carletti: what are the default values? Are they hard-coded?
The backup destination is a CIFS share on a QNAP TS-439 Pro II, gigabit link, raw write speed around 60 MB/s.
NethServer is running on a 4-core i7 860 @ 2.8 GHz with 8 GB of RAM; the root fs is on RAID1 over a pair of WDC WD1003FBYX-0 drives (raw read speed above 110 MB/s).
I think that the most important slowness factor is the number of files: 1,602,954.
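For comparison, the file count of a data directory can be checked with something like this (the path is just an example of a directory included in the backup):

```bash
# Count regular files under an example data directory.
find /var/lib/nethserver -type f | wc -l
```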
@zamboni, probably duplicity defaults are optimized for remote cloud backup. I didn’t check thoroughly, but I think that compression levels could be adjusted only when used with encryption (and we don’t support encryption now).