One NethServer restic backup fails because of a mount error. I get an email notification with the following info:
Backup started at 2020-09-10 23:26:55
Pre backup scripts status: SUCCESS
umount: /mnt/mountpoint: not mounted
Backup directory is not mounted
Can’t initialize restic repository
Action ‘backup-data-restic hostname’: FAIL
Backup status: FAIL
Sep 10 23:38:00 hostname kernel: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
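Judging from that kernel message, the mount is being negotiated without an explicit SMB dialect. One way to rule the dialect out is to mount the share manually with an explicit `vers=` option. This is only a sketch: the server name, share name, and credentials file below are placeholders, not the real ones.

```shell
# Manual test mount pinning the SMB dialect (placeholders: <fileserver>, <share>)
mount -t cifs //<fileserver>/<share> /mnt/mountpoint \
    -o vers=2.1,credentials=/root/.smbcred

# Equivalent /etc/fstab fragment, if the share were mounted at boot:
# //<fileserver>/<share>  /mnt/mountpoint  cifs  vers=2.1,credentials=/root/.smbcred,_netdev  0  0
```

If both ends are up to date, `vers=3.0` should work as well; `vers=1.0` would only be needed for legacy servers and is discouraged.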
After your reply, and because I knew I had not done any updates on my NethServers for a while, I updated all my systems and rebooted.
The problem still persists. The share is not mounted via fstab; it is a normal (restic) backup job within NethServer. When I hit the check button while modifying the backup job, it does not work either. The target share is also on an up-to-date NethServer, so I guess this is a bug then?
I did another test: as the destination I used a share on a Windows 10 client instead of the CIFS share of a NethServer on the same network, and with that one it works. So the problem might not be on the NethServer doing the backup, nor on the VPN connection, but on the local NethServer serving the share?
/var/log/messages on the NethServer serving the CIFS share:
I have the following custom template on the server where I am trying to configure the backup:
/etc/e-smith/templates-custom/etc/dnsmasq.conf/99fog with a line:
I have the following custom template on the server that is serving the CIFS share (and which acts as our fileserver):
comment = Profiles directory
browsable = no
path = /var/lib/nethserver/profiles
read only = no
store dos attributes = Yes
create mask = 0600
directory mask = 0700
profile acls = yes
csc policy = disable
As those templates were not touched, I don't think they are the problem.
We have a local Proxmox server with NethServer VMs for:
We have a remote Proxmox server, where one more NethServer is installed as a VM.
This remote NethServer provides IMAP for email and Nextcloud, and thus has installed, among others: Fail2ban, firewall, IPS, Threat Shield, …
Some time ago we changed our networking setup because we were having some problems with it:
Before, we had set up an IPsec VPN between the local physical firewall and the remote NethServer. But we only had one physical NIC in the local Proxmox, so I had created two NICs for the firewall NethServer VM on Proxmox, but both were on the same bridge and connected to the internal switch. To separate the red and green networks, we created a green /25 from the 192.168.x.0-128 range and a red /29 from 192.168.x.248-255. That caused problems: in particular, Shorewall got confused and blocked some legitimate traffic like TeamViewer while producing strange log entries (see the separate linked thread if interested).

Now we have changed the whole network setup: 2 NICs on Proxmox, with the signal coming from the physical firewall to red on the neth-fw on a separate NIC/bridge, then from green on the neth-fw to the switch for the internal resources, so red and green now each have a separate /24 network. The VPN configuration had to be changed too, so it is an OpenVPN site2site now, between the local NethServer firewall as OVPN tunnel client and the remote NethServer as OVPN tunnel server. The site2site VPN is used to provide access to the roadwarriors, which connect to an OPNsense VM on the external Proxmox (with AD credentials plus OTP) and then reach internal resources through said site2site VPN.
Everything else works fine: I can access the local neth share from a Windows VM on the remote Proxmox, and I can ping local resources (dc-neth, fileserver-neth) from the remote NethServer… Wait, while testing I see that, after having changed the local red IP range, the remote NethServer still had some wrong DNS entries from the old network config. I have corrected the DNS entries now, and I also added the red network (only green was there) to the remote networks configuration of the OpenVPN server on the remote NethServer, while the local NethServer, which is the OVPN tunnel client, already had both red and green networks of the remote neth configured.
As said, I can ping the local fileserver from the remote NethServer by name and get a reply from the correct IP, but I cannot mount its file share on the remote neth. I can mount another share from a local Windows client on the same local Proxmox, and I can also access the local share provided by the local neth fileserver from a Windows VM on the remote Proxmox server, so I am a bit out of ideas.
Wait, I see shorewall:net2fw:drop:in messages in dmesg on the fileserver, coming from the source IP that is configured as the VPN P2P network. But why is Shorewall installed on the fileserver? I did not expect this and only saw it now in dmesg on the fileserver. So Shorewall must be a dependency of something installed on the fileserver?
The following applications are installed on the fileserver: Antivirus, ClamScan, File server, Restore data and Webserver…
I don't understand why Shorewall on the local neth fileserver blocks the attempt to mount the CIFS share, while it accepts access from a Windows VM on the remote Proxmox through the same tunnel anyway…
In the trusted networks of the fileserver I have the green IP range of the remote NethServer; maybe more networks have to be added, like the red one or the tunnel P2P IP? I thought this should not be necessary, as the remote Windows VM on the same green IP range can successfully access the share…
Not the P2P IP, just the green network of the remote NethServer. I thought this was enough, as I can access the share from a remote Windows VM on the same remote green network successfully. But as dmesg/Shorewall on the local fileserver shows the P2P IP being blocked, I will try adding it and report back.
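To see exactly which source IP Shorewall is dropping (and therefore what to add to the trusted networks), the kernel log on the fileserver is the place to look. Below is a sketch: the log line and all IPs in it are made-up examples, not my real addresses.

```shell
# On the fileserver, dropped packets show up in the kernel log, e.g.:
#   dmesg | grep -i 'Shorewall:net2fw'
#   grep Shorewall /var/log/messages
# A sample drop line (IPs are placeholders) and how to pull the blocked
# source address out of it:
logline='Shorewall:net2fw:DROP:IN=eth0 OUT= SRC=10.9.8.1 DST=192.168.1.10 PROTO=TCP DPT=445'
echo "$logline" | grep -o 'SRC=[0-9.]*'   # -> SRC=10.9.8.1
```

DPT=445 (the SMB port) in such a line would confirm it is the CIFS mount attempt being dropped rather than some other traffic.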
That worked - at least for the mount from terminal
Now I will reboot the server, or umount the CIFS share, and try to reconfigure the backup job.
Well, at least the configuration of the backup works now, so the mount problem is solved. But starting the job resulted in a failure, because the restic config file already exists from earlier restic backups of a job I have deleted in the meantime.
Backup started at 2020-09-16 21:49:36
Pre backup scripts status: SUCCESS
Fatal: create repository at /mnt/mountpoint failed: config file already exists
Fatal: wrong password or no key found
Action ‘backup-data-restic hostname_restic’: FAIL
Backup status: FAIL
Do I have to delete the content and start with a new full backup? I hoped I could reuse the existing repo of the earlier restic backups, as there is already quite some data (over 70 GB) in it.
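As I read the two Fatal lines, there are two separate problems: `restic init` refuses to run because the repository config already exists, and the new job's password does not match any key of the old repository ("wrong password or no key found"). If the old job's password is still available, the repo itself can be checked and an additional key added, something like the sketch below. This is an assumption on my side: the repository path is a placeholder, and whether the NethServer job can then reuse the repo depends on how it stores the backup password.

```shell
# Check whether the existing repo opens with the OLD job's password
export RESTIC_REPOSITORY=/mnt/mountpoint   # placeholder path
export RESTIC_PASSWORD='old-job-password'  # placeholder
restic snapshots

# Register the NEW job's password as an additional key on the same repo,
# so both passwords unlock it (restic prompts for the new password):
restic key add
```

Without the old password, the existing repo data is unrecoverable and the only option would be to empty the destination and let the job initialize a fresh repository.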
Btw, I tested once again what we had discussed some time ago: configuring share/subfolder as the destination for the restic backup. Although this config could be saved successfully, I saw that the folders for the restic backup were created in share, not in share/subfolder. So I now share path/subfolder directly, and can live with that.
Thanks all for your contributions which helped me track down the issue and solve it.