Backup fails because of mount error

One NethServer restic backup fails because of a mount error. I get an email notification with the following info:

Backup started at 2020-09-10 23:26:55
Pre backup scripts status: SUCCESS
umount: /mnt/mountpoint: not mounted
Backup directory is not mounted
Can’t initialize restic repository
Action ‘backup-data-restic hostname’: FAIL
Backup status: FAIL

In /var/log/messages I found:

Sep 10 23:38:05 hostname kernel: CIFS VFS: Error connecting to socket. Aborting operation.
Sep 10 23:38:05 hostname kernel: CIFS VFS: cifs_mount failed w/return code = -4

And a bit earlier:

Sep 10 23:38:00 hostname kernel: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.

How can this be fixed?

Hm, it should be fixed already. I assume you want to back up to a share using the old CIFS/SMB1 protocol.

After your reply, and because I knew I had not done any updates on the NethServers for a while, I updated all my systems and rebooted.

The problem persists. The share is not mounted via fstab; it is a normal (restic) backup job within NethServer. When I hit the check button while modifying the backup job, it does not work either. The target share is also on an up-to-date NethServer, so I guess this is a bug then?

I would really love to fix this issue, so maybe someone from @support_team has an idea what could cause this and how it could be fixed?

Anyone please?

I cannot mount this share anymore, and I would really like to configure a backup again.

mount -t cifs -o username=xy,password='yx' //server/share /mnt/mountpoint does not work either.

I get mount error(115): Operation now in progress.

dmesg shows:
[11959.701027] CIFS VFS: Error connecting to socket. Aborting operation.
[11959.702170] CIFS VFS: cifs_mount failed w/return code = -115

Sorry for the late answer.

Please try to add vers=1.0 to your mount line like

mount -t cifs -o vers=1.0,username=xy,password=yx //server/share /mnt/mountpoint

to check if an old protocol is the problem.
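
Since the kernel message is “Error connecting to socket”, it is also worth checking whether the file server answers on the SMB port at all before blaming the dialect. A minimal check from the NethServer that runs the backup, using only bash's built-in /dev/tcp (server is the same placeholder as in your mount line):

# quick TCP reachability test for the SMB port (445); no extra tools needed
timeout 5 bash -c '</dev/tcp/server/445' \
  && echo "port 445 reachable" \
  || echo "port 445 NOT reachable (connectivity/firewall problem)"

If that fails or times out, the problem is connectivity or a firewall rather than the SMB protocol version.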

Maybe you have custom templates on one of your servers that keep the old protocol version?
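
To rule that out quickly, you can list the custom templates and grep the generated Samba config for a pinned protocol version (paths below are the standard NethServer locations; adjust if yours differ):

# list all custom templates on the server
find /etc/e-smith/templates-custom -type f

# look for an explicitly configured SMB protocol version
grep -ri "protocol" /etc/samba/smb.conf /etc/e-smith/templates-custom/ 2>/dev/null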

Another thread with the same error:

Hi mrmarkuz, no problem :slight_smile:

Tried with vers=1.0 and it does not work either, same thing. The hostname is pingable, and it is not a DNS problem either, as I also tried to mount via IP address instead of hostname.

I also created a new share, but it does not work either. Those shares are accessible from a Windows VM on the same Proxmox host as the NethServer that is trying to configure the backup or to mount via console. :confused:

Try adding

sec=ntlm,vers=1.0
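
For example, the full mount line would then look like this (username, password, server and share are the same placeholders as in your earlier attempts):

mount -t cifs -o sec=ntlm,vers=1.0,username=xy,password=yx //server/share /mnt/mountpoint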

Same result.

I did another test. As destination I used a share on a Windows 10 client instead of the CIFS share of a NethServer on the same network, and with this one it works. So the problem might not be on the NethServer trying to back up, nor on the VPN connection, but on the local NethServer serving the share?

/var/log/messages on the NethServer serving the CIFS share:

Sep 16 12:19:01 hostname smbd_audit: [2020/09/16 12:19:01.801902, 0] ../../lib/param/loadparm.c:784(lpcfg_map_parameter)
Sep 16 12:19:01 bdc smbd_audit: Unknown parameter encountered: "profile acls"
Sep 16 12:19:01 bdc smbd_audit: [2020/09/16 12:19:01.801928, 0] ../../lib/param/loadparm.c:1843(lpcfg_do_service_parameter)
Sep 16 12:19:01 bdc smbd_audit: Ignoring unknown parameter "profile acls"

Maybe this helps to find the source of the problem?
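
(Side note: as the log itself says, Samba just ignores the unknown parameter, so these lines are warnings rather than errors. They can be reproduced without any client by validating the config with testparm, the standard Samba check tool, assuming the default config path:

# parse the Samba config and show any unknown/ignored parameters
testparm -s /etc/samba/smb.conf 2>&1 | grep -i "unknown parameter"
)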

Do any of the (neth)servers have custom templates?


EDIT - Information about error return codes (though not sure how useful it can be):

4 = internal mount bug

115 = 64 + 32 + 16 + 2 + 1, where:

1 = incorrect invocation or permissions
2 = system error (out of memory, cannot fork, no more loop devices)
16 = problems writing or locking /etc/mtab
32 = mount failure
64 = some mount succeeded


EDIT: Care to explain how the VPN connection is involved?

I have the following custom template on the server where I try to configure a backup:
/etc/e-smith/templates-custom/etc/dnsmasq.conf/99fog with a line:
dhcp-boot=undionly.kpxe,ipaddress

I have the following custom template on the server that is serving the CIFS share (and which is acting as our fileserver):
/etc/e-smith/templates-custom/etc/samba/smb.conf/71profiles:
[profiles]
comment = Profiles directory
browsable = no
path = /var/lib/nethserver/profiles
read only = no
store dos attributes = Yes
create mask = 0600
directory mask = 0700
profile acls = yes
csc policy = disable

As those templates were not touched, I don't think they are the problem.

We have a local Proxmox server with NethServer VMs for:
-DC
-Fileserver
-Firewall

We have a remote Proxmox server, where one more NethServer is installed as a VM.

This remote NethServer provides IMAP for email and Nextcloud, and thus has installed, among others: Fail2ban, firewall, IPS, Threat shield, …

Some time ago we changed our networking setup because we had some problems with it:

Before, we had set up an IPsec VPN between the local physical firewall and the remote NethServer. But we only had one physical NIC in the local Proxmox host, so I had created two NICs for the firewall NethServer VM on Proxmox, yet both were on the same bridge and connected to the internal switch. To separate the red and green networks, we created a /25 from 192.168.x.0-128 and a red /29 from 192.168.x.248-255, but that caused problems: in particular, Shorewall got confused and blocked some legitimate traffic like TeamViewer while producing strange log entries (see the separate linked thread if interested)…

Now we have changed the whole network setup (two NICs on Proxmox; traffic coming from the physical firewall → to red on the neth-fw, on a separate NIC/bridge → then to green on the neth-fw → switch to internal resources), so red and green now each have a separate /24 network. The VPN configuration had to be changed too, so it is an OpenVPN site-to-site now, between the local NethServer firewall as OVPN tunnel client and the remote NethServer as OVPN tunnel server. The site-to-site VPN is used to provide access to the roadwarriors, which connect to an OPNsense VM on the external Proxmox (with AD credentials plus OTP) and then reach internal resources through said site-to-site VPN.

Everything else works fine: I can access the local neth share from a Windows VM on the remote Proxmox, and I can ping local resources (dc-neth, fileserver-neth) from the remote NethServer… Wait, while testing I see that, after having changed the local red IP range, the remote NethServer still has some wrong DNS entries from the old network config. I have corrected the DNS entries now, and I also added the red network (only green was there) to the remote-network configuration of the OpenVPN server on the remote NethServer, while the local NethServer, which is the OVPN tunnel client, already had the red and green networks of the remote neth configured.

As said, I can ping the local fileserver from the remote NethServer by name and get a reply from the correct IP, but I cannot mount its file share on the remote neth. I can mount another share from a local Windows client on the same local Proxmox. I can also access the local share provided by the local neth fileserver from a Windows VM on the remote Proxmox server, so I am a bit out of ideas.

Wait, I see shorewall:net2fw:drop:in messages in dmesg of the fileserver, coming from the source IP that is configured in the VPN P2P network. But why is Shorewall installed on the fileserver? I did not expect this and only noticed it now in the fileserver's dmesg. So Shorewall must be a dependency of something installed on the fileserver?

The following applications are installed on the fileserver: Antivirus, ClamScan, File server, Restore data and Web server…

I don't understand why Shorewall on the local neth fileserver blocks the attempt to mount the CIFS share, while it accepts access from a Windows VM on the remote Proxmox through the same tunnel… :confused:

In the trusted networks of the fileserver, I have the green IP range of the remote NethServer. Maybe more networks have to be added, like the red network or the tunnel P2P IP? I thought this should not be necessary, as the remote Windows VM on the same green IP range can successfully access the share…

Shorewall is installed by default on NethServer.

Did you add your VPN network to the trusted networks on the fileserver?

Not the P2P IP, just the green network from the remote NethServer. I thought this was enough, as I can access the share from a remote Windows VM on the same remote green network successfully. But as dmesg/Shorewall on the local fileserver shows the P2P IP being blocked, I will try and report back.

You may try to add the P2P IP too, maybe it helps.
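
If you want to do it from the console instead of the Server Manager, it could look roughly like this. This is only a sketch: NethServer keeps the trusted networks in the networks e-smith database, but the /32 entry and the trusted-networks-save event name are my assumptions, so the Trusted networks page in the Server Manager remains the safer route (10.x.x.x stands for your OVPN P2P IP):

# show the currently configured trusted networks on the fileserver
db networks show

# add the OpenVPN P2P address as a /32 entry (assumed syntax and event name)
db networks set 10.x.x.x network Mask 255.255.255.255 Description "ovpn p2p"
signal-event trusted-networks-save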

That worked - at least for the mount from the terminal :smiley:

Now I will reboot the server or unmount the CIFS share and try to re-configure the backup job :+1:

Well, at least the configuration of the backup works, so the mount problem is solved. But starting the job resulted in a failure because the restic config file already exists from earlier restic backups of a job I have deleted in the meantime.

Backup: job_name
Backup started at 2020-09-16 21:49:36
Pre backup scripts status: SUCCESS
Fatal: create repository at /mnt/mountpoint failed: config file already exists

Fatal: wrong password or no key found
Backup failed
Action ‘backup-data-restic hostname_restic’: FAIL
Backup status: FAIL

Do I have to delete the content and start with a new full backup? I hoped I could reuse the existing repo of the earlier restic backups, as there is already quite some data (over 70 GB) in it.

I think this would be the safest way. You can’t use the same share for multiple backups with restic.
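
A sketch of how that cleanup could look, assuming the old repository sits in the root of the share and you mount it at /mnt/mountpoint as in your earlier tests. config, data, index, keys, locks and snapshots are the standard restic repository entries; old-repo is just an example name, and you could equally delete the old data if the 70 GB are no longer needed. This would also explain the "wrong password or no key found" line: the new job apparently cannot open the leftover repository with its own key.

# mount the share manually (use whatever options worked from the terminal)
mount -t cifs -o username=xy,password=yx //server/share /mnt/mountpoint

# move the old restic repository structure out of the way
mkdir /mnt/mountpoint/old-repo
mv /mnt/mountpoint/config /mnt/mountpoint/data /mnt/mountpoint/index \
   /mnt/mountpoint/keys /mnt/mountpoint/locks /mnt/mountpoint/snapshots \
   /mnt/mountpoint/old-repo/

umount /mnt/mountpoint

After that, the backup job should be able to initialize a fresh repository on the next run.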

Ok, so this might take a while :smiley:

Btw. I tested once again what we had discussed some time ago: configuring share/subfolder as the destination for the restic backup. Although this config could be saved successfully, I saw that the folders for the restic backup were created in share, not in share/subfolder, so I shared path/subfolder directly and can live with that. :+1:

Thanks all for your contributions which helped me track down the issue and solve it.
