How to upgrade storage in your (very old) Buffalo NAS

My little experience upgrading an old LS-WXL that was too small to be useful (1TB RAID1, WD Green) to a better size (2TB RAID1, used WD Red drives).

  1. Look for a used QNAP or Synology
  2. Have some polite and respectful negotiation
  3. Don't let the seller deflect you; ask for the exact model so you can be sure of the support status and the firmware version supported by the manufacturer
  4. Wait for delivery
  5. Enjoy and set up the new box
  6. Remove the old hard drives
  7. Smash your old Buffalo for recycling, or fill it with old 300GB drives and use it as a gift for your worst enemy

:roll_eyes: :unamused:

Still there? Still want to know?
Really? (did I at least make you laugh a bit?)

Well… it’s not “that fun”.

At least for the LS-WXL and some other products of the same age, there are no options for:

  • expanding the storage in place (with different disks)
  • backing up and/or restoring the configuration

so any change to the storage becomes a “back to square one”, unless the replacement disk is only slightly bigger than the dead one. Another option is to back up the NAS content to a USB-connected hard drive, but currently I have no USB drive of a suitable size that isn't already occupied with other stuff/use cases.

I am a Linux noob. I use *ubuntu for lots of my tests (today I'm taking Regolith for a run as an alternative to Plasma), and, with some patience, I get along with bash/the terminal… I am no expert, but I don't easily give up.

So… let’s take a ride…

You need:

  • Windows, to run the tools provided by Buffalo (NAS Navigator and the firmware updater are necessary/useful) and for a “one more thing”…
  • a DHCP server
  • a decent 1GbE switch
  • a Linux installation on a desktop computer, with at least a 1GbE network card (PCI/PCIe, at least to keep CPU usage low during the long transfer. Don't waste a good Intel card on this); the next points relate to the Linux installation
  • SMBv1 support
  • a partitioning tool (gparted or gnome-disks are nice enough)
  • enough knowledge of your distro to install XFS support (if not available by default)
  • an easy tool for mounting SMB shares (AFAIK there's no NFS support, but don't quote me on that, please) if your skills are lacking (I go for Gigolo)
  • mdadm
  • something nice to do, drink, or eat while waiting… away from your PC!
  • a lot of time and patience

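Before starting, the Linux-side requirements can be sanity-checked in one go. A minimal sketch (package names are my assumption and vary per distro: mkfs.xfs usually comes from xfsprogs, mount.cifs from cifs-utils; rsync is only needed if you use it for the copy later):

```shell
#!/bin/sh
# Report which of the tools used in the steps below are available.
for tool in mdadm mkfs.xfs mount.cifs rsync; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "missing: $tool"
    fi
done
```

Anything reported as missing can be installed with your distro's package manager before you plug in the disks.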
I am assuming that the Windows machine, the Linux desktop and the NAS are all on the same network segment/LAN.

  1. Take note of all the “meaningful” settings of your NAS (users, shares, network, alerts, energy saving). Screenshots, a text file, whatever you like most, but it has to be handy to refer to. I strongly suggest not using cell phone photos. Then shut down the NAS.
  2. While waiting for the shutdown, connect both “new” disks to the Linux computer via SATA and make sure that both disks are empty (no partitions at all). If not already installed, now it's time to install mdadm.
  3. Remove both old disks from the NAS; install only one “new” disk into the NAS, in bay 1. Time to mark it, if not already marked. This is necessary if you don't want RAID0.
    If you want RAID0, plug in both drives now: the firmware install will format both disks and create a striped setup.
  4. Connect one of the old disks to your Linux installation (two if you have a RAID0 source), then boot both machines.
  5. The NAS is currently in EM (emergency) mode; the LED will flash accordingly. Take a look at your manual to be sure about that.
  6. Time to credit the source…
    On Windows, in the folder of the firmware update tool, there's a file called LSUpdater.ini. Edit it with your favourite text editor and change the flags section like this
    VersionCheck = 0
    NoFormatting = 0
    then add
    then start the updater.
    I personally did not follow all the steps indicated on that page; I simply allowed reformatting of the first disk. The NAS was then in DHCP mode, and the updater was able to install the firmware, reboot the device, and land on this… “not that nice” login
    (username and password are currently the default ones)
    I do not know Japanese, so… this might come in handy

    But I did not choose this codepage; in the settings, Western European ISO codepages (ISO8859_1) are also available.
    Also… this sequence of things (boot, firmware install, changing the language) will take… more time than you would like. So… switch over to the Linux install.
    (edit and note time… NAS Navigator and the firmware updater are not that “polished”, but at least NAS Navigator tells you quite reliably the IP address and the status of the NAS, without having to decode LED blinks. Also, watch out if the IP address is in “APIPA mode”, i.e. 169.254.x.y; in this case, configuring a dual-stack IP (one address matching DHCP and one matching the APIPA range, with no gateway) might help to solve connectivity issues.)
  7. Boot up your favourite distro; if not already done, enable SMBv1, install XFS support, and please… be careful. You still have a backup on the other “old” drive, but don't be reckless, and remember that copying is far safer than moving.
    The drive you connected has… a lot of wasted space at the end (something like 8GB; I don't know if this is a way to leave a bit more room for different disk sizes), and at least six partitions, the last one being the biggest.
    The filesystem is XFS, and it contains all the data of your shares. Mount it wisely, don't mess around, and mount only that partition (if the drive is /dev/sdc, the partition will be /dev/sdc6)… but wait, not directly. The partition is part of an mdadm volume, and my distro assembled it quite automatically once I had the package installed, but assuming that didn't happen, this might be helpful
    mdadm -A -R /dev/md9 /dev/sdc6
    assuming that the “old” Disk 1 of the NAS shows up as sdc on your system. Add sudo if your user is not root.
    If mdadm and XFS support are already in place, you will see /dev/md9 available to mount wherever you like.
    I also really don't know how to handle an “old” RAID0 setup, mostly because I did not have to face one. Assuming that the partition structure is the same on both disks, and that the “old” Disk 2 is mapped to /dev/sdd, the correct command should be something like mdadm -A -R /dev/md9 /dev/sdc6 /dev/sdd6 (a striped array needs all of its members at assembly time), but check the mdadm documentation on RAID0 to make sure the command is right and the striped volume is working as intended.
  8. Back to the NAS. Did you already switch to your favourite language? Wonderful. Time to shut it down again and, after the shutdown, plug the “new” Disk 2 into the NAS. Then power the NAS on again.
  9. Time for decisions… RAID1, RAID0, or two separate disks? IMHO, RAID1 is the only option, but feel free to go with the flow. Part of the decision was already taken at step 3, if you plugged in only one disk.
  10. Format the “new” Disk 2 from the NAS management interface.
    If you want RAID1, pair Disk 1 with Disk 2 via the EDB settings. It's really counter-intuitive to use, but Disk 2 has to be formatted to allow the pairing. Disk 2 will be formatted again after pairing, and after that the NAS will sync the two filesystems, while still allowing you to write data to them.
    If you want dual disks, keep it like that :slight_smile:
  11. Back to Linux time!
    If, like me, you're starting from RAID1, it's only a matter of… browsing the mounted XFS partition: you'll find a directory for every share created in the NAS's previous life. Create matching shares (with the correct options and permissions/restrictions); if you need per-user access, be smart and create the users first, then the shares with their restrictions. Then mount the shares.
  12. It's copy time. Yeah. It is. Your brewed infusion, your nicest book, your preferred potion. Launch the copy from the mounted mdadm volume to the mounted SMB shares. Then go away… If you chose the dual-disk setup, the same thing has to be done with the “old” Disk 2, copying its data into the share on the “new” Disk 2. And keep brewing, drinking, reading, enjoying… Maybe a little tour of AnyDesk as a “remote checkup tool” (available also for Linux and smartphones) could ease the stress of getting back to the screen. Or…
  13. Or you go looking for the SMBv2 hack. YES!
    Buffalo kept updating the firmware for my device until a few months ago because of a vulnerability, but never plainly allowed SMBv2 on this device. Yet the Samba inside this box is already SMBv2-capable!
    Windows… still there, dude? Go for ACP-Commander and please… unlock the power of Buffalo!
  14. Wait for the end of the copy, enable SSH on the NAS via ACP-Commander, create a root password, and it's time for SCP! Some are brave and use a console emulator; I go with portable WinSCP, which also lets you edit the files “in place”.
  15. Create a copy of /etc/init.d/ from the NAS; then, if you're a pro of the console, you can edit via SSH, but I played it safe and copied it to my computer.
    Look for the row /usr/local/sbin/nas_configgen -c samba
    and add the row sed -i '2 i\ max protocol = SMB2' /etc/samba/smb.conf immediately after it
    Copy the file back to the NAS, overwriting the existing one
    Then reboot the NAS. And wait.
  16. After a lot of time (at least 15-20 minutes), when I was already convinced that I had screwed up my crappy NAS, the SMB shares became visible and… browsable by Windows 10 without any tips and tricks about enabling SMBv1 (which is bad, and I strongly suggest you not use it)
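If you want to see what the sed line from step 15 actually does before touching the NAS, you can rehearse it on a throwaway file. A harmless demo (the path under /tmp is a stand-in for the NAS's real /etc/samba/smb.conf):

```shell
#!/bin/sh
# Rehearse the smb.conf edit on a scratch copy instead of the real NAS file.
cat > /tmp/smb.conf.demo <<'EOF'
[global]
 workgroup = WORKGROUP
EOF
# Same command as in the init script hack, pointed at the scratch file:
# insert " max protocol = SMB2" before line 2 (the backslash-space keeps
# the leading space, matching the indentation of smb.conf).
sed -i '2 i\ max protocol = SMB2' /tmp/smb.conf.demo
cat /tmp/smb.conf.demo
```

After running it, “ max protocol = SMB2” sits as line 2, right under [global], which is exactly where Samba expects global options.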

… ok… all the ladies and dudes are sleeping… curtain call… Don't forget to be patient. The NAS is slow at booting, copying, shutting down, syncing… If you're in a rush… do something else while waiting.
A “way” to save a humongous amount of time is to format the “new” Disk 1 in the NAS, create the shares, then shut down the NAS and use the Linux installation to copy from “partition 6” on the old Disk 1 to “partition 6” on the “new” Disk 1. But I wasn't able to mount/run both mdadm volumes at the same time, because after starting the first or the second mdadm device, the other one was marked as busy. Stopping the first one also wouldn't let me get to the second one, and vice versa. So, a reboot every time (thanks, crappy 60GB SSD with the portable Linux toolbox inside… at least you were snappy at rebooting).
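About that “busy” problem: I haven't verified this on an LS-WXL disk pair, but mdadm can assemble two arrays under distinct device nodes, and stopping whatever udev auto-assembled first usually frees the member partitions. An untested sketch, assuming the old disk shows up as /dev/sdc and the new one as /dev/sdd:

```shell
#!/bin/sh
# Untested sketch: assemble the old and new data partitions as two
# separate md devices so both can be mounted at once.
# Device names (/dev/sdc6, /dev/sdd6) and md numbers are assumptions.
mdadm --stop /dev/md9             # stop whatever was auto-assembled
mdadm -A -R /dev/md110 /dev/sdc6  # old Disk 1 data partition
mdadm -A -R /dev/md111 /dev/sdd6  # new Disk 1 data partition
mkdir -p /mnt/old /mnt/new
mount -o ro /dev/md110 /mnt/old   # read-only: the old disk is your backup
mount /dev/md111 /mnt/new
```

The -R flag runs each array even though only one RAID1 member is present, which is exactly the degraded state these single disks are in.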


I just created a NAS from 2nd-hand parts. However, I was lucky enough not to need to save any data from a previous NAS, since I didn't own a previous NAS… :smiley:
That said, I went another route: an HP Microserver G8 with 4x WD Re 4TB disks. Granted, that is a lot more expensive than your solution, but I didn't spend THAT much:

  • HP Microserver G8 (2nd hand, with 8GB memory): EUR170,-
  • Intel Xeon E3 1260L (2nd hand): EUR30,-
  • 4x WD Re 4TB (2nd hand, about 35k hours of use each = about 4 years): EUR250,-
  • 1x 120GB SSD (new) for ZIL: EUR30,-
  • 2x thumbdrive 64GB (as raid1 boot/system disks)

I installed TrueNAS 12.0-U2.1 on the thumbdrives and created a RaidZ pool from the 4 WD disks.
The plan is to have the NAS as an NFS target for 1 or more Proxmox compute nodes. Now putting money aside to buy the first compute node. Hopefully sometime before summer starts.

I have only bumped into 1 minor thing: since I boot from a USB thumbdrive and the HP MS G8 refuses to boot from the USB3 ports, it takes a bit of time to boot. It also means that extra plugins take like forever to install and start. So I have to refrain from plugins to keep the system usable. That's no biggie for me, since I plan to host any services in a VM or container on Proxmox later.

Did you already… massage the mainboard and the HPE peripherals with a whole shower of firmware updates?

(your hardware seems like a mining truck compared to mine… mine is more like a very old Opel CorsaVan)

To be honest, I didn't look for any firmware updates yet. But if I remember correctly, HPE, in all their wisdom, made those firmware updates available only when you have a support contract… :-/

/edit OK, looks like I was wrong about that:

Don't forget to update and connect iLO. It's really useful…

A step back to the LS-WXL. It seems way faster than before: spiking to 15MB/s is no longer impossible, so it seems a lot faster than it used to be (7-8MB/s), and it's still syncing the filesystem!!! Maybe it's the setup (I'm a guest on a small network during the updates); let's see if the performance will be similar back home.

I probably need to create a RHEL install to be able to update the firmware… That can be done from a separate flash drive. I'll have to look into this deeper…

Or casually find the “right” Service Pack for ProLiant…

What's your use case for the machine? Anything that would be using RAIDZ2 for storage would probably not have a good use for a SLOG; and if you do have a good use for a SLOG device, it should have some very specific characteristics. But this would make a great boot device.

Plugins (like all jails) are stored on your data pool, not your boot device, so the performance of that device is pretty much irrelevant to jail performance. But if you only have 8 GB of RAM, that could be the issue.

I didn't choose RaidZ2. The disks are in RaidZ (1 parity drive).
My intention is to use the NAS as an NFS target for Proxmox compute node(s).

Plugins (like all jails) are stored on your data pool, not your boot device

Didn't know that, but still, it took like forever to install and boot the system with even 1 plugin. And I think that functionality will be better taken care of by Proxmox, which will be available on much beefier hardware.
I already did some data transfer tests, and the Gb network adapter was absolutely the bottleneck.

Still not really a good combination of pool design and usage. See:

Yeah, Free/TrueNAS is slow to boot (even from SSD, and more so from a USB stick). That’s normal, but doesn’t have much of anything to do with the plugins.

You’re likely right there. As somewhat of a counterpoint, though, if the plugins you’re anticipating would be interacting with your data to a significant degree, putting them on the Free/TrueNAS box means the network is no longer a bottleneck.

:roll_eyes: I still don't get how Proxmox relates to this…

I’m certainly missing something :sweat:

That's, as always, the trade-off you have to make. I only have 4 spindles and don't want to lose too much disk capacity. So instead of 2 mirrors, leaving 2 disks of “effective” disk capacity, I chose a RaidZ, leaving 3 disks of “effective” disk capacity.
If I had like 24 disks, it would be a whole other story.
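That trade-off is easy to put in numbers. A quick sketch for 4 drives of 4TB each (raw figures, ignoring ZFS metadata overhead and TB-vs-TiB rounding):

```shell
#!/bin/sh
# Effective capacity for 4 drives of 4 TB each, before ZFS overhead.
disks=4
size_tb=4
echo "2 striped mirrors: $(( disks / 2 * size_tb )) TB"
echo "raidz1 (1 parity): $(( (disks - 1) * size_tb )) TB"
```

This prints 8 TB for the mirror layout and 12 TB for RaidZ, which matches the “10TB+ net” figure below once ZFS overhead and TiB conversion are taken off the 12 TB raw.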

Agreed, but then using Proxmox compute nodes seems unnecessary. And it would need MUCH beefier hardware for TrueNAS.
Alternatively, the HP MS G8 does have a PCIe slot in which I could fit a 10Gb network card. The Proxmox compute nodes are already planned with 2 network interfaces, of which 1 could also be a 10Gb interface. Then the compute nodes and the NAS can be on a separate subnet, with only the 2nd interface of the compute node(s) facing the “active” user LAN. This also makes it a sensible choice to have all services on the compute nodes.

A quote from the discussion on the TrueNAS forums: “The only thing you can really do to improve the write situation is to have lots of free space available on the pool.”

That's what my (initial) take is: I have 10TB+ of net storage and currently use less than 2TB (slowly growing).