10GbE: is it time?

I know that a desktop-grade 10GbE card can be bought for 100 euros/dollars. But SFP+ connections, Cat7 cabling, and even switches are still far from affordable.
Ethernet has climbed to 25GbE and 40GbE, but those are (in small environments) still well out of range.
That goes for the hardware and for the cabling alike.

Or am I wrong?

SFP+ is quite affordable. It isn't as cheap as Cat5/6, but it's 2 x US$16 optics plus a fiber patch cable of any length up to 1000m. Patch cables (in "normal" lengths) aren't that expensive either.

Neither are switches.

The issue is just that, in the vast majority of home or SMB use cases, it simply isn't necessary: gigabit is plenty fast. I'm using a bit of it at home, but it's overkill for any purpose I could legitimately claim to need.

Hi
At the moment, from my point of view, the only real justification for 10GbE for us SOHO and SME users is virtualization.
Video creation is another use case, but a fairly special one; Gigabit is enough for several simultaneous home streams, and the problem for home users is rather WLAN saturation…
But for virtualization, say 10GbE between the Proxmox hosts (or Hyper-V, VMware, or whatever…) and the central storage (NAS?), it can make a world of difference, provided that the central storage can handle such volume/speed. Reading is rarely the issue, but writing at those speeds, especially in RAID configurations…
I am actually contemplating upgrading part of my Proxmox environment to 10GbE, plus a 10GbE interface in the Synology NAS.
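As a rough sanity check, here's a back-of-the-envelope sketch of whether an array can absorb a sustained 10GbE write stream; the per-disk rate, RAID layouts, and overhead factor are all assumptions, so substitute your own hardware's numbers:

```python
# Back-of-the-envelope: can the array absorb a sustained 10GbE write stream?
# Per-disk rate, RAID layouts, and overhead factor are assumptions.

LINE_RATE_MB_S = 10e9 / 8 / 1e6    # 10GbE raw line rate: ~1250 MB/s
PROTOCOL_EFFICIENCY = 0.90         # assume ~10% lost to TCP/SMB/NFS overhead

def raid_seq_write_mb_s(per_disk_mb_s: float, disks: int, parity: int) -> float:
    """Rough sequential-write ceiling for a striped array: data disks times
    the per-disk streaming rate (ignores controller cache and random I/O)."""
    return per_disk_mb_s * (disks - parity)

needed = LINE_RATE_MB_S * PROTOCOL_EFFICIENCY  # ~1125 MB/s to keep the link busy

for label, disks, parity in [("RAID5, 4 HDDs", 4, 1), ("RAID6, 6 HDDs", 6, 2)]:
    ceiling = raid_seq_write_mb_s(180.0, disks, parity)  # ~180 MB/s per 7200rpm HDD
    verdict = "can keep up" if ceiling >= needed else "is the bottleneck"
    print(f"{label}: ~{ceiling:.0f} MB/s vs ~{needed:.0f} MB/s needed -> array {verdict}")
```

With spinning disks the array, not the link, is usually the limit; random writes land far below even this sequential ceiling.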

My 2 cents
Andy


Why would you need Cat7 (with the expensive connectors etc.) when Cat6A also supports 10GbE?
When I bought the motherboard for my home server about 3 years ago, it already came with 2 10GbE ports.
And to get on topic: yes, I do think it is about time for 10GbE connections to become mainstream, even in home networks. Mass storage devices especially should have 10GbE available.
Unfortunately, 10GbE switches aren't exactly affordable yet… (for an 8-port switch, count on prices of EUR 1200+).
Alternatively, you could opt for switches with 1Gb ports plus 2 or 4 10Gb SFP+ ports. Just as an example, a switch with 24 x 1Gb ports + 4 x 10Gb SFP+ ports runs about EUR 270.

See my link above; it's a new 4 x SFP+ (+ 1 x GbE) managed switch for US$125. Not as cheap as an unmanaged GbE switch, to be sure, but it certainly ought to be doable if you have a need for 10G. Or there's always used enterprise gear, which is really the way to go for most purposes. I have three used Dell enterprise switches (a 5524, a 5524P, and an X1052) I bought off eBay; all came with transferable lifetime warranties. Here's an X1052 (48 x GbE, 4 x SFP+) for US$165:
https://www.ebay.com/itm/Genuine-Dell-X-series-X1052-48-port-1Gb-SFP-48-port-Managed-Networking-Switch/123873444481?hash=item1cd76ec281:g:tUYAAOSwnMRdMjBr:sc:FedExHomeDelivery!31324!US!-1

Mikrotik CRS309-1G-8S+IN. 8 x SFP+, 1 x GbE, managed switch, with an MSRP of US$269:

US vendor, priced at US$235:
https://www.balticnetworks.com/mikrotik-8x-10g-sfp-ports-poe-cloud-router-switch.html
This appears to be a .eu vendor, listing it at US$210 excluding VAT.

Why? Certainly there's nothing preventing you from doing this if you want, but what common problem do you think it would solve in a home network?

The 8-port is "insanely low powered" for 8 x 10GbE + 1 x GbE ports (24W! insane!)
Enterprise-grade "old" switches in Europe cost from 2x to 5x US prices.

Sounds like you're moving the goalposts here, and note that it's 23 watts maximum, not typical. In any event, 10G gear is available at reasonable prices. No, it isn't as cheap as GbE gear, but it also isn't 1200 EUR for an 8-port switch or even close to it. I'm assuming that eurodk.com is a European vendor, but even if they aren't, here's one in .de for 234,24 EUR including VAT:
MikroTik Cloud Router Switch CRS309-1G-8S+IN, Layer2/Layer3 Switch - meconet-Shop

The gear is available, and prices are reasonable. But I still don't see that there's much call for it in the typical home or SMB environment, and certainly not as the default wired connection. If you're running a hypervisor and its storage in separate boxes, a 10G connection between them could well be justified (though you wouldn't need a switch at all in that case). Perhaps if you're doing video editing, with the storage on a remote box. But other than that, I'm having trouble seeing why it would be called for.

First things first: network backup.
If all the hosts sending data are 1GbE-capable, connecting the storage (server or NAS) at 10GbE could increase the aggregate throughput of the backup jobs, if the destination storage can handle it, of course. At minimum, other network activity slows down a bit less while a file transfer is running.
(I know, an out-of-band network for backup could also ease this situation, but it multiplies the points of failure and has to be built correctly, solving all the routing and firewall issues.)
Second: user data access.
With cheap consumer-grade SSDs being quite fast compared to both HDDs and a 1GbE connection, the "hurry" for performance is sometimes used to justify putting sensitive data locally (single disk, desktop or laptop PC) instead of in the assigned network space (server, RAID storage, UPS). Switching to 10GbE could defuse part of the performance complaint; adequate hardware is assumed to be in place to take advantage of 10GbE's higher transfer rate.
Last but not least: faster migrations.
If a server migration is underway and relies on the network for transferring data, switching to 10GbE could dramatically improve the timing of every run (tests, for instance), and that becomes a game-changer when consolidating multiple servers into one.
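To put rough numbers on it, here's a quick transfer-time sketch; the 0.85 efficiency factor and the data sizes are assumptions:

```python
# Rough transfer times for a bulk copy (a backup run or a migration test).
# The 0.85 efficiency factor is an assumption covering protocol and disk overhead.

def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.85) -> float:
    """Hours to move data_tb terabytes over a link_gbps link."""
    bytes_total = data_tb * 1e12
    bytes_per_second = link_gbps * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_second / 3600

for tb in (2, 10):
    print(f"{tb:>2} TB: ~{transfer_hours(tb, 1):.1f} h at 1GbE, "
          f"~{transfer_hours(tb, 10):.1f} h at 10GbE")
```

Repeated over several test runs of a migration, that difference adds up fast, provided the storage on both ends can actually feed the link.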

This is an SMB point of view; sometimes you have more than one "service provider" installed in a small organization.

By the way, if someone would like to talk about the trunk/LACP/bonding way of increasing data transfer between hosts or switches, I'd ask them to open a specific topic to exchange experiences and hints :slight_smile:

With the first sentence I agree; with the second I don't. As IT staff I can understand why some prices are higher, but the "most blocking" issue currently is cabling.
SFP+ DAC cables are short. Optics plus transceivers seem viable, but most transceivers have to be verified on every device when they aren't declared compatible, and I'm also quite afraid of the reliability of optical connections outside a network rack. Cat7 is out of the question on price, and Cat6A seems alien to the cabling companies here; they often call it "Cat6e", mixing up the standards.
And in my opinion, Cat7 should be the way to connect at least between rooms, where cables don't run inside walls.

Too many physical standards for 10GbE, up to now. So there are too many different ways to end up with something incompatible.


Guaranteed-compatible optics are 16,00 EUR each to customers in .eu, compatible with whichever equipment you like:
https://www.fs.com/de-en/products/11552.html
I've used these, and they work well in the Chelsio, Dell, and Mikrotik equipment I've tried them with. Patch cables obviously depend on the length, but are also reasonable.

So, depending on the distance, around 40,00 EUR (two optics, one cable) per device to connect. Yes, it's more expensive than copper. No, it shouldn't be cost-prohibitive; the cost delta for the system is a couple of percent.
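For concreteness, a quick sketch of that delta; the patch-cable price and the machine cost are assumed figures:

```python
# What an SFP+ connection adds to one machine, as a share of its build cost.
# Optics price is from the listing above; cable and build cost are assumed.

optics = 2 * 16.00   # two SFP+ optics at ~16 EUR each
cable = 8.00         # assumed price of a short LC-LC fiber patch cable
build = 1500.00      # assumed cost of the machine being connected

delta = optics + cable
print(f"~{delta:.2f} EUR per device, {delta / build * 100:.1f}% of a {build:.0f} EUR build")
```

On a cheaper box the percentage is higher, of course, but it's still nowhere near prohibitive.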

I don't know, a bunch of 2mm-diameter cable seems a lot easier to manage than a bunch of 10mm-diameter cable, and the stuff I linked above is said to be tolerant of tight bends (which copper cable is also sensitive to).

But I'm not sure I'm seeing the point here, though a few issues are being raised:

  • 10 GbE networking is too expensive
    • It's more expensive than 1 GbE, to be sure, but the cost delta is still in the noise for a system build
  • 10 GbE is too complicated
    • Yes, you still have options; both copper and fiber are still in play. IMO (though I'm far from an expert in these things), SFP+ remains the way to go: compared to 10GBase-T, power consumption is lower, latency is lower, range is longer (300m with SR optics, tens of kilometers with others), and flexibility is greater (including the ability to use copper if/when needed, via either twinax DAC or SFP+-to-RJ45 modules). Standardize on SFP+ SR 850nm optics unless there's a particular need for something different.
  • We should be using 10 GbE more often
    • Go for it
  • 10 GbE should be standard for home/SMB environments
    • That will happen when there's a widespread perceived need for it, which (IMO) isn't likely; heck, most home installations are wireless at this point.

You're not thinking from the user's point of view. Fiber is easier to break through impact, twisting, or bending. Optical cable should be handled with care, but most users are careless.
Patch cables should be boxed in outside the rack.

On complexity, it's undeniable, but it is "not" too complicated: it's just far easier to pick the wrong part/connector/device/adapter.
On price, "cheap" applies to only a few products so far, and as much as I consider MikroTik a nice and serious manufacturer, IMO that's not enough to lead a change.
Some customers are still taken by the charm of "big brands"; keep in mind that there are sometimes cost-effective products that don't cost as much as a small car.
As for "better wireless than wired": IMO people never run backups while streaming…

The farther this thread goes, the less point I see to it.

In short: yes.

There are a lot of cheap used switches on the market with 1G copper ports and 2 to 4 10G module slots, and network cards too.
They are usable with optical or direct-attach copper cables.