10GbE: is it time?

I know that a desktop-grade 10GbE card can be bought for around 100 euros/dollars. But SFP+ connections, Cat7 cabling, and even switches are far from affordable.
Ethernet has moved up to 25GbE and 40GbE, but those are (in small environments) still well out of reach.
For the hardware:
And for cabling:

Or am I wrong?

SFP+ is quite affordable. It isn’t as cheap as Cat5/6, but it’s 2 x US$16 optics plus a fiber patch cable of any length up to 1000m. Patch cables (in “normal” lengths) aren’t that expensive either:

Neither are switches:

The issue is just that, in the vast majority of home or SMB use cases, it simply isn’t necessary–gigabit is plenty fast. I’m using a bit of it at home, but it’s overkill for any purpose I could legitimately claim to need.

At the moment, from my point of view, the only real justification for 10GbE for us SOHO and SME users is virtualization.
Video creation is another use case, but a fairly special one; Gigabit is enough for several simultaneous home streams, and the problem for home users is rather WLAN saturation…
But for virtualization, 10GbE between the hypervisor hosts (Proxmox, Hyper-V, VMware, or whatever…) and the central storage (NAS?) can make a world of difference - provided that central storage can handle such volume/speed. Reading is rarely the issue, but writing at those speeds, especially in RAID configurations…
I am actually contemplating upgrading part of my Proxmox environment to 10GbE, with a 10GbE interface in the Synology NAS.
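To gauge whether the storage side can keep up, here is a quick sanity check. All the numbers (per-disk write speed, disk count, RAID layout) are illustrative assumptions, not measurements of any particular NAS:

```python
# Rough sanity check: can the storage array's sustained write speed
# keep a 10GbE link busy? All figures below are hypothetical.

LINK_GBPS = 10                    # nominal link rate
LINK_MBS = LINK_GBPS * 1000 / 8   # ~1250 MB/s theoretical payload ceiling

# Assumed sustained sequential write speed per HDD (hypothetical)
hdd_write_mbs = 180

# In a RAID5/6 array, sequential writes roughly scale with the number
# of data-bearing disks (ignoring parity overhead and write caching).
data_disks = 4
array_write_mbs = hdd_write_mbs * data_disks  # 720 MB/s

print(f"Link ceiling : {LINK_MBS:.0f} MB/s")
print(f"Array writes : {array_write_mbs} MB/s")
print("Bottleneck   :", "storage" if array_write_mbs < LINK_MBS else "network")
```

With those assumed numbers the array, not the network, is the bottleneck for sustained writes - which is exactly the caveat above about the central storage needing to handle the volume.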

My 2 cents


Why would you need Cat7 (with the expensive connectors, etc.) when Cat6A also supports 10GbE?
When I bought the motherboard for my home server about three years ago, it already came with two 10GbE ports.
And to get on topic: yes, I do think it is about time for 10GbE connections to become mainstream, even in home networks. Mass storage devices in particular should have 10GbE available.
Unfortunately, 10GbE switches aren’t exactly affordable yet (for an 8-port switch, count on prices of EUR 1200+).
Alternatively, you could opt for switches with 8 or 16 1Gb ports and 2 or 4 10Gb SFP+ ports. Just as an example, a switch with 24 1Gb ports + 4 10Gb SFP+ ports goes for about EUR 270.

See my link above–it’s a new, 4 x SFP+ (+ 1 x GbE) managed switch for US$125. Not as cheap as an unmanaged GbE switch to be sure, but it certainly ought to be do-able if you have a need for 10G. Or there’s always used enterprise gear, which is really the way to go for most purposes. I have three used Dell enterprise switches (a 5524, a 5524P, and an X1052) I bought off eBay; all came with transferable lifetime warranties. Here’s an X1052 (48 x GbE, 4 x SFP+) for US$165:

Mikrotik CRS309-1G-8S+IN. 8 x SFP+, 1 x GbE, managed switch, with an MSRP of US$269:

US vendor, price at US$235:
This appears to be a .eu vendor, listing it at US$210 excluding VAT:

Why? Certainly there’s nothing preventing you from doing this if you want, but what common problem do you think it would solve in a home network?

The 8-port is “insanely low powered” for 8 x 10GbE + 1 x GbE ports (24W! insane!)
Enterprise-grade “old” switches in Europe cost 2x to 5x US prices.

Sounds like you’re moving the goalposts here–and note that it’s 23 watts maximum, not typical. In any event, 10G gear is available at reasonable prices–no, it isn’t as cheap as GbE gear, but it also isn’t 1200 EUR for an 8-port switch or even close to it. I’m assuming that eurodk.com is a European vendor, but even if they aren’t, here’s one in .de for 234,24 EUR including VAT:

The gear is available, and prices are reasonable. But I still don’t see that there’s much call for it in the typical home or SMB environment, and certainly not as the default wired connection–if you’re running a hypervisor and its storage in separate boxes, a 10G connection between them could well be justified (though you wouldn’t need a switch at all in that case). Perhaps if you’re doing video editing, with the storage on a remote box. But other than that, I’m having trouble seeing why it would be called for.

First things first: network backup.
If all the hosts sending data are on 1GbE, connecting the storage (server or NAS) at 10GbE could raise the aggregate throughput of the backup jobs - provided the destination storage can handle it, of course. At minimum, other activity slows down less while a file transfer is running.
(I know, an out-of-band network for backup could also ease this situation, but it multiplies the points of failure and has to be set up correctly, with all the routing and firewall issues solved.)
Second: user data access.
With cheap consumer-grade SSDs being quite fast compared to HDDs behind a 1GbE connection, the “need” for performance is sometimes used to justify keeping sensitive data locally (single disk, desktop or laptop PC) instead of in the assigned network space (server, RAID storage, UPS). Switching to 10GbE could address part of that performance complaint - assuming adequate hardware is in place to take advantage of 10GbE’s higher transfer rate.
Last but not least: faster migrations.
If a server migration is underway and relies on the network to transfer data, switching to 10GbE can dramatically improve the timing of every run (test runs, for instance), and that becomes a game-changer when consolidating multiple servers into one.
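To put rough numbers on the migration point - the dataset size and the efficiency factor below are hypothetical assumptions, not benchmarks:

```python
# Back-of-the-envelope migration-time estimate at 1GbE vs 10GbE.
# Dataset size and efficiency factor are illustrative assumptions.

def transfer_hours(dataset_tb: float, link_gbps: float,
                   efficiency: float = 0.85) -> float:
    """Hours to move dataset_tb terabytes over a link_gbps link,
    assuming a fixed fraction of line rate is actually achieved."""
    bits = dataset_tb * 8e12                       # TB -> bits (decimal)
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

dataset = 4.0  # TB to migrate (hypothetical)
print(f"1GbE : {transfer_hours(dataset, 1):.1f} h")   # ~10.5 h
print(f"10GbE: {transfer_hours(dataset, 10):.1f} h")  # ~1.0 h
```

A test run that takes a full working day at 1GbE fits into a coffee break at 10GbE, which is what makes repeated trial migrations practical.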

This is an SMB point of view; sometimes you have more than one “service provider” installed in a small organization.

By the way, if someone would like to talk about trunking/LACP/bonding as a way to increase data transfer between hosts or switches, I’d ask him/her to open a specific topic to exchange experiences and hints :slight_smile:

I agree with the first sentence, but not the second. As IT staff I can understand why some prices are higher, but the biggest blocker currently is cabling.
SFP+ direct-attach (DAC) cables are short; optics plus transceivers seem viable, but most transceivers have to be verified against each device when not declared compatible, and I’m also rather worried about the reliability of optical connections outside a network rack. Cat7 is priced out of the question, and Cat6A seems alien to the cabling companies here - they often call it “Cat6e”, mixing up the two standards.
And in my opinion, Cat7 should be the way to connect rooms, at least where the cables don’t run inside walls.

There have been too many physical standards for 10GbE so far - and so too many ways to end up with something incompatible.


Guaranteed-compatible optics are 16,00 EUR each to customers in .eu, compatible with whichever equipment you like:

I’ve used these, and they work well in the Chelsio, Dell, and Mikrotik equipment I’ve tried them with. Patch cables obviously depend on the length, but are also reasonable:

So, depending on the distance, around 40,00 EUR (two optics, one cable) per device to connect. Yes, it’s more expensive than copper. No, it shouldn’t be cost-prohibitive–the cost delta for the system is a couple of percent.

I don’t know, a bunch of 2mm-diameter cable seems a lot easier to manage than a bunch of 10mm-diameter cable, and the stuff I linked above is said to be tolerant of tight bends (which copper cable is also sensitive to).

But I’m not sure I’m seeing the point here, though a few issues are being raised:

  • 10 GbE networking is too expensive
    • It’s more expensive than 1 GbE, to be sure, but the cost delta is still in the noise for a system build
  • 10 GbE is too complicated
    • Yes, you still have options–both copper and fiber are still in play. IMO (though I’m far from an expert in these things), SFP+ remains the way to go–compared to 10GBase-T, power consumption is lower, latency is lower, range is longer (300m with SR optics, tens of kilometers with others), flexibility is greater (including the ability to use copper if/when needed, via either twinax DAC or SFP+-to-RJ45 modules). Standardize on SFP+ SR 850nm optics unless there’s a particular need for something different.
  • We should be using 10 GbE more often
    • Go for it
  • 10 GbE should be standard for home/SMB environment
    • That will happen when there’s a widespread perceived need for it, which (IMO) isn’t likely–heck, most home installations are wireless at this point.

You’re not thinking from the user’s point of view. Fiber is easier to break through impact, twisting, or bending. Optical cable should be handled with care, but most users are careless.
Patch cables should be boxed/protected outside the rack.

On complexity: it’s undeniable, but it’s not “too” complicated - it’s just far easier to pick the wrong part/connector/device/adapter.
On price: only a few products are “cheap” so far, and as much as I consider MikroTik a nice and serious manufacturer, IMO that’s not enough to lead a change.
Some customers are still drawn to the charm of “big brands”; they should realize that there are cost-effective products that don’t cost as much as a small car.
On “better wireless than wired”: IMO people never run backups while streaming…

The farther this thread goes, the less point I see to it.