Needed: 10 Gbit mezzanine NICs for Dell C6220 II

It’s totally OT here, but maybe there’s a greybeard or two who can help. And it’s for the hardware that’s running my Proxmox cluster, so that might be relevant to @Andy_Wismer

I’m looking for three or four dual-port 10G SFP+ mezzanine NICs for my Dell PowerEdge C6220 II that would include the appropriate mounting bracket. Dell made an Intel NIC available for this model, and those are all over eBay at fairly reasonable prices, but none of the ones I’ve found include the mounting bracket, which appears to be unobtainium. Here’s what appears to be a representative example of such a NIC:

I think finding brackets for these would likely be the simplest solution, and I kind of like the idea of “simple.”

Also reasonably available on eBay for even less are these cards:

These do include the appropriate bracket, and indeed one I bought for testing mounts just fine in one of my nodes. The model number (not shown in the listing) is MCQH29-XFR, and searching for that model number finds me this manual:

That manual tells me it should do 10 GbE (or 40 Gbit/sec InfiniBand, which I’m not interested in), auto-detecting what it’s connected to. It fits the system physically, and Windows 10 recognizes it right away, though as an IPoIB adapter:

Following the recommendation here, I tried connecting it to some of my other 10G gear using this:

…along with both an SFP+ DAC, and SFP+ optics with a fiber patch cable, but got no link with any of them.

I know it’s a shot in the dark, but if anyone knows:

  • How to connect that Mellanox card to 10 GbE (or configure it for 10 GbE);
  • Where to get brackets for that Intel card (or get cards with the appropriate brackets); or
  • Some other solution to this

I’d really appreciate it.

Hi @danb35

Interesting, although I have never had the chance to work with 40 Gbit; 10 GbE is the max so far for me…

Are you using the Mellanox card on Proxmox or with Windows?

My 2 cents


The intent is to use it on Proxmox. I have Windows on that machine right now only because the sole download I could find of what should have been the last version of the Mellanox Firmware Tools to support the card was Windows-only. But that download doesn’t support the CX-2 cards after all, so the point was largely moot; I’ll probably swap the Proxmox boot device back in in the morning.


Hi @danb35

It would be interesting to see whether Debian / Proxmox supports the Mellanox card (and how well).
The same goes for the Intel card, though I assume Intel’s support is much better. Still, without a mounting bracket, a wobbly card would be asking for trouble…


Yeah. Particularly in the 6220, which is pretty dense. The bracket mounting points look the same on the Mellanox cards as on the Intel cards, but I’m not too fond of the idea of buying Mellanox cards just to scavenge the brackets (particularly as I’m not sure it will work, and I’ve already spent a bit of money on cards that won’t physically fit). But…

Sometimes the real issue with good second-hand servers (or PCs / notebooks) is the fact that their “prime time” is usually over, and a lot of add-ons won’t be fully available on the market anymore… :frowning:

E.g.: getting a second power brick for a notebook is usually still easily possible; finding a second docking station is often much more difficult…

Sometimes the opposite happens:
12 years back, I bought myself a brand-new HP notebook. I was quite pleased with the hardware, so I bought 2 additional docking stations (three in total) and the same number of power bricks.
It took me 4 days to set it up exactly the way I wanted / needed it. The first day actually using that notebook took me to Zurich to pick up a CD from a client in central Zurich, right next to the main station. While I was at the client, a big school, my car, parked right outside, was smashed and my new notebook stolen… :frowning:
Well, I wanted to buy the same notebook again, as I now had plenty of accessories for it… but I could not find another one; that model had been discontinued by HP, and none were left in stock in the shops…
It was only 2 years later that I got a second-hand model to use my docking stations with… That really sucked!


My 2 cents

That’s true. Server and other enterprise gear generally has a longer lifecycle than consumer gear, but it isn’t infinite.

What I’m ultimately trying to accomplish is to use NVMe SSDs with this system. And while it’s reasonably modern, it predates widespread use of those. No problem, you can buy PCIe adapter cards for those–but because of the high-density design of this system, I only have one usable PCIe slot, which I’m currently using for 10 GbE NICs. So it’s a cascading effect.


@danb35 the various “janky setups” from LTT tell an interesting story about PCIe lanes and whether a platform can actually drive all of them at full load (the first two generations of Ryzen and Threadripper were not at Intel’s level back then…), plus chipset and CPU limits and the overhead of ZFS parity management.

NVMe can deliver enormous bursts, but it’s not that uncommon to stumble over latency and synchronization issues.

So I have a solution worked out, mostly thanks to the folks at STH:

In short, install sysfsutils, and edit /etc/sysfs.conf to include these lines:

bus/pci/devices/0000\:82\:00.0/mlx4_port1 = eth
bus/pci/devices/0000\:82\:00.0/mlx4_port2 = eth

This doesn’t save that setting to the card’s ROM, which I’d prefer, but it does bring it up as an Ethernet interface, and it persists across reboots. Speed (according to iperf3) is quite close to line speed.
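In case anyone wants to adapt this to a different PCI address or a card with more ports, here’s a quick sketch (the helper name is my own invention; it just reproduces the backslash escaping sysfsutils expects, and the 0000:82:00.0 address is from my system):

```python
def sysfs_conf_lines(pci_addr, ports=2, link_type="eth"):
    """Build /etc/sysfs.conf entries that flip mlx4 ports to Ethernet.

    pci_addr is the full PCI address, e.g. "0000:82:00.0"; the colons
    are backslash-escaped because sysfsutils parses the attribute path
    as a single token.
    """
    escaped = pci_addr.replace(":", "\\:")
    return [
        f"bus/pci/devices/{escaped}/mlx4_port{n} = {link_type}"
        for n in range(1, ports + 1)
    ]

# Emits the exact two lines from my /etc/sysfs.conf:
for line in sysfs_conf_lines("0000:82:00.0"):
    print(line)
```

If I recall correctly, writing the same attribute at runtime (`echo eth > /sys/bus/pci/devices/0000:82:00.0/mlx4_port1`, as root) switches the port immediately, without waiting for a reboot.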

The downside is that the only QSFP-to-SFP+ adapter I’ve found so far that works costs more than the NIC. But still, at around US$100/node, this seems like it should do the trick.


@danb35 would you feel bad using a 3D-printed bracket instead of metal?
Maybe a design is already available in some public free database.

I’d have a few concerns about a 3D-printed bracket:

  • It’d be a pain to design (though that problem naturally goes away if someone else has already done the work)
  • The existing bracket provides some degree of EMI shielding, which a printed one wouldn’t
  • The bracket is also part of the “handle” for removing and reinstalling the node trays, and thus needs a fair bit of strength. It also needs to accept screw threads.

I wouldn’t say that a suitable bracket couldn’t be designed for 3D printing, but it would take some redesigning. And if the Mellanox card works, and card + QSFP-to-SFP+ adapters + bracket costs around the same as the Intel card w/o the bracket, I think that’ll be a working solution.