dacwe
December 27, 2025, 12:21am
1
Hi,
I want to extend the storage by adding a PCIe Gen 4 x4 to dual-NVMe adapter, but I’m not sure which one is safe to use, since there are some reports of such devices not working. Some examples:
Hi,
I just got my Framework Desktop set up as a computing workhorse. The Fedora Server 42 system was installed on an Intel Optane P1600X SSD (118 GB, PCIe Gen 3), chosen for its high durability.
However, I noticed that I have a ~50% chance of boot failure when I reboot the system: the BIOS shows “boot device not found”. In that situation, pressing the power button successfully reboots the system and gets me into Fedora. Also, startup from cold nearly always works.
Could this be due to a BIOS incomp…
Third time’s the charm! It wasn’t easy: I actually completely crashed my system and had to do a complete reinstall of Windows, but once I went into the BIOS and slowed the PCIe slot speed to Gen 3, the last adapter worked. So I’m finally up and running after two weeks of troubleshooting. I now have three 4 TB NVMe SSDs mounted, plus a 2 TB external USB drive, for 14 TB total. I’m happy to share more details if anyone else is dealing with something similar.
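(If you try that Gen 3 workaround on Linux, a quick way to confirm what the slot actually negotiated; the device address here is just a placeholder:)

```
# Confirm the negotiated PCIe link speed after the BIOS change
# (c1:00.0 is a placeholder; find your device with `lspci | grep -i nvme`)
sudo lspci -vv -s c1:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap is what the card supports; LnkSta "Speed 8GT/s" means it is
# actually running at Gen 3
```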
GitHub issue (opened 03:43PM - 14 Dec 25 UTC; labels: bug, Desktop - AMD Ryzen AI 300):
When using a PCIe x4 NVMe adapter WITH an NVMe disk installed, the system fails to get past the BIOS most of the time.
**Sanity checks**
* Funnily enough, when there is no disk, we do get past the BIOS.
* The NVMe disk works fine in the normal slots.
* The NVMe adapters tested work fine in other systems and can access the NVMe disk:
  * [Silverstone ECM22](https://www.silverstonetek.com/en/product/info/expansion-cards/ECM22/)
  * [Icybox IB-PCI208-HS](https://icybox.de/product/interne_speicherloesungen/IB-PCI208-HS)
* Using the Arch Linux installer: on the off chance we do get past boot, I managed to see the following errors:

```
nvme nvme2: I/O tag 24 (1018) QID 0 timeout, disable controller
nvme nvme2: failed to read smart log (error -5)
nvme 0000:c2:00.0: probe with driver nvme failed with error -5
```
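(For anyone else poking at this: when the machine does come up, the same failure can be probed from userspace with nvme-cli; `/dev/nvme2` matches the controller in the log above but may differ on your system:)

```
# Controllers the kernel managed to bind -- the one behind the adapter
# may be missing entirely after a failed probe
sudo nvme list
# Read the SMART log directly; on the bad adapter this is the request
# that corresponds to the -5 (EIO) in the kernel log
sudo nvme smart-log /dev/nvme2
```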
The set-up with the `Icybox IB-PCI208-HS` booted fine (i.e. got past the BIOS splash screen) on BIOS 3.02; however, it had soft lock-ups when installing [Proxmox VE 9.1](https://proxmox.com/en/downloads/proxmox-virtual-environment/iso/proxmox-ve-9-1-iso-installer).
I will attempt to downgrade the BIOS to 3.02 (since I have successfully installed Proxmox VE 9.1 on BIOS 3.04) and report back. Just to note here: when I was on BIOS 3.02, I got the following errors when trying to install Proxmox VE 9.1:
```
BUG: soft lockup - CPU#1 stuck for 26s
```
I tried setting various kernel parameters, but to no avail.
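(For reference, the sort of thing I tried; a sketch of the boot line, with no claim that any of these helps on this board:)

```
# /etc/default/grub -- parameters commonly tried for NVMe/PCIe flakiness;
# none of them fixed the soft lockups for me
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off iommu=pt nvme_core.default_ps_max_latency_us=0"
# regenerate the config and reboot afterwards:
#   grub-mkconfig -o /boot/grub/grub.cfg
```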
Failing to get past the BIOS was ESPECIALLY confusing and alarming, as the PCI-E adapter was connected during the BIOS update. Obviously, coming back to a black screen after a BIOS update was not encouraging.
Edit#1:
Since I did not use fwupd to perform the BIOS update, I could only downgrade to BIOS 3.03 (the downgrade path I mean is sketched after the list below).
* With PCI-E expansion:
  * It takes a long time for the BIOS to load, and then booting Proxmox fails, stuck at "Loading initial ramdisk".
* Without PCI-E expansion:
  * Works.
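(For completeness, the downgrade path referenced above, assuming the firmware releases are published on LVFS; fwupd only offers the releases it knows about, which is why 3.02 is out of reach this way:)

```
# Identify the system-firmware device and the releases fwupd can see
fwupdmgr get-devices
fwupdmgr get-releases
# Install the newest release older than the currently installed one
fwupdmgr downgrade
```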
Edit#2:
* I cannot find where I can download version 3.02 of the BIOS, so that's as far as it goes in getting that part to work. However, the soft-lockup issues which prompted me to update the BIOS were on 3.02...
Any ideas are welcome; I would love to add some more storage!
This seems like the same issue I’ve been having with a dGPU on the PCIe x4 slot:
I’m also trying to get the motherboard to work with dGPUs.
@Lincoln_Chen did you end up abandoning the use of the PCIe 4.0 x4 slot? I may have misunderstood, but it seemed like you were last using the NVMe M.2 slot(s) instead?
@Hrothmund how has that ADT-Link adapter worked for you?
In my experiments, I’m using the ADT-Link R23A-AMP (x4 to x16) and then a normal GPU riser, but the connection isn’t stable and keeps dropping to PCIe Gen 1.0. Forcing Linux to stick to Gen 1.0 solves the renegot…
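(Side note: to watch that renegotiation happen, the link status is readable with lspci; the bridge address below is a placeholder, find the real downstream port above the GPU with `lspci -t`:)

```
# Poll the negotiated link speed once a second
# (00:01.1 is a placeholder for the downstream bridge above the GPU)
sudo watch -n1 "lspci -vv -s 00:01.1 | grep LnkSta"
# "Speed 2.5GT/s" = Gen 1, "Speed 16GT/s" = Gen 4
```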
Edit: Found this one too:
I recently got a Framework Desktop motherboard and was excited to upgrade, but I ran into some issues with my ~6-disk hard drive storage array and PCIe SATA controller.
When running several parallel heavy read/write workloads, strongly stressing the disks and controller, I would get this dmesg line:

```
ahci 0000:c1:00.0: Using 64-bit DMA addresses
```

followed immediately by corrupt sector reads, seemingly randomly distributed across every disk in the array. I tried using a Marvell 88SE9215 controller a…
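(If someone wants to reproduce that failure mode, a parallel stress load along these lines should trigger it; the device path and job count are placeholders, and writing to raw disks destroys their contents:)

```
# DESTRUCTIVE: raw parallel read/write stress on one array member.
# Repeat across more disks / higher --numjobs to load the whole controller.
sudo fio --name=stress --filename=/dev/sdX --rw=randrw --bs=1M \
    --iodepth=32 --numjobs=4 --direct=1 --time_based --runtime=600
# In a second terminal, watch for the DMA line and read errors:
sudo dmesg -w | grep -iE 'ahci|dma|error'
```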