Confusion about powered PCI-E risers


#1

Hi all,

As I build out my new mining rig, I’m running into a lot of different configurations for PCI-E risers, and I’m pretty confused.

Let’s say I want to run 6 RX 580 cards on this rig, and that I’ll be using these 6-pin PCI-E risers:

and that I’ll have a server power supply with a breakout board powering the GPUs, such as:


and a separate PSU powering the mobo.

In order to successfully power the rig, am I correct in assuming that I will need 6x 6-pin cables from the breakout board to power the PCI-E risers, and 6x 6-pin to 8-pin cables from the breakout board to power the GPUs? There also seem to be SATA-powered connections (which look to be pictured on the PCI-E riser above), but I’ve heard people say not to use those.

Or, is it optimal (and safe) to use only 6x 6-pin cables from the server PSU, with these splitters powering both the card and the riser:

Thanks in advance, you guys, and apologies for the noobie question.


#3

Yes, you are correct. Always power risers from the same PSU that powers the GPUs. Otherwise you can get things fried (mobo and CPU, in my experience).

6-pin FTW! SATA sucks.


#4

And the noob questions have begun!

Hey guys, didn’t wanna create a new thread for this, so I’ll just post my question here. I bought this Ubit riser since it allows for all power configurations:

My question is: which one is best to use, the 6-pin or the SATA? (I already read not to use adapters.)


#5

My suggestion is to use whatever comes directly off of your power supply. Most fires or meltdowns happen in the adapter cables that convert one connector to another. So if you plan to use SATA or Molex cords off your power supply, use those connectors directly, but never put more than 2 risers per strand. SATA and Molex were designed for HDDs, which run 15W max on a spindle drive, so figure 30-50W max on these cables.
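For a rough sense of the numbers behind that rule, here is a back-of-the-envelope budget. The per-pin rating is my assumption (SATA power plugs are commonly cited at roughly 1.5A per pin on the 12V rail), not something stated in this thread:

```
# one SATA plug, 12 V rail: 3 pins x ~1.5 A x 12 V
echo "3 * 1.5 * 12" | bc    # ~54 W ceiling for the whole plug
# four risers, each able to feed a PCIe slot up to 75 W
echo "4 * 75" | bc          # up to 300 W of demand on that one plug
```

Four risers can ask a single plug for several times its rating, which is exactly the melting scenario above.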


#6

Yeah, I saw warnings about those adapters causing fires, so I plan to use the cables directly from the PSU, with no more than 2 per cable. Just needed confirmation that it was OK to use the SATA for the risers. Thanks.


#7

I suspect this is the solution; can you confirm the system is stable with “no more than 2 per cable”? I have an Ubuntu 18 system that is freezing because the client wanted to daisy-chain 4 risers on a single SATA cable. There are 4 PCIe risers on one SATA cable now, and I don’t believe that should work reliably, based on the SATA cable specifications. The PCIe risers may draw more power than the SATA cable can supply, especially in this client’s case, where all 4 plugs on the cable are filled with PCIe risers instead of just “2 per cable”. Let us know how it’s going; I am very interested in an update!


#8

I can confirm you should never daisy-chain 4 risers from a single SATA cable. It is not meant to handle that kind of power, and at some point it is going to melt, or at best shut down due to overcurrent. This has been tested a ton. Check out BitsBeTrippen’s YouTube channel; he has been mining longer than the vast majority of us have been in crypto.


#9

Thanks @Nekko! I have been telling the client this is the issue, but they kept arguing with me that they didn’t feel any abnormal heat from the daisy-chained cables. They also have another rig that is set up the same way, and it has no issues. However, the PSU configuration is different between the two rigs: Worker1 has a single 1500W Thermaltake PSU, while Worker2 has dual 850W EVGA PSUs. The rig with the issues is Worker1, with the single 1500W PSU.

I’m also very familiar with the BBT YouTube channel and have seen the videos you are referring to. I’m just going to insist this client take my recommendation. Thanks for the clarity!

I would like to test this more as well. I just checked, and I also have another very stable rig that uses a single PSU (1600W EVGA) and a single SATA cable to power 4 PCIe risers. I’m going to review the BBT video again to refresh my memory on how they tested this.

UPDATE
I have confirmed that on the system where I have 4 GPUs on a single SATA cable, those GPUs are limited to an 80% power limit in MSI Afterburner. I now believe this is the only reason that rig is stable.
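For the Ubuntu rigs, a rough equivalent of that Afterburner setting can be scripted with nvidia-smi. This is only a sketch: the 144W value is my example, assuming a card with a 180W default limit, so query your own card’s defaults first:

```
# see each card's default and current power limits
nvidia-smi --query-gpu=index,power.default_limit,power.limit --format=csv
sudo nvidia-smi -pm 1          # persistence mode so the setting sticks
                               # while the driver stays loaded
sudo nvidia-smi -i 0 -pl 144   # -i selects the GPU, -pl is watts (80% of 180)
```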

Resource Used:

Although that article is mostly about 1080s, I am using GTX 960s, 1060s, 1070s, and 1080s. I believe this is why I’m seeing mixed results across different clients’ configurations.


#10

Thanks for the tips, but my question was more about which of the power connectors is best to use with the risers I bought (mine accommodates all 3 connectors). So far I have each riser on its own cable. I initially used the SATA connector to power them, but due to the layout of the connectors on the board and my current ghetto setup, I opted to use the 6-pin connector instead. No issues so far.


#11

What model GPUs are those? Since that is the only SATA cable providing riser power, I’m “assuming” it “might” work. Undervolting may also help, as will lower-end cards. But it’s still a gamble and should be avoided. Just tell your client that 2 per cable is “best practice”.


#12

On my rigs, and many others here along with BitsBeTrippen’s, we run at 65% power. This increases your hash/watt for most cards, minus the 10##Ti’s. At least that is what I have been seeing here. I only have 3 x 1080 Tis, and they are all in gaming rigs, so they are running at 100% power with OCs.

Since most of the time the only things really powered by the SATA/Molex connectors on a GPU are the fans and auxiliary circuits, I would suspect most cards are going to be the same in that respect, as most mining cards have 2-3 fans. It’s the fans that are drawing the most power off the SATA connectors. I will see if I can do some more research to verify what I am stating here. But if you figure 10W per fan on a card, that should give you some indication of how many you can get per SATA connector, given that a spindle HDD will use 15W max.
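Running this post’s own estimates through the math (these are guesses, not measurements):

```
echo "3 * 10" | bc        # one 3-fan card: ~30 W through its SATA plug
echo "2 * 3 * 10" | bc    # two 3-fan cards on one cable: ~60 W
```

Two 3-fan cards already land right around the ~54W a single SATA plug is typically rated for, which lines up with the 2-per-cable rule of thumb.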


#13

I have a few rigs setup with a single SATA cable powering 4 GPUs. I’ll list them here:
Rig1: 8 x 1060 3GB; 2 SATA cables with 4 GPUs on each; 1 x 1500W Thermaltake PSU

Rig2: same as above, except 2 x 850W EVGA PSUs
Both Rig1 and Rig2 (16 x 1060 3GB total) are limited to 90W using ‘nvidia-smi’, on Ubuntu 18 for Rig1 and Ubuntu 16 for Rig2

Rig3: 2 x 1080 Ti, 1 x 1070, 1 x GTX 960; 4 GPUs on a single SATA cable; power on all 4 GPUs limited to 80% using MSI Afterburner; 1 x 1600W EVGA PSU

Rig3: I believe I have found this Windows system’s “happy place”: running these 4 GPUs at 80% power. This rig is very stable.

Rig2: This rig is very stable. I have zero issues with it, and it uses daisy-chained SATA and PCIe cables to the GPUs.

Rig1: This rig crashes/freezes up completely; 36 hours is the maximum uptime I have had with it. It is running the latest NVIDIA drivers and the latest Ubuntu 18 with the older libraries required to mine with Claymore. This rig drops GPUs at boot time: you may reboot and see 4, 6, or 7 GPUs, and once this rig freezes up it is very hard to get all 8 x 1060 3GB GPUs to show up. To fix it, I usually have to unplug every PCIe and SATA cable, then slowly add GPUs one at a time until all 8 come online in Ubuntu. Once all 8 finally show up, I can start mining, and then it lasts for roughly 16, 24, or 36 hours. I intend to go to the mining facility soon and rewire this rig specifically.
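For what it’s worth, a quick way to count what actually came up each boot (a minimal sketch with standard tools, not my exact setup):

```
nvidia-smi -L            # GPUs the NVIDIA driver can see
lspci | grep -i nvidia   # GPUs the PCIe bus itself enumerated
```

If lspci already shows fewer than 8, the card never enumerated at the bus level, which points at riser power or seating rather than the driver.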


#14

I believe we are talking about the same thing (SATA/Molex connectors on the PCIe risers, not on a GPU), but this is very interesting and has me thinking hard about this rig I have with 8 x 1060s that fails with a single 1500W Thermaltake PSU. I wonder if the rig is fine as long as the fans are spinning slowly enough not to draw a lot of power on the SATA cable. I have monitored these GPUs, and they get up to 66C-70C; I’m curious whether that is when this rig starts to fail. I also suspect the single 1500W Thermaltake PSU is not as good quality as the EVGA brand.


#15

I cannot help you with Ubuntu, as I have never used it. Honestly, my best advice is to load another OS and see if you get the same results, to determine whether it is hardware or software. If the problem persists through OS changes, you can be fairly certain you have a hardware issue; if it does not, you can be certain it is a software issue. Even if you do not use SMOS regularly, it is nice to have it on a thumb drive for troubleshooting purposes.

Most of the headache in troubleshooting mining rigs is determining whether an issue is hardware or software. The above process removes the vast majority of it.


#16

I run all of my rigs at 65% TDP, 70C max temp, and 50% fan speed. The only thing that changes from card to card is the power percentage.


#17

Try updating the motherboard BIOS. Some motherboards start freaking out and become unstable when you add more than 6 GPUs. Research your motherboard model and see if it’s a common issue.


#18

I recommended to the client that we test with SMOS if this one system continues to be unreliable, but I have some new updates on these rigs, since I visited the mining facility today and reviewed everything.

The rig with 2 x 850W EVGA PSUs only has 2 or 3 PCIe risers per SATA cable.

The rig with the single 1500W Thermaltake had 4 PCIe risers on one SATA cable and 4 on another. Today I went there and put 2 PCIe risers on each of 4 different SATA cables. The rig was only online for a short while before it failed again; the error from Claymore is that it can’t get GPU temperatures from specific GPUs. I’ll have to double-check that the motherboard BIOS is fully up to date, since I’m not 100% sure it is yet. I’d also like to put SMOS on this system.
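To separate a Claymore quirk from a driver problem, a minimal sketch of reading the temperatures outside Claymore:

```
# ask the driver directly for each GPU's temperature
nvidia-smi --query-gpu=index,temperature.gpu --format=csv
watch -n 1 nvidia-smi    # or poll to catch the moment a card stops reporting
```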


#20

I think I saw somewhere that you can turn that option off in Claymore. You may want to look into that, but I don’t think that is your issue. I’d be curious to see how the SMOS test works. You say these are for a client: couldn’t you take the rig when it crashes and just restart it with SMOS to test it, and state it’s for troubleshooting purposes? How is the business set up? Are you hosting the rigs at your site, or are they at the customer’s site?


#21

I don’t think temperature is the issue either because I monitored them and they never got above 70C.

I don’t know what you mean. Yes, I can bring the rig back to my house from the client’s location, but I don’t need another mining rig at my house pulling power and generating heat. The client’s rigs stay at the client’s location.
I have remote SSH and monitoring, but when it fails, Ubuntu freezes up completely. I told them the next step is to try SMOS, but I have to wait: they are having AC issues at the location, and that needs to be repaired before I can reasonably put more heat into the location from mining.
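In the meantime, one possible stopgap for the hard freezes (my sketch, assuming stock Ubuntu packages; this was not discussed above) is the kernel’s software watchdog, which reboots the box when userspace stops responding. Note it may not fire on a complete kernel lockup:

```
sudo apt install watchdog                  # Ubuntu's watchdog daemon
echo softdog | sudo tee -a /etc/modules    # load the software watchdog at boot
sudo modprobe softdog                      # load it now
echo 'watchdog-device = /dev/watchdog' | sudo tee -a /etc/watchdog.conf
sudo systemctl enable --now watchdog       # if the daemon stops petting
                                           # /dev/watchdog, the kernel reboots
```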


#22

He meant that when it crashes again, you should boot it up with SMOS and test it.