Save Power! Without sacrificing Hashrate in HiveOS (Nvidia cards)

Things have changed in my little world since my last big post, and we're still mining Raven at full power.
If you haven't read my prior post, this is where you can find it; it will most likely help you, especially if you're new to mining and looking to expand.

https://www.reddit.com/r/Ravencoin/comments/oi3ixo/mining_ongaming_pc_nvidia_overclocking_guide/

As a little prelude: I have expanded my mining operation and am now running six RTX 3080s in total.
The models are 1x Asus TUF OC, 4x Asus Strix OC, and 1x Asus Strix.
Two of these are full-hash-rate models, so they are on ETH; the other four are LHR and sitting on Ravencoin. My hashrate on those is around 51-53 MH/s each, depending on the current ambient temps. One is obviously in my gaming rig, as I definitely need that, so it's practically always mining in the background. All in all, I'm currently at around 200 MH/s.

Since my electricity cost is relatively high, I'm always looking for ways to improve efficiency, and especially on Raven that's been a struggle. Looking at the card in my personal computer, I noticed that it always uses significantly less power than my other cards running in HiveOS. The personal one was sitting somewhere around 270-280 W, while the three cards in the mining rig were at 300-320 W with the same hashrates, which was obviously confusing to me.
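To put rough numbers on that (an illustration using ~52 MH/s and the midpoints of those power ranges, so treat it as approximate): 52 MH/s ÷ 275 W ≈ 0.19 MH/s per watt in Windows versus 52 MH/s ÷ 310 W ≈ 0.17 MH/s per watt in HiveOS, roughly a 12% efficiency gap per card.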

So where is the difference?
Well, as I discussed in my prior post, the silicon lottery influences the efficiency of the cards, so I tried swapping the "more efficient" card into the rig and using a different one in my personal rig.
The result? Nothing changed. The card now in Windows was drawing less power, and the one now in the rig was drawing more.

So is Windows the solution?
Nope. I tried it, and while it does give better power efficiency than Hive (using my method for overclocking), moving the rig to Windows isn't the answer I was looking for. So what else?

Well, my personal rig is obviously overclocked for gaming with a core clock offset. My Hive rig wasn't.
So how to solve this?
The before state: my core clock is locked at 1440 MHz within Hive, and the memory is pushed as far as it'll go on each card.
Why 1440 and not less?
Well, on anything less I lose hashrate (and obviously have less power draw, too), but the real reason can be found in the GPU's datasheet.
https://www.techpowerup.com/gpu-specs/asus-rog-strix-rtx-3080-gaming-oc.b8036
The base clock of the RTX 3080 is 1440 MHz. In almost all cases, this number is what to lock your core clock to for the best rig efficiency. There is a reason the cards have this clock speed baked into them, and it also gives excellent stability.
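If you want to sanity-check or test a locked core clock outside the miner, nvidia-smi on the rig can do it directly. This is standard NVIDIA driver tooling rather than anything from my flight sheet, and it assumes a reasonably recent driver and root access (which HiveOS has anyway):

nvidia-smi --lock-gpu-clocks 1440,1440 # pin min and max core clock to 1440 MHz
nvidia-smi --query-gpu=clocks.sm --format=csv # show the current core clock per GPU
nvidia-smi --reset-gpu-clocks # remove the lock again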
So here is the before.

Again, ignore the bottom two cards; they are on ETH 🙂

So that gave me these numbers, and as seen in the picture, a power draw of 937 W at that moment for these three cards alone.
Using Whattomine.com we can easily calculate how much that'll make us. Now, it isn't possible to also raise the core clock offset in Hive while I have the clock speed locked. Or is it?

The solution in my case was T-Rex Miner. It doesn't allow me to increase the core clock, but it does support locking the core clock within Linux.
So I went through trial and error until I figured out what to put into the flight sheet.
The option for T-Rex Miner within Hive is

"lock-cclock": "1440"
or
"lock-cclock": "1440,1440,1440"
(if you have different card models, use the second version instead so you can set a different speed for each card)
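For context, this goes into the "Extra config arguments" box of the miner setup in your flight sheet; Hive merges it into T-Rex's JSON config. As a minimal sketch, the relevant part of the resulting config could look like this (the pool URL and wallet here are placeholders, not my real settings):

{
  "algo": "kawpow",
  "pools": [{"url": "stratum+tcp://pool.example.com:3333", "user": "YOUR_WALLET.worker", "pass": "x"}],
  "lock-cclock": "1440,1440,1440"
}

The same option also exists on the command line as --lock-cclock 1440,1440,1440 if you ever run T-Rex by hand.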

(Screenshot: the T-Rex config tab)

And now I can go in and change the core clock offset in HiveOS as it would usually be done.
I recommend starting out at +100 on "OC" model cards, as they can usually handle the extra core clock at lower voltage.
You can obviously work your way up from there. Don't lower it! With the clock locked, a positive offset shifts the voltage/frequency curve so the card reaches 1440 MHz at a lower voltage; a negative offset does the opposite and increases the voltage to the GPU, which will not save power, and while it shouldn't be dangerous, it certainly isn't healthy for the cards.
On "non OC" cards I'd try +50 first and work from there.
I've also set a power limit (PL) of 350 W, just to have a safety net in case something goes wrong. Spoiler: it did.
T-Rex Miner only locks the core clock when it starts. After that, any change to the OCs breaks the lock and lets the card pull as much wattage as it can, so it is absolutely necessary to restart the miner if you ever change the overclocks on any card.
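Restarting is quick; from the rig's shell the standard HiveOS command is

miner restart # restarts the currently configured miner

or you can restart the miner for the rig from the Hive web dashboard.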

So, now for what we've all been waiting for.
Results.

Yes, the hashrate is a bit different, but that's mostly because GPU0 wasn't stable at the settings from my before picture 😀 (testing took so long that I don't want to redo it).

So how does it look? Well, we're at least down a little in wattage, to 907 W, so a 30 W saving, and that isn't even the end of it. If the GPUs are stable with even a little more core clock, they can get even more efficient while basically not losing any hashrate. A difference in temperature is also noticeable; I don't have a confirmed reason why, but presumably the lower power draw simply means less heat. It certainly looks promising.
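As a rough back-of-the-envelope on what that saving is worth (the electricity rate here is an assumption, so plug in your own): 30 W running 24/7 is 30 W × 24 h = 0.72 kWh per day, or about 263 kWh per year. At an assumed €0.30/kWh, that's roughly €79 per year from these three cards alone, at the same hashrate.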

I'll be doing further testing and will post updates on any advances I make.
As in my last post, I'll be counting how often I have to edit this post, because I'm a huge DumDum. (0)

Have a great time mining!
Greetings Jack
