Why is a DAG memory-hard?
I'm wondering what specifically about a directed acyclic graph is more memory-hard than, say, a deterministic large pool of data. As I understand it, ASIC resistance is achieved by forcing lots of random accesses into a large block of memory, thereby reducing the value of compute-only chips and forcing multi-die ASIC designs that are less cost-efficient per unit of memory bandwidth.
So what if, instead of the DAG, I take the prior block hash, run it through a series of general-purpose operations (add/div/mult/xor/etc.), and use that to build a huge pool of numbers: 4GB of numeric garbage instead of a DAG. The hashing algo then takes the nonce (from the header) and mixes in N rounds of chunks (say 256 bytes each) pulled from this big garbage dump of data to produce the final hash for the difficulty compare. Every full node can compute the pool because it's deterministic (seeded by the prior block hash), and there's no easy way to pre-calculate the result; you still need high memory bandwidth. If the pool builder takes too long, rebuild it on an epoch (every N blocks). Something like the sketch below.
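To make the idea concrete, here's a minimal Python sketch of what I mean. It assumes chained SHA-256 as a stand-in for the "series of general purpose operations", a tiny pool size instead of 4GB, and arbitrary values for the round count and epoch length; none of these numbers or names come from any real scheme.

```python
import hashlib
import struct

POOL_SIZE = 64 * 1024   # tiny stand-in for the "4GB of numeric garbage" (assumption)
CHUNK_SIZE = 256        # 256-byte chunks, as in the post
MIX_ROUNDS = 16         # "N rounds" -- arbitrary choice for this sketch
EPOCH_LENGTH = 30000    # rebuild the pool every N blocks (assumption)

def build_pool(prior_block_hash: bytes, size: int = POOL_SIZE) -> bytes:
    """Deterministically expand the prior block hash into a big pool of bytes.

    Chained SHA-256 stands in for the add/div/mult/xor mixing described
    in the post; any deterministic expansion would do for the sketch.
    """
    out = bytearray()
    state = prior_block_hash
    while len(out) < size:
        state = hashlib.sha256(state).digest()
        out += state
    return bytes(out[:size])

def pow_hash(pool: bytes, header_hash: bytes, nonce: int) -> bytes:
    """Mix N pool chunks into the nonce+header to get the value compared
    against the difficulty target."""
    mix = hashlib.sha256(header_hash + struct.pack("<Q", nonce)).digest()
    num_chunks = len(pool) // CHUNK_SIZE
    for _ in range(MIX_ROUNDS):
        # Each round's chunk index depends on the running mix, so the
        # accesses can't be predicted without holding the whole pool.
        idx = int.from_bytes(mix[:8], "little") % num_chunks
        chunk = pool[idx * CHUNK_SIZE:(idx + 1) * CHUNK_SIZE]
        mix = hashlib.sha256(mix + chunk).digest()
    return mix

# Usage: build one pool per epoch, then try many nonces against it.
if __name__ == "__main__":
    prior = hashlib.sha256(b"prior block hash").digest()
    pool = build_pool(prior)
    header = hashlib.sha256(b"block header").digest()
    for nonce in range(3):
        print(nonce, pow_hash(pool, header, nonce).hex())
```

The point of the sketch is just that each round's read location depends on the previous round's output, so the miner has to keep the whole pool within fast reach rather than precomputing anything.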
Just wondering what is special about the DAG vs some other assembled block of data.