“Ether Mining App”

Not all computers are created equal when it comes to massively parallel processing: either you’ve got a supercomputer with a great number of CPUs, or you build a highly customized device like an ASIC. There is a third option: use your GPU, or better yet, build a computer with several GPUs!
You can see this best in our present-day financial system. Someone with 1,000 euros a month at their disposal (very likely) consumes 100% of it. Someone with 10,000 euros (probably) consumes only part of it. Someone with billions at their disposal consumes only a fraction of what they take in. In an interest-based system, large fortunes therefore grow ever larger, while small ones remain insignificant forever.
As a last note, make sure to keep an eye on your temperatures, as you don’t want to accidentally burn your cards or cause your rig to shut down. Once your temperatures have stayed flat for a few minutes, it should be alright. Just stay vigilant when trying out new settings, and don’t assume that the first thirty seconds reflect the temperatures your cards will run at for all time. OpenHardwareMonitor is a good tool for plotting and visualizing your temps if you need something to watch while you test new settings on a rig; snag it from http://openhardwaremonitor.org/
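If you’d rather script the watching than stare at a graph, a minimal polling sketch along these lines works on an Nvidia rig. It assumes nvidia-smi is on the PATH; the 80 °C threshold is just a placeholder, so pick one suited to your cards:

```python
import subprocess
import time

# Polls Nvidia GPU temperatures once per minute via nvidia-smi.
# Assumes an Nvidia rig with nvidia-smi on the PATH; an AMD rig
# would need a different query (sysfs or vendor tooling).
ALERT_C = 80  # hypothetical threshold; tune to your own cards

while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    temps = [int(line) for line in out.splitlines() if line.strip()]
    print(time.strftime("%H:%M:%S"), temps)
    if any(t >= ALERT_C for t in temps):
        print("WARNING: a card is running hot, check your settings")
    time.sleep(60)
```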
I’ve heard that Ethereum mining is ASIC-resistant because it relies on a memory-hard hashing algorithm. However, why can’t miners simply buy large amounts of DRAM and devote that memory to ASIC mining? What prevents someone from building an ASIC paired with large DRAM?
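One way to see the problem: Ethash’s inner loop is dominated by random reads from a multi-gigabyte DAG, so any device, ASIC or GPU, is capped by the bandwidth of whatever memory holds that DAG. Here is a back-of-the-envelope sketch of that cap; the ACCESSES and MIX_BYTES constants come from the Ethash spec, while the bandwidth figures are merely illustrative:

```python
# Back-of-the-envelope cap on Ethash hashrate imposed by memory bandwidth.
# Ethash performs ACCESSES random fetches of MIX_BYTES each per hash, so
# no device can hash faster than bandwidth / (ACCESSES * MIX_BYTES).
ACCESSES = 64    # random DAG reads per hash (Ethash constant)
MIX_BYTES = 128  # bytes fetched per read (Ethash constant)

def max_hashrate_mhs(bandwidth_gb_s):
    bytes_per_hash = ACCESSES * MIX_BYTES  # 8192 bytes per hash
    return bandwidth_gb_s * 1e9 / bytes_per_hash / 1e6

# Illustrative bandwidths: one DDR4-3200 channel vs. a GDDR5 graphics card.
for name, bw in [("DDR4-3200, one channel", 25.6), ("GDDR5 GPU", 256.0)]:
    print(f"{name}: at most ~{max_hashrate_mhs(bw):.0f} MH/s")
```

By this estimate, an ASIC wired to commodity DDR4 would be capped at a few MH/s per memory channel, while a GDDR5 card tops out around 31 MH/s. So the expensive, bandwidth-limited part of a competitive Ethash miner is the high-bandwidth memory itself, which is exactly what GPUs already ship with; real cards also land well below these ideal caps because of access latency and row misses.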
With the remaining development stages (3 and 4 are still to come), is it likewise possible that a new coin emerges, as with Bitcoin and Bitcoin Cash, or does the difficulty bomb all but prevent the emergence of a new coin?
MinerGate allows you to mine with your CPU, your GPU, or both, and you can also choose how many cores your CPU/GPU should use. This lets you regulate the heat and electricity produced by your hardware, giving it a longer lifespan.
As for those obsessed with how GPU miners just go where the profit is, coins be damned, etc.: please don’t forget that the reason we have Bitcoin, Litecoin, and all the others is that it is profitable to mine them. I am just as much a crypto zealot as the next guy, but day to day I check on my rigs and make sure they are running, because if they are not, it hurts my bottom line. I’ll leave my house to fix a problem at midnight because, especially the way things are now, I don’t want to lose a whole night’s worth of productivity. If I were just doing it to be nice, because I believe in the cause, then to be frank, a down rig would wait until morning.
We’ve ranked Nvidia’s current crop of mining cards according to their overall value for mining. Older Nvidia cards with 4 GB (or less) of VRAM are excluded: their performance was usually poor, and it will only get worse as epochs advance and the DAG size grows.
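For context on how fast the DAG grows, here’s a rough sketch using the Ethash constants (a 1 GiB initial dataset that grows by about 8 MiB per epoch, with one epoch per 30,000 blocks); the real spec rounds the size down so the page count is prime, which this ignores:

```python
# Rough Ethash DAG size per epoch (one epoch = 30000 blocks).
# The real spec shrinks the size slightly so the page count is prime;
# this approximation ignores that tiny adjustment.
DATASET_BYTES_INIT = 2**30    # 1 GiB at epoch 0
DATASET_BYTES_GROWTH = 2**23  # ~8 MiB added per epoch

def approx_dag_gib(epoch):
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

# First epoch whose DAG no longer fits in 4 GiB of VRAM:
epoch = 0
while approx_dag_gib(epoch) < 4.0:
    epoch += 1
print(f"DAG exceeds 4 GiB around epoch {epoch} "
      f"(~{approx_dag_gib(epoch):.2f} GiB)")
```

By this estimate a 4 GB card hits the wall around epoch 384; in practice the cutoff comes a little earlier, since the driver and the miner itself need some VRAM of their own.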
As for the RX 500 and 400 series GPUs, memory capacity clearly isn’t the issue, as the 8 GB cards still suffer. I believe the issue is memory bandwidth, at least when comparing the RX 580 to the R9 390, for example. The R9 390 series features a massive 512-bit-wide memory bus for a bandwidth of 384 GB/s, whereas the RX 580 uses a 256-bit-wide bus for 256 GB/s. That could certainly explain the issue, at least between those two GPUs. It doesn’t, however, explain why Nvidia’s GTX 1050 Ti sees no performance degradation, so perhaps a driver issue is also involved.
Note that Bitcoin ASIC chips generally can only be used for Bitcoin mining. While there are rare exceptions – for example chips that mine both Bitcoin and Litecoin – this is often because the chip package effectively has two ASICs: one for Bitcoin and one for Litecoin.
The very high cost of designing and manufacturing a batch of ASICs keeps most players out of this market. A coin’s designers will often commission ASICs themselves in an attempt to stabilize mining of the coin long term and keep interest in it.
In addition to recording the performance of each card at stock clock rates and after memory overclocking, we also tried to optimize the GPU/GDDR5 frequencies and GPU voltage to achieve the highest hash rate possible at the lowest power consumption. In other words, we shot for the best possible efficiency in a search for each board’s sweet spot.
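For what it’s worth, the sweet-spot search itself boils down to maximizing hash rate per watt over whatever settings you log. A minimal sketch, with entirely hypothetical tuning numbers:

```python
# Hypothetical tuning log: (core MHz, memory MHz, core mV, MH/s, wall watts).
# The goal is simply the sample with the highest MH/s per watt.
samples = [
    (1150, 2000, 900, 29.5, 135),
    (1100, 2050, 850, 29.1, 118),
    (1050, 2100, 800, 28.6, 104),
]

def efficiency(sample):
    _, _, _, hashrate, watts = sample
    return hashrate / watts  # MH/s per watt

sweet_spot = max(samples, key=efficiency)
print(f"best efficiency: {efficiency(sweet_spot):.3f} MH/s/W at {sweet_spot}")
```

Note how the lowest-clocked, undervolted entry wins on efficiency even though its raw hash rate is slightly lower; that trade-off is exactly what the sweet-spot hunt is about.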
