Machine Learning Hardware

Researching machine learning hardware online for the SMR-lab.


Another advantage of using multiple GPUs, even if you do not parallelize algorithms, is that you can run multiple algorithms or experiments separately, one on each GPU. Efficient hyperparameter search is the most common use of multiple GPUs: you gain no speedup for a single run, but you get faster information about the performance of different hyperparameter settings or different network architectures. This is also very useful for novices, as you can quickly gain insight and experience in how to train an unfamiliar deep learning architecture.
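For illustration, a minimal sketch of that pattern (the `train.py` script and its `--lr` flag are hypothetical): each child process is pinned to one GPU via `CUDA_VISIBLE_DEVICES`, so four hyperparameter trials run side by side without touching each other's device.

```python
import os
import subprocess

# One independent trial per GPU: each child process only sees its own
# device, so the four runs do not interfere with each other.
learning_rates = [1e-2, 1e-3, 1e-4, 1e-5]

procs = []
for gpu_id, lr in enumerate(learning_rates):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    procs.append(subprocess.Popen(
        ["python", "train.py", "--lr", str(lr)], env=env))

for p in procs:
    p.wait()
```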

Using multiple GPUs in this way is usually more useful than running a single network on multiple GPUs via data parallelism. Keep this in mind when you buy multiple GPUs: qualities that matter for parallelism, like the number of PCIe lanes, are not that important when the GPUs will mostly run independent jobs.
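For contrast, a minimal sketch of the data-parallel alternative in PyTorch (the layer and batch sizes are arbitrary): `nn.DataParallel` replicates one model across all visible GPUs and splits each batch between them, which is exactly the workload where interconnect qualities like PCIe lanes matter.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model = nn.DataParallel(model).cuda()  # replicate the model on every visible GPU

x = torch.randn(256, 512).cuda()  # this batch gets split across the GPUs
out = model(x)                    # gradients are synced over PCIe on backward
```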

On the other hand, NVIDIA now has a policy that the use of CUDA in data centers is only allowed for Tesla GPUs, not GTX or RTX cards.

If we look at performance measures of the Tensor-Core-enabled V100 versus the TPUv2, we find that both systems have nearly the same performance for ResNet-50 [source is lost, not on Wayback Machine]. However, the Google TPU is more cost-efficient.

Note that to get the benefit of Tensor Cores you should use 16-bit data and weights: avoid 32-bit with RTX cards!
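A minimal sketch of what that means in PyTorch: cast both the model and its inputs to 16-bit so the matrix multiplies can hit the Tensor Cores. (Pure fp16 like this can be numerically unstable for full training; mixed-precision tooling keeps 32-bit master weights, but the idea is the same.)

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda().half()      # 16-bit weights
x = torch.randn(64, 1024, device="cuda").half()  # 16-bit inputs

out = model(x)  # the matmul runs in fp16 and is eligible for Tensor Cores
```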


Chart (image not included): normalized performance/cost numbers for convolutional networks (CNN), recurrent networks (RNN) and transformers. Higher is better. An RTX 2060 is more than 5 times more cost-efficient than a Tesla V100. Word-RNN numbers refer to biLSTM performance for short sequences of length <100. Benchmarking was done using PyTorch 1.0.1 and CUDA 10.

Since 16-bit values take half the space of 32-bit ones, but parts of the training state (e.g. 32-bit master weights in mixed-precision training) do not shrink, an 8 GB card used in 16-bit mode is about equivalent in usable capacity to a 12 GB card used in 32-bit mode.
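A back-of-the-envelope version of that claim (a sketch, not a measurement; the discount factor is an assumption standing in for whatever training state stays 32-bit):

```python
GB = 1024 ** 3

# Assumption: only part of the memory footprint actually halves in
# 16-bit, since some training state stays 32-bit; the 0.75 discount
# below is an illustrative factor, not a measured one.
fp32_capacity = 12 * GB / 4        # values that fit on a 12 GB card in 32-bit
fp16_capacity = 8 * GB / 2 * 0.75  # 8 GB card in 16-bit, discounted

print(fp16_capacity / fp32_capacity)  # 1.0, i.e. "about equivalent"
```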

https://tweakers.net/reviews/7190/nvidia-rtx-2060-en-2070-super-meer-waar-voor-je-geld.html

Comparison between RTX 2060 (& Super version), RTX 2070 (& Super version), RTX 2080 Super:
https://tweakers.net/pricewatch/compare/1302516;1422784;1500926;1457982;1436320;1443570/

CPU vs GPU from 2018

Benchmark comparison between my GPU and an RTX 2060

Watercooled GPU:
https://nl.hardware.info/videokaarten.5/inno3d-geforce-rtx-2080-super-ichill-8gb.539933

List of supported GPUs:

Low-end:

Midrange:

High-end:

Please do not spend your time figuring out hardware combinations and server installations.

It will take much longer than you think and is not worth the money compared to the usage; you can set up a cloud server for ML training in minutes.

Check out the Google Colab intro.
Or Paperspace; basically any VPS with a GPU will work.
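Once an instance is up (Colab or any GPU VPS), a quick sanity check that PyTorch actually sees the GPU:

```python
import torch

# Confirm a GPU is visible before kicking off any training
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```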


Yes, but the problems are:

  1. We can’t rely on students turning the VPS off after use
  2. There’s no easy way to let the school pay for the (recurring) cost of a VPS

With our own hardware we don’t have those problems.

Wow this is cool man. Missed this when I quickly scanned your post first time 'round