Product unavailable
This product is end of life or currently not available.
| Specification | Value |
| --- | --- |
| **Processor** | |
| Graphics processor | Tesla M40 |
| CUDA | Yes |
| CUDA cores | 3072 |
| Graphics processor family | NVIDIA |
| Peak single-precision (FP32) performance | 7,000 GFLOPS (7 TFLOPS) |
| **Memory** | |
| Graphics card memory type | GDDR5 |
| Memory bus | 384-bit |
| Memory bandwidth (max) | 288 GB/s |
| Discrete graphics card memory | 24 GB |
| **Ports & interfaces** | |
| Interface type | PCI Express x16 |
| **Weight & dimensions** | |
| Height | 111.2 mm |
| Depth | 267.7 mm |
| **Energy management** | |
| Power consumption (typical) | 250 W |
| Supplementary power connectors | 1x 8-pin |
| **Other** | |
| EAN | 3536403348922 |
| Warranty | 2 years |

Source: Icecat.biz
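As a sanity check, the quoted peak FP32 figure and memory bandwidth follow from the listed core count and bus width. The boost clock and GDDR5 per-pin data rate below are assumptions (they are not given in the table), chosen to be consistent with NVIDIA's published numbers:

```python
# Back-of-the-envelope check of the spec-sheet numbers above.
# Assumed values (NOT in the table): ~1.14 GHz boost clock, 6 Gbps GDDR5 data rate.
cuda_cores = 3072
flops_per_core_per_cycle = 2       # one fused multiply-add counts as 2 FLOPs
boost_clock_ghz = 1.14             # assumption consistent with the 7 TFLOPS figure

peak_gflops = cuda_cores * flops_per_core_per_cycle * boost_clock_ghz
print(f"Peak FP32: {peak_gflops:.0f} GFLOPS")          # ~7000 GFLOPS

bus_width_bits = 384
gddr5_data_rate_gbps = 6.0         # assumption: effective data rate per pin
bandwidth_gb_s = bus_width_bits / 8 * gddr5_data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 288 GB/s, matching the table
```

The 288 GB/s bandwidth figure is exact given a 6 Gbps data rate; the FLOPS figure is a rounded marketing number, so the implied clock is approximate.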
The World’s Fastest Deep Learning Training Accelerator for Today’s Neural Networks
The NVIDIA® Tesla® M40 is the world’s fastest deep learning training accelerator, purpose-built to train larger, more sophisticated neural networks within hours—versus days on CPU-only systems. It features an NVIDIA Maxwell GPU and 24 GB of large-capacity memory, letting users increase the detection and prediction accuracy of their deep learning models and accelerate time-to-deployment.
Deep learning is redefining what’s possible. From early-stage startups to large web service providers, deep learning has become a fundamental building block for delivering compelling solutions to end users. Today’s leading deep learning models typically take days to weeks to train, forcing data scientists to compromise between accuracy and time to deployment.

The NVIDIA Tesla M40 24GB GPU accelerator is the world’s fastest accelerator for deep learning training, purpose-built to dramatically reduce training time. Running Caffe and Torch on the Tesla M40 trains the same model in days rather than the weeks required on CPU-based compute systems.

Some of the world’s largest data centers use Tesla accelerators to deliver unprecedented system throughput. The Tesla platform supports industry-standard applications and system management tools, making it easier than ever for IT managers to maximize uptime and system performance.

Deliver faster discoveries and scientific insights to your users by deploying GPU accelerators in your data center. With broad support for HPC developer tools such as MPI, scientific libraries, and OpenACC, most applications see a performance boost from GPUs today.
Multitronic Vaasa: 0 pcs
Estimated delivery: No delivery info
Today: 10:00 - 18:00
Multitronic Pietarsaari: 0 pcs
Estimated delivery: No delivery info
Today: 09:00 - 17:00
Multitronic Jyväskylä: 0 pcs
Estimated delivery: No delivery info
Today: 10:00 - 17:00
Multitronic Lappeenranta: 0 pcs
Estimated delivery: No delivery info
Today: 10:00 - 18:00
Multitronic Mariehamn: 0 pcs
Estimated delivery: No delivery info
Today: 10:00 - 18:00
The technical details above are for reference only and may change without notice. We reserve the right to correct printing errors; illustrations are for guidance only. Some text may be auto-generated or machine-translated, which can introduce inaccuracies.

