This product is end-of-life or currently not available.
Key Specifications
Processor | |
Processor frequency | 948 MHz |
Graphics processor | Tesla M40 |
CUDA | |
CUDA cores | 3072 |
Graphics processor family | NVIDIA |
Processor boost clock speed | 1114 MHz |
Peak floating point performance (single precision) | 7000 GFLOPS |
Memory | |
Graphics card memory type | GDDR5 |
Memory bus | 384 bit |
Memory bandwidth (max) | 288 GB/s |
Discrete graphics card memory | 12 GB |
Ports & interfaces | |
Interface type | PCI Express x16 |
Weight & dimensions | |
Weight | 898 g |
Width | 111.2 mm |
Depth | 267.7 mm |
Energy management | |
Power consumption (typical) | 250 W |
Supplementary power connectors | 1x 8-pin |
Operational conditions | |
Operating relative humidity (H-H) | 5 - 90% |
Storage temperature (T-T) | -40 - 75 °C |
Operating temperature (T-T) | 0 - 45 °C |
Design | |
Form factor | Full-Height/Half-Length (FH/HL) |
Number of slots | 2 |
Product colour | Black, Green, Stainless steel |
Cooling type | Passive |
Performance | |
TV tuner integrated | |
Dual Link DVI | |
PhysX | |
EAN | 3536403346751 |
Warranty | 2 years |
Source: Icecat.biz |
The Tesla M40 GPU Accelerator is purpose-built for deep learning training and is the world's fastest deep-learning training accelerator for the data center. The Tesla M40 is based on the NVIDIA Maxwell™ architecture, and a Tesla M40 server outperforms a CPU-only server by up to 13x.
Deep learning is redefining what's possible. From early-stage startups to large web service providers, deep learning has become a fundamental building block in delivering amazing solutions for end users. Today's leading deep learning models typically take days to weeks to train, forcing data scientists to compromise between accuracy and time to deployment.

The NVIDIA Tesla M40 GPU accelerator is the world's fastest accelerator for deep learning training, purpose-built to dramatically reduce training time. Running Caffe and Torch on the Tesla M40 delivers the same model within days versus weeks on CPU-based compute systems.

Some of the world's largest data centers take advantage of Tesla accelerators to deliver unprecedented system throughput. The Tesla Platform supports industry-standard applications and system management tools, making it easier than ever for IT managers to maximize uptime and system performance.

Deliver faster discoveries and scientific insights to your users by deploying GPU accelerators in your data center. With broad support for HPC developer tools like MPI, scientific libraries, and OpenACC, most applications see a performance boost with GPUs today.