Merging HPC and AI can solve a lot of problems – storage ain’t one of them

Webcast The convergence of traditional CPU-powered HPC and GPU-fueled AI is one of the wonders of the age, whether it’s modelling vaccines or the climate, enabling automated trading, or speeding up fraud protection.

But while the focus is often on the compute side of things, a storm has been brewing in the storage layer.

That’s because this convergence involves bringing together two very different storage models – write intensive and HDD-based operations for HPC, read intensive and flash-based for deep learning and AI.

The results can be an engineering marvel but can also be a cost disaster, because scaling up flash to the hundreds of petabytes typical of a traditional HPC cluster is going to be eye-wateringly expensive. And let’s face it, the more you spend on storage, the less you have to spend on those high-performing CPUs and GPUs.

So, whether you’re pondering juicing up your HPC workloads with a little machine learning, or want to work some old-school modelling into your GPU-powered AI, you should join our Regcast, “Spend Less on HPC/AI Storage”, on June 17 at 0800 PT (1100 ET, 1600 BST).

Our broadcast expert Tim Phillips, who has some experience of high-end modelling himself, will be joined by HPE’s Uli Plechschmidt, who will explain why you should be spending less on HPC/AI storage – and more on CPU/GPU compute.

They’ll pick through what sticking with your existing architectures could cost – both in terms of cold hard cash, and in innovation.

And they’ll walk you through exactly what parallel HPC/AI storage could mean for you and your workloads, and how to build infrastructure that meets your needs and doesn’t cost the earth.

Joining this session is a model of simplicity. Just drop your details in here, and we’ll update your calendar and nudge you on the day. In the meantime, be careful out there, and don’t get caught between those HDDs and SSDs.

Sponsored by HPE
