AWS will deliver a new public container registry “within weeks” in response to Docker’s introduction of pull rate limits for Docker Hub.
The cloudy business has also posted tips on how to avoid having application deployments break because of the limits.
“Our customers should expect some of their applications and tools that use public images from Docker Hub to face throttling errors,” said AWS technical product manager Omar Paul and developer advocate Michael Hausenblas. Google has also expressed concerns about the same issue.
The short-term advice is either to copy public images to Amazon Elastic Container Registry (ECR) or another registry, or to take out a paid Docker Hub subscription; both options require reconfiguring deployments to authenticate container image pull requests.
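The mirroring route boils down to a handful of CLI commands. The sketch below assumes the AWS CLI v2 and Docker are installed and credentials are configured; the account ID, region, and the `nginx` image are placeholders, not anything AWS prescribes.

```shell
# Placeholders -- substitute your own account ID, region, and image.
AWS_ACCOUNT=123456789012
AWS_REGION=us-east-1
ECR=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com

# Authenticate Docker against your private ECR registry.
aws ecr get-login-password --region "$AWS_REGION" |
  docker login --username AWS --password-stdin "$ECR"

# Create a repository for the mirrored image (one-time step).
aws ecr create-repository --repository-name nginx --region "$AWS_REGION"

# Pull from Docker Hub, retag for ECR, and push the copy.
docker pull nginx:1.19
docker tag nginx:1.19 "$ECR/nginx:1.19"
docker push "$ECR/nginx:1.19"
```

Deployments then reference the ECR image URI (`$ECR/nginx:1.19`) instead of the Docker Hub name, which is the reconfiguration step AWS warns about.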
AWS has something else in store, though, which is a new public container registry. “Developers will be able to use AWS to host their private and public container images,” said AWS, as well as “related files like helm charts and policy configurations.”
There will be a new website where anyone can browse and pull available images, even anonymous users. AWS will also provide its own images such as those for AWS Deep Learning or CloudWatch.
The new container registry has limits of its own. Developers sharing public images get 50GB of free storage, and pulling images anonymously is free for the first 500GB of data bandwidth each month. Authenticating with AWS ups that limit to 5TB per month. Workloads running on AWS get unlimited bandwidth for pulling container images. There is no mention of a free tier for developers storing private images.
AWS said it has been working on the project for several months, apparently in response to customer requests. Even without the incentive of avoiding Docker rate limits, it is in character for the company to pull more technology in-house. As it remarked, “developers will be able to use AWS to host both their private and public container images, eliminating the need to use different public websites and registries.”
Use a public website other than AWS? Perish the thought!
Faster on-demand supercomputing
Separately, AWS has introduced new GPU-based virtual machine instances aimed at machine learning and HPC (high performance computing) workloads, using Nvidia A100 Tensor Core GPUs.
The new P4d instances include support for Nvidia GPUDirect Remote Direct Memory Access (RDMA), a capability that has been added to the AWS Elastic Fabric Adapter. The combination enables what AWS calls EC2 UltraClusters, including “more than 4,000 NVIDIA A100 GPUs, petabit-scale non-blocking networking infrastructure, and high throughput, low latency storage with FSx for Lustre.” The P4d instances are only available in the US East and US West regions.
A single P4d instance has 96 vCPUs, 1152GB of RAM, and 8 A100 GPUs. Network bandwidth is 400 Gbps, with 600 GB/s of GPU peer-to-peer bandwidth. You also get 8TB of local NVMe SSD instance storage. Cost is $32.77 per hour, coming down to the equivalent of $11.27 per hour for a three-year reserved instance.
AWS promised that “popular AWS services for ML and orchestration such as Amazon SageMaker, Amazon Elastic Kubernetes Service (EKS), AWS ParallelCluster and AWS Batch will be adding support for P4d instances in the coming weeks.” ®