Launched earlier this year, CUDOS Intercloud, a product of CUDO, addresses a crucial gap in the Web3 computing space by providing a scalable, distributed GPU-as-a-Service cloud tailored for DePIN (Decentralised Physical Infrastructure Network) communities and AI computational tasks such as machine learning and inference. The growth of DePIN networks within Web3 allows communities and small businesses to capitalise on the AI boom, using these protocols for machine learning and AI inference workloads as well as other intensive computational tasks.
Web3 companies need to guarantee service levels and access the right GPUs. They also want a Web3-aligned experience, such as connecting digital wallets, to streamline their access to GPU resources.
CUDOS offers an alternative to other approaches by distributing resources across many vendors worldwide. This addresses the needs of individuals and businesses alike, tackling issues such as high costs, the inability to pay in a Web3-native manner, and extensive KYC requirements, making CUDOS Intercloud an increasingly preferred choice for users.
The CUDO network, comprising CUDO Compute and CUDOS Intercloud, has delivered more than 500,000 hours of AI GPU compute time. The integration of NVIDIA GPUs, including the NVIDIA H200 Tensor Core GPU and NVIDIA H100 Tensor Core GPU, plays a pivotal role in helping overcome the challenges of building robust, decentralised Web3 and AI systems, supporting advanced workloads in generative AI, machine learning, image processing, and language understanding.
CUDO also supports other NVIDIA GPUs, including the A100 Tensor Core, V100 Tensor Core, and A40 GPUs, as well as the RTX A6000, A5000, and A4000 GPUs for professional visualisation, giving users greater choice based on their budget and type of high-performance computing workload. The combination of NVIDIA GPUs with the CUDO network's platforms, leveraging a distributed network of data centres worldwide, offers customers a powerful cloud solution for anything from visualisation to massively parallel computational tasks. This provides greater control over operations, location, and security, helping companies and users scale more effectively.
With savings of up to 75% compared to other offerings, a specialised focus on GPU computing for AI, and data centres powered entirely by renewable energy, CUDO's platforms offer an economically and environmentally friendly alternative for the AI era.