HPE ProLiant Compute DL384 Gen12
Achieve next-level performance for mixed, memory-intensive, and AI workloads such as fine-tuning and inference with Retrieval-Augmented Generation (RAG).
Deploy at scale with rack-based solutions for any AI destination
As part of NVIDIA AI Computing by HPE, the HPE ProLiant Compute DL384 Gen12 with NVIDIA GH200 NVL2 delivers next-level performance for scale-out fine-tuning and inference with RAG.
Accelerate the shift to generative AI
Leverage artificial intelligence (AI), and in particular large language models (LLMs), for fine-tuning and inference with RAG. Enable new generative AI (GenAI) applications such as text generation, language translation, coding, visual content creation, and more. A minimal sketch of the RAG pattern follows.
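For illustration only, the sketch below shows the basic retrieval-augmented generation flow referenced above: retrieve the passages most relevant to a query, then build an augmented prompt for an LLM. The function names (embed, cosine, retrieve, build_prompt) are hypothetical, and the bag-of-words "embedding" is a toy stand-in for a real embedding model served on the GPU; none of this represents an HPE or NVIDIA API.

```python
# Minimal RAG sketch (illustrative only): retrieve relevant context,
# then assemble a grounded prompt for an LLM.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real deployment would call a
    # neural embedding model instead.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    # Rank passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, context: list[str]) -> str:
    # Augment the user question with the retrieved context.
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}\nAnswer:"
    )


if __name__ == "__main__":
    docs = [
        "HPE ProLiant Compute DL384 Gen12 is built around the NVIDIA GH200 NVL2.",
        "RAG grounds LLM answers in retrieved enterprise documents.",
        "Fine-tuning adapts a pretrained model to domain-specific data.",
    ]
    question = "How does RAG improve LLM answers?"
    prompt = build_prompt(question, retrieve(question, docs))
    print(prompt)  # In production, this prompt would be sent to the LLM for generation.
```

In a production deployment, the toy embedding and the final print would be replaced by calls to an embedding model and an LLM running on the accelerator.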
Maximize data center utilization
The NVIDIA GH200 NVL2, with 1.2 terabytes of fast, unified, coherent memory, supports mixed and memory-intensive workloads for next-level performance and maximizes data center utilization for AI computing tasks.
Get scale-out accelerated computing and enterprise AI productivity
Designed to deploy large language models for AI fine-tuning and inference with RAG, this versatile scale-out platform delivers 3.5x the capacity and 2x higher performance, significantly enhancing computing capabilities. For faster enterprise AI deployment and success, leverage HPE Private Cloud AI.