TAIPEI, Taiwan — May 19, 2025 — GIGABYTE is taking part in COMPUTEX 2025 with total data center solutions for enterprises. At the heart of the showcase is GIGAPOD, GIGABYTE’s high-density, rack-scale infrastructure for AI workloads. Integrated with GPM, GIGABYTE’s in-house management platform, the solution enables unified orchestration of resources from node to rack to cluster. This powerful combination supports rapid deployment, workload optimization, and real-time system monitoring, forming the backbone of scalable AI infrastructure.
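For a concrete sense of the node-to-rack telemetry a rack-level manager of this kind aggregates, the sketch below polls per-chassis temperature readings over the DMTF Redfish API that server BMCs commonly implement. It is illustrative only: the BMC addresses and credentials are placeholders, and it does not depict GPM’s actual interfaces.

```python
"""Illustrative sketch: poll temperature telemetry from each node's BMC
via the DMTF Redfish API. BMC addresses and credentials are placeholders;
this does not represent GPM's actual interface."""
import requests
import urllib3

urllib3.disable_warnings()  # BMCs commonly use self-signed certificates

# Hypothetical rack of nodes: BMC address -> (user, password)
RACK_BMCS = {
    "10.0.0.11": ("admin", "password"),
    "10.0.0.12": ("admin", "password"),
}

def chassis_temperatures(bmc, auth):
    """Yield (chassis_id, sensor_name, reading_celsius) for every chassis on one BMC."""
    base = f"https://{bmc}"
    coll = requests.get(f"{base}/redfish/v1/Chassis", auth=auth, verify=False).json()
    for member in coll.get("Members", []):
        chassis_uri = member["@odata.id"]
        chassis = requests.get(base + chassis_uri, auth=auth, verify=False).json()
        thermal_ref = chassis.get("Thermal", {}).get("@odata.id")
        if not thermal_ref:
            continue  # this chassis exposes no thermal resource
        thermal = requests.get(base + thermal_ref, auth=auth, verify=False).json()
        for sensor in thermal.get("Temperatures", []):
            reading = sensor.get("ReadingCelsius")
            if reading is not None:
                yield chassis_uri.rsplit("/", 1)[-1], sensor.get("Name", "?"), reading

if __name__ == "__main__":
    for bmc, creds in RACK_BMCS.items():
        for chassis, sensor, celsius in chassis_temperatures(bmc, creds):
            print(f"{bmc} {chassis:>10} {sensor:<30} {celsius:5.1f} C")
```

Because Redfish is a vendor-neutral standard, rack- and cluster-level tools can collect this kind of data uniformly across heterogeneous nodes; a production platform would layer scheduling, alerting, and workload optimization on top of it.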
GIGABYTE demonstrates GIGAPOD’s readiness for higher-density computing with its G4L3 DLC server and a 4+1 rack module, leveraging Direct Liquid Cooling (DLC) to boost energy efficiency and thermal performance. With system-level integration and validation, GIGABYTE enables accelerated deployment through partnerships with industry leaders.
GIGABYTE positions itself as an AI Data Center Infrastructure Builder, offering L12-level data center services that cover consulting, facility design, system integration, deployment, and ongoing operations. Backed by a full range of AI servers and rack-level systems supporting AMD Instinct™, Intel® Gaudi® 3, and NVIDIA HGX™ platforms, GIGABYTE delivers optimized performance density, scalable architecture, and deep system integration, providing turnkey solutions that shorten time-to-AI.
Empowering AI Training with Cutting-Edge Platforms
On the AI training front, GIGABYTE delivers a comprehensive lineup of high-performance servers built to support the latest GPU architectures and maximize computational throughput:
- AMD Platform: Built on the latest AMD Instinct™ MI350 Series GPUs and the AMD Pensando™ Pollara 400 PCIe NIC, delivering exceptional AI computing capabilities for next-generation workloads.
- Intel Platform: Featuring 5th Gen Intel® Xeon® processors and Intel® Gaudi® 3 AI accelerators, this platform is tailored for enterprise-scale AI deployment and real-time inference scenarios.
- NVIDIA Enterprise AI Factory: For agentic and physical AI workflows, autonomous decision-making, and more, this full-stack design integrates the GIGABYTE XL44-SD1 NVIDIA MGX™ server with the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU, the NVIDIA Spectrum™-X Ethernet networking platform for AI, NVIDIA BlueField®-3 DPUs, and NVIDIA AI Enterprise software.
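NVIDIA AI Enterprise includes NIM inference microservices, which expose an OpenAI-compatible API. As a rough sketch of how an application might consume inference from a stack like the one above, the example below sends a chat request to such an endpoint; the base URL, API key, and model name are placeholders, not details of GIGABYTE’s showcase.

```python
"""Illustrative sketch: query an OpenAI-compatible inference endpoint, such as
an NVIDIA NIM microservice deployed on an enterprise AI stack.
The base URL, API key, and model name are placeholders."""
from openai import OpenAI  # pip install openai

# Hypothetical in-cluster inference endpoint.
client = OpenAI(base_url="http://nim.example.internal:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a data center operations assistant."},
        {"role": "user", "content": "Summarize last night's GPU utilization trends."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```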
A major highlight is the new NVIDIA GB300 NVL72 platform, which combines 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace™ CPUs in a liquid-cooled cabinet, connected through NVIDIA ConnectX®-8 SuperNICs for massive-scale AI inference. GIGABYTE’s robust platforms also include OCP-compliant rack-level systems and advanced memory architectures leveraging CXL for disaggregated compute and storage.
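As a rough illustration of what CXL-attached memory looks like from the host side, the sketch below enumerates CXL memory devices through the Linux kernel’s CXL sysfs interface. It assumes a kernel with the CXL subsystem enabled; the attribute paths follow the upstream sysfs-bus-cxl ABI and are not specific to any GIGABYTE product.

```python
"""Illustrative sketch: enumerate CXL memory expanders on a Linux host.
Assumes a kernel built with the CXL subsystem; paths follow the documented
sysfs-bus-cxl ABI and are not product-specific."""
from pathlib import Path

CXL_BUS = Path("/sys/bus/cxl/devices")

def read_attr(dev: Path, name: str) -> str:
    """Read a sysfs attribute if present, else return 'n/a'."""
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"

def list_cxl_memdevs():
    """Yield (device, volatile_capacity, persistent_capacity, numa_node)."""
    if not CXL_BUS.is_dir():
        return  # no CXL support or no devices present
    for dev in sorted(CXL_BUS.glob("mem*")):
        yield (
            dev.name,
            read_attr(dev, "ram/size"),   # volatile capacity, bytes (if exposed)
            read_attr(dev, "pmem/size"),  # persistent capacity, bytes (if exposed)
            read_attr(dev, "numa_node"),
        )

if __name__ == "__main__":
    for name, ram, pmem, node in list_cxl_memdevs():
        print(f"{name}: ram={ram} pmem={pmem} numa_node={node}")
```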
GIGABYTE also collaborates with leading storage partners to deliver next-generation storage technologies, powering petabyte-scale workloads and AI clusters that demand extreme IOPS.
For queries or more information, please contact sales.
Source: GIGABYTE