
QNAP Introduces QAI-h1290FX Edge AI Storage Server for Private LLM and Generative AI Workloads



May 11, 2026

QNAP Systems has unveiled the QAI-h1290FX, an edge AI storage server engineered for enterprises seeking to deploy large language models, RAG-based search and generative AI workloads on private infrastructure. Optimized for data privacy, low-latency processing, regulatory compliance and local operational control, the hardware allows businesses to run AI workflows on-premises without transferring sensitive data to public cloud environments.


QNAP QAI h1290FX front panel


The QAI-h1290FX enables enterprises to build internal AI assistants for staff training, policy inquiry and internal knowledge retrieval while keeping raw data confined within corporate boundaries. Legal, financial, HR and operational departments can construct private RAG pipelines to analyze contracts, reports and internal archives, delivering more contextual insights than conventional keyword searching. Creative teams can operate generative image tools including Stable Diffusion and ComfyUI to streamline design workflows, while IT administrators may leverage automation platforms such as n8n to execute AI inference, generate content and dispatch system alerts across business infrastructures.
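The retrieval step of such a private RAG pipeline can be sketched in miniature. The snippet below is a toy illustration only: it ranks documents with bag-of-words term counts in place of a real embedding model, and the sample documents are invented.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term frequencies. A real pipeline
    # would call a locally served neural embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query; the top-k hits would
    # be appended to the LLM prompt as grounding context.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Employees accrue 20 vacation days per year.",
    "The contract renewal deadline is March 31.",
    "Server room access requires a keycard.",
]
print(retrieve("when is the contract renewal deadline", docs, k=1))
```

The point of keeping this loop on-premises is that neither the query nor the document store ever leaves the server.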

QNAP QAI-h1290FX Components, Expansion, and I/O


Powered by an AMD EPYC 7302P CPU and equipped with twelve U.2 NVMe/SATA SSD drive bays, the QAI-h1290FX integrates enterprise-grade computing power with an all-flash storage architecture tailored for latency-sensitive AI tasks. This 16-core, 32-thread processor supports AI inference, virtualization and multi-task parallel processing. Meanwhile, its high-speed SSD array is built to accommodate frequent model execution, continuous data streaming and rapid access to datasets, embeddings, documents and AI-generated outputs.
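As a rough illustration of how the 32 hardware threads can be put to work on CPU-side tasks such as document preprocessing, a worker pool sized to the machine's thread count might look like this (the `preprocess` stand-in function is hypothetical):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def preprocess(doc):
    # Stand-in for CPU-side work such as tokenizing or chunking a document.
    return len(doc.split())

def parallel_preprocess(docs, workers=None):
    # Size the pool to the hardware thread count (32 on an EPYC 7302P);
    # os.cpu_count() picks this up automatically on the target machine.
    workers = workers or os.cpu_count()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(preprocess, docs))

print(parallel_preprocess(["alpha beta", "gamma delta epsilon"]))
```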


The server optionally supports the NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation GPU, offering up to 96GB VRAM to handle resource-intensive local AI workloads. With native compatibility for CUDA, TensorRT and Transformer Engine acceleration, the system facilitates on-premises LLM inference, image synthesis and deep learning computation, eliminating the need for independently built GPU workstations.
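Whether a given model fits in 96GB of VRAM can be estimated with simple arithmetic. The sketch below uses a common rule of thumb (weights = parameter count × bytes per parameter, plus roughly 20% overhead for activations and KV cache; the 1.2 factor is an assumption, not a QNAP figure):

```python
def vram_estimate_gb(params_billion, bytes_per_param, overhead=1.2):
    # Rule of thumb: weight memory plus ~20% for activations and KV cache.
    # The overhead factor is an assumption; real usage varies by workload.
    return params_billion * bytes_per_param * overhead

# A 70B-parameter model at FP16 (2 bytes/param) vs 4-bit quantization (0.5 bytes/param):
print(vram_estimate_gb(70, 2.0))   # FP16: well above 96 GB
print(vram_estimate_gb(70, 0.5))   # 4-bit: fits within 96 GB
```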

QNAP QAI h1290FX rear panel


The QAI-h1290FX is equipped with enterprise-grade networking, including two 25GbE SFP28 SmartNIC ports and two 2.5GbE ports with Wake-on-LAN. For hardware expansion, four PCIe Gen 4 slots are available: three x16 slots and one x8 slot, supporting upgraded network adapters, additional GPUs and other compatible expansion cards. Additional I/O includes three USB 3.2 Gen 1 ports, with support for jumbo frames, SR-IOV and GPU pass-through. All twelve drive bays accept both 2.5-inch SATA SSDs and U.2 NVMe PCIe Gen 4 x4 media.
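A quick back-of-the-envelope check shows how these interfaces line up: the two 25GbE ports together carry about 6.25 GB/s of line-rate traffic, below what a single Gen 4 x4 NVMe slot can move, so the flash array is unlikely to be the bottleneck for network-bound transfers (figures ignore protocol and encoding overhead):

```python
def gbit_to_gbyte(gbit_per_s):
    # Line rate only; protocol/encoding overhead is ignored in this sketch.
    return gbit_per_s / 8

PCIE_GEN4_GBPS_PER_LANE = 1.969  # approx. usable GB/s per PCIe Gen 4 lane

print(gbit_to_gbyte(2 * 25))             # both SFP28 ports combined, GB/s
print(4 * PCIE_GEN4_GBPS_PER_LANE)       # a U.2 Gen 4 x4 SSD slot, GB/s
```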





Built Around Fast Storage, GPU Acceleration, and Local Control


Running QNAP’s ZFS-based QuTS hero OS, the QAI-h1290FX offers data integrity protection, comprehensive snapshots and inline deduplication. These functions suit AI workloads that process massive repeated data from documents, embeddings, models and training resources.
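Inline deduplication can be illustrated with a toy content-addressed store: identical chunks hash to the same digest and are kept only once. This is a deliberate simplification of how ZFS-style dedup behaves, not QuTS hero internals.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each chunk is keyed by its SHA-256
    digest, so repeated chunks consume physical space only once."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}   # digest -> chunk bytes
        self.logical = 0   # bytes written by callers

    def write(self, data):
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            self.logical += len(chunk)
            # Store the chunk only if its digest is new.
            self.chunks.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)

    @property
    def physical(self):
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
store.write(b"AAAABBBBAAAA")   # the "AAAA" chunk repeats
print(store.logical, store.physical)
```

Repeated embeddings, model checkpoints and document copies behave like the repeated chunk here, which is why dedup pays off for AI datasets.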

Developers can deploy AI tools in GPU-accelerated containers via Container Station, while Virtualization Station supports VM-based GPU pass-through. This flexible allocation optimizes resources, balancing container agility and virtual machine isolation for diverse deployment needs.
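Since Container Station is Docker-based, GPU pass-through to a container uses standard Docker flags. The helper below assembles such an invocation as a sketch; the image and volume paths are illustrative, and `--gpus` requires the NVIDIA container toolkit on the host.

```python
def gpu_container_cmd(image, gpus="all", volumes=None):
    # Assemble a `docker run` invocation with GPU pass-through.
    # --gpus is a standard Docker flag (needs the NVIDIA container toolkit).
    cmd = ["docker", "run", "--rm", f"--gpus={gpus}"]
    for host, guest in (volumes or {}).items():
        cmd += ["-v", f"{host}:{guest}"]
    return cmd + [image]

# Example: an Ollama container with a models volume (paths are illustrative).
print(" ".join(gpu_container_cmd("ollama/ollama",
                                 volumes={"/share/models": "/root/.ollama"})))
```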

Preloaded with AnythingLLM, OpenWebUI and Ollama, the device accelerates private LLM deployment. QNAP is also integrating Stable Diffusion, ComfyUI, n8n and vLLM to unify text generation, image creation, automation and inference on one local platform.
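Ollama exposes a local REST API (port 11434 by default), so a private assistant can be queried with nothing beyond the standard library. The sketch below builds a request for Ollama's `/api/generate` endpoint; the model name and prompt are placeholders.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_request(model, prompt):
    # Payload shape for Ollama's /api/generate endpoint; stream=False
    # asks for a single JSON response instead of streamed chunks.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request("llama3", "Summarize our vacation policy.")
print(json.loads(req.data)["model"])
# To actually run it against the preloaded Ollama service:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

Because the endpoint is local, prompts and responses never traverse the public internet.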

A Local Infrastructure Option for Enterprise AI Teams


The platform cuts manual workload for on-premises AI deployment, eliminating repetitive hardware assembly and environment configuration. Users can launch AI applications directly while retaining full data control and reducing cloud dependence.

Compatible with QNAP JBOD enclosures, the server enables seamless storage expansion to accommodate growing datasets, knowledge bases and AI-generated files.

Beijing Qianxing Jietong Technology Co., Ltd.
Sandy Yang/Global Strategy Director
WhatsApp / WeChat: +86 13426366826
Email: yangyd@qianxingdata.com
Website: www.qianxingdata.com / www.storagesserver.com
Business Focus:
ICT Product Distribution/System Integration & Services/Infrastructure Solutions
With 20+ years of IT distribution experience, we partner with leading global brands to deliver reliable products and professional services.
“Using Technology to Build an Intelligent World”: Your Trusted ICT Product Service Provider!