First software provider in Taiwan to integrate the NVIDIA B200 GPU, improving computing performance and inference accuracy for enterprise AI applications.

APMIC announced its second appearance at InnoVEX, the innovation- and startup-focused exhibit of COMPUTEX Taipei. The company has adopted NVIDIA's Blackwell-architecture B200 GPU and launched its "S1 Model Fine-Tuning and Distillation" solution. The offering covers model training, knowledge distillation, and scalability testing, and is intended to support enterprises in developing customized, domain-specific models for AI applications.
Custom AI models as a strategic imperative: Introducing the S1 Solution at COMPUTEX 2025
According to Bedrock Security’s 2025 Enterprise Data Security Confidence Index published in March, 70% of cybersecurity professionals in the U.S. cited AI and machine learning data governance as their top security concern this year, underscoring the rising demand for secure, private AI deployment and data governance. APMIC has introduced the S1 Model Fine-Tuning and Distillation Solution, built on NVIDIA NeMo and NIM, covering the full AI lifecycle from training to testing, with containerized deployment options for on-premises or private-cloud environments. Supporting both tool-based rentals and ODM customization, the solution enables enterprises to quickly build their own “AI enterprise brain.”
The solution leverages a Teacher-Student framework: open-source models are fine-tuned on internal data, then distilled into lightweight student versions that retain most of the teacher's performance while consuming far fewer resources. With full support for private infrastructure, it empowers enterprises to develop secure, efficient, and tailored AI environments.
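To illustrate the general idea behind Teacher-Student distillation (not APMIC's actual implementation, which is not public), a common approach trains the smaller student to match the larger teacher's softened output distribution via a temperature-scaled KL divergence. A minimal NumPy sketch of that loss:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A higher temperature exposes the teacher's relative confidence
    across all classes, which is the signal the student learns from.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures
    return float(np.mean(kl) * temperature ** 2)
```

In a real training loop this term is typically blended with the ordinary cross-entropy loss on ground-truth labels; the loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge.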
Accelerating enterprise AI with advanced hardware: First in Taiwan to deploy the NVIDIA B200 GPU
APMIC is also the first software provider in Taiwan to fully adopt the NVIDIA B200 GPU based on the next-generation Blackwell architecture, stepping confidently into a new era of AI computing. Compared to its predecessor, the B200 delivers significantly improved performance in LLM training and inference, enhanced FP4 computational efficiency, and higher memory bandwidth—addressing key compute limitations faced by enterprises in large model fine-tuning and compression.
With APMIC’s S1 solution, enterprises can fully leverage the B200’s computing capabilities within existing hardware infrastructures, dramatically accelerating model development, deployment, and inference while reducing total cost of ownership (TCO). This upgrade lays a strong foundation for autonomous AI architectures and paves the way for internalizing AI assets, turning AI from a concept into a daily operational reality.
For more information, visit apmic.ai.