EnCharge AI announces EN100 for on-device computing

Advanced analog in-memory computing technology delivers 200+ TOPS of AI compute power in a highly optimized package.

EnCharge AI today announced the EnCharge EN100, the industry’s first AI accelerator built on precise and scalable analog in-memory computing. Designed to bring advanced AI capabilities to laptops, workstations, and edge devices, EN100 leverages transformational efficiency to deliver 200+ TOPS of total compute power within the power constraints of edge and client platforms such as laptops.

Previously, the models driving the next generation of the AI economy—multimodal and reasoning systems—required massive data center processing power. The cost, latency, and security drawbacks of that cloud dependency put countless AI applications out of reach.

EN100 shatters these limitations. By fundamentally reshaping where AI inference happens, it lets developers deploy sophisticated, secure, personalized applications locally.
This breakthrough enables organizations to rapidly integrate advanced capabilities into existing products—democratizing powerful AI technologies and bringing high-performance inference directly to end users.

EN100, the first of the EnCharge EN series of chips, features an optimized architecture that processes AI tasks efficiently while minimizing energy consumption. Available in two form factors – M.2 for laptops and PCIe for workstations – EN100 is engineered to transform on-device capabilities:

  • M.2 for Laptops: Delivering 200+ TOPS of AI compute power in an 8.25W power envelope, EN100 M.2 enables sophisticated AI applications on laptops without compromising battery life or portability.
  • PCIe for Workstations: Featuring four NPUs reaching approximately 1 PetaOPS, the EN100 PCIe card delivers GPU-level compute capacity at a fraction of the cost and power consumption, making it ideal for professional AI applications utilizing complex models and large datasets.
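As a back-of-the-envelope illustration of the figures above (the specs are from this announcement; the arithmetic is ours, not an EnCharge-published metric), the M.2 power envelope works out to roughly 24 TOPS per watt:

```python
# Illustrative efficiency arithmetic based on the specs quoted above.
# These are derived figures, not EnCharge-published benchmarks.

M2_TOPS = 200      # 200+ TOPS in the M.2 form factor
M2_WATTS = 8.25    # stated 8.25 W power envelope

tops_per_watt = M2_TOPS / M2_WATTS
print(f"M.2 efficiency: ~{tops_per_watt:.1f} TOPS/W")  # ~24.2 TOPS/W
```

Since 200 is a floor ("200+ TOPS"), the actual efficiency figure would be at least this high.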

EnCharge AI’s comprehensive software suite delivers full platform support across the evolving model landscape with maximum efficiency. This purpose-built ecosystem combines specialized optimization tools, high-performance compilation, and extensive development resources—all supporting popular frameworks like PyTorch and TensorFlow.

Compared to competing solutions, EN100 demonstrates up to 20x better performance per watt across a range of AI workloads. With up to 128GB of high-density LPDDR memory and bandwidth reaching 272 GB/s, EN100 efficiently handles sophisticated AI tasks, such as generative language models and real-time computer vision, that typically require specialized data center hardware. EN100's programmability ensures optimized performance for today's AI models and the flexibility to adapt to the models of tomorrow.
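One way to read the 272 GB/s figure for generative language models: single-stream token generation is typically memory-bandwidth bound, since the model weights must be read once per generated token, so bandwidth divided by model size gives a rough throughput ceiling. A hedged sketch of that arithmetic, where the model size and precision are our illustrative assumptions, not EnCharge benchmarks:

```python
# Rough, bandwidth-bound upper estimate for LLM token throughput.
# The 272 GB/s bandwidth figure is from this announcement; the model
# size and precision below are hypothetical assumptions.

BANDWIDTH_GBPS = 272    # GB/s, EN100 peak memory bandwidth
PARAMS_BILLIONS = 8     # hypothetical 8B-parameter model
BYTES_PER_PARAM = 1     # assume 8-bit quantized weights

weights_gb = PARAMS_BILLIONS * BYTES_PER_PARAM  # GB read per generated token
max_tokens_per_sec = BANDWIDTH_GBPS / weights_gb

print(f"Bandwidth-bound ceiling: ~{max_tokens_per_sec:.0f} tokens/s")  # ~34
```

Real throughput would be lower (KV-cache reads, activation traffic, compute overhead), but the estimate shows why on-device bandwidth of this order makes local generative models practical.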

Early adoption partners have already begun working closely with EnCharge to map out how EN100 will deliver transformative AI experiences, such as always-on multimodal AI agents and enhanced gaming applications that render realistic environments in real-time.

While the first round of EN100's Early Access Program is currently full, interested developers and OEMs can sign up at env1.enchargeai.com to learn more about the upcoming Round 2 Early Access Program. Round 2 offers a unique opportunity to gain a competitive advantage by being among the first to leverage EN100's capabilities for commercial applications.

For more information, visit env1.enchargeai.com.