Stealth

AI semiconductor computing chip


We are building a full-stack analog in-memory computing (IMC) chip that runs AI inference directly at the sensor on 1–3 W, with no cloud connection, no active cooling, and no large battery.

The core innovation: our chip performs matrix multiplication, the core operation behind modern AI and ML models, inside analog memory rather than shuttling data between a separate processor and storage. This is how the human brain computes. Eliminating that data movement yields 10× or greater power efficiency over conventional digital AI chips as a structural property of the architecture, not an engineering compromise.
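As a concrete illustration (not the company's actual circuit), an analog IMC crossbar computes a matrix-vector product physically: weights are stored as memory-cell conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws sum the resulting currents per column. The sketch below shows the idea numerically; all values are hypothetical.

```python
import numpy as np

# Hypothetical analog in-memory crossbar: weights live in the array as
# conductances (siemens), inputs arrive as voltages, and each column's
# summed current IS the dot product. No weight ever moves to a processor.

rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances, one per weight
V = np.array([0.1, 0.2, 0.05, 0.3])        # input activations encoded as voltages

# Ohm's law per cell (I = V * G) plus Kirchhoff's current law per column
# gives the matrix-vector product directly in the physics:
I = V @ G                                   # column currents = weighted sums

# A digital chip computes the same result explicitly, fetching every
# weight from memory into the arithmetic unit:
I_digital = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])

assert np.allclose(I, I_digital)
```

The energy argument is that the digital path pays for every weight fetch, while the analog array produces the summed currents in place.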

Product. The chip integrates three layers that all competitors treat as separate concerns: (1) a proprietary analog circuit design — the IMC macro that performs matrix multiply in memory; (2) a custom XPU architecture with AXI interconnect for modular, scalable chip integration; and (3) a co-designed software and algorithm stack with sparse matrix primitives and stochastic gradient methods tuned specifically to analog compute constraints such as noise, drift, and conductance quantization. This full-stack co-design is what allows us to achieve inference accuracy that rivals digital while consuming a fraction of the power.
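The co-design point above can be sketched in software: before weights are deployed to an analog array, the dominant non-idealities the text names (conductance quantization, drift, read noise) can be simulated to measure their accuracy cost. The error model and parameter values below are illustrative assumptions, not the company's characterized silicon.

```python
import numpy as np

rng = np.random.default_rng(42)

def analog_matmul(x, W, levels=16, noise_std=0.02, drift=0.01):
    """Simulate y = x @ W on an idealized analog array with three
    non-idealities: conductance quantization, multiplicative drift,
    and additive read noise. All parameter values are illustrative."""
    # 1. Conductance quantization: each weight snaps to one of
    #    `levels` programmable conductance states.
    w_min, w_max = W.min(), W.max()
    step = (w_max - w_min) / (levels - 1)
    W_q = np.round((W - w_min) / step) * step + w_min
    # 2. Drift: stored conductances shift slightly after programming.
    W_q = W_q * (1.0 + drift * rng.standard_normal(W.shape))
    # 3. Gaussian read noise on the summed output currents.
    y = x @ W_q
    return y + noise_std * np.abs(y).mean() * rng.standard_normal(y.shape)

# Compare against an exact digital matmul on random data.
W = rng.standard_normal((64, 10))
x = rng.standard_normal((8, 64))
y_exact = x @ W
y_analog = analog_matmul(x, W)

rel_err = np.linalg.norm(y_analog - y_exact) / np.linalg.norm(y_exact)
print(f"relative error: {rel_err:.3f}")
```

Training or fine-tuning against a model like this is one way an algorithm stack can be tuned to analog constraints rather than assuming exact arithmetic.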
