  • Accelerated Chip Engine
  • Highly Scalable and Versatile
  • 3ms Latency
  • Chip Utilization Ratio up to 95.4%

Corerain’s Custom AI Streaming Accelerator (CAISA) Architecture is a high-performance computing architecture designed for deep neural network inference.

The CAISA architecture controls the computing sequence through the ordering of the data streams themselves. Operator-level dataflow graphs are extracted from the CNN model and then mapped onto the CAISA streaming engine, which eliminates idle cycles by overlapping data movement with computation. By maximizing the chip utilization ratio, it delivers cost-effective, high computing power for AI applications.
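The benefit of overlapping data movement with computation can be illustrated with the classic double-buffering timing model. The sketch below is illustrative only: the tile counts and per-tile times are made-up numbers, not Corerain specifications, and the two functions are simple analytical models, not the CAISA scheduler.

```python
def sequential_time(n_tiles, load, compute):
    # No overlap: the compute unit idles during every load,
    # so each tile costs the full load + compute time.
    return n_tiles * (load + compute)

def pipelined_time(n_tiles, load, compute):
    # Double buffering: while tile i is being computed, tile i+1
    # is loaded into the second buffer. Only the first load and
    # the last compute are not hidden behind the other stage.
    return load + (n_tiles - 1) * max(load, compute) + compute

# Illustrative per-tile times (microseconds), chosen arbitrarily.
n, load_us, compute_us = 64, 5, 8
seq = sequential_time(n, load_us, compute_us)
pipe = pipelined_time(n, load_us, compute_us)
print(f"sequential: {seq} us, pipelined: {pipe} us, "
      f"compute utilization: {n * compute_us / pipe:.1%}")
```

With these numbers the pipelined schedule finishes in 517 µs versus 832 µs sequentially, and the compute stage is busy about 99% of the time, which is the effect the "eliminates idle cycles" claim above refers to.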


Whitepaper | CAISA

Custom Artificial Intelligence Streaming Accelerator


Customized Logic Modules for Higher Computational Power


Reduced Instruction Overhead through Precise Sequence Control


High Adaptability for Various AI Applications


Acceleration Boards

Nebula Accelerator X3
Nebula Accelerator NA-100c
Rainman Accelerator
Nebula Accelerator X6

Application Scenarios

CAISA Architecture offers high-performance computing for AI applications.

Semantic Segmentation
Object Detection
Image Recognition
Feature Regression