  • Accelerated Chip Engine
  • Highly Scalable and Versatile
  • 5 ms Latency
  • MAC Efficiency up to 92%

Highly Scalable Custom AI Streaming Accelerator (CAISA) Architecture, Compatible with Different Deep Learning Algorithms

Outstanding Computing Performance with High Energy Efficiency and Low Latency

Multi-engine Parallel to Maximize the Usage of Hardware Resources
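MAC efficiency is the fraction of the hardware's peak multiply-accumulate throughput that a workload actually uses. A minimal sketch of that calculation, with hypothetical numbers (the function name, MAC count, and engine parameters below are illustrative, not Corerain's published figures):

```python
def mac_efficiency(total_macs, macs_per_cycle, clock_hz, elapsed_s):
    """Fraction of peak MAC throughput actually used (1.0 = 100%)."""
    peak_macs = macs_per_cycle * clock_hz * elapsed_s
    return total_macs / peak_macs

# Hypothetical example: 1e12 MACs executed on an engine with
# 4096 MAC units running at 500 MHz, finishing in 0.54 s.
eff = mac_efficiency(1e12, 4096, 500e6, 0.54)  # ≈ 0.90, i.e. 90% of peak
```

Keeping every MAC unit fed on every cycle is what multi-engine parallelism and the reconfigurable data path aim at; idle cycles show up directly as a lower ratio.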

CAISA

1. Parameterized Computing, Reconfigurable Data Path
   Underlying Parameterization

2. Multi-Layer Parallel Expandable

  • Data Parallelism
  • Filter Parallelism
  • Channel Parallelism
  • Layer Parallelism
  • Accelerator Engine Parallelism
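The parallelism levels above map onto the loop axes of a convolution. A minimal reference kernel illustrating one plausible mapping (an assumption for illustration, not Corerain's documented scheduling):

```python
import numpy as np

def conv2d_reference(x, weights):
    """Direct 2D convolution (valid padding, stride 1).

    Loop axes and the parallelism level each could exploit:
      n -> data parallelism    (input samples are independent)
      f -> filter parallelism  (output channels are independent)
      c -> channel parallelism (a reduction across input channels)
    Layer and accelerator-engine parallelism sit above this kernel:
    different layers or engines run such kernels concurrently.
    """
    n_b, c_in, h, w = x.shape
    f_out, _, kh, kw = weights.shape
    out = np.zeros((n_b, f_out, h - kh + 1, w - kw + 1))
    for n in range(n_b):          # data-parallel axis
        for f in range(f_out):    # filter-parallel axis
            for c in range(c_in): # channel-parallel axis (partial sums)
                for i in range(h - kh + 1):
                    for j in range(w - kw + 1):
                        out[n, f, i, j] += np.sum(
                            x[n, c, i:i + kh, j:j + kw] * weights[f, c])
    return out
```

The `n`, `f`, and `c` loops carry no cross-iteration dependence except the accumulation over `c`, which is why channel parallelism needs dedicated accumulation hardware while data and filter parallelism do not.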

3. 3D Convolutional Neural Network Architecture with Scalable Dimensions
   • Expanded Memory Architecture to Support High-Dimensional Data Parallelism
   • Extended Data Accumulation Unit to Support Parallel Computing Core Accumulation
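The role of an extended data-accumulation unit can be sketched in a few lines: each parallel compute core produces a partial sum over its slice of the input channels, and the accumulator reduces them element-wise into the final output tile. This is a loose illustrative model, not the actual hardware design:

```python
def accumulate_partial_sums(partials):
    """Element-wise reduction of per-core partial sums into one output tile.

    Loosely models an extended data-accumulation unit: each entry in
    `partials` is the partial result one compute core produced over its
    slice of the reduction axis (e.g. input channels).
    """
    total = [0.0] * len(partials[0])
    for core_result in partials:
        for i, v in enumerate(core_result):
            total[i] += v
    return total

# Three cores, each holding partial sums for a 4-element output tile.
tile = accumulate_partial_sums(
    [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]])
# tile == [111, 222, 333, 444]
```

Scaling the number of cores scales the number of inputs this reduction must merge per cycle, which is why the accumulation unit has to be extended alongside the memory architecture.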

AI Chip Products Based on CAISA

  • Nebula Accelerator
  • Rainman Accelerator

Application Scenarios

The CAISA architecture provides high-performance computing for AI applications such as:

  • Image recognition
  • Semantic segmentation
  • Time series analysis
  • Speech and semantic understanding