Artificial Intelligence (AI), and its siblings Machine Learning (ML) and Neural Networks (NN), are emerging as the most important computational workloads of modern times. These methods allow us to extract meaningful insights from data.
CORNAMI is focused on the deployment of intelligent systems in real-time environments. The company has developed a scalable, massively parallel architecture that addresses the shift in processing needs for these vast data sets. This game-changing, software-defined technology delivers unprecedented scalability, from thousands of cores on a single chip to millions of cores across a system, all individually programmable.
CORNAMI empowers developers and large enterprises, from IoT and edge devices to cloud computing, to deliver high performance anywhere, on any device, at the lowest power and latency.
Unique New Processor Architecture
A stream-based, scalable, memory-less computing fabric that directly executes any neural network topology.
Modern machine learning algorithms are all three-dimensional topologies.
In software, Cornami can express an entire 3D topology and execute it natively at hardware speed.
Cornami matches biologically inspired Neural Net control structures with highly-efficient streaming between processing engines
Minimizes memory references and is self-pruning
Increases performance and lowers latency and power consumption by orders of magnitude
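The streaming model described above can be illustrated with a toy sketch. This is purely illustrative and not Cornami's actual API: each "processing engine" is modeled as a Python generator stage that consumes values from its upstream neighbor and forwards results immediately downstream, so no stage's full output is ever materialized in memory.

```python
# Illustrative sketch only: models "processing engines" as chained
# generator stages so intermediate outputs are never stored in bulk.
def source(data):
    # Entry point of the stream: emits one value at a time.
    for x in data:
        yield x

def engine(upstream, fn):
    # A processing engine applies its operation to each value as it
    # streams past, forwarding the result immediately downstream.
    for x in upstream:
        yield fn(x)

# Build a three-stage pipeline: scale -> clamp (ReLU-like) -> offset.
pipeline = engine(
    engine(
        engine(source(range(5)), lambda x: 2 * x),
        lambda x: max(x - 4, 0)),
    lambda x: x + 1)

result = list(pipeline)
print(result)  # -> [1, 1, 1, 3, 5]
```

Because values flow element by element, each stage holds only the single value currently in flight, which is the essence of avoiding bulk memory references between stages.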
Neural networks process real-time, streaming data, yet neural networks implemented on GPUs must save and restore state layer by layer. This results in excessive power dissipation, excessive external chip memory requirements, excessive external memory bandwidth, and artificially lowered performance.
A truly streaming architecture eliminates this wasted work, delivering the best TeraOPS per watt.
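The contrast can be made concrete with a toy accounting exercise (illustrative only; the layer functions and counts are hypothetical, not a real GPU model). A layer-by-layer schedule writes each intermediate layer's output to a buffer and reads it back for the next layer, while a fused streaming schedule passes each element through all layers with only one value in flight.

```python
# Toy comparison: count how many intermediate values a layer-by-layer
# schedule buffers versus a fused streaming schedule. Illustrative
# functions stand in for neural-network layers.
layers = [lambda x: 2 * x, lambda x: x + 3, lambda x: x * x]
data = list(range(1000))

# Layer-by-layer: every intermediate layer output is saved, then
# reloaded by the next layer (the save/restore pattern).
buffered_values = 0
acts = data
for i, f in enumerate(layers):
    acts = [f(x) for x in acts]
    if i < len(layers) - 1:
        buffered_values += len(acts)  # written out, read back later

# Streaming/fused: each element flows through all layers at once;
# no intermediate layer is ever materialized.
streamed = []
for x in data:
    for f in layers:
        x = f(x)
    streamed.append(x)

assert acts == streamed    # identical results either way
print(buffered_values)     # -> 2000 intermediate values avoided
```

In this sketch the streaming schedule produces the same results while skipping 2,000 intermediate buffer writes and reads, which is the source of the power, bandwidth, and latency savings the text describes.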