FlexLogix has announced the InferX X1 edge inference co-processor, built on its inference-optimized nnMAX clusters, for incorporation in SoCs as IP and, in chip form, in Q3. The InferX X1 chip is claimed to deliver high throughput in edge applications with a single DRAM, resulting in higher throughput/watt. Its performance advantage is claimed to be strongest at low batch sizes.