RuvLLM ESP32 - Tiny LLM inference for ESP32 microcontrollers with INT8 quantization, RAG, HNSW vector search, hyperbolic embeddings (Poincaré/Lorentz), and multi-chip federation. Run AI on $4 hardware.
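The hyperbolic embedding support refers to the Poincaré ball model; as a point of reference, the distance function those embeddings rely on fits in a few lines of TypeScript. This is the textbook formula, not code taken from the package:

```typescript
// Standard Poincaré-ball distance used by hyperbolic embeddings:
// d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
// for points u, v inside the open unit ball.

function sqNorm(x: number[]): number {
  return x.reduce((acc, xi) => acc + xi * xi, 0);
}

function poincareDistance(u: number[], v: number[]): number {
  const diff = u.map((ui, i) => ui - v[i]);
  const num = 2 * sqNorm(diff);
  const denom = (1 - sqNorm(u)) * (1 - sqNorm(v));
  return Math.acosh(1 + num / denom);
}

// Points near the boundary are far apart in hyperbolic space even when their
// Euclidean gap is small, which is what makes the model useful for hierarchies.
console.log(poincareDistance([0.1, 0.0], [0.0, 0.1]).toFixed(4)); // ~0.29
console.log(poincareDistance([0.9, 0.0], [0.0, 0.9]).toFixed(4)); // ~5.20
```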
High-performance Node.js bindings for Hailo AI acceleration processors (NPU). Run neural network inference with hardware acceleration on Hailo-8 devices.
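A hypothetical usage sketch follows. The module name, class, and method signatures are assumptions for illustration rather than the package's documented API; the general Hailo-8 flow is to open the device, load a compiled HEF network, and run inference:

```typescript
// Hypothetical sketch only: the import path, class names, and method signatures
// below are assumptions, not the package's documented API.
import { HailoDevice } from 'hailo-node-bindings'; // assumed module name

async function main() {
  const device = await HailoDevice.open();              // assumed call
  const network = await device.loadHef('resnet50.hef'); // compiled Hailo model (.hef)
  const input = new Float32Array(224 * 224 * 3);        // preprocessed image tensor
  const output: Float32Array = await network.infer(input); // assumed call
  console.log('top logit index:', output.indexOf(Math.max(...output)));
  await device.close();
}

main().catch(console.error);
```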
This package helps you summarize PDFs using Gemini Nano at the edge or in the browser, keeping the workflow compliance-safe, fast, and free.
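A hedged sketch of the browser-side flow such a package wraps: extract the PDF text (here with pdf.js) and summarize it on-device. Chrome's built-in Summarizer API for Gemini Nano is still experimental and has changed between releases, so the `Summarizer` calls and options below are assumptions, not this package's exported API:

```typescript
// Pull text out of a PDF with pdf.js, then summarize it locally with the
// browser's built-in (Gemini Nano) Summarizer API. Nothing leaves the machine.
import * as pdfjsLib from 'pdfjs-dist';

declare const Summarizer: any; // experimental built-in AI API, behind a flag/origin trial

async function summarizePdf(url: string): Promise<string> {
  // (In a real page you also need to set pdfjsLib.GlobalWorkerOptions.workerSrc.)
  const pdf = await pdfjsLib.getDocument(url).promise;
  let text = '';
  for (let p = 1; p <= pdf.numPages; p++) {
    const page = await pdf.getPage(p);
    const content = await page.getTextContent();
    text += content.items.map((item: any) => item.str ?? '').join(' ') + '\n';
  }
  const summarizer = await Summarizer.create({ type: 'key-points', length: 'short' }); // assumed options
  return summarizer.summarize(text);
}
```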
Nodes supporting integration with Edge Impulse (www.edgeimpulse.com).
Native AG-UI protocol implementation for Cloudflare Workers AI - Enable CopilotKit with edge AI at 93% lower cost
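A minimal sketch of the pattern this implies: a Worker that runs a Workers AI model and streams AG-UI-style events back as server-sent events. The `env.AI.run` binding is Cloudflare's documented API; the event names and payload shapes are loose assumptions here, not this package's actual implementation:

```typescript
// Cloudflare Worker: answer an agent request with Workers AI and emit
// AG-UI-style events over SSE. Event names/shapes below are assumptions.
export interface Env {
  AI: Ai; // Workers AI binding declared in wrangler.toml ([ai] binding = "AI")
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { messages } = (await request.json()) as {
      messages: { role: 'user' | 'assistant' | 'system'; content: string }[];
    };

    // Any Workers AI text-generation model works; this one is just an example.
    const result = (await env.AI.run('@cf/meta/llama-3.1-8b-instruct', { messages })) as {
      response: string;
    };

    const sse = (event: object) => `data: ${JSON.stringify(event)}\n\n`;
    const body =
      sse({ type: 'RUN_STARTED' }) +
      sse({ type: 'TEXT_MESSAGE_CONTENT', delta: result.response }) +
      sse({ type: 'RUN_FINISHED' });

    return new Response(body, {
      headers: { 'content-type': 'text/event-stream' },
    });
  },
};
```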