Open Core Ventures announces Kernelize
Open Core Ventures (OCV) has just unveiled Kernelize Inc., an innovative AI compiler platform designed to “bridge the CUDA moat” by auto-generating optimized backends for a wide variety of hardware targets. Built on the open-source Triton compiler, Kernelize lets developers write high-performance GPU kernels in Python once and deploy them across GPUs, NPUs, TPUs, and more—eliminating lock-in to any single vendor’s proprietary stack. Founded by industry veteran Simon Waters, whose résumé includes leading AMD’s Triton contributions and co-creating the Catapult C Synthesis tool, Kernelize aims to democratize AI performance and accelerate hardware-agnostic innovation.
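To make concrete what "write GPU kernels in Python once" looks like in practice, here is a minimal sketch of a vector-add kernel written against Triton's public API (triton.jit, tl.load, tl.store). This is a generic Triton illustration, not Kernelize-specific code, and the block size and grid shape are arbitrary choices for the example.

```python
# Minimal Triton vector-add kernel: illustrative only, not Kernelize-specific code.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds loads/stores
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch a 1D grid with enough program instances to cover all elements.
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

In principle, the same Python source can target any accelerator that exposes a Triton backend; generating those backends automatically is the gap Kernelize aims to fill.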
Today’s AI models demand ever-larger computational resources, and NVIDIA’s CUDA ecosystem has long held a performance edge, often forcing teams to rewrite algorithms and rebuild custom toolchains whenever they consider alternative accelerators. Kernelize tackles this head-on by extending Triton with an automated backend generation system: instead of each chip vendor writing its own compiler backend from scratch, hardware teams can plug into Kernelize’s standardization agent, slashing development time, reducing costs, and making it easier for smaller chip makers to compete.
By lowering the barrier to entry for new hardware and simplifying the path to peak performance, Kernelize has the potential to reshape the AI landscape—encouraging greater hardware diversity, driving down compute costs, and unlocking new innovations. Read the full announcement on the Open Core Ventures blog: https://www.opencoreventures.com/blog/kernelize-inc-launches-to-bridge-the-cuda-moat.