Category: Blog

  • What sets Permion Apart from Others? Neurosymbolic AI: Large Graph Models

    We are introducing our Neurosymbolic AI Large Graph Model (LGM) as a successor to the immensely valuable, successful and revolutionary Large Language Model (LLM) technology that is a product of Generative Pre-Training (GPT). In the near future we will introduce the concept of the Large Graph Model (LGM) and its unique Neurosymbolic AI model properties.

    This article has one objective: to present Neurosymbolic AI and why Permion chose the path it did for its contribution to certain unique markets – those markets in which failure is not an option.

    Permion’s Neurosymbolic AI integrates neural networks, which learn patterns from raw data, with symbolic structures, which provide a foundation for knowledge representation and reasoning. Together, the integration of neural (tensor-method) with symbolic (theorem-prover) technologies enables intelligent systems that are both flexible and explainable—learning patterns and applying logic.
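
The split between pattern learning and symbolic checking can be sketched in a few lines. This is a toy illustration of the general neurosymbolic pattern, not Permion's implementation: the "neural" scorer is stood in for by a hand-written function, and the rules and labels are invented for the example.

```python
def neural_score(image_features):
    """Stand-in for a neural net: maps raw features to label confidences."""
    labels = {"cat": 0.0, "dog": 0.0}
    if "whiskers" in image_features:
        labels["cat"] += 0.6
    if "barks" in image_features:
        labels["dog"] += 0.9
    if "fur" in image_features:
        labels["cat"] += 0.3
        labels["dog"] += 0.3
    return labels

# Symbolic side: explicit, auditable constraints a prediction must satisfy.
RULES = {
    "cat": lambda feats: "barks" not in feats,    # cats do not bark
    "dog": lambda feats: "whiskers" not in feats, # toy rule for illustration
}

def neurosymbolic_predict(features):
    """Neural proposes, symbolic disposes: keep only labels whose
    logical constraints hold, then pick the highest-scoring one."""
    scores = neural_score(features)
    admissible = {l: s for l, s in scores.items() if RULES[l](features)}
    return max(admissible, key=admissible.get) if admissible else None

print(neurosymbolic_predict({"whiskers", "fur"}))  # -> cat
```

The point of the sketch: the symbolic layer can veto a confident but rule-violating neural answer, which is the source of the explainability claims above.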

    The MIT-IBM Watson AI Lab [1] frames neurosymbolic AI as a fusion of pattern-driven learning with logic-based reasoning that enhances transparency, generalization, and cognitive capability, while joint work by Google DeepMind and MIT [2] describes the tests and their astonishing results.

    The most recent work at the Military College confirms Permion’s results from three years ago (DTRA program ‘STARDUST’): neurosymbolic AI needs only 50% of the training data to achieve world-beating results [3]. Work at MIT confirms explainability, data- and memory-efficiency, higher quality, and fast time to results in complex, cognitively loaded scenes [4].

    How can the US outcompete its peer competitors in the AI race for new advanced AI programs without a new AI approach? Permion has overcome many challenges in AI chip design and prototyping and has developed a solution path, with lessons learned, that a strong partnership can build on to deliver the first US neurosymbolic AI chip, leveraging Permion’s current virtual “chip” in software, available on AWS. Look for the XVM™ product.

    Implementing Neurosymbolic AI: problems and issues. Anyone trying to build Neurosymbolic AI will very quickly run into a plethora of challenges, of which the most difficult and fundamental is how to reframe AI computing workloads at the lowest possible level, for building an AI chip, by crafting an Instruction Set Architecture (ISA) tailored for Neurosymbolic AI.

    This challenge is far harder than adapting and retrofitting today’s CPUs or GPUs because it mixes two historically separate computational paradigms: continuous neural processing and discrete symbolic reasoning. Neural processing alone raises serious design issues around Spiking Neural Network representation and efficient computation, neuromorphic semiconductor engineering and design, and a number of topics in the advanced mathematics and physics of computing that are largely uncommon knowledge.
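
To make the spiking-representation issue concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic unit behind Spiking Neural Networks. The constants are illustrative only, not tuned to any real neuromorphic design; note how the event-driven, stateful behavior differs from the dense matrix math of conventional neural layers.

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over a list of
    input currents. Returns the spike train (1 = fire, 0 = silent)."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 0.9]))  # -> [0, 0, 1, 0, 0]
```

The sparse, time-dependent output is exactly what makes SNN workloads a poor fit for SIMD-style tensor pipelines, and part of why a dedicated ISA is being argued for here.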

    First, there is no public literature explaining how to define machine-level instructions that operate seamlessly on both continuous tensors (for neural nets) and discrete structures (graphs, rules, logical clauses) without forcing one to inefficiently emulate the other [4]. At the core of this problem is that no mathematical “supratype” exists in the public open-source literature that expresses both tensors and logic graphs in a hardware-efficient form.

    Therefore, if a supratype were discovered, a processor could execute both gradient-descent and symbolic-inference steps natively, integrated within single computational cycles, with learning and reasoning (i.e. thinking about learning how to learn) interwoven. This would cut latency in hybrid AI workloads (e.g., planning with perception, theorem proving with embeddings) by 10–100×, while ensuring explainability is not lost. Even today’s most advanced work on the closest hardware-level step [7] toward a unifying instruction set for neuromorphic computing and intermediate representations (IR) recognizes the open challenge of integrating the continuous/discrete divide at the ISA/IR layer.
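
At the software level, one can at least sketch what such a dual-natured value might look like: a single object carrying both a continuous view (an embedding per node) and a discrete view (logical edges), with one "fused" step that updates both. This is purely hypothetical and illustrative; no such supratype exists in the public literature, and the update rules below are invented for the example.

```python
class GraphTensor:
    """Hypothetical dual-view value: continuous embeddings + discrete facts."""

    def __init__(self, embeddings, edges):
        self.embeddings = embeddings   # node -> list of floats
        self.edges = set(edges)        # discrete facts: (src, relation, dst)

    def fused_step(self, rule, lr=0.1):
        """One fused cycle: a gradient-like numeric update interwoven
        with a symbolic inference step, as envisioned above."""
        # Continuous half: nudge each embedding toward its neighbors' mean.
        for node, vec in self.embeddings.items():
            nbrs = [self.embeddings[d] for s, _, d in self.edges if s == node]
            if nbrs:
                mean = [sum(xs) / len(nbrs) for xs in zip(*nbrs)]
                self.embeddings[node] = [v + lr * (m - v)
                                         for v, m in zip(vec, mean)]
        # Discrete half: apply a logical rule to derive new edges.
        self.edges |= rule(self.edges)

# Toy transitivity rule: ancestor(a, c) if ancestor(a, b) and ancestor(b, c).
def transitive(edges):
    return {(a, r, d2) for (a, r, d) in edges
                       for (s2, r2, d2) in edges if s2 == d and r2 == r}

gt = GraphTensor({"a": [1.0], "b": [0.0], "c": [2.0]},
                 {("a", "ancestor", "b"), ("b", "ancestor", "c")})
gt.fused_step(transitive)
print(("a", "ancestor", "c") in gt.edges)  # -> True
```

In hardware, the hard part the article identifies is making both halves of that step native instructions rather than a software loop.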

    Second, no one has publicly shown how to design instructions that traverse, match, and update dynamic symbolic structures (graphs, rules, ontologies, logic) that change over time, efficiently and in parallel, with the same determinism that SIMD instructions give to matrix math [8]. Prior work (DySER [9], GraphBLAS [10]) provides partial solutions, but no mainstream ISA has integrated these concepts as a reusable, modular set of pattern-matching and unification primitives in its hardware pipelines.
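
The GraphBLAS idea referenced above is that graph traversal can be expressed as matrix algebra over a boolean semiring, giving graphs the same regular data flow that SIMD gives matrix math. A pure-Python illustration of one BFS step in that style (this is the concept, not the GraphBLAS API):

```python
def bfs_level(adj, frontier, visited):
    """One BFS step as a boolean matrix-vector product:
    next[j] = OR_i (frontier[i] AND adj[i][j]), masked by unvisited nodes."""
    n = len(adj)
    return [bool(any(frontier[i] and adj[i][j] for i in range(n)))
            and not visited[j] for j in range(n)]

# Adjacency matrix for the path graph 0 -> 1 -> 2 -> 3.
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]

frontier = [True, False, False, False]
visited = frontier[:]
levels = {0: 0}          # node -> BFS depth
depth = 0
while any(frontier):
    depth += 1
    frontier = bfs_level(A, frontier, visited)
    for j, reached in enumerate(frontier):
        if reached:
            visited[j] = True
            levels[j] = depth

print(levels)  # -> {0: 0, 1: 1, 2: 2, 3: 3}
```

Because every step is a fixed-shape matrix-vector operation, the traversal becomes deterministic and parallelizable, which is the property the article argues a neurosymbolic ISA must provide natively for graphs that also change at runtime.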

    The nearest work illustrating a solution to unification began with Warren’s Abstract Machine (WAM) design [11] and continued with processor-architecture versions [12], but it proved too hard to connect to graphs or matrices even in the era of GPUs [13], owing to the irregularity of memory access and control flow in graph processing. Large Language Models (LLMs) embed no knowledge graph of any kind precisely because they lack this combined neural and symbolic capability.
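
Unification, the core operation of the WAM line of work, is easy to state in software even though it has resisted clean hardware pipelining. A textbook Robinson-style sketch (variables are strings starting with '?', compound terms are tuples; the occurs-check is omitted for brevity):

```python
def walk(term, subst):
    """Follow variable bindings to their current value."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return the most general substitution making a and b equal, or None."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: distinct constants or arity mismatch

# parent(?x, bob) unifies with parent(alice, ?y):
print(unify(("parent", "?x", "bob"), ("parent", "alice", "?y")))
```

The pointer-chasing in `walk` and the data-dependent recursion are exactly the irregular memory access and control flow that the article says defeated GPU-era attempts to hardware-accelerate this operation.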

    Therefore, if solved, constraint solving (SAT/SMT solving, abductive inference, multi-agent planning), missing-data imputation, and combinatorial problems would drop from hours or minutes to sub-millisecond. Reasoning over dynamic knowledge graphs opens a real path to Artificial General Intelligence (AGI). This is critical to logistics, planning, defense, and scientific discovery.

    Third, conventional ISAs define correctness as “bitwise exact,” while AI uses probabilistic correctness, and future quantum AI will need systemic representations of multimedia and multimodal data based on ‘qubits’ — all of which mismatch the conventional mainstream computing architecture at the instruction level.

    There is no public source detailing how to encode constraints, proofs, or causal explanations as first-class objects in an ISA. Today we have only experiments that rely on communication interchanges to provide side outputs and external proofs. If solved, instructions within a processor would carry semantic guarantees (e.g., “output X satisfies clause Y”), and executables could be “proven” correct on a blockchain as an immutable record of program integrity. This provides deep explainability, tamper-proofing, and guarantees of trust. Each inference could then return both the answer and a proof sketch, making AI decisions trustworthy, auditable, and certifiable in safety-critical domains. Even the most advanced work is a post-processing ‘glue-on’ that does not operate at the ISA level [14].
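
The "answer plus proof sketch" pattern can be illustrated at the software level with SAT as the example: the solver returns both an assignment and a certificate that any caller can re-check independently, without trusting the solver. This toy brute-force version sits far above the ISA layer the article envisions, but it shows the shape of a proof-carrying result.

```python
from itertools import product

def solve_sat(clauses, variables):
    """Brute-force SAT. A clause is a list of (var, polarity) literals.
    Returns (assignment, certificate) or (None, None)."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            # Certificate: for each clause, one literal that satisfies it.
            cert = [next((v, pol) for v, pol in clause
                         if assignment[v] == pol) for clause in clauses]
            return assignment, cert
    return None, None

def check_certificate(clauses, assignment, cert):
    """Independent verifier: each cited literal must belong to its clause
    and be true under the assignment."""
    return all(lit in clause and assignment[lit[0]] == lit[1]
               for clause, lit in zip(clauses, cert))

# (x OR y) AND (NOT x OR y)
clauses = [[("x", True), ("y", True)], [("x", False), ("y", True)]]
assignment, cert = solve_sat(clauses, ["x", "y"])
print(check_certificate(clauses, assignment, cert))  # -> True
```

Verification is cheap (linear in the formula) even when solving is expensive, which is what makes proof-carrying answers attractive for audit and certification.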

    A neurosymbolic AI chip that natively unifies tensor learning and symbolic reasoning, offers deterministic high-throughput graph control, and returns proof-carrying results at the instruction boundary would deliver decisive advantages in data efficiency, speed, and trust for the most mission-critical enterprise and commercial systems (e.g., banks).

    That’s the heart of Permion’s roadmap—and why a U.S. neurosymbolic AI chip program is timely and necessary.

    Today, Permion makes it easy for a programmer to write Neurosymbolic AI programs without having to think about how to solve all these hard problems – we have done it for you. We provide example code, software development kits with tools, debuggers, code analyzers, and integrations with Python, Java, and industry-standard environments, along with AI-powered support.

    In future articles we will discuss the importance of Neurosymbolic AI and how and why it achieves zero hallucination. It enables building new kinds of AI models that tightly and deeply compose the neural and symbolic paradigms for rich new kinds of applications – where failure or hallucination is not an option.

    Agentic AI built on Neurosymbolic technologies delivers new levels of intelligent software agents and ready-to-use protocol frameworks for machine learning, so you can build models fast and without the high costs of size, weight, and power. You get deductive, abductive, and inductive distributed reasoning capabilities. You get logistics solvers and planners with constraints and optimization strategies for your most complex issues. You get robotic process automation (RPA) to craft fleets of AI intelligence sources from data.

    Permion is new, and I am so extremely excited to be a key part of one of the most amazing groups of people working to bring to the world such an amazing new engine for discovery science, data understanding, and creative applications in banking, finance, hedge funds, business intelligence, medicine, biology, materials science, space mission systems, edge AI, network processing, robotics, and latent signal analytics.

    References:

    1. MIT and IBM Neurosymbolic AI: https://mitibmwatsonailab.mit.edu/category/neuro-symbolic-ai/

    2. Teaching Machines to Reason about what they See: https://news.mit.edu/2019/teaching-machines-to-reason-about-what-they-see-0402

    3. Neurosymbolic Artificial Intelligence for Robust Network Intrusion Detection: From Scratch to Transfer Learning: https://arxiv.org/html/2506.04454v1

    4. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding: https://arxiv.org/pdf/1810.02338.pdf

    5. Neuro-Symbolic methods for Trustworthy AI: https://neurosymbolic-ai-journal.com/system/files/nai-paper-726.pdf

    6. Neuro-Symbolic AI: Explainability, Challenges, and Future Trends: https://arxiv.org/pdf/2411.04383

    7. Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing: https://www.nature.com/articles/s41467-024-52259-9

    8. Hardware Acceleration for Knowledge Graph Processing: Challenges & Recent Developments: https://arxiv.org/pdf/2408.12173

    9. DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing: https://www.cs.cmu.edu/afs/cs/academic/class/15740-f18/www/papers/ieeemicro12-govindaraju-dyser.pdf

    10. GraphBLAS: https://graphblas.org/

    11. An Abstract Prolog Instruction Set: https://www.sri.com/wp-content/uploads/2021/12/641.pdf

    12. Research in the System Architecture of Accelerators for the High Performance Execution of Logic Programs: https://apps.dtic.mil/sti/tr/pdf/ADA259710.pdf

    13. SIMD-X: Programming and Processing of Graph Algorithms on GPUs: https://arxiv.org/pdf/1812.04070v1

    14. RISC Zero zkVM: Scalable, Transparent Arguments of RISC-V Integrity: https://dev.risczero.com/proof-system-in-detail.pdf