The Ontology Coup

13/6/24
AIP & Foundry
Why are enterprises struggling with AI adoption?
“Traditional data architectures do not capture the reasoning that goes into decision-making or the action that results, and therefore limit learning and the incorporation of AI. Conventional analytics architectures do not contextualise computation within lived reality, and therefore remain disconnected from operations.” - Akshay Krishnaswamy, Chief Architect
Ontology
The key differentiator is a software architecture that revolves around the Palantir Ontology (Foundry). The Ontology is designed to represent the decisions in an enterprise, not simply the data.
Data
Relevant data in a company can comprise structured data, streaming and edge sources, unstructured repositories, imagery data, etc. BUT one kind of data not captured by traditional data-software architectures is “decision” data.
Decision data refers to the information generated during the decision-making process. It includes the context around decisions, the alternatives that were considered, and the effects of the final choice made. This data differs from traditional forms of structured data because it's dynamic and evolves as decisions are made.
Thus, integrating the full range of enterprise data with the fluid nature of decision data requires a very different architecture than a classical database management solution that is optimised for reporting and analytics.
Ultimately, the Ontology is designed to capture decision data that is produced by operational users as they carry out daily work (e.g., within supply chains, hospital systems, customer service centres).
“The end-to-end “decision lineage” of when a given decision was made, atop which version of enterprise data, and through which application, is automatically captured and securely accessible to both human developers and generative AI.”
This decision data serves as a foundation for LLMs to learn (training) and infer (inference). Without it, any use of LLMs is very limited or prone to hallucination.
A way to look at it is that the Ontology serves as a trusted data source that grounds the LLM in your enterprise. Through AIP, this equips the LLM with a “tool” to request or query data objects directly from the Ontology and add them to the information in the original prompt.
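A minimal sketch of that grounding pattern, assuming a toy in-memory store and made-up names (`ONTOLOGY`, `query_objects`, the `Order` object type) that stand in for the real Ontology API:

```python
# Hypothetical in-memory "ontology" of enterprise objects (illustrative only).
ONTOLOGY = {
    "Order": [
        {"id": "O-1", "customer": "Acme", "status": "delayed"},
        {"id": "O-2", "customer": "Beta", "status": "shipped"},
    ],
}

def query_objects(object_type, **filters):
    """Tool the LLM can call to pull grounded data into its context."""
    rows = ONTOLOGY.get(object_type, [])
    return [r for r in rows if all(r.get(k) == v for k, v in filters.items())]

def build_prompt(question):
    """Augment the user's question with data fetched via the tool."""
    delayed = query_objects("Order", status="delayed")
    context = "\n".join(f"- {o['id']} ({o['customer']})" for o in delayed)
    return f"Context (delayed orders):\n{context}\n\nQuestion: {question}"

print(build_prompt("Which orders should we expedite?"))
```

The point is that the LLM never guesses order data; it asks the tool, and the retrieved objects travel into the prompt.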
Logic
Data is only one piece of the decision equation. When and how to make a decision is known as Logic.
Recall: “Decision data refers to the information generated during the decision-making process.” The principles (rules and guidelines) that underpin the decision-making process ARE Logic.
Enterprise logic ranges from simple to complex. For instance:
Simple: If the applicant's credit score is above 700 and their annual income exceeds $50,000, approve the loan.
Complex: Optimisation model that pulls data from various sources (e.g., raw material costs, shipping times, inventory levels, customer demand forecasts) to generate the most efficient production and delivery schedule.
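The “simple” rule above can be written directly as a deterministic function; this is illustrative only, not anyone’s actual lending policy:

```python
def approve_loan(credit_score: int, annual_income: float) -> bool:
    """Simple enterprise logic: deterministic, auditable, no LLM involved."""
    return credit_score > 700 and annual_income > 50_000

print(approve_loan(720, 60_000))  # → True (both thresholds met)
print(approve_loan(680, 90_000))  # → False (score too low)
```

Deterministic logic like this always gives the same answer for the same inputs, which is exactly what makes it auditable.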
Can’t LLMs reason? They can, but they’re mediocre at best. In formal terms, their reasoning is non-deterministic (see [1] for more clarity).
As such, AIP offers “tools” for LLMs to leverage (e.g., Yield Optimiser, Demand Planner, Invoice Poster, Financial Forecaster, Geophysics Model, Best Supplier Finder). [2]
In essence, the LLM hands off the bulk of the reasoning to the tools, and in turn the tools contextualise the results for the LLM.
“LLMs are designed to predict the next best token. While this capability is helpful for generating realistic-looking text, it’s not always the right tool for more complex tasks like solving equations, forecasting, or running simulations, which are better handled with purpose-built models or functions.
“By leveraging tools for tasks that LLMs are not well-suited to perform, we can further reduce the likelihood of a hallucination in the overall Logic function.” - Palantir Blog
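One way to picture the handoff: the LLM chooses a tool and its arguments, and the tool does the deterministic work. The tool names and formulas below are made up to stand in for assets like the Demand Planner; they are not AIP’s real tools:

```python
# Registry of deterministic tools; names and maths are illustrative.
def demand_planner(history):
    """Naive forecast: average of recent demand."""
    return sum(history) / len(history)

def yield_optimiser(inputs, loss_rate):
    """Expected yield after process losses."""
    return inputs * (1 - loss_rate)

TOOLS = {"demand_planner": demand_planner, "yield_optimiser": yield_optimiser}

def llm_dispatch(tool_name, **kwargs):
    """The LLM picks a tool and arguments; the tool does the real reasoning."""
    return TOOLS[tool_name](**kwargs)

print(llm_dispatch("demand_planner", history=[100, 120, 110]))  # → 110.0
```

The LLM’s job shrinks to selecting the right tool and parameters, which is a language task it is actually good at.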
Before you ask: no, users do not have to build entire logic assets from scratch. The platform integrates existing logic assets typically found in CRM and ERP systems; predictive models across data science environments in the cloud; and proprietary or domain-specific tools.
A key feature of the Ontology is “logic binding”. One of the biggest challenges today is dealing with fragmented systems, where business logic, machine learning models, and other processes live in different environments and can’t easily interact. The Ontology solves this by combining these heterogeneous logic assets.
For example, a workflow could take data from a CRM, pass it through a machine learning model in the cloud, and then use an on-premises optimisation algorithm — all through one interface.
And it’s modular! If there are gaps in the existing system or if a specific task requires logic that hasn’t been created yet, users can build or plug new logic assets, or remove them as needed.
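The CRM-to-model-to-optimiser chain above might look like this behind one interface. Every function here is a hypothetical stand-in for a logic asset living in a different system:

```python
# Each stage stands in for a logic asset hosted in a different environment.
def fetch_crm_leads():
    """Stand-in for a CRM query."""
    return [{"name": "Acme", "spend": 50_000}, {"name": "Beta", "spend": 5_000}]

def score_leads(leads):
    """Stand-in for a cloud ML model: a crude propensity score."""
    return [{**l, "score": min(l["spend"] / 100_000, 1.0)} for l in leads]

def optimise_outreach(scored, capacity=1):
    """Stand-in for an on-premises optimiser: pick top leads by score."""
    return sorted(scored, key=lambda l: l["score"], reverse=True)[:capacity]

def run_workflow():
    """One interface binding three heterogeneous logic assets."""
    return optimise_outreach(score_leads(fetch_crm_leads()))

print(run_workflow())  # top lead is Acme
```

The modularity claim falls out naturally: swapping `score_leads` for a different model changes nothing else in the chain.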
Action
And at last…execution of the decision!
Data tools and logic tools can be used to reduce the likelihood and impact of hallucinations. But no matter how well LLMs are designed, hallucinations are a potential side-effect of using GenAI.
So to account for that possibility, “human-AI teaming” allows users to oversee and revise AI-generated actions.
Now you ask: what about the chain of responsibility and accountability? Worry not. The platform segments those who can explore possible decisions, those who can stage decisions for review, and those who can commit them.
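That explore / stage / commit segmentation can be thought of as a simple permission check. The role names below are illustrative; a real platform enforces this through its security model:

```python
# Illustrative roles mapped to the actions they may perform on a decision.
PERMISSIONS = {
    "analyst": {"explore"},
    "planner": {"explore", "stage"},
    "approver": {"explore", "stage", "commit"},
}

def perform_action(role, action):
    """Gate AI-suggested actions behind human roles before execution."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role!r} may not {action!r} a decision")
    return f"{role} performed {action}"

print(perform_action("approver", "commit"))
# perform_action("analyst", "commit") would raise PermissionError
```

The key property is that a commit can only ever be executed by a role explicitly granted it, so accountability is traceable by construction.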
Brilliant analogy: If the data elements in the Ontology are “the nouns” of the enterprise, then the actions can be considered “the verbs”. These nouns and verbs are brought together through logic to form a sentence.
Ultimately, AIP acts as the bridge between the Ontology (Foundry) and LLMs.
[1] LLMs are designed and trained to perform just one kind of task: predicting the next most likely “token” based on a given sequence of text. A token can be a word, a part of a word, or even a series of characters. However, when it comes to answering questions, next-token prediction does not require LLMs to return truthful or accurate answers to the questions posed. The job of an LLM is just to produce text that is statistically likely to look plausible, based on its training data. When that plausible-looking text is factually wrong, it is known as hallucination.
[2] Tools work through RAG/OAG
Retrieval Augmented Generation (RAG) enables an LLM to retrieve data from outside sources (e.g., orders, customers, locations) to generate responses. Think of ChatGPT using data from a PDF you uploaded. Ontology Augmented Generation (OAG) takes RAG to the next level: it retrieves not only data, but also logic (e.g., statistical models) and actions.
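A toy contrast under those definitions: RAG only retrieves data, while OAG also invokes logic and stages an action for human review. Everything here (the inventory, the reorder rule, the action shape) is invented for illustration:

```python
INVENTORY = {"widget": 12}

def rag_answer(question):
    """RAG: retrieve data and hand it to the LLM as context."""
    return f"Context: widget stock = {INVENTORY['widget']}. Q: {question}"

def reorder_quantity(stock, target=50):
    """A logic asset the OAG flow can invoke."""
    return max(target - stock, 0)

def oag_answer(question):
    """OAG: retrieve data, run logic, and stage an action for review."""
    qty = reorder_quantity(INVENTORY["widget"])
    action = {"type": "create_purchase_order", "qty": qty, "status": "staged"}
    return rag_answer(question), action

_, action = oag_answer("Do we need to reorder widgets?")
print(action)  # a staged purchase order for 38 units
```

Note the action is only “staged”, not executed, which is where the human-AI teaming from the Action section takes over.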
© 2025 by Aknia, Inc. All rights reserved.