Graph-Driven, LLM-Assisted Virtual Assistant Architecture
Large language models (LLMs) have demonstrated reasoning capabilities through mechanisms such as Chain of Thought (CoT), Graph of Thoughts (GoT) and Algorithm of Thoughts (AoT), and they undoubtedly offer exciting possibilities for virtual assistants and Artificial General Intelligence (AGI). However, these approaches carry risks inherent in the probabilistic nature of the LLM; for high-risk applications, reasoning may need to be deterministic and rules explicitly defined. Here, I share a conceptual architecture for a virtual assistant that reduces this risk by combining an explicitly defined semantic data model, maintained as a knowledge graph, with an LLM used in a translative role to express the results of inferencing.
Taking this approach:
(1) The business domain, such as customer goals and constraints, is represented and maintained in the knowledge graph. Rules that relate goals to constraints are also encoded in the knowledge graph (see the sketch after this list).
(2) LLM agents for Completeness, Correctness, Validity, Classification and Planning (amongst other things) act as functional building blocks that interact with one another to understand and fulfil customers’ goals while taking constraints into account.
(3) The Python Engine orchestrates the interaction among the LLM Agents, the Knowledge Graph and the UI (a sketch follows the next paragraph).
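To make (1) concrete, below is a minimal sketch of how a customer goal, a constraint and a rule relating them could be encoded in the knowledge graph and queried deterministically. It assumes the rdflib library; the ex: vocabulary, the Goal and Constraint classes and the constrainedBy property are illustrative names of my own, not part of any standard ontology, and a production model would use a properly governed one.

```python
# A minimal sketch using rdflib and an illustrative (hypothetical) vocabulary.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/assistant#")

kg = Graph()
kg.bind("ex", EX)

# Classes for the business domain: goals and the constraints that govern them.
kg.add((EX.Goal, RDF.type, RDFS.Class))
kg.add((EX.Constraint, RDF.type, RDFS.Class))

# A customer goal: increase a credit limit.
kg.add((EX.IncreaseCreditLimit, RDF.type, EX.Goal))
kg.add((EX.IncreaseCreditLimit, RDFS.label, Literal("Increase credit limit")))

# An explicitly defined constraint and the rule relating it to the goal.
kg.add((EX.AffordabilityCheck, RDF.type, EX.Constraint))
kg.add((EX.AffordabilityCheck, RDFS.label, Literal("Affordability check must pass")))
kg.add((EX.IncreaseCreditLimit, EX.constrainedBy, EX.AffordabilityCheck))

# Deterministic inferencing: a SPARQL query returning, for a given goal,
# every constraint that must be satisfied before the goal can be fulfilled.
results = kg.query(
    """
    SELECT ?constraintLabel WHERE {
        ex:IncreaseCreditLimit ex:constrainedBy ?c .
        ?c rdfs:label ?constraintLabel .
    }
    """,
    initNs={"ex": EX, "rdfs": RDFS},
)

for row in results:
    print(row.constraintLabel)  # -> "Affordability check must pass"
```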
Importantly, in this approach the semantic data model and the results of inferencing constitute the reasoned response; they are supplied to the LLM, which is utilised purely in a translative role.
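To illustrate (2), (3) and the translative role just described, here is a rough sketch of how the Python Engine might orchestrate the agents and hand the deterministically derived results to the LLM purely for phrasing. The agent functions, the InferenceResult structure and the llm_complete helper are hypothetical stubs of my own, not part of any specific framework.

```python
# A rough sketch of the orchestration loop; agent and LLM calls are stubbed
# placeholders, not a specific library's API.
from dataclasses import dataclass


@dataclass
class InferenceResult:
    goal: str                # the classified customer goal
    constraints: list[str]   # constraints retrieved from the knowledge graph
    missing_info: list[str]  # gaps found by the Completeness agent


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM; swap in your provider's client here."""
    raise NotImplementedError


def classify_goal(utterance: str) -> str:
    """Classification agent: map the customer's utterance to a goal in the graph."""
    return llm_complete(f"Classify this request into a known goal: {utterance}")


def check_completeness(goal: str, utterance: str) -> list[str]:
    """Completeness agent: identify information still needed to act on the goal."""
    return []  # stub


def query_constraints(goal: str) -> list[str]:
    """Deterministic step: read the goal's constraints from the knowledge graph."""
    return ["Affordability check must pass"]  # stub; see the SPARQL query above


def translate(result: InferenceResult) -> str:
    """Translative role: the LLM renders the already-reasoned response as prose.

    It receives the inference results as the answer to express, not a question
    to reason about.
    """
    return llm_complete(
        "Rewrite the following reasoned response for the customer, without "
        f"adding or removing facts: {result}"
    )


def handle_request(utterance: str) -> str:
    """The Python Engine: orchestrates agents, knowledge graph and UI."""
    goal = classify_goal(utterance)
    result = InferenceResult(
        goal=goal,
        constraints=query_constraints(goal),
        missing_info=check_completeness(goal, utterance),
    )
    return translate(result)
```

The design choice to note is that the LLM never decides whether the goal can be fulfilled; that determination comes from the explicit rules in the knowledge graph, and the LLM only expresses the outcome.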
Views are my own.