Generative Pretrained Transformers (GPT), the technology behind chatbots such as ChatGPT or Claude, are powered by Large Language Models (LLMs), which are essentially advanced predictors of the next word given your input. They are more or less autocomplete on steroids. But these models do not know what they are talking about: they only see patterns and mimic textual content based on the internet text they were trained on. The result is not actual knowledge derived from reasoning, but existing content rephrased to fit your request.
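The "advanced guessing" idea can be illustrated with a deliberately tiny sketch: a bigram model that predicts the next word purely from counts of which word followed which in its training text. Real LLMs use neural networks over far larger contexts, but the principle of statistical next-word prediction is the same; the corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy training text; a real LLM is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor seen in training, or None.
    counts = follow.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (followed "the" most often)
```

The model has no notion of what a cat is; it only reproduces the statistically most likely continuation.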
Our cognitive models instead build on formal logic. This means we do not mimic or guess; we formulate the content based on given terms and their formal relations to other terms. The actual logic is thus anchored in the given context of the problem. Our solution approaches the problem iteratively, not by hallucination: if the topic is unclear or not clearly specified, it enters a dialog until the problem is clarified.
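As a rough illustration of this contrast (the rules, terms, and function names here are hypothetical, not the product's actual representation), consider a minimal rule-based deriver: it answers only what follows from stated facts and rules, and when it hits an unknown term it asks a clarifying question instead of guessing.

```python
# Terms and their formal relations: each rule lists premises
# that entail the goal (premise -> goal).
rules = {
    "is_mortal": ["is_human"],       # is_human entails is_mortal
    "is_human": ["is_philosopher"],  # is_philosopher entails is_human
}

def derive(goal, facts):
    """Return True/False when decidable, or a clarifying question."""
    if goal in facts:
        return True
    premises = rules.get(goal)
    if premises is None:
        # Unknown term: enter a dialog instead of hallucinating.
        return f"Please clarify: does '{goal}' hold in your context?"
    for premise in premises:
        result = derive(premise, facts)
        if result is not False:
            return result  # True, or a question to pass back to the user
    return False

print(derive("is_mortal", {"is_philosopher"}))  # -> True, by chained rules
print(derive("is_mortal", set()))               # -> a clarifying question
```

Every answer is traceable to explicit terms and rules, and gaps in the specification surface as questions rather than fabricated output.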