Our Mission: Truly Cognitive Intelligence for Machines
Artificial General Intelligence (AGI), or human-like AI, is a long-standing dream of mankind. Billions of dollars are poured into a race that seems to crown a new winner every day. The race is currently heating up because whoever takes pole position often also earns the most money. Much is invested in LLMs to squeeze the last bit of performance out of them and finally surpass the bar of real human intelligence. But are we really that far along, as some optimists proclaim in every virtual marketplace like X, LinkedIn, or the "legacy media"? Let it settle a bit. Although the answers given to prompts are in many cases reliable and satisfy 90% of users' needs, these models still struggle with problems that are articulated precisely and require some genuine thought to be invested. In such cases, Generative Pre-trained Transformer (GPT) models tend to hallucinate. But why? And is this really human-like intelligence?

All the AI Sees Are Patterns
GPT bots built on LLMs are trained to predict the most likely next word given a prompt. The problem is that many prompts carry so many semantic uncertainties that it is hard to guess what the sender could have meant. Most GPT bots will simply give the most common answer to the prompt, but because of these uncertainties they respond with content that is not factual but invented to match the query as closely as possible. State-of-the-art GPT bots will adapt if you refine your prompt, but they will never reason about whether it makes sense. Nor do they ask questions when you pose an underspecified requirement such as: "The application has to be user friendly." What user-friendliness really means is a matter of very free interpretation. A human would ask you to define your terms more precisely, and by specifying the details, reason and purpose emerge.
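The next-word guessing described above can be sketched in a few lines. The toy vocabulary, contexts, and probabilities below are invented purely for illustration; a real LLM learns a distribution over tens of thousands of tokens, but the principle of always picking the most common continuation is the same:

```python
import random

# Toy "language model": for each 2-word context, a probability distribution
# over possible next tokens. All entries here are invented for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "application"): {"has": 0.5, "is": 0.3, "crashed": 0.2},
    ("application", "has"): {"to": 0.7, "been": 0.3},
    ("has", "to"): {"be": 0.9, "run": 0.1},
    ("to", "be"): {"user": 0.6, "fast": 0.4},
}

def predict_next(context, greedy=True):
    """Return the most likely (or a sampled) next token for the last 2 words."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {})
    if not probs:
        return None
    if greedy:
        # Greedy decoding: always the most common continuation,
        # regardless of whether it makes sense for the sender's intent.
        return max(probs, key=probs.get)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, steps=4):
    """Extend the prompt word by word, with no reasoning about meaning."""
    tokens = prompt.lower().split()
    for _ in range(steps):
        nxt = predict_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("The application"))  # → "the application has to be user"
```

Note that the generator never stops to ask what "user friendly" should mean; it only follows the statistically most common path through its patterns.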
The Terminology Is Not Fixed
Unlike the common view, we do not see the terminology of words as something fixed. In programming languages, fixed terminology is mandatory, and this is exactly where the problem originates. In software development, the way we use terms to name a method is unlike how natural language works. That is why we often say natural language is hard to process: we call it "ambiguous." In fact, language is very strict in its constraints and rules. There is simply no explicit, omnipotent meaning of a term valid for all contexts from the beginning. Instead, meaning is negotiated in each context. It works by the user interacting with the software to bind the term to the real intention in that given context. A term thus receives a very individual meaning there. Sure, there is always a prototype of what a certain term represents by default, but constrained to a certain context, it can resolve its true meaning in relation to other terms.
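The idea that a term falls back to a vague prototype unless a context has negotiated a sharper meaning for it can be sketched as follows. The contexts and meanings here are made up for illustration only:

```python
# A term carries a default ("prototype") meaning, but each context can
# negotiate its own, more specific meaning. All names and meanings below
# are invented examples, not part of any real system.
PROTOTYPES = {
    "user friendly": "pleasant to use (vague default)",
}

CONTEXT_MEANINGS = {
    "mobile app": {
        "user friendly": "usable one-handed, loads in under 2 seconds",
    },
    "accessibility audit": {
        "user friendly": "operable via screen reader and keyboard alone",
    },
}

def resolve(term, context=None):
    """Resolve a term to its negotiated meaning in a context,
    falling back to the context-free prototype."""
    if context in CONTEXT_MEANINGS and term in CONTEXT_MEANINGS[context]:
        return CONTEXT_MEANINGS[context][term]
    return PROTOTYPES.get(term, "unknown term")

print(resolve("user friendly"))                         # prototype only
print(resolve("user friendly", "accessibility audit"))  # context-specific
```

The prototype exists before any context, but only the context turns it into something concrete enough to act on.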


To be Continued…
This article is currently a work in progress.
It will be finished shortly…