S2Schat shows a radical difference from GPT algorithms

Singapore, September 16, 2024

Arllecta Group has carried out the initial implementation of its sense-to-sense (S2S) algorithm, based on the company's own mathematical theory, Sense Theory [1], which was specifically designed for the creation of self-identifying AI.

Now we will briefly describe the technological approach used in our solution.

The uniqueness of this algorithm, in comparison with the current most advanced GPT models, lies in at least four characteristic differences.

The first difference is that we do not try to “guess” in advance what the user is going to enter or ask our algorithm, as we see two critical algorithmic errors in that approach to creating AI. First, each person is unique, both in their level of education and in their personal preference for certain linguistic expressions. Second, the approach of guessing the next words has catastrophic consequences for the development of an individual person, as it does not provide the opportunity to develop unique personal abilities.
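To make the criticized approach concrete: GPT-style models are trained to predict (“guess”) the next token from the statistics of a fixed training corpus. Below is a deliberately simplified bigram sketch of that objective – an illustrative stand-in, not the implementation of any production model:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which token follows which in the training corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def guess_next(counts, token):
    """Predict the most frequent continuation seen in training data,
    regardless of the individual user's own style of expression."""
    return counts[token].most_common(1)[0][0] if counts[token] else None

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
```

Here `guess_next(model, "the")` returns `"cat"`, because "the cat" occurs more often than "the mat" in the toy corpus – the prediction reflects corpus frequency, not the user.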

The second difference, and perhaps the most important one, is that we do not use the transformer architecture at all, since we believe that this architecture has several critical defects that greatly distort the meaning of the processed input text. Two obvious defects are these. First, the combinational law for the scalar product of a query vector and a key vector cannot be satisfied in the attention mechanism. Second, normalizing the values of the output vector of the attention mechanism introduces distortions in the decoder module, since any type of context vector, when normalized, loses the main thing – the semantic connection between its elements.
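For reference, the normalization step in question is the softmax over query–key scalar products in standard scaled dot-product attention (Vaswani et al., 2017). A minimal NumPy sketch of that standard mechanism (not of S2Schat) – the softmax forces each row of attention weights to sum to 1:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: query-key scalar products are scaled and then
    normalized with softmax -- the normalization step the text argues
    distorts semantic connections between context-vector elements."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key scalar products
    weights = softmax(scores, axis=-1)   # each row now sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```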

The third difference is that the algorithms of our own mathematical theory, Sense Theory, which was specifically created for self-identifying AI, allow us to find and determine semantic connections between both animate and inanimate objects. For example, our solution can determine the semantic connection between a car parked on Fifth Avenue and a ship departing from London to New York, even in the absence of any person related to the car or the ship. This is the so-called hidden-knowledge search algorithm.

The fourth difference is the ability of S2Schat to work with both large and small amounts of data without supervision. That is, the S2Schat architecture does not use pre-trained data to communicate with its user. S2Schat algorithms are completely autonomous and learn in the process of communication – they form a first impression after 15 minutes of communication with the user. This approach allows us to solve an extremely important task – building a personalized knowledge base for each individual user. It is like creating a genome for each user, which in subsequent communication helps quickly find solutions for that user.

 

Now we will briefly describe the AI focus in our solution.

According to Egger Mielberg, the author of the mathematical theory Sense Theory and the creator of the 25 main software modules that make up the software core of S2Schat [2], the current task of our AI is to produce only one sentence as the answer when analyzing not only a small passage from a book or a specialized article, but also a full-fledged book of 500 pages or more.

The main cognitive task of our solution is to implement two important directions. The first direction is to search for and linguistically describe the zero object of the first level as a vector axis that determines the main meaning of the processed text.

The second direction is to search for and describe the depth of the semantic connection between zero objects of the second and other levels. For this, we use sense derivatives [3] and the sense entropy value. Sense entropy was introduced and described in the article Sense Entropy [4].

The uniqueness of sense entropy lies in its ability to describe the degree of semantic connection between objects of different natures. This feature is missing in traditional mathematics.
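For contrast, the traditional notion the text alludes to – Shannon entropy – is defined only over a probability distribution on a single event space and carries no notion of semantic links between heterogeneous objects. A minimal reference implementation of that classical quantity (not of sense entropy):

```python
import math

def shannon_entropy(probs):
    """Classical Shannon entropy H = -sum(p * log2 p), defined only for
    a probability distribution over one event space -- unlike the sense
    entropy of [4], it says nothing about semantic connections between
    objects of different natures."""
    assert abs(sum(probs) - 1.0) < 1e-9, "input must be a distribution"
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A uniform distribution over four outcomes gives the maximal value of 2 bits, `shannon_entropy([0.25] * 4) == 2.0`, while a certain outcome gives `shannon_entropy([1.0]) == 0.0`.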

 

And now we will present some of the numbers obtained when comparing our S2Schat solution (13 of the 25 modules implemented) with what is, in our opinion, the most advanced GPT solution on the market today: ChatGPT 4o from OpenAI. The reason we chose ChatGPT is that the vast majority of other current GPT solutions on the global AI market are based on ChatGPT.

Metric | ChatGPT | S2Schat | Interpretation
The sense amplification coefficient (%) [4] | 16 | 97 | the more tokens the algorithm identifies in the text and uses for analysis, the lower the value
The sense efficiency coefficient (n zero objects – SEC) [4] | 5 | 1 | the less semantic connection between tokens, the higher the value
The Mielberg sense cycle [4] | SEC not const | SEC const | if SEC remains constant when the text sample changes, the result of finding the main meaning is maximal
The sense entropy (n) [4] | 0.3 | 0.9 | =1 – one meaning (sense) of the analyzed text; <1 – several meanings (senses) of the analyzed text

 

For our text analysis we used the book American Ways (XX, On Understanding excerpt).

Below are the exact specifications of our solution and approximate ones for ChatGPT.

Specification | ChatGPT | S2Schat | Note
Number of servers | >>100 | 2 | S2S: 4 CPU, 32 GB RAM, 1 TB SSD
Number of parameters | >100 mln | 25 | S2S: 25 core modules determine 1 parameter each
Architecture | transformer & others | sense-to-sense model | S2S: sense derivatives, sense limit & sense sets
Training database | various open text sources | user-downloaded text | S2S: the algorithm works on processing text in A4 format
Training mode | pretrained | real-time | S2S: the algorithm uses user-uploaded text
Team size | >50 | 4 | S2S: each engineer covers two or three positions
Calculation methods | traditional mathematics | Sense Theory | S2S: a number of computational tools of this theory are used

 

In our algorithmic calculations, we consider “sense energy”, which has a vector nature; it is therefore more accurate to formulate the Law of Conservation of Sense [4]:

The total sense energy (SE) of any open sense space (OSS) is constant if the conditions of the Mielberg cycle are satisfied in this sense space.

In other words, to create AI with self-identification, it is extremely important for the implemented algorithm that the meaning (sense) of the analyzed book, article, abstract, or other source remains the same.

The law of conservation of sense, like the law of conservation of energy, shows the constancy of the existence of the object of study only in different semantic (energy) forms.

The law of conservation of sense, in contrast to the Turing test, qualitatively determines the degree of “humanity” of digital AI.

Resources:

[1] Sense Theory. Part 1.

https://vixra.org/pdf/1905.0105v1.pdf

[2] S2Schat.

https://www.s2schat.com

[3] Sense Derivative.

https://www.researchgate.net/publication/344876659_Sense_Derivative

[4] Sense Entropy. The Law of Conservation of Sense.

https://www.researchgate.net/publication/369295558_Sense_Entropy_The_Law_of_Conservation_of_Sense
