Language Model Level of Truth

JoeX69 - Jun 30 - Dev Community

Many of us know that, in rough words, LLMs "just" forecast the next word. Most of them are fine-tuned on top of an existing pretrained model, unless you are a Big Tech with enough compute to build your own pretrained model from a tokenized corpus, then fine-tune it, then deploy it and finally serve it hot, right out of the oven and ready to eat.
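
To make the "forecast the next word" part concrete, here is a minimal sketch using the Hugging Face `transformers` library with the small pretrained `gpt2` checkpoint (both are just example choices, not the only way to do this). It scores the candidates for the token that comes right after a prompt:

```python
# Minimal sketch of "forecasting the next word" with a small pretrained causal LM.
# The model name ("gpt2") is only an example; any causal LM checkpoint would do.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]        # scores for the *next* token only
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely next tokens and their probabilities
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12s}  p={p.item():.3f}")
```

Everything the model "says" is built by repeating this step, one token at a time, which is why it will happily produce a fluent continuation whether or not it actually knows the answer.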

One of the big problems now is hallucinations and, let's say it, LIES. A model will almost never tell you "I don't know": it knows it all, even when it does not. So I'm interested in finding a way to verify and guarantee a safe level of truth from an LLM, with a low rate of hallucinations. Hallucinations can be good when brainstorming, but bad in most other cases.

The benefits would be many. To mention just one: in an age when agentic programming is still an incipient area, we very often need a "truth checker" that plays the role the condition inside an if-instruction plays in conventional programming.

Because when we develop, compile and execute code on any deterministic Turing machine, we know the processor will tell us the truth.
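
As a rough illustration of what such a "truth checker" could look like inside an agent, here is a hedged sketch based on self-consistency voting: sample the model several times and only act when the answers agree. The `ask_llm` helper is hypothetical (a stand-in for whatever completion API you use), and agreement is of course not a guarantee of truth, just a cheap hallucination filter standing in for that if-condition:

```python
# Hedged sketch of a "truth checker" used like an if-condition in an agent loop.
# ask_llm is a hypothetical placeholder; self-consistency voting is one simple
# heuristic, not a guarantee that the answer is true.
from collections import Counter

def ask_llm(question: str) -> str:
    """Hypothetical stand-in: replace with a call to your actual model/API."""
    return "yes"  # canned answer so the sketch runs end-to-end

def self_consistent_answer(question: str, samples: int = 5, threshold: float = 0.8):
    """Sample the model several times; accept only if the answers mostly agree."""
    answers = [ask_llm(question).strip().lower() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / samples >= threshold:
        return best    # consistent enough to act on
    return None        # the agent should abstain: say "I don't know" or escalate

# Usage: the checker plays the role of the condition inside an if-instruction.
answer = self_consistent_answer("Is 2**10 equal to 1024? Answer yes or no.")
if answer is not None:
    print("proceed with:", answer)
else:
    print("low agreement: do not act, fall back to a human or a deterministic tool")
```

The interesting (and hard) part is replacing that voting heuristic with something closer to the deterministic guarantee a processor gives us.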

I think this area of research is as amazing as it is difficult. Are there any approaches out there? If you are interested, contact me or post here.
