LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.
Often the answers are pretty good. But you never know if you got a good answer or a bad answer.
And the system doesn’t know either.
For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.
Accurate.
No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.
The worst for me was a fairly simple programming question. The class it used didn’t exist.
“You are correct, that class was removed in OLD version. Try this updated code instead.”
Gave another made up class name.
Repeated with a newer version number.
It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.
So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?
From what I’ve seen you’ll need an iron stomach.
They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.
That is about the experience I have. I asked it for factual information in the field I work in. It didn’t give correct answers. Or, it gave protocols that looked workable but were strange and would not have been successful.
With a proper framework, decent assertions are possible.
If that is done, the workload on the human is very low.
That said, it’s STILL imperfect, but it’s leagues better than one-shot question and answer.
Except LLMs don’t store sources.
They don’t even store sentences.
It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.
And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on.
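That loop is simple enough to sketch in a few lines. This is a toy stand-in, not a real LLM: `toy_model` and its word-length scoring are entirely made up, just to show the shape of the feed-output-back-in cycle described above.

```python
# Toy sketch of the autoregressive loop: score candidates, pick the
# most likely next token, append it to the input, and repeat.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_model(context):
    # Hypothetical stand-in for the stack of probability spaces a real
    # LLM encodes in its weights: here, an arbitrary scoring rule.
    weights = [len(t) + context.count(t) for t in VOCAB]
    total = sum(weights)
    return {t: w / total for t, w in zip(VOCAB, weights)}

def generate(prompt, steps=5):
    tokens = prompt.split()
    for _ in range(steps):
        dist = toy_model(tokens)
        # Greedy decoding: take the single most probable token, then
        # feed the extended sequence back in as the next input.
        next_token = max(dist, key=dist.get)
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the cat"))
```

Nothing in the loop checks truth or sources; it only ever asks "what token is most probable next", which is the whole point above.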
Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re lucky and it will output the correct next word, but if you already have all that you don’t really need an LLM to give you the rest.
Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs alone because their structure is entirely unsuited for it.
The proper framework does, with data store, indexing and access functions.
The cutting-edge work is absolutely using LLMs in post-RAG pipelines.
Consumer-grade chat interfaces definitely do not do this.
Edit: if you worry about topics like context windows, sentence splitting or source extraction, you aren’t using a best-in-class framework any more.
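A minimal sketch of what such a framework adds around the bare model: documents are stored and indexed, a query retrieves the best-matching sources, and only then would the model be asked to answer from those sources. All names here are hypothetical, and real pipelines use embedding models and vector stores rather than word overlap.

```python
# Sketch of the store + index + access functions mentioned above.
def build_index(documents):
    # documents: {source_id: text}; the index maps words to sources.
    index = {}
    for source_id, text in documents.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(source_id)
    return index

def retrieve(index, documents, query, k=2):
    # Score each stored source by how many query words it contains.
    scores = {}
    for word in query.lower().split():
        for source_id in index.get(word, ()):
            scores[source_id] = scores.get(source_id, 0) + 1
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return [(sid, documents[sid]) for sid in ranked]

docs = {
    "doc1": "LLMs predict the next token from probabilities",
    "doc2": "Indexes map queries to stored source documents",
}
index = build_index(docs)
hits = retrieve(index, docs, "how do indexes map documents")
# The retrieved (source_id, text) pairs go into the prompt, so the
# answer can cite concrete stored sources instead of free-associating.
print(hits)
```

The key difference from a bare chat interface is that the sources live outside the model, so they can be stored, indexed, and cited exactly.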
Sounds familiar. Citation please