Are LLMs Still Lost in the Middle?

Daniel Davis - Nov 1 - Dev Community

A few days ago, I talked about some of the inconsistencies I've seen when varying LLM temperature for knowledge extraction tasks.

I decided to revisit this topic and talk through the behavior I'm seeing. Not only did Gemini-1.5-Flash-002 not disappoint in producing yet more unexpected results, but I also saw strong evidence that models with long context windows still ignore data in the middle of the prompt. Below is the Notebook I used during the video:

Notebook
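For anyone who wants to reproduce this kind of "lost in the middle" check without the Notebook, here is a minimal sketch of the setup: bury a single "needle" fact at a chosen relative depth inside a pile of filler paragraphs, then sweep the depth and see where the model starts missing it. The function name and filler text here are hypothetical, not from my actual Notebook.

```python
import random

def build_haystack_prompt(needle, question, n_fillers=200, depth=0.5, seed=0):
    """Build a long-context prompt with a 'needle' fact buried at a
    relative depth (0.0 = start, 1.0 = end) among filler paragraphs."""
    rng = random.Random(seed)
    fillers = [
        f"Filler paragraph {i}: " + " ".join(
            rng.choice(["alpha", "beta", "gamma", "delta"]) for _ in range(30)
        )
        for i in range(n_fillers)
    ]
    # Insert the needle at the requested relative position.
    pos = int(depth * len(fillers))
    context = fillers[:pos] + [needle] + fillers[pos:]
    return "\n\n".join(context) + f"\n\nQuestion: {question}"
```

To run the actual experiment, you'd generate prompts at depths like 0.0, 0.25, 0.5, 0.75, 1.0, send each to the model, and score whether the answer contains the needle. If recall dips for middle depths, you're seeing the same effect.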

. . .