In my work with LLM-based question answering I was experimenting with adding clarification/planning steps at various points in the process. For example, after the user asks a question, I tried adding prompts with requests like:
What kind of question is this? Is it a yes/no question? What form should the answer take?
Think about ways in which the question might be ambiguous. How could it be made more precise?
Sometimes I even asked a few such questions in sequence to help the LLM better understand the question by thinking aloud. My experiments did not lead to any general conclusions, but I am sure that in some cases this helps.
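The "questions in sequence" idea can be sketched as a small prompt chain. This is a minimal illustration, not the exact code I used: `ask` stands in for any chat-completion call, and the prompt wording is only an approximation of the requests above. Note that this naive transcript contains consecutive user messages, which some APIs accept and others do not.

```python
# Hypothetical clarification prompts, echoing the ones described above.
CLARIFICATION_PROMPTS = [
    "What kind of question is this? Is it a yes/no question? "
    "What form should the answer take?",
    "Think about ways in which the question might be ambiguous. "
    "How could it be made more precise?",
]

def clarify_then_answer(ask, question):
    """Walk the model through the clarification prompts one turn at a time,
    folding each reply back into the transcript, then ask for the answer.

    `ask` is assumed to map a list of {"role", "content"} messages to a
    reply string; plug in any chat API here.
    """
    messages = [{"role": "user", "content": question}]
    for prompt in CLARIFICATION_PROMPTS + ["Now answer the original question."]:
        messages.append({"role": "user", "content": prompt})
        reply = ask(messages)
        # The model's "thinking aloud" becomes part of the context.
        messages.append({"role": "assistant", "content": reply})
    return messages[-1]["content"]
```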
Anthropic Claude requires that user and assistant messages alternate: after a user message always comes an assistant message, and after an assistant message comes a user message. I don't see any elegant way to add that additional step after the user query. Maybe I'll do it on the side, in a clean context, save the results, and somehow add them to the user question in the main context?
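The "on the side" workaround might look like the sketch below: run the clarification in a separate, throwaway context, then splice the resulting notes into the single user turn of the main context, so the strict user/assistant alternation is never violated. Again, `ask` is a stand-in for any chat-completion call (such as the Anthropic Messages API), and all prompt wording is illustrative.

```python
def clarification_notes(ask, question):
    """Collect the model's 'thinking aloud' in a clean side context."""
    side = [{"role": "user", "content":
             f"Consider this question: {question}\n"
             "What kind of question is it? In what ways might it be ambiguous?"}]
    return ask(side)

def main_messages(question, notes):
    """Build the main context: one user turn that already carries the notes,
    so the conversation still starts user -> assistant -> user -> ..."""
    content = (f"{question}\n\n"
               f"Notes from a preliminary analysis of this question:\n{notes}")
    return [{"role": "user", "content": content}]

def answer(ask, question):
    notes = clarification_notes(ask, question)
    return ask(main_messages(question, notes))
```

The design choice here is that the side context is discarded entirely; only its output survives, embedded as plain text in the main user message.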