Tip: Use echo prompting to improve the answers of the r1

This is an amazingly simple prompting technique that I use with the r1 when I notice that the AI model is starting to hallucinate or give poor or less relevant answers. (The r1 already cuts down on hallucination quite a bit, but I have come across a number of cases where the model needs extra help.) It is called echo prompting. You basically tell the AI model used by the r1 to echo your prompt back to you.

It goes like this. When you give your prompt, tell the AI to repeat the prompt back to you in its response, and make clear to the r1 that the repetition should come before it tries to solve the query.

Here is what I usually include in my prompts when I want to leverage the echo prompt capability:

  • “Repeat the question before you start to answer the question.”
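If you want to try the same idea against a text model rather than speaking to the r1, the wrapping is trivial to script. Here is a minimal Python sketch; the echo_prompt helper and the way the query is assembled are illustrative assumptions on my part, since the r1 itself is voice-driven and does not expose this kind of interface.

```python
def echo_prompt(question: str) -> str:
    # Hypothetical helper for illustration: with the r1 you would simply
    # speak this instruction aloud along with your question.
    instruction = "Repeat the question before you start to answer the question."
    return f"{instruction}\n\nQuestion: {question}"


if __name__ == "__main__":
    print(echo_prompt("How many moons does Mars have?"))
```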

Combining Echo With Chain-Of-Thought

The echo prompt can be combined with another prompting technique known as chain-of-thought (CoT). Chain-of-thought gets the AI to show its work: you tell the model to reason in steps and spell out how it arrives at an answer.

  • “Repeat the question before you start to answer the question and then think step by step.”
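Sticking with the same illustrative sketch from above, combining the two techniques is just a matter of extending the instruction. The helper name and exact wording are my own assumptions, not anything the r1 exposes.

```python
def echo_cot_prompt(question: str) -> str:
    # Hypothetical helper: echo prompting plus chain-of-thought in one instruction.
    instruction = (
        "Repeat the question before you start to answer the question "
        "and then think step by step."
    )
    return f"{instruction}\n\nQuestion: {question}"
```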

Don’t ask the r1 to rephrase your prompt!

When you give the AI permission to “rephrase”, you are giving it a lot of latitude, and the outcome can be very different. Strictly saying “repeat” is less likely to send the AI off explaining what the question might or might not have entailed. With “repeat”, the wording of the echoed question should stay quite close to the original. If it doesn’t, you can assume something has gone wrong and you’ll want to double-check the answer. There is a pretty solid chance you’ll want to redo the question.
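If you do have the model’s text response in hand, one rough way to check that the echo stayed faithful is a simple string-similarity comparison. This is only a sketch; the helper name and the 0.9 cutoff are arbitrary assumptions for illustration.

```python
import difflib


def echo_is_faithful(original: str, echoed: str, threshold: float = 0.9) -> bool:
    # Compare the echoed question against the original; a low similarity
    # ratio suggests the model rephrased rather than repeated, so the
    # answer deserves a double-check. The 0.9 cutoff is an assumption.
    ratio = difflib.SequenceMatcher(
        None, original.strip().lower(), echoed.strip().lower()
    ).ratio()
    return ratio >= threshold
```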

(For more information about the echo prompt, read the research study entitled “EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning” by Rajasekhar Reddy Mekala, Yasaman Razeghi, and Sameer Singh, arXiv, February 20, 2024.)
