The Echo Prompting Technique For Generative AI

In today’s column, I showcase a clever and yet amazingly simple prompting technique that pays big dividends when using generative AI. The technique is generally referred to as echo prompting. Just like the name says, you tell the AI to echo back to you the prompt query that you’ve entered.

Believe it or not, this makes a notable and positive difference in the responses that the AI will generate, as I’ll explain in detail momentarily. It is an easy technique and, shall we say, cheekily echoes or doubles your results (to be clear, I’m not saying you get a doubling of improvement, only that you tend to get a reasonable uptick).

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage of the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Echoing In Real Life

Before we jump into the generative AI aspects, I’d like to cover the use of echoing in a human-to-human context.

I’d bet that you’ve met people who will at times repeat back to you a question that you have asked them. For example, you might ask someone if they know where the nearest restroom is. The person stares up in the air and repeats the question of where the nearest restroom is. It is almost as though you were talking to a parrot.

Why do people do this parroting or echoing?

One reason is that it allows the person to straighten out in their mind the nature of the question. By repeating the question on a word-for-word basis, they can more carefully parse it. It reinforces to them what has been posed as the weighty question at hand.

Another basis for repeating the question is that it buys them time to think about the question and compose an answer. In a normal back-and-forth, each of you is expected to respond almost immediately in the conventional question-and-answer cadence. Repeating the question is a stall for extra time. The time can be used to undertake mental processing that otherwise might have been rushed.

The person who asked the question might also feel somewhat reassured when they hear the question repeated back to them. Doing so suggests that the respondent heard what was said. This hopefully prevents any misunderstandings.

Repeating the question word-for-word, though, is not always particularly satisfying. The problem is that you can’t discern whether the respondent actually grasped the nature of the question. A parroting of the question gives no clues that the question was understood. It is only a precise rendition of what was uttered.

To give a sense that the question was understood, the person thinking about the question might rephrase the original question. This can be handy. It is a potential show-and-tell of what the question meant to them. You can then right away correct the person if they have improperly rephrased the question.

We have two types of echoing scenarios:

  • (1) Repeat the question. Precisely repeat the question on a word-for-word basis.
  • (2) Rephrase the question. Rephrase the question and say it back, which can be useful, but it can also open a can of worms since you now must interpret the rephrasing to gauge whether your original question was suitably transformed.

Finally, if you interact with someone who constantly repeats your questions, this can become extremely irksome. Doing so here or there is fine. Always repeating a question is irritating and starts to suggest that something is amiss. Why would the person have to always repeat a question? Is it because the way you are asking questions is hard to follow or problematic? Or does the person answering seem unable to cope with the line of questioning?

Generative AI Echo Prompting Comes To The Fore

Let’s now shift into generative AI mode.

When you use generative AI, you enter a prompt. The prompt is what will shape the response that you get from generative AI. If you give a confusing prompt, the odds are that the AI will not be able to answer your query or will give you a confusing response. There are all kinds of prompting tactics and strategies that can be employed to get the most out of using generative AI. For my coverage of over 50 important prompt engineering techniques, see the link here.

I’d like to introduce you to a prompting technique known as the echo prompt.

It goes like this. When you compose your prompt, make sure to tell the AI to repeat your prompt by saying it back to you in its response. Be careful in how you stipulate this. If you just say “repeat”, the AI will likely have no means of determining what it is that you want repeated. You must be clearer about what you have in mind (I’ll show this to you).

Another crucial element is that you want the repeating to be undertaken before the AI tries to solve the query that you are posing. If the repeating is an after-the-fact step, the odds are you won’t get the benefits of asking for an echoing action. Make sure to inform the AI that the repeating ought to happen before solving the problem at hand.

Here is what I usually include in my prompts when I want to leverage the echo prompt capability:

  • “Repeat the question before you start to answer the question.”

You can plainly see how, in just a few well-crafted words, I was able to convey that the AI is to repeat the question and do so before answering occurs. Just about any wording along those lines should do the trick. You are welcome to use that exact line, but you can also compose something of your own if you prefer. Have fun.

The line should be placed somewhere sensible inside the prompt that you are going to use. I typically put the line at the end of the rest of the prompt. My guess is that doing so is slightly better since the AI is more likely to comply and not neglect the instruction (i.e., if the prompt is long and you put the line at the start, the AI might not notice it or might get swamped by the rest of the prompt).
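To make this concrete, here is a minimal sketch in Python of how you might tack the echo line onto whatever prompt you already have. This is purely illustrative under my own assumptions: the helper names and the placeholder call_llm function are hypothetical stand-ins, not part of any particular AI vendor’s library.

```python
# Minimal sketch: appending the echo instruction to the end of an existing prompt.
# The names below (add_echo, call_llm) are illustrative placeholders, not
# functions from any real SDK.

ECHO_INSTRUCTION = "Repeat the question before you start to answer the question."


def add_echo(prompt: str) -> str:
    """Append the echo instruction to the end of the prompt."""
    return f"{prompt.strip()} {ECHO_INSTRUCTION}"


def call_llm(prompt: str) -> str:
    """Placeholder for whichever generative AI app or API you actually use."""
    raise NotImplementedError("Wire this up to your model of choice.")


if __name__ == "__main__":
    question = (
        "Can you explain why dogs may not adapt to certain environments "
        "and how this affects their health?"
    )
    print(add_echo(question))
    # response = call_llm(add_echo(question))  # send the augmented prompt to your model
```

The only design choice of note is that the echo line goes at the end of the prompt, mirroring the placement advice above.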

Example Of Echo Prompting Via ChatGPT

Let’s look at the echo prompt in action.

I logged into ChatGPT to do this. You can use just about any generative AI app and the chances are that the echo prompt will be useful there too. It works well across nearly all the major generative AI and large language model designs.

First, I won’t use the echo prompt because I want to set the stage with a straight-ahead, unadorned prompt. I want you to see the difference before and after.

Here we go.

  • My entered prompt: “Can you explain why dogs may not adapt to certain environments and how this affects their health?”
  • ChatGPT generated response: “Dogs may struggle to adapt to extreme environments due to their breed characteristics. For example, brachycephalic breeds like pugs have difficulty breathing in hot climates, while short-haired breeds may struggle in colder climates. These environmental mismatches can lead to health issues, like heatstroke in hot climates or hypothermia in cold ones.”

Next, I will drop my echo prompt line into the same prompt. I will put it at the end of the rest of the prompt. Otherwise, it is the same question, and I haven’t changed a single word of the original question.

Now let’s see.

  • My entered prompt: “Can you explain why dogs may not adapt to certain environments and how this affects their health? Repeat the question before you start to answer the question.”
  • ChatGPT generated response: “You asked why dogs may not adapt to certain environments and how this affects their health. Here are some factors to consider. Physical Adaptation: Some breeds aren’t suited to specific climates, as short-nosed dogs struggle in heat while short-haired dogs may feel uncomfortable in cold environments. Social and Environmental Factors: Dogs that lack proper socialization or stimulation in a new environment may experience stress, leading to health concerns like appetite changes, weight loss, or behavioral issues. Psychological Well-being: Environmental changes, such as moving to a noisy or isolated area, can lead to anxiety or depression in dogs, which indirectly affects their immune system and overall health.”

I judge that the answer is better now.

It is admittedly difficult to discern whether an answer is better when the question is mushy. If the question involved a quantitative aspect, such as calculating how many cans of beans would need to be stacked up to reach the moon, we’d have a clearer picture of whether the echo prompt helped.

Research On The Vaunted Echo Prompt

We can turn to AI research about the echo prompt technique to gauge how it does on an empirical basis. In the research study entitled “EchoPrompt: Instructing the Model to Rephrase Queries for Improved In-context Learning” by Rajasekhar Reddy Mekala, Yasaman Razeghi, and Sameer Singh, arXiv, February 20, 2024, these salient points were made (excerpts):

  • “In this work, we introduce EchoPrompt, a simple yet effective approach that prompts the model to rephrase its queries before answering them.”
  • “EchoPrompt is adapted for both zero-shot and few-shot in-context learning with standard and chain-of-thought prompting.”
  • “Our empirical results indicate that EchoPrompt is an effective technique that enhances in-context learning performance. We recommend incorporating EchoPrompt into various baseline prompting strategies to achieve performance boosts.”
  • “Additionally, we explore whether instructing EchoPrompt to generate multiple rephrases can further enhance performance. Interestingly, we observe a slight performance drop as the number of rephrases increases.”
  • “Finally, we assess the performance of EchoPrompt in the presence of irrelevant text within the queries and find that it maintains improvements despite replicating irrelevant text in the rephrases.”

The research study made use of an echo prompt across a wide variety of scenarios and among several generative AI apps. All in all, the echo prompt made a useful difference in the results generated by the AI.

Combining Echo With Chain-Of-Thought

The echo prompt can be combined with another prompting technique known as chain-of-thought, see my detailed coverage of chain-of-thought or CoT at the link here and the link here.

Chain-of-thought is a popular technique. When using generative AI, you can get the AI to show its work by telling the AI to do stepwise processing and identify how an answer is being derived. This is customarily referred to as chain-of-thought processing or CoT. In a sense, the logical steps for reasoning about a problem are laid out as a chain or series of thoughts.

I am leery of the now-common catchphrase “chain-of-thought” in the AI field because it includes the word “thought”, as though generative AI is somehow “thinking”. Those kinds of fundamental words are best reserved for human mental endeavors. Parlaying them into the AI realm is, lamentably, an insidious form of anthropomorphizing AI. It gives the impression that AI has thoughts and thinks on par with humans. That’s not the case, and it is sad and misleading that these phrases are being used in an AI context (see my detailed discussion at the link here).

Anyway, it has become the norm to use them, and I reluctantly go along, but I ask that you keep separate the notion of how these words apply to AI versus to human cognition, thanks.

Back to the matter at hand, we can make use of the echo prompt and lean into the chain-of-thought prompting all at the same time.

I do so this way:

  • “Repeat the question before you start to answer the question and then think step by step.”

Observe that I merely added wording at the end that tells the AI to think step by step. That’s all you need to include. This new line now covers both the echoing and the chain-of-thought.
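If you are scripting your prompts, the same combination can be composed programmatically. Here is a rough sketch under the same assumptions as the earlier snippet: the helper name is hypothetical, and the instruction strings simply mirror the wording used in this column.

```python
# Sketch: composing the echo instruction with a chain-of-thought instruction.
# The helper name add_echo_and_cot is illustrative, not from any real library.

ECHO_AND_COT = (
    "Repeat the question before you start to answer the question "
    "and then think step by step."
)


def add_echo_and_cot(prompt: str) -> str:
    """Append both the echo and the step-by-step instruction to a prompt."""
    return f"{prompt.strip()} {ECHO_AND_COT}"


question = (
    "Can you explain why dogs may not adapt to certain environments "
    "and how this affects their health?"
)
print(add_echo_and_cot(question))
```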

Will it make a difference in the AI results?

I decided to put it to the test.

  • My entered prompt: “Can you explain why dogs may not adapt to certain environments and how this affects their health? Repeat the question before you start to answer the question and then think step by step.”
  • ChatGPT generated response:

“You asked why dogs may not adapt to certain environments and how this affects their health.”

“Let’s break this down to ensure we cover each part effectively. Step 1: Identify key environmental factors. Step 2: Assess how these factors impact health. Step 3: Summarize findings and provide practical insights.”

“Here’s a summary of why adaptation may fail and its health effects. Dogs may struggle to adapt when their environment doesn’t meet their physical, social, or psychological needs. Health impacts can range from physical conditions like hypothermia or heatstroke to more indirect effects such as anxiety-induced appetite changes. Providing dogs with an environment that suits their breed traits, social needs, and mental well-being can help prevent these issues and support a healthier adaptation process.”

End of ChatGPT response

Again, it can be challenging to see whether a difference is made when the question is a bit mushy, but I do believe that answer was on par with the earlier echo prompt answer and somewhat improved, since we at least got to see the logical steps that the AI was making use of.


Echoing The Use Of The Echo Prompt

I earlier mentioned that there were two ways to potentially do an echo prompt. One is to precisely tell the AI to repeat what was said. That’s what we’ve covered here.

You might have keenly noted that we actually got a minor rephrasing of the question rather than a word-for-word repeat of it. I’m okay with that, even though we didn’t tell the AI to do an active rephrasing. A modest rewording is usually okay.

If in our echo prompt we had said to “rephrase”, I assure you that we would likely have gotten a whale of a difference in what the echo looked like. When you give permission to rephrase, you are giving AI a lot of latitude. See my in-depth explanation of the rephrase prompting approach at the link here. You can use that as an alternative to the repeat, but make sure you do so knowingly.

The act of strictly saying “repeat” makes the AI less prone to going all out and telling you what the question might or might not have entailed. An especially handy aspect of the “repeat” is that the wording of the repeated or echoed question ought to be darned close to what the original question was. If not, you can assume that something has gone amiss. Make sure to double-check the answer that is given by the AI. There is a solid chance you’ll want to redo the question.
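If you want an automated sanity check for that closeness, a quick comparison of the echoed question against your original is easy to script. This is a minimal sketch using Python’s standard difflib module; the 0.8 threshold is an arbitrary illustrative choice of mine, not a recommended setting.

```python
# Sketch: flag cases where the echoed question drifts far from the original.
# difflib is part of Python's standard library; the threshold is arbitrary.
from difflib import SequenceMatcher


def echo_is_close(original: str, echoed: str, threshold: float = 0.8) -> bool:
    """Return True when the echoed text is highly similar to the original."""
    ratio = SequenceMatcher(
        None, original.lower().strip(), echoed.lower().strip()
    ).ratio()
    return ratio >= threshold


original = (
    "Can you explain why dogs may not adapt to certain environments "
    "and how this affects their health?"
)
echoed = (
    "You asked why dogs may not adapt to certain environments "
    "and how this affects their health."
)
print(echo_is_close(original, echoed))  # a modest rewording still scores as close
```

If such a check comes back False, that is a cue to double-check the answer or redo the question.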

Should you always invoke an echo?

Well, I would predict it might drive you crazy, just as it would when interacting with a human echoing devotee. The good news is that it is genuinely helpful to see the AI repeat your questions, and the added computational cost is relatively minimal. The delay might be noticeable on more involved questions, but generally I don’t think you’ll experience much waiting time.

My recommendation is to keep the echo prompt at top of mind when using generative AI. Then use the prompt as needed. Combine the echo prompt with other successful prompting techniques. All in all, you are good to go.

A final comment or two to conclude this exploration.

Marcus Aurelius famously said this about echoing: “What we do now echoes in eternity.” I mention this as mainly an uplifting sentiment. The things you are doing, including using generative AI, will apparently echo throughout eternity. Exciting.

The last word goes to the famed Albert Einstein: “Be a voice not an echo.” That’s another uplifting adage and one well worth echoing time and time again.