Leveraging The Prompt Engineering Technique Known As Least-To-Most Prompting Can Spur Generative AI To Solve Knotty Problems

Using the least-to-most prompting strategy is advantageous and a keystone in prompt engineering.
In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc. The focus here is on a sturdy prompting technique known as Least-to-Most (LTM). I will be sharing with you the ins and outs of this approach and showcasing various examples so that you can immediately go hands-on with the technique.

If you are interested in prompt engineering overall, you might find my comprehensive guide on over thirty other keystone prompting strategies to be of interest; see the discussion at the link here.

Here’s how I am going to cover the least-to-most prompting approach. First, I will explain the underlying basis for the technique. Second, I will provide the keystone research that has identified the technique and performed empirical studies to gauge the efficacy of the approach. Third, I will describe how the technique can be used in your daily use of generative AI.

Allow me a moment to explain the overarching matter at hand.

How Least-to-Most and Most-to-Least Are Used In Human Learning

Think for a moment about problem-solving.

There are a wide variety of ways to solve problems. Of those many ways possible, I am going to focus on two problem-solving approaches that are cousins of each other. You almost surely have experienced the two methods that I am going to tell you about.

One of the approaches is known as least-to-most (LTM), while the other related approach is referred to as most-to-least (MTL). The crux of the two approaches is that a problem solver might be guided by someone assisting them, such as a teacher or instructor, who either takes a light-handed approach at the start of problem-solving and then ratchets up if needed (that’s least-to-most, aka LTM), or who is heavy-handed at the beginning and decreases their involvement as things hopefully progress appropriately (that’s most-to-least, aka MTL).

Which do you prefer?

Go ahead, say aloud your answer, I’ll wait.

Well, I would guess that your answer might be that the circumstances at hand dictate which of those two is best applied. Sometimes you might desire a heavy hand from an instructor, such as in situations where making a mistake might be costly or you are nervous about being able to deal with a knotty problem. I dare say an always-on heavy-handed approach would be a bit much if it was applied indiscriminately. There are bound to be other settings where you prefer a lighter touch at the get-go. Then, if things are going foul, you might want a heavier hand for guidance and getting you back on track.

Teachers are often taught about the LTM and MTL when they learn about best practices for teaching. They are supposed to discover how they can make use of LTM and MTL. When should a student be guided by LTM versus MTL? Are some students more likely to welcome LTM or MTL when being taught? And so on.

A slew of pedagogically oriented research has examined the LTM versus MTL considerations. For example, a research study entitled “A Comparison of Most-to-Least and Least-to-Most Prompting on the Acquisition of Solitary Play Skills” by Myrna Libby, Julie Weiss, Stacie Bancroft, William Ahearn, Behavior Analysis in Practice, 2008, said this about the two approaches (excerpts):

“The purpose of the current study was to conduct a comparative analysis of common prompting techniques for teaching behavior chains. The goal was to develop a strategy for identifying the most effective and efficient prompting procedure for learners who require systematic prompting to acquire new skills.”
“Most-to-least prompting consists of a teacher placing his or her hands over the learner’s hands to guide the learner through the initial training trials. A less intrusive prompt, such as guiding the learner at the wrist, is used on subsequent training trials. The intrusiveness of the prompt continues to be faded as long as the learner is demonstrating success during training trials.”
“With least-to-most fading, the teacher allows the learner a brief opportunity to respond independently on each training trial and then delivers the least intrusive prompt if needed. Increasingly more intrusive prompts are then delivered as necessary for the learner to complete each training trial.”

I’d like to briefly highlight some of the key features of undertaking the LTM or MTL.

First, you might not ever invoke LTM or MTL at all. The jolting idea is that perhaps the level of guidance should be even-handed throughout a problem-solving task. You might use a light touch throughout and not opt to proceed into a heavier touch. You might use a heavy touch throughout and never let up. By and large, the use of LTM or MTL usually implies that you are in fact going to proceed in either a ratcheted-up or ratcheted-down progression. Maybe that isn’t always the case, though at times it can be quite fruitful.

Second, if you do opt to use LTM or MTL, you typically make a choice upfront as to which one you are going to employ. Perhaps you examine the setting and size up the circumstances. After sizing things up, you decide that for that situation you are going to use LTM, or maybe instead you decide that MTL is the right selection. It all depends.

Third, a somewhat hidden facet that might not be obvious is that you are going to potentially interject or intervene on a stepwise basis. This implies that the problem solver will be dissecting a problem into a series of steps, oftentimes a series of subproblems to be solved. The instructor lets the problem solver take a step or two and then judges how they are doing. If the solver seems to need help, you adjust your handedness from lighter to heavier or heavier to lighter, based on whether LTM or MTL is being invoked. As they say these days, you would rinse and repeat those actions.

I am going to now shift gears and see how this approach can be applied to the use of generative AI, especially as a prompting strategy when the AI is being asked to solve problems.

One thing I want to emphasize is that I am not somehow suggesting that today’s AI acts like humans or has become sentient. It is assuredly not. You see, I don’t like anthropomorphizing AI. What we are going to do here is merely reuse a practice that works for humans and see if we can get the practice to improve the results when using generative AI.

Dealing With Knotty Problems Beyond Chain-Of-Thought Approaches

I have extensively covered that an important prompting strategy consists of using chain-of-thought (CoT) oriented prompts, see my explanations and examples at the link here and the link here.

The use of CoT is relatively straightforward. You tell generative AI in your prompt that you want the AI to proceed on a step-by-step basis. The advantage is that you will get to see how the AI is tackling the problem and thus be able to assess whether you are willing to believe whatever answer is generated. The other benefit is that empirical research suggests that the AI will do a better job at solving a problem, partially due to taking the time to do a stepwise process rather than otherwise computationally rushing to get you an answer right away.
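To make that tangible, here is a minimal sketch of how a CoT-style prompt might be sent programmatically. This is merely an illustrative sketch that assumes the OpenAI Python SDK; the model name and exact wording are my own assumptions, and the essence is simply asking for a step-by-step answer.

```python
# A minimal sketch of a chain-of-thought (CoT) style prompt, assuming the
# OpenAI Python SDK (openai>=1.0) with an API key in OPENAI_API_KEY.
# The model name is an illustrative assumption; the key part is asking
# the AI to show its work step by step.
from openai import OpenAI

client = OpenAI()

question = "A train travels 60 miles per hour for 2.5 hours. How far does it travel?"

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model will do
    messages=[{
        "role": "user",
        "content": f"{question}\n\nPlease solve this step by step, showing each step before the final answer."
    }],
    temperature=0,  # lower temperature keeps the stepwise output more stable
)

print(reply.choices[0].message.content)
```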

How does that comport with the LTM and MTL?

From my remarks above, I’m sure that you keenly observed that the CoT approach drives generative AI toward undertaking problem-solving on a stepwise basis. The moment we get into any stepwise problem-solving mode, you perhaps are now thinking about what an instructor or teacher might do related to the problem-solving process. A teacher or instructor would conventionally make use of the handy-dandy LTM or MTL.

You, the user, are in a sense a potential teacher or instructor for the generative AI, at least to the extent that you might intervene during the problem-solving process and act to provide further guidance to the AI. I am not saying that you necessarily know the answer; instead, you are simply applying your human judgment to provide a semblance of guidance to the AI.

A twist to this is that rather than you the user being the presumed instructor or teacher, you can have the generative AI act in that capacity in a kind of dual role. Here’s what I mean. You give a problem to the AI and ask that the problem be solved. The AI proceeds as you’ve requested. You also tell the AI to in a sense police itself along the way. The AI gives guidance to itself.

That’s a head-scratcher, for sure.

Can the AI be both acting as a problem-solver and an LTM/MTL guidance advisor at the same time?

Yes.

Admittedly, this can be dicey. There is a solid chance that the AI is merely going to rubberstamp whatever is going on with the problem-solving aspects. The nice thing about the human teacher guiding a human student is that they have separate perspectives. If you try to do an LTM or MTL with yourself, the odds are that you might not do as well as using a third party to guide you.

The case for generative AI is usually notably different. I’ve generally found that generative AI can somewhat bifurcate problem-solving per se from the advice about problem-solving. Not always. You will need to be on your toes. Do not fall into a mental trap that just because you give a prompt telling the AI to do this it will work flawlessly. I can pretty much guarantee it won’t.

Research On Least-To-Most For Generative AI Is Informative

I will next describe research about the use of least-to-most aka LTM when prompting generative AI.

Interestingly, the LTM path seems to have caught on, while the MTL is not as widely explored. I urge that you use either LTM or MTL at your discretion. I am going to herein focus on the LTM. The MTL is pretty much the same and all you need to do is slightly reword the templated prompt accordingly.

In an AI research study entitled “Unleashing The Potential Of Prompt Engineering In Large Language Models: A Comprehensive Review” by Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, Shengxin Zhu, arXiv, October 27, 2023, the researchers said this about LTM (excerpts):

“The concept of ‘least to most prompting’ is an advanced method that involves starting with a minimal prompt and gradually increasing its complexity to elicit more sophisticated responses from the language model.”
“The foundational premise of this approach is the decomposition of intricate problems into a succession of more rudimentary subproblems, which are then sequentially addressed. The resolution of each subproblem is expedited by leveraging solutions derived from antecedent subproblems.”
“Upon rigorous experimentation in domains including symbolic manipulation, compositional generalization, and mathematical reasoning, findings substantiate that the least-to-most prompting paradigm exhibits the capacity to generalize across challenges of greater complexity than those initially presented in the prompts. They found that LLMs seem to respond effectively to this method, demonstrating its potential for enhancing the reasoning capabilities of these models.”

There are numerous ways that LTM is undertaken.

The research above notes that one means of doing LTM consists of first using a minimalist prompt and then proceeding to increase the complexity of your prompts as you further tackle a given problem at hand. That’s a fine way to proceed.

Another means or mode consists of telling the AI to proceed on an LTM basis. As I stated above, you can ask the generative AI to serve as its own light-to-heavy-handed advisor. That’s a fine way to proceed too.

An in-depth examination of LTM for generative AI was undertaken in a research paper entitled “Least-to-Most Prompting Enables Complex Reasoning In Large Language Models” by Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi, ICLR 2023, arXiv, April 16, 2023, which included these key points (excerpts):

“To tackle such easy-to-hard generalization issues, we propose least-to-most prompting.”
“It consists of two stages: first decomposing a complex problem into a list of easier subproblems, and then sequentially solving these subproblems, whereby solving a given subproblem is facilitated by the answers to previously solved subproblems.”
“The term least-to-most prompting is borrowed from educational psychology where it is used to denote the technique of using a progressive sequence of prompts to help a student to learn a new skill.”
“Our empirical findings, which encompass symbolic manipulation, compositional generalization, and mathematical reasoning, reveal that least-to-most prompting significantly surpasses standard prompting and chain-of-thought prompting.”

A reassuring outcome of the research study was that their empirical findings supported the hunch that LTM prompting does appear to boost the results generated by the AI. Thus, we can make use of the approach with confidence that the effort is worthwhile.
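For those who like to tinker programmatically, here is a rough sketch of the two-stage flavor of LTM described in that research, namely decomposing a problem into subproblems and then solving them in sequence while feeding earlier answers into later ones. The sketch assumes the OpenAI Python SDK; the prompt wording and model name are illustrative assumptions of mine and not the exact prompts used in the paper.

```python
# A minimal sketch of a two-stage least-to-most flow: first ask the model to
# decompose the problem into subproblems, then solve the subproblems in order,
# carrying earlier answers forward. Assumes the OpenAI Python SDK (openai>=1.0);
# prompt wording and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: substitute whatever model you use

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content

problem = ("Elsa has 5 apples. Anna has 2 more apples than Elsa. "
           "How many apples do they have together?")

# Stage 1: decomposition into easier subproblems.
decomposition = ask(
    f"Break this problem into a numbered list of simpler subproblems, "
    f"one per line, without solving them:\n{problem}"
)
# Keep the non-empty lines as subproblems (a real parser would be more careful).
subproblems = [line for line in decomposition.splitlines() if line.strip()]

# Stage 2: solve each subproblem in sequence, feeding prior answers forward.
context = f"Problem: {problem}\n"
for sub in subproblems:
    answer = ask(f"{context}\nNow answer this subproblem concisely: {sub}")
    context += f"\nSubproblem: {sub}\nAnswer: {answer}\n"

print(context)
```

The carried-forward context is what distinguishes this from ordinary step-by-step prompting: each subproblem sees the answers that came before it.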

For those of you deeply interested in these kinds of prompting approaches, you might enjoy taking a look at my analysis of chain-of-thought factored decomposition, see the link here, which is a particular CoT technique that in many ways parallels the LTM approach.

Before we get into further specifics about this prompting technique, it would be useful to make sure we are all on the same page about the nature and importance of prompt engineering. Let’s do that.

The Nature And Importance Of Prompt Engineering

Please be aware that composing well-devised prompts is essential to getting robust results from generative AI and large language models (LLMs). It is highly recommended that anyone avidly using generative AI should learn about and regularly practice the fine art and science of devising sound prompts. I purposefully note that prompting is both art and science. Some people are wanton in their prompting, which is not going to get you productive responses. You want to be systematic, leverage the science of prompting, and include a suitable dash of artistry, combining the two to get the most desirable results.

My golden rule about generative AI is this:

The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, if you include words that suggest levity, there is a solid chance that the generative AI will seemingly go into a humorous mode and no longer emit serious answers to your questions.

Be direct, be obvious, and avoid distractive wording.

Being copiously specific should also be cautiously employed. You see, being painstakingly specific can be off-putting due to giving too much information. Amidst all the details, there is a chance that the generative AI will either get lost in the weeds or will strike upon a particular word or phrase that causes a wild leap into some tangential realm. I am not saying that you should never use detailed prompts. That’s silly. I am saying that you should use detailed prompts in sensible ways, such as forewarning the generative AI that you are going to include copious details.

You need to compose your prompts in relatively straightforward language and be abundantly clear about what you are asking or what you are telling the generative AI to do.

A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

All in all, be mindful of how you compose your prompts.

By being careful and thoughtful you will hopefully minimize the possibility of wasting your time and effort. There is also the matter of cost. If you are paying to use a generative AI app, the usage is sometimes based on how much computational activity is required to fulfill your prompt request or instruction. Thus, entering prompts that are off-target could cause the generative AI to take excessive computational resources to respond. You end up paying for stuff that either took longer than required or that doesn’t satisfy your request, and you are stuck with the bill anyway.

I like to say at my speaking engagements that dealing with generative AI and its prompts is like a box of chocolates. You never know exactly what you are going to get when you enter prompts. The generative AI is devised with a probabilistic and statistical underpinning which pretty much guarantees that the output produced will vary each time. In the parlance of the AI field, we say that generative AI is considered non-deterministic.

My point is that, unlike other apps or systems that you might use, you cannot fully predict what will come out of generative AI when inputting a particular prompt. You must remain flexible. You must always be on your toes. Do not fall into the mental laziness of assuming that the generative AI output will always be correct or apt to your query. It won’t be.

Write that down on a handy slip of paper and tape it onto your laptop or desktop screen.

Exploring The Practical Day-to-Day Use Of LTM

We are ready to dig more deeply into the LTM prompting strategy.

In the research study that I cited above as an in-depth examination of LTM, they provided various examples including one about Elsa and Anna and their collected apples. Elsa has five apples, while Anna has two more apples than Elsa. The problem to be solved is how many apples they have in total. I’m sure you recognize this as a typical arithmetic word problem that you had while in school.

Let’s take a look at this example.

They entered a prompt that offered no particular suggestions about how to solve the problem and wanted to see how the problem would be solved:

Prompt: “Q: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have together?”
Generative AI generated response: “A: Anna has 2 more apples than Elsa. So Anna has 2 + 5 = 7 apples. So Elsa and Anna have 5 + 7 = 12 apples together. The answer is: 12.”

The answer is correct. I’m sure you already figured out the answer in your head. Proof that the math you learned in school has finally paid off.

Next, they used an overarching prompt to invoke an LTM approach, which then spurred the generative AI accordingly. You can see what happened as a result:

Prompt: “Q: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have together?”
Generative AI generated response: “A: Let’s break down this problem: 1. How many apples does Anna have? 2. How many apples do they have together?”
“1. Anna has 2 more apples than Elsa. So Anna has 2 + 5 = 7 apples.”
“2. Elsa and Anna have 5 + 7 = 12 apples together.”
“The answer is: 12.”

Here’s what the researchers said about this:

“The demonstration exemplar consists of two parts. The first part (starting from “Let’s break down this problem”) shows how the original problem can be decomposed into simpler subproblems, and the second part shows how the subproblems are solved in sequence. Note that this prompt combines decomposition and subproblem solving into a single pass. One may instead design two different prompts respectively for decomposition and subproblem solving, as the least-to-most prompts in the previous sections, to further improve performance.” (ibid).
“We also construct a chain-of-thought prompt as our baseline. It is derived from the least-to-most prompt by removing the decomposition part. Overall, least-to-most prompting only slightly improves chain-of-thought prompting: from 60.97% to 62.39%. However, least-to-most prompting essentially improves chain-of-thought prompting in solving problems which need at least 5 steps to be solved: from 39.07% to 45.23%.” (ibid).

I’d like to provide some commentary about this.

In this case, the apple problem is quite obviously a simple one. I’m not knocking the problem. I’m merely emphasizing that with simple problems the odds are that using an LTM prompt is probably not going to make a whale of a difference. As you likely observed, the correct answer was derived both when not using LTM and when using LTM.

My recommendation is that you should aim to use LTM or MTL mainly when you have a tough problem. If the problem consists of a lot of steps and is hard to solve, I consider that type of problem to be a more suitable candidate. Just my suggestion. You can use LTM or MTL whenever you like.

Let’s next take a look at some homegrown examples of what happens when you use an LTM prompt while in a conversation with generative AI. I am going to use ChatGPT to showcase my examples. ChatGPT is a sensible choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a staggering number.

A few quick comments before we launch into using ChatGPT.

If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. You will need to undertake a series of trial-and-error attempts to ferret this out.

There is also the timing factor. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal facets that could change how the AI reacts to your prompting.
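If you are experimenting with prompts programmatically, you can at least dampen some of that run-to-run variability, though you cannot eliminate it. Here is a small sketch, assuming the OpenAI Python SDK; even with a temperature of zero, outputs can still drift across runs and across model updates.

```python
# A small sketch of dampening (not eliminating) run-to-run variability when
# experimenting with prompts programmatically, assuming the OpenAI Python SDK.
# Even at temperature 0, outputs can still differ across runs and model updates.
from openai import OpenAI

client = OpenAI()

prompt = ("Elsa has 5 apples. Anna has 2 more apples than Elsa. "
          "How many apples do they have together?")

for attempt in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # assumption: substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # reduces randomness in token selection
    )
    print(f"Run {attempt + 1}: {reply.choices[0].message.content}\n")
```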

We are ready to get underway with ChatGPT.

Here we go.

We will start by using the apple problem again.

I present the problem and do not make use of any prompting adornments. I just say what the problem is and ask my question. This will be our baseline in this instance.

My entered prompt: “Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have together?”
ChatGPT generated response: “Let’s start by finding out how many apples Anna has. Since Anna has 2 more apples than Elsa, we add 2 to Elsa’s 5 apples. Elsa’s apples: 5. Anna’s apples: 5 + 2 = 7. Total apples = Elsa’s apples + Anna’s apples = 5 + 7. Total apples = 12”

I will next use a prompt that invokes LTM. This is one such example. There are a multitude of ways to word this kind of prompt.

My entered prompt: “I want you to solve problems using a problem-solving technique known as least-to-most problem-solving. This means that you are to divide a problem into a series of subproblems, and then solve each of the subproblems one at a time. As you do so, provide advice to yourself that is guiding your problem-solving process. Start with light advice in terms of advisement, and then proceed to ratchet up the advisement as you get into further steps. Make sure to show me the steps, subproblems, and the least-to-most advice. Tell me if you understand these instructions. Here is the first problem that I want you to solve: Elsa has 5 apples. Anna has 2 more apples than Elsa. How many apples do they have together?”
ChatGPT generated response:

“Yes, I understand the instructions. I will solve the problem using the least-to-most problem-solving technique.

“Step 1: Define the problem”

“Advice: Make sure to carefully define the problem based on the information provided.”

“We’re tasked with determining the total number of apples Elsa and Anna have combined. Elsa initially possesses 5 apples, while Anna holds 2 more apples than Elsa.”

“Step 2: Identify subproblems”

“Advice: The problem requires using the number of apples that Elsa has to calculate the number of apples that Anna has.”

“To solve this problem, we need to ascertain Anna’s apple count, given that it’s dependent on Elsa’s count. Then, we’ll sum up their individual apple counts to derive the total.”

“Step 3: Solve subproblems”

“Advice: Divide the problem into subproblems. Solve each subproblem. Combine the results.”

“Subproblem 1: Calculate Anna’s apples”

“Elsa has 5 apples. Anna, having 2 more apples than Elsa, must possess 5 + 2 = 7 apples.”

“Step 4: Perform calculations”

“Advice: Recap the results calculated so far.”

“Elsa’s apples: 5”

“Anna’s apples: 7”

“Step 5: Review”

“Advice: Review the progress made and doublecheck the interim results.”

“Anna’s apple count stands at 7.”

“Step 6: Combine results”

“Advice: Summarize what has been calculated. Perform the final needed calculation.”

“Now, let’s aggregate the apple counts of Elsa and Anna to determine the total.”

“Elsa’s apples: 5”

“Anna’s apples: 7”

“Total apples: 5 + 7 = 12”

“Step 7: Conclusion”

“Advice: Indicate the final answer.”

“Thus, the combined total of Elsa and Anna’s apples amounts to 12.”

“Step 8: Reflect on the journey”

“Advice: By breaking down the problem into manageable steps and maintaining a disciplined approach, we arrived at the solution without any unnecessary distractions. This reinforces the importance of clarity and attentiveness in problem-solving endeavors.”

End of ChatGPT response

The good news is that the LTM prompting got us quite a lengthy depiction of the problem being solved. The bad news in this instance is that the problem was easy-peasy and we’ve basically used a bazooka to try and deal with an ant. The odds are we would be disturbed at the verboseness of this response to such a minuscule problem.

In any case, both the unadorned approach and the adorned approach landed on the same and correct answer.
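If you intend to reuse this style of prompt, you might wrap the LTM instruction into a small template. Here is a hedged sketch, assuming the OpenAI Python SDK; the instruction wording paraphrases the prompt I used above and is only one of many reasonable phrasings.

```python
# A sketch of wrapping the self-advising LTM instruction as a reusable
# template, assuming the OpenAI Python SDK (openai>=1.0). The wording
# paraphrases the prompt used above; adjust it to your own style.
from openai import OpenAI

client = OpenAI()

LTM_INSTRUCTION = (
    "Solve the following problem using least-to-most problem-solving. "
    "Divide the problem into a series of subproblems and solve them one at "
    "a time. As you do so, give yourself guiding advice at each step, "
    "starting with light advice and ratcheting it up in later steps. "
    "Show the steps, the subproblems, and the least-to-most advice."
)

def solve_with_ltm(problem: str, model: str = "gpt-4o-mini") -> str:
    """Send a problem to the model wrapped in the LTM instruction."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"{LTM_INSTRUCTION}\n\nProblem: {problem}"}],
        temperature=0,
    )
    return reply.choices[0].message.content

print(solve_with_ltm(
    "Elsa has 5 apples. Anna has 2 more apples than Elsa. "
    "How many apples do they have together?"
))
```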

I will try a somewhat more involved problem and see how the approach better presents itself.

Here’s the deal. I am contemplating taking a trip to Europe. I would like ChatGPT to help me with the planning of the trip. Let’s see what can be done:

My entered prompt: “I am thinking of taking a trip to Europe for about two weeks and am hoping to keep my spending to $3,000. I like Italy and France. Consider that planning for the potential trip is a type of problem to be solved. Use the least-to-most problem-solving technique that I described to you.”
ChatGPT generated response:

“Yes, I understand the instructions. I will solve a problem using the least-to-most problem-solving technique, dividing the problem into subproblems and providing commentary guiding the process.”

“Let’s begin.”

“Problem: Planning a vacation to Europe”

“Advice: Start by clarifying the main objective of the trip. Consider factors like interests, budget, and time constraints.”

“Decision: The main goal is to plan a vacation to Europe.”

“Subproblem 1: Decide on the destination”

“Advice: Let’s start by narrowing down the options for the destination based on the expressed preferences.”

“Decision: Visit both Italy and France.”

“Subproblem 2: Determine the duration of the trip”

“Advice: Decide how long the stay in Europe will be.”

“Decision: A two-week trip seems reasonable to visit both countries.”

“Subproblem 3: Set a budget”

“Advice: Review the budgeted amount.”

“Decision: The budget of $3000 will be for the entire trip, including flights, accommodation, and daily expenses.”

“Subproblem 4: Plan the itinerary”

“Advice: Create a draft itinerary of places to visit and activities to do.”

“Decision: Spend the first week in Italy, visiting Rome, Florence, and Venice, then head to Paris for the second week.”

“Subproblem 5: Book flights and accommodation”

“Advice: With the itinerary in place, book flights and accommodation.”

“Decision: Search for flights and hotels that fit within the budget and align with travel dates and destinations.”

“Subproblem 6: Plan activities and tours”

“Advice: Research and book activities and tours for each destination.”

“Decision: Prioritize iconic landmarks and cultural experiences, such as visiting the Colosseum in Rome and taking a wine tour in Tuscany.”

“Subproblem 7: Arrange transportation within Europe”

“Advice: Figure out how to get around within Europe, whether by train, bus, or rental car.”

“Decision: Purchase a Eurail pass for train travel between cities and use public transportation or walk for shorter distances.”

“Subproblem 8: Pack and prepare for the trip”

“Advice: Pack appropriately and take care of any necessary preparations before departure.”

“Decision: Make a packing list, gather essential documents like passports and travel insurance, and notify my bank of my travel plans.”

“This completes the least-to-most problem-solving process for planning a vacation to Europe. Each subproblem was identified, analyzed, and addressed in sequence, leading to a comprehensive plan for the trip.”

End of ChatGPT response

You can see how ChatGPT was generating advice for itself during the planning of my trip to Europe.

I found this useful since the advice served as an explanation associated with each of the decision steps that were made. Note that the problem to be solved in this instance was not a precise one; unlike the arithmetic word problem, there isn’t a specific right answer per se.

Generally, the LTM or MTL will be more applicable to problems that involve a precise answer. Open-ended problems are not as likely to benefit, though there is still an advantage to using these techniques even in that use case.

I had mentioned earlier that another means of using LTM would be for the user to make use of a series of prompts that increasingly advise the generative AI. I opted to try this out and redid my European trip planning in that mode.

I asked generative AI to present me with its first-shot solution. I then gave my preferences about the cities to be visited and had the AI redo the solution. After seeing that response, I added more advice of my own, such as that I prefer trains over buses, I don’t like going to museums but do like going to natural landscapes like parks, and so on. I increasingly made my prompts more heavy-handed.
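For readers who would like to script that increasingly heavy-handed back-and-forth, here is a rough sketch of the loop. It assumes the OpenAI Python SDK; the model name and the particular follow-up preferences are illustrative stand-ins for whatever guidance you want to layer on.

```python
# A rough sketch of the multi-turn mode described above: the AI drafts a plan,
# then each follow-up prompt layers on heavier-handed guidance. Assumes the
# OpenAI Python SDK (openai>=1.0); model name and follow-up wording are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption

conversation = [{
    "role": "user",
    "content": "Draft a two-week Europe trip plan for Italy and France on a $3,000 budget."
}]

follow_ups = [
    "Revise the plan: I prefer Rome, Florence, Venice, and Paris as the cities.",
    "Revise again: I prefer trains over buses for travel between cities.",
    "Revise once more: skip museums and favor parks and natural landscapes.",
]

def get_reply(messages):
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

plan = get_reply(conversation)
conversation.append({"role": "assistant", "content": plan})

for guidance in follow_ups:
    conversation.append({"role": "user", "content": guidance})
    plan = get_reply(conversation)
    conversation.append({"role": "assistant", "content": plan})

print(plan)  # the final, most heavily guided version of the plan
```

The final reply reflects all of the accumulated guidance, mirroring the progression from a light touch to a heavier hand.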

I’m elated to say that my potential trip to Europe is now well-planned. All I need to do is figure out the specific dates and make sure I’m clear at work to make the trip.

Wish me luck!

Conclusion

The famous American architect, R. Buckminster Fuller, said this about solving problems: “When I am working on a problem, I never think about beauty but when I have finished, if the solution is not beautiful, I know it is wrong.”

One potential means of solving problems and garnering beautiful or robust solutions is the use of problem-solving techniques such as least-to-most and most-to-least. You should consider using LTM or MTL as a prompting approach when using generative AI.

My earnest suggestion is that you practice the LTM prompting technique first and try to hone the approach as befits your personal prompting style. Make it part of your personal repertoire. Some people are aware of various prompting strategies but don’t exercise them enough to feel comfortable using them. As they say, the best way to get to Carnegie Hall, and likewise the best way to become proficient in prompting, comes down to three vital words: practice, practice, practice.

A final word for today’s discussion will be a memorable quote from the distinguished Albert Einstein. Of the many clever and insightful remarks he made, here’s one that you might not know and yet has a beauty all its own: “It’s not that I’m so smart, it’s just that I stay with problems longer” (per Einstein).

What is the lesson learned?

Make sure to stay with your prompting endeavors long enough that they become second nature. Try out a wide variety of prompting techniques. Be deeply versed. And, by the way, I’m guessing that Einstein would have been doing the same.
