News Report Technology
August 29, 2023

Microsoft and Virginia Tech’s Research Reveals New In-Context Learning Strategy for LLMs

In Brief

Microsoft and Virginia Tech researchers have published a paper proposing to guide Large Language Models (LLMs) with in-context algorithmic examples.

The researchers claim that this new strategy will pioneer a new mode of in-context learning, producing results that surpass the algorithm itself.

The research paper suggests that LLMs have an innate ability to integrate their intuition into optimized searches, yielding better outcomes than the algorithm alone.


Microsoft and Virginia Tech researchers recently published a paper exploring a new in-context learning strategy for large language models (LLMs).

In the paper, titled “Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models,” the researchers propose prompting LLMs with algorithmic examples, calling the method the “Algorithm of Thoughts” (AoT).

The paper claims that this new strategy pioneers a new mode of in-context learning, producing results that surpass those of the algorithm itself. It also suggests that, prompted this way, LLMs can integrate their intuition into optimized searches for better outcomes.

The paper notes that LLMs have traditionally been prompted with methods such as “Chain-of-Thought,” “Self-consistency,” and “Least-to-Most prompting.”

However, each of these methods has limitations that restrict its overall effectiveness.

The Limitations of Traditional Prompting Methods

The paper explains that the “Chain-of-Thought” (CoT) method feeds LLMs examples in which a question unfolds through a series of intermediate reasoning steps to reach an answer.

While effective in enhancing thought coherence, this approach occasionally leads to erroneous intermediate steps. In contrast, AoT encourages LLMs to think algorithmically, generating coherent problem-solving pathways that are more intuitive and less prone to such errors.
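As an illustration (this sketch and its worked example are invented here, not taken from the paper), a few-shot Chain-of-Thought prompt simply prepends solved examples, with their intermediate reasoning spelled out, to the new question:

```python
# Minimal sketch of few-shot Chain-of-Thought prompt construction.
# The pens-and-prices example is hypothetical, for illustration only.

def build_cot_prompt(examples, question):
    """Prepend worked examples, each with explicit intermediate
    reasoning, so the model imitates the step-by-step style."""
    parts = []
    for ex_question, ex_reasoning, ex_answer in examples:
        parts.append(f"Q: {ex_question}\nA: {ex_reasoning} The answer is {ex_answer}.")
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

examples = [(
    "Pens cost 2 dollars each. How much do 3 pens cost?",
    "Each pen costs 2 dollars, so 3 pens cost 3 * 2 = 6 dollars.",
    "6 dollars",
)]
prompt = build_cot_prompt(examples, "How much do 5 pens cost?")
print(prompt)
```

Because the model sees reasoning before each answer, it tends to produce reasoning of its own, but any single faulty intermediate step can derail the final result.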

“Self-consistency” and “Least-to-Most prompting” provide more structured reasoning paths, but their rigidity limits their adaptability to complex problems. “Self-consistency” generates a variety of reasoning paths and selects the final answer by majority vote, which requires many additional generations.
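The voting step itself is straightforward; a minimal sketch (the sampled answers are hypothetical) shows why the cost lies in the repeated generations rather than in the aggregation:

```python
from collections import Counter

def self_consistency(final_answers):
    """Majority vote over the final answers of independently sampled
    reasoning paths; ties resolve to the first-counted answer."""
    return Counter(final_answers).most_common(1)[0][0]

# Suppose five independently sampled reasoning paths end in these answers.
# Each answer costs one full LLM generation, so five paths = five calls.
votes = ["28", "28", "27", "28", "30"]
print(self_consistency(votes))
```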

“Least-to-Most prompting” decomposes a problem into smaller subproblems and tackles them sequentially. “AoT,” by contrast, emphasizes exploration and adaptability, enabling LLMs to consider a range of options for each subproblem and arrive at more comprehensive, creative solutions.
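The sequential control flow of Least-to-Most can be sketched as follows; `solve_step` stands in for an LLM call, and here it just accumulates numbers so the loop is runnable. The decomposition and solver are hypothetical, not from the paper:

```python
# Sketch of Least-to-Most prompting, assuming a problem that splits
# cleanly into ordered subproblems whose answers feed forward.

def least_to_most(subproblems, solve_step):
    """Solve subproblems in order, passing earlier answers as context."""
    context = []
    for sub in subproblems:
        answer = solve_step(sub, context)
        context.append((sub, answer))
    return context[-1][1]  # the last answer resolves the full problem

# Toy stand-in for the LLM: each step adds its value to the prior answer.
def solve_step(sub, context):
    previous = context[-1][1] if context else 0
    return previous + sub

print(least_to_most([1, 2, 3], solve_step))  # 6
```

Note the fixed, linear path: each subproblem gets exactly one answer, with no backtracking, which is the rigidity AoT's exploratory search is meant to relax.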

The “Tree of Thoughts” (ToT) method attempts to overcome these coverage limitations by exploring decision trees, but it typically requires a large number of LLM queries, hurting efficiency. AoT instead generates the complete thought process within a single context, reducing the computational burden and enhancing efficiency.
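A back-of-the-envelope comparison (not a figure from the paper) makes the gap concrete: if ToT issues roughly one model call per explored node, the call count grows with the tree, while AoT narrates the whole search inside a single generation:

```python
# Hypothetical query-count comparison between ToT and AoT.

def tot_query_count(branching, depth):
    """One LLM call per node in a full tree (root excluded)."""
    return sum(branching ** d for d in range(1, depth + 1))

def aot_query_count():
    """AoT covers the entire search trace in one generation."""
    return 1

print(tot_query_count(3, 3))  # 3 + 9 + 27 = 39 calls
print(aot_query_count())      # 1 call
```

The real numbers depend on pruning and prompt design, but the scaling contrast is the point: ToT's cost is per node, AoT's is per problem.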

How Effective is AoT?

Since the proposed strategy is still at the research stage, it remains subject to certain limitations. The Microsoft and Virginia Tech researchers tested AoT on GPT-4 to gauge its effectiveness.

They acknowledged that although AoT significantly reduces the number of queries compared to the Tree of Thoughts (ToT) approach, it does require more resources than standard prompting and the Chain-of-Thought (CoT) method.

The heightened resource demand stems from AoT’s technique of exploring ideas through token generation.

“Crafting token-efficient algorithmic examples is one avenue, but there’s also potential in judiciously tapping into or unlocking the LLM’s ‘tunnel-vision,’” the researchers said, highlighting the limitations of their method.

To overcome these limitations, the researchers propose that future work focus on crafting algorithmic examples that are more token-efficient.

They also suggest developing adaptive mechanisms to invoke the LLM’s “tunnel-vision” more effectively, thereby enhancing the search process. Additionally, they stress the need for a deeper theoretical understanding of this new mode of in-context learning before it can be implemented.


About The Author

Cindy is a journalist at Metaverse Post, covering topics related to web3, NFT, metaverse and AI, with a focus on interviews with Web3 industry players. She has spoken to over 30 C-level execs and counting, bringing their valuable insights to readers. Originally from Singapore, Cindy is now based in Tbilisi, Georgia. She holds a Bachelor's degree in Communications & Media Studies from the University of South Australia and has a decade of experience in journalism and writing. Get in touch with her via [email protected] with press pitches, announcements and interview opportunities.

