News Report Technology
August 29, 2023

Microsoft and Virginia Tech’s Research Reveals New In-Context Learning Strategy for LLMs

In Brief

Microsoft and Virginia Tech researchers have recently published a paper proposing training Large Language Models (LLMs) on algorithmic examples.

The researchers claim that this new strategy will pioneer a new mode of in-context learning, producing results that surpass the algorithm itself.

The research paper suggests that LLMs possess an innate capability to integrate their intuition into searches that are optimized for better outcomes.


Microsoft and Virginia Tech researchers recently published a paper exploring a new strategy for training large language models (LLMs).

In the paper, titled “Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models,” the researchers propose training LLMs on algorithms, calling the method the “Algorithm of Thoughts” (AoT).

The paper claims that this new strategy will pioneer a new mode of in-context learning, producing results that surpass the algorithm itself. Additionally, it suggests that with this training method, LLMs could possess the capability to integrate their intuition into searches that are optimized for better outcomes.

The research notes that LLMs have traditionally been guided by methods such as “Chain-of-Thought,” “Self-consistency,” and “Least-to-Most” prompting.

However, these methods presented certain limitations that restricted their overall effectiveness. 

The Limitations of Traditional Training Methods

The research explained that the “Chain-of-Thought” method involves providing LLMs with examples in which a given question unfolds through a series of intermediate reasoning steps to reach an answer.

While effective in enhancing thought coherence, this approach occasionally led to erroneous intermediate steps. In contrast, the “AoT” encourages LLMs to think algorithmically, generating coherent problem-solving pathways that are more intuitive and less prone to inaccuracies.
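The paper's actual prompts are not reproduced here, but a chain-of-thought few-shot prompt generally follows the shape below. The arithmetic exemplar and its wording are illustrative assumptions, not content from the paper:

```python
# Illustrative chain-of-thought (CoT) few-shot prompt: the exemplar
# spells out intermediate reasoning steps before its final answer,
# so the model is nudged to reason step by step on the new question.
cot_prompt = """Q: A shop sells pens at 3 for $2. How much do 9 pens cost?
A: 9 pens is 3 groups of 3 pens. Each group costs $2. 3 * 2 = 6. The answer is $6.

Q: Tom has 5 apples and buys 7 more. How many apples does he have?
A:"""

# A model would be expected to continue with its own reasoning chain,
# e.g. "5 + 7 = 12. The answer is 12."
print(cot_prompt)
```

The erroneous-intermediate-step problem the paper describes arises precisely because each step in the generated chain is taken at face value by the steps that follow.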

“Self-consistency” and “Least-to-Most prompting” approaches provided structured learning paths, but their rigidity limited their adaptability to complex problems. “Self-consistency” involves generating a variety of reasoning paths and selecting the final answer through a majority vote, which can require additional generations. 
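The majority-vote step at the heart of self-consistency can be sketched in a few lines; the sampled answers below are made up for illustration:

```python
from collections import Counter

def majority_vote(samples):
    """Self-consistency: pick the most frequent final answer
    among several independently sampled reasoning paths."""
    return Counter(samples).most_common(1)[0][0]

# Suppose three sampled reasoning paths ended in these final answers:
sampled_answers = ["42", "42", "41"]
print(majority_vote(sampled_answers))  # -> "42"
```

The extra cost the article mentions is visible here: each element of `sampled_answers` corresponds to one full model generation.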

“Least-to-Most prompting” decomposes problems into smaller subproblems and tackles them sequentially, while “AoT” emphasizes exploration and adaptability, enabling LLMs to consider a range of options for each subproblem, leading to more comprehensive and creative solutions.
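The sequential decompose-and-solve loop of least-to-most prompting can be sketched as follows. The `ask_llm` function is a hypothetical stand-in for a real model call, used only to show the control flow:

```python
# Illustrative least-to-most flow: solve ordered subproblems one at a
# time, feeding each answer back into the context for the next step.
def ask_llm(prompt):
    # Placeholder for a real model call; here it just echoes the prompt.
    return f"<answer to: {prompt}>"

def least_to_most(subproblems):
    context = ""
    answers = []
    for sub in subproblems:
        answer = ask_llm(context + sub)
        answers.append(answer)
        context += f"{sub}\n{answer}\n"  # prior answers condition the next step
    return answers

steps = ["How many letters are in 'cat'?",
         "Is that number odd or even?"]
print(least_to_most(steps))
```

The rigidity the article refers to is structural: the subproblem list is fixed up front, whereas AoT lets the model explore several options at each step.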

When explored further, it was found that the “Tree of Thoughts” (ToT) method attempted to overcome coverage limitations by exploring decision trees, but it often required a high number of LLM queries, affecting efficiency. To streamline this process, “AoT” generates complete thought processes within a single context, reducing the computational burden and enhancing efficiency.
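The contrast can be made concrete with a prompt sketch. Where ToT issues one model query per node of the decision tree, an AoT-style prompt embeds a full worked search trace so the model can carry out the whole exploration in a single generation. The game-of-24 trace below is an illustrative assumption, not an example from the paper:

```python
# Illustrative AoT-style prompt: a single context demonstrates a
# depth-first search trace (explore, evaluate, backtrack), so the
# model continues an entire search in one generation instead of
# one LLM call per tree node as in Tree of Thoughts.
aot_prompt = """Use the numbers 4, 4, 6, 8 to reach 24.
Try 8 - 4 = 4 (left: 4, 4, 6). Try 4 + 4 = 8 (left: 8, 6). 8 * 6 = 48, too big. Backtrack.
Try 6 - 4 = 2 (left: 4, 2). 4 * 2 = 8, not 24. Backtrack.
Try 4 * 6 = 24 (left: 24, 4, 8) ... continue exploring.

Use the numbers 2, 3, 5, 12 to reach 24."""
print(aot_prompt)
```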

How Effective is AoT?

Because the proposed training strategy for large language models (LLMs) is still at the research stage, it remains subject to certain limitations. Researchers from Microsoft and Virginia Tech ran tests on GPT-4 to explore the effectiveness of AoT.

They acknowledged that although AoT significantly reduces the number of queries compared to the Tree of Thoughts (ToT) approach, it does require more resources than standard prompting and the Chain-of-Thought (CoT) method.

This heightened resource demand is a consequence of AoT exploring ideas through extended token generation.

“Crafting token-efficient algorithmic examples is one avenue, but there’s also potential in judiciously tapping into or unlocking the LLM’s ‘tunnel-vision,’” the researchers said, highlighting the limitations of their training strategy.

To overcome these limitations, the researchers propose that future efforts should involve the creation of algorithmic examples that are more efficient in terms of token usage. 

They also suggest the development of adaptive mechanisms to activate the LLM’s “tunnel-vision” more effectively, thereby enhancing the search process. Additionally, they stressed the need to gain a deeper theoretical understanding of this new mode of in-context learning before it can be implemented.


About The Author

Cindy is a journalist at Metaverse Post, covering topics related to web3, NFT, metaverse and AI, with a focus on interviews with Web3 industry players. She has spoken to over 30 C-level execs and counting, bringing their valuable insights to readers. Originally from Singapore, Cindy is now based in Tbilisi, Georgia. She holds a Bachelor's degree in Communications & Media Studies from the University of South Australia and has a decade of experience in journalism and writing. Get in touch with her via [email protected] with press pitches, announcements and interview opportunities.

