Research Highlights
Researchers Asked LLMs for Strategic Advice. They Got “Trendslop” in Return.
—
By Natalia Levina
As seen in: Harvard Business Review
Leaders and consultants are increasingly turning to large language models (LLMs) such as ChatGPT as silent partners in the boardroom. These tools promise to summarize complex information, produce clear arguments, and offer polished strategic recommendations in seconds. But as LLMs are integrated into executive workflows, a critical question emerges: How good is their advice, and can it be trusted?
Leaders might assume that LLMs offer a kind of unbiased, outside perspective. Because these models are trained on vast corpora, it is tempting to assume they can supply a fresh point of view. Unfortunately, that assumption may be a mistake. An LLM is not the colleague who critically evaluates current ideas, digs into contextual specifics, stress-tests assumptions, and pushes back when everyone gets comfortable. On strategy, LLMs may be more akin to a freshly minted MBA or junior consultant, parroting what’s popular rather than what’s right for a particular situation.
In our recent research, we found that leading LLMs have clear biases when it comes to strategy. They consistently recommend strategies that align with fashionable managerial buzzwords rather than context-specific strategic logic. Across thousands of simulations, we saw LLMs almost uniformly select the same trendy strategies, regardless of context. We call this propensity for AI to opt for buzzy ideas over reasoned solutions “trendslop”; in the context of strategic analysis, we call the phenomenon “strategy trendslop.”
Read the full Harvard Business Review article.
___
Natalia Levina is Paganelli-Bull Professor of Technology & International Business and the Ph.D. Program Coordinator for Information Systems at NYU Stern.