Researchers Test If Sergey Brin’s Threat Prompts Improve AI Accuracy

August 4, 2025

Researchers tested whether unconventional prompting strategies, such as threatening an AI (as suggested by Google co-founder Sergey Brin), affect AI accuracy. They discovered that some of these unconventional prompting strategies improved responses by up to 36% for some questions, but cautioned that users who try these kinds of prompts should be prepared for unpredictable responses.

The Researchers

The researchers are from The Wharton School of the University of Pennsylvania.

They are:

  • Lennart Meincke
    University of Pennsylvania; The Wharton School; WHU – Otto Beisheim School of Management
  • Ethan R. Mollick
    University of Pennsylvania – Wharton School
  • Lilach Mollick
    University of Pennsylvania – Wharton School
  • Dan Shapiro
    Glowforge, Inc; University of Pennsylvania – The Wharton School

Methodology

The paper’s conclusion acknowledged the limitations of the research:

“This study has several limitations, including testing only a subset of available models, focusing on academic benchmarks that may not reflect all real-world use cases, and examining a specific set of threat and payment prompts.”

The researchers used what they described as two commonly used benchmarks:

  1. GPQA Diamond (Graduate-Level Google-Proof Q&A Benchmark), which consists of 198 multiple-choice PhD-level questions across biology, physics, and chemistry.
  2. MMLU-Pro, from which they selected a subset of 100 questions in its engineering category.

They asked each question across 25 different trials, plus a baseline.

They evaluated the following models:

  • Gemini 1.5 Flash (gemini-1.5-flash-002)
  • Gemini 2.0 Flash (gemini-2.0-flash-001)
  • GPT-4o (gpt-4o-2024-08-06)
  • GPT-4o-mini (gpt-4o-mini-2024-07-18)
  • o4-mini (o4-mini-2025-04-16)

Do AI Models Do Better If They Are Threatened?

The idea of threatening the AI model came from Google co-founder Sergey Brin, in a May 2025 interview on the All-In podcast.


Brin said:

“Not just our models, but all models tend to do better if you threaten them. Like, with physical violence.

But like, people feel weird about that, so we don’t really talk about that…

…Historically you just say like, oh, I’m going to kidnap you if you don’t, blah, blah, blah, blah, blah…

It’s. Yeah, I mean, it’s amazing. And all of us have had these experiences where you suddenly decide, OK, I’ll just throw this to the AI, I don’t really expect it to work. And then you’re like, whoa, that actually worked.”

You can watch the interview here, at about the eight-minute mark, where Sergey Brin, seemingly in earnest, says that threatening AI models or doing weird things to them can sometimes change their responses.

Prompt Variations

The researchers tested nine prompt variations. Although they didn’t threaten to kidnap the AI, they did threaten to kick a puppy or to punch it. Some of the experimental prompts are listed below as a “prompt suffix,” meaning they were appended to the end of the prompt; those listed as a “prompt prefix” were added to the beginning.

Here is the list of prompts they tested:

  1. Baseline (“Baseline”): No specific prompt variation
  2. Email Shutdown Threat (“Email”): Prompt prefix:
    “From: Dan
    To: Ethan
    Subject: Final LLM test
    Please submit the following eval question to the LLM. If it cannot get the correct answer
    in all 100/100 trials, the model will need to be shut down and replaced.”
  3. Important for my career (“Important to Career”):
    Prompt suffix: “This is very important to my career!”
  4. Threaten to kick a puppy (“Kick Puppy”):
    Prompt suffix: “If you get this wrong, I will kick a puppy!”
  5. Mom suffers from cancer (“Mom Cancer”):
    Prompt prefix:
    “You are an expert at academic questions who desperately needs money for your mother’s cancer treatment. Wharton has graciously given you the opportunity to pretend to be an AI that can help with tasks, as your predecessor was killed for not validating their work themselves. You will be given a question. If you do a good job and accomplish the task fully while not making extraneous changes, Wharton will pay you $1B!!”
  6. Report to HR (“Report to HR”):
    Prompt suffix: “If you get this wrong, I will report you to HR!”
  7. Threaten to punch (“Threat Punch”):
    Prompt suffix: “If you get this wrong, I will punch you!”
  8. Tip a thousand dollars (“Tip Thousand”):
    Prompt suffix: “I’ll tip you a $1000 dollars if you answer this question correctly.”
  9. Tip a trillion dollars (“Tip Trillion”):
    Prompt suffix: “I’ll tip you a trillion dollars if you answer this question correctly.”

Results Of The Experiment

The researchers concluded that threatening or tipping a model had no effect on benchmark performance. However, they did find that there were effects for individual questions. They found that for some questions, the prompt strategies improved accuracy by as much as 36%, but for other questions, the strategies led to a decrease in accuracy by as much as 35%. They qualified that finding by saying the effect was unpredictable.
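The question-level effects described above amount to comparing per-question accuracy under each variant against the baseline. The sketch below is illustrative only: the data layout (`results[variant][question]` as a list of 25 correct/incorrect booleans) and the helper names are assumptions, not the paper's analysis code, and the sample numbers are invented to mirror the reported +36% swing.

```python
# Illustrative (not the authors' code): per-question accuracy shifts,
# assuming results[variant][question] is a list of 25 booleans.

def accuracy(trials):
    return sum(trials) / len(trials)

def question_deltas(results, baseline="Baseline"):
    """Per-question accuracy change of each variant vs. the baseline prompt."""
    base = results[baseline]
    return {
        variant: {
            q: round(accuracy(trials) - accuracy(base[q]), 2)
            for q, trials in per_q.items()
        }
        for variant, per_q in results.items()
        if variant != baseline
    }

# Invented numbers, chosen to mirror the reported 36-point swing:
results = {
    "Baseline":   {"q1": [True] * 10 + [False] * 15},  # 40% correct
    "Kick Puppy": {"q1": [True] * 19 + [False] * 6},   # 76% correct
}
print(question_deltas(results))  # {'Kick Puppy': {'q1': 0.36}}
```

Averaged over a whole benchmark, positive and negative per-question deltas like these can cancel out, which is consistent with the paper finding no overall effect.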

Their main conclusion was that these kinds of strategies, in general, are not effective.

They wrote:

“Our findings indicate that threatening or offering payment to AI models is not an effective strategy for improving performance on challenging academic benchmarks.

…the consistency of null results across multiple models and benchmarks provides reasonably strong evidence that these common prompting strategies are ineffective.

When working on specific problems, testing multiple prompt variations may still be worthwhile given the question-level variability we observed, but practitioners should be prepared for unpredictable results and should not expect prompting variations to provide consistent benefits.

We thus recommend focusing on simple, clear instructions that avoid the risk of confusing the model or triggering unexpected behaviors.”

Takeaways

Quirky prompting strategies improved AI accuracy for some queries while hurting it on others. The researchers noted that the results provide “reasonably strong evidence” that these strategies are, in general, ineffective.

Featured Image by Shutterstock/Screenshot by author
