
OpenAI Shares Insider Tips for Prompting the New ChatGPT


Getting the best results from the latest ChatGPT models (such as GPT-4.1) requires updated prompting techniques, according to OpenAI’s own team. The newer models follow instructions more literally and make fewer assumptions, so your old prompts may need a refresh.

To help users get the most out of the new models, OpenAI staff members Noah MacCallum and Julian Lee released detailed prompting guidance. Here are the key takeaways for crafting better prompts:

1. Structure Your Prompts Clearly

Organize prompts with distinct sections for better clarity. OpenAI suggests including:

  • Role and Objective: Define who the AI should be and its goal.
  • Instructions: Give specific steps or guidelines.
  • Reasoning Steps (Optional): Ask it to outline its approach.
  • Output Format: Specify how you want the response structured (e.g., list, table, JSON).
  • Examples: Provide sample inputs and outputs.
  • Context: Offer necessary background information.
  • Final Instructions: Add any last reminders or constraints.

Using markdown for section breaks and backticks (`) for code helps the model differentiate parts of your prompt.
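The sectioned layout above can be assembled programmatically. Here is a minimal sketch; the `build_prompt` helper and its section defaults are illustrative, not an official OpenAI API:

```python
# Assemble a prompt from the labeled sections OpenAI recommends.
# Section headers follow the article; the helper itself is hypothetical.

def build_prompt(role, instructions, output_format, context,
                 examples=None, final_note=None):
    """Join labeled markdown sections into a single prompt string."""
    sections = [
        ("# Role and Objective", role),
        ("# Instructions", instructions),
        ("# Output Format", output_format),
    ]
    if examples:
        sections.append(("# Examples", examples))
    sections.append(("# Context", context))
    if final_note:
        sections.append(("# Final Instructions", final_note))
    return "\n\n".join(f"{header}\n{body}" for header, body in sections)

prompt = build_prompt(
    role="You are a support agent for Acme Corp.",
    instructions="Answer using only the provided context.",
    output_format="A short paragraph followed by a bullet list.",
    context="Acme's return window is 30 days.",
    final_note="If the answer is not in the context, say so.",
)
```

Keeping each section under its own markdown header makes it easy to drop optional parts (examples, reasoning steps) without disturbing the rest of the prompt.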

2. Use Delimiters Effectively

Properly separating different types of information is crucial. OpenAI found that **XML tags** (e.g., wrapping a passage in `<doc>` … `</doc>`) work exceptionally well for delimiting content sections, since they support nesting and metadata attributes. JSON formatting, by contrast, performed poorly with the large context windows now available.

3. Build Simple AI Agents

You can configure ChatGPT to act more like an autonomous agent that tackles complex tasks independently. Include reminders for:

  • Persistence: Keep working until the problem is solved.
  • Tool-Calling: Use available tools (like browsing or code execution) instead of guessing.
  • Planning: Think step-by-step before acting.

OpenAI claims these simple additions boosted performance significantly in their tests.
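The three reminders can be folded into a reusable system prompt. A minimal sketch, with wording paraphrased from the article rather than quoted from OpenAI's guide:

```python
# The three agent reminders (persistence, tool-calling, planning)
# concatenated into a system prompt. Wording is illustrative.

AGENT_REMINDERS = """\
# Persistence
Keep going until the user's request is fully resolved before ending your turn.

# Tool-Calling
If you are unsure about facts or file contents, use your available tools
(such as browsing or code execution) instead of guessing.

# Planning
Think step by step and plan before each action, then reflect on the result."""

def agent_system_prompt(task_instructions):
    """Prepend the agent reminders to task-specific instructions."""
    return f"{AGENT_REMINDERS}\n\n# Task\n{task_instructions}"

p = agent_system_prompt("Fix the failing unit test in the repository.")
```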

4. Leverage Long Context Windows Smartly

Newer models can handle huge amounts of text (up to 1 million tokens). For best results when providing long documents:

  • Place key instructions at both the beginning and the end of the context.
  • Be explicit about whether the AI should only use the provided documents or if it can blend that information with its own knowledge.
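Both tips above can be sketched in one helper: duplicate the key instructions before and after the document, and state explicitly whether the model may go beyond it. The `sandwich_prompt` name and tag are illustrative:

```python
# Place key instructions at both the beginning and the end of a
# long-context prompt, and be explicit about knowledge grounding.

def sandwich_prompt(instructions, document, context_only=True):
    """Surround a long document with a repeated instruction block."""
    grounding = (
        "Answer using ONLY the document between the tags."
        if context_only
        else "Prefer the document, but you may add general knowledge."
    )
    header = f"{instructions}\n{grounding}"
    return (
        f"{header}\n\n<document>\n{document}\n</document>\n\n"
        f"Reminder of your instructions:\n{header}"
    )

p = sandwich_prompt("Summarize the policy.", "Returns accepted within 30 days.")
```

Repeating the header costs a few tokens but guards against instructions getting "lost" at the far end of a million-token context.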

5. Encourage Chain-of-Thought Reasoning

Even though models like GPT-4.1 aren’t specifically ‘reasoning models,’ you can prompt them to show their work. Asking the AI to “think step by step” (chain-of-thought) helps break down complex problems and often improves the quality of the final output, though it uses more tokens.
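In practice this can be as simple as a fixed suffix appended to the user prompt. A minimal sketch; the exact wording is illustrative, not a quote from the guide:

```python
# Append an explicit chain-of-thought trigger, since GPT-4.1 is not a
# dedicated reasoning model and benefits from being asked to show work.

COT_SUFFIX = (
    "\n\nFirst, think carefully step by step and write out your reasoning. "
    "Then give your final answer on a new line starting with 'Answer:'."
)

def with_cot(prompt):
    """Return the prompt with a chain-of-thought instruction appended."""
    return prompt + COT_SUFFIX

q = with_cot("A train leaves at 3pm and arrives at 5:30pm. How long is the trip?")
```

Asking for the answer on a marked line also makes the final output easy to parse out of the longer reasoning trace.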

Get Better Results

These techniques, coming directly from OpenAI, reflect how the models are trained. By adopting a more structured and deliberate approach to prompting – treating ChatGPT less like a simple chatbot and more like a thinking partner – users can achieve significantly better and more reliable results.

Our Take

Okay, so OpenAI is basically giving us a peek under the hood at how to *really* talk to their latest models. It’s less about magic words and more about clear, structured instructions. Using things like XML tags and telling the AI its role feels way more like programming than just chatting.

The advice to put instructions at the beginning *and* end for long documents is a useful nugget, especially as context windows get massive. It shows that even super-smart AI needs clear guardrails. These tips feel like a step towards more predictable and powerful AI interactions if users actually apply them.

This story was originally featured on Forbes.
