Hey there folks! It’s been a while since I last published a post here. I’ve been heads down in my full-time role as a product manager at Hashnode and shipped a lot of product features.
I’ve had multiple starts and stops with this serendipity engine that is my newsletter, but now it’s time to restart it.
Welcome to a bunch of new subscribers who have joined us after downloading one of my Notion templates. For those who are new here, I’m Kavir, I write on product management, startups, and AI. Check out some of my best work here.
With that out of the way, let’s dive into this week’s newsletter on ChatGPT’s custom instructions and how you can give GPT-4 a few extra IQ points.
Today’s edition is brought to you by Bubble.
Bubble is a no-code app development framework that lets you design, develop, host, and scale applications without writing any code.
Bubble was named one of Fast Company’s Most Innovative Small and Mighty Companies of 2021.
ChatGPT has really changed the game for knowledge work. I last wrote about it in Jan 2023 (ChatGPT for PMs), and since then the space has grown by leaps and bounds, including GPT-4, Custom Instructions, GPT-4V, DALL-E 3, and voice conversations.
In today’s newsletter I’ll talk about custom instructions in ChatGPT, because I think they’re an underrated facet of the product: not enough people know they exist or how to use them. You can give GPT-4 a few extra IQ points with just a few lines of text. Let’s dive in:
Custom Instructions
Custom Instructions is a relatively new addition to ChatGPT, launched first for beta users and now available to all users.
It lets you tell the AI about yourself and how you would like to interact with it. Here are the two questions it asks:
What would you like ChatGPT to know about you to provide better responses?
and
How would you like ChatGPT to respond?
Both are equally interesting and can have profound effects. Let’s start with the first one:
What would you like ChatGPT to know about you to provide better responses?
One common complaint about AI has been that the responses can be extremely generic. LLMs are generalized AIs with no specialization: unless you fine-tune the model or do a fair bit of prompt engineering, it will spew out generic responses that may or may not be relevant or add value.
This setup aims to reduce that.
You can give the AI a lot of context about who you are: what you do, what you write about, your inspirations, your interests, your goals, your motivations, and your challenges.
For example, I have written that:
My name is Kavir Kaycee, I'm a product manager and writer. I write The Discourse (https://thediscourse.co)—a newsletter about startups, tech, product. And nocodeshots.com, a newsletter on no-code tools and news.
I’m looking to be the best product manager I can be.
My personal goals are to be consistent with my fitness and mental health goals.
I love books, writing, technology, weight lifting, soccer, traveling, standup comedy.
My writing heroes are: Paul Graham, PG Wodehouse, Douglas Adams.
I want to get better at time management and effectively plan my day in advance and keep some space for thinking and planning the future. I need reassurance but need to focus on growth.
This covers what I’m working on, what I write about, my goals, my interests, my inspirations, and my challenges.
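If you’re curious what this looks like under the hood: in the chat format that these models consume, the closest analogue to custom instructions is a system message prepended to every conversation. Here’s a minimal sketch of that idea; the helper name and the `ABOUT_ME` text are illustrative, not an official client or the exact mechanism ChatGPT uses.

```python
# Illustrative sketch: custom instructions behave roughly like a system
# message that gets prepended to every conversation you start.

ABOUT_ME = (
    "My name is Kavir Kaycee, I'm a product manager and writer. "
    "I write about startups, tech, product, and no-code tools."
)

def build_messages(user_prompt: str, about_me: str = ABOUT_ME) -> list:
    """Prepend the 'about me' context as a system message, so every
    question is answered with the user's background in mind."""
    return [
        {"role": "system", "content": about_me},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Recommend three newsletter growth tactics.")
```

The point is that you write the context once, and it silently rides along with every question you ask afterward.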
Now this comes in handy in chat, especially when you ask general questions. The AI already knows about you, so it can make the response more specific.
For example here, it added the point about my interest in startups, tech, and no-code tools.
And when you need a definitive recommendation, it’ll take your goals and constraints into account and give you an answer one way or the other.
But where this truly shines is during voice conversations. It can really act as your assistant, therapist, counselor, or advisor, and it feels so natural. More on this in a later piece; subscribe to be the first to receive it.
The second one is:
How would you like ChatGPT to respond?
There are two key problems that ChatGPT and LLMs face:
One is that the LLM jumps to conclusions. The second is that it hallucinates a lot.
Here is an example of the AI getting a simple arithmetic answer wrong and then later correcting itself.
Let’s fix both of these with custom instructions.
Here’s the custom instruction prompt that I am using that is derived from Jeremy Howard’s tweet.
You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.
Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Take a deep breath and think step by step.
Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.
Don't be verbose in your answers, but do provide details and examples where it might help the explanation.
That’s a lot of fancy words, so let me break them down for you.
“Autoregressive”, in simple terms, is similar to the predictive text on our phones: depending on what was written earlier, the model predicts the next word in the sequence. Of course, at a much more sophisticated level.
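To make the analogy concrete, here’s a toy next-word predictor built from raw word-pair counts. Real LLMs predict tokens with a neural network rather than counting, so treat this purely as an illustration of the autoregressive idea:

```python
from collections import Counter, defaultdict

# Toy illustration of autoregressive prediction: count which word
# follows which in some text, then predict the most common successor.

def train_bigrams(text: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    """Return the most frequent word seen after `word`, if any."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
# predict_next(model, "the") -> "cat", since "the cat" appears twice
# and "the mat" only once.
```

An LLM does the same kind of “given what came before, what comes next?” step, just over tokens, with billions of learned parameters instead of a lookup table.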
“Fine-tuned with instruction tuning” means the model is not just a pre-trained GPT-3-style one, but has been further fine-tuned on specific datasets and combined with RLHF (Reinforcement Learning from Human Feedback), where the AI gets smarter from human feedback.
The prompt also pushes the AI to give factual and accurate answers, and gives it an ‘out’ if it doesn’t know the answer. This cuts down hallucination (creatively making up an answer) to a great degree.
The next paragraph prevents the LLM from jumping to an answer and has it attempt a chain-of-thought response every time, without you having to prompt for it, as evidenced by this paper. This makes the output longer, but it’s worth it to get things right and avoid hallucination.
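In effect, that paragraph bakes a chain-of-thought wrapper into every question you ask. Here’s a sketch of what it’s doing; the wrapper text is my own illustrative phrasing, not the exact wording ChatGPT uses internally:

```python
# Sketch of the effect of the custom instruction: instead of asking for
# the answer directly, every question implicitly asks the model to lay
# out context and reasoning first, then answer.

COT_PREFIX = (
    "Before answering, explain the background context, your assumptions, "
    "and your step-by-step reasoning. Then give the final answer."
)

def with_chain_of_thought(question: str) -> str:
    """Wrap a question so the model reasons before it answers."""
    return f"{COT_PREFIX}\n\nQuestion: {question}"

prompt = with_chain_of_thought("What is 17 * 24?")
```

The custom instruction saves you from typing that preamble yourself: the model reasons first on every question by default.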
The last two paragraphs just suppress some of the AI’s boilerplate responses and keep it from being too verbose.
After adding these instructions, here is the answer to the same question that it got wrong earlier.
All this gives GPT-4 an instant boost in IQ points.
Final Thoughts
People who find ChatGPT’s responses generic or unhelpful usually haven’t taken the time to set it up for success.
Through GPT4's custom instructions, we saw how AI responses can fit specific user contexts rather than being broadly applicable. We learned that providing AI with details about ourselves enhances its ability to personalize responses. We also discussed how instructing AIs on response methods helps curb hasty conclusions and unfounded answers.
And this is just the start: the real value of this setup will come with voice conversations, which truly take ChatGPT into a sci-fi world of robot assistants.
Further Reading:
Every’s post: Using ChatGPT Custom Instructions for Fun and Profit
- Jeremy Howard’s tweet on custom instructions
That's it for today, thanks for reading! What do you think of custom instructions in ChatGPT? Reply or comment below, and I'll reply to you.
Give feedback and vote on the next topic here.
Talk to you soon! Follow me on Twitter @kavirkaycee
In case you or someone you know would like to hire me for product consulting or startup advisory roles, check out my work and get in touch.