Objective setting, breakfast buffets and AI limits
A primer on the importance of objective setting in the AI era and an AutoGPT experiment
I firmly believe that objective setting will be both a critical skill of the AI era and a deceptively difficult one to master.
I've wanted to write about objective setting for a while, and a breakfast buffet debate on holiday last week inspired me to open up this topic. In the morning, I would return from the breakfast buffet with a plate of food for myself and sit down to eat. Often I'd forget to get a drink and have to get up again when I felt thirsty. My wife, on the other hand, often returned from the buffet with multiple things - food for both of us, drinks for both of us, napkins, additional cutlery etc. This isn't the first time I've noticed this difference. Some say this is a "male/female" difference. I'm not sure about that, but I do agree my wife is making "better" breakfast buffet decisions. Until this holiday, I wasn't satisfied with the explanations for our different approaches.
The most common explanation I've heard before is that women are more maternal and have more of an instinct to care for others. Therefore women are more likely than men to return from a buffet with food for others. However, this explanation doesn't add up to me. I've tried going to the buffet with the objective to provide for everyone at the table and find myself checking what drinks are available and then returning empty-handed to ask what drinks people want. The objective of prioritising the whole table's needs when going to the breakfast buffet doesn't, for me, result in the same subtasks of collecting multiple drinks, snacks, cutlery etc.
The conclusion I came to and discussed with my wife at breakfast (yes, you have my permission to feel sorry for her, and yes, I have asked her permission to write this story) was that I go to the breakfast buffet with the subconscious objective of finding the tastiest breakfast to start the day. She goes to the breakfast buffet with the subconscious objective of maximising the enjoyment of the breakfast experience. Trying out that objective resulted in me fulfilling similar subtasks to my wife and making multiple trips to the buffet to lay a breakfast table.
Objectives and AI
What does all this have to do with AI? One definition of AI is that machines can be considered intelligent to the extent that their actions can be expected to achieve their objectives. As AI applications become more widely used, it is clear that AI successes are as dependent on the quality of the objectives set by the users as they are on sophisticated algorithms.
It is the job of AI researchers and engineers to produce AI with that capability, but as AI becomes more commonly used, it will be the job of users to set the objectives.
We're already starting to see this with prompt engineering and large language models like ChatGPT. Whilst, strictly speaking, prompts are not objectives set for LLMs, prompt engineering lays a good foundation for the concept that effective human-AI collaboration starts with describing clear goals.
Why is objective setting hard?
As the breakfast example showed, our objectives are often subconscious and difficult to articulate. Therefore, when setting an objective for an automated or AI application, we must consider the potential for unintended consequences. This principle is not new. Literature and myth are rich with cautionary tales of wishes that lead to unforeseen complications and fail to deliver. When King Midas wished for the ability to turn everything he touched into gold, his objective was to become wealthy. His wish became a curse as he died after turning his food, drink, and even his beloved family into gold.
Similarly, the Sorcerer's Apprentice's objective was to make his tasks easier when he used magic to get his brooms to help him by collecting water. However, he failed to define a limit, and instead of making his life easier, the brooms flooded his master's house.
These cautionary tales illustrate how hard it is to define objectives that precisely correspond to our needs and desires.
Objective setting is also challenging because one objective can break into hundreds of tasks and subtasks. Small nuances between two similar objectives can therefore result in vastly different sets of actions. And when an objective is relentlessly pursued, it almost always has unforeseen consequences.
We see this with capitalism. On the face of it, it makes sense to create organisations with a profit-seeking objective. It creates a mechanism to allocate resources, resulting in a competitive and fast-moving economy that boosts living standards. Great! However, if the only goal is profit, it also results in organisations that dispassionately pollute and exploit the world's natural resources. Not great!
So objective setting is not only hard, but something we will need to do very carefully with increasingly powerful AI.
The Limits of Objectives and Today's AI
To get a taste of objective setting with AI in the future, I recommend playing around with AutoGPT. AutoGPT is an ongoing project to get ChatGPT to iteratively turn an initial prompt into tasks and then execute those tasks. If you're not a coder, or simply want to experiment with AutoGPT through a simple interface, the "God-mode" tool enables this (I highly recommend experimenting).
I wanted to know if AI could accelerate research. So I gave "God-mode" the objective of creating a table of statistics from published articles and surveys on digital and data transformations. To start with, it seemed impressive. It broke the objective into three sub-tasks, against each of which it started creating and executing further tasks:
Conduct a comprehensive literature review of studies and surveys on digital and data transformations.
Example sub-task: Search for studies and surveys on digital and data transformations using Google
Extract relevant statistics from the identified studies and surveys.
Example sub-task: Extract relevant statistics from a study/survey on digital transformation strategy from a Deloitte website.
Organise the extracted statistics into a table format with appropriate references.
Example sub-task: Append a lengthy text containing statistics on digital transformation to the end of a file named 'digital_transformations_stats.txt'.
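The decompose-and-execute pattern above can be sketched in a few lines of Python. This is a minimal illustration, not AutoGPT's actual code: the `stub_planner` function stands in for the LLM call that proposes the next task, so the example runs offline, and the `max_tasks` cap is the kind of constraint that keeps the loop from wandering indefinitely.

```python
def stub_planner(objective, history):
    """Stand-in for an LLM call: proposes the next task, or 'DONE' when finished."""
    steps = [
        "Search for studies and surveys on digital and data transformations",
        "Extract relevant statistics from the identified studies",
        "Organise the statistics into a table with references",
    ]
    return steps[len(history)] if len(history) < len(steps) else "DONE"

def run_agent(objective, planner, max_tasks=20):
    """Decompose-and-execute loop with a hard cap on the number of tasks."""
    history = []
    while len(history) < max_tasks:
        task = planner(objective, history)
        if task == "DONE":
            break
        # A real agent would execute the task here (search, scrape, write files...)
        # and feed the result back into the planner on the next iteration.
        result = f"completed: {task}"
        history.append((task, result))
    return history

log = run_agent("Create a table of digital transformation statistics", stub_planner)
for task, _ in log:
    print(task)
```

With a well-behaved planner the loop terminates when the objective is met; with a wandering one, `max_tasks` is the only thing standing between you and 104 subtasks.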
It was promising, so whenever prompted, I approved it to continue, and eventually auto-approved all actions for 10 minutes. I stopped paying attention for a while, and when I checked back, things had gone awry: after completing 104 subtasks, it suggested that the next step was to "create a comprehensive list of the top 10 legal tech startups in 2023."
This task bore no relevance to the original objective. Much like a human, the AI tool had gone down an internet rabbit hole rather than sticking to the initial objective.
The fact that AutoGPT strayed when unsupervised shows the continued need for human oversight, the gaps in AutoGPT, and the gaps in the objective I set. I didn't put in any constraints, such as limiting the research to 20 articles.
Whilst tools like AutoGPT are advancing at pace, it is still early days for AI. Even so, it is time to start practising our objective-setting skills more consciously.