From thought mirror to sparring partner
While good prompting practices, like rubber ducking, help us define our problems more precisely and spot gaps in our reasoning, we can get more than just good answers from a tool like ChatGPT. We often treat ChatGPT as something that does the work for us and hands back answers, but it provides just as much value by giving us critical feedback.
Conversations with AI tools can serve the same purpose as bouncing ideas off our trusted colleagues. Say a business executive is developing a strategy for an upcoming initiative. Even if they already have a direction in mind and are preparing to present it to the other leaders in their organization, they can test the idea by bringing it to a tool like ChatGPT and asking:
- What weaknesses might exist in my approach?
- How might someone with a different perspective challenge this idea?
- What potential outcomes am I overlooking?
Questions like these surface alternative viewpoints and potential pitfalls, pressure-testing our reasoning, so that by the time we share our ideas with others, they're sharper and more thoroughly considered.
Another unique advantage of an AI tool as a sparring partner is its lack of bias toward our existing thinking. Human colleagues often provide valuable feedback, but their perspectives can be filtered by our organization's culture, their personal experiences, or the social dynamics between us. An AI tool can engage with our ideas without a distracting agenda in the back of its mind, which makes it particularly useful when we need a neutral perspective. You can even have fun with it and customize prompts to have ChatGPT emulate a famous business leader's personality, tailoring its feedback style to someone you know will cut to the chase and give no-nonsense responses.
Training an iterative mindset
If you've worked enough with generative AI tools, you know that we can't treat them as one-shot answer machines. While a well-crafted prompt can take us far, these tools rarely produce the perfect response on the first try. The best results come from a back-and-forth process: sending an input, evaluating the output, refining our request, and improving the result step by step.
This process mirrors how the strongest ideas develop in the real world, often not through a "get it right on the first try" method but through constant iteration and rapid, incremental improvements. When Pixar Animation Studios develops its incredibly successful films, like 2015's Inside Out, it often holds dozens of internal screenings while a movie is still in production so the team can rapidly receive feedback and improve the story. The writers of these films do not wait for a perfect script before committing their ideas to the screen. They make something tangible so others can see their ideas and provide input.
Because AI tools are always ready to respond without delay, we can have our ideas and content rapidly "screened" for testing, tweaking, and improvement. The more we use these tools, the less inclined we are to hold our ideas back until they feel fully polished, helping us become more comfortable with adjusting as we move forward and overcome the paralysis that can come from perfectionism. Instead of stalling out, we learn to put ideas into motion and improve them as we go.
Using generative AI tools may be one of the best ways to internalize the value of iteration as the key to improvement, helping our minds become more adaptable, flexible, and resilient in the face of challenges, inside and outside of our little chat boxes.
The real intelligence upgrade is within us, not AI
I have had colleagues tell me that the generative AI chat tool we provide to DocuWarians is "getting smarter." The tool is supposedly learning how to give better responses as more people use it. But that misreads why they are getting better results. I know there is no automated learning or information gathering going on in the backend of our particular tool. What's really happening is that our users are the ones getting smarter! By using these tools regularly, they're getting better at articulating their thoughts, adopting an iterative mindset, and sharpening their thinking with the kinds of feedback generative AI tools can conveniently provide.
We're only scratching the surface of what this technology can offer. As OpenAI's Adam Goldberg noted at the 2024 AI Summit in New York, even if no new large language models were developed, we would still have at least a decade's worth of discoveries to make about how best to leverage the ones we already have. This suggests that the biggest breakthroughs may not come from new AI models alone, but from how we learn to use these models more effectively, and from how our own thinking evolves as a result.