LLMs: The Rubber Duck Debugger for Creative Work
Nithanth Ram
Rubber duck debugging (or rubberducking) is one of the most rudimentary yet powerful lessons a developer can learn in their programming journey. For those unfamiliar with this quirky concept, it refers to the act of talking through a block of code in plain language, as if you were explaining it to a rubber duck. By doing so, a developer can better conceptualize their approach and work through any sticking points when writing their code. Colloquially, the term serves as a metaphor for active methods of debugging, as opposed to passively debugging code in silent thought.
Rubberducking was one of the first programming lessons instilled in me during my introduction to computer science, and it helped me stay even-keeled when faced with a host of bugs to squash in my code. Sometimes we think much faster than we speak, which leads us to miss the simplest of errors. The so-called rubber duck paradigm is a simple yet powerful one that even the most senior developers employ in their workflow. Can this notion be extended to areas beyond programming?
The rubber duck paradigm prompts us to explore similar problem-solving methods elsewhere: beyond programming, large language models (LLMs) may have a substantial impact on a variety of other creative fields.
In fact, we used GPT-4 to write a draft of this very transition, saving me (the author) and my editor at least 10 minutes of brainstorming when we both got stuck.
This is exactly what I mean when I suggest that LLMs are like a rubber duck debugger for creative work: we started with an obstacle, used words to describe the problem, and finally, rather than hallucinating a dialog with an inanimate object (an actual rubber duck, say), carried on a collaborative discussion with an LLM to at least get inspiration for solving the problem, if not solve it outright.
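To make that workflow concrete, here is a minimal Python sketch of rubberducking a creative block with an LLM, assuming the OpenAI Python client and a GPT-4 model. The prompt wording and system instructions are purely illustrative, not the actual exchange described above.

```python
# A minimal sketch of "rubberducking" a creative block with an LLM.
# Assumes the official OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY set in the environment; the prompt text is hypothetical.
from openai import OpenAI

client = OpenAI()

# Describe the sticking point in plain language, as you would to a rubber duck.
creative_block = (
    "I'm writing an article about rubber duck debugging and I'm stuck on the "
    "transition from the programming anecdote to creative work in general. "
    "Here's my current draft of the transition: ..."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a patient collaborator who asks clarifying "
                       "questions and suggests directions rather than finished text.",
        },
        {"role": "user", "content": creative_block},
    ],
)

# Unlike the rubber duck, the LLM talks back.
print(response.choices[0].message.content)
```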