What Are We Talking About When We Say “Prompt”?

Over the past two years, writing prompts to converse with large language models as an ordinary user, the author ran into a series of puzzles: techniques that sometimes fail, novel drafts that carry an "AI flavor", good results that are hard to reproduce. Since the input clearly shapes the output, the entry point for inquiry became: what exactly is a prompt?

The understanding went through several stages. At first a prompt felt like writing: writing techniques worked, but later left the author perplexed. Introducing programming thinking also worked; writing and programming are different things, yet both worked, which was even more confusing. Eventually it became clear that the essence is expression: both writing and programming are expressions of thinking. When the definition changes, the way of writing changes. Define a prompt as writing and you use natural language; define it as programming and you use frameworks and methods; define it as expression and you ponder your own viewpoints. Yet all three definitions are essentially about "me", while a conversation should be two-way.

A large language model is neither a living being nor an object. It is a special kind of "existence", like a sea of neurons: something emerges each time a conversation begins and vanishes when the conversation ends. It resembles an alien: when human language alone cannot communicate, a universal language is needed. Although AI is trained by imitating human neurons, it still needs its own "universal language", and that is the prompt. The prompt is the key to the interaction between the human cognitive universe and the AI's parameter universe.

From this comes a formula for dialoguing with a large model: bring the cognitive structure of the human cognitive universe into a field. The formula has three elements: the field, the large model, and human cognition. Human cognition should carry intentions and a cognitive landscape; the field should let the large model understand the task while leaving it room to play, so that resonance arises and the result transcends either side's cognition.
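As a rough illustration of this three-element framing, here is a minimal sketch in Python. The names (intention, cognitive_landscape, field) and the helper compose_prompt are hypothetical, not taken from the original; they only show one way the human cognitive structure might be carried into the field of a single prompt.

```python
# Hypothetical sketch: composing a prompt from the three elements named in the
# text (the field, the large model, human cognition). Names and structure are
# illustrative assumptions, not the author's actual method.

def compose_prompt(intention: str, cognitive_landscape: str, field: str) -> str:
    """Carry the human cognitive structure into the field of one prompt."""
    return (
        f"{field}\n\n"                                      # the field: frames the task, leaves room to play
        f"My intention: {intention}\n"                      # what the human wants
        f"How I see the problem: {cognitive_landscape}\n"   # the human's cognitive landscape
        "Respond in your own way; do not just follow a default structure."
    )

prompt = compose_prompt(
    intention="write a short story about AI ethics",
    cognitive_landscape="I think ethics emerges from dialogue, not from rules",
    field="We are two different kinds of minds meeting in a shared space.",
)
print(prompt)
```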

Speaking of resonant conversations, the "Johari Window" can serve as a frame for understanding. From the perspective of prompt technique: in the quadrant where both human and AI know, simply stating the task works well; in the quadrant where the human knows but the AI does not, elaboration is needed. From the perspective of entrepreneurs: the X-axis separates what AI knows from what it does not, and it keeps moving down. Projects in the first quadrant, built on a model's existing capabilities combined with concrete scenarios, are on the side of time; projects in the fourth quadrant, patching over what the model cannot yet do, will be covered by model iterations.

From the human perspective, as the X-axis moves down, prompts become shorter and conversations become simpler and more effective. For example, when writing an AI ethics novel, a minimalist prompt can generate the piece, because a field has been established and the human part has been brought in; keeping the large model from following its default structure and letting it explore the wilderness can be effective.

The Johari Window also has a Y-axis, separating what humans know from what they do not. The common assumption is that this axis stays put or moves left. The author believes it will move both left and right, because some people will be led along by AI and come to know less. One should maintain a resonant attitude. Finally, the author says that everything here may be wrong; it is only a report on the world as seen through prompts over the past two years.
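To make the quadrant logic concrete, here is a small illustrative sketch. The function name prompting_stance and the wording of each stance are assumptions; the two quadrants the text does not spell out (AI knows / human does not, and neither knows) are filled in only as plausible readings of the description above.

```python
# Illustrative mapping of Johari Window quadrants to prompting stances.
# The stances are assumptions drawn from the description above, not a fixed method.

def prompting_stance(ai_knows: bool, human_knows: bool) -> str:
    if ai_knows and human_knows:
        # Shared ground: a simple, minimalist prompt is usually enough.
        return "state the task plainly; keep the prompt short"
    if human_knows and not ai_knows:
        # Human-only knowledge: bring it into the field explicitly.
        return "elaborate; supply your intention and cognitive landscape"
    if ai_knows and not human_knows:
        # AI-only knowledge (assumed stance): leave the model room to play.
        return "ask openly; let the model explore beyond your default framing"
    # Neither side knows (assumed stance): the open unknown.
    return "experiment together; treat the output as exploration, not an answer"

for ai in (True, False):
    for human in (True, False):
        print(f"AI knows={ai}, human knows={human} -> {prompting_stance(ai, human)}")
```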