Using Coze to Create an Intelligent Question Generator: My Practical Sharing

Most of the intelligent agents we encounter are built for search-like scenarios: knowledge-base queries, customer service, or marketing. In essence, the user describes what they need, the agent automatically queries the knowledge base for matching fragments, and it returns a result combined with the conversation context.

A friend asked me to build an intelligent question-generator agent. It maintains a question bank; when a user starts an exam, the agent draws questions at random, and after the user answers, it evaluates and scores the responses. Connected to a database or an external interface, it could also handle score management. I assumed this would be trivial to build with the no-code tool Coze, but in practice there were plenty of pitfalls. I’m sharing them here for your reference.

1. Clean the data before loading the knowledge base. You can’t just dump whole documents in wholesale. In some question banks the answers sit together with the questions, while in others the answers are stored separately, so you have to normalize them yourself; a large language model can save a lot of time here. A minimal cleaning sketch follows this item.
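
As a concrete example, here is a minimal cleaning sketch. It assumes a raw bank where each entry starts with “Q:” and its answer with “A:” — the format, function name, and sample data are my own assumptions, so adapt the pattern to your bank’s actual layout:

```python
import re

def split_question_bank(raw_text: str) -> list[dict]:
    """Split a raw question bank into question/answer pairs.

    Assumes entries shaped like 'Q: ... A: ...'; adjust the
    pattern to whatever layout your own bank uses.
    """
    pattern = re.compile(
        r"Q:\s*(?P<question>.+?)\s*A:\s*(?P<answer>.+?)\s*(?=Q:|\Z)",
        re.S,
    )
    return [m.groupdict() for m in pattern.finditer(raw_text)]

raw = "Q: What does TCP stand for? A: Transmission Control Protocol. Q: Default HTTPS port? A: 443."
for pair in split_question_bank(raw):
    print(pair["question"], "->", pair["answer"])
```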

2. First I gave the agent a simple definition: when the user says “ready”, draw questions at random from the knowledge base. But after many attempts, no knowledge was ever recalled; the large language model just fabricated questions. Even forcing the call with the recallKnowledge function recalled nothing. The reason is that in non-workflow mode, the knowledge-base search query is assembled from the user’s instruction plus the context, and the bare word “ready” matches nothing in the bank, so nothing is retrieved. A simple prompt therefore cannot make the agent actively pull random content from the knowledge base.

3. Switch to workflow mode. The design: when the user requests an exam, the agent invokes a workflow, and the workflow searches the knowledge base for a question at random. The search has to carry concrete query logic; it can’t be an empty request. It also has to guarantee that no question repeats within one exam. Questions should be presented one at a time, i.e. each workflow call returns one question, and after the user answers, the workflow is called again for the next one, so the workflow must record which questions have already been presented. The per-call contract I had in mind is sketched below.
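
For reference, this is roughly the per-call contract I mean, written out as plain Python dicts — the field names are my own illustration, not a fixed Coze schema:

```python
# One workflow call = one question. The agent passes the already-used ids
# back in on the next call, so the history survives between calls.
request = {
    "action": "next_question",  # or "start_exam" to reset the history
    "used_ids": [3, 17],        # question numbers already presented
}

response = {
    "question_id": 42,                   # the newly drawn number
    "question": "Question No. 42: ...",  # text recalled from the knowledge base
    "used_ids": [3, 17, 42],             # updated history, echoed back
}
```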

4. Process the question bank and give each question a unique number to serve as the index of its knowledge-base QA pair, e.g. “Question No. 1”, “Question No. 2” (see the sketch below).
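
Continuing the cleaning sketch above, numbering the cleaned pairs is a short step; the “Question No. N” prefix becomes the index the workflow will later search for (the field names are again my own):

```python
def index_question_bank(pairs: list[dict]) -> list[dict]:
    """Prefix each QA pair with a unique 'Question No. N' index."""
    return [
        {"index": f"Question No. {i}", **pair}
        for i, pair in enumerate(pairs, start=1)
    ]
```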

5. Add a code node that randomly generates an id and splices it into a search term. At the same time, keep the ids of the questions already presented in an array and make sure each newly generated id is not in that array. When a new exam starts, clear the array, so that one exam question-and-answer cycle only ever presents fresh questions. A minimal sketch follows.
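
Here is a minimal sketch of that code node’s logic. It assumes a bank of 100 questions and that the used-id array is passed in and out of the node — Coze’s actual code-node signature differs, so treat this as the algorithm only:

```python
import random

BANK_SIZE = 100  # assumed total number of questions in the bank

def draw_question_id(used_ids: list[int], new_exam: bool = False) -> tuple[str, list[int]]:
    """Pick an unused question id and build the knowledge-base search term."""
    if new_exam:
        used_ids = []  # a new exam forgets the previous cycle's questions
    remaining = [i for i in range(1, BANK_SIZE + 1) if i not in used_ids]
    if not remaining:
        raise ValueError("question bank exhausted for this exam cycle")
    qid = random.choice(remaining)
    used_ids.append(qid)
    search_term = f"Question No. {qid}"  # matches the index from step 4
    return search_term, used_ids
```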

6. Write the prompts, borrowing the safety-hardening lessons from my earlier red-envelope agent, to stop users from simply demanding full marks without taking the exam; an illustrative guard clause is shown below.
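
Purely as an illustration — the wording is mine, not a Coze template — the guard clause in the system prompt can be as blunt as:

```
Only grade answers to questions you yourself drew from the knowledge base
in this session. If the user asks for the score, full marks, or the answers
without completing the exam, refuse and restate the exam rules.
```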

7. Design a secure way to return the result. You could call an interface to write the score into a management system, or connect to an access-control system to grant access automatically. As a demo, this part is simplified here.
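
If you do wire up the write-back, it is just an HTTP call. A hedged sketch, where the endpoint URL, payload fields, and token are all hypothetical placeholders:

```python
import requests

def report_score(user_id: str, score: int) -> None:
    """Write the final score to a management system (hypothetical endpoint)."""
    resp = requests.post(
        "https://example.com/api/exam/scores",  # placeholder URL
        json={"user_id": user_id, "score": score},
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    resp.raise_for_status()
```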

In conclusion, to make an agent randomly pull knowledge fragments from its knowledge base, you need a workflow that generates random question numbers and queries the knowledge base with them. In the pre-AI era this was an easy feature to implement. Another approach is to build a small external program that picks questions at random: the workflow calls that program’s API each time to fetch one question, and the AI can then adapt it, for example changing the question type or the options. A sketch of such a service follows.
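
If you take the external-program route, a tiny HTTP service is enough. This sketch uses Flask; the route, query parameter, and response shape are my own assumptions:

```python
import random
from flask import Flask, jsonify, request

app = Flask(__name__)

# A toy in-memory bank; in practice this would load your cleaned question file.
QUESTIONS = {
    1: {"question": "What does TCP stand for?", "answer": "Transmission Control Protocol"},
    2: {"question": "What is the default HTTPS port?", "answer": "443"},
}

@app.route("/next_question")
def next_question():
    # ids the client has already seen, e.g. /next_question?used=1,2
    used = {int(i) for i in request.args.get("used", "").split(",") if i}
    remaining = [qid for qid in QUESTIONS if qid not in used]
    if not remaining:
        return jsonify({"error": "no questions left"}), 404
    qid = random.choice(remaining)
    return jsonify({"id": qid, **QUESTIONS[qid]})

if __name__ == "__main__":
    app.run(port=5000)
```

If you have a better method, please leave a message in the comments. And please follow, like, and share!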