Uncovering the “Magic Box”, or Experimenting with LLMs for API Testing

Eleonora Belova
6 min read · Nov 27, 2023


Everyone is talking about Generative AI.

Recently, I had the opportunity to attend a fascinating meetup in Amsterdam organized by Girls Code and Rockstars IT, titled ‘AI — Woman vs. Machine’. The focus was on AI and leveraging large language models (LLMs) in both professional and personal contexts. The buzz around generative AI is growing, yet not everyone fully comprehends its potential.

Describing AI can be quite challenging; often, it’s linked to a magic box with inputs and outputs, as Ramona Domen put it. However, the crucial point to grasp is the significance of context. The clarity of your input or prompt directly influences the quality of output from generative AI. In essence, the better defined your input, the more precise and reliable the output tends to be.

The concept of Prompt Engineering is essential here.

What is Prompt Engineering?

Recently I got a chance to enroll in the course Gen AI for Testers and decided to play around with ChatGPT, exploring its capabilities and the ways I can use it in my day-to-day work. The course emphasizes Prompt Engineering as a core concept, which I found particularly intriguing.

What precisely does Prompt Engineering mean?

Prompt engineering refers to the process of designing and refining prompts to achieve specific goals when using a language model like ChatGPT. It involves formulating clear instructions, providing context, and specifying the desired output format to guide the model’s responses effectively.

For instance, consider the difference between two prompts:

“How to test API?”


“I’m an Automation Test Engineer working on API tests. Could you provide test cases for the login functionality for a security application?”

The second prompt elicits a more focused and actionable response than the vague, broad first one.

Building effective prompts

Let’s take a look at what makes prompt engineering successful. What specific steps are involved in creating an effective prompt?

✅ Define objective clearly

This involves articulating with precision the specific goals and desired outcomes you aim to achieve through the interaction with ChatGPT. By clarifying these objectives, you set a clear direction for the model, guiding it to generate responses aligned with your intended purposes.

✅ Choose the right context

It involves providing the necessary background information, details, or settings that frame the conversation for the language model. By choosing the right context, you effectively orient the model towards understanding the specific scenario or subject matter, enabling more relevant and accurate responses.

✅ Specify the format

You might want to see the response in a certain format, such as JSON, a table, or a list. Let’s take a look at an example: “Create a user in json format with name, address and age”.
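For that prompt, the model’s answer might look something like the following. This is a hypothetical sketch: only the three requested fields are fixed by the prompt, and the concrete values here are invented for illustration.

```python
import json

# Hypothetical answer to "Create a user in json format with
# name, address and age" -- the values are invented examples.
user = {
    "name": "Jane Doe",
    "address": "123 Main Street, Amsterdam",
    "age": 34,
}

# Render the dict as the JSON document the prompt asks for.
print(json.dumps(user, indent=2))
```

Asking for a specific format up front saves a follow-up prompt and makes the output easier to feed into other tools.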

✅ Use system messages

Using system messages is a strategy in prompt engineering where you include additional information or context within the conversation that guides the language model’s understanding. These messages are not the primary prompts but serve as instructions or clarifications to direct the model’s responses.

For instance, if a user asks, “How do I test an API endpoint?” a system message could be incorporated to provide further details: “You are a senior developer helping colleagues understand API testing best practices.”

By including such system messages, you offer context to the model about the role or perspective of the user, helping it tailor more precise and relevant responses.
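In the OpenAI Chat Completions API, this shows up as a message with the role `system`. Here is a minimal sketch of the payload, assuming that role/content message format; the API call itself is omitted since it needs an API key.

```python
# Sketch of a conversation payload with a system message, using the
# role/content message format of the OpenAI Chat Completions API.
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior developer helping colleagues "
            "understand API testing best practices."
        ),
    },
    {"role": "user", "content": "How do I test an API endpoint?"},
]

# With the official openai SDK, this list would be passed as the
# `messages` argument of client.chat.completions.create(...).
for message in messages:
    print(f"{message['role']}: {message['content']}")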

✅ Iterate and refine

Iterating and refining is a crucial aspect of prompt engineering that involves an ongoing process of improvement and adjustment in the prompts used to interact with a language model.

✅ Collaborate with community

Collaborating with fellow engineers and actively sharing insights in forums and at conferences is a valuable way to stay updated on the latest advancements in prompt engineering, so you can continually refine your expertise and master this skill.

Additionally, adjusting temperature and token settings, refining prompts with more details, and testing on diverse scenarios are crucial for effective prompt engineering. It’s also essential to be mindful of safety and bias.
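As a rough sketch of those settings, here are the corresponding parameter names from the OpenAI Chat Completions API; the values themselves are illustrative choices, not recommendations from the course.

```python
# Illustrative generation settings; the names follow the OpenAI
# Chat Completions API, the values are arbitrary examples.
settings = {
    "temperature": 0.2,  # lower values make output more deterministic,
                         # which suits repeatable test-case generation
    "max_tokens": 500,   # upper bound on the length of the reply
}

# These would be passed alongside `model` and `messages` in the request.
print(settings)
```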

Please, find more resources on OpenAI Website.

✍️ It is time to use all these principles in practice!

🎯 My goal: to generate comprehensive test cases for the GET API endpoint https://www.boredapi.com/api/{type}

Official Documentation: https://www.boredapi.com/

Define the context and objective

✍️ Using the following prompt:

You are tasked with writing test cases to verify the functionality and behaviour of the GET method on the following URL: “https://www.boredapi.com/api/{type}”. The Bored API helps you find things to do when you’re bored.

Possible type of the activity: [“education”, “recreational”, “social”, “diy”, “charity”, “cooking”, “relaxation”, “music”, “busywork”].

Design comprehensive test cases to ensure the correctness and reliability of the GET request response. Consider different scenarios, including valid and invalid inputs, edge cases and potential error conditions. Please provide test cases details, including the request method, URL, parameters, expected outcomes, and any additional relevant information for each test case.

🤖 Response from ChatGPT:

Iterate and Refine

This response looks helpful for further implementation, but I think some cases are missing, so I would like to iterate and refine it.

✍️ My Prompt:

Could you please include test cases for all possible scenarios.

🤖 Response from ChatGPT:

Specify the format

I am almost satisfied with this result, but it is easier for me to grasp information in a table.

✍️My Prompt:

Please, use the tabular format

🤖 Response from ChatGPT:
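The screenshots of ChatGPT’s replies are not reproduced here, but the final tabular answer can be imagined as rows like the following. This is a hypothetical sketch: the endpoint and activity types come from the prompt above, while the scenario names and expectations are invented for illustration.

```python
# Hypothetical test-case table for the Bored API prompt; the endpoint
# and activity types come from the prompt, everything else is invented.
VALID_TYPES = ["education", "recreational", "social", "diy", "charity",
               "cooking", "relaxation", "music", "busywork"]

test_cases = [
    {"method": "GET",
     "url": f"https://www.boredapi.com/api/{t}",
     "scenario": f"valid activity type '{t}'",
     "expected": "HTTP 200 with an activity of that type"}
    for t in VALID_TYPES
]
test_cases += [
    {"method": "GET",
     "url": "https://www.boredapi.com/api/gardening",
     "scenario": "unknown activity type",
     "expected": "error payload explaining the type is not supported"},
    {"method": "GET",
     "url": "https://www.boredapi.com/api/",
     "scenario": "missing activity type",
     "expected": "a random activity or an error, per the documentation"},
]

# Print the table row by row, one test case per line.
for case in test_cases:
    print(case["method"], case["url"], "->", case["expected"])
```

Capturing the cases as data like this makes it straightforward to feed them into a parameterized test runner later.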

More ideas on how to master prompt engineering

You can find more helpful examples of prompts to use in ChatGPT, depending on your goal. For example, suppose you would like to correct the grammar of certain phrases.

Navigate to: https://platform.openai.com/examples and find Grammar Correction.

There is a prompt provided to you:


You will be provided with statements, and your task is to convert them to standard English.


She no went to the market.

Open ChatGPT and try to get the expected result:

This is the same result you can see in the sample response.

Want to master skills further?

Then try to experiment with different prompts, like:

  • Translation
  • Parse Unstructured Data
  • Calculate Time Complexity
  • Explain Code
  • JavaScript to Python
  • Create SQL Query
  • etc.


To delve deeper into prompt engineering, I’ve included another helpful resource from OpenAI: https://platform.openai.com/docs/guides/prompt-engineering


While I’m still in the process of completing the Gen AI for Testers course, the potential benefits of incorporating ChatGPT into the toolkit of testers and engineers are already visible. The essential lesson learned is that context matters: accurate input yields precise output, which underscores the importance of providing well-defined prompts to attain the desired outcomes.



Eleonora Belova

Passionate QA Engineer. Love automating testing routines, fan of exploratory testing. Enjoy volunteering for professional communities.