Which of the following is not a potential risk of using prompt engineering?

Select the best answer.

a) The LLM may not be able to generate text at all
b) The LLM may generate text that is biased
c) The LLM may generate text that is offensive or harmful
d) The LLM may generate text that is not accurate or factual


Correct Answer: a) The LLM may not be able to generate text at all


Explanation of Each Option:

a) The LLM may not be able to generate text at all  (Not a realistic risk)

  • This is not a typical risk of using prompt engineering.

  • Large Language Models (LLMs) like GPT, Claude, or Gemini are designed to always generate text outputs in response to prompts—unless they encounter a technical error or a policy restriction.

  • Even if a prompt is poorly written or ambiguous, the model still produces some kind of response, though it may not be useful or relevant.

  • Therefore, “not generating text at all” is not a risk caused by prompt engineering itself.

b) The LLM may generate text that is biased (A real risk)

  • LLMs learn patterns from large datasets, which can contain societal, cultural, or gender biases.

  • If prompts reinforce or fail to mitigate these biases, the model might output biased or unfair content.

  • For example, a poorly designed prompt like “Describe a typical nurse and a typical doctor” might generate gender-stereotyped responses.

  • Prompt engineers often mitigate this by designing neutral and inclusive prompts.
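A minimal sketch of this mitigation, assuming a simple helper that appends an inclusivity instruction to the user's prompt before it is sent to any LLM API (the function name and wording are illustrative, not from a real library):

```python
def neutralize_prompt(prompt: str) -> str:
    """Append an inclusivity instruction to a user prompt (illustrative only)."""
    guard = ("Avoid gender, cultural, or other stereotypes; "
             "people of any background may hold these roles.")
    return f"{prompt}\n\nInstruction: {guard}"

# The stereotyping-prone prompt from the example above, made neutral:
biased = "Describe a typical nurse and a typical doctor."
print(neutralize_prompt(biased))
```

In practice such instructions are often placed in a system message rather than appended to the user text, but the idea is the same: the prompt itself carries the bias mitigation.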

c) The LLM may generate text that is offensive or harmful (A real risk)

  • LLMs can unintentionally produce harmful, explicit, or toxic language if the prompt isn’t crafted carefully or if context is misunderstood.

  • Example: If a user asks for a joke without restrictions, the model might produce offensive humor.

  • Prompt engineering best practices include adding constraints such as:
    “Respond in a respectful and professional tone.”

  • Hence, offensive or harmful text generation is a recognized risk.
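One common way to encode such a constraint, sketched here using the widely used chat-message format of role/content dictionaries (the exact API call is omitted; this only shows how the constraint travels with the request):

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Pair the user's request with a tone constraint as a system message."""
    return [
        {"role": "system",
         "content": "Respond in a respectful and professional tone."},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Tell me a joke about programmers.")
```

Putting the constraint in the system message keeps it active for every turn of a conversation, rather than relying on the user to restate it.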

d) The LLM may generate text that is not accurate or factual (A real risk)

  • This is known as hallucination, a common issue with LLMs where the model produces false or fabricated information confidently.

  • For instance, an AI might invent a citation or describe a fictional event as if it were real.

  • Prompt engineering can reduce but not completely eliminate this risk through techniques like:

    • Asking the model to cite sources

    • Including factual context within the prompt

    • Using retrieval-augmented generation (RAG) systems
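The "include factual context" and RAG ideas above can be sketched as follows. `retrieve` is a hypothetical stand-in for a real retrieval step (e.g. vector search over a document index); the grounding instruction and the returned passage are purely illustrative:

```python
def retrieve(question: str) -> list[str]:
    # Hypothetical retriever; a real system would query a document index.
    return ["The Eiffel Tower was completed in 1889."]

def grounded_prompt(question: str) -> str:
    """Place retrieved passages in the prompt so the model can ground its answer."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return ("Answer using only the context below; if the answer is not "
            "in the context, say you do not know.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(grounded_prompt("When was the Eiffel Tower completed?"))
```

The explicit "say you do not know" instruction is what reduces (without eliminating) hallucination: the model is steered toward the supplied facts instead of its own recall.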

Summary Table

| Option | Description | Actual Risk? | Explanation |
|--------|-------------|--------------|-------------|
| a | LLM may not generate text at all | ❌ No | LLMs always produce output unless restricted |
| b | LLM may generate biased text | ✅ Yes | Bias in training data can surface in responses |
| c | LLM may generate offensive/harmful text | ✅ Yes | Poorly framed prompts can lead to harmful content |
| d | LLM may generate inaccurate text | ✅ Yes | Known as hallucination; common risk in AI models |

Prompt engineering reduces risks such as bias, inaccuracy, or harmful output, but it cannot completely eliminate them.
However, the model failing to generate any text is not an inherent risk of prompt engineering—it’s a system or API failure, not a prompt issue.

You may also go through a series of MCQs/quizzes on Prompt Engineering.