Prompt Engineering Guides

Prompt engineering guide notes for various models

Meta

https://www.llama.com/docs/how-to-guides/prompting/

Effective prompts

  1. Be clear and concise
  2. Use specific examples
  3. Vary the prompts
  4. Test and refine
  5. Use feedback

Explicit instructions

  • Stylization
  • Formatting
  • Restrictions

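The three kinds of explicit instruction can be combined in a single prompt. The snippet below is an illustrative sketch; the task and wording are assumptions, not quoted from the guide:

```python
# One prompt carrying explicit instructions of all three kinds.
# The recursion task and every instruction here are illustrative.
instructions = (
    "Explain recursion to a beginner. "                  # task
    "Use a friendly, conversational style. "             # stylization
    "Format the answer as a bulleted list. "             # formatting
    "Do not use code samples longer than three lines."   # restrictions
)
```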
Use few-shot and zero-shot prompts

  • Adding specific examples of your desired output generally results in more accurate, consistent output.
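The few-shot pattern can be sketched as prepending labeled input/output examples to the actual query. The sentiment-classification task and example texts below are illustrative, not from the guide:

```python
def build_few_shot_prompt(examples, query):
    """Prepend labeled example pairs before the real query."""
    lines = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
    # End with the query and an open label for the model to complete.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Shipping was fast and painless.")
```

With an empty `examples` list the same helper produces a zero-shot prompt.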

Use role based prompts

  • Improves relevance
  • Increases accuracy
  • but it requires effort
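A role-based prompt simply states the persona before the question. The role text below is an assumption for illustration:

```python
def role_prompt(role, question):
    """Prefix the question with a persona the model should adopt."""
    return (
        f"You are {role}. Answer the question in that capacity.\n\n"
        f"Question: {question}"
    )

p = role_prompt(
    "a senior network engineer",
    "Why might VPN latency spike at peak hours?",
)
```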

Chain of thought technique

  • Providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response.
  • Improves coherence
  • Increases depth
  • but it requires effort
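A minimal zero-shot version of the technique appends a step-by-step cue to the question. The trigger phrase below is a widely used convention, not wording mandated by the guide:

```python
def chain_of_thought_prompt(question):
    """Append a reasoning cue so the model works through intermediate steps."""
    return (
        f"{question}\n\n"
        "Let's think through this step by step before giving the final answer."
    )

p = chain_of_thought_prompt(
    "A train leaves at 9:40 and the trip takes 95 minutes. When does it arrive?"
)
```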

Self-Consistency

  • Self-Consistency improves accuracy by selecting the most frequent answer from multiple generations
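The selection step is a majority vote over sampled answers. Sampling from the model is omitted here; the answer strings stand in for real generations:

```python
from collections import Counter

def self_consistent_answer(answers):
    """Return the most frequent answer among several sampled generations."""
    return Counter(answers).most_common(1)[0][0]

# Stand-in for five independent samples of the same prompt.
samples = ["42", "41", "42", "42", "40"]
best = self_consistent_answer(samples)  # → "42"
```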

RAG

  • An effective way to incorporate facts into your LLM application; more affordable than fine-tuning, which might also negatively impact the foundation model's capabilities
  • The retrieval source could be as simple as a lookup table or as sophisticated as a vector database containing all of your company's knowledge
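The "lookup table" end of that spectrum can be sketched as keyword retrieval spliced into the prompt. The facts and keys below are made-up illustrations, not real company data:

```python
# Illustrative fact store; real systems would use a vector database.
FACTS = {
    "return policy": "Items may be returned within 30 days of delivery.",
    "warranty": "Hardware carries a one-year limited warranty.",
}

def rag_prompt(question):
    """Retrieve matching facts by keyword and ground the prompt in them."""
    context = [fact for key, fact in FACTS.items() if key in question.lower()]
    return (
        "Answer using only this context:\n"
        + "\n".join(context)
        + f"\n\nQuestion: {question}"
    )

prompt = rag_prompt("What is your return policy?")
```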

Program-Aided Language Models

  • Instructing the LLM to write code to solve calculation tasks.
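The host program then executes the code the model emits. In this sketch the "generated" snippet is hard-coded to stand in for a real model response:

```python
# Stand-in for code the model generated for "sum the integers 1 to 100".
generated_code = "result = sum(range(1, 101))"

namespace = {}
exec(generated_code, namespace)  # in practice, sandbox untrusted code
answer = namespace["result"]     # → 5050
```

Executing model-written code is risky; a real PAL setup would run it in a sandbox rather than calling `exec` directly.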

Limiting Extraneous Tokens

  • Instruct the model to generate only the requested output, with no other text before or after it
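One common form is a JSON-only instruction wrapped around the task. The exact wording below is an assumption in the spirit of the guide:

```python
def json_only_prompt(task):
    """Constrain the model to emit the requested JSON and nothing else."""
    return (
        "You are a robot that only outputs JSON. "
        "Reply with exactly the requested JSON object, "
        "with no text before or after it.\n\n" + task
    )

p = json_only_prompt("Extract the city and country from: 'She flew to Lima, Peru.'")
```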

Reduce Hallucinations

  • The language model may hallucinate information or make up facts that are not accurate or supported by evidence.
    • Provide the language model with more context or information about the topic to help it understand what is being asked and generate a more accurate response.
  • The language model may hallucinate information or make up facts that are not consistent with the desired perspective or point of view.
    • Provide the language model with additional information about the desired perspective or point of view, such as the goals, values, or beliefs of the person or entity being addressed.
  • The language model may be asked to generate a response that requires a specific tone or style.
    • Provide the language model with additional information about the desired tone or style, such as the audience or purpose of the communication.
  • Overall, the key to avoiding hallucination is to give the language model clear and accurate information and context
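The mitigations above can be combined into one grounded prompt template. Every field name and instruction below is illustrative, not wording from the guide:

```python
def grounded_prompt(question, context, perspective, tone):
    """Supply context, perspective, and tone, and allow an 'I don't know' out."""
    return (
        f"Context: {context}\n"
        f"Answer from the perspective of {perspective}, in a {tone} tone. "
        "If the context does not contain the answer, say so.\n\n"
        f"Question: {question}"
    )

p = grounded_prompt(
    question="When was the museum founded?",
    context="The city museum opened to the public in 1952.",
    perspective="a museum docent",
    tone="formal",
)
```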