add note to all promptfoo lectures
@@ -6,6 +6,9 @@
"source": [
"# Model-graded evaluations with promptfoo\n",
"\n",
"**Note: This lesson lives in a folder that contains relevant code files. Download the entire folder if you want to follow along and run the evaluation yourself**\n",
"\n",
"\n",
"So far, we've only written code-graded evaluations. Whenever possible, code-graded evaluations are the simplest and least-expensive evaluations to run. They offer clear-cut, objective assessments based on predefined criteria, making them ideal for tasks with straightforward, quantifiable outcomes. The trouble is that code-graded evaluations can only grade certain types of outputs, primarily those that can be reduced to exact matches, numerical comparisons, or other programmable logic.\n",
"\n",
"However, many real-world applications of language models require more nuanced evaluation. Suppose we wanted to build a chatbot to be used in middle-school classrooms. We might want to evaluate the outputs to make sure they use age-appropriate language, maintain an educational tone, avoid answering non-academic questions, or provide explanations at a suitable complexity level for middle schoolers. These criteria are subjective and context-dependent, making them challenging to assess with traditional code-based methods. This is where model-graded evaluations can help!\n",
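To make the middle-school chatbot example concrete, here is a rough sketch of what a model-graded check could look like in promptfoo using its built-in `llm-rubric` assertion. This is not the lesson's actual config file; the prompt text, provider id, test question, and rubric wording below are illustrative placeholders.

```yaml
# promptfooconfig.yaml -- illustrative sketch, not the lesson's actual file
prompts:
  - "You are a classroom assistant for middle schoolers. Answer the student's question: {{question}}"

providers:
  - anthropic:messages:claude-3-5-sonnet-20241022  # placeholder model id

tests:
  - vars:
      question: "Why do we have seasons?"
    assert:
      # Model-graded check: a grader model scores the output against this rubric
      - type: llm-rubric
        value: >-
          Uses age-appropriate language, keeps an educational tone, and explains
          the concept at a level suitable for middle-school students.
```

Running `npx promptfoo@latest eval` against a config like this has a grader model judge each output against the rubric, rather than relying on exact-match or other code-based logic.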