Add real_world_prompting with Vertex

This commit is contained in:
Elie Schoppik
2024-08-26 20:56:24 -04:00
parent 52f25e6c26
commit 891ba16451
127 changed files with 191 additions and 32854 deletions


@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lesson 5: Customer support prompt\n",
"## Lesson 5: customer support prompt\n",
"\n",
"In this lesson, we'll work on building a customer support chatbot prompt. Our goal is to build a virtual support bot called \"Acme Assistant\" for a fictional company called Acme Software Solutions. This fictional company sells a piece of software called AcmeOS, and the chatbot's job is to help answer customer questions around things like installation, error codes, troubleshooting, etc.\n",
"\n",
@@ -175,18 +175,33 @@
"Next, let's write a function that combines the various parts of the prompt and sends a request to Claude."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -U python-dotenv google-cloud-aiplatform \"anthropic[vertex]\"\n",
"!gcloud auth application-default login"
]
},
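The cell above installs the SDK's Vertex extra and sets up Application Default Credentials via `gcloud`. The import cell that follows reads `PROJECT_ID` and `REGION` from the environment, so a `.env` file along these lines is assumed to sit next to the notebook (values are placeholders, not from this commit):

```
# .env (placeholder values; use your own GCP project and region)
PROJECT_ID=my-gcp-project
REGION=us-central1
```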
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from anthropic import Anthropic\n",
"from anthropic import AnthropicVertex\n",
"from dotenv import load_dotenv\n",
"import json\n",
"import os\n",
"\n",
"load_dotenv()\n",
"client = Anthropic()\n",
"\n",
"project_id = os.environ.get(\"PROJECT_ID\")\n",
"# Region where the model is running, e.g. us-central1 or europe-west4 for Haiku\n",
"region = os.environ.get(\"REGION\")\n",
"\n",
"client = AnthropicVertex(project_id=project_id, region=region)\n",
"\n",
"def answer_question_first_attempt(question):\n",
" system = \"\"\"\n",
@@ -207,7 +222,7 @@
" # Send a request to Claude\n",
" response = client.messages.create(\n",
" system=system,\n",
" model=\"claude-3-haiku-20240307\",\n",
" model=\"claude-3-haiku@20240307\",\n",
" max_tokens=2000,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": final_prompt} \n",
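One detail worth noting in the hunks above: Vertex AI identifies Claude models as name@version (`claude-3-haiku@20240307`), while the direct Anthropic API uses name-version (`claude-3-haiku-20240307`), which is why every `model=` line changes in this commit. A small hypothetical helper (not part of the notebook) makes the mapping explicit:

```python
def to_vertex_model_id(api_model_id: str) -> str:
    """Convert a direct Anthropic API model ID to the Vertex AI form.

    Vertex AI separates the model name from its version date with "@",
    while the direct API uses "-". Hypothetical helper for illustration.
    """
    # Split on the last "-", which precedes the version date
    name, _, version = api_model_id.rpartition("-")
    return f"{name}@{version}"

print(to_vertex_model_id("claude-3-haiku-20240307"))  # claude-3-haiku@20240307
```

The same mapping applies to other model IDs, since the version date always follows the final hyphen.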
@@ -622,7 +637,7 @@
" # Send a request to Claude\n",
" response = client.messages.create(\n",
" system=system,\n",
" model=\"claude-3-haiku-20240307\",\n",
" model=\"claude-3-haiku@20240307\",\n",
" max_tokens=2000,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": final_prompt} \n",
@@ -638,6 +653,13 @@
"Let's start by making sure it still works when answering basic user questions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 15,
@@ -1038,7 +1060,7 @@
" # Send a request to Claude\n",
" response = client.messages.create(\n",
" system=system,\n",
" model=\"claude-3-haiku-20240307\",\n",
" model=\"claude-3-haiku@20240307\",\n",
" max_tokens=2000,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": final_prompt} \n",
@@ -1250,7 +1272,7 @@
" # Send a request to Claude\n",
" response = client.messages.create(\n",
" system=system,\n",
" model=\"claude-3-haiku-20240307\",\n",
" model=\"claude-3-haiku@20240307\",\n",
" max_tokens=2000,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": final_prompt} \n",