adding real_world_prompting with vertex
real_world_prompting/.gitignore (vendored, new file)
@@ -0,0 +1 @@
.env
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Lesson 2: A real-world prompt\n",
"# Lesson 2: a real-world prompt\n",
"\n",
"In the previous lesson, we discussed several key prompting tips and saw an example of how to use each in isolation. Let's now try writing a much larger prompt that incorporates many of the techniques we just covered.\n",
"\n",
@@ -120,7 +120,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -291,7 +291,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -300,7 +300,7 @@
|
||||
"'\\nPatient Name: Lily Chen\\nAge: 8\\nMedical Record:\\n\\n2016 (Birth): Born at 34 weeks, diagnosed with Tetralogy of Fallot (TOF)\\n - Immediate surgery to place a shunt for increased pulmonary blood flow\\n2016 (3 months): Echocardiogram showed worsening right ventricular hypertrophy\\n2017 (8 months): Complete repair of TOF (VSD closure, pulmonary valve replacement, RV outflow tract repair)\\n2017 (10 months): Developed post-operative arrhythmias, started on amiodarone\\n2018 (14 months): Developmental delay noted, referred to early intervention services\\n2018 (18 months): Speech therapy initiated for delayed language development\\n2019 (2 years): Diagnosed with failure to thrive, started on high-calorie diet\\n2019 (2.5 years): Occupational therapy started for fine motor skill delays\\n2020 (3 years): Cardiac catheterization showed mild pulmonary stenosis\\n2020 (3.5 years): Diagnosed with sensory processing disorder (SPD)\\n2021 (4 years): Started integrated preschool program with IEP (Individualized Education Plan)\\n2021 (4.5 years): Hospitalized for RSV bronchiolitis, required brief oxygen support\\n2022 (5 years): Echocardiogram showed progression of pulmonary stenosis, balloon valvuloplasty performed\\n2022 (5.5 years): Diagnosed with attention-deficit/hyperactivity disorder (ADHD), started behavioral therapy\\n2023 (6 years): Cochlear implant surgery for sensorineural hearing loss\\n2023 (7 years): Started mainstream school with continued IEP support\\n2024 (7.5 years): Occupational therapy discontinued, met fine motor skill goals\\n2024 (8 years): Periodic cardiac follow-up shows stable pulmonary valve function\\n2024 (8 years): Speech development progressing well, ongoing therapy\\n '"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
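The long string above is the notebook's sample patient record, returned as the value of the cell. Later cells in this diff interpolate it into a prompt template; a minimal sketch of that step, using the variable names that appear further down (`initial_prompt`, `patient_record`):

```python
# Sketch only: initial_prompt and patient_record are defined in cells not shown
# in this hunk; the template is assumed to expose a {record} placeholder.
prompt_with_record = initial_prompt.format(record=patient_record)
```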
@@ -330,7 +330,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -352,20 +352,70 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 98,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Note: you may need to restart the kernel to use updated packages.\n",
|
||||
"Your browser has been opened to visit:\n",
|
||||
"\n",
|
||||
" https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=764086051850-6qr4p6gpi6hn506pt8ejuq83di341hur.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fsqlservice.login&state=1tuG7GpenqE0tnrhJMJBgmtqMdUItk&access_type=offline&code_challenge=yhf6A8DPqH88gqJR8yLR16ZX_qyDSpfqQmEB8vKECro&code_challenge_method=S256\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Credentials saved to file: [/Users/elie/.config/gcloud/application_default_credentials.json]\n",
|
||||
"\n",
|
||||
"These credentials will be used by any library that requests Application Default Credentials (ADC).\n",
|
||||
"\n",
|
||||
"Quota project \"anthropic\" was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource.\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Updates are available for some Google Cloud CLI components. To install them,\n",
|
||||
"please run:\n",
|
||||
" $ gcloud components update\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from anthropic import Anthropic\n",
|
||||
"%pip install -U --quiet python-dotenv google-cloud-aiplatform \"anthropic[vertex]\"\n",
|
||||
"!gcloud auth application-default login"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"just-aloe-430520-u6 us-central1\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from anthropic import AnthropicVertex\n",
|
||||
"from dotenv import load_dotenv\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"load_dotenv()\n",
|
||||
"client = Anthropic()\n",
|
||||
"\n",
|
||||
"project_id = os.environ.get(\"PROJECT_ID\")\n",
|
||||
"# Where the model is running. e.g. us-central1 or europe-west4 for haiku\n",
|
||||
"region = os.environ.get(\"REGION\")\n",
|
||||
"\n",
|
||||
"client = AnthropicVertex(project_id=project_id, region=region)\n",
|
||||
"\n",
|
||||
"print(project_id, region)\n",
|
||||
"\n",
|
||||
"def generate_summary_with_bad_prompt(patient_record):\n",
|
||||
" prompt_with_record = initial_prompt.format(record=patient_record)\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt_with_record}]\n",
|
||||
" )\n",
|
||||
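The new setup cell above reads its Google Cloud settings from a local `.env` file (which this commit also adds to `.gitignore`) and switches to the Vertex model naming scheme, where the version is separated with `@` (`claude-3-sonnet@20240229`) instead of `-`. Note that the cell still builds a plain `Anthropic()` client before immediately replacing it with `AnthropicVertex`; only the Vertex client is used. A minimal sketch of the assumed setup, with placeholder values:

```python
# .env (placeholder values; PROJECT_ID and REGION are the names the cell reads)
#   PROJECT_ID=your-gcp-project-id
#   REGION=us-central1

from anthropic import AnthropicVertex
from dotenv import load_dotenv
import os

load_dotenv()  # pulls PROJECT_ID and REGION into the environment
client = AnthropicVertex(
    project_id=os.environ.get("PROJECT_ID"),
    region=os.environ.get("REGION"),
)
```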
@@ -382,7 +432,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -390,32 +440,15 @@
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"===============================\n",
|
||||
"Here is a summary of Evelyn Thompson's 78-year-old medical record:\n",
|
||||
"Here is a summary of Evelyn Thompson's medical record:\n",
|
||||
"\n",
|
||||
"Chronic Conditions:\n",
|
||||
"- Type 2 diabetes (since 1985) - on metformin, dose increased in 2010\n",
|
||||
"- Hypertension (since 1992) - on lisinopril, dose increased in 2020\n",
|
||||
"- Hypothyroidism (since 2000) - on levothyroxine \n",
|
||||
"- Atrial fibrillation (since 2005) - on warfarin\n",
|
||||
"- Vitamin B12 deficiency (since 2008) - receiving monthly injections\n",
|
||||
"- Chronic kidney disease stage 3 (since 2015) - metformin adjusted\n",
|
||||
"- Mild cognitive impairment (since 2019) - on donepezil\n",
|
||||
"Evelyn is a 78-year-old female with a long history of chronic medical conditions including type 2 diabetes since 1985 (on metformin), hypertension since 1992 (on lisinopril), hypothyroidism since 2000 (on levothyroxine), and atrial fibrillation since 2005 (on warfarin and aspirin). She has also had osteoarthritis requiring a right total hip replacement in 1998 and a left total knee replacement in 2017. \n",
|
||||
"\n",
|
||||
"Surgical History:\n",
|
||||
"- Total hip replacement (1998) - right side, due to osteoarthritis\n",
|
||||
"- Cataract surgery (2003) - both eyes\n",
|
||||
"- Lumpectomy and radiation (2013) - for stage 2 breast cancer \n",
|
||||
"- Total knee replacement (2017) - left side, due to osteoarthritis\n",
|
||||
"In 2003, she underwent cataract surgery bilaterally. She was diagnosed with vitamin B12 deficiency in 2008 requiring monthly injections. In 2011, she had a transient ischemic attack. She was diagnosed with stage 2 breast cancer in 2013, treated with lumpectomy, radiation, and anastrozole.\n",
|
||||
"\n",
|
||||
"Cancer History: \n",
|
||||
"- Breast cancer (2013) - currently on anastrozole for recurrence prevention\n",
|
||||
"She developed chronic kidney disease stage 3 in 2015 requiring metformin adjustment. Other issues include pneumonia in 2018 requiring hospitalization, mild cognitive impairment since 2019 (on donepezil), refractory hypertension in 2020, and recurrent UTIs in 2021 treated with prophylactic antibiotics.\n",
|
||||
"\n",
|
||||
"Recent Issues:\n",
|
||||
"- Recurrent UTIs (2021) - on prophylactic antibiotics\n",
|
||||
"- Worsening kidney function per eGFR (2022)\n",
|
||||
"- Declining mobility (2023) - started physical therapy and home health aide\n",
|
||||
"\n",
|
||||
"Overall, an elderly patient with multiple chronic conditions requiring polypharmacy and close monitoring, especially for diabetes, hypertension, kidney disease, and cancer recurrence.\n"
|
||||
"Recent issues are worsening kidney function based on eGFR in 2022 and declining mobility in 2023 requiring physical therapy and home health aide.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -1151,7 +1184,7 @@
|
||||
"def generate_summary_with_improved_prompt(patient_record):\n",
|
||||
" prompt_with_record = updated_prompt.format(record=patient_record) #use our rewritten prompt!\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" system=system, #add in our system prompt!\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": prompt_with_record}]\n",
|
||||
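This hunk swaps the model id inside `generate_summary_with_improved_prompt`, which formats the rewritten prompt and passes the separate system prompt. A hedged usage sketch; the function's return statement falls outside this hunk, so it is assumed here to return the Messages API response as in the earlier cells:

```python
# Assumes patient_record holds the record string shown earlier and that the
# helper returns the raw Messages API response.
response = generate_summary_with_improved_prompt(patient_record)
print(response.content[0].text)
```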
@@ -1687,7 +1720,7 @@
|
||||
" final_prompt_part = medical_record_input_prompt.format(record=patient_record) #add the medical record to the final prompt piece\n",
|
||||
" complete_prompt = updated_json_prompt + final_prompt_part\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" system=system, #add in our system prompt!\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": complete_prompt}]\n",
|
||||
@@ -1860,7 +1893,7 @@
|
||||
" final_prompt_part = medical_record_input_prompt.format(record=patient_record) #add the medical record to the final prompt piece\n",
|
||||
" complete_prompt = updated_json_prompt + final_prompt_part\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" system=system, #add in our system prompt!\n",
|
||||
" messages=[{\"role\": \"user\", \"content\": complete_prompt}]\n",
|
||||
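The two hunks above make the same model-id change in the JSON-output version of the summarizer, which concatenates `updated_json_prompt` with the formatted record before sending it. A minimal sketch of consuming that structured output; the parsing step is an assumption, not shown in this diff:

```python
import json

# Assumes `response` is the Messages API result from the call above and that
# the model followed the JSON-format instructions in updated_json_prompt.
summary = json.loads(response.content[0].text)
print(list(summary.keys()))
```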
@@ -1959,7 +1992,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.6"
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
@@ -4,7 +4,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Lesson 3: Prompt engineering\n",
|
||||
"# Lesson 3: Prompt Engineering\n",
|
||||
"\n",
|
||||
"In the first lesson, we quickly reviewed some key prompting tips. In the second lesson, we wrote a prompt that \"blindly\" applied all of those tips to a single prompt. Understanding these tips is critical, but it's equally important to understand the prompt engineering workflow and decision making framework.\n",
|
||||
"\n",
|
||||
|
||||
@@ -47,7 +47,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 42,
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -71,7 +71,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 43,
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -101,7 +101,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 44,
|
||||
"execution_count": 5,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -152,7 +152,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 45,
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -179,21 +179,54 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 46,
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Note: you may need to restart the kernel to use updated packages.\n",
|
||||
"Your browser has been opened to visit:\n",
|
||||
"\n",
|
||||
" https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=764086051850-6qr4p6gpi6hn506pt8ejuq83di341hur.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fsqlservice.login&state=1JlV7BunjhxeVP0sHTct8UQyia4vQW&access_type=offline&code_challenge=qx8gVXITZrRA8x4zSIulz7tYTCgNRtPLYti6p2dEna8&code_challenge_method=S256\n",
|
||||
"\n",
|
||||
"^C\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Command killed by keyboard interrupt\n",
|
||||
"\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%pip install --quiet -U python-dotenv google-cloud-aiplatform \"anthropic[vertex]\"\n",
|
||||
"!gcloud auth application-default login"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from anthropic import Anthropic\n",
|
||||
"from anthropic import AnthropicVertex\n",
|
||||
"from dotenv import load_dotenv\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"load_dotenv()\n",
|
||||
"client = Anthropic()\n",
|
||||
"\n",
|
||||
"project_id = os.environ.get(\"PROJECT_ID\")\n",
|
||||
"# Where the model is running. e.g. us-central1 or europe-west4 for haiku\n",
|
||||
"region = os.environ.get(\"REGION\")\n",
|
||||
"\n",
|
||||
"client = AnthropicVertex(project_id=project_id, region=region)\n",
|
||||
"\n",
|
||||
"def summarize_call(transcript):\n",
|
||||
" final_prompt = prompt.format(transcript=transcript)\n",
|
||||
" # Make the API call\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" max_tokens=4096,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"user\", \"content\": final_prompt}\n",
|
||||
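This notebook gets the same Vertex setup ahead of `summarize_call`, which fills a `{transcript}` placeholder and calls the `@`-versioned Sonnet model. A hedged usage sketch, assuming `transcript` holds one of the sample call transcripts defined elsewhere in the notebook and that the function returns the API response:

```python
# Placeholder transcript; the notebook defines its own sample transcripts.
transcript = (
    "Agent: Thanks for calling Acme support.\n"
    "Customer: My smart light bulb won't turn on."
)

response = summarize_call(transcript)  # assumed to return the Messages API response
print(response.content[0].text)        # the summaries printed in later cells come from this text
```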
@@ -204,7 +237,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 47,
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -213,14 +246,13 @@
|
||||
"text": [
|
||||
"Here is a summary of the customer service call transcript:\n",
|
||||
"\n",
|
||||
"Main Issue:\n",
|
||||
"The customer was unable to turn on their Acme smart light bulb.\n",
|
||||
"Main Issue: The customer was unable to turn on their smart light bulb. \n",
|
||||
"\n",
|
||||
"Resolution:\n",
|
||||
"The service agent instructed the customer to reset the bulb by turning the power off for 5 seconds and then back on. This should reset the bulb and allow it to turn on.\n",
|
||||
"Resolution: The customer service agent instructed the customer to reset the bulb by turning the power off for 5 seconds and then back on, which should reset the bulb.\n",
|
||||
"\n",
|
||||
"Follow-Up:\n",
|
||||
"The agent told the customer to call back if they continued to have issues after trying the reset procedure. No other follow-up was mentioned.\n"
|
||||
"Follow-up: The agent told the customer to call back if they still needed further assistance after trying the reset procedure.\n",
|
||||
"\n",
|
||||
"Overall, it was a straightforward issue where the agent provided a simple troubleshooting step to potentially resolve the customer's problem with the smart light bulb not turning on. No major follow-up was required beyond checking if the reset worked or if the customer needed additional help.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -230,7 +262,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 48,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -239,11 +271,11 @@
|
||||
"text": [
|
||||
"Summary:\n",
|
||||
"\n",
|
||||
"Main Issue: The customer's Acme SmartTherm thermostat was not maintaining the set temperature of 72°F, and the house was much warmer.\n",
|
||||
"Main Issue: The customer's Acme SmartTherm thermostat was not maintaining the set temperature correctly. The thermostat was set to 72°F, but the house was much warmer.\n",
|
||||
"\n",
|
||||
"Resolution: The agent guided the customer through the process of recalibrating the SmartTherm thermostat. This involved accessing the \"Calibration\" menu, adjusting the temperature to match the customer's room thermometer (79°F in this case), and confirming the new setting. The recalibration process may take a few minutes to complete.\n",
|
||||
"Resolution: The agent guided the customer through recalibrating the SmartTherm thermostat. This involved accessing the \"Calibration\" menu, adjusting the temperature to match a separate room thermometer reading of 79°F, and confirming the new calibration setting. The recalibration process may take a few minutes.\n",
|
||||
"\n",
|
||||
"Follow-up Required: The customer was advised to check the thermostat in an hour to see if the issue was resolved after the recalibration process completed.\n"
|
||||
"Follow-up: The agent instructed the customer to check back in an hour to see if the recalibration fixed the temperature issue with the SmartTherm thermostat maintaining the desired temperature setting.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -253,7 +285,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 49,
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -263,13 +295,13 @@
|
||||
"Here is a summary of the customer service call transcript:\n",
|
||||
"\n",
|
||||
"Main Issue:\n",
|
||||
"The customer was having an issue with their Acme SecureHome alarm system going off randomly in the middle of the night, even though all doors and windows were closed properly.\n",
|
||||
"The customer was having an issue with their Acme SecureHome security system, where the alarm kept going off randomly in the middle of the night, despite no apparent cause.\n",
|
||||
"\n",
|
||||
"How It Was Resolved:\n",
|
||||
"The customer service agent first had the customer check for any error messages on the control panel and confirm that the battery was not low. When those basic troubleshooting steps did not reveal the issue, the agent determined that one of the sensors may be malfunctioning and needed to transfer the customer to the technical support team for a full system diagnostic.\n",
|
||||
"How it was Resolved:\n",
|
||||
"The agent first checked if the issue could be caused by doors/windows not closing properly or a low battery in the control panel, but ruled those out based on the customer's responses. Unable to diagnose the issue further, the agent transferred the call to the technical team so they could run a full diagnostic on the system to identify the root cause, which was likely a malfunctioning sensor.\n",
|
||||
"\n",
|
||||
"Required Follow-Up:\n",
|
||||
"The technical support team needs to run a diagnostic on the customer's SecureHome system to identify which sensor(s) may be causing the false alarms and then repair or replace those components. The customer should be contacted again once the diagnostic is complete and the repair/replacement has been performed to ensure the random alarms have been resolved.\n"
|
||||
"The technical team needs to complete the system diagnostic on the customer's SecureHome system to pinpoint the faulty sensor or component causing the random alarms. Once identified, they can work to replace/repair that part to resolve the issue permanently for the customer.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@@ -304,7 +336,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 50,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -342,7 +374,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 51,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -397,7 +429,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 52,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -445,7 +477,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 53,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -481,7 +513,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 56,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -537,7 +569,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 57,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -545,7 +577,7 @@
|
||||
" final_prompt = prompt.replace(\"[INSERT CALL TRANSCRIPT HERE]\", transcript)\n",
|
||||
" # Make the API call\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" system=system,\n",
|
||||
" max_tokens=4096,\n",
|
||||
" messages=[\n",
|
||||
@@ -564,7 +596,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 58,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -600,7 +632,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 59,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -642,7 +674,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 60,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -687,7 +719,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 64,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -783,7 +815,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 65,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -827,7 +859,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 66,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -862,7 +894,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 67,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -898,7 +930,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 68,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -930,7 +962,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 69,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1015,7 +1047,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 70,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -1046,7 +1078,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 71,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -1183,7 +1215,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 75,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -1394,7 +1426,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 76,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -1404,7 +1436,7 @@
|
||||
" final_prompt = prompt.replace(\"[INSERT CALL TRANSCRIPT HERE]\", transcript)\n",
|
||||
" # Make the API call\n",
|
||||
" response = client.messages.create(\n",
|
||||
" model=\"claude-3-sonnet-20240229\",\n",
|
||||
" model=\"claude-3-sonnet@20240229\",\n",
|
||||
" system=system,\n",
|
||||
" max_tokens=4096,\n",
|
||||
" messages=[\n",
|
||||
@@ -1430,7 +1462,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 77,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1456,7 +1488,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 78,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1489,7 +1521,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 79,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1526,7 +1558,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 80,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1545,7 +1577,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 82,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1564,7 +1596,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 83,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1590,7 +1622,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 84,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1609,7 +1641,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 85,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
@@ -1680,7 +1712,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.6"
|
||||
"version": "3.11.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
@@ -4,7 +4,7 @@
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Lesson 5: Customer support prompt\n",
|
||||
"## Lesson 5: customer support prompt\n",
|
||||
"\n",
|
||||
"In this lesson, we'll work on building a customer support chatbot prompt. Our goal is to build a virtual support bot called \"Acme Assistant\" for a fictional company called Acme Software Solutions. This fictional company sells a piece of software called AcmeOS, and the chatbot's job is to help answer customer questions around things like installation, error codes, troubleshooting, etc.\n",
|
||||
"\n",
|
||||
@@ -175,18 +175,33 @@
|
||||
"Next, let's write a function that we can use that will combine the various parts of the prompt and send a request to Claude."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install -U python-dotenv google-cloud-aiplatform \"anthropic[vertex]\"\n",
|
||||
"!gcloud auth application-default login"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from anthropic import Anthropic\n",
|
||||
"from anthropic import AnthropicVertex\n",
|
||||
"from dotenv import load_dotenv\n",
|
||||
"import json\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"load_dotenv()\n",
|
||||
"client = Anthropic()\n",
|
||||
"\n",
|
||||
"project_id = os.environ.get(\"PROJECT_ID\")\n",
|
||||
"# Where the model is running. e.g. us-central1 or europe-west4 for haiku\n",
|
||||
"region = os.environ.get(\"REGION\")\n",
|
||||
"\n",
|
||||
"client = AnthropicVertex(project_id=project_id, region=region)\n",
|
||||
"\n",
|
||||
"def answer_question_first_attempt(question):\n",
|
||||
" system = \"\"\"\n",
|
||||
@@ -207,7 +222,7 @@
|
||||
" # Send a request to Claude\n",
|
||||
" response = client.messages.create(\n",
|
||||
" system=system,\n",
|
||||
" model=\"claude-3-haiku-20240307\",\n",
|
||||
" model=\"claude-3-haiku@20240307\",\n",
|
||||
" max_tokens=2000,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"user\", \"content\": final_prompt} \n",
|
||||
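The customer-support notebook makes the matching switch for Haiku: `claude-3-haiku-20240307` becomes the Vertex id `claude-3-haiku@20240307` (the setup comment notes the region matters, e.g. `us-central1` or `europe-west4` for Haiku). A minimal direct-call sketch against the Vertex client with that id; the system text and question are placeholders:

```python
# Placeholders for illustration; the notebook builds its real system prompt and
# final_prompt from the Acme Software Solutions support material.
system = "You are Acme Assistant, a support bot for AcmeOS."
question = "How do I install AcmeOS?"

response = client.messages.create(
    model="claude-3-haiku@20240307",  # Vertex-style model id used in this diff
    system=system,
    max_tokens=2000,
    messages=[{"role": "user", "content": question}],
)
print(response.content[0].text)
```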
@@ -622,7 +637,7 @@
|
||||
" # Send a request to Claude\n",
|
||||
" response = client.messages.create(\n",
|
||||
" system=system,\n",
|
||||
" model=\"claude-3-haiku-20240307\",\n",
|
||||
" model=\"claude-3-haiku@20240307\",\n",
|
||||
" max_tokens=2000,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"user\", \"content\": final_prompt} \n",
|
||||
@@ -638,6 +653,13 @@
|
||||
"Let's start by making sure it still works when answering basic user questions:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
@@ -1038,7 +1060,7 @@
|
||||
" # Send a request to Claude\n",
|
||||
" response = client.messages.create(\n",
|
||||
" system=system,\n",
|
||||
" model=\"claude-3-haiku-20240307\",\n",
|
||||
" model=\"claude-3-haiku@20240307\",\n",
|
||||
" max_tokens=2000,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"user\", \"content\": final_prompt} \n",
|
||||
@@ -1250,7 +1272,7 @@
|
||||
" # Send a request to Claude\n",
|
||||
" response = client.messages.create(\n",
|
||||
" system=system,\n",
|
||||
" model=\"claude-3-haiku-20240307\",\n",
|
||||
" model=\"claude-3-haiku@20240307\",\n",
|
||||
" max_tokens=2000,\n",
|
||||
" messages=[\n",
|
||||
" {\"role\": \"user\", \"content\": final_prompt} \n",
|
||||
|
||||
@@ -1,6 +1,6 @@
# Real world prompting course

Welcome to Anthropic's comprehensive real world prompting tutorial. This course is designed for experienced developers who have already dipped their toes into the world of prompt engineering, particularly those who have completed our comprehensive **[Prompt engineering interactive tutorial](../prompt_engineering_interactive_tutorial/README.md)**. If you haven't gone through that tutorial yet, we strongly recommend you do so before continuing, as it provides an in-depth exploration of various prompting techniques with hands-on exercises.
Welcome to Anthropic's comprehensive real world prompting tutorial. This course is designed for experienced developers who have already dipped their toes into the world of prompt engineering.

Across five lessons, you will learn how to incorporate key prompting techniques into complex, real world prompts. We recommend that you start from the beginning with the [Prompting recap](./01_prompting_recap.ipynb) lesson, as each lesson builds on key concepts taught in previous ones.
@@ -9,4 +9,4 @@ Across five lessons, you will learn how to incorporate key prompting techniques
* [Medical prompt walkthrough](./02_medical_prompt.ipynb)
* [Prompt engineering process](./03_prompt_engineering.ipynb)
* [Call summarizing prompt walkthrough](./04_call_summarizer.ipynb)
* [Customer support bot prompt walkthrough](./05_customer_support_ai.ipynb)
* [Customer support bot prompt walkthrough](./05_customer_support_ai.ipynb)