{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Prompting with images"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Vision capabilities"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When providing images to Claude, we have to write an image content block. Here's an example:\n",
    "\n",
    "```py\n",
    "message = {\n",
    "    \"role\": \"user\",\n",
    "    \"content\": [\n",
    "        {\n",
    "            \"image\": {\n",
    "                \"format\": 'png',\n",
    "                \"source\": {\n",
    "                    \"bytes\": image\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "```\n",
    "\n",
    "Let's break down the important pieces of information that are required when providing Claude with an image.\n",
    "\n",
    "The `content` in our message is set to a list containing an `image` block, which is a dictionary with the following properties:\n",
    "\n",
    "* `format` - the image format. The supported formats are `png`, `jpeg`, `gif`, and `webp`.\n",
    "* `source` - a dictionary whose `bytes` key holds the actual image data itself"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Image only prompting\n",
    "\n",
    "Most often, we'll want to provide some text alongside images in our prompt, but it's perfectly acceptable to only provide an image. Let's try it! We've included a handful of images for this lesson in the `prompting_images` folder. Let's start by looking at one of these images using Python:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Image\n",
    "Image(filename='./prompting_images/uh_oh.png')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Wikimedia Commons, CC-BY-SA"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, let's work on providing this image to Claude. The first step is to get the raw binary image data that we send to the model. It boils down to the following steps: \n",
    "\n",
    "1. Open the file in \"read binary\" mode.\n",
    "2. Read the entire binary contents of the file as a bytes object.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Opens the image file in \"read binary\" mode\n",
    "with open(\"./prompting_images/uh_oh.png\", \"rb\") as image_file:\n",
    "    # Reads the contents of the image as a bytes object\n",
    "    binary_data = image_file.read()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can take a look at the resulting `binary_data` variable, but it's not going to make a lot of sense to us humans. Let's look at the first 100 bytes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "binary_data[:100]"
   ]
  },
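  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the first 8 bytes of any valid PNG file are the fixed PNG signature (`\\x89PNG\\r\\n\\x1a\\n`), so we can peek at them to confirm we really read image data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The first 8 bytes of a PNG file are always the PNG signature\n",
    "binary_data[:8]"
   ]
  },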
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have our image data in a bytes object, the next step is to properly format the messages list that we'll send to Claude:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "message = {\n",
    "    \"role\": \"user\",\n",
    "    \"content\": [\n",
    "        {\n",
    "            \"image\": {\n",
    "                \"format\": 'png',\n",
    "                \"source\": {\n",
    "                    \"bytes\": binary_data\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The final step is to send our messages list off to Claude and see what kind of response we get!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import boto3\n",
    "\n",
    "bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=\"us-west-2\")\n",
    "model_id = \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n",
    "\n",
    "messages = [{\n",
    "    \"role\": \"user\",\n",
    "    \"content\": [\n",
    "        {\n",
    "            \"image\": {\n",
    "                \"format\": 'png',\n",
    "                \"source\": {\n",
    "                    \"bytes\": binary_data\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}]\n",
    "\n",
    "# Send the message.\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Claude starts describing the image, because we didn't provide any other explicit instructions."
   ]
  },
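  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The raw response includes metadata like the stop reason and token usage. To read just Claude's reply, we can drill into the `output` message of the Converse response:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Extract just the text of Claude's reply from the Converse response\n",
    "print(response[\"output\"][\"message\"][\"content\"][0][\"text\"])"
   ]
  },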
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Image and text prompts\n",
    "\n",
    "Now let's try sending a prompt that includes both an image AND text. All we need to do is add a second block to the user's message. This block will be a simple text block."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\n",
    "                \"image\": {\n",
    "                    \"format\": 'png',\n",
    "                    \"source\": {\n",
    "                        \"bytes\": binary_data\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "            {\n",
    "                \"text\": \"What could this person have done to prevent this?\"\n",
    "            },\n",
    "        ]\n",
    "    }\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's send a request to Claude and see what happens:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Multiple images\n",
    "\n",
    "We can provide multiple images to Claude by adding multiple image blocks to the `content` of a user message. Here's an example that includes multiple images:\n",
    "\n",
    "```py\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\n",
    "                \"image\": {\n",
    "                    \"format\": image1_media_type,\n",
    "                    \"source\": {\n",
    "                        \"bytes\": image1_data\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "            {\n",
    "                \"image\": {\n",
    "                    \"format\": image2_media_type,\n",
    "                    \"source\": {\n",
    "                        \"bytes\": image2_data\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "            {\n",
    "                \"image\": {\n",
    "                    \"format\": image3_media_type,\n",
    "                    \"source\": {\n",
    "                        \"bytes\": image3_data\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "            {\"text\": \"How are these images different?\"},\n",
    "        ],\n",
    "    }\n",
    "]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Building an image helper\n",
    "\n",
    "As you work with images, especially in dynamic scripts, it can get annoying to create the image content blocks by hand. Let's write a little helper function that will generate appropriately formatted image blocks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mimetypes\n",
    "\n",
    "def create_image_message(image_path):\n",
    "    # Open the image file in \"read binary\" mode\n",
    "    with open(image_path, \"rb\") as image_file:\n",
    "        # Read the contents of the image as a bytes object\n",
    "        binary_data = image_file.read()\n",
    "\n",
    "    # Get the MIME type of the image based on its file extension\n",
    "    mime_type, _ = mimetypes.guess_type(image_path)\n",
    "\n",
    "    # The Converse API expects just the subtype, e.g. \"png\" from \"image/png\"\n",
    "    sub_type = mime_type.split(\"/\")[-1]\n",
    "\n",
    "    # Create the image block\n",
    "    image_block = {\n",
    "        \"image\": {\n",
    "            \"format\": sub_type,\n",
    "            \"source\": {\n",
    "                \"bytes\": binary_data\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "\n",
    "    return image_block"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The above function takes an image path and returns a dictionary that is ready to be included in a message to Claude. It even does some logic to automatically determine the MIME type of the image.\n",
    "\n",
    "Let's try working with a new image:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Image(\"./prompting_images/animal1.png\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Using our new image block helper function, let's send a request to Claude:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            create_image_message(\"./prompting_images/animal1.png\")\n",
    "        ]\n",
    "    }\n",
    "]\n",
    "\n",
    "bedrock_client = boto3.client(service_name='bedrock-runtime', region_name=\"us-west-2\")\n",
    "model_id = \"anthropic.claude-3-5-sonnet-20241022-v2:0\"\n",
    "\n",
    "# Send the message.\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try an example that combines text and image in the prompt:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            create_image_message(\"./prompting_images/animal1.png\"),\n",
    "            {\"text\": \"Where might I find this animal in the world?\"}\n",
    "        ]\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's try providing multiple images to Claude. We have 3 different animal images:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display\n",
    "display(Image(\"./prompting_images/animal1.png\", width=300))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/animal2.png\", width=300))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/animal3.png\", width=300))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try passing all 3 images to Claude in a single message along with a text prompt asking, \"What are these animals?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            create_image_message('./prompting_images/animal1.png'),\n",
    "            create_image_message('./prompting_images/animal2.png'),\n",
    "            create_image_message('./prompting_images/animal3.png'),\n",
    "            {\"text\": \"What are these animals?\"}\n",
    "        ]\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This works great! However, it's important to note that if we try this with a slightly less-capable Claude model like Claude 3 Haiku, we may get worse results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Working with non-local images (images from URL)\n",
    "\n",
    "Sometimes you may need to provide Claude with images that you do not have locally. There are many ways of doing this, but they all boil down to the same recipe: \n",
    "\n",
    "* Get the image data using some sort of request library\n",
    "* Determine the image format (e.g., from the URL's file extension)\n",
    "* Put the raw bytes and the format into an image content block\n",
    "\n",
    "We'll use `httpx` to request the image data from a URL. The URL in the example below is an image of a church with the Northern Lights in the sky above it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import httpx\n",
    "\n",
    "image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Church_of_light.jpg/1599px-Church_of_light.jpg\"\n",
    "image_media_type = \"jpeg\"\n",
    "image_data = httpx.get(image_url).content\n",
    "\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\n",
    "                \"image\": {\n",
    "                    \"format\": image_media_type,\n",
    "                    \"source\": {\n",
    "                        \"bytes\": image_data\n",
    "                    }\n",
    "                }\n",
    "            },\n",
    "        ],\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Just as we did earlier, we can define a helper function to generate image blocks from URLs. Below is a very lightweight implementation of a function that expects a URL and does the following: \n",
    "\n",
    "* uses `httpx` to request the image data\n",
    "* determines the image format using very simple string manipulation. It takes the content after the last '.' character, which is not a bulletproof solution\n",
    "* returns a properly formatted image block, ready to go into a Claude prompt!\n",
    "\n",
    "If we were to call `get_image_dict_from_url(\"https://somewebsite.com/cat.png\")` it would return the following dictionary: \n",
    "\n",
    "```py\n",
    "{\n",
    "    \"image\": {\n",
    "        \"format\": \"png\",\n",
    "        \"source\": {\n",
    "            \"bytes\": <actual image data>\n",
    "        }\n",
    "    }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_image_dict_from_url(image_url):\n",
    "    # Send a GET request to the image URL and retrieve the content\n",
    "    response = httpx.get(image_url)\n",
    "    image_content = response.content\n",
    "\n",
    "    # Determine the media type of the image based on the URL extension\n",
    "    # This is not a foolproof approach, but it generally works\n",
    "    image_extension = image_url.split(\".\")[-1].lower()\n",
    "    if image_extension == \"jpg\" or image_extension == \"jpeg\":\n",
    "        image_media_type = \"jpeg\"\n",
    "    elif image_extension == \"png\":\n",
    "        image_media_type = \"png\"\n",
    "    elif image_extension == \"gif\":\n",
    "        image_media_type = \"gif\"\n",
    "    elif image_extension == \"webp\":\n",
    "        image_media_type = \"webp\"\n",
    "    else:\n",
    "        raise ValueError(\"Unsupported image format\")\n",
    "\n",
    "    # Create the dictionary in the proper image block shape:\n",
    "    image_dict = {\n",
    "        \"image\": {\n",
    "            \"format\": image_media_type,\n",
    "            \"source\": {\n",
    "                \"bytes\": image_content\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "    return image_dict\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's try it! In the following example, we are using two image URLs: \n",
    "\n",
    "* A PNG of a firetruck\n",
    "* A JPG of an emergency response helicopter"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll pass both to Claude, alongside a text prompt asking, \"What do these images have in common?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url1 = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/Rincon_fire_truck.png/1600px-Rincon_fire_truck.png\"\n",
    "url2 = \"https://upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Ornge_C-GYNP.jpg/1600px-Ornge_C-GYNP.jpg\"\n",
    "\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\"text\": \"Image 1:\"},\n",
    "            get_image_dict_from_url(url1),\n",
    "            {\"text\": \"Image 2:\"},\n",
    "            get_image_dict_from_url(url2),\n",
    "            {\"text\": \"What do these images have in common?\"}\n",
    "        ],\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Claude successfully identifies that both images are of emergency response vehicles! More importantly, we've now seen how to provide Claude with images downloaded from a URL."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Vision prompting tips"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Be specific \n",
    "Just as with plain text prompts, we can get better results from Claude by writing specific and detailed multimodal prompts. Let's take a look at an example.\n",
    "\n",
    "Here's an image of a group of friends. There are 8 people in the image, but 2 of them are cut off by the bounds of the image."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Image\n",
    "Image(filename='./prompting_images/people.png')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we simply ask Claude, \"how many people are in this image?\" we'll likely get a response saying there are 7 people:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            create_image_message(\"./prompting_images/people.png\"),\n",
    "            {\"text\": \"How many people are in this image?\"}\n",
    "        ],\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we instead employ some basic prompt engineering techniques like telling Claude to think step by step, that it's an expert in counting people, and that it should pay attention to \"partial\" people that may be cut off in the image, we will get better results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            create_image_message(\"./prompting_images/people.png\"),\n",
    "            {\"text\": \"You have perfect vision and pay great attention to detail which makes you an expert at counting objects in images. How many people are in this picture? Some of the people may be partially obscured or cut off in the image or may only have an arm visible. Please count people even if you can only see a single body part. Before providing the answer in <answer> tags, think step by step in <thinking> tags and analyze every part of the image.\"}\n",
    "        ],\n",
    "    }\n",
    "]\n",
    "\n",
    "response = bedrock_client.converse(\n",
    "    modelId=model_id,\n",
    "    messages=messages,\n",
    ")\n",
    "\n",
    "response"
   ]
  },
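  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since we asked Claude to put the final count inside `<answer>` tags, we can pull it out of the reply programmatically. Here's a minimal sketch using Python's `re` module (it assumes the model actually followed the tag format, which is worth verifying in real code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# Grab the assistant's text from the Converse response\n",
    "reply_text = response[\"output\"][\"message\"][\"content\"][0][\"text\"]\n",
    "\n",
    "# Extract whatever Claude placed between <answer> tags, if anything\n",
    "match = re.search(r\"<answer>(.*?)</answer>\", reply_text, re.DOTALL)\n",
    "if match:\n",
    "    print(match.group(1).strip())\n",
    "else:\n",
    "    print(\"No <answer> tags found in the response\")"
   ]
  },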
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using examples\n",
    "\n",
    "Including examples in your prompts can help improve Claude's response quality in both text and image input prompts. \n",
    "\n",
    "To demonstrate this, we're going to use a series of images from a slideshow presentation. Our goal is to get Claude to generate a JSON description of a slide's content. Take a look at this first image:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display\n",
    "display(Image(\"./prompting_images/slide1.png\", width=800))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our goal is to get Claude to generate a JSON-formatted response that includes the slide's background color, title, body text, and description of the image. The JSON for the above image might look like this: \n",
    "\n",
    "```json\n",
    "{\n",
    "    \"background\": \"#F2E0BD\",\n",
    "    \"title\": \"Haiku\",\n",
    "    \"body\": \"Our most powerful model, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding\",\n",
    "    \"image\": \"The image shows a simple line drawing of a human head in profile view, facing to the right. The head is depicted using thick black lines against a pale yellow background. Inside the outline of the head, there appears to be a white, spoked wheel or starburst pattern, suggesting a visualization of mental activity or thought processes. The overall style is minimalist and symbolic rather than realistic.\"\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a great use-case for including examples in our prompt to coach Claude on exactly the type of response we want it to generate. For reference, here are two other slide images:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/slide2.png\", width=800))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/slide3.png\", width=800))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To do this, we'll take advantage of the conversation message format to provide Claude with an example of a previous input and corresponding output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_slide_json(image_path):\n",
    "    slide1_response = \"\"\"{\n",
    "    \"background\": \"#F2E0BD\",\n",
    "    \"title\": \"Haiku\",\n",
    "    \"body\": \"Our most powerful model, delivering state-of-the-art performance on highly complex tasks and demonstrating fluency and human-like understanding\",\n",
    "    \"image\": \"The image shows a simple line drawing of a human head in profile view, facing to the right. The head is depicted using thick black lines against a pale yellow background. Inside the outline of the head, there appears to be a white, spoked wheel or starburst pattern, suggesting a visualization of mental activity or thought processes. The overall style is minimalist and symbolic rather than realistic.\"\n",
    "    }\"\"\"\n",
    "\n",
    "    messages = [\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                create_image_message(\"./prompting_images/slide1.png\"),\n",
    "                {\"text\": \"Generate a JSON representation of this slide. It should include the background color, title, body text, and image description\"}\n",
    "            ],\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"assistant\",\n",
    "            # Converse expects assistant content as a list of blocks, not a bare string\n",
    "            \"content\": [{\"text\": slide1_response}]\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                create_image_message(image_path),\n",
    "                {\"text\": \"Generate a JSON representation of this slide. It should include the background color, title, body text, and image description\"}\n",
    "            ],\n",
    "        },\n",
    "    ]\n",
    "\n",
    "    response = bedrock_client.converse(\n",
    "        modelId=model_id,\n",
    "        messages=messages,\n",
    "    )\n",
    "\n",
    "    return response\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/slide2.png\", width=800))\n",
    "generate_slide_json(\"./prompting_images/slide2.png\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Image(\"./prompting_images/slide3.png\", width=800))\n",
    "generate_slide_json(\"./prompting_images/slide3.png\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## Exercise\n",
    "\n",
    "For this exercise, we'd like you to use Claude to transcribe and summarize an Anthropic research paper. In the `images` folder, you'll find a `research_paper` folder that contains 5 screenshots of a research paper. To help you out, we've provided all 5 image paths in a list:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "research_paper_pages = [\n",
    "    \"./images/research_paper/page1.png\",\n",
    "    \"./images/research_paper/page2.png\",\n",
    "    \"./images/research_paper/page3.png\",\n",
    "    \"./images/research_paper/page4.png\",\n",
    "    \"./images/research_paper/page5.png\"\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's take a look at the first image:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Image(research_paper_pages[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Your task\n",
    "\n",
    "Your task is to use Claude to do the following: \n",
    "* Transcribe the text in each of the 5 research paper images\n",
    "* Combine the text from each image into one large transcription\n",
    "* Provide the entire transcription to Claude and ask for a non-technical summary of the entire paper. \n",
    "\n",
    "An example output might look something like this: \n",
    "\n",
    ">This paper explores a new type of attack on large language models (LLMs) like ChatGPT, called \"Many-shot Jailbreaking\" (MSJ). As LLMs have recently gained the ability to process much longer inputs, this attack takes advantage of that by showing the AI hundreds of examples of harmful or undesirable behavior. The researchers found that this method becomes increasingly effective as more examples are given, following a predictable pattern.\n",
    "\n",
    ">The study tested MSJ on several popular AI models and found it could make them produce harmful content they were originally designed to avoid. This includes things like violent or sexual content, deception, and discrimination. The researchers also discovered that larger AI models tend to be more susceptible to this type of attack, which is concerning as AI technology continues to advance.\n",
    "\n",
    ">The paper also looked at potential ways to defend against MSJ attacks. They found that current methods of training AI to be safe and ethical (like supervised learning and reinforcement learning) can help somewhat, but don't fully solve the problem. The researchers suggest that new approaches may be needed to make AI models truly resistant to these kinds of attacks. They emphasize the importance of continued research in this area to ensure AI systems remain safe and reliable as they become more powerful and widely used.\n",
    "\n",
    "To get the best results, we advise asking Claude to transcribe each page in a separate request rather than providing all 5 images and asking for a single transcription of the entire paper."
   ]
  },
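  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you'd like a starting point, here's one possible scaffold (a sketch, not the only valid solution). It reuses our `create_image_message` helper, transcribes each page in its own request, and then asks for a non-technical summary of the combined text:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ask_claude(content_blocks):\n",
    "    # Small wrapper: send one user message and return just the reply text\n",
    "    response = bedrock_client.converse(\n",
    "        modelId=model_id,\n",
    "        messages=[{\"role\": \"user\", \"content\": content_blocks}],\n",
    "    )\n",
    "    return response[\"output\"][\"message\"][\"content\"][0][\"text\"]\n",
    "\n",
    "# Transcribe each page in a separate request\n",
    "page_transcriptions = []\n",
    "for page_path in research_paper_pages:\n",
    "    transcription = ask_claude([\n",
    "        create_image_message(page_path),\n",
    "        {\"text\": \"Transcribe all of the text in this research paper page.\"}\n",
    "    ])\n",
    "    page_transcriptions.append(transcription)\n",
    "\n",
    "# Combine the per-page transcriptions and ask for a summary\n",
    "full_transcription = \"\\n\\n\".join(page_transcriptions)\n",
    "summary = ask_claude([\n",
    "    {\"text\": \"Here is a transcription of a research paper:\\n\\n\" + full_transcription +\n",
    "             \"\\n\\nPlease provide a non-technical summary of the entire paper.\"}\n",
    "])\n",
    "print(summary)"
   ]
  },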
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "***"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py311",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}