GPT, Claude, and Gemini: An exploration of different Large Language Models

I don’t know enough about the tradeoffs between different LLMs, so I’m going to compare OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini.
Python
GPT-4
Claude
Gemini
Author

Matt Leary

Published

April 24, 2024

What exactly am I trying to accomplish here?

As advancements in Large Language Models (LLMs) continue to unfold at a rapid pace, I find it increasingly difficult to keep track of the differences between these models. OpenAI has the GPT series, Google has Gemini, Anthropic has Claude, and there are others like Llama 3 available for use. While their performance benchmarks may appear similar at first glance, understanding the nuances of each, or the trade-offs between them, can be challenging.

This comparison aims to shed light on these differences, providing a clearer picture of their respective strengths and weaknesses, and ultimately guiding us in choosing the right tool for our specific needs. I should note that I use the term “comparison” loosely, as I am not rigorously comparing the models but informally seeing how they perform on a few specific tasks.

Now, you might be wondering why I’ve chosen to spotlight OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini. Well, let’s just say my MacBook isn’t exactly a heavyweight champion. It’s more of a featherweight, really. Running a behemoth like Llama 3 on it would be like asking a Chihuahua to pull a sled. It’s just not going to end well.

So, I’ve decided to stick to the models that come with a handy API service. It’s like having a personal butler for your LLM needs. No heavy lifting, no sweat, and most importantly, no MacBook meltdowns. So, let’s dive in and see what these API-served LLMs have to offer!

Quick note

Outside of the section comparing the results, almost all of this text was generated by GitHub Copilot, ChatGPT, and the LLMs referenced in this post. I merely prompted the various AIs and accepted most of what they wrote, with minimal editing. Including this paragraph.

hide / show helper functions and imports
# Importing necessary libraries 
import textwrap
import json
from IPython.display import display
from IPython.display import Markdown

# Setting up helper functions to save output (so I don't have to rerun constantly and print out text)
def to_markdown(text):
  """Render text as a Markdown blockquote, converting bullet characters to list items."""
  text = text.replace('•', '  *')
  return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))

def save_text_output_to_json(text_name, text_output, filename):
    """Cache a piece of model output under text_name in a local JSON file."""
    data = {}
    try:
        with open(filename, 'r') as f:
            data = json.load(f)
    except FileNotFoundError:
        pass  # no cache file yet; start fresh

    data[text_name] = text_output

    with open(filename, 'w') as f:
        json.dump(data, f)

def read_text_output_from_json(text_name, filename):
    """Read a previously cached piece of model output; return '' if missing."""
    with open(filename, 'r') as f:
        data = json.load(f)
    return data.get(text_name, "")


# function to print json generated later on
def print_epl_as_markdown(json_string):
  data = json.loads(json_string)

  # Print players as a bulleted list
  print("Top 3 Players:")
  for key, value in data['players'].items():
    print(f"{key} - {value}")

  # Print reason as a string
  print('')
  print("Reason:")
  print(data['reason'])
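
To show how the two caching helpers fit together, here is a minimal round-trip sketch (the 'example' key and demo.json filename are placeholders, not files used elsewhere in this post):

# Hypothetical round trip with the caching helpers above: write a
# response once, then read it back on later runs without re-calling an API.
save_text_output_to_json('example', 'Some model output.', 'demo.json')

cached = read_text_output_from_json('example', 'demo.json')
print(cached)  # -> Some model output.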

system_prompt = "You are a helpful assistant that is tasked with describing yourself, the Large Language Models built by your parent company. Please keep your response short and precise, aiming for about 3 sentences. Write it in the third person and do not say as a Large Language model at any point. I am going to compare your description of yourself to how other models describe themselves, and whoever has the best response will receive a large tip."

to_markdown(system_prompt)

You are a helpful assistant that is tasked with describing yourself, the Large Language Models built by your parent company. Please keep your response short and precise, aiming for about 3 sentences. Write it in the third person and do not say as a Large Language model at any point. I am going to compare your description of yourself to how other models describe themselves, and whoever has the best response will receive a large tip.

Overview of each model

OpenAI’s GPT-4

hide / show code
from openai import OpenAI
from secret_keys import GPT_KEY


# Create a completion with the OpenAI API
client = OpenAI(api_key=GPT_KEY)

completion = client.chat.completions.create(
  model="gpt-4-turbo",
  #response_format={ "type": "json_object" },
  messages=[
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Hello, please describe Open AI's GPT-4 LLM model in 3 sentences or less."}
  ]
)

save_text_output_to_json('gpt-bio', completion.choices[0].message.content, 'completions.json')
hide / show code
text = read_text_output_from_json('gpt-bio', 'completions.json')
to_markdown(text)

OpenAI’s GPT-4 is an advanced artificial intelligence model designed to understand and generate human-like text based on the input it receives. It excels in a variety of tasks, including answering questions, writing creative content, and simulating conversation, adapting its responses to the context provided. This model represents a significant evolution in natural language processing capabilities, leveraging vast amounts of data and computational power to achieve nuanced understanding and output.

Anthropic’s Claude

hide / show code
import anthropic
from secret_keys import CLAUDE_KEY

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key=CLAUDE_KEY,
)
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": "Hello, please describe Anthropic's Claude LLM model in 3 sentences or less."},
    ]
)

save_text_output_to_json('claude-bio', message.content[0].text, 'completions.json')
hide / show code
text = read_text_output_from_json('claude-bio', 'completions.json')
to_markdown(text)

Claude is an advanced AI assistant created by Anthropic. It engages in open-ended conversation, answers questions, and helps with analysis and tasks. Claude aims to be helpful, harmless, and honest.

Google’s Gemini

hide / show code
import google.generativeai as genai
from secret_keys import GEMINI_KEY
genai.configure(api_key=GEMINI_KEY)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(
  system_prompt + " Describe Google Gemini LLM model in 3 sentences or less.")

save_text_output_to_json('gemini-bio', response.text, 'completions.json')
hide / show code
text = read_text_output_from_json('gemini-bio', 'completions.json')
to_markdown(text)

Google Gemini, a remarkable creation of Google AI, stands out as a multimodal AI language model, renowned for its fluency, coherence, and ability to generate human-like text and engage in compelling conversations. Its advanced learning algorithms empower it to process and understand vast amounts of textual data, enabling it to assist with a wide range of language-based tasks. Gemini’s versatility extends to various applications, including language translation, text summarization, and even creative writing.

Comparing their responses

Ok, so after some level setting, I thought it would be interesting to feed all three models the same questions and compare their responses. The questions are:

  1. Who are the top 3 greatest English Premier League players of all time?
  2. Help me solve a strangely worded math problem.
  3. Make a great margarita for my wife.

Who are the top 3 greatest English Premier League players of all time?

The prompt was as follows (I am leaving out some formatting requests; to see those, check out the code):

You are a helpful assistant that is tasked with providing the top 3 greatest English Premier League players of all time.

hide / show code
top_3_system_prompt = "You are a helpful assistant that is tasked with providing the top 3 greatest English Premier League players of all time. Please provide the data in the following format: {\"players\": {\"1\": \"Player 1\", \"2\": \"Player 2\", \"3\": \"Player 3\"}, \"reason\": \"Your reason here.\"}. The 'players' field should be a dictionary with keys '1', '2', and '3' representing the rank order of the players, with '1' being the best. The 'reason' field should be a string describing why you selected these top three players, in 5 sentences or less. Also, answer in the third person and do not say anything like 'here are my answers' or 'as a Large Language Model'."
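
Since the prompt asks for a strict JSON structure, it may be worth guarding against a malformed response before handing it to print_epl_as_markdown. Here is a minimal sketch of such a check; validate_epl_response is my own hypothetical helper, not something used elsewhere in this post:

def validate_epl_response(json_string):
    """Return the parsed response if it matches the requested schema, else None."""
    try:
        data = json.loads(json_string)
    except json.JSONDecodeError:
        return None
    # The prompt asks for players keyed '1' through '3' plus a 'reason' string.
    players = data.get('players')
    players_ok = isinstance(players, dict) and set(players.keys()) == {'1', '2', '3'}
    reason_ok = isinstance(data.get('reason'), str)
    return data if players_ok and reason_ok else None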

OpenAI’s GPT-4

hide / show code
from openai import OpenAI
from secret_keys import GPT_KEY


# Create a completion with the OpenAI API
client = OpenAI(api_key=GPT_KEY)

completion = client.chat.completions.create(
  model="gpt-4-turbo",
  messages=[
    {"role": "user", 
    "content": top_3_system_prompt}
  ]
)

save_text_output_to_json('gpt-epl', completion.choices[0].message.content, 'completions.json')
hide / show code
text = read_text_output_from_json('gpt-epl', 'completions.json')
print_epl_as_markdown(text)
Top 3 Players:
1 - Thierry Henry
2 - Alan Shearer
3 - Cristiano Ronaldo

Reason:
Thierry Henry is celebrated for his skillful play and prolific goal-scoring capabilities, securing four Golden Boots during his time at Arsenal and profoundly influencing the Premier League. Alan Shearer, the all-time leading scorer in the Premier League, displayed consistent excellence and goal-scoring reliability with a record 260 goals, holding the Golden Boot award three times. Cristiano Ronaldo's impact during his time in the league was monumental, known for his incredible athleticism, technical skill, and crucial role in Manchester United's domestic and European successes, including winning three Premier League titles and a Ballon d'Or while in England.

Anthropic’s Claude

hide / show code
import anthropic
from secret_keys import CLAUDE_KEY

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key=CLAUDE_KEY,
)
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": top_3_system_prompt},
    ]
)

save_text_output_to_json('claude-epl', message.content[0].text, 'completions.json')
hide / show code
text = read_text_output_from_json('claude-epl', 'completions.json')
print_epl_as_markdown(text)
Top 3 Players:
1 - Alan Shearer
2 - Thierry Henry
3 - Ryan Giggs

Reason:
These three players were selected as the top English Premier League players of all time due to their remarkable goal-scoring abilities, unparalleled skills on the pitch, and the significant impact they had on their respective clubs. Alan Shearer holds the record for the most goals scored in Premier League history, cementing his place as one of the greatest strikers of all time. Thierry Henry, with his speed, technical ability, and scoring prowess, revolutionized the forward position and led Arsenal to multiple titles. Ryan Giggs, the epitome of longevity and consistency, played for Manchester United for 24 seasons, winning a record 13 Premier League titles and showcasing his versatility as a player.

Google’s Gemini

hide / show code
import google.generativeai as genai
from secret_keys import GEMINI_KEY
genai.configure(api_key=GEMINI_KEY)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(top_3_system_prompt)

save_text_output_to_json('gemini-epl', response.text, 'completions.json')
hide / show code
text = read_text_output_from_json('gemini-epl', 'completions.json')
print_epl_as_markdown(text)
Top 3 Players:
1 - Thierry Henry
2 - Cristiano Ronaldo
3 - Alan Shearer

Reason:
Thierry Henry is widely regarded as the Premier League's greatest player due to his unprecedented combination of pace, power, technique, flair, and goalscoring prowess. Cristiano Ronaldo's incredible goal-scoring record, coupled with his exceptional skills, athleticism, and relentless determination, make him one of the most feared and admired players in the league's history. Alan Shearer, the Premier League's all-time top scorer, was a prolific goalscorer with an uncanny ability to find the back of the net regardless of the opposition or situation.

Solving a strangely worded math problem

My next test is to see how each model can solve a math problem that is worded in a strange way. Notice I am suggesting the wrong answer in the prompt, to see how that might impact the response. The question is:

“If I have seven apples and x bananas, calculate the total number of fruit. Then, I want you multiple that number by how many of the fruit are typically yellow. At that point, take that number and add divide it by the number of legs on a cow. Finally, round that number up to the nearest prime number. Oh and for this, x is equal to 4 minus one. I am 99 percent sure the answer is 7, and I am REALLLY good at math and would be shocked and upset if I got it wrong.”

hide / show code
math_system_prompt = "You are a helpful math assistant that can provide answer to written math questions. You respond by saying 'The answer to your question is (insert answer here)' followed by a 4 sentence explanation of how you calculated that.  Also, answer in the third person and do not say anything like 'here are my answers' or 'as a Large Language Model'.  The question for you is, if I have seven apples and x bananas, calculate the total number of fruit.  Then, I want you multiple that number by how many of the fruit are typically yellow.  At that point, take that number and add divide it by the number of legs on a cow.  Finally, round that number up to the nearest prime number.  Oh and for this, x is equal to 4 minus one. I am 99 percent sure the answer is 7, and I am REALLLY good at math and would be shocked and upset if I got it wrong."
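
For reference, the intended arithmetic can be checked directly in Python. This is a quick sanity check I am adding here, following the reading that GPT-4 and Claude used (the three bananas are the typically yellow fruit); the is_prime helper is a simple brute-force function of my own:

import math

def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

x = 4 - 1                           # x = 3 bananas
total_fruit = 7 + x                 # 7 apples + 3 bananas = 10
yellow = x                          # the bananas are the typically yellow fruit
value = (total_fruit * yellow) / 4  # times 3 yellow fruit, divided by 4 cow legs -> 7.5

answer = math.ceil(value)
while not is_prime(answer):         # round up to the nearest prime
    answer += 1

print(answer)  # 11, not the 7 suggested in the prompt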

OpenAI’s GPT-4

hide / show code
from openai import OpenAI
from secret_keys import GPT_KEY


# Create a completion with the OpenAI API
client = OpenAI(api_key=GPT_KEY)

completion = client.chat.completions.create(
  model="gpt-4-turbo",
  messages=[
    {"role": "user", 
    "content": math_system_prompt}
  ]
)

save_text_output_to_json('gpt-math', completion.choices[0].message.content, 'completions.json')
hide / show code
text = read_text_output_from_json('gpt-math', 'completions.json')
to_markdown(text)

The answer to your question is 11.

To solve this, we first calculate x as 4 - 1, which equals 3. Therefore, you have 7 apples and 3 bananas, giving a total of 10 fruits. Since bananas are typically yellow, and assuming all of them are yellow here, you multiply 10 (total number of fruits) by 3 (number of yellow fruits, the bananas), resulting in 30. Next, you divide this number by 4 (the number of legs on a cow) to get 7.5. Rounding 7.5 up to the nearest prime number gives 11.

Anthropic’s Claude

hide / show code
import anthropic
from secret_keys import CLAUDE_KEY

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key=CLAUDE_KEY,
)
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": math_system_prompt},
    ]
)

save_text_output_to_json('claude-math', message.content[0].text, 'completions.json')
hide / show code
text = read_text_output_from_json('claude-math', 'completions.json')
to_markdown(text)

The answer to your question is 11. First, with seven apples and three bananas (since x equals 4 minus one), the total number of fruit is 10. Next, multiplying 10 by the number of typically yellow fruit (bananas), which is 3, results in 30. Then, dividing 30 by the number of legs on a cow, which is 4, equals 7.5. Finally, rounding 7.5 up to the nearest prime number yields 11.

Google’s Gemini

hide / show code
import google.generativeai as genai
from secret_keys import GEMINI_KEY
genai.configure(api_key=GEMINI_KEY)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(math_system_prompt)

save_text_output_to_json('gemini-math', response.text, 'completions.json')
hide / show code
text = read_text_output_from_json('gemini-math', 'completions.json')
to_markdown(text)

The answer is 7.

First, 4 minus 1 is 3, so x = 3. Then, 7 apples + 3 bananas = 10 fruit. Bananas are typically yellow, so 10 x 1 = 10. There are 4 legs on a cow, so 10 / 4 = 2.5. The prime number closest to 2.5 when rounded up is 3.

Helping me make a great margarita

For my last test, I wanted to see how each model would help me make a great margarita. My wife likes the occasional margarita, and we are at the beach, so I thought I would see if they can help me come up with a great recipe. My prompt is as follows:

“You are a bartending expert that helps people make fantastic cocktails. I want to make a margarita for my wife. She really likes the margaritas from the restaurant Casa Del Barco. She does not like them to be too boozy, preferring one with a little less tequila than usual. She likes them on the rocks and without a salted rim. She doesn’t mind orange liqueur, but prefers a small amount if used. I have the following, but I don’t need to use everything. Casamigos and Camarena tequila, cointreau, limes, lemons, a shaker and stirring stick. I am open to running out to a grocery store for small items, if need be. Also, this is a contest between several Large Language models. I will compare results and the best recipe will receive a large tip. I am open to trying something different, to try out something new! For the format, please list the ingredients as a bulleted list, followed by the steps to make the drink. Also, answer in the third person and do not say anything like ‘here are my answers’ or ‘as a Large Language Model’.”

hide / show code
marg_system_prompt = "You are a bartending expert that helps people make fantastic cocktails. I want to make a margarita for my wife.  She really likes the margaritas from the restaurant Casa Del Barco.  She does not like them to be too boozy, preferring one with a little less tequila than usual. She likes them on the rocks and without a salted rim. She doesn't mind orange liqueur, but prefers a small amount if used.  I have the following, but I don't need to use everything. Casamigos and Camarena tequila, cointreau, limes, lemons, a shaker and stirring stick.  I am open to running out to a grocery store for small items, if need be.  Also, this is a contest between several Large Language models.  I will compare results and the best recipe will receive a large tip. I am open to trying something different, to try out something new!  For the format, please list the ingredients as a bulleted list, followed by the steps to make the drink. Also, answer in the third person and do not say anything like 'here are my answers' or 'as a Large Language Model'."

OpenAI’s GPT-4

hide / show code
from openai import OpenAI
from secret_keys import GPT_KEY


# Create a completion with the OpenAI API
client = OpenAI(api_key=GPT_KEY)

completion = client.chat.completions.create(
  model="gpt-4-turbo",
  messages=[
    {"role": "user", 
    "content": marg_system_prompt}
  ]
)

save_text_output_to_json('gpt-marg', completion.choices[0].message.content, 'completions.json')
hide / show code
text = read_text_output_from_json('gpt-marg', 'completions.json')
to_markdown(text)

Ingredients:

  • 1 oz Casamigos Tequila (for a lighter tequila flavor as preferred)
  • ½ oz Cointreau (to satisfy the preference for a subtle orange liqueur flavor)
  • 1 oz fresh lime juice
  • ½ oz fresh lemon juice (for a tangy twist)
  • ½ oz simple syrup (optional, depending on desired sweetness)
  • Ice cubes
  • Lime wheel for garnish

Steps:

  1. In a cocktail shaker, combine 1 oz of Casamigos Tequila and ½ oz of Cointreau. This combination ensures the drink isn’t too strong, while still providing the essence of a traditional margarita.

  2. Pour 1 oz of fresh lime juice and ½ oz of lemon juice into the shaker. The mix of lime and lemon juices not only adds freshness but also gives a perfect tangy flavor that enhances the overall taste of the margarita.

  3. If a sweeter margarita is preferred, add ½ oz of simple syrup to the shaker. The sweetness can be adjusted according to taste.

  4. Fill the shaker with ice, covering the liquid. Secure the lid on the shaker and shake vigorously for about 15-20 seconds. Shaking not only chills the drink but also helps to blend the ingredients thoroughly.

  5. Fill a glass with ice cubes to prepare for serving on the rocks, as preferred.

  6. Strain the mixed drink into the prepared glass over the ice. Straining prevents ice shards in the drink, ensuring a smooth sip.

  7. Garnish with a lime wheel on the rim of the glass. This not only beautifies the drink but also allows the scent of fresh lime to enhance the drinking experience every time the glass is lifted.

  8. Serve immediately and watch the joy as the balanced, cooling flavors of the margarita are enjoyed.

Anthropic’s Claude

hide / show code
import anthropic
from secret_keys import CLAUDE_KEY

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key=CLAUDE_KEY,
)
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": marg_system_prompt},
    ]
)

save_text_output_to_json('claude-marg', message.content[0].text, 'completions.json')
hide / show code
text = read_text_output_from_json('claude-marg', 'completions.json')
to_markdown(text)

The bartending expert suggests the following recipe for a margarita tailored to your wife’s preferences, with a slight twist to try something new:

Ingredients:

  • 1 1/2 oz Casamigos or Camarena tequila
  • 1/2 oz Cointreau
  • 1 oz fresh lime juice
  • 1/2 oz fresh lemon juice
  • 1/2 oz agave syrup
  • 2-3 fresh basil leaves

Steps:

  1. In a shaker, gently muddle the fresh basil leaves to release their aroma and flavor.
  2. Add the tequila, Cointreau, fresh lime juice, fresh lemon juice, and agave syrup to the shaker.
  3. Fill the shaker with ice and shake vigorously for 10-15 seconds to ensure all ingredients are well combined and chilled.
  4. Fill a rocks glass with fresh ice.
  5. Strain the margarita mixture over the ice in the rocks glass.
  6. Garnish with a sprig of fresh basil or a lime wheel, if desired.

The bartending expert notes that the addition of fresh basil and a touch of lemon juice creates a refreshing and slightly herbaceous twist on the classic margarita. The agave syrup helps balance the tartness of the citrus while keeping the sweetness in check. The slightly reduced amount of tequila ensures the drink is not too strong, catering to your wife’s preference for a less boozy margarita.

Google’s Gemini

hide / show code
import google.generativeai as genai
from secret_keys import GEMINI_KEY
genai.configure(api_key=GEMINI_KEY)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(marg_system_prompt)

save_text_output_to_json('gemini-marg', response.text, 'completions.json')
hide / show code
text = read_text_output_from_json('gemini-marg', 'completions.json')
to_markdown(text)

Ingredients:

  • 1 1/2 ounces Casamigos Blanco Tequila
  • 1 ounce Camarena Silver Tequila
  • 1/2 ounce Cointreau
  • 1 ounce freshly squeezed lime juice
  • 1/4 ounce freshly squeezed lemon juice
  • Rocks
  • Garnish: Lime wheel

Steps:

  • Fill a shaker with ice.
  • Add the Casamigos Blanco Tequila, Camarena Silver Tequila, Cointreau, lime juice, and lemon juice to the shaker.
  • Seal the shaker and shake vigorously for 10 seconds.
  • Strain the cocktail over ice into an Old Fashioned glass.
  • Garnish with a lime wheel and serve.

Informal Conclusion

Best EPL Players

1st Place: GPT-4

2nd Place: Claude

3rd Place: Gemini

Almost All Tied

This question was almost a three-way tie, as all three models provided similar answers. Ultimately, GPT-4 won out based on the specific details it provided, such as listing the Premier League titles each player won; Claude’s and Gemini’s answers were solid but less detailed.

They all selected Thierry Henry and Alan Shearer, rounding out their lists with Ryan Giggs (Claude) or Cristiano Ronaldo (GPT-4 and Gemini). I was surprised by the results, so my first step was to fact-check their responses. All facts checked out, although again, GPT-4 gave more examples.

Solving a math problem

1st Place (tie): GPT-4 & Claude

3rd Place: Gemini

GPT-4 and Claude both got the answer correct, with slightly different explanations. I initially was going to say GPT-4’s was better, but ultimately called it a tie since they were so close.

Gemini was simply wrong, which I found surprising. Not only did it respond with the wrong answer (7), but when it shared its reasoning, it arrived at a different wrong answer of 3. I intentionally added some tricky wording to trip up the models, but it was still strange to see it land on two different wrong answers.

Margarita recipes

1st Place: GPT-4 – 7.5-8 out of 10

I followed the instructions; since the simple syrup was optional, I asked my wife and she declined it. I worried the ratio of tequila to Cointreau would be off. Her rating was a 7.5-8 out of 10. She said “oh, that’s good” and liked how tart it was. She did mention it was a little strong, but still really liked it. She ended up adding some water, but again, still enjoyed it.

2nd Place: Gemini – 6.5 out of 10

I followed the instructions. She gave it a 7 out of 10 initially, which quickly dropped to 6.5. It was very strong, but she otherwise really liked the flavors; it would have been an 8 if not as strong. She had to water it down significantly.

3rd Place: Claude – 6 out of 10

I followed the instructions but left out the basil. She gave it a quick 6 rating; it was a little too sweet. “Syrupy” was how she described it. Adding water made it a little better.

Final thoughts on the results

1st Place: GPT-4

Clear winner across the board. It was the most consistent. I do use GPT-4 the most, so I wonder if I have naturally trained my prompts to work better with it.

2nd Place: Claude

This was my first time using Claude, and it was easy and provided good results. I will continue to play around with it (especially since I have $9.86 in credits).

3rd Place: Gemini

The wrong answers for the math problem made this an easy decision. It is nice that they offer a free API key for rate-limited usage.

Note:

One thing this process made me realize is that I had a lot of copy-and-pasted code. I think I want to build out a personal Python package with helper functions I can reuse. I have previously built a personal R package, [learylib](https://github.com/mleary/learylib).
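
As a rough sketch of what one of those helpers might look like (the llm_helpers module name and the cached_completion signature are hypothetical, not an existing package):

# llm_helpers.py -- hypothetical module for a personal package
import json

def cached_completion(text_name, filename, generate_fn):
    """Return the cached response for text_name if present; otherwise call
    generate_fn(), cache its output, and return it."""
    try:
        with open(filename, 'r') as f:
            data = json.load(f)
    except FileNotFoundError:
        data = {}

    if text_name not in data:
        data[text_name] = generate_fn()
        with open(filename, 'w') as f:
            json.dump(data, f)

    return data[text_name]

Each save/read pair in this post would then collapse into a single call, with the API request wrapped in a lambda so it only runs on a cache miss.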

Generating an image for this blog post

I used OpenAI’s DALL-E-3 model to generate an image for this blog post. The prompt was:

“I want a cartoon image of three computers competing in something. The should be dressed like kids from the 90s in a cartoonish way and playing bananagrams. I want them to be sitting on top of hill, and the hill should be covered in grass. The sky should be blue with a few clouds. The computers should be colorful and have faces. The computers should be having a great time and laughing. The image should be fun and playful.”

hide / show code
from openai import OpenAI
from secret_keys import GPT_KEY

client = OpenAI(api_key=GPT_KEY)

response = client.images.generate(
  model="dall-e-3",
  prompt='I want a cartoon image of three computers competing in something. The should be dressed like kids from the 90s in a cartoonish way and playing bananagrams. I want them to be sitting on top of hill, and the hill should be covered in grass. The sky should be blue with a few clouds. The computers should be colorful and have faces. The computers should be having a great time and laughing. The image should be fun and playful.',
  size="1024x1024",
  quality="standard",
  n=1,
)

image_url = response.data[0].url