Gemini Code Snippets
Posted: | Categories: tech | Tags: evaluation, llm, prompt-engineering
## Prompt Engineering Using the Gemini API

### Setup

```python
from google import genai
from google.genai import types

# GOOGLE_API_KEY holds your Gemini API key
client = genai.Client(api_key=GOOGLE_API_KEY)
```

### Basic Usage

```python
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain AI to me like I'm a kid."
)
print(response.text)
```

### Interactive Chat

```python
chat = client.chats.create(model='gemini-2.0-flash', history=[])
response = chat.send_message('Hello! My name is Zlork.')
print(response.text)
```

### Listing Available Models

```python
for model in client.models.list():
    print(model.name)
```

### Model Output Settings

```python
# Max output tokens
short_config = types.GenerateContentConfig(max_output_tokens=200)

# High temperature for creative responses (the value here is illustrative)
high_temp_config = types.GenerateContentConfig(temperature=2.0)
```
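A config object on its own does nothing until it is passed into a generation call. Below is a minimal sketch of applying `short_config` from above, assuming the `config` parameter of `generate_content` in the google-genai SDK; the prompt string is just an example.

```python
# Generate a response constrained by the max_output_tokens setting defined above
response = client.models.generate_content(
    model="gemini-2.0-flash",
    config=short_config,
    contents="Write a haiku about prompt engineering."
)
print(response.text)
```

The same pattern applies to `high_temp_config` or any other `GenerateContentConfig` you define.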