Tag: prompt-engineering

Gemini Code Snippets

Prompt Engineering Using Gemini API

Setup

```python
from google import genai
from google.genai import types

# GOOGLE_API_KEY must hold your API key string
client = genai.Client(api_key=GOOGLE_API_KEY)
```

Basic Usage

```python
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain AI to me like I'm a kid."
)
print(response.text)
```

Interactive Chat

```python
chat = client.chats.create(model='gemini-2.0-flash', history=[])
response = chat.send_message('Hello! My name is Zlork.')
print(response.text)
```

Listing Available Models

```python
for model in client.models.list():
    print(model.name)
```

Model Output Settings

```python
# Max output tokens
short_config = types.GenerateContentConfig(max_output_tokens=200)

# High temperature for creative responses (Gemini accepts 0.0-2.0)
high_temp_config = types.GenerateContentConfig(temperature=2.0)
```
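The temperature setting above rescales the model's token logits before sampling. A minimal sketch in plain Python (not the Gemini SDK) of why low temperature is near-deterministic and high temperature is near-uniform:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities flatten out
print(cold)
print(hot)
```

At temperature 0.2 the most likely token takes almost all the probability mass; at 2.0 the three tokens are much closer together, which is why high temperature yields more varied output.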

Prompt Engineering Notes

Prompt engineering is the process of designing high-quality prompts that guide LLMs toward accurate outputs. It involves iterating on prompt length, structure, tone, and clarity.

Model-Specific Setup

Choose your LLM and configure it for the task:
- Model-specific prompts and capabilities
- Sampling parameters

Output Length

More output tokens mean more compute cost. Lowering the output-length limit does not make the model more concise; it simply truncates generation once the limit is hit. One caveat for ReAct-style models: they can keep emitting irrelevant tokens after the useful response, so a sensible limit still matters.

Sampling Controls

| Parameter   | Effect |
| ----------- | ------ |
| Temperature | Low = deterministic; high = creative/random |
| Top-K       | Restrict prediction to the K most likely tokens |
| Top-P       | Nucleus sampling: sample from the smallest set of tokens whose cumulative probability reaches P |
| Num Tokens  | Maximum output length |

Practical guidelines: Temperature = 0 (greedy decoding) for tasks with a single correct answer.
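A toy illustration (not how any particular model implements it) of how Top-K and Top-P restrict the candidate pool before a token is sampled:

```python
def top_k_filter(probs, k):
    """Keep only the k most likely tokens and renormalize to sum to 1."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.3, 0.15, 0.05]
print(top_k_filter(probs, 2))    # keeps tokens 0 and 1, renormalized
print(top_p_filter(probs, 0.9))  # keeps tokens 0, 1, 2 (cumulative 0.95)
```

Note that Top-P adapts to the shape of the distribution: a confident model may pass only one token through the nucleus, while an uncertain one passes many, whereas Top-K always keeps exactly K candidates.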