
2.4 Re-usable Prompts in Genkit & Prompt Engineering

In the previous recipe, we explored Genkit Flows and how they enable you to create complex AI-driven workflows. As you may recall, we defined our prompts inline when invoking the model using the ai.generate() method. While this approach works for simple use cases, it can quickly become unwieldy as your application grows in complexity, especially when you need to reuse prompts across multiple flows or want to maintain a clear separation of concerns.

Genkit provides a powerful abstraction for managing prompts called Prompt Templates. With Prompt Templates, you can define your prompts separately from your flow logic, making them easier to maintain and reuse across different parts of your application. This not only promotes cleaner code but also allows for better version control and collaboration, especially in larger teams where prompt engineering is a critical aspect of the development process.

Genkit supports two primary ways to define prompts:

  1. Inline Prompts - using ai.definePrompt() directly in your code
  2. DotPrompt Templates - using separate .prompt files, similar to Markdown files, with YAML frontmatter for configuration and Handlebars-style syntax for variable interpolation. DotPrompt templates are especially useful for more complex prompts that require careful formatting or examples, or when you want to keep your prompt logic separate from your application logic.

In this recipe, we’ll build a Travel Destination Suggestion system that demonstrates different ways to work with prompts in Genkit.

You should have your environment set up for Genkit development. Follow the Getting Started guide if you haven't done so already.

I have a Genkit Starter Template that you can use to quickly spin up a new project with everything configured. You can find it here.

First, let’s ensure we have the necessary packages installed:

npm install @genkit-ai/google-genai @genkit-ai/express

Here’s a brief overview of the packages we’ll be using:

  • @genkit-ai/google-genai: Genkit plugin for Google AI models, including Gemini
  • @genkit-ai/express: Genkit plugin to easily create an Express server for hosting your flows

Method 1: Inline Prompts with ai.definePrompt


Inline prompts are defined directly in your TypeScript code. They’re great for:

  • Simple prompts that don’t need complex formatting
  • Prompts that are tightly coupled to specific flows
  • Quick prototyping and testing

Here’s how to create an inline prompt:

import { genkit, z } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
});

const inlineDestinationPrompt = ai.definePrompt({
  name: 'inlineDestinationSuggestion',
  prompt: `You are a travel guru AI assistant. Suggest a travel destination based on:
- Interests: {{interests}}
- Budget: {{budget}}
- Period: {{period}}
Provide a detailed suggestion including destination, attractions, budget breakdown, and best time to visit.`,
  input: {
    schema: z.object({
      interests: z.string().describe('User travel interests'),
      budget: z.string().describe('Travel budget'),
      period: z.string().describe('Travel dates or period'),
    }),
  },
});

Here, we define a prompt named inlineDestinationSuggestion that takes three inputs: interests, budget, and period. The prompt instructs the AI to suggest a travel destination based on these inputs. The Handlebars-style {{variableName}} syntax marks where the input variables should be interpolated into the prompt. The input field defines the expected input schema using Zod, which ensures that the data passed to the prompt is structured correctly.
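The {{...}} interpolation itself is simple string templating. Here is a toy stand-in for what Genkit's Handlebars-based renderer does with these placeholders (illustrative only, not Genkit internals):

```typescript
// Toy stand-in for Handlebars-style {{variable}} interpolation (illustrative only).
function interpolate(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} with the matching value, or '' if the key is missing.
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => vars[name] ?? '');
}

const template = 'Suggest a destination for {{interests}} on a {{budget}} budget.';
const rendered = interpolate(template, { interests: 'beaches', budget: '$2000' });
// rendered === 'Suggest a destination for beaches on a $2000 budget.'
```

The real renderer does much more (helpers, partials, escaping), but the mental model is the same: the schema-validated input fields are substituted into the template before the result is sent to the model.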

On top of the basic prompt definition, we can also configure additional parameters such as:

  • model: Specify a different model for this prompt
  • temperature: Control the randomness of the output (0-1)
  • system: Provide system-level instructions to the AI
  • messages: Define a conversation history for chat models
  • maxOutputTokens: Limit the maximum length of the response

Many more parameters are available depending on your use case; we will explore these advanced configurations in depth in future recipes.

If you have a keen eye, you will notice that this is the same configuration object that you can also pass to ai.generate(), but by defining it in the prompt, you can reuse it across multiple flows without having to repeat the configuration each time. This promotes consistency and makes it easier to manage changes to your prompts in one place.

Once defined, you can use your prompt in a flow or directly in your code:

const { text } = await inlineDestinationPrompt(input);

Method 2: DotPrompt Templates

Another powerful way of managing prompts is DotPrompt files. This is not a Genkit-specific format but rather a convention for storing prompts in separate files with a .prompt extension. In essence, these are markdown-like files that contain YAML frontmatter with the same configuration options we just saw in the inline prompt, followed by the prompt template itself.

For more complex prompts or when you want to keep your prompts in separate files, use DotPrompt templates. These are stored in .prompt files and support:

  • YAML frontmatter for configuration
  • Better syntax highlighting in editors
  • Version control for prompt iterations
  • Easier collaboration with non-technical team members

Create a file at src/prompts/suggestDestination.prompt:

---
model: googleai/gemini-2.5-flash
input:
  schema:
    interests:
      type: string
      description: The user's travel interests (e.g., beaches, mountains, culture).
    budget:
      type: string
      description: The user's budget for the trip (e.g., $2000).
    period:
      type: string
      description: The planned travel dates or period (e.g., June 2024).
---
You are a travel guru AI assistant.
Your task is to suggest a travel destination based on the user's preferences.
When a user provides their interests, budget, and travel dates, respond with a
well-thought-out destination suggestion that aligns with their criteria.
Make sure to include:
- A brief description of the destination
- Key attractions or activities
- Estimated budget breakdown (flights, accommodation, activities)
- Best time to visit
Example Input:
"I love beaches and historical sites, my budget is around $2000, and I plan to
travel in June."
Example Output:
"Based on your interests in beaches and historical sites, I suggest visiting
Barcelona, Spain. Barcelona offers beautiful beaches like Barceloneta, as well as rich
historical sites such as the Gothic Quarter and Sagrada Familia. Key attractions include:
- La Rambla
- Park Güell
- Montjuïc
Estimated Budget Breakdown:
- Flights: $600
- Accommodation: $800
- Activities: $400
- Food and Miscellaneous: $200
Best Time to Visit: June is a great time to visit Barcelona, as the weather is
warm and perfect for beach activities."
Now, please provide your travel preferences:
- Interests: {{interests}}
- Budget: {{budget}}
- Period: {{period}}

To use a DotPrompt template, you will need to specify the directory where your prompts are stored when initializing Genkit, as shown below:

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
  // Specify the directory for DotPrompt templates, so Genkit knows where to load them from
  promptDir: './prompts',
});

All prompts in the specified directory are loaded automatically and can be accessed through the ai.prompt() method, using the DotPrompt file's name without its extension to reference the prompt you want to use.

import { googleAI } from '@genkit-ai/google-genai';
import { genkit, z } from 'genkit';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
  promptDir: './prompts',
});

// Load the prompt from file (name matches filename without extension)
const suggestDestinationPrompt = ai.prompt('suggestDestination');

// Then, just like with inline prompts, use it in your flows or directly in your code:
const { text } = await suggestDestinationPrompt({
  interests: 'beaches and historical sites',
  budget: '$2000',
  period: 'June 2024',
});

Note: If you don't set promptDir, Genkit looks for .prompt files in the ./prompts directory at your project root by default.

As mentioned earlier, you can also configure various model parameters when defining your prompts. This allows you to fine-tune the behavior of the AI model for specific use cases. For example, you might want to adjust the temperature for more creative responses or set a maxOutputTokens limit to control the length of the response.

Note that these configuration options are model-specific, so refer to the documentation of the model you are using for its supported parameters.

Here’s how you can define an advanced prompt with additional configuration (for Gemini 2.5 Flash):

const advancedPrompt = ai.definePrompt(
  {
    name: 'advancedDestinationPlanner',
    input: {
      schema: z.object({
        interests: z.string(),
        budget: z.string(),
        period: z.string(),
        travelers: z.number().describe('Number of travelers'),
      }),
    },
    config: {
      temperature: 0.7, // Controls randomness (0-1)
      maxOutputTokens: 2000, // Limits response length
    },
  },
  `You are an expert travel planner with 20 years of experience.
Given the following travel preferences:
- Interests: {{interests}}
- Budget: {{budget}}
- Period: {{period}}
- Number of Travelers: {{travelers}}
Create a comprehensive travel plan including:
1. Recommended destination with rationale
2. Detailed 3-day itinerary
3. Budget breakdown per person
4. Accommodation suggestions (3 options: budget, mid-range, luxury)
5. Local cuisine recommendations
6. Essential travel tips
7. Packing list
Format the response in clear Markdown.`,
);

Common model parameters include:

  • temperature: Controls randomness (0.0 = deterministic, 1.0 = creative)
  • maxOutputTokens: Maximum length of the response
  • topK: Limits sampling to top K tokens
  • topP: Nucleus sampling threshold
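To build intuition for how topK and topP interact: topK first truncates the candidate list to the K most likely tokens, and topP then keeps only the smallest high-probability subset of those. A toy sketch of that filtering in plain TypeScript (illustrative only; this is not how Genkit or the model actually implements sampling):

```typescript
// Toy illustration of top-K / top-P (nucleus) filtering over a token distribution.
type TokenProb = { token: string; p: number };

function filterCandidates(dist: TokenProb[], topK: number, topP: number): TokenProb[] {
  // Sort by probability and keep the K most likely tokens.
  const byProb = [...dist].sort((a, b) => b.p - a.p).slice(0, topK);
  // Then keep the smallest prefix whose cumulative probability reaches topP.
  const kept: TokenProb[] = [];
  let cum = 0;
  for (const t of byProb) {
    kept.push(t);
    cum += t.p;
    if (cum >= topP) break;
  }
  return kept;
}

const dist = [
  { token: 'beach', p: 0.5 },
  { token: 'mountain', p: 0.25 },
  { token: 'city', p: 0.125 },
  { token: 'desert', p: 0.125 },
];
// topK=3 keeps the three most likely; topP=0.75 then stops once 0.5 + 0.25 >= 0.75,
// so only "beach" and "mountain" remain as candidates.
const kept = filterCandidates(dist, 3, 0.75);
```

The model then samples one token from the surviving candidates, with temperature controlling how flat or peaked that final sampling distribution is.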

In some cases, especially when dealing with complex data or when you want to ensure consistent output formats, you can define structured outputs in your prompts. This is particularly useful for applications that require reliable data extraction, such as generating JSON objects or specific data structures.

By defining an output schema using Zod, Genkit will ensure that the response from the AI model adheres to the specified format. This helps prevent parsing issues and allows for more robust handling of the AI's output, such as building complex UIs for reports and dashboards, or feeding the output into other systems or APIs, as we will see in future recipes.

const structuredDestinationPrompt = ai.definePrompt(
  {
    name: 'structuredDestination',
    input: {
      schema: z.object({
        interests: z.string(),
        budget: z.string(),
      }),
    },
    output: {
      schema: z.object({
        destination: z.string().describe('Recommended destination'),
        country: z.string().describe('Country'),
        highlights: z.array(z.string()).describe('Top attractions'),
        estimatedCost: z.object({
          flights: z.number(),
          accommodation: z.number(),
          activities: z.number(),
          total: z.number(),
        }),
        bestMonths: z.array(z.string()).describe('Best months to visit'),
      }),
    },
  },
  `Based on interests: {{interests}} and budget: {{budget}}, suggest the perfect travel destination with complete details.`,
);
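With an output schema in place, the model is asked to emit JSON matching it, and Genkit parses and validates the response before handing it back as output. The guarantee this buys you can be sketched with a plain type guard over a trimmed-down version of the schema above (illustrative only, not Genkit internals):

```typescript
// Illustrative: the kind of shape-check a structured-output schema gives you.
interface DestinationSuggestion {
  destination: string;
  country: string;
  highlights: string[];
}

function isDestinationSuggestion(value: unknown): value is DestinationSuggestion {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.destination === 'string' &&
    typeof v.country === 'string' &&
    Array.isArray(v.highlights) &&
    v.highlights.every((h) => typeof h === 'string')
  );
}

// A model response is parsed from JSON, then validated before use.
const raw = '{"destination":"Barcelona","country":"Spain","highlights":["Park Güell"]}';
const parsed: unknown = JSON.parse(raw);
if (!isDestinationSuggestion(parsed)) {
  throw new Error('Model output did not match the expected schema');
}
```

Genkit does this validation for you via the Zod schema, which is why the typed output field can be trusted by downstream code without ad hoc checks like the one above.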

Here’s the complete code for our travel destination suggestion system:

import { startFlowServer } from '@genkit-ai/express';
import { googleAI } from '@genkit-ai/google-genai';
import { genkit, z } from 'genkit';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
});

// Example 1: Inline Prompt
const inlineDestinationPrompt = ai.definePrompt(
  {
    name: 'inlineDestinationSuggestion',
    input: {
      schema: z.object({
        interests: z.string().describe('User travel interests'),
        budget: z.string().describe('Travel budget'),
        period: z.string().describe('Travel dates or period'),
      }),
    },
  },
  `You are a travel guru AI assistant. Suggest a travel destination based on:
- Interests: {{interests}}
- Budget: {{budget}}
- Period: {{period}}
Provide a detailed suggestion including destination, attractions, budget breakdown, and best time to visit.`,
);

// Example 2: DotPrompt Template
const dotPromptDestination = ai.prompt('suggestDestination');

// Example 3: Advanced Prompt with Configuration
const advancedPrompt = ai.definePrompt(
  {
    name: 'advancedDestinationPlanner',
    input: {
      schema: z.object({
        interests: z.string(),
        budget: z.string(),
        period: z.string(),
        travelers: z.number().describe('Number of travelers'),
      }),
    },
    config: {
      temperature: 0.7,
      maxOutputTokens: 2000,
    },
  },
  `You are an expert travel planner with 20 years of experience...`,
);

// Example 4: Structured Output Prompt
const structuredDestinationPrompt = ai.definePrompt(
  {
    name: 'structuredDestination',
    input: {
      schema: z.object({
        interests: z.string(),
        budget: z.string(),
      }),
    },
    output: {
      schema: z.object({
        destination: z.string().describe('Recommended destination'),
        country: z.string().describe('Country'),
        highlights: z.array(z.string()).describe('Top attractions'),
        estimatedCost: z.object({
          flights: z.number(),
          accommodation: z.number(),
          activities: z.number(),
          total: z.number(),
        }),
        bestMonths: z.array(z.string()).describe('Best months to visit'),
      }),
    },
  },
  `Based on interests: {{interests}} and budget: {{budget}}, suggest the perfect travel destination.`,
);

// Define flows using the prompts
const inlinePromptFlow = ai.defineFlow(
  {
    name: 'inlinePromptDestination',
    inputSchema: z.object({
      interests: z.string(),
      budget: z.string(),
      period: z.string(),
    }),
    outputSchema: z.string(),
  },
  async (input) => {
    const { text } = await inlineDestinationPrompt(input);
    return text;
  },
);

const dotPromptFlow = ai.defineFlow(
  {
    name: 'dotPromptDestination',
    inputSchema: z.object({
      interests: z.string(),
      budget: z.string(),
      period: z.string(),
    }),
    outputSchema: z.string(),
  },
  async (input) => {
    const { text } = await dotPromptDestination(input);
    return text;
  },
);

const advancedPromptFlow = ai.defineFlow(
  {
    name: 'advancedDestinationPlanner',
    inputSchema: z.object({
      interests: z.string(),
      budget: z.string(),
      period: z.string(),
      travelers: z.number(),
    }),
    outputSchema: z.string(),
  },
  async (input) => {
    const { text } = await advancedPrompt(input);
    return text;
  },
);

const structuredPromptFlow = ai.defineFlow(
  {
    name: 'structuredDestinationSuggest',
    inputSchema: z.object({
      interests: z.string(),
      budget: z.string(),
    }),
    outputSchema: z.object({
      destination: z.string(),
      country: z.string(),
      highlights: z.array(z.string()),
      estimatedCost: z.object({
        flights: z.number(),
        accommodation: z.number(),
        activities: z.number(),
        total: z.number(),
      }),
      bestMonths: z.array(z.string()),
    }),
  },
  async (input) => {
    const { output } = await structuredDestinationPrompt(input);
    if (!output) {
      throw new Error('Failed to generate destination suggestion');
    }
    return output;
  },
);

startFlowServer({
  flows: [
    inlinePromptFlow,
    dotPromptFlow,
    advancedPromptFlow,
    structuredPromptFlow,
  ],
  port: 3000,
  cors: true,
});

Start your flow server:

npm run genkit:dev

Then test using the Genkit Developer UI at http://localhost:4000, or using curl:

# Test inline prompt
curl -X POST http://localhost:3000/inlinePromptDestination \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "interests": "beaches and snorkeling",
      "budget": "$3000",
      "period": "August 2026"
    }
  }'

# Test structured prompt
curl -X POST http://localhost:3000/structuredDestinationSuggest \
  -H "Content-Type: application/json" \
  -d '{
    "data": {
      "interests": "hiking and local food",
      "budget": "$2500"
    }
  }'
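You can also call the flows from any HTTP client. Below is a minimal fetch-based sketch, assuming the server above is running on port 3000 and that, as in the curl examples, the flow input goes in a "data" envelope (the result field name is an assumption based on Genkit's flow server response format):

```typescript
// Build the request envelope used in the curl examples above: { "data": <input> }.
function buildFlowRequestBody(data: unknown): string {
  return JSON.stringify({ data });
}

// Hypothetical helper: POST an input to a flow endpoint and unwrap the response.
async function callFlow(name: string, data: unknown): Promise<unknown> {
  const res = await fetch(`http://localhost:3000/${name}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildFlowRequestBody(data),
  });
  if (!res.ok) throw new Error(`Flow ${name} failed with status ${res.status}`);
  const body = await res.json();
  return body.result; // assumed: the flow server wraps the flow output in "result"
}
```

For example, `await callFlow('inlinePromptDestination', { interests: 'beaches', budget: '$3000', period: 'August 2026' })` would mirror the first curl command.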
Here are some best practices to keep in mind when working with prompts:

  1. Use DotPrompt for complex prompts: Keep your code clean by moving long prompts to separate files
  2. Version your prompts: Track changes to prompts alongside your code in version control
  3. Add examples: Include few-shot examples in your prompts for better results
  4. Describe schemas: Use .describe() on Zod fields to help the LLM understand expectations

In this recipe, we learned:

  • How to create inline prompts using ai.definePrompt()
  • How to use DotPrompt templates in separate .prompt files
  • How to configure model parameters like temperature and token limits
  • How to enforce structured outputs in prompts
  • Best practices for prompt engineering in Genkit

Prompt templates are a powerful tool for building maintainable AI applications. By separating your prompt logic from your flow logic, you can iterate faster and collaborate more effectively with your team.