1.2 Genkit Anatomy - Understand Genkit

To recap: Genkit is an AI SDK that lets you interact with different AI models - Gemini, GPT, Claude, and so on - using the same familiar syntax. With Genkit, you can swap out AI models, or use models from different companies, without changing your application code.

In the previous section, we saw two examples - Gemini and OpenAI - that were quite similar. In this section, we will break down the anatomy of Genkit and see how to build AI Solutions with Genkit.

First, we need to create a Genkit instance. This is where we configure Genkit and set the defaults for the SDK's behavior - things like available models, the default model, model parameters, and so on. You can override this behavior per call when invoking a model, but you can also set it globally here.

import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

// we are creating a Genkit instance here - the variable can be named anything
const ai = genkit({
  // Gemini models
  plugins: [googleAI()],
  // default model
  model: googleAI.model('gemini-2.5-flash'),
});

As we will see in the next section, Genkit uses plugins to extend its behavior and support all kinds of different models. In the plugins array, we can provide as many plugins as we want, based on the models we want to use, as shown below:

import { googleAI } from '@genkit-ai/google-genai';
import { openAI } from '@genkit-ai/compat-oai/openai';
import { genkit } from 'genkit';

// we are creating a Genkit instance here - the variable can be named anything
const ai = genkit({
  plugins: [
    // provides Gemini models
    googleAI(),
    // provides GPT models
    openAI(),
  ],
  // by default, we will now use GPT 5.1 as our model
  model: openAI.model('gpt-5.1'),
});

At the heart of this is the Genkit plugin system. For example, if you want to use Gemini models from Google, there is a Google AI plugin that provides all of Google's AI models. The same goes for GPT, Claude, and even Ollama, if you are hosting your own custom models and want to build AI solutions around them.

On top of that, since Genkit is an open source framework, there are community provided plugins to extend this behavior. Genkit plugins can provide support for AI models, tools, prompts and much more. For instance, there are plugins to provide support for vector databases, retrieval augmented generation (RAG) and more. This makes Genkit quite extensible and you can build AI solutions tailored to your needs, with ease.

To enable this, Genkit doesn't ship with any plugins out of the box - not even the ones from Google. Once you decide which model you are using, you have to install the corresponding Genkit plugin using NPM (Node Package Manager) and then configure it. This keeps the core small while plugins provide features beyond AI models, as we will see later in this section.

For instance, for Gemini, this is what we first must do:

npm install @genkit-ai/google-genai

And then configure it as shown below:

import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.0-flash'),
});

Since ChatGPT and GPT models are quite popular, there is a special plugin that enables you to use any OpenAI-compatible API with Genkit. This means, if you are using OpenAI, Azure OpenAI, or any other provider that supports OpenAI APIs, you can use this single plugin to access all of them, without needing to install multiple plugins.

To use this plugin, first install it using NPM:

npm install @genkit-ai/compat-oai

In this plugin library, we have a number of sub-plugins, each targeting a specific OpenAI-compatible API, such as OpenAI (obviously), xAI, and DeepSeek.

For instance, to use GPT 5.1 from OpenAI, you can configure it as shown below:

import { openAI } from '@genkit-ai/compat-oai/openai';
import { genkit } from 'genkit';

const ai = genkit({
  plugins: [openAI()],
  model: openAI.model('gpt-5.1'),
});

And if you want to use xAI Grok models, you can do that as shown below:

import { xAI } from '@genkit-ai/compat-oai/xai';
import { genkit } from 'genkit';

const ai = genkit({
  plugins: [xAI()],
  model: xAI.model('grok-3'),
});

And if you want to use Deep Seek models, you can do that as shown below:

import { deepSeek } from '@genkit-ai/compat-oai/deepseek';
import { genkit } from 'genkit';

const ai = genkit({
  plugins: [deepSeek()],
  model: deepSeek.model('deepseek-chat'),
});

On top of that, there is a general purpose plugin that can be configured to work with any OpenAI-compatible API, by just providing the base URL and the API key.

For example, to use Ollama models, hosted locally (or in your own servers), you can configure it as shown below:

import { openAICompatible } from '@genkit-ai/compat-oai';
import { genkit, modelRef } from 'genkit';

// define a model ref for a model available in our local Ollama server
const localOllamaModel = modelRef({
  name: 'localLlama/llama3',
  // You can specify model-specific configuration here if needed,
  // or use the default settings.
});

const ai = genkit({
  plugins: [
    openAICompatible({
      name: 'localLlama',
      // Ollama ignores the API key, but the client requires one
      apiKey: 'ollama',
      baseURL: 'http://localhost:11434/v1',
    }),
  ],
  model: localOllamaModel,
});

So as you can see, with a single OpenAI-compatible plugin, you can access multiple AI model providers, making it quite versatile.

On top of plugins and the default model, there are other options you can set. We will cover them in detail in later chapters, but here is a quick overview of what else you can set during Genkit initialization:

Option         Description
-------------  ------------------------------------------------------------------
plugins        List of plugins to load.
promptDir      Directory where dotprompts are stored.
model          Default model to use if no model is specified.
context        Additional runtime context data for flows and tools.
name           Display name that will be shown in developer tooling.
clientHeader   Additional attribution information to include in the x-goog-api-client header.
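To make this concrete, here is a sketch of an initialization that sets a couple of these options alongside the plugins and default model. The promptDir path and the display name are hypothetical values for illustration; check each option against the Genkit reference before relying on it:

```typescript
import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

// Sketch: combining several initialization options from the table above.
const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
  // directory holding .prompt files (covered in a later chapter)
  promptDir: './prompts',
  // display name shown in developer tooling (hypothetical value)
  name: 'my-genkit-app',
});
```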

Most Genkit plugins require API keys to authenticate requests to the AI model providers. Each plugin will have its own way of providing the API key, usually through environment variables or configuration options during plugin setup.

For example, the Google AI plugin looks for the GEMINI_API_KEY environment variable by default. You can set it in your environment as shown below:

export GEMINI_API_KEY='your-google-api-key-here'

And similarly for OpenAI GPT models, you can set the OPENAI_API_KEY environment variable as shown below:

export OPENAI_API_KEY='your-openai-api-key-here'

Please refer to the specific plugin documentation for details on how to set up API keys for each provider.
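Many plugins also let you pass the key programmatically when you configure them, which is handy if your keys live somewhere other than the default environment variable. As a sketch, the Google AI plugin accepts an apiKey option (the MY_GEMINI_KEY variable name below is made up for illustration; verify the exact option name in the plugin's documentation):

```typescript
import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

// Sketch: passing the API key explicitly instead of relying on the
// plugin's default GEMINI_API_KEY environment variable lookup.
const ai = genkit({
  plugins: [
    googleAI({ apiKey: process.env.MY_GEMINI_KEY }),
  ],
  model: googleAI.model('gemini-2.5-flash'),
});
```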

Once you have initialized Genkit with the desired configuration, you can now invoke the models using the ai.generate method, as shown below:

import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

// we are creating a Genkit instance here - the variable can be named anything
const ai = genkit({
  // Gemini models
  plugins: [googleAI()],
  // default model
  model: googleAI.model('gemini-2.5-flash'),
});

// we need a main function to use async/await, since ai.generate is async
async function main() {
  // we pass a prompt to generate a response from the model
  const response = await ai.generate('Tell me a joke about programmers.');
  console.log(response);
}

// invoke the main function
main();
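Note that logging the response object prints the full response, including metadata. If you only want the model's output, the response exposes a text property; a minimal variant of the script above (assuming a valid GEMINI_API_KEY is set):

```typescript
import { googleAI } from '@genkit-ai/google-genai';
import { genkit } from 'genkit';

const ai = genkit({
  plugins: [googleAI()],
  model: googleAI.model('gemini-2.5-flash'),
});

async function main() {
  const response = await ai.generate('Tell me a joke about programmers.');
  // .text extracts just the generated text from the response object
  console.log(response.text);
}

main();
```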

If you want to override the default model for a specific call, you can do that by providing the model option in the ai.generate method, as shown below:

// ... previous code
const response = await ai.generate({
  model: googleAI.model('gemini-2.0-flash'),
  prompt: 'Tell me a joke about programmers.',
});

We can also provide additional model parameters, such as temperature, maximum output tokens, etc. via the config option, as shown below:

// ... previous code
const response = await ai.generate({
  model: googleAI.model('gemini-2.0-flash'),
  prompt: 'Tell me a joke about programmers.',
  config: {
    temperature: 0.7,
    maxOutputTokens: 150,
  },
});

As we will see in other sections, there will be very few situations where you will need to run Genkit as a standalone server. Most of the time, you will be using Genkit as an SDK, embedded within your application code. For example, in a Node.js backend, or in a serverless function, etc.

However, in the early stages of learning Genkit, you might want to run simple examples to get a feel of how things work. To do that, you can create a simple Node.js project, install the necessary Genkit plugins using NPM, and then create a simple script to invoke the models as shown above.

As we move forward, I will provide full recipes for different scenarios, such as using Genkit with Express.js, Next.js, and more. Hopefully, by the end of this course, you will be comfortable building AI solutions with Genkit, tailored to your needs.

That being said, for our case, I have provided a starter project that you can clone, run locally, and use to experiment with Genkit yourself. You can find the project on GitHub at the following URL: Genkit Starter Project

But to break it down quickly, here is what you need to do if you don't want to use the starter project:

  1. Create a new directory for your project and navigate into it:

    mkdir genkit-example
    cd genkit-example
  2. Initialize a new Node.js project:

    npm init -y
  3. Install the necessary Genkit packages. For example, if you want to use Gemini and OpenAI models, you can install the following:

    npm install tsx genkit @genkit-ai/google-genai @genkit-ai/compat-oai

    Let’s break down what each package does:

    • genkit: This is the core Genkit SDK.
    • @genkit-ai/google-genai: This is the Genkit plugin for Google AI models (Gemini).
    • @genkit-ai/compat-oai: This is the Genkit plugin for OpenAI-compatible APIs (GPT models).
    • tsx: TypeScript execution engine to run TypeScript files directly.
  4. Create a tsconfig.json file to configure TypeScript settings. You can do this by running:

    npx tsc --init

    In this course, you will see me using npx quite often. npx is a package runner tool that comes with npm 5.2 and higher. It allows you to run Node.js packages without having to install them globally on your system. This is useful for running command-line tools and scripts that are part of your project's dependencies. It also ensures that you are using the version of the package specified in your project's package.json, avoiding potential version conflicts.

  5. Create a new TypeScript file, say index.ts, and add the Genkit example code we saw earlier.

    import { googleAI } from '@genkit-ai/google-genai';
    import { genkit } from 'genkit';

    const ai = genkit({
      plugins: [googleAI()],
      model: googleAI.model('gemini-2.5-flash'),
    });

    async function main() {
      const response = await ai.generate('Tell me a joke about programmers.');
      console.log(response);
    }

    main();
  6. Set the necessary environment variables for API keys. For example, if you are using Gemini, set the GEMINI_API_KEY environment variable:

    export GEMINI_API_KEY='your-google-api-key-here'

    With tsx, you can also create a .env file in the root of your project and add your environment variables there. For example, create a .env file with the following content:

    GEMINI_API_KEY='your-google-api-key-here'
  7. Finally, run your TypeScript file using tsx:

    npx tsx --env-file=.env index.ts

    The --env-file flag tells tsx to load environment variables from the specified .env file.

  8. You should see the output from the Genkit model in the console.

And that’s it! You have successfully set up a simple Genkit project and invoked an AI model. You can now experiment with different prompts, models, and configurations to build your AI solutions.