Why grammY
Three serious options exist for building Telegram bots in the Node.js ecosystem: node-telegram-bot-api, Telegraf, and grammY. After shipping multiple production bots, we reach for grammY every time.
node-telegram-bot-api is a thin wrapper around the Bot API. It works, but you write everything yourself — middleware, session management, conversation state, error recovery. Fine for a weekend hack. Not for production.
Telegraf was the standard for years. It has middleware, scenes, and a large community. But development stalled, TypeScript support was bolted on after the fact, and the type definitions have gaps that surface at the worst moments. When the Bot API adds new features, Telegraf lags behind.
grammY was built by a core Telegraf contributor who started fresh with TypeScript as the foundation, not an afterthought. The type system is generated directly from the Bot API specification, which means every method, every parameter, every update type is correctly typed the moment Telegram ships it. Auto-complete works. The compiler catches mistakes before the bot runs. The plugin ecosystem — sessions, conversations, menus, rate limiting — is first-party and maintained alongside the framework.
The practical difference: with grammY, you write bot logic. With the alternatives, you write bot infrastructure.
Project Setup
Start with a clean TypeScript project. No bundler needed — tsx handles execution directly.
mkdir my-bot && cd my-bot
npm init -y
npm install grammy
npm install -D typescript @types/node tsx
npx tsc --init
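Optionally, add npm scripts so the dev and build commands do not need retyping. The script names here are our own convention; tsx watch restarts the bot on file changes.

```json
{
  "scripts": {
    "dev": "tsx watch src/bot.ts",
    "build": "tsc",
    "start": "node dist/bot.js"
  }
}
```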
Adjust tsconfig.json for a bot project. The defaults are conservative — tighten them.
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist",
    "rootDir": "src"
  },
  "include": ["src"]
}
Create src/bot.ts with the minimal viable bot.
import { Bot } from "grammy";

const bot = new Bot(process.env.BOT_TOKEN!);

bot.command("start", (ctx) => {
  ctx.reply("Online.");
});

bot.start();
Run it with BOT_TOKEN=your_token npx tsx src/bot.ts. Send /start to your bot on Telegram. It replies. That is your foundation.
A note on the bot token: create your bot through @BotFather on Telegram. The token goes in environment variables, never in source code. We will cover this properly in the deployment section.
Commands and Message Handling
grammY routes updates through filter methods. Each one narrows the type of ctx so you get full auto-complete for the specific update type. This is not just convenience — it eliminates an entire class of runtime errors where you access properties that do not exist on the current update type.
// Respond to the /help command
bot.command("help", (ctx) => {
  ctx.reply("Available commands:\n/start — Initialize\n/help — This message\n/status — Bot status");
});

// Handle plain text messages
bot.on("message:text", (ctx) => {
  const text = ctx.message.text;
  ctx.reply(`Received: ${text}`);
});

// Handle photos — sizes are sorted smallest to largest
bot.on("message:photo", (ctx) => {
  const photo = ctx.message.photo;
  const largest = photo[photo.length - 1];
  ctx.reply(`Photo received. ${largest.width}x${largest.height}`);
});

// Handle callback queries from inline keyboards
bot.callbackQuery("confirm", (ctx) => {
  ctx.answerCallbackQuery({ text: "Confirmed." });
});
The filter strings are fully typed. "message:text", "message:photo", "message:document", "callback_query:data" — the compiler enforces valid combinations. Type "message:" and auto-complete shows every possible sub-filter. This is where grammY's generated types pay off.
You can also combine filters. bot.on(["message:text", "message:caption"]) handles both plain messages and media with captions. For complex routing, use bot.filter() with a custom predicate that narrows the context type however you need.
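A sketch of such a narrowing predicate. The CtxSlice interface is a simplified stand-in for grammY's Context so the logic runs standalone; the bot.filter() wiring is shown in a comment.

```typescript
// Simplified slice of the context — illustrative, not grammY's full Context type.
interface CtxSlice {
  chat?: { type: "private" | "group" | "supergroup" | "channel" };
}

// Type-narrowing predicate: returns true only for private chats, and the
// compiler knows ctx.chat is a private chat inside the guarded branch.
function isPrivateChat<C extends CtxSlice>(
  ctx: C,
): ctx is C & { chat: { type: "private" } } {
  return ctx.chat?.type === "private";
}

// Wiring (assumes a grammY Bot instance named `bot`):
// bot.filter(isPrivateChat, (ctx) => {
//   // ctx.chat.type is "private" here — no runtime check needed
// });

console.log(isPrivateChat({ chat: { type: "private" } })); // true
console.log(isPrivateChat({ chat: { type: "group" } }));   // false
```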
Middleware
Every grammY handler is middleware. The framework processes updates through a chain — each function can act on the update, pass it downstream with next(), or stop the chain. This is the same pattern as Koa or Express, applied to bot updates instead of HTTP requests.
import type { Context, NextFunction } from "grammy";

// Logging middleware — runs on every update
bot.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  const ms = Date.now() - start;
  console.log(`[${ctx.update.update_id}] ${ms}ms`);
});

// Auth middleware — block non-admin users from admin commands
function adminOnly(ctx: Context, next: NextFunction) {
  if (ctx.from?.id !== Number(process.env.ADMIN_CHAT_ID)) {
    return ctx.reply("Unauthorized.");
  }
  return next();
}

bot.command("broadcast", adminOnly, (ctx) => {
  // Only admins reach this handler
});
Composers
As the bot grows, a single file becomes unmanageable. grammY’s Composer lets you split handlers into modules.
// src/handlers/admin.ts
import { Composer } from "grammy";
import type { MyContext } from "../context";
// `db` is your own data layer, imported from wherever you define it

const admin = new Composer<MyContext>();

admin.command("stats", async (ctx) => {
  const userCount = await db.getUserCount();
  ctx.reply(`Users: ${userCount}`);
});

admin.command("broadcast", async (ctx) => {
  // broadcast logic
});

export { admin };

// src/bot.ts
import { admin } from "./handlers/admin";
import { userHandlers } from "./handlers/user";

bot.use(admin);
bot.use(userHandlers);
Each composer is isolated. It has its own middleware chain, its own handlers, and it can be tested independently. In production bots like FridgeKit, we split by domain: inventory.ts, receipts.ts, onboarding.ts, payments.ts. Each file owns its slice of the bot.
Composers can also be nested. A top-level composer can filter all updates from admin users and delegate to sub-composers for different admin functions. This creates a clean permission hierarchy without scattering authorization checks across individual handlers.
Sessions
Bots need state. grammY’s session plugin attaches a typed object to every ctx, loaded before your handler runs and saved after it completes.
import { session } from "grammy";

interface SessionData {
  language: "en" | "pl";
  onboardingComplete: boolean;
  itemCount: number;
}

bot.use(session({
  initial: (): SessionData => ({
    language: "en",
    onboardingComplete: false,
    itemCount: 0,
  }),
}));
By default, sessions are stored in memory. That works for development but dies on restart. For production, use a storage adapter.
import { FileAdapter } from "@grammyjs/storage-file";

bot.use(session({
  initial: (): SessionData => ({
    language: "en",
    onboardingComplete: false,
    itemCount: 0,
  }),
  storage: new FileAdapter({
    dirName: "sessions",
  }),
}));
Other adapters exist for Redis, SQLite, PostgreSQL, MongoDB, and more. The @grammyjs/storage-* packages are all first-party. Pick the one that matches your infrastructure. For most single-server bots, the file adapter or SQLite adapter is sufficient. WP Jobs runs SQLite sessions on a single EC2 instance and handles thousands of updates without issue.
One thing to watch: session data is loaded on every update. Keep it small. Store user preferences and conversation state in the session. Store large datasets — inventory items, order history, analytics — in a proper database and reference them by user ID. The session is a fast lookup table, not a document store.
Custom Context
To get proper types on ctx.session, define a custom context type and pass it to Bot.
import { Bot, Context, SessionFlavor } from "grammy";

interface SessionData {
  language: "en" | "pl";
  onboardingComplete: boolean;
  itemCount: number;
}

type MyContext = Context & SessionFlavor<SessionData>;

const bot = new Bot<MyContext>(process.env.BOT_TOKEN!);
Now every handler knows exactly what ctx.session contains. No casting. No guessing. The compiler enforces it.
Conversations
Most bot interactions are multi-step. A user sends /add, you ask for a product name, they reply, you ask for a quantity, they reply, you save the item. The conversations plugin handles this without manually tracking state across separate handlers.
import { conversations, createConversation } from "@grammyjs/conversations";
import type { Conversation } from "@grammyjs/conversations";

// The custom context also needs the plugin's flavor, e.g.:
// type MyContext = Context & SessionFlavor<SessionData> & ConversationFlavor;

// Install the plugin
bot.use(conversations());

async function addItem(conversation: Conversation<MyContext>, ctx: MyContext) {
  await ctx.reply("What item do you want to add?");
  const nameCtx = await conversation.wait();
  const name = nameCtx.message?.text;
  if (!name) {
    await ctx.reply("I need a text name. Try again with /add.");
    return;
  }

  await ctx.reply(`How many units of "${name}"?`);
  const qtyCtx = await conversation.wait();
  const qty = parseInt(qtyCtx.message?.text ?? "", 10);
  if (Number.isNaN(qty) || qty < 1) {
    await ctx.reply("Invalid quantity. Try again with /add.");
    return;
  }

  await db.addItem(ctx.from!.id, name, qty);
  await ctx.reply(`Added ${qty}x ${name}.`);
}

bot.use(createConversation(addItem));

bot.command("add", async (ctx) => {
  await ctx.conversation.enter("addItem");
});
The conversation function reads like synchronous code but executes across multiple Telegram updates. Each conversation.wait() pauses until the user sends the next message. The plugin serializes conversation state internally, so it survives bot restarts if you configure a session storage backend.
This is the pattern FridgeKit uses for its three-step onboarding flow: language selection, location, dietary preferences. Each step waits for user input, validates it, and moves forward. No state machine. No manual flags. Just sequential code.
Conversations also support timeouts. If a user starts the /add flow and disappears for an hour, you do not want the conversation hanging indefinitely. Set a timeout and handle the expiration gracefully — clean up partial state, notify the user if they come back, and free the resources.
Error Handling
Unhandled errors in a bot handler crash the process. grammY provides bot.catch as the global error boundary.
import { GrammyError, HttpError } from "grammy";

bot.catch((err) => {
  const ctx = err.ctx;
  const e = err.error;
  console.error(`Error handling update ${ctx.update.update_id}:`);
  if (e instanceof GrammyError) {
    // Error from the Telegram Bot API
    console.error("Telegram API error:", e.description);
  } else if (e instanceof HttpError) {
    // Network error reaching Telegram
    console.error("Network error:", e);
  } else {
    // Your code threw something
    console.error("Application error:", e);
  }
  // Notify the user that something went wrong
  ctx.reply("Something went wrong. Try again.").catch(() => {});
});
Three error types cover the entire surface: GrammyError for API rejections (blocked user, invalid message, rate limit), HttpError for network failures, and everything else for application bugs. Catch them distinctly. Log them differently. Alert on them differently.
In production, pipe the error details to an admin notification. We send critical errors directly to a Telegram admin chat — a bot monitoring itself through the platform it runs on. This means errors are visible within seconds, on the same device where users are reporting problems. No need to check a dashboard or log aggregator.
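A sketch of that notification path. formatErrorReport is our own helper, and the bot.api.sendMessage wiring (a real grammY call) is shown in a comment so the helper runs standalone.

```typescript
// Build a one-line report for the admin chat. Our own helper, not a grammY API.
function formatErrorReport(updateId: number, err: unknown): string {
  const message = err instanceof Error ? err.message : String(err);
  return `Bot error on update ${updateId}: ${message}`;
}

// Inside bot.catch, after logging (ADMIN_CHAT_ID from the environment):
// bot.api
//   .sendMessage(process.env.ADMIN_CHAT_ID!, formatErrorReport(ctx.update.update_id, e))
//   .catch(() => {}); // never let the error reporter itself throw

console.log(formatErrorReport(42, new Error("db timeout")));
// → "Bot error on update 42: db timeout"
```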
Deployment
A production Telegram bot needs three things: a container, environment variables, and a process supervisor. Docker handles all three.
Dockerfile
# Build stage: needs dev dependencies (typescript) to compile
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src ./src
RUN npx tsc

# Runtime stage: production dependencies and compiled output only
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/bot.js"]
docker-compose.yml
services:
  bot:
    build: .
    container_name: my-bot
    restart: unless-stopped
    env_file: .env
    volumes:
      - bot-data:/app/sessions

volumes:
  bot-data:
Environment
# .env
BOT_TOKEN=123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11
ADMIN_CHAT_ID=123456789
The restart: unless-stopped policy means Docker restarts the bot after crashes, server reboots, or OOM kills. The volume persists session data across container rebuilds. Environment variables stay out of the image.
Deploy with GitHub Actions on push to main: SSH into the server, pull the latest code, rebuild the container, bring it up. No downtime for long-polling bots — the old container stops, the new one starts, and grammY picks up from the latest update offset. For webhook-based bots, use a reverse proxy like nginx to handle TLS termination and route requests to the container.
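A sketch of such a workflow. The action, secret names, and server path are assumptions; adapt them to your setup.

```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Rebuild the bot on the server over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /opt/my-bot
            git pull
            docker compose up -d --build
```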
Keep the image small. node:20-alpine starts at roughly 130MB. Install only production dependencies in the image — dev dependencies like tsx and @types/node stay out. The compiled JavaScript in dist/ is what runs in production. The smaller the image, the faster the deploy, the less memory consumed on the server.
Production Checklist
- Set bot commands via bot.api.setMyCommands() on startup
- Use bot.api.setMyDescription() for the bot profile
- Implement graceful shutdown: process.on("SIGTERM", () => bot.stop())
- Rate limit outgoing messages — Telegram enforces 30 messages/second globally
- Use the auto-retry plugin for transient API failures
- Log every error with enough context to reproduce it
From Tutorial to Production
This covers the core patterns. Every production grammY bot at GlacierPhonk™ — FridgeKit (kitchen inventory with receipt OCR), WP Jobs (WordPress job aggregation), the content automation bots — uses these exact foundations. The framework scales from a 50-line utility bot to a multi-container system with microservices. The type system keeps it maintainable as it grows.
If you need a production Telegram bot built with this stack — or want to discuss your project — reach out through the GlacierPhonk™ inquiry bot.