What Was Missing
grammY’s src/convenience/constants.ts exports API_CONSTANTS — a frozen object containing ALL_UPDATE_TYPES, DEFAULT_UPDATE_TYPES, and ALL_CHAT_PERMISSIONS. These constants are used across production bots to configure which updates to receive and which permissions to grant. Every other module in src/convenience/ had a corresponding test file. This one did not.
The risk: if someone adds a new update type to the Bot API types package but forgets to add it to ALL_UPDATE_TYPES, or if Telegram adds a new chat permission and it gets missed in ALL_CHAT_PERMISSIONS, nothing would catch it. The existing satisfies clauses prevent incorrect values from being added, but they do not enforce completeness.
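To see the gap, here is a minimal sketch (the Update type below is a toy stand-in, not the real @grammyjs/types definition): a satisfies clause rejects misspelled or invalid entries, but happily accepts an array that omits a valid update type.

```typescript
// Toy stand-in for the Bot API Update type (illustrative only).
interface Update {
  update_id: number;
  message?: object;
  edited_message?: object;
  poll?: object;
}

// The update type names are the keys of Update, minus update_id.
type UpdateType = Exclude<keyof Update, "update_id">;

// `satisfies` rejects wrong values ("mesage" would not compile)...
const ALL_UPDATE_TYPES = [
  "message",
  "edited_message",
  // ...but "poll" is missing and this still compiles:
  // each element is checked, completeness is not.
] as const satisfies readonly UpdateType[];

console.log(ALL_UPDATE_TYPES.length); // 2, even though there are 3 update types
```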
Runtime Tests, Then a Better Idea
The PR initially contained runtime tests — assertions checking array contents, duplicate detection, and Object.isFrozen verification. The maintainer, KnorpelSenf, pushed back: the runtime tests duplicated the arrays in the test file, creating a second place to keep in sync. More maintenance, not more confidence.
His counter-proposal: type-level tests. Instead of checking values at runtime, assert at compile time that the types derived from the implementation match the types declared in the Bot API type definitions. If they diverge, the build fails. Zero maintenance overhead — no arrays to keep in sync, no runtime execution needed.
PR #882 — Compile-Time Completeness Checks
The final implementation uses IsExact type assertions from Deno’s standard testing library. Two key checks:
    // ALL_UPDATE_TYPES covers every update type
    assertType<IsExact<
        (typeof API_CONSTANTS.ALL_UPDATE_TYPES)[number],
        Exclude<keyof Update, "update_id">
    >>(true);

    // ALL_CHAT_PERMISSIONS covers every chat permission
    assertType<IsExact<
        keyof typeof API_CONSTANTS.ALL_CHAT_PERMISSIONS,
        keyof ChatPermissions
    >>(true);
The first assertion extracts the union of all values in the ALL_UPDATE_TYPES tuple and checks that it exactly matches the set of keys of the Update type (minus update_id). If Telegram adds a new update type to @grammyjs/types and it is not added to the array, the build breaks.
The second compares the property names of ALL_CHAT_PERMISSIONS against ChatPermissions. The keyof on both sides is deliberate — the object types themselves are structurally incompatible (ALL_CHAT_PERMISSIONS has required/readonly/true properties, while ChatPermissions has optional/mutable/boolean properties), so only the key sets can be compared for exact equality.
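A minimal sketch of that incompatibility, using hand-rolled stand-ins for Deno's assertType/IsExact helpers and simplified one-property shapes (both types below are illustrative, not the real definitions):

```typescript
// Simplified stand-ins for Deno's std/testing/types helpers.
type IsExact<A, B> =
  [A] extends [B] ? ([B] extends [A] ? true : false) : false;
function assertType<T extends true>(_ok: T): void {}

// One-property sketches of the two shapes being compared.
type AllChatPermissions = { readonly can_send_messages: true };
type ChatPermissions = { can_send_messages?: boolean };

// The full object types are not exactly equal: required/readonly/true
// on one side vs optional/mutable/boolean on the other.
type FullMatch = IsExact<AllChatPermissions, ChatPermissions>; // false
// assertType<FullMatch>(true); // would fail to compile

// Their key sets, however, are identical.
type KeyMatch = IsExact<keyof AllChatPermissions, keyof ChatPermissions>; // true
const keyMatch: KeyMatch = true;
assertType<KeyMatch>(keyMatch);
```

Dropping the keyof operators would turn the commented-out line into the actual assertion, and it would never type-check.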
A single runtime test remains: Object.isFrozen(API_CONSTANTS). Immutability is a runtime guarantee that TypeScript cannot enforce — the type system has no concept of frozen objects.
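A standalone sketch of that remaining runtime check (the real test lives in grammY's test suite; the frozen object here is a tiny stand-in):

```typescript
// Tiny stand-in for grammY's frozen constants object (illustrative).
const API_CONSTANTS = Object.freeze({
  ALL_UPDATE_TYPES: Object.freeze(["message", "edited_message"] as const),
  ALL_CHAT_PERMISSIONS: Object.freeze({ can_send_messages: true } as const),
});

// Immutability can only be verified at runtime; the type system has no
// notion of Object.freeze beyond Readonly<> annotations on the type.
console.assert(Object.isFrozen(API_CONSTANTS), "API_CONSTANTS must be frozen");
console.assert(Object.isFrozen(API_CONSTANTS.ALL_UPDATE_TYPES));

// In strict mode, a write to a frozen object throws a TypeError.
```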
The Review Process
This PR went through real review. The maintainer rejected the first approach, proposed type tests instead, then suggested a further simplification — dropping the keyof operators to compare full object types directly. That did not work due to the structural type differences described above. I explained why, the maintainer agreed, and the PR was merged as-is.
Midway through the review, the maintainer pointed out that my PR comments read like they were written by an LLM. He was right — they were. The emdashes, the level of detail, the fact that I wrote the implementation in a GitHub comment instead of just pushing a commit. He spotted it immediately.
On Using AI
I used to be a developer. Years of freelance work burned me out to the point where I never wanted to write code again. I moved to tech support and helping people — work that used my dev experience but did not drain me the same way. I was good at it and enjoyed it more.
When AI coding tools started appearing, I kept checking in. Could this make development sustainable for me again? For a long time, the answer was no. The output was not up to my standard. I know what good code looks like, and early AI assistants did not write it. Close enough to be tempting, not close enough to trust.
That changed when Anthropic released Claude Opus 4.5. For the first time, the code quality matched what I would expect from a competent developer — if you set it up correctly. Not “AI-generated code that kind of works.” Actual production code that I could read, understand, and stand behind. That is what made GlacierPhonk possible. Not AI replacing a developer, but AI making it possible for a burned-out developer to come back.
I am upfront about using AI. The grammY maintainer spotted it in my PR comments — the code itself passed his review. That is the bar that matters.
3 PRs merged into grammY.