What's new in beta.37
v2.0.0-beta.37 is the biggest beta in a while. The
headline addition is cross-stack and cross-stage
references — the missing piece that lets PR-preview
stages share a database with staging instead of
re-provisioning a whole Postgres cluster every time someone
opens a draft PR. On top of that: fully typed
Worker-to-Worker bindings, typed Workflow I/O, the new
Alchemy.Secret / Alchemy.Variable one-liners, cron
triggers, and an Analytics Engine binding.
A bunch of this came from outside the core team — props to the contributors throughout, and full credits at the bottom of the post.
Cross-stack and cross-stage references
Lazy, typed references to resources deployed by a
different stack or stage. The use case it was built for:
ephemeral PR-preview stages that need a Neon project,
shouldn’t pay for their own, and should instead share one
that’s owned by staging.
Pointing a PR stage at staging’s database
Same alchemy.run.ts, conditional on the stage. If the
stage looks like a PR preview (pr-147, pr-148, …),
Neon.Project.ref reaches into the staging stage’s state
file and pulls out the already-deployed project. Otherwise
the stage creates its own.
```ts
import * as Alchemy from "alchemy";
import * as Drizzle from "alchemy/Drizzle";
import * as Neon from "alchemy/Neon";
import * as Effect from "effect/Effect";

export const NeonDb = Effect.gen(function* () {
  const { stage } = yield* Alchemy.Stack;

  const schema = yield* Drizzle.Schema("app-schema", {
    schema: "./src/schema.ts",
    out: "./migrations",
  });

  // PR previews share the long-lived staging project.
  // Every other stage gets its own.
  const project = stage.startsWith("pr-")
    ? yield* Neon.Project.ref("app-db", { stage: "staging" })
    : yield* Neon.Project("app-db", { region: "aws-us-east-1" });

  // Branches are cheap and per-stage either way.
  const branch = yield* Neon.Branch("app-branch", {
    project,
    migrationsDir: schema.out,
  });

  return { project, branch, schema };
});
```

Three things to note:
- Same logical id, same type. "app-db" matches the id staging uses to create the project; project is Neon.Project either way, so downstream (Neon.Branch({ project })) doesn’t know or care whether it’s real or referenced.
- Resolved at plan time. Alchemy reads the project’s attributes (id, host, etc.) out of staging’s persisted state store. If staging hasn’t been deployed yet, plan fails loudly with InvalidReferenceError.
- PR teardown stays scoped. alchemy destroy --stage pr-147 deletes the per-PR Neon.Branch but doesn’t touch the shared project — this stage doesn’t own it.
Deploy staging once, then PR stages can point at it:
```sh
alchemy deploy --stage staging   # creates the project once
alchemy deploy --stage pr-147    # references it, creates only the branch
```

The full file lives in examples/cloudflare-neon-drizzle/src/Db.ts; the guide is at Guides › Shared database across stages.
Referencing an entire stack’s outputs
The example above pulls one resource across stages. The other shape — pulling a whole stack’s outputs — is what you reach for in a monorepo where the frontend package wants to read the backend stack’s deployed URL.
Declare a typed stack handle once:
```ts
import * as Alchemy from "alchemy";

export class Backend extends Alchemy.Stack<
  Backend,
  { url: string }
>()("Backend") {}
```

Deploy the backend with Backend.make(...) (the typed shorthand for Alchemy.Stack), then yield* Backend from the frontend’s stack to get its outputs back, type-checked:
```ts
import * as Alchemy from "alchemy";
import * as Cloudflare from "alchemy/Cloudflare";
import { Backend } from "backend";
import * as Effect from "effect/Effect";

export default Alchemy.Stack(
  "Frontend",
  { providers: Cloudflare.providers(), state: Cloudflare.state() },
  Effect.gen(function* () {
    // Resolves Backend's outputs from the same stage of the same
    // stack name. `pr-42` frontend reads `pr-42` backend.
    const backend = yield* Backend;
    //    ^? { url: string }

    return yield* Cloudflare.Vite("Website", {
      env: { VITE_API_URL: backend.url },
    });
  }),
);
```

yield* Backend defaults to “same stage as the consumer”.
When you need to pin — say, the prod frontend always reads
the prod backend regardless of which branch deploys it —
use Backend.stage.<name>:
```ts
const backend = yield* Backend.stage.prod;     // always pin to prod
const backend = yield* Backend.stage["pr-42"]; // arbitrary stage name
```

Under the hood, both shapes are Output.stackRef / Resource.ref reading the state store. The new APIs just make them ergonomic.
Concepts › References · Guides › Shared database · Guides › Monorepos · Tutorial › Branch from a shared database
Alchemy.Secret and Alchemy.Variable
The boring part of a stack — wiring an env var into a deploy target — used to leak across three files. One yield now collapses it into a line that’s also a typed runtime accessor.
```ts
// alchemy.run.ts — declare once on the Worker
export default Cloudflare.Worker("Api", { main: import.meta.path },
  Effect.gen(function* () {
    const apiKey = yield* Alchemy.Secret("OPENAI_API_KEY");
    //    ^? Output<Redacted<string>>

    return {
      fetch: Effect.gen(function* () {
        // …and read the bound value inside the handler.
        const key = yield* apiKey; // Redacted<string>
        return HttpServerResponse.text(
          `key has ${Redacted.value(key).length} chars`,
        );
      }),
    };
  }),
);
```

Alchemy.Variable is the same shape without Redacted:
```ts
const port = yield* Alchemy.Variable("PORT", 3000);
const flags = yield* Alchemy.Variable("FLAGS", { beta: true });

// inside fetch
const p = yield* port;  // number — 3000
const f = yield* flags; // { beta: true }
```

Both accept a literal, an Effect, a Config, or default
to reading the value from the active ConfigProvider under
the same name. The same call routes to the platform’s
native secret/variable binding — Cloudflare secret_text
for Secret, Lambda encrypted env vars on AWS — and the
runtime accessor decodes back to the original type.
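As an illustration of that decode step — a minimal sketch with invented helper names (encodeVar / decodeVar are not Alchemy APIs) of the round-trip a non-string variable makes through the platform’s string-only env slot:

```typescript
// Hypothetical sketch of the round-trip: a structured value is serialized
// into a plain string env var at deploy time, and the runtime accessor
// decodes it back to the original type on read.
const encodeVar = (value: unknown): string => JSON.stringify(value);
const decodeVar = <T>(raw: string): T => JSON.parse(raw) as T;

// What the platform stores for Alchemy.Variable("FLAGS", { beta: true }):
const env = { FLAGS: encodeVar({ beta: true }) };

// What reading the variable inside the handler hands back:
const flags = decodeVar<{ beta: boolean }>(env.FLAGS);
```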
The deeper write-up is at Secrets and Variables; docs at Concepts › Secrets · Guides › Secrets and env vars.
Worker-to-Worker bindings, fully typed
Section titled “Worker-to-Worker bindings, fully typed”A Worker is now bindable as a binding on another Worker,
and the caller gets full RPC types on the other side — no
codegen, no manual interfaces, no as casts on env.
Three call shapes are supported, all on the same
deployment.
The example: a Backend Worker exposing both an RPC method
and an HTTP route, plus a TanStack Start frontend calling
it three different ways. (This is the
examples/cloudflare-tanstack
project — cut down to the relevant pieces.)
The backend Worker:
```ts
import * as Cloudflare from "alchemy/Cloudflare";
import * as Effect from "effect/Effect";
import { HttpServerRequest } from "effect/unstable/http/HttpServerRequest";
import * as HttpServerResponse from "effect/unstable/http/HttpServerResponse";

export const Bucket = Cloudflare.R2Bucket("Bucket");

export default class Backend extends Cloudflare.Worker<Backend>()(
  "Backend",
  { main: import.meta.path },
  Effect.gen(function* () {
    const bucket = yield* Cloudflare.R2Bucket.bind(Bucket);

    return {
      // RPC method — callable via `backend.hello(key)` on the other side.
      hello: Effect.fn("Backend.hello")(function* (key: string) {
        const object = yield* bucket.get(key);
        return object === null ? null : yield* object.text();
      }),

      // HTTP handler — callable via `env.BACKEND.fetch(...)`.
      fetch: Effect.gen(function* () {
        const request = yield* HttpServerRequest;
        const key = new URL(request.url, "http://backend").searchParams.get("key");
        if (!key) return HttpServerResponse.text("missing key", { status: 400 });

        if (request.method === "GET") {
          const object = yield* bucket.get(key);
          return object === null
            ? HttpServerResponse.text("not found", { status: 404 })
            : HttpServerResponse.stream(object.body);
        }
        return HttpServerResponse.text("method not allowed", { status: 405 });
      }),
    };
  }).pipe(Effect.provide(Cloudflare.R2BucketBindingLive)),
) {}
```

Wire it into another Worker as a binding:
```ts
import Backend, { Bucket } from "./src/backend.ts";

export const Website = Cloudflare.Vite("Website", {
  bindings: {
    BUCKET: Bucket,   // R2 binding
    BACKEND: Backend, // Worker-to-Worker binding
  },
});
```

Now the caller has three ways to talk to the backend. All three are real, all three are typed, all three work in the same handler — pick whichever fits the call site.
```ts
// frontend route handler
import * as Cloudflare from "alchemy/Cloudflare";
import type Backend from "../backend.ts";
import { env } from "../env.ts";

// Option 1 — async binding (just call the platform API directly).
const object = await env.BUCKET.get(key);

// Option 2 — Worker-to-Worker fetch over the service binding.
const res = await env.BACKEND.fetch(`https://backend/?key=${encodeURIComponent(key)}`);

// Option 3 — typed RPC. `toPromiseApi<Backend>` wraps the wire-shape
// binding into a Promise<T> view that throws on `Effect.fail` and
// unwraps stream envelopes — full method signatures from `Backend`.
const backend = Cloudflare.toPromiseApi<Backend>(env.BACKEND);
const value = await backend.hello(key);
//    ^? string | null (typed end-to-end)
```

Effect-native callers also get a fourth path — yield* Backend.bind(env.BACKEND) returns the same RPC surface
without the Promise envelope. The HTTP option (#2) is the
right one when you need request/response semantics with
streaming bodies; the RPC option (#3) is the right one
when you want typed method calls.
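The Promise-view idea behind option 3 can be sketched without the Cloudflare runtime. This is a hypothetical mini version, not the real toPromiseApi — it only shows the wrapping shape (the toPromiseView name and the fake binding are invented):

```typescript
// Hypothetical re-creation of the idea: lift every method of a
// binding-shaped object into a Promise-returning view via a Proxy.
type Methods = Record<string, (...args: Array<any>) => unknown>;

const toPromiseView = <T extends Methods>(target: T) =>
  new Proxy(target, {
    get: (obj, prop) => {
      const method = obj[prop as string];
      return (...args: Array<any>) => Promise.resolve(method(...args));
    },
  }) as unknown as {
    [K in keyof T]: (...args: Parameters<T[K]>) => Promise<Awaited<ReturnType<T[K]>>>;
  };

// A fake backend whose `hello` resolves synchronously on the wire,
// standing in for the RPC surface of the Backend Worker above.
const backend = toPromiseView({
  hello: (key: string) => (key === "greeting" ? "hello world" : null),
});
```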
Tutorial › Vite SPA + Worker bridge ·
Example › cloudflare-tanstack
Workflows with typed input and output
Cloudflare.Workflow is now generic over input and
output types. The body is an Effect.fn that takes the
typed input directly; workflow.create(input) is
type-checked end to end; the returned value flows through
to instance.status().output.
A realistic example — a notifier workflow that touches KV,
reads an Alchemy.Secret, broadcasts through a Durable
Object, sleeps, and finalizes — with each side effect
wrapped in a task so a crash + replay returns the
persisted result instead of re-running:
```ts
import * as Alchemy from "alchemy";
import * as Cloudflare from "alchemy/Cloudflare";
import * as Effect from "effect/Effect";
import * as Redacted from "effect/Redacted";
import { KV } from "./KV.ts";
import Room from "./Room.ts";

export default class NotifyWorkflow extends Cloudflare.Workflow<NotifyWorkflow>()(
  "Notifier",
  Effect.gen(function* () {
    // Outer init phase: resolve shared dependencies once.
    const rooms = yield* Room;
    const kv = yield* Cloudflare.KVNamespace.bind(KV);
    const secret = yield* Alchemy.Secret("WORKFLOW_SECRET");

    return Effect.fn(function* (input: { roomId: string; message: string }) {
      const { roomId, message } = input;

      // Each `task` is a checkpoint — replay-safe.
      const stored = yield* Cloudflare.task("kv-roundtrip",
        Effect.gen(function* () {
          const key = `notify:${roomId}`;
          yield* kv.put(key, message);
          return (yield* kv.get(key)) ?? message;
        }).pipe(Effect.orDie),
      );

      const value = Redacted.value(yield* secret);
      const processed = yield* Cloudflare.task("process",
        Effect.succeed({ text: `Processed: ${stored}`, secret: value }),
      );

      yield* Cloudflare.task("broadcast",
        rooms.getByName(roomId).broadcast(`[workflow] ${processed.text}`),
      );

      yield* Cloudflare.sleep("cooldown", "2 seconds");
      yield* Cloudflare.task("finalize",
        rooms.getByName(roomId).broadcast(`[workflow] complete for ${roomId}`),
      );

      return processed;
    });
  }),
) {}
```

Start it from a Worker — create is typed against the input shape, instance.status() reports the typed output:
```ts
// inside a Worker's fetch handler
const notifier = yield* NotifyWorkflow;

const instance = yield* notifier.create({
  roomId: "room-42",
  message: "hello",
});
// instance.id: string

const status = yield* (yield* notifier.get(instance.id)).status();
// status.output: { text: string; secret: string } | undefined
```

Cron triggers via Cloudflare.cron(...)
Subscribe to a Cloudflare Cron Trigger with an Effect
handler. The deploy-time half attaches the cron expression
to the host Worker; the runtime half registers a
scheduled listener. Use it alongside any other binding
you’ve already wired up — the handler runs inside the same
Worker context.
```ts
export default Cloudflare.Worker("Reporter", { main: import.meta.path },
  Effect.gen(function* () {
    const kv = yield* Cloudflare.KVNamespace.bind(Counters);

    // Fires once at the top of every hour.
    yield* Cloudflare.cron("0 * * * *").subscribe((controller) =>
      Effect.gen(function* () {
        yield* kv.put(`tick:${controller.scheduledTime}`, "ok");
        yield* Effect.log(`tick at ${new Date(controller.scheduledTime).toISOString()}`);
      }),
    );

    return { fetch: Effect.succeed(HttpServerResponse.text("ok")) };
  }),
);
```

Multiple cron("…") calls register multiple schedules on the same Worker; the finest cron granularity Cloudflare offers is one minute (* * * * *).
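Those two halves can be sketched in isolation. The following is a hypothetical toy — cron, dispatchScheduled, and the schedules array are invented stand-ins, not Alchemy’s implementation:

```typescript
// Hypothetical sketch of the split described above: subscribe records the
// expression for deploy-time wiring, and the runtime `scheduled` event
// fans out to every registered handler.
type CronHandler = (controller: { scheduledTime: number }) => void;

const schedules: Array<{ expr: string; handler: CronHandler }> = [];

const cron = (expr: string) => ({
  subscribe: (handler: CronHandler): void => {
    schedules.push({ expr, handler }); // deploy-time half: attach expression
  },
});

// Runtime half: the platform invokes every subscriber on a tick.
const dispatchScheduled = (scheduledTime: number): void => {
  for (const { handler } of schedules) handler({ scheduledTime });
};

// Mirrors the hourly tick recorder above.
const ticks: Array<number> = [];
cron("0 * * * *").subscribe((controller) => {
  ticks.push(controller.scheduledTime);
});
dispatchScheduled(1700000000000);
```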
— Thanks to Dawson (#288) for the contribution.
Analytics Engine binding
Cloudflare Workers Analytics Engine, exposed as a
zero-provisioning Worker binding. Declare a dataset
resource, bind it on a Worker, and call writeDataPoint
from the handler — same Effect-error channel as every
other Alchemy binding.
```ts
export const Events = Cloudflare.AnalyticsEngineDataset("Events", {
  dataset: "app-events",
});

// inside the Worker
export default Cloudflare.Worker("Api", { main: import.meta.path },
  Effect.gen(function* () {
    const analytics = yield* Cloudflare.AnalyticsEngineDataset.bind(Events);

    return {
      fetch: Effect.gen(function* () {
        yield* analytics.writeDataPoint({
          indexes: ["account-1"], // queryable, low-cardinality
          blobs: ["signup"],      // arbitrary string columns
          doubles: [1],           // numeric metrics
        });
        return HttpServerResponse.text("recorded");
      }),
    };
  }).pipe(Effect.provide(Cloudflare.AnalyticsEngineDatasetBindingLive)),
);
```

— Thanks to Dawson (#286) for the contribution.
R2 buckets empty themselves on destroy
destroy on an R2Bucket now drains the contents before
deleting the bucket. No more BucketNotEmpty failures
during teardown — ephemeral PR previews and integration
tests tear down cleanly with a single alchemy destroy.
```ts
const Photos = Cloudflare.R2Bucket("Photos");
// `alchemy destroy` empties Photos and deletes it in one go
```

If you want the old behavior (fail if non-empty) for production guards, opt out per-bucket:

```ts
const Photos = Cloudflare.R2Bucket("Photos", { emptyOnDestroy: false });
```

— Thanks to Michael K (#276) for the contribution.
Fixes worth knowing about
- D1 prepare() / bind() are synchronous now. Matches the upstream Cloudflare Workers API — no more yield* on trivial statement construction.
- WASM modules in the local sidecar bundle. bun alchemy dev now correctly bundles .wasm modules into the local sidecar, fixing a class of “module not found” errors for Workers that depend on WASM. Thanks to Baptiste Arnaud (#305).
- Unresolved Output JS-coercion throws. Accidentally using an unresolved Output<string> in a template literal (e.g. `${bucket.bucketName}` outside an Effect) previously coerced to "[object Output]" and shipped garbage to the cloud. It now throws. Thanks to Zé Yuri (#306).
- Deprecated libsodium wrapper types removed. No public API impact; if you were importing internal types, they’re gone. Thanks to 齐天大圣 (#311).
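For illustration, the coercion guard in the unresolved-Output fix can be sketched with Symbol.toPrimitive. This is a hypothetical mechanism sketch, not Alchemy’s actual Output class:

```typescript
// Hypothetical sketch: a class can reject JS primitive coercion via
// Symbol.toPrimitive, so `${output}` throws instead of silently
// producing "[object Output]".
class UnresolvedOutput {
  constructor(readonly name: string) {}
  [Symbol.toPrimitive](): never {
    throw new Error(`Output "${this.name}" is unresolved; read it inside an Effect`);
  }
}

const bucketName = new UnresolvedOutput("bucketName");
let coercionThrew = false;
try {
  void `${bucketName}`; // previously this shipped "[object Output]"
} catch {
  coercionThrew = true;
}
```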
Contributors
Big thank-you to everyone who shipped code in this beta:
- Michael K — R2 empty-on-destroy (#276)
- Dawson — Worker cron triggers (#288)
- Dawson — Analytics Engine binding (#286)
- Baptiste Arnaud — WASM in local sidecar bundle (#305)
- Zé Yuri — throw on unresolved Output JS-coercion (#306)
- 齐天大圣 — remove deprecated libsodium wrapper types (#311)