Tutorials · 13 min read

Claude Code /routines: Scheduled Agents, No Local Machine

Anthropic's /routines: schedule Claude Code agents on a cron, API call, or GitHub event — all on Anthropic infra, no laptop required.


ComputeLeap Team

[Illustration: Claude Code /routines — cloud-scheduled AI agents with circuit nodes, cron dials, and GitHub webhook arrows flowing into Anthropic infrastructure]

On April 14, 2026, Anthropic shipped the announcement that redrew what Claude Code actually is. Not a feature. A category shift.

The official blog post put it plainly: routines let you "configure Claude Code once and run it automatically — on Anthropic's infrastructure, with no local machine required." Seven independent YouTube creators published Claude Code content the same day. The announcement hit 3.88 million views and 17,150 likes in under 24 hours.

[Embedded tweet from @claudeai announcing /routines — 3.88M views, 17,150 likes: "Configure routines to run on a schedule, via API, or in response to events. Runs on Anthropic's infra — no laptop required."]

Before routines, Claude Code was powerful but tethered. Your agent ran while you were at your keyboard. You closed the laptop, the session died. Remote Tasks gave us a preview of cloud-hosted agent execution in March — routines are the formalization: a named feature, a proper UI, three distinct trigger types, and plan-tiered limits that signal Anthropic is treating this as a product, not an experiment.

This is the setup guide that every developer who saw that tweet and thought "how do I actually use this?" needs.

The Category Shift

Claude Code spent its first year as a development environment. You invoked it, it helped you, you closed it. The mental model was: sophisticated autocomplete with tool use. The constraint baked into that model was always the same — you had to be present.

Routines dissolve that constraint. The New Stack called it accurately: Claude Code can now do your job overnight. Not metaphorically. You configure the agent with a prompt, point it at a GitHub repository, set a trigger, and Anthropic's infrastructure handles execution. Your MacBook can be off, on a plane, or at the bottom of a lake. The routine runs.

This is the difference between a tool and a service. GitHub Actions made CI/CD feel like infrastructure rather than a script you ran manually. Routines do the same thing for AI agents — they move from "things you invoke" to "things that run."

The developer community recognized this immediately. The Hacker News outage thread on April 14 — 211 upvotes, 186 comments of developer frustration — is the inverse proof: the outage generated that much noise precisely because the dependency has deepened. You don't get 186 comments of frustration about a tool you only use occasionally.

What Is a Routine?

A routine is a saved Claude Code configuration with four components:

  1. A prompt — what you want Claude to do. Can be as simple as "triage new GitHub issues and label them" or as complex as a multi-step deploy verification workflow.
  2. One or more repositories — which Claude clones from the default branch at the start of each run. The clone is fresh each time, so there's no session state carried over between runs.
  3. Connectors — optional integrations (Slack, Linear, Sentry, etc.) that the routine can read from or write to during execution.
  4. Triggers — the mechanism that fires the routine. This is where routines become genuinely flexible.
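To make the four components concrete, here is how a routine's shape might look as a plain data structure. This is purely illustrative — Anthropic configures routines through a UI and hasn't published a config schema, so every field name below is an assumption:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a routine's four components.
# Field names are illustrative, not Anthropic's actual schema.
@dataclass
class Routine:
    prompt: str                                           # what Claude should do each run
    repositories: list[str]                               # cloned fresh from the default branch
    connectors: list[str] = field(default_factory=list)   # e.g. Slack, Linear, Sentry
    trigger: dict = field(default_factory=dict)           # schedule | api | github event

# The canonical nightly-triage example from the docs, expressed in this shape:
nightly_triage = Routine(
    prompt="Pull the top bug from Linear, attempt a fix, and open a draft PR.",
    repositories=["github.com/acme/backend"],
    connectors=["linear"],
    trigger={"type": "schedule", "cadence": "nightly", "at": "02:00"},
)
```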
Routines are available at claude.ai/code/routines — or type /schedule in the Claude Code CLI to jump straight to creation. Available on all paid plans: Pro, Max, Team, and Enterprise.

Every time a routine runs, Claude Code starts a fresh session in Anthropic's cloud environment, executes the prompt against the cloned repo, uses any configured connectors, and exits. No persistent agent state. No leftover context. Clean slate every run — which is both the strength (deterministic, reproducible) and a limitation worth planning around. If you need cross-run memory, pair routines with an external persistence layer like claude-mem, which gained 2,330 stars on April 15 as developers raced to solve exactly this problem.
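One lightweight alternative to an external memory service: have the prompt instruct Claude to read and rewrite a small state file that lives in the repo itself, so the fresh clone carries last run's context. A hypothetical sketch of that pattern — the filename and state keys are made up for illustration:

```python
import json
from pathlib import Path

# Hypothetical state file the routine commits alongside its changes.
STATE_FILE = Path(".routine-state.json")

def load_state() -> dict:
    # Every run starts from a fresh clone, so durable state has to live
    # somewhere the clone brings along -- here, a JSON file in the repo.
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"runs": 0, "last_issue": None}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

state = load_state()
state["runs"] += 1
state["last_issue"] = "BUG-142"  # e.g. the issue this run worked on
save_state(state)
```

The trade-off is that state updates only persist if the routine's changes actually land (for example via its own draft PR), which is worth keeping in mind before relying on this for anything critical.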

Three Trigger Types

The trigger is what makes routines genuinely useful rather than just a fancy cron wrapper. Three options, each with distinct use cases:

1. Schedule Triggers

The simplest trigger. Pick a cadence — hourly, nightly, weekly — and the routine fires on that schedule. Anthropic's official documentation uses the nightly bug triage as the canonical example: "Every night at 2am: pull the top bug from Linear, attempt a fix, and open a draft PR."

That example is worth sitting with. This isn't a notification or a summary email. Claude is reading the Linear queue, writing code, and opening a draft PR — all while you sleep. You wake up to a draft PR waiting for your review, not a task waiting for you to start it.

Schedule triggers require no external setup beyond the routine configuration itself. No webhook. No API key management. Just pick the cadence at claude.ai/code/routines.

Best for: Nightly digests, weekly code quality reports, automated documentation drift detection, recurring backlog triage.

2. API Triggers

An API trigger generates a unique HTTP endpoint for your routine — a URL plus a bearer token. Send a POST to that endpoint, the routine fires. The API documentation notes it ships under the experimental-cc-routine-2026-04-01 beta header, and request shapes may change during the research preview.

The practical power: you can now fire a Claude Code agent from anywhere in your stack that can make an HTTP request. Your CI pipeline can trigger a post-deploy smoke test. An alert from Datadog or PagerDuty can trigger an incident triage. A webhook from any event source can kick off a processing routine. The agent isn't limited to your local machine — it's available as a web service.
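Firing an API trigger is just an authenticated POST. The sketch below builds that request with the beta header the docs mention — but note the endpoint URL, token, and payload shape here are placeholders, not Anthropic's documented request format, which may change during the research preview:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- yours are generated when you save the routine.
ROUTINE_URL = "https://api.anthropic.com/v1/routines/rt_example/trigger"
TOKEN = "bearer-token-from-routine-settings"

def build_trigger_request(payload: dict) -> urllib.request.Request:
    # The beta header value comes from the routines docs; the payload
    # shape is illustrative and unconfirmed.
    return urllib.request.Request(
        ROUTINE_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
            "anthropic-beta": "experimental-cc-routine-2026-04-01",
        },
        method="POST",
    )

# A CI pipeline would send something like this after a deploy:
req = build_trigger_request({"reason": "post-deploy smoke test", "sha": "abc123"})
# urllib.request.urlopen(req) would fire the routine
```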

VentureBeat's hands-on testing confirmed the setup is fast: "once identified, however, the process was remarkably efficient, with a routine operational in under two minutes running autonomously on Anthropic's web infrastructure."

Best for: Alert triage with proposed fixes, on-demand deploy verification, CI/CD pipeline integrations, event-driven agent workflows.

3. GitHub Event Triggers

The most sophisticated trigger type. Connect Claude Code to your GitHub repository via the Claude GitHub App, then configure the routine to fire on specific repository events: pull request opened, PR merged, release published, issue created.

The granular filtering is what makes GitHub triggers production-viable rather than noisy. According to the official docs, you can filter PR events by author, title text, base branch, head branch, labels, draft state, and whether the PR comes from a fork. Set up a routine that runs only on PRs targeting main from external contributors that touch files in your /src/api/ directory — nothing else fires it.
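To see how that filtering behaves, here is a sketch of the logic applied to a PR webhook payload. The filter config shape is invented for illustration, and the payload fields are simplified stand-ins for GitHub's real webhook format:

```python
# Hypothetical sketch of the PR-event filtering the docs describe.
# Both the filter config keys and payload fields are illustrative.
def matches(filter_cfg: dict, pr: dict) -> bool:
    if filter_cfg.get("base_branch") and pr["base"]["ref"] != filter_cfg["base_branch"]:
        return False
    if filter_cfg.get("exclude_drafts") and pr["draft"]:
        return False
    if filter_cfg.get("from_fork_only") and not pr["head"]["repo_is_fork"]:
        return False
    required = set(filter_cfg.get("labels", []))
    if required and not required <= {label["name"] for label in pr["labels"]}:
        return False
    return True

external_pr = {
    "base": {"ref": "main"},
    "draft": False,
    "head": {"repo_is_fork": True},
    "labels": [{"name": "api"}],
}
cfg = {"base_branch": "main", "exclude_drafts": True, "from_fork_only": True}
```

With `cfg` as above, `external_pr` fires the routine; flipping it to a draft, or re-targeting it at a feature branch, silently doesn't.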

During the research preview, GitHub webhook events are subject to per-routine and per-account hourly caps. Events beyond the limit are dropped until the window resets — worth accounting for in high-traffic repos.

Best for: Automated PR review for compliance checks, dependency update summaries, cross-SDK porting, release notes generation.

Setting Up Your First Routine

The fastest path to a working routine is five steps:

1. Navigate to claude.ai/code/routines. Or type /schedule in the CLI — both land at the same creation UI. Click "New routine."

2. Name it and write the prompt. The prompt is the agent's instruction for each run. Be specific: "Check the Linear backlog for all issues tagged bug with priority High. For each one, attempt a fix, run the test suite, and if tests pass, open a draft PR with a description explaining the fix." Vague prompts produce vague results — the same discipline that makes CLAUDE.md configuration effective applies here.

3. Select a repository. Choose one or more GitHub repos. Claude clones from the default branch at the start of each run. Private repositories require authorizing the Claude GitHub App.

4. Configure connectors (optional). If your routine needs to read from Linear, post to Slack, or pull from Sentry, configure those connectors now. Connectors are the glue between Claude's code work and the rest of your toolchain.

5. Add a trigger. Pick schedule, API, or GitHub events. For schedule triggers, select the cadence. For API triggers, save the routine first — the URL and bearer token generate post-save since they depend on the routine ID. For GitHub events, install the Claude GitHub App on the target repo.

First routine recommendation: Start with a simple scheduled report before attempting automated PR creation. "Every Monday morning, review the open PR list and summarize what each one does, who needs to review it, and any obvious conflicts with main." Low stakes, high signal — you'll see exactly how Claude reads your codebase before you trust it with commit rights.

Chase AI's hands-on walkthrough is the best video tutorial for the full setup flow.

Real-World Use Cases

The most compelling signal that routines are ready for real workflows came before they even launched: a Claude Code agent already ran 108 machine learning experiments over 49 hours with no human in the loop, building a golf forecasting system. With cloud-hosted execution, that kind of autonomous workload is now a configuration file, not a requirement to keep your terminal open for two days.

[Embedded tweet from @dphuang2: Claude Code + Tinker ran 108 ML experiments over 49 hours, no human in loop, building a golf forecasting system — 64K views]

Here are the use cases worth building first:

Nightly bug triage — Claude pulls high-priority bugs from Linear, attempts fixes, runs tests, opens draft PRs. You review drafts each morning instead of starting from scratch.

Deploy verification — API trigger from your CI pipeline. Post-deploy, Claude checks health endpoints, reads recent error logs from Sentry, cross-references with the deployed diff, and posts a summary to Slack. No manual smoke testing.

PR review automation — GitHub event trigger on PR opened. Claude reads the diff, identifies what changed, flags files touching critical paths (auth, payments, data models), and posts a structured comment. Human reviewers know where to focus immediately.

Documentation drift detection — Weekly schedule trigger. Claude compares /docs against recent code changes, identifies outdated function references, opens a PR with updates. The fix for "docs are always wrong" is making the update automatic.

Alert triage — API trigger from PagerDuty or Datadog. When an alert fires, Claude reads the alert context, checks git history for recent changes to affected services, and posts a structured triage report to the incident channel. First responders have context before they've finished reading the page.
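The deploy-verification case above reduces to a simple shape: gather a few signals, decide pass/fail, post a summary. A minimal sketch of that decision step — endpoint names and the error-budget threshold are hypothetical, and in practice the prompt would have Claude gather these signals via connectors rather than run a script:

```python
# Minimal sketch of the deploy-verification decision: aggregate health-check
# statuses and a Sentry-style error count into one pass/fail summary.
# Endpoint names and the error budget are illustrative assumptions.
def verify_deploy(health_checks: dict[str, int], new_errors: int,
                  error_budget: int = 5) -> dict:
    failing = [name for name, status in health_checks.items() if status != 200]
    ok = not failing and new_errors <= error_budget
    return {
        "ok": ok,
        "failing_endpoints": failing,
        "new_errors": new_errors,
        "summary": "deploy healthy" if ok else "deploy needs attention",
    }

report = verify_deploy({"/healthz": 200, "/api/status": 200}, new_errors=2)
```

The value of the routine isn't this logic — it's that Claude collects the inputs (health endpoints, Sentry, the deployed diff) and writes the Slack summary without anyone running a checklist by hand.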

Plan Limits and Token Considerations

The research preview ships with daily execution caps:

Plan                 Daily Routine Executions
Pro                  5
Max                  15
Team / Enterprise    25

Additional executions are available through extra usage — see the current billing structure for rates.

The Register made a fair point: routines "burn through tokens far more rapidly than judiciously applied, carefully reviewed AI assistance." They're right. A complex codebase analysis with multiple tool calls can run 50K–200K tokens per execution. At 5 runs/day on Pro, that's manageable and within budget. At 25 runs/day on Enterprise with broad, exploratory prompts, the bill can surprise you.
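The arithmetic is worth doing before you schedule anything. Using the 50K–200K tokens-per-run range above:

```python
# Back-of-envelope token budget using the 50K-200K per-run range cited above.
def daily_tokens(runs_per_day: int, tokens_per_run: int) -> int:
    return runs_per_day * tokens_per_run

pro_worst = daily_tokens(5, 200_000)    # Pro cap, heavy runs  -> 1,000,000 tokens/day
ent_worst = daily_tokens(25, 200_000)   # Team/Enterprise cap  -> 5,000,000 tokens/day
```

A worst-case Pro day is about a million tokens; a worst-case Enterprise day is five million. That 5x spread is exactly where the surprise bills come from.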

Calculate token spend before scheduling nightly PR-generating routines. Start narrow — a report or summary routine — measure actual token consumption, then expand scope once you have a baseline. Routines amplify both the good and the wasteful in your prompts.

The honest framing: routines are high-leverage for well-scoped, recurring tasks where the output is deterministic enough to be useful without constant human oversight. They're low-leverage (and expensive) for exploratory work that benefits from real-time judgment. Treat them like any automation — best for tasks where "good enough, automatically" beats "perfect, manually."

The Desktop App Redesign

The routines launch shipped alongside a complete rebuild of the Claude Code desktop app for Mac and Windows. The headline feature: parallel sessions — multiple Claude Code agents running simultaneously in a single UI, each handling a different task.

[Embedded tweet from @amorriscode: Claude Code Desktop rebuilt from scratch — redesigned to make it easier to parallelize work with Claude. "I haven't opened an IDE or terminal in weeks."]

Boris Cherny, who led the rebuild at Anthropic, described it plainly: "I haven't opened an IDE or terminal in weeks." The desktop app is no longer a wrapper around the CLI — it's a purpose-built environment for parallelized agentic workflows.

The redesign and routines are architecturally connected: if you're running parallel desktop sessions alongside cloud-scheduled routines, you need an interface that makes concurrent agent state legible. The desktop rebuild is the control plane for a multi-agent workstation that didn't exist six months ago.

What This Means for Your Stack

Routines make a specific claim: that agent execution is reliable enough to delegate to cloud infrastructure without human oversight on each run. That claim is worth stress-testing before staking production workflows on it.

The Hacker News outage thread is the honest counterpoint. 186 developer comments about unavailability during peak hours is not the reliability record you want for a nightly CI gate. Anthropic's broader platform positioning suggests the team knows this — enterprise SLAs and reliability improvements are on the roadmap, but the research preview ships with "use at your own risk" implied.

For now: use routines for anything where a missed run or a failed execution is annoying but not catastrophic. Draft PRs you'll review anyway. Summaries that inform decisions rather than make them. Reports you'd read with healthy skepticism regardless. That's a large, genuinely valuable category of work — it just isn't "replace your CI pipeline on day one."

The trajectory is clear. Seven independent YouTube creators covering the same tool on the same day. GitHub trending dominated by Claude Code tooling for the third consecutive day. The "but I have to keep Claude Code running" objection — the one that ended every agent tutorial for the last year — is gone. That's the shift worth building around.


The official routines documentation is the authoritative reference for trigger configuration and connector setup. For cross-session memory, claude-mem is the leading open solution pairing well with routines-based automation.


