AUTONOMOUS B2B OUTBOUND // ORGANIZED AI

Map Leads

Discovers local businesses from Google Maps, reads their reviews, extracts pain points with Claude + Gemma, scores leads hot/warm/cold, and fires personalized outreach across 4 channels overnight. Runs 24/7 on cron-triggered Cloudflare Workers.

6 agents · 4 outreach channels · $42/month · Gemma handles 8/10 AI tasks · 24/7 cron · never publishes without review
// CORE IDEA

Map Leads

Karpathy's autoresearch loop — applied to B2B outbound instead of neural nets. Discover a business, read what their customers complain about, generate an email that references their exact pain, score it, send it. Repeat overnight while you sleep.

Firecrawl MCP · Claude Sonnet 4.6 · Gemma 3 27B · claws-mac-mini M4 · Tailscale · ElevenLabs + VAPI
                    MAP LEADS — SYSTEM DIAGRAM

┌─────────────────────────────────────────────────────────────────┐
│  OPERATOR                                                       │
│  bash scripts/deploy-workers.sh                                 │
└──────────────────────────┬──────────────────────────────────────┘
                           │  cron: 0 2 * * *
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  CLOUDFLARE WORKERS MESH                                        │
│  6 independent workers · CF Queues between each                 │
│                                                                 │
│  ┌───────────┐    ┌───────────┐    ┌───────────┐    ┌────────┐  │
│  │ discovery │───▶│ scraper   │───▶│ analyzer  │───▶│ scorer │  │
│  │ Places API│    │ Firecrawl │    │Claude/Gem │    │ 0–1.0  │  │
│  └───────────┘    └───────────┘    └───────────┘    └────┬───┘  │
│                                                          │ hot  │
│  ┌────────────────────────────────────────────────┐      │      │
│  │ outreach dispatcher                            │◀─────┤      │
│  │  email · form · GBP message · voice call       │      │      │
│  └────────────────────────┬───────────────────────┘      │      │
│                           │ outcomes                     │      │
│  ┌────────────────────────▼───────────────────────┐      │      │
│  │ flywheel                                       │      │      │
│  │  EXP-N loops · weight update · JSONL export    │──────┘      │
│  └────────────────────────────────────────────────┘  improves   │
└─────────────────────────────────────────────────────────────────┘
              │                         │
              ▼                         ▼
  claws-mac-mini :11434          CF Pages dashboard
  Gemma 3 27B via ExoClaw        mapleads.organizedai.vip
  Hermes :7700 (memory)

The Agent's Contract

  • One source of truth — Google Maps reviews. Everything the agent knows about a business comes from what its customers wrote publicly.
  • One metric — lead score 0.0–1.0. Hot (>0.7) gets all 4 channels. Warm (0.4–0.7) gets email only. Cold goes to the flywheel log.
  • One directive — pain-point match. The email references the business's specific complaint pattern, not a generic template.
  • No live blast without review — hot leads queue for human approval before voice calls go out. Email and form auto-send; calls require confirmation.
~100 leads processed per night · ~20h to ship v1 · $42 monthly infra (Arch C)


// AGENT PIPELINE

The 6-Agent Pipeline

Each agent runs as an independent Cloudflare Worker, triggered by a CF Queue message. Workers scale to zero when idle and cost nothing between runs. Durable Objects track each lead through every state transition.

worker-discovery
  trigger: CF Cron (0 2 * * *)
  input:   { niche, city, radius_km } from KV config
  action:  Google Places New API Text Search
  filter:  rating ≥ 2.5, review_count ≥ 10, not seen within 7 days
  output:  → scrape-queue
           → LeadStateMachine DO (state: "discovered")
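The filter line above can be sketched as a pure predicate (type and function names here are illustrative, not from the repo):

```typescript
// Sketch of the worker-discovery filter: rating ≥ 2.5, review_count ≥ 10,
// and not seen within the last 7 days. Shapes are hypothetical.
interface PlaceCandidate {
  place_id: string;
  rating: number;
  review_count: number;
  last_seen_ms?: number;  // epoch ms of last discovery, if any
}

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

export function passesDiscoveryFilter(p: PlaceCandidate, now: number): boolean {
  if (p.rating < 2.5) return false;
  if (p.review_count < 10) return false;
  if (p.last_seen_ms !== undefined && now - p.last_seen_ms < SEVEN_DAYS_MS) return false;
  return true;
}
```

Candidates that pass go to scrape-queue; the rest are dropped before any Firecrawl spend.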

worker-scraper  [Firecrawl]
  trigger: scrape-queue
  action:  Firecrawl scrape Maps reviews page (100–200 reviews)
           Firecrawl crawl business site — extract:
             contact_form_url, staff_names[], services[], emails[]
  output:  ReviewCorpus JSONL (low-star first)
           SiteContext (form URL, staff names, services)
           → analyze-queue · DO state: "scraped"

worker-analyzer  [Claude / Gemma via ExoClaw]
  trigger: analyze-queue
  action:  Gemma 3 27B (NoClaw :11434 via ExoClaw) — classify reviews
           Claude Sonnet — extract top 3 pain points as JSON:
             { topic, frequency_score, evidence_quote,
               severity: low|medium|high, solvable_by }
           Zod schema validation
           KV category tracking for flywheel
  output:  → score-queue · DO state: "analyzed"
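The pain-point JSON above is validated with Zod in the worker; here is a dependency-free sketch of the same shape check (guard name hypothetical, fields from the spec above):

```typescript
// Mirrors the Zod schema in worker-analyzer: reject any Claude output
// that doesn't match the pain-point contract before it reaches scoring.
type Severity = 'low' | 'medium' | 'high';

interface PainPoint {
  topic: string;
  frequency_score: number;
  evidence_quote: string;
  severity: Severity;
  solvable_by: string;
}

export function isPainPoint(v: unknown): v is PainPoint {
  if (typeof v !== 'object' || v === null) return false;
  const o = v as Record<string, unknown>;
  return typeof o.topic === 'string'
    && typeof o.frequency_score === 'number'
    && typeof o.evidence_quote === 'string'
    && (o.severity === 'low' || o.severity === 'medium' || o.severity === 'high')
    && typeof o.solvable_by === 'string';
}
```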

worker-scorer
  trigger: score-queue
  action:  load weights from KV: map-leads:weights:current
           score = (severity×w1) + (volume×w2) + (recency×w3) + (gap×w4)
           tier:  hot >0.7 | warm 0.4–0.7 | cold <0.4
  routing: hot/warm → outreach-queue (all channels)
           all     → outcome-queue (flywheel baseline)
  output:  DO state: "scored"
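The scoring formula and tier cut-offs can be sketched directly (weight loading and feature extraction elided; names hypothetical):

```typescript
// score = severity*w1 + volume*w2 + recency*w3 + gap*w4, features in 0–1.
interface Weights { w1: number; w2: number; w3: number; w4: number }
interface Features { severity: number; volume: number; recency: number; gap: number }

export function scoreLead(f: Features, w: Weights): number {
  return f.severity * w.w1 + f.volume * w.w2 + f.recency * w.w3 + f.gap * w.w4;
}

// Tier cut-offs from the spec: hot >0.7, warm 0.4–0.7, cold <0.4.
export function tierFor(score: number): 'hot' | 'warm' | 'cold' {
  if (score > 0.7) return 'hot';
  if (score >= 0.4) return 'warm';
  return 'cold';
}
```

With weights that sum to 1 and features normalized to 0–1, scores stay in the 0.0–1.0 range the tiers assume.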

worker-outreach  [4 channels in parallel]
  trigger: outreach-queue
  → see Channels section

worker-flywheel  [EXP-N autoresearch]
  trigger: CF Cron (0 3 * * *) + flywheel-queue
  → see Flywheel section

Lead State Machine (Durable Object)

One DO per lead, persisted across Worker invocations, with a full history log of every state transition.

discovered → scraped → analyzed → scored → outreach_sent
           → opened → replied → converted
           ↘ bounced / cold_filed / unsubscribed
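A minimal sketch of the transition guard such a DO might enforce. The allowed-transition table below is inferred from the diagram, so treat the exact edges as an assumption:

```typescript
// Transition guard for the LeadStateMachine DO. The real DO also appends
// each transition to a history log before persisting.
type LeadState =
  | 'discovered' | 'scraped' | 'analyzed' | 'scored' | 'outreach_sent'
  | 'opened' | 'replied' | 'converted'
  | 'bounced' | 'cold_filed' | 'unsubscribed';

const TRANSITIONS: Record<LeadState, LeadState[]> = {
  discovered:    ['scraped'],
  scraped:       ['analyzed'],
  analyzed:      ['scored'],
  scored:        ['outreach_sent', 'cold_filed'],
  outreach_sent: ['opened', 'bounced', 'cold_filed', 'unsubscribed'],
  opened:        ['replied', 'cold_filed', 'unsubscribed'],
  replied:       ['converted', 'cold_filed'],
  converted:     [],
  bounced:       [],
  cold_filed:    [],
  unsubscribed:  [],
};

export function canTransition(from: LeadState, to: LeadState): boolean {
  return TRANSITIONS[from].includes(to);
}
```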

Queue Topology

Queue           Producer                   Consumer         Payload
scrape-queue    worker-discovery           worker-scraper   place_id, name, address, phone, website
analyze-queue   worker-scraper             worker-analyzer  ReviewCorpus JSONL + SiteContext
score-queue     worker-analyzer            worker-scorer    pain_points[], business metadata
outreach-queue  worker-scorer              worker-outreach  lead + score + tier + contact
outcome-queue   worker-outreach            worker-flywheel  lead_id, channel, event_type, timestamp
flywheel-queue  worker-scorer (all leads)  worker-flywheel  score baseline for experiments
// OUTREACH CHANNELS

4-Channel Outreach

Hot leads hit all 4 channels in parallel via Promise.allSettled. Warm leads get email only. Every outcome pushes an event to the outcome-queue for flywheel tracking. Calls require human confirmation before firing.

outreach-queue  →  worker-outreach
                         │
         ┌───────────────┼───────────────┬───────────────┐
         ▼               ▼               ▼               ▼
  5a Email        5b Form         5c GBP Msg      5d Voice Call
  Resend API      Firecrawl       Business        ElevenLabs TTS
  A/B subj.       + Playwright    Profile API     → VAPI outbound
  tracking        form fill       OAuth SA key    call stream
  webhook         CAPTCHA→skip    reply poll      VAPI webhook
         │               │               │               │
         └───────────────┴───────────────┴───────────────┘
                         │
                  outcome-queue
       { lead_id, channel, event_type, ts }
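The hot/warm fan-out can be sketched with injected channel senders (names hypothetical; the real worker wires actual API calls):

```typescript
// Promise.allSettled fan-out: hot leads hit all 4 channels, warm leads
// email only. Each settled result becomes one outcome event.
type Channel = 'email' | 'form' | 'gbp' | 'voice';
type Sender = () => Promise<void>;

export async function dispatchOutreach(
  tier: 'hot' | 'warm',
  senders: Record<Channel, Sender>,
): Promise<{ channel: Channel; status: 'fulfilled' | 'rejected' }[]> {
  const channels: Channel[] = tier === 'hot' ? ['email', 'form', 'gbp', 'voice'] : ['email'];
  const results = await Promise.allSettled(channels.map((c) => senders[c]()));
  // One failed channel never blocks the others; the rejection is just
  // another outcome for the flywheel.
  return results.map((r, i) => ({ channel: channels[i], status: r.status }));
}
```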

5a — Email (Resend)

Claude Sonnet generates subject + body from KV template (map-leads:prompts:email). Two subject variants for A/B. Open and click events flow back via Resend webhook → outcome-queue.

// Claude prompt (loaded from KV each run)
"Write a cold email from Jordan Hill at Organized AI to {name} in {city}.
Their customers mention: {top_pain_point}.
Evidence from reviews: '{evidence_quote}'.
Under 150 words. No generic openers. End with one clear question."
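One way to assign the two subject variants deterministically, so retries never flip a lead's A/B bucket. This is a sketch; the shipped worker may randomize instead:

```typescript
// Hash the lead_id to a stable 0/1 bucket. Re-running the worker for the
// same lead always picks the same subject variant.
export function pickSubjectVariant(leadId: string, variants: [string, string]): string {
  let h = 0;
  for (const ch of leadId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;  // simple string hash
  return variants[h % 2];
}
```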

5b — Contact Form (Firecrawl + Playwright)

form_url is extracted by worker-scraper in Phase 2. The CF Worker then drives a Playwright browser binding to fill and submit the form.

// Firecrawl scrape with form-fill actions
{
  "url": "{form_url}",
  "actions": [
    { "type": "fill", "selector": "input[type='text']", "value": "Jordan Hill" },
    { "type": "fill", "selector": "textarea", "value": "{personalized_message}" },
    { "type": "fill", "selector": "input[type='email']", "value": "jordan@organizedai.com" },
    { "type": "click", "selector": "button[type='submit']" }
  ]
}
// CAPTCHA detected → log and skip, push outcome: "captcha_blocked"

5c — GBP Messages (Google Business Profile)

Auth via GOOGLE_SA_JSON wrangler secret (set by scripts/setup-gws.sh). JWT token via google-auth-library. No manual OAuth refresh anywhere.

import { GoogleAuth } from 'google-auth-library';
const auth = new GoogleAuth({
  credentials: JSON.parse(env.GOOGLE_SA_JSON),
  scopes: ['https://www.googleapis.com/auth/business.manage'],
});
const token = await (await auth.getClient()).getAccessToken();
await fetch(`https://mybusiness.googleapis.com/v4/accounts/-/locations/${id}/messages`,
  { method: 'POST', headers: { Authorization: `Bearer ${token.token}` },
    body: JSON.stringify({ representativeName: 'Jordan Hill',
      messagePayloads: [{ text: message }] }) });

5d — Voice Call (ElevenLabs + VAPI)

Claude generates a 45–60 sec call script. ElevenLabs renders TTS audio, uploaded to CF R2. VAPI places the outbound call streaming the audio. VAPI webhook fires { answered, duration_sec, call_sentiment } back to outcome-queue.

// 1. Generate script with Claude
const script = await claude("Generate 45-second call script for {name}...");

// 2. ElevenLabs TTS → store the audio in R2 for replay/audit
const audio = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${env.ELEVEN_VOICE_ID}`, {
  method: "POST",
  headers: { "xi-api-key": env.ELEVENLABS_API_KEY, "Content-Type": "application/json" },
  body: JSON.stringify({ text: script, model_id: "eleven_turbo_v2_5" }),
});
await r2.put(`calls/${leadId}.mp3`, audio.body);

// 3. VAPI outbound call
await fetch("https://api.vapi.ai/call/phone", {
  method: "POST",
  headers: { Authorization: `Bearer ${env.VAPI_API_KEY}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    phoneNumberId: env.VAPI_PHONE_NUMBER_ID,
    customer: { number: phone },
    assistant: {
      voice: { voiceId: env.ELEVEN_VOICE_ID, provider: "11labs" },
      model: { provider: "anthropic", model: "claude-sonnet-4-6" }
    }
  })
});

Channel Performance Tracking

Channel         Outcome signals                  Tracked via         Cost/lead
Email (Resend)  open, click, reply, bounce       Resend webhook      ~$0.001
Form submit     success, captcha_blocked, error  Firecrawl response  ~$0.005
GBP message     sent, reply (poll)               GBP API response    ~$0.002
Voice call      answered, duration, sentiment    VAPI webhook        ~$0.03/min
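Every signal lands in outcome-queue as the same event shape. Here is a sketch of normalizing a Resend webhook into that payload; the Resend event shape shown is an assumption, so check the webhook docs for exact field names:

```typescript
// Normalize a delivery webhook into the outcome-queue payload
// { lead_id, channel, event_type, ts }.
interface ResendWebhookEvent {
  type: string;                          // e.g. "email.opened" (assumed shape)
  data: { email_id: string; to: string[] };
}

interface OutcomeEvent {
  lead_id: string;
  channel: 'email';
  event_type: string;
  ts: number;
}

export function toOutcomeEvent(
  ev: ResendWebhookEvent,
  leadIdFor: (emailId: string) => string,  // lookup: email_id → lead_id (e.g. via KV)
  now: number,
): OutcomeEvent {
  return {
    lead_id: leadIdFor(ev.data.email_id),
    channel: 'email',
    event_type: ev.type.replace(/^email\./, ''),  // "email.opened" → "opened"
    ts: now,
  };
}
```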
// SCRAPING LAYER

Firecrawl — The Scraping Layer

Firecrawl replaces SerpAPI as the sole scraping dependency. One tool handles both Google review extraction and business website crawling. Already connected via MCP (fc-a48926b25c7848bcad8c1354430359f7).

Why Firecrawl Over SerpAPI

Feature                  Firecrawl                     SerpAPI                          Verdict
Google Reviews scraping  Scrape Maps page via actions  Dedicated maps_reviews endpoint  SERP cleaner; FC needs parse step
Website crawl            ✓ Yes                         ✗ No                             FC only — critical for forms
Contact form discovery   ✓ Yes                         ✗ No                             FC only — SERP cannot do this
LLM-ready Markdown       ✓ Yes                         ✗ No                             FC → Claude/Gemma directly
JS rendering             ✓ Yes                         ✗ No                             Required for modern sites
MCP connected            ✓ Active                      ✗ Not connected                  FC zero setup
Free tier                500 scrapes/month             None ($50 min)                   FC free at low volume
Paid entry               $16/month (3k pages)          $50/month (5k searches)          FC 3× cheaper
1,000 leads/month        ~$16–$32                      ~$50–$80                         FC saves $34–$64/month

Review Scrape Pattern

// worker-scraper: Firecrawl scrape Maps reviews
const reviews = await firecrawl.scrapeUrl(
  `https://www.google.com/maps/place/?q=place_id:${placeId}`,
  {
    formats: ['markdown'],
    actions: [
      { type: 'click', selector: 'text=Reviews' },
      { type: 'scroll', direction: 'down', amount: 5000 },
      { type: 'scroll', direction: 'down', amount: 5000 },  // load more
    ]
  }
);
// Returns markdown with review text, star ratings, dates, owner responses
// Parse: extract 1-3 star reviews first (highest pain signal)
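The low-star-first ordering can be sketched as follows, assuming the markdown has already been parsed into star/text pairs (the parse itself depends on Firecrawl's output shape, which varies):

```typescript
// Put 1–3 star reviews first: they carry the strongest pain signal for
// the analyzer, so they survive any token-budget truncation downstream.
interface Review { stars: number; text: string }

export function orderForAnalysis(reviews: Review[]): Review[] {
  const pain = reviews.filter(r => r.stars <= 3).sort((a, b) => a.stars - b.stars);
  const rest = reviews.filter(r => r.stars > 3);
  return [...pain, ...rest];
}
```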

Website Crawl Pattern

// Crawl business site for contact form + context
const site = await firecrawl.crawlUrl(website, {
  crawlOptions: { maxDepth: 2, limit: 10 },
  scrapeOptions: {
    formats: ['extract'],
    extract: {
      schema: {
        contact_form_url: 'string',
        contact_emails: ['string'],
        staff_names: ['string'],
        services: ['string'],
        pricing_signals: ['string']
      }
    }
  }
});
// SiteContext stored to DO + passed downstream for email personalization

Data Quality Note

SerpAPI's google_maps_reviews endpoint returns pre-structured JSON with zero post-processing. Firecrawl returns Markdown that needs a lightweight parse. For LLM-based analysis (Gemma handles this) the difference is negligible — both produce the review text a model needs. Use Firecrawl for everything; add SerpAPI only if Firecrawl review parsing proves unreliable across the first 100-lead batch.

// GWS CLI AUTH

GWS CLI — Google Auth

All Google API authentication uses gcloud Application Default Credentials (ADC) and service accounts. No manual OAuth tokens, no refresh token rotation. Run scripts/setup-gws.sh once per project.

GWS CLI Setup (Google APIs)

# scripts/setup-gws.sh — run once to provision all GCP resources
set -euo pipefail
PROJECT_ID="${GCP_PROJECT_ID:?Set GCP_PROJECT_ID}"
SA_NAME="map-leads-sa"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# 1 — Authenticate
gcloud auth login --quiet
gcloud config set project "$PROJECT_ID"
gcloud auth application-default login --quiet

# 2 — Enable required APIs
gcloud services enable \
  places.googleapis.com \
  mybusinessbusinessinformation.googleapis.com \
  mybusinessaccountmanagement.googleapis.com \
  businessprofileperformance.googleapis.com

# 3 — Create service account + grant roles
gcloud iam service-accounts create "$SA_NAME" --display-name="Map Leads SA"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/businessprofileperformance.viewer"

# 4 — Generate SA key + write to .env
mkdir -p credentials
gcloud iam service-accounts keys create credentials/map-leads-sa.json \
  --iam-account="$SA_EMAIL"
echo "GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/credentials/map-leads-sa.json" >> .env

# 5 — Push to CF Workers as wrangler secret
cat credentials/map-leads-sa.json | wrangler secret put GOOGLE_SA_JSON \
  --name map-leads-outreach

Auth Split: Local vs CF Workers

Local Dev

GOOGLE_APPLICATION_CREDENTIALS env var pointing to the SA JSON file. gcloud ADC picks it up automatically. No code changes needed between local and CI.

CF Workers

GOOGLE_SA_JSON wrangler secret holds the full SA JSON. google-auth-library reads it via env.GOOGLE_SA_JSON. Zero credentials on disk in production.

OAuth Codex — Claude Auth (no API key)

Claude calls never use ANTHROPIC_API_KEY. All inference routes through OpenClaw on claws-mac-mini, which is authenticated via Claude Code OAuth (the claude CLI).

// CF Workers: call Claude via ExoClaw → Tailscale → OpenClaw → claude CLI
// packages/agents/src/openclaw-client.ts

export async function claudeViaOpenClaw(env, req) {
  return fetch(`${env.OPENCLAW_URL}/v1/messages`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Codex OAuth token authenticates this CF Worker to OpenClaw
      'X-Codex-Token': env.OPENCLAW_CODEX_TOKEN,
    },
    body: JSON.stringify({
      model: req.model ?? 'claude-sonnet-4-6',
      system: req.system,
      messages: req.messages,
    }),
  }).then(r => r.json());
}

// Local scripts: call claude CLI directly (OAuth, no key)
// packages/agents/src/openclaw-local.ts
const result = execSync(`claude -p ${JSON.stringify(message)} --append-system-prompt ${JSON.stringify(system)}`);

CF Workers

Use OPENCLAW_URL + OPENCLAW_CODEX_TOKEN wrangler secrets. ExoClaw bridges the Workers to OpenClaw over Tailscale.

Local scripts / flywheel

claude CLI subprocess or localhost:18789. Claude Code OAuth handles auth. No token or key needed.

Places API Key (separate from SA)

# Create a restricted API key for Places API only
gcloud services api-keys create \
  --api-target=service=places.googleapis.com \
  --display-name="map-leads-places"
# Returns keyString → add to .env as GOOGLE_PLACES_API_KEY
# Push to CF Workers:
echo $GOOGLE_PLACES_API_KEY | wrangler secret put PLACES_API_KEY \
  --name map-leads-discovery

Why Not Manual OAuth?

Manual OAuth tokens                          GWS CLI (gcloud SA)
Expire — need refresh token rotation         SA keys don't expire (rotate annually)
CLIENT_ID + SECRET + REFRESH_TOKEN in .env   One JSON file, one wrangler secret
Browser OAuth flow required per environment  gcloud auth login once, generates SA
Manual token refresh in CF Workers           google-auth-library handles JWT refresh
3 env vars to manage per deployment          1 wrangler secret per worker
// AUTORESEARCH FLYWHEEL

The Autoresearch Flywheel

After outreach fires, every outcome feeds an EXP-N experiment loop that runs at 3am. It tests variations in scoring weights and prompt templates against real conversion data. Winners overwrite the KV config the next morning — all workers self-improve overnight.

FLYWHEEL LOOP  (mirrors gtm-autoresearch EXP-N pattern)

outcomes (7 days)
     │
     ▼
┌──────────────────────────────────────────────────────┐
│  Aggregate per segment                               │
│  • open rate by channel                              │
│  • reply rate by pain_point_category                 │
│  • call answer rate by time_of_day                   │
│  • convert rate by lead_score_tier                   │
└──────────────────────┬───────────────────────────────┘
                       │
                       ▼
┌──────────────────────────────────────────────────────┐
│  EXP-N round loop (max 20 rounds)                    │
│                                                      │
│  ┌──────────┐   ┌──────────┐   ┌──────────────────┐ │
│  │  SCORE   │──▶│ MUTATE   │──▶│ RE-SCORE vs      │ │
│  │ baseline │   │ weights  │   │ outcome data     │ │
│  └──────────┘   └──────────┘   └────────┬─────────┘ │
│        ▲                                │            │
│        │         improved?              │            │
│        │    yes → keep, write KV        │            │
│        │    no  → revert                │            │
│        └────────────────────────────────┘            │
└──────────────────────────────────────────────────────┘
                       │
                       ▼
          if lift > 5%: write KV weights + Slack ping
          always: export JSONL to R2 (EXP-N.jsonl)

The Round (simplified)

const baseline = aggregateOutcomes(past7Days);
let currentWeights = JSON.parse(await kv.get('map-leads:weights:current'));
let currentScore = simulateWithWeights(currentWeights, baseline);
const previousScore = currentScore;   // pre-loop score, for the 5% lift check
for (let round = 0; round < MAX_ROUNDS; round++) {
  const weights = mutateWeights(currentWeights);
  const score = simulateWithWeights(weights, baseline);
  if (score > currentScore) {         // improved → keep and persist
    currentWeights = weights;
    currentScore = score;
    await kv.put('map-leads:weights:current', JSON.stringify(weights));
  }
  await r2.put(`experiments/EXP-${expNum}-round-${round}.json`, JSON.stringify({
    weights, score, best_score: currentScore
  }));
}
if (currentScore > previousScore * 1.05) {
  await slack.post(`Model improved ${pct}% — weights updated`);
}
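One plausible mutateWeights for the loop above: jitter each weight, clamp it positive, then renormalize so the four weights keep summing to 1. A sketch, not the shipped implementation:

```typescript
// Random local search step over the scoring weights (w1–w4).
type Weights = { w1: number; w2: number; w3: number; w4: number };

export function mutateWeights(w: Weights, step = 0.05): Weights {
  // Jitter within ±step, clamped away from zero so no feature dies out.
  const jitter = (x: number) => Math.max(0.01, x + (Math.random() * 2 - 1) * step);
  const raw = { w1: jitter(w.w1), w2: jitter(w.w2), w3: jitter(w.w3), w4: jitter(w.w4) };
  const sum = raw.w1 + raw.w2 + raw.w3 + raw.w4;
  // Renormalize so scores stay in the 0–1 range the tiers assume.
  return { w1: raw.w1 / sum, w2: raw.w2 / sum, w3: raw.w3 / sum, w4: raw.w4 / sum };
}
```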

What the Flywheel Improves

KV key                             What changes                     Update trigger
map-leads:weights:current          Scoring formula weights (w1–w4)  Lift > 5% on conversion vs baseline
map-leads:prompts:email            Claude email template            A/B subject winner after 100 sends
map-leads:prompts:call_script      VAPI call script template        Call answer rate diff > 10%
map-leads:prompts:pain_extraction  Gemma/Claude pain point prompt   Pain-to-convert correlation shift

JSONL Export (fine-tune compatible)

Every experiment round exports JSONL to CF R2 in the same format as gtm-autoresearch's fine-tune pipeline. Feed it into Phase 3 (JSONL Training Data) of the 6-phase pipeline for a client-specialized lead scoring brain.

// EXP-N.jsonl format (gtm-autoresearch compatible): one JSON object per line
{"lead_id":"pl_abc","pain_points":[...],"score":0.82,"channels_used":["email","form"],"outcomes":{"opened":true,"replied":false}}
// IMPLEMENTATION

Stack & Conventions

TypeScript-first, pnpm monorepo, no build step for Workers. Each Worker is an independent package. Shared types and DB helpers live in packages/. Same conventions as the broader Organized AI codebase.

Runtime

Node 22 + TypeScript pnpm workspaces Turborepo CF Workers runtime no build step

Key Dependencies

Package                    Purpose
@anthropic-ai/sdk          Claude Sonnet 4.6 — email + call script generation
google-auth-library        GBP JWT auth from GOOGLE_SA_JSON wrangler secret
zod                        Pain point JSON schema validation in worker-analyzer
@cloudflare/workers-types  CF Workers + Queues + DO + KV + R2 types
wrangler (dev)             Deploy all Workers + Pages + manage secrets

External Services

Google Places API (New)

Text Search endpoint. Field-masked to minimize cost. API key provisioned via gcloud services api-keys create.

Firecrawl

MCP-connected. Reviews + site crawl + form submission. Key fc-a48926b25c7848bcad8c1354430359f7 already active.

Anthropic (Claude Sonnet 4.6)

Email + call script generation only. Gemma handles 8/10 AI tasks at zero cost. Claude spend ~$6/month at 2k leads.

Gemma 3 27B (NoClaw)

Ollama on claws-mac-mini M4 at :11434. Reached via ExoClaw CF Worker → Tailscale bridge. Free inference.

ElevenLabs + VAPI

ElevenLabs: TTS audio from call script. VAPI: places outbound call, streams audio, fires outcome webhook.

Resend

Email delivery. A/B subject lines. Open + click webhooks → outcome-queue. ~$5/month at 2k emails.

Monorepo Layout

apps/
  worker-discovery/     # CF Worker — Google Places
  worker-scraper/       # CF Worker — Firecrawl
  worker-analyzer/      # CF Worker — Claude + Gemma
  worker-scorer/        # CF Worker — lead scoring
  worker-outreach/      # CF Worker — 4 channels
  worker-flywheel/      # CF Worker — EXP-N autoresearch
  dashboard/            # CF Pages — Kanban + analytics
packages/
  types/                # shared TypeScript types
  agents/               # shared agent base class
  db/                   # Durable Object + KV helpers
scripts/
  setup-gws.sh          # GCP service account provisioning
  deploy-workers.sh     # deploy all 6 Workers + dashboard
  deploy-docs.sh        # redeploy plan docs to CF Pages
.env.example            # GOOGLE_APPLICATION_CREDENTIALS + others
CLAUDE.md               # agent conventions for Claude Code

Conventions

  • All Workers are independently deployable: wrangler deploy --name map-leads-discovery
  • All outputs idempotent — re-running a Worker never duplicates leads or outreach
  • Errors logged to CF Worker logs; DO tracks failure states per lead
  • Every KV write is namespaced: map-leads:{type}:{id}
  • JSONL exports to R2 follow gtm-autoresearch EXP-N format for fine-tune pipeline compatibility
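The idempotency convention can be sketched as a KV-backed guard around each side effect (the KV namespace is reduced to a Map for illustration; the key format follows the namespacing bullet above):

```typescript
// Guard a side effect behind a namespaced key so a Worker re-run is a
// no-op: the effect runs at most once per key.
export async function onceOnly(
  kv: Map<string, string>,            // stand-in for a Workers KV namespace
  key: string,                        // e.g. "map-leads:outreach:pl_abc:email"
  effect: () => Promise<void>,
): Promise<boolean> {
  if (kv.has(key)) return false;      // already done — skip the side effect
  await effect();
  kv.set(key, String(Date.now()));    // record completion for future runs
  return true;
}
```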
// DEPLOY & RUN

Deploy & Run

Three scripts handle everything: GWS CLI provisioning, Worker deployment, and docs deploy. Wrangler handles all Cloudflare infrastructure. No CI/CD required — run locally from jordans-mac-mini or via Claude Code Web.

1. GWS CLI Setup (once)

GCP_PROJECT_ID=your-project bash scripts/setup-gws.sh
# Provisions:
#   • gcloud auth login + ADC
#   • Enables Places, GBP, IAM APIs
#   • Creates map-leads-sa service account
#   • Generates credentials/map-leads-sa.json
#   • Writes GOOGLE_APPLICATION_CREDENTIALS to .env
#   • Pushes GOOGLE_SA_JSON to map-leads-outreach Worker

2. Configure .env

cp .env.example .env
# Fill in:
# No ANTHROPIC_API_KEY -- Claude routes through OpenClaw (OAuth Codex)
OPENCLAW_URL=https://exoclaw.your-account.workers.dev
OPENCLAW_CODEX_TOKEN=codex_...
FIRECRAWL_API_KEY=fc-a48926b25c7848bcad8c1354430359f7
RESEND_API_KEY=re_...
ELEVENLABS_API_KEY=...
VAPI_API_KEY=...
VAPI_PHONE_NUMBER_ID=...
CLOUDFLARE_ACCOUNT_ID=691fe25d377abac03627d6a88d3eeac9
# GOOGLE_APPLICATION_CREDENTIALS written automatically by setup-gws.sh

3. Push Wrangler Secrets

# Push all secrets to their respective Workers
cat credentials/map-leads-sa.json | wrangler secret put GOOGLE_SA_JSON --name map-leads-outreach
# OAuth Codex — replaces ANTHROPIC_API_KEY
echo $OPENCLAW_URL          | wrangler secret put OPENCLAW_URL          --name map-leads-analyzer
echo $OPENCLAW_URL          | wrangler secret put OPENCLAW_URL          --name map-leads-outreach
echo $OPENCLAW_CODEX_TOKEN  | wrangler secret put OPENCLAW_CODEX_TOKEN  --name map-leads-analyzer
echo $OPENCLAW_CODEX_TOKEN  | wrangler secret put OPENCLAW_CODEX_TOKEN  --name map-leads-outreach
# Other service secrets
echo $FIRECRAWL_API_KEY     | wrangler secret put FC_API_KEY            --name map-leads-scraper
echo $RESEND_API_KEY        | wrangler secret put RESEND_API_KEY        --name map-leads-outreach
echo $ELEVENLABS_API_KEY    | wrangler secret put ELEVENLABS_API_KEY    --name map-leads-outreach
echo $VAPI_API_KEY          | wrangler secret put VAPI_API_KEY          --name map-leads-outreach
echo $PLACES_API_KEY        | wrangler secret put PLACES_API_KEY        --name map-leads-discovery

4. Deploy Everything

# Deploy all 6 CF Workers + dashboard
bash scripts/deploy-workers.sh

# Redeploy plan docs to CF Pages
bash scripts/deploy-docs.sh

# Individual Worker redeploy
CLOUDFLARE_ACCOUNT_ID=691fe25d377abac03627d6a88d3eeac9 \
  wrangler deploy --name map-leads-discovery \
  --config apps/worker-discovery/wrangler.toml \
  --commit-dirty=true

Deploying These Docs

# The pattern that deployed this page
wrangler pages project create map-leads-arch --production-branch=main
wrangler pages deploy /path/to/docs \
  --project-name=map-leads-arch \
  --branch=main \
  --commit-dirty=true

Architecture Options (A → C recommended)

               A — Cloud    B — Local      C — Hybrid ✓  D — Hermes
AI cost        High ($30+)  Free           Low ($6)      Free
Email quality  Best         OK             Best          OK
Reliability    Best         Mac-dependent  Good          Mac-dependent
Hermes memory  None         Full           Partial       Full
Score / 50     37           38             39            36

Operational Checklist

  • Never auto-fire voice calls — keep call queue requiring human confirmation. Email and form auto-send; calls do not.
  • Rotate SA key annually — gcloud iam service-accounts keys create, then repush wrangler secret.
  • Watch flywheel weight drift — if open rates drop >15%, roll back weights to previous KV version.
  • Monitor ExoClaw uptime — if claws-mac-mini is down, Arch C degrades to all-Claude. Set Slack alert on Gemma latency spike.
  • Review JSONL exports weekly — feed high-quality rounds into gtm-autoresearch fine-tune pipeline for compound improvement.

Quick Links

GitHub
github.com/Organized-AI/map-leads — scripts, wrangler.toml, GWS CLI setup
Plan v1
map-leads-plan-v1.pages.dev — Phase 0–6 baseline plan
Plan v2
map-leads-plan-v2.pages.dev — Hardened with all channels + flywheel
Reference
gtm-autoresearch-guide.pages.dev — same loop pattern, GTM domain