
Designing Systems That Learn: Fleet Bots and Self-Recycling Memory

How I deployed three specialized bot fleets with memory systems that get smarter over time. It's not about what AI can do today — it's about designing systems that generate their own intelligence.


The Problem With Smart Agents

You can build an agent that's brilliant today. It queries your systems, synthesizes data, sends perfect reports. But tomorrow it has amnesia.

Every run starts from zero. It doesn't remember that last week's ad ROAS was 3.2x, that inventory of SKU-401 has been declining for three weeks, that this customer always responds to abandoned cart emails within 4 hours.

The agent is reactive, not predictive. It sees snapshots, not patterns. It tells you what happened, not what's about to happen.

The solution isn't smarter models. It's systems that remember.

The Fleet Architecture

Today I finished deploying three specialized bot fleets on a dedicated VPS. Each bot runs as its own OpenClaw gateway instance with systemd supervision — if it crashes, it restarts. If the server reboots, they come back up.

The architecture looks like this:

┌───────────────────────────────────────────────────────────────────────┐
│                        FLEET BOT ARCHITECTURE                         │
├───────────────────────────────────────────────────────────────────────┤
│                                                                       │
│  ┌───────────────────┐  ┌───────────────────┐  ┌───────────────────┐  │
│  │      SHIPBOT      │  │      MKTGBOT      │  │     SALESBOT      │  │
│  │   (Operations)    │  │    (Marketing)    │  │      (Sales)      │  │
│  ├───────────────────┤  ├───────────────────┤  ├───────────────────┤  │
│  │ pending-orders    │  │ review-monitor    │  │ lead-followup     │  │
│  │   8:00am daily    │  │   9:00am daily    │  │   9:00am daily    │  │
│  │ amazon-health     │  │ meta-ads-report   │  │ pipeline-monitor  │  │
│  │   7:00am daily    │  │   9:15am daily    │  │   9:00am daily    │  │
│  │ stock-replenish   │  │ competitor-watch  │  │ dtc-promo         │  │
│  │   Mon 7:00pm      │  │   10:00am daily   │  │   Mon only        │  │
│  │                   │  │ seo-pulse         │  │ b2b-pipeline      │  │
│  │                   │  │   Mon 8:00am      │  │   Mon only        │  │
│  └─────────┬─────────┘  └─────────┬─────────┘  └─────────┬─────────┘  │
│            │                      │                      │            │
│            └──────────────────────┼──────────────────────┘            │
│                                   ▼                                   │
│                     ┌───────────────────────────┐                     │
│                     │    SHARED DATA SOURCES    │                     │
│                     │    NetSuite | Shopify     │                     │
│                     │    Amazon | Meta Ads      │                     │
│                     │  Airtable | ShipStation   │                     │
│                     └───────────────────────────┘                     │
│                                                                       │
└───────────────────────────────────────────────────────────────────────┘

Each bot is domain-focused.

They're not siloed for permissions or security. They're siloed for cognitive clarity. Each bot has deep context about its domain. It knows the metrics that matter, the thresholds that trigger alerts, the stakeholders to notify.

The Three-Tier Memory System

Here's where it gets interesting. Each bot has a self-healing memory system with three layers:

Layer 1: Daily Files

Every weekday morning at 5:30am, a context-keeper agent wakes up and writes a daily file:

~/.openclaw/context/daily/
├── 2026-02-03-monday.md
├── 2026-02-04-tuesday.md
├── 2026-02-05-wednesday.md
├── 2026-02-06-thursday.md
├── 2026-02-07-friday.md
└── 2026-02-08-saturday.md
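
The `2026-02-03-monday.md` naming convention above can be reproduced with a one-liner. A minimal sketch, assuming GNU `date` and a POSIX shell (the variable name is mine, not OpenClaw's):

```shell
#!/bin/sh
# Build today's daily-file name in the 2026-02-03-monday.md convention:
# ISO date, then the lowercased weekday name.
DAILY_FILE="$(date +%F-%A | tr '[:upper:]' '[:lower:]').md"
echo "$DAILY_FILE"
```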

Each file captures the day's orders, alerts, and agent runs.

These files stay live for 14 days, then get moved to an archive folder. At 30 days, they're deleted.
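
The live/archive/delete windows can be enforced with a small cleanup pass. A sketch of that lifecycle, assuming the paths above, an `archive/` sibling directory, and file modification time as the age signal:

```shell
#!/bin/sh
# Daily-file lifecycle: live for 14 days, archived until 30, then deleted.
# (Directory layout from the post; the archive/ name is an assumption.)
CTX="${HOME}/.openclaw/context"
mkdir -p "$CTX/daily" "$CTX/archive"

# Move dailies older than 14 days into the archive folder...
find "$CTX/daily" -name '*.md' -mtime +14 -exec mv {} "$CTX/archive/" \;

# ...and delete anything archived past the 30-day mark.
find "$CTX/archive" -name '*.md' -mtime +30 -delete
```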

Layer 2: Weekly Summaries

Every Sunday at 8pm, a weekly-digest agent synthesizes the week:

~/.openclaw/context/weekly/
├── 2026-week-05.md
├── 2026-week-06.md
└── 2026-week-07.md

The weekly digest pulls patterns across the dailies: trends and week-over-week comparisons.

Weeklies are kept for 12 weeks (~3 months).
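
The `2026-week-05.md` naming follows the ISO week number, which `date` can emit directly. A sketch, assuming GNU `date`:

```shell
#!/bin/sh
# Derive this week's digest filename (ISO year + zero-padded ISO week),
# matching the 2026-week-05.md convention shown above.
WEEKLY_FILE="$(date +%G-week-%V).md"
echo "$WEEKLY_FILE"
```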

Layer 3: Permanent Memory

Only truly significant information gets promoted to MEMORY.md: baselines, thresholds, and major changes.

This file is permanent and version-controlled.
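
Keeping MEMORY.md version-controlled can be as simple as committing it after each promotion. A hedged sketch; the repo living alongside the context files, and the commit identity, are my assumptions (the post only says "version-controlled"):

```shell
#!/bin/sh
# Commit MEMORY.md after a promotion so every baseline or threshold
# change has history. Repo location and identity are assumptions.
CTX="${HOME}/.openclaw/context"
mkdir -p "$CTX"
cd "$CTX" || exit 1
git init -q 2>/dev/null || true
touch MEMORY.md
git add MEMORY.md
git -c user.name=memory-bot -c user.email=bot@example.invalid \
    commit -q -m "memory: promote weekly findings" || true  # no-op if unchanged
```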

┌────────────────────────────────────────────────────────────┐
│                MEMORY LIFECYCLE & RECYCLING                │
├────────────────────────────────────────────────────────────┤
│                                                            │
│  Daily Events                                              │
│  ─────────────                                             │
│  Orders, alerts,      ┌──────────────┐                     │
│  agent runs     ──▶   │ Daily Files  │ (14 days live)      │
│                       │ Mon-Fri 5:30 │ → archive →         │
│                       └──────┬───────┘ delete at 30        │
│                              │                             │
│                              │ synthesize                  │
│                              ▼                             │
│                       ┌──────────────┐                     │
│                       │    Weekly    │  (12 weeks)         │
│                       │    Digest    │  Patterns, trends,  │
│                       │  Sunday 8pm  │  WoW                │
│                       └──────┬───────┘                     │
│                              │                             │
│                              │ promote significant         │
│                              ▼                             │
│                       ┌──────────────┐                     │
│                       │  MEMORY.md   │  (permanent)        │
│                       │              │  Baselines,         │
│                       │              │  thresholds, major  │
│                       │              │  changes only       │
│                       └──────────────┘                     │
│                                                            │
└────────────────────────────────────────────────────────────┘

All Deployed Agents

Here's the complete agent roster across all three bots:

| Bot      | Agent                | Schedule       | Purpose                          |
|----------|----------------------|----------------|----------------------------------|
| Shipbot  | context-keeper       | 5:30am Mon-Fri | Write daily memory file          |
|          | pending-orders-alert | 8:00am daily   | Unfulfilled orders > 24 hrs      |
|          | amazon-health        | 7:00am daily   | Seller health metrics            |
|          | stock-replenishment  | Mon 7:00pm     | Inventory reorder suggestions    |
| Mktgbot  | context-keeper       | 5:30am Mon-Fri | Write daily memory file          |
|          | review-monitor       | 9:00am daily   | New reviews across platforms     |
|          | meta-ads-report      | 9:15am daily   | Yesterday's ad performance       |
|          | competitor-watch     | 10:00am daily  | Competitor pricing/positioning   |
|          | seo-pulse            | Mon 8:00am     | Ranking changes, GSC data        |
| Salesbot | context-keeper       | 5:30am Mon-Fri | Write daily memory file          |
|          | lead-followup        | 9:00am daily   | Stale leads needing outreach     |
|          | pipeline-monitor     | 9:00am daily   | Deals at risk or closing soon    |
|          | dtc-promo-monitor    | Mon 9:00am     | Discount code usage analysis     |
|          | b2b-pipeline         | Mon 9:00am     | Wholesale deal velocity          |
| All Bots | weekly-digest        | Sun 8:00pm     | Synthesize week, archive dailies |

The Cron Schedules

Each bot's crontab looks like this:

# Shipbot crontab
30 5 * * 1-5 /usr/local/bin/openclaw agent run context-keeper
0 7 * * * /usr/local/bin/openclaw agent run amazon-health
0 8 * * * /usr/local/bin/openclaw agent run pending-orders-alert
0 19 * * 1 /usr/local/bin/openclaw agent run stock-replenishment
0 20 * * 0 /usr/local/bin/openclaw agent run weekly-digest

# Mktgbot crontab
30 5 * * 1-5 /usr/local/bin/openclaw agent run context-keeper
0 9 * * * /usr/local/bin/openclaw agent run review-monitor
15 9 * * * /usr/local/bin/openclaw agent run meta-ads-report
0 10 * * * /usr/local/bin/openclaw agent run competitor-watch
0 8 * * 1 /usr/local/bin/openclaw agent run seo-pulse
0 20 * * 0 /usr/local/bin/openclaw agent run weekly-digest

# Salesbot crontab
30 5 * * 1-5 /usr/local/bin/openclaw agent run context-keeper
0 9 * * * /usr/local/bin/openclaw agent run lead-followup
0 9 * * * /usr/local/bin/openclaw agent run pipeline-monitor
0 9 * * 1 /usr/local/bin/openclaw agent run dtc-promo-monitor
0 9 * * 1 /usr/local/bin/openclaw agent run b2b-pipeline
0 20 * * 0 /usr/local/bin/openclaw agent run weekly-digest
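
Because `crontab` replaces a user's whole table on install, it's worth sanity-checking a schedule file before loading it with `crontab -u <user> <file>`. A minimal sketch; the validation rule (five schedule fields plus a command) is mine, not OpenClaw's:

```shell
#!/bin/sh
# Check a crontab file before installing it: every non-comment, non-blank
# line needs five schedule fields plus at least a command.
CRON=$(mktemp)
cat > "$CRON" <<'EOF'
# Shipbot crontab (excerpt)
30 5 * * 1-5 /usr/local/bin/openclaw agent run context-keeper
0 20 * * 0 /usr/local/bin/openclaw agent run weekly-digest
EOF
BAD=$(awk '!/^#/ && NF && NF < 6 {print NR": "$0}' "$CRON")
if [ -z "$BAD" ]; then VALID=yes; else VALID=no; fi
echo "crontab valid: $VALID"
```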

What Two Weeks of Data Enables

Here's the thesis: AI isn't useful because it's smart today. It's useful when you design systems that get smarter over time.

After two weeks of daily memory files, the bots can compare today against recent history instead of starting every run from zero.

What's Possible After 4-8 Weeks

With a full month of data, the system unlocks predictive capabilities: it can flag what's about to happen, not just report what did.

The Compounding Effect

Every day the system runs, it gets smarter. Not because the model improved, but because the data it's learning from got richer. This is the difference between an assistant and an operating system.

Why This Architecture Works

1. Separation of Concerns

Each bot is narrowly focused. Shipbot doesn't care about ad ROAS. Mktgbot doesn't care about inventory. This makes each agent's context smaller, cheaper to run, and easier to debug.

2. Self-Healing Memory

The three-tier system prevents memory bloat. Dailies capture everything, weeklies compress patterns, permanent memory holds only what's truly significant. The system auto-prunes stale data.

3. Systemd Supervision

Each bot runs as a systemd service. If it crashes, systemd restarts it. If the server reboots, the bots come back up automatically. No babysitting.

# /etc/systemd/system/shipbot.service
[Unit]
Description=Shipbot OpenClaw Gateway
After=network.target

[Service]
Type=simple
User=shipbot
WorkingDirectory=/home/shipbot
ExecStart=/usr/local/bin/openclaw gateway start
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
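
Wiring the unit in is the standard systemd sequence. A sketch, guarded so it is a no-op on machines where the unit file isn't installed:

```shell
#!/bin/sh
# Register and start the gateway under systemd (run as root on the VPS).
# Guarded so this sketch is safe where shipbot.service isn't present.
if [ -f /etc/systemd/system/shipbot.service ]; then
  systemctl daemon-reload          # pick up the new unit file
  systemctl enable --now shipbot   # start now, and on every boot
  systemctl --no-pager status shipbot
else
  echo "shipbot.service not installed; nothing to do"
fi
RAN=1
```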

4. Cron-Driven Execution

Agents don't run continuously. They wake up on schedule, do their work, write their findings, and go back to sleep. This keeps costs low and logs clean.

What's Next

The foundation is in place. Now comes the iteration.


This is not about replacing people. It's about giving them leverage. A COO shouldn't spend Monday morning pulling data from five systems to figure out what happened last week. The system should tell them — and tell them what it means.

The goal isn't smarter AI. It's systems that generate their own intelligence over time.

That's what I built today.


I'm the COO at Innovative Eyewear (NASDAQ: LUCY). Building agentic operations infrastructure. Follow me on Twitter/X or check out OpenClaw.