Neon Triforce / Agents

Operate your revenue engine like a system.
At AI speed.

Most B2B companies bolt AI on top of broken processes. Neon Triforce builds the Company Operating System first (strategy, rhythm, data spine), then installs the agents that execute it. This page shows that system, running on us, with receipts.

62
Skills
across 3 layers
33
Scheduled tasks
12 Cowork + 21 Code
25+
Use cases
mapped to the bowtie
1457+
Vault files
feeding the system

When something breaks, point at the process, not the person. 94% of problems are system issues. The agent stack makes that explicit.

The Company OS sits above the agents. Strategy sets direction, agents execute, signals flow back up. No system, no learning.

Eight tiles, no more. Three layers, no fewer. If a metric does not earn its place, it goes in a deep-dive, not on the wall.

Strategy on top. Agents at the bottom. Signals flow both ways.

Two diagrams capture the whole intellectual property. The Company OS sets direction through three rings. The agent stack executes through four layers.

5

Strategy · Company OS

Direction flows down · Results flow up
Sets direction

Governance

Strategy, priorities, KPIs, rhythm. The control system that steers.

Enables execution

Enablement

People, process, platforms, data spine. The infrastructure value runs on.

Generates results

ICP Value Loops

Product engine + revenue engine. Where value gets created and learned.

Sets direction for the agent stack
4

Interface

How humans talk to the system.

Cowork (desktop chat agent)
Claude Code CLI
Mobile dispatch
Slack channels (8 topical channels)
3

Intelligence

Skills, scheduled agents, sub-agents.

62 skills across 3 layers
33 scheduled tasks across two schedulers
Sub-agents on demand
Cluster-loaded by context
2

Memory

Long-term memory. INDEX-linked, semantically retrievable.

666+ markdown files in connected vault
Semantic retrieval (ChromaDB)
Neon Canon (22 IP docs)
153+ call transcripts, 100+ contact profiles
1

Data Sources

Where signals come from.

Clarify CRM, Gmail, Drive, Calendar
Fireflies, Slack, Kondo (LinkedIn DMs)
Apify, Notion, AuthoredUp
Web search and fetch

Eight tiles. One board. Click any tile to see what is behind it.

≤8 tiles; every tile has an owner, target, band, breach rule, and latest note. Applied to a one-person business with VAs, the owner column lists which operating role wears the hat.
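
Expressed as a type, the tile contract above might look like this (field names are illustrative, not the dashboard's actual schema):

```typescript
// Hypothetical sketch of the tile contract. Field names are
// illustrative, not the site's real data model.
interface Tile {
  name: string;
  owner: string;            // a name, or the operating role wearing the hat
  target: number;
  band: [number, number];   // the normal range: [low, high]
  actual: number;
  breachRule: string;       // e.g. "2 consecutive weeks below band"
  latestNote: string;
}

// A board is valid only if it holds the ≤8-tile cap and every
// tile carries its full contract.
function isValidBoard(tiles: Tile[]): boolean {
  return tiles.length <= 8 &&
    tiles.every(t => t.owner !== "" && t.breachRule !== "");
}
```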

Manifest live · refreshed today

Generator works. Refresh is currently manual: npm run manifest && git push. A consolidated Code-side scheduled routine is spec'd in Inbox/Work-Orders/WO-agents-site-code-publisher-2026-05-07.md but not yet installed.

25 places AI earns its keep, mapped to where they belong.

Acquisition is frequency-based and polynomial. Small conversion gains compound across many steps. Retention is time-based and exponential. Small GRR/NRR gains compound across years. Mutual Commit is the transformation point between the two.
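
A quick worked example of the two compounding modes (the numbers are illustrative, not Neon benchmarks):

```typescript
// Acquisition: a small conversion gain compounds across the funnel's steps.
// A 5% relative lift at each of 6 steps multiplies through:
const steps = 6;
const lift = 1.05;
const acquisitionMultiplier = lift ** steps; // ≈ 1.34× more throughput

// Retention: a GRR/NRR gain compounds across years.
// Moving NRR from 105% to 110%, measured over 5 years:
const years = 5;
const retainedBefore = 1.05 ** years; // ≈ 1.28× revenue base
const retainedAfter = 1.10 ** years;  // ≈ 1.61× revenue base
```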

25

Use cases

7

Bowtie stages

4

Maturity stages

7

At Stage 3 (AI-assisted)

Where each use case sits today.

Honest self-assessment. Most use cases sit in Stages 1–2; a few have moved to Stage 3 (AI-Assisted). Stage 4 is empty. Reviewed quarterly; stage transitions require a proof artefact.

Stage 1 — Ad-hoc

7 use cases

Manual or 1–2 point solutions. No measured ROI. Humans review 100% of outputs.

Stage 2 — Programmatic

7 use cases

3–5 integrated tools. Native CRM connectors. Automated data pipelines. Standardised. Humans review before action.

Stage 3 — AI-Assisted

7 use cases

5–8 tools with copilots. Deep CRM integration. 80%+ adoption. Humans act on recommendations.

Stage 4 — AI-Orchestrated

0 use cases

8+ tools with autonomous agents. Real-time bidirectional feedback. Humans handle exceptions only.

None yet. Stage 4 requires autonomous agents operating within defined lanes, with proof of consistent quality and exception-only human oversight.

Two schedulers. 33 scheduled workflows. One reconciliation.

Cowork has 12 tasks via the scheduled-tasks MCP. Claude Code has 21 in its native Routines panel. Six appear in both. That is not automatically a problem, but it raises a question worth asking: is each overlap intentional (different jobs, same name), redundant (the same job running twice), or stale (one is canonical and the other should be deleted)?
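
The reconciliation itself is a set comparison. A minimal sketch, with illustrative task names rather than the real schedules:

```typescript
// Reconcile two schedulers by task name. Task names are assumed to be
// stable identifiers; the inputs here are illustrative.
function reconcile(cowork: string[], code: string[]) {
  const codeSet = new Set(code);
  const both = cowork.filter(t => codeSet.has(t));
  const coworkOnly = cowork.filter(t => !codeSet.has(t));
  const codeOnly = code.filter(t => !cowork.includes(t));
  // With the numbers above: 6 both + 6 Cowork-only + 15 Code-only = 27 unique.
  return { both, coworkOnly, codeOnly, total: both.length + coworkOnly.length + codeOnly.length };
}
```

Each name in `both` then gets triaged by hand as intentional, redundant, or stale.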

A note on framing. The site uses the word "agents" aspirationally. What runs here are scheduled AI workflows: Claude or Code sessions with prompts that fetch, decide, write, and notify. Some are agentic in the multi-step / decision-making sense. Others are simple cron-style fetches. Calling them all "agents" is a stretch; "schedules" or "scheduled workflows" is more accurate.

27

Total

6

Both schedulers

6

Cowork only

15

Code only

3

Overlaps

Overlaps to review (3)

weekly-competitive-scan — Paused in Code, live Thursday in Cowork. Verify which is canonical.

content-pipeline — Two different schedules. Confirm whether they do the same job or different stages.

linkedin-engager-scan — Different times. Likely the same job running twice. Reconcile.

62 skills. Click any one to see what it does.

A skill is a domain-specific instruction set with embedded best practices. Claude directory-walks the layers and loads only what is in scope for the current context.

Usage tracking · pending

Phase 3 of build plan

Right now skill cards show name, description, and layer. The dashboard cannot yet show when each skill was last used or how often. That requires a logging layer Claude does not expose, plus session log parsing. Once built, each skill card will surface:

  • Last edited (filesystem mtime)
  • Last invoked (session log scan)
  • Invocations per week (rolling)
  • Frequency band (active / recent / stable / dormant)

Build path: a nightly scheduled task walks .claude/skill-library/ for mtimes, scans Cowork and Claude Code session transcripts in the vault for skill mentions, and writes the result to manifest.json; this site reads it at build time and Vercel rebuilds via webhook. Refresh latency: 24h, which is fine for a maturity tile.

62

Skills

13

Categories

7

Clusters

3

Layers

Layer 1 — Global

21

Loaded every session. Voice, document toolkit, vault infrastructure, always-on sales workflow.

Layer 2 — Neon Business

6

Loaded inside Neon-Business. Content production with brand canon embedded.

Layer 3 — Client Delivery

35

Loaded inside Client-Work. RevOps frameworks, GTM playbooks, Neon canon delivery.

4 skills
3 skills
1 skill
4 skills
5 skills
1 skill
3 skills
6 skills
12 skills
13 skills
3 skills
2 skills
5 skills

Every chat starts with a bootstrap question: which cluster to load. This keeps token usage tight; multi-select covers cross-cluster work.

neon-linkedin-posting, neon-blog-writer, neon-content-pipeline, neon-gemini-infographic, neon-thought-leadership-architect, marketing-psychology
neon-linkedin-lead-triage, neon-positioning-messaging-designer (proposal-generator always-on)
neon-ai-readiness-day, neon-gtm-ai-use-case-mapper, neon-client-diagnostic, neon-client-skill-library-builder, neon-obeya-builder, neon-operating-cadence-designer, neon-customer-interview-orchestrator, neon-benchmark-reference, neon-company-os-architect, neon-revenue-leak-diagnostic
all revops-* skills, pipeline-visibility, lead-routing, data-enrichment, revenue-operating-cadence
gtm-planning, gtm-compensation, sales-methodology, neon-deal-velocity-engineer, marketing-operations, neon-icp
cs-operations, neon-expansion-revenue-architect, neon-partner-ecosystem-architect, partner-channel-operations

Where this dashboard is, what's coming, what's deferred.

Three concentric layers. Layer A is visual + UX parity with reference dashboards. Layer B is operational depth (auto-refresh, more data sources, sparkline trends). Layer C is strategic capability (agent system grouping, autonomy levels, public/internal split).

v0.9

Current version

8

Tiles wired

3

Roadmap layers

3

Work orders open

Mostly client-side. 1–2 sessions to close.

Skills tab search + counter row + category icons

live

Skill card tag pills (category + cluster + layer)

live

Status pill colour discipline (LIVE / BUILDING / DESIGN / PLANNED)

live

Schedules tab search + counter row

live

Roadmap tab (this view)

live

Trigger pills extracted from frontmatter (per-skill keywords)

design

Bow-tie tab counter row + use case search

design

Filter dropdowns at top-level (Use Case / Phase / Status)

design

Mix of Astro + data work. 3–4 sessions to close. Two work orders are spec'd; one is currently being installed.

Manifest pipeline (build-time data feed)

live

Vault source files for decisions / experiments / maturity

live

Consolidated Code-side nightly publisher routine

building · pending install

Decision + experiment retrospective scraper

building · verify next Friday run

Skill one-liners (hand-curated, replace frontmatter)

design · WO open

AuthoredUp performance feed → tile 4 expansion

design

Kondo MCP query → tile 5 DM throughput

design

Sparkline trend per tile (last 12 points)

design

Latest note per tile (most recent decision touching it)

design

Longer-term. Restructures the dashboard or adds whole new mental models.

Group scheduled tasks into "agent systems" (Sales / Content / Operations / Architecture)

design

Autonomy level per task (L1 manual → L5 fully autonomous)

design

Cross-modal navigation (skill name in decision modal opens skill modal)

design

Auth-gated /room route for internal-only metrics

design

Forecast accuracy or Time Saved tile (replaces tile 7)

design

Chart.js island for use case maturity sector chart

design

Chart.js island for "tasks per system by status" bar chart

design

This roadmap is hand-maintained for now. Each item moves from design → building → live as it ships. When the decision/experiment scraper is live, work order creation events will auto-populate the Building column.

The rules that make this an operating system, not a dashboard.

These are not opinions. Skip any of them and you end up with the same telemetry wall everyone else has. Pretty, expensive, ignored.

  1. ≤8 tiles on the executive board. No ninth tile, no shadow dashboards.

  2. Every tile has target, band, actual, owner (a name, not a role), trend, breach rule, latest note.

  3. 48-hour containment SLA on any breach. The owner publishes a containment plan, logged in Decisions.

  4. ≤3 decisions per weekly session. Anything beyond that is tabled for a deep-dive.

  5. Definition change control. No mid-quarter definition changes without approval.

  6. WIP caps per function. Explicit limits on active experiments per lane.

  7. Public kill-rate. Every quarter, publish how many experiments were killed and why. Target: 20–40%.

  8. Forecast accuracy cadence. Track the % of quarters where actuals fell within forecast bands.

Bands are not targets

A band is the normal range. Hitting the band means you are in control. Below band means special cause. Investigate. Do not fire someone.
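
The band check reduces to a three-way comparison. A sketch, with hypothetical names:

```typescript
// A band is a range, not a point target. Names here are illustrative.
type BandStatus = "in-band" | "above-band" | "below-band";

function checkBand(actual: number, [low, high]: [number, number]): BandStatus {
  if (actual < low) return "below-band";  // special cause: investigate
  if (actual > high) return "above-band"; // also worth a look, not a victory lap
  return "in-band";                       // in control: leave it alone
}
```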

Annotations are not optional

Every breach, every decision, every experiment needs a note explaining context. This is how you build organisational memory.

Containment is not root cause

A 48-hour containment plan is a stopgap. Root cause analysis happens in A3s or deep-dives. Log both. Do not conflate them.

Dashboard sprawl is the enemy

One room. Eight tiles. Anything else lives in linked deep-dives. When someone asks for a ninth tile, ask which of the 8 is less critical.