Accelerant Demo Playbook
Live Walkthrough - Decision Intelligence in Action
Presenter: Thomas Knee, Staff SDET & Founder
Audience: Barry King (Director of Architecture), Stefan Walther (Chief Engineering Officer)
Duration: 30 minutes
Environment: app.preview.align.tech (live preview)
Date: March 2026
Internal - Demo Preparation Only
Know Your Audience
Barry King - Director of Architecture said: "Not seeing what value this would bring. Seems like a solution looking for a problem."
This tells you three things:
- He likely skimmed align.tech and didn't read the whitepaper
- He doesn't think his org has a "decision drift" problem (he may think he is the decision tracking system)
- He's giving you the chance to change his mind - a flat "no" would have been worse
The Core Principle
Don't pitch a product. Surface a problem he already has but can't see.
Barry doesn't wake up thinking "we have decision drift." He wakes up thinking "why did that team ship something that contradicts what we agreed?" Your demo needs to make him connect those dots himself.
The Narrative Arc
You are telling a single story across three tools. A decision is born, evolves, and conflicts - all live, all real.
Slack (decision born) → Align (captured) → Jira (refined) → Align (supersession) → Channel 2 (conflict) → Align (conflict caught)
Demo Timing Overview
| Phase | Time | What Happens |
| --- | --- | --- |
| Open with questions | 3 min | Surface the problem before showing the solution |
| Dashboard + connectors | 2 min | Quick context on what's connected |
| Scene 1: Slack capture | 5 min | Live @align - decisions + open questions + consensus |
| Scene 2: Jira supersession | 5 min | Live /align-preview - cross-platform detection |
| Scene 3: Conflict detection | 4 min | Contradictory decision caught automatically |
| Decision graph | 3 min | Visual network - the full picture |
| Discover scan | 4 min | Historical import - "what's buried in your tools" |
| AI integration (MCP) | 2 min | Claude querying decisions, drift check |
| Close + offer | 2 min | Free pilot proposal |
Seed your demo Slack workspace and Jira project with these conversations the day before. Use real-looking names. The conversations are designed to trigger specific Align behaviours: decision extraction, open questions, consensus topics, supersession, and conflict.
Slack Thread 1: #engineering - Import Pipeline Extraction
This thread produces: 2-3 decisions, open questions needing consensus, action items. Uses realistic platform engineering patterns (Fastify, SQS, PostgreSQL, pgvector, EKS, worker services) that mirror patterns Barry works with daily.
MR
Marcus Reid 10:14 AM
We need to make a call on the import pipeline before the next planning cycle. The historical import logic in the API Gateway has grown to 4000+ lines in a single file. It's doing connector-specific data fetching, batch analysis orchestration, progress tracking, AND the two-phase bulk approval flow - all inside the main Fastify process. Every time we tune the batch pipeline, we're redeploying the entire Gateway. I think it's time to extract it into a dedicated worker service.
SP
Sarah Park 10:22 AM
Agree it needs extracting. The import pipeline is already SQS-driven under the hood - we've got job queues for import and bulk approval. The Gateway just shouldn't be the consumer. Extract the worker into a standalone TypeScript service on EKS with its own deployment. Gateway enqueues jobs to SQS, worker consumes and processes. Clean separation. We keep the existing two-phase flow: Phase 1 creates records instantly, Phase 2 does async analysis enrichment in batches.
JT
James Torres 10:31 AM
Makes sense architecturally. One concern though - the analysis batching is the part that's hard to get right. We're currently sending batches of 8 items to the Brain service for LLM analysis. Smaller batches give better hit rates but cost more in tokens. If we move to a dedicated worker, do we keep batch size at 8 or should we experiment with adaptive sizing based on content length? The 400KB payload cap is already forcing us to split large batches.
MR
Marcus Reid 10:38 AM
Keep batch size at 8 for now. We tested batch=5 vs batch=20 last quarter - batch=5 gave 33% hit rate vs 9% at batch=20, but the token cost was 3x higher. 8 is a reasonable middle ground. We can make it configurable per connector once the worker is standalone. The real win from extraction is that we can scale worker replicas independently during bulk imports without affecting API latency.
LW
Lisa Wu 10:45 AM
The Postgres connection pooling is the part that worries me. Right now the Gateway shares a single pool across everything - API requests, import jobs, bulk approval workers. PG_POOL_MAX is 15 with 12 worker slots and 3 reserved. If we extract the worker, it needs its own dedicated pool. During bulk imports we've seen pool exhaustion at 3am when multiple tenants run discover scans simultaneously. Dedicated pool for the worker means the API never starves.
SP
Sarah Park 10:52 AM
Agreed on dedicated pool. Give the worker its own PG pool with max 10 connections. Gateway keeps its 15. We should also move the progress tracking from Redis pub/sub to SQS message attributes - the worker can post progress updates back to a lightweight Gateway endpoint that pushes to the UI via SSE. Keeps the worker stateless on EKS.
JT
James Torres 10:58 AM
So to summarise - we're extracting import as a standalone worker service on EKS, SQS for job orchestration, keep batch size at 8, dedicated PG pool (max 10), progress via SQS attributes back to Gateway SSE. I'll draft the service boundary ADR. Lisa - can you model the RDS connection impact across environments?
LW
Lisa Wu 11:03 AM
I'll model the connection limits. One thing we haven't resolved though - the connector-specific fetch logic. Right now each connector (Slack, Jira, GitHub, Teams) has its own fetch function baked into the import file with different parallelism settings. Slack does 10 parallel channels, Jira does 20 parallel comments, GitHub does 5 parallel repos. If we extract the worker, does the fetch logic move with it or do the connectors handle their own fetching and just push items to SQS? That coupling to per-connector parallelism tuning is messy.
What Align will detect here: 2-3 clear decisions (extract as standalone worker, SQS orchestration, dedicated PG pool max 10), 2 open questions needing consensus (connector fetch logic ownership, adaptive batch sizing long-term), action items (Lisa models RDS impact, James drafts ADR), and potential ambiguity around the connector coupling.
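If Barry asks how the 400KB payload cap interacts with the batch-of-8 decision, a minimal sketch helps. This is purely illustrative (not the real pipeline code): group items into batches of at most 8, splitting early whenever the serialized payload would exceed the cap James mentions. All names here are hypothetical.

```typescript
// Illustrative batching sketch: batches of up to 8 items for the Brain
// service, split early when the payload would exceed the 400KB cap.
const BATCH_SIZE = 8;
const PAYLOAD_CAP_BYTES = 400 * 1024;

export function buildBatches(items: string[]): string[][] {
  const batches: string[][] = [];
  let current: string[] = [];
  let currentBytes = 0;
  for (const item of items) {
    const itemBytes = new TextEncoder().encode(item).length;
    // Close the current batch if it is full or the next item would blow the cap.
    if (
      current.length > 0 &&
      (current.length >= BATCH_SIZE ||
        currentBytes + itemBytes > PAYLOAD_CAP_BYTES)
    ) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(item);
    currentBytes += itemBytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

An oversized single item still gets its own batch here, which is one plausible reading of "the cap is forcing us to split large batches".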
Jira Issue: "Extract Import Pipeline from Gateway"
Create this Jira issue with these comments. The final comment refines the original Slack decision - Align will detect the supersession.
| Field | Value |
| --- | --- |
| Project | Your demo project |
| Type | Epic |
| Title | Extract Import Pipeline from Gateway |
| Description | Extract the historical import and bulk approval pipeline from the Fastify Gateway into a standalone SQS-driven worker service on EKS. Dedicated Postgres connection pool. Independent scaling for bulk import workloads. |
Comment 1 (by you or another account):
Starting the service boundary analysis. The import pipeline currently has 14 dependencies on other Gateway modules. Main coupling points: tenant context resolution, connector credential lookup, and the Brain client for analysis. Will need to expose these as internal APIs or extract shared packages.
Comment 2 (posted a few days later - this triggers the SUPERSESSION):
After modelling the RDS connection limits, a dedicated pool with max 10 is too aggressive for preview environments where RDS max_connections is only 80. Going with adaptive pool sizing - worker reads RDS max_connections at startup and claims 25% (min 5, max 15). Also ran load tests on the two-phase bulk approval: Phase 1 with DECISION_CREATION_CONCURRENCY=5 is the bottleneck, not the pool. Bumping to 10 concurrent inserts with row-level locking instead of table locks. The connector fetch logic MUST stay in the worker - moving it to connectors would require every connector to implement their own SQS producer. This supersedes the fixed pool size decision from the architecture thread.
What Align will detect here: When you run /align-preview on this issue, it will extract decisions and detect that Lisa's load testing supersedes the original Slack decision. Fixed pool (max 10) changed to adaptive sizing, concurrency bottleneck identified and fixed, connector ownership resolved. Clear refinement based on evidence.
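If the supersession draws questions, it helps to have the adaptive sizing rule at your fingertips. A minimal sketch of the formula stated in Comment 2 (claim 25% of RDS max_connections, clamped to min 5 / max 15); the function name and defaults are hypothetical:

```typescript
// Sketch of the adaptive pool sizing from the Jira comment:
// worker claims 25% of RDS max_connections, clamped to [5, 15].
export function adaptivePoolSize(
  rdsMaxConnections: number,
  share = 0.25,
  min = 5,
  max = 15,
): number {
  const claimed = Math.floor(rdsMaxConnections * share);
  return Math.min(max, Math.max(min, claimed));
}
```

Worth knowing before the demo: for the preview environment's max_connections of 80, this formula yields 15 (25% of 80 is 20, clamped to the max), so be ready if Barry does the arithmetic out loud.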
Slack Thread 2: #data-eng - The Conflicting Decision
Post this in a DIFFERENT channel. The platform team independently proposes a competing approach to the same analysis pipeline. Align catches the conflict.
NK
Nadia Kovac 2:15 PM
The batch analysis pipeline is too slow. Users are waiting 10-15 minutes for a discover scan to complete because we queue everything through SQS and process in batches of 8. I've been prototyping a streaming approach - instead of batching items and sending them to the Brain service via SQS, we process each item inline as it arrives. A new FastAPI microservice that does embedding generation and relationship detection in real-time. We can get results back to the user in seconds instead of minutes.
RD
Ryan Dempsey 2:23 PM
That would be a massive improvement. The SQS batch dependency is a bottleneck anyway - we've hit visibility timeout issues twice this quarter during large imports. A Python service doing inline pgvector similarity search gives us real-time relationship detection. We could use the existing Gateway's Postgres pool since the queries are lightweight vector comparisons.
NK
Nadia Kovac 2:30 PM
Exactly. Building this as a standalone FastAPI service. It takes each item as it's fetched, generates embeddings inline, does a pgvector cosine similarity search for relationships, and returns results via a REST endpoint that the Gateway calls synchronously. No SQS needed - it's a request-response pattern. We skip the two-phase bulk approval entirely and just create records with relationships in a single pass. Starting this sprint.
What Align will detect here: This is the money moment. Multiple conflicts with the architecture decision:
(1) Two separate services being built for the same pipeline (TypeScript worker vs FastAPI inline processor).
(2) REST synchronous vs SQS async for analysis orchestration.
(3) Inline single-pass processing vs two-phase batch approach.
(4) Shared Gateway PG pool vs dedicated worker pool.
Two teams, two channels, contradictory approaches to the same pipeline. This would normally be caught weeks later when both teams try to deploy.
3 min Opening - No Screen Share
Start with voice only. Don't share your screen yet. You want Barry thinking about the problem before he sees the solution.
Barry, before I show you anything - quick question. When you onboard a new architect or tech lead into Accelerant, how do they find out WHY the architecture looks the way it does? Not the what - the why behind the decisions.
Let him answer. Then:
And when two squads make conflicting technical choices - say one team picks one approach in Slack and another team goes a different direction in a Jira thread - how does that get caught today?
Let him answer. If he says "Confluence" or "ADRs" or "architecture reviews":
Right. And what percentage of actual engineering decisions make it into a formal ADR? In my experience across multiple orgs it's maybe 5-10%. The rest happen in Slack threads, Jira comments, meeting transcripts - and they just... disappear. Let me show you what happens when you can actually see all of them.
2 min Dashboard + Connectors
Share screen. Open app.preview.align.tech. Show the dashboard.
This is Align connected to our own engineering tools - Slack, Jira, GitHub, Teams, Confluence, Zoom. It's been running against our environment and has already found [X] engineering decisions across [Y] platforms with [Z] relationships detected automatically.
Quick click to /connectors page. Show the 6 connected tools, all green/healthy.
No workflow changes. Engineers keep using their normal tools. Align connects via standard OAuth - read access. Each connector runs as an isolated microservice. Let me show you what it actually does.
Don't linger on the dashboard or connectors. Barry doesn't care about setup - he cares about output. Move fast.
5 min Scene 1: Slack - Live Decision Capture
Switch to Slack. Navigate to #engineering channel. Show the import pipeline discussion thread.
Here's a typical architecture discussion. A team is debating whether to extract the import pipeline from their API Gateway into a standalone worker service. There are clear decisions being made - SQS orchestration, dedicated Postgres pool, batch size at 8 - but also open questions around connector fetch logic ownership and adaptive pool sizing. This is the kind of thread that normally gets lost in Slack history within a week. Watch what happens.
Type @Align in the thread and send it. Wait for the response.
Within 2-3 seconds: "Analysing conversation for decisions and consensus..." message appears.
Within 20-35 seconds: The full analysis card drops in showing:
- Status: Active (because open questions remain)
- Decisions Found (2-3): Extract import as standalone worker on EKS + SQS orchestration + dedicated PG pool (max 10)
- Topics Needing Consensus (2): Connector fetch logic ownership (move with worker vs connectors push to SQS?) + adaptive batch sizing long-term
- Action items: Lisa models RDS connection impact, James drafts service boundary ADR
- Buttons: "Create Decisions" and "Propose Consensus"
Look at what it found. Two clear decisions with confidence scores - the worker extraction and the dedicated PG pool. But more importantly - it identified topics where the team hasn't reached consensus. The connector fetch logic ownership is still unresolved. The batch sizing question is flagged as open. In a normal Slack thread, these open questions get buried. Someone assumes a decision was made when it wasn't.
Click "Create Decisions" button. Show the confirmation. If relationship alerts appear (from existing decisions in the system), highlight them.
30 seconds. A Slack thread just became structured, searchable engineering decisions. No one wrote an ADR. No one updated Confluence. Let me show you what happens when the same topic evolves in a different tool.
While waiting for analysis (20-35s): Fill the silence by narrating what's happening. "The AI is reading the full conversation, extracting decisions, scoring confidence, and identifying areas where the team hasn't actually agreed yet."
5 min Scene 2: Jira - Cross-Platform Supersession
Now the implementation starts. Someone creates a Jira epic for the extraction. A few days later, after running load tests against the RDS connection limits, the team refines the approach. The pool sizing changes, the concurrency bottleneck is identified, the connector ownership gets resolved. This happens in Jira comments, not in the original Slack thread.
Switch to Jira. Open the Extract Import Pipeline from Gateway issue. Scroll through the comments so Barry can see the conversation.
The original decision was a fixed PG pool with max 10 connections. After load testing, the team discovered that's too aggressive for smaller environments and the real bottleneck was insert concurrency, not the pool. They also resolved the connector ownership question. But the original Slack decision still says fixed pool max 10. Watch.
Add a comment on the Jira issue: /align-preview and submit. Wait for the response.
Within 20-35 seconds: Align analyses all comments and posts back:
- Decisions created: 2 decisions extracted from Jira comments
- Supersession detected: "This decision supersedes: Dedicated PG pool max 10 (from #engineering)"
- Inline relationship badges: Shows the connection to the original Slack decision
- Available actions: "accept supersession" / "reject supersession"
This is the cross-platform detection. Align just connected a Jira comment to a Slack thread. It understood that the load testing findings supersede the original pool sizing decision. The old decision is now linked - anyone who finds the Slack thread gets pointed to the updated version. The decision evolved across tools, and Align tracked it automatically.
Key phrase for Barry: "Without this, someone reads the Slack thread, sets PG_POOL_MAX=10 in the Helm chart, and causes connection exhaustion in preview. The Jira refinement never reaches them."
4 min Scene 3: The Conflict - The Money Moment
Now here's where it gets interesting. The platform team - working in their own channel - is tackling the same analysis pipeline bottleneck. But they've independently decided on a completely different approach.
Switch to Slack. Navigate to #data-eng channel. Show the inline analysis discussion thread.
This team is building a standalone FastAPI service that does inline embedding generation and pgvector similarity search, bypassing SQS batching entirely. They're using synchronous REST instead of async queues, the shared Gateway pool instead of a dedicated one, and skipping the two-phase bulk approval. Every decision here contradicts what the architecture team agreed. But they don't know that - they're solving the same problem from a different angle in a different channel.
Type @Align in the thread and send it. Wait for the response.
Align analyses and posts:
- Decision Found: "Build standalone FastAPI inline analysis service with real-time pgvector search"
- Click "Create Decisions"
Seconds later - the conflict alert drops:
- ⚠️ Decision Conflict Detected
- "This decision conflicts with: Extract import pipeline as standalone SQS-driven worker service (from #engineering / Jira)"
- Confidence: ~80%+
- Reasons: Competing services for same pipeline, REST sync vs SQS async, inline single-pass vs two-phase batch
- Options: "resolve conflict" / "keep both" / "skip"
Two teams, two channels, contradictory approaches to the same pipeline. Caught automatically. Without this, when does this conflict get discovered? When both teams try to deploy and realise they've built two services for the same job with incompatible communication patterns. Weeks of wasted work. Align caught it before a single line of conflicting code was written.
Pause here. Let this land. Don't rush to the next thing. This is the moment Barry either gets it or doesn't. Give him space to react.
3 min The Decision Graph - Full Picture
Switch to Align UI. Navigate to /graph.
Here's every engineering decision we've captured, visualised as a network. Each node is a decision. The colour tells you which platform it came from - purple is Slack, blue is Jira, grey is GitHub.
Point out the three decisions you just created. Show the blue supersession edge between the Slack and Jira decisions. Show the red dashed conflict edge between the worker extraction and inline analysis decisions.
See the blue line? That's the supersession - the pool sizing decision evolved from Slack to Jira. See the red dashed line? That's the conflict between the architecture team and platform team. You can filter to show only conflicts across the whole system - every red line is a contradiction between engineering decisions that nobody flagged.
Filter to "Conflicts only" view. Drag nodes around to arrange. Click a node to show detail sidebar.
4 min Discover Scan - What's Buried in Your Tools
Everything I just showed was real-time capture - individual decisions as they happen. But what about the thousands of decisions that were already made before Align was connected? That's what Discover does.
Navigate to /discover. Click "New Import Job". Select connectors to scan (Slack + Jira + GitHub, or all). Start the scan.
Real-time progress streaming:
- SSE progress bar updates as items are processed
- Suggestions appear in real-time as the Brain extracts decisions
- Each suggestion shows: title, confidence score, source platform, source URL
This is scanning [your Slack channels / Jira projects / GitHub repos] right now. Every message, comment, PR description - the AI is reading them and extracting engineering decisions. These are decisions that were already made and buried in tool history. Watch the confidence scores - anything above 80% is very likely a real decision. Below 60% we flag for human review.
You don't need to wait for the full scan to complete. Show 20-30 suggestions arriving, then say "This will keep running - let me show you what it looks like when you approve them in bulk" and switch to the suggestions tab with the pre-populated data from your day-before scan.
When you bulk-approve, the AI runs cross-batch relationship analysis. It compares every new decision against every other decision and detects conflicts, supersessions, duplicates. The graph you saw earlier? That was built automatically from a discover scan just like this one.
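If Barry probes the confidence thresholds you quote in the script (above 80% very likely real, below 60% flagged for review), a simple triage sketch keeps your answer concrete. This is an illustration of the thresholds as stated, not Align's actual implementation; the type and function names are hypothetical:

```typescript
// Hypothetical triage of discover-scan suggestions using the thresholds
// quoted in the demo script: >= 0.8 very likely real, < 0.6 needs review.
type Triage = "auto-suggest" | "needs-context" | "human-review";

export function triageSuggestion(confidence: number): Triage {
  if (confidence >= 0.8) return "auto-suggest";
  if (confidence >= 0.6) return "needs-context";
  return "human-review";
}
```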
2 min AI Integration - MCP Tools
One last thing. When your engineers use AI assistants - Claude, Copilot, ChatGPT - those assistants can query Align directly via MCP.
Open Claude Code or a terminal. Run the MCP tools:
align.search("import pipeline architecture")
→ Returns the latest decision (the superseding adaptive pool sizing), not the outdated fixed max 10. The AI gets the current truth, not a stale Slack thread.
align.get_conflicts()
→ Shows the active SQS worker vs FastAPI inline conflict with confidence scores and context.
align.check_drift(decision_id, code_snippet) (if time permits)
→ Paste a Helm values.yaml with PG_POOL_MAX=10. It detects drift against the adaptive pool sizing decision.
Before merging a PR, an engineer can ask: "Does this violate any architecture decisions?" Before an AI agent acts autonomously, it can check: "Is this aligned with what the team decided?" This is decision-aware AI. It doesn't hallucinate your architecture - it queries what was actually agreed.
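If Barry asks what a drift check actually does, a toy illustration is useful. The real align.check_drift internals aren't specified here - this is a deliberately simplified stand-in showing the idea: a config snippet that pins PG_POOL_MAX to a fixed value contradicts the adaptive-sizing decision.

```typescript
// Toy drift check (NOT the real align.check_drift implementation):
// flag configs that hard-code PG_POOL_MAX when the current decision
// is adaptive pool sizing.
export function detectFixedPoolDrift(configSnippet: string): boolean {
  return /PG_POOL_MAX\s*[:=]\s*\d+/.test(configSnippet);
}
```

The real check is semantic (LLM-backed comparison against the decision text), but the toy version makes the demo beat legible: the Helm values file with PG_POOL_MAX=10 trips it, an adaptive-sizing config does not.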
2 min The Close
So that's Align. What you saw today: an architecture decision made in Slack, refined in Jira after load testing, a conflicting approach caught in the platform channel - all automatic, no workflow changes. Plus the discover scan that finds every buried decision across your tool history.
Pause. Let him respond. Then:
I'm not asking for anything commercial. What I'd propose is a tightly scoped experiment. Connect your Slack and Jira. Run a discover scan on real Accelerant data. See what it finds. If it surfaces nothing useful, you've lost nothing and I've learned something. If it catches even one conflict that would have caused rework - that's the value case.
Free tier covers 50 users with all integrations. Deployment would be within Accelerant's own tenant with full data isolation. Bring your own key, strict access controls. No Align infrastructure touches your data. We define a narrow hypothesis, measure it, and decide from there.
Things That Could Go Wrong
OAuth token expired during demo
Connector shows unhealthy. Mitigation: Refresh all tokens day-before AND 1 hour before. Have pre-populated data as fallback so the dashboard/graph still shows content even if a live capture fails.
Brain service slow (LLM API latency)
Analysis takes >45 seconds. Mitigation: Fill the silence by narrating what's happening. If it takes >60 seconds, say "Normally this takes 20-30 seconds - looks like the LLM provider is slow today. Let me show you the results from a previous scan while this completes" and switch to the pre-populated data.
No conflict detected
The SQS worker vs FastAPI inline conflict doesn't fire. Mitigation: This is unlikely if both decisions exist in the system. If it doesn't auto-detect, navigate to the graph and show the decisions manually - "The system is still running relationship analysis in the background. Let me show you what detected conflicts look like from the discover scan."
No supersession detected from Jira
The cross-platform link doesn't fire between Slack and Jira decisions. Mitigation: Show the decisions side-by-side in Align UI. "The deferred analysis runs in the background and typically takes 30-60 seconds. The relationship will appear shortly - let me show you what it looks like with the existing data."
Discover scan returns few results
Demo org doesn't have enough conversation data. Mitigation: Run the scan day-before to know what to expect. If the demo org is thin, focus on the live capture scenes and keep the discover scan brief.
Slack rate limiting
Too many API calls during discover scan. Mitigation: Run the full scan day-before. During demo, only run a targeted scan on 1-2 channels.
Recovery Phrases
| Situation | What to Say |
| --- | --- |
| Something is loading slowly | "The AI is doing the heavy lifting here - reading every message, comparing against existing decisions, scoring confidence. This normally takes 20-30 seconds." |
| An error appears | "That's the preview environment being temperamental. Let me show you the same flow from the data we already have." Switch to pre-populated results. |
| Empty results | "This channel/project may not have enough decision-type conversations. Let me switch to one that has richer data." Switch to a known-good source. |
Key Timing Notes
| Action | Expected Time | Max Acceptable |
| --- | --- | --- |
| @Align mention acknowledgement | 2-3 seconds | 5 seconds |
| Conversation analysis | 20-35 seconds | 60 seconds |
| Relationship detection (background) | 10-60 seconds | 2 minutes |
| /align-preview Jira analysis | 20-35 seconds | 60 seconds |
| Discover scan first results | 10-15 seconds | 30 seconds |
| Graph page load | 2-5 seconds | 10 seconds |
| MCP tool response | 3-8 seconds | 15 seconds |
Be the Engineer, Not the Founder
Barry needs to see the engineer who built this because he felt the pain, not the founder who wants a design partner. Channel "let me show you something I found" energy, not "let me present my product." The moment it feels like a sales pitch, you've lost a Director of Architecture.
Let the Product Speak
The three live scenes do 90% of the convincing. The conflict detection is visceral - two teams making contradictory decisions is something every engineering leader has experienced. You don't need to explain why it matters. He'll feel it.
Listen More Than You Talk
After each scene, pause. Let Barry react. If he asks questions, that's engagement - answer thoroughly. If he's quiet, ask him: "Does this resonate with anything you've seen at Accelerant?" Draw him into the conversation.
Don't Oversell
If Barry says "I'm not sure we need this" - don't push back. Say "That's fair. The pilot would answer that question with real data. If your discover scan shows 5 decisions and no conflicts, maybe you don't need it. If it shows 500 decisions and 15 conflicts, the data speaks for itself."
The Stefan Dynamic
Stefan avoided the Align topic after promoting you to the AI Council. He may have concerns about a direct report pitching a commercial product internally. Keep this framed as "I built something that might help the engineering org" not "I'm starting a company." If Stefan is on the call, address him naturally but direct the technical demo at Barry.
Allies in the Room
If Anna (QA Lead) or Daan (Senior DevOps) join, they're on your side. Anna understands the pain of missing context during test planning. Daan deals with deployment decisions that drift. If either of them reacts positively during the demo, that social proof matters more than anything you say.
Michael Seibel's Advice Applied
- "Start with a problem, not an idea" - That's why you open with questions, not a product tour
- "Hand-pick your initial users" - Accelerant is a strong fit IF the pain exists. The discover scan proves or disproves that
- "Don't fall in love with the product" - Be willing to hear "we don't need this." That flexibility earns respect
- "Your MVP won't solve the problem perfectly" - If something fails during the demo, own it: "We're iterating fast. This is early. The core detection works - the UX is evolving."
The One Line to Remember
Two teams. Two channels. Contradictory approaches to the same pipeline. Caught automatically.