# Building a CRM with Next.js, Prisma, and PostgreSQL — Architecture and Lessons
How I architected LeadPulse — a full-stack CRM with real-time pipeline tracking, automated follow-ups, and analytics — and the decisions I'd make differently.
LeadPulse is a CRM I built to manage sales pipelines, automate follow-ups, and track conversions across clients. Here's the full technical architecture and the decisions that shaped it.
## What Makes a CRM Different From a CRUD App
Most CRMs die as glorified spreadsheets. What makes them actually useful:
- State machine — a lead moves through defined stages; transitions should be explicit and tracked
- Activity timeline — every interaction, note, and status change is recorded
- Automation — follow-up reminders, email sequences, triggered actions
- Analytics — pipeline value, conversion rates, cycle length by source
Each of these requirements has implications for the data model.
## Data Model
```prisma
model Lead {
  id             String     @id @default(cuid())
  name           String
  email          String?
  phone          String?
  company        String?
  source         LeadSource
  stage          LeadStage  @default(NEW)
  value          Decimal    @default(0) @db.Decimal(12, 2)
  ownerId        String
  owner          User       @relation(fields: [ownerId], references: [id])
  activities     Activity[]
  notes          Note[]
  tags           LeadTag[]
  stageChangedAt DateTime   @default(now())
  createdAt      DateTime   @default(now())
  updatedAt      DateTime   @updatedAt

  @@index([ownerId, stage])
  @@index([stage, createdAt])
}

enum LeadStage {
  NEW
  CONTACTED
  QUALIFIED
  PROPOSAL
  NEGOTIATION
  WON
  LOST
}

model Activity {
  id        String       @id @default(cuid())
  leadId    String
  type      ActivityType // EMAIL, CALL, MEETING, NOTE, STAGE_CHANGE
  subject   String?
  body      String?
  metadata  Json?        // Flexible: email headers, call duration, etc.
  userId    String
  lead      Lead         @relation(fields: [leadId], references: [id], onDelete: Cascade)
  user      User         @relation(fields: [userId], references: [id])
  createdAt DateTime     @default(now())

  @@index([leadId, createdAt])
}
```

The Activity table is the audit log. Every action — stage change, email sent, note added — writes an activity record. The `metadata Json?` field handles the variance without needing a table per activity type.
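For instance, a call and an email both land in the same table, with the type-specific details in `metadata`. A quick sketch (the metadata keys here are illustrative, not a fixed schema):

```typescript
// Two activity types, one table: the JSON metadata carries the
// type-specific details. These keys are illustrative, not enforced.
// Assumes leadId and userId are in scope, as in the snippets below.
await prisma.activity.create({
  data: {
    leadId,
    userId,
    type: "CALL",
    subject: "Discovery call",
    metadata: { durationSeconds: 1260, outcome: "interested" },
  },
})

await prisma.activity.create({
  data: {
    leadId,
    userId,
    type: "EMAIL",
    subject: "Proposal follow-up",
    metadata: { messageId: "<abc@mail.example>", direction: "outbound" },
  },
})
```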
## Stage Transitions as Explicit Operations
Rather than a raw `UPDATE leads SET stage = 'WON'`, stage changes go through a service that enforces valid transitions and records the history:
```typescript
const VALID_TRANSITIONS: Record<LeadStage, LeadStage[]> = {
  NEW: ["CONTACTED", "LOST"],
  CONTACTED: ["QUALIFIED", "LOST"],
  QUALIFIED: ["PROPOSAL", "LOST"],
  PROPOSAL: ["NEGOTIATION", "LOST"],
  NEGOTIATION: ["WON", "LOST"],
  WON: [],
  LOST: ["NEW"], // Allow re-opening
}

export async function moveLeadToStage(
  leadId: string,
  newStage: LeadStage,
  userId: string,
  reason?: string
) {
  const lead = await prisma.lead.findUniqueOrThrow({ where: { id: leadId } })

  if (!VALID_TRANSITIONS[lead.stage].includes(newStage)) {
    throw new Error(`Invalid transition: ${lead.stage} → ${newStage}`)
  }

  return prisma.$transaction([
    prisma.lead.update({
      where: { id: leadId },
      data: { stage: newStage, stageChangedAt: new Date() },
    }),
    prisma.activity.create({
      data: {
        leadId,
        userId,
        type: "STAGE_CHANGE",
        subject: `Moved to ${newStage}`,
        metadata: {
          from: lead.stage,
          to: newStage,
          reason,
        },
      },
    }),
  ])
}
```

This makes the pipeline history queryable: "How long do leads spend in PROPOSAL on average?" is just a query on activities with type STAGE_CHANGE.
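Concretely, that question falls out of pairing each transition into PROPOSAL with the same lead's next stage change. A sketch using a raw query, assuming the models are mapped to snake_case tables like `activities` (matching the snapshot SQL later in the post):

```typescript
// Average days a lead spends in PROPOSAL, derived from STAGE_CHANGE
// activities. Table and column names assume snake_case @@map mappings.
const [row] = await prisma.$queryRaw<{ avg_days: number | null }[]>`
  SELECT AVG(EXTRACT(EPOCH FROM exited.created_at - entered.created_at) / 86400)
           AS avg_days
  FROM activities entered
  JOIN LATERAL (
    -- The next stage change for the same lead, i.e. when it left PROPOSAL
    SELECT a.created_at
    FROM activities a
    WHERE a.lead_id = entered.lead_id
      AND a.type = 'STAGE_CHANGE'
      AND a.created_at > entered.created_at
    ORDER BY a.created_at
    LIMIT 1
  ) exited ON true
  WHERE entered.type = 'STAGE_CHANGE'
    AND entered.metadata->>'to' = 'PROPOSAL'
`
```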
## Real-Time Pipeline View
The Kanban pipeline view uses Server-Sent Events for live updates — lighter than WebSocket for unidirectional data:
```typescript
// app/api/pipeline/stream/route.ts
export async function GET(request: Request) {
  const userId = await getSessionUserId(request)

  const stream = new TransformStream()
  const writer = stream.writable.getWriter()
  const encoder = new TextEncoder()

  const send = (data: object) => {
    writer.write(encoder.encode(`data: ${JSON.stringify(data)}\n\n`))
  }

  // Subscribe to lead updates for this user
  const unsubscribe = leadEventEmitter.on(`user:${userId}`, (event) => {
    send(event)
  })

  request.signal.addEventListener("abort", () => {
    unsubscribe()
    writer.close()
  })

  return new Response(stream.readable, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  })
}
```
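The `leadEventEmitter` is app code that isn't shown above. A minimal in-process sketch built on Node's `EventEmitter`; anything beyond a single server process would need a shared channel such as Redis pub/sub or Postgres LISTEN/NOTIFY behind the same interface:

```typescript
import { EventEmitter } from "node:events"

// Minimal in-process pub/sub: one channel per user. `on` returns an
// unsubscribe function so the SSE route can clean up on abort.
// Single-process only; multiple instances need a shared backend
// (Redis pub/sub, Postgres LISTEN/NOTIFY) behind this interface.
class LeadEvents {
  private emitter = new EventEmitter()

  on(channel: string, handler: (event: object) => void): () => void {
    this.emitter.on(channel, handler)
    return () => this.emitter.off(channel, handler)
  }

  emit(channel: string, event: object) {
    this.emitter.emit(channel, event)
  }
}

export const leadEventEmitter = new LeadEvents()
```

`moveLeadToStage` (or whatever mutates a lead) would then emit on the `user:<ownerId>` channel after its transaction commits.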
```typescript
// Client-side
function usePipelineUpdates() {
  const queryClient = useQueryClient()

  useEffect(() => {
    const es = new EventSource("/api/pipeline/stream")

    es.onmessage = (event) => {
      const update = JSON.parse(event.data)
      // Update the query cache in place
      queryClient.setQueryData<Pipeline>(["pipeline"], (old) =>
        applyLeadUpdate(old!, update)
      )
    }

    return () => es.close()
  }, [queryClient])
}
```

## Analytics Queries
Pipeline analytics require aggregations that can be expensive on large datasets. I precompute daily snapshots with a cron job:
```sql
-- Daily pipeline snapshot
INSERT INTO pipeline_snapshots (date, owner_id, stage, count, total_value)
SELECT
  CURRENT_DATE,
  owner_id,
  stage,
  COUNT(*),
  SUM(value)
FROM leads
WHERE owner_id = $1
  AND deleted_at IS NULL
GROUP BY owner_id, stage
ON CONFLICT (date, stage, owner_id) DO UPDATE
SET count = EXCLUDED.count,
    total_value = EXCLUDED.total_value;
```

Analytics queries then hit the snapshot table instead of scanning all leads — 10ms vs 800ms on a large dataset.
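Reads then go straight to the snapshots. A sketch of a typical chart query (no Prisma model is shown for the table, so this uses a raw query):

```typescript
// Pipeline value over the last 30 days for one owner, read from the
// precomputed snapshots; no scan of the leads table.
const since = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)

const history = await prisma.$queryRaw<
  { date: Date; stage: string; total_value: number }[]
>`
  SELECT date, stage, total_value
  FROM pipeline_snapshots
  WHERE owner_id = ${userId}
    AND date >= ${since}
  ORDER BY date ASC
`
```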
## What I'd Change
- Multi-tenancy from day one — I retrofitted organization-level isolation after building user-level isolation. Schema-level multi-tenancy (a separate Postgres schema per org) would have been cleaner; see the sketch after this list.
- Email threading is hard — tracking email conversations as activities requires parsing `Message-ID` and `In-Reply-To` headers. I underestimated this.
- The mobile view needs more love — sales reps often update leads from their phones, and the pipeline Kanban is unusable on mobile. A list view with quick-action buttons works better there.
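A minimal sketch of the schema-per-org idea, assuming Prisma's `schema` connection-string parameter and a cached client per organization (this is not how LeadPulse works today):

```typescript
import { PrismaClient } from "@prisma/client"

// One PrismaClient per org schema, created lazily and cached.
// Prisma's Postgres connector selects the schema via the `schema`
// query parameter on the connection URL; the datasource is assumed
// to be named `db`. Note: each client holds its own connection pool,
// which is fine for a handful of orgs but not for thousands.
const clients = new Map<string, PrismaClient>()

export function prismaForOrg(orgSchema: string): PrismaClient {
  let client = clients.get(orgSchema)
  if (!client) {
    client = new PrismaClient({
      datasources: {
        db: { url: `${process.env.DATABASE_URL}?schema=${orgSchema}` },
      },
    })
    clients.set(orgSchema, client)
  }
  return client
}
```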
Building a CRM is a master class in data modeling and state management. Every UX decision has a data model implication.