Restricted Access: This documentation is only accessible to @tenzo.ai and @salv.ai email addresses.

Overview

The analytics page loads data from a single `GET /analytics/dashboard` endpoint that returns all Call-table-backed metrics (scalars + charts) in one request. A set of shared CTEs (Common Table Expressions) defines the base filtering once, and all metric aggregations run against them. Eight additional endpoints serve data from non-Call-table sources (SMS, time saved, candidate satisfaction, campaigns launched, credits per candidate).

Key Files

| File | Purpose |
| --- | --- |
| `server/analytics/analytics_api.py` | Dashboard endpoint + Pydantic response models |
| `server/analytics/analytics_filters.py` | Shared filter parsing (`ResolvedFilters`) |
| `server/dao/call_dao/call_dao_analytics_base.py` | 3 shared CTE builders |
| `server/dao/call_dao/call_dao_analytics_metrics.py` | Metric DAO methods (counts, rates, charts) |
| `server/dao/call_dao/call_dao_analytics_passthrough.py` | Passthrough rate + thumbs up/down DAO methods |

Shared CTEs

CallDaoAnalyticsBase provides 3 CTE builders that all analytics DAO methods inherit:
| CTE | Base Tables | Key Columns | Used By |
| --- | --- | --- | --- |
| `Call` | `Call` JOIN `Campaign` | `call_id`, `candidate_id`, `campaign_id`, `call_status`, `call_length_sec`, `question_completion_rate`, `created` | Total calls, answer rate, call completion, interviews, call length |
| `CandidateInfo` | `CandidateCampaignUserReview` JOIN `Call` JOIN `Campaign` | `candidate_id`, `campaign_id`, `feedback`, `max_score` | Passthrough rate, thumbs up/down, qualified candidates |
| `Review` | `CandidateCampaignUserReview` JOIN `Call` JOIN `Campaign` | `candidate_id`, `feedback`, `call_last_reviewed_at` | Thumbs up/down grouped by time |
All three CTEs accept the same filter parameters: `org_id`, `date_range`, `campaign_ids`, `candidate_filter`. All queries use the read-only replica via `self.read_only_session()`.
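The shared-CTE pattern can be illustrated with a self-contained `sqlite3` sketch. The table, column names, and filters here are simplified stand-ins for the real builders, not the actual schema: one CTE applies the base filters once, and each metric aggregates over it.

```python
import sqlite3

# Simplified stand-in for the real CTE builders: one CTE applies the
# base filters (org, date range) once; every metric queries against it.
BASE_CTE = """
WITH filtered_calls AS (
    SELECT call_id, call_status, call_length_sec
    FROM call
    WHERE org_id = :org_id
      AND created >= :start_date
)
"""

def total_calls(conn, params):
    # Metric 1: count of all filtered calls.
    return conn.execute(
        BASE_CTE + "SELECT COUNT(*) FROM filtered_calls", params
    ).fetchone()[0]

def answer_rate(conn, params):
    # Metric 2: share of filtered calls that were answered.
    return conn.execute(
        BASE_CTE + """
        SELECT AVG(CASE WHEN call_status = 'answered' THEN 1.0 ELSE 0.0 END)
        FROM filtered_calls
        """,
        params,
    ).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE call (call_id, org_id, call_status, call_length_sec, created)")
conn.executemany(
    "INSERT INTO call VALUES (?, ?, ?, ?, ?)",
    [
        (1, "org1", "answered", 120, "2024-01-02"),
        (2, "org1", "no_answer", 0, "2024-01-03"),
        (3, "org2", "answered", 60, "2024-01-04"),  # excluded: different org
    ],
)
params = {"org_id": "org1", "start_date": "2024-01-01"}
print(total_calls(conn, params))   # 2
print(answer_rate(conn, params))   # 0.5
```

Because the filtering lives in one place, a new filter added to `BASE_CTE` is picked up by every metric with no per-metric changes, which is the property the real builders rely on.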

Dashboard Endpoint

GET /analytics/dashboard returns an AnalyticsDashboardResponse containing:
  • Scalar metrics: answer rate, call completion rate, total calls, interviews, people contacted, call length, avg interview length, avg time to first interview, opt-outs, qualified candidates, passthrough rate, thumbs up/down
  • Chart data: 6 typed time-series charts (calls, candidates, interviews, thumbs up, thumbs down, passthrough rate) — each chart has its own Pydantic model (e.g. CallsChartResponse, PassthroughRateChartResponse)
  • Withdrawal & accommodation: withdrawal rate, withdrawal reasons breakdown, accommodation rate
All metrics are fetched in parallel via `asyncio.gather`, except `qualified_candidates`, which runs sequentially because it needs the passing score resolved first.
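The fan-out shape can be sketched as follows. All function names here are hypothetical stand-ins for the real DAO calls; only the structure (parallel independent metrics, sequential dependent one) mirrors the endpoint.

```python
import asyncio

# Hypothetical stand-ins for the DAO calls; names do not match the real code.
async def fetch_answer_rate():
    await asyncio.sleep(0)  # simulated DB round-trip
    return 0.42

async def fetch_total_calls():
    await asyncio.sleep(0)
    return 1200

async def resolve_passing_score():
    await asyncio.sleep(0)
    return 70

async def fetch_qualified_candidates(passing_score):
    await asyncio.sleep(0)
    return {"passing_score": passing_score, "count": 31}

async def build_dashboard():
    # Independent metrics fan out in parallel...
    answer_rate, total_calls = await asyncio.gather(
        fetch_answer_rate(), fetch_total_calls()
    )
    # ...but qualified candidates must wait for the passing score to resolve.
    score = await resolve_passing_score()
    qualified = await fetch_qualified_candidates(score)
    return {
        "answer_rate": answer_rate,
        "total_calls": total_calls,
        "qualified": qualified,
    }

print(asyncio.run(build_dashboard()))
```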

Per-Campaign Passing Scores

When filtering to a single campaign, the dashboard looks up that campaign’s custom passing score from Cosmos via campaign_scripts_cosmos_dao.get_passing_scores(). This matches the behavior of the passthrough rate DAO, which also uses per-campaign scores internally.

Frontend

The frontend uses Orval-generated API client functions with Pydantic-backed TypeScript types. A dashboardRequestIdRef counter provides stale response protection — if a newer request is in flight, the older response is discarded.
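The stale-response guard is a request-id counter: each new request bumps the counter, and a response is applied only if its id still matches. The real implementation is a React ref in TypeScript; the sketch below is a generic Python analog of the same pattern, with all names invented for illustration.

```python
import asyncio

class DashboardClient:
    """Generic analog of the frontend's request-id counter: only the
    response belonging to the newest request is kept."""

    def __init__(self):
        self._latest_request_id = 0
        self.data = None

    async def load(self, payload, delay):
        self._latest_request_id += 1
        request_id = self._latest_request_id
        await asyncio.sleep(delay)  # simulated network latency
        if request_id != self._latest_request_id:
            return  # a newer request superseded this one; discard the response
        self.data = payload

async def main():
    client = DashboardClient()
    # The first request is slower and resolves after the second,
    # but its response is discarded as stale.
    await asyncio.gather(
        client.load("old filters", delay=0.05),
        client.load("new filters", delay=0),
    )
    return client.data

print(asyncio.run(main()))  # new filters
```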

Individual Endpoints

These 8 endpoints are separate from the dashboard because they use different data sources:
| Endpoint | Data Source |
| --- | --- |
| `/analytics/sms_sent` | SMS helper (Cosmos + external) |
| `/analytics/total_sms` | SMS helper |
| `/analytics/time_saved` | Calls + SMS + web calls + resume screening |
| `/analytics/total_time_saved` | Same as above (scalar) |
| `/analytics/candidate_satisfaction` | Call reviews (1-5 rating) |
| `/analytics/average_candidate_satisfaction` | Same as above (scalar) |
| `/analytics/campaigns_launched_per_week` | Campaign table (no filters) |
| `/analytics/credits_per_qualified_candidate` | Credits + qualified candidates |

Adding a New Filter

A new filter needs only three touch-points, with no per-metric changes:
1. **Parse in `resolve_filters`**

   Add the query param to `server/analytics/analytics_filters.py` and include it in `ResolvedFilters`.

   ```python
   # analytics_filters.py
   @dataclass
   class ResolvedFilters:
       org_id: str
       date_range: DateRange | None
       campaign_ids: list[str]
       candidate_filter: CandidateFilter | None
       job_title: str | None  # new
   ```
2. **Apply in one CTE builder**

   Add a where clause in the relevant CTE method in `server/dao/call_dao/call_dao_analytics_base.py`. All metrics using that CTE then respect it automatically.

   ```python
   # call_dao_analytics_base.py
   if job_title:
       query = query.where(Campaign.job_title == job_title)
   ```
3. **Pass through in dashboard endpoint**

   No per-metric logic is needed: the dashboard passes the filters to each DAO method, which passes them to the CTE builder. If the CTE applies the filter, every metric respects it.