DashboardHC opens your operations to whichever AI your team already uses: Claude, ChatGPT, Copilot, and the next frontier models. Same OAuth, same scoping, no vendor chatbot to adopt.
A typical answer comes back like this: "Two patterns drive most of the labor variance. Willow Lane alone is about $48K of monthly variance. Want a memo for the regional?"
MCP is just the open standard that makes it possible. The point is simpler: don't get trapped in someone else's AI silo. Your AI of choice can call real tools on your real data, on behalf of a real user, with real permissions.
The MCP connector blocks direct patient identifiers at the query layer. Your AI never sees PHI, by design. The same permission model your team signs in with carries straight through to every AI call.
Your AI inherits the exact same permissions as the signed-in user: entity, region, measure, and time scoping. OAuth for people. Named, scoped API tokens for autonomous agents. Tokens revocable from the Integrations page.
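For an autonomous agent, that scoping travels with the token itself. A hypothetical token descriptor sketching the idea (the field names and values here are illustrative, not the actual DashboardHC schema):

```json
{
  "name": "nightly-census-agent",
  "scopes": {
    "entities": ["willow-lane"],
    "regions": ["southeast"],
    "measures": ["census", "occupancy"],
    "time": "trailing-90d"
  }
}
```

Because the scope lives on the named token, revoking it from the Integrations page cuts off exactly that agent and nothing else.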
Tools are atomic on purpose. The AI can plan, run multiple ask_data calls, and combine the results into one rich answer, whether driven by a person or a scheduled agent.
You ask in plain English. The model picks the right tool and stitches the answer together.
get_data_model
Always called first. Returns measures, dimensions, buildings, and the P&L structure for this account, plus a routing guide so the AI picks the right tool for the question type.
get_dimension
Looks up exact member names within a dimension before they're used in a query. Drill-down and search modes for big dimensions like buildings, providers, or payers.
ask_data
Operational questions in plain English: occupancy, census, staffing, revenue trends. The AI generates and runs the underlying query, returns data plus an AI summary.
financial_review
Built for P&L, expenses, budget vs. actual, variance, and named GL line items. Use this when a question is fundamentally financial.
list_dashboards
Returns the dashboards available to a user, so the AI knows what visuals can be pulled before running one.
run_dashboard
Runs a specific DashboardHC dashboard for a given date and returns its chart data, ready for the AI to summarize or drop into a deck.
data_review
Runs all saved dashboard queries across multiple time periods (current month, prior, prior quarter, prior year) and returns a consolidated executive snapshot.
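Under the hood, each of these is a standard MCP tool call over JSON-RPC. A sketch of the request an AI client would send to invoke ask_data — the `question` argument name and its text are assumptions for illustration, not the published schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "ask_data",
    "arguments": {
      "question": "What was occupancy by building last week?"
    }
  }
}
```

The model, not your team, composes these requests; you only ever see the plain-English question and the answer.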
For a month-over-month question, the AI calls ask_data for this month, then again for last month, and composes the comparison itself. For broad snapshots, data_review does the heavy lifting in a single call. For P&L-shaped questions, the AI routes straight to financial_review.
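That month-over-month plan reduces to two ask_data invocations whose results the model merges on its own. A sketch of the pair, with assumed argument names:

```json
[
  { "name": "ask_data", "arguments": { "question": "Total labor cost for this month" } },
  { "name": "ask_data", "arguments": { "question": "Total labor cost for last month" } }
]
```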
In Claude Desktop, go to Settings → Developer → Edit Config.
Replace the placeholders with the OAuth credentials your DashboardHC admin will issue you. (No admin yet? Email help@dashboardhc.com.)
{
  "mcpServers": {
    "dashboardhc": {
      "command": "npx",
      "args": ["mcp-remote", "https://apis.dashboardhc.com/v2/mcp"],
      "env": {
        "OAUTH_CLIENT_ID": "<your-client-id>",
        "OAUTH_CLIENT_SECRET": "<your-client-secret>"
      }
    }
  }
}
Restart Claude Desktop. The first time you call a DashboardHC tool, Claude opens a browser window for you to sign in with your DashboardHC credentials.
Try: "What's occupancy for yesterday?" or "Build me a one-page operations review for last week."
ChatGPT supports DashboardHC as an Action on Custom GPTs, with the same OAuth model. Once configured, your team can share the GPT inside your workspace.
chatgpt.com → Explore GPTs → Create. Paste a starter prompt that points at the DashboardHC tools.
Configure → Actions → Create new action. Import the OpenAPI schema URL we provide on signup.
Authentication → OAuth. Plug in your client ID, secret, and the DashboardHC authorization & token URLs from your Integrations page.
The first call prompts each user to sign in via OAuth. From then on, the GPT calls DashboardHC on their behalf.
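In OpenAPI terms, the Action's auth setup corresponds to a standard OAuth 2.0 authorization-code security scheme. A rough sketch of what that section of the schema looks like — the URLs are placeholders, since the real ones come from your Integrations page:

```json
{
  "components": {
    "securitySchemes": {
      "oauth2": {
        "type": "oauth2",
        "flows": {
          "authorizationCode": {
            "authorizationUrl": "<from your Integrations page>",
            "tokenUrl": "<from your Integrations page>",
            "scopes": {}
          }
        }
      }
    }
  }
}
```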
Book a 30-minute demo. We'll show MCP firing against a real DashboardHC warehouse, live in Claude or ChatGPT, and walk through what it would look like once we stand yours up.