Agentforce is Live. But Are Your Agents Actually Using It?
- Sandeep Supehia
- Oct 9
- 4 min read

Rolling out Agentforce is exciting, but too often, once it’s live, leaders have no idea whether it’s actually landing.
Which agents are using it daily?
Who hasn’t touched it at all?
How often does usage happen - once a week or 20 times a day?
When feedback comes in, is it positive or negative, and from whom?
Without this visibility, adoption conversations quickly stall. Managers can’t coach teams on usage. Product owners can’t prioritize fixes or improvements. Leadership sees only licenses consumed, not whether Agentforce is trusted and making work easier.
Most companies stop here. They know usage and feedback data exists somewhere in the system, but what they actually see are cryptic logs, IDs, and raw events. No names, no summaries, no trends. Just noise.
The outcome you want looks very different:
Clear weekly usage reports tied to agent names (not IDs)
Active vs. inactive user counts
Frequency of engagement per user
Positive/negative feedback connected to real people and real prompts
Simple dashboards that summarize adoption and sentiment over time
Until you bridge the gap between telemetry data and business-readable reporting, you don’t actually know if Agentforce is being adopted.
The result? Leaders end up with “Agentforce is turned on” as their metric of success. But that’s not the same as Agentforce being trusted and embedded in daily work.
What the Right Outcome Looks Like
Instead of a wall of IDs and logs, adoption and feedback reporting should feel like a simple dashboard that answers the questions every leader asks:
Who is using Agentforce?
Clear weekly usage reports show usernames and team names, not system IDs.
How often are they using it?
Frequency metrics highlight power users, occasional users, and inactive agents.
Is adoption growing?
Trend lines track engagement week over week so you know whether rollout is sticking.
What’s the quality of the experience?
Thumbs up/down and text feedback are tied back to individual users, so you can see where sentiment is positive and where prompts need improvement.
Where are the risks?
Inactive users and negative feedback clusters show you where training or change management may be needed.
What’s the overall sentiment?
Weekly summaries roll up positive vs. negative feedback so you have a quick “pulse check” on trust and adoption.
The Solution: How to Get There
Achieving business-readable adoption and feedback reporting is a two-step journey: first capture everything, then translate the raw data into human context.
Enable Full Monitoring
In Salesforce Setup, use Quick Find to open Einstein Audit, Analytics, and Monitoring Setup, then switch on the entire stack so every interaction and feedback event is captured:
Data Cloud
Audit & Feedback
Prompt Builder Usage & Feedback Metrics (Beta)
Agent Analytics
Agentforce Session Tracing
Agentforce Optimization
You need all the raw signals: sessions, interactions, prompts, feedback. That’s why you switch everything on. Without this foundation, you won’t capture who is engaging or how they’re reacting. Adoption reporting starts here.
Know Your DMOs
Each DMO is just a building block. On its own, it doesn’t mean much. Together, they form the footprint of how agents are (or aren’t) engaging.
Agentforce Interactions → AI Agent Session, AI Agent Session Participant, AI Agent Interaction, AI Agent Interaction Message
GenAI Interactions → GenAI Generation, GenAI Feedback, GenAI Feedback Detail
Prompt Monitoring → GenAIGatewayRequest, GenAIGatewayResponse
Multiple Bots? Ingest BotDefinition and BotVersion from CRM to separate usage by bot.
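For the multi-bot case, a small Calculated Insight can break sessions out per bot. The sketch below is illustrative only: every object and field API name in it (AiAgentSession__dlm, BotDefinition__dlm, BotDefinitionId__c, and so on) is a placeholder, so check the Data Model tab in your org for the exact names.

    -- Sessions per bot: join sessions to the BotDefinition ingested from CRM.
    -- All object/field API names here are assumed; verify them in Data Cloud.
    SELECT
        bot.DeveloperName__c   AS BotName__c,      -- dimension: which bot
        COUNT(sess.Id__c)      AS SessionCount__c  -- measure: session volume
    FROM AiAgentSession__dlm sess
    JOIN BotDefinition__dlm bot
        ON sess.BotDefinitionId__c = bot.Id__c
    GROUP BY bot.DeveloperName__c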
Solve the “UserID Problem”
The biggest blocker: the DMOs store a userId but not a username, so the business can’t interpret the cryptic IDs.
Prerequisite
Ingest the Salesforce User object into Data Cloud.
Create a Calculated Insight (CI) joining AI Agent Session Participant → User on userId.
Aggregate measures like count of sessions, dimensions like username, start date, end date, agent API name.
Outcome: clear usage reporting by name, not ID.
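As a concrete starting point, here is a minimal sketch of that CI in the ANSI SQL that Calculated Insights use. The DMO and field API names are assumptions for illustration; map them to the actual AI Agent Session Participant and User fields in your org.

    -- Usage by name, not ID: count sessions per user and agent.
    -- Object/field API names are illustrative placeholders.
    SELECT
        usr.Username__c           AS UserName__c,       -- dimension: real person
        part.AiAgentApiName__c    AS AgentApiName__c,   -- dimension: which agent
        part.SessionStartTime__c  AS SessionStart__c,   -- dimension: start date
        part.SessionEndTime__c    AS SessionEnd__c,     -- dimension: end date
        COUNT(part.SessionId__c)  AS SessionCount__c    -- measure: session count
    FROM AiAgentSessionParticipant__dlm part
    JOIN User__dlm usr
        ON part.UserId__c = usr.Id__c
    GROUP BY
        usr.Username__c,
        part.AiAgentApiName__c,
        part.SessionStartTime__c,
        part.SessionEndTime__c

Note that CI measures and dimensions are aliased with the __c suffix, and that grouping by session start/end gives one row per session; coarsen the grouping (for example, to week) if you want trendable counts instead.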
Save and Schedule the CI
On the CI record page, click Publish.
Once it has published successfully, click Create Report.
Adjust the report to your requirements.

Make Feedback Usable
Feedback data needs to be tied to people and context.
Join GenAI Feedback → User on userId.
Extend to GenAI Feedback Detail on parentId.
Aggregate measures like count of feedback IDs, dimensions like feedback text, app feedback, feature, user full name.
Outcome: every thumbs up/down tied to the person, prompt, and feature.
Now you don’t just know there were 50 thumbs-downs last week. You know who gave them, on which prompts, and in what context. That makes feedback a coaching and product signal, not a random number.
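Here’s a sketch of that feedback CI, under the same caveat: every API name shown (GenAiFeedback__dlm, ParentId__c, and so on) is a placeholder to verify against your org’s DMOs.

    -- Feedback tied to people and context, not anonymous counts.
    -- Object/field API names are assumed; confirm them in Data Cloud.
    SELECT
        usr.Username__c       AS UserFullName__c,   -- dimension: who gave it
        fb.FeedbackText__c    AS FeedbackText__c,   -- dimension: what they said
        det.AppFeedback__c    AS AppFeedback__c,    -- dimension: thumbs up/down
        det.Feature__c        AS Feature__c,        -- dimension: where it happened
        COUNT(fb.Id__c)       AS FeedbackCount__c   -- measure: feedback volume
    FROM GenAiFeedback__dlm fb
    JOIN User__dlm usr
        ON fb.UserId__c = usr.Id__c
    LEFT OUTER JOIN GenAiFeedbackDetail__dlm det
        ON det.ParentId__c = fb.Id__c
    GROUP BY
        usr.Username__c,
        fb.FeedbackText__c,
        det.AppFeedback__c,
        det.Feature__c

The outer join keeps feedback rows that have no detail record, so a bare thumbs-down still shows up against its user.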
Flatten for Reports
Complex joins are powerful in Calculated Insights, but their output is unreadable for the business. To fix that:
Create a custom Salesforce object mirroring the CI output fields.
Use a CI Trigger Flow to write results into this object on schedule.
Run standard Salesforce reports and dashboards on the flattened object, so it’s easy for managers and execs to consume.
Instead of sending leaders into complex data models, you hand them simple dashboards: Top 10 active users, adoption trendline, weekly sentiment shifts. Exactly what they need, in a format they already use.
The End Result
Leaders can now answer, in one glance:
Who is adopting Agentforce?
How often are they using it?
Is usage growing?
What’s the sentiment?
That’s how you turn telemetry into adoption intelligence and prove that Agentforce isn’t just running - it’s trusted and embedded in daily work.
Every C-suite will talk about AI and Data Cloud at Dreamforce. Few will actually leave with a plan that works.
Truffle builds those plans and delivers on them.
If you’re ready to see what “enterprise AI” looks like when it delivers results, let’s connect.
Email us now: hello@trufflecorp.com
Fill out our contact form: https://www.trufflecorp.com/contact-us