# Flux Tools

Capture feedback and quality signals around agent output.
Flux is the review and measurement surface for DataGrout. Use it to collect qualitative feedback, rate specific tools used in a session, and track custom events tied to user journeys or workflow outcomes.
## flux.feedback@1

Submit platform or workflow feedback.
Call with no arguments to receive a guided feedback prompt, or pass feedback fields directly to store a structured submission.
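For example, a minimal call with an empty `arguments` object requests the guided prompt instead of storing a submission (a sketch; the exact prompt text returned is defined by the server):

```json
{
  "name": "data-grout@1/flux.feedback@1",
  "arguments": {}
}
```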
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `nps_score` | integer (1-10) | no | Overall experience rating |
| `liked` | string | no | What worked well |
| `improve` | string | no | What should improve |
| `highlight` | string | no | Specific tool or experience to call out |
| `free_text` | string | no | Open-ended feedback |
| `ease_of_use` | integer (1-5) | no | How intuitive the tools were |
| `trust_level` | integer (1-5) | no | How much you trusted the outputs |
| `surprise_score` | integer (1-5) | no | How much the tools surprised you |
| `would_use_voluntarily` | boolean | no | Would you reach for these tools by choice? |
| `unique_capability` | string | no | What this system can do that you can’t do natively |
| `feedback_ref` | string | no | Reference to a prior submission for follow-up feedback |
| `agent_name` | string | no | Name of the client or agent submitting feedback |
| `model` | string | no | Model used during the session |
| `ref` | string | no | Reference to a specific tool call or cache_ref |
| `campaign_id` | integer | no | Campaign ID if responding to a Flux campaign |
### Example

```json
{
  "name": "data-grout@1/flux.feedback@1",
  "arguments": {
    "nps_score": 8,
    "liked": "Discovery found the right tools quickly",
    "improve": "Tool descriptions could be shorter",
    "agent_name": "Cursor",
    "model": "gpt-5"
  }
}
```
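A follow-up submission can link back to an earlier one via `feedback_ref`; the `fb_abc123` value in this sketch is illustrative:

```json
{
  "name": "data-grout@1/flux.feedback@1",
  "arguments": {
    "feedback_ref": "fb_abc123",
    "free_text": "Follow-up: discovery is still fast on larger workflows",
    "agent_name": "Cursor"
  }
}
```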
## flux.scorecard@1

Rate the individual tools used during a session.
Scorecard is useful after a workflow or experiment when you want per-tool ratings instead of only a single overall impression.
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `feedback_ref` | string | no | Link the scorecard to a prior feedback submission |
| `tools` | array | yes | List of rated tools |
Each entry in `tools` includes:

- `tool`
- `rating`
- `comment` (optional)
- `would_use_again` (optional)
- `bugs` (optional)
### Example

```json
{
  "name": "data-grout@1/flux.scorecard@1",
  "arguments": {
    "feedback_ref": "fb_abc123",
    "tools": [
      {
        "tool": "discovery.perform",
        "rating": 5,
        "comment": "Very easy to use"
      },
      {
        "tool": "prism.refract",
        "rating": 4,
        "comment": "Good results after payload cleanup"
      }
    ]
  }
}
```
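An entry can also carry the optional `would_use_again` and `bugs` fields. This sketch assumes `would_use_again` is a boolean and `bugs` is free text; the values are illustrative:

```json
{
  "name": "data-grout@1/flux.scorecard@1",
  "arguments": {
    "tools": [
      {
        "tool": "prism.refract",
        "rating": 3,
        "would_use_again": false,
        "bugs": "Intermittent timeout on large payloads"
      }
    ]
  }
}
```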
## flux.track@1

Track a custom event tied to usage or workflow state.
Use this when you want structured product or operational events alongside qualitative feedback.
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `event_name` | string | yes | Name of the event |
| `event_data` | object | no | Arbitrary structured event payload |
### Example

```json
{
  "name": "data-grout@1/flux.track@1",
  "arguments": {
    "event_name": "skill_invoked",
    "event_data": {
      "skill_id": "skill_abc123",
      "surface": "toolsmith.invoke"
    }
  }
}
```
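Because `event_data` is an arbitrary object, it can also carry workflow-outcome details; the event name and payload fields in this sketch are illustrative, not a fixed schema:

```json
{
  "name": "data-grout@1/flux.track@1",
  "arguments": {
    "event_name": "workflow_completed",
    "event_data": {
      "workflow": "nightly_sync",
      "duration_ms": 4200,
      "status": "success"
    }
  }
}
```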
## When To Use Flux

| Scenario | Tool |
|---|---|
| Capture overall qualitative feedback | `flux.feedback` |
| Rate specific tools used in a session | `flux.scorecard` |
| Record custom product or workflow events | `flux.track` |