## Understanding Feedback Signals
Each feedback signal contains several components that provide a comprehensive assessment of model performance:

| Field | Type/Range | Required | Description |
|---|---|---|---|
| Score | -1 to 1 | ✓ | Quality rating of the response, where -1 is the worst and 1 is the best |
| Weight | 0 to 1 | ✓ | Importance of this signal, where 0 is least important and 1 is most important |
| Improvement | JSON object | Optional | An improved version of the response for supervised fine-tuning |
| Qualitative Feedback | String | Optional | Text feedback explaining the assessment (used for RLHF) |
| Signal Name | String | Optional | Identifier for the signal/sensor (e.g., `user_thumbs_up`, `response_accuracy`) |
| Signal Source | `user` or `system` | Optional | Origin of the feedback signal |
| Signal Type | `explicit` or `implicit` | Optional | Classification of the feedback type |
| Metadata | JSON object | Optional | Additional context for observability and analysis |
This structure lets you:

- Combine multiple feedback sources into a holistic quality assessment
- Weight different signals based on their reliability or importance
- Track both user sentiment and automated evaluation metrics
- Provide corrected responses for supervised fine-tuning
- Generate detailed observability reports by signal type, source, or name
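For orientation, the table above translates roughly into the following TypeScript shape. This is a sketch rather than an official SDK type; the snake_case property names are an assumption based on common JSON API conventions, not confirmed field names.

```typescript
// A sketch of a feedback signal payload. Property names are assumptions
// (snake_case, following common JSON API conventions), not an official schema.
interface FeedbackSignal {
  score: number;                        // -1 (worst) to 1 (best); required
  weight: number;                       // 0 (least important) to 1 (most important); required
  improvement?: object;                 // improved response, for supervised fine-tuning
  qualitative_feedback?: string;        // text explanation of the assessment (used for RLHF)
  signal_name?: string;                 // e.g. "user_thumbs_up", "response_accuracy"
  signal_source?: "user" | "system";    // origin of the signal
  signal_type?: "explicit" | "implicit";
  metadata?: Record<string, unknown>;   // extra context for observability and analysis
}
```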
## Adding Feedback Signals Through the Dashboard
You can manually add feedback signals to your inference logs through the Datawizz dashboard. This is useful for reviewing batches of logs, conducting manual evaluations, or adding expert feedback. To add a feedback signal:

- Navigate to the Logs section and open any inference log
- In the log details section, you’ll see a “Feedback Signals” section
- Click “Add Feedback Signal” to create a new signal
- Fill in the signal details including score, weight, and optional fields
- Save the signal
## Sending Feedback Signals Programmatically (via API)
For production applications where users interact with AI-generated responses, you’ll want to collect feedback programmatically through the Datawizz API.

### When to Collect Feedback
#### Explicit Feedback
Explicit feedback occurs when users deliberately provide feedback by clicking buttons, providing ratings, or submitting corrections. This might include:

- Thumbs up/down buttons
- Star ratings
- Correction submissions
- Quality assessments
#### Implicit Feedback
Implicit feedback is inferred from user behavior without requiring explicit action. Examples include:

- User saves or shares the AI response (positive signal)
- User copies the response to their clipboard (positive signal)
- User requests regeneration or modification (negative signal)
- User ignores or dismisses the response (negative signal)
- User spends significant time reading the response (positive signal)
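One way to operationalize these heuristics is a small lookup table from UI events to signal parameters. Everything in this sketch is an illustrative assumption: the event keys, scores, and weights should be tuned to your own product.

```typescript
// Illustrative mapping from observed user behavior to implicit feedback
// parameters. Event keys, scores, and weights are assumptions to tune.
const IMPLICIT_SIGNALS = {
  response_saved:  { score: 1.0,  weight: 0.6, signal_name: "response_saved" },
  response_copied: { score: 0.8,  weight: 0.5, signal_name: "response_copied" },
  regenerated:     { score: -0.8, weight: 0.5, signal_name: "response_regenerated" },
  dismissed:       { score: -0.5, weight: 0.3, signal_name: "response_dismissed" },
  long_read:       { score: 0.5,  weight: 0.3, signal_name: "long_read_time" },
} as const;

// Build a full signal payload from an observed event.
function implicitSignal(event: keyof typeof IMPLICIT_SIGNALS) {
  return { ...IMPLICIT_SIGNALS[event], signal_source: "user", signal_type: "implicit" };
}
```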
### Sending Feedback Signals via API
To send a feedback signal via the API, use the `POST /{project_uid}/{endpoint_uid}/feedback/{inference_log_id}` endpoint.
When you generate a response through the OpenAI or Anthropic compatible API endpoints, Datawizz returns the log ID in the response. Use this log ID to submit feedback signals.
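As a sketch, a server-side submission might look like the following. The request-body field names are assumptions derived from the fields table above, and `DATAWIZZ_API_KEY` is a hypothetical environment variable; substitute your real project, endpoint, and log IDs.

```typescript
// Sketch of a server-side feedback submission. Body field names are
// assumptions based on the fields described above; DATAWIZZ_API_KEY is
// a hypothetical environment variable holding your API key.
const GATEWAY = "https://gw.datawizz.app"; // base URL used in this page's examples

async function sendFeedback(projectUid: string, endpointUid: string, inferenceLogId: string): Promise<void> {
  const res = await fetch(`${GATEWAY}/${projectUid}/${endpointUid}/feedback/${inferenceLogId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DATAWIZZ_API_KEY}`, // API key auth (server-side)
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      score: 1,                       // -1 to 1
      weight: 0.8,                    // 0 to 1
      signal_name: "user_thumbs_up",
      signal_source: "user",
      signal_type: "explicit",
    }),
  });
  if (!res.ok) throw new Error(`Feedback submission failed: ${res.status}`);
}
```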
### Feedback JWT Tokens
Each inference response includes an `X-Feedback-Token` header containing a signed JWT that can be used to submit feedback without requiring your API key. This is ideal for client-side feedback collection where you don’t want to expose your API key.
How It Works:
- Token Generation: The gateway generates a signed JWT for each inference response
- Token Contents: The JWT contains:
  - `inference_id`: The unique identifier for the inference
  - `user_id` (optional): The user ID if client JWT authentication was used
- Token Usage: Use this JWT as a Bearer token to submit feedback securely from client-side code
- Token Expiration: Tokens are valid for 24 hours after generation
- Authentication: The feedback endpoint accepts either:
  - JWT auth: `Authorization: Bearer <feedback-jwt>` (recommended for client-side)
  - API key auth: `Authorization: Bearer <api-key>` (for server-side)

Each inference response also carries two headers for this flow:

- `X-Feedback-Url`: The URL to submit feedback (e.g., `https://gw.datawizz.app/{project}/{endpoint}/feedback/{inference_id}`)
- `X-Feedback-Token`: The signed JWT token for feedback authentication (valid for 24 hours)
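Putting this together, a browser client might capture those two headers from the inference response and reuse them to submit feedback, so no API key ever reaches the client. This is a sketch: the headers are read exactly as named above, but the body field names are assumptions.

```typescript
// Client-side sketch: read the feedback headers from an inference response,
// then submit feedback using the JWT so no API key reaches the browser.
// Body field names are assumptions based on the fields described above.
async function submitClientFeedback(inferenceResponse: Response, score: number): Promise<void> {
  const feedbackUrl = inferenceResponse.headers.get("X-Feedback-Url");
  const feedbackToken = inferenceResponse.headers.get("X-Feedback-Token");
  if (!feedbackUrl || !feedbackToken) return; // headers absent; nothing to submit

  await fetch(feedbackUrl, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${feedbackToken}`, // JWT auth, valid for 24 hours
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      score,
      weight: 0.8,
      signal_name: "user_thumbs_up",
      signal_source: "user",
      signal_type: "explicit",
    }),
  });
}
```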
## Example: Mapping Traditional Thumbs Up/Down to Feedback Signals
If you’re migrating from a simple thumbs up/down system, here’s how to map those to feedback signals:
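A minimal sketch, assuming the snake_case field names used throughout this page (check the exact request schema against the API reference):

```typescript
// Sketch: translate a binary thumbs up/down into a feedback signal.
// Field names are assumptions; the weight follows the best-practice
// guidance below for explicit feedback from regular users.
function thumbsToSignal(thumbsUp: boolean) {
  return {
    score: thumbsUp ? 1 : -1, // up maps to the best score, down to the worst
    weight: 0.8,
    signal_name: thumbsUp ? "user_thumbs_up" : "user_thumbs_down",
    signal_source: "user" as const,
    signal_type: "explicit" as const,
  };
}
```

You would then POST the returned object to the feedback endpoint, as in the earlier server-side or JWT examples.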
## Combining Multiple Feedback Sources

The power of feedback signals comes from combining multiple sources. For example:
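The same inference log might accumulate an explicit user rating, an automated accuracy evaluation, and an implicit copy event, each submitted as its own signal. All values in this sketch are illustrative.

```typescript
// Sketch: several signals recorded against the same inference log, each
// submitted as its own feedback call. All values are illustrative.
const signals = [
  { score: 1,    weight: 0.8, signal_name: "user_thumbs_up",  signal_source: "user",   signal_type: "explicit" }, // user rating
  { score: 0.72, weight: 0.9, signal_name: "accuracy_score",  signal_source: "system" },                          // automated evaluator
  { score: 0.8,  weight: 0.5, signal_name: "response_copied", signal_source: "user",   signal_type: "implicit" }, // behavior
];
```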
## Best Practices

### Choose Appropriate Weights
Not all feedback signals are equally reliable. Consider:

- Explicit feedback from expert users: weight = 1.0
- Explicit feedback from regular users: weight = 0.8
- Automated evaluations: weight = 0.7-0.9 depending on confidence
- Implicit signals: weight = 0.3-0.6 depending on strength of correlation
### Use Descriptive Signal Names
Use clear, consistent signal names to enable better observability:

- Good: `user_thumbs_up`, `accuracy_score`, `response_copied`
- Bad: `signal1`, `feedback`, `good`
### Include Metadata for Context
Add relevant metadata to enable detailed analysis:
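The sketch below shows the kind of metadata object you might attach. Every key is an illustrative assumption, not a required schema.

```typescript
// Sketch: metadata you might attach for later slicing and analysis.
// Every key here is an illustrative assumption, not a required schema.
const metadata = {
  app_version: "2.3.1",
  user_segment: "enterprise",
  session_id: "sess_abc123",
  evaluator: "automated-grader-v2", // which system produced an automated score
  locale: "en-US",
};
```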
### Combine Automated and Human Feedback

For best results, use both automated evaluators and human feedback:

- Automated evaluations provide consistent, scalable assessment
- Human feedback captures nuanced quality aspects that automated systems might miss
- The weighted combination gives you the best of both worlds
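As a mental model for how weights trade off, a simple weighted mean over signals is sketched below. This is an assumption for intuition only; the exact aggregation Datawizz applies is not specified on this page.

```typescript
// Sketch: a simple weighted mean of signal scores. This is a mental model
// for how weights trade off, not necessarily Datawizz's exact aggregation.
function weightedScore(signals: { score: number; weight: number }[]): number {
  const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 0; // avoid division by zero
  return signals.reduce((sum, s) => sum + s.score * s.weight, 0) / totalWeight;
}

// Example: a strong human thumbs-up outweighs a middling automated score.
weightedScore([
  { score: 1, weight: 0.8 },   // user thumbs up
  { score: 0.4, weight: 0.7 }, // automated evaluation
]); // = 0.72
```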
`POST /{project_uid}/{endpoint_uid}/feedback/{inference_log_id}`

View the complete API reference for submitting feedback signals.