Collecting feedback on AI responses is essential for improving model performance. Datawizz provides a powerful feedback signals system that allows you to gather multiple types of feedback from various sources and use them for reinforcement learning, fine-tuning, and observability. Unlike traditional thumbs up/down systems, feedback signals offer a structured approach that supports multiple feedback sources (users and automated evaluators), different signal types (explicit and implicit), weighted importance, and rich contextual information.

Understanding Feedback Signals

Each feedback signal contains several components that provide a comprehensive assessment of model performance:
| Field | Type/Range | Required | Description |
|---|---|---|---|
| Score | -1 to 1 | | Quality rating of the response, where -1 is the worst and 1 is the best |
| Weight | 0 to 1 | | Importance of this signal, where 0 is least important and 1 is most important |
| Improvement | JSON object | Optional | An improved version of the response for supervised fine-tuning |
| Qualitative Feedback | String | Optional | Text feedback explaining the assessment (used for RLHF) |
| Signal Name | String | Optional | Identifier for the signal/sensor (e.g., “user_thumbs_up”, “response_accuracy”) |
| Signal Source | user or system | Optional | Origin of the feedback signal |
| Signal Type | explicit or implicit | Optional | Classification of the feedback type |
| Metadata | JSON object | Optional | Additional context for observability and analysis |
This structured approach allows you to:
  • Combine multiple feedback sources into a holistic quality assessment
  • Weight different signals based on their reliability or importance
  • Track both user sentiment and automated evaluation metrics
  • Provide corrected responses for supervised fine-tuning
  • Generate detailed observability reports by signal type, source, or name
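For reference, a complete feedback signal payload might look like the following. The snake_case field names match the API examples later in this guide; the values themselves are purely illustrative.
// A complete feedback signal (illustrative values)
{
    score: 0.8,                 // -1 (worst) to 1 (best)
    weight: 0.9,                // 0 (least important) to 1 (most important)
    qualitative_feedback: 'Accurate, but the tone was too formal',
    signal_name: 'expert_review',
    signal_source: 'user',      // 'user' or 'system'
    signal_type: 'explicit',    // 'explicit' or 'implicit'
    improvement: {
        role: 'assistant',
        content: 'Here is a friendlier version of the response...'
    },
    metadata: {
        reviewer_team: 'support'
    }
}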

Adding Feedback Signals Through the Dashboard

You can manually add feedback signals to your inference logs through the Datawizz dashboard. This is useful for reviewing batches of logs, conducting manual evaluations, or adding expert feedback. To add a feedback signal:
  1. Navigate to the Logs section and open any inference log
  2. In the log details section, you’ll see a “Feedback Signals” section
  3. Click “Add Feedback Signal” to create a new signal
  4. Fill in the signal details including score, weight, and optional fields
  5. Save the signal
You can add multiple feedback signals to a single inference log, allowing you to capture different perspectives or evaluation criteria.

Sending Feedback Signals Programmatically (via API)

For production applications where users interact with AI-generated responses, you’ll want to collect feedback programmatically through the Datawizz API.

When to Collect Feedback

Explicit Feedback

Explicit feedback occurs when users deliberately provide feedback by clicking buttons, providing ratings, or submitting corrections. This might include:
  • Thumbs up/down buttons
  • Star ratings
  • Correction submissions
  • Quality assessments
Explicit feedback is straightforward to implement and provides clear signals, though participation rates may be lower.
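For example, if your UI collects 1–5 star ratings (an assumption about your application, not a Datawizz requirement), you can map them linearly onto the -1 to 1 score range before submitting a signal:
// Map a 1-5 star rating onto the -1 to 1 score range (assumes a 5-star UI)
function starRatingToSignal(stars) {
    return {
        score: (stars - 3) / 2,   // 1 star -> -1, 3 stars -> 0, 5 stars -> 1
        weight: 1.0,              // explicit user feedback, so full weight
        signal_source: 'user',
        signal_type: 'explicit',
        signal_name: 'star_rating',
        metadata: { stars }
    };
}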

Implicit Feedback

Implicit feedback is inferred from user behavior without requiring explicit action. Examples include:
  • User saves or shares the AI response (positive signal)
  • User copies the response to their clipboard (positive signal)
  • User requests regeneration or modification (negative signal)
  • User ignores or dismisses the response (negative signal)
  • User spends significant time reading the response (positive signal)
Implicit feedback offers more natural user experiences and higher participation rates, but requires careful interpretation and may be less accurate than explicit signals.
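As a sketch of the clipboard example above (assuming the AI response is rendered in an element with the id response, and that feedbackUrl and feedbackToken were read from the response headers described in the next section):
// Submit an implicit signal when the user copies the AI response
document.getElementById('response').addEventListener('copy', () => {
    fetch(feedbackUrl, {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${feedbackToken}`,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            score: 0.6,      // copying suggests the response was useful
            weight: 0.5,     // implicit signals carry less weight
            signal_source: 'user',
            signal_type: 'implicit',
            signal_name: 'copied_to_clipboard'
        })
    });
});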

Sending Feedback Signals via API

To send a feedback signal via the API, use the POST /{project_uid}/{endpoint_uid}/feedback/{inference_log_id} endpoint. When you generate a response through the OpenAI- or Anthropic-compatible API endpoints, Datawizz returns the log ID with the response; use this log ID to submit feedback signals.

Feedback JWT Tokens

Each inference response includes an X-Feedback-Token header containing a signed JWT that can be used to submit feedback without requiring your API key, making it ideal for client-side feedback collection where you don’t want to expose your API key.

How It Works:
  1. Token Generation: The gateway generates a signed JWT for each inference response
  2. Token Contents: The JWT contains:
    • inference_id: The unique identifier for the inference
    • user_id: (optional) The user ID if client JWT authentication was used
  3. Token Usage: Use this JWT as a Bearer token to submit feedback securely from client-side code
  4. Token Expiration: Tokens are valid for 24 hours after generation
  5. Authentication: The feedback endpoint accepts either:
    • JWT auth: Authorization: Bearer <feedback-jwt> (recommended for client-side)
    • API key auth: Authorization: Bearer <api-key> (for server-side)
Response Headers: Every inference response includes:
  • X-Feedback-Url: The URL to submit feedback (e.g., https://gw.datawizz.app/{project}/{endpoint}/feedback/{inference_id})
  • X-Feedback-Token: The signed JWT token for feedback authentication (valid for 24 hours)
Example: Using Feedback JWT Tokens (Client-Side)
// 1. Make an inference request
const response = await fetch('https://gw.datawizz.app/{project}/{endpoint}/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ messages: [...] })
});

// 2. Extract feedback headers
const feedbackUrl = response.headers.get('X-Feedback-Url');
const feedbackToken = response.headers.get('X-Feedback-Token');

// 3. Submit feedback using the JWT token (no API key needed - safe for client-side)
await fetch(feedbackUrl, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${feedbackToken}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    score: 0.8,
    weight: 1.0,
    qualitative_feedback: 'Great response!'
  })
});
Example: Using API Key (Server-Side)
// After receiving a response and collecting user feedback
const response = await fetch(`https://gw.datawizz.app/${projectId}/feedback/${inferenceLogId}`, {
    method: 'POST',
    headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        score: 0.9,
        weight: 1.0,
        signal_source: 'user',
        signal_type: 'explicit',
        qualitative_feedback: 'Great response, very helpful!'
    })
});
Here’s an example of submitting feedback with an improved response:
// User provides a correction
const response = await fetch(`https://gw.datawizz.app/${projectId}/feedback/${inferenceLogId}`, {
    method: 'POST',
    headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        score: -0.5,
        weight: 0.8,
        signal_source: 'user',
        signal_type: 'explicit',
        qualitative_feedback: 'Response was inaccurate',
        improvement: {
            role: 'assistant',
            content: 'Here is the corrected response...'
        }
    })
});

Example: Mapping Traditional Thumbs Up/Down to Feedback Signals

If you’re migrating from a simple thumbs up/down system, here’s how to map those to feedback signals:
// Thumbs up
{
    score: 1.0,
    weight: 1.0,
    signal_source: 'user',
    signal_type: 'explicit',
    signal_name: 'thumbs_up'
}

// Thumbs down
{
    score: -1.0,
    weight: 1.0,
    signal_source: 'user',
    signal_type: 'explicit',
    signal_name: 'thumbs_down'
}

Combining Multiple Feedback Sources

The power of feedback signals comes from combining multiple sources. For example, a single response might receive an explicit user signal, an automated evaluation, and an implicit behavioral signal (the examples below use a submitFeedback helper; a minimal sketch of it appears after them):
// User gives thumbs up
await submitFeedback({
    score: 1.0,
    weight: 1.0,
    signal_source: 'user',
    signal_type: 'explicit',
    signal_name: 'thumbs_up'
});

// Automated evaluator checks factual accuracy
await submitFeedback({
    score: 0.7,
    weight: 0.9,
    signal_source: 'system',
    signal_type: 'explicit',
    signal_name: 'factual_accuracy',
    metadata: {
        evaluator: 'fact_checker_v1',
        confidence: 0.85
    }
});

// Implicit signal: user copied response to clipboard
await submitFeedback({
    score: 0.6,
    weight: 0.5,
    signal_source: 'user',
    signal_type: 'implicit',
    signal_name: 'copied_to_clipboard'
});
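The submitFeedback helper used above is not part of a Datawizz SDK; it simply wraps the feedback endpoint. A minimal sketch, assuming feedbackUrl and feedbackToken (or an API key) are already available:
// Minimal sketch of the submitFeedback helper used in the examples above
async function submitFeedback(signal) {
    const res = await fetch(feedbackUrl, {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${feedbackToken}`, // or an API key on the server
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(signal)
    });
    if (!res.ok) {
        console.error('Failed to submit feedback signal:', res.status);
    }
    return res;
}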

Best Practices

Choose Appropriate Weights

Not all feedback signals are equally reliable. Consider:
  • Explicit feedback from expert users: weight = 1.0
  • Explicit feedback from regular users: weight = 0.8
  • Automated evaluations: weight = 0.7-0.9 depending on confidence
  • Implicit signals: weight = 0.3-0.6 depending on strength of correlation
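One way to keep these weights consistent across your codebase is a small lookup table keyed by signal name; the names and values below are suggestions, not Datawizz requirements.
// Suggested default weights per signal (illustrative values)
const SIGNAL_WEIGHTS = {
    expert_review: 1.0,        // explicit feedback from expert users
    user_thumbs_up: 0.8,       // explicit feedback from regular users
    user_thumbs_down: 0.8,
    factual_accuracy: 0.9,     // high-confidence automated evaluator
    copied_to_clipboard: 0.5,  // implicit, moderately correlated with quality
    response_ignored: 0.3      // implicit, weakly correlated with quality
};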

Use Descriptive Signal Names

Use clear, consistent signal names to enable better observability:
  • Good: user_thumbs_up, accuracy_score, response_copied
  • Bad: signal1, feedback, good

Include Metadata for Context

Add relevant metadata to enable detailed analysis:
{
    score: 0.8,
    weight: 0.9,
    signal_name: 'user_rating',
    metadata: {
        user_segment: 'premium',
        session_length: '15m',
        previous_interactions: 5,
        ui_version: '2.1.0'
    }
}

Combine Automated and Human Feedback

For best results, use both automated evaluators and human feedback:
  • Automated evaluations provide consistent, scalable assessment
  • Human feedback captures nuanced quality aspects that automated systems might miss
  • The weighted combination gives you the best of both worlds
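To build intuition for what a weighted combination means, here is one common way to aggregate several signals into a single score; this is an illustration, not necessarily how Datawizz aggregates signals internally.
// Weighted average of feedback signals: sum(score * weight) / sum(weight)
function combineSignals(signals) {
    const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
    if (totalWeight === 0) return 0;
    return signals.reduce((sum, s) => sum + s.score * s.weight, 0) / totalWeight;
}

// Example: the three signals from "Combining Multiple Feedback Sources"
combineSignals([
    { score: 1.0, weight: 1.0 },   // thumbs_up
    { score: 0.7, weight: 0.9 },   // factual_accuracy
    { score: 0.6, weight: 0.5 }    // copied_to_clipboard
]);
// => (1.0 + 0.63 + 0.3) / 2.4 ≈ 0.80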

POST /{project_uid}/feedback/{inference_log_id}

View the complete API reference for submitting feedback signals