This quick getting-started tutorial walks you through the steps to start routing your LLM requests through Datawizz to collect and organize your data. Once you start collecting data on Datawizz, you can analyze your LLM usage, train specialized models, and protect your AI models from abuse.

For this tutorial, we'll focus on code that uses the OpenAI SDK, like the following code snippet (see both the Python and JavaScript tabs below):

from openai import OpenAI

client = OpenAI(
    api_key="your_datawizz_project_api_key", # <--- your Datawizz project API key
    base_url="https://gw.datawizz.app/**************/openai/v1", # <--- your datawizz project base URL
)

response = client.chat.completions.create(
    model="********",  # <--- this is the Datawizz endpoint you are routing to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's the speed of light?"},
            ],
        }
    ]
)

1 - Setup your project

As a first step, you need to create a project on Datawizz. If you haven't already, sign up for Datawizz[https://www.datawizz.app/auth/signup]. Once in the dashboard, click "+ New Project" to create a new project.

2 - Add a provider

Once you have a project, you need to connect it to your existing LLM model provider, such as OpenAI. Providers are the platforms that supply the AI models you call from your app, like OpenAI, Anthropic, or AWS.

In your project, head to the providers section and click Add Provider:

In the new provider menu, select your provider type, and add your API key and base URL (note: do not include the trailing /v1 in the base URL):
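The /v1 rule is easy to miss when copying a URL from existing code. As a purely illustrative sketch (this helper is not part of any SDK), stripping the suffix before pasting could look like:

```python
def normalize_provider_base_url(url: str) -> str:
    """Strip a trailing /v1 (and trailing slashes) from a provider base URL."""
    url = url.rstrip("/")
    if url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url

print(normalize_provider_base_url("https://api.openai.com/v1"))  # -> https://api.openai.com
```

So the OpenAI base URL you would enter here is https://api.openai.com, not https://api.openai.com/v1.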

Once you add the provider, you should also add all the different models you want to use from that provider. For instance, if you’re using OpenAI, you can add the gpt-4o model to your provider:

3 - Create an endpoint

Next up, you need to create an endpoint to route your LLM requests through. Endpoints are the integration medium between your app and the AI models. When your app calls an AI model - it’ll call an endpoint. The endpoint defines the rules for routing the request to different models, and any policies that should be applied to the request (like caching or input screening).

In this example we'll create a very simple endpoint that routes all requests to the gpt-4o model. Head to the endpoints section and click Create Endpoint:

Once you have created an endpoint, open it (click Manage) and click Add Upstream. Upstreams are the models and providers you want to route requests to. Every upstream can have a weight (priority) and conditions that determine how requests get routed. In this case, we'll add the gpt-4o model from the OpenAI provider, leave the weight as 1, and leave the conditions empty. Since we only have one upstream, all requests will be routed to it:
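To make the weight idea concrete, here is a rough sketch of how weighted selection between upstreams behaves (purely illustrative, not Datawizz's actual routing code). With a single upstream of weight 1, it always receives the request:

```python
import random

def pick_upstream(upstreams):
    """Pick an upstream name from (name, weight) pairs, proportionally to weight."""
    total = sum(weight for _, weight in upstreams)
    r = random.uniform(0, total)
    for name, weight in upstreams:
        r -= weight
        if r <= 0:
            return name
    return upstreams[-1][0]  # guard against floating-point rounding

# With a single upstream, every request routes to it:
print(pick_upstream([("openai/gpt-4o", 1)]))  # -> openai/gpt-4o
```

With two upstreams weighted 1 and 3, roughly a quarter of requests would go to the first and three quarters to the second, which is useful later for A/B testing models.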

4 - Configure your app

Now that you have an endpoint set up to route requests and a provider to supply the models, you can start using the endpoint in your app. In the endpoint view, you can see the changes you need to make in your code to start using the endpoint:

Note you’ll need to change three items:

base_url: Set this to your Datawizz project base URL so your traffic is sent through Datawizz.
api_key: Your Datawizz API key (you can manage your API keys in project settings).
model: Set this to the ID of the endpoint you created.
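Putting the three changes together (the values below are placeholders, not real credentials; substitute your own project's values), a quick sanity check that your configuration points at the Datawizz gateway could look like:

```python
# Placeholder values: use your own project's base URL, API key, and endpoint ID.
base_url = "https://gw.datawizz.app/<project-id>/openai/v1"  # Datawizz gateway, not api.openai.com
api_key = "<your-datawizz-api-key>"   # managed in project settings, not your OpenAI key
model = "<your-endpoint-id>"          # the endpoint you created, not a provider model name

# Traffic only flows through Datawizz if the base URL points at the gateway:
assert base_url.startswith("https://gw.datawizz.app/")
```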

That's it - your requests will now be routed through Datawizz. The dashboard will show you your LLM analytics, and you can see individual logs in the logs section.

5 - Next steps

Once your LLM requests are routed through Datawizz, you can start analyzing your LLM usage, training specialized models, and protecting your AI models from abuse. Check out the other sections of the documentation to learn more about the Datawizz platform and how to use it to manage your AI models.