This page is coming soon. Inference worker documentation is currently being prepared.
Models trained on Datawizz can be deployed on the main Datawizz instance, or deployed separately for easier horizontal scaling. We provide dedicated Docker images for inference that can run as independent servers with autoscaling. Check back soon for detailed instructions on running inference workers outside the main Docker Compose setup.
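Until the full guide is published, a standalone deployment might look like the sketch below. This is an illustrative Docker Compose fragment only: the image name (`datawizz/inference-worker`), port, and environment variables are assumptions, not the documented interface.

```yaml
# Hypothetical compose file for running an inference worker outside
# the main Datawizz setup. All names below are placeholders.
services:
  inference-worker:
    image: datawizz/inference-worker:latest   # assumed image name
    ports:
      - "8080:8080"                           # assumed serving port
    environment:
      MODEL_ID: my-trained-model              # placeholder model identifier
    deploy:
      replicas: 3                             # scale horizontally by adding replicas
```

Because each worker runs as an independent server, horizontal scaling amounts to increasing the replica count behind a load balancer rather than resizing the main Datawizz machine.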