This page includes two tutorials on how to use Kaptain 2.2 across multiple clusters.

The tutorials referenced on this page are deployment examples performed with toy data sets in controlled environments. D2iQ does not offer support for any of the examples provided in the tutorials section.

Model Inferencing

The first tutorial uses a lightweight version of Kaptain for model inferencing, which can be used, among other things, to:

  • Run inferences at the edge or in IoT scenarios

  • Run inferences in a cluster with limited resources

Train and Deploy your Model on Different Clusters

The second tutorial is a comprehensive example of how to use Kaptain and the Kaptain SDK to train a model in one cluster and deploy it from another cluster.

This workflow can be used to isolate the training cluster from the deployment cluster, so you can:

  • Keep the model training process private to maintain a competitive edge.

  • Enforce higher security for the training cluster (for example, by putting it in an air-gapped environment).
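In practice, working against two isolated clusters usually means pointing your tooling at two different kubeconfig contexts. The sketch below, which is illustrative and not part of the tutorial itself, shows one way to pin commands to a specific cluster using kubectl's standard `--context` flag; the context names `training-cluster` and `deployment-cluster` are hypothetical and would come from your own kubeconfig.

```python
# Sketch: targeting separate training and deployment clusters by kubectl
# context. The context names below are hypothetical placeholders.
import subprocess

def kubectl(context: str, *args: str) -> list:
    """Build a kubectl command pinned to a specific cluster context."""
    return ["kubectl", "--context", context] + list(args)

# Example: the same operation addressed to each cluster (not executed here).
train_cmd = kubectl("training-cluster", "get", "nodes")
deploy_cmd = kubectl("deployment-cluster", "get", "nodes")

def run(cmd: list) -> str:
    """Execute the command against the selected cluster and return stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```

Keeping the context explicit in every command avoids accidentally running a training step against the deployment cluster when a shared default context is active.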

It can also be used to save your model state to an S3 bucket, so you can:

  • Have a snapshot of your model at a certain point in time for backup or reporting purposes.

  • Import your model into a separate cluster for further training or for deployment.

  • Have a production-ready version of your model in case your production cluster fails.
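A minimal sketch of the snapshot idea, assuming boto3 and AWS credentials are configured; the bucket name, model name, and object layout below are illustrative, not values from the tutorial. Timestamping each object key keeps every snapshot addressable for backup, reporting, or later import into another cluster.

```python
# Sketch: saving a timestamped model snapshot to an S3 bucket.
# Bucket, model name, and key layout are hypothetical examples.
from datetime import datetime, timezone

def snapshot_key(model_name: str, when: datetime) -> str:
    """Build a timestamped object key so each snapshot is kept separately."""
    return f"models/{model_name}/{when:%Y%m%dT%H%M%SZ}/model.tar.gz"

def upload_snapshot(local_path: str, bucket: str, key: str) -> None:
    import boto3  # imported here so the sketch loads without boto3 installed
    boto3.client("s3").upload_file(local_path, bucket, key)

key = snapshot_key("my-model", datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc))
# upload_snapshot("model.tar.gz", "my-model-snapshots", key)  # needs credentials
```

Restoring a snapshot in another cluster is then a matter of downloading the same key (for example with `boto3`'s `download_file`) before resuming training or serving.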

Using a platform other than D2iQ for the inferencing, edge, or model deployment cluster is not covered under D2iQ’s support policy.