AI & Big Data Compute Runtimes

Kaspian empowers data teams to launch customizable, autoscaling clusters for popular runtimes, including Pandas, Spark, PyTorch, and TensorFlow, without dealing with DevOps hurdles or overpriced vendors.

Customize and deploy runtimes in three steps:

1. Define a Cluster (Hardware Control Plane)

Clusters are virtual compute groups that can be configured for distributed workloads (e.g., Spark), single- or multi-GPU model training jobs, or single-machine tasks (e.g., Pandas). A minimal sketch of what a cluster definition could capture appears below.
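
The following is a hypothetical sketch of two cluster definitions; the field names are illustrative assumptions, not Kaspian's actual configuration schema:

```python
# Hypothetical cluster definitions; field names are illustrative
# assumptions, not Kaspian's actual configuration schema.

# A distributed Spark cluster that autoscales between 2 and 10 workers.
spark_cluster = {
    "name": "etl-spark",
    "workload": "spark",
    "workers": {"min": 2, "max": 10},  # autoscaling bounds
    "instance_type": "m5.xlarge",
    "gpus_per_worker": 0,
}

# A single-machine, multi-GPU cluster for a model training job.
training_cluster = {
    "name": "bert-finetune",
    "workload": "training",
    "workers": {"min": 1, "max": 1},   # single machine
    "instance_type": "p3.8xlarge",
    "gpus_per_worker": 4,              # multi-GPU training
}
```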

2. Define an Environment (Software Control Plane)

Environments encapsulate the software configuration of a runtime, such as its dependencies and container settings. Environments can be specified using Dockerfiles or pip/Conda requirements files, as sketched below.
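
As a rough sketch, an environment could point at either a requirements file or a Dockerfile; the keys below are illustrative assumptions, not Kaspian's actual schema:

```python
# Hypothetical environment definitions; keys are illustrative
# assumptions, not Kaspian's actual schema.

# Built from a pip requirements file, e.g. one pinning pandas==2.2.2.
pip_env = {
    "name": "pandas-etl",
    "requirements": "requirements.txt",
}

# Built from a Dockerfile for full control over the image,
# e.g. FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime.
docker_env = {
    "name": "torch-training",
    "dockerfile": "Dockerfile",
}
```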

3. Connect your Cluster and Environment (Compute Application Plane)

A Cluster-Environment combination specifies a hardware and software configuration that can then be leveraged in any compute application, including Kaspian Jobs and Workflows, Airflow tasks, and Kaspian-hosted Jupyter notebooks.
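
To make the flow concrete, here is a hypothetical end-to-end sketch of pairing a cluster with an environment and handing the pair to a compute application; all names and fields are illustrative assumptions, not Kaspian's actual API:

```python
# Hypothetical end-to-end sketch; all names and fields are illustrative
# assumptions, not Kaspian's actual API.
cluster = {"name": "etl-spark", "workload": "spark"}        # hardware (step 1)
environment = {"name": "pandas-etl",
               "requirements": "requirements.txt"}          # software (step 2)

# Pairing the two yields a complete compute spec that any application
# (a Job, a Workflow, an Airflow task, a notebook) can reference.
job = {
    "cluster": cluster["name"],
    "environment": environment["name"],
    "entrypoint": "jobs/daily_etl.py",
}
print(f"Run {job['entrypoint']} on {job['cluster']} with {job['environment']}")
```

Because the hardware and software planes are defined independently, the same cluster can back many environments and vice versa, so one pairing per application is all that needs to change between, say, a Spark job and a notebook session.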

Get started today

No credit card needed