dagster-ray
dagster-ray allows running jobs orchestrated by Dagster on Ray.
This combines Dagster's rich orchestration capabilities with Ray's near-instant job startup, compute autoscaling, and distributed workflows, without extra overhead for the user.
The same Dagster code can be executed locally or on a remote Ray cluster. Local scripts can be run on the cluster immediately, without redeploys.
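The no-redeploy workflow leans on Ray's ability to ship local code to a cluster at connection time. As a rough, dagster-ray-agnostic sketch of that mechanism with plain Ray (the address below is purely illustrative):

```python
import ray

# Connect to a remote cluster via Ray Client and upload the local working
# directory, so locally edited code runs on the cluster without a redeploy.
ray.init(
    address="ray://head-node:10001",   # illustrative cluster address
    runtime_env={"working_dir": "."},  # ships local code to the workers
)
```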
Some of the implemented resources:
- RunLauncher
- Executor (see the wiring sketch after this list)
- IOManager
- PipesClient
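To show how these plug into ordinary Dagster code, here is a minimal sketch of attaching the executor to a job. The `ray_executor` name and its import path are assumptions; check the dagster-ray documentation for the exact symbols:

```python
from dagster import Definitions, job, op

from dagster_ray import ray_executor  # assumed import path and name


@op
def train() -> int:
    # with the Ray executor, each op is scheduled on the Ray cluster
    return 1


@job(executor_def=ray_executor)  # attach the executor to this job
def training_job():
    train()


defs = Definitions(jobs=[training_job])
```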
Some example code:
```yaml
# default settings for launched Runs
run_launcher:
  module: dagster_ray
  class: RayRunLauncher
  config:
    num_cpus: 1
    num_gpus: 0
```
Individual runs can override these defaults, e.g. for a heavy training step (the tag key below is an assumption; check the dagster-ray docs for the exact override format):

```python
from dagster import job, op

@op
def heavy_op():
    # a really heavy PyTorch computation
    result = ...
    return result

# per-run resource override; tag key assumed, check the dagster-ray docs
@job(tags={"dagster-ray/config": {"num_cpus": 8, "num_gpus": 1}})
def heavy_job():
    heavy_op()
```
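The IOManager from the resource list passes values between steps through Ray (presumably its object store), so steps can exchange data without a shared filesystem. A hedged sketch, assuming the class is named `RayIOManager` and takes no required arguments:

```python
from dagster import Definitions, asset

from dagster_ray import RayIOManager  # assumed class name and import path


@asset
def numbers() -> list[int]:
    return [1, 2, 3]


@asset
def total(numbers: list[int]) -> int:
    # the upstream value travels through Ray rather than the local filesystem
    return sum(numbers)


defs = Definitions(
    assets=[numbers, total],
    resources={"io_manager": RayIOManager()},
)
```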