Sieve functions are run in the cloud and are written in Python. Any function deployed on Sieve is automatically available as an API and can also be accessed via the Sieve Python client library.

These functions can be treated as normal Python functions and can be used to pipeline other functions together, deploy custom ML models on cloud GPUs, or run arbitrary Python code in the cloud.

Defining a Sieve Function

To run a function in the cloud, we need to define both:

  • the Python function itself
  • its runtime / environment

Defining a Sieve function is as simple as adding the @sieve.function decorator on top of your Python function and running sieve deploy in your directory.

import sieve

@sieve.function(name="foo")
def foo(a):
	return a + 1
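
Once deployed, this function is callable from anywhere through the Sieve Python client. Here's a minimal sketch, assuming the function was deployed under a namespace such as your-org (substitute your own):

import sieve

# Look up the deployed function by "<namespace>/<name>"; "your-org" is a placeholder.
foo = sieve.function.get("your-org/foo")

# .run() submits a job and blocks until the result is ready.
print(foo.run(1))  # 2
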
Sieve also supports running functions with a __setup__ method. This is especially useful for loading models into memory once for many jobs. Learn more about sieve.Model here.
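
For reference, here's a minimal sketch of that pattern: __setup__ runs once when a worker starts, and __predict__ then handles each incoming job. The model name and the load_model helper below are hypothetical stand-ins:

import sieve

@sieve.Model(name="my_model")
class MyModel:
	def __setup__(self):
		# Runs once per worker: load weights into memory so each
		# subsequent job skips the expensive load step.
		self.model = load_model("/root/.cache/models/weights.pt")  # hypothetical helper

	def __predict__(self, file: sieve.File) -> str:
		# Runs once per job, reusing the model loaded in __setup__.
		return self.model(file.path)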

To define a custom environment, Sieve lets you specify your Python packages, system packages, and any other build-time commands that you may want to run on function build.

Here’s a quick example of all three in action:

# my_sieve_func/main.py

import sieve

@sieve.function(
	name="my_custom_function",
	python_packages=[
		"opencv-python"
	],
	system_packages=[
		"ffmpeg"
	],
	run_commands=[
		"mkdir -p /root/.cache/models/",
	]
)
def my_custom_function(a: int, b: int) -> int:
	import cv2  # opencv-python is installed in the cloud environment, so this import resolves there

	return a + b

To build and deploy this function on the Sieve platform, run:

cd my_sieve_func
sieve deploy main.py

You’ll now see your custom function on the Sieve dashboard and can run jobs via API, Python client, or the dashboard. Keep reading for a deep dive into the function lifecycle and how to monitor your function.
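
For example, jobs can be queued from the Python client without blocking by using .push(), which returns a future (the your-org namespace is again a placeholder):

import sieve

fn = sieve.function.get("your-org/my_custom_function")

# .push() queues the job and returns a future instead of blocking.
future = fn.push(1, 2)

# Do other work here, then collect the output when you need it.
print(future.result())  # 3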

There are many other settings you can customize, including CUDA versions, Python versions, and more.
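
A hedged sketch of what that can look like is below; the python_version, cuda_version, and gpu parameter names are assumptions here, so check the reference for the exact spellings and accepted values:

import sieve

@sieve.function(
	name="my_gpu_function",
	python_version="3.10",  # assumed parameter: pins the Python interpreter version
	cuda_version="11.8",    # assumed parameter: pins the CUDA toolkit version
	gpu=True,               # assumed parameter: requests a GPU-backed worker
)
def my_gpu_function(a: int) -> int:
	return a + 1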

Sieve Function Lifecycle

After you’ve defined your code and environment, you’ll run sieve deploy to deploy your function to the cloud. On deployment, Sieve will install your Python and system packages, run your build commands, and package your environment and code as a container.

When you run a job, Sieve will spin up a container with your function and run your function with the provided arguments. Our autoscaler will tear down the container after the function has finished running. To control idle time and minimum replicas on your function, see the autoscaling section.
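
This lifecycle makes fan-out straightforward: if you queue many jobs at once, the autoscaler brings up as many workers as needed and tears them down once the queue drains. A sketch, reusing the placeholder namespace from above:

import sieve

fn = sieve.function.get("your-org/my_custom_function")

# Queue 100 jobs without blocking; the autoscaler scales workers to absorb the queue.
futures = [fn.push(i, i) for i in range(100)]

# Collect outputs as the jobs complete.
results = [f.result() for f in futures]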

To monitor the state of your workers in the cloud, reference the current_workers field here.

Jobs are automatically routed to the latest version of your function unless otherwise specified.