API Reference
Get Function
GET /v2/functions/{organization_name}/{function_name}
curl --request GET \
--url https://mango.sievedata.com/v2/functions/{organization_name}/{function_name} \
--header 'X-API-Key: <x-api-key>'
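The same request can be made from Python. The sketch below uses only the standard library and assumes your API key is stored in a `SIEVE_API_KEY` environment variable; the helper names (`function_url`, `get_function`) are illustrative, not part of the Sieve SDK.

```python
import json
import os
import urllib.request

BASE_URL = "https://mango.sievedata.com/v2"


def function_url(organization_name: str, function_name: str) -> str:
    """Build the Get Function endpoint URL for an org/function pair."""
    return f"{BASE_URL}/functions/{organization_name}/{function_name}"


def get_function(organization_name: str, function_name: str) -> dict:
    """Fetch a function's metadata as a JSON object (dict)."""
    req = urllib.request.Request(
        function_url(organization_name, function_name),
        # Authentication is a single X-API-Key header, as in the curl example.
        headers={"X-API-Key": os.environ["SIEVE_API_KEY"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    info = get_function("sieve-internal", "video_retalker")
    print(info["latest_version"]["build_status"])
```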
{
  "name": "video_retalker",
  "owner_name": "sieve-internal",
  "visibility": "private",
  "title": "video_retalker",
  "code_url": "",
  "description": "Sync your lips to any audio",
  "tags": [
    "Video",
    "Generative"
  ],
  "cover_image_url": null,
  "examples": [
    "d9d382d6-171d-4b79-9ce6-7bfea0691049"
  ],
  "readme": "# Video Lipsyncing\n\nSync your lips with any video using this model. Similar to methods such as [Wav2Lip](https://github.com/Rudrabha/Wav2Lip) but with much higher quality.\n\n**Note:** The processing time depends on video resolution and video length but a general rule of thumb is that it takes ~13 seconds to generate a single second of video.\n\nOther tips:\n- Ensure there is only a single person in the video\n- Ensure the person is facing the camera\n- Ensure the person is not wearing any accessories that cover the mouth (e.g. mask, scarf, etc.)\n- Ensure the person is not moving their head too much\n- Ensure the person is at at most arms length from the camera",
  "latest_version": {
    "id": "0d5fbb11-5bea-48f8-baf8-28e82c06d31f",
    "build_status": "ready",
    "queued_at": "2023-09-22T22:55:01.510000",
    "built_at": "2023-09-22T22:55:02.177000",
    "ready_at": "2023-09-22T22:55:02.188000",
    "compute_type": "a100",
    "python_version": "3.10",
    "python_packages": [
      "cmake==3.26.3",
      "einops==0.4.1",
      "face-alignment==1.3.4",
      "facexlib==0.2.5",
      "gradio>=3.7.0",
      "librosa==0.9.2",
      "mediapipe==0.9.1.0",
      "ninja==1.10.2.3",
      "numpy==1.23.1",
      "torch==1.13.1"
    ],
    "system_packages": [
      "ffmpeg",
      "libgl1-mesa-glx",
      "libglib2.0-0"
    ],
    "cuda_version": "",
    "gpu": true,
    "minimum_replicas": 1,
    "maximum_replicas": 0,
    "inputs": [
      {
        "type": "sieve.Video",
        "name": "source_video",
        "data": null,
        "description": ""
      },
      {
        "type": "sieve.Audio",
        "name": "target_audio",
        "data": null,
        "description": ""
      }
    ],
    "outputs": [
      {
        "type": "sieve.Video",
        "name": "",
        "data": null,
        "description": ""
      }
    ],
    "environment_variables": [
      {
        "name": "stabilize_expression",
        "description": "whether or not to stabilize the expression before lip-syncing",
        "default": "true"
      },
      {
        "name": "reference_enhance",
        "description": "whether or not to enhance the reference image before lip-syncing",
        "default": "false"
      },
      {
        "name": "gfpgan_enhance",
        "description": "whether or not to enhance the generated lipsync segment",
        "default": "true"
      },
      {
        "name": "post_enhance",
        "description": "whether or not to enhance the image as a whole after the lipsyncing has been added back",
        "default": "true"
      }
    ],
    "stream_output": false,
    "function_dependencies": []
  }
}
This endpoint returns a JSON object describing a function, identified by its organization name and function name.
Request
organization_name
string
required. Name of the Sieve organization.
function_name
string
required. Name of the Sieve function.
Response
name
string
The name of the function.
owner_name
string
The name of the function's owner.
visibility
string
The visibility of the function.
title
string
The title of the function.
code_url
string
The link to the function's source code.
description
string
The description of the function.
tags
string array
The tags of the function.
cover_image_url
string
The URL of the function's cover image, or null if none is set.
examples
string array
An array of job IDs that the function playground will use as examples.
readme
string
The markdown readme of the function.
latest_version
object
The latest version of the function, including its build status, compute configuration, inputs, outputs, and environment variables (see the example response).
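A sketch of how the response might be inspected in practice, using an abridged copy of the sample response above (all field names come from that sample; the variable names are illustrative):

```python
# Abridged copy of the sample Get Function response.
function_info = {
    "name": "video_retalker",
    "visibility": "private",
    "latest_version": {
        "build_status": "ready",
        "compute_type": "a100",
        "gpu": True,
        "inputs": [
            {"type": "sieve.Video", "name": "source_video"},
            {"type": "sieve.Audio", "name": "target_audio"},
        ],
    },
}

latest = function_info["latest_version"]

# A version with build_status "ready" has finished building.
is_ready = latest["build_status"] == "ready"

# Collect the input names a caller would need to supply.
input_names = [inp["name"] for inp in latest["inputs"]]

print(is_ready)      # True
print(input_names)   # ['source_video', 'target_audio']
```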