API Quick Start Guide

Below is a tutorial for common use cases. You can find the full list of API endpoints in the Swagger documentation: https://saas.haut.ai/api/swagger/

All API calls go to our Haut.AI portal and should start with the /api/v1/ base path:

SAAS_HOST = "https://saas.haut.ai"

Prepare your image

For your convenience, we've prepared a sample face image. You can skip this step and use your image instead.

!wget https://storage.googleapis.com/haut/face.jpg
!pip install requests websockets websocket-client

import asyncio
import base64
import datetime
import time

import requests
import websocket  # synchronous client from the websocket-client package
import websockets

Login

All API requests must be authorized, so first call the login API to get an access_token (Auth token). Please note: the Auth token lives for 1 hour; after that you need to create a new one or use the /api/v1/auth/refresh/ endpoint. Alternatively, you can use a private token, which works the same way as an Auth token but does not expire after 1 hour. Do not call the login method for every request or image: Firebase has a rate limit that will block such calls.

To get an Auth token, use your email and password in the code below (instead of the sample credentials).

resp = requests.post(
    f"{SAAS_HOST}/api/v1/login/",
    json={"username": "your@gmail.com", "password": "your_password"},
)

resp.raise_for_status()
token = resp.json()["access_token"]
company_id = resp.json()["company_id"]
user_id = resp.json()["id"]

print("Access token:", token)
print("Your company id:", company_id)
print("Your business user id:", user_id)

Generate API token instead of Auth token (Optional)

In some cases developers prefer to use an API token instead of the Auth token returned by the login method. The advantage of an API token is that you can set it to last longer, and it will not expire; and if someone changes the login password, it will not break your code in production. You can have multiple private tokens at once if you need them for different projects. A private token also covers API calls for your linked companies: the token is associated with a user/account, and if this user is part of several companies, it can manage API calls for these companies. You can generate an API key and set its expiration time as you wish through the API:

curl -X 'POST' \
  'https://saas.haut.ai/api/v1/auth/private_tokens/' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer {token}' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "string",
  "expiration_time": "2023-04-17T16:10:17.350Z"
}'

Then take the .data field from the response body and use it instead of the token from /login:

curl -X 'GET' \
  'https://saas.haut.ai/api/v1/companies/.../datasets/' \
  -H 'Authorization: Bearer {.data}'

See Swagger for how to list and delete private tokens.
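
If you prefer Python, here is a minimal sketch of the same flow. The "data" field follows the description above; the Bearer header on the create call is an assumption consistent with the rest of this guide:

# Create a private token (sketch; adjust name and expiration as needed)
resp = requests.post(
    f"{SAAS_HOST}/api/v1/auth/private_tokens/",
    json={"name": "my-production-token", "expiration_time": "2030-01-01T00:00:00Z"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
private_token = resp.json()["data"]

# A private token is used exactly like the Auth token from /login
resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/datasets/",
    headers={"Authorization": f"Bearer {private_token}"},
)
resp.raise_for_status()
print(resp.json())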

Create Dataset

A Dataset is an object that serves as the input for images. You can use different Datasets if you want to separate data coming from different endpoints (e.g. different apps or different countries). You can create a Dataset with the API or via the web UI at https://saas.haut.ai. This only needs to be done once; there is no need to create a Dataset with the same name every time. Please see the supported image types via this API: https://saas.haut.ai/api/v1/dicts/image_types/

resp = requests.post(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/datasets/",
    # image_type_id = 1 is for selfie
    json={"name": "My dataset name", "image_type_id": 1},
    headers={"Authorization": f"Bearer {token}"},
)

resp.raise_for_status()
dataset_id = resp.json()["id"]

print("Your dataset id: ", dataset_id)

Attach Application to your Dataset

Application is a set of algorithms that can be applied to data (which comes from Dataset).

You need to link an Application to a Dataset only once. After the Application and Dataset are linked, all images from the Dataset will be processed by the Application. All new images arriving in the Dataset will be processed in live mode by default (no need to call the API again after sending new images).

Face Skin Metrics is a configured Application you have in your subscription by default. You can get a full list of available applications with the /companies/{company_id}/applications/ API call.
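
For example, to list them (a minimal sketch; the response items are printed as-is, since their exact shape is not shown here):

# List the Applications available to your company
resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/applications/",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for app in resp.json():
    print(app)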

This is how to attach an Application to a Dataset:

# Face Metrics 2.0
FACE_SKIN_METRICS_APPLICATION_ID = "8b5b3acc-480b-4412-8d2c-ebe6ab4384d7"
# "d2942e17-6239-49d1-8ff5-d74c4eb0bd20" for the deprecated Face Metrics 1.0

resp = requests.post(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"applications/{FACE_SKIN_METRICS_APPLICATION_ID}/runs/",
    json={"dataset_id": dataset_id},
    headers={"Authorization": f"Bearer {token}"},
)

resp.raise_for_status()

print(
    "Application Face Skin Metrics has been successfully attached to dataset:",
    dataset_id,
)

Upload Image to Dataset

We upload images to a Dataset in batches. A selfie batch can contain one frontal image, or three images for the left, right, and frontal sides of the face. Skin and Visia batches can contain only one image. See the side_id parameter below; to review these parameters, refer to the Dict API (/dicts/image_types/) or open https://saas.haut.ai/api/v1/dicts/image_types/ in a browser.

In some cases you may want to upload several related images in one batch, for instance the front, left, and right sides of the face. For this, use the same batch_id for all three images but a different side_id for each (see the multi-side sketch after the upload example below):

  • front: side_id=1

  • right: side_id=2

  • left: side_id=3

Please note, we have a concept of "subjects": these are your end customers, and every image should be associated with a Subject. If you don't need to associate each customer with a unique subject, just create one default subject (edit "My subject name" in the code below).

Here we upload face.jpg; it should be stored locally on your disk.

# 1. Create subject
resp = requests.post(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/",
    json={"name": "My subject name"},
    headers={"Authorization": f"Bearer {token}"},
)

resp.raise_for_status()
subject_id = resp.json()["id"]

print("Your subject id: ", subject_id)

# 2. Create batch
print("Create batch")

resp = requests.post(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/subjects/{subject_id}/batches/",
    headers={"Authorization": f"Bearer {token}"},
)

resp.raise_for_status()
batch_id = resp.json()["id"]
print("Your batch id: ", batch_id)

# 3. Send image
print("Upload image")

with open("face.jpg", "rb") as file:
    resp = requests.post(
        f"{SAAS_HOST}/api/v1/companies/{company_id}/"
        f"datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/",
        json={
            # side_id = 1 is for front image
            "side_id": 1,
            # light_id = 1 is for regular light
            "light_id": 1,
            "b64data": base64.b64encode(file.read()).decode(),
        },
        headers={"Authorization": f"Bearer {token}"},
    )
    
resp.raise_for_status()
image_id = resp.json()["id"]

print("Your image id: ", image_id)
print("Wait while image is being processed...")

time.sleep(10)
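
As mentioned earlier, to process the front, right, and left sides of the face together, create a batch as above and upload three images with the same batch_id but different side_id values. A minimal sketch (the three file names are hypothetical):

# Hypothetical multi-side upload: one batch_id, three side_id values
sides = {"face_front.jpg": 1, "face_right.jpg": 2, "face_left.jpg": 3}
for filename, side_id in sides.items():
    with open(filename, "rb") as file:
        resp = requests.post(
            f"{SAAS_HOST}/api/v1/companies/{company_id}/"
            f"datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/",
            json={
                "side_id": side_id,
                "light_id": 1,
                "b64data": base64.b64encode(file.read()).decode(),
            },
            headers={"Authorization": f"Bearer {token}"},
        )
    resp.raise_for_status()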

Get human-readable info about algorithms

resp = requests.get(
    f"{SAAS_HOST}/api/v1/dicts/algorithms/",
    headers={"Authorization": f"Bearer: {token}"},
)

resp.raise_for_status()
algorithms = resp.json()

print("Algorithms dict:", algorithms)

Subscribe to notifications on image processing

This step is optional and is needed only if you want real-time notifications when image processing is done. It requires an Auth token and the websocket-client package (imported as websocket above).

# Uses the synchronous client from the websocket-client package
ws = websocket.create_connection(
    f"wss://saas.haut.ai/notifications/?user_id={user_id}",
    cookie=f"authorization={token}",
)
print("Receiving...")
result = ws.recv()
print(f"Received msg: '{result}'")
ws.close()

As an alternative, you can receive notifications via webhooks; see how to configure them in the UI: https://docs.saas.haut.ai/interface-guidelines-1/datasets/get-notifications-via-webhooks

Get results for processed image

Transferring an image from the client to your backend and then to the Haut.AI backend takes time. Additionally, image processing takes around 3-5 seconds, depending on the size. If you request results too early, you might only receive a subset of metrics that are ready at that moment.

To ensure all metrics are calculated, set up the webhook for the dataset and wait for a callback to this webhook.
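
If you cannot use a webhook, a simple fallback is to poll /results until no new metrics appear between two polls. This is a rough sketch; the attempt count and delay are our own arbitrary choices, not documented API behavior:

# Rough polling fallback: re-fetch /results until the metric count stabilizes.
# attempts and delay are arbitrary choices, not API guarantees.
def wait_for_results(url, headers, attempts=10, delay=2.0):
    results = []
    previous = -1
    for _ in range(attempts):
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        results = resp.json()
        if results and len(results) == previous:
            break  # no new metrics appeared since the last poll
        previous = len(results)
        time.sleep(delay)
    return results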

The UI for webhook configuration is described in the guide linked above. Here's an example of how to obtain /results for the image. You need to provide the metadata (batch_id, image_id, subject_id, and dataset_id) from the previous steps.

resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/"
    f"subjects/{subject_id}/batches/{batch_id}/images/{image_id}/results/",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print("Processed results for image:", image_id)
for result in resp.json():
    alg = next(
        algo for algo in algorithms if algo["id"] == result["algorithm_version_id"]
    )
    print(f"{alg['algorithm_family']['name']} v{alg['version']}: {result['result']}")
    print("-" * 20)

print("Get images for segments and their results")
resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/"
    f"subjects/{subject_id}/batches/{batch_id}/images/{image_id}/aux/",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for aux_image in resp.json():
    print("Segment:", aux_image["aux_image_type"]["name"])
    print(f"URL: {SAAS_HOST}/images/{aux_image['id']}.jpg?token={token}")
    for result in aux_image["results"]:
        alg = next(
            algo for algo in algorithms if algo["id"] == result["algorithm_version_id"]
        )
        print(
            f"{alg['algorithm_family']['name']} v{alg['version']}: {result['result']}"
        )
    print("-" * 20)

Return the history of parameters for a user for a selected timeframe

Often you need to know the history of parameters for a user's skin, a kind of skin diary. Please use the /companies/{company_id}/datasets/{dataset_id}/subjects/{id}/all_results/ API for that, and set date_from and date_to for the timeframe.

print("6 Return history for given user for last week")

resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/"
    f"subjects/{subject_id}/all_results/",
    params={
        "date_from": (
            datetime.datetime.today() - datetime.timedelta(weeks=1)
        ).isoformat()
    },
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(f"Subject [{subject_id}] history for last week:")
print(resp.json())
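
To bound the window on both sides, pass date_to as well, in the same ISO format:

# Query a fixed two-week window by passing both bounds
resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/subjects/{subject_id}/all_results/",
    params={
        "date_from": "2024-01-01T00:00:00",
        "date_to": "2024-01-15T00:00:00",
    },
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())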

Streaming results

You can subscribe to server-to-server results streaming. The algorithms of the Application are calculated in parallel and streamed over the socket, which enables faster response times compared to a REST call. You will first get the fastest calculated algorithm, and the rest as they are calculated.

import asyncio

import aiohttp
import websockets


async def test_results(host: str, email: str, password: str, company_id: str):
    # Log in first to obtain an access token
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"https://{host}/api/v1/login/",
            json={"username": email, "password": password},
        ) as resp:
            resp_json = await resp.json()
            token = resp_json["access_token"]
    # Subscribe to the results stream; messages arrive as algorithms finish
    async with websockets.connect(
        f"wss://{host}/notifications/results/?company_id={company_id}",
        # 'extra_headers' is the argument name in websockets < 14
        # (newer versions call it 'additional_headers')
        extra_headers={"Authorization": token},
    ) as ws:
        while True:
            text = await ws.recv()
            print(text)


asyncio.run(
    test_results(
        "saas.haut.ai",
        "your@email.com",
        "your_password",
        "your-company-id-1afb-4b3d-8099-a75fc7f7e1d4",
    )
)


What if I want to change the list of algorithms running on a dataset within the scope of the App?

First, get the list of available algorithms and their IDs: https://saas.haut.ai/api/v1/dicts/algorithms

Look for algorithms with the selfie_v2.* techname, as they are in the scope of Face Metrics 2.0, which you most probably want to use. You must include the Face Detector algorithm (id=40), because without face detection the other algorithms will fail to process a selfie.

selfie_v2.redness has id=30. We will use the Face Detector and Redness algorithms as an example to create an AppRun (an instance of an App bound to a particular dataset).

You will need the companyId you get from the login API https://saas.haut.ai/api/v1/login/

You will also need the ApplicationId = 8b5b3acc-480b-4412-8d2c-ebe6ab4384d7 for Face Metrics 2.0, where the algorithms for face analysis reside, and the datasetId where you want to add or remove algorithms for processing. With all of the above, send a POST request with a body that contains the list "enabled_algorithms": [30, 40] (Face Detector and Redness in our example) to:

companies/{companyId}/applications/{AppId}/runs/

You can try it in Swagger: https://saas.haut.ai/api/swagger/

Or use curl; put in your own companyId, datasetId, and auth token, and it will work:

curl -X 'POST' \
  'https://saas.haut.ai/api/v1/companies/8c2ce170-cfa8-45a5-85f6-f872f9d54970/applications/8b5b3acc-480b-4412-8d2c-ebe6ab4384d7/runs/' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer myToken' \
  -H 'Content-Type: application/json' \
  -H 'X-CSRFTOKEN: tP2ZZ0wMYDUByIMa3GoqsjzxJL1cRwSCL1Tsp5M8YjBwLq92FhRR6UiJurWzGcQO' \
  -d '{
  "dataset_id": "8b567f0a-89f1-45da-8b0e-42473e0293da",
  "enabled_algorithms": [
    30,40]
}'

As a result, only the Redness and Face Detector algorithms will be calculated for images subsequently uploaded to this dataset.
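
The same call in Python, reusing the variables from the earlier steps:

# Recreate the AppRun with an explicit list of enabled algorithms
resp = requests.post(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"applications/{FACE_SKIN_METRICS_APPLICATION_ID}/runs/",
    json={"dataset_id": dataset_id, "enabled_algorithms": [30, 40]},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()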

Smoothing results

Skin conditions for a user can fluctuate daily due to various reasons. Smoothing can help present the user with data that is free from such fluctuations. The unique subject in the API must be a single person; otherwise, the result will be an average of multiple people. This method returns ONLY MAIN metrics and does not include sub-metrics.

You should call the companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/smoothed_results/ API, like:

curl -X 'GET' \
  'https://saas.haut.ai/api/v1/companies/2f1c3e0f-45c3-448f-81c1-885c1b725ef0/datasets/0132ced0-f78a-4717-a301-f0e8ba0efdda/subjects/145d2764-c924-499f-ac17-81d2a09bb121/batches/84007136-7842-4c4f-87ab-61f6031831ba/images/d5cc9441-12da-4841-b2c8-a6cc6acc4b20/smoothed_results/?sample_time_window=14&sample_max_size=10&smoothing_method=mean' \
  -H 'accept: application/json'

The history graph smoothing method has a few configurable settings.

smoothing_method, one of:

  • mean

  • mean_without_outliers

  • linear_approximation

sample_time_window: the sample time window in days is configurable; 14 days by default.

sample_max_size is configurable; 10 by default.

Try it out with the Swagger tool: https://saas.haut.ai/api/swagger/
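
Or in Python, reusing the variables from the earlier steps (the curl example above omits auth, so the Bearer header here is an assumption consistent with the rest of this guide):

# Fetch smoothed results with explicit smoothing parameters
resp = requests.get(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/"
    f"datasets/{dataset_id}/subjects/{subject_id}/"
    f"batches/{batch_id}/images/{image_id}/smoothed_results/",
    params={
        "sample_time_window": 14,
        "sample_max_size": 10,
        "smoothing_method": "mean",
    },
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())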

Getting PDF report for the image

You might want to download the generated PDF report for a selected image. It contains all metrics, the dynamics of metrics over time, and masks. The PDF report is easy to send to a user or print on paper.

Use the /pdf API call for that:

saas.haut.ai/service/pdf-generator/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/pdf/

👆 Please note, this endpoint stands separately from the /api/v1 endpoints.

curl -X 'POST' \
  'https://saas.haut.ai/service/pdf-generator/companies/8c2ce170-cfa8-45a5-85f6-f872f9d54970/datasets/7ec46dc1-8e3a-4d25-80ab-d751b3b2b2e7/subjects/065de3eb-d0d3-434b-b1fc-1ec6a4612a34/batches/d165b917-c7f0-42bf-9b88-ba117002947c/images/5a1177b1-dc77-465f-ae95-2ca7af887150/pdf/' \
  -H 'accept: */*' \
  -H 'access-token: yourSecretTokenHere' \
  -H 'Content-Type: application/json' \
  -d '{
  "timeout": 30000,
  "waitUntil": "networkidle0"
}'

Args:

  • company_id: ID of an image owner's company.

  • dataset_id: ID of a dataset containing the image.

  • subject_id: ID of the subject the image is related to.

  • batch_id: ID of the batch of images to which this specific image belongs.

  • image_id: ID of the target image.

  • access_token: Access token for authorization.

  • options: JSON object with timeout and waitUntil fields. See pyppeteer.page.Page.goto() for details.

Returns: fastapi.responses.FileResponse: A generated PDF.
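
A Python sketch of the same call that saves the report to disk (the access-token header mirrors the curl example above):

# Download the generated PDF report and save it locally
pdf_url = (
    f"{SAAS_HOST}/service/pdf-generator/companies/{company_id}/"
    f"datasets/{dataset_id}/subjects/{subject_id}/"
    f"batches/{batch_id}/images/{image_id}/pdf/"
)
resp = requests.post(
    pdf_url,
    json={"timeout": 30000, "waitUntil": "networkidle0"},
    headers={"access-token": token},
)
resp.raise_for_status()
with open("report.pdf", "wb") as f:
    f.write(resp.content)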

Manage data retention

Data retention is managed per dataset. You can configure it via the API when creating a dataset, or apply it to an existing dataset later, by sending the arguments below in the request body of a PUT or POST to /api/v1/companies/{company_id}/datasets/{core_id}/

Images will be auto-deleted after N days:

expire_images_after_days

Metadata (the calculated results) will be auto-deleted after N days:

expire_metadata_after_day

Images will be auto-deleted right after results were calculated by the algorithms:

clean_images_as_soon_as_ready
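
A sketch of applying retention settings to an existing dataset. The field names are as listed above; the example values, and whether other dataset fields are required in the PUT body, are assumptions, so treat this as illustrative:

# Apply retention settings to an existing dataset (illustrative sketch)
resp = requests.put(
    f"{SAAS_HOST}/api/v1/companies/{company_id}/datasets/{dataset_id}/",
    json={
        "expire_images_after_days": 30,    # auto-delete images after 30 days
        "expire_metadata_after_day": 90,   # auto-delete calculated results after 90 days
        "clean_images_as_soon_as_ready": False,
    },
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()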
