API Quick Start Guide
Below is a tutorial for common use cases. The full list of APIs is available in the Swagger documentation: https://saas.haut.ai/api/swagger/
All calls go to our Haut.AI portal, https://saas.haut.ai, and API paths start with the /api/v1/ base URL.
Prepare your image
For your convenience, we've prepared a sample face image. You can skip this step and use your image instead.
Login
You need to call the login API to get an access_token (auth token), since all requests to the API must be authorized.
Please note: the auth token lives for 1 hour; after that you need to create a new one, or use the /api/v1/auth/refresh/ endpoint.
There is also an option to use a private token, which works the same way as an auth token but does not expire after 1 hour. Do not call the login method for every request or image; Firebase has a rate limit that will block such calls.
To get an auth token, use your email and password in the code below (instead of the sample credentials).
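A minimal login call can be sketched as below. The request body field names and the exact response shape are assumptions based on this guide; check the Swagger documentation for the precise schema.

```shell
# Sketch of the login call; body field names are assumptions, see Swagger.
curl -X POST 'https://saas.haut.ai/api/v1/login/' \
  -H 'Content-Type: application/json' \
  -d '{"email": "you@example.com", "password": "your-password"}'
```

The response body contains your access_token; pass it to subsequent calls as an `Authorization: Bearer {access_token}` header.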
Generate API token instead of Auth token (Optional)
In some cases developers prefer to use an API token instead of the auth token returned by the login method. The advantage of an API token is that you can set it to last longer, so it will not expire after 1 hour, and if someone changes the login password, it will not break your code in production. You can have multiple private tokens at once if you need them for different projects. A private token also covers API calls for your linked companies: the token is associated with a user/account, and if this user is part of several companies, the token can manage API calls for all of them. You can generate an API key and set its expiration time as you wish through the API:
```shell
curl -X POST 'https://saas.haut.ai/api/v1/auth/private_tokens/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"name": "string", "expiration_time": "2023-04-17T16:10:17.350Z"}'
```
Then take the `data` field from the response body and use it instead of the token from /login:

```shell
curl -X GET 'https://saas.haut.ai/api/v1/companies/.../datasets/' \
  -H 'authorization: Bearer {.data}'
```
See Swagger for how to list and delete private tokens.
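As a sketch, listing tokens is presumably a GET on the same endpoint used for creation; this is an assumption, so verify the exact paths in Swagger.

```shell
# List existing private tokens (assumed GET on the creation endpoint).
curl -X GET 'https://saas.haut.ai/api/v1/auth/private_tokens/' \
  -H 'Authorization: Bearer {access_token}'
```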
Create Dataset
A Dataset is an object that serves as the input for images. You can use different Datasets if you want to separate data coming from different endpoints (e.g. different apps or different countries). You can create a Dataset with the API or via the web UI at https://saas.haut.ai. This only needs to be done once; there is no need to create a Dataset with the same name every time. Please see the supported image types via this API: https://saas.haut.ai/api/v1/dicts/image_types/
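A dataset-creation call might look like the sketch below. The exact path and body fields (e.g. `name`) are assumptions modeled on the other endpoints in this guide; see Swagger for the real schema.

```shell
# Hypothetical sketch of dataset creation; verify the path and body in Swagger.
curl -X POST 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{"name": "My Dataset"}'
```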
Attach Application to your Dataset:
An Application is a set of algorithms that can be applied to data (which comes from a Dataset).
You need to link an Application to a Dataset only once. After the Application and Dataset are linked, all images in the Dataset will be processed by the Application. All newly arriving images in the Dataset are processed in live mode by default (no need to call the API after sending new images).
Face Skin Metrics is a configured Application you have in your subscription by default.
You can get the full list of available applications with the /companies/{company_id}/applications/ API call.
This is how to attach Application to Dataset:
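Based on the AppRun endpoint described later in this guide, attaching can be sketched as below; the body field that references the dataset is an assumption, so verify its name in Swagger.

```shell
# Sketch: create an AppRun binding the Application to the Dataset.
curl -X POST 'https://saas.haut.ai/api/v1/companies/{company_id}/applications/{application_id}/runs/' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{"dataset_id": "{dataset_id}"}'
```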
Upload Image to Dataset
We upload images to a Dataset in batches.
A selfie batch can have one frontal image, or it can have three images for the left, right, and frontal sides of the face.
A skin batch and a Visia batch can have only one image.
See the side_id parameter below. To review such parameters, refer to the Dict API (/dicts/image_types/) or open https://saas.haut.ai/api/v1/dicts/image_types/ in a browser.
In some cases you may want to upload several related images in one batch, for instance, to submit the front, left, and right sides of the face for processing.
For this, use the same batch_id for all three images, but a different side_id for each:
front: side_id=1
right: side_id=2
left: side_id=3
Please note, we have a concept of "subjects": these are your end customers, and every image should be associated with a Subject. If you don't need to associate every customer with a unique subject, just create one default subject (edit "My Subject Name" in the code below).
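Creating a default subject might look like this sketch; the path and the body field are assumptions modeled on the other endpoints in this guide, so check Swagger for the exact schema.

```shell
# Hypothetical sketch of subject creation; verify the path and body in Swagger.
curl -X POST 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{"name": "My Subject Name"}'
```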
Here we upload face.jpg; it should be stored locally on your disk.
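An upload call might be sketched as below. The path, the multipart field name, and the way side_id is passed are all assumptions; the Swagger documentation has the exact contract.

```shell
# Hypothetical upload sketch; verify the path and fields in Swagger.
curl -X POST 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/' \
  -H 'Authorization: Bearer {access_token}' \
  -F 'file=@face.jpg' \
  -F 'side_id=1'
```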
Get human-readable info about algorithms
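The dictionary endpoint mentioned later in this guide returns the algorithms with their names and IDs. For example:

```shell
# Get the list of algorithms with human-readable names and IDs.
curl -X GET 'https://saas.haut.ai/api/v1/dicts/algorithms' \
  -H 'accept: application/json'
```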
Subscribe for notifications on image processing
This step is optional and needed only if you want real-time notifications when the processing of images is done. It requires an auth token.
As an alternative, you can listen on a WebSocket; see how to configure it in the UI: https://docs.saas.haut.ai/interface-guidelines-1/datasets/get-notifications-via-webhooks
Get results for processed image
Transferring an image from the client to your backend and then to the Haut.AI backend takes time. Additionally, image processing takes around 3-5 seconds, depending on the size. If you request results too early, you might only receive a subset of metrics that are ready at that moment.
To ensure all metrics are calculated, set up the webhook for the dataset and wait for a callback to this webhook.
The UI for webhook configuration is described in the documentation. Here's an example of how to obtain /results for the image. You need to provide the metadata (batch_id, image_id, subject, and dataset) from the previous step.
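The /results call can be sketched as below; the exact path shape is an assumption modeled on the PDF endpoint later in this guide, so check Swagger.

```shell
# Sketch: fetch the calculated results for one image.
curl -X GET 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/results/' \
  -H 'Authorization: Bearer {access_token}'
```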
Return the history of parameters for the user for a selected timeframe.
Often you need to know the history of a user's skin parameters, a kind of skin diary. Please use the /companies/{company_id}/datasets/{dataset_id}/subjects/{id}/all_results/ API for that and set date_from and date_to for the timeframe.
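For example (the date format is an assumption; check Swagger for the expected one):

```shell
# Sketch: fetch the parameter history for a subject over a timeframe.
curl -X GET 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/{id}/all_results/?date_from=2023-01-01&date_to=2023-03-01' \
  -H 'Authorization: Bearer {access_token}'
```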
Streaming results
You can subscribe to server-to-server results streaming. The algorithms of the Application are calculated in parallel and streamed over the socket, which enables faster response times compared to a REST call. You will first get the fastest-calculated algorithm, and the rest of them as they are calculated.
What if I want to change the list of algorithms running over a dataset within the scope of the App?
First you need to get the list of available algorithms and see their IDs: https://saas.haut.ai/api/v1/dicts/algorithms
Look for algorithms with a selfie_v2.* techname, as they are in the scope of Face Metrics 2.0, which you most probably want to use. You must use the Face Detector algorithm, because without face detection the other algorithms will fail to process the selfie; its id=40 (see the screenshot below).
selfie_v2.redness has id=30. We will use the Face Detector and Redness algorithms as an example to create an AppRun (an instance of an App bound to a particular dataset).
You will need the companyId you get from the login API (https://saas.haut.ai/api/v1/login/), the ApplicationId = 8b5b3acc-480b-4412-8d2c-ebe6ab4384d7 for FaceMetrics 2.0 (where the algorithms for face analysis reside), and the datasetId where you want to add or remove algorithms for processing. With all of the above, call a POST request to companies/{companyId}/applications/{AppId}/runs/ with a body that contains the list "enabled_algorithms": [30, 40] (Face Detector and Redness in our example). You can try it in Swagger: https://saas.haut.ai/api/swagger/
Or use curl: put in your own companyId, datasetId, and auth token and it will work!
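A sketch of the call (the body field linking the run to a dataset is an assumption; verify its name in Swagger):

```shell
# Sketch: create an AppRun with only Face Detector (40) and Redness (30) enabled.
curl -X POST 'https://saas.haut.ai/api/v1/companies/{companyId}/applications/8b5b3acc-480b-4412-8d2c-ebe6ab4384d7/runs/' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{"dataset_id": "{datasetId}", "enabled_algorithms": [30, 40]}'
```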
As a result, only Redness Algo and Face Detector will be calculated for upcoming images to this dataset
Smoothing results
Skin conditions for a user can fluctuate daily due to various reasons. Smoothing can help present the user with data that is free from such fluctuations. The unique subject in the API must be a single person; otherwise, the result will be an average of multiple people. This method returns ONLY MAIN metrics and does not include sub-metrics.
You should call the companies/datasets/subjects/batches/images/smoothed_results API, like this:
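A sketch of the call; the full path with IDs and the query-parameter names for the smoothing settings are assumptions based on the settings listed below, so check Swagger.

```shell
# Hypothetical sketch; query-parameter names are placeholders, see Swagger.
curl -X GET 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/smoothed_results/?smoothing_method=mean&days=14&sample_max_size=10' \
  -H 'Authorization: Bearer {access_token}'
```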
The history graph smoothing method has a couple of settings:
- Smoothing method: mean, mean_without_outliers, or linear_approximation
- The sample time window in days (configurable; 14 days by default)
- sample_max_size (configurable; 10 by default)
Try it out with the Swagger tool: https://saas.haut.ai/api/swagger/
Getting PDF report for the image
You might want to download a generated PDF report for a selected image. It contains all metrics, the dynamics of metrics over time, and masks. A PDF report is easy to send to a user or print on paper.
Use the /pdf API call for that: saas.haut.ai/service/pdf-generator/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/pdf/ 👆Please note, this endpoint stands separately from the /api/v1 endpoint.
Args:
company_id: ID of an image owner's company.
dataset_id: ID of a dataset containing the image.
subject_id: ID of the subject the image is related to.
batch_id: ID of the batch of images that contains this specific image.
image_id: ID of the target image.
access_token: Access token for authorization.
options: JSON object with timeout and waitUntil fields. See pyppeteer.page.Page.goto() for details.
Returns: fastapi.responses.FileResponse: A generated PDF.
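Putting the pieces together, a download sketch (passing the access token as a Bearer header is an assumption; the Args list above only says an access token is required):

```shell
# Sketch: download the generated PDF to report.pdf.
curl -X GET 'https://saas.haut.ai/service/pdf-generator/companies/{company_id}/datasets/{dataset_id}/subjects/{subject_id}/batches/{batch_id}/images/{image_id}/pdf/' \
  -H 'Authorization: Bearer {access_token}' \
  -o report.pdf
```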
Manage data retention
Data retention is managed per dataset. You can set it via the API on dataset creation, or apply it to an existing dataset later. PUT or POST the arguments below in the request body to /api/v1/companies/{company_id}/datasets/{core_id}/:
- After N days, images will be auto-deleted.
- After N days, metadata will be auto-deleted (metadata means the calculated results).
- Auto-delete images right after the results were calculated by the algorithms.
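The guide does not name the exact request-body fields, so the sketch below uses hypothetical placeholder names; look up the real ones in Swagger before use.

```shell
# Field names here are HYPOTHETICAL placeholders, not the real API schema.
curl -X PUT 'https://saas.haut.ai/api/v1/companies/{company_id}/datasets/{core_id}/' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{"images_ttl_days": 30, "metadata_ttl_days": 90}'
```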