Hugging Face
Hugging Face is a platform where the machine learning community collaborates on models, datasets, and applications. Hugging Face's dedicated Inference Endpoints and Serverless Inference API allow for running any model via an HTTP REST API, which can be accessed from Pipelines using the webhook transformer.
The Serverless Inference API is free but rate-limited. It should only be used for testing, and requests may be dropped if the specified model is not already loaded in memory. Inference Endpoints should be used in production scenarios.
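To see what the model returns before wiring it into a pipeline, the Serverless Inference API can be exercised directly. The following is a minimal sketch, assuming Python with the requests library; the token and the local audio file path are placeholders.

import requests

API_URL = "https://api-inference.huggingface.co/models/superb/hubert-large-superb-er"
# Placeholder token; use the same value stored in the HUGGING_FACE_TOKEN secret.
headers = {"Authorization": "Bearer <token>"}

# Placeholder path to a short audio clip in a format the model accepts (e.g. WAV).
with open("sample.wav", "rb") as f:
    audio_bytes = f.read()

response = requests.post(API_URL, headers=headers, data=audio_bytes)
print(response.json())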
The following example demonstrates using the Hugging Face Serverless Inference API to perform sentiment analysis on audio samples streamed to Golioth, targeting the Hubert-Large for Emotion Recognition model. Sentiment analysis results are delivered to LightDB Stream as a JSON payload.
Example sentiment analysis result:
[
  {
    "score": 0.6310836672782898,
    "label": "neu"
  },
  {
    "score": 0.2573806643486023,
    "label": "sad"
  },
  {
    "score": 0.09393830597400665,
    "label": "hap"
  },
  {
    "score": 0.017597444355487823,
    "label": "ang"
  }
]
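Each entry pairs a score with an abbreviated label for one of the four emotion classes the model distinguishes (neutral, sad, happy, angry). A small sketch of reducing such a result to its top-scoring label, assuming the response has already been parsed into a Python list; the helper and label mapping are illustrative:

# Hypothetical helper: reduce a sentiment analysis result to its top label.
LABELS = {"neu": "neutral", "sad": "sad", "hap": "happy", "ang": "angry"}

def top_emotion(result):
    best = max(result, key=lambda entry: entry["score"])
    return LABELS.get(best["label"], best["label"]), best["score"]

result = [
    {"score": 0.6310836672782898, "label": "neu"},
    {"score": 0.2573806643486023, "label": "sad"},
    {"score": 0.09393830597400665, "label": "hap"},
    {"score": 0.017597444355487823, "label": "ang"},
]
print(top_emotion(result))  # ('neutral', 0.6310836672782898)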
Make sure to create a secret named HUGGING_FACE_TOKEN in the format Bearer <token>.
filter:
  path: "/audio"
steps:
  - name: emotion-recognition
    transformer:
      type: webhook
      version: v1
      parameters:
        url: https://api-inference.huggingface.co/models/superb/hubert-large-superb-er
        headers:
          Authorization: $HUGGING_FACE_TOKEN
  - name: embed
    transformer:
      type: embed-in-json
      version: v1
      parameters:
        key: text
  - name: send-lightdb-stream
    destination:
      type: lightdb-stream
      version: v1
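In this pipeline, audio payloads streamed to the /audio path are forwarded to the model endpoint by the webhook transformer, the returned inference results are embedded in a JSON object under the text key, and the resulting object is delivered to LightDB Stream.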