Quick start
Let's get started with how to use BlindAI to deploy an AI model in minutes! This section guides you through deploying your first model with the BlindAI Inference Server. We will use the DistilBERT model for demonstration purposes, going through three main phases:
1. Run the BlindAI server: start the BlindAI Inference Server.
2. Prepare the model: load the model and export it in ONNX format (a sketch follows below).
3. Send and run the model: send the model to the BlindAI server and run the inference (also sketched below).
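
As a preview of step 2, here is a minimal sketch of exporting DistilBERT to ONNX with Hugging Face Transformers and PyTorch. The fixed sequence length of 8 and the output file name are illustrative assumptions; the Prepare the model page covers the exact export used in the full walkthrough.

```python
# Sketch of step 2: export DistilBERT to ONNX.
# The sequence length (8) and the file name are illustrative choices.
import torch
from transformers import DistilBertForSequenceClassification

# Load a pre-trained DistilBERT model.
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Dummy input (batch of 1, 8 token ids) used to trace the model for export.
dummy_input = torch.zeros(1, 8, dtype=torch.long)

# Export the traced model to ONNX so the BlindAI server can run it.
torch.onnx.export(
    model,
    dummy_input,
    "./distilbert-base-uncased.onnx",
    export_params=True,
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)
```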
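
And here is a rough sketch of step 3: uploading the ONNX file to a running BlindAI server and running an inference with the Python client SDK. The client names and parameters below (`BlindAiClient`, `connect_server`, `upload_model`, `run_model`, the tensor shape and datum type) are assumptions that may differ between BlindAI client versions, so check the Client API Reference; simulation mode is assumed, so no hardware attestation is needed.

```python
# Sketch of step 3 (client API names are assumptions; see the Client API Reference).
from transformers import DistilBertTokenizer
from blindai.client import BlindAiClient, ModelDatumType

# Tokenize a sample sentence, padded to the same fixed length used at export time.
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokens = tokenizer("I love AI and privacy!", padding="max_length", max_length=8)["input_ids"]

# Connect to a locally running BlindAI server (simulation mode, no attestation).
client = BlindAiClient()
client.connect_server(addr="localhost", simulation=True)

# Upload the ONNX model produced in step 2, then run the inference remotely.
client.upload_model(model="./distilbert-base-uncased.onnx", shape=(1, 8), dtype=ModelDatumType.I64)
response = client.run_model(tokens)
print(response.output)
```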