...

Managed GPU Containers

Autoscaling GPU containers for your inference workloads.
Build your API endpoint, not the infrastructure to run it on.

Contact Sales
Deploy a Container
...

Managed Docker Container Hosting

Configure your container, and we'll handle the rest — autoscaling, load balancing, and redundancy.

Build a container

Build your container from scratch, or use one of our preconfigured templates. All the container needs to do is provide an HTTP endpoint.
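"Provide an HTTP endpoint" really is the whole contract. As a minimal sketch in Python's standard library (the port, handler names, and stub `predict` function are illustrative assumptions on our part; TensorDock does not mandate any particular framework):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(payload):
    # Placeholder for your model call; a real container would load
    # a model at startup and run inference here.
    return {"echo": payload, "prediction": "stub"}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and return a JSON response.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep container logs quiet for this demo

# Bind all interfaces so the container's port mapping reaches the server.
server = HTTPServer(("0.0.0.0", 8080), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Package this with a `Dockerfile` that installs Python and runs the script, and the container is deployable as-is.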

Configure your deployment

Configure your container's CPU, RAM, and disk resources. Rank GPUs by priority, and set the minimum and maximum number of replicas.
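To make the shape of such a deployment concrete, here is a hypothetical spec as a plain Python dict. Every field name here is illustrative only, not TensorDock's actual API:

```python
# Hypothetical deployment spec; field names are illustrative, not the real API.
deployment = {
    "cpu_cores": 4,
    "ram_gb": 16,
    "disk_gb": 100,
    # GPUs ranked by priority: tried in order of preference.
    "gpu_priority": ["RTX 4090", "A100 80GB"],
    # Autoscaling bounds on the replica count.
    "replicas": {"min": 1, "max": 8},
}

assert deployment["replicas"]["min"] <= deployment["replicas"]["max"]
```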

That's it!

We'll handle the rest — autoscaling, load balancing, and redundancy. You'll be ready to send API requests to your new endpoint in minutes.

Abstract away all the infrastructure... for $0

TensorDock's managed GPU containers are the easiest way to deploy a scalable API endpoint for any use case, and they're available at no additional charge on top of the cost of the compute resources.

LLM inference

Image endpoints

Transcoding APIs

Data analytics

Deploy atop the widest range of GPUs
... all at 80% less than the big clouds.

Our managed container platform lets you deploy container groups atop our massive fleet of GPUs and autoscale with ease.

...
$0.36/hour

NVIDIA RTX 4090

Uncompromising performance for image and video processing, gaming, and rendering.

Deploy a 4090 container
...
$1.50/hour

NVIDIA A100 80GB

Accelerated machine learning and LLM inference with 80GB of GPU memory.

Deploy an A100 container
...
From $0.05/hour

More: L40, A6000, etc.

24 GPU models available; choose the one that best suits your workload.

Customize a container

Chat with Sales

Response within 24 hours

Need help deciding which GPU type to use for your container group, or need a custom white-glove solution? Chat with our sales team.

Schedule a video chat
Send us an email or message

Enterprise

High spenders get enterprise accounts.

When you spend more than $10,000/month on TensorDock:

Account Manager

A person to ensure that your needs have been met.

24/7 Priority Support

Your requests will be placed in a priority queue.

Slack and MS Teams

Chat with our team in real time to resolve issues.

FAQ

Frequently asked billing questions.

Contact us to ask something else!

How does autoscaling work?

We automatically add a replica when average GPU utilization over a 10-minute window exceeds 80%, and remove a replica when average GPU utilization over a 10-minute window falls below 20%.
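The policy above boils down to a simple threshold check. A sketch (the function name and list-of-samples shape are our assumptions; in practice your configured min/max replica counts would still cap the result):

```python
def scale_decision(samples):
    """Sketch of the stated policy: `samples` holds GPU utilization
    percentages collected over the last 10 minutes."""
    avg = sum(samples) / len(samples)
    if avg > 80:
        return +1   # add a replica
    if avg < 20:
        return -1   # remove a replica
    return 0        # within band: hold replica count steady
```

For example, ten samples averaging 90% utilization trigger a scale-up, while a 50% average leaves the group unchanged.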

How does load balancing work?

We use a round-robin load balancer to distribute requests evenly across replicas. We try to keep replicas within the same general geographic region but on different physical servers, for best-in-class latency and redundancy.
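Round-robin dispatch is easy to picture: each request goes to the next replica in a fixed rotation. A toy sketch (replica names are hypothetical; the real balancer also handles replicas joining and leaving as autoscaling runs):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin dispatcher over a fixed set of replicas."""

    def __init__(self, replicas):
        self._order = cycle(replicas)

    def pick(self):
        # Each call returns the next replica, wrapping around at the end.
        return next(self._order)

lb = RoundRobinBalancer(["replica-a", "replica-b", "replica-c"])
picks = [lb.pick() for _ in range(6)]
```

With three replicas and six requests, each replica receives exactly two requests, in rotation.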

How does billing work?

We operate on a prepaid model: you deposit money, then provision compute. You must manually add more funds; we never automatically charge your card.

How do I provide a Docker image?

You can provide a Docker image from either a public registry (e.g. Docker Hub) or a private registry; for a private registry, you'll need to supply authentication credentials. We recommend a private registry for security reasons.

What payment methods do you accept?

At the moment, we only accept 3D Secure credit card payments via Stripe. We can also manually accept cryptocurrency payments on deposits larger than $500 on request after you have completed a KYC check.

Do you offer refunds?

Each refund costs us money to process, so we offer refunds of 97% of the remaining account balance on a case-by-case basis. Please start small with a $5 deposit to test out our services.

"Deploying our models was a breeze. And managing them is even easier."

ELBO AI builds Puppetry, an app that lets users animate faces onto still images, and provides an API so other startups can do the same.

@Saravana

"The server is really fast, you never have to worry about noisy neighbors. Wow!"

Ultramarine Linux is an open-source Fedora-based distribution that uses TensorDock's CPU servers to process hundreds of gigabytes of data.

@Neko
...

World-class enterprise support

Delivered by dedicated professionals

Deploy your first TensorDock server.

And you'll never look back.

...