Customer stories
We're creating a platform for progressive AI companies to build their products on the fastest, most performant infrastructure available.
What our customers are saying
You guys have literally enabled us to hit insane revenue numbers without ever thinking about GPUs and scaling. We would be stuck in GPU AWS land without y'all. Truss files are amazing, y'all are on top of it always, and the product is well thought out. I know I ask for a lot so I just wanted to let you guys know that I am so blown away by everything Baseten.
I want the best possible experience for our users, but also for our company. Baseten has hands down provided both. We really appreciate the level of commitment and support from your entire team.
Nathan Sobo,
Co-founder
Our team spent weeks researching and vetting inference providers. It was a thorough process and we confidently believe Baseten was a clear winner. Baseten has helped us abstract away so much of the complexity of AI model deployments and MLOps. On Baseten, things just work out of the box - this has saved us countless engineering hours. It’s made a huge difference in our productivity as a team - most of our engineers have experience now in training and deploying models on Baseten. Every time we start an ML project, we think about how quickly we can get things going through Baseten.
Eric Lehman,
Head of Clinical NLP
Customer Stories

OpenEvidence delivers instant, accurate medical information with Baseten
OpenEvidence partners with Baseten for their inference infrastructure to focus on what they do best: making exceptional tools for physicians.

Latent delivers pharmaceutical search with 99.999% uptime on Baseten
Latent Health uses Baseten to power fast, reliable clinical AI.

Praktika delivers ultra-low-latency transcription for global language education with Baseten
With Baseten, Praktika delivers sub-300-millisecond latency, empowering language learners worldwide with a seamless conversational learning experience.
Zed Industries serves 2x faster code completions with the Baseten Inference Stack
By partnering with Baseten, Zed achieved 45% lower latency, 3.6x higher throughput, and 100% uptime for their Edit Prediction feature.

Wispr Flow creates effortless voice dictation with Llama on Baseten
Wispr Flow runs fine-tuned Llama models with Baseten and AWS to provide seamless dictation across every application.
Rime serves speech synthesis API with stellar uptime using Baseten
Rime AI chose Baseten to serve its custom speech synthesis generative AI model and achieved state-of-the-art p99 latencies with 100% uptime in 2024.

Bland AI breaks latency barriers with record-setting speed using Baseten
Bland AI leveraged Baseten’s state-of-the-art ML infrastructure to achieve real-time, seamless voice interactions at scale.

Baseten powers real-time translation tool toby to Product Hunt podium
The founders of toby worked with Baseten to deploy an optimized Whisper model on autoscaling hardware just one week ahead of their Product Hunt launch and had a top-three finish with zero downtime.
Custom medical and financial LLMs from Writer see 60% higher tokens per second with Baseten
Writer, the leading full-stack generative AI platform, launched new industry-specific LLMs for medicine and finance. Using TensorRT-LLM on Baseten, they increased their tokens per second by 60%.
Patreon saves nearly $600k/year in ML resources with Baseten
With Baseten, Patreon deployed and scaled the open-source foundation model Whisper at record speed without hiring an in-house ML infra team.