The cloud you own: Webinar feat. Oxide

When Amazon goes down, every company running on AWS goes down with it. When your own infrastructure fails, only you are affected.

Justin Garrison, Head of Product at Sidero Labs, and Matthew Sanabria, Head of Solutions Software Engineering at Oxide, sat down for a joint webinar on running Talos Linux on Oxide hardware and the importance of owning your fault domain.

Keep reading or watch the entire webinar below.

Owning your infrastructure

Oxide delivers open-source hardware and software with no software licensing fees, achieving roughly twice the compute density at half the power consumption of a standard rack.

The power angle matters more than it might seem. As AI workloads consume ever more power, relatively inefficient compute faces pressure to leave the data center entirely, and Oxide's rack-scale efficiency is designed to absorb that displaced compute. Many businesses also run steady workloads where a month or two of public cloud rent would have paid for owned hardware outright.

The historical catch has been that owning hardware meant sacrificing elastic APIs and the ability to spin up instances, disks, and VPCs on demand. Oxide keeps those APIs. Roll the rack off the truck, connect power and networking, and within 90 minutes you're provisioning VMs via API. Depending on where you're starting from, that shaves hours, weeks, or even months off the path to production.
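
To make "provisioning VMs via API" concrete, here is a minimal sketch of an instance-create call against a rack's control plane, in Go with only the standard library. The /v1/instances path and the request fields follow Oxide's published API documentation, but treat the exact endpoint, field names, project name, and token handling as assumptions rather than a transcript of the demo; Oxide also ships SDKs and a CLI that wrap this.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	rackURL := os.Getenv("OXIDE_HOST") // e.g. https://oxide.example.com (assumed)
	token := os.Getenv("OXIDE_TOKEN")  // API token for the silo (assumed)

	// Instance-create body; field names follow Oxide's public API docs.
	body, _ := json.Marshal(map[string]any{
		"name":        "demo-vm",
		"description": "provisioned minutes after rack power-on",
		"hostname":    "demo-vm",
		"ncpus":       2,
		"memory":      4 * 1024 * 1024 * 1024, // bytes
	})

	req, err := http.NewRequest(http.MethodPost,
		rackURL+"/v1/instances?project=webinar-demo", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("instance create:", resp.Status)
}
```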

Oxide achieves its efficiency by co-designing hardware and software from the ground up. The rack uses a centralized DC power shelf rather than individual AC/DC converters in each server, cutting out a whole class of conversion loss, waste heat, and cabling. The BIOS is gone entirely: machines initialize directly from the service processor, removing an entire category of firmware security vulnerabilities.

Owning your Kubernetes journey

The same ownership principle applies at the platform layer. Talos Linux is managed solely via API, with no shell access at all. When you're managing dozens of clusters across hundreds of machines, you need a system that is declarative by design and closed to the ad-hoc, shell-driven interventions that cause configuration drift.
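
As a taste of what API-only management looks like, here is a short sketch using the Talos machinery client. The module path and the WithConfigFromFile, WithEndpoints, and Version helpers are written from memory of the machinery package and should be treated as assumptions; the node address and talosconfig path are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/siderolabs/talos/pkg/machinery/client"
)

func main() {
	ctx := context.Background()

	// Connect to a Talos node's API using the credentials in a
	// talosconfig file; there is no SSH fallback to reach for.
	c, err := client.New(ctx,
		client.WithConfigFromFile("talosconfig"), // placeholder path
		client.WithEndpoints("10.5.0.2"),         // placeholder node address
	)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Every operation, from reading the version to applying machine
	// configuration, goes through this same authenticated API.
	resp, err := c.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, msg := range resp.Messages {
		fmt.Println(msg.Version.Tag)
	}
}
```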

Omni is the central management plane for that fleet, providing a single discovery endpoint for all Talos nodes regardless of where they run, with centralized authentication, observability, and cluster management. The pattern is familiar: one server is easy to manage directly, two is fine, ten requires a spreadsheet, a hundred requires an orchestrator. Omni is that orchestrator for Talos.

Omni's infrastructure providers let it call out to any infrastructure API to provision machines on demand. For static infrastructure, the bare metal provider calls IPMI to boot machines, provisions them with Talos, and registers them in Omni. For dynamic infrastructure, providers create machines on demand from any capacity pool—including Oxide. A single Omni-managed cluster can also draw nodes from multiple providers simultaneously, so GPU workloads can be pinned to bare metal nodes while general workloads run on Oxide instances.
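
The shape of that contract, sketched with hypothetical Go types; the actual provider SDK in the Omni repository differs in names and detail.

```go
package main

import (
	"context"
	"fmt"
)

// MachineRequest and InfraProvider are hypothetical names illustrating
// the contract a provider fulfills for Omni, not the real SDK types.
type MachineRequest struct {
	ID           string // Omni's identifier for the requested machine
	TalosVersion string
	Cores        int
	MemoryGiB    int
	JoinToken    string // links the booted machine back to Omni
}

type InfraProvider interface {
	// Provision creates capacity for the request: an IPMI power-on for
	// the bare metal provider, an instance-create call for Oxide.
	Provision(ctx context.Context, req MachineRequest) error
	// Deprovision tears the machine down once Omni releases it.
	Deprovision(ctx context.Context, machineID string) error
}

// oxideProvider is a stand-in; the real one talks to the Oxide API.
type oxideProvider struct{}

func (oxideProvider) Provision(ctx context.Context, req MachineRequest) error {
	fmt.Printf("launching Oxide instance for %s (Talos %s)\n", req.ID, req.TalosVersion)
	return nil
}

func (oxideProvider) Deprovision(ctx context.Context, machineID string) error {
	fmt.Printf("destroying Oxide instance for %s\n", machineID)
	return nil
}

func main() {
	var p InfraProvider = oxideProvider{}
	_ = p.Provision(context.Background(),
		MachineRequest{ID: "worker-1", TalosVersion: "v1.8.0", Cores: 4, MemoryGiB: 8})
}
```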

The demo: Oxide meets Omni

The workflow is straightforward. An infrastructure provider is registered in Omni with a provider ID and credentials, along with Oxide-specific configuration: project placement, CPU and memory allocations, and VPC and subnet assignments. Machine classes map these configurations to provider IDs—the demo used separate classes for control plane and worker nodes with different resource profiles.
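
Modeled as Go structs, the mapping looks roughly like the sketch below. These are illustrative, hypothetical types, not Omni's actual machine class schema; the values mirror the demo's separate control plane and worker profiles.

```go
package main

import "fmt"

// OxideConfig and MachineClass are hypothetical types modeling the
// mapping described above, not Omni's real resource schema.
type OxideConfig struct {
	Project string // Oxide project instances are placed in
	VPC     string
	Subnet  string
	NCPUs   int
	MemGiB  int
}

type MachineClass struct {
	Name       string
	ProviderID string      // which registered provider fulfills requests
	Config     OxideConfig // provider-specific placement and sizing
}

func main() {
	classes := []MachineClass{
		{Name: "control-planes", ProviderID: "oxide",
			Config: OxideConfig{Project: "omni-demo", VPC: "default", Subnet: "default", NCPUs: 4, MemGiB: 8}},
		{Name: "workers", ProviderID: "oxide",
			Config: OxideConfig{Project: "omni-demo", VPC: "default", Subnet: "default", NCPUs: 8, MemGiB: 16}},
	}
	for _, c := range classes {
		fmt.Printf("%s -> provider %q (%d vCPU / %d GiB)\n",
			c.Name, c.ProviderID, c.Config.NCPUs, c.Config.MemGiB)
	}
}
```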

Creating a cluster is then a matter of specifying a name, a Talos version, and machine classes with node counts. Omni calls the provider, which generates a Talos schematic, downloads the corresponding image from the Talos Image Factory, uploads it to Oxide, and launches instances from it. The instances boot from a generic Talos image and fetch their Omni join configuration from the NoCloud data source, with a join token linking each instance back to Omni and the Oxide provider.
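
The schematic-and-image step can be sketched against the Image Factory's public API. The /schematics endpoint and the image URL layout below match the Factory's documented interface, but the Talos version and the empty customization are placeholders; verify the details against factory.talos.dev before relying on them.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Register a schematic with the Talos Image Factory; an empty
	// customization is the simplest possible case.
	schematic := "customization:\n  systemExtensions:\n    officialExtensions: []\n"
	resp, err := http.Post("https://factory.talos.dev/schematics",
		"text/x-yaml", strings.NewReader(schematic))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		ID string `json:"id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	// The provider downloads this disk image and uploads it to Oxide;
	// "nocloud" matches the data source the instances later read their
	// Omni join configuration from. Version is a placeholder.
	fmt.Printf("https://factory.talos.dev/image/%s/v1.8.0/nocloud-amd64.raw.xz\n", out.ID)
}
```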

The demo provisioned four nodes, scaled up dynamically, then destroyed the cluster entirely, with Omni correctly draining and terminating worker nodes before the control plane nodes.

What's next

The Oxide provider for Omni is open source, available on GitHub, and buildable from source today. Go spin up a cluster. And be sure to thank Matt for building it.

Want to build your own provider? Go right ahead and see the docs here.