
CoreWeave

Principal Engineer, Cluster Orchestration

Posted 14 Days Ago
In-Office
2 Locations
206K-303K Annually
Expert/Leader
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About the Role

CoreWeave runs some of the largest GPU clusters in the world. The AI infrastructure behind those clusters determines how workloads are placed, how resources are shared, and how reliably systems perform under constant pressure.

As a Principal Engineer in AI Infrastructure, you will lead the design and evolution of the cluster orchestration systems that make this possible. This includes Slurm, Kubernetes, SUNK, and the control planes that support AI training, inference, and model onboarding at scale.

You will define long-term architecture, solve hard scaling problems, and set technical direction across teams. Your work will directly affect how quickly customers can run models, how efficiently we use GPUs, and how reliably the platform behaves at scale.

What You’ll Do

Architecture and Technical Direction
  • Define the long-term architecture for CoreWeave’s orchestration platforms across Kubernetes, Slurm, SUNK, Kueue, and related systems.
  • Act as a technical authority on scheduling, quota enforcement, fairness, pre-emption, and multi-tenant GPU isolation.
  • Make design decisions that balance performance, reliability, cost, and operational complexity.
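To make the scheduling concerns above concrete, here is a minimal, hypothetical sketch of quota-aware admission with priority-based preemption. The `Job` type, `admit` function, and quota maps are illustrative inventions for this posting, not CoreWeave's, Kubernetes', or Slurm's actual APIs; real schedulers track far richer state (fair-share history, topology, bin-packing constraints).

```go
package main

import (
	"fmt"
	"sort"
)

// Job is a hypothetical, simplified workload description.
type Job struct {
	Tenant   string
	GPUs     int
	Priority int // higher wins
}

// admit enforces a per-tenant GPU quota and, when a higher-priority job
// does not fit, selects the lowest-priority same-tenant running jobs as
// preemption victims. It returns the victims and whether the job can run.
func admit(job Job, quota, used map[string]int, running []Job) ([]Job, bool) {
	free := quota[job.Tenant] - used[job.Tenant]
	if job.GPUs <= free {
		return nil, true // fits within quota, no preemption needed
	}
	// Candidate victims: same tenant, strictly lower priority, cheapest first.
	var candidates []Job
	for _, r := range running {
		if r.Tenant == job.Tenant && r.Priority < job.Priority {
			candidates = append(candidates, r)
		}
	}
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Priority < candidates[j].Priority
	})
	var victims []Job
	for _, v := range candidates {
		if job.GPUs <= free {
			break
		}
		victims = append(victims, v)
		free += v.GPUs
	}
	if job.GPUs > free {
		return nil, false // even full preemption cannot make room
	}
	return victims, true
}

func main() {
	quota := map[string]int{"lab-a": 8}
	used := map[string]int{"lab-a": 8}
	running := []Job{{"lab-a", 4, 1}, {"lab-a", 4, 2}}
	victims, ok := admit(Job{"lab-a", 4, 5}, quota, used, running)
	fmt.Println(ok, len(victims)) // true 1: one low-priority job is preempted
}
```

Even in this toy form, the core trade-off the bullets describe is visible: preemption policy directly trades one tenant's latency against another's fairness guarantee.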
Orchestration Platform Development
  • Lead the evolution of Kubernetes-native control planes, including SUNK and custom operators.
  • Design systems that support workload admission, validation, and rollout, including model onboarding flows.
  • Identify and remove scaling limits across schedulers, control planes, registries, networking, and storage.
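Workload admission and validation, as mentioned above, can be sketched as a pre-scheduler gate that rejects malformed or over-limit specs. The `WorkloadSpec` struct and `validate` function below are hypothetical stand-ins for what a Kubernetes validating admission webhook or Kueue workload check would do; none of these names come from a real API.

```go
package main

import (
	"errors"
	"fmt"
)

// WorkloadSpec is a hypothetical, simplified workload description.
type WorkloadSpec struct {
	Name    string
	Image   string
	GPUs    int
	MaxGPUs int // cluster-imposed per-workload ceiling
}

// validate rejects obviously invalid workloads before they ever reach the
// scheduler, mirroring an admission/validation step in a control plane.
func validate(w WorkloadSpec) error {
	switch {
	case w.Name == "":
		return errors.New("workload name is required")
	case w.Image == "":
		return errors.New("container image is required")
	case w.GPUs <= 0:
		return errors.New("gpu request must be positive")
	case w.GPUs > w.MaxGPUs:
		return fmt.Errorf("gpu request %d exceeds per-workload limit %d", w.GPUs, w.MaxGPUs)
	}
	return nil
}

func main() {
	err := validate(WorkloadSpec{Name: "train-7b", Image: "repo/model:1.0", GPUs: 16, MaxGPUs: 8})
	fmt.Println(err) // gpu request 16 exceeds per-workload limit 8
}
```

Failing fast at admission keeps invalid work out of the scheduler's queue, which is one way control planes stay predictable under load.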
Reliability and Operations
  • Set standards for reliability, observability, and operational readiness across orchestration services.
  • Define SLOs, alerting, and incident response practices for platform-critical systems.
  • Ensure systems behave predictably during failures, peak load, and rapid growth.
Hands-on Engineering
  • Write and review production code for Kubernetes controllers, schedulers, admission logic, and internal tooling.
  • Measure and improve scheduling latency, container startup time, image distribution, and cold-start performance.
  • Lead architecture and design reviews across infrastructure teams.
Leadership and Influence
  • Mentor senior and staff engineers and help grow technical leaders.
  • Influence platform, infrastructure, security, and product teams through clear technical judgment.
  • Engage with customers and open-source communities on deep technical topics when needed.
Who You Are
  • 15+ years of experience building and operating large-scale distributed systems.
  • Deep, practical knowledge of Kubernetes and Slurm internals.
  • Experience running GPU-heavy platforms for AI training, inference, or HPC workloads.
  • Strong background in Go and cloud-native systems development.
  • Proven ability to set technical direction across teams without direct authority.
  • Comfortable making high-impact technical decisions in complex systems.
  • Bachelor’s or Master’s degree in a relevant field, or equivalent experience.
Preferred Qualifications
  • Experience with systems such as Kueue, Kubeflow, Argo Workflows, Ray, Istio, or Knative.
  • Background in ML platform engineering, model onboarding, or lifecycle management.
  • Strong understanding of scheduling strategies, pre-emption, quota enforcement, and elastic scaling.
  • Track record of operating highly reliable systems with clear SLOs and incident processes.
  • Contributions to Kubernetes, ML infrastructure, or related open-source projects.
  • Experience mentoring senior engineers and raising engineering standards.
Is This a Good Fit?

You may be a good fit if you enjoy defining long-term architecture, solving deep systems problems, and working close to the hardware layer of AI platforms. This role suits engineers who care about correctness, scale, and operational discipline, and who want their work to directly shape how AI runs in production.

Why CoreWeave?

At CoreWeave, AI infrastructure is the product. As a Principal Engineer in cluster orchestration, you will be responsible for systems that directly determine how efficiently GPUs are used, how reliably large models run, and how quickly customers can move from research to production.

This role puts you at the center of hard problems in scheduling, resource isolation, and large-scale control planes. You will work on systems where small design choices affect thousands of GPUs and real customer workloads. If you care about building infrastructure that runs under constant pressure, scales without shortcuts, and enables the next generation of AI workloads, CoreWeave is a place where your work will matter.

The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility). 

What We Offer

The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can vary based on several factors, including qualifications, experience, interview performance, and location.

In addition to a competitive salary, we offer a variety of benefits to support your needs, including:

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance 
  • Voluntary supplemental life insurance 
  • Short and long-term disability insurance 
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement 
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health 
  • Family-Forming support provided by Carrot
  • Paid Parental Leave 
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption

Our Workplace

While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.

As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: [email protected].


Export Control Compliance

This position requires access to export controlled information.  To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency.  CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.

Top Skills

Cloud-Native Systems
Go
Kubernetes
Slurm
SUNK

CoreWeave Bellevue, Washington, USA Office

Bellevue, Washington, United States, 98004


