
OpenAI

Software Engineer, Compute Infrastructure

Posted 2 Hours Ago
Hybrid
Seattle, WA, USA
230K-405K Annually
Mid level
Develop and optimize large-scale compute systems for AI workloads, improving infrastructure reliability and performance across various layers of the stack.

About the Team

Compute Infrastructure builds the platform that turns enormous amounts of compute into a reliable engine for frontier AI. We design, provision, schedule, operate, and optimize the systems that connect accelerators, CPUs, networks, storage, data centers, orchestration software, agent infrastructure, developer tools, and observability into one coherent experience for researchers and product teams.

Our work spans the entire stack: capacity planning and cluster lifecycle, bare-metal automation, distributed systems, Kubernetes and scheduling, deep system optimization, high-performance networking, storage, fleet health, reliability, workload profiling, benchmarking, and the developer experience that lets teams use enormous compute systems with confidence. At this scale, small improvements to communication, scheduling, hardware efficiency, or debugging workflows can compound into meaningful research velocity. We are hiring across Compute Infrastructure rather than for a single narrow team, and we use this opening to match strong engineers to the problems where they can have the most leverage.

About the Role

We are looking for engineers who want to build the compute platform behind OpenAI's research and products. You may be strongest in low-level systems, high-performance computing, distributed infrastructure, reliability, CaaS, agent infrastructure, developer platforms, tooling, or the user experience around infrastructure. What matters is that you can reason carefully about complex systems, write durable software, and raise the quality and velocity of the people around you.

Depending on your background and interests, you might work close to hardware, close to users, on CaaS and agent infrastructure, or on the control planes and data planes in between. You could help bring new supercomputing capacity online, optimize training workloads from profiler traces and benchmarks, improve NCCL and collective communication behavior, reason about GPUs, NICs, topology, firmware, thermals, and failure modes, or design abstractions that make heterogeneous clusters feel like one coherent platform.

We do not expect every candidate to have worked at every layer. Some engineers will go deep on systems performance, kernel or runtime behavior, large-scale networking protocols, RDMA, NCCL, GPU hardware behavior, benchmarking, scheduling, or hardware reliability; others will make the platform more usable through APIs, tools, workflows, and developer experience. The common thread is strong engineering judgment and excitement about making enormous compute systems faster, more reliable, and easier to use.
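To make the collective-communication work above concrete: tuning NCCL performance usually starts from a bandwidth model of the collective itself. The sketch below computes the "bus bandwidth" metric for a ring all-reduce, the same convention the nccl-tests benchmarks report; the function name and the specific numbers are illustrative, not part of any OpenAI system.

```python
def allreduce_bus_bandwidth(bytes_per_rank, ranks, elapsed_s):
    """Bus bandwidth for a ring all-reduce, in the nccl-tests convention.

    A ring all-reduce moves 2*(n-1)/n times the payload over each link,
    so bus bandwidth = algorithm bandwidth * 2*(n-1)/n. Comparing this
    number against link line rate shows how close a collective runs to
    the hardware limit.
    """
    alg_bw = bytes_per_rank / elapsed_s      # bytes/s as seen by the caller
    return alg_bw * 2 * (ranks - 1) / ranks  # bytes/s per link

# Hypothetical example: a 1 GiB all-reduce across 8 GPUs in 12 ms.
bw = allreduce_bus_bandwidth(2**30, 8, 0.012)
print(f"{bw / 2**30:.1f} GiB/s bus bandwidth")
```

The 2*(n-1)/n factor is why an 8-rank all-reduce that "looks" like 83 GiB/s of algorithm bandwidth is actually driving each link at roughly 1.75x that rate.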

This is a general opening for Compute Infrastructure. We will consider candidates for teams across Compute Infrastructure and match you based on your strengths, the problems that motivate you, and where the infrastructure needs are highest.

Where you might work

  • Compute Foundations: Build the low-level platform primitives that make heterogeneous hardware, providers, and data centers repeatable, automatable, and operable at scale.

  • Fleet / Orchestration: Turn raw capacity into reliable, efficient clusters and scheduling systems that researchers and product teams can use with minimal friction and a great developer experience.

  • Core Network Engineering: Build and operate the high-performance networking fabrics, protocols, and observability needed for the largest training and serving workloads.

  • Hardware Health and Observability: Detect, diagnose, remediate, and prevent hardware and fleet-health issues so that usable compute capacity stays high across providers and accelerator generations.

  • Storage: Build scalable, performant, durable storage abstractions that keep data movement and storage access from becoming a bottleneck to research or products.

  • Agent Infrastructure: Build sandboxed execution infrastructure for agentic workloads across research and production, with strong isolation, reliability, and scale.
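As a toy illustration of the placement problems the Fleet / Orchestration area deals with, here is a first-fit GPU scheduler. This is a teaching sketch under simplified assumptions (whole-node GPU counts, no topology, no preemption), not a description of any production scheduler; the `Node`, `first_fit`, and job names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_gpus: int

def first_fit(jobs, nodes):
    """Assign each (job_name, gpus_needed) to the first node with capacity.

    Returns a dict mapping job name to node name; jobs that do not fit
    map to None. Real cluster schedulers also weigh topology, fragmentation,
    fairness, and preemption; this loop shows only the core placement step.
    """
    placement = {}
    for job, need in jobs:
        placement[job] = None
        for node in nodes:
            if node.free_gpus >= need:
                node.free_gpus -= need
                placement[job] = node.name
                break
    return placement

nodes = [Node("node-a", 8), Node("node-b", 8)]
jobs = [("train-1", 8), ("eval-1", 4), ("train-2", 8)]
print(first_fit(jobs, nodes))
# train-2 is left unplaced: only 4 GPUs remain on node-b after eval-1 lands.
```

Even this toy shows why fragmentation matters at scale: a greedy policy strands capacity that a packing-aware scheduler could have used.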

In this role, you will:

  • Build and deeply optimize reliable system software for large-scale compute systems that run some of the world's most demanding AI workloads

  • Design and operate infrastructure across accelerators, CPUs, NICs, switches, networking protocols, storage, data centers, cluster orchestration, scheduling, and fleet health

  • Profile, benchmark, and optimize training workloads across compute, memory, storage, networking, NCCL and collective communication, and cluster scheduling bottlenecks

  • Create hardware-aware automation that makes provisioning, firmware and driver upgrades, incident response, and day-to-day operations faster and less error-prone

  • Build CaaS, agent infrastructure, profiling, observability, benchmarking, and platform tools that help researchers, product engineers, and operators launch, debug, and optimize workloads with less friction

  • Turn operational lessons into better systems, stronger abstractions, and clearer ownership boundaries across teams

  • Collaborate across research, engineering, security, networking, hardware, and data center teams to make compute capacity more capable and easier to use
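The benchmarking discipline the responsibilities above call for can be sketched in miniature: warm up, take many samples, report a robust statistic rather than a single run. The harness below is a minimal stdlib example of that pattern; production workload benchmarking adds pinned hardware, fixed clocks, and device-side timing, but the shape is the same.

```python
import statistics
import time

def benchmark(fn, *, warmup=3, iters=10):
    """Time fn() after warmup iterations.

    Returns (median, spread) in seconds. The median resists scheduling
    noise better than a single measurement, and the spread (max - min)
    flags runs too unstable to trust.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), max(samples) - min(samples)

# Illustrative workload: copying a 16 MiB buffer, reported as bandwidth.
buf = bytearray(16 * 1024 * 1024)
median_s, spread_s = benchmark(lambda: bytes(buf))
gib_s = len(buf) / median_s / 2**30
print(f"copy: {median_s * 1e3:.2f} ms median, {gib_s:.1f} GiB/s")
```

Reporting spread alongside the median is a small habit that prevents a lucky run from being mistaken for a real optimization.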

You might thrive in this role if you:

  • Have built or operated distributed systems, infrastructure platforms, high-performance computing environments, large-scale networking systems, Kubernetes clusters, developer tools, or production systems with demanding reliability requirements

  • Enjoy working across layers of the stack and are comfortable moving between software, hardware, networking, systems performance, reliability, and user needs

  • Care about making complex infrastructure understandable, observable, and usable for the people depending on it

  • Can diagnose hard problems under real operational pressure while still investing in long-term engineering quality

  • Like building leverage for others, whether through APIs, automation, debugging tools, CaaS and agent infrastructure primitives, workflow improvements, or better platform abstractions

  • Are motivated by scale, efficiency, reliability, and disciplined measurement through benchmarks, profiles, and production evidence

  • Communicate clearly, take ownership, and work well with teams whose constraints and goals differ from your own

Qualifications

  • Strong software engineering skills and experience building, operating, or improving production infrastructure systems

  • Experience in one or more relevant areas such as distributed systems, operating systems, networking protocols, RDMA, NCCL or collective communication, storage, Kubernetes, scheduling, observability, reliability engineering, high-performance computing, GPU infrastructure, CaaS, agent infrastructure, hardware-aware performance optimization, benchmarking, developer experience, or infrastructure tooling

  • Ability to debug complex system behavior across software, hardware, networking, and workload layers, then turn findings into robust improvements

  • Comfort with ambiguity, strong ownership, and a bias toward practical, durable solutions

  • Interest in working on infrastructure that directly enables frontier AI research and product impact

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.


