
NVIDIA

Senior Systems Software Engineer - Deep Learning Solutions

In-Office or Remote
Hiring Remotely in Toronto, ON
225K-275K Annually
Senior level

NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission, helping innovators build products that save lives, improve working conditions, and raise living standards worldwide. We are hiring a Senior Engineer to join our team as a technical authority in deep learning inference optimization for autonomous vehicles and robotics on edge hardware. This role calls for a hands-on expert who can inspect model architectures down to the operator level, uncover performance bottlenecks through kernel traces, and evaluate how modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) behave on GPUs and SoCs. The work directly advances how autonomous vehicles and robots sense and respond in the real world.

This group addresses some of the toughest optimization problems in the industry, operating at the crossroads of innovative model architectures, compiler technology, and embedded hardware. We work in close partnership with automotive OEMs, robotics collaborators, and internal hardware teams to expand the limits of what can be achieved on edge devices.

What you'll be doing:

  • Address customer and partner optimization challenges: Engage directly with leading automotive OEMs and robotics partners to analyze, debug, and improve their deep learning models on NVIDIA platforms. We emphasize delivering solutions, not just recommendations.

  • Own performance benchmarking: Drive efforts to achieve leading results on MLPerf Edge and industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.

  • Evaluate emerging model architectures: Analyze new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, for compilation feasibility, memory footprint, and latency on target SoCs.

  • Collaborate across teams: Partner with our compiler, runtime, and hardware teams to connect model-level insight with platform capabilities.

  • Inform roadmap priorities: Contribute to build reviews and help set internal roadmap priorities based on real customer workload patterns.

  • Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Help elevate the broader team by bringing back insights and establishing guidelines.

  • Deliver TensorRT and compiler-stack solutions for edge: Create and deploy inference solutions on Jetson, DRIVE, and GPU + ARM platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and work closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to bridge performance gaps.

What we need to see:

  • Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.

  • 12+ years of industry experience, with 8+ years in deep learning model optimization, inference engineering, or neural network compilation. You must be adept at interpreting and reasoning about model architectures at the operator/kernel level, not just running them.

  • 5+ years of proven experience in embedded/edge software, including delivering production inference solutions in power-limited, latency-sensitive deployment environments.

  • Deep knowledge of current DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, and experience with diffusion models and/or state space models.

  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization using heterogeneous computing. Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.

  • Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.

  • Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.

  • Demonstrated capability to collaborate directly with external partners and customers in a deep technical role, solving their workload issues, identifying performance problems, and providing solutions within production limitations.

Ways to Stand Out from the Crowd:

  • Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.

  • Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.

  • Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.

  • Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.

  • Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.

  • Experience leading technical initiatives across globally distributed engineering teams.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 225,000 CAD - 275,000 CAD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 2, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

Top Skills

ARM
C
C++
CUDA
Diffusion Models
DRIVE
GPU
Jetson
Linux
MLIR
MLIR-TRT
MLPerf
OpenMP
QNX
State Space Models
TensorRT
Torch-TRT
Transformers
Triton
TVM
Vision Transformer
XLA
