As a Senior Data Engineer at Auger, you will build data systems to transform customer data into a unified ontology for analytics and AI workflows, ensuring data quality and managing the data lifecycle.
Auger is the autonomous operating system for supply chains. It connects enterprise supply chain systems—ERP, WMS, TMS—into a single data layer, then uses AI to detect problems, evaluate trade-offs, and execute decisions automatically. The platform eliminates the coordination tax: the time and capital lost when disconnected systems force humans to become the integration layer between planning and execution. Actions that previously required days of meetings and manual coordination happen in seconds, within constraints the customer defines.
Founded by Dave Clark and backed by $100M from Oak HC/FT. Headquartered in Bellevue, Washington.
About the Role
Build the data foundation that powers Auger’s Supply Chain OS, AI systems, and execution workflows.
Auger is building an operating system for supply chain teams. Our customers rely on Auger to understand reality and change it: reporting, AI-powered decision support, and write-back execution systems that operate at scale.
At the core of this system is Data Engineering. This role sets direction for, and owns the evolution of, the transformation of messy, customer-shared data into a unified, production-grade ontology that directly powers analytics, AI workflows, and execution systems.
This is not a “move data from A to B” role. This is system-level semantic ownership at the heart of the product.
As a Senior Data Engineer, you will take new and existing customer data sources ingested into Auger’s core data lake and transform them into our ontology. We’re seeking teammates who love data of all kinds, are masters of building efficient, scalable, operable, and durable data systems, and are ready to take hands-on ownership beyond individual pipelines in the following areas:
What You’ll Do
- Own and evolve the data lifecycle across systems, from ingestion through production-ready ontology
- Ingest data from databases, data streams, batch files, and incremental feeds
- Define standards for and operate medallion-style lakehouse pipelines (bronze → silver → gold)
- Transform raw inputs into a consistent digital twin of supply chain reality that scales across customers
- Serve high-quality data to analytics, AI workflows, and write-back systems with clear correctness guarantees
- Own data correctness and reliability in production, including monitoring, on-call, incident response, and post-incident systemic improvements
- Define and enforce data quality checks, validations, and robust backfill strategies
- Use AI-assisted tools responsibly, setting expectations for review, validation, and production readiness of generated code
- Reduce complexity at scale by simplifying pipelines, eliminating redundancy, and automating recurring workflows
- Partner with product, science, and platform tooling teams to translate ambiguous needs into durable technical designs
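To make the medallion-style flow above concrete, here is a minimal, illustrative sketch in plain Python: dicts stand in for lake tables, and the field names ("order_id", "qty", "sku") and validation rules are hypothetical, not Auger’s actual schema or pipeline logic.

```python
# Bronze -> silver -> gold, with simple data quality checks along the way.

BRONZE = [  # raw, as-ingested records (may be messy)
    {"order_id": "A1", "qty": "3", "sku": "widget"},
    {"order_id": "A2", "qty": "-1", "sku": "widget"},  # fails quantity check
    {"order_id": "A3", "qty": "5", "sku": None},       # fails sku check
]

def to_silver(records):
    """Validate and conform bronze records; quarantine failures for review."""
    silver, quarantine = [], []
    for r in records:
        try:
            qty = int(r["qty"])
        except (TypeError, ValueError):
            quarantine.append(r)
            continue
        if qty <= 0 or not r.get("sku"):
            quarantine.append(r)  # data quality check failed
        else:
            silver.append({"order_id": r["order_id"], "qty": qty, "sku": r["sku"]})
    return silver, quarantine

def to_gold(silver):
    """Aggregate conformed records into an analytics-ready rollup by SKU."""
    totals = {}
    for r in silver:
        totals[r["sku"]] = totals.get(r["sku"], 0) + r["qty"]
    return totals

silver, quarantine = to_silver(BRONZE)
gold = to_gold(silver)
print(gold)             # {'widget': 3}
print(len(quarantine))  # 2
```

In a real lakehouse the same shape appears at scale: bronze preserves raw inputs for replay and backfill, silver enforces schema and quality contracts with failed records quarantined rather than dropped silently, and gold serves aggregates to analytics and downstream systems.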
What You Bring
- Degree in Computer Science, Mathematics, Statistics, or another data-intensive discipline with substantive engineering experience
- 5+ years demonstrated development experience using SQL, Scala, Spark, Flink, Beam, and/or Python
- 5+ years demonstrated experience in data management (structured and unstructured) and modern database technologies
- Proven experience owning and evolving large-scale production data systems in distributed environments
- Hands-on experience designing and operating lakehouse or warehouse architectures at scale
- Strong schema design skills and deep intuition for data modeling in complex domains
- A production mindset—you’ve owned critical systems, led incident resolution, and driven long-term fixes
- Experience supporting AI/ML or AI-powered products where data quality directly impacts outcomes
- Familiarity with streaming or incremental processing at scale
- Experience defining data quality, observability, anomaly detection, or reliability standards
- A deep curiosity and eagerness to problem solve in ambiguous, high-impact problem spaces without a playbook
- Ability to lead through ambiguity with urgency, patience, and good judgment while raising the bar for others
- Strong communication and collaboration skills
- Prior experience in the supply chain domain is a plus
As part of our commitment to People Powered Greatness, we invest in our team members with competitive compensation and comprehensive benefits to support your health, financial future, and daily life. The package includes medical, dental, and vision coverage, a 401(k) with company match, and commuter benefits. Total compensation may include a combination of a competitive base salary and equity. Your initial placement within our salary range will be based on your experience and qualifications.
The base pay range for this role is $225,000 – $300,000 per year.
Auger considers all qualified applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Top Skills: Beam, Flink, Python, Scala, Spark, SQL
Auger Office: Bellevue, WA, United States, 98004