This role owns key aspects of the data lifecycle for Auger's supply chain operating system, including schema design and data processing using AI and SQL.
Auger is the autonomous operating system for supply chains. It connects enterprise supply chain systems—ERP, WMS, TMS—into a single data layer, then uses AI to detect problems, evaluate trade-offs, and execute decisions automatically. The platform eliminates the coordination tax: the time and capital lost when disconnected systems force humans to become the integration layer between planning and execution. Actions that previously required days of meetings and manual coordination happen in seconds, within constraints the customer defines.
Founded by Dave Clark and backed by $100M from Oak HC/FT. Headquartered in Bellevue, Washington.
About the Team & Role
Auger is building an autonomous operating system for the supply chain. Our customers rely on Auger to understand reality and change it: reporting, AI-powered decision support, and write-back execution systems that operate at scale.
This role is data-centric software engineering. We hold a high bar for quality: you’ll help turn messy, customer-shared data into a unified semantic layer that analytics, AI workflows, and execution paths can rely on.
This is not a “move data from A to B” role. You are expected to own data correctness, semantic correctness, and operability for the data lifecycle.
As a Software Development Engineer with a solid data engineering background, you will deliver end-to-end on assigned pipelines and transformation work, help troubleshoot production issues, and consistently improve quality through tests, checks, and sound operational habits.
What You’ll Do
- Build and maintain data pipelines across their full lifecycle. Ship production-grade transformation logic and operational outputs, backed by schema contracts and measurable validation.
- Work in an agent-native style: use AI tools to move faster on data exploration, transformation, querying, investigation, and refactors. Contribute to reusable patterns and tooling (including agent-assisted workflows) so the team can discover schemas, draft transforms, generate SQL faster, and troubleshoot with less one-off work.
- Build and maintain the integration points between data pipelines and ML pipelines. Implement schema-bound datasets that transform pipeline outputs into ML-ready inputs, and write ML results back to the semantic layer following established contracts. Contribute to schema design and enforce data contracts that keep model logic cleanly separated from the system of record.
- Operate what you build: monitoring and alerting as appropriate, participating in incident remediation, and following through so issues do not repeat. Practice test-driven habits for data: clarify correctness for the datasets you touch, add automated checks and regression coverage where it matters, and turn bugs and incidents into fixes that stick.
- Partner with product, science, and platform teammates to clarify requirements, flag tradeoffs early, and deliver work that holds up to customers.
What You Bring
- Degree in Computer Science, Mathematics, Statistics, or another data-intensive discipline (or equivalent practical experience). 4+ years of professional development experience with strong hands-on SQL and Python in production (Spark or equivalent large-scale batch processing preferred; Scala/Flink/Beam a plus). 3+ years in data work (structured and semi-structured), modern warehouses/lakehouses, and practical schema design in evolving domains.
- Ownership mindset on production systems: you debug methodically, improve reliability over time, and connect your work to customer/product outcomes. Hands-on experience with lakehouse/warehouse patterns, incremental processing, and basic performance/cost awareness. Notebook fluency and the judgment to structure notebook work so it is reviewable and promotable.
- Validation-first habits for data: meaningful checks between layers, data-quality checks where they count, and regression protection for critical transforms. Agent-native fluency with verification: you treat generated SQL and pipelines as proposals until proven.
- Clear communication and collaboration: you ask good questions, drive work to completion, and leave the codebase better than you found it.
- A plus if you have experience in supply chain, planning, or fulfillment domains.
As part of our commitment to People Powered Greatness, we invest in our team members with competitive compensation and a comprehensive benefits package to support your health, financial future, and daily life. The package includes medical, dental, and vision coverage, a 401(k) with company match, and commuter benefits. Total compensation may include a combination of a competitive base salary and equity. Your initial placement within our salary range will be based on your experience and qualifications.
The base pay range for this role is $150,000 – $200,000 per year.
Auger considers all qualified applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Additionally, our privacy policy is available at https://auger.com/privacy-notice/.
Auger Office: Bellevue, WA, United States, 98004


