Similar Jobs
Fintech • Insurance • Financial Services
Design, develop, and maintain ETL workflows and data pipelines using Databricks and Python. Ensure data accuracy and quality through testing.
Top Skills:
Databricks, ETL, PL/SQL, Python, SSIS
Artificial Intelligence • Cloud • Internet of Things • Machine Learning • Analytics • Industrial
As an Analytics Data Engineer III, you will design, develop, and maintain data systems, build pipelines, and ensure data integrity to support business intelligence needs.
Top Skills:
Power BI, Python, Scala, SQL, Tableau
AdTech • Cloud • Digital Media • Information Technology • News + Entertainment • App development
Design, develop, and improve data systems. Collaborate with engineers, perform code reviews, and troubleshoot production issues in an Agile environment.
Top Skills:
Athena, AWS, DynamoDB, EMR, Glue, Hadoop, Hive, Informatica, Java, Python, Redshift, S3, Scala, Spark, Talend
Encora is looking for a skilled and motivated Data Engineer with 2–5 years of experience to join our growing data team. You will play a key role in designing, building, and maintaining scalable data pipelines and infrastructure that empower analytics, machine learning, and business intelligence. The ideal candidate is passionate about data, has a solid understanding of modern data engineering practices, and is comfortable working in a fast-paced environment.
This is a 6-month project with a high likelihood of extension, working 100% remotely and supporting EST work hours.
Key Responsibilities:
- Design, develop, and maintain robust ETL/ELT pipelines using tools like Apache Airflow, dbt, or similar.
- Work closely with data analysts, scientists, and business stakeholders to understand data requirements and translate them into scalable solutions.
- Optimize and manage data storage and data lake/warehouse solutions (e.g., Snowflake, BigQuery, Redshift, or Azure Synapse).
- Ensure data quality, integrity, and compliance with data governance and security policies.
- Monitor and troubleshoot data workflows to ensure high availability and performance.
- Contribute to the architecture and development of a modern data platform using cloud technologies (AWS, GCP, or Azure).
- Document data models, pipelines, and processes for internal use and training.
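In miniature, the pipeline responsibilities above boil down to an extract-transform-load loop with a data-quality gate before anything reaches the warehouse. Below is a minimal, stdlib-only sketch of that pattern; the `orders` table, column names, and quality rule are illustrative assumptions, not part of this role's actual stack (which would use tools like Airflow or dbt against Snowflake, BigQuery, or Redshift):

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in practice this would come from an upstream source system.
RAW_CSV = """order_id,amount,region
1,19.99,west
2,5.00,east
3,,west
"""

def extract(text):
    """Extract: parse the raw CSV feed into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: type-cast fields and drop rows failing a basic quality check."""
    return [
        {"order_id": int(r["order_id"]), "amount": float(r["amount"]), "region": r["region"]}
        for r in rows
        if r["amount"]  # data-quality gate: amount must be present
    ]

def load(rows, conn):
    """Load: idempotently upsert clean rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders VALUES (:order_id, :amount, :region)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 2 — the row with a missing amount was rejected
```

The `INSERT OR REPLACE` keyed on `order_id` makes the load step safe to re-run, which is the property that monitoring and retry logic (the "high availability" bullet above) depend on.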
Requirements:
- 2–5 years of hands-on experience as a Data Engineer or in a similar role.
- Proficiency in SQL and at least one programming language (e.g., Python, Java, or Scala).
- Experience with ETL/ELT tools and orchestration frameworks (e.g., Airflow, dbt, Luigi).
- Solid understanding of data modeling, warehousing concepts, and performance tuning.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with version control tools like Git and CI/CD practices.
- Strong problem-solving skills and attention to detail.
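The SQL and data-modeling skills listed above typically mean working with dimensional schemas: fact tables joined to dimension tables, with indexes on the join keys for performance. A small self-contained illustration, using an in-memory SQLite database and a hypothetical sales schema:

```python
import sqlite3

# Illustrative dimensional model: one dimension table, one fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, region_id INTEGER, amount REAL);
INSERT INTO dim_region VALUES (1, 'west'), (2, 'east');
INSERT INTO fact_sales VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
-- Performance tuning: index the foreign key the join will filter on.
CREATE INDEX idx_fact_region ON fact_sales(region_id);
""")

# Aggregate facts by dimension attribute — the bread-and-butter warehouse query.
rows = conn.execute("""
    SELECT d.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_region d ON d.region_id = f.region_id
    GROUP BY d.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('west', 150.0), ('east', 75.0)]
```

The same join-and-aggregate shape carries over to Snowflake, BigQuery, or Redshift; only the indexing/clustering mechanics differ by platform.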
What you need to know about the Seattle Tech Scene
Home to tech titans like Microsoft and Amazon, Seattle punches far above its weight in innovation. But its surrounding mountains, sprinkled with world-famous hiking trails and climbing routes, make the city a destination for outdoorsy types as well. Established as a logging town before shifting to shipbuilding and logistics, the Emerald City is now known for its contributions to aerospace, software, biotech and cloud computing. And its status as a thriving tech ecosystem is attracting out-of-town companies looking to establish new tech and engineering hubs.
Key Facts About Seattle Tech
- Number of Tech Workers: 287,000; 13% of overall workforce (2024 CompTIA survey)
- Major Tech Employers: Amazon, Microsoft, Meta, Google
- Key Industries: Artificial intelligence, cloud computing, software, biotechnology, game development
- Funding Landscape: $3.1 billion in venture capital funding in 2024 (Pitchbook)
- Notable Investors: Madrona, Fuse, Tola, Maveron
- Research Centers and Universities: University of Washington, Seattle University, Seattle Pacific University, Allen Institute for Brain Science, Bill & Melinda Gates Foundation, Seattle Children’s Research Institute