
People Data Labs

Senior Data Engineer

Posted 23 Days Ago
Remote
Hiring Remotely in USA
$190K-$210K Annually
Senior level
As a Senior Data Engineer, you will build infrastructure for data ingestion and transformation, develop CI/CD pipelines, and resolve data engineering challenges using modern tools and frameworks.

Note for all engineering roles: with the rise of fake applicants and AI-enabled candidate fraud, we have built in additional measures throughout the process to identify such candidates and remove them.

About Us

People Data Labs (PDL) is the provider of people and company data. We do the heavy lifting of data collection and standardization so our customers can focus on building and scaling innovative, compliant data solutions. Our sole focus is on building the best data available by integrating thousands of compliantly sourced datasets into a single, developer-friendly source of truth. Leading companies across the world use PDL’s workforce data to enrich recruiting platforms, power AI models, create custom audiences, and more.

We are looking for individuals who can balance extreme ownership with a “one-team, one-dream” mindset. Our customers are trying to solve complex problems, and we can only help them achieve their goals by working as a team. Our Data Engineering Team is the secret sauce behind all that we do, and we are looking for the best of the best.

If you are looking to be part of a team discovering the next frontier of data-as-a-service (DaaS) with a high level of autonomy and opportunity for direct contributions, this might be the role for you. We like our engineers to be thoughtful, quirky, and willing to fearlessly try new things. Failure is embraced at PDL as long as we continue to learn and grow from it.

What You Get to Do

  • Build infrastructure for ingesting, transforming, and loading an exponentially growing volume of data from a variety of sources using Spark, SQL, AWS, and Databricks

  • Build an organic entity resolution framework capable of correctly merging hundreds of billions of individual entities into a number of clean, consumable datasets

  • Develop CI/CD pipelines and anomaly detection systems that continuously improve the quality of the data we push into production

  • Dream up solutions to largely undefined data engineering and data science problems
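To give a flavor of the entity resolution work described above, here is a minimal, hypothetical sketch: a pure-Python union-find that merges toy records sharing a blocking key. The records, blocking key, and scale are illustrative only; the real pipeline operates on billions of entities in Spark.

```python
from collections import defaultdict

# Hypothetical toy records; a real pipeline would read billions of rows via Spark.
records = [
    {"id": 0, "name": "Ada Lovelace", "email": "ada@example.com"},
    {"id": 1, "name": "A. Lovelace",  "email": "ada@example.com"},
    {"id": 2, "name": "Grace Hopper", "email": "grace@example.com"},
]

parent = list(range(len(records)))

def find(i):
    # Path-compressing find for the union-find structure.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

# Block on a shared key (email here) and merge every record within a block.
blocks = defaultdict(list)
for r in records:
    blocks[r["email"]].append(r["id"])
for ids in blocks.values():
    for other in ids[1:]:
        union(ids[0], other)

# Collapse records into clusters keyed by their root id.
clusters = defaultdict(list)
for r in records:
    clusters[find(r["id"])].append(r["name"])

print(sorted(len(c) for c in clusters.values()))  # → [1, 2]
```

In practice the blocking and match logic are far richer (fuzzy names, multiple keys), but the cluster-and-merge shape is the same.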

The Technical Chops You’ll Need

  • 5-7+ years of industry experience with clear examples of strategic technical problem-solving and implementation

  • Strong software development fundamentals

  • Experience with Python 

  • Expertise with Apache Spark (Java, Scala, and/or Python-based)

  • Experience with SQL

  • Experience building scalable data processing systems (e.g., cleaning, transformation) from the ground up

  • Experience using developer-oriented data pipeline and workflow orchestration tools (e.g., Airflow (preferred), dbt, Dagster, or similar)

  • Knowledge of modern data design and storage patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills)

  • Experience working in Databricks (including Delta Live Tables, data lakehouse patterns, etc.)

  • Experience with cloud computing services (AWS (preferred), GCP, Azure or similar)

  • Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar)

  • Understanding of modern data storage formats and tools (e.g., parquet, ORC, Avro, Delta Lake)
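As a rough illustration of the incremental-updating and partitioning patterns listed above, here is a hypothetical in-memory sketch of partition-overwrite semantics: each incremental load replaces only the partitions it touches, and a backfill restates old partitions the same way. The table layout and partition key are invented for the example; Delta Lake expresses the same idea with options such as `replaceWhere`.

```python
# Toy "table" keyed by partition (event_date); each incremental load
# overwrites only the partitions it touches, leaving the rest intact.
table = {
    "2024-01-01": [{"user": "a", "score": 1}],
    "2024-01-02": [{"user": "b", "score": 2}],
}

def overwrite_partitions(table, new_rows, partition_key="event_date"):
    # Group incoming rows by partition value, then replace those
    # partitions wholesale -- a backfill restates old dates the same way.
    incoming = {}
    for row in new_rows:
        incoming.setdefault(row[partition_key], []).append(
            {k: v for k, v in row.items() if k != partition_key}
        )
    table.update(incoming)
    return table

overwrite_partitions(table, [
    {"event_date": "2024-01-02", "user": "b", "score": 9},  # restated partition
    {"event_date": "2024-01-03", "user": "c", "score": 3},  # new partition
])

print(sorted(table))                    # → ['2024-01-01', '2024-01-02', '2024-01-03']
print(table["2024-01-02"][0]["score"])  # → 9
```

The untouched 2024-01-01 partition survives, which is what makes this pattern cheap relative to a full rebuild.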

People Thrive Here Who Can

  • Balance high ownership and autonomy with a strong ability to collaborate

  • Work effectively in a remote environment (proactively managing blockers, reaching out and asking questions, and participating in team activities)

  • Demonstrate strong written communication skills on Slack/Chat and in documents

  • Demonstrate experience writing data design docs (pipeline design, dataflow, schema design)

  • Scope and break down projects, and communicate progress and blockers effectively to your manager, team, and stakeholders

Some Nice To Haves

  • Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering

  • Experience working with entity data (entity resolution / record linkage)

  • Experience working with data acquisition / data integration

  • Expertise with Python and the Python data stack (e.g., numpy, pandas)

  • Experience with streaming platforms (e.g., Kafka)

  • Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
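The data-quality dimensions named above (completeness, validity, and so on) can be made concrete with small per-field checks. The sketch below uses hypothetical rows and format rules in plain Python; a real check would run against the full dataset in Spark or Databricks.

```python
import re

# Hypothetical rows from a feature release.
rows = [
    {"name": "Ada",  "email": "ada@example.com"},
    {"name": None,   "email": "grace@example.com"},
    {"name": "Alan", "email": "not-an-email"},
]

def completeness(rows, field):
    # Share of rows where the field is present and non-null.
    return sum(r.get(field) is not None for r in rows) / len(rows)

def validity(rows, field, pattern):
    # Share of non-null values matching a format rule.
    values = [r[field] for r in rows if r.get(field) is not None]
    return sum(bool(re.fullmatch(pattern, v)) for v in values) / len(values)

print(round(completeness(rows, "name"), 2))                      # → 0.67
print(round(validity(rows, "email", r"[^@]+@[^@]+\.[^@]+"), 2))  # → 0.67
```

Tracking metrics like these per release is one way to catch regressions in consistency, accuracy, validity, and completeness before data reaches production.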

Our Benefits

  • Stock

  • Competitive Salaries

  • Unlimited paid time off

  • Medical, dental, & vision insurance 

  • Health, fitness, and office stipends

  • The permanent ability to work wherever and however you want

Comp: $190K - $210K

People Data Labs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.

Personal Privacy Policy for California Residents

https://www.peopledatalabs.com/pdf/privacy-policy-and-notice.pdf

Top Skills

Airflow
Spark
Avro
AWS
BigQuery
Data Warehousing
Databricks
dbt
Delta Lake
ORC
Parquet
Python
Redshift
Snowflake
SQL


