About Us
KYROS is the only actuarial firm solely focused on loyalty programs. We’re a small, boutique, fully remote consulting firm built by actuaries and technologists who wanted to rethink how actuarial work is done. We help your favorite frequent flyer, hotel, bank, and retail loyalty programs measure and optimize the economic value they create, through a modern, highly technical combination of actuarial theory, machine learning, and big data technology.
As a lean and growing team, we operate much more like a startup than a large consulting firm. We move quickly, expect a high level of execution, and rely on team members to take ownership of their work end-to-end. This is an environment for people who work efficiently, make sound decisions with incomplete information, and enjoy delivering high-quality work without layers of process or bureaucracy. If you thrive on variety, enjoy coming to work not always knowing exactly how your day will unfold, and get energized by balancing multiple priorities in a fast-moving environment, this role is for you.
The Role
We’re looking for a highly motivated data scientist to help us deliver a growing portfolio of client projects. This role is best suited for someone who thrives in a fast-paced, hands-on environment and wants to have a real impact on both client outcomes and how our work is done. The ideal candidate has strong hands-on PySpark, Python, and SQL skills, experience working with large, complex datasets, and a proven ability to clean and structure data as the foundation for analytical and modeling work.
You’ll go beyond data engineering to design and build machine learning models and take ownership of the end-to-end data science workflow from data through insights. Just as important: the ability to connect that technical work to the bigger picture — understanding not just how to build something, but why, and what it means for the client.
Note: This is not an AI research or NLP/LLM engineering role. This role works primarily with structured, tabular data (e.g., member-level transactional loyalty program data), not unstructured text data.
A Day in the Life
No two days look exactly the same, but here's what the work typically looks like:
- You start with a client's business problem — for example, what are the key drivers that impact a loyalty program’s liability or the customer behaviors that drive long-term customer lifetime value — and figure out how to turn that into a full end-to-end data science workflow
- You work directly with the client to define data requirements and build the data pipelines to ingest and transform raw loyalty program data into clean, structured datasets that are ready for analysis and modeling
- You design, build and evaluate predictive models, making sound decisions around feature engineering, methodology, and trade-offs along the way
- You synthesize insights from the results and translate them into a clear story that helps a non-technical client make a real business decision
- In between, there's always ad hoc work — new client questions, one-off analyses, and the occasional problem nobody has solved before
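To make the pipeline-to-features step above concrete, here is a minimal Python sketch. The column names (`member_id`, `points_earned`, `points_redeemed`) and sample rows are hypothetical, not a real client schema; in practice this work is done in PySpark over much larger, messier feeds.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw member-level transactions; real client feeds are far messier.
raw_transactions = [
    {"member_id": "M1", "txn_date": "2024-01-05", "points_earned": "1200", "points_redeemed": "0"},
    {"member_id": "M1", "txn_date": "2024-03-10", "points_earned": "0", "points_redeemed": "500"},
    {"member_id": "M2", "txn_date": "2024-02-18", "points_earned": "800", "points_redeemed": None},
]

def clean(txn):
    """Coerce types and fill gaps -- the 'transform raw data' step."""
    return {
        "member_id": txn["member_id"],
        "txn_date": date.fromisoformat(txn["txn_date"]),
        "points_earned": int(txn["points_earned"] or 0),
        "points_redeemed": int(txn["points_redeemed"] or 0),
    }

def member_features(txns):
    """Aggregate cleaned transactions into per-member modeling features."""
    agg = defaultdict(lambda: {"n_txns": 0, "earned": 0, "redeemed": 0})
    for t in txns:
        f = agg[t["member_id"]]
        f["n_txns"] += 1
        f["earned"] += t["points_earned"]
        f["redeemed"] += t["points_redeemed"]
    # Outstanding point balance feeds liability analysis; activity counts feed CLV models.
    for f in agg.values():
        f["outstanding"] = f["earned"] - f["redeemed"]
    return dict(agg)

features = member_features([clean(t) for t in raw_transactions])
print(features["M1"])  # {'n_txns': 2, 'earned': 1200, 'redeemed': 500, 'outstanding': 700}
```

The same clean-then-aggregate shape carries over to Spark (`groupBy("member_id").agg(...)`); the per-member feature table is the input to the predictive models described above.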
There's also real opportunity to innovate: to build new solutions, design new approaches, and contribute to how our analytics platform evolves. We're always improving how the work gets done, and we want people who want to be part of that.
Qualifications
Basic qualifications:
- Years of Experience: At least 5 years of professional experience in a data science, data engineering, or analytically focused role
- Education: Bachelor’s or master’s degree in math, economics, bioinformatics, statistics, engineering, computer science, actuarial science or other quantitative field
- Data Engineering: Demonstrated hands-on experience using Python and/or PySpark to clean, transform, and manipulate large, complex, and messy datasets
- Skills: Strong Python and/or PySpark expertise; strong SQL expertise; ability to build and maintain production-quality analytical workflows; advanced Excel skills
- Business Sense: Ability to connect technical work to business goals — to understand not just how to build something, but why, and what the result means for the client
- Modeling: Familiarity with predictive modeling techniques used in an actuarial, data science, or predictive analytics focused role
- Communication: Ability to clearly explain modeling techniques, methodologies, assumptions, and results to both technical and non-technical audiences
- Other: Growth mindset
Nice-to-haves:
- Actuarial background, direct experience building and analyzing P&C actuarial models end-to-end, or ACAS/FCAS qualifications are a plus
- Demonstrated experience working with big data frameworks (e.g., Hadoop, Spark)
- Experience with dashboarding/data visualization (e.g., Plotly Dash, Power BI)