What if you could shape the future of work and be part of the team that creates the digital workforce of tomorrow, by means of Robotic Process Automation?
At the beginning of the 20th century, Henry Ford had a vision of creating assembly lines and enabling mass production.
A hundred years later, UiPath has a grand vision of liberating the human workforce from tedious, repetitive tasks by means of software robots, artificial intelligence, and machine learning.
Here's what you would be doing at UiPath:
As a Data Engineer, you will be responsible for building and maintaining the infrastructure for big data and the AI lifecycle. You will also maintain the algorithms developed by our data scientists and use a myriad of tools to make AI models production-ready. You will work on a cross-functional team of product managers, DevOps engineers, machine learning engineers, and software engineers to ship high-impact products. As part of a hypergrowth startup, you are not afraid of getting your hands dirty and are expected to be a jack of all trades across every step of the ML lifecycle, from tagging, featurization, training, and benchmarking to experimentation, monitoring, and analytics.
Role & Responsibilities:
You will work with our team of experts in machine learning and software engineering to do the following:
- Design and build a large-scale data ingestion, storage, and processing platform.
- Build and maintain a platform for the complete machine learning lifecycle.
Qualification & Educational Requirements:
- Graduate or postgraduate degree in computer science or a related field.
- 4+ years of overall experience in the IT industry, including 1+ years working on products built to handle big data in both cloud and on-prem settings.
- Experience working with both SQL and NoSQL databases.
- Experience with object-oriented design, coding, and testing patterns, as well as experience engineering software platforms (commercial or open source) and large-scale data infrastructure.
- Experience with big data technologies such as Hadoop, Spark, and Kafka.
- Experience with Python, data processing, and parsing.
- Experience building cloud platforms on AWS, Azure, or GCP.
- Experience with offline batch processing and/or online real-time stream processing systems.
- Knowledge of machine learning and an interest in working across our entire data science stack, including model building, data pipelining, and performance/scale analysis.
We offer the option of working from home or flexible hours in a nice office, free daily premium catering, and a healthcare plan.
A competitive salary, a stock options plan, and the unique opportunity to help us develop state-of-the-art robotics technology are just a few of the pluses.
If you've read this far, we must have caught your attention, so let's connect.