Starship Technologies is developing the future of delivery – a network of self-driving robots ready to serve you any time, anywhere. Our mission is to redefine how goods flow through the places in which we live. We see a future where delivery is more convenient for everyone – made possible by our star team of autonomous neighbourhood robots. We have entered the commercial phase with hundreds of robots delivering parcels to thousands of happy customers across the United States, as well as four countries in Europe, with further expansion planned.
Starship Technologies is a paradise for data enthusiasts. Data is gathered from the robots, their infrastructure, and the world around them. We are truly a data-driven company, so managing data flow, making that flow faster, and improving the quality of data are necessities.
We are seeking a Data Engineer to join our Data team. This role will give you hands-on experience scaling a data platform, both by participating in choices of architecture, tools, and processes, and by implementing those choices.
Some work you will encounter includes:
- High-impact projects that improve data availability and quality, providing reliable access to data for analytics and the rest of the business.
- Making our infrastructure more scalable, reliable, and easier to use.
- Consulting with others on the team, assisting them with some of their daily data challenges.
Skills you need to have:
- Strong SQL skills, as we use several variants daily (Redshift, Presto, Spark)
- Experience coding in several different languages on a varied tech stack
- Experience with some of the technologies we currently use (see below). Each is straightforward to learn on its own, but learning all of them from scratch takes a long time
- Linux command line experience
Also, you should have...
- An active interest in the world of data engineering
- Some experience in a similar role. While the tools' names may be new, the same tasks have been around for a long time.
Our software stack
- Our data infrastructure is deployed entirely on AWS and built 95% from popular open-source tools
- We make heavy use of Spark (both PySpark and Scala) to process and transform data. There is some Golang and Python mixed in as well.
- We use Redshift and Athena for ad-hoc queries and analysis
- We schedule everything with Airflow
- We are currently in the midst of shifting our architecture to streaming data (Kafka)
We value our team and make sure that there are:
- Fun company events
- Weekly team lunches
- Never-ending snacks and drinks
- Full-time contract with a four-month trial period
- Location: Tallinn, Estonia
Be part of one of the most talked-about technology companies. To get to know our robots and our people, and to help us change the world, get in touch and let’s have a chat!