Overview
Location: Fully remote.
Schedule: Full-time, European time zone availability.
Job Purpose
We are seeking a Data Engineer with a strong foundation in software engineering to help build robust data systems and pipelines. The role emphasizes delivering high-performance, maintainable, and scalable solutions, actively participating in code reviews and design discussions, and promoting engineering best practices.
Key Responsibilities
- Design, develop, and optimize data pipelines to ensure scalability, performance, and reliability.
- Write clean, efficient, and reusable code, applying modern software engineering principles.
- Collaborate with cross-functional teams to translate business and technical requirements into actionable solutions.
- Participate in code reviews and design discussions to maintain high-quality standards.
- Take part in on-call rotations to support and maintain system stability.
Experience & Qualifications
- Advanced proficiency in Scala and Python.
- Advanced knowledge of SQL.
- Intermediate experience with Apache Spark, Apache Flink, and streaming technologies such as Kafka.
- Intermediate experience with AWS services (EC2, S3, etc.).
- Familiarity with batch orchestration frameworks like Argo Workflows or Airflow.
- Familiarity with Docker.
- Proven ability to work autonomously, drive initiatives, and deliver high-quality solutions.
- Advanced written and spoken English.