🏢 In-office - London
Amazon Redshift
Apache
AWS
Azure
Cloud
Cyber Security
Docker
ETL
Google Cloud Platform
Hadoop
Kubernetes
Python
PyTorch
Spark
SQL
TensorFlow
• Develop and maintain scalable data pipelines using tools like Apache Beam, Spark, or Hadoop.
• Collaborate with Data Scientists to support machine learning workflows.
• Implement data warehousing solutions for robust reporting and analysis.
• Ensure data integrity and reliability through strong ETL/ELT practices.
• Work with Product and Engineering teams to integrate data solutions into production.
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 3+ years of experience in data engineering roles.
• Proficiency in Python, SQL, and ETL tools.
• Experience with cloud platforms (AWS, GCP, Azure) and modern data warehousing solutions (e.g., Redshift, BigQuery).
• Familiarity with distributed computing frameworks (Spark, Hadoop).