GroupM is the world’s leading media investment company, responsible for more than $50B in annual media investment through agencies Mindshare, MediaCom, Wavemaker, Essence and m/SIX, as well as the outcomes-driven programmatic audience company, Xaxis.
Media Strategy • Media Buying • Data Management • New Media • Analytics
July 24
🏢 In-office - London
• As a Data Engineer in the WPP Enterprise Data Group, you will be responsible for the design and implementation of scalable data solutions, providing enterprise-scale data transformation across a broad range of projects.
• Your role will focus on delivering solutions that use large-scale data ingestion, processing, storage/querying, streaming, and batch analytics.
• As a hands-on technologist with an extensive data engineering background, you will join a group of Data Engineers who are passionate about building the best possible solutions for our business and who endorse a culture of lifelong learning and collaboration.
• Design, build, test and maintain data pipelines (ELT) according to business and technical requirements.
• Implement secure platforms with data governance in mind.
• Play a key role in automation and in building industry-leading solutions aligned with architectural best practices.
• Propose technical designs and develop integrations.
• Deliver data migrations between legacy and modern platforms.
• Design and deliver data warehousing solutions and key data engineering workstreams for any required solution.
• Support cross-functional teams across the data space.
• Experience designing and building scalable distributed data pipelines and dimensional data models.
• Experience in Python and SQL.
• Experience using Databricks is a must.
• Extensive experience with Microsoft Azure data services: Data Factory, ADLS Gen2, Event Hubs, Azure SQL and Azure Key Vault.
• Ideally, experience with dbt, Kafka, Delta Lake, PySpark and Pandas.
• A demonstrable understanding of Continuous Integration and Continuous Delivery (CI/CD), Agile practices, unit and integration testing, and development practices using Azure DevOps.
• Fluency in English.
• Exposure to Data Science and ML technologies is a plus.