We're committed to solving intelligence, to advance science and benefit humanity.
Artificial Intelligence • Machine Learning
🏢 In-office - London
• As a data scientist in the Responsible Development and Innovation (ReDI) team, you'll work on safety evaluations of Google DeepMind's most groundbreaking models.
• You will work with teams across Google DeepMind, as well as internal and external partners, to ensure that our work is conducted in line with responsibility and safety best practices.
• You'll be part of a team working on safety evaluations, using your expertise to gather specialised data for training and evaluating our models across numerous modalities.
• Responding to the needs of the business in a timely manner and prioritising projects accordingly.
• Contributing to the design and development of new evaluations, particularly focussing on content policy coverage of sensitive content.
• Proactively engaging with prompt dataset curation, analysis, and refinement to provide feedback for iteration with third-party (3P) vendors.
• Investigating the behaviour of our latest models to inform evaluation design.
• Investigating the accuracy of, and patterns in, human ratings of evaluation outputs.
• Assessing the quality and coverage of safety datasets.
• Supporting improvements to how evaluation findings are visualised for key stakeholders and leadership.
• Strong analytical and statistical skills, including data curation, data collection design, and prompt dataset curation and validation
• Familiarity with sociotechnical considerations of generative AI, including content safety (such as child safety) and fairness
• Ability to thrive in a fast-paced, live environment where decisions are made in a timely fashion
• Demonstrated ability to work within cross-functional teams, foster collaboration, and influence outcomes
• Significant experience presenting and communicating data science findings to non-data-science audiences, including senior stakeholders
• Strong command of Python
• Experience working with sensitive data, access controls, and procedures for data worker wellbeing
• Prior experience working in product development or similar agile settings would be advantageous
• Experience in sociotechnical research and content safety
• Demonstrated prior experience designing and implementing audits or evaluations of cutting-edge AI systems
• Experience working with ethics and safety topics associated with AI development in a technology company, such as child safety, privacy, representational harms and discrimination, misinformation, or other areas of content or model risk