Real Estate, Workplace Programs and User Experience Lead
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, enabling sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms for high-precision manipulation of complex or deformable objects. Collaborate with software engineers to optimize and deploy research prototypes on robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
Researcher, Safety & Privacy
The role involves designing and implementing privacy-first architectures to detect and mitigate harmful model behaviors, building frameworks for auditable private identification of high-risk content such as jailbreaks, cyber threats, or weaponization instructions, and developing strict, auditable mechanisms that are triggered only by harm signals. Additionally, the researcher will drive the development of automated safety systems that preserve privacy at every level, operationalizing frameworks for identifying and addressing frontier risks while ensuring privacy guarantees remain intact even under adversarial conditions, and working on foundational problems including privacy-preserving monitoring, algorithmic auditing, secure enclaves, and adversarially robust safety enforcement protocols.
Forward Deployed Engineer, RL Environments
As an Applied Research Engineer at Labelbox, the responsibilities include developing cutting-edge systems and methods to create, analyze, and leverage high-quality human-in-the-loop data for frontier model developers. The role involves designing and implementing advanced systems that incorporate human feedback into AI training processes, such as Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). Responsibilities also include working on innovative techniques to measure and improve human data quality, developing AI-assisted tools to enhance data labeling, investigating the impact of different types of human feedback on model performance, optimizing human feedback collection algorithms, integrating breakthroughs into Labelbox's product suite to make human-AI alignment techniques scalable, engaging with customers and the AI community to understand evolving data needs, publishing research in top-tier journals and conferences, exploring new frontiers in human-AI collaboration and AI alignment, and creating technical documentation and educational content to establish Labelbox as a thought leader in AI.
Applied Scientist / Research Engineer - Multimodal (Come to Singapore)
The role involves focusing on multimodal learning across text, image, audio, and video to drive innovative research and collaborate with clients on complex projects. Responsibilities include designing, training, and deploying state-of-the-art multimodal models such as omni-models, VLMs, and models for audio, image generation, and robotics, applying them to diverse use cases like enterprise search, agents grounded in images and documents, video understanding, and speech interfaces. The position requires running pre-training and post-training of state-of-the-art models on clusters with thousands of GPUs and deploying them, generating and curating multimodal datasets at web scale, building evaluators and benchmarks for perception, grounding, OCR, and captioning, developing tools and frameworks for data generation, model training, evaluation, and deployment, collaborating cross-functionally with science, engineering, and product teams to address complex use cases using agents and RAG pipelines, and managing research projects and communications with client research teams.
Director of Biomarkers and Experimental Medicine
Develop and advance machine learning models for biological, preclinical, and translational datasets, including multimodal omics, imaging, text, and assay data; design and implement scalable pipelines for data curation, training, evaluation, and inference integrated into discovery workflows; own projects end-to-end from problem framing to prototyping, validation, and deployment; evaluate robustness, reliability, and interpretability of models to support scientific decision-making; contribute technical leadership by proposing new directions, shaping platform capabilities, and raising engineering and research standards through collaboration.
Research Intern – Reinforcement Learning (RL)
Design and build reinforcement learning environments that model real-world customer interaction workflows. Design reinforcement learning agents that learn from these environments using real-world interaction data, rewards, and feedback loops. Define reward models and feedback loops using real-world signals (outcomes and human feedback). Enable learning from production data by structuring interaction traces into training-ready datasets for offline and online learning. Experiment with multi-agent systems and simulation frameworks for complex coordination and decision-making. Collaborate with engineering and product teams to deploy, evaluate, and iterate on learning systems in production at scale.
Senior Scientist, Biology & Pharmacology
Develop and advance machine learning models for biological, preclinical, and translational datasets, including multimodal omics, imaging, text, and assay data. Design and implement scalable pipelines for data curation, training, evaluation, and inference integrated into discovery workflows. Own projects end-to-end, from problem framing and prototyping through validation and deployment. Evaluate robustness and reliability, including generalization, uncertainty, failure modes, and interpretability where it supports scientific decision-making. Contribute technical leadership by proposing new directions, shaping platform capabilities, and raising engineering and research standards through collaboration. Work may involve foundation and representation models over multimodal data; methods addressing small, biased, or noisy datasets; ML systems for experimental prioritization, assay interpretation, and translational signal discovery; and evaluation frameworks and tooling for model usability by scientists.
Member of Technical Staff, Robotics Research Lead
Lead the design and execution of the AI robotics research agenda; recruit, mentor, and manage a small team of research scientists and engineers in the London lab; collaborate with the world model and simulation teams to develop state-of-the-art training platforms for robotics; guide the creation of persistent 3D/4D scene representations and advanced embodied AI methodologies; drive research efforts in scene understanding, sim-to-real transfer, and advanced planning; foster partnerships with leading ML researchers, hardware specialists, and external collaborators; help establish the lab's technical culture and external reputation.
Head of Lab Platform
The Head of Lab Platform is responsible for leading and operating the lab platform, including providing strategic direction and oversight for lab platform scientists across high-throughput validation, data collection, and automation workstreams. They manage and execute screening workloads with the team and work with the Head of Operations to organize lab operations. They foster a collaborative, innovative environment promoting technical excellence and continuous learning, mentor and develop team members, and manage and develop CRO partnerships to ensure data quality and turnaround times. Additionally, they develop and own a comprehensive lab platform strategy, focusing on high-throughput validation, expanding data collection capabilities, and progressing toward full lab autonomy. The role includes identifying, evaluating, and integrating new automation hardware and software, prioritizing platform development areas with high scientific impact and commercial success potential, and executing plans to connect laboratory workflows with agentic AI systems for closed-loop experimental design, execution, and learning. The position requires close collaboration with computational biologists, machine learning researchers, and software engineers to deeply integrate the experimental platform with AI workflows, ensure seamless data integration and interoperability across platform components, and integrate external services into the experimental platform workflows.
Research Intern – Reinforcement Learning (RL) - Onsite
Design and build reinforcement learning environments that model real-world customer interaction workflows. Design RL agents that learn from these environments using real-world interaction data, rewards, and feedback loops. Define reward models and feedback loops using real-world signals such as outcomes and human feedback. Enable learning from production data by structuring interaction traces into training-ready datasets for offline and online learning. Experiment with multi-agent systems and simulation frameworks for complex coordination and decision-making. Collaborate with engineering and product teams to deploy, evaluate, and iterate on learning systems in production at scale.