Member of Technical Staff, Robotics Research Lead
Lead the design and execution of the AI's robotics research agenda; recruit, mentor, and manage a small team of research scientists and engineers in the London lab; collaborate with the world model and simulation teams to develop state-of-the-art training platforms for robotics; guide the creation of persistent 3D/4D scene representations and advanced embodied AI methodologies; drive research efforts in scene understanding, sim-to-real transfer, and advanced planning; foster partnerships with leading ML researchers, hardware specialists, and external collaborators; and help establish the lab's technical culture and external reputation.
Member of Technical Staff, Robotics Research Lead
Collaborate with the world model team to build a state-of-the-art training and simulation platform for robotics. Develop persistent 3D/4D scene representations that maintain temporal consistency. Unlock advanced robotics planning and decision-making through in-house, cutting-edge world models. Partner with ML researchers developing multimodal world models and generative systems to innovate on generative models and physical AGI, and work with hardware teams and partners to ensure that robotic platforms, sensing, and system dynamics perform reliably in high-stakes, real-world operation where models think, simulate, and act.
PhD Research Intern, Vision Language Action Models
Work on the Multimodal Language Action model by exploring novel discrete action tokenization and flow matching approaches, building on MotionLM, FAST, and other models. Train models at the billion-plus-parameter scale using millions of miles of proprietary Zoox driving data. Gain experience and insight into training Multimodal Language Action models at scale. Contribute to publishable research that could be integrated into Zoox vehicles.
Compensation and Analytics Program Manager
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms for high-precision manipulation of complex or deformable objects. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art computer vision and robot learning advancements to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
PhD Research Intern, Offline Driving Intelligence
Interns on the Offline Driving Intelligence team will develop state-of-the-art agent policies, contribute to publishable research, and receive mentorship from experienced researchers. They will work with a mentor to address a major open research question currently facing the team. Their research may be used directly in production as part of the simulation system that tests Zoox's autonomous driving software.
Research Scientist
Define and lead research directions in action-conditioned world models, physical AI, and generative modeling for embodied systems. Design novel architectures, training objectives, and evaluation frameworks for VLMs, VLAs, and world models. Direct research efforts with the goal of publishing in top journals. Partner with industrial collaborators to ground research in real-world physical AI use cases. Mentor research engineers and collaborate cross-functionally to move research into production. Stay at the frontier of the field by synthesizing relevant literature and identifying opportunities for impactful contributions. Contribute to Hedra's research culture and external scientific reputation.
Research Engineer
Design, implement, and run pre-training and post-training pipelines for action-conditioned world models and vision-language-action (VLA) models. Develop and refine training methodologies, including fine-tuning, reinforcement learning, and large-scale multimodal learning. Design and generate training and evaluation datasets from simulation, including environment setup, domain randomization, and sim-to-real transfer strategies. Build distributed training infrastructure using PyTorch, FSDP, and DeepSpeed. Work with multimodal data pipelines involving video, sensory inputs, and action sequences. Evaluate model performance using both benchmark datasets and real-world deployment metrics. Collaborate with industrial partners to adapt generative models for real-world physical AI applications. Contributions to research publications are a plus.
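To make the training-pipeline duties above concrete, here is a minimal, self-contained sketch of the core idea behind an action-conditioned world model: a network trained to predict the next state embedding from the current state and an action. All dimensions, names, and the synthetic data are illustrative placeholders, not Hedra's actual architecture or stack; a production pipeline would add the FSDP/DeepSpeed sharding, multimodal inputs, and evaluation harness described in the listing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyWorldModel(nn.Module):
    """Toy action-conditioned world model: given the current state
    embedding and an action vector, predict the next state embedding."""

    def __init__(self, state_dim: int = 32, action_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Condition the prediction on the action by concatenation.
        return self.net(torch.cat([state, action], dim=-1))


torch.manual_seed(0)
model = TinyWorldModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

# Synthetic stand-in for (state, action, next_state) rollout data.
state = torch.randn(64, 32)
action = torch.randn(64, 8)
next_state = torch.randn(64, 32)

losses = []
for _ in range(100):
    pred = model(state, action)
    loss = F.mse_loss(pred, next_state)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

At scale, the same loop would be wrapped in a distributed strategy (e.g. FSDP) and fed by a multimodal data pipeline, but the objective, predicting future state conditioned on action, is unchanged.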
