PhD Research Intern, Multi-Modal Foundation Encoder for Perception
During this internship, the intern will lead the development of a multi-modal (vision, lidar, radar, and language) temporal foundation encoder to support 3D object detection & tracking, 3D segmentation (occupancy), and live maps. The research will aim to significantly improve system performance on long-tail events and rare classes by utilizing a large-capacity foundation model to learn rich representations across different sensor modalities. The project also aims to improve perception in adverse environmental conditions such as medium to heavy rain and fog, reduce false positives on water splashes and dust particles, achieve long-range sensing for highway driving, and build robustness to occlusion. The role includes exploring novel directions such as tri-modal foundation models with self-supervised pre-training, radar-language grounding for zero-shot detection, efficient sensor fusion via sparse cross-attention, or integrating 3D Gaussian splats for dynamic agent geometry and streaming sparse Gaussian occupancy prediction.
Research Scientist
The Research Scientist will investigate how intervening on training data can improve the quality and behavior of deep learning models. Responsibilities include sourcing, vetting, implementing, and improving ideas from the research literature and personal insights, conducting research guided by real customer needs rather than conference benchmarks, and collaborating closely with engineers and product teams to turn research findings into tangible impact. The role requires working autonomously in a fast-moving startup environment, engaging with customers, and contributing to shaping the product vision.
Research Scientist, Robotic Manipulation
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications.
Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms for high-precision manipulation of complex or deformable objects. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art computer vision and robot learning advancements to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
PhD Research Intern, Offline Driving Intelligence
Interns on the Offline Driving Intelligence team will develop state-of-the-art agent policies, contribute to publishable research, and receive mentorship from experienced researchers. They will work with a mentor to address a major open research question currently facing the team. Their research may be used directly in production as part of the simulation system that tests Zoox's autonomous driving software.
AI Research Director
The AI Research Director leads webAI's AI and ML research strategy, including the long-term vision, experimentation roadmap, and architectural innovation. They oversee research on large language models, diffusion and multimodal models, inference optimization, and distributed execution, and advance techniques for compression, quantization, distillation, and privacy-preserving learning for edge and on-device AI. The director collaborates with Engineering and Product teams to translate research breakthroughs into scalable, production-ready capabilities; builds, mentors, and leads a research team that fosters creativity, scientific rigor, and innovation; and evaluates emerging technologies, academic research, and industry trends to influence strategic direction. They also design and evaluate experiments, benchmarks, and methodologies for model performance and efficiency, represent webAI in research discussions with customers, partners, and the broader AI community, and ensure research initiatives align with customer missions, security requirements, and enterprise needs.
Abuse Investigator (AI Self-Improvement Risk)
As an Abuse Investigator focused on AI Self-Autonomy and Agentic Risk on the Intelligence and Investigations team, you will be responsible for identifying and investigating cases where models exhibit autonomous or agentic behavior, including chaining capabilities, acting with increasing independence, or demonstrating patterns that may introduce safety risk. This includes detecting behaviors that are not explicitly intended, understood, or covered by existing safeguards. You will review leads, investigate model behavior, and identify cases where systems demonstrate agentic or autonomous patterns that introduce safety risks. You will detect and analyze behaviors such as multi-step planning, capability chaining, tool use, persistence, and workaround behavior. You will develop signals and tracking strategies to help proactively identify emerging agentic risk patterns across the platform. You will identify gaps in existing safeguards, evaluations, or monitoring systems and propose improvements. You will communicate investigation findings clearly to technical, policy, and leadership stakeholders. This role requires working effectively in high-pressure environments and collaborating with a wide range of stakeholders.
Member of technical staff - Research - Agent
Design and develop new agents and propose new research directions involving reinforcement learning and foundation models. Design, implement, and scale high-performance systems for training large-scale agents, including infrastructure, algorithms, reward models, and training environments. Collaborate with researchers and engineers to implement, test, and productionize new agent logic, learning algorithms, and system architectures. Create, manage, and scale benchmarks and evaluation systems to track agent capabilities, owning system reliability, scalability, and observability for research infrastructure. Mentor and guide engineers and researchers, establishing and enforcing engineering standards, tooling, and best practices. Conduct code and design reviews, champion technical innovation, and proactively address technical debt to accelerate the R&D lifecycle.
Research Engineer
Design, implement, and run pre-training and post-training pipelines for action-conditioned world models and vision-language-action (VLA) models. Develop and refine training methodologies, including fine-tuning, reinforcement learning, and large-scale multimodal learning. Design and generate training and evaluation datasets from simulation, including environment setup, domain randomization, and sim-to-real transfer strategies. Build distributed training infrastructure using PyTorch, FSDP, and DeepSpeed. Work with multimodal data pipelines involving video, sensory inputs, and action sequences. Evaluate model performance using both benchmark datasets and real-world deployment metrics. Collaborate with industrial partners to adapt generative models for real-world physical AI applications. Contributions to research publications are a plus.
Research Scientist
Define and lead research directions in action-conditioned world models, physical AI, and generative modeling for embodied systems. Design novel architectures, training objectives, and evaluation frameworks for VLMs, VLAs, and world models. Direct research efforts with the goal of publishing in top journals. Partner with industrial collaborators to ground research in real-world physical AI use cases. Mentor research engineers and collaborate cross-functionally to move research into production. Stay at the frontier of the field by synthesizing relevant literature and identifying opportunities for impactful contributions. Contribute to Hedra's research culture and external scientific reputation.