Senior Engineer, XBAT Simulation Modeling
Build and scale simulation frameworks for integrated testing of autonomy, GNC, and embedded systems in C++. Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release. Implement scenario simulation tooling and formal test infrastructure. Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios. Develop infrastructure for CI integration, parallel simulation execution, and automated regression testing. Profile, optimize, and validate C++ codebases for performance, determinism, and fidelity. Contribute to architecture decisions that define the next generation of aircraft simulation tools within Shield AI. Mentor engineers and guide best practices in C++, simulation architecture, and performance engineering.
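The deterministic, faster-than-real-time execution described above can be sketched minimally: a fixed-timestep loop with no wall-clock sleeps, so identical inputs always produce identical trajectories. This is an illustrative Python sketch only; `VehicleState` and `step_dynamics` are hypothetical names, not Shield AI's API, and the real tooling is C++.

```python
# Minimal sketch of a deterministic, faster-than-real-time simulation loop.
# VehicleState and step_dynamics are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass
class VehicleState:
    t: float = 0.0  # simulated time, seconds
    x: float = 0.0  # position
    v: float = 1.0  # velocity

def step_dynamics(s: VehicleState, dt: float) -> VehicleState:
    # Fixed-timestep integration: identical inputs always yield identical
    # outputs, which is what makes regression runs repeatable.
    return VehicleState(t=s.t + dt, x=s.x + s.v * dt, v=s.v)

def run(duration: float, dt: float = 0.01) -> VehicleState:
    s = VehicleState()
    # No wall-clock sleeps: the loop runs as fast as the CPU allows, so
    # 100 s of simulated flight can finish in milliseconds of wall time.
    while s.t < duration:
        s = step_dynamics(s, dt)
    return s
```

The decoupling of simulated time from wall time is what enables both faster-than-real-time runs and bit-identical regression testing in CI.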
Production AI Ops Lead
The Production AI Ops Lead designs and develops the production lifecycle of full-stack AI applications while supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and resilient cloud infrastructure for international government partners. Responsibilities include owning the production outcome with full accountability for long-term performance and reliability of AI use cases across international government agencies; ensuring full-stack integrity by overseeing the end-to-end health of the platform and seamless integration between AI core and other components; building automated systems to monitor model performance and data drift across dispersed environments; managing the technical lifecycle within diverse regulatory frameworks; leading incident response for production issues in mission-critical environments and establishing preventive guardrails; translating technical performance metrics into clear insights for senior government officials; and partnering with Engineering and ML teams to influence future technical architecture and decisions based on field learnings.
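The data-drift monitoring mentioned above is often implemented with a statistic such as the Population Stability Index (PSI), comparing a reference feature distribution against a live window. This is a hedged sketch, not the team's actual pipeline; the bucketing and thresholds are illustrative choices.

```python
# Illustrative data-drift check using the Population Stability Index (PSI).
# Bucketing scheme and thresholds are assumptions, not a specific product's.
import math

def psi(reference, live, bins=10):
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small epsilon keeps empty buckets out of log(0).
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]
    p, q = hist(reference), hist(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A common rule of thumb: PSI below 0.1 suggests a stable distribution, 0.1 to 0.25 moderate shift, and above 0.25 significant drift worth alerting on.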
Senior Software Engineer, ML Core
Design, develop, and deploy custom and off-the-shelf ML libraries and tooling to improve ML development, training, deployment, and on-vehicle model inference latency. Build tooling and establish development best practices to manage and upgrade foundational libraries such as NVIDIA drivers, PyTorch, and TensorRT, improving the ML developer experience and expediting debugging efforts. Collaborate closely with cross-functional teams including applied ML research, high-performance compute, advanced hardware engineering, and data science to define requirements and align on architectural decisions. Work across multiple ML teams within Zoox, supporting on- and off-vehicle ML use cases and coordinating to meet the needs of vehicle and ML teams to reduce the time from ideation to productionization of AI innovations.
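Work on inference latency typically starts with a repeatable micro-benchmark: warm up the model, then report percentile latencies rather than means. The sketch below is illustrative; `model` is any callable stand-in, and the real measurement would wrap the actual PyTorch/TensorRT runtime.

```python
# Illustrative latency micro-benchmark harness; the real stack (PyTorch,
# TensorRT) is assumed, and `model` here is just any callable.
import time
import statistics

def benchmark(model, inputs, warmup=10, runs=100):
    for _ in range(warmup):
        model(inputs)  # warm caches / JIT before measuring
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model(inputs)
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[int(0.99 * (len(samples) - 1))],
    }
```

Reporting p50 and p99 separately matters because tail latency, not average latency, usually drives on-vehicle deadline misses.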
Senior Software Engineer, Pilots
As a Senior Software Engineer on the Pilots team, you will deliver robust, thoroughly tested, and maintainable C++ code for edge and robotics platforms; design, implement, and own prototype perception systems that may transition into production-grade solutions; construct and refine real-time perception pipelines, including detection, tracking, and sensor fusion; adapt and integrate ML and CV models for Hayden-specific applications; drive technical decision-making that balances prototyping speed with production readiness; collaborate with the Product team and cross-functional Engineering departments; and contribute to shared infrastructure, tooling, and architectural patterns as pilots mature into foundational products.
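The core of a detection-and-tracking pipeline like the one described is frame-to-frame data association. As a hedged illustration only (production systems typically use Kalman prediction plus Hungarian matching, and the role's code is C++), here is a greedy nearest-neighbor association sketch with a distance gate:

```python
# Minimal sketch of track-to-detection association via greedy nearest
# neighbor with a distance gate. Names and structure are illustrative.
def associate(tracks, detections, gate=2.0):
    # tracks / detections: dicts mapping id -> (x, y) position
    pairs, used = [], set()
    for tid, tpos in tracks.items():
        best, best_d = None, gate  # the gate rejects implausible matches
        for did, dpos in detections.items():
            if did in used:
                continue
            d = ((tpos[0] - dpos[0]) ** 2 + (tpos[1] - dpos[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = did, d
        if best is not None:
            pairs.append((tid, best))
            used.add(best)
    return pairs
```

Unmatched detections would spawn new tracks and unmatched tracks would age out; those lifecycle rules are omitted here for brevity.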
Robotics Engineer
As a Software Engineer in the Robotics and Automation group, you will design and deploy systems to automate materials science research and discovery laboratories, specializing in robotics, automation, and perception software development. You will architect and develop software systems that control and orchestrate robotic workcells for autonomous materials experimentation, and design scalable control frameworks for flexible automation involving robots, motion systems, sensors, and lab instruments. Your role involves collaborating with hardware, mechatronics, and science teams to translate experimental workflows into reliable automated processes; building and maintaining APIs and services for scheduling, execution, monitoring, and data capture; developing simulation, testing, and validation tools to accelerate development and ensure system reliability; integrating 2D and 3D vision systems with robotic manipulation, motion planning, and execution; optimizing system performance, robustness, and throughput under rapid iteration cycles; contributing to technical direction, architecture decisions, and best practices; mentoring junior engineers and helping establish engineering standards; and fostering collaboration and open-mindedness to empower the team to deliver world-class technology at unprecedented speed.
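The scheduling, execution, and data-capture services described above reduce, at their smallest, to a workflow executor that runs named steps, threads results between them, and halts safely on failure. This sketch is purely illustrative; the step names and the `run_workflow` helper are hypothetical, and the real system would sit atop hardware drivers and APIs.

```python
# Sketch of a tiny experiment-workflow executor: each step is a named
# callable; results are captured for provenance. All names are illustrative.
def run_workflow(steps, context=None):
    context = dict(context or {})
    log = []
    for name, fn in steps:
        try:
            context[name] = fn(context)  # each step can see prior results
            log.append((name, "ok"))
        except Exception as exc:
            log.append((name, f"failed: {exc}"))
            break  # halt on failure rather than run later steps blind
    return context, log
```

Capturing both the result context and a per-step log is what makes automated experiments auditable after the fact.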
Perception Engineer
Design, implement, and deploy 2D and 3D vision systems for robotic manipulation, inspection, state verification, and sensor fusion; develop vision-guided automation solutions integrating cameras, lighting, optics, and robots in laboratory and industrial environments; implement perception pipelines for object detection, segmentation, pose estimation, and feature extraction; own camera calibration and system-level accuracy validation; develop novel algorithms for state estimation of fluids and particle flows; integrate vision outputs with robot motion planning, grasping, and task execution; tune and harden vision systems for robustness against variability in materials, reflections, and environmental conditions; collaborate with software, mechatronics, and mechanical teams to translate experimental and operational needs into automated solutions; contribute to technical direction, architecture decisions, and best practices across the robotics, perception, and automation software stack; and bring an attitude of collaboration and open-mindedness to facilitate fearless and creative problem solving that empowers the team to ship world-class technology at unprecedented speed.
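Camera calibration and accuracy validation, mentioned above, usually boil down to one number: mean reprojection error between observed image points and points projected through the calibrated model. As an illustrative sketch only (the intrinsics are made-up numbers, and distortion is ignored):

```python
# Illustrative pinhole projection and reprojection-error check, the core
# of camera-calibration validation. Intrinsics here are made-up values.
def project(point_3d, fx, fy, cx, cy):
    X, Y, Z = point_3d
    # Pinhole model, assuming Z > 0 and no lens distortion.
    return (fx * X / Z + cx, fy * Y / Z + cy)

def mean_reprojection_error(points_3d, observed_2d, fx, fy, cx, cy):
    errs = []
    for p3, (u, v) in zip(points_3d, observed_2d):
        pu, pv = project(p3, fx, fy, cx, cy)
        errs.append(((pu - u) ** 2 + (pv - v) ** 2) ** 0.5)
    return sum(errs) / len(errs)
```

A well-calibrated system typically targets sub-pixel mean reprojection error; validating it system-wide (camera plus robot kinematics) is what the role's "system-level accuracy validation" refers to.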
Mechanical Engineer - Hands
Design, deploy, and maintain Figure's training clusters. Architect and maintain scalable deep learning frameworks for training on massive robot datasets. Work together with AI researchers to implement training of new model architectures at a large scale. Implement distributed training and parallelization strategies to reduce model development cycles. Implement tooling for data processing, model experimentation, and continuous integration.
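Conceptually, the distributed training and parallelization strategies above rest on data parallelism: each worker computes gradients on its shard, and an all-reduce averages them before the weight update. The sketch below only shows that math in plain Python with a toy least-squares model; real deployments would use a framework such as `torch.distributed`, and every name here is illustrative.

```python
# Conceptual sketch of data-parallel training: per-worker gradients on data
# shards, averaged by a (simulated) all-reduce. Toy model: y = w * x.
def local_gradient(weights, shard):
    w = weights[0]
    # Gradient of mean squared error over this worker's shard.
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def allreduce_mean(grads_per_worker):
    n = len(grads_per_worker)
    return [sum(g[i] for g in grads_per_worker) / n
            for i in range(len(grads_per_worker[0]))]

def step(weights, shards, lr=0.05):
    # In a real cluster these run concurrently, one shard per worker.
    grads = [local_gradient(weights, s) for s in shards]
    avg = allreduce_mean(grads)
    return [w - lr * g for w, g in zip(weights, avg)]
```

Because the averaged gradient equals the full-batch gradient (for equal shard sizes), adding workers shortens wall-clock time per epoch without changing the optimization trajectory.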
Senior / Staff Software Engineer - Perception 3D Tracking
The role involves defining the on-vehicle architecture for producing core tracking results from the Perception stack, working with both the model teams and the optimization teams to develop a highly performant and efficient system that can run on the vehicle, working with Perception data at both the input and output of machine-learned models, and integrating the tracking output into the larger behavioral system in the Autonomy stack.
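Between perception frames, a 3D tracker typically propagates each track with a motion model before fusing new detections. As a hedged illustration of just that prediction step (a full stack would pair it with a measurement update, e.g. a Kalman filter; all names are illustrative):

```python
# Constant-velocity prediction step for a 3D track; the measurement-update
# half of the filter is omitted. Field names are illustrative.
def predict(track, dt):
    px, py, pz = track["pos"]
    vx, vy, vz = track["vel"]
    return {
        "pos": (px + vx * dt, py + vy * dt, pz + vz * dt),
        "vel": (vx, vy, vz),  # constant-velocity assumption
    }
```

Keeping this step cheap and branch-free matters on-vehicle, where hundreds of tracks must be propagated within a fixed compute budget each frame.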
Senior AI Research Scientist, Vision-Guided Robotics
As a Senior AI Research Scientist for Vision-guided robotics, you will lead the research and development of novel deep learning algorithms enabling robots to perform complex, contact-rich manipulation tasks. You will explore the intersection of computer vision and robotic control, designing systems for robots to perceive and interact with objects in dynamic environments, creating models that integrate visual data to guide physical manipulation beyond simple grasping. Collaborating with a multidisciplinary team, you'll translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. You will research and develop deep learning architectures for visual perception and sensorimotor control, design algorithms for manipulation of complex or deformable objects with high precision, collaborate with software engineers to optimize and deploy prototypes onto robotic hardware, evaluate model performance in simulations and real-world environments to ensure robustness, identify opportunities to apply advancements in computer vision and robot learning to industrial problems, and mentor junior researchers contributing to the technical direction of the manipulation research roadmap.
Robotics Software Engineer, Sensor-based Control and Robot Learning
Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks, exploring the intersection of computer vision and robotic control to design systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Work with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in both simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.
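The perception-to-control loop underlying vision-guided manipulation can be illustrated at its simplest with image-based visual servoing: drive a tracked feature toward its goal pixel with a proportional law. This is only a toy sketch of the loop's shape; contact-rich manipulation as described above would use learned policies, and the function and gain here are hypothetical.

```python
# Toy image-based visual-servoing step: proportional control on image-space
# error. Fixed-Jacobian simplification; gain and names are illustrative.
def servo_step(feature_px, goal_px, gain=0.5):
    ex = goal_px[0] - feature_px[0]
    ey = goal_px[1] - feature_px[1]
    # Error shrinks by (1 - gain) each step, so it converges for 0 < gain < 1
    # in this simplified setting.
    return (feature_px[0] + gain * ex, feature_px[1] + gain * ey)
```

Iterating this step closes the loop between what the camera sees and how the arm moves, which is the structure the learned models above generalize to deformable, contact-rich tasks.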
