Chef Robotics is on a mission to accelerate the advent of intelligent machines in the physical world, starting with food. We build AI-powered robots that assemble fresh meals at scale for some of the largest food producers in North America — companies making ready-to-eat meals for airlines, retailers, meal kits, and the frozen food aisle.
Our robots operate in high-mix, high-variability production environments where ingredients change shape, color, and consistency from one tray to the next. Solving this requires tightly coupled perception, manipulation, and learning — and a team that ships hardware-software systems into customer facilities and keeps them running.
Headquartered in San Francisco, we are a venture-backed team of robotics, ML, and operations engineers building the foundation for general-purpose food robotics.
Chef Robotics is building autonomous robots that work alongside humans in commercial food preparation environments — and perception is at the heart of what makes them reliable. As a Perception Engineer, you will own the full stack of how our robots see and understand the world: from integrating cutting-edge camera hardware, to training production-grade deep learning models, to ensuring those models perform accurately and efficiently in real time on the factory floor.
In this role, you will:
- Design, train, and optimize deep learning models for detection, segmentation, pose estimation, and classification — with a focus on real-world robustness over benchmark performance.
- Build low-latency inference pipelines that approach real-time performance; profile and optimize models for deployment on embedded and edge hardware.
- Develop and improve multi-object tracking algorithms for reliable identification and motion prediction of items across frames.
- Solve challenging perception problems specific to food robotics: deformable objects, occlusions, varying lighting, and high visual similarity between categories.
- Own the end-to-end ML lifecycle: data collection strategy, annotation tooling, dataset curation, augmentation pipelines, model training, evaluation, deployment, and field debugging.
- Develop tooling to monitor model performance in production and drive continuous improvement cycles.
- Partner closely with robotics, hardware, and software engineers to translate perception capabilities into reliable end-to-end robot behaviors.
- Help define the perception roadmap and influence technical direction as the team grows.
- Assist in integrating new cameras and sensors for enhanced robotic vision.
What You Bring:
- BS, MS, or PhD in Computer Science, Robotics, Electrical Engineering, or a closely related field.
- 5+ years of combined research and industry experience in computer vision and machine learning, with a track record of shipping perception systems to production.
- Deep expertise in at least two of: instance/semantic segmentation, object detection, 3D perception, or multi-object tracking.
- Strong Python skills; experience building production-quality, maintainable code — not just research prototypes.
- Hands-on experience with deep learning frameworks (PyTorch strongly preferred) and the full training pipeline from data to deployed model.
- Experience working with RGB-D and other depth cameras, and with point cloud data.
- Proven ability to build and optimize models for low-latency, real-time inference.
- Familiarity with ROS or similar robotics middleware.
Nice to Have:
- Experience using simulation environments (e.g., Isaac Sim, Gazebo) for synthetic data generation, domain randomization, and sim-to-real transfer of perception models.
- C++ proficiency for performance-critical modules and embedded deployment.
- Experience with cloud ML infrastructure (GCP, AWS) and containerization (Docker, Kubernetes).
- Background in autonomous vehicles, warehouse robotics, or other perception-heavy robotics applications.
- Contributions to open-source CV/ML projects or publications in top-tier venues (CVPR, ECCV, NeurIPS, etc.).