AI Applied Research Scientist Jobs

Discover the latest remote and onsite AI Applied Research Scientist roles at top AI companies that are actively hiring. Updated hourly.

Check out 45 new AI Applied Research Scientist opportunities posted on The Homebase

Technical Director of AI Safety

New
Top rated
Faculty
Full-time

The Technical Director of AI Safety owns the technical strategy for AI Safety, determining research directions and building technologies that mitigate risks ranging from misalignment to societal harms. The role leads a high-performing R&D team through intentional hiring, mentorship, and cultivation of a culture defined by technical excellence and high output. It drives academic impact by guiding complex machine learning projects and securing top-tier publications to establish Faculty's reputation in the AI safety domain. The position shapes market-leading offerings for frontier labs and security institutes by translating cutting-edge R&D into practical safety solutions, and oversees technical delivery of AI safety and security projects, ensuring scientific rigor and high-quality outputs across evaluations and red-teaming efforts. The Technical Director also represents Faculty externally as a primary technical voice, delivering thought leadership and speaking at major global industry events, and collaborates with business unit directors and commercial teams to align research investments with strategic growth and client needs. The role offers the opportunity to hire and build a world-class AI safety technical team, design and lead an AI safety R&D program, build scaling work with frontier labs, and contribute to the international debate on AI safety, including work with governments and other key bodies.

Undisclosed


London, United Kingdom
Maybe global
Hybrid

Research Engineer – Benchmarking, Evals & Failure Analysis

New
Top rated
Mercor
Full-time

As a Research Engineer at Mercor, you will own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how frontier language models are trained and improved. You will design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning, ensuring they scale with training and align with product and research goals. You will build and operate LLM evaluation systems including runs, scoring, dashboards, and reporting to allow tracking and comparison of model performance at scale. You will conduct systematic failure analysis on model outputs, categorize failure modes, quantify their prevalence, and use these insights to influence reward design, data curation, and benchmark design. Additionally, you will create and refine rubrics, automated evaluators, and scoring frameworks that influence training and evaluation decisions, balancing rigor and scalability. You will quantify data usability and quality, guide data generation, augmentation, and curation based on evaluations and failure analysis. Collaboration with AI researchers, applied AI teams, and data producers to align evaluations with training objectives and prioritize important benchmarks and failure analyses is expected. Finally, you will operate with strong ownership in a fast-paced, high-iteration research environment.

$130,000 – $500,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

SAP Key User - Procurement

New
Top rated
helsing
Full-time

You will be responsible for defining operational domains and evaluating the reliability of the AI capabilities developed in-house. You will develop and extend the state of the art in uncertainty quantification and calibration. This will involve understanding the AI systems built, interfacing with them, and evaluating their robustness in real-world and adversarial scenarios. You will contribute to impactful projects and collaborate with people across several teams and backgrounds.

Undisclosed


Munich
Maybe global
Onsite

Research Scientist – Tabular & Structured Machine Learning

New
Top rated
Granica
Full-time

Invent and prototype algorithms that advance the foundations of machine learning for structured and tabular data; develop new representation learning techniques and information models for large enterprise datasets; build adaptive learners combining statistical learning theory, probabilistic modeling, and large-scale systems optimization; contribute to the development of large tabular models and structured foundation models; design architectures integrating relational, symbolic, and neural learning components; research and implement methods for dataset compression, selection, and representation to improve learning efficiency; develop cost models and optimization frameworks for large-scale structured learning systems; collaborate closely with the Granica research group and with systems engineers; rapidly prototype new algorithms and evaluate them on real enterprise datasets; publish and contribute to the broader research community shaping the future of structured AI and efficient ML systems.

$160,000 – $250,000 per year (USD)

Bay Area, United States
Maybe global
Onsite

Research Engineer/Scientist - Generative UI, Consumer Devices

New
Top rated
OpenAI
Full-time

As a Research Engineer/Scientist on the Future of Computing Research team, you will train and evaluate state-of-the-art models along the axes most important to the vision for future devices. You will work to push nascent research capabilities past their current barriers and develop them into capabilities that others can build upon. In doing so, you will help define how software functions for decades to come.

$380,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid

Research Engineer/Scientist - Human Alignment, Consumer Devices

New
Top rated
OpenAI
Full-time

Develop RLHF and post-training methods for multimodal models. Build reward models and preference-learning pipelines for adaptive, personalized model behavior. Design datasets, rubrics, and evaluation frameworks that capture user preferences, contextual appropriateness, and long-term value in realistic tasks. Run experiments on policy improvement using explicit feedback, implicit signals, and model-based grading. Work on long-horizon evaluation problems where model quality depends not just on single responses but on behavior that improves outcomes over time. Collaborate closely with safety researchers to ensure that adaptation and personalization remain aligned, interpretable, and bounded by clear constraints. Prototype and iterate quickly on training recipes, reward formulations, data pipelines, and evaluation suites for product-relevant behaviors. Help define how OpenAI measures success for personalized AI systems including trust, appropriateness, and long-term user benefit.

$380,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Remote

Videographer

New
Top rated
helsing
Full-time

You will be responsible for defining operational domains and evaluating the reliability of the AI capabilities developed in-house. You will develop and extend the state of the art in uncertainty quantification and calibration. This will involve understanding the AI systems built, interfacing with them, and evaluating their robustness in real-world and adversarial scenarios. You will contribute to impactful projects and collaborate with people across several teams and backgrounds.

Undisclosed


Munich, Berlin, or London
Maybe global
Onsite

Researcher, Loss of Control

New
Top rated
OpenAI
Full-time

As a Researcher for loss of control mitigations, you will design and implement mitigation components for loss of control risk including prevention, monitoring, detection, containment, and enforcement under the guidance of senior leadership. You will integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams to ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase. You will evaluate technical trade-offs in the loss of control domain and propose pragmatic, testable solutions. Additionally, you will collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios such as deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight. You will also execute rigorous testing and red-teaming workflows to stress-test the mitigation stack against subversive model behaviors like sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception, and iterate based on findings.

$295,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Software Engineer, Developer Experience

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed


Munich, Germany
Maybe global
Onsite

Safety Research Internship (Spring/Summer 2026)

New
Top rated
Cohere
Full-time

As a Cohere Research Intern, you will conduct cutting-edge machine learning research, training and evaluating production large language models. You will focus on research projects aimed at making models better understood, safer, more reliable, more inclusive, and more beneficial for the world. You will disseminate your research results through the production of publications, datasets, and code. Additionally, you will contribute to research initiatives that have practical applications in Cohere's product development. The internship involves collaborating with the Modelling Safety team on implementing novel research ideas related to fairness, safety (including for multiple languages, dialects, and cultural contexts), robustness, generalisation, interpretability, safety for agents with complex read/write actions, and safety for codegen. The project details and topic will be designed collaboratively between the intern and the team, with a goal to publish a paper in a top venue and contribute to open science. The internship may be remote or onsite, with no relocation or housing provided.

Undisclosed


Canada
Maybe global
Remote

Want to see more AI Applied Research Scientist jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for AI Applied Research Scientist jobs?


What does an AI Applied Research Scientist do?

AI Applied Research Scientists lead research initiatives to develop new AI methodologies and algorithms. They design experiments, build prototypes, and create proof-of-concepts to test innovative AI systems. Their work involves implementing cutting-edge techniques in areas like computer vision or NLP, collaborating with engineers to transition research into production, and publishing findings in academic journals. These researchers bridge the gap between theoretical AI advancements and practical applications for specific domains.

What skills are required for an AI Applied Research Scientist?

Essential skills for this role include expertise in machine learning frameworks, proficiency in Python with libraries like PyTorch, LangChain, and Streamlit, and the ability to implement algorithms from scratch. Strong research design capabilities and problem-solving skills are crucial. Experience with deep learning, computer vision, or NLP is highly valued. Excellent communication abilities for interdisciplinary collaboration and technical documentation are also necessary in AI research positions.

What qualifications are needed for an AI Applied Research Scientist role?

Most employers require a Master's degree at minimum, with a PhD preferred, in Computer Science, Electrical Engineering, or a related technical field. Candidates typically need at least 3 years of hands-on experience in AI/ML research and deep learning algorithms. Demonstrated expertise in specific domains like computer vision is often expected. Beyond academic credentials, the ability to handle ambiguous research areas and collaborate effectively across teams is essential.

What is the salary range for an AI Applied Research Scientist?

While specific figures vary, AI Applied Research Scientist positions generally command premium compensation due to their specialized expertise and advanced education requirements. Salaries typically depend on factors including location (with tech hubs paying more), years of research experience, publication history, domain specialization (such as computer vision or NLP), and whether the role is in industry or academia.

How long does it take to get hired as an AI Applied Research Scientist?

The hiring process for AI Applied Research Scientist positions typically takes 1–3 months. It often involves multiple interview rounds, including technical assessments, research presentations, and discussions with cross-functional teams. The timeline may extend if the role requires specialized domain expertise or if candidates need to demonstrate their research capabilities through sample projects. Educational requirements (PhD preferred) also lengthen the career preparation timeline considerably.

Are AI Applied Research Scientist jobs in demand?

Yes, AI Applied Research Scientist jobs are in high demand across industries as organizations seek experts who can translate theoretical AI advancements into practical applications. The specialized skill set combining deep technical expertise with implementation capability makes qualified candidates particularly valuable, and the position's critical role in developing new AI methodologies and bridging research-to-production gaps drives consistent hiring needs.