AI Research Scientist Jobs

Discover the latest remote and onsite AI Research Scientist roles across top active AI companies. Updated hourly.

Check out 275 new AI Research Scientist opportunities posted on The Homebase

Research Engineer – Matilda

New
Top rated
Maincode
Full-time

Contribute to building Matilda, a conversational AI platform: research relevant literature, develop telemetry systems for production infrastructure at scale, and take on related work as it arises. Work collaboratively within a small team that values fast progress and takes the work seriously. Expect to adapt to shifting roles and responsibilities rather than follow a fixed, detailed job specification.

Undisclosed


Melbourne, Australia
Maybe global
Onsite

Technical Director of AI Safety

New
Top rated
Faculty
Full-time

The Technical Director of AI Safety owns the technical strategy for AI safety: setting research directions and building technologies that mitigate risks ranging from alignment failures to societal harms. The role leads a high-performing R&D team through intentional hiring, mentorship, and cultivation of a culture defined by technical excellence and high output. It drives academic impact by guiding complex machine learning projects and securing top-tier publications that establish Faculty's reputation in the AI safety domain, and it shapes market-leading offerings for frontier labs and security institutes by translating cutting-edge R&D into practical safety solutions. The Director oversees technical delivery of AI safety and security projects, ensuring scientific rigor and high-quality outputs across evaluations and red-teaming efforts, and represents Faculty externally as a primary technical voice, delivering thought leadership and speaking at major global industry events. The role collaborates with business unit directors and commercial teams to align research investments with strategic growth and client needs. It also offers the opportunity to hire and build a world-class AI safety technical team, design and lead an AI safety R&D program, build scaling work with frontier labs, and contribute to the international debate on AI safety, including work with governments and other key bodies.

Undisclosed


London, United Kingdom
Maybe global
Hybrid

Research Engineer – Benchmarking, Evals & Failure Analysis

New
Top rated
Mercor
Full-time

As a Research Engineer at Mercor, you will own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how frontier language models are trained and improved. You will design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning, ensuring they scale with training and align with product and research goals. You will build and operate LLM evaluation systems including runs, scoring, dashboards, and reporting to allow tracking and comparison of model performance at scale. You will conduct systematic failure analysis on model outputs, categorize failure modes, quantify their prevalence, and use these insights to influence reward design, data curation, and benchmark design. Additionally, you will create and refine rubrics, automated evaluators, and scoring frameworks that influence training and evaluation decisions, balancing rigor and scalability. You will quantify data usability and quality, guide data generation, augmentation, and curation based on evaluations and failure analysis. Collaboration with AI researchers, applied AI teams, and data producers to align evaluations with training objectives and prioritize important benchmarks and failure analyses is expected. Finally, you will operate with strong ownership in a fast-paced, high-iteration research environment.

$130,000 – $500,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

SAP Key User - Procurement

New
Top rated
Helsing
Full-time

You will be responsible for defining operational domains and evaluating the reliability of the AI capabilities developed in-house. You will develop and extend the state-of-the-art in uncertainty quantification and uncertainty calibration. This will involve understanding the AI systems built, interfacing with them, and evaluating their robustness in real-world and adversarial scenarios. You will contribute to impactful projects and collaborate with people across several teams and backgrounds.
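Uncertainty calibration of the kind this role describes is often measured with expected calibration error (ECE): bin predictions by confidence and compare each bin's mean confidence against its empirical accuracy. A minimal sketch, not Helsing's actual tooling:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue  # skip empty bins
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by fraction of samples in the bin
    return ece
```

A model that reports 95% confidence but is right only half the time in that bin contributes a large gap; a well-calibrated model drives ECE toward zero.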

Undisclosed


Munich
Maybe global
Onsite

Research Scientist – Tabular & Structured Machine Learning

New
Top rated
Granica
Full-time

Invent and prototype algorithms that advance the foundations of machine learning for structured and tabular data. Develop new representation learning techniques and information models for large enterprise datasets. Build adaptive learners combining statistical learning theory, probabilistic modeling, and large-scale systems optimization. Contribute to the development of large tabular models and structured foundation models. Design architectures integrating relational, symbolic, and neural learning components. Research and implement methods for dataset compression, selection, and representation to improve learning efficiency. Develop cost models and optimization frameworks for large-scale structured learning systems. Collaborate closely with the Granica research group and with systems engineers. Rapidly prototype new algorithms and evaluate them on real enterprise datasets. Publish and contribute to the broader research community shaping the future of structured AI and efficient ML systems.

$160,000 – $250,000 per year (USD)

Bay Area, United States
Maybe global
Onsite

Research Engineer/Scientist - Generative UI, Consumer Devices

New
Top rated
OpenAI
Full-time

As a Research Engineer/Scientist on the Future of Computing Research team, you will train and evaluate state-of-the-art models along the axes that matter most for the vision of future devices. You will push nascent research capabilities past their current barriers and develop them into capabilities that others can build upon. In doing so, you will help define how software functions for decades to come.

$380,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Hybrid

Research Engineer/Scientist - Human Alignment, Consumer Devices

New
Top rated
OpenAI
Full-time

Develop RLHF and post-training methods for multimodal models. Build reward models and preference-learning pipelines for adaptive, personalized model behavior. Design datasets, rubrics, and evaluation frameworks that capture user preferences, contextual appropriateness, and long-term value in realistic tasks. Run experiments on policy improvement using explicit feedback, implicit signals, and model-based grading. Work on long-horizon evaluation problems where model quality depends not just on single responses but on behavior that improves outcomes over time. Collaborate closely with safety researchers to ensure that adaptation and personalization remain aligned, interpretable, and bounded by clear constraints. Prototype and iterate quickly on training recipes, reward formulations, data pipelines, and evaluation suites for product-relevant behaviors. Help define how OpenAI measures success for personalized AI systems including trust, appropriateness, and long-term user benefit.
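Reward models of the kind described above are commonly trained on preference pairs with a Bradley–Terry style objective. A minimal stdlib sketch of that loss, illustrative only and not OpenAI's implementation:

```python
import math

def pairwise_preference_loss(r_chosen, r_rejected):
    """Bradley–Terry preference loss: -log sigmoid(r_chosen - r_rejected),
    averaged over a batch of (chosen, rejected) reward pairs.

    Uses log1p(exp(-d)) == -log(sigmoid(d)) for numerical clarity."""
    losses = [
        math.log1p(math.exp(-(c - r)))
        for c, r in zip(r_chosen, r_rejected)
    ]
    return sum(losses) / len(losses)
```

When the reward model scores the chosen response much higher than the rejected one, the loss approaches zero; when it cannot distinguish them, the loss sits at log 2.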

$380,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Remote

Researcher, Loss of Control

New
Top rated
OpenAI
Full-time

As a Researcher for loss of control mitigations, you will design and implement mitigation components for loss of control risk including prevention, monitoring, detection, containment, and enforcement under the guidance of senior leadership. You will integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams to ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase. You will evaluate technical trade-offs in the loss of control domain and propose pragmatic, testable solutions. Additionally, you will collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios such as deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight. You will also execute rigorous testing and red-teaming workflows to stress-test the mitigation stack against subversive model behaviors like sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception, and iterate based on findings.

$295,000 – $445,000 per year (USD)

San Francisco, United States
Maybe global
Onsite

Software Engineer, Developer Experience

New
Top rated
Intrinsic
Full-time

Lead the research and development of novel deep learning algorithms that enable robots to perform complex, contact-rich manipulation tasks. Explore the intersection of computer vision and robotic control, designing systems that allow robots to perceive and interact with objects in dynamic environments. Create models that integrate visual data to guide physical manipulation, moving beyond simple grasping to sophisticated handling of diverse items. Collaborate with a multidisciplinary team of engineers and researchers to translate cutting-edge concepts into robust capabilities deployable on physical hardware for industrial applications. Research and develop deep learning architectures for visual perception and sensorimotor control in contact-rich scenarios. Design algorithms enabling robots to manipulate complex or deformable objects with high precision. Collaborate with software engineers to optimize and deploy research prototypes onto physical robotic hardware. Evaluate model performance in simulation and real-world environments to ensure robustness and reliability. Identify opportunities to apply state-of-the-art advancements in computer vision and robot learning to practical industrial problems. Mentor junior researchers and contribute to the technical direction of the manipulation research roadmap.

Undisclosed


Munich, Germany
Maybe global
Onsite

Safety Research Internship (Spring/Summer 2026)

New
Top rated
Cohere
Full-time

As a Cohere Research Intern, you will conduct cutting-edge machine learning research, training and evaluating production large language models. You will focus on research projects aimed at making models better understood, safer, more reliable, more inclusive, and more beneficial for the world. You will disseminate your research results through the production of publications, datasets, and code. Additionally, you will contribute to research initiatives that have practical applications in Cohere's product development. The internship involves collaborating with the Modelling Safety team on implementing novel research ideas related to fairness, safety (including for multiple languages, dialects, and cultural contexts), robustness, generalisation, interpretability, safety for agents with complex read/write actions, and safety for codegen. The project details and topic will be designed collaboratively between the intern and the team, with a goal to publish a paper in a top venue and contribute to open science. The internship may be remote or onsite, with no relocation or housing provided.

Undisclosed


Canada
Maybe global
Remote

Want to see more AI Research Scientist jobs?

View all jobs

Access all 4,256 remote & onsite AI jobs.

Join our private AI community to unlock full job access, and connect with founders, hiring managers, and top AI professionals.
(Yes, it’s still free—your best contributions are the price of admission.)

Frequently Asked Questions

Have questions about roles, locations, or requirements for AI Research Scientist jobs?


What does an AI Research Scientist do?

AI Research Scientists conduct research to advance artificial intelligence by developing novel algorithms, techniques, and methodologies. They design experiments, build models, test theories, and analyze results to create new AI capabilities. These researchers implement prototypes using machine learning frameworks, validate systems, and document findings. They frequently publish in academic journals and present at conferences. AI Research Scientists collaborate with cross-functional teams to apply research findings to real-world problems. They also mentor junior researchers, provide technical leadership, and continuously monitor emerging AI trends in specialized areas like deep learning, natural language processing, and computer vision.

What skills are required for AI Research Scientists?

AI Research Scientists need strong theoretical knowledge in mathematics, statistics, and computational methods. Programming proficiency in Python and frameworks like TensorFlow or PyTorch is essential. They must excel at experimental design, hypothesis testing, and data analysis. Critical thinking and problem-solving abilities help navigate complex research challenges. Expertise in specific AI domains such as deep learning, reinforcement learning, or natural language processing is typically required. Communication skills for publishing papers and presenting findings are crucial. Collaboration abilities support interdisciplinary work with engineers, domain experts, and stakeholders. Ethical research practices and knowledge of research methodologies round out the necessary skillset.

What qualifications are needed for AI Research Scientists?

Most AI Research Scientist positions require a PhD in artificial intelligence, machine learning, computer science, or related fields. Employers like Meta explicitly specify this educational requirement in job postings. Candidates need demonstrated expertise in specific AI subfields such as machine learning, deep learning, or specialized areas like large language models. A strong publication record in peer-reviewed journals or at major AI conferences (NeurIPS, ICML, ICLR) is typically expected. Prior research experience developing novel algorithms and conducting experiments is essential. Some positions may accept exceptional candidates with Master's degrees who have substantial research contributions or publications in relevant AI domains.

What is the salary range for AI Research Scientists?

Salaries for AI Research Scientists vary based on several factors including education level, research specialty, publication record, and prior contributions to the field. Geographic location significantly impacts compensation, with positions in tech hubs like San Francisco or New York typically paying more. Employer type affects pay scales: research positions at top tech companies often offer higher compensation than academic or nonprofit research labs. Experience level creates substantial variation, with senior scientists commanding significantly higher salaries. Specialized expertise in high-demand areas like large language models or reinforcement learning can command premium compensation. Many roles include additional compensation through research bonuses, stock options, or conference funding.

How long does it take to get hired as an AI Research Scientist?

The hiring process for AI Research Scientists typically takes 2-4 months from application to offer. The timeline includes initial screening, technical interviews assessing research expertise, and evaluation of published work. Many employers require candidates to present previous research or complete a research proposal task. PhD candidates may face longer timelines as companies evaluate their dissertation research and publication potential. The process often includes multiple rounds of interviews with research teams and leadership. Specialized positions focusing on cutting-edge areas like foundation models or AI safety may have extended evaluation periods as employers carefully assess candidates' expertise in these emerging fields.

Are AI Research Scientists in demand?

AI Research Scientists are currently in high demand, with major organizations like Meta, OpenAI, and leading research institutions actively recruiting. Demand is particularly strong in specialized areas such as large language models, generative AI, reinforcement learning, and AI safety. Research institutions, universities, tech firms, and even freelance opportunities are available across subfields like NLP, robotics, and computer vision. The push to advance AI capabilities drives consistent demand for researchers who can develop novel algorithms and techniques. Competition remains fierce for top positions, with employers seeking candidates who have demonstrated innovation through published research, conference presentations, and practical implementations of theoretical work.

What is the difference between AI Research Scientist and Data Scientist?

AI Research Scientists focus on creating new AI algorithms and advancing theoretical foundations, while Data Scientists primarily analyze existing data to extract insights and solve business problems. Research Scientists typically need PhDs and publish academic papers, whereas Data Scientists often work with Master's degrees and produce business reports. The research role requires deeper mathematical understanding and develops novel techniques, while Data Scientists apply established methods to specific datasets. AI Research Scientists work on longer-term theoretical projects that may take months or years, whereas Data Scientists typically deliver results on shorter timelines with immediate business applications. The research position emphasizes innovation, while data roles prioritize practical implementation.