We’re building an autonomous engine for scientific discovery—an AI system that designs and plans its own experiments and learns how to learn. This role is for exceptional people who want to invent better ways to optimize products and generalize models across complex search spaces using reinforcement learning, Bayesian methods, and active experimentation.
You’ll be part of the core research team. No side projects. No throwaway intern work. You’ll be expected to think deeply, write clean code, and help us push the theoretical and practical boundaries of intelligent discovery systems.
What You’ll Work On
- Designing and implementing the core decision-making framework.
- Running experiments with internal ML models.
- Evaluating performance, generalization, and uncertainty of models.
- Coding mock environments for policy learning.
Qualifications
- Strong mathematical foundation, with emphasis on probability theory, information theory, and optimization.
- Understanding of Bayesian reasoning and decision theory (e.g. expected utility, belief-state modeling, POMDPs).
- Experience with learning under uncertainty and active-learning methods (e.g. Bayesian optimization).
- Experience with planning and control algorithms (e.g. Monte Carlo Tree Search, multi-armed bandits such as UCB and Thompson Sampling, meta-RL).
- Knowledge of modeling and generalization techniques, including surrogate modeling, inductive biases, and generalization in high-dimensional search spaces.
- Solid practical ML engineering and software engineering fundamentals (Python, PyTorch/JAX/TensorFlow, NumPy/SciPy).
- Ability to formalize vague problems mathematically.
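To give a flavor of the bandit methods listed above, here is a minimal, self-contained sketch of Thompson Sampling on a toy Bernoulli bandit. The function name and arm probabilities are illustrative only, not part of our stack:

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Toy Bernoulli bandit solved with Thompson Sampling (Beta-Bernoulli model)."""
    rng = random.Random(seed)
    k = len(true_probs)
    # Beta(1, 1) uniform prior on each arm's win rate.
    successes = [1] * k
    failures = [1] * k
    for _ in range(n_rounds):
        # Draw a plausible win rate for each arm from its posterior...
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        # ...and play the arm whose draw is best (exploration via posterior sampling).
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    # Total pulls per arm (subtract the two prior pseudo-counts).
    return [successes[i] + failures[i] - 2 for i in range(k)]

pulls = thompson_sampling([0.2, 0.5, 0.8])
```

After enough rounds, the highest-reward arm should dominate the pull counts; candidates should be comfortable both deriving and implementing this kind of algorithm.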
Bonus
- Experience with BoTorch, GPyTorch, Pyro, Stan, or JAX.
- Experience with deep learning methods (e.g. VAEs, transformers).
- Prior research or open-source contributions in probabilistic inference, RL, or decision-making.
- Strong performance in math/programming competitions or coursework (e.g. Putnam, IOI, ICPC).