Research Scientist: Multi-Modal Modeling
The modeling team at Enigma is seeking ML Research Scientists to pioneer the development of foundation models that bridge artificial and biological intelligence. You will lead the development of large-scale transformer-based architectures that integrate diverse neural data streams—from visual stimuli to high-dimensional neuronal recordings and behavioral measurements. This position offers a unique opportunity to push the boundaries of self-supervised learning and multi-task objectives, creating models that not only predict neural responses but also reveal fundamental principles of biological computation. The ideal candidate will have extensive experience developing multimodal foundation models and an interest in pioneering the application of these techniques to decoding the neural basis of intelligence.
The Enigma Project (https://enigmaproject.ai) is a Stanford-based non-profit research organization, launched in August 2024 with $30M in funding. Our core mission is to leverage deep learning to crack the neural code. We own the full neuroAI pipeline, from neurotechnology development to neural data collection to modeling, theory, and analysis.
Role & Responsibilities:
- Design novel transformer-based architectures for integrating continuous visual, neural, and behavioral time series data
- Develop self-supervised learning approaches and multi-task objectives for training foundation models of the brain
- Pioneer new methods for modeling the relationship between sensory inputs and neural activity across the visual hierarchy
- Lead research in scaling model architectures to process and integrate massive neurophysiological datasets
- Guide technical strategy for model evaluation, validation, and interpretation
- Advance the field through publications and presentations at top machine learning and computational neuroscience venues
Key Qualifications:
- Ph.D. in Computer Science, Machine Learning, Computational Neuroscience, or a related field, plus 2+ years post-Ph.D. research experience
- 2+ years of hands-on experience training, fine-tuning, and using multi-modal deep learning models
- Strong publication record in top-tier machine learning conferences and journals, particularly in areas related to multi-modal modeling
- Strong programming skills in Python and deep learning frameworks (e.g., PyTorch, JAX)
- Demonstrated ability to lead research projects and mentor others
- Ability to work effectively in a collaborative, multidisciplinary environment
Preferred Qualifications:
- Background in theoretical neuroscience or computational neuroscience
- Experience processing and analyzing large-scale, high-dimensional data from diverse sources
- Experience with cloud computing platforms (e.g., AWS, GCP, Azure) and their machine learning services
- Familiarity with big data and MLOps platforms (e.g., MLflow, Weights & Biases)
- Familiarity with training, fine-tuning, and quantization of LLMs or multimodal models using common techniques and frameworks (LoRA, PEFT, AWQ, GPTQ, or similar)
- Experience with large-scale distributed model training frameworks (e.g., Ray, DeepSpeed, HF Accelerate, FSDP)
What We Offer:
- A rich environment in which to pursue fundamental research questions in AI and neuroscience
- A dynamic team of engineers and scientists dedicated to a single mission, rooted in academia but inspired by industry-style science
- Access to unique datasets spanning artificial and biological neural networks
- State-of-the-art computing infrastructure
- Competitive salary and benefits package
- Collaborative environment at the intersection of multiple disciplines
- Location at Stanford University with access to its world-class research community
- Strong mentorship and support for career development
Application: Please send your CV and one-page interest statement to: recruiting@enigmaproject.ai