Learn how to build and compare neural encoding models with step-by-step guides
Get up and running with LITCoder in 5 minutes. Set up your environment, organize data, and verify everything works.
Assemblies organize brain data, stimuli, timing, and metadata for training encoding models.
A fast baseline that counts words per TR and aligns with brain data.
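The idea behind a word-rate baseline is simple enough to sketch in a few lines. The function below is illustrative, not LITCoder's actual API: it bins word onset times into TR-length windows and returns the counts as a one-column feature matrix.

```python
import numpy as np

def word_rate_features(word_onsets, tr, n_trs):
    """Count stimulus words falling in each TR window.

    word_onsets: word onset times in seconds
    tr:          repetition time in seconds
    n_trs:       number of volumes in the run
    """
    counts = np.zeros(n_trs)
    for onset in word_onsets:
        idx = int(onset // tr)
        if idx < n_trs:
            counts[idx] += 1
    return counts[:, None]  # shape (n_trs, 1) feature matrix

# Three words land in the first TR, one in the second (TR = 2 s)
X = word_rate_features([0.1, 0.5, 1.9, 2.2], tr=2.0, n_trs=3)
# → [[3.], [1.], [0.]]
```

Despite its simplicity, a word-rate regressor is a useful sanity check: if a richer feature set cannot beat it, something is likely wrong in the pipeline.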
Use pre-trained word vectors (Word2Vec, GloVe) as features for encoding models.
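A common way to turn static word vectors into TR-aligned features is to average the vectors of all words spoken within each TR. The sketch below uses a toy two-dimensional embedding table in place of real Word2Vec/GloVe vectors loaded from disk; the function name and shapes are illustrative.

```python
import numpy as np

# Toy embedding table standing in for pre-trained Word2Vec/GloVe vectors
EMB = {
    "the": np.array([1.0, 0.0]),
    "dog": np.array([0.0, 1.0]),
    "ran": np.array([1.0, 1.0]),
}

def tr_embedding_features(words_per_tr, dim=2):
    """Average the vectors of all words falling within each TR.
    TRs with no in-vocabulary words get a zero vector."""
    rows = []
    for words in words_per_tr:
        vecs = [EMB[w] for w in words if w in EMB]
        rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.stack(rows)

X = tr_embedding_features([["the", "dog"], ["ran"], []])
# → [[0.5, 0.5], [1.0, 1.0], [0.0, 0.0]]
```

Zero-filling silent TRs and averaging within TRs are only one choice; summing, or weighting words by duration, are common alternatives.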
Extract contextual transformer features (e.g., GPT-2) with caching and layer control.
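Transformer forward passes are expensive, so caching extracted features pays off quickly. LITCoder's actual caching interface isn't shown here; a generic sketch of the idea, keying cached features on the stimulus text and the requested layer, might look like this (all names are illustrative):

```python
import hashlib
import pickle
import tempfile
from pathlib import Path

def cached_features(text, layer, extract_fn, cache_dir):
    """Return features for (text, layer), computing them at most once.

    extract_fn(text, layer) performs the expensive forward pass
    (e.g. pulling one hidden layer out of GPT-2).
    """
    cache_dir.mkdir(exist_ok=True)
    key = hashlib.sha256(f"{layer}:{text}".encode()).hexdigest()
    path = cache_dir / f"{key}.pkl"
    if path.exists():
        return pickle.loads(path.read_bytes())
    feats = extract_fn(text, layer)
    path.write_bytes(pickle.dumps(feats))
    return feats

# Demo with a stand-in extractor that records how often it is called
cache = Path(tempfile.mkdtemp())
calls = []

def fake_extract(text, layer):
    calls.append(text)
    return [len(text), layer]

a = cached_features("hello world", 6, fake_extract, cache)
b = cached_features("hello world", 6, fake_extract, cache)  # served from cache
```

Including the layer index in the cache key is what makes per-layer analyses cheap: each layer's features are computed once and reused across models.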
Extract features from audio using Whisper or similar speech models.
Learn the fundamentals of neural encoding models. Understand the pipeline, choose parameters, and build your first model.
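At its core, an encoding model fits a regularized linear map from stimulus features to voxel responses. A minimal closed-form ridge-regression version with NumPy (a conceptual sketch on synthetic data, not LITCoder's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 TRs, 5 stimulus features, 3 voxels
X = rng.standard_normal((100, 5))
true_W = rng.standard_normal((5, 3))
Y = X @ true_W + 0.1 * rng.standard_normal((100, 3))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge: W = (X'X + alpha*I)^-1 X'Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

W = ridge_fit(X, Y)
Y_hat = X @ W

# Per-voxel Pearson correlation between predicted and observed responses,
# the standard evaluation metric for encoding models
r = [np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(Y.shape[1])]
```

The regularization strength `alpha` is the key parameter to tune; with thousands of correlated features (as with transformer embeddings), unregularized regression overfits badly.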
Compare different models and evaluate performance. Learn cross-validation, metrics, and best practices.
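One detail that matters when cross-validating fMRI time series: folds should be contiguous blocks of time, not shuffled samples, because temporal autocorrelation would otherwise leak signal between train and test sets. A sketch of such a splitter (illustrative, not LITCoder's API):

```python
import numpy as np

def contiguous_folds(n_samples, n_folds=5):
    """Split time points into contiguous blocks rather than shuffling,
    avoiding leakage of temporally autocorrelated signal across folds."""
    edges = np.linspace(0, n_samples, n_folds + 1, dtype=int)
    for i in range(n_folds):
        test = np.arange(edges[i], edges[i + 1])
        train = np.concatenate([np.arange(0, edges[i]),
                                np.arange(edges[i + 1], n_samples)])
        yield train, test

folds = list(contiguous_folds(100, n_folds=5))
# 5 folds; each test block covers 20 consecutive TRs
```

When comparing feature sets, hold the splits fixed across models so that performance differences reflect the features, not the folds.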
Explore FIR kernels, downsampling methods, and advanced modeling techniques for better performance.
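The FIR approach handles hemodynamic delay by concatenating time-lagged copies of the feature matrix, letting the regression learn a separate weight per lag instead of assuming a fixed hemodynamic response shape. A minimal sketch (the delay set is an assumption for illustration):

```python
import numpy as np

def fir_design(X, delays=(1, 2, 3, 4)):
    """Stack copies of X shifted forward by each delay (in TRs),
    so the model learns one weight per feature per lag."""
    n_trs, n_feat = X.shape
    cols = []
    for d in delays:
        shifted = np.zeros_like(X)
        shifted[d:] = X[:n_trs - d]
        cols.append(shifted)
    return np.hstack(cols)

X = np.arange(6, dtype=float)[:, None]  # one feature over 6 TRs
X_fir = fir_design(X, delays=(1, 2))
# column 0 is X delayed by 1 TR, column 1 by 2 TRs
```

Note that FIR multiplies the feature count by the number of delays, which is another reason ridge regularization is standard in this setting.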
Integrate Weights & Biases for experiment management, visualization, and collaboration.
Add your own brain imaging datasets to LITCoder. Learn the data format requirements and integration process.
New to LITCoder? Start with our Quick Start guide to get your environment set up, then move on to building your first encoding model.