🧠 Building Your First Encoding Model

Learn the fundamentals of neural encoding models with LITCoder

1. Choose Your Parameters

Start with these basic settings for your first model (they are collected into a single configuration sketch below):

Dataset: Narratives (easiest to start with)
Model: GPT-2 small (fast and reliable)
Layer: 9 (a good middle layer)
Downsampling: Lanczos (recommended)
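
For reference, here is a minimal sketch that gathers these starter settings in one place. The keys are illustrative: they simply mirror the parameters above and the arguments used in step 2, and LITCoder does not necessarily expect a single dictionary in this exact format.

starter_config = {
    "dataset_type": "narratives",   # Dataset: Narratives
    "model_name": "gpt2-small",     # Model: GPT-2 small
    "layer_idx": 9,                 # Layer: 9
    "downsampling": "lanczos",      # Downsampling: Lanczos
}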

2. Basic Code Example

Here's a simple end-to-end example: build an assembly, extract language-model features, downsample them to the fMRI timing, add FIR delays, and fit the encoding model with nested cross-validation:

# Assumes AssemblyGenerator, LanguageModelFeatureExtractor, Downsampler, FIR,
# and fit_nested_cv have been imported from your LITCoder installation.

# Create an assembly (stimulus and brain data, aligned to the scanner TR)
assembly = AssemblyGenerator.generate_assembly(
    dataset_type="narratives",
    data_dir="data/narratives/neural_data",
    subject="sub-256",
    tr=1.5,
    lookback=256,
    context_type="fullcontext",
)

# Set up the language-model feature extractor (GPT-2 small, layer 9)
extractor = LanguageModelFeatureExtractor({
    "model_name": "gpt2-small",
    "layer_idx": 9,
})
# Run the extractor over the stimulus text in the assembly to obtain
# model_features (a time x features matrix) before the next step.

# Downsample the word-rate features to the brain's TR timing
downsampler = Downsampler()
features = downsampler.downsample(
    data=model_features,
    method="lanczos",
)

# Apply FIR delays of 1-8 TRs to account for the hemodynamic lag
delayed_features = FIR.make_delayed(features, delays=range(1, 9))

# Fit the encoding model with nested cross-validation; brain_data is the
# voxel response matrix (time x voxels) taken from the assembly.
metrics = fit_nested_cv(
    features=delayed_features,
    targets=brain_data,
    n_outer_folds=5,
)
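
To build intuition for the FIR-delay step, here is a small, self-contained NumPy sketch of the idea: stack time-shifted copies of the feature matrix so the regression can learn how each feature's influence spreads over the following TRs. The make_delayed_sketch helper below is purely illustrative and is not LITCoder's FIR.make_delayed; padding and ordering details may differ.

import numpy as np

def make_delayed_sketch(features, delays):
    """Stack time-shifted copies of a (time x features) matrix."""
    n_time, n_feat = features.shape
    shifted_copies = []
    for d in delays:
        shifted = np.zeros((n_time, n_feat))
        if d > 0:
            shifted[d:] = features[: n_time - d]   # shift forward in time by d TRs
        elif d < 0:
            shifted[: n_time + d] = features[-d:]  # negative delays shift backward
        else:
            shifted = features.copy()
        shifted_copies.append(shifted)
    return np.hstack(shifted_copies)

# 10 TRs x 3 features with delays of 1-8 TRs -> a 10 x 24 design matrix
X = np.random.randn(10, 3)
X_delayed = make_delayed_sketch(X, range(1, 9))
print(X_delayed.shape)  # (10, 24)

With a TR of 1.5 s, delays of 1-8 TRs span roughly 1.5-12 s after each word, which covers the typical time course of the hemodynamic response.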

3. What Happens Next

After running your model, you'll get evaluation metrics from the nested cross-validation, typically per-voxel scores describing how well the delayed language-model features predict the measured brain responses on held-out data.
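
If you want to compute a score yourself, the standard encoding-model metric is the per-voxel Pearson correlation between predicted and measured responses on held-out data. The helper below is a generic sketch of that computation, not LITCoder's own scoring code; predicted and actual are assumed to be (time x voxels) arrays.

import numpy as np

def voxelwise_correlation(predicted, actual):
    """Pearson correlation per voxel for (time x voxels) arrays."""
    predicted = predicted - predicted.mean(axis=0)
    actual = actual - actual.mean(axis=0)
    numerator = (predicted * actual).sum(axis=0)
    denominator = np.sqrt((predicted ** 2).sum(axis=0) * (actual ** 2).sum(axis=0))
    return numerator / denominator

# Example with random data: 100 TRs, 500 voxels -> one r value per voxel
r = voxelwise_correlation(np.random.randn(100, 500), np.random.randn(100, 500))
print(r.shape)  # (500,)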

4. Next Steps

Once you've built your first model, experiment with the choices from step 1: try other datasets, models, layers, and downsampling methods, and compare how well each variant predicts the brain data. A minimal layer-sweep sketch follows.
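
One common follow-up is comparing layers. The loop below is a hypothetical sketch: it rebuilds the feature extractor from step 2 with a different layer_idx each time and assumes you rerun the rest of the pipeline (downsampling, FIR delays, nested cross-validation) unchanged.

# Hypothetical layer comparison; only layer_idx changes relative to step 2.
for layer_idx in (3, 6, 9, 12):
    extractor = LanguageModelFeatureExtractor({
        "model_name": "gpt2-small",
        "layer_idx": layer_idx,
    })
    # ...then downsample, apply FIR delays, and call fit_nested_cv as in
    # step 2, keeping the resulting metrics for each layer to compare.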