Learn the fundamentals of neural encoding models with LITCoder
Start with these basic settings for your first model:
- **Dataset:** Narratives (easiest to start with)
- **Model:** GPT-2 small (fast and reliable)
- **Layer:** 9 (a good middle layer)
- **Downsampling:** Lanczos (recommended)
Here's a simple example of how to build an encoding model:

```python
# Create an assembly (stimulus + neural data) for your dataset
assembly = AssemblyGenerator.generate_assembly(
    dataset_type="narratives",
    data_dir="data/narratives/neural_data",
    subject="sub-256",
    tr=1.5,
    lookback=256,
    context_type="fullcontext",
)

# Extract features from the language model
extractor = LanguageModelFeatureExtractor({
    "model_name": "gpt2-small",
    "layer_idx": 9,
})

# Downsample the word-level features to the fMRI TR timing.
# `model_features` stands for the activations produced by the
# extractor above (the extraction call is omitted here).
downsampler = Downsampler()
features = downsampler.downsample(
    data=model_features,
    method="lanczos",
)

# Apply FIR delays to account for the hemodynamic lag
delayed_features = FIR.make_delayed(features, delays=range(1, 9))

# Train and evaluate the model with nested cross-validation.
# `brain_data` stands for the voxel responses held in the assembly.
metrics = fit_nested_cv(
    features=delayed_features,
    targets=brain_data,
    n_outer_folds=5,
)
```
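To build intuition for the `method="lanczos"` step, here is a minimal NumPy sketch of Lanczos downsampling: word-rate features are resampled onto the TR grid with a Lanczos-windowed sinc kernel. This is an illustrative simplification, not LITCoder's actual implementation; the function name, the row normalization, and the default `cutoff` are assumptions.

```python
import numpy as np

def lanczos_resample(features, feature_times, tr_times, window=3, cutoff=0.25):
    """Resample `features` (n_samples x n_features) sampled at `feature_times`
    onto `tr_times` with a Lanczos-windowed sinc kernel (low-pass at `cutoff` Hz)."""
    # Lag of each TR relative to each feature sample, scaled by the cutoff
    t = cutoff * (tr_times[:, None] - feature_times[None, :])
    # Lanczos kernel: sinc windowed by a wider sinc, zero outside +/- `window` lobes
    kernel = np.sinc(t) * np.sinc(t / window)
    kernel[np.abs(t) > window] = 0.0
    # Normalize rows so the interpolation preserves constants
    # (a simplification; some implementations skip this step)
    kernel /= kernel.sum(axis=1, keepdims=True)
    return kernel @ features
```

In effect each TR value is a weighted average of nearby word-level features, with the window/cutoff controlling how much temporal smoothing is applied.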
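The `FIR.make_delayed` step builds a finite-impulse-response design matrix. LITCoder's own implementation isn't shown here, but the standard technique looks like this sketch, assuming a 2-D (TRs x features) NumPy array:

```python
import numpy as np

def make_delayed(features, delays):
    """Build an FIR design matrix: horizontally stack copies of `features`
    shifted down by each delay (in TRs), zero-padding the start.

    A delay of d means that column block holds the stimulus as it appeared
    d TRs earlier, letting the regression fit a per-voxel response shape.
    """
    n_trs, n_feats = features.shape
    delays = list(delays)
    delayed = np.zeros((n_trs, n_feats * len(delays)))
    for i, d in enumerate(delays):
        delayed[d:, i * n_feats : (i + 1) * n_feats] = features[: n_trs - d]
    return delayed
```

With `delays=range(1, 9)` as in the example, the feature dimension grows eightfold and the model can weight each lag separately instead of assuming a fixed hemodynamic response.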
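Finally, `fit_nested_cv` performs nested cross-validation. As a sketch of what that typically means for encoding models (not LITCoder's actual code; the scikit-learn ridge model, the alpha grid, and the Pearson-r scoring are assumptions):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def fit_nested_cv(features, targets, n_outer_folds=5,
                  alphas=np.logspace(0, 4, 10)):
    """Nested CV: an inner loop (RidgeCV) selects the regularization
    strength, an outer loop scores held-out TRs with per-voxel Pearson r."""
    # KFold without shuffling keeps folds as contiguous blocks of TRs,
    # which matters for time-series data
    outer = KFold(n_splits=n_outer_folds)
    fold_scores = []
    for train_idx, test_idx in outer.split(features):
        model = RidgeCV(alphas=alphas)  # inner model selection over alphas
        model.fit(features[train_idx], targets[train_idx])
        pred = model.predict(features[test_idx])
        held_out = targets[test_idx]
        # Pearson correlation for each target column (voxel)
        r = [np.corrcoef(pred[:, v], held_out[:, v])[0, 1]
             for v in range(held_out.shape[1])]
        fold_scores.append(r)
    return np.mean(fold_scores, axis=0)  # mean r per voxel across folds
```

The key point is that the test TRs in each outer fold never influence either the weights or the choice of regularization strength, so the returned correlations are unbiased estimates of prediction performance.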
After running your model, `fit_nested_cv` returns the cross-validated evaluation metrics in `metrics`, summarizing how well the delayed language-model features predict the held-out brain responses.
Once you've built your first model, try varying the dataset, language model, layer, and downsampling method from the basic settings above to see how each choice affects prediction performance.