# Module 12: So you want to understand neural networks
This lesson adapts `4_numpy_neural_networks.ipynb`, a compact notebook that uses only NumPy to explain the structure of a small neural network before learners move on to richer deep-learning tools.
## What the notebook contributes
The notebook uses a deliberately simple toy problem and a tiny network diagram with input, hidden, and output nodes. Its purpose is not to produce a serious model. Its purpose is to make the idea of network structure less intimidating.
## Best way to integrate it into the site
- Explain the role of inputs, hidden units, and outputs.
- Show how weights connect nodes conceptually.
- Use NumPy arrays as the first representation of parameters.
- Connect this simple example back to later tools like Cellpose and BioImage Model Zoo.
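The third point above — NumPy arrays as the first representation of parameters — can be sketched briefly. The key idea is that the *shape* of a weight array encodes which layers it connects. The layer sizes and random initialization below are illustrative, not taken from the notebook:

```python
import numpy as np

# Hypothetical layer sizes for illustration: 2 inputs, 2 hidden units, 1 output.
n_inputs, n_hidden, n_outputs = 2, 2, 1

rng = np.random.default_rng(0)
W1 = rng.normal(size=(n_inputs, n_hidden))   # connects inputs -> hidden
W2 = rng.normal(size=(n_hidden, n_outputs))  # connects hidden -> output

print(W1.shape)  # one weight per (input, hidden) pair
print(W2.shape)  # one weight per (hidden, output) pair
```

Reading the shapes is often enough to reconstruct the network diagram: a `(2, 2)` matrix means two nodes fully connected to two nodes.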
Keeping this as a concept-first page is helpful because it gives learners a way to understand deep learning without needing to train a full model immediately.
## Representative code examples
```python
import numpy as np

inputs = np.array([8, 32])  # CPU cores, RAM

weights_input_to_hidden = np.array(
    [
        [0.1, 0.4],
        [0.3, 0.2],
    ]
)
weights_hidden_to_output = np.array([0.5, 0.7])

hidden = inputs @ weights_input_to_hidden
output = hidden @ weights_hidden_to_output

print("Hidden layer:", hidden)
print("Output:", output)
```
The exact numbers do not matter here; the point is that a neural network's forward pass is nothing more than arrays and a few matrix operations.
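To confirm that `@` is just weighted sums, the same forward pass can be written out element by element. Each hidden unit is a weighted sum over the inputs, and the output is a weighted sum over the hidden units:

```python
import numpy as np

inputs = np.array([8, 32])  # CPU cores, RAM
weights_input_to_hidden = np.array([[0.1, 0.4],
                                    [0.3, 0.2]])
weights_hidden_to_output = np.array([0.5, 0.7])

# Each hidden unit is a weighted sum of the inputs.
hidden_0 = 8 * 0.1 + 32 * 0.3  # 10.4
hidden_1 = 8 * 0.4 + 32 * 0.2  # 9.6

# The output is a weighted sum of the hidden units.
output = hidden_0 * 0.5 + hidden_1 * 0.7  # 11.92

# The matrix form computes exactly the same numbers.
hidden = inputs @ weights_input_to_hidden
assert np.allclose(hidden, [hidden_0, hidden_1])
assert np.isclose(hidden @ weights_hidden_to_output, output)
print(output)
```

Seeing the two forms agree is a useful exercise on its own: the matrix notation is a bookkeeping device, not new mathematics.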
## Why this belongs before pretrained models
Researchers often use deep-learning tools long before they fully understand what a neural network is doing. This page gives just enough conceptual grounding to make the later pretrained-model workflows feel less like a black box.
## Suggested website exercises
- Label the parts of a small neural-network diagram.
- Represent a tiny set of weights as a NumPy array.
- Write in plain language what each layer is supposed to do.
- Explain why this toy example is useful even though it is not a real bioimage model.
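As a starting point for the second exercise, here is a minimal sketch; the row/column convention and the input labels are assumptions for illustration, not requirements from the notebook:

```python
import numpy as np

# Assumed convention: rows index inputs, columns index hidden units.
weights = np.array([
    [0.1, 0.4],  # weights leaving the first input (e.g. CPU cores)
    [0.3, 0.2],  # weights leaving the second input (e.g. RAM)
])

print(weights.shape)  # (2, 2): two inputs connected to two hidden units
```

Learners can then check their understanding by predicting the shape before printing it.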