Code Explanation:
Import PyTorch Library
import torch
import torch.nn as nn
Explanation:
torch is the main PyTorch package — it provides tensors, math operations, and autograd.
torch.nn (imported as nn) is the Neural Network module — it includes layers, activations, loss functions, etc.
We import it separately as nn so that layers can be written concisely, e.g. nn.Linear instead of torch.nn.Linear.
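A quick sketch of what each import provides (illustrative only; the tensor values are arbitrary):
import torch
import torch.nn as nn

# torch: tensors, math operations, and autograd
a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
b = (a ** 2).sum()        # a math operation on a tensor
b.backward()              # autograd computes gradients automatically
print(a.grad)             # tensor([2., 4., 6.])

# torch.nn: layers, activations, and loss functions
layer = nn.Linear(3, 1)   # a fully connected layer
loss_fn = nn.MSELoss()    # a mean squared error loss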
Define a Simple Neural Network Model
model = nn.Sequential(nn.Linear(4, 2), nn.ReLU())
Explanation:
nn.Sequential() creates a container that stacks layers in order.
Inside, two components are defined:
nn.Linear(4, 2) — a fully connected (dense) layer with:
4 input features
2 output features
It performs the linear transformation y = xWᵀ + b, where W is a 2×4 weight matrix and b is a length-2 bias vector (reproduced explicitly in the sketch after this list).
nn.ReLU() — a Rectified Linear Unit activation function, which replaces all negative values with zero.
So this model performs:
Linear mapping → Activation (ReLU).
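To make the two steps concrete, the forward pass can be reproduced by hand from the layer's own weight and bias (a minimal sketch; the parameters are randomly initialized, so exact values differ between runs):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.ReLU())
linear = model[0]                      # the nn.Linear(4, 2) layer
print(linear.weight.shape)             # torch.Size([2, 4])
print(linear.bias.shape)               # torch.Size([2])

x = torch.ones(1, 4)
manual = torch.relu(x @ linear.weight.T + linear.bias)  # y = x Wᵀ + b, then ReLU
print(torch.allclose(manual, model(x)))                 # True: same result as the container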
Create Input Tensor
x = torch.ones(1, 4)
Explanation:
torch.ones(1, 4) creates a tensor of shape (1, 4) filled with 1s.
This represents one sample (batch size = 1) with 4 input features.
Example of the tensor:
tensor([[1., 1., 1., 1.]])
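The batch size is flexible: any tensor whose last dimension is 4 will work. A small sketch with a hypothetical batch of 3 samples:
batch = torch.ones(3, 4)      # 3 samples, 4 features each
print(model(batch).shape)     # torch.Size([3, 2]), one 2-value output per sample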
Forward Pass Through the Model
print(model(x).shape)
Explanation:
model(x) performs a forward pass:
Input x (size [1, 4]) is passed through nn.Linear(4, 2) → output size [1, 2].
Then nn.ReLU() is applied → keeps the same shape [1, 2], but clamps negatives to zero.
Finally, .shape returns the size of the output tensor, which print() displays.
Output
torch.Size([1, 2])
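To inspect the values rather than just the shape, print the output tensor itself (a minimal sketch; the exact numbers depend on the random initialization of nn.Linear):
y = model(x)
print(y)                # e.g. tensor([[0.0000, 0.3721]], grad_fn=<ReluBackward0>)
print(y.shape)          # torch.Size([1, 2])
print((y >= 0).all())   # tensor(True): ReLU guarantees no negative values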