
ReLU

Activation functions, element-wise ops

Easy Fundamentals

Problem Description

Implement the ReLU (Rectified Linear Unit) activation function from scratch.

$$\text{ReLU}(x) = \max(0, x)$$

Signature

def relu(x: torch.Tensor) -> torch.Tensor: ...

Rules

• Do NOT use torch.relu, F.relu, torch.clamp, or any built-in activation

• Must support autograd (gradients should flow back)

Example

Input:  tensor([-2., -1., 0., 1., 2.])
Output: tensor([ 0., 0., 0., 1., 2.])

Template

Implement the function below. Use only basic PyTorch operations.

# ✏️ YOUR IMPLEMENTATION HERE
def relu(x: torch.Tensor) -> torch.Tensor:
    pass  # Replace this

Test Your Implementation

Use this code to debug before submitting.

# 🧪 Test your implementation (feel free to add more debug prints)
x = torch.tensor([-2., -1., 0., 1., 2.])
print("Input: ", x)
print("Output:", relu(x))
print("Shape: ", relu(x).shape)

Reference Solution

Try solving it yourself before reading the solution below.

# ✅ SOLUTION
def relu(x: torch.Tensor) -> torch.Tensor:
    # Multiply x by a 0/1 mask. The comparison (x > 0) is not
    # differentiable, so the mask acts as a constant and gradients
    # flow through the multiplication wherever x > 0.
    return x * (x > 0).float()
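To confirm the autograd requirement is satisfied, you can run a quick gradient check on the solution above (the tensor values here are illustrative):

```python
import torch

# Reference solution: multiply by a 0/1 mask.
def relu(x: torch.Tensor) -> torch.Tensor:
    return x * (x > 0).float()

# The mask is treated as a constant by autograd, so
# d(relu)/dx is 1 where x > 0 and 0 elsewhere.
x = torch.tensor([-2., -1., 0., 1., 2.], requires_grad=True)
relu(x).sum().backward()
print(x.grad)  # tensor([0., 0., 0., 1., 1.])
```

Note that this implementation assigns gradient 0 at x = 0, matching the subgradient convention used by torch.relu. An equivalent mask-free variant under the same rules would be torch.where(x > 0, x, torch.zeros_like(x)).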

Tips

Run Locally

For interactive practice with auto-grading, run TorchCode locally:
pip install torch-judge, then call check("relu")

Key Concepts

Activation functions, element-wise ops
