Activation functions, element-wise ops
Difficulty: Easy · Topic: Fundamentals

Implement the ReLU (Rectified Linear Unit) activation function from scratch.
• Do NOT use torch.relu, F.relu, torch.clamp, or any built-in activation
• Must support autograd (gradients should flow back)
Implement the function using only basic PyTorch operations.
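One way to satisfy the constraints is a sketch like the following: build the elementwise maximum with zero from a comparison mask and a multiply, both of which autograd handles natively. This is one possible approach, not necessarily the reference solution.

```python
import torch

def relu(x: torch.Tensor) -> torch.Tensor:
    # (x > 0) is a boolean mask; multiplying promotes it to x's dtype,
    # zeroing negative entries while leaving positives untouched.
    # Gradients flow through the multiply, so positive entries get
    # gradient 1 and non-positive entries get gradient 0.
    return x * (x > 0)

# Quick check that values and gradients behave as expected.
x = torch.tensor([-1.0, 0.0, 2.0], requires_grad=True)
y = relu(x)
y.sum().backward()
print(y)       # values: negatives and zero clamped to 0, positives kept
print(x.grad)  # gradient: 0 where x <= 0, 1 where x > 0
```

An equivalent alternative is (x + x.abs()) / 2, but the mask-and-multiply form makes the gradient behavior easier to see.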
For interactive practice with auto-grading, run TorchCode locally: pip install torch-judge, then call check("relu").