# Visual introduction to PyTorch

*Intro to PyTorch: an easy-to-follow, visual introduction. 10th February, 2026*

Table of Contents:

- what is PyTorch
- tensor basics
- autograd for automatic differentiation
- autograd in practice
- building a simple neural network

## what is PyTorch?

PyTorch is currently one of the most popular deep learning frameworks. It is an open-source library built upon the Torch library (which is no longer in active development), and it was developed by Meta AI (previously Facebook AI). It is now part of the Linux Foundation.

## tensor basics

Machine Learning (ML) is all about numbers, and a tensor is a specialised container for those numbers. You might know tensors from maths or physics, but in machine learning, a tensor is simply PyTorch's data type for storing numbers. Think of it as a more powerful version of a list or array. Tensors hold your training data and the weights your model learns (weights are numbers that determine how important each input is to the final decision). What makes tensors special is that they come packed with useful functions.

When you create a new tensor, you need to fill it with starting values. PyTorch offers many initialisation functions: `torch.rand()`, `torch.randn()`, `torch.ones()` – the list goes on. But what's the difference between them? If they give you random numbers, which random numbers? And why are there so many ways to initialise a tensor? The best way to understand is to see it. If we create thousands of random values using each function and plot them as a histogram, we can see exactly what to expect:

```python
import torch

rand_sample = torch.rand(10000)       # uniform on [0, 1)
randn_sample = torch.randn(10000)     # standard normal: mean 0, std 1
zeros_sample = torch.zeros(10)        # all zeros
ones_sample = torch.ones(10)          # all ones
arange_sample = torch.arange(0, 10)   # integers 0, 1, ..., 9
linspace_sample = torch.linspace(0, 10, steps=5)  # 5 evenly spaced values from 0 to 10
eye_sample = torch.eye(5)             # 5x5 identity matrix
empty_sample = torch.empty(10)        # uninitialised memory
```

Seeing it as a histogram paints a very clear picture. `torch.rand()` initialises the tensor with random values between 0 and 1.
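You can also confirm these distributions numerically rather than visually. A minimal sketch (the variable names are my own; the tolerances are loose enough to hold for any random draw of this size):

```python
import torch

torch.manual_seed(0)  # fix the seed so the numbers are reproducible

sample = torch.rand(10_000)
print(sample.min().item() >= 0.0)              # True: rand never goes below 0
print(sample.max().item() < 1.0)               # True: and never reaches 1
print(abs(sample.mean().item() - 0.5) < 0.02)  # True: the mean sits near 0.5

normal = torch.randn(10_000)
print(abs(normal.mean().item()) < 0.05)        # True: mean close to 0
print(abs(normal.std().item() - 1.0) < 0.05)   # True: std close to 1
```

With 10,000 samples the empirical mean and standard deviation land very close to the theoretical values, which is exactly what the histograms show.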
`torch.randn()` fills it with values that mostly cluster around 0, `torch.eye()` gives an identity matrix, and `torch.empty()` is… wait. Not empty?

It is, in fact, not empty. It allocates memory but does not initialise it, so the tensor contains whatever values happened to already be in that memory. If you see zeros, that's just coincidence. `torch.zeros()` explicitly fills the tensor with zeros, whereas `torch.empty()` makes no guarantees at all – you should always write to it before reading from it.

"Hey! What about my own data?" I've got you. Initialising tensors with random noise can be helpful, but ultimately you want your own data for training. Let's start with a simple example: you have data structured as follows:

| Bedrooms | Size (m²) | Age (years) | Price (£k) |
| --- | --- | --- | --- |
| 2 | 65 | 15 | 285 |
| 3 | 95 | 8 | 425 |
| 4 | 120 | 25 | 380 |
| 3 | 88 | 42 | 295 |
| 5 | 180 | 3 | 675 |
| 2 | 58 | 50 | 245 |

```python
import torch

# Each row is one house: [bedrooms, size, age, price]
houses = torch.tensor([
    [2, 65, 15, 285],
    [3, 95, 8, 425],
    [4, 120, 25, 380],
    [3, 88, 42, 295],
    [5, 180, 3, 675],
    [2, 58, 50, 245],
], dtype=torch.float32)
```
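Once the data is in a tensor, standard indexing lets you split it into inputs and targets and use the built-in functions mentioned earlier. A small sketch continuing the houses example (the names `features` and `prices` are my own, not from the article):

```python
import torch

# Each row is one house: [bedrooms, size, age, price]
houses = torch.tensor([
    [2, 65, 15, 285],
    [3, 95, 8, 425],
    [4, 120, 25, 380],
    [3, 88, 42, 295],
    [5, 180, 3, 675],
    [2, 58, 50, 245],
], dtype=torch.float32)

features = houses[:, :3]  # all rows, first three columns: bedrooms, size, age
prices = houses[:, 3]     # all rows, last column: price in £k

print(features.shape)     # torch.Size([6, 3])
print(prices.shape)       # torch.Size([6])
print(prices.mean())      # average price across the six houses
```

This column-slicing pattern is how a dataset like this is typically fed to a model: the first tensor becomes the input, the second the value the model learns to predict.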
