Useful NumPy Operations for Machine Learning
Professionally, I’m a C# web app developer. On the side I’m investing my time in Machine Learning, and that really means I need to learn Python. Python is the current language of choice when it comes to Machine Learning, and if you’re using Python for Machine Learning, it goes without saying that you’re also using NumPy.
Not being used to Python, I found some of the library’s features strange at first. This article is a reference for me to understand and highlight some of the NumPy features I have encountered so far. I hope it’s helpful for you as well.
If you’re using NumPy for Machine Learning, you’re going to be using NumPy arrays. They’re more performant than plain Python lists when dealing with large data sets, which are common in Machine Learning.
Here’s how you create one.
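A minimal sketch (the values here are just placeholders):

```python
import numpy as np

# Build a numpy array from a plain Python list
a = np.array([1, 2, 3])

print(a)        # [1 2 3]
print(a.shape)  # (3,)
```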
It very quickly becomes second nature.
When you have an array, you’ll often want to perform operations against the entire array. Numpy makes that really simple.
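For example (a sketch with made-up values), arithmetic with a plain number is applied to every element:

```python
import numpy as np

a = np.array([1, 2, 3])

# Each operation is applied element-wise across the whole array
print(a + 10)  # [11 12 13]
print(a * 2)   # [2 4 6]
```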
Similarly, operations across multiple arrays are just as simple.
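A sketch with two example arrays:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

# Element-wise addition and multiplication of two arrays
print(a + b)  # [11 22 33]
print(a * b)  # [10 40 90]
```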
NumPy automatically recognizes whether you’re adding array + array or array + number and applies the correct operation.
Matrices are critical to Machine Learning. There are a lot of them. Here are some useful matrix operations that NumPy makes pretty easy for you.
Generating a matrix with random numbers:
For neural networks, it’s common to randomly initialize the weights matrices. Here’s how to do that.
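A minimal sketch (the seed value and matrix size are arbitrary):

```python
import numpy as np

np.random.seed(1)  # fixed seed so the results are reproducible

# A 2x3 weights matrix with random values in [0, 1)
weights = np.random.rand(2, 3)
print(weights)
```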
I’m using a seed so that if you run it on your computer the results will be the same, but it’s probably not the best idea to use a fixed seed in your own projects.
By default this will create a matrix of the given size with values in [0, 1), which means the values range from 0 (inclusive) up to, but not including, 1. If you need a bigger range, you can apply operations to it:
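For example, to get values in [-1, 1) instead (a sketch, the target range is arbitrary):

```python
import numpy as np

np.random.seed(1)

# rand gives [0, 1); multiply by 2 for [0, 2), subtract 1 for [-1, 1)
weights = 2 * np.random.rand(2, 3) - 1
print(weights)
```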
Multiply by the specific range that you need, then add or subtract to move the range to whatever your starting value needs to be.
At times you’ll want a matrix to be in a different shape. You can use NumPy’s reshape for that.
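A sketch reshaping a 6-element array into a 2x3 matrix:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6])

# Reshape into 2 rows and 3 columns
m = a.reshape(2, 3)
print(m)
# [[1 2 3]
#  [4 5 6]]
```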
Dot multiplication is also used a lot, and it’s also pretty easy.
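A sketch with two small example matrices:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

# Matrix (dot) product: rows of a times columns of b
print(a.dot(b))
# [[19 22]
#  [43 50]]
```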
If you’re unfamiliar with dot products and you want to get into Machine Learning, I would recommend brushing up a bit on your Linear Algebra.
Transposing is as simple as using a T.
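A sketch with an example 2x3 matrix:

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])  # shape (2, 3)

# .T swaps rows and columns
print(m.T)  # shape (3, 2)
# [[1 4]
#  [2 5]
#  [3 6]]
```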
Transposing turns your columns into rows and rows into columns.
Argmax returns the zero-based index of the maximum element. It’s useful when comparing a prediction to the expected result.
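A sketch with a hypothetical prediction vector whose largest value sits at the last index:

```python
import numpy as np

predictions = np.array([0.1, 0.2, 0.3, 0.9])

# Index of the maximum element
print(np.argmax(predictions))  # 3
```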
The last index has the greatest value, so 3 is printed.
Apply a function to a whole matrix:
At times you’d like to pass every element in a matrix through a function. I have run into this mostly with activation functions. Even though the function takes a single value, NumPy applies it to each element for you.
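A sketch using a simple ReLU implementation (the matrix values are just examples):

```python
import numpy as np

def relu(x):
    # Written as if for a single value, but it works element-wise on arrays
    return (x > 0) * x

m = np.array([[-1, 1],
              [2, -2]])
print(relu(m))
# [[0 1]
#  [2 0]]
```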
ReLU is an activation function that turns negative values into 0 and leaves positive values unchanged. As you can see, the -1 became 0 and the 1 stayed the same.
Lastly, something I haven’t used a lot of, but here’s how to calculate the outer product if needed.
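A sketch with two small example vectors:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])

# outer(a, b)[i, j] = a[i] * b[j]
print(np.outer(a, b))
# [[10 20]
#  [20 40]
#  [30 60]]
```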
I hope this was helpful for you. I enjoyed diving a bit more into NumPy and focusing on the important features I’ve been using a lot.
I’m currently learning deep learning by reading through Grokking Deep Learning by Andrew W. Trask. I highly recommend the book if you’re just getting started.