I always do a better job learning new things through practical application. Since I am on the journey of transitioning into a data scientist, I found myself wondering what I should build to learn the algorithms. It came to me while I was watching my kids play soccer. I figured there must be a better way of doing a post-game review with some data augmentation. So here we are.
I want to build an application that can 1) classify objects, 2) track objects and 3) allow for playback with all of this additional information. I also envision an interface that lets you select a player and fade everything else out so you can really focus on that specific player. Then you can look at how they position themselves in relation to the ball and to the other players, and get a peek into their game IQ and thought process.
I plan to use a masked region-based convolutional neural network (Mask R-CNN) to classify objects such as a player and a ball. Matterport has a GitHub repository with an implementation they did, so I am using that as a base. That part is pretty straightforward, because pre-trained weights already exist for most of the objects that I am looking for. This is the first level of classification that needs to be done.
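To give a sense of what that first level looks like, here is a rough sketch of running inference with Matterport's Mask R-CNN on a single video frame using the pre-trained COCO weights. The config values, file paths and frame name are placeholders I made up, not code from this project yet:

import mrcnn.model as modellib
from mrcnn.config import Config
import skimage.io

# Minimal inference config; these values are assumptions for a single-image, single-GPU run
class InferenceConfig(Config):
    NAME = "soccer_inference"
    NUM_CLASSES = 81          # COCO: 80 object classes + background
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", model_dir="logs", config=config)
model.load_weights("mask_rcnn_coco.h5", by_name=True)   # pre-trained COCO weights

image = skimage.io.imread("frame_0001.jpg")             # one frame from the game video
results = model.detect([image], verbose=0)
r = results[0]
# r['class_ids'], r['scores'], r['rois'] and r['masks'] hold the detections.
# COCO already includes "person" and "sports ball", which covers players and the ball.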
Once I have detected players, I need to further classify them as specific players with names. This will be a little more tricky. I think I can freeze the layers of the current model and add layers to do the multi-class training. One thing that I also want to do is to detect the opponents and maybe classify them if they were seen previously.
I was reading about the challenges with multi-label classification, so this will be a fun problem to solve for my use case. I am not sure which approach I will take yet, but I will write more once I get to that point.
For the last couple of weeks, I have been trying to learn more about machine learning. The obvious path was to soak up as much as I could from various blog posts, but I wasn’t getting everything that I needed. So I bought Data Science from Scratch by Joel Grus. It was really good and gave me a great introduction to the concepts and terms. I read it front to back and read some chapters many (many) times. I felt like I was off to a good start, but I needed more textbook-style content. Deep Learning with Python turned out to be that book.
Deep Learning with Python is a 2018 book by Francois Chollet, the creator of Keras. As I go through the chapters, I am going to post here about what I understood from the text, with an example where possible.
I watched a Pluralsight video on ML and it mentioned a Google site called Colab. I had been using Kaggle, but I think Colab has more power. The first thing I noticed on Colab was the code completion. This tool was built for engineers, so I should not have been surprised; they really did a great job. Did I mention that it is free? You only need a Google account.
Moving on. Disclaimer: I may make mistakes or omit pertinent concepts, since I am learning at the same time. After going over tensors and what they are, the book jumps into a classification problem: reviewing movie reviews from IMDB to determine whether they are positive or negative. Since there are only two states (positive/negative), this is defined as a binary classification. Another cool thing about Keras is that it comes with datasets for you to experiment with.
Starting with the IMDB import, you can extract your training and testing datasets.
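Roughly what that looks like, following the book's example:

from keras.datasets import imdb

# Keep only the 10,000 most frequent words in the dataset
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)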
The load_data call takes an integer which specifies, like its name says, the number of most frequent words that you want to keep. I think it is obvious that the data is broken into training and testing sets. What might not be obvious is why the data and label are stored as a pair. If you think about the basic linear equation, you have y = mx + b. In this case, you are given the x and the y (data/label) and you need to solve for the m and b. Thinking back to algebra, we remember that m is the slope of the line and b is the offset. We need to find the m and b that make the equation correct. This is what the neural network will do. You give it an equation and it solves for the remaining variables, adjusting m and b until it gets to a certain level of accuracy. Word soup.
Again, now that we have our training and test data partitioned, we can start to see how we can use it to train the machine to predict whether a review is positive or negative. Since a network can only take numbers, and more specifically tensors, we need to change the words into vectors. There is a lot of information between where we are now and where we want to be, so I will just show how we do that.
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # Set the columns for the word indices present in this review to 1
        results[i, sequence] = 1.
    return results
This creates a matrix with one row per review and 10,000 columns, all set to zero. Then it loops over the data and sets a cell to 1 where the corresponding word is present in the sequence. The sequence is simply a list of the indices of the words in the review. Now that we can turn our individual lists into tensors, we can convert our training and test data into a collection of these tensors:
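Something like the following; the labels are also converted to float tensors, as in the book:

# Vectorize the reviews into 10,000-dimensional binary vectors
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# Convert the labels (0 = negative, 1 = positive) to float tensors
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')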
Our data is now prepared and ready to go, so we need to configure our network. There is a key concept here that needs to be understood: the activation function. The activation function determines the output of a neuron given its input. There are many different types of activation functions, but in this example he used the Rectified Linear Unit (relu) function. You can check out the link for more information.
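For a bit of intuition, relu and sigmoid (which shows up in the output layer below) can each be written in a couple of lines of numpy:

import numpy as np

def relu(x):
    # Passes positive values through unchanged and clamps negatives to zero
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(relu(np.array([-2.0, 0.0, 3.0])))     # [0. 0. 3.]
print(sigmoid(np.array([-2.0, 0.0, 3.0])))  # roughly [0.12 0.5 0.95]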
Since a neural network consists of the input, the output and one or more hidden layers, we will need to do that configuration.
# We have to make sure to import the model and layer objects
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Ok, we have set up the model. Let’s break it down. Since we are working in layers, we define this model to be Sequential. Then we configure two hidden layers with 16 units each; my understanding is that means 16 neurons per layer, but I could be wrong. We also define the shape of the input data, which is our 10,000-element-wide tensor. Lastly, since this is a binary classification, we only have a single output, which the sigmoid squashes into a probability.
Before we can send our training data through the model, we need to compile it. We will compile it with the RMSprop optimizer and the binary cross-entropy loss function. These work well for classification problems. The last parameter tells Keras to track accuracy as a metric, so we can watch it as the machine runs through its trials.
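The compile step looks roughly like this (the standard Keras call, matching the book's example):

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])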
Now we are ready to train the machine! We need to define how many epochs (passes over the training data) it will train for and the batch size. We also hand it the validation data.
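The partial_x_train and x_val variables used below come from holding back part of the training set as a validation set; in the book the first 10,000 reviews are set aside:

# Hold back the first 10,000 reviews (and labels) for validation
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]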
history = model.fit(partial_x_train, partial_y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val))
Executing this command starts the machine learning that, given an input X, the model should predict Y. As it loops over the epochs, you should see the loss and accuracy for the training and validation data printed for each one.
When I ran through this example, I received slightly different values each run, which comes from the random weight initialization and shuffling. The only thing left is to test out our machine and see how it performs on the test data.
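Evaluating on the held-out test set and generating predictions looks roughly like this:

# Loss and accuracy on the held-out test reviews
results = model.evaluate(x_test, y_test)
print(results)

# Predicted probability that each test review is positive
predictions = model.predict(x_test)
print(predictions[:10])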
You can see that some are good and some are terrible. I guess that makes sense since so much of what people write cannot simply be distilled down to positive or negative, but this is still impressive. The words in this post are mine and not copied from any other source, but all credit goes to Francois Chollet. Now this was a simple binary classification; the next one is a multi-class classification problem where the answer can be one of 46 different classes. Here is the colab. Anyways, I will do more reading and report back. The full code is also on GitHub.