Lessons in Communication: Balancing Empathy and Professionalism

I want to share a story about a brilliant engineer whose personality, while unique and engaging, ultimately presented significant challenges within our team. This experience served as a powerful reminder of the importance of setting clear boundaries and navigating complex interpersonal dynamics when managing individuals.

By late 2023, our Ruby on Rails application had become increasingly fragile and difficult to maintain. Recognizing the need to enhance our team’s engineering fundamentals, we decided to shift our hiring strategy to attract individuals with a stronger foundation in core engineering principles.

Note: Rails facilitates rapid application development, which is a significant advantage. However, a team lacking strong engineering fundamentals can quickly encounter limitations as the application grows in complexity. This can lead to technical debt and hinder future development efforts.

Both the engineer and I tend to be quite expressive in our communication style. This shared characteristic fostered a sense of camaraderie, allowing for open and candid conversations in private. However, our more direct communication style sometimes clashed with the expectations of the broader team. I recognized this early on and began to gently guide her on the importance of adjusting her communication style in public settings.

While it’s generally advisable to maintain consistency in one’s communication, the engineer’s conversational style felt familiar and comfortable to me, making it difficult to consistently enforce professional communication standards.

She quickly adjusted her communication style, and the initial interactions seemed promising. I believe in providing challenging opportunities for my team members to grow, so I entrusted her with our most critical project of the year. I was eager to see her excel and showcase her talent. However, it quickly became apparent that the high-pressure nature of this project was exacerbating the existing tensions.

As the research progressed, I observed a gradual increase in the engineer’s stress levels. This became a recurring pattern: I would address her concerns, offer reassurance, and express confidence in her abilities, which would temporarily alleviate the pressure. However, within a week, the stress would resurface, creating a cyclical and unproductive dynamic. In retrospect, I realize I should have recognized the severity of the situation much earlier, but my own biases and a desire to support her success likely blinded me to the escalating issues.

I believe my shared background and generational similarities with the engineer may have inadvertently led to an overestimation of our compatibility and a reluctance to address the escalating issues more directly. I inadvertently granted more latitude than was appropriate, failing to recognize the severity of the situation until it reached a critical point.

It’s important to note that I value open and authentic communication in private settings. My primary objective is to understand and support my team members, and I believe that effective communication can take many forms.

While I am open to direct and even passionate communication in private settings, I cannot tolerate disruptive behavior in public forums. This includes raising one’s voice or engaging in disrespectful language during team meetings. Unfortunately, this is what happened during a code review session.

I had presented a code solution that did not align with the engineer’s preferred approach, and she proceeded to express her disagreement in a highly emotional and disruptive manner, raising her voice in front of the entire team. The shock and discomfort on the faces of my colleagues were palpable. This incident marked a significant turning point in the situation.

The following day, I issued a written warning along with a Performance Improvement Plan (PIP). The meeting that followed was understandably emotional. As always, I made a concerted effort to listen to the engineer’s concerns and perspectives. After a lengthy discussion, we concluded the meeting by outlining clear expectations for future behavior and performance. Unfortunately, the situation did not improve, and the employment relationship ultimately ended.

In retrospect, I realize that while I consistently listened to the engineer’s concerns, I failed to truly grasp the escalating severity of the situation. I did not fully recognize the extent of her frustration and the impact it was having on her well-being. This was a difficult and valuable lesson for me.

My previous approach, which emphasized a permissive environment for open communication, inadvertently failed to establish clear boundaries between appropriate and inappropriate behavior, particularly in public settings. I now approach private conversations with a greater emphasis on both empathy and clear expectations. While I still value open communication, I strive to maintain a balance between fostering an inclusive environment and upholding professional standards.

The Unexpected Teachers

There are several stories about learning that I tell, whether you want to hear them or not. Two of them mean a lot to me, and I share them as often as I can. I want to share them here.

In the late 1990s, software engineering was not on my radar. I was working in metrology. Yes, it is a word, and it has nothing to do with the weather. I was sold on it as a good, stable career. Needless to say, I was OK with that and didn’t look much past it.

One day I overheard a manager from a different group talking with one of his technicians. I heard the technician say, “Why do I need to learn about combustion engines?” Since our field had nothing to do with those types of engines, I wanted to hear the response. The manager asked a seemingly simple question: “Do you know how they work?”

The manager’s question seemed silly, since the technician had zero interest in combustion engines. I walked away not thinking much more about the exchange. It wasn’t until later that night that I replayed the comments. The message I extracted was that there was something the technician didn’t know, so why not take the opportunity to learn about the engines?

The comment was not intended for my ears, but I have taken that lesson with me. For almost 30 years, I have taken the opportunity to learn anything and everything that presents itself to me. It is that lesson that has carried me this far in my career. I was a self-taught software engineer for almost five years before I decided I needed to get my degree. During those early days, I read everything. A lot of my Friday nights were spent at the bookstore, and my then-girlfriend loved it. Honestly, she hated it, but she understood what I was trying to achieve.

He doesn’t know it, but I owe my entry into this career to him. Thank you, Shawn. I tell that story all the time because it had such an impact on my life.

Fast forward to the mid-2000s, when I saw something that taught me a similar lesson. I was working on web applications, and we didn’t have mainstream JavaScript libraries like jQuery, so a lot of the code we wrote was for handling browser quirks. Most people could see the writing on the wall: JavaScript would keep spreading throughout the industry.

One day, I was walking past the cubicles and stopped at one of my colleagues’ desks. I struck up a conversation about JavaScript, sharing the resources I was using to get up to speed. She said she didn’t need to learn anything new, because she already knew enough. To be clear, she was not being a know-it-all; she was a good person and not like that.

Her response didn’t sit well with me then, and it further cemented my commitment to lifelong learning. My colleague stagnated, and her career didn’t progress. In fairness, not everyone is so passionate about learning or growth, and that is OK. If you are content and don’t have the desire to do anything new, then I respect that.

Those two lessons have served me well for the last 24 years and have made sure that I will always seek out opportunities to grow. I share these two stories with anyone who will listen because they have been so pivotal to my journey. Keeping a growth mindset has made me hungry and keeps me going. I hope you find these two stories as valuable as I have.

React-Intl outside your component tree

Recently, while reworking our app to add multi-language support, I came across the need to use the intl API outside of the component tree. I tried a couple of different approaches, such as a singleton-like provider populated by a wrapper component when it mounted. This worked fine, but it was really ugly.

For my use case, I need to load the language file (data) from the server at runtime, so we needed a way to swap in the intl provider dynamically after the fact. I didn’t find a way to do that, so we have to make sure the data is available when the components at the root are mounted (not a big deal). What I did find, however, was that the API lets me create a new instance of the intl object outside of the provider.

There are some considerations with having two instances of the intl object floating around, like memory usage, but for our user load that isn’t really an issue. To build this out, we need to supply a two-character locale string, and for that I am using the locale2 package. I don’t think anyone could have made that package any easier to use. Like I mentioned, I need to load the language data from the server when the page is rendered, so I use a simple function to pull it from the window object. Now that I have everything I need, I can create the new intl object and return it as an export.

import { createIntl, createIntlCache } from 'react-intl'
import locale2 from 'locale2'

// locale2 resolves the browser locale (e.g. 'en-US'); we only need
// the two-character language code.
export const locale = locale2.substring(0, 2)

// The cache keeps repeated formatting calls from leaking memory.
const cache = createIntlCache()

// The server renders the translated messages onto the window object;
// pull them from there, or return null if they were never loaded.
export const getMessages = () => {
    if (window['messages']) {
        return window['messages']
    }

    console.error('Failed to load messages from server')
    return null
}

export default createIntl(
    {
        locale: locale,
        messages: getMessages()
    },
    cache
)

That was all it took to access the intl object outside of the component tree. Now you can import intl and use it freely.

intl.formatMessage({ id: 'Hello' })

If you have any questions, leave a comment. Cheers

New Agenda

I have been spending time trying to get the Jetson Nano to work, and I have only made slight progress toward using it effectively. In a previous post, I reviewed several books that I read recently; I gained a lot of knowledge, but something was missing. The books talked about the principles at a high level, but I think some of the lower-level thought experiments were missing. I have decided to pause the practical application for a bit, maybe a month or two, and focus on getting a deeper understanding of the math.

Math is one area of focus, but on Twitter I have been chatting with someone about social issues, and it made me realize that there was a large void in my knowledge base. Even more introspectively, I realized that I am not great at making an argument. I know the content, but my ability to frame it and stay on target is an issue. Given that, here are the three new books I am reading:

The Fire Next Time – by James Baldwin

I am only a quarter of the way through, but his account of how he was raised, and how few paths were available to him, is striking. Seeing his explanation of how the world (white America) shaped how they see themselves, and the effect that has on the psyche, is mind-boggling.

How Not to Be Wrong: The Power of Mathematical Thinking – by Jordan Ellenberg

This book is hard to put down. It takes you through a lot of examples where decision making happens and the mathematics you can apply. This isn’t an exhaustive book on math, but it hits just right for where I am. In short, this book is awesome.

Deep Learning – by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

I am holding off cracking the cover of this one until I finish the two above. This one is an actual textbook, so I have high expectations. It digs pretty deep into the mathematics, so if I do this right, I should have a much better footing than I have now.

In short, I am looking to learn math, social justice, and argumentation. The political climate is brutal right now, so I am trying to stay off the television as much as possible to reduce my stress.

Please vote on November 3rd, our democracy is at stake.

Cheers

Machine Learning Book Review

If you have been following along, you know that I am trying to become a data scientist. That means I am reading more books than are probably healthy for a person to binge read. I wanted to highlight some of the books I have been reading with some comments. I am going to list them in the order that I read them, and I will close with the order I should have read them in.

Data Science from Scratch – Joel Grus

This really was an excellent book to start with. It gave me a good overview of what the field looks like and how to use the tools – Python. Not knowing Python did give me some challenges, but I was able to work around that with Google. Like most of the other books I read, it starts with some foundation in math, which was good because I have been out of school for more than a decade. Overall, I think this was a good place to start.

Deep Learning with Python – Francois Chollet

After reading about neural networks and deep learning, I was hooked. I googled for a book and this one popped up. I had heard of TensorFlow before, but I didn’t know about Keras. Keras is a helper library that makes things very easy, and this book was written by its creator. This one is awesome and in-depth. I learned a lot from it, but some fundamental things were still a bit confusing even with my superior Google sleuthing abilities. I am still reading the last few chapters because I felt I needed to take a break. Honestly, my brain was full, but I am a glutton for punishment, so I moved on to the next book.

Mathematics for Machine Learning – Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong

This one was not for the faint of heart. I read through the first few chapters, but just like the last book, it only made my brain hurt more. It is a textbook in disguise, but it is really good. I plan on getting back to it soon, because some of the things I am missing are the more advanced math.

Deep Learning from Scratch – Seth Weidman

This was the best book yet. Neural networks are really simple in concept, but I was still having a hard time seeing that. Seth takes a great approach to teaching each concept: he breaks them down using the math, code, and a diagram. This approach really worked for me. The part that really drove things home was writing a neural network from scratch. Learning to use Keras was good, but without the deeper understanding it was all a little too much voodoo for me. I give this one 5 stars. After reading Joel Grus’s book, I should have read this one second.

Machine Learning with Python for Everyone – Mark Fenner

I think this is the book that I should have read second. It is so comprehensive in its breadth of topics; I love it. This one is also a textbook in disguise. I don’t have much to say other than: stop reading my post and go buy it already.

New Idea

I always do a better job learning new things through practical application. Since I am on the journey to transition to a data scientist, I was left wondering what I should build to learn the algorithms. It came to me while I was watching my kids play soccer: there must be a better way of doing a post-game review with some data augmentation. So here we are.

I want to build an application that can 1) classify objects, 2) track objects, and 3) allow for playback with all of this additional information. I also envision an interface that lets you select a player and de-emphasize everything else so you can really focus on that specific player. Then you can look at how they are positioned in relation to the ball and to the other players, and take a peek into their game IQ and thought process.

I plan to use a masked region-based convolutional neural network (Mask R-CNN) to classify objects such as a player and a ball. Matterport has a GitHub repository with an implementation, so I am using that as a base. That part is pretty straightforward, because trained weights already exist for most of the objects I am looking for. This is the first level of classification that needs to be done.

Once I have detected players, I need to further classify them as specific, named players. This will be a little more tricky. I think I can freeze the layers of the current model and add layers to do the multi-class training. One thing I also want to do is detect the opponents and maybe classify them if they were seen previously.
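
Here is a minimal sketch of that freeze-and-extend idea, assuming a Keras-style base model. The layer sizes and the num_players parameter are placeholders of mine for illustration, not a working Mask R-CNN pipeline.

from keras import layers, models

def add_player_head(base_model, num_players):
    # Freeze the already-trained backbone so its weights are not
    # disturbed while the new head trains.
    for layer in base_model.layers:
        layer.trainable = False

    # Stack a small classification head on top to map a detected
    # person to a specific named player.
    return models.Sequential([
        base_model,
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_players, activation='softmax'),
    ])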

I was reading about the challenges with multi-label classification, so this will be a fun problem to solve for my use case. I am not sure what approach I will take yet, but I will write more once I get to that point.

I have a GitHub repository where I am tracking this code.

Convolutional Neural Network (Convnet), My Understanding

As I was going through the chapter on convnets in Chollet’s book, Deep Learning with Python, one of the things I found interesting was the ability to extend an existing trained model. When you think about image recognition, I can’t imagine being able to collect enough images to build a decent model. I thought about an application that could track soccer players on the field and detect how often they were engaged with the ball, where engagement is defined as the amount of time they are spotted within 1 to 2 meters of the ball.

I think this would be an interesting project, but collecting images of all of the angles players might find themselves in would be challenging, if not impossible. I wonder how many images of each player it would take to train the model. One thing that might work is data augmentation: taking the same images and mutating them into new ones. The mutation could be a translation or an offset in the frame that makes the image different, at least to the machine. These images would add to the existing pool and improve the model, since there is now more training data. Keras takes care of those mutations with its ImageDataGenerator class.

Convolution uses a windowed view of the image and moves that window around to find local features. In contrast, Dense layers train on the whole of the features at once. I am not an artist, so I won’t try to draw a convolution window, but think of reading the newspaper through a magnifying glass and moving it around until eventually you have covered the whole page.

Chollet gives an explanation of strides and padding, which seem straightforward. I think the best explanation comes from another well-known site, Machine Learning Mastery. The purpose of the padding is really to give each pixel the chance to be in the center of the window. Since the window moves 1 pixel at a time from left to right, it is impossible to center the border pixels unless the image is padded. For a 5 x 5 image and a 3 x 3 window, it is impossible to center each pixel, but if you pad the image out to 7 x 7, you can center every one of them.
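
To make that arithmetic concrete, here is the standard output-size formula applied to the same example; the helper function is my own illustration, not from the book.

def conv_output_size(image_size, window_size, padding=0, stride=1):
    # Standard formula: floor((n + 2p - k) / s) + 1
    return (image_size + 2 * padding - window_size) // stride + 1

# 5 x 5 image, 3 x 3 window, no padding: only 3 x 3 positions fit.
print(conv_output_size(5, 3))             # 3
# Pad by 1 on each side (a 7 x 7 image) and every pixel can be centered.
print(conv_output_size(5, 3, padding=1))  # 5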

I am going to post the code from the book since it is more concise, but you can get the true source from Chollet’s GitHub. This code assumes that you have downloaded the cats-vs-dogs dataset from Kaggle and have loaded and separated the data. I have posted my version on GitHub, but again, it is derived from the author’s code.

# We need to set up the environment and some paths to our images.
import os
import shutil
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator

# Workaround for a duplicate OpenMP runtime on macOS (see the error below).
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

base_dir = '/Users/heathivie/Downloads/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

I needed to add the os.environ['KMP_DUPLICATE_LIB_OK'] = 'True' line because the script was failing with this error:

OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/

There are a ton of images that need to be processed and used for training, so we will use Keras’ ImageDataGenerator. It is a Python generator that walks through the files and yields each image as it becomes available. Here we will load the data for training.

train_datagen = ImageDataGenerator(rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
    
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')

The first generator, train_datagen, is filled with parameters to support the data augmentation. The next two pieces simply set up the path, the target image size, and the batch size. They also specify the class_mode; since we are classifying two types (cats & dogs), we use binary.

Like the earlier posts, we will still be using a Sequential model, but we will start with the Conv2D layers.

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

Above we have specified that we want a 3 x 3 window, 32 filters (channels), relu as our activation, and an input image shape of 150 x 150 x 3. One thing to note: the classification at the end requires a Dense layer, so how do we translate a 3D tensor into something the dense layer can process? Keras gives us a Flatten layer to do this. Its final shape is a 1D tensor (X * Y * Channels).

model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

We finalize it with a single Dense layer with the sigmoid activation. The last piece is to compile the model. For this we will use the loss function binary_crossentropy, since this is a classification problem with 2 possible outcomes. We will again use the RMSprop optimizer, but here we will specify a learning rate, i.e., the rate at which it moves when doing the gradient descent. Lastly, we configure it to return the accuracy metric.

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Now we can run the fit method, supplying it with the training and validation generators we created above. The step* parameters make sure that our generators don’t run forever. This is configured to run 30 epochs at 100 steps each, so on my machine this takes about 10 minutes. Make sure you save your model.

history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=30,
    validation_data=validation_generator,
    validation_steps=50)

model.save('ch_5_cat_dogs.h5')

After running through all of the epochs, I achieved a 0.75 accuracy.

After you have saved your model, you can go take a picture of your cat or dog (or grab one off the internet) and use it to predict whether it is a cat or a dog.

import os
import numpy
from keras.models import load_model
from keras.preprocessing import image

base_dir = '/Users/heathivie/Downloads/cats_and_dogs_small'

# Load the model we saved above and summarize it.
model = load_model('ch_5_cat_dogs.h5')
model.summary()

# Load a single test image at the size the network was trained on.
file = os.path.join(base_dir, 'test/cats/download.jpeg')
f = image.load_img(file, target_size=(150, 150, 3))
x = image.img_to_array(f)

# Reshape into a batch of one; the first dimension is the batch size.
y = x.reshape((1, 150, 150, 3)).astype('float32')

classes = model.predict_classes(y)
print(classes)

I used this image of my amazing dog Fergus and the prediction was correct, he was indeed a dog.

The incomparable Fergus

In the next post I will use a pretrained convnet, which I think is awesome. I am going to continue working toward the goal of a model that can detect a player and their proximity to the ball.

K-Fold and Pima

Yesterday I posted an example using the Pima dataset, which provides data on the features of individuals and their likelihood of being diabetic. I didn’t get great results (only 67%), so I wanted to take another look and see if there was anything I could change to make it better. The dataset is pretty small, only 768 records. In my reading, I learned that when you have a small population of data, you can use K-Fold Cross Validation to improve the performance of the model.

K-Fold splits the data into k folds, or groups. For instance, if you set k to 3, then on each pass 2 folds are used for fitting the model and the remaining fold is used for validation, rotating so that every fold gets a turn as the validation set. scikit-learn has a KFold object that you can use to parcel the data into the sets. An interesting point I didn’t catch at first is that the split function returns sets of indices, not a new list of data.
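
For context, here is the setup I am assuming before the loop below; the fold count matches the k = 3 example above, though an actual run may use a different k.

from sklearn.model_selection import KFold

# Three folds: each pass trains on two folds and validates on the third.
kf = KFold(n_splits=3, shuffle=True, random_state=42)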

for train_index, test_index in kf.split(x_data,y_data):

So now that we have our data split into groups, we need to loop over those groups to train the model. Remembering that the split data is just arrays of indices, we need to populate our training and test data.

    X_train, X_test = x_data[train_index], x_data[test_index]
    y_train, y_split_test = y_data[train_index], y_data[test_index]

Just like the previous Pima example, we build, compile, fit and evaluate our model.

    model = models.Sequential()
    model.add(layers.Dense(16, activation='relu', input_shape=(8,)))
    model.add(layers.Dense(16, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))

    model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                metrics=['accuracy'])

    history = model.fit(X_train, y_train, epochs=epochs, batch_size=15  )
    results = model.evaluate(X_test, y_split_test)

Now, to capture the metrics of each fold, we store them in arrays, and I set aside the model with the best performance.

    # results[1] is the accuracy returned by model.evaluate above.
    current = results[1]
    m = 0
    if len(accuracy_per_fold) > 0:
        m = max(accuracy_per_fold)

    # Keep the best-performing model; pass_index is the fold counter
    # from the enclosing loop.
    if current > m:
        best_model = model
        chosen_model = pass_index

    loss_per_fold.append(results[0])
    accuracy_per_fold.append(results[1])

Putting it all together, after all folds have been processed we can print the results.

Now we can run our test data through the model and check out the results.

y_new = best_model.predict_classes(x_test) 
total = len(y_new)
correct = 0
for i in range(len(accuracy_per_fold)):
    print(f'Accuracy: {accuracy_per_fold[i]}')

for i in range(len(x_test)): 
    if y_test[i] == y_new[i]:
        correct +=1

print(correct / total)

Everything worked, and with the randomized test data I was able to achieve a 75% accuracy, where the previous method yielded 67%. The full code can be found on GitHub. Just like the other posts, these are just my learnings from the book Deep Learning with Python by Francois Chollet. If any expert reads through this and finds something I missed or got wrong, please drop a comment. I am learning, so any correction would be appreciated.

ML – A Novice Series (Pima Indians)

I wanted to take another look at binary classification and see if I could use what I learned on the Pima Indian dataset. This dataset describes some features of a population, and we try to predict whether someone will have diabetes. The shape of the raw data is (769, 9), which includes a header row, and the 9 columns are:

  • Pregnancies
  • Glucose
  • Blood Pressure
  • Skin Thickness
  • Insulin
  • BMI
  • Diabetes Pedigree Function
  • Age
  • Outcome

These are the features we have to work with. The data is in CSV, so the values are actually strings, and we will need to convert them. Let’s load the data, convert it to a numpy array, and cast it to float32.

import csv
import numpy as np

with open('pima_indian_diabetes.csv', newline='') as csvfile:
    dataset = list(csv.reader(csvfile))

# Drop the header row and cast the string values to floats.
data = np.array(dataset)
data = data[1:]
data = data.astype('float32')

Obviously, this assumes that the file is in the same directory as your Python file. Originally I used pandas’ read_csv, but it returns a DataFrame, so it was failing to do the extractions you will see in a minute. This took me longer than I care to admit before I figured out why it was failing to slice. Just like the IMDB example, we need to separate the features from the outcome.

# 8 features and 1 outcome columns
X = data[:, 0:8]
Y = data[:, 8:]
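
As an aside, I believe the pandas route would also have worked if I had converted the DataFrame to a plain numpy array first; this is just a sketch of that alternative, not what I ended up using.

import pandas as pd

# read_csv consumes the header row, so there is no need to drop it,
# and to_numpy() returns a plain numpy array that slices cleanly.
data = pd.read_csv('pima_indian_diabetes.csv').to_numpy().astype('float32')
X = data[:, 0:8]
Y = data[:, 8:]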

Now that we have our data separated, we need to split out the training and test data with scikit-learn’s train_test_split. I will choose a 70/30 split.

from sklearn import model_selection

x_train, x_test, y_train, y_test = model_selection.train_test_split(
    X, Y, train_size=0.7, test_size=0.3, random_state=42)

Now we have to define our model, which is an interesting section. I am using the same model as for the IMDB data, but we have some options. We have to change the input shape, since we only have 8 features (IMDB had 10,000). We also need to choose the number of neurons in each layer; I will set it to 16, comfortably larger than the 8 inputs. I get different results when I change this around, which I will share.

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(8,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Again we use RMSprop and binary cross-entropy and track the accuracy metric. We also split our training data into training and validation sets.

model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])

x_val = x_train[:350]
partial_x_train = x_train[350:]

y_val = y_train[:350]
partial_y_train = y_train[350:]

Now we can apply the data to the model and see the results. I chose 40 epochs, but we will test different iteration counts. We will also adjust the number of hidden units.

history = model.fit(partial_x_train, partial_y_train, epochs=40,
                    batch_size=1, validation_data=(x_val, y_val))

If we look at the loss graph with 40 epochs and 16 hidden units, the curve seems to track nicely. Our accuracy seems to level off for a while and then climbs a little higher.

What happens if we add more hidden units, let’s say 32? The loss and accuracy are not as smooth. The accuracy is higher, but it could be overfitting the training data. I should mention that the batch size is 15.

For the last test, I want to put the hidden units back to 16 but run more epochs: 100. We can see that the performance doesn’t change that much, but the accuracy does something strange. The only explanation I can think of is that there is a lot of overfitting.
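
This is not from the book, just my own convenience helper: wrapping the model construction in a function makes it easy to rerun these variations with 16 or 32 hidden units and any number of epochs.

from keras import layers, models

def build_model(hidden_units=16):
    # Same architecture as above, with the hidden layer width exposed.
    model = models.Sequential()
    model.add(layers.Dense(hidden_units, activation='relu', input_shape=(8,)))
    model.add(layers.Dense(hidden_units, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# The 32-unit experiment, with the batch size of 15 mentioned above.
history = build_model(32).fit(partial_x_train, partial_y_train, epochs=40,
                              batch_size=15, validation_data=(x_val, y_val))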

You might get different results due to the random sampling. The prediction results were not what I was hoping for, so more experimentation is needed. Maybe I will drop some columns and see how the model performs. To get the charts, you can use matplotlib.

import matplotlib.pyplot as plt

# history.history holds the per-epoch metrics recorded by fit.
history_dict = history.history

loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

accuracy_values = history_dict['accuracy']
val_accuracy_values = history_dict['val_accuracy']

# One x-axis point per epoch that was actually run.
epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation Loss')

plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()


plt.plot(epochs, accuracy_values, 'bo', label='Training Accuracy')
plt.plot(epochs, val_accuracy_values, 'b', label='Validation Accuracy')

plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

The full code can be found on GitHub. In summary, this is just playing around with some data and running experiments adjusting the hyperparameters. It would be great if someone with more experience in machine learning would add some comments and highlight my mistakes or suggest improvements.

ML – A Novice Series

For the last couple of weeks, I have been trying to learn more about machine learning. The obvious path was to soak up as much as I could from various blog posts, but I wasn’t getting everything I needed. I bought the book Data Science from Scratch by Joel Grus. It was really good and gave me a great introduction to the concepts and terms. I read it front to back and read some chapters many (many) times. I felt like I was off to a good start, but I needed more textbook-style content. Deep Learning with Python turned out to be that book.

Deep Learning with Python is a 2018 book by Francois Chollet, the creator of Keras. As I go through the chapters, I am going to post here about what I understood from the text, with an example if possible.

I watched a Pluralsight video on ML, and it mentioned a Google site called Colab. I had been using Kaggle, but I think Colab has more power. The first thing I noticed on Colab was the code completion. This tool was built for engineers, so I should not have been surprised; they really did a great job. Did I mention that it is free? You only need a Google account.

Moving on. Disclaimer: I may make mistakes or omit pertinent concepts, because I am learning at the same time. After going over tensors and what they are, the book jumps into a classification problem. It reviews movie reviews from IMDB to determine if they are positive or negative. Since there are only two states (positive/negative), this is defined as a binary classification. Another cool thing about Keras is that it comes with datasets for you to experiment with.

Starting with the IMDB import, you can extract out your training and testing datasets.

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels)  = imdb.load_data(num_words = 10000)

The load_data call takes an integer which specifies, like its name says, the number of most frequent words you want to load. I think it is obvious that the data is broken into training and testing sets. What might not be obvious is why the data and labels are stored as a pair. If you think about the basic linear equation, y = mx + b, you are given the x and the y (data/label) and you need to solve for the m and b. Thinking back to algebra, we remember that m is the slope of the line and b is the offset. We need to find the m and b that make the equation correct. This is what the neural network does: you give it the data and it solves for the remaining variables, adjusting m and b until it reaches a certain level of accuracy. Word soup.
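
As a toy illustration of that idea (mine, not from the book), here is a tiny loop that fits y = mx + b by repeatedly nudging m and b against the error, which is what the network does at a much larger scale.

# Points generated by m = 2, b = 1; the loop should recover those values.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

m, b = 0.0, 0.0
learning_rate = 0.01
for _ in range(5000):
    # Average gradient of the squared error with respect to m and b.
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= learning_rate * grad_m
    b -= learning_rate * grad_b

print(m, b)  # approaches 2 and 1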

Again, now that we have our training and test data partitioned, we can start to see how to use it to train the machine to predict whether a review is positive or negative. Since a network can only take numbers, and more specifically a tensor, we need to change the words into a vector. There is a lot of information between where we are now and where we want to be, so I will just show how we do that.

import numpy as np

def vectorize_sequences(sequences, dimension=10000):
  results = np.zeros((len(sequences), dimension))
  for i, sequence in enumerate(sequences):
    # Set the cell to 1 for every word index present in the review.
    results[i, sequence] = 1
  return results

This creates a matrix of zeros with one 10,000-wide row per review. Then it loops over all of the data and sets a cell to 1 where the word is present in the sequence. The sequence is simply a list filled with the indices of the words’ positions. Now that we can turn each individual list into a vector, we can convert our training and test data into a collection of these tensors:

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

We need to make our labels a 1-dimensional array of floats:

y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

Our data is now prepared and ready to go, so we need to configure our network. There is a key concept here that needs to be understood: the activation function. The activation function determines the output of each neuron given its inputs. There are many different types of activation functions, but in this example he used the Rectified Linear Unit (relu) function. You can check out the link for more information.
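
For intuition, relu itself is tiny; this is just its textbook definition, nothing Keras-specific.

def relu(x):
    # Rectified Linear Unit: pass positives through, clamp negatives to 0.
    return max(0.0, x)

print(relu(-2.3))  # 0.0
print(relu(1.7))   # 1.7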

Since a neural network consists of the input, output, and one or more hidden layers, we will need to do that configuration.

# We have to make sure we import the model and layer objects.
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

OK, we have set up the model. Let’s break it down. Since we are working in layers, we define this model to be sequential. Then we configure two hidden layers of 16 dimensions, which I understand to mean 16 neurons each (could be wrong). We also define the shape of the input data, which is our 10,000-element-wide tensor. Lastly, since this is a binary classification, we will only have a single output.

Before we send our training data through the model, we need to compile it. We will compile it with the RMSprop optimizer and the binary cross-entropy loss function; these work well for classification problems. The last parameter will allow us to get some data back, in the form of history, as the machine runs through its trials.

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

We will set aside the first 10,000 sequences for validation and train on the rest.

x_val = x_train[:10000]
partial_x_train = x_train[10000:]

y_val = y_train[:10000]
partial_y_train = y_train[10000:]

Now we are ready to train the machine! We need to define how many epochs it will train for and the batch size. We also pass in the validation data.

history = model.fit(partial_x_train, partial_y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val))

Executing this command starts the training: given an input X, the model learns to predict Y. As it loops over the epochs, you should see output like:

Epoch 1/5
30/30 [==============================] - 2s 58ms/step - loss: 0.5071 - accuracy: 0.7931 - val_loss: 0.3831 - val_accuracy: 0.8645

Then, for visualization, we can look at the training loss and the validation loss:

import matplotlib.pyplot as plt

# history.history holds the per-epoch metrics recorded by fit.
history_dict = history.history

loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

# One x-axis point per epoch that was run (5 in this example).
epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation Loss')

plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

You should see a plot of the training and validation loss curves.

When I ran through this example, I received different values each time, because of the random sampling. The only thing left is to test our machine and see how it performs on the test data.

model.predict(x_test)

When I print out the results, they are so-so:

[[0.06688562]
 [0.99754685]
 [0.9261165 ]
 ...
 [0.11254096]
 [0.04644495]
 [0.90793926]]

You can see that some are good and some are terrible. I guess that makes sense, since so much of what people write cannot simply be distilled down to positive or negative, but this is still impressive. The words in this post are mine and not copied from any other source, but all credit goes to Francois Chollet. Now, this was a simple binary classification; the next one is a multiclass classification problem where the answer can be 1 of 46 different classes. Here is the colab. Anyway, I will do more reading and report back. The full code is also on GitHub.