I always do a better job of learning new things through practical application. Since I am on a journey to transition into data science, I was left wondering what I should build to learn the algorithms. It came to me while I was watching my kids play soccer: I figured there must be a better way of doing a post-game review with some data augmentation. So here we are.
I want to build an application that can 1) classify objects, 2) track objects and 3) allow for playback with all of this additional information. I also envision an interface that lets you select a player and fade everything else out so you can really focus on that specific player. You could then look at how they position themselves in relation to the ball and to the other players, and get a peek into their game IQ and thought process.
I plan to use a Mask Region-based Convolutional Neural Network (Mask R-CNN) to classify objects such as a player and a ball. Matterport has a GitHub repository with their implementation, so I am using that as a base. That part is pretty straightforward, because pretrained weights already exist for most of the objects that I am looking for. This is the first level of classification that needs to be done.
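As a rough illustration of that first level, here is a minimal sketch of running the Matterport Mask R-CNN with COCO weights over a single frame. The paths, the config values and the frame file name are my own placeholders, not code from my project.

from mrcnn import model as modellib
from mrcnn.config import Config
import skimage.io

class InferenceConfig(Config):
    NAME = "soccer"
    NUM_CLASSES = 1 + 80     # COCO's 80 classes plus background
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True)

frame = skimage.io.imread("frame_0001.jpg")      # a single frame pulled from game video
result = model.detect([frame], verbose=0)[0]
# result holds 'rois', 'masks', 'class_ids' and 'scores'; the class ids map to the
# COCO class list in the repo, which includes 'person' and 'sports ball'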
Once I have detected players, I need to further classify them as specific players with names. This will be a little trickier. I think I can freeze the layers of the current model and add layers to do the multi-class classification training. One thing that I also want to do is detect the opponents and maybe classify them if they were seen previously.
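The freeze-and-extend idea might look something like the sketch below in Keras. I am using VGG16 here purely as a stand-in for the detection backbone, and the roster size and image size are placeholders.

from keras import layers, models, optimizers
from keras.applications import VGG16

num_players = 12   # hypothetical roster size

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
conv_base.trainable = False            # freeze the pretrained convolutional layers

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(num_players, activation='softmax'))   # one output per named player

model.compile(optimizer=optimizers.RMSprop(lr=1e-4),
              loss='categorical_crossentropy',
              metrics=['acc'])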
I was reading about the challenges with multi-label classification, so this will be a fun challenge to solve for my use case. I am not sure which approach I will take yet, but I will write more once I get to that point.
As I was going through the chapter on convnets in Chollet’s book, Deep Learning with Python, one of the things that I found interesting was the ability to extend an existing trained model. When you think about image recognition, I can’t imagine being able to collect enough images to be able to build a decent model. I thought about an application that could track soccer players on the field and detect how often they were engaged with the ball. Engagement would be defined as the amount of time they were spotted within 1 to 2 meters of the ball.
I think this would be an interesting project, but collecting images of all of the angles that players might find themselves in would be a challenge, if not impossible. I wonder how many images of each player it would take to train the model. One thing that might work is data augmentation, or more specifically taking the same images and mutating them into new images. The mutation could be a translation or an offset in the frame to make the image different, at least to the machine. These images would add to the existing pool and improve the model since there is now more training data. Keras takes care of those mutations with its ImageDataGenerator class.
Convolution uses a windowed view of the image and moves that window around to find local features. In contrast, Dense layers look at the whole of the features to train. I am not an artist so I won't try to draw a convolution window, but think of reading the newspaper through a magnifying glass and moving it around until eventually you have covered the whole page.
Chollet gives an explanation of strides and padding, which seem straightforward. I think the best explanation comes from another well-known site, MachineLearningMastery. The purpose of padding is really to give each pixel the chance to be in the center of the window. Since the window moves 1 pixel at a time from left to right, unless the image is large enough it is impossible to center the window on the border pixels. For a 5 x 5 image and a 3 x 3 window, it is impossible to center the window on every pixel, but if you pad the image so that it is 7 x 7, you can.
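A quick way to convince yourself of the arithmetic, using the standard output-size formula (this is just my own throwaway helper, not code from the book):

def conv_output_size(n, f, p=0, s=1):
    """Output width/height for an n x n input, an f x f window, padding p and stride s."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(5, 3))        # 3 -> the border pixels of a 5 x 5 image are never centered
print(conv_output_size(5, 3, p=1))   # 5 -> padding to 7 x 7 lets every pixel be the center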
I am going to post the code from the book since it is more concise, but you can get the original source from Chollet's GitHub. This code assumes that you have downloaded the cats vs dogs dataset from Kaggle and have loaded and separated the data. I have posted my version on GitHub, but again it is derived from the author's code.
# We need to set up the environment and some paths to our images:
import os
import shutil
from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
os.environ['KMP_DUPLICATE_LIB_OK']='True'
base_dir = '/Users/heathivie/Downloads/cats_and_dogs_small'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
I needed to add the line os.environ['KMP_DUPLICATE_LIB_OK']='True' because it was failing with this error:
OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/
There are a ton of images that need to be processed and used for training, so we will need to use Keras' ImageDataGenerator. It is a Python generator that looks through the files and yields each image as it is available. Here we will load the data for training.
In the code below, the first generator, train_datagen, is filled with parameters to support the data augmentation. The next two pieces simply set up the path, target size (image size) and batch size. They also specify the class mode, and since we are classifying two types (cats & dogs) we will use binary.
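Here is a sketch of that setup, along the lines of the book's code; the exact augmentation values are just examples and it builds on the imports and paths defined above.

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# The validation data should only be rescaled, never augmented.
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')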
Like the earlier posts we will still be using a Sequential model, but this time we will start with Conv2D layers.
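My copy of the model definition, which roughly follows the book's:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())                      # turn the 3D feature maps into a 1D tensor
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))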
Above we have specified that we want a 3 x 3 window, 32 filters (channels) to start, relu as our activation and an image shape of 150 x 150 x 3. One thing to note: we need to do a classification, which requires a Dense layer, so how do we translate a 3D tensor into something the Dense layer can accept? Keras provides a Flatten layer to do this. Its output is a 1D tensor of size (X * Y * Channels).
We finish with a single Dense layer with the sigmoid activation. The last piece is to compile the model. For this we will use the loss function binary_crossentropy, since this is a classification problem with 2 possible outcomes. We will again use the RMSprop optimizer, but here we will specify a learning rate, or the rate at which it moves when doing gradient descent. Lastly, we configure it to return the accuracy metric.
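The compile step is short; here is a sketch matching the description above:

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])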
Now we can run the fit method, supplying it with the training and validation generators that we created above. The step* parameters are there to make sure that our generators don’t run forever. This is configured to run 30 epochs at 100 steps each, so on my machine this takes about 10 minutes. Make sure you save your model.
history = model.fit(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
model.save('ch_5_cat_dogs.h5')
After running through all of the epochs, I achieved an accuracy of 0.75. This is what it looked like:
After you have saved your model, you can go take a picture of your cat or dog (or grab one off the internet) and use it to predict whether it is a cat or a dog.
import os
import numpy
from keras.models import load_model
from keras.preprocessing import image
base_dir = '/Users/heathivie/Downloads/cats_and_dogs_small'
# load model
model = load_model('ch_5_cat_dogs.h5')
# summarize model.
model.summary()
file = os.path.join(base_dir, 'test/cats/download.jpeg')
f = image.load_img(file, target_size=(150, 150, 3))
x = image.img_to_array(f)
# the first param is the batch size
y = x.reshape((1, 150, 150, 3)).astype('float32')
classes = model.predict_classes(y)
print(classes)
I used this image of my amazing dog Fergus and the prediction was correct, he was indeed a dog.
The next post I will do will use a pretrained convnet, which I think is awesome. I am going to continue talking about the goal of a model that can detect someone and their proximity to a ball.
Yesterday I needed to deploy a Lambda function to send out email reminders for my client. In setting this up I needed to add it to a VPC so it could connect to the database, which was straightforward. On our platform we use Secrets Manager to store the connection strings for our databases. It all sounded great, but the function kept timing out and I wasn't getting any information about why.
Like any respected (or desperate) engineer, I started to add console.logs everywhere. I figured that there was a connection issue with the database since it is not accessible to the public. After some annoying cycles of add a log, redeploy, test, rinse and repeat, it turned out that it was hanging while trying to connect to Secrets Manager.
This didn't make much sense at first; there were no security groups restricting outbound traffic. I checked the credentials it was using and those were fine. Then I turned to the engineer's best friend, Google. After some digging, I found that Lambda needs internet access to reach Secrets Manager, and I was not willing to give it that access. So what were my options?
I remembered that I had used an endpoint so I didn't have to route my database traffic over the internet. Endpoints allow you to access supported services through AWS's internal network and not over the internet. Not all services have endpoints that you can create, but it turns out that Secrets Manager does. It is incredibly simple to set up. Here are the steps.
Navigate to the VPC section of AWS and select the Endpoints item in the left nav menu.
Now you can click Create Endpoint and look through the list of services that are available. It should look something like this.
Select the category of services that you want to use; in my case it was the default AWS Services. Search for Secrets Manager and it will pop into the list. Once you make the selection, your only remaining task is to select the VPC you want to attach it to.
That’s all it takes. After a short period (~45 seconds) your endpoint will be attached and all is well again.
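If you prefer to script it instead of clicking through the console, the same endpoint can be created with boto3. This is only a sketch; the region, VPC, subnet and security group ids below are placeholders.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.create_vpc_endpoint(
    VpcId='vpc-0123456789abcdef0',
    VpcEndpointType='Interface',
    ServiceName='com.amazonaws.us-east-1.secretsmanager',
    SubnetIds=['subnet-0123456789abcdef0'],
    SecurityGroupIds=['sg-0123456789abcdef0'])

print(response['VpcEndpoint']['VpcEndpointId'])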
Yesterday I posted an example using the Pima dataset, which provides data on the features of individuals and their likelihood of being diabetic. I didn't get great results (only 67%), so I wanted to take another look and see if there was anything I could change to make it better. The dataset is pretty small, only 768 records. In my reading, I found that when you have a small population of data you can use K-Fold Cross Validation to improve the performance of the model.
K-Fold splits the data into k folds, or groups. For instance, if you set k to 3, the data is split on each pass into 1 validation group and 2 training groups. For each fold, the training groups are used to fit the model and the remaining group is used for validation. scikit-learn has a KFold object that you can use to parcel the data into the sets. An interesting point that I didn't catch at first is that the split function returns sets of indices, not a new list of data:
for train_index, test_index in kf.split(x_data,y_data):
So now that we have our data split into groups, we need to loop over those groups to train the model. Remembering that the split data is just arrays of indices, we need to populate our training and test data, as in the sketch below.
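Here is a minimal sketch of that loop. The build_model helper is a hypothetical function that returns a fresh, compiled Keras model, and the epoch and batch values are just examples.

from sklearn.model_selection import KFold

kf = KFold(n_splits=3, shuffle=True)
pass_index = 0

for train_index, test_index in kf.split(x_data, y_data):
    # the returned indices slice the underlying numpy arrays
    x_train, x_val = x_data[train_index], x_data[test_index]
    y_train, y_val = y_data[train_index], y_data[test_index]

    model = build_model()   # hypothetical helper returning a fresh, compiled Keras model
    model.fit(x_train, y_train, epochs=40, batch_size=15, verbose=0)

    # results is [loss, accuracy] for this fold's validation data
    results = model.evaluate(x_val, y_val, verbose=0)

    # per-fold metric tracking goes here (shown in the next snippet)
    pass_index += 1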
Now, to capture the metrics of each fold, we need to store them in arrays, and I also set aside the model with the best performance.
current = results[1]                 # accuracy for the current fold
m = 0
if len(accuracy_per_fold) > 0:
    m = max(accuracy_per_fold)       # best accuracy seen so far
if current > m:
    # this fold beat the previous best, so keep its model
    best_model = model
    chosen_model = pass_index
loss_per_fold.append(results[0])
accuracy_per_fold.append(results[1])
Putting it all together after all folds have been processed, we can print the results.
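A short sketch of that summary, assuming numpy is available:

import numpy as np

print(f'Average accuracy across folds: {np.mean(accuracy_per_fold):.3f}')
print(f'Average loss across folds: {np.mean(loss_per_fold):.3f}')
print(f'Best fold: {chosen_model}')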
Now we can run our test data through the model and check out the results.
y_new = best_model.predict_classes(x_test)
total = len(y_new)
correct = 0

for i in range(len(accuracy_per_fold)):
    print(f'Accuracy: {accuracy_per_fold[i]}')

for i in range(len(x_test)):
    if y_test[i] == y_new[i]:
        correct += 1

print(correct / total)
Everything worked, and based on the randomized test data I was able to achieve 75% accuracy where the previous method yielded 67%. The full code can be found on GitHub. Just like the other posts, these are just my learnings from the book Deep Learning with Python by Francois Chollet. If any expert reads through this and finds something that I missed or got wrong, please drop a comment. I am learning, so any correction would be appreciated.
I wanted to take another look at binary classification and see if I could use what I learned on the Pima Indian dataset. This is a dataset that describes some features of a population, and we try to predict whether someone will have diabetes. The raw data has shape (769, 9) including the header row, and the 9 columns are:
Pregnancies
Glucose
Blood Pressure
Skin Thickness
Insulin
BMI
Diabetes Pedigree Function
Age
Outcome
These are the features that we have to work with. The data is in CSV, so the values are actually strings and we will need to convert them. Let's load the data, convert it to a numpy array and cast it to float32.
import csv
import numpy as np

with open('pima_indian_diabetes.csv', newline='') as csvfile:
    dataset = list(csv.reader(csvfile))

data = np.array(dataset)
data = data[1:]               # drop the header row
data = data.astype('float32')
Obviously, this assumes that the file is in the same directory as your Python file. Originally I used pandas' read_csv, but it returns a DataFrame, so it was failing to do the extractions that you will see in a minute. This took me longer than I care to admit before I figured out why the slicing was failing. Just like the IMDB example, we need to separate the features from the outcome.
# 8 features and 1 outcome columns
X = data[:, 0:8]
Y = data[:, 8:]
Now that we have our data separated, we need to split out the training and test data with scikit-learn's train_test_split. I will choose a 70/30 split.
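A minimal sketch of that split:

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)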
Now we have to define our model, which is the interesting section. I am using the same model as the IMDB data, but we have some options. We have to change the shape of the input since we only have 8 features (IMDB had 10,000). We also need to choose the number of units in each layer. I will set it to 16; a common rule of thumb is to make it at least the size of the input, which is 8. I get different results when I change this around, which I will share.
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(8,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Again we use RMSprop and binary cross-entropy and track the accuracy metric. We also split our training data into training and validation sets, as in the sketch below.
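A sketch of the compile step and the validation hold-out; the size of the hold-out here is just an example.

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Hold back part of the training data for validation.
x_val = x_train[:100]
partial_x_train = x_train[100:]
y_val = y_train[:100]
partial_y_train = y_train[100:]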
Now we can apply the data to the model and see the results. I chose 40 epochs, but we will test different iterations. We will also adjust the number of hidden units.
history = model.fit(partial_x_train, partial_y_train, epochs=40,
batch_size=1, validation_data=(x_val, y_val))
If we look at the loss graph with 40 epochs and 16 hidden units, the graph seems to track nicely. Our accuracy seems to level off for a while and then climbs a little higher.
What happens if we add more hidden units, let's say 32? The loss and accuracy curves are not as smooth. The accuracy is higher, but it could be overfitting the training data. I should mention that the batch size is 15.
For the last test, I want to put the hidden units back to 16 but run more epochs: 100. We can see that the performance doesn't change much, but the accuracy does something strange. The only thing I can think of is that there is a lot of overfitting.
You might get different results due to the random sampling. The prediction results were not what I was hoping for, so some more experimentation is needed. Maybe I will drop some columns and see how the model performs. To get the charts you can use matplotlib, as in the sketch below.
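A sketch of the plotting code; depending on your Keras version the history keys may be 'acc'/'val_acc' instead of 'accuracy'/'val_accuracy'.

import matplotlib.pyplot as plt

history_dict = history.history
loss = history_dict['loss']
val_loss = history_dict['val_loss']
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

plt.clf()
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.show()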
The full code can be found on GitHub. In summary, this is just playing around with some data and running some experiments adjusting the hyperparameters. It would be great if someone with more experience in machine learning would add some comments and highlight my mistakes or suggest some improvements.
For the last couple of weeks, I have been trying to learn more about machine learning. The obvious path was to soak up as much as I could from various blog posts, but I wasn't getting everything that I needed. I bought the book Data Science from Scratch by Joel Grus. It was really good and gave me a great introduction to the concepts and terms. I read it front to back and read some chapters many (many) times. I felt like I was off to a good start, but I needed more textbook-style content. Deep Learning with Python turned out to be that book.
Deep Learning with Python is a 2018 book by Francois Chollet, the creator of Keras. As I go through the chapters, I will post here about what I understood from the text, with an example where possible.
I watched a PluralSight video on ML and it talked about a Google site called Colab. I had been using Kaggle, but I think Colab has more power. The first thing I noticed on Colab was the code completion. This tool was built for engineers, so I should not have been surprised; they really did a great job. Did I mention that it is free? You only need a Google account.
Moving on. Disclaimer: I may make mistakes or omit pertinent concepts, but I am learning at the same time. After going over tensors and what they are, the book jumps into a classification problem. It looks at movie reviews from IMDB to determine if they are positive or negative. Since there are only two states (positive/negative), this is a binary classification. Another cool thing about Keras is that it comes with datasets for you to experiment with.
Starting with the IMDB import, you can extract your training and testing datasets.
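The load looks like this (a minimal sketch of the book's code):

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)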
The load_data call takes an integer which specifies, like its name says, the number of most frequent words that you want to keep. I think it is obvious that the data is broken into training and testing sets. What might not be obvious is why the data and label are stored as a pair. If you think about the basic linear equation, y = mx + b, you are given the x and the y (data/label) and you need to solve for the m and b. Thinking back to algebra, we remember that m is the slope of the line and b is the offset. We need to find the m and b that make the equation correct. This is what the neural network will do: you give it the data and it solves for the remaining variables, adjusting m and b until it reaches a certain level of accuracy. Word soup.
Again, now that we have our training and test data partitioned, we can start to see how to use it to train the machine to predict whether a review is positive or negative. Since a network can only take numbers, and more specifically a tensor, we need to change the words into a vector. There is a lot of information between where we are now and where we want to be, so I will just show how we do that:
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1
    return results
This creates a matrix with a row of 10,000 zeros for each review. Then it loops over all of the sequences and sets a cell to 1 where the corresponding word is present in the sequence. A sequence is simply a list of the indices of the words that appear in the review. Now that we can turn an individual list into a vector, we can convert our training and test data to a collection of these tensors, as in the sketch below:
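A sketch of that conversion, along the lines of the book (assuming numpy is imported as np):

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# The labels are already 0/1; they just need to be float tensors.
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')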
Our data is now prepared and ready to go, so we need to configure our network. There is a key concept here that needs to be understood: the activation function. The activation function determines the output of a neuron given its inputs. There are many different types of activation functions, but in this example he uses the Rectified Linear Unit (relu) function. You can check out the link for more information.
Since a neural network consists of the input, the output and one or more hidden layers, we will need to do that configuration.
# We have to make sure we import the model and layer objects
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu',input_shape = (10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Ok, we have set up the model. Let's break it down. Since we are working in layers, we define this model to be Sequential. Then we configure two hidden layers of 16 dimensions each, which to my understanding means 16 neurons, but I could be wrong. We also define the shape of the input data, which is our 10,000-element-wide tensor. Lastly, since this is a binary classification, we will only have a single output.
Before we send our training data through the model, we need to compile it. We will compile it with the RMSprop optimizer and the binary cross-entropy loss function. These work well for classification problems. The metrics parameter will let us track accuracy as the machine runs through its trials, and the fit call returns that data in the form of a history.
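A sketch of the compile step, plus carving out a validation set (the book holds out the first 10,000 training samples):

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Hold out part of the training data for validation.
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]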
Now we are ready to train the machine! We need to define how many iterations (epochs) it will train for and the batch size. We also supply the validation data.
history = model.fit(partial_x_train, partial_y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val))
Executing this command starts the machine learning that, given an input X, the model predicts Y. As it loops over the epochs, you should see output like:
When I ran through this example, I received different values each time, and this is because of the random sampling. The only thing left is to test out our model and see how it performs on the test data.
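A sketch of that last step:

results = model.evaluate(x_test, y_test)
print(results)              # [loss, accuracy] on the held-out reviews

predictions = model.predict(x_test)
print(predictions[:10])     # probability that each review is positive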
You can see that some predictions are good and some are terrible. I guess that makes sense, since so much of what people write cannot simply be distilled down to positive or negative, but this is still impressive. The words in the post are mine and not copied from any other source, but all credit goes to Francois Chollet. This was a simple binary classification; the next one is a multi-class classification problem where the answer can be 1 of 46 different classes. Here is the colab. Anyway, I will do more reading and report back. The full code is also on GitHub.
For the last several weeks, I have been trying to learn more about data science. I have always enjoyed writing queries for reporting to dig into the stories that the data can tell. Data science takes that interest to a whole new level.
As a typical engineer, I haven't exercised my linear algebra or calculus since my undergraduate years. The book that I am reading (Data Science from Scratch: First Principles with Python, 2nd Edition) starts hard and fast in that direction. Luckily I have most of my old schoolbooks, so I have been speed-reading through them to try to reclaim some level of understanding. So I have been a nerd with my stats, linear algebra and calculus books for the last several weeks. Reading them a decade later gives me a new appreciation for the material.
All of that leads to why I am really interested in learning data science. On a project that I am working on, there is an opportunity to utilize some machine learning techniques. More specifically, the project asks a user a sequence of questions to arrive at a customer satisfaction score.
One of the areas where we may be able to employ such techniques is determining the next question (excluding next questions only). It would seem reasonable to have the ability to craft the sequence of questions (let's call it a survey) based on the answers to the previous ones. If you have n completed surveys, then you might be able to do some multiple regression analysis. This could tell you the probability of the next question being answered in a certain way. That would allow you to remove questions that have a high likelihood of being unnecessary, since they are consistently answered the same way.
One of the first things that comes to mind with that approach is that there may be some questions that are more important and therefore require a user's answer. In other words, they may carry more weight and should be removed from the possible exclusion set. Another top-of-mind concern is the actual ordering of the questions. If the ordering is random such that the questions are independent, how could you predict which question should be skipped next, or does the machine determine the order itself?
I think having the machine determine the order of the questions would be pretty awesome. Would that somehow invalidate comparisons between surveys because of the psychological effects the ordering creates? I am not sure. Theoretically, if the questions are truly independent, you should be able to present them in any order. I do believe that they could tell a different story, even with the same answers, if presented in different ways. ¯\_(ツ)_/¯
So at this point, we have multiple regression to determine how the next question in the sequence is likely to be answered. This would need to be a multi-pass operation, since you should be checking each successive question to determine whether it needs to be asked. It is possible that you could reach a point where no further questions are required.
That covers exclusion, but what about including new questions dynamically? Would you have a pool of questions related to a product or service to be used for possible inclusion? In that scenario, would you only include some static questions and allow the machine to add questions from the pool based on prior responses? That seems like a simple decision tree, but given that we are already excluding some questions, it seems unreasonable to construct that tree. You could always have a parent-child relationship where one answer prompts a series of new questions, but again that would need to be preordained.
It seems that dynamic inclusion is a much more challenging endeavor. I think I will leave it here for now so I can marinate a little more on this topic.
Recently I have been working on a clinical trial application and needed a way to export all of the data. The client wants the questions to be displayed horizontally, but the data is stored vertically. This is the structure of the data:
Pretty simple schema; the issue is the presentation. This is what they want it to look like:
At first this was a little intimidating, because there are different types of questions in separate tables that would need to be combined and then transposed together. I decided to use the PIVOT function. Then there was another requirement that needed to be addressed: the columns had to be sorted. This took a little bit of data massaging to get it into the correct pre-pivot format.
To set the stage I have a number of pre-processing steps that I won’t show here, but just know that those steps result in temp table #data. Now that I have all of the data in place, I need to order the questions so that they pivot correctly:
-- PRE-PIVOT PROCESSING
SELECT
[CaseId] ,
[Title],
[Value]
INTO #ordered
FROM #data
ORDER BY
[CaseId],
[Title]
Here comes the interesting part. I need to extract all of the columns into a comma-delimited string for use in the pivot. For this I need an NVARCHAR variable to hold the delimited fields. Now I can take the data from the #ordered table and generate the fields:
DECLARE @ColumnName AS NVARCHAR(MAX)
SELECT @ColumnName= ISNULL(@ColumnName + ',','')
+ QUOTENAME([Title])
FROM (SELECT DISTINCT [Title] FROM #ordered ) AS c ORDER BY [Title]
There is another function in there that I didn't know about before I needed to do this: QUOTENAME. It takes a string and wraps it in square brackets, which allows strings with spaces to be used as column names. At this point, we have the @ColumnName variable set with a delimited set of question titles.
Now that we have our columns set, we need to generate the SQL that will be executed. For this we need to get the data. We will use a CTE to collect the data with the new list of columns and pump that into the PIVOT function. The PIVOT function takes the aggregate that you want applied to the value field and the column that you want to pivot on, in my case the @ColumnName list. Lastly, I need to have the rows grouped by the unique CaseId. In the end it looks like this:
DECLARE @query NVARCHAR(MAX);
SET @query = ';WITH p ([CaseId],'+ @ColumnName +') as (SELECT [CaseId],' + @ColumnName + ' from
(
SELECT
[CaseId],
[Title],
[Value]
FROM
#ordered
) x PIVOT (MAX([Value]) for [Title] in ('+@ColumnName+')) p GROUP BY [CaseId],'+@ColumnName+') SELECT * FROM p '
EXECUTE (@query);
When we execute these queries, we end up with the result that we want:
One note: these values are strings in the data, so an aggregate does not really make sense; you can use the MAX function to get the exact value.
Recently, I needed a way to take a list of names and concatenate them with a separator between them. There are many ways to do this in C#, but I was feeling lazy so I wanted to see if there was a simpler way… in SQL. To set the stage, I needed to be able to take a table with the names stored in rows. So let's say we have two tables, Group and GroupMembers:
CREATE TABLE [dbo].[Group]
(
[Id] [BIGINT] IDENTITY(1,1) NOT NULL,
[Name] VARCHAR(255) NULL
)
CREATE TABLE [dbo].[GroupMembers]
(
[Id] [BIGINT] IDENTITY(1,1) NOT NULL,
[GroupId] BIGINT NOT NULL ,
[UserEmail] VARCHAR(255) NOT NULL
)
Let’s create a group and some new members:
INSERT INTO [Group] (Name) SELECT 'My Group'
INSERT INTO [GroupMembers] ([GroupId], [UserEmail]) SELECT 1, 'test@test.com'
INSERT INTO [GroupMembers] ([GroupId], [UserEmail]) SELECT 1, 'test1@test.com'
INSERT INTO [GroupMembers] ([GroupId], [UserEmail]) SELECT 1, 'test2@test.com'
INSERT INTO [GroupMembers] ([GroupId], [UserEmail]) SELECT 1, 'test3@test.com'
The goal is to take a group and the first five users and return the name of the group and a concatenation of the emails. To start with, we can just pull the data directly, but it gives us all rows as opposed to just one:
SELECT
g.[Name],
gm.[UserEmail]
FROM
[GROUP] g
INNER JOIN [GroupMembers] gm ON gm.[GroupId] = g.[Id]
Reminder: the goal is to have a single row with the emails concatenated, and we want to do this in SQL, so this isn't helpful. If I chose to do this in code, I would have to iterate over the collection, insert the separator and combine the strings. I would also need to track whether I was on the last item so that I didn't add a trailing separator. Enter STRING_AGG. In their words:
Concatenates the values of string expressions and places separator values between them. The separator is not added at the end of string.
This is perfect, so let’s try it out. From the docs, we can see that we have two parameters: expression and separator.
Taking our previous query and making a few adjustments, we get this:
SELECT
g.[Name],
[Emails] = STRING_AGG(gm.[UserEmail], ', ')
FROM
[GROUP] g
INNER JOIN [GroupMembers] gm ON gm.[GroupId] = g.[Id]
GROUP BY
g.[Name]
When we run this, we get the exact results that we are looking for.
There are a couple of things to note. I added a space after the separator to make it look better, and for it to work we have to apply the GROUP BY. The only thing left is to set the order of the aggregated strings. I mention this because it got logged as a bug. I should have known better, rookie mistake. Oh well. It is really simple: we just need to add the WITHIN GROUP clause at the end of the STRING_AGG function:
SELECT
g.[Name],
[Emails] = STRING_AGG(gm.[UserEmail], ', ') WITHIN GROUP ( ORDER BY gm.[UserEmail] DESC)
FROM
[GROUP] g
INNER JOIN [GroupMembers] gm ON gm.[GroupId] = g.[Id]
GROUP BY
g.[Name]
The last few months have been tough for so many, but the last 2 weeks have been especially hard for some. The passing of George Floyd and the renewed attention on the mistreatment of black Americans make me think of my two children. Both of my children are bi-racial and are not defined by either side. For now they are innocent of the bias and hatred that permeates so deeply in our society today.
I was raised to be intolerant and to see race in everything around me. I won't go into details about the teachings, but let me assure you it started early. It is still hard for me not to see race when I look at other humans, even now at almost 45. With that said, it does not mean that I harbor that learned hatred and bias today.
I think the awareness of difference is ingrained deep within our DNA. The question is what that awareness means to you. Generalizing or stereotyping a group based on the actions of a few is so easily done, but rarely holds true. For example, I have worked with only one black man in engineering in the last 2 decades. It was easy to extrapolate that fact into thinking that there were not that many black men in tech, let alone black women. It is amazing to see on Twitter and the blogosphere how many there are doing great things and lifting each other up.
My wife frequently reminds me that my children will be seen as black, but I have never seen them in that way. They are just two beautiful and handsome brown kids. The recent events have created a new reality for me about the challenges that my kids will face because of their heritage. I am left with the realization of what so many black families must go through trying to teach their children about the world and how to navigate it successfully. That is a sad reality that stains the innocence of childhood for so many.
I hope that the energy and attention to George Floyd’s death and the death of countless others will resonate in the non-black communities and translate to real change at the voting booth. I know we can do better! So let’s.