KINSHIP NORTHEASTERN SIMILAR FACES PROJECT

PROBLEM DESCRIPTION:

Identifying blood relatives with the help of deep learning

Blood relatives often share facial features.

Researchers at Northeastern University now want to improve their algorithm for facial image classification to bridge the gap between this research and other familial markers such as DNA results.

The challenge is to build a deep learning technique that helps the researchers build a more capable model by determining whether two people are blood-related based solely on images of their faces. Most blood relatives share features such as the eyes, nose and face shape, and these are what we need to identify from the images.

DATA OVERVIEW:

The data is provided by Families In the Wild (FIW), the largest and most comprehensive image database for automatic kinship recognition. FIW obtained data from publicly available images of celebrities.

Familial facial resemblances can exist that we might overlook by eye; deep learning can help us avoid missing them.

FILE DESCRIPTIONS:

· train-faces.zip — the training set is divided into families (F0123), then individuals (MIDx). Images in the same MIDx folder belong to the same person. Images in the same F0123 folder belong to the same family.

· train.csv — training labels. Remember, not every individual in a family shares a kinship relationship. For example, a mother and father are kin to their children, but not to each other.

· test-faces.zip — the test set contains face images of unknown individuals

METRIC:

Predictions are evaluated on the area under the ROC curve (AUC) between the predicted probability of kinship and the true label; the public and private scores quoted later are AUC values.

EXPLORATORY DATA ANALYSIS:

There are 12379 images in the train dataset.

Total positive person pairs in the relationship.csv file → 3598; these 3598 pairs expand to 165179 image pairs that share a kinship relationship.

Total negative person pairs → 1720; these 1720 pairs expand to 184094 image pairs that do not share a kinship relationship.

There are 470 families in the train set.

VGGFACE:

VGGFace was developed by researchers from one of the most prominent groups in image processing, the Visual Geometry Group (VGG) at Oxford.

The group developed deep convolutional neural network models and trained them on a very large dataset of faces for the face recognition task. The models were evaluated on benchmark face recognition datasets, demonstrating that they are effective at generating generalized features from faces.

VGGFace refers to a series of models developed for face recognition.

There are two main VGG models for face recognition at the time of writing: VGGFace and VGGFace2.

CODE:

!pip install git+https://github.com/rcmalli/keras-vggface.git

IMPORTING LIBRARIES

First, we import the required libraries.

Then we check how much data each family has and see that family F0601 has a large amount of data.

Then we randomly check a few images.

The following images are of people who are related to each other.

DATA PREPROCESSING:

Now we create a dictionary whose KEY is the FAMILY NAME/PERSON NAME and whose VALUE is the list of ALL of that person's IMAGES.

Here we only consider folders that actually contain images.
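As a rough sketch of this step (the folder layout is assumed from the train-faces.zip description above, and person_to_images is a name used here only for illustration):

from collections import defaultdict
from glob import glob

# Assumed layout after unzipping: train-faces/F0123/MID1/xxxx.jpg
train_image_paths = glob("train-faces/*/*/*.jpg")

# KEY: "F0123/MID1" (family/person) -> VALUE: list of that person's image paths
person_to_images = defaultdict(list)
for path in train_image_paths:
    parts = path.replace("\\", "/").split("/")
    person_to_images[parts[-3] + "/" + parts[-2]].append(path)

# Only folders that actually contained images end up as keys
person_to_images = {person: imgs for person, imgs in person_to_images.items() if imgs}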

GENERATOR FUNCTION

Now we will create the generator function.

We have the data for pairs that have a kinship relation. But to train the model, we should also provide data for pairs that do not have a kinship relation. This logic is included in the Python generator: half of each batch will be pairs of images that have a kinship relation (class 1) and the remaining half will be pairs of images that don't have a kinship relation (class 0).
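A minimal sketch of such a generator, assuming the person_to_images dictionary from the preprocessing sketch above and a list positive_pairs of (person, person) tuples read from the relationship CSV; the batch size and the 224x224 input size are assumptions:

import random
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def read_img(path):
    # Load a face image and scale it to the 224x224 input expected by VGGFace
    return img_to_array(load_img(path, target_size=(224, 224))) / 255.0

def pair_generator(positive_pairs, person_to_images, batch_size=16):
    # First half of each batch: kin pairs (class 1); second half: random non-kin pairs (class 0)
    people = list(person_to_images.keys())
    positive_set = set(positive_pairs)
    while True:
        batch_pairs = random.sample(positive_pairs, batch_size // 2)
        labels = [1] * (batch_size // 2)
        while len(batch_pairs) < batch_size:
            p1, p2 = random.sample(people, 2)
            if (p1, p2) not in positive_set and (p2, p1) not in positive_set:
                batch_pairs.append((p1, p2))
                labels.append(0)
        X1 = np.array([read_img(random.choice(person_to_images[a])) for a, _ in batch_pairs])
        X2 = np.array([read_img(random.choice(person_to_images[b])) for _, b in batch_pairs])
        yield [X1, X2], np.array(labels)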

MODEL

The code below builds the model; the architecture described here corresponds to MODEL 1.

The two images are passed into the VGGFace models. We remove the top layers of VGGFace so that the outputs we get are face embeddings.

After passing the input images through both the VGGFace models we will get face embeddings for both the images. We have

X1- face embedding for image 1 from VGGFace model

X2- face embedding for image 2 from VGGFace model

We could use these embeddings directly for our classification task by passing them through dense fully connected (FC) layers, but it is a good feature engineering trick to first combine or merge these embeddings for better results.

We use the binary cross-entropy loss for minimization and the Adam optimizer.
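A sketch of one such network is shown below. The ResNet50 backbone, the particular merge (squared difference concatenated with the element-wise product), the layer sizes and the learning rate are assumptions for illustration; the exact merge is what varies between MODEL 1 and MODEL 4.

from keras.models import Model
from keras.layers import Input, Dense, Dropout, GlobalMaxPool2D, Concatenate, Multiply, Subtract
from keras.optimizers import Adam
from keras_vggface.vggface import VGGFace

def build_model():
    input1 = Input(shape=(224, 224, 3))
    input2 = Input(shape=(224, 224, 3))

    # Pre-trained VGGFace without its top classification layers -> face embeddings
    base_model = VGGFace(model="resnet50", include_top=False)
    for layer in base_model.layers[:-3]:
        layer.trainable = False  # fine-tune only the last few layers

    x1 = GlobalMaxPool2D()(base_model(input1))  # face embedding for image 1
    x2 = GlobalMaxPool2D()(base_model(input2))  # face embedding for image 2

    # Merge the two embeddings instead of passing them to the dense layers directly
    diff = Subtract()([x1, x2])
    merged = Concatenate()([Multiply()([diff, diff]), Multiply()([x1, x2])])

    x = Dense(128, activation="relu")(merged)
    x = Dropout(0.2)(x)
    out = Dense(1, activation="sigmoid")(x)  # probability that the two faces are kin

    model = Model([input1, input2], out)
    model.compile(loss="binary_crossentropy", optimizer=Adam(1e-5), metrics=["acc"])
    return model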

CALLBACKS:

The callbacks ModelCheckpoint and ReduceLROnPlateau are used while training.

ModelCheckpoint

ModelCheckpoint saves the model after an epoch only if the validation accuracy has improved compared to the best previous epoch.

ReduceLROnPlateau

ReduceLROnPlateau reduces the learning rate when the validation accuracy doesn't improve for a considerable number of epochs.
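A sketch of the two callbacks; the file name, patience, factor and monitored quantity are assumptions (older Keras versions report "val_acc", newer ones "val_accuracy"):

from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Save weights only when validation accuracy beats the best epoch so far
checkpoint = ModelCheckpoint("vggface_kinship.h5", monitor="val_acc",
                             save_best_only=True, verbose=1)

# Cut the learning rate when validation accuracy plateaus for several epochs
reduce_lr = ReduceLROnPlateau(monitor="val_acc", factor=0.1, patience=10, verbose=1)

# model.fit_generator(train_gen, validation_data=val_gen, epochs=100,
#                     steps_per_epoch=200, validation_steps=100,
#                     callbacks=[checkpoint, reduce_lr])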

Below I will explain all the models we have trained.

Here, we use features obtained from the pre-trained VGGFace model. We pass two faces in simultaneously and get the probability that they are kin: the features of the two images are obtained and then combined before being passed to the dense layers. Below are the architectures of the various networks.

DEEP LEARNING MODELS:

1. 4 VGGFACE MODELS

MODEL 1:

MODEL 2:

MODEL 3:

MODEL 4:

The VGGFace-based models are the best, with a higher AUC than the other models; we used several varieties of the VGGFace model here (MODELS 1 to 4 above).

2. SIAMESE BASED MODEL:

It is a one-shot image recognition technique, described in the paper titled Siamese Neural Networks for One-shot Image Recognition. An implementation of the Siamese network is mentioned here.

In this model we have taken architectural inspiration from the Siamese model.

MODEL 5:
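As a rough sketch of the Siamese-style merge from that paper (the 2048-dimensional embedding size is an assumption, standing in for the pooled VGGFace embeddings from the earlier sketch; the actual MODEL 5 architecture may differ):

import keras.backend as K
from keras.layers import Input, Lambda, Dense
from keras.models import Model

# The two face embeddings (e.g. the pooled VGGFace outputs) as generic inputs
emb1 = Input(shape=(2048,))
emb2 = Input(shape=(2048,))

# Siamese-style merge: component-wise L1 distance between the two embeddings,
# followed by a sigmoid unit that scores the kinship probability
l1_distance = Lambda(lambda t: K.abs(t[0] - t[1]))([emb1, emb2])
out = Dense(1, activation="sigmoid")(l1_distance)

siamese_head = Model([emb1, emb2], out)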

3. INCEPTION BASED MODEL:

It is one of the best-known convolutional architectures for image recognition, described in the 2014 paper titled Going Deeper with Convolutions. An implementation of the Inception network is explained on this site.

For all 6 models we got public and private scores of at most 0.85, so we then ensembled the models.

MODEL SCORE:

Ensemble Models for Kaggle Submission:

The average of the outputs of the 6 models is taken and submitted, which gives a private score of 0.89.
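A sketch of this averaging step; the per-model submission file names and the img_pair/is_related column names are assumptions about the Kaggle submission format:

import pandas as pd

# Hypothetical per-model submission files, each with columns img_pair, is_related
submission_files = ["model1.csv", "model2.csv", "model3.csv",
                    "model4.csv", "model5.csv", "model6.csv"]

subs = [pd.read_csv(f) for f in submission_files]
ensemble = subs[0].copy()
# Simple average of the six predicted kinship probabilities
ensemble["is_related"] = sum(s["is_related"] for s in subs) / len(subs)
ensemble.to_csv("ensemble_submission.csv", index=False)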

FUTURE WORK:

· We can add a few more distinct models and take their ensemble output, but we should set a criterion to only consider models whose individual score is more than 0.82.

· We can ensemble the models by giving a different weight to each model.

· We can combine this model with Flask (a Python-based micro web framework) to make a simple web server on localhost that acts as an API and lets end users communicate with our trained model, so that anyone can access it; a minimal sketch of such an endpoint is shown after this list.
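A minimal sketch of such a Flask endpoint, assuming the trained model was saved as vggface_kinship.h5 and that the two face images are uploaded as form fields img1 and img2:

from io import BytesIO
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)
model = load_model("vggface_kinship.h5")  # assumed path to the trained model

def to_array(file_storage):
    # Read an uploaded image, resize to the model's 224x224 input and add a batch axis
    img = Image.open(BytesIO(file_storage.read())).convert("RGB").resize((224, 224))
    return np.expand_dims(np.array(img).astype("float32") / 255.0, axis=0)

@app.route("/predict", methods=["POST"])
def predict():
    x1 = to_array(request.files["img1"])
    x2 = to_array(request.files["img2"])
    prob = float(model.predict([x1, x2])[0][0])
    return jsonify({"kinship_probability": prob})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)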

The link to the full code can be found on my GitHub profile.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

REFERENCES links: →

REFERENCE Course: →

— — — — — — — — — — — — — — — — — — — — — — — — — —
