Face Mask Detection using OpenCV

Anuja Shukla
4 min read · Oct 3, 2020


Readers really enjoy learning from timely, practical applications, so today we are going to look at a COVID-related application of computer vision: detecting face masks with OpenCV and TensorFlow.

Let us begin…..

Prerequisites:

A. OpenCV package (import cv2 as cv)

B. Tensorflow/Keras (import tensorflow)

C. IDE (I used Jupyter Notebook via Anaconda Navigator): if OpenCV and TensorFlow are not already available in your Anaconda environment, simply install these two packages with the commands below.

Commands:

conda install -c conda-forge opencv

pip install tensorflow
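If you want to confirm that both packages installed correctly, a quick sanity check (assuming a standard installation) is to import them and print their versions:

import cv2 as cv
import tensorflow as tf

print(cv.__version__)  # e.g. 4.x
print(tf.__version__)  # e.g. 2.x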

Here is the list of subtopics we are going to cover:

  1. How to do Real-time face detection
  2. How to do Real-time Mask detection

How to do Real-time face detection

In this section, we are going to use OpenCV to do real-time face detection from a live stream via our webcam.

As you know, videos are basically made up of frames, which are still images. We perform face detection on each frame of the video, so there is not much difference between detecting a face in a still image and detecting one in a real-time video stream.

We will be using the Haar Cascade algorithm to detect faces. It is a machine learning based object detection algorithm used to identify objects in an image or video. OpenCV ships with several trained Haar Cascade models saved as XML files, so instead of creating and training a model from scratch, we simply load one of these files. We are going to use the “haarcascade_frontalface_alt2.xml” file in this project.
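If you are curious which pre-trained cascades your installation ships with, this small sketch (using the same path trick as below; the exact folder can vary with how OpenCV was installed) lists the bundled XML files:

import os
import cv2 as cv

# The XML models live in OpenCV's "data" folder next to the cv2 module
data_dir = os.path.join(os.path.dirname(cv.__file__), "data")
for name in sorted(os.listdir(data_dir)):
    if name.endswith(".xml"):
        print(name)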

Haar Cascade Model

The first step is to find the path to the “haarcascade_frontalface_alt2.xml” file. We do this using Python's os module.

import os
import cv2 as cv

# Build the path to the cascade file that ships with OpenCV
cascadePath = os.path.dirname(cv.__file__) + "/data/haarcascade_frontalface_alt2.xml"

The next step is to load our classifier. The path to the above XML file is passed as an argument to OpenCV's CascadeClassifier().

faceCascade = cv.CascadeClassifier(cascadePath)
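CascadeClassifier() does not raise an error when the path is wrong; it silently returns an empty classifier. A small optional check (not part of the original script) catches this early:

if faceCascade.empty():
    raise IOError("Could not load Haar cascade from " + cascadePath)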

After loading the classifier, let us open the webcam using this simple OpenCV one-liner:

video_capture = cv.VideoCapture(0)
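Depending on your machine, camera index 0 may not be the webcam you expect, or it may be in use by another application. As an optional check (not part of the original script), isOpened() tells you whether the capture actually started:

if not video_capture.isOpened():
    raise IOError("Cannot open webcam")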

Next, we need to get the frames from the webcam stream. We do this using the read() function, called in an infinite loop so that we keep receiving frames until we decide to close the stream.

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
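read() returns two values: a boolean (ret) telling you whether a frame was actually grabbed, and the frame itself. A minimal guard (not part of the original script) stops the loop when the grab fails:

    if not ret:
        # No frame was returned (e.g. the camera was disconnected), so stop
        break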

For this specific classifier to work, we need to convert the frame into grayscale.

    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

The faceCascade object has a method detectMultiScale(), which receives a frame (image) as an argument and runs the classifier cascade over it. The term MultiScale indicates that the algorithm looks at subregions of the image at multiple scales, to detect faces of varying sizes.

    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(80, 80),
                                         flags=cv.CASCADE_SCALE_IMAGE)
  • scaleFactor: specifies how much the image size is reduced at each image scale. By repeatedly shrinking the input image, larger faces are brought down to a size the model can match. 1.05 is a good value: it means a small resizing step of 5% per scale, which increases the chance that a face ends up at a size the model can detect, at the cost of more computation.
  • minNeighbors: specifies how many neighbours each candidate rectangle should have for it to be retained. This parameter affects the quality of the detected faces: a higher value results in fewer detections but of higher quality. 3 to 6 is a good range (see the short tuning sketch after this list).
  • flags: mode of operation.
  • minSize: minimum possible object size; objects smaller than this are ignored.
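These parameters trade speed against accuracy. As a purely illustrative sketch (the variable name and values are not from the original script), a stricter call inside the same loop would look like this:

    # Slower but usually fewer false positives: smaller scale step, more neighbours
    faces_strict = faceCascade.detectMultiScale(gray,
                                                scaleFactor=1.05,
                                                minNeighbors=6,
                                                minSize=(80, 80),
                                                flags=cv.CASCADE_SCALE_IMAGE)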

detectMultiScale() returns a list of rectangles, one per detected face, each given as (x, y, w, h): the column and row of the top-left corner plus the width and height in pixels. We can easily loop over the variable faces to get these coordinates.

    for (x, y, w, h) in faces:
        # Crop the detected face; this region is what a mask classifier would use later
        face_frame = frame[y:y+h, x:x+w]
        face_frame = cv.cvtColor(face_frame, cv.COLOR_BGR2RGB)

        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

rectangle() accepts the following arguments:

  • The original image
  • The coordinates of the top-left point of the detection
  • The coordinates of the bottom-right point of the detection
  • The colour of the rectangle: a tuple giving the blue, green, and red components (0–255), since OpenCV stores colours in BGR order. In our case we set it to green by keeping the green component at 255 and the rest at zero.
  • The thickness of the rectangle lines
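If you also want to label each detection, for example as a step toward printing “Mask” / “No Mask” later, OpenCV's putText() can draw text just above the rectangle. This is an optional addition inside the same for loop, not part of the original script, and the label text here is only a placeholder:

        # Optional: draw a placeholder label above the box; in the mask-detection
        # step this string would come from the classifier's prediction
        cv.putText(frame, "Face", (x, y - 10),
                   cv.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)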

Next, we just display the resulting frame and set up a way to exit this infinite loop and close the video feed. waitKey(1) waits one millisecond for a key press and returns its code (or -1 if no key was pressed); masking the result with 0xFF keeps only the lowest byte so the comparison works reliably across platforms. Pressing the ‘q’ key exits the script.

    cv.imshow('Video', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
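As a small optional extra (not part of the original script), the same key check can be extended to save a snapshot of the current frame when, say, ‘s’ is pressed:

    key = cv.waitKey(1) & 0xFF
    if key == ord('q'):
        break
    elif key == ord('s'):
        # Write the current annotated frame to the working directory
        cv.imwrite("snapshot.png", frame)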

Code output:

A> With Mask


B> Without Mask

The next two lines just clean up: they release the webcam and close the display window.

video_capture.release()
cv.destroyAllWindows()
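For convenience, here is the face-detection part stitched together into one runnable sketch, the same code as above with the ret check added (the face_frame crop used later for mask classification is omitted):

import os
import cv2 as cv

# Load the bundled Haar cascade for frontal faces
cascadePath = os.path.dirname(cv.__file__) + "/data/haarcascade_frontalface_alt2.xml"
faceCascade = cv.CascadeClassifier(cascadePath)

video_capture = cv.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    if not ret:
        break

    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                         minSize=(80, 80),
                                         flags=cv.CASCADE_SCALE_IMAGE)

    for (x, y, w, h) in faces:
        # Draw a green box around each detected face
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv.imshow('Video', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv.destroyAllWindows()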

I hope you enjoyed the tutorial!!!

Please feel free to post your comments and feedback….

Don’t forget to hit claps if you liked it :)

Also, new ideas are welcome.

Thank You! :)

