Model Name – Safety hat and vest detection

What does the model detect?
This model detects safety hats and safety vests in images.

What is the use of this model?
In industry, the leading causes of construction-related casualties are falls, workers getting caught in equipment, electrocution, and collisions. Most of these injuries can be prevented if workers wear appropriate personal protective equipment such as a safety vest and a hard hat. This model helps reduce the number of casualties and ensure workers’ safety.

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We recorded video from an RTSP camera through the VLC player and extracted 3633 images from it.
There are three classes, i.e. Safety Hat, Safety Vest, and Person.
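The frame-extraction step can be sketched as follows, assuming OpenCV (cv2) is available; the sampling interval, paths, and function names are illustrative, not part of the Vredefort workflow.

```python
def sample_indices(total_frames, every_n):
    """Indices of the frames to keep when sampling every n-th frame."""
    return list(range(0, total_frames, every_n))

def extract_frames(video_path, out_dir, every_n=10):
    """Decode a recording and save every n-th frame as a JPEG."""
    import cv2  # imported here so the sampling helper stays dependency-free
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    keep = set(sample_indices(total, every_n))
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

Sampling every n-th frame avoids near-duplicate images from consecutive frames, which add little variety to the dataset.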

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it to Vredefort. Vredefort automatically cleans the data by removing corrupt images and resizing the rest to a suitable resolution.
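A minimal sketch of what such a cleaning pass looks like; the record format, the target resolution, and the helper names are our assumptions — Vredefort performs this step automatically.

```python
def fit_within(width, height, max_side=640):
    """Scale (width, height) down to fit max_side, preserving aspect ratio."""
    scale = min(1.0, max_side / max(width, height))
    return (round(width * scale), round(height * scale))

def clean_dataset(records, max_side=640):
    """records: (name, readable, width, height) tuples.

    Drop unreadable (corrupt) images and return the kept names with
    their resized dimensions.
    """
    kept = []
    for name, readable, w, h in records:
        if not readable:  # corrupt / undecodable image: drop it
            continue
        kept.append((name, fit_within(w, h, max_side)))
    return kept
```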

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the objects of interest and labeled them as safety hat, safety vest, or person accordingly.
We annotated 3633 images using the inbuilt Vredefort tool.

Annotation Rules – (Keep them in mind for better detection)
     ⦁ Skip an object if it is in motion or blurred.
     ⦁ Draw the bounding box precisely around the object.
     ⦁ Bounding boxes should not be much larger than the object.
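The rules above can be enforced with a simple sanity check on each box; the 0.9 maximum area fraction is an illustrative threshold, not a Vredefort requirement.

```python
def box_ok(box, img_w, img_h, max_area_frac=0.9):
    """Check one annotation; box = (xmin, ymin, xmax, ymax) in pixels.

    The box must lie inside the image and must not cover a
    disproportionate share of it.
    """
    xmin, ymin, xmax, ymax = box
    inside = 0 <= xmin < xmax <= img_w and 0 <= ymin < ymax <= img_h
    if not inside:
        return False
    area_frac = ((xmax - xmin) * (ymax - ymin)) / (img_w * img_h)
    return area_frac <= max_area_frac
```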

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, the batch size per GPU, the neural network model, and so on. If you provide no inputs, the default settings are used.
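A hypothetical sketch of how such settings might be combined with defaults; the parameter names, default values, and network name are illustrative only and may differ from the actual Vredefort options.

```python
# Illustrative defaults, not the platform's real values.
default_params = {
    "epochs": 80,
    "batch_size_per_gpu": 16,
    "network": "detectnet_v2",
}

def resolve_params(user_params=None):
    """Fall back to the defaults for any value the user does not supply."""
    return {**default_params, **(user_params or {})}
```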

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
Evaluation has two parts: checking accuracy and playing the inference videos. Vredefort reports both total model accuracy and class-wise accuracy. With the three classes here, we achieved 86% overall model accuracy; the individual class accuracies are 80% for Safety Hat, 88% for Safety Vest, and 93% for Person.

A new video for inference
We recorded a new video with the help of VLC player to check the inference. If developer mode is on, Vredefort asks you to set a confidence threshold; set it as needed. Here we set the confidence to 0.1 (10%).
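Applying a confidence threshold simply means discarding detections that score below it. A minimal sketch, with an assumed (label, score, box) detection format:

```python
def filter_detections(detections, confidence=0.1):
    """Keep detections whose score is at least the threshold.

    detections: list of (label, score, box) tuples.
    """
    return [d for d in detections if d[1] >= confidence]
```

A low threshold such as 0.1 keeps almost everything the model proposes, which is useful for inspecting raw behavior; raising it trades recall for fewer false positives.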

Model download and transfer learning from unpruned model
Vredefort provides one more feature: it allows you to download the model and dataset for further applications (such as adding logic on top of the model). Once you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) as a starting point for training on different datasets and save training time. You can also generate alerts and write use cases with that model.
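As an example of the kind of alert logic you could write on top of the downloaded model, the sketch below flags detected persons that have no safety-hat detection on them. The label strings, the detection format, and the "hat center inside person box" rule are simplifying assumptions.

```python
def center(box):
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)

def contains(box, point):
    xmin, ymin, xmax, ymax = box
    return xmin <= point[0] <= xmax and ymin <= point[1] <= ymax

def persons_without_hat(detections):
    """detections: (label, box) pairs; return person boxes with no hat.

    A person counts as wearing a hat if some hat box's center falls
    inside the person's box -- a deliberately simple association rule.
    """
    persons = [b for label, b in detections if label == "person"]
    hats = [b for label, b in detections if label == "safety_hat"]
    return [p for p in persons
            if not any(contains(p, center(h)) for h in hats)]
```

The same pattern extends to safety vests, or to triggering a notification whenever the returned list is non-empty.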

Any challenges faced
The model detected red shirts as safety vests. To resolve this, we collected more images of people wearing red shirts/t-shirts and annotated them precisely to avoid the confusion.

Limitations
     ⦁ The model is trained on video feeds from specific office premises, so it may struggle to detect these objects in other environments.

Improvements
The best approach is to record images or videos at the construction site itself, so the model sees scenes that reflect the complexity of real construction sites. For better accuracy, collect the dataset from different angles and balance the number of labeled images across all classes to reduce the mismatch between them.
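One way to spot such an imbalance is to compare per-class label counts. In the sketch below, the 50% threshold is an arbitrary illustrative choice, and the sample counts in the test are this model's own label totals.

```python
def imbalance_report(counts, min_frac=0.5):
    """counts: {label: n}; return labels whose count falls below
    min_frac of the largest class's count."""
    largest = max(counts.values())
    return [label for label, n in counts.items() if n < min_frac * largest]
```

Classes flagged by such a check are candidates for collecting and annotating more images.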

Model Details
Model Name – Safety hat and vest Detection
Dataset Images – 3633
Number of Labels – 3
Label name and count – Safety Hat (3139), Safety Vest (3369), Person (6587)
Accuracy – 86%
Class Accuracy – Safety Hat (80%), Safety Vest (88%), Person (93%)

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here
