Model Name – Fruits Detection

What does the model detect?
This model detects fruits in images/videos.

What is the use of this model?
The handling of perishable foods like fruits in the supply chain involves sorting, weighing, and identifying expired produce. Traditionally, these processes were done manually but are being more automated as technology advances. Recently, Industrial IoT and AI are increasingly playing a role in supply chains. Industry players can utilize technologies like image recognition to help classify products, make decisions at the edge, and optimize their operations. This model empowers industries in many ways.

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
For the dataset, we recorded one video containing 4 fruits. Vredefort automatically converted the video into images, giving 1267 images across four classes, i.e., Apple, Banana, Watermelon, and Grapes.
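Vredefort handles the video-to-image conversion internally; as a rough sketch of what that step involves, the snippet below samples evenly spaced frames from a video with OpenCV. The function names, the target count of 1267, and the use of OpenCV are our own illustrative assumptions, not Vredefort's actual implementation.

```python
def sample_indices(total_frames: int, target_count: int) -> list:
    """Evenly spaced frame indices so a long video yields about target_count images."""
    if total_frames <= target_count:
        return list(range(total_frames))
    step = total_frames / target_count
    return [int(i * step) for i in range(target_count)]

def extract_frames(video_path: str, out_dir: str, target_count: int = 1267) -> int:
    """Save sampled frames as JPEGs; returns how many were written. Requires OpenCV."""
    import os
    import cv2  # hypothetical choice of library; Vredefort's internals are not public
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    keep = set(sample_indices(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), target_count))
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

Sampling evenly (rather than taking every frame) keeps the dataset size manageable and reduces near-duplicate frames.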

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it to Vredefort. Vredefort automatically cleans the data by removing corrupt images and resizing the rest to a suitable resolution.
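This cleaning step happens automatically inside Vredefort; a minimal sketch of the same idea, assuming Pillow and a hypothetical target resolution of 640 pixels on the longer side, might look like this:

```python
from pathlib import Path

def target_size(width: int, height: int, max_side: int = 640) -> tuple:
    """Scale so the longer side becomes max_side, preserving aspect ratio."""
    scale = max_side / max(width, height)
    return (round(width * scale), round(height * scale))

def clean_dataset(folder: str, max_side: int = 640) -> int:
    """Delete unreadable images and resize the rest in place; returns the number removed.
    Requires Pillow. The 640-pixel target is an illustrative assumption."""
    from PIL import Image
    removed = 0
    for p in Path(folder).glob("*.jpg"):
        try:
            with Image.open(p) as im:
                im.verify()  # raises an exception on corrupt data
            with Image.open(p) as im:
                im.resize(target_size(*im.size, max_side)).save(p)
        except Exception:
            p.unlink()  # drop the corrupt image
            removed += 1
    return removed
```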

Step 3 – Data Annotation
The computer learns to detect objects in images through a process of labeling. We therefore drew bounding boxes around the objects of interest and labeled them as apple, banana, watermelon, or grapes accordingly. We annotated all 1267 images using the built-in Vredefort tool.

Annotation Rules – (Keep them in mind for better detection)
     ⦁ Skip the object if it is in motion or blurred.
     ⦁ Precisely draw the bounding box on the object.
     ⦁ Bounding boxes should not be too large.
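The last two rules can be checked programmatically. The sketch below validates a bounding box against the image bounds and an upper size limit; the record layout, the (xmin, ymin, xmax, ymax) convention, and the 90% area cap are illustrative assumptions, not Vredefort's actual annotation format.

```python
def box_ok(box, img_w, img_h, max_area_frac=0.9):
    """Check that an (xmin, ymin, xmax, ymax) box lies inside the image
    and does not cover more than max_area_frac of it."""
    xmin, ymin, xmax, ymax = box
    if not (0 <= xmin < xmax <= img_w and 0 <= ymin < ymax <= img_h):
        return False
    area = (xmax - xmin) * (ymax - ymin)
    return area / (img_w * img_h) <= max_area_frac

# Hypothetical annotation record for one image
annotation = {
    "image": "frame_000042.jpg",
    "labels": [{"class": "apple", "bbox": (120, 80, 260, 210)}],
}
```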

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, the batch size per GPU, the neural network model, and so on. If you provide no inputs, the default settings are used.
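Conceptually, this "fall back to defaults" behavior is a simple configuration merge. The parameter names and default values below are made up for illustration; Vredefort's actual defaults are not documented here.

```python
# Hypothetical defaults -- not Vredefort's real values
DEFAULTS = {"epochs": 80, "batch_size_per_gpu": 8, "network": "detectnet_v2"}

def resolve_config(user=None):
    """Use the user's values where given; fall back to defaults for the rest."""
    return {**DEFAULTS, **(user or {})}
```

For example, a developer who only changes the epoch count still gets the default batch size and network.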

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
Evaluation has two parts: the first is accuracy, and the second is playing inference videos. Vredefort reports both total model accuracy and class-wise accuracy. In this case, four classes are present. We achieved 97% model accuracy; the individual class accuracies are 96% for apple, 99% for banana, 99% for watermelon, and 97% for grapes.
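Class-wise and overall accuracy are computed from matched detections. As a sketch of the arithmetic (not Vredefort's evaluation code, which likely uses detection metrics such as mAP internally), given (true label, predicted label) pairs:

```python
def class_accuracy(records):
    """Per-class and overall accuracy from (true_label, predicted_label) pairs."""
    totals, hits = {}, {}
    for truth, pred in records:
        totals[truth] = totals.get(truth, 0) + 1
        hits[truth] = hits.get(truth, 0) + (truth == pred)
    per_class = {c: hits[c] / totals[c] for c in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return per_class, overall
```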

A new video for inference
We made another similar video of the 4 fruits and used it for inference. If developer mode is on, Vredefort will ask you to set a confidence threshold, which you can choose as per your convenience. Here we set the confidence to 0.1 (10%).
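The confidence threshold simply discards detections the model is unsure about. A sketch of that filtering step, with a made-up detection record layout:

```python
def filter_detections(dets, conf_threshold=0.1):
    """Keep only detections at or above the confidence threshold (0.1 = 10%)."""
    return [d for d in dets if d["confidence"] >= conf_threshold]

# Hypothetical raw model output for one frame
raw = [
    {"class": "apple", "confidence": 0.92, "bbox": (120, 80, 260, 210)},
    {"class": "banana", "confidence": 0.05, "bbox": (10, 10, 40, 30)},
]
```

A low threshold like 0.1 keeps almost everything; raising it trades missed detections for fewer false positives.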

Model download and transfer learning from unpruned model
Vredefort provides one more feature to get more out of the model. It allows you to download the model and dataset for further applications (like adding logic on top of your model). Once you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) as a starting point for different datasets and save training time. You can generate alerts and write use cases with that model.

Any challenges faced
None

Limitations
     ⦁ The model is trained on images of only 4 fruits (apple, banana, grapes, watermelon) and hence will work best on those fruits in images or video feeds.
     ⦁ It will struggle to detect these fruits wherever other fruits or complex backgrounds are present in the images/videos.

Improvements
For more accuracy, collect the dataset from different angles, include complex environments, and balance the dataset across all classes by reducing the mismatch in the number of images per class. If the image counts are roughly equal for every class, class imbalance is not a concern.
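A quick way to check balance is the ratio of the largest to the smallest label count. Using the label counts from this model (listed under Model Details below), a sketch:

```python
def imbalance_ratio(counts):
    """Max/min label count; values near 1.0 indicate a balanced dataset."""
    values = list(counts.values())
    return max(values) / min(values)

# Label counts from this dataset
label_counts = {"apple": 1233, "banana": 1218, "grapes": 1181, "watermelon": 1120}
```

Here the ratio is about 1.10, so the classes are within roughly 10% of each other and the dataset is reasonably balanced.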

Model Details
Model Name – Fruits Detection
Dataset Images – 1267
Number of Labels – 4
Label name and count – apple (1233), banana (1218), grapes (1181), watermelon (1120)
Accuracy – 97%
Class Accuracy – apple (96%), banana (99%), grapes (97%), watermelon (99%)

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here
