Category: In-House AI models

In-House AI models

Model Name – Satellite Aeroplane Detection

What does the model detect?
This model detects aeroplanes on the ground from satellite imagery.

What is the use of this model?
This model can serve military applications and enable better surveillance of airports.

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We collected satellite images of aeroplanes from the Google Earth website. In Google Earth, we kept a 2D view at a fixed height (80 m). We selected the 100 busiest airports and collected 1463 images of aeroplanes. There is only one class – Aeroplane.

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it on Vredefort. Vredefort automatically cleans the data by removing the corrupt images and resizing them to a suitable resolution.
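For readers who want to reproduce this step outside the platform, a minimal sketch of this kind of cleaning is shown below. The directory names, target resolution, and use of the Pillow library are assumptions for illustration, not Vredefort's actual implementation.

    import os
    from PIL import Image

    SRC_DIR = "dataset/raw"        # hypothetical input folder
    DST_DIR = "dataset/clean"      # hypothetical output folder
    TARGET_SIZE = (1280, 720)      # assumed "suitable resolution"

    os.makedirs(DST_DIR, exist_ok=True)
    for name in os.listdir(SRC_DIR):
        path = os.path.join(SRC_DIR, name)
        try:
            with Image.open(path) as img:
                img.verify()                  # raises an error if the file is corrupt
            with Image.open(path) as img:     # reopen; verify() leaves the file unusable
                img.convert("RGB").resize(TARGET_SIZE).save(os.path.join(DST_DIR, name))
        except Exception:
            print(f"Removing corrupt image: {name}")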

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the concerned objects and labeled them as Aeroplane (only one object to detect).
We annotated 1463 images using the inbuilt Vredefort tool.
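As an illustration only, a single annotation produced in this step could be represented as the record below. The field names and pixel values are assumptions; Vredefort stores annotations in its own internal format.

    # One hypothetical bounding-box annotation for the Aeroplane class.
    annotation = {
        "image": "airport_001.jpg",        # assumed file name
        "label": "Aeroplane",              # the single class in this model
        "bbox": [412, 250, 655, 397],      # [x_min, y_min, x_max, y_max] in pixels
    }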

Annotation Rules – (Keep them in mind for better detection)
⦁ Skip the object if it is in motion or blurred.
⦁ Precisely draw the bounding box around the object.
⦁ Bounding boxes should not be too large.

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, batch size per GPU, neural network model, etc. If no inputs are provided, the default settings are used.
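A sketch of the kind of settings developer mode exposes is given below. The key names, default values, and network name are assumptions for illustration, not Vredefort's actual configuration.

    # Hypothetical training configuration with platform defaults.
    default_config = {
        "epochs": 120,              # passes over the training set
        "batch_size_per_gpu": 16,   # images per GPU per step
        "network": "detectnet_v2",  # assumed detector architecture
    }

    user_overrides = {}             # empty when the user provides no inputs

    # With no user inputs, the effective settings are simply the defaults.
    effective_config = {**default_config, **user_overrides}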

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
In evaluation, there are two parts. The first is accuracy and the second is to play inference videos. Vredefort enables us to obtain total model accuracy and class-wise accuracy. In this case, only one class is present. We achieved 76% model accuracy.
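As a simplified sketch of how total and class-wise accuracy relate, consider the tally below. The counts and the scoring scheme are assumptions for illustration; the exact metric Vredefort computes is not shown here.

    # Hypothetical per-class tallies of correctly detected objects.
    per_class = {"Aeroplane": {"correct": 1110, "total": 1463}}  # assumed counts

    class_accuracy = {c: r["correct"] / r["total"] for c, r in per_class.items()}
    total_accuracy = (sum(r["correct"] for r in per_class.values())
                      / sum(r["total"] for r in per_class.values()))

    # With a single class, total accuracy equals the class accuracy (~76% here).
    print(class_accuracy, round(total_accuracy * 100))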

A new video for inference
We recorded a video with SimpleScreenRecorder using the same 2D view and fixed height, and used that video to check the inference. If developer mode is on, it will ask you to set a confidence threshold. You can set it as per your convenience. Here we set 0.1 [10%] confidence.
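The confidence value simply acts as a cut-off: detections scoring below it are dropped before the inference video is rendered. A minimal sketch of that filtering (with an assumed detection structure) is shown below.

    CONFIDENCE_THRESHOLD = 0.1  # the 10% value used for this model

    detections = [
        {"label": "Aeroplane", "bbox": [120, 80, 260, 170], "score": 0.87},
        {"label": "Aeroplane", "bbox": [300, 40, 390, 110], "score": 0.06},
    ]

    kept = [d for d in detections if d["score"] >= CONFIDENCE_THRESHOLD]
    # Only the first detection survives the 10% cut-off; lowering the threshold
    # keeps more (possibly noisier) detections in the inference video.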

Model download and transfer learning from unpruned model
Beyond evaluating accuracy, Vredefort provides one more feature: it allows you to download the model and dataset for further applications (like adding logic to your model). If you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) on different datasets and save training time. You can generate alerts and write use cases with that model.
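For illustration, transfer learning from a downloaded unpruned model might look like the PyTorch-style sketch below. The checkpoint path, detector architecture, and loading calls are assumptions; Vredefort's exported format and training toolchain may differ.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Hypothetical path to the unpruned checkpoint downloaded from Vredefort.
    checkpoint = torch.load("downloads/aeroplane_unpruned.pth", map_location="cpu")

    # Assumed detector; num_classes = background + Aeroplane.
    model = fasterrcnn_resnet50_fpn(num_classes=2)
    model.load_state_dict(checkpoint, strict=False)  # reuse matching weights

    # Fine-tuning this model on a new dataset starts from learned features,
    # which is where the training-time saving comes from.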

Any challenges faced
None

Limitations
⦁ The model will work best on satellite imagery captured from a height of 80 m.
⦁ The model is trained on satellite imagery and hence will work best on those images or video feeds.
⦁ It will struggle to detect aeroplanes from other sources such as mobile camera videos.

Improvements
More datasets can be collected to detect aeroplanes from different heights and sources to improve the model accuracy.

Model Details

Model Name – Satellite Aeroplane Detection
Dataset Images – 1463
Number of Labels – 1
Label name and count – Aeroplane (6605)
Accuracy – 76%

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here

Author:

In-House AI models

Model Name – Car Parking Occupancy Detection

What does the model detect?
This model detects occupancy for car parking in images/videos.

What is the use of this model?
Techniques for car parking occupancy detection are significant for the management of car parking lots. Knowing the real-time availability of free parking spaces and communicating it to users helps reduce queues, improve scalability, and minimize the time required to find a place in the parking lot. In many parking lots, ground sensors determine the status of the various spaces. These require expensive installation and maintenance in every parking space, especially in parking lots with many spots. This model can be used to overcome that problem.
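As a rough sketch of how the model's output could drive the availability figure described above, the snippet below counts occupied and vacant spots from one frame's detections. The detection structure is an assumption for illustration.

    def summarise_occupancy(detections):
        """Count occupied and vacant spots in one frame of detections."""
        occupied = sum(1 for d in detections if d["label"] == "Car")
        vacant = sum(1 for d in detections if d["label"] == "Vacant")
        return {"occupied": occupied, "vacant": vacant}

    frame = [
        {"label": "Car", "score": 0.91},
        {"label": "Vacant", "score": 0.74},
        {"label": "Vacant", "score": 0.63},
    ]
    print(summarise_occupancy(frame))  # {'occupied': 1, 'vacant': 2}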

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We collected 1497 images from an open-source dataset. These images were captured from a camera mounted on a building rooftop in front of a car parking lot. Then we split the dataset into train and test. There were two classes – Car and Vacant.
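A minimal sketch of such a train/test split is shown below. The 80/20 ratio and file names are assumptions; the actual split used for this dataset is not documented here.

    import random

    images = [f"parking_{i:04d}.jpg" for i in range(1497)]  # hypothetical file names
    random.seed(0)
    random.shuffle(images)

    split = int(0.8 * len(images))                 # assumed 80/20 split
    train_images, test_images = images[:split], images[split:]
    print(len(train_images), len(test_images))     # 1197 300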

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it on Vredefort. Vredefort automatically cleans the data by removing the corrupt images and resizing them to a suitable resolution.

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the concerned objects and labeled them as Car and Vacant accordingly. We annotated 1497 images using the inbuilt Vredefort tool.
Annotation Rules – (Keep them in mind for better detection)
    ⦁ Skip the object if it is in motion or blurred.
    ⦁ Precisely draw the bounding box around the object.
    ⦁ Bounding boxes should not be too large.

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, batch size per GPU, neural network model, etc. If no inputs are provided, the default settings are used.

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
In evaluation, there are two parts. The first is accuracy and the second is to play inference videos. Vredefort enables us to obtain total model accuracy and class-wise accuracy. In this case, two classes are present. We achieved 40% model accuracy. Individual class accuracy is 64% for Car and 16% for Vacant.

A new video for inference
We made a video from the test dataset images and used it for inference. If developer mode is on, it will ask you to set a confidence threshold. You can set it as per your convenience. Here we set 0.1 [10%] confidence.

Model download and transfer learning from unpruned model
Beyond evaluating accuracy, Vredefort provides one more feature: it allows you to download the model and dataset for further applications (like adding logic to your model). If you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) on different datasets and save training time. You can generate alerts and write use cases with that model.

Any challenges faced
The model was not working well because the white parking lines were not clearly visible. The vacant spots were not detected precisely, so we annotated more images to reduce the errors.

Limitations
     ⦁ The model is trained on an outdoor camera and hence will work best on those images or video feeds.
     ⦁ It will struggle to detect parking spots if the white lines are not visible.

Improvements
For more accuracy, collect the dataset from different angles, including complex environments, and balance the dataset across all classes by reducing the mismatch in the number of images per class. You need not worry about class imbalance if the images in your dataset are balanced for all classes.
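A quick way to check the balance before training is to compare the per-class annotation counts, as in the sketch below (the counts come from the Model Details section; the tolerance is an assumption).

    from collections import Counter

    counts = Counter({"Car": 7740, "Vacant": 8745})   # annotation counts for this dataset
    ratio = max(counts.values()) / min(counts.values())
    print(f"Imbalance ratio: {ratio:.2f}")            # ~1.13, i.e. roughly balanced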

Model Details
Model Name – Car Parking Occupancy Detection
Dataset Images – 1497
Number of Labels – 2
Label name and count – Car (7740), Vacant (8745)
Accuracy – 40%
Class Accuracy – Car (64%), Vacant (16%)

Download Links

Dataset Download –  Download here

Model Download Link – Download here 

Inference Video Link – Download here 

Author:

In-House AI models

Model Name – Satellite Ship Detection

What does the model detect?
This model detects ships in the sea from satellite imagery.

What is the use of this model?
This model empowers government institutions to carry out stricter and finer maritime security surveillance. It helps to manage marine traffic at busy ports. The detection enables the concerned authorities to take quick decisions and reduce pirate threats.

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We collected satellite images of ships from the Google Earth website. In Google Earth, we kept a 2D view at a fixed height – 100 m for big ships and 60 m for small ships. We selected the 50 busiest ports and collected 645 ship images. There is only one class – Ship.

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it on Vredefort. Vredefort automatically cleans the data by removing the corrupt images and resizing them to a suitable resolution.

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the concerned objects and labeled them as ship (only one object to detect).
We annotated 645 images using the inbuilt Vredefort tool.

Annotation Rules – (Keep them in mind for better detection)
    ⦁ Skip the object if it is in motion or blurred.
    ⦁ Precisely draw the bounding box around the object.
    ⦁ Bounding boxes should not be too large.

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, batch size per GPU, neural network model, etc. If no inputs are provided, the default settings are used.

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
In evaluation, there are two parts. The first is accuracy and the second is to play inference videos. Vredefort enables us to obtain total model accuracy and class-wise accuracy. In this case, only one class is present. We achieved 55% model accuracy.

A new video for inference
We recorded a video of the 2D view at the fixed height using SimpleScreenRecorder to check the inference. If developer mode is on, it will ask you to set a confidence threshold. You can set it as per your convenience. Here we set 0.1 [10%] confidence.

Model download and transfer learning from unpruned model
Beyond evaluating accuracy, Vredefort provides one more feature: it allows you to download the model and dataset for further applications (like adding logic to your model). If you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) on different datasets and save training time. You can generate alerts and write use cases with that model.

Any challenges faced
Collecting the images was challenging due to security reasons at certain ports.

Limitations
    ⦁ The model will work best on satellite imagery captured from a height of 100 m for big ships and 60 m for small ships.
    ⦁ The model is trained on satellite imagery and hence will work best on those images or video feeds.
    ⦁ It will struggle to detect ships from other sources such as mobile camera videos.

Improvements
More datasets can be collected to detect ships from different heights and varied sources to improve the model accuracy.

Model Details
Model Name – Satellite Ship Detection
Dataset Images – 645
Number of Labels – 1
Label name and count – ship (1399)
Accuracy – 55%

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here

Author:

In-House AI models

Model Name – Street View Detection

What does the model detect?
This model detects text in street view images/videos.

What is the use of this model?
The model is the foundation for OCR technology. It is able to detect text in an image and helps run OCR on it. It has uses in the logistics sector, for example, number plate recognition. The model can be embedded in blind-assistive devices with OCR and text-to-speech to read out loud the text on signage. In dealing with outdoor street-level imagery, we note two characteristics: (1) image text often comes from business signage, and (2) business names are available through geographic business searches. These factors make the Street View Text set uniquely suited for word spotting in the wild: given a street view image, the goal is to identify words from nearby businesses. In computer vision, the method of converting text present in images or scanned documents into a machine-readable format that can later be edited, searched, and used for further processing is known as Optical Character Recognition (OCR).
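To illustrate how this detector could feed an OCR stage, the sketch below crops each detected text box and passes it to an OCR engine. pytesseract is used here as an assumed OCR backend, and the detection format is also an assumption.

    from PIL import Image
    import pytesseract  # assumed OCR backend (Tesseract wrapper)

    def read_detected_text(image_path, detections):
        """Run OCR on each detected text region and return the recognised strings."""
        image = Image.open(image_path)
        texts = []
        for det in detections:
            x_min, y_min, x_max, y_max = det["bbox"]
            crop = image.crop((x_min, y_min, x_max, y_max))
            texts.append(pytesseract.image_to_string(crop).strip())
        return texts

    # Hypothetical usage with one detected signage region.
    print(read_detected_text("street_view.jpg", [{"bbox": [40, 60, 220, 110]}]))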

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We collected a dataset from Kaggle. It contains 300 images for training and 50 images for testing. There is only one class – Text.
The Street View Text (SVT) dataset was harvested from Google Street View. Image text in this data exhibits high variability and often has low resolution.

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it on Vredefort. Vredefort automatically cleans the data by removing the corrupt images and resizing them to a suitable resolution.

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the concerned objects and labeled them as text (only one object to detect).
We annotated 300 images using the inbuilt Vredefort tool.

Annotation Rules – (Keep them in mind for better detection)
     ⦁ Skip the object if it is in motion or blurred.
     ⦁ Precisely draw the bounding box around the object.
     ⦁ Bounding boxes should not be too large.

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, batch size per GPU, neural network model, etc. If no inputs are provided, the default settings are used.

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
In evaluation, there are two parts. The first is accuracy and the second is to play inference videos. Vredefort enables us to obtain total model accuracy and class-wise accuracy. In this case, only one class is present. We achieved 14% model accuracy.

A new video for inference
We made a video from the test dataset images and used it for inference. If developer mode is on, it will ask you to set a confidence threshold. You can set it as per your convenience. Here we set 0.1 [10%] confidence.

Model download and transfer learning from unpruned model
Beyond evaluating accuracy, Vredefort provides one more feature: it allows you to download the model and dataset for further applications (like adding logic to your model). If you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) on different datasets and save training time. You can generate alerts and write use cases with that model.

Any challenges faced
None

Limitations
     ⦁ The model is trained on street view images and hence will work best on those images or video feeds.
     ⦁ It will struggle to detect text with different shapes and sizes.

Improvements
For better model accuracy, collect a dataset containing text of different colors, sizes, and shapes.

Model Details
Model Name – Street Text Detection
Dataset Images – 300
Number of Labels – 1
Label name and count – text (1012)
Accuracy – 14%

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here

Author:
In-House AI models

Model Name – Tomato Detection

What does the model detect?
This model detects tomatoes from images/videos.

What is the use of this model?
The application of artificial intelligence to agriculture has increased globally, particularly in harvesting robot development. Harvesting robots relieve the manual picking of vegetables/fruits, which is very tedious, time-consuming, expensive, and relatively prone to human error. Meanwhile, the autonomous detection of vegetables/fruits or other agricultural products is the first important step for harvesting robots. In this model, a manipulator (a lift-assist device used to help workers lift, maneuver, and place articles in a process) is guided to pick the tomatoes based on the detections.
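As a rough sketch of how a detection could guide the manipulator, the snippet below takes the centre of a bounding box as the pick point in image coordinates. Mapping image coordinates into the robot's workspace (camera calibration) is omitted and would be required in practice.

    def pick_point(bbox):
        """Return the (x, y) centre of a [x_min, y_min, x_max, y_max] box."""
        x_min, y_min, x_max, y_max = bbox
        return ((x_min + x_max) / 2, (y_min + y_max) / 2)

    # Hypothetical detection of one ripe tomato.
    print(pick_point([150, 200, 230, 280]))  # (190.0, 240.0)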

Approach to creating a model in Vredefort

Step 1 – Dataset Collection
We collected a dataset from Kaggle. There are 800 images for training and 95 images for testing. There is only one class – Tomato.

Step 2 – Data Cleaning
After collecting the dataset, we uploaded it on Vredefort. Vredefort automatically cleans the data by removing the corrupt images and resizing them to a suitable resolution.

Step 3 – Data Annotation
The computer learns to detect objects from images through a process of labeling. Thus, we drew boxes around the concerned objects and labeled them as tomatoes (only one object to detect).
We annotated 800 images using the inbuilt Vredefort tool.

Annotation Rules – (Keep them in mind for better detection)
     ⦁ Skip the object if it is in motion or blurred.
     ⦁ Precisely draw the bounding box around the object.
     ⦁ Bounding boxes should not be too large.

[Optional] Step 4 – Tuning Parameters
If you register as a developer and developer mode is on, you can modify the number of epochs, batch size per GPU, neural network model, etc. If no inputs are provided, the default settings are used.

Step 5 – Training
The training process takes place automatically with a single click.

Evaluation of the model
After training, we can evaluate the model.
In evaluation, there are two parts. The first is accuracy and the second is to play inference videos. Vredefort enables us to obtain total model accuracy and class-wise accuracy. In this case, only one class is present. We achieved 59% model accuracy.

A new video for inference
We made a video from the test dataset images and used it for inference. If developer mode is on, it will ask you to set a confidence threshold. You can set it as per your convenience. Here we set 0.1 [10%] confidence.

Model download and transfer learning from unpruned model
Beyond evaluating accuracy, Vredefort provides one more feature: it allows you to download the model and dataset for further applications (like adding logic to your model). If you have downloaded the model files, you can use the unpruned model (click here to know more about the unpruned model) on different datasets and save training time. You can generate alerts and write use cases with that model.

Any challenges faced
None

Limitations
     ⦁ It will work best on a feed of ripe tomatoes (red colour).
     ⦁ It will struggle to detect unripe tomatoes because the colours of the tomato plant's leaves and unripe tomatoes are almost similar.

Improvements
To improve the model accuracy, more data can be gathered and the model retrained to precisely detect both ripe and unripe tomatoes.

Model Details
Model Name – Tomato Detection
Dataset Images – 800
Number of Labels – 1
Label name and count – tomato (4384)
Accuracy – 59%

Download Links

Dataset Download – Download here

Model Download Link – Download here

Inference Video Link – Download here

Author: