AI can make a substantial contribution to helping visually impaired people. We developed a set of features to assist blind people in their daily tasks.
The goal is to help blind users find hard-to-locate objects, such as electrical wall outlets, and to provide smart navigation for tasks such as crossing the road by detecting zebra-crossing lines and traffic lights.
1-Our solution achieved strong accuracy on benchmark test sets. It was also tested by blind users and adopted as a new feature on their devices.
2-We solved several technical challenges: limited datasets, many different types of zebra crossings, and existing models that give no indication of the crossing direction.
3-To achieve this, we extracted scenes automatically using Google StreetView, used active learning to select the best samples to annotate, used a foundation model for weak annotations, and modified existing models (YOLOv8) to add direction vectors for zebra crossings.
Our solution includes the following features:
1-Generic object detection: our models detect generic objects through extended training (fine-tuning) of pre-trained models so they adapt to new, application-specific object classes (a fine-tuning sketch follows this list).
2-Meta-learning: we adopt a meta-learning approach to locate new objects given a few support samples. The user can take a few photos of a new class, and the model can then detect objects of that class (see the few-shot sketch after this list).
3-Siamese model: using a Siamese network, we follow the FaceNet methodology and verify face identity with a triplet loss (see the triplet-loss sketch after this list).
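One common way to realize the generic-object feature is to take a detector pre-trained on a large dataset and replace its classification head for the new classes before fine-tuning on a small, task-specific dataset. The snippet below is a minimal sketch of that idea using torchvision's Faster R-CNN; the class count and dataset are placeholders, not our actual pipeline.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical example: 3 new classes (e.g. wall outlet, light switch, door handle) + background.
NUM_CLASSES = 4

# Start from a detector pre-trained on COCO.
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

# Replace the box-classification head so the model predicts our new classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)

def train_one_step(images, targets):
    """One extended-training step on the new, small dataset.

    `images` is a list of CHW tensors, `targets` a list of dicts with
    'boxes' (N x 4) and 'labels' (N,) as expected by torchvision detectors.
    """
    model.train()
    loss_dict = model(images, targets)   # classification + box-regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```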
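For the meta-learning feature, one way to match candidate regions against a handful of user-provided support photos is a prototype-based comparison in embedding space: average the support embeddings into a class prototype and score candidates by similarity to it. This is only a sketch under that assumption; the embedding backbone is a placeholder choice and the actual few-shot detector may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# A frozen backbone used as a generic embedding function (placeholder choice).
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d feature vector
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) normalized tensors -> (N, 512) L2-normalized embeddings."""
    return F.normalize(backbone(images), dim=1)

@torch.no_grad()
def build_prototype(support_images: torch.Tensor) -> torch.Tensor:
    """Average the few support embeddings into one class prototype (prototypical-network style)."""
    return F.normalize(embed(support_images).mean(dim=0, keepdim=True), dim=1)

@torch.no_grad()
def score_candidates(prototype: torch.Tensor, candidate_crops: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of each candidate crop to the prototype; higher = more likely the new object."""
    return (embed(candidate_crops) @ prototype.T).squeeze(1)
```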
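The Siamese face-verification feature follows the FaceNet recipe: embed faces, train with a triplet loss so an anchor lands closer to a positive (same person) than to a negative (different person), and verify identity by thresholding the embedding distance. A minimal sketch of that loss and the verification check, with a hypothetical embedding network and an illustrative threshold:

```python
import torch
import torch.nn.functional as F

# Standard triplet margin loss as used in FaceNet-style training.
triplet_loss = torch.nn.TripletMarginLoss(margin=0.2)

def training_step(embedder, anchor, positive, negative, optimizer):
    """anchor/positive/negative: batches of face images of the same / a different identity."""
    a = F.normalize(embedder(anchor), dim=1)
    p = F.normalize(embedder(positive), dim=1)
    n = F.normalize(embedder(negative), dim=1)
    loss = triplet_loss(a, p, n)   # pull anchor-positive together, push anchor-negative apart
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

@torch.no_grad()
def same_person(embedder, face_a, face_b, threshold: float = 0.9) -> bool:
    """Verify identity of one face pair by comparing embedding distance to a tuned threshold."""
    ea = F.normalize(embedder(face_a), dim=1)
    eb = F.normalize(embedder(face_b), dim=1)
    return torch.norm(ea - eb, dim=1).item() < threshold
```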
We solved the following technical challenges:
1-Limited datasets
2-Many different types of zebra crossings
3-Existing models give no crossing direction; they output only bounding boxes or segmentation masks
We addressed them using the following techniques:
1-Extracting scenes automatically using Google StreetView (see the Street View sketch below)
2-Active learning to select the most informative samples to annotate (see the uncertainty-sampling sketch below)
3-Using a foundation model for weak annotations (see the weak-labeling sketch below)
4-Modifying existing models (YOLOv8) to add direction vectors for zebra crossings (see the direction-head sketch below)
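A few short sketches illustrate these techniques. For automatic scene extraction, the Google Street View Static API returns an image for a given location and heading; the snippet below samples several headings around a point. The API key, coordinates, and output paths are placeholders, and request parameters should be checked against the current Street View Static API documentation.

```python
import requests

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_API_KEY"   # placeholder

def fetch_scene(lat: float, lng: float, heading: int, out_path: str) -> None:
    """Download one Street View frame at a given heading (0-360 degrees)."""
    params = {
        "size": "640x640",
        "location": f"{lat},{lng}",
        "heading": heading,
        "fov": 90,
        "pitch": 0,
        "key": API_KEY,
    }
    resp = requests.get(STREETVIEW_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# Sample four headings around an intersection to capture crossings from several angles.
for heading in (0, 90, 180, 270):
    fetch_scene(48.8584, 2.2945, heading, f"scene_{heading}.jpg")
```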
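For active learning, a common strategy is uncertainty sampling: run the current detector over the unlabeled pool and send for annotation the images where it is least confident. A minimal sketch under that assumption, with a hypothetical `detector` callable that returns per-detection confidence scores:

```python
from typing import Callable, Iterable, List, Tuple

def select_for_annotation(
    image_paths: Iterable[str],
    detector: Callable[[str], List[float]],   # hypothetical: image path -> detection confidences
    budget: int,
) -> List[str]:
    """Pick the `budget` images whose best detection is least confident (uncertainty sampling)."""
    scored: List[Tuple[float, str]] = []
    for path in image_paths:
        confidences = detector(path)
        # Images with no detections, or only weak ones, are the most informative to label.
        top = max(confidences) if confidences else 0.0
        scored.append((top, path))
    scored.sort(key=lambda t: t[0])   # least confident first
    return [path for _, path in scored[:budget]]
```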
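For weak annotations, an open-vocabulary foundation model can propose boxes from a text prompt, which are then reviewed or refined rather than labeled from scratch. This sketch assumes the Hugging Face zero-shot object-detection pipeline with OWL-ViT; the model choice, prompts, and threshold are illustrative, not necessarily the exact foundation model we used.

```python
from transformers import pipeline

# Open-vocabulary detector prompted with free-text labels (illustrative model choice).
weak_labeler = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

def weak_annotate(image_path: str):
    """Return proposed boxes for zebra crossings and traffic lights as weak labels."""
    proposals = weak_labeler(
        image_path,
        candidate_labels=["zebra crossing", "traffic light"],
    )
    # Keep only reasonably confident proposals; a human (or a later model) cleans them up.
    return [p for p in proposals if p["score"] > 0.3]

# Example: weak labels for one of the Street View frames downloaded earlier.
# boxes = weak_annotate("scene_0.jpg")   # each item has 'label', 'score', and a 'box' dict
```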
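Finally, adding a crossing direction to a detector amounts to attaching a small regression head that predicts a unit vector (cosine and sine of the crossing angle) alongside each box. The YOLOv8 head internals are not shown here; the module below is only a conceptual sketch of the extra output and its loss, not the actual modification we made.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionHead(nn.Module):
    """Predicts a 2-D unit vector (cos θ, sin θ) per detection from its feature vector."""

    def __init__(self, in_features: int = 256):
        super().__init__()
        self.fc = nn.Linear(in_features, 2)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Normalizing keeps the output on the unit circle, so it encodes direction only.
        return F.normalize(self.fc(feats), dim=1)

def direction_loss(pred: torch.Tensor, target_angle: torch.Tensor) -> torch.Tensor:
    """Cosine-style loss between predicted vectors and ground-truth crossing angles (radians)."""
    target = torch.stack([torch.cos(target_angle), torch.sin(target_angle)], dim=1)
    return (1.0 - (pred * target).sum(dim=1)).mean()
```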