Abstract : Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies ; Object detection and localization play a significant role in artificial intelligence, as they facilitate understanding of the surrounding environment. While architectures designed for this purpose have proven promising and continue to advance, certain objects, such as doors and door handles, remain largely unexplored. Recognizing these specific objects is crucial for autonomous decision-making, especially in robotics, as it enables safe and efficient interaction in dynamic environments such as hospitals. Decision-making regarding particular objects, such as doors and their handles, involves the robot executing specific actions, such as opening a door. However, achieving this objective goes beyond merely identifying the object: the robot also needs information on how to interact with it. In the case of handles, this means indicating to the robot the specific grip point for opening the door. In this study, the novel YOLO NAS architecture, previously unexplored for these objects, was trained. The results demonstrated remarkable effectiveness in detecting true positives, with a recall of 0.99, although precision was lower than that of the reference YOLO v8 model. Despite the lower precision, the visual performance of the model was notable, successfully detecting doors and handles under challenging conditions of lighting, contrast, and other relevant factors considered during the study. A distinctive aspect of this work is the integration of a model based on Euclidean geometry for locating the grip point. Unlike previous studies, which typically place this point at the centroid of the handle, the proposed model positions it at the end of the handle, thus leveraging the effective force to open the door with potentially less effort.
The bounding boxes predicted by the YOLO NAS model serve as input for generating this Euclidean model. Additionally, the detection model ...
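The idea of deriving the grip point from a predicted bounding box can be illustrated with a minimal sketch. This is not the dissertation's actual model: the function name, coordinate convention, and the assumption of a horizontal handle with the hinge on the left are all hypothetical, chosen only to contrast the centroid baseline with an end-of-handle grip point.

```python
def grip_point_from_bbox(x1, y1, x2, y2, use_centroid=False):
    """Return an (x, y) grip point for a detected handle.

    Hypothetical illustration: (x1, y1) is the top-left and (x2, y2)
    the bottom-right corner of the handle's bounding box. The centroid
    baseline places the grip at the box centre; the end-of-handle
    variant places it at the end farthest from the (assumed left-side)
    hinge, lengthening the lever arm so less force is needed to turn
    the handle and open the door.
    """
    if use_centroid:
        # Baseline used in previous studies: the box centroid.
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    # Assumed convention: horizontal handle, hinge on the left, so the
    # grip point sits at the right end of the box, vertically centred.
    return (x2, (y1 + y2) / 2.0)
```

For a box spanning (0, 0) to (10, 4), the centroid grip point is (5.0, 2.0), while the end-of-handle point is (10, 2.0), i.e. twice the horizontal distance from a left-side hinge.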