Abstract: Visual impairment affects millions of people around the world. Unable to rely on their vision, people often compensate with other senses, such as hearing and touch, when performing daily tasks. Sensory substitution is offered as an alternative to assist people with vision loss, through devices capable of recognizing the environment, obstacles, and paths, and then providing feedback that a visually impaired person can understand. However, the literature contains navigation systems that remain far from real-world adoption and show little concern for the individuality of each user's preferences or their specific needs and limitations. Based on these problems, the main goal of this thesis is to provide an alternative for the autonomous and customizable navigation of visually impaired people, from the conception of a navigation system to its construction as a functional prototype. To that end, this thesis developed an abstract model for interactive and customizable navigation systems, providing a baseline for the design and development of this type of system. This work also created a customization methodology for the functional features of navigation systems, offering a way to specify how each feature of a customizable system can be customized. The abstract model and customization methodology resulted in a wearable prototype of a multimodal, interactive, and customizable navigation system. This thesis also proposes NSGA2CGP, an evolutionary, Cartesian, multi-objective optimization method for the automatic generation of optimized morphological filters that minimize error and complexity to correct depth images from RGB-D cameras containing areas of unknown distance, which affect the segmentation process of vision-based navigation systems. The NSGA2CGP method proved to be solid and capable of generating optimized filters that fix depth images with good diversity. The wearable prototype with the embedded navigation system was validated and then evaluated in ...
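To illustrate the kind of depth-image correction NSGA2CGP targets, the sketch below shows a hand-written morphological repair of unknown-distance pixels using OpenCV and NumPy. This is only a minimal illustrative example under the assumption that invalid readings are encoded as zero depth; it is not the thesis's evolved filters or its optimization pipeline, which generate and select such filters automatically.

```python
import cv2
import numpy as np

def fill_unknown_depth(depth, kernel_size=5):
    """Fill zero-valued (unknown-distance) pixels in a depth map with a
    morphological closing; a simple stand-in for an evolved filter."""
    # Assumption: pixels with value 0 are "unknown distance" readings.
    unknown = (depth == 0)

    # Closing (dilation then erosion) lets nearby valid depths grow into
    # the holes and then shrink back toward the original object edges.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    closed = cv2.morphologyEx(depth, cv2.MORPH_CLOSE, kernel)

    # Keep original measurements where they exist; use the closed image
    # only where the sensor reported no distance.
    return np.where(unknown, closed, depth)

# Usage with a synthetic 16-bit depth map containing a hole.
depth = np.full((120, 160), 1500, dtype=np.uint16)  # 1.5 m everywhere
depth[40:60, 70:90] = 0                              # unknown-distance region
fixed = fill_unknown_depth(depth)
print("remaining unknown pixels:", int((fixed == 0).sum()))
```

In the thesis, the filter structure itself (operators, kernel shapes, composition) is what the multi-objective search optimizes, trading reconstruction error against filter complexity rather than fixing a single closing by hand.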