Abstract: The autonomous driving algorithm studied in this paper enables a ground vehicle to sense its environment through visual images and to move safely with little or no human input. Because of the limited computing power of end-side devices, an autonomous driving algorithm should use a lightweight yet high-performing model. Conditional imitation learning has proved to be an efficient and promising policy for autonomous driving and other applications on end-side devices, owing to its high performance and offline nature. In driving scenarios, images captured under different weather conditions exhibit different styles, shaped by interference factors such as illumination and raindrops. These factors challenge the perception ability of deep models and thus affect the decision-making process in autonomous driving. The first contribution of this paper is an investigation of the performance gap of driving models under different weather conditions. Building on this investigation, we utilise StarGAN-V2 to translate images from the source domains into the target clear-sunset domain. On top of the translated images, we propose Star-CILRS, a conditional imitation learning model with a ResNet backbone. The proposed method can convert an image into multiple styles using a single model, which makes it easier to deploy on end-side devices. Visualization results show that Star-CILRS eliminates some environmental interference factors. Our method outperforms competing methods, achieving success rates of 98%, 74%, and 22% in the three tasks, respectively.
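The inference pipeline the abstract describes can be sketched as a two-stage process: a StarGAN-V2-style generator translates the input frame into the clear-sunset domain, and a CILRS-style command-conditioned policy maps the translated frame to control outputs. The sketch below is a minimal illustration under stated assumptions; both networks are random-weight stand-ins (simple linear maps), not the authors' trained models, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def translate_to_sunset(frame, style_code):
    """Stand-in for StarGAN-V2: a single generator handles all target
    styles, selected by a style code, so one model covers every weather."""
    # A real generator is a conv net; here we just perturb the frame
    # by a scalar derived from the style code, keeping the shape intact.
    return frame + 0.1 * style_code.mean()

def cilrs_policy(frame, command, heads):
    """Stand-in for CILRS: the high-level navigation command selects a
    branch (head) that maps image features to (steer, throttle, brake)."""
    features = frame.mean(axis=(0, 1))   # crude global pooling -> 3 values
    return heads[command] @ features     # 3 control outputs

frame = rng.random((88, 200, 3))         # hypothetical camera frame
sunset_code = rng.random(64)             # target-domain style code
heads = {c: rng.random((3, 3)) for c in ("follow", "left", "right")}

translated = translate_to_sunset(frame, sunset_code)
controls = cilrs_policy(translated, "left", heads)
print(controls.shape)  # (3,) -> steer, throttle, brake
```

Keeping the translator separate from the policy means only one generator needs to ship on the device, regardless of how many source weather styles it must normalise.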