Abstract: With technology advancing at an unprecedented pace, artificial intelligence (AI), and more specifically its generative branch, has begun to progress at an exponential rate. Although the technology is still young, its popularity has grown rapidly, and tools such as ChatGPT are now used across a wide range of fields (Feng, Han & Lan, 2024). The ease with which these models can be customized has also produced a large number of bespoke solutions and tools for generating images, text, slides, code, and more. This rapid development and widespread use of the technology in everyday life can lead to overdependence or even addiction. As people grow more comfortable handing decisions over to AI, they face ethical and moral questions about who is responsible when AI makes a wrong decision that causes harm. A good example of this technology is autonomous vehicles (AVs). Since generative AI is already being used in humanoid robots intended to take over extreme and difficult tasks from humans, the idea of the same AI controlling a car no longer seems far-fetched. However, if an AV is involved in (or causes) an accident, the question of who is responsible arises, creating responsibility gaps. So the question has to be asked: who is to blame? The owner of the AV, the manufacturer, or perhaps the programmer who developed the AI system? It was decided to examine this issue through the lens of AVs, as this is perhaps the area where the potential damage is most visible. Moreover, insurance companies will have to assess this risk, especially once AVs share the road with traditional human drivers. As a result, a theoretical model for AV insurance was developed on the basis of the scientific literature; it takes the relevant risks into account while excluding factors that no longer apply, such as human factors, since the driver's role is transferred to the AI system. To assess the validity of the theoretical model, a qualitative study was carried out in the form of semi-structured interviews with analysts, ...
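To make the modelling idea in the abstract concrete, the following is a minimal, purely illustrative sketch of how an AV insurance premium might be computed from system-level risk factors while omitting human-driver factors. This is not the study's actual model: every factor name, weight, and the base premium below are hypothetical assumptions introduced only for illustration.

```python
from dataclasses import dataclass

# Hypothetical risk factors for an AV policy. Human-driver factors
# (age, driving record, etc.) are deliberately absent, since the
# driving task is transferred to the AI system.
@dataclass
class AVRiskProfile:
    software_maturity: float        # 0.0 (new release) .. 1.0 (long-proven)
    sensor_redundancy: float        # 0.0 (single stack) .. 1.0 (fully redundant)
    odd_coverage: float             # share of planned routes inside the validated
                                    # operational design domain (ODD), 0.0 .. 1.0
    incidents_per_1k_km: float      # manufacturer-reported incident rate

BASE_PREMIUM = 1000.0  # hypothetical base premium, currency units per year

def annual_premium(p: AVRiskProfile) -> float:
    """Toy premium: scale the base by system-level risk only."""
    # Lower maturity, redundancy, and ODD coverage raise the multiplier;
    # the incident rate adds a linear surcharge. All weights are made up.
    risk_multiplier = (
        1.0
        + 0.5 * (1.0 - p.software_maturity)
        + 0.3 * (1.0 - p.sensor_redundancy)
        + 0.4 * (1.0 - p.odd_coverage)
    )
    surcharge = 200.0 * p.incidents_per_1k_km
    return BASE_PREMIUM * risk_multiplier + surcharge

if __name__ == "__main__":
    profile = AVRiskProfile(
        software_maturity=0.8,
        sensor_redundancy=0.6,
        odd_coverage=0.9,
        incidents_per_1k_km=0.05,
    )
    print(f"Illustrative annual premium: {annual_premium(profile):.2f}")
```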