
Human Detection in Low-Visibility Industrial Settings Using Automotive 4D Radar and Deep Learning

  • Additional Information
    • Publication Data:
      Örebro universitet, Institutionen för naturvetenskap och teknik
    • Subject:
      2024
    • Collection:
      Örebro University: Publications (DiVA)
    • Abstract:
      Autonomous driving technology is progressively being adopted on public roads and in industrial settings such as mines. With autonomous vehicles expected to increasingly displace human drivers, it is crucial that their adoption does not compromise the safety of people near the vehicles. Adverse weather conditions, such as precipitation, may impede the ability of an autonomous vehicle’s sensors to detect road users, including pedestrians. This also poses a significant problem in mining, where airborne particulates such as smoke and dust are often present. Cameras and LiDARs are particularly susceptible to having their views obstructed under such conditions. Radar is a promising complement in such situations, as it can often collect data on parts of the environment that are obscured for other sensor types. Unlike traditional 3D radar, 4D radar also resolves its surroundings along the elevation dimension. This may prove critical in enabling the detection of motionless pedestrians, particularly ones who have fallen due to loss of consciousness. This thesis covers work on human detection under low-visibility conditions using 4D radar. First, it presents a dataset collected in industrial settings with a car-mounted 4D radar. The data are represented in the dataset as heat maps in multiple views, where each heat map holds power measurements over two of the following dimensions: range, azimuth, elevation, and Doppler. The collected radar data were initially represented as point clouds. To generate the heat maps, each point cloud was projected to five views: elevation-azimuth, elevation-range, elevation-Doppler, range-azimuth, and range-Doppler. The elevation and azimuth dimensions correspond to the vertical and horizontal dimensions of the frame of a thermal camera mounted on the car alongside the radar. Annotations for the dataset consist of segmentation masks in the elevation-azimuth view. 
These were generated through semantic segmentation of the thermal camera images ...
    • File Description:
      application/pdf
    • Electronic Access:
      http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-115232
    • Rights:
      info:eu-repo/semantics/openAccess
    • Identifier:
      edsbas.D539AA7A
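
The heat-map generation described in the abstract — projecting a radar point cloud onto pairs of dimensions, accumulating power per cell — can be sketched roughly as below. This is an illustrative NumPy sketch, not the thesis code; the point-cloud column layout (range, azimuth, elevation, Doppler, power) and the function name are assumptions:

```python
import numpy as np

def project_to_heatmap(points, dims=(0, 1), bins=(64, 64)):
    """Project a point cloud onto a 2D power heat map.

    points: (N, 5) array with assumed columns
            (range, azimuth, elevation, doppler, power).
    dims:   indices of the two columns that form the view,
            e.g. (0, 1) for range-azimuth, (2, 1) for elevation-azimuth.
    bins:   grid resolution of the resulting heat map.
    """
    x, y, power = points[:, dims[0]], points[:, dims[1]], points[:, 4]
    # Accumulate the power of all points falling into each 2D cell.
    heatmap, _, _ = np.histogram2d(x, y, bins=bins, weights=power)
    return heatmap

# Usage: 100 synthetic points projected to a 32x32 range-azimuth view.
rng = np.random.default_rng(0)
cloud = rng.random((100, 5))
hm = project_to_heatmap(cloud, dims=(0, 1), bins=(32, 32))
print(hm.shape)  # (32, 32)
```

Repeating the call with the other four `dims` pairs would yield the five views listed in the abstract; binning with power as the histogram weight is one straightforward way to realize the described projection.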