MA: Estimation of human poses with radar sensors
Smart home applications are increasingly finding their way into many people's everyday lives. Audio and camera systems have already established voice and gesture control in some households. However, audiovisual sensor technology intrudes comparatively strongly on people's privacy. Radar sensors offer a decisive advantage here, since their data contain no immediately recognizable personal reference. For both private and public buildings, radar sensors therefore open up many attractive possibilities for performing tasks that involve human behavior, even under strict data protection requirements. They can be used, for example, to detect people in a building, to capture their general movements, and even to measure vital parameters.
Several existing machine learning networks based on RGB data are able to estimate human key points, poses, and gestures. Estimating a person's pose from radar data, however, is a considerably more difficult task. In this work, human poses will first be extracted from RGB data, and this knowledge will then be used to supervise a network operating on radar data. A comprehensive literature search is required to obtain an overview of the state of the art. Subsequently, RGB networks for pose extraction are applied to RGB data, and their outputs are used as labels to train a self-developed deep learning network fed with radar data. The results are finally compared with those reported in the literature.
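The cross-modal supervision idea described above, where an RGB pose network acts as a "teacher" whose key-point outputs become training labels for a "student" model on synchronized radar data, can be sketched in miniature. The following is only an illustrative toy, assuming synthetic stand-ins for both modalities; a real implementation would use a deep network on radar heatmaps and labels produced by an actual RGB pose estimator, neither of which is shown here. A simple linear least-squares fit stands in for the student so the sketch stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 200 synchronized frames, 64 radar features
# per frame, 17 body key points (as in the common COCO convention).
n_frames, n_radar_feat, n_keypoints = 200, 64, 17

# Stand-in for per-frame radar features (e.g. range-Doppler statistics).
radar = rng.normal(size=(n_frames, n_radar_feat))

# Stand-in for teacher labels: (x, y) per key point, flattened.
# In practice these would come from running an RGB pose network
# on the synchronized camera stream.
true_map = rng.normal(size=(n_radar_feat, 2 * n_keypoints))
labels = radar @ true_map + 0.01 * rng.normal(size=(n_frames, 2 * n_keypoints))

# "Train" the student: least-squares regression radar -> key points.
weights, *_ = np.linalg.lstsq(radar, labels, rcond=None)

# Evaluate on the training frames (a real study would hold out data
# and report a key-point metric rather than raw MSE).
pred = radar @ weights
mse = float(np.mean((pred - labels) ** 2))
print(f"mean squared key-point error: {mse:.5f}")
```

The essential design point survives the simplification: no manual annotation of radar data is needed, because the RGB network supplies the pose labels automatically for every synchronized frame.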
Supervisors: Prof. Dr.-Ing. Martin Vossiek, Lukas Engel (M.Sc.), Dr.-Ing. Ingrid Ullmann
Date of issue: immediately
Language: German or English
Previous knowledge: radar signal processing, Python, machine learning, deep learning