Scientists have introduced a novel AI approach that enables robots and self-driving vehicles to perceive the entire structure of objects even when they are partially hidden.
Until now, self-driving vehicles have relied on panoptic segmentation to understand their environment and navigate autonomously: the model predicts which pixels of an image belong to which visible region, and therefore cannot infer the full shape of an object that is partially hidden, such as a person in a crowd or a vehicle in a traffic jam. To overcome this, two Freiburg scientists have introduced the amodal panoptic segmentation task, which allows a partially occluded object to be perceived as a whole rather than in fragments. This enhanced visual recognition ability promises significant progress in improving the safety of self-driving vehicles.
“Amodal panoptic segmentation will significantly help downstream automated driving tasks where occlusion is a major challenge such as depth estimation, optical flow, object tracking, pose estimation, motion prediction, etc. With more advanced AI algorithms for this task, the visual recognition ability of self-driving cars can be revolutionized. For example, if the entire structure of road users is perceived at all times, regardless of partial occlusions, the risk of accidents can be significantly minimized.”
The aim of the task is the pixel-wise semantic segmentation of the visible regions of amorphous background classes, such as roads, vegetation, and sky, together with the instance segmentation of both the visible and occluded regions of countable object classes, such as cars, trucks, and pedestrians. To establish the new task, the researchers have created benchmark datasets and made them publicly available on their website, along with two proposed novel learning algorithms. They invite other scientists to benchmark their own AI algorithms on the task. “We are confident that novel AI algorithms for this task will enable robots to emulate the visual experience that humans have by perceiving complete physical structures of objects,” Valada explains.
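To make the task definition concrete, here is a minimal illustrative sketch of what amodal panoptic output adds over standard panoptic output. The toy scene, label IDs, and the helper `occluded_region` are all hypothetical, invented for this example and not taken from the researchers' benchmark or code:

```python
import numpy as np

# Toy 1x6 "image": a pedestrian (id 2) stands partially behind a car (id 1).
# Label 0 is an amorphous background ("stuff") class such as road.
visible = np.array([[0, 1, 1, 1, 2, 0]])  # standard panoptic: one visible label per pixel

# Amodal panoptic segmentation additionally predicts, per countable instance,
# a mask covering both its visible AND its occluded extent.
amodal_masks = {
    1: np.array([[0, 1, 1, 1, 0, 0]], dtype=bool),  # car: fully visible
    2: np.array([[0, 0, 0, 1, 1, 0]], dtype=bool),  # pedestrian: one pixel hidden behind the car
}

def occluded_region(instance_id):
    """Pixels the instance occupies amodally but that are not visibly labeled as it."""
    return amodal_masks[instance_id] & (visible != instance_id)

# The pedestrian's occluded pixel lies where the car is visible.
print(occluded_region(2).sum())  # -> 1 occluded pedestrian pixel
```

The key point the sketch captures is that the amodal masks of different instances may overlap at occlusion boundaries, whereas visible panoptic labels assign each pixel to exactly one segment.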
The researchers further envision estimating the relative depth of objects in a scene, which would enable autonomous vehicles to make complex decisions, such as choosing which direction to move in order to obtain a clearer view of an object.