Abstract:
For annotation overlay applications using augmented reality (AR), view management is widely used to improve the readability and intelligibility of annotations. Conventional view management methods use the positions, orientations, and shapes of target objects to determine which portions of the targets are visible in the user's view. However, it is difficult for a wearable AR system to obtain the positions, orientations, and shapes of targets, because the targets are usually moving or non-rigid. In this paper, we propose a view management method for overlaying annotations in networked wearable AR in dynamic scenes. The proposed method obtains the positions and shapes of targets over a network in order to estimate the visible portions of the targets in the user's view. Annotations are placed by minimizing penalties related to overlap between annotations, occlusion of target objects, the length of the line connecting an annotation to its target object, and the displacement of an annotation between successive frames. Through experiments, we have shown that a prototype system can correctly provide each user with annotations attached to other users of wearable AR systems.
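To illustrate the penalty-based placement summarized above, the following is a minimal sketch in Python. All names, data structures, and weights are illustrative assumptions rather than the paper's actual formulation: each candidate annotation position is scored by a weighted sum of the four penalty terms, and the position with the lowest total penalty is selected.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # top-left x of the box in screen pixels
    y: float  # top-left y of the box in screen pixels
    w: float  # width
    h: float  # height

    def overlap_area(self, other: "Rect") -> float:
        dx = min(self.x + self.w, other.x + other.w) - max(self.x, other.x)
        dy = min(self.y + self.h, other.y + other.h) - max(self.y, other.y)
        return max(dx, 0.0) * max(dy, 0.0)

def penalty(box: Rect, other_annotations: list,
            visible_targets: list,
            anchor: tuple, prev_pos: tuple,
            w_overlap: float = 1.0, w_occlude: float = 1.0,
            w_line: float = 0.1, w_move: float = 0.1) -> float:
    """Weighted sum of the four penalty terms mentioned in the abstract
    (hypothetical weights)."""
    cx, cy = box.x + box.w / 2, box.y + box.h / 2
    # Overlap with other annotations already placed in the view
    p_overlap = sum(box.overlap_area(o) for o in other_annotations)
    # Occlusion of the visible portions of target objects
    p_occlude = sum(box.overlap_area(t) for t in visible_targets)
    # Length of the leader line between the annotation and its target
    p_line = ((cx - anchor[0]) ** 2 + (cy - anchor[1]) ** 2) ** 0.5
    # Displacement of the annotation relative to the previous frame
    p_move = ((cx - prev_pos[0]) ** 2 + (cy - prev_pos[1]) ** 2) ** 0.5
    return (w_overlap * p_overlap + w_occlude * p_occlude
            + w_line * p_line + w_move * p_move)

def place_annotation(candidates: list, other_annotations: list,
                     visible_targets: list, anchor: tuple,
                     prev_pos: tuple) -> Rect:
    """Choose the candidate position with the lowest total penalty."""
    return min(candidates,
               key=lambda b: penalty(b, other_annotations, visible_targets,
                                     anchor, prev_pos))
```

In this sketch the candidate positions would be a discrete set of boxes around the target's visible region; the paper's actual optimization and penalty definitions may differ.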