1.
Bartl, A., Wenninger, S., Wolf, E., Botsch, M., Latoschik, M.E.: Affordable but not Cheap: A Case Study of the Effects of Two 3D-Reconstruction Methods of Virtual Humans. Frontiers in Virtual Reality. 2, (2021).
Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our own and others' appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They vary significantly in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical complex and elaborate camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material. Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods against each other in an immersive virtual environment in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of one's own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one's own body than for other virtual humans.
2.
Bartl, A., Jung, S., Kullmann, P., Wenninger, S., Achenbach, J., Wolf, E., Schell, C., Lindeman, R.W., Botsch, M., Latoschik, M.E.: Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match. Proceedings of the 28th IEEE Virtual Reality Conference (VR ’21). IEEE (2021).
3.
Wenninger, S., Achenbach, J., Bartl, A., Latoschik, M.E., Botsch, M.: Realistic Virtual Humans from Smartphone Videos. In: Teather, R.J., Joslin, C., Stuerzlinger, W., Figueroa, P., Hu, Y., Batmaz, A.U., Lee, W., and Ortega, F. (eds.) VRST. pp. 29:1–29:11. ACM (2020).
This paper introduces an automated 3D-reconstruction method for generating high-quality virtual humans from monocular smartphone cameras. The inputs to our approach are two video clips, one capturing the whole body and the other providing detailed close-ups of head and face. Optical flow analysis and sharpness estimation select individual frames, from which two dense point clouds for the body and head are computed using multi-view reconstruction. Automatically detected landmarks guide the fitting of a virtual human body template to these point clouds, thereby reconstructing the geometry. A graph-cut stitching approach reconstructs a detailed texture. Our results are compared to existing low-cost monocular approaches as well as to expensive multi-camera scan rigs. We achieve visually convincing reconstructions that are almost on par with complex camera rigs while surpassing similar low-cost approaches. The generated high-quality avatars are ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity.
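The frame-selection step mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation; it shows only the sharpness-estimation part (a common variance-of-Laplacian proxy) and omits the optical flow analysis. The function names `laplacian_variance` and `select_sharp_frames` and the fixed-window selection strategy are assumptions for illustration.

```python
import numpy as np

def laplacian_variance(frame: np.ndarray) -> float:
    """Sharpness proxy: variance of a 4-neighbour discrete Laplacian.

    Higher values indicate more high-frequency detail, i.e. a sharper frame.
    `frame` is a 2D grayscale array.
    """
    lap = (frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:]
           - 4.0 * frame[1:-1, 1:-1])
    return float(lap.var())

def select_sharp_frames(frames, window=5):
    """Keep the index of the sharpest frame in each window of consecutive frames.

    A simple stand-in for the paper's frame selection, which additionally
    uses optical flow to ensure sufficient viewpoint change between frames.
    """
    selected = []
    for i in range(0, len(frames), window):
        chunk = frames[i:i + window]
        scores = [laplacian_variance(f) for f in chunk]
        selected.append(i + int(np.argmax(scores)))
    return selected
```

The selected frames would then be handed to a multi-view stereo pipeline to produce the dense point clouds described in the abstract.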
4.
Ganal, E., Bartl, A., Westermeier, F., Roth, D., Latoschik, M.E.: Developing a Study Design on the Effects of Different Motion Tracking Approaches on the User Embodiment in Virtual Reality. In: Hansen, C., Nürnberger, A., and Preim, B. (eds.) Mensch und Computer 2020. Gesellschaft für Informatik e.V (2020).
5.
Latoschik, M.E., Kern, F., Stauffert, J.-P., Bartl, A., Botsch, M., Lugrin, J.-L.: Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality. IEEE Transactions on Visualization and Computer Graphics (TVCG). 25, 2134–2144 (2019).
This article investigates performance and user experience in Social Virtual Reality (SVR) targeting distributed, embodied, and immersive face-to-face encounters. We demonstrate the close relationship between scalability, reproduction accuracy, and the resulting performance characteristics, as well as the impact of these characteristics on users co-located with larger groups of embodied virtual others. System scalability provides a variable number of co-located avatars and AI-controlled agents with a variety of different appearances, including realistic-looking virtual humans generated from photogrammetry scans. The article reports on how to meet the requirements of embodied SVR with today's technical off-the-shelf solutions and what to expect regarding features, performance, and potential limitations. Special care has been taken to achieve the low latencies and sufficient frame rates necessary for reliable communication of embodied social signals. We propose a hybrid evaluation approach which coherently relates results from technical benchmarks to subjective ratings and which confirms the required performance characteristics for the target scenario of larger distributed groups. A user study reveals positive effects of an increasing number of co-located social companions on the quality of experience of virtual worlds, i.e., on presence, possibility of interaction, and co-presence. It also shows that variety in avatar/agent appearance might increase eeriness but might also stimulate an increased interest of participants in the environment.
6.
Lugrin, B., Bartl, A., Striepe, H., Lax, J., Toriizuka, T.: Do I act familiar? Investigating the Similarity-Attraction Principle on Culture-specific Communicative behaviour for Social Robots. International Conference on Intelligent Robots and Systems (IROS 2018). pp. 2033–2039. IEEE (2018).
Culture, amongst other individual and social factors, plays a crucial role in human-human interactions. If robots are to become a part of our society, they should be able to act in culture-specific manners as well. In this paper, we showcase the implementation of a cultural dichotomy, namely individualism vs. collectivism, in a social robot's conversation. Presenting these conversations to human observers from Germany and Japan, we investigate whether the implemented differences are recognized as such, and whether stereotypical culture-specific behaviours that correspond to the observers' cultural background are preferred. Results suggest that the manipulations in behaviour had the intended effect, but are not reflected in personal preferences.
7.
Bartl, A., Bosch, S., Brandt, M., Dittrich, M., Lugrin, B.: The Influence of a Social Robot’s Persona on How it is Perceived and Accepted by Elderly Users. In: Agah, A., Cabibihan, J.-J., Howard, A.M., Salichs, M.A., and He, H. (eds.) 8th International Conference on Social Robotics (ICSR 2016). pp. 681–691. Springer (2016).
Demographic change is causing an imbalance between the number of elderly people in need of support and the number of caring staff. It is therefore important to help older adults keep their independence. Forgetting is a common obstacle people face as they age, which social robots can mitigate by reminding them of tasks. Since most elderly people are not used to robots, a challenge in HRI is to identify aspects of a robot's design that promote its acceptance. We present two different personas (companion vs. assistant) for a robotic platform, realized by manipulating verbal and nonverbal behavior. A study was conducted in assisted living accommodations with the robot reminding residents of appointments, to examine whether the persona influences the robot's acceptance. Results indicate that the companion version of the robot was better accepted and perceived as more likeable and intelligent compared to the assistant version.