SimInsights is developing 3D, photorealistic models of both mobile carts. In addition, physics-based models will be developed for the carts so that their motions and interactions with users, as well as with other objects in the environment, can be accurately captured and represented. For example, users will be able to push or pull a cart, and the cart will bump into other users and objects.
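As a minimal sketch of such a physics-driven cart in Unity, the example below lets an interaction script push the cart and logs collisions raised by the physics engine. The class name PushableCart, the pushForce value, and the Push entry point are illustrative assumptions; in practice the cart's mass, drag, and collider shapes would be tuned against the real hardware.

```csharp
using UnityEngine;

// Illustrative sketch only: assumes the cart GameObject carries a
// Rigidbody and a Collider. Names here are not from the proposal.
[RequireComponent(typeof(Rigidbody))]
public class PushableCart : MonoBehaviour
{
    public float pushForce = 150f;   // assumed magnitude; tune per cart mass

    private Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
    }

    // Called by a hand-interaction script when a user grips and pushes
    // the cart; 'direction' points from the user's hand toward the cart.
    public void Push(Vector3 direction)
    {
        body.AddForce(direction.normalized * pushForce);
    }

    // Unity's physics engine raises this when the cart bumps into
    // another collider (a user avatar or an environment object).
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Cart bumped into " + collision.gameObject.name);
    }
}
```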
Specifically, for the design of user interaction with the wi-med cart, we will also build a model of the computer screen so that users can touch or tap the virtual screen from close proximity to activate it, review the information displayed, and make decisions based on that information. This will realistically replicate the real-world interaction.
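One possible realization of this tap interaction is sketched below: a tap registers only when the user's fingertip is within a short distance of the screen's collider. The class name VirtualScreen, the onTapped event, and the maxTapDistance threshold are assumptions for illustration, not committed design decisions.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: the screen GameObject needs a Collider, and a
// hand-tracking script calls TryTap each frame with the fingertip pose.
public class VirtualScreen : MonoBehaviour
{
    public UnityEvent onTapped;            // wired to UI logic in the Inspector
    public float maxTapDistance = 0.1f;    // tap only registers within ~10 cm

    public void TryTap(Vector3 fingertipPosition, Vector3 fingertipForward)
    {
        RaycastHit hit;
        if (Physics.Raycast(fingertipPosition, fingertipForward, out hit, maxTapDistance)
            && hit.collider.gameObject == gameObject)
        {
            onTapped.Invoke();  // activate the screen and surface its information
        }
    }
}
```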
The VR environments must allow for scenario enactments with up to four people who will interact with each other and the VR environment simultaneously.
We are developing the VR environments in Unity to allow for scenario enactments with up to four people. Users will be able to interact with each other (e.g., tapping someone on the shoulder to interrupt) as well as with objects in the VR environment at the same time. Unity provides a comprehensive scripting API, which we will use to control the networked states of all players. This interface allows communication between connected players and ensures that the transforms (positions and rotations) of the players’ 3D models, as well as of objects manipulated by the players, are synchronized across all views, as sketched below.
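The sketch below shows one plausible realization of this synchronization using Unity's UNet-style NetworkBehaviour with SyncVars; the proposal does not commit to a specific networking layer, so the class and method names here are assumptions.

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Hedged sketch of player transform synchronization. In practice Unity's
// built-in NetworkTransform component could replace this hand-rolled version.
public class PlayerSync : NetworkBehaviour
{
    // SyncVars are replicated from the server to every connected client,
    // keeping each player's 3D model consistent across all views.
    [SyncVar] private Vector3 syncedPosition;
    [SyncVar] private Quaternion syncedRotation;

    void Update()
    {
        if (isLocalPlayer)
        {
            // The local player pushes its headset-driven transform to the
            // server (a real system would throttle this rather than send
            // every frame).
            CmdUpdateTransform(transform.position, transform.rotation);
        }
        else
        {
            // Remote copies of this player follow the replicated state.
            transform.position = syncedPosition;
            transform.rotation = syncedRotation;
        }
    }

    [Command]  // runs on the server, invoked by the owning client
    void CmdUpdateTransform(Vector3 position, Quaternion rotation)
    {
        syncedPosition = position;
        syncedRotation = rotation;
    }
}
```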
Figure 2: Users on different computers and Vive headsets will inhabit the same virtual world.
SimInsights’ experience with industry professionals and leading academic researchers indicates that the HTC Vive offers the best VR experience. SimInsights therefore recommends the HTC Vive, although we support many other VR and AR devices in our projects. We will assume use of the HTC Vive for the technical description in this proposal.
The HTC Vive tracks interactions with the VR environment using a headset, two handheld controllers, and two wireless base stations that sweep the tracked area with infrared light. The headset and controllers have 70 sensors in total, which allow for accurate tracking of positions in space (Prasuethsut, 2016) with a tracking accuracy of about 2 mm (Kreylos, 2016). The headset also has a microphone to capture the wearer’s voice.
Signals from the Vive headset and controllers will be used to render the 3D avatar of the user in the VR environment. These signals will allow us to accurately render the position and orientation of the user’s head and both hands.
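For illustration, these poses can be read through Unity's XR input API and applied to the avatar rig each frame, as in the sketch below; the field names are placeholder transforms on the rig, assigned in the Inspector.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sketch of driving an avatar from Vive tracking data. Head and hand
// poses come from the headset and the two controllers, respectively.
public class AvatarPoseDriver : MonoBehaviour
{
    public Transform headBone;
    public Transform leftHandBone;
    public Transform rightHandBone;

    void Update()
    {
        // Poses are reported in tracking space; they set the position
        // and orientation of the avatar's head and both hands.
        ApplyPose(headBone, XRNode.Head);
        ApplyPose(leftHandBone, XRNode.LeftHand);
        ApplyPose(rightHandBone, XRNode.RightHand);
    }

    void ApplyPose(Transform bone, XRNode node)
    {
        bone.localPosition = InputTracking.GetLocalPosition(node);
        bone.localRotation = InputTracking.GetLocalRotation(node);
    }
}
```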