ORamaVR Streamlines Medical Training for Healthcare Professionals
According to the World Health Organization, our planet will need more than 40 million new doctors, nurses, frontline healthcare workers, and other healthcare professionals by the year 2030, which is double the current medical workforce. This demand will be fueled by several different factors – most notably, a growing geriatric population. People are living much longer, thanks in part to improved healthcare. But that brings a rise in chronic conditions such as cardiovascular disease, diabetes, and cancer.
In light of the COVID-19 pandemic, it’s sobering to realize that if we don’t act now to implement new medical training solutions, an additional deficit of 18 million healthcare workers will compound the existing shortage. Yet the 150-year-old training model (a master teaching an apprentice over the course of several years) cannot produce healthcare professionals at the rate needed.
Empirical evidence from other industries clearly demonstrates that virtual reality (VR) technology is an effective and efficient way of improving training. However, VR adoption in the medical industry has been slow because of the cost of developing and customizing software, limiting accessibility where it is needed most – in the hands of medical instructors and learners.
A computational medical XR discipline
Computational medical XR (extended reality) unifies the computer science applications of intelligent reality, medical virtual reality, medical augmented reality, and spatial computing for medical training, planning, and navigation content creation. It builds upon clinical XR by bringing in novel low-code/no-code XR authoring platforms suitable for medical professionals as well as XR content creators.
Architectures for SLAM and Augmented Reality Computing
In the next few years, new demanding applications will be supported on mobile platforms by reconciling two conflicting requirements: high performance (often with real-time constraints) and low power consumption. The objective of the vipGPU project is to develop hardware and software technology that efficiently supports two such application scenarios, namely (a) simultaneous localization and mapping (SLAM) in mobile robotics systems, and (b) virtual reality (VR) on portable devices for serious games, with emphasis on simulating surgical interventions and medical training in general. In this project, we aim to develop a new heterogeneous platform consisting of hardware accelerators for low-power embedded systems, optimized (at the hardware and software level) for the implementation of the two applications mentioned above.
Never ‘Drop the Ball’ in the Operating Room: An efficient hand-based VR HMD controller interpolation algorithm, for collaborative, networked virtual environments
In this work, we propose two algorithms that can be applied in the context of a networked virtual environment to efficiently handle the interpolation of displacement data for hand-based VR HMD controllers. Our algorithms, based on the use of dual quaternions and multivectors respectively, reduce the network consumption rate and are highly effective in scenarios involving multiple users. We illustrate convincing results in a modern game engine and a medical VR collaborative training scenario.
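As a hedged illustration (not the paper’s actual implementation), the dual-quaternion side of such a pose interpolation scheme can be sketched in a few lines: a rigid pose is packed into a dual quaternion, two networked pose updates are blended with Dual Linear Blending, and the interpolated translation is recovered. All function names here are hypothetical.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product; quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def pose_to_dq(rot_q, trans):
    # dual quaternion = real part (rotation) + epsilon * dual part (translation)
    real = rot_q / np.linalg.norm(rot_q)
    dual = 0.5 * quat_mul(np.array([0.0, *trans]), real)
    return real, dual

def dq_lerp(dq0, dq1, t):
    # Dual Linear Blending: linear blend plus renormalization;
    # flip the sign of one operand to interpolate along the shortest path
    r0, d0 = dq0
    r1, d1 = dq1
    if np.dot(r0, r1) < 0:
        r1, d1 = -r1, -d1
    r = (1 - t) * r0 + t * r1
    d = (1 - t) * d0 + t * d1
    n = np.linalg.norm(r)
    return r / n, d / n

def dq_to_pose(real, dual):
    # recover translation from t_quat = 2 * dual * conjugate(real)
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    t = 2.0 * quat_mul(dual, conj)
    return real, t[1:]
```

Blending a pose at the origin with one translated by 2 units along x at t = 0.5 yields the midpoint translation, which is the behavior a client-side interpolator needs between two received network updates.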
Inter-operability and Orchestration in Heterogeneous Cloud/Edge Resources: The ACCORDION Vision
This paper introduces ACCORDION, a novel framework for the management of the cloud-edge continuum, targeting support for NextGen applications with strong QoE requirements. The framework addresses the need for an ever-expanding and heterogeneous pool of edge resources in order to deliver the promise of ubiquitous computing to NextGen application clients. This endeavor entails two main technical challenges: first, assuring interoperability when incorporating heterogeneous infrastructures into the pool; second, managing the largely dynamic pool of edge nodes. The optimization of the delivered QoE is the core driver of this work, and its monitoring and modelling therefore comprise a core part of the conducted work. The paper discusses the main pillars that support the ACCORDION vision and provides a description of the three use cases planned to demonstrate ACCORDION's capabilities.
Covid-19 – VR Strikes Back: innovative medical VR training
In this work, we present “Covid-19 VR Strikes Back” (CVRSB), a novel Virtual Reality (VR) medical training application focusing on a faster and more efficient teaching experience for medical personnel regarding the nasopharyngeal swab and proper Personal Protective Equipment (PPE) donning and doffing. Our platform incorporates a diversity of innovations: a) techniques to avoid the uncanny valley observed in human representation and interactivity in VR simulations, b) exploitation of the capabilities of a Geometric Algebra interpolation engine, and c) a supervised machine learning analytics module for real-time recommendations. Our application is publicly available at no cost for most Head-Mounted Displays (HMDs) and desktop VR. The impact and effectiveness of our application are demonstrated by recent clinical trials.
An All-In-One Geometric Algorithm for Cutting, Tearing, and Drilling Deformable Models
Conformal Geometric Algebra (CGA) is a framework that allows the representation of objects, such as points, planes, and spheres, and deformations, such as translations, rotations, and dilations, as uniform vectors called multivectors. In this work, we demonstrate the merits of multivector usage with a novel, integrated rigged character simulation framework based on CGA. In such a framework, and for the first time, one may perform real-time cuts and tears as well as drill holes on a rigged 3D model. These operations can be performed before and/or after model animation, while maintaining deformation topology. Moreover, our framework permits the generation of intermediate keyframes on the fly based on user input, apart from the frames provided in the model data. We are motivated to use CGA as it is the lowest-dimension extension of dual-quaternion algebra that amends the shortcomings of the majority of existing animation and deformation techniques. Specifically, we no longer need to maintain objects of multiple algebras and constantly transmute between them, such as matrices, quaternions, and dual quaternions, and we can effortlessly apply dilations. Using such an all-in-one geometric framework allows for better maintenance and optimization and enables easier interpolation and application of all native deformations. Furthermore, we present three novel algorithms in a single CGA representation that enable cutting, tearing, and drilling of the input rigged model, where the output model can be further re-deformed at interactive frame rates. These near-real-time cut, tear, and drill algorithms can enable a new suite of applications, especially under the scope of medical VR simulation.
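For readers unfamiliar with CGA, a minimal sketch of its point representation may help (this is illustrative only, not the paper’s code; the flat 5-coefficient array is a simplification of a full multivector): a Euclidean point is lifted onto the conformal null cone, and squared distances become inner products under the CGA metric.

```python
import numpy as np

# Basis order: e1, e2, e3, e_inf, e_0
# CGA metric: Euclidean on e1..e3; e_inf.e_inf = e_0.e_0 = 0; e_inf.e_0 = -1
G = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
G[3, 4] = G[4, 3] = -1.0

def up(p):
    # Conformal embedding of a Euclidean point p:
    #   X = p + 0.5*|p|^2 * e_inf + e_0
    p = np.asarray(p, dtype=float)
    return np.array([*p, 0.5 * (p @ p), 1.0])

def inner(X, Y):
    # CGA inner product; for conformal points:
    #   X . Y = -0.5 * |p - q|^2   and   X . X = 0 (points are null vectors)
    return X @ G @ Y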
A Virtual Reality App for Physical and Cognitive Training of Older People With Mild Cognitive Impairment: Mixed Methods Feasibility Study
Background: Therapeutic virtual reality (VR) has emerged as an effective treatment modality for cognitive and physical training in people with mild cognitive impairment (MCI). However, to replace existing nonpharmaceutical treatment training protocols, VR platforms need significant improvement if they are to appeal to older people with symptoms of cognitive decline and meet their specific needs.
Objective: This study aims to design and test the acceptability, usability, and tolerability of an immersive VR platform that allows older people with MCI symptoms to simultaneously practice physical and cognitive skills on a dual task.
Methods: On the basis of interviews with 20 older people with MCI symptoms (15 females; mean age 76.25, SD 5.03 years) and inputs from their health care providers (formative study VR1), an interdisciplinary group of experts developed a VR system called VRADA (VR Exercise App for Dementia and Alzheimer’s Patients). Using an identical training protocol, the VRADA system was first tested with a group of 30 university students (16 females; mean age 20.86, SD 1.17 years) and then with 27 older people (19 females; mean age 73.22, SD 9.26 years) who had been diagnosed with MCI (feasibility studies VR2a and VR2b). Those in the latter group attended two Hellenic Association Day Care Centers for Alzheimer’s Disease and Related Disorders. Participants in both groups were asked to perform a dual task training protocol that combined physical and cognitive exercises in two different training conditions. In condition A, participants performed a cycling task in a lab environment while being asked by the researcher to perform oral math calculations (single-digit additions and subtractions). In condition B, participants performed a cycling task in the virtual environment while performing calculations that appeared within the VR app. Participants in both groups were assessed in the same way; this included questionnaires and semistructured interviews immediately after the experiment to capture perceptions of acceptability, usability, and tolerability, and to determine which of the two training conditions each participant preferred.
Results: Participants in both groups showed a significant preference for the VR condition (students: mean 0.66, SD 0.41, t29=8.74, P<.001; patients with MCI: mean 0.72, SD 0.51, t26=7.36, P<.001), as well as high acceptance scores for intended future use, attitude toward VR training, and enjoyment. System usability scale scores (82.66 for the students and 77.96 for the older group) were well above the acceptability threshold (75/100). The perceived adverse effects were minimal, indicating a satisfactory tolerability.
Conclusions: The findings suggest that VRADA is an acceptable, usable, and tolerable system for physical and cognitive training of older people with MCI and university students. Randomized controlled trial studies are needed to assess the efficacy of VRADA as a tool to promote physical and cognitive health in patients with MCI.
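The preference results above are one-sample t-tests against zero; as a minimal, illustrative sketch (not the study’s analysis script), the statistic can be computed directly:

```python
import numpy as np

def one_sample_t(scores, mu0=0.0):
    # t = (mean - mu0) / (sd / sqrt(n)), with df = n - 1
    x = np.asarray(scores, dtype=float)
    n = x.size
    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

A reported result such as t29 = 8.74 then corresponds to 30 participants (df = 29) whose mean preference score differs from zero by 8.74 standard errors.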
MAGES 3.0: Tying the knot of medical VR
In this work, we present MAGES 3.0, a novel Virtual Reality (VR)-based authoring SDK platform for accelerated surgical training and assessment. The MAGES Software Development Kit (SDK) allows code-free prototyping of any VR psychomotor simulation of medical operations by medical professionals, who urgently need a tool to solve the issue of outdated medical training. Our platform encapsulates the following novel algorithmic techniques: a) a collaborative networking layer with a Geometric Algebra (GA) interpolation engine, b) a supervised machine learning analytics module for real-time recommendations and user profiling, c) a GA deformable cutting and tearing algorithm, and d) on-the-go configurable soft-body simulation for deformable surfaces.
Scenior: An Immersive Visual Scripting system based on VR Software Design Patterns for Experiential Training
Virtual reality (VR) has re-emerged as a low-cost, highly accessible consumer product, and training on simulators is rapidly becoming standard in many industrial sectors. However, the available systems either focus on a gaming context with limited capabilities, or support only the content creation of virtual environments without any rapid prototyping and modification. In this project, we propose a code-free, visual scripting platform to replicate gamified training scenarios through rapid prototyping and VR software design patterns. We implemented and compared two authoring tools: a) visual scripting and b) a VR editor for the rapid reconstruction of VR training scenarios. Our visual scripting module can generate training applications utilizing a node-based scripting system, whereas the VR editor gives the user/developer the ability to customize and populate new VR training scenarios directly from within the virtual environment. We also introduce action prototypes, a new software design pattern suitable for replicating behavioral tasks for VR experiences. In addition, we present the training scenegraph architecture as the main model to represent training scenarios on a modular, dynamic, and highly adaptive acyclic graph based on a structured educational curriculum. Finally, a user-based evaluation of the proposed solution indicated that users, regardless of their programming expertise, can effectively use the tools to create and modify training scenarios in VR.
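To make the action-prototype and training-scenegraph ideas concrete, here is a hypothetical, heavily simplified sketch (class and method names are our own, not Scenior’s API): a curriculum is a small acyclic hierarchy of lessons, stages, and actions, and running a lesson traverses that hierarchy in curriculum order.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical training scenegraph: Lessons contain Stages,
# Stages contain Actions; traversal follows the curriculum order.

@dataclass
class Action:
    # An "action prototype": a reusable behavioral task unit
    name: str

    def perform(self) -> str:
        return f"performed {self.name}"

@dataclass
class Stage:
    name: str
    actions: List[Action] = field(default_factory=list)

@dataclass
class Lesson:
    name: str
    stages: List[Stage] = field(default_factory=list)

    def run(self) -> List[str]:
        # Depth-first traversal of the acyclic training graph
        log = []
        for stage in self.stages:
            for action in stage.actions:
                log.append(action.perform())
        return log
```

In a real authoring tool, a visual node editor would emit such a graph instead of hand-written code; the point of the pattern is that scenarios can be rearranged or extended without touching the action implementations.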
Deform, Cut and Tear a skinned model using Conformal Geometric Algebra
In this work, we present a novel, integrated rigged character simulation framework in Conformal Geometric Algebra (CGA) that supports, for the first time, real-time cuts and tears, before and/or after the animation, while maintaining deformation topology. The purpose of using CGA is to lift several restrictions posed by current state-of-the-art character animation & deformation methods. Previous implementations originally required weighted matrices to perform deformations, whereas, in the current state of the art, dual quaternions handle both rotations and translations but cannot handle dilations. CGA is a suitable extension of dual-quaternion algebra that amends these two major shortcomings: the need to constantly transmute between matrices and dual quaternions, as well as the inability to properly dilate a model during animation. Our CGA algorithm also provides easy interpolation and application of all deformations at each intermediate step, all within the same geometric framework. Furthermore, we present two novel algorithms that enable cutting and tearing of the input rigged, animated model, while the output model can be further re-deformed. These interactive, real-time cut and tear operations can enable a new suite of applications, especially under the scope of medical surgical simulation.
iSupport: Building a Resilience Support Tool for Improving the Health Condition of the Patient During the Care Path
Anxiety and stress are very common symptoms in patients facing a forthcoming surgery. However, limited time and resources within healthcare systems make stress relief interventions difficult to provide. Research has shown that the provision of preoperative stress relief and educational resources can improve health outcomes and speed recovery. Information and Communication Technology (ICT) can be a valuable tool in providing stress relief and educational support to patients and family before but also after an operation, enabling better self-management and self-empowerment. To this end, this paper reports on the design of a novel technical infrastructure for a resilience support tool for improving the health condition of patients, during the care path, using Virtual Reality (VR). The designed platform aims, among other things, at improving knowledge of patient data, effectiveness of and adherence to treatment, as well as providing effective communication channels between patients and clinicians.
A True AR Authoring Tool for Interactive Virtual Museums
In this work, a new and innovative form of spatial computing that recently appeared in the literature, called True Augmented Reality (AR), is employed in cultural heritage preservation. This innovation could be adopted by the Virtual Museums of the future to enhance the quality of experience. The key idea is that a visitor will not be able to tell, at first glance, whether the artefact he/she is looking at is real or not, which is expected to draw visitors’ interest. True AR is not limited to artefacts but extends even to buildings or life-sized character simulations of statues. It provides the best possible visual quality, so that users will not be able to tell the real objects from the augmented ones. Such applications can be beneficial for future museums: with True AR, 3D models of various exhibits, monuments, statues, characters, and buildings can be reconstructed and presented to visitors in a realistic and innovative way. We also present our Virtual Reality Sample application, a True AR playground featuring basic components and tools for generating interactive Virtual Museum applications, alongside a 3D reconstructed character (the priest of the Asinou church) who acts as the storyteller of the augmented experience.
From Readership to Usership and Education, Entertainment, Consumption to Valuation: Embodiment and Aesthetic Experience in Literature-based MR Presence
This chapter will extend its preliminary scope by examining how literary transportation further amplifies presence and affects user response vis-à-vis virtual heritage by focusing on embodiment and aesthetic experience. To do so, it will draw on recent findings emerging from the fields of applied psychology, neuroaesthetics and cognitive literary studies; and consider a case study advancing the use of literary travel narratives in the design of DCH applications for Antiquities – in this case the well-known ancient Greek monument of the Acropolis. Subsequently, the chapter will discuss how Literature-based MR Presence shifts public reception from an education-entertainment-touristic consumption paradigm to a response predicated on valuation. It will show that this type of public engagement is more closely aligned both with MR applications’ default mode of usership, and with newly emerging conceptions of a user-centered museum (e.g., the Museum 3.0), thus providing a Virtual Museum model expressly suited to cultural heritage.
Virtual Reality Simulation Facilitates Resident Training in Total Hip Arthroplasty: A Randomized Controlled Trial
No study has yet assessed the efficacy of virtual reality (VR) simulation for teaching orthopedic surgery residents. In this blinded, randomized, and controlled trial, we asked if the use of VR simulation improved postgraduate year (PGY)-1 orthopedic residents’ performance in cadaver total hip arthroplasty and if the use of VR simulation had a preferentially beneficial effect on specific aspects of surgical skills or knowledge.
Fourteen PGY-1 orthopedic residents completed a written pretest and a single cadaver total hip arthroplasty (THA) to establish baseline levels of knowledge and surgical ability before 7 were randomized to VR-THA simulation. All participants then completed a second cadaver THA and retook the test to assess for score improvements. The primary outcomes were improvement in test and cadaver THA scores.
There was no significant difference in the improvement in test scores between the VR and control groups (P = .078). In multivariate regression analysis, the VR cohort demonstrated a significant improvement in overall cadaver THA scores (P = .048). The VR cohort demonstrated greater improvement in each specific score category compared with the control group, but this trend was only statistically significant for technical performance (P = .009).
VR simulation improves PGY-1 resident surgical skills but has no significant effect on medical knowledge. The most significant improvement was seen in technical skills. We anticipate that VR simulation will become an indispensable part of orthopedic surgical education, but further study is needed to determine how best to use VR simulation within a comprehensive curriculum.
Digital Health Tools for Perioperative Stress Reduction in Integrated Care
Patients undergoing elective surgery often face symptoms of anxiety and stress. Healthcare systems have limited time and resources to provide individualized stress relief interventions. Research has shown that stress relief interventions and educational resources can improve health outcomes and speed recovery.
Digital health tools can provide valuable assistance in stress relief and educational support to patients and family. This paper reports on the design of a novel digital health infrastructure for improving the health condition of patients, during the care path, using virtual reality (VR) and other information and communication technologies (ICT).
Digital tools have been combined to form an integrated platform that can be used by patients before but also after an operation, enabling better self-management and self-empowerment.
The designed platform aims at improving the knowledge of patients about their condition, providing stress relief tools, helping them adhere to treatment, as well as providing for effective communication channels between patients and clinicians.
The proposed solution has the potential to improve physical and emotional reactions to stress and increase the levels of calmness and a sense of wellbeing. Information provided through the platform advances and enhances health literacy and digital competence and increases the participation of the patient in the decision-making process. Integration with third-party applications can facilitate the exchange of important information between patients and physicians as well as between personal applications and clinical health systems.
Transforming medical education and training with VR using M.A.G.E.S.
In this work, we propose a novel VR software system aiming to disrupt the healthcare training industry with the first Psychomotor Virtual Reality (VR) Surgical Training solution. Our system generates a fail-safe, realistic environment for surgeons to master and extend their skills in an affordable and portable solution. We deliver an educational tool for orthopedic surgeries that enhances the learning procedure with gamification elements, advanced interactivity, and cooperative features in an immersive VR operating theater. Our methodology transforms medical training into a cost-effective, easily and broadly accessible process. We also propose a fully customizable SDK platform able to generate educational VR simulations with minimal adaptations. The latter is accomplished by prototyping the learning pipeline into structured, independent, and reusable segments, which are used to generate more complex behaviors. Our architecture supports all current and forthcoming VR HMDs and standard 3D content generation.
Psychomotor Surgical Training in Virtual Reality
In this chapter, we present a novel software system aiming to disrupt the healthcare training industry with the first psychomotor virtual reality (VR) surgical training solution. We provide the means for performing surgical operations in VR, thereby facilitating training in a fail-safe environment that very accurately simulates reality and significantly reduces training costs, offering surgeons and the healthcare ecosystem a way to drastically improve operation outcomes.
With the presented system, we focus on a completed total knee arthroplasty (TKA) virtual reality operating module, opening the way for making available a full suite of virtual reality operations. Our methodology transforms medical training into a cost-effective and easily and broadly accessible process. The latter is accomplished by employing the latest VR, gamification, and tracking technologies for virtual character-based, interactive 3D medical simulation training. It requires standard hardware (PCs, laptops) regardless of the operating system. For an optimal user experience, a commodity VR head-mounted display (HMD) should be employed, along with motion or other hand-controller sensors. The open ovidVR architecture supports all current and forthcoming VR HMDs and standard 3D content generation. Our novel technologies facilitate Presence, the feeling of ‘being there’ and ‘acting there’ in the virtual world, thereby offering the means for unprecedented training.
Real-time rendering under distant illumination with Conformal Geometric Algebra
Precomputed Radiance Transfer (PRT) methods are established for handling global illumination (GI) of objects lit by area lights in real time, and many techniques have been proposed for rotating the light using linear-algebra rotation matrices. Rotating area lights efficiently is a crucial part of computer graphics, since it is one of the main components of real-time rendering. The matrices commonly used for such rotations are not particularly efficient and require high memory consumption; as a result, there is an established need for new, more efficient rotation algorithms. In this work, we employ CGA as the mathematical background for “GI in real time” under distant IBL illumination, for diffuse surfaces with self-shadowing, by efficiently rotating the environment light using CGA entities. Our work is based on Spherical Harmonics (SH), which are used to approximate natural, area-light illumination as irradiance maps. Our main novelty is that we extend the PRT algorithm by representing SH, for the first time, with CGA. The main intuition is that SH of band index 1 are represented using CGA entities, and SH with band index larger than 1 are represented in terms of the CGA-SH of band 1. Specifically, we propose a new method for representing SH with CGA entities and rotating SH by rotating CGA entities. In this way, we can visualize SH rotations, rotate them faster than with rotation matrices, provide a unique visual representation and intuition regarding their rotation, in stark contrast to the usual rotation matrices, and achieve consistently better visual results than Ivanic rotation matrices during light rotation. Via our CGA-expressed SH, we provide a significant boost to the PRT algorithm, since we represent SH rotations by CGA rotors (4 numbers) as opposed to the 9×9 sparse matrices that are usually employed. With our algorithm, we pave the way for including scaling (dilation) and translation of light coefficients using CGA motors.
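One concrete piece of the intuition, namely that band-1 SH coefficients rotate exactly like a Euclidean vector and can therefore be driven by a 4-number rotor (here an ordinary quaternion) instead of a matrix of SH rotation coefficients, can be sketched as follows. This is illustrative only; the paper’s method uses CGA rotors and extends to higher bands.

```python
import numpy as np

def quat_to_matrix(q):
    # unit quaternion [w, x, y, z] -> 3x3 rotation matrix
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rotate_sh_band1(coeffs, q):
    # Real SH band 1 in (m=-1, 0, +1) order transforms like the
    # Euclidean vector (y, z, x): permute, rotate, permute back.
    c = np.asarray(coeffs, dtype=float)
    v = np.array([c[2], c[0], c[1]])          # -> (x, y, z)
    v = quat_to_matrix(np.asarray(q, float)) @ v
    return np.array([v[1], v[2], v[0]])       # back to (m=-1, 0, +1)
```

The rotor (quaternion) carries 4 numbers per rotation, against the 9 entries of a dense band-1 matrix, which is the storage argument the abstract makes at band level 1.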
Gamification and Serious Games
A video game is a mental contest, played with a computer according to certain rules, for amusement, recreation, or winning a stake [Zyda 2005]. A digital game refers to a multitude of types and genres of games, played on different platforms using digital technologies such as computers, consoles, handhelds, and mobile devices [DGEI2013]. The concept of digital games embraces this technological diversity. In contrast with terms such as ‘video games’ or ‘computer games’, it does not refer to a particular device on which a digital game can be played. The common factor is that digital games are fundamentally produced, distributed, and exhibited using digital technologies. Gamification has been defined as the use of game design elements in non-game contexts and activities [Deterding2011], which often aim to change attitudes and behaviors [Prandi et al 2015]: using game-based mechanics, aesthetics, and game thinking to engage people, motivate action, solve problems, and promote learning [Kapp2013] [Kapp2015], e.g. employing awards, ranks during missions, or leaderboards to encourage active engagement during an activity such as health fitness tracking or e-learning during an online course. Thus, gamification uses parts of games but is not a complete game. Serious games are full-fledged games created for transferring knowledge [Ritterfeld et al 2009], teaching skills, and raising awareness of certain topics for non-entertainment purposes [Deterding2011]. Essentially, a serious game is a mental contest, played with a computer in accordance with specific rules, that uses entertainment to further government or corporate training, education, health, public policy, and strategic communication objectives [Zyda2005].
Augmented Cognition via Brainwave Entrainment in Virtual Reality: An Open, Integrated Brain Augmentation in a Neuroscience System Approach
Building on augmented cognition theory and technology, our novel contribution in this work enables the acceleration and enhancement of certain brain functions related to task performance. We integrated, in an open-source framework, the latest immersive virtual reality (VR) head-mounted displays with the Emotiv EPOC EEG headset, forming an open neuro- and biofeedback system for cognitive state detection and augmentation. Our novel methodology allows content presentation in immersive VR to be significantly accelerated, while lowering brain frequency to the alpha level, without losing content retention by the user. In our pilot experiments, we tested our innovative VR platform by presenting to N = 25 subjects a complex 3D maze test and different procedures for learning how to exit it. The subjects exposed to our VR-induced entrainment learning technology performed significantly better than those exposed to other ‘‘classical’’ learning procedures. In particular, cognitive task performance augmentation was measured for learning time, complex navigational skills, decision-making abilities, and orientation ability.
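Detecting the alpha level in such a neurofeedback loop boils down to estimating the relative power of an EEG channel in the 8–12 Hz band. A minimal, illustrative sketch (not the system’s actual signal-processing pipeline, which would use a calibrated headset SDK) is:

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    # Relative power in [lo, hi] Hz from the magnitude spectrum;
    # the DC bin is excluded from the normalization.
    sig = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() / psd[1:].sum()
```

A feedback loop would poll this value on a sliding window and, for example, slow content presentation whenever relative alpha power drops below a tuned threshold.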
glGA: an OpenGL Geometric Application framework for a modern, shader-based computer graphics curriculum
This paper presents the open-source glGA (OpenGL Geometric Application) framework, a lightweight, shader-based, comprehensive, and easy-to-understand computer graphics (CG) teaching C++ system used for educational purposes, with emphasis on modern graphics and GPU application programming. This framework, with the accompanying examples and assignments, has been employed over the last three semesters in two different courses at the Computer Science Department of the University of Crete, Greece. It encompasses four basic Examples and six Sample Assignments for computer graphics educational purposes that support all major desktop and mobile platforms, such as Windows, Linux, MacOSX and iOS, using the same code base. We argue for the extensibility of this system, referring to an outstanding postgraduate project built on top of glGA for the creation of an Augmented Reality environment, in which life-size virtual characters exist in a marker-less real scene. Subsequently, we present the learning results of the adoption of this CG framework by both undergraduate and postgraduate university courses, as far as the success rate and student grasp of major, modern, shader-based CG topics are concerned. Finally, we summarize the novel educative features implemented in glGA, in comparison with other systems, as a medium for improving the teaching of modern CG and GPU application programming.
A survey of mobile and wireless technologies for augmented reality systems
Recent advances in hardware and software for mobile computing have enabled a new breed of mobile augmented reality (AR) systems and applications. A new breed of computing called ‘augmented ubiquitous computing’ has resulted from the convergence of wearable computing, wireless networking, and mobile AR interfaces. In this paper, we provide a survey of different mobile and wireless technologies and how they have impacted AR. Our goal is to place them into different categories so that it becomes easier to understand the state of the art and to help identify new directions of research.
Applications Of Interactive Virtual Humans In Mobile Augmented Reality
Recent advances in hardware and software for mobile computing have enabled a new breed of mobile Augmented Reality systems and applications featuring interactive virtual characters. This has resulted from the convergence of the tremendous progress in mobile computer graphics and mobile AR interfaces. In this paper, we focus on the evolution of our algorithms and their integration towards improving the presence and interactivity of virtual characters in real and virtual environments, as we realize the transition from mobile workstations to ultra-mobile PCs. We examine in detail three crucial parts of such systems: user-tracked interaction; real-time, automatic, adaptable animation of virtual characters; and deformable precomputed radiance transfer illumination for virtual characters. We examine our efforts to enhance the sense of presence for the user, while maintaining the quality of animation and interactivity as we scale and deploy our AR framework on a variety of platforms. We examine different AR virtual-human-enhanced scenarios on the different mobile devices that illustrate the interplay and applications of our methods.
Presence and interaction in mixed reality environments
In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We look at three crucial parts of this system: interaction, animation, and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions that are generated on the fly by adapting previously recorded clips. Real-time idle motions are an example of the latter category. All these different motions are generated and blended online, resulting in flexible and realistic animation. Our robust rendering method operates in accordance with the previous animation layer, based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, glued together under a unique framework for presence and interaction in MR.
Immersive VR Decision Training: Telling Interactive Stories Featuring Advanced Virtual Human Simulation Technologies
Based on the premise of a synergy between interactive storytelling and VR training simulation, this paper treats the main issues involved in the practical realization of an immersive VR decision training system supporting a possibly broad spectrum of scenarios featuring interactive virtual humans. The paper describes a concrete concept of such a system and a practical realization example.
Interactive Scenario Immersion: Health Emergency Decision Training in JUST Project
The paper discusses the main requirements, constraints, and challenges involved in the practical realization of an immersive VR situation training system that would support the simulation of interactive scenarios of various types. Special attention is paid to the demanding health emergency decision training domain. As an example, an immersive JUST VR health emergency training system, built in the frame of the EU IST JUST project, is presented in more detail.