ORAMAVR PUBLICATIONS

MAGES 3.0: Tying the knot of medical VR

George Papagiannakis, Paul Zikas, Nick Lydatakis, Steve Kateros, Mike Kentros, Efstratios Geronikolakis, Manos Kamarianakis, Ioanna Kartsonaki, Giannis Evangelou
(ACM SIGGRAPH Immersive Pavilion, 2020)

In this work, we present MAGES 3.0, a novel Virtual Reality (VR)-based authoring SDK platform for accelerated surgical training and assessment. The MAGES Software Development Kit (SDK) allows code-free prototyping of any VR psychomotor simulation of medical operations by medical professionals, who urgently need a tool to solve the issue of outdated medical training. Our platform encapsulates the following novel algorithmic techniques: a) a collaborative networking layer with a Geometric Algebra (GA) interpolation engine, b) a supervised machine learning analytics module for real-time recommendations and user profiling, c) a GA deformable cutting and tearing algorithm, and d) an on-the-go configurable soft-body simulation for deformable surfaces.

Paper (PDF)

Scenior: An Immersive Visual Scripting system based on VR Software Design Patterns for Experiential Training

Paul Zikas, George Papagiannakis, Nick Lydatakis, Steve Kateros, Stavroula Ntoa, Ilia Adami, Constantine Stephanidis
(CGI, 2020)

Virtual reality (VR) has re-emerged as a low-cost, highly accessible consumer product, and training on simulators is rapidly becoming standard in many industrial sectors. However, the available systems either focus on a gaming context with limited capabilities, or they support only content creation of virtual environments without any rapid prototyping and modification. In this project, we propose a code-free, visual scripting platform to replicate gamified training scenarios through rapid prototyping and VR software design patterns. We implemented and compared two authoring tools: a) visual scripting and b) a VR editor for the rapid reconstruction of VR training scenarios. Our visual scripting module is capable of generating training applications through a node-based scripting system, whereas the VR editor gives the user/developer the ability to customize and populate new VR training scenarios directly from within the virtual environment. We also introduce action prototypes, a new software design pattern suitable for replicating behavioral tasks in VR experiences. In addition, we present the training scenegraph architecture as the main model to represent training scenarios on a modular, dynamic and highly adaptive acyclic graph based on a structured educational curriculum. Finally, a user-based evaluation of the proposed solution indicated that users, regardless of their programming expertise, can effectively use the tools to create and modify training scenarios in VR.
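
The action-prototype and training-scenegraph ideas from the abstract can be sketched roughly as follows. This is an illustrative Python reimplementation under our own naming assumptions (`Action`, `InsertAction`, `TrainingScenegraph` are hypothetical names, not the actual Scenior API):

```python
# Hypothetical sketch of the "action prototype" pattern: a reusable task
# template plus an ordered scenegraph of tasks forming a training lesson.

class Action:
    """Base prototype for a discrete training task."""
    def initialize(self):
        """Spawn the objects and hints this task needs."""
        raise NotImplementedError

    def perform(self, user_input):
        """Return True once the user has completed the task."""
        raise NotImplementedError

class InsertAction(Action):
    """Example prototype: insert a tool into a target socket."""
    def __init__(self, tool, socket, tolerance=0.01):
        self.tool, self.socket, self.tolerance = tool, socket, tolerance

    def initialize(self):
        print(f"Highlight {self.tool} and {self.socket}")

    def perform(self, user_input):
        # Completed when the tool is within tolerance of the socket.
        return user_input.get("distance", float("inf")) <= self.tolerance

class TrainingScenegraph:
    """Ordered (acyclic) sequence of actions forming a lesson."""
    def __init__(self, actions):
        self.actions, self.current = list(actions), 0

    def update(self, user_input):
        if self.current >= len(self.actions):
            return "finished"
        if self.actions[self.current].perform(user_input):
            self.current += 1
        return f"step {self.current}/{len(self.actions)}"
```

New task types are added by subclassing the prototype rather than writing scenario-specific scripts, which is what makes code-free composition in a node editor plausible.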

Paper (PDF)

Deform, Cut and Tear a skinned model using Conformal Geometric Algebra

Manos Kamarianakis, George Papagiannakis
(CGI-ENGAGE 2020)

In this work, we present a novel, integrated rigged character simulation framework in Conformal Geometric Algebra (CGA) that supports, for the first time, real-time cuts and tears, before and/or after the animation, while maintaining deformation topology. The purpose of using CGA is to lift several restrictions posed by current state-of-the-art character animation & deformation methods. Previous implementations originally required weighted matrices to perform deformations, whereas, in the current state of the art, dual quaternions handle both rotations and translations but cannot handle dilations. CGA is a suitable extension of dual-quaternion algebra that amends these two major previous shortcomings: the need to constantly transmute between matrices and dual quaternions, as well as the inability to properly dilate a model during animation. Our CGA algorithm also provides easy interpolation and application of all deformations at each intermediate step, all within the same geometric framework. Furthermore, we present two novel algorithms that enable cutting and tearing of the input rigged, animated model, while the output model can be further re-deformed. These interactive, real-time cut and tear operations can enable a new suite of applications, especially in the scope of medical surgical simulation.

Paper (PDF)

Video

iSupport: Building a Resilience Support Tool for Improving the Health Condition of the Patient During the Care Path

Angelina Kouroubali, Haridimos Kondylakis, Lefteris Koumakis, George Papagiannakis, Paul Zikas, Dimitrios G. Katehakis
(Stud Health Technol Inform, 2019)

Anxiety and stress are very common symptoms of patients facing a forthcoming surgery. However, limited time and resources within healthcare systems make stress relief interventions difficult to provide. Research has shown that the provision of preoperative stress relief and educational resources can improve health outcomes and speed recovery. Information and Communication Technology (ICT) can be a valuable tool in providing stress relief and educational support to patients and family, before but also after an operation, enabling better self-management and self-empowerment. In this direction, this paper reports on the design of a novel technical infrastructure for a resilience support tool for improving the health condition of patients during the care path, using Virtual Reality (VR). The designed platform aims, among other goals, at improving patients' knowledge of their data and their effectiveness of and adherence to treatment, as well as providing effective communication channels between patients and clinicians.

Paper (PDF)

A True AR Authoring Tool for Interactive Virtual Museums

Efstratios Geronikolakis, Paul Zikas, Steve Kateros, Nick Lydatakis, Stelios Georgiou, Mike Kentros, George Papagiannakis
(Visual Computing for Cultural Heritage, Springer, 2019)

In this work, a new and innovative form of spatial computing that recently appeared in the bibliography, called True Augmented Reality (AR), is employed in cultural heritage preservation. This innovation could be adopted by the Virtual Museums of the future to enhance the quality of experience. It emphasises the fact that a visitor will not be able to tell, at first glance, whether the artefact that he/she is looking at is real or not, and it is expected to draw the visitors' interest. True AR is not limited to artefacts but extends even to buildings or life-sized character simulations of statues. It provides the best visual quality possible, so that users will not be able to tell the real objects from the augmented ones. Such applications can be beneficial for future museums, as with True AR, 3D models of various exhibits, monuments, statues, characters and buildings can be reconstructed and presented to the visitors in a realistic and innovative way. We also propose our Virtual Reality Sample application, a True AR playground featuring basic components and tools for generating interactive Virtual Museum applications, alongside a 3D reconstructed character (the priest of the Asinou church) who acts as the storyteller of the augmented experience.

Paper (PDF)

Video

From Readership to Usership and Education, Entertainment, Consumption to Valuation: Embodiment and Aesthetic Experience in Literature-based MR Presence

Stéphanie Bertrand, Martha Vassiliadi, Paul Zikas, Efstratios Geronikolakis, George Papagiannakis
(CoRR, 2019)

This chapter will extend its preliminary scope by examining how literary transportation further amplifies presence and affects user response vis-à-vis virtual heritage by focusing on embodiment and aesthetic experience. To do so, it will draw on recent findings emerging from the fields of applied psychology, neuroaesthetics and cognitive literary studies; and consider a case study advancing the use of literary travel narratives in the design of DCH applications for Antiquities – in this case the well-known ancient Greek monument of the Acropolis. Subsequently, the chapter will discuss how Literary-based MR Presence shifts public reception from an education-entertainment-touristic consumption paradigm to a response predicated on valuation. It will show that this type of public engagement is more closely aligned both with MR applications’ default mode of usership, and with newly emerging conceptions of a user-centered museum (e.g., the Museum 3.0), thus providing a Virtual Museum model expressly suited to cultural heritage.

Virtual Reality Simulation Facilitates Resident Training in Total Hip Arthroplasty: A Randomized Controlled Trial

Jessica Hooper MD, Eleftherios Tsiridis MD, PhD, James E. Feng MD, Ran Schwarzkopf MD, MSc, Daniel Waren MS, William J. Long MD, FRCSC, Lazaros Poultsides MD, PhD, William Macaulay MD, George Papagiannakis, Eustathios Kenanidis, Eduardo D. Rodriguez, James Slover, Kenneth A. Egol, Donna P. Phillips, Scott Friedlander, Michael Collins
(The Journal of Arthroplasty, 2019)

Background
No study has yet assessed the efficacy of virtual reality (VR) simulation for teaching orthopedic surgery residents. In this blinded, randomized, and controlled trial, we asked if the use of VR simulation improved postgraduate year (PGY)-1 orthopedic residents’ performance in cadaver total hip arthroplasty and if the use of VR simulation had a preferentially beneficial effect on specific aspects of surgical skills or knowledge.

Methods
Fourteen PGY-1 orthopedic residents completed a written pretest and a single cadaver total hip arthroplasty (THA) to establish baseline levels of knowledge and surgical ability before 7 were randomized to VR-THA simulation. All participants then completed a second cadaver THA and retook the test to assess for score improvements. The primary outcomes were improvement in test and cadaver THA scores.

Results
There was no significant difference in the improvement in test scores between the VR and control groups (P = .078). In multivariate regression analysis, the VR cohort demonstrated a significant improvement in overall cadaver THA scores (P = .048). The VR cohort demonstrated greater improvement in each specific score category compared with the control group, but this trend was only statistically significant for technical performance (P = .009).

Conclusions
VR simulation improves PGY-1 resident surgical skills but has no significant effect on medical knowledge. The most significant improvement was seen in technical skills. We anticipate that VR simulation will become an indispensable part of orthopedic surgical education, but further study is needed to determine how best to use VR simulation within a comprehensive curriculum.

Level of Evidence
Level 1.

Paper (PDF)

Digital Health Tools for Perioperative Stress Reduction in Integrated Care

Angelina Kouroubali, Haridimos Kondylakis, Evangelos Karadimas, Georgios Kavlentakis, Akis Simos, Rosa María Baños, Rocío Herrero Camarano, George Papagiannakis, Paul Zikas, Yiannis Petrakis, Alba Jiménez Díaz, Santiago Hors-Fraile, Kostas Marias, Dimitrios G. Katehakis
(EJBI, 2019)

Background
Patients undergoing elective surgery often face symptoms of anxiety and stress. Healthcare systems have limited time and resources to provide individualized stress relief interventions. Research has shown that stress relief interventions and educational resources can improve health outcomes and speed recovery.

Objectives
Digital health tools can provide valuable assistance in stress relief and educational support to patients and family. This paper reports on the design of a novel digital health infrastructure for improving the health condition of patients, during the care path, using virtual reality (VR) and other information and communication technologies (ICT).

Methods
Digital tools have been combined to form an integrated platform that can be used by patients before but also after an operation, enabling better self-management and self-empowerment.

Results
The designed platform aims at improving the knowledge of patients about their condition, providing stress relief tools, helping them adhere to treatment, as well as providing for effective communication channels between patients and clinicians.

Conclusions
The proposed solution has the potential to improve physical and emotional reactions to stress and increase the levels of calmness and a sense of wellbeing. Information provided through the platform advances and enhances health literacy and digital competence and increases the participation of the patient in the decision-making process. Integration with third-party applications can facilitate the exchange of important information between patients and physicians as well as between personal applications and clinical health systems.

Paper (PDF)

Transforming medical education and training with VR using M.A.G.E.S.

George Papagiannakis, Nick Lydatakis, Steve Kateros, Stelios Georgiou, Paul Zikas
(Proceedings of Siggraph Asia ’18 Posters, 2018)

In this work, we propose a novel VR s/w system aiming to disrupt the healthcare training industry with the first Psychomotor Virtual Reality (VR) Surgical Training solution. Our system generates a fail-safe, realistic environment for surgeons to master and extend their skills in an affordable and portable solution. We deliver an educational tool for orthopedic surgeries to enhance the learning procedure with gamification elements, advanced interactivity and cooperative features in an immersive VR operating theater. Our methodology transforms medical training into a cost-effective, easily and broadly accessible process. We also propose a fully customizable SDK platform able to generate educational VR simulations with minimal adaptations. The latter is accomplished by prototyping the learning pipeline into structured, independent and reusable segments, which are used to generate more complex behaviors. Our architecture supports all current and forthcoming VR HMDs and standard 3D content generation.

Paper (PDF)

Poster

Video

Psychomotor Surgical Training in Virtual Reality

George Papagiannakis, Panos Trahanias, Eustathios Kenanidis, Eleftherios Tsiridis
(The Adult Hip – Master Case Series and Techniques. Springer, Cham, 2018)

In this chapter, we present a novel s/w system aiming to disrupt the healthcare training industry with the first psychomotor virtual reality (VR) surgical training solution. We provide the means for performing surgical operations in VR, thereby facilitating training in a fail-safe environment that very accurately simulates reality and significantly reduces training costs, offering surgeons and the healthcare ecosystem a way to improve operation outcomes drastically.

With the presented system, we focus on a completed total knee arthroplasty (TKA) virtual reality operating module, opening the way for making available a full suite of virtual reality operations. Our methodology transforms medical training into a cost-effective and easily and broadly accessible process. The latter is accomplished by employing the latest VR, gamification and tracking technologies for virtual character-based, interactive 3D medical simulation training. It requires standard h/w (PCs, laptops) regardless of the operating system. For optimal user experience, a commodity VR head-mounted display (HMD) should be employed, along with motion or other hand-controller sensors. The open ovidVR architecture supports all current and forthcoming VR HMDs and standard 3D content generation. Our novel technologies facilitate Presence, that is, the feeling of 'being there' and 'acting there' in the virtual world, thereby offering the means for unprecedented training.

Paper (PDF)

Real-time rendering under distant illumination with Conformal Geometric Algebra

Margarita Papaefthymiou, George Papagiannakis
(Mathematical Methods in the Applied Sciences, John Wiley & Sons, 2017)

Precomputed Radiance Transfer (PRT) methods are established for handling global illumination (GI) of objects lit by area lights in real time, and many techniques have been proposed for rotating the light using linear-algebra rotation matrices. Rotating area lights efficiently is a crucial part of computer graphics, since it is one of the main components of real-time rendering. The matrices commonly used for handling such rotations are not particularly efficient and require high memory consumption; as a result, the need for new, more efficient rotation algorithms has been established. In this work, we employ CGA as the mathematical background for GI in real time under distant image-based lighting (IBL), for diffuse surfaces with self-shadowing, by efficiently rotating the environment light using CGA entities. Our work is based on Spherical Harmonics (SH), which are used for approximating natural, area-light illumination as irradiance maps. Our main novelty is that we extend the PRT algorithm by representing SH, for the first time, with CGA. The main intuition is that SH of band index 1 are represented using CGA entities, and SH with band index larger than 1 are represented in terms of the CGA-SH of band 1. Specifically, we propose a new method for representing SH with CGA entities and rotating SH by rotating those CGA entities. In this way, we can visualize the SH rotations, rotate them faster than with rotation matrices, provide a unique visual representation and intuition regarding their rotation, in stark contrast to the usual rotation matrices, and achieve consistently better visual results than Ivanic rotation matrices during light rotation. Via our CGA-expressed SH, we provide a significant boost to the PRT algorithm, since we represent SH rotations by CGA rotors (4 numbers) as opposed to the 9×9 sparse matrices that are usually employed. With our algorithm, we pave the way for including scaling (dilation) and translation of light coefficients using CGA motors.
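
The core intuition behind the 4-number rotor claim can be illustrated for band 1, where real SH coefficients transform exactly like a 3D vector. The sketch below is an illustrative quaternion-based reimplementation, not the paper's CGA code, and it covers only band 1 (higher bands require the paper's full construction):

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate_band1(coeffs, axis, angle):
    """Rotate real band-1 SH coefficients (c_{1,-1}, c_{1,0}, c_{1,1}).

    In the real SH convention, (Y_1^{-1}, Y_1^0, Y_1^1) align with the
    (y, z, x) axes, so rotating the coefficients is just rotating the
    corresponding vector with a 4-number rotor: v' = q v q*.
    """
    c_m1, c_0, c_p1 = coeffs
    v = (c_p1, c_m1, c_0)                       # coefficients as (x, y, z)
    h = angle / 2.0
    s = math.sin(h)
    q = (math.cos(h), axis[0]*s, axis[1]*s, axis[2]*s)   # unit-axis rotor
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mul(quat_mul(q, (0.0,) + v), q_conj)
    return (y, z, x)                            # back to (c_{1,-1}, c_{1,0}, c_{1,1})
```

For example, the coefficient triple representing the x-direction, rotated 90° about z, becomes the triple representing the y-direction; the same result via a rotation matrix would need 9 entries instead of the rotor's 4.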

Paper (PDF)

Video

Gamification and Serious Games

George Papagiannakis
(Encyclopedia of Computer Graphics and Games, Springer International Publishing, 2017)

A Video game is a mental contest, played with a computer according to certain rules for amusement, recreation, or winning a stake [Zyda 2005]. A Digital Game refers to a multitude of types and genres of games, played on different platforms using digital technologies such as computers, consoles, handheld, and mobile devices [DGEI2013]. The concept of digital games embraces this technological diversity. In contrast with terms such as ‘video games’ or ‘computer games’, it does not refer to a particular device on which a digital game can be played. The common factor is that digital games are fundamentally produced, distributed and exhibited using digital technologies. Gamification has been defined as the use of game design elements in non-game contexts and activities [Deterding2011], which often aim to change attitudes and behaviors [Prandi et al 2015]: using game-based mechanics, aesthetics and game thinking to engage people, motivate action, solve problems and promote learning [Kapp2013] [Kapp2015], for instance by employing awards, ranks during missions, or leaderboards to encourage active engagement in an activity such as health fitness tracking or e-learning during an online course. Thus, gamification uses parts of games but is not a complete game. Serious Games are full-fledged games created for transferring knowledge [Ritterfeld et al 2009], teaching skills and raising awareness concerning certain topics for non-entertainment purposes [Deterding2011]. Essentially, a serious game is a mental contest, played with a computer in accordance with specific rules, that uses entertainment to further government or corporate training, education, health, public policy, and strategic communication objectives [Zyda2005].

Paper (PDF)

Augmented Cognition via Brainwave Entrainment in Virtual Reality: An Open, Integrated Brain Augmentation in a Neuroscience System Approach

Emanuele Argento, George Papagiannakis, Eva Baka, Michail Maniadakis, Panos Trahanias, Michael Sfakianakis, Ioannis Nestoros
(Augmented Human Research Journal, Springer, 2017)

Building on augmented cognition theory and technology, our novel contribution in this work enables the acceleration and enhancement of certain brain functions related to task performance. We integrated, in an open-source framework, the latest immersive virtual reality (VR) head-mounted displays with the Emotiv EPOC EEG headset, in an open neuro- and biofeedback system for cognitive state detection and augmentation. Our novel methodology allows us to significantly accelerate content presentation in immersive VR, while lowering brain frequency to the alpha level, without losing content retention by the user. In our pilot experiments, we tested our innovative VR platform by presenting to N = 25 subjects a complex 3D maze test and different learning procedures on how to exit it. The subjects exposed to our VR-induced entrainment learning technology performed significantly better than those exposed to other ‘classical’ learning procedures. In particular, cognitive task performance augmentation was measured for learning time, complex navigational skills, decision-making abilities and orientation ability.

Paper (PDF)

A Mobile, AR Inside-Out Positional Tracking Algorithm, (MARIOPOT), Suitable for Modern, Affordable Cardboard-Style VR HMDs

Paul Zikas, Vasileios Bachlitzanakis, Margarita Papaefthymiou, George Papagiannakis
(Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection, Lecture Notes in Computer Science, vol 10058. Springer, also presented in EuroMed16, Larnaca, 2016)

Smartphone devices constitute low-cost, mainstream and easy-to-use h/w for VR rendering, and the main component of modern, mobile VR Head-Mounted Displays (HMDs). They support rotational tracking from on-board sensors to manage orientation changes, via their Inertial Measurement Units (IMUs), but they lack positional tracking to reflect head translational movements, a key feature that modern desktop VR HMDs nowadays provide out of the box. Taking advantage of the RGB camera sensor with which each modern mobile device is equipped, we describe a novel combination of inside-out AR tracking algorithms, based on both marker and markerless tracking systems, to provide the missing positional tracking for mobile HMDs. We employed this system as an affordable, low-cost VR visualization h/w and s/w method for heritage professionals, for interactive walk-throughs of VR archaeological sites and Cultural Heritage monuments. We also compared our results with a recent holographic AR headset (the Meta AR glasses) that supports gesture recognition and interaction with virtual objects via its RGB-D camera sensor and integrated IMU.
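
The division of labor described above, where the IMU supplies orientation every frame while the camera-based tracker supplies position only on frames where marker or markerless tracking succeeds, can be sketched as a minimal fusion loop. This is an illustrative sketch under our own assumptions (class name, smoothing constant and data layout are hypothetical, not the paper's implementation):

```python
# Hypothetical sketch: fuse per-frame IMU orientation with intermittent
# camera-derived position for inside-out mobile HMD tracking.

class InsideOutPoseFusion:
    def __init__(self, alpha=0.8):
        self.alpha = alpha                       # weight of the new camera fix
        self.position = (0.0, 0.0, 0.0)          # last fused head position
        self.orientation = (1.0, 0.0, 0.0, 0.0)  # quaternion from the IMU

    def update(self, imu_orientation, camera_position=None):
        """Call once per frame; camera_position is None when tracking is lost."""
        self.orientation = imu_orientation       # IMU is always available
        if camera_position is not None:
            # Low-pass blend to damp jitter in the vision-based position;
            # when the camera loses the markers, the last position is held.
            a = self.alpha
            self.position = tuple(a * c + (1 - a) * p
                                  for p, c in zip(self.position,
                                                  camera_position))
        return self.position, self.orientation
```

Holding the last known position during camera dropouts keeps the rendered viewpoint stable, at the cost of momentarily frozen translation, which is the usual trade-off for marker-based inside-out systems.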

Paper (PDF)

Video

glGA: an OpenGL Geometric Application framework for a modern, shader-based computer graphics curriculum

George Papagiannakis, Petros Papanikolaou, Elisavet Greassidou, Panos Trahanias
(Education Papers, Eurographics 2014)

This paper presents the open-source glGA (OpenGL Geometric Application) framework, a lightweight, shader-based, comprehensive and easy-to-understand C++ system for teaching computer graphics (CG), with emphasis on modern graphics and GPU application programming. This framework, with the accompanying examples and assignments, has been employed over the last three semesters in two different courses at the Computer Science Department of the University of Crete, Greece. It encompasses four basic Examples and six Sample Assignments for computer graphics education that support all major desktop and mobile platforms, such as Windows, Linux, MacOSX and iOS, using the same code base. We argue for the extensibility of this system, referring to an outstanding postgraduate project built on top of glGA for the creation of an Augmented Reality environment in which life-size virtual characters exist in a marker-less real scene. Subsequently, we present the learning results of the adoption of this CG framework by both undergraduate and postgraduate university courses, as far as the success rate and student grasp of major, modern, shader-based CG topics are concerned. Finally, we summarize the novel educative features implemented in glGA, in comparison with other systems, as a medium for improving the teaching of modern CG and GPU application programming.

Paper (PDF)

A survey of mobile and wireless technologies for augmented reality systems

George Papagiannakis, Gurminder Singh and Nadia Magnenat-Thalmann
(Journal of Computer Animation and Virtual Worlds, John Wiley and Sons Ltd, 2008)

Recent advances in hardware and software for mobile computing have enabled a new breed of mobile augmented reality (AR) systems and applications. A new breed of computing called ‘augmented ubiquitous computing’ has resulted from the convergence of wearable computing, wireless networking, and mobile AR interfaces. In this paper, we provide a survey of different mobile and wireless technologies and how they have impacted AR. Our goal is to place them into different categories so that it becomes easier to understand the state of the art and to help identify new directions of research.

Paper (PDF)

Applications Of Interactive Virtual Humans In Mobile Augmented Reality

Nadia Magnenat-Thalmann, George Papagiannakis, Parag Chaudhur
(Encyclopedia of Multimedia [2nd Edition] Springer-Verlag, 2008)

Recent advances in hardware and software for mobile computing have enabled a new breed of mobile Augmented Reality systems and applications featuring interactive virtual characters. This has resulted from the convergence of the tremendous progress in mobile computer graphics and mobile AR interfaces. In this paper, we focus on the evolution of our algorithms and their integration towards improving the presence and interactivity of virtual characters in real and virtual environments, as we realize the transition from mobile workstations to ultra-mobile PCs. We examine in detail three crucial parts of such systems: user-tracked interaction; real-time, automatic, adaptable animation of virtual characters; and deformable precomputed radiance transfer illumination for virtual characters. We examine our efforts to enhance the sense of presence for the user, while maintaining the quality of animation and interactivity as we scale and deploy our AR framework on a variety of platforms. We examine different AR virtual-human-enhanced scenarios on the different mobile devices that illustrate the interplay and applications of our methods.

Paper (PDF)

Presence and interaction in mixed reality environments

Arjan Egges, George Papagiannakis, Nadia Magnenat-Thalmann
(Visual Comput, 2007)

In this paper, we present a simple and robust mixed reality (MR) framework that allows for real-time interaction with virtual humans in mixed reality environments under consistent illumination. We will look at three crucial parts of this system: interaction, animation and global illumination of virtual humans for an integrated and enhanced presence. The interaction system comprises a dialogue module, which is interfaced with a speech recognition and synthesis system. Next to speech output, the dialogue system generates face and body motions, which are in turn managed by the virtual human animation layer. Our fast animation engine can handle various types of motions, such as normal key-frame animations, or motions that are generated on-the-fly by adapting previously recorded clips. Real-time idle motions are an example of the latter category. All these different motions are generated and blended on-line, resulting in a flexible and realistic animation. Our robust rendering method operates in accordance with the previous animation layer, based on a precomputed radiance transfer (PRT) illumination model extended for virtual humans, resulting in a realistic rendition of such interactive virtual characters in mixed reality environments. Finally, we present a scenario that illustrates the interplay and application of our methods, glued under a unique framework for presence and interaction in MR.

Paper (PDF)

Immersive VR Decision Training: Telling Interactive Stories Featuring Advanced Virtual Human Simulation Technologies

Michal Ponder, Bruno Herbelin, Tom Molet, Sebastien Schertenlieb, Branislav Ulicny, George Papagiannakis, Nadia Magnenat-Thalmann, Daniel Thalmann
(Proc. of 9th Eurographics Workshop on Virtual Environments, 2003)

Based on the premise of a synergy between interactive storytelling and VR training simulation, this paper treats the main issues involved in the practical realization of an immersive VR decision training system supporting a possibly broad spectrum of scenarios featuring interactive virtual humans. The paper describes a concrete concept of such a system and a practical realization example.

Paper (PDF)

Interactive Scenario Immersion: Health Emergency Decision Training in JUST Project

Michal Ponder, Bruno Herbelin, Tom Molet, Sébastien Schertenleib, Branislav Ulicny, George Papagiannakis, Nadia Magnenat-Thalmann, Daniel Thalmann
(Proc. Of 1st International Workshop on Virtual Reality Rehabilitation, VRMHR, 2002)

The paper discusses the main requirements, constraints and challenges involved in the practical realization of an immersive VR situation training system that supports the simulation of interactive scenarios of various types. Special attention is paid to the demanding health emergency decision training domain. As an example, the immersive JUST VR health emergency training system, built in the frame of the EU IST JUST project, is presented in more detail.

Paper (PDF)