R&D projects

ONGOING PROJECTS

MAGES™ SDK is available to developers using Unity™ or Unreal Engine™ as a plug-in package – a software library containing compiled code in DLL form, scripts in C# or C++, and 3D/2D/multimedia assets with their associated files – that can be imported into the Unity or Unreal Editor under an annual subscription. It can then be used with the rest of the Unity or Unreal Engine to generate standalone VR/desktop medical simulations on Windows, Android, or macOS, incorporating the Unity, Unreal, or SteamVR™ runtime. The MAGES SDK runtime operates locally, within the end user's Unity or Unreal Engine editor.

Developers can reach their audiences with state-of-the-art medical VR training simulations, powered by MAGES SDK.

More details on the simulation creation methodology:

  • Proprietary interpolation engine with geometric algebra:
    – reduces network data transfer by 4x (see the sketch after this list)
  • 5G/edge-computing-ready network deployment:
    – offloads costly calculations to the network edge
  • Market-leading number of concurrent active users in the same VR scene, all fully interacting with the same synchronized content at minimal latency
    – including VR Recorder functionality for recording multiplayer sessions and replaying them from any device
  • Unlimited number of captured and analyzed events per second
  • Rapid, in-scene local authoring with cloud-based storage, review, and visualization customization
  • Deep learning (DL) agent platform for analytics recommendations and scoring factors
  • Geometric algebra algorithms for custom cutting, tearing, and drilling of deformable and soft bodies, based on the algebra of William K. Clifford
  • Rapid prototyping based on our proprietary software design patterns for VR, employed as “recurring solutions to standard problems”, as first introduced by Christopher Alexander in architecture
  • Dynamics (plot), mechanics (rules), and components (UI elements) embedded in the SDK, with ready-to-use sample scenes
  • Visual scripting editor for rapid VR scene authoring – a proprietary semantic representation of scenes based on novel “scene-graph” algorithms
  • Scoring leaderboards, live webinars, and VR user recording support
  • Support for all currently available VR HMDs, driven by the curriculum learning objectives (psychomotor or cognitive); the same curriculum is automatically available in standard point-and-click versions for Windows and macOS
  • Physics-based visual techniques allowing the 3D representation of deformable soft-body objects (skin, tissue, etc.)
  • On-the-go configuration to match tissue properties
  • Velocity-based interaction for manipulating objects and interacting with other users in VR
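
To make the data-reduction claim above concrete, here is a minimal sketch of client-side pose interpolation between sparse network keyframes, the general idea behind such an engine. It is illustrative only: plain quaternion slerp stands in for the proprietary geometric-algebra interpolator, and the names (PoseKeyframe, InterpolatePose) and the 4:1 keyframe-to-frame ratio are hypothetical.

```csharp
using System;
using System.Numerics;

// Hypothetical network update: a pose and the time it was sampled.
struct PoseKeyframe
{
    public float Time;          // seconds since session start
    public Vector3 Position;
    public Quaternion Rotation;
}

static class NetworkPoseDemo
{
    // Reconstruct the pose at render time t from the two keyframes that
    // bracket it; the frames in between are never transmitted.
    public static (Vector3, Quaternion) InterpolatePose(
        PoseKeyframe a, PoseKeyframe b, float t)
    {
        float u = Math.Clamp((t - a.Time) / (b.Time - a.Time), 0f, 1f);
        Vector3 p = Vector3.Lerp(a.Position, b.Position, u);
        // Stand-in for the geometric-algebra interpolator: plain slerp.
        Quaternion q = Quaternion.Slerp(a.Rotation, b.Rotation, u);
        return (p, q);
    }

    static void Main()
    {
        // Keyframes arrive at 22.5 Hz while the client renders at 90 Hz.
        var k0 = new PoseKeyframe { Time = 0.000f, Position = Vector3.Zero,
                                    Rotation = Quaternion.Identity };
        var k1 = new PoseKeyframe { Time = 0.044f,
                                    Position = new Vector3(0.1f, 0f, 0f),
                                    Rotation = Quaternion.CreateFromYawPitchRoll(0.2f, 0f, 0f) };

        // The three in-between frames are synthesized locally.
        for (float t = 0.011f; t < 0.044f; t += 0.011f)
        {
            var (p, q) = InterpolatePose(k0, k1, t);
            Console.WriteLine($"t={t:F3}s pos={p} rot={q}");
        }
    }
}
```

Sending poses at a quarter of the render rate and synthesizing the in-between frames locally is what a 4x reduction in transferred pose data amounts to; a geometric-algebra interpolator (e.g., over dual-quaternion multivectors) plays the same role as the slerp call here.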

INDUX-R

Project Website: https://indux-r.eu/

INDUX-R Consortium Members

OMEN-E

Revolutionary Virtual Reality Simulation-based Medical Training Platform (REVIRES-MED)

There is a growing shortage of surgeons globally, and not enough surgeons are being trained today to meet future needs. Access to quality medical services is very limited: 5 billion people lack affordable surgical or anaesthesia care due to high medical costs and a shortage of physicians.

The problem lies in the outdated paradigm of surgical training, which has not changed in 150 years despite technological advances. We still use time- and resource-intensive apprenticeship methods to train our next surgeons – a surgeon operates on a patient while the trainee observes. This is extremely costly and is not an efficient way of preparing future surgeons.

Virtual reality (VR) can change the paradigm and make medical training efficient. MAGES, developed by ORamaVR, is the world’s first hyper-realistic VR-based software platform for accelerated surgical training and assessment. With MAGES, trainees can perform lifelike surgery simulations in a risk-free environment to improve patient outcomes. We have clinically proven that our software enhances skills acquisition compared to traditional methods.

We have developed a revolutionary technology that enables code-free creation of VR medical training content by medical professionals. The technology removes the biggest obstacle to the uptake of VR in daily life – the lack of affordable content. By providing the necessary tools to doctors, we enable them to create new training content rapidly and for a fraction of the cost of alternative technologies.

In the Revolutionary Virtual Reality Simulation-based Medical Training Platform project, we will productise our software development kit. This will be a market-creating product that will bring about the long-expected proliferation of VR usage. We will conduct further trials to clinically validate the efficiency of the medical training content (including COVID-19-related content) created using MAGES. This will enable us to tackle the €3 billion Serviceable Obtainable Market.

REVIRES-MED Consortium Members

VOsCE

Viva-VOsCE, an Innosuisse project, aspires to develop a Virtual Reality (VR) platform to assist medical schools in delivering and assessing Objective Structured Clinical Examinations (OSCEs).

Physical OSCEs play a fundamental role in the practical assessment of medical degree learning outcomes, but they are time- and asset-intensive. Viva-VOsCE will produce a VR-based OSCE platform to support medical schools and national accreditation bodies in reforming student assessment, significantly reducing logistical effort and lowering overall expenditure.

The project aims to automate procedural aspects of the assessment to enable clinical examiners to focus on interpersonal and communication skills during an OSCE. ORamaVR will upscale its SDK with many innovative features that enable the OSCE examiner to optimally and objectively assess an examinee’s overall performance.

VOsCE Consortium Members

FIDAL

FIDAL, a Horizon Europe project, aspires to extend and deliver:

(i) advanced, future-proof Evolved 5G test infrastructures, anticipating evolution into the next SNS phase;
(ii) infrastructures open and accessible to support third-party vertical experiments; and
(iii) test environments for rapid prototyping and large-scale validation of advanced, forward-looking applications.

Building on the success of 5G-PPP Phase 3 projects, FIDAL will produce a unified experimentation framework with zero-touch orchestration, reusable network applications, and secure AI-as-a-Service capabilities, validating evolved 5G technologies in a user context that maximizes downstream take-up. This cultivates the ground for 6G and facilitates the convergence of the cyber and physical worlds through the very low latency and high data capacity of Beyond-5G and 6G networks. In FIDAL, ORamaVR leads the XR-assisted services for public safety use case.

FIDAL Consortium Members

PROFICIENCY

Training on patients according to the principle of “see one, do one, teach one” no longer corresponds to today’s requirements and technical opportunities in the education of surgical residents. During the ongoing COVID-19 pandemic, most hands-on training had to be discontinued, leading to an almost complete interruption of surgical education.

Under the lead of the three clinical partners – Kantonsspital St. Gallen, Centre Hospitalier Universitaire Vaudois, and Balgrist University Hospital – and endorsed by the Swiss Surgical Societies, novel standardized, proficiency-based surgical training curricula are defined and interfaced to simulation tools. The four implementation partners VirtaMed AG, Microsoft Mixed Reality & AI Zurich Lab, ORamaVR SA, and Atracsys LLC, in collaboration with ETHZ and ZHAW, develop these innovative training tools, ranging from online virtual reality simulation, augmented box trainers, and high-end simulators to augmented-reality-enabled open surgery and immersive remote operating room participation.

The proposed developments introduce a fully novel, integrative training paradigm installed and demonstrated on two example surgical modalities, laparoscopy and arthroscopy, while fully generalizable to other interventions. This will set new standards both in Switzerland and abroad.

In PROFICIENCY, ORamaVR leads the VR platform work package (WP) in the Online/VR Simulation subproject. Additionally, ORamaVR will be involved in the AR Surgery Training subproject by providing its expertise on the surgical phase detection task. Finally, ORamaVR will provide visualizations, OR modelling, and multi-user cooperation for the AR/VR trainee application of the Virtual Surgery Participation subproject.

PROFICIENCY Consortium Members

INTELLIGENT DIGITAL SURGEON

The Intelligent Digital Surgeon (IDS) is an Innosuisse-funded research project that will design and implement an embodied virtual agent within the MAGES medical training platform. Its main objective is to identify and analyze the immersed trainee’s behavioral model and provide personalized real-time feedback, assessment, and recommendations, like a real surgical instructor.

A deep learning model will be derived to identify the trainee’s behavioral model by recognizing and analyzing the trainee’s hand/arm gestures, and to assist the feedback decision engine by providing personalized assessment, real-time feedback, instructions, and recommendations. The IDS will be able to present the training module’s actions using the correct gestures. Furthermore, the MAGES system will embed methods for cutting and tearing physics-based deformable surfaces that advance the state of the art. A distributed VR system architecture will be designed, utilizing edge computing and 5G networks, that allows the use of even low-spec HMDs.
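
As a rough illustration of how recognized gestures can drive a feedback decision engine, here is a minimal sketch. It is not the project’s design: a nearest-neighbor matcher stands in for the deep learning model, and all names (GestureSample, FeedbackEngine, the gesture labels) are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Hypothetical recorded gesture: a short trajectory of hand positions.
record GestureSample(string Label, Vector3[] Trajectory);

static class FeedbackEngine
{
    // Stand-in for the deep learning classifier: nearest neighbor over
    // point-wise trajectory distance (assumes equal-length trajectories).
    static string Classify(Vector3[] observed, List<GestureSample> reference) =>
        reference.MinBy(s => s.Trajectory
            .Zip(observed, (a, b) => Vector3.Distance(a, b)).Sum())!.Label;

    // Map the recognized gesture to instructor-style feedback.
    static string Feedback(string expected, string recognized) =>
        recognized == expected
            ? $"Correct: '{recognized}' performed as expected."
            : $"Expected '{expected}' but observed '{recognized}'. Try again.";

    static void Main()
    {
        var reference = new List<GestureSample>
        {
            new("incision", new[] { new Vector3(0, 0, 0), new Vector3(0.1f, 0, 0) }),
            new("suturing", new[] { new Vector3(0, 0, 0), new Vector3(0, 0.1f, 0) }),
        };

        // A trainee trajectory captured by the HMD/controllers (hypothetical).
        var observed = new[] { new Vector3(0, 0.01f, 0), new Vector3(0.09f, 0.01f, 0) };

        string recognized = Classify(observed, reference);
        Console.WriteLine(Feedback(expected: "incision", recognized: recognized));
    }
}
```

In a real system the classifier would be a trained network over full hand/arm pose sequences, and the feedback mapping would draw on the personalized behavioral model described above.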

In Intelligent Digital Surgeon, ORamaVR is the main implementation partner.

INTELLIGENT DIGITAL SURGEON Consortium Members

COMPLETED PROJECTS

5G-EPICENTRE

5G is considered the mainstream broadband wireless technology of the coming decade and can improve the efficiency and effectiveness of highly demanding everyday operations such as public protection and disaster relief (PPDR). The ITU considers LTE-Advanced systems and 5G to be mission-critical PPDR technologies, able to address the needs of mission-critical intelligence by supporting mission-critical voice, data, and video services as an IMT radio interface.

5G-EPICENTRE will deliver an open, end-to-end 5G experimentation platform focusing on software solutions that serve the needs of PPDR. The envisioned platform will enable SMEs and developers to acquire knowledge about the latest 5G applications and approaches for first responders and crisis management, as well as to build and experiment with their own solutions. The platform will be based on an open service-oriented architecture, following current best DevOps practices (containerization of microservices), and will accommodate and provide open access to 5G network resources, acting as a 5G open-source repository for PPDR NetApps.

To assess the platform, several use cases will be realized as a PPDR vertical. The use cases will span the three ITU-defined service types (eMBB, mMTC, and URLLC) and provide the basis for assessing the platform’s secure interoperability capabilities beyond vendor-specific implementations. The SMEs and organizations participating in the use cases are active players in public security and disaster management, acting as key enablers for assessing 5G-EPICENTRE against the real needs to be addressed. Finally, through the realization of the use cases, KPIs relevant to 5G will be measured, especially those pertaining to service creation time.

In 5G-EPICENTRE, ORamaVR is responsible for the AR-assisted emergency surgical care use case.

5G-EPICENTRE Project Coordinator

5G-EPICENTRE Consortium Members

CHARITY

The impact of technology on the world’s panorama is at an all-time high. Advanced media applications enabling immersive communication are becoming ubiquitous in our lives, and there is a global trend toward adopting virtual solutions to support day-to-day business operations, social events, and general lifestyle. A subset of these innovative media applications includes Virtual Reality, Augmented Reality, and Holography, but they do not come without their share of challenges and requirements: to enable a satisfactory user experience, the demands on the computing platform and its underlying network can be considered extreme and far beyond what is attainable today.

Hence, we propose a Cloud for Holography and Cross Reality (CHARITY), a complete framework that attempts to overcome these challenges and meet the requirements of such applications. CHARITY leverages an innovative cloud architecture that exploits edge solutions, autonomous orchestration of the computing and network continuum, application-driven interfacing, mechanisms for smart, adaptive, and efficient resource management, strong community involvement, and overarching compatibility with all infrastructure vendors. This integrated framework will be put to the test in a broad diversity of use cases targeting advanced media applications, such as holographic events, virtual reality training, and mixed reality entertainment. CHARITY expects to deliver a working prototype, validated by the most demanding applications, capable of being demonstrated at dissemination events and exploited by a large community of users and companies outside the consortium, paving the way to the mass adoption of more advanced media applications in the market.

In CHARITY, ORamaVR leads the use cases WP and is primarily responsible for the VR medical training use case.

CHARITY Consortium Members

ACCORDION

There is an increasing number of signs that the edge computing concept is going to play a dominant role in forthcoming technology developments, disrupting economies at a large scale. The big cloud providers promptly jumped in to get the lion’s share, but edge computing is intrinsically more “democratic” than cloud computing: its distributed and localized nature can be an antibody to big trusts, if properly exploited. Synergistically employing edge computing with upcoming technologies such as 5G provides an opportunity for the EU to capitalize on its local resources and infrastructure and its SME-dominated application development landscape, and to achieve an edge-computing-driven disruption with a local business scope.

To this end, ACCORDION establishes an opportunistic approach to bringing together edge resources and infrastructures (public clouds, on-premise infrastructures, telco resources, even end devices) in pools defined in terms of latency, which can support NextGen application requirements. To mitigate the expectation that these pools will be “sparse”, providing low availability guarantees, ACCORDION will intelligently orchestrate the compute and network continuum formed between the edge and public clouds, using the latter as a capacitor. Deployment decisions will also be based on privacy, security, cost, time, and resource-type criteria (see the sketch below).

The slow adoption rate of novel technological concepts by EU SMEs will be tackled through an application framework that leverages DevOps and SecOps to facilitate the transition to the ACCORDION system. With a strong emphasis on European edge computing efforts (MEC, OSM) and three highly anticipated NextGen applications – collaborative VR, multiplayer mobile gaming, and cloud gaming – brought by the involved end users, ACCORDION expects to radically impact the application development and deployment landscape, also redirecting part of the related revenue from non-EU vendors to EU-local infrastructure and application providers.
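
The following is a minimal sketch of what latency-defined pool selection with multi-criteria deployment scoring could look like. It is illustrative only, not ACCORDION’s orchestrator: the EdgePool fields, the weights, and the SelectPool scoring rule are all hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical description of an edge resource pool.
record EdgePool(string Name, double LatencyMs, double CostPerHour,
                double PrivacyScore, double AvailableCpus);

static class Orchestrator
{
    // Filter pools by the application's latency bound (the pool "defined in
    // terms of latency"), then rank the survivors by a weighted score over
    // cost, privacy, and latency criteria. Lower score wins.
    static EdgePool? SelectPool(IEnumerable<EdgePool> pools,
                                double maxLatencyMs, double cpusNeeded)
    {
        return pools
            .Where(p => p.LatencyMs <= maxLatencyMs && p.AvailableCpus >= cpusNeeded)
            .OrderBy(p => 0.5 * p.CostPerHour            // cheaper is better
                        + 0.3 * (1.0 - p.PrivacyScore)   // more private is better
                        + 0.2 * p.LatencyMs / maxLatencyMs)
            .FirstOrDefault();
    }

    static void Main()
    {
        var pools = new List<EdgePool>
        {
            new("telco-edge-A",   LatencyMs: 8,  CostPerHour: 0.9, PrivacyScore: 0.8, AvailableCpus: 16),
            new("on-prem-B",      LatencyMs: 15, CostPerHour: 0.2, PrivacyScore: 0.9, AvailableCpus: 8),
            new("public-cloud-C", LatencyMs: 45, CostPerHour: 0.1, PrivacyScore: 0.5, AvailableCpus: 128),
        };

        // A collaborative VR session might require < 20 ms and 8 vCPUs; the
        // public cloud then serves only as overflow (the "capacitor").
        var chosen = SelectPool(pools, maxLatencyMs: 20, cpusNeeded: 8);
        Console.WriteLine(chosen?.Name ?? "no pool satisfies the latency bound");
    }
}
```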

In ACCORDION, ORamaVR is responsible for the collaborative VR use case, supported by cloud/edge network and computing resources.

ACCORDION Consortium Members

VIPGPU

The VipGPU project aims at developing new hardware and software technology to efficiently support cutting-edge application scenarios with the potential for significant research, business, and financial gains: (a) a computer vision application for mobile robotics, and (b) a virtual reality application for medical training in surgical procedures. The project will deliver an FPGA prototype based on the multicore, heterogeneous GPUs of Think Silicon, optimized for the two use cases.

More specifically, we have the following goals:

Enhanced Low-Power GPUs and Hardware Accelerators. The first objective of the project is to develop the FPGA prototype, consisting of multiple very-low-power GPUs and supporting the OpenCL 2.0 and Vulkan programming models. Customized low-level libraries will also be created that will run on the Think Silicon GPU and be optimized for its architecture. The FPGA prototype will include hardware accelerators for the most performance-critical functions of the two applications. Specialized accelerators can offer the greatest possible performance, since they are precisely tailored to the algorithm requirements. These accelerators will be integrated into Think Silicon’s GPUs.

Machine Vision in Mobile Robotic Systems. Mobile robotic systems are used, inter alia, for educational purposes or for supporting older people. The first application of the project will accurately determine the position of a robot and, at the same time, map its surroundings through visual processing. Computing this information is a very important process in an autonomous robotic system, as it feeds a series of other processes, such as gait design, perception and understanding of the surroundings, routing, etc. Highly accurate algorithms are based on computationally intensive processes, which prevents them from being applied to portable devices and small/medium-size robotic systems. To limit these power and computational dependencies, researchers very often resort to compromises that reduce the efficiency and robustness of the proposed algorithms. In this project, we aim to develop a positioning system for embedded systems so as:

  1. to determine the exact position of a robot in real time conditions,
  2. to enable dynamic image processing in poor lighting conditions and changing scenes,
  3. to create a low-power processing subsystem that will not use the processing power of the robot.

Virtual Reality. The second application is to create an innovative environment for building new-generation training games that take full advantage of new real-time virtual reality and motion detection technologies. Recognizing the current state of technology and market needs, the partners will expand their existing technology and know-how with new algorithms that will allow:

  1. simulation, in a virtual reality environment, of an application for orthopedic surgeon training;
  2. stereoscopic display in a new portable personal projection system, based on the new very-low-power GPUs, that will not be connected to a personal computer; and
  3. the creation of educational games for medical surgeon training in a simplified way.

The expected results will lead to the creation of an ideal system for the visualization of medical simulation and experiential education in processes and events.

In VipGPU, ORamaVR is developing a custom mobile VR prototype headset, optimized for low-energy GPUs.

Funded by EYDE-ETAK (GR)

VIPGPU Consortium Members

STARS PCP

The proposed solution for STARS PCP aims to provide an innovative and sustainable tool for patients scheduled for surgery, with the aim of reducing the patient’s stress and anxiety during the complete care path. The solution will support patient-centred perioperative education, stress management, self-empowerment, and effective use of health care resources. It will be dynamically adapted according to patient preferences, needs, and medical history, and will incorporate intelligent alerts, recommendations, motivation, and other personalized features. Clinically validated information will be provided to help patients stay informed about their condition and upcoming surgery. Additional features will address effective and efficient patient-physician communication in a manner that promotes self-management and patient empowerment.

The main characteristics of the proposed solution are the following:

  • Provides innovative functionalities for patients scheduled for surgery, with the aim of reducing stress and anxiety as well as improving their health condition during the complete care path.
  • Combines a group of services into one platform that can be adapted to provide personalized stress relief and perioperative education based on individual patient profiles.
  • Facilitates effective interactions between patients and healthcare professionals through user-friendly and intelligent intercommunication technologies, placing emphasis on timely patient outcome reports about perioperative concerns. Patients will be assisted in returning to normal daily life after surgery.
  • Provides perioperative education and guidance through clinically validated, personalized material relevant to the planned surgery.
  • Increases effectiveness of adherence to treatment
  • Increases participation of the patient in the decision-making process
  • Offers psychological and emotional assessment and encouragement tools
  • Promotes health-related behaviours through personalized recommendations
  • Involves peers and carers in empowering the patient
  • Facilitates effective lifestyle management including diet and activity management
  • Provides multichannel anxiety and stress relief personalized content
  • Facilitates the collection of clinical data for research purposes
  • Is designed on a cost-effective structure and a sustainable business model
  • Functions in each of the buyers’ languages and is adaptable to support other EU languages.
  • Offers an integrated platform that is simple and usable.
  • Provides advanced security, ethical and data management processes at all levels of technology use and data analysis.
  • Offers a framework for sustainability of technology use and large-scale deployment in the medium and long-term taking into consideration amortisation and maintenance costs to be low and not a hindrance to the deployment of services.
  • Uses state-of-the-art technologies. All the proposed subsystems are adaptable to end-user needs and their social and cultural profiles, and are fully compatible with current developments in the European eHealth and mobile health space.
  • Proposes a coherent project management plan to cope with the iterative design process that will be required throughout the three project phases.
  • Is implemented by a highly experienced consortium, having multi-year presence in research and development of new technologies in personalized, predictive and preventive medicine, coaching and virtual reality.

The solution will be partially based on the continuation of R&D activities previously funded by the EU.

Project Site: https://stars-pcp.eu/

STARS PCP Consortium Members

VRADA

An extensive body of research on the benefits of exercise supports that exercise plays a vital role in health, physical functioning, and quality of life, and in the prevention and rehabilitation of many diseases such as dementia. People suffering from dementia number 46.8 million worldwide, while 7.7 million new cases of dementia are recorded each year. In Greece, patients suffering from one category of dementia, Alzheimer’s disease, are estimated at around 200,000 and are projected to reach 500,000 by 2050. Inactivity alone constitutes the cause of 3.8% of all dementia cases worldwide. Recently, exercise has been recognized to benefit these patient populations, both as a mechanism for improving their health and preventing dementia and as a mechanism for improving their cognitive functioning and memory. The difficult part, however, is for these patients to recognize the benefits of exercise. These benefits become apparent when someone trains in an enjoyable and attractive environment.

This proposal aims to develop an attractive, innovative, and self-regulating exercise program for the improvement of health, taking place in a virtual environment coupled with an exercise apparatus. For this purpose, we require a multidisciplinary approach based on the most recent findings of psychology, physiology, and virtual reality technology. Specifically, from the field of psychology, this virtual reality application will include self-regulation strategies such as goal setting and self-talk. Additionally, the model will give participants the ability to self-regulate the intensity and duration of their exercise in accordance with a number of exercise physiology protocols that can be highly personalized for each participant. These exercise protocols are regulated through a number of factors which will be displayed inside the virtual environment in real time: during exercise, the patient will be informed of the distance covered, the speed, the intensity, the heart rate, the subjective feeling of fatigue, as well as personal levels of satisfaction and enjoyment. The model will also provide participants with visual or auditory cues and directions targeting a number of goals; the participant will be able to choose which ones to follow and which ones to ignore according to what is most beneficial for him or her at any given moment. The user will thus be able to listen to directions at specific time intervals or repeat them aloud, aided by cues given by the program. Cognitive exercises for the improvement of memory will also be provided through the system.

This innovative, self-regulating virtual reality program is designed exclusively for patients suffering from all types of dementia, including Alzheimer’s. The program’s ultimate goal is to exercise these patients while at the same time improving their memory and cognitive functions. At the end of the exercise program, the patient will receive immediate information on the amount of exercise he or she did, the time needed to complete the cognitive memory exercises, etc. The cooperating teams will initially develop the virtual environment and integrate it with exercise bikes suitable for elderly individuals. The virtual environment will be evaluated at all stages of its development by performing test trials on populations suffering from various levels of dementia; these will take place at the gyms of the Hellenic Association of Alzheimer’s Disease & Related Disorders. When the application is completed, the effectiveness and appeal of the system will be tested using experimental methods and neuropsychological, psychomotor, and neuroimaging evaluations. With the completion of the project, an innovative tool and highly personalized exercise system will be given to the medical community.

In VRADA, ORamaVR is implementing the virtual environment platform, capable of synchronizing and merging the movement of an exercise bicycle into VR input (a minimal sketch of this mapping follows).
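
As a rough illustration of that bike-to-VR mapping, here is a minimal sketch converting pedal cadence into forward motion and on-screen metrics. It is illustrative only, not VRADA’s implementation; the names (BikeSample, CadenceToSpeed) and the drivetrain constants are hypothetical.

```csharp
using System;

// Hypothetical sensor reading from the exercise bike.
struct BikeSample
{
    public double CadenceRpm;   // pedal revolutions per minute
    public double HeartRateBpm; // from a chest strap or wrist sensor
}

static class BikeToVr
{
    // Hypothetical drivetrain constants for converting cadence to speed.
    const double WheelCircumferenceM = 2.1;
    const double GearRatio = 2.5;

    // Convert pedal cadence into a forward speed (m/s) for the avatar.
    static double CadenceToSpeed(double cadenceRpm) =>
        cadenceRpm / 60.0 * GearRatio * WheelCircumferenceM;

    static void Main()
    {
        var sample = new BikeSample { CadenceRpm = 60, HeartRateBpm = 110 };

        double speed = CadenceToSpeed(sample.CadenceRpm);
        double distancePerFrame = speed * (1.0 / 90.0); // per 90 Hz VR frame

        // Real-time metrics the patient sees inside the virtual environment.
        Console.WriteLine($"speed = {speed:F2} m/s, " +
                          $"advance/frame = {distancePerFrame * 100:F2} cm, " +
                          $"heart rate = {sample.HeartRateBpm} bpm");
    }
}
```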

Funded by EYDE-ETAK (GR)

Project Site: https://vrada.weebly.com

VRADA Consortium Members
