Project Coordinator (EU): Open AR Cloud Europe gUG
Country of the EU Coordinator: Germany
Organisation Type: Non-profit Organisation
Project participants:
EU members (Open AR Cloud Europe):
- Alina Kadlubsky: Managing Director of Open AR Cloud Europe, Director of Communications, and Lead of the Accessibility and Safety Working Group at OARC. In this role, Alina has led OARC's mission to drive the development of open and interoperable technology, data, and standards that connect the real and digital worlds for the benefit of all.
- Gábor Sörös: research scientist at Nokia Bell Labs. Previously, he was a researcher at ETH Zurich, interned with Qualcomm's Vuforia team, and was involved in several successful startups in mobile computer vision and augmented reality. Gábor holds an MSc in Electrical Engineering from TU Budapest and a PhD in Computer Science from ETH Zurich.
- Jonathan Tiedtke: CTO at 3D Interactive, where he has overseen the technical architecture and development of augmented reality applications and systems, as well as the implementation of web standards and frameworks for XR, since 2018. He has previously worked with several key technologies for delivering XR experiences via the web, including remote rendering services for streaming 3D content to mobile clients. He was also a co-founder of the first VR arcade in the Nordic region, where he organized XR hackathons and VR events.
- Mikael Spuhl: founder and CEO of 3D Interactive Sthlm, a spatial computing company based in Stockholm, Sweden. Since 2011, he has promoted, sold, and led hundreds of AR/VR projects in Sweden across a wide range of industries and technologies. He has also served on the board of HVE programmes within AR/VR.
- Pär Nordenhjälm Linde: Unity3D developer focusing on 3D content creation for AR/VR. He holds a bachelor’s degree in computer science and a background in graphical communication. Pär has been developing AR & VR applications at 3D Interactive Sthlm since 2012.
- John Nilsson: an experienced 3D developer with deep knowledge of the Unity game engine; he has developed VR and AR experiences since 2015.
- Christine Perey: industry analyst and independent researcher focusing solely on AR since 2006. She serves on numerous boards, is a member of many standards groups, and co-chairs the IEEE SA ARLEM WG and the OGC GeoPose SWG. She is a work item leader in the ETSI Industry Standards Group on AR Framework. She is the founder of the AR for Enterprise Alliance (AREA), chairs its Research Committee, and leads the AREA Interoperability and Standards Program. She co-founded the Open AR Cloud (OARC) association in 2018 and serves on its governing board. She currently leads the research coordination initiative of the OARC.
US members (WINLAB, Rutgers University):
- Ivan Seskar: Chief Technologist at WINLAB, Rutgers University, responsible for experimental systems and prototyping projects. He is currently the program director for the COSMOS project responsible for the New York City NSF PAWR deployment, the PI for the NSF GENI Wireless project, which resulted in campus deployments of LTE/WiMAX base stations at several US universities, and the PI for the NSF CloudLab deployment at Rutgers. He has also been the co-PI and project manager for all three phases of the NSF-supported ORBIT mid-scale testbed project at WINLAB, successfully leading technology development and operations since the testbed was released as a community resource in 2005, for which the team received the 2008 NSF Alexander Schwarzkopf Prize for Technological Innovation. Ivan is a co-chair of the IEEE Future Networks Testbed Working Group, a Senior Member of the IEEE, a member of the ACM, and the co-founder and CTO of Upside Wireless Inc.
- Bo Han: Associate Professor in the Department of Computer Science at George Mason University. His research interests are in the areas of networked systems, mobile computing, and wireless networking. His current research focuses on immersive video streaming; augmented, virtual, and mixed reality; and the Internet of Things. He enjoys building practical systems by leveraging innovations in machine learning, multimedia, computer vision, computer graphics, and human-computer interaction. Before joining George Mason University, he was a Principal Scientist at AT&T Labs Research. With his collaborators, he has built several immersive video streaming systems and mobile augmented reality systems and published research papers in these areas at top international conferences.
State of US partner: New Jersey
Deployment and Evaluation of a 5G Open Spatial Computing Platform in a Dense Urban Environment
Spatial computing is a broad term for a suite of technologies that immerse and engage users in interacting with spatial and temporal digital information that pertains to the physical space in and around the user. It consists of a superset of the technologies required for traditional Augmented Reality. The figure below introduces the concept and the most important terminology.
Spatial computing relies on a number of fundamental technologies such as real-world 3D capture (sparse and dense mapping), precise localization with six degrees of freedom, and discovery and delivery of personalized content to users. Today, these building blocks are available only from, and controlled by, a few major companies (Microsoft, Apple, Google, Meta, etc.) in siloed platforms.
The Open AR Cloud (OARC) association is dedicated to the development of technologies and standards for open and interoperable spatial computing components on which an ecosystem of companies and their services can flourish. The OARC has already designed and implemented prototypes of important building blocks of an Open Spatial Computing Platform (OSCP), shown in the figure below. The OARC has released the components as open-source code (see https://github.com/OpenArCloud), and these are already deployed on an OARC testbed in Bari, Italy. The existing OARC testbed has proven that the OSCP enables discovery of spatial services, localization of users based on images sent from the user's mobile camera, discovery of digital multimedia content attached to physical locations or smart objects and sensors, and display of content to testbed-connected users. Based on personalized user preferences, location, and other context information, the retrieved content can be dynamically filtered.
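The location-based content discovery and filtering described above can be sketched as a radius query over content records referenced by geographic position. The record fields and the `discover` helper below are illustrative assumptions, not the actual OSCP schema:

```python
import math

# Hypothetical content records; in the OSCP, records reference (not store)
# content and carry a geographic anchor. Field names here are illustrative.
records = [
    {"id": "statue-info", "lat": 40.7683, "lon": -73.9632},
    {"id": "bus-schedule", "lat": 40.7415, "lon": -74.1734},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def discover(records, lat, lon, radius_m):
    """Return the content records within radius_m of the user's position."""
    return [rec for rec in records
            if haversine_m(lat, lon, rec["lat"], rec["lon"]) <= radius_m]
```

A client at the first record's position with a 100 m search radius would discover only that record; further filters (user preferences, context) would then be applied on top of this spatial pre-selection.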
With the support of the NGI Atlantic program, the Open AR Cloud (OARC) Europe team is adapting the reference implementation of the OSCP and deploying the components on the COSMOS 5G platform in Manhattan. In addition, the team is creating a new Unity-based reference client application and two representative demo applications. The team is also working on integrating a new, open-source Visual Positioning Service developed by George Mason University.
The COSMOS 5G deployment of the OSCP will allow the team to conduct experiments that deepen understanding of the limitations and opportunities of the OSCP, provide component and network performance metrics, and trigger the development of new software to increase the capabilities and features of the OSCP. The integration with an open-source VPS will permit any provider to offer a vendor-neutral positioning service to any organization seeking to use the OSCP.
Implementation plan:
The common vision of a "real-world metaverse" is the ability to place 3D digital information at any physical place, enhancing our ability to gather information at a glance. A fundamental precondition for persistent AR experiences is capturing and creating a detailed 3D map of the environment. In existing AR cloud solutions, the images and 3D maps of a user's personal environment are centralised by a few industry giants. These providers not only store very intimate data about users' surroundings; all the digital information displayed in AR is also hosted and filtered ("personalized") by them. To address these privacy concerns, this project aims to advance and make publicly available all components of an end-to-end AR cloud solution, the Open Spatial Computing Platform (OSCP).

We start by creating a new reference client in Unity (for interoperability) and then implement two representative AR cloud use cases. We also adapt the existing components to be deployable at new locations, and we demonstrate the complete system in action on the COSMOS 5G testbed in Manhattan. Prior to this project, AR content was added manually to the content database, which was not user friendly. We therefore started the first phase of this project by developing a new client application with the ability to easily place content directly in augmented reality. Our goal is that any client that adheres to the OSCP protocols will be able to place new content, which is immediately discoverable by other clients (independently of their underlying device type, operating system, or AR engine). Related to content placement, we also had to implement user authentication and an open-source file storage solution that anyone can easily set up (note that the OSCP does not store content, only references to it).
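A minimal sketch of what a placed-content record might look like, assuming content is anchored by an OGC GeoPose in the Basic-Quaternion JSON encoding (the GeoPose standard is co-developed by project members). The surrounding wrapper fields (`id`, `ref`) are hypothetical, not the actual OSCP protocol:

```python
import json

def make_content_record(content_id, url, lat, lon, h, quat):
    """Build an illustrative content record: the OSCP stores only a
    reference (URL) to the content, anchored by a GeoPose. Only the
    'geopose' structure follows the OGC GeoPose Basic-Quaternion encoding;
    the wrapper fields are made up for this sketch."""
    x, y, z, w = quat
    return {
        "id": content_id,
        "ref": {"contentType": "model/gltf-binary", "url": url},
        "geopose": {
            "position": {"lat": lat, "lon": lon, "h": h},
            "quaternion": {"x": x, "y": y, "z": z, "w": w},
        },
    }

# A client placing a glTF asset 12 m above the ellipsoid, identity rotation:
record = make_content_record(
    "mural-01", "https://example.org/assets/mural.glb",
    40.7420, -74.1730, 12.0, (0.0, 0.0, 0.0, 1.0))
print(json.dumps(record, indent=2))
```

Because any OSCP-conformant client can produce such a record, content placed from one device type becomes immediately discoverable by clients on other operating systems or AR engines.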
In the second phase, we extend our client application with new features required by the defined use cases. In the first representative use case, a mobile device discovers a sensor node in a laboratory environment and shows its current sensor values floating above the real node. Beyond the discovery, this requires real-time streaming of IoT data from sensors to AR devices. In the second representative use case, our plan is to place an AR art piece at a specific location or to implement a collaborative drawing application. This use case requires real-time sharing of users' poses and interactions with each other. The project also requires creating a 3D map of the selected experiment locations, as well as creating simplified digital twins of the locations to aid the artistic design of AR experiences. The 3D map was built jointly by the US and EU partners with two different technologies. The digital twin is created by the EU team using a series of photos taken by the US team at the Rutgers University lab. We will publish a detailed process description of generating a 3D map to be used as a reference in this project.
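The sensor-to-AR streaming in the first use case can be sketched with an in-process publish/subscribe stand-in for the real transport (e.g. MQTT or WebSockets); the topic name and message shape below are illustrative assumptions:

```python
import json
import time
from collections import defaultdict

class SensorBus:
    """Minimal in-process stand-in for a real streaming transport
    (e.g. an MQTT broker or a WebSocket relay)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        message = json.dumps(payload)          # sensors publish JSON readings
        for callback in self.subscribers[topic]:
            callback(message)

labels = {}  # node id -> text floating above the real node in the AR view

def on_reading(message):
    """AR-client side: update the floating label for the sensor node."""
    reading = json.loads(message)
    labels[reading["node"]] = f'{reading["value"]:.1f} {reading["unit"]}'

bus = SensorBus()
bus.subscribe("sensors/lab", on_reading)
bus.publish("sensors/lab", {"node": "temp-01", "value": 22.46,
                            "unit": "°C", "ts": time.time()})
print(labels["temp-01"])  # → 22.5 °C
```

In the deployed system the publish side runs on the sensor gateway and the subscribe side in the Unity client; the discovery step (finding which node the camera is looking at) happens through the OSCP before the client subscribes.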
- Enhanced EU-US cooperation in Next Generation Internet, including policy cooperation.
- Reinforced collaboration and increased synergies between the Next Generation Internet and the Tomorrow's Internet programmes.
- Developing interoperable solutions and joint demonstrators, contributions to standards.
- An EU-US ecosystem of top researchers, hi-tech start-ups / SMEs and Internet-related communities collaborating on the evolution of the Internet.
- Installation and configuration of all OSCP components on the US testbed for use by the research team on the COSMOS network
- Heterogeneous urban environment of campus scale captured in 3D for testing with AR experiences and ready for development of future vision-based localization services
- Measurement of AR performance over the wireless network within expected performance parameters
- Two AR applications implemented on top of the Open Spatial Computing Platform
- Discovery and filtering of AR assets for experiences using Spatial Content Discovery
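The network-measurement outcome above typically reduces to percentile statistics over round-trip-time samples (e.g. localization request latency). A minimal nearest-rank sketch over made-up latency values:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Illustrative round-trip times in milliseconds, not measured project data:
rtt_ms = [38, 41, 45, 44, 39, 120, 42, 40, 43, 41]

p50 = percentile(rtt_ms, 50)   # typical latency
p95 = percentile(rtt_ms, 95)   # tail latency, dominated by outliers
print(p50, p95)  # → 41 120
```

Reporting tail percentiles alongside the median matters for AR, where a single slow localization round trip is directly visible to the user as misplaced content.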
Future Plan:
The innovative elements introduced in this project include the seamless detection of spatial context and, through the use of the GeoPose standard, the automatic provision of the best 3D map and the most relevant content as the user changes location along a trajectory in a target-rich environment including buildings and open spaces. The project may further study aspects of identity, privacy, and personal data protection. Efficient and continuous 3D world capture, mapping, and query technologies may, due to their invasive nature, introduce risks to private and public institutions and to the rights of users and entities. Control over sensitive/private data, privacy, and security in a diverse range of contexts, such as personal, on campus, in public, on-premises, in restricted facilities, and under emergency settings, is high on the OARC's roadmap for further developments.
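Selecting the best 3D map as the user moves along a trajectory can be sketched as a coverage query: among the maps whose coverage area contains the user's position, prefer the most detailed one. The map registry and the selection rule below are illustrative assumptions, not the OSCP's actual logic:

```python
import math

# Hypothetical map registry: each map covers a circular area (center + radius);
# a smaller radius stands in for a denser, more detailed map.
maps = [
    {"id": "campus-coarse", "lat": 40.5220, "lon": -74.4630, "radius_m": 800},
    {"id": "lab-fine", "lat": 40.5218, "lon": -74.4622, "radius_m": 60},
]

def approx_dist_m(lat1, lon1, lat2, lon2):
    """Equirectangular distance approximation, adequate at campus scale."""
    kx = 111320.0 * math.cos(math.radians((lat1 + lat2) / 2))  # m per deg lon
    ky = 110540.0                                              # m per deg lat
    return math.hypot((lon2 - lon1) * kx, (lat2 - lat1) * ky)

def best_map(maps, lat, lon):
    """Among the maps covering the position, pick the most detailed one."""
    covering = [m for m in maps
                if approx_dist_m(lat, lon, m["lat"], m["lon"]) <= m["radius_m"]]
    return min(covering, key=lambda m: m["radius_m"], default=None)

# Two points along a trajectory: inside the lab, then out on the campus.
for lat, lon in [(40.5218, -74.4622), (40.5250, -74.4680)]:
    m = best_map(maps, lat, lon)
    print(m["id"] if m else None)
```

As the user walks out of the lab's fine-grained coverage, the selection falls back to the coarser campus map; in the real system the user's position comes from a GeoPose reported by the client.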
The 5G testbed will gain a service that enables mobile camera localization and real-time location-aware content discovery on top of the existing platforms. The OSCP also serves as the spatial context layer for location-based and personalized information retrieval in smart environments. These features can enable further sensing research and (in the case of public testbeds) may also serve as an incentive service for a large number of users to join gamified experiments on the testbed.
As the backbone of location-based applications for students and campus visitors, the deployed platform can further serve (even after the project is finished) as a learning and test environment. We envision location-anchored and community-created digital media content, interactive visual navigation, restaurant discovery, lecture discovery, ride-hailing, and other applications, to name only a few.
This project has successfully applied for further funding. Website of project Aurora: https://www.auroraviewer.org/auroraviewer
- Open-source Visual Positioning Service: https://youtu.be/ZKaBXdxD0Fs
- Aurora View, the Unity client app: https://youtu.be/qUR3zcTDuXA