
The NGIAtlantic.eu project ended in February 2023. A new platform is coming soon.


Project Coordinator (EU) :

Technological University Dublin

Country of the EU Coordinator :

Ireland

Organisation Type :

Academia

Project participants :

Team members of TUD - UCD (EU)

Dr Sachin Sharma, Role: Project Coordinator

Dr Avishek Nag, Role: ML Lead

Snehal Dey, Role: Data Scientist

Tiarnan Rush, Role: WP2, WP3 and WP4

Dr Catherine Mulwa, Role: WP3 and WP4

Aviejay Paul, Role: WP3 and WP4


Team members of UNIVERSITY OF NEBRASKA (US)

Prof Byrav Ramamurthy, Role: WP1 and WP4 Lead

Boyang Hu, Role: WP 4

Shideh Yavary Mehr, Role: WP3 and WP4

Sai Suman, Role: WP3 and WP4

State of US partner :

Nebraska

Starting date :

ATLANTIC-eVISION: Cross-Atlantic Experimental Validation of Intelligent SDN-controlled IoT Networks


Experiment description

The main vision of this project is to pioneer future Internet experiments that escape simulations. Simulation has long been a valuable tool for testing and analysing the behaviour of new protocols and ideas in computer networking, because it mimics the target environment as closely as possible. The simulation environment may fail to match the real system, however, if some parameters are configured improperly or are simply unknown. Test-beds tackle this problem by allowing protocols and initial prototypes to be tested in a non-idealistic environment that closely replicates the real system, even when some parameters are unknown. Moving from simulation to testbeds therefore adds realism to the evaluation: a simulation can be seen as an ideal environment in which a protocol/system behaves well, whereas when unexpected events and dynamic environmental constraints are induced, as in a testbed built from real physical systems, the behaviour of the protocol/system can be evaluated with higher confidence [1].

Our second vision is to extend a pervasive network-management protocol like OpenFlow to wireless ad-hoc networks, paving the path for scalable, low-cost, self-configurable, and programmable future IoT networks supporting a plethora of use cases. The technologies and approaches proposed in this project can greatly increase the speed at which IoT service providers evolve their infrastructure with their market. In fact, the impact of this approach for wireless ad-hoc IoT networking can easily be extended to other IoT device communication platforms (e.g., MQTT) for a comprehensive solution.
Furthermore, we aim to create exemplary knowledge by performing experimentation on one of the world’s largest and most advanced wireless testbeds, located over two continents. By running experimentation remotely across the Atlantic, the project will “stress-test” the performance of novel algorithms and achieve the project’s KPIs in one of the most challenging scenarios in terms of round-trip latency and network heterogeneity. Additionally, developing and experimenting with novel algorithms on such an advanced cross-Atlantic testbed will be an invaluable and truly unique experience for the postdocs and research assistants working on the project, with a strong impact on their future research careers. In general, the knowledge developed by this project will feed into new curricula and research spokes in the higher-education sector and help train skilled personnel in the combined area of AI-aided IoT network management that the project addresses.

Implementation plan :

There are four work packages (WP) in this project:

WP1: Project Management (Leaders: Dr Sharma and Prof. Ramamurthy): This deals with the day-to-day management of the project, meeting contractual obligations and filing cost statements. It will also address the dissemination of project outcomes through publications and workshops at IEEE/ACM conferences, social media, and personal websites. This WP will have one initial deliverable due at the end of the first month; the rest will be included in the final deliverable. Initially, this will result in deliverable 1 (D1), which will provide detailed implementation and experimentation plans. All dissemination activities will then be reported in the second (D2) and final (D3) deliverables. As part of this work package, the project team has decided to meet bi-weekly on Fridays via Zoom to discuss updates and plan the tasks ahead as the project progresses. The EU and US teams already met twice in the first month to discuss the next steps of the project. We have also created a shared folder on Google Drive to share all project-related information among the partners.

WP2: Testbeds Preparation (Leader: Dr Nag): Contains two tasks, as depicted in Figure 2. Initially, TU Dublin and UCD will prepare the testbeds on Fed4Fire, and the University of Nebraska will prepare the COSMOS/POWDER testbeds. Later, all the teams will work together to integrate multiple testbed experiments. There will be one initial deliverable due at the end of the second month. Currently, AERPAW is not available for experimentation; its operators aim to declare general availability with the initial platform features in November 2021 [8]. The team will therefore restrict US experimentation to COSMOS and POWDER.

The resources requested from the testbeds for the emulations include:

  • 10 wireless nodes from the W-iLab1.t, W-iLab2.t and CityLab testbeds of Fed4Fire in the EU
  • 10 nodes from the Virtual Wall testbed of Fed4Fire for cloud-server functionality
  • 10 wireless nodes at the POWDER testbed in the US
  • 4 cloud-server nodes with GPU functionality from COSMOS
  • GPULAB at Fed4Fire (a maximum of 10 simultaneous jobs at a time)

As the above testbeds are public, it would be difficult to request a large number of nodes at once. Therefore, initially, the team will use a small number of nodes and additional nodes will be reserved or released based on availability and requirement.

The team will connect all sensor nodes in an ad-hoc fashion (Figure 1), install an Ubuntu OS and other required software (e.g., Open vSwitch), and run the emulations. Gateways for accessing the Internet will be configured using access point (AP) nodes of the testbeds. The controller and the IoT application will run on the cloud servers of COSMOS and Fed4Fire respectively. These servers would access the sensor networks through the Internet (Figure 1). Furthermore, the controller and the IoT application will use GPULAB clusters of Fed4Fire and/or COSMOS to run ML algorithms and process large real-time data (power consumption, battery usage, buffer capacity, etc.) collected from the sensor nodes. The Command Line Interface (CLI) of the GPULAB will be used to execute commands from the controller and the IoT application.
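To make the data-collection step concrete, here is a minimal pure-Python sketch of aggregating per-node telemetry before it is handed to an ML job. The node names and metric fields are hypothetical placeholders, not the project's actual schema (which gathers such statistics via OpenFlow/OVS-DB):

```python
from collections import defaultdict

# Hypothetical telemetry records as (node, metric, value) tuples, standing in
# for statistics collected from sensor nodes via OpenFlow/OVS-DB counters.
readings = [
    ("sensor-1", "power_mw", 120.0),
    ("sensor-1", "buffer_pct", 35.0),
    ("sensor-2", "power_mw", 98.5),
    ("sensor-2", "buffer_pct", 72.0),
]

def aggregate(records):
    """Group raw readings by node into per-node feature dicts for an ML job."""
    features = defaultdict(dict)
    for node, metric, value in records:
        features[node][metric] = value
    return dict(features)

features = aggregate(readings)
```

In a real deployment the `readings` list would be replaced by a stream of live counters, but the grouping step stays the same.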

Sensors that gather and send data such as temperature and humidity to an IoT application will be used in our research. Different sensor network topologies (tree, mesh, ring, etc.) will be created so that only a few sensor nodes have Internet connectivity.
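The topologies mentioned above can be sketched as plain adjacency lists. The following is an illustrative stand-in (node indices and the gateway set are hypothetical), not the actual testbed provisioning code:

```python
def ring_topology(n):
    """Adjacency list for a ring of n sensor nodes."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh_topology(n):
    """Full mesh: every node is adjacent to every other node."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

def tree_topology(n):
    """Binary tree rooted at node 0."""
    adj = {i: [] for i in range(n)}
    for i in range(1, n):
        parent = (i - 1) // 2
        adj[parent].append(i)
        adj[i].append(parent)
    return adj

# Hypothetical: only node 0 has Internet access, via a testbed AP node.
gateways = {0}
ring = ring_topology(6)
```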

WP3: Prototyping (Leader: Dr Sharma). This Work Package contains six tasks:

  • Task 3.1 - Automatic Configuration Method
  • Task 3.2 - Data Collection
  • Task 3.3 - ML-based path selection
  • Task 3.4 - ML with GPULab/Hadoop clusters
  • Task 3.5 - IoT Application Emulation
  • Task 3.6 - Failure Recovery
     

WP4: Validation, Testing and Result Analysis:

This work package will handle the integration of the components developed in the previous work package, and will perform various tests to validate the reliability, interoperability and integration ability of the developed technology enablers. It will then lead the use-case implementation and tests, including end-user validation and evaluation. It contains the following tasks: (1) validation of each functionality built in WP3, (2) scalability testing of each functionality, (3) integration testing across all functionalities, and (4) scalability testing with more nodes.

Impacts :

Our project will address the impacts in relation to the NGI initiative, namely:

Impact 1: Enhanced EU – US cooperation in Next Generation Internet, including policy cooperation

Through the experiments performed so far, this project has enabled researchers at the University of Nebraska-Lincoln (UNL), USA, to set up and operate experiments on the Fed4Fire testbeds in the EU, i.e., the CityLab, W-iLab1.t and W-iLab2.t testbeds. Similarly, the EU team, while developing and performing their experiments on the US testbeds, developed comprehensive knowledge of the minute operational details of the POWDER and COSMOS testbeds. The US team has provided expertise in SDN control and testbed experimentation, enabling the EU team to perform testbed experiments.

Impact 2: Reinforced collaboration and increased synergies between the Next Generation Internet and the Tomorrow's Internet programmes.

This project establishes collaboration between three PIs: Dr Sachin Sharma and Dr Avishek Nag from the EU and Prof Byrav Ramamurthy from the US. Dr Sharma has worked on several EU and Flemish projects (FP7-SPARC, FP7-OFELIA, FP7-CityFlow, FP7-UNIFY, FP7-CleanSky and MECANO), where he extensively used the Virtual Wall testbed within Fed4Fire. He is also an associate investigator at the CONNECT Centre. Dr Nag, on the other hand, worked on the FP7-DISCUS project with many EU collaborators. He was also a researcher at CONNECT, Ireland’s biggest telecom networks research centre, where he worked on developing an LTE testbed with YouTube. CONNECT currently houses one of the biggest 5G testbeds, with links to Fed4Fire and the US testbed COSMOS. Dr Nag is also an alumnus of the same research group at UC Davis (Prof. Biswanath Mukherjee’s networks research lab) as Prof Ramamurthy, the US PI of this project. From these enriched networks of the three PIs, we have identified a few teams who have already worked on NGI experimental projects from Open Call 2, e.g., a team between the CONNECT research centre, Politecnico di Milano, and the University of Arizona. We have also identified some teams from within our close networks working on NGI projects from the ongoing Open Call 3, involving UPC, Spain and UC Davis. We have reached out to some of these teams and plan to carry these discussions forward once our results are disseminated and our intellectual property is protected. Our plan is to first identify some future network use cases, such as ML-assisted 6G networks and quantum communication networks. Then, based on our and the potential collaborators’ expertise developed from the NGI projects or otherwise, we will prepare a joint action plan to reinforce collaborations and foster increased synergies between the Next Generation Internet and future Internet programmes.

Impact 3: Developing interoperable solutions and joint demonstrators, contributions to standards.

The very philosophy of this project is based on interoperability, as it establishes the feasibility of OpenFlow and SDN for unified control of wireless testbeds spanning two continents. OpenFlow and SDN were originally suited to wide-area wired networks, and we emulated them for the first time on practical-scale wireless networks. Interoperating these different technologies and protocols opens scope for creating new standards: SDN/OpenFlow has to be redefined to support machine-learning algorithms and policies as well as the automatic discovery of wireless devices. In our implementation of SDN and ML solutions for IoT networks, we used only standard protocols. For example, the OpenFlow, OVS-DB and OLSR protocols are used to implement the automatic configuration of SDN in IoT networks, and all data is collected using standard protocols such as OpenFlow and OVS-DB. ML decisions are incorporated through the northbound API of SDN, which the ONF (Open Networking Foundation) is currently putting effort into standardising.

Our contribution also lies in applying open-source machine-learning libraries, such as TensorFlow and Keras, in our testbeds to run the RL algorithm. Keras is a deep-learning API written in Python that runs on top of the machine-learning platform TensorFlow and is one of the standard APIs for RL.

Impact 4: An EU - US ecosystem of top researchers, hi-tech start-ups / SMEs and Internet-related communities collaborating on the evolution of the Internet

The PIs of this project, through their combined professional networks, have access to many tech start-ups and SMEs. For example, Dr Avishek Nag and Prof Byrav Ramamurthy’s PhD advisor, Prof Biswanath Mukherjee, is the founder of Ennetix Inc., a Northern California start-up that specialises in AIOps and network analytics. Our automatic IoT device discovery methodologies and machine-learning-based optimum path discovery in IoT networks can help them extend their solutions beyond enterprise wide-area networks. Further, Dr Sachin Sharma also works as an associate investigator at the CONNECT Centre, where he collaborates with companies such as Intel and Bosch on topics related to this project. Moreover, we can collaborate with them in future NGI calls to extend our proposal to incorporate security and advanced, accurate data-collection techniques from a live operational network. While the results reported in this deliverable are early-stage, they provide enough promise to develop a successful prototype for SDN- and ML-enabled automatic discovery of IoT nodes, with applications in healthcare, sensing, and weather monitoring, and can stimulate the interest of several hi-tech start-ups/SMEs and Internet-related communities. In this context, it is worth mentioning that towards the last few months of the project we implemented some use cases on the AWS cloud, as AWS agreed to provide resources to our US partner, the University of Nebraska-Lincoln. The findings of our experiments on AWS are not reported in this deliverable, but they will be part of our future exploitation and dissemination, as mentioned in Section 6. This already evidences the impact of our project in enhancing an ecosystem of top-level collaborations.

Results :

This project compared a number of EU and US testbeds in terms of architecture, available resources, IoT capabilities, collectable data, limitations, Software-Defined Networking (SDN) capabilities, machine-learning capabilities, and practical experimental results. Benchmark and failure-recovery experiments were performed and their results are presented, along with the issues encountered on each testbed. As the results show, different testbeds offer different resources for wireless experiments, and because nodes differ in CPU, memory, and available bandwidth, each testbed experiment yielded different results. Furthermore, since node selection depends heavily on availability at experiment time, results vary with the type of node selected. Testbed experiments nonetheless provide a realistic environment for experimentation, which makes these nodes a sensible experimentation platform.

Apart from that, we also achieved the objectives of this NGIAtlantic project namely:

1. We experimentally demonstrated the automatic configuration of SDN/OpenFlow in wireless ad-hoc networks. This was achieved by implementing an automatic configuration method on the testbeds. The efficiency of the method was evaluated by measuring the automatic discovery time and the data-plane latency.

2. We achieved the best data-plane latency for an e-healthcare application. This was done by applying machine learning to find the best path from an IoT device to an IoT application which meets the latency and bandwidth requirements.

3. We recovered from failures occurring in different network topologies (ring, grid and mesh). This was achieved by implementing a restoration scheme and measuring the failure-recovery time after a failure was introduced into the network. Most results in the literature are based on simulations; the results gathered from our emulations are unique, as they were measured in a set-up emulated on real testbeds.

4. We tested inter-testbed connectivity by performing experiments on the EU and US testbeds. Inter-testbed connectivity was achieved by running different modules (IoT applications and sensor nodes) on different testbeds and using the public Internet for the connections. For example, we ran the controller on the COSMOS testbed and an IoT application on the Virtual Wall testbed. Further, wireless IoT scenarios were created on the W-iLab.t, CityLab and POWDER testbeds.

5. We tested our secure IoT application using the GPULAB testbed. Further, our e-healthcare application was tested using a setup created on POWDER, COSMOS, and AWS servers.
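As an illustration of the path-selection idea in objective 2, the following is a much-simplified, pure-Python epsilon-greedy bandit sketch, not the project's actual ML implementation. The path names, mean latencies, and noise model are all hypothetical, and in practice the candidate paths would first be filtered by the bandwidth requirement:

```python
import random

# Candidate paths from an IoT device to the IoT application. The names and
# mean latencies (ms) are hypothetical, not measured testbed values.
MEAN_LATENCY_MS = {"via-ap1": 40.0, "via-ap2": 25.0, "via-relay": 60.0}

_probe_rng = random.Random(1)

def observe(path):
    """Simulated latency probe: true mean plus measurement noise."""
    return MEAN_LATENCY_MS[path] + _probe_rng.uniform(-5.0, 5.0)

def best_path(observe, paths, episodes=200, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn average latency per path, return lowest."""
    rng = random.Random(seed)
    est = {p: 0.0 for p in paths}   # 0 is optimistic (latencies are positive),
    count = {p: 0 for p in paths}   # so every path gets explored early on
    for _ in range(episodes):
        if rng.random() < eps:
            p = rng.choice(paths)       # explore a random path
        else:
            p = min(est, key=est.get)   # exploit lowest estimated latency
        count[p] += 1
        est[p] += (observe(p) - est[p]) / count[p]  # incremental mean
    return min(est, key=est.get)

chosen = best_path(observe, list(MEAN_LATENCY_MS))
```

In the real experiments `observe` would be an actual latency measurement over the testbed, and the learner would run at the controller or on the GPULAB clusters.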

Future Plan :

Due to resource limitations (as discussed in Section 4.2), we could not test our IoT application in the inter-testbed environment in which the W-iLab1.t, W-iLab2.t and POWDER testbeds are connected to each other, as discussed in the previous section. In the future, we will perform this experiment when the POWDER testbed is more developed.

Going forward, we will use the findings of this project to implement more advanced edge-computing use cases and run more robust machine learning algorithms (e.g., considering the tradeoff between algorithm accuracy vs energy efficiency) on the EU-US inter-testbed topologies. Furthermore, we would like to explore the security and trust aspect of connecting these software-controlled nodes by using the principles of Blockchain.

Key results

Six papers published or accepted:

  1. S. Sharma, S. Urumkar, G. Fontanesi, B. Ramamurthy, and A. Nag, "Future Wireless Networking Experiments Escaping Simulations", Future Internet. 2022; 14(4):120. https://doi.org/10.3390/fi14040120
  2. V. Tomer and S. Sharma, "Detecting IoT Attacks Using an Ensemble Machine Learning Model", Future Internet. 2022; 14(4):102. https://doi.org/10.3390/fi14040102
  3. S. Sharma, A. Nag and B. Ramamurthy, "Cross-Atlantic Experiments on EU-US Test-beds," IEEE Networking Letters, doi: 10.1109/LNET.2022.317771
  4. V. Tomer and S. Sharma, "Experimenting an Edge-Cloud Computing Model on the GPULab Fed4Fire Testbed," 2022 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), 2022, pp. 1-2, doi: 10.1109/LANMAN54755.2022.9820006.
  5. S. Urumkar, G. Fontanesi, A. Nag and S. Sharma, "Demonstrating Configuration of Software Defined Networking in Real Wireless Testbeds," 2022 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), 2022, pp. 1-2, doi: 10.1109/LANMAN54755.2022.9819994.
  6. S. Sharma, S. Urumkar, G. Fontanesi, V. S. Karanam, B. Hu, B. Ramamurthy and A. Nag.

Open source contributions:

  1. Experimenting an Edge-Cloud Computing Model on the GPULab Fed4Fire Testbed, https://github.com/VikasTomar32/LANMAN

  2. GNN+DQN code: https://github.com/GianFont/RL_routhOpt

  3. Controller Automatic Configuration Code: https://bitbucket.org/saish15/olsrd2/src/master/

  4. Client node automatic configuration code: https://bitbucket.org/o2cmf-work/olsrd_client/src/master/

  5. Demonstrating configuration of software defined networking in wireless testbeds:
    https://www.youtube.com/watch?v=kAkrT95tRb4&ab_channel=SaishUrumkar

 

NGI related Topic :

Discovery and identification technologies

Call Reference :

3

The 30-month project NGIatlantic.eu will push the Next Generation Internet a step further by providing cascade funding to EU-based researchers and innovators carrying out Next Generation Internet related experiments in collaboration with US research teams.



