Initial CONFINE design

The following notes are adapted from the CONFINE project deliverable "Initial system software and services of the testbed".

Node Architecture:

Early dilemma:
1. Deploy hardware known to be stable over long periods (mostly with respect to the outdoor conditions faced), at the cost of CPU performance.
2. Deploy hardware with reasonable CPU performance and virtualize the Wi-Fi interfaces using the DLEP protocol work done in the CONFINE project.
The second option was finally chosen, putting the focus on researchers' needs.

Confine Node Software

  • LXC for isolation and virtualization of sliver systems
  • Linux cgroups for ensuring local sliver isolation
  • Linux TC and qdisc tools for controlling sliver network load
  • VLAN tagging and Linux firewall tools to ensure network isolation (see the sketch after this list)
    • Public interface: L4 and up -> ebtables
    • Isolated interface: L3 and up -> VLAN
    • Raw interface: L1 and up -> physically separated, not yet supported
    • Passive interface: capture of real-time traffic -> anonymity concerns, not yet supported
  • Tinc for providing the management overlay network
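
As an illustration of how these tools fit together, here is a minimal, hypothetical sketch of per-sliver network isolation and traffic control. The interface names (eth1, veth-sl01), the VLAN id and the rate values are placeholders, not taken from the CONFINE deliverable.

    # Hypothetical sketch: isolate and rate-limit one sliver's interfaces.
    # Isolated interface: give the sliver its own VLAN on the physical NIC (L3 and up).
    ip link add link eth1 name eth1.101 type vlan id 101
    # Public interface: allow only ARP and IPv4 frames from the sliver's veth (L4 and up).
    ebtables -A FORWARD -i veth-sl01 -p ARP -j ACCEPT
    ebtables -A FORWARD -i veth-sl01 -p ! IPv4 -j DROP
    # Control sliver network load with tc and a token-bucket qdisc.
    tc qdisc add dev veth-sl01 root tbf rate 2mbit burst 32kb latency 400ms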

Development Cycles

First Iteration: “A-Hack”

Implements the node-internal data structures and functions for executing node and experiment management tasks.

Second Iteration: “Bare-Bones”

Reuses the outcome of the first development cycle as backend functions. Once completed, it will contain the full node design and management API.

Node & Sliver Management Functions

  • node_enable/disable: cleanly activate/deactivate the RD's participation in the testbed. Reads node- and testbed-specific configuration from /etc/config/CONFINE and /etc/config/CONFINE-defaults to set up: hostname, SSH public keys, tinc management network, br-local, br-internal, and a dummy testing LXC container
  • sliver_allocate: allocate the resources on an RD necessary to execute an experiment
  • sliver_deploy: set up the sliver environment in an RD (what is the difference between allocate and deploy?)
  • sliver_start/stop/remove: create the VETH interface, install ebtables, tc and qdisc rules, boot the LXC container, set up cgroups, and set up management routes (see the sketch below)
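
A rough sketch of what a sliver_start implementation could look like at the shell level is shown below. The container name, bridge names, limits and route prefix are illustrative assumptions, not the actual Bare-Bones code.

    # Illustrative sliver_start sequence (names and values are placeholders).
    ip link add veth-sl01 type veth peer name veth-sl01i   # create the sliver's VETH pair
    brctl addif br-local veth-sl01                         # attach the host end to the local bridge
    tc qdisc add dev veth-sl01 root tbf rate 1mbit burst 16kb latency 200ms  # per-sliver rate limit
    lxc-start -n sliver01 -d                               # boot the sliver's LXC container
    lxc-cgroup -n sliver01 memory.limit_in_bytes 64M       # cgroup limit for local sliver isolation
    ip route add 10.241.0.0/16 dev br-internal             # management route (placeholder prefix)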

Open Question – OpenFlow

Layer 2 experiments are addressed by supporting the OpenFlow standard and other Software-Defined Networking (SDN) possibilities enabled via the Open vSwitch (OvS) implementation, allowing researchers to register a switch controller. Open vSwitch has therefore been ported to OpenWrt, but further investigation and development effort are needed to clarify the management and integration of this architecture into the CONFINE node software (CNS).
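
As a rough illustration (not part of the deliverable), registering a researcher's controller with Open vSwitch could look like this; the bridge name, attached port and controller address are made up.

    # Hypothetical OvS setup for an L2/SDN experiment.
    ovs-vsctl add-br ovs-exp0                            # create an OvS bridge on the node
    ovs-vsctl add-port ovs-exp0 veth-sl01                # attach a sliver interface to it
    ovs-vsctl set-controller ovs-exp0 tcp:10.0.0.5:6633  # register the researcher's OpenFlow controller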

Note about VCT

The virtual RDs are linked via their local interfaces to a local bridge (vct-local) on the system hosting the VCT environment. Further bridges are instantiated to emulate direct links between nodes (vct-direct-01, vct-direct-02, …).

By connecting the vct-local bridge to a real interface it is possible to test interaction between virtual and real CONFINE nodes (RDs). This way VCT can also assist in managing real and physically deployed nodes and experiments.
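
For illustration, this bridge layout could be reproduced by hand roughly as follows (assuming bridge-utils; the tap and NIC names are placeholders):

    # Sketch of the VCT bridge setup (interface names are illustrative).
    brctl addbr vct-local             # local bridge shared by all virtual RDs
    brctl addbr vct-direct-01         # emulated direct link between two virtual RDs
    brctl addif vct-local vct-rd01l0  # attach a virtual RD's local (tap) interface
    brctl addif vct-local eth1        # optionally bridge to a real NIC to reach physical RDs
    ip link set vct-local up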

OMF Integration

In the CONFINE project, a custom testbed management framework has been developed. However, to allow interoperability with existing tools and to avoid a steep learning curve for researchers, the project has also developed an OMF integration.

nodeDB or CNDB or commonNodeDB

For community wireless networks (CWNs), a node database serves as a central repository of network information. It comprises information about nodes deployed at certain locations, devices installed at these locations, information about internet addresses and, in networks that use explicit link planning, links among devices.
[...]
Historically, each community network has built its own node database and corresponding tools. There has been some effort to come up with a common implementation of a node database, but so far none of these efforts has seen deployment beyond one or two installations.
[...]
To get a better understanding of the common requirements of community wireless networks and to eventually facilitate interoperability, the Community Network Markup Language (CNML) project was founded in 2006. To date we know of an implementation by guifi.net and another, similar XML export by Athens Wireless Metropolitan Network (AWMN).

Common Node Database:

To avoid duplicating data in the community network database and the research database, the common node database implementation was started.

Some possible applications that the nodeDB will enable are: an IP address registry with allocation, a map, link planning, and auto-configuration of devices. A common DB approach will also enable the investigation of experimental features such as social networking functions, federation, and services offered by the network or by its users.

Testbed Management Implementation

The management software is implemented based on Django.


Federation

Horizontal or network federation:

Interconnection of different community networks at the network layer (OSI layer 3), so that a testbed can span several otherwise disconnected networks, i.e. so that testbed components in different networks can be managed by the same entity and can reach each other at the network layer.

Vertical or testbed federation:

Given a set of different, autonomous testbeds (each one managed by its own entity), the usual case is that the resources and infrastructure in any of them are not directly available to experiments running in other testbeds, for administrative reasons (e.g. different authorized users) or technical ones (e.g. incompatible management protocols).

Useful Ideas

  • Community-Lab is a testbed deployed based on the CONFINE testbed software system.
  • Dynamic Link Exchange Protocol (DLEP) is a proposed IETF standard protocol for radio-to-router communication. The intention is to be able to transport information, such as layer 1/2 state, from a radio to a router, e.g. over a standard Ethernet link. Depending on the final feature set, the router might even be able to (re-)configure certain settings of the radio. The general purpose of such protocols is to provide link-layer information to the L3 protocols running on the router.
  • Process control group (Cgroup) is a Linux kernel feature to limit, account and isolate resource usage (CPU, memory, disk I/O, etc.) of process groups. One of the design goals of cgroups was to provide a unified interface to many different use cases, from controlling single processes (like nice) to whole operating system-level virtualization (like OpenVZ, Linux-VServer, LXC).
  • cOntrol and Management Framework (OMF) is a control and management framework for networking testbeds.
  • Community Network Markup Language (CNML) is a project of people working on an open and scalable standard for local mesh networks, also referred to as local clouds. It is the aim of the project to define an ontology which acts as the basis for the set up and operation of mesh clouds.
  • nodeDB or CNDB is a database where all CONFINE devices and nodes are registered and assigned to responsible persons. The nodeDB contains all the backing data on devices, nodes, locations (GPS positions), antenna heights and alignments: essentially anything needed to plan, maintain and monitor the infrastructure.


Virtual Confine Testbed (VCT)

What is VCT

Virtual Confine Testbed (VCT) is an emulator of an actual CONFINE testbed environment. VCT is an easily deployable resource that satisfies several goals simultaneously. For the end-user researcher, VCT is a platform for getting a quick overview of how a CONFINE testbed functions, for becoming familiar with the environment, and for proof-testing experiments destined for the real CONFINE testbed. For the future CONFINE developer, VCT facilitates familiarization with CONFINE research devices (RDs) and their behaviour, since an actual CONFINE OpenWrt image is used for the virtual nodes.

VCT Implementation

VCT uses two levels of virtualization: it creates virtual nodes (RDs) that contain the slivers belonging to experiment slices. The actual RDs use Linux Containers (LXC) to run the slivers, so a second virtualization layer is needed to surround the sliver containers and run the virtual node itself. This is achieved using KVM. The two layers of virtualization are therefore KVM for the virtual nodes and LXC for the sliver containers inside them, as seen in the image below.

[Figure: vct_kvm_lxc, KVM virtual nodes hosting LXC sliver containers]
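
A hedged sketch of how one virtual RD might be booted with KVM and attached to the vct-local bridge is given below; the image file name, memory size and tap interface name are assumptions, and the real VCT scripts may do this differently.

    # Hypothetical invocation: boot one virtual RD from a CONFINE OpenWrt image.
    ip tuntap add dev vct-rd01l0 mode tap            # pre-create the node's local tap interface
    brctl addif vct-local vct-rd01l0                 # plug it into the vct-local bridge
    kvm -m 256 -nographic \
        -drive file=confine-rd-openwrt.img,if=virtio \
        -netdev tap,id=local0,ifname=vct-rd01l0,script=no,downscript=no \
        -device virtio-net-pci,netdev=local0 &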

All the scripts are symbolic links to the main script vct.sh. The scripts are actually implemented as functions in vct.sh, where the main function dispatches to the corresponding function after checking the name of the executable it was invoked as.
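
This symlink-dispatch pattern can be sketched as follows; the function and command names here are invented for illustration, as the real vct.sh defines its own set.

    #!/bin/sh
    # Sketch of the symlink-dispatch pattern used by vct.sh (names are illustrative).
    vct_node_create() { echo "creating virtual node $1"; }
    vct_node_start()  { echo "starting virtual node $1"; }

    main() {
        case "$(basename "$0")" in
            vct_node_create) vct_node_create "$@" ;;
            vct_node_start)  vct_node_start  "$@" ;;
            *) echo "unknown command: $(basename "$0")" >&2; exit 1 ;;
        esac
    }
    main "$@"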