This page tracks the main terms used in the CONFINE project, as presented in the deliverable Initial system software and services of the testbed of the CONFINE project.
Elements of the system
In this section, the elements present in CONFINE testbeds are defined.
NODES AND DEVICES
CONFINE node: A research device connected to a community network through a
wired link called the node’s local network. The node does not run routing protocol
software, so the routing to the rest of the community network is performed by a
community device connected to the local network which is used as a gateway by
the node. In some locations, where accessibility is an issue, a recovery device will
be attached to the node. Once properly deployed, a CONFINE node becomes an
active member device of the community network on which researchers can run
experiments.
Community device: A device in charge of connecting to the community network,
extending it, and acting as the CONFINE node’s link to it. It must therefore
have at least two different interfaces, one to connect with the community network,
and another one to connect with the research device through a local network.
Usually the first interface will be a wireless interface and the second one a wired
one. To be part of the community network, it must run the necessary software
(routing protocols, an OpenWrt distribution, etc.) required by the community
network, which will be location-dependent. Occasionally, the community device will
be a container running on the research device, with the proper interface attached to it.
Research device: A relatively more powerful board with virtualization capabilities
where the experiments will run. It runs the CONFINE distribution with the proper
control software to create and control the different slivers. On the network side it
has a local interface and several optional direct interfaces. The local interface is the
wired interface connected to the community device and possibly other non-CONFINE
devices. The direct interfaces serve the researcher for experimentation. If the
community device has become a container on the research device, the container will
control a direct interface connected to the community network.
Recovery device: A simple device whose purpose is to force the research device
to reboot in case of malfunction.
SERVERS, TESTBEDS AND NETWORKS
CONFINE server: A machine running the CONFINE management software
(CONFINE server). It is responsible for offering users the API to create and
manage their slices, and for ensuring that the involved nodes perform the necessary
operations so that the requested slices are served.
CONFINE testbed: A set of CONFINE nodes managed by the same CONFINE
server (or set of servers containing equivalent configuration data). A single
CONFINE testbed may span different sites (using community network
infrastructure) or even different network islands and community networks (using
CONFINE gateways). Still, all elements in the same testbed share the same
namespaces (e.g. there cannot be two testbed nodes with the same ID at different
sites).
Version 1.0 – 15/11/2012 – D2.1 Initial system software and services of the testbed
Site: A grouping of physically close nodes (not relevant to CONFINE’s architecture).
Network island: A subset of a community network where all hosts can reach
each other at the network layer (OSI model L3), but which is not reachable from
another part of the same community network on a permanent basis (i.e. not because of a
link failure). Any two subsets of different community networks are also considered
islands between themselves unless both have public IP connectivity, in which case
they both belong to the “Internet” island.
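The island definition above (maximal sets of hosts with mutual L3 reachability) can be sketched as a connected-components computation. This is a minimal illustration, not CONFINE code; the host names and link pairs are invented for the example:

```python
# Sketch: "network islands" as connected components of L3 reachability.
# A link (a, b) means hosts a and b can reach each other at the network layer.

def islands(hosts, links):
    """Group hosts into islands: maximal sets with mutual L3 reachability."""
    parent = {h: h for h in hosts}

    def find(h):
        # Union-find with path halving.
        while parent[h] != h:
            parent[h] = parent[parent[h]]
            h = parent[h]
        return h

    for a, b in links:
        parent[find(a)] = find(b)

    groups = {}
    for h in hosts:
        groups.setdefault(find(h), set()).add(h)
    return sorted(groups.values(), key=lambda s: sorted(s))
```

With hosts `a, b, c, d` and links only within `{a, b}` and `{c, d}`, the two pairs form two separate islands, mirroring two permanently disconnected subsets of a community network.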
CONFINE gateway: A machine giving entry to the testbed’s management network
from a set of network islands. It can help connect different parts of a management
network located at different islands over some link external to them (e.g. the
Internet).
CONFINE management network: A testbed-dedicated network to which the nodes in a
CONFINE testbed are connected, together with that testbed’s servers, gateways
and other hosts. To overcome firewalls in community devices and disconnection at
the network layer between islands and community networks, the management
network may be a tunneled VPN. Thus a running management network has no
islands itself: all of its devices are reachable from one another at the network
layer (in the absence of link failures, of course).
commonNodeDB (sometimes abbreviated to “nodeDB” or “CNDB”): A database
where all CONFINE devices and nodes are registered and assigned to responsible
persons. The nodeDB contains all the backing data on devices, nodes, locations
(GPS positions), and antenna heights and alignments: essentially anything
needed to plan, maintain and monitor the infrastructure.
PIECES OF SOFTWARE
CONFINE distribution: A customized OpenWrt distribution with virtualization
capabilities enabled and the proper set of scripts and software to control the
researchers’ slivers. It also ships a set of OS images used as bases for the
researchers’ slivers, and is in charge of running the community container when it
is hosted in the research device.
Control software: It refers to the set of scripts running on the research device that
take orders from the CONFINE server to create and remove the researcher’s slivers
and properly configure their networking. Control software also takes care of
restricting their resource consumption both on the host and on the network.
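The control software's role of creating and removing slivers while capping their resource consumption can be illustrated with an in-memory stand-in. All names here (`SliverManager`, `create_sliver`, the memory-only resource model) are hypothetical simplifications, not the actual CONFINE scripts:

```python
# Illustrative stand-in for the control software described above: it
# creates and removes slivers on a research device and refuses requests
# that would exceed the node's resources (memory only, for brevity).

class SliverManager:
    def __init__(self, total_memory_mb):
        self.total_memory_mb = total_memory_mb
        self.slivers = {}  # slice_id -> allotted memory (MB)

    def create_sliver(self, slice_id, memory_mb):
        if slice_id in self.slivers:
            raise ValueError("a sliver for this slice already exists")
        used = sum(self.slivers.values())
        if used + memory_mb > self.total_memory_mb:
            raise ValueError("insufficient resources on this node")
        self.slivers[slice_id] = memory_mb

    def remove_sliver(self, slice_id):
        self.slivers.pop(slice_id)
```

In the real system these operations would be driven by orders from the CONFINE server rather than called directly.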
INTERFACES’ ACCESS MODES
When requesting a sliver, the researcher will request a set of interfaces attached to the
sliver. We define four types of interface, which determine their capabilities:
Raw interface: The researcher is granted exclusive access to the interface and
can control it at any layer (from physical to application). This is only allowed
on nodes in special locations, so that the traffic generated by the researcher does
not interfere with the proper working of the community network. Additionally, the
interface’s use might be limited to some Wi-Fi channels agreed in advance with the
researcher, and properly monitored to ensure enforcement.
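The channel restriction mentioned for raw interfaces amounts to a simple whitelist check; the channel numbers below are illustrative assumptions, not values agreed in any real deployment:

```python
# Sketch of the raw-interface channel restriction: only Wi-Fi channels
# agreed in advance with the researcher may be used. Channel set is invented.

AGREED_CHANNELS = {1, 6, 11}

def channel_allowed(channel):
    """True if the requested Wi-Fi channel was agreed in advance."""
    return channel in AGREED_CHANNELS
```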
Passive interface: When requesting a passive interface, the researcher will be able
to capture all the traffic seen by the interface (possibly filtered by the control
software). However, the researcher will not be able to generate traffic and forward
it through this interface. A physical interface on the CONFINE node might be
mapped to several passive interfaces on different slices.
Isolated interface: Used to share the same physical interface while remaining
isolated at L3, by tagging all outgoing traffic with one VLAN tag per slice. With
an isolated interface the researcher will be able to configure it at L3, while
several slices may share the same physical interface.
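The per-slice VLAN tagging that isolates slices on a shared physical interface can be sketched as a tag allocator. The tag range and the allocator API are assumptions for illustration only:

```python
# Sketch: L3 isolation on a shared interface via one VLAN tag per slice.
# Each slice gets a stable, distinct tag; outgoing frames carry the tag
# of the slice that generated them.

class VlanAllocator:
    def __init__(self, first=100, last=4094):
        self.next_tag = first
        self.last = last
        self.tag_of = {}  # slice_id -> VLAN tag

    def tag_for(self, slice_id):
        """Return the slice's VLAN tag, allocating one on first use."""
        if slice_id not in self.tag_of:
            if self.next_tag > self.last:
                raise RuntimeError("VLAN tag space exhausted")
            self.tag_of[slice_id] = self.next_tag
            self.next_tag += 1
        return self.tag_of[slice_id]

def tag_frame(frame, tag):
    # Outgoing traffic from a sliver carries its slice's tag, keeping
    # slices that share the physical interface separated.
    return {"vlan": tag, "payload": frame}
```

Two slices sharing an interface thus always emit distinguishable, separately switchable traffic.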
Traffic interface: It has an IP address already assigned, and all traffic generated by
the slice (and sent through this interface) with a different source address will be
dropped. Traffic interfaces may be assigned either a public or a private address, defining
a public interface or a private interface. Traffic from a public interface will be
bridged to the community network, whereas traffic from a private interface will be
forwarded to the community network by means of NAT. Every container will have at
least a private interface.
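The traffic-interface rules above can be sketched in two small functions: one dropping packets whose source differs from the assigned address, and one choosing bridging versus NAT. The packet representation, the function names, and the use of RFC 1918 ranges to detect a private address are illustrative assumptions:

```python
# Sketch of the traffic-interface behaviour: source-address filtering,
# plus public/private handling (bridge vs NAT).

import ipaddress

def filter_outgoing(packets, assigned_addr):
    """Keep only packets whose source matches the interface's assigned address."""
    return [p for p in packets if p["src"] == assigned_addr]

def forwarding_mode(assigned_addr):
    """Public addresses are bridged to the community network; private
    ones (here approximated by RFC 1918 ranges) are forwarded via NAT."""
    ip = ipaddress.ip_address(assigned_addr)
    return "nat" if ip.is_private else "bridge"
```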
USERS
Community user: A person who belongs to a community network and generates
traffic coming from a community node.
Researcher: A user registered on the testbed framework with rights to create and
run an experiment. Their goal is to make use of the testbed to analyse the effects
of their experiment in a real environment.
Administrator: A person responsible for managing the users and nodes from a site
on the testbed.
Elements and definitions of the framework
Experiment: A series of actions (programs with accompanying data) to be run on
behalf of a researcher on a testbed. Experiments can generate traffic (active
experiments) or not (passive experiments).
Slice: A set of resources spread over several nodes in a testbed, allowing
researchers to run experiments over it.
Sliver: The partition of a node’s resources assigned to a specific slice.
Resource: Anything that can be named and/or reserved; e.g. a node, a virtual link
or a radio is a resource, but CPU and memory are not.
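The slice/sliver relationship defined above (a slice aggregates slivers, each sliver being the partition of one node's resources assigned to that slice) can be captured in a small data model. The class and field names are assumptions for illustration:

```python
# Illustrative data model for the slice/sliver definitions above.

from dataclasses import dataclass, field

@dataclass
class Sliver:
    node_id: str     # the node whose resources are partitioned
    resources: dict  # the partition assigned to the slice, e.g. {"interfaces": 1}

@dataclass
class Slice:
    slice_id: str
    slivers: list = field(default_factory=list)

    def add_sliver(self, sliver):
        self.slivers.append(sliver)

    def nodes(self):
        """The set of nodes the slice spreads over."""
        return {s.node_id for s in self.slivers}
```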
In this subsection some testbed characteristics are defined, establishing what is to be
expected when using them (e.g. when saying that two slices are isolated, what can be
expected from that isolation).
Federation: The explicit cooperation between two or more testbeds, in which part
of their governance is delegated either to a central authority or distributed
among the other testbeds belonging to the federation in order to achieve some
goals. Typically those goals will define the type of federation and the rules of the
agreement. Usual objectives for federation are achieving scale, gaining realism,
increasing the number of services offered by the testbed, increasing the geographic
extent of the testbed, etc.
Isolation: The difficulty for an experiment running inside a sliver to access or
affect outside data or computations on the same node, beyond what it could do to an
external host.
Privacy: The property of community network traffic whereby an experiment should
not be able to access it, whether it is being forwarded by the node or addressed to it
(unless the traffic is specifically addressed to the experiment).