A quick look inside the software architecture of the IRATI prototype

This blog post provides an introduction to the high-level software architecture of the IRATI prototype. The Linux OS was chosen as the target platform for IRATI because of its wide use in different contexts, its open-source nature backed by a large community and extensive documentation, and the maturity and stability of the platform. However, the implementation aims to be as reusable as possible in similar environments (i.e. other UNIX-based operating systems). IRATI’s RINA implementation targets both user space and kernel space, in order to achieve two important design goals:

  • Performance penalties must be kept low for frequent tasks (such as reading and writing data)
  • Device driver functionality must be accessible in order to overlay RINA on top of Ethernet (or other networking technologies in the future)
The software components are partitioned into three categories: daemons and libraries in user space, and kernel components. Libraries abstract the communication details between user-space components, and between user-space and kernel components. Kernel components implement the “data transfer” and “data transfer control” parts of normal IPC Processes, as well as shim DIFs. Daemons implement the “layer management” parts of IPC Processes, as well as the IPC Manager.

Figure 1. High level software architecture of the IRATI RINA implementation

Kernel components

There are three types of components that have been placed in the kernel:

  • Kernel IPC Manager. Coordinates and manages the different kernel components, and is the point of entry for any communication with user-space components.
  • Data transfer and data transfer control functions of normal IPC Processes. The IPC process functionality is split between kernel and user-space. The kernel components implement the functions that need to be executed more often but are less complex: Delimiting, the Error and Flow Control Protocol (EFCP), the Relaying and Multiplexing Task (RMT) and SDU protection.
  • Shim IPC Processes. Overlay RINA on top of different existing networking technologies (Ethernet, TCP/UDP, WiFi, USB, etc.) by wrapping them with the RINA API. Shim DIFs are the points where data enters and exits “RINA-land”.

Libraries

The IRATI implementation provides the following libraries to support the operation of user-space daemons and applications.

  • librina-application. Provides the APIs that allow an application to use RINA natively, enabling it to allocate and deallocate flows, read and write SDUs to those flows, and register with/unregister from one or more DIFs.
  • librina-ipcmanager. Provides the APIs that allow the IPC Manager to perform the tasks related to IPC Process creation, deletion and configuration.
  • librina-ipc-process. APIs exposed by this library allow an IPC Process to configure the PDU forwarding table (through Netlink sockets), to create and delete EFCP instances (through system calls), to request the allocation of kernel resources to support a flow (through system calls also) and so on.
  • librina-faux-sockets. Allows adapting a non-native RINA application (a traditional UNIX-socket-based application) to run over the RINA stack.
  • librina-cdap. Implementation of the CDAP protocol.
  • librina-sdu-protection. APIs and implementation to use the SDU-protection module in user space to protect and unprotect SDUs (adding CRCs, encryption, etc.)
  • librina-common. Common interfaces and data structures.

Daemons

Two types of daemons in user space are responsible for implementing the IPC Manager and the remaining IPC Process functionality.

  • IPC Process Daemon. Implements the layer management parts of an IPC Process, which comprise the functions that are more complex but execute less often. These functions include the RIB and the RIB Daemon, Enrollment, Flow Allocation, Resource Allocation and PDU Forwarding Table computation. There is one instance of the IPC Process Daemon per IPC Process in the system.
  • IPC Manager. The central point of management in the system. It is responsible for the management and configuration of IPC Process daemons and kernel components, and hosts the local management agent and the Inter-DIF Directory (IDD). There is one instance of the IPC Manager Daemon per system (it could be replicated to increase availability).

Inter-component communications

The different components communicate by using the following mechanisms.

  • Netlink sockets (User space <-> User space / User space -> Kernel / Kernel -> User space). These match many of the design requirements (1:N, N:M, asynchronous, communication initiated from either space). Netlink supports kernel-to-user-space communication thanks to its full-duplex channels, allowing the framework to notify applications about specific events.
  • System calls (User space -> Kernel). For user-originated atomic actions, or those belonging to fast paths, a set of system calls provides an efficient and less resource-consuming approach for synchronous user-to-kernel dialogues (e.g. read/write operations).
  • Sysfs (Kernel configuration and monitoring). Allows configuring the different RINA components residing in kernel space, as well as obtaining metrics from these kernel components in user space.