C²I² Press Release
An extended local area network (LAN) protocol (or set of protocols) with functionality spanning the session, presentation and application layers of the network is required to meet the generic and specific needs of real-life, real-time systems.
This is one of the conclusions contained in an extensive study of network strategies for real-time, mission-critical distributed systems by Richard Young, Managing Director of C²I² Systems, a pre-eminent Cape Town-based systems integration and networking company.
Young's aim was to analyse the information management requirements of typical next-generation distributed control systems, with a view to synthesising an optimal solution using distributed computing elements and LANs.
The ideal solution had to provide optimal performance, dependability, transparency and flexibility, as well as exhibit a high degree of integration across all its functional areas. Further, the optimal solution had to incorporate an open systems architecture and comply with international standards.
The study found that the most appropriate means to meet numerous, diverse and stringent user requirements is a distributed system architecture integrated by means of LANs.
Such an architecture is best able to satisfy high-level (or allocated) requirements, such as system coherence, dependability and operability, as well as lower-level (or derived) requirements such as reliability, maintainability, reconfigurability and electromagnetic compatibility.
In the part of the study dealing with suitable protocols for real-time, mission-critical distributed systems, Young evaluated all of the commonly used higher-layer protocols. He concluded, on the basis of extensive theoretical and practical research, that none of the current higher-layer protocols would meet all of the requirements of the next generation of real-time systems.
These requirements may be summarised as follows:
"The current high-level protocols, while meeting some of the above requirements for real-time systems, do not meet all of them," Young noted. "This functional inadequacy implies that an alternative, extended protocol should be defined and developed to address the full spectrum of requirements of next-generation real-time systems."
An example of such an extended protocol is APIS (Application Interface Services), developed by Young and colleagues between 1993 and 1996.
APIS is a network communications protocol designed for the exchange of information between functionally independent applications incorporated into a distributed, real-time system. APIS conceptually encompasses Layers 5 to 7 of the ISO OSI Reference Model, and so interfaces below to Layer 4, the Transport Layer, and above to the APIS Service User (ASU).
The ASU is typically a collaborative, networked software application producing and/or consuming data of different types, as previously defined by an ASU administration authority (i.e. the System Data Manager) as part of the network design. Each data type is ascribed a unique identification code, or Message Identifier.
"The APIS protocol establishes the necessary communication channels between ASUs by registering and matching their producers and consumers," Young explains. "LAN dataflow is therefore determined by the data type of ASU messages and not by predefined ASU addresses.
"This data-driven approach to dataflow management provides a higher level of flexibility than the traditional addressed-point-to-addressed-point facilities of current LAN protocols. The objective of this approach is to simplify ASU communication logic and configuration."
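The registration-and-matching scheme described above can be sketched in a few lines. The following is a minimal illustration, not the APIS implementation; the class name, ASU names and Message Identifier values are all hypothetical:

```python
from collections import defaultdict

class ApisRegistry:
    """Sketch of data-driven channel matching: routing is keyed on
    Message Identifiers (data types), never on node addresses."""

    def __init__(self):
        self.producers = defaultdict(set)  # message_id -> producing ASUs
        self.consumers = defaultdict(set)  # message_id -> consuming ASUs

    def register_producer(self, asu, message_id):
        self.producers[message_id].add(asu)

    def register_consumer(self, asu, message_id):
        self.consumers[message_id].add(asu)

    def route(self, message_id):
        """Return the consumers of a message, chosen purely by its type."""
        return self.consumers[message_id]

# A producer need not know who consumes its data:
reg = ApisRegistry()
reg.register_producer("nav_app", 0x0101)
reg.register_consumer("display_app", 0x0101)
reg.register_consumer("logger_app", 0x0101)
print(sorted(reg.route(0x0101)))  # ['display_app', 'logger_app']
```

Note that adding a new consumer requires no change to any producer's configuration, which is the simplification the data-driven approach is aiming at.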
The intrinsic operation of APIS requires a number of specific services from the lower-layer protocols. These services are: broadcast (derived from media and MAC-layer protocols capable of multiple access), reliable multicast (derived from the transport layer, e.g. XTP, or possibly from LLC Type 4) and multicast group management (derived from the transport layer, e.g. XTP, or functionality implemented within APIS itself).
Apart from lower-layer services, APIS relies on two fundamental mechanisms to support the data-driven approach: message classification and wildcarding (regular-expression-style generic addressing). Message classification characterises each message by type, sub-type and identifier (ID). Messages are grouped by type and sub-type and uniquely identified by Message ID. Such classification is made on the basis of origin (i.e. producer) and application category (e.g. contact, target, navigation data, etc.).
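One way to realise such a type/sub-type/ID classification is to pack the three fields into a single Message Identifier. The field widths and category codes below are illustrative assumptions only, not taken from the APIS specification:

```python
def make_message_id(msg_type, sub_type, ident):
    """Pack type / sub-type / identifier into a 24-bit Message ID
    (8 bits per field is an assumption for illustration)."""
    assert 0 <= msg_type < 256 and 0 <= sub_type < 256 and 0 <= ident < 256
    return (msg_type << 16) | (sub_type << 8) | ident

def split_message_id(mid):
    """Recover the (type, sub_type, id) fields from a Message ID."""
    return (mid >> 16) & 0xFF, (mid >> 8) & 0xFF, mid & 0xFF

NAV = 0x01           # hypothetical code for the navigation-data category
NAV_POSITION = 0x02  # hypothetical sub-type within that category
mid = make_message_id(NAV, NAV_POSITION, 0x07)
assert split_message_id(mid) == (NAV, NAV_POSITION, 0x07)
```

Packing the classification into fixed bit fields is what later makes wildcarding over address subsets cheap to evaluate.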
Producers and consumers register with their own (local) APIS, which identifies them to the system by means of an APIS broadcast. The status of all producers and consumers is then maintained in a local status table, which is updated periodically as well as after each significant status event. The aggregate of this process provides a distributed, real-time dataflow management agent.
The Message Identification scheme is designed in such a way as to support wildcarding. This is defined as group addressing by means of address subsets, allowing producers and consumers to be accessed through a generic addressing scheme. APIS employs wildcarding to manage production and consumption of messages, both individually and by group (type) and sub-group (sub-type). Wildcard produce and wildcard demand are both supported as APIS service options.
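Group addressing by address subsets amounts to matching a concrete (type, sub-type, ID) triple against a pattern in which some fields are left open. A minimal sketch, with the wildcard marker and field values assumed for illustration:

```python
WILD = None  # marker for a field that matches any value

def matches(pattern, fields):
    """True if a (type, sub_type, id) pattern matches concrete fields;
    WILD in the pattern matches any value (group addressing)."""
    return all(p is WILD or p == f for p, f in zip(pattern, fields))

# Demand every sub-type and ID within message type 0x01 (group level):
assert matches((0x01, WILD, WILD), (0x01, 0x02, 0x07))
assert not matches((0x01, WILD, WILD), (0x03, 0x02, 0x07))

# Demand only sub-type 0x02 of type 0x01 (sub-group level):
assert matches((0x01, 0x02, WILD), (0x01, 0x02, 0x44))
assert not matches((0x01, 0x05, WILD), (0x01, 0x02, 0x44))
```

A wildcard demand would register such a pattern as a consumer interest; a wildcard produce would register it on the producing side.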
The APIS protocol includes a number of APIS Services. These are message and dataflow control services supplied to the APIS Service User. Briefly, the APIS Services are:
While APIS's data-driven approach abstracts the implementation details from the application user, it has certain implications which may be significant for real-time, distributed systems. In particular, if applications are completely decoupled from the transfer infrastructure they are unable to effect control of dataflow. Various methods are available, however, to overcome such a situation, such as lowering messaging rates, message filtering and extended message buffering.
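Two of the mitigations named above, message filtering and extended buffering, can be combined on the consumer side as sketched below. The class and field names are hypothetical, and a real system would tune the capacity and filter to its timing budget:

```python
from collections import deque

class BoundedInbox:
    """Sketch of consumer-side dataflow relief: a filter predicate
    discards unwanted messages, and a bounded buffer absorbs bursts."""

    def __init__(self, capacity, accept):
        # Extended buffering: when full, the oldest message is dropped,
        # which favours fresh data (a common real-time policy).
        self.buffer = deque(maxlen=capacity)
        self.accept = accept  # message-filtering predicate

    def deliver(self, message):
        if self.accept(message):   # message filtering
            self.buffer.append(message)

# Keep only navigation messages, buffering at most two at a time:
inbox = BoundedInbox(capacity=2, accept=lambda m: m["type"] == "nav")
for m in [{"type": "nav", "seq": 1}, {"type": "radar", "seq": 2},
          {"type": "nav", "seq": 3}, {"type": "nav", "seq": 4}]:
    inbox.deliver(m)
print([m["seq"] for m in inbox.buffer])  # [3, 4]
```

Lowering messaging rates, the third mitigation, would instead be applied at the producer, for example by publishing only every n-th update of a fast-changing value.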
"As all software-implemented protocols, including APIS, require a greater or lesser degree of protocol processing, system performance can be significantly enhanced by relieving the sub-system host of such processing," Young concluded. "This implies that the network interface be provided with its own processing capability, i.e. an off-host architecture.
"While this is more costly, it leads to much greater transfer performance, which is more often than not justified in critical real-time systems. Where Application Programming Interface (API) dataflow control is required, it would be imperative for the network interface to be provided with onboard processing and buffering capability."