The complexity of current software systems, the evolution of their requirements, and the uncertainty in their environments have led the software engineering community to look for inspiration in diverse related fields (e.g., robotics, artificial intelligence, control theory, and biology) for new ways to design and manage complex systems and their evolution. In this endeavor, self-adaptation, i.e., the capability of a system to adjust its behavior in response to changes in the system itself, its requirements, or its environment, has become one of the most promising directions (cf. [1,2]).
The landscapes of software engineering domains are constantly evolving. In particular, software has become the bricks and mortar of many complex systems composed of interconnected parts, where the overall system often exhibits properties that are not obvious from the properties of the individual parts. Extreme cases of such complex systems are ultra-large-scale (ULS) systems and systems of systems (SoS), where self-adaptation, self-organization, and emergence are unavoidable. For software engineering techniques to keep up with these ever-changing landscapes, systems must be built so that they can adapt to their ever-changing surroundings and be flexible, fault-tolerant, robust, resilient, available, configurable, secure, and self-healing. For sufficiently complex systems, these adaptations must necessarily happen autonomously.
Self-adaptive systems can be characterized along multiple dimensions, including centralized vs. decentralized and top-down vs. bottom-up organization. A top-down self-adaptive system is often centralized: it operates under the guidance of a central controller or policy, assesses its own behavior in its current surroundings, and adapts itself if monitoring and analysis warrant it. Such a system often maintains an explicit internal representation of itself and its global goals, and the behavior of the whole system can be composed and deduced by analyzing its components. In contrast, a cooperative self-adaptive or self-organizing system is often decentralized: it operates without a central authority and is typically composed bottom-up of a large number of components that interact locally according to simple rules. The global behavior of the system emerges from these local interactions, and it is difficult to deduce properties of the global system by analyzing only the local properties of its parts. Such systems do not necessarily maintain internal representations of global properties or goals; they are often inspired by biological or sociological phenomena.
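The contrast between the two poles can be sketched in code. The following is a minimal, illustrative sketch (all class names, the toy latency model, and the consensus rule are assumptions made for this example, not a prescribed design): a top-down system uses a central controller with an explicit global goal in a monitor-analyze-plan-execute cycle, while a bottom-up system lets a global property emerge from purely local interactions.

```python
class ManagedServer:
    """Toy managed system: latency shrinks as more workers are added."""
    def __init__(self, workers=1):
        self.workers = workers

    def latency_ms(self, load):
        # Simplistic performance model for illustration only.
        return load / self.workers


class CentralController:
    """Top-down adaptation: explicit global goal, central decision point."""
    def __init__(self, system, target_latency_ms):
        self.system = system
        self.target = target_latency_ms  # explicit global goal

    def adaptation_step(self, load):
        observed = self.system.latency_ms(load)   # monitor
        violated = observed > self.target         # analyze
        if violated:                              # plan
            self.system.workers += 1              # execute
        return observed


class LocalAgent:
    """Bottom-up adaptation: each agent only averages with its neighbors;
    global consensus emerges without any central authority or global goal."""
    def __init__(self, value):
        self.value = value

    def step(self, neighbor_values):
        self.value = (self.value + sum(neighbor_values)) / (1 + len(neighbor_values))


# Top-down: the controller adds workers until the latency goal is met.
server = ManagedServer(workers=1)
controller = CentralController(server, target_latency_ms=50)
for load in [100, 100, 100]:          # repeated control cycles, constant load
    controller.adaptation_step(load)

# Bottom-up: agents converge to agreement through local interactions alone.
agents = [LocalAgent(v) for v in [0.0, 10.0, 20.0]]
for _ in range(50):
    snapshot = [a.value for a in agents]
    for i, agent in enumerate(agents):
        agent.step([snapshot[j] for j in range(len(agents)) if j != i])
spread = max(a.value for a in agents) - min(a.value for a in agents)
```

Note the structural difference: the controller's goal (`target_latency_ms`) is represented explicitly and checked centrally, whereas the agents' agreement is nowhere stated as a goal; it emerges from the local averaging rule.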
Most engineered and nature-inspired self-adaptive systems fall somewhere between these two extreme poles of self-adaptive system types. In practice, the line between these types is rather blurred, and compromises will often lead to an engineering approach that incorporates techniques from both poles. For example, ULS systems embody both top-down and bottom-up self-adaptive characteristics (e.g., the Web as a global system is basically decentralized, but local sub-webs are highly centralized, and server farms combine centralized and decentralized elements).
In this lecture we will review how to build self-adaptive software systems cost-effectively. We will cover existing theories, methods, and techniques for engineering such systems across all life-cycle phases, in particular the necessary adjustments to existing engineering activities as well as the novel activities that become necessary.
As part of this lecture, the students will conduct a project to gain hands-on experience with the concepts discussed in the lecture by building self-adaptive software. The project will consist of several parts, in which students experiment with and evaluate different concepts.