Chapter 8: Architectural Styles — Software Design and Architecture Specialization, University of Alberta
The choice of programming paradigm influences the architectural style of a system. Object-oriented programming, in particular, is characterized by several principles and design patterns. These include:
- Abstraction, which simplifies complex concepts.
- Encapsulation, bundling data and functions into a self-contained object with an exposed interface for interaction with other objects.
- Decomposition, breaking a whole into parts for easier management.
- Generalization, allowing the extraction of commonalities between concepts.
Object-oriented design patterns can be classified into three categories:
- Creational patterns, guiding the creation of new objects.
- Structural patterns, describing relationships between objects and interactions between classes and subclasses.
- Behavioral patterns, focusing on how objects work individually or as a group to accomplish tasks.
These principles and design patterns collectively contribute to the object-oriented architectural style of a system.
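To make the creational category concrete, here is a minimal Python sketch of a factory function, one common creational pattern; the Shape, Circle, and Square names are hypothetical and chosen only for illustration.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstract product type for the example."""
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

def shape_factory(kind: str, size: float) -> Shape:
    """Creational pattern: callers ask for a Shape without
    naming the concrete class themselves."""
    if kind == "circle":
        return Circle(size)
    if kind == "square":
        return Square(size)
    raise ValueError(f"unknown shape kind: {kind}")

print(shape_factory("circle", 2.0).area())
```

Callers obtain objects through the factory rather than constructing concrete classes directly, which is the essence of the creational category.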
Abstract Data Types and Object-Oriented Design
Object-oriented design in architecture focuses on data organization. It begins by identifying various data types within the system, which are then represented as classes. These classes encapsulate related attributes and methods, restricting access to the data and governing permissible operations. In an object-oriented system, each object is an instance of a class, and interaction between objects occurs through their methods. The paradigm allows for inheritance, enabling one abstract type to extend another.
Classes, forming the foundation of the architecture, determine the overall structure of the system. The object-oriented approach directly influences the architectural style, with the system’s design driven by the principles of the object-oriented paradigm. While this style suits certain problems well, not all situations lend themselves to easily identifiable classes. Consequently, it’s important to consider alternative design choices when necessary.
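As a minimal sketch of these ideas, assuming a hypothetical banking domain, the Python classes below encapsulate state behind methods and use inheritance so that one abstract type extends another.

```python
class BankAccount:
    """Hypothetical abstract data type: the balance is encapsulated
    and can only change through the exposed methods."""
    def __init__(self, balance: float = 0.0):
        self._balance = balance  # internal state, not accessed directly

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self) -> float:
        return self._balance

class SavingsAccount(BankAccount):
    """Inheritance: the subtype extends the base abstract type."""
    def __init__(self, balance: float = 0.0, rate: float = 0.02):
        super().__init__(balance)
        self.rate = rate

    def add_interest(self) -> None:
        self._balance += self._balance * self.rate

acct = SavingsAccount(100.0)
acct.add_interest()
print(acct.balance())  # 102.0
```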
Main Program and Subroutine
The main program and subroutine architectural style, derived from the procedural programming paradigm, focuses on functions. This style involves breaking down the system’s overall functionality into a main program and subroutines, forming a hierarchical structure through procedure calls. Data is stored as variables, and while abstract data types are supported, inheritance is not explicitly facilitated. The focus lies on the behavior of functions and how data moves through them. Each subroutine can have its own local variables and may access data within its scope. Data can be passed into and out of subroutines as parameters and return values.
A key principle is “one entry, one exit per subroutine,” simplifying the control flow within subroutines. This architectural style encourages modularity and function reuse, promoting advantages such as modular design and the integration of library functions.
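Here is a minimal sketch of this style in Python, using a hypothetical expense-tracking task: a main program coordinates subroutines that receive data as parameters, keep their own local variables, and follow the one-entry, one-exit convention.

```python
def read_expenses(raw: str) -> list[float]:
    """Subroutine: one entry, one exit; data enters as a parameter
    and leaves as a return value."""
    return [float(item) for item in raw.split(",")]

def total(expenses: list[float]) -> float:
    result = 0.0  # local variable, scoped to this subroutine
    for amount in expenses:
        result += amount
    return result  # single exit point

def main() -> None:
    """Main program: coordinates the subroutines in a call hierarchy."""
    expenses = read_expenses("12.50,3.99,7.25")
    print(f"Total spent: {total(expenses):.2f}")

if __name__ == "__main__":
    main()
```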
However, it also presents certain drawbacks, including unpredictable mutations of shared global data, which can lead to runtime errors. Procedural programming is best suited for computation-centric systems, such as spending management programs, where identifying object-oriented components may result in overly complex solutions. Conversely, for problems where modeling with abstract data types simplifies the solution, object-oriented architectural styles are more appropriate.
Repository-Based Systems
In modern software development, it's crucial to create software architectures capable of sharing information between different components. A data-centric software architecture addresses the transient nature of component state and the need for efficient data communication between components. This approach facilitates the storage and sharing of data, enhancing the system's maintainability, reusability, and scalability. Integrating a shared data storage method, such as a database, is a key aspect of this architecture.
The data-centric architecture primarily consists of two types of components:
- Central data: This component functions as the central repository for storing and serving data across all connected components. It serves as the primary source of shared data within the system.
- Data accessors: These components connect to the central data component, facilitating queries and transactions against the stored information. Data accessors are segregated from one another and communicate solely with the central data component. They interact with the central data component to retrieve and update data based on the current state of the system.
Understanding the roles and functionalities of these components is essential for comprehending the dynamics of the data-centric architecture and its operations within the software system.
Databases
Data-centric software architecture relies on databases to store and share central data, offering specific advantages and disadvantages. The qualities ensured by databases in data-centric architectures include data integrity, which maintains accurate and consistent data, and data persistence, enabling data to survive after component termination. Relational databases, using tables and Structured Query Language (SQL), facilitate data sharing among data accessors.
Database management systems (DBMS) can automate query and transaction management, simplifying database integration. In this architectural design, central data serves as a passive repository focused on storing and serving information, with minimal data processing or business logic.
Data accessors are the components that connect to the database. They are able to:
1. Share a set of data while operating independently.
2. Communicate with the database through queries and transactions, without the need for direct interaction with other data accessors.
3. Query the database for system information to perform computations.
4. Save the updated system state back into the database using transactions.
Each data accessor contains the necessary business rules for its functions, enabling the separation of concerns and controlled usage, granting end users permissions only for what they need. Data-centric architecture provides several advantages over basic object-oriented systems due to the integration of a centralized database, such as increased support for data integrity, reduced overhead for data transfer between data accessors, scalability, and better information management.
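As a minimal sketch of a data accessor, assuming an in-memory SQLite database and a hypothetical inventory table, the accessor below queries the current state, applies its own business rule, and writes the update back inside a transaction.

```python
import sqlite3

# Central data component: a shared database (in-memory for this sketch).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, quantity INTEGER)")
db.execute("INSERT INTO inventory VALUES ('widget', 10)")
db.commit()

def sell_item(conn: sqlite3.Connection, item: str, count: int) -> None:
    """Data accessor: queries state, applies its business rule,
    and saves the new state back using a transaction."""
    with conn:  # commits on success, rolls back on error
        (quantity,) = conn.execute(
            "SELECT quantity FROM inventory WHERE item = ?", (item,)
        ).fetchone()
        if quantity < count:  # the business rule lives in the accessor
            raise ValueError("insufficient stock")
        conn.execute(
            "UPDATE inventory SET quantity = ? WHERE item = ?",
            (quantity - count, item),
        )

sell_item(db, "widget", 3)
print(db.execute("SELECT quantity FROM inventory").fetchone())  # (7,)
```

Note that the database itself stays a passive repository; all business logic sits in the accessor.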
However, data-centric architecture also has disadvantages, including heavy reliance on the central data component and dependencies on the existing data schema, which is difficult to change because schema changes ripple out to every data accessor. Safeguards such as data redundancy can also be costly.
In summary, data-centric software architecture is commonly used to store and manage large amounts of data in a central repository, enhancing system stability, reusability, maintainability, and performance. It segregates the functionality of data accessors and facilitates data sharing through database queries and transactions. The choice to adopt a data-centric architecture depends on the specific context and requirements of the problem at hand.
Layered Systems
A layered architecture is a structural design pattern in which a system is organized into distinct layers or tiers, with each layer serving a specific purpose and interacting with adjacent layers. This approach is often used to separate concerns and promote modularity within a system. A common example of a layered system is an operating system, with the kernel interacting directly with hardware and higher layers providing user-level functionality.
Key characteristics of layered architecture include:
1. Isolation of Concerns: Each layer focuses on specific responsibilities or purposes, promoting separation of concerns. Commonly, layered systems are divided into “presentation,” “logic,” and “data” layers, allowing for clear separation and abstraction of functionality.
2. Defined Interfaces: Components within a layer provide well-defined interfaces, driven by the system’s needs. These interfaces are used for communication between layers, and they ensure that upper layers interact only with the layer immediately below them (see the sketch below).
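To make these two characteristics concrete, here is a minimal Python sketch with hypothetical presentation, logic, and data layers, where each layer exposes an interface and calls only the layer immediately below it.

```python
# Data layer: the only layer that touches storage.
class DataLayer:
    def __init__(self):
        self._users = {1: "Ada"}
    def fetch_user(self, user_id: int) -> str:
        return self._users[user_id]

# Logic layer: talks only to the data layer below it.
class LogicLayer:
    def __init__(self, data: DataLayer):
        self._data = data
    def greeting_for(self, user_id: int) -> str:
        return f"Hello, {self._data.fetch_user(user_id)}!"

# Presentation layer: talks only to the logic layer below it.
class PresentationLayer:
    def __init__(self, logic: LogicLayer):
        self._logic = logic
    def render(self, user_id: int) -> None:
        print(self._logic.greeting_for(user_id))

PresentationLayer(LogicLayer(DataLayer())).render(1)  # Hello, Ada!
```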
Advantages of layered systems include:
- Ease of Use: Users can perform complex tasks without needing to understand the intricacies of lower layers.
- Security and Privilege: Different layers can run at different levels of authorization or privilege, enhancing system security and reliability.
- Loose Coupling: Layers are loosely coupled, following the principle of least knowledge, which promotes modularity and flexibility.
- Consistency: A layer can be replaced with a new implementation, as long as the interface it provides to the layer above remains consistent with the previous one.
However, enforcing strict layering can introduce trade-offs in terms of efficiency. Interactions that cross multiple layers can lead to added complexity and resource usage. Balancing this overhead against the benefits of separation of concerns is essential in design.
Layered architecture can be adapted to allow for “passthrough” or exceptions to strict layer boundaries when necessary. This flexibility helps manage design complexity and can be a practical approach in certain situations.
In summary, layered architecture is a powerful and intuitive design pattern commonly used in many organizations and solutions. It supports the separation of concerns, modular design, loose coupling, and can be adapted to various needs, making it a valuable approach for structuring systems. However, it’s crucial to strike a balance between strict layering and practicality, considering the trade-offs in efficiency and complexity.
Client-Server and n-Tier
n-Tier or multitier architectures are closely related to layered architectures, and they refer to the organization of components across different physical machines or tiers. These tiers are built on layered architectures, where each tier performs specific functions and interacts with adjacent tiers. While the terms “tier” and “layer” are often used interchangeably, they are not identical, as tiers represent a higher-level grouping.
Commonly, n-Tier architectures consist of three-tier or four-tier setups, where each tier communicates with adjacent tiers, often following a client/server relationship. The client makes requests to the server, which provides services like data storage or computation, resulting in a request-response communication pattern.
Key points about n-Tier architectures:
- Tiers can act as both servers and clients, serving their clients’ requests and making requests to other tiers.
- Request-response relationships can be either synchronous or asynchronous. Synchronous communication waits for a response, potentially causing delays, while asynchronous communication allows the client to continue processing other tasks independently (see the sketch below).
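A minimal sketch of the two communication modes, simulating a hypothetical server with a deliberately slow function and using a thread pool for the asynchronous case:

```python
import concurrent.futures
import time

def server_handle(request: str) -> str:
    """Hypothetical server-side service: pretend to do slow work."""
    time.sleep(0.5)
    return f"response to {request!r}"

# Synchronous: the client blocks until the response arrives.
print(server_handle("sync request"))

# Asynchronous: the client submits the request, keeps working,
# and collects the response later.
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(server_handle, "async request")
    print("client continues with other work...")
    print(future.result())  # pick up the response when needed
```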
The advantages of n-Tier architecture include scalability, centralization of functionality and data, reduced processing power requirements for client machines, and support for separation of concerns. Additional tiers can be added as needed for specific purposes, enhancing modularity and abstraction.
However, there are drawbacks to consider, such as increased resource requirements for managing client/server relationships and the potential for a server to act as a central point of failure. Redundancy and failover mechanisms can mitigate this, but they add complexity.
In summary, n-Tier architecture is highly scalable, centralizes functionality and data, supports separation of concerns, and reduces processing requirements on client machines. It is well-suited for systems that can be divided into service and request roles, but it should be carefully managed to balance the benefits and challenges associated with its use.
Interpreter-Based Systems
Interpreter-based systems enable users to write scripts, macros, or rules that access and manipulate the basic features of the system in dynamic ways. The interpreter component within these systems allows the execution of user-specified actions during runtime, abstracting the underlying implementation details from the end user. This functionality provides users with flexibility and portability, applicable in various commercial systems.
Scripts and macros, integral components of interpreter-based systems, facilitate task automation and repetitive actions, enabling users to compose complex tasks from predefined commands. Interpreters let users extend a system’s existing functionality by combining predefined functions in a specific sequence, without requiring developers to implement every possible combination of functionality.
Such systems encourage users to implement customizations in an easy-to-use language with domain-specific abstractions, tailored to their needs and way of thinking. Users do not need general programming knowledge to customize the system, which promotes accessibility and usability.
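A minimal sketch of such an interpreter in Python, assuming a hypothetical three-command macro language; each script line is translated and dispatched to a predefined function, one line at a time.

```python
def make_interpreter():
    """Minimal sketch of an interpreter for a hypothetical macro
    language: each line is a predefined command plus arguments."""
    state = {"total": 0}

    def add(amount):
        state["total"] += int(amount)

    def double():
        state["total"] *= 2

    def show():
        print(state["total"])

    commands = {"add": add, "double": double, "show": show}

    def run(script: str) -> None:
        # Line-by-line translation and execution, as basic interpreters do.
        for line in script.strip().splitlines():
            name, *args = line.split()
            commands[name](*args)  # dispatch to the predefined command

    return run

run = make_interpreter()
run("""
add 5
double
show
""")  # prints 10
```

Users can combine the predefined commands in any sequence without the developer having implemented every combination, which is the key appeal of this style.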
While interpreter-based systems offer portability across supported platforms, which is especially valuable with the rise of virtual machines and cloud-hosted services, they can be relatively slow due to the line-by-line translation and execution strategy of basic interpreters. This trade-off, flexibility for developers and end users in exchange for slower execution, is a notable disadvantage of these systems.
In summary, interpreter-based systems provide users with the ability to customize and extend system functionalities, leveraging scripts, macros, and rules. They offer portability across platforms and simplified customizations, but their execution speed may be slower due to the nature of interpretation, presenting a trade-off between flexibility and performance.
Dataflow Systems
The pipe and filter architectural style is a type of data flow architecture that involves a series of transformations on sets of data. This style is characterized by the use of components, known as filters, that process data and pass it along through interconnected pipelines or pipes. Each filter performs a specific operation on the data, transforming it from one form to another.
The key components in a pipe and filter architecture include:
- Pipes: These represent the channels through which data flows between filters. They enable the sequential processing of data as it moves from one filter to another, allowing for a modular and efficient data transformation process.
- Filters: These components perform specific operations on the data flowing through the system. Each filter is designed to handle a particular task or transformation, and the output from one filter becomes the input for the next filter in the pipeline (see the sketch below).
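One lightweight way to realize this style in Python is with generators: each generator acts as a filter, and iteration plays the role of the pipe. The text transformations here are hypothetical and chosen only for illustration.

```python
# Each filter consumes an upstream iterable (the pipe) and yields
# transformed items downstream.
def read_lines(text):
    for line in text.splitlines():
        yield line

def strip_blanks(lines):
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    for line in lines:
        yield line.upper()

# Compose the pipeline: the output of one filter feeds the next.
source = "hello\n\nworld\n"
for line in to_upper(strip_blanks(read_lines(source))):
    print(line)  # HELLO, then WORLD
```

Because each filter only depends on the shape of its input and output, any filter can be modified, reordered, or reused independently.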
The pipe and filter architectural style offers several advantages:
- Modularity: Filters can be developed and modified independently, promoting code reusability and easier maintenance.
- Scalability: The system can be easily scaled by adding or removing filters as needed, without affecting the overall architecture.
- Reusability: Filters can be reused in different parts of the system or in different systems altogether, enhancing the overall flexibility and efficiency of the design.
However, there are also some challenges associated with the pipe and filter style:
- Overhead: The use of pipes to connect filters can introduce additional overhead, potentially impacting the system’s overall performance and efficiency.
- Data integrity: Ensuring data integrity and consistency throughout the data flow process, especially when dealing with large volumes of data, can be complex and challenging.
Overall, the pipe and filter architectural style is an effective approach for designing data flow systems, providing modularity, scalability, and reusability. By leveraging this style, developers can create systems that efficiently process and transform data through a series of interconnected filters, enabling a flexible and adaptable data processing environment.
Implicit Invocation Systems
In implicit invocation systems, components do not communicate with each other directly. The event-based architectural style falls within this category; it is derived from the event-driven programming paradigm, which revolves around the concepts of events and event handlers.
In an event-based architectural style, the system operates based on the occurrence of events, such as user actions, messages, or signals. These events trigger the execution of corresponding event handlers, which are responsible for carrying out specific actions or processes in response to the events. The event handlers are designed to handle specific types of events and can be registered or unregistered dynamically during runtime.
Key elements of the event-based architectural style include:
- Events: These are incidents or occurrences within the system, such as user inputs, messages, or system notifications, that trigger specific actions or responses.
- Event Handlers: These components are responsible for handling events and executing corresponding actions or processes when events occur. Event handlers are designed to respond to specific types of events and are registered with the system to listen for those events (see the sketch below).
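A minimal sketch of these elements in Python, with a hypothetical EventBus and login event: handlers register for an event type and are invoked implicitly when a matching event is emitted, so the emitter never calls them by name.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal sketch of implicit invocation: handlers register for
    event types and run when matching events are emitted."""
    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def register(self, event_type: str, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def emit(self, event_type: str, payload) -> None:
        # The emitter does not know which handlers exist.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
bus.register("user_logged_in", lambda user: print(f"audit log: {user}"))
bus.register("user_logged_in", lambda user: print(f"welcome email to {user}"))
bus.emit("user_logged_in", "ada@example.com")
```

New handlers can be registered without touching the emitting code, which is where the extensibility of this style comes from.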
The event-based architectural style offers several advantages:
- Asynchronous Processing: Events can be processed asynchronously, allowing the system to handle multiple events concurrently without blocking the execution flow.
- Loose Coupling: The decoupled nature of event-based systems promotes loose coupling between components, enabling greater flexibility and easier maintenance.
- Extensibility: New event handlers can be added to the system without impacting existing components, making it easier to extend the system’s functionality.
However, there are also challenges associated with the event-based style:
- Complex Control Flow: Managing the flow of events and ensuring the correct sequencing of event handling can be complex, particularly in systems with a large number of interconnected events and handlers.
- Debugging and Testing: Verifying the correctness and reliability of event-driven systems can be challenging, as events can trigger multiple handlers, leading to intricate debugging and testing processes.
Overall, the event-based architectural style is a powerful approach for designing implicit invocation systems, offering benefits such as asynchronous processing, loose coupling, and extensibility. By leveraging this style, developers can create flexible and responsive systems that efficiently handle events and deliver timely and appropriate responses.
Process Control Systems
Process control is a crucial aspect of managing various operations to ensure efficiency and safety. One of the fundamental concepts in process control is the feedback loop, which consists of four essential components: a sensor, a controller, an actuator, and the process itself.
The components of a feedback loop are as follows:
1. Sensor: Monitors a specific parameter or condition in the process. In the example of room temperature control, a thermostat acts as the sensor to measure the room’s temperature.
2. Controller: This component contains the logic that determines how the system should respond to the data provided by the sensor. It calculates the error by comparing the desired setpoint (e.g., the target temperature) with the measured process variable (e.g., the current room temperature).
3. Actuator: The actuator is responsible for physically adjusting or manipulating the process to bring it closer to the desired state. In the room temperature example, the heating vent serves as the actuator, controlling the amount of heat released into the room.
4. Process: The process is the system or parameter that you aim to control. In the case of room temperature control, the room’s temperature itself is the process being managed.
In a feedback loop, the controller logic runs continuously because the process is constantly changing. The frequency at which the loop runs depends on the system’s sensitivity and the desired level of control. For example, room temperature changes relatively slowly, so high-frequency control updates, such as every microsecond, are unnecessary.
Feedback loops can be enhanced by introducing a proportional controller. This type of controller calculates an error value based on the difference between the desired setpoint and the measured process variable, and it applies a correction that is proportional to the error’s magnitude. In the room temperature example, a proportional controller could regulate the heating vent’s position based on the error signal. This allows for a smoother approach to the setpoint as the temperature gets closer, reducing the risk of overshooting the target.
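A minimal simulation of a proportional controller in Python; the setpoint, starting temperature, and gain are assumed values chosen only to show the behavior.

```python
setpoint = 21.0      # desired temperature (degrees C)
temperature = 15.0   # measured process variable
gain = 0.3           # proportional gain (hypothetical)

for step in range(10):
    error = setpoint - temperature   # controller: compute the error
    correction = gain * error        # correction proportional to the error
    temperature += correction        # actuator nudges the process
    print(f"step {step}: {temperature:.2f}")
# The correction shrinks as the error shrinks, easing the temperature
# toward the setpoint instead of overshooting it.
```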
Apart from feedback control, there are other variations in process control:
1. Open Loop: In an open-loop system, the process is controlled without continuously monitoring it. Open-loop systems lack the ability to adapt to changes in the process or evaluate their own success.
2. Feedforward Control: Feedforward control is used in systems where processes are in series. Information from an upstream process is utilized to control a downstream process. This method is valuable when dealing with unknown events and requires a good model of process response. Feedforward control is often combined with feedback loops to create more robust control systems.
For example, a flood protection system may employ feedforward control. When an upstream monitoring station detects high flow rates in a river, it can signal the flood protection system’s controller. The controller can then instruct the actuator, such as a gate, to open and divert water into a reservoir to prevent flooding downstream.
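A minimal sketch combining a feedforward term with a feedback term for the flood example; the flow threshold, target level, and gains are hypothetical values chosen only for illustration.

```python
def control_gate(upstream_flow: float, reservoir_level: float) -> float:
    """Hypothetical flood-control logic combining feedforward and feedback.

    Feedforward: react to the upstream sensor before the surge arrives.
    Feedback: correct based on the measured reservoir level itself.
    """
    FLOW_THRESHOLD = 100.0  # assumed units: cubic meters per second
    TARGET_LEVEL = 5.0      # assumed units: meters
    feedforward = 0.5 if upstream_flow > FLOW_THRESHOLD else 0.0
    feedback = 0.1 * (reservoir_level - TARGET_LEVEL)
    # Gate opening is clamped to the valid range [0, 1].
    return min(1.0, max(0.0, feedforward + feedback))

print(control_gate(upstream_flow=150.0, reservoir_level=5.5))  # 0.55
```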
Complex processes may involve multiple sensors and control mechanisms, resulting in intricate process control architectures. A self-driving car is a complex example: various sensors, controllers, and actuators work together to navigate the vehicle and control its behavior.
Ibrahim Can Erdoğan