Software systems and computational methods

Integration of cloud, fog, and edge technologies for the optimization of high-load systems

Cherepenin Valentin Anatolyevich

ORCID: 0000-0002-6310-1939

Postgraduate student, Department of Information and Measurement Systems and Technologies, South Russian State Polytechnic University (NPI) named after M.I. Platov

132 Prosveshcheniya str., Novocherkassk, Rostov region, 346428, Russia

cherept2@gmail.com
Smyk Nikolai Olegovich

Postgraduate student, Department of Computer Software, South Russian State Polytechnic University (NPI) named after M.I. Platov

132 Prosveshcheniya str., Novocherkassk, Rostov region, 346428, Russia

smyk.n@list.ru
Vorob'ev Sergei Petrovich

PhD in Technical Sciences

Associate Professor, Department of Information and Measurement Systems and Technologies, South Russian State Polytechnic University (NPI) named after M.I. Platov

132 Prosveshcheniya str., Novocherkassk, Rostov region, 346428, Russia

vsp1999@yandex.ru

DOI: 10.7256/2454-0714.2024.1.69900

EDN: HYTKBH

Received: 13-02-2024

Published: 20-02-2024


Abstract: The study analyzes methods and tools for optimizing the performance of high-load systems using cloud, fog, and edge technologies. The focus is on the concept of a high-load system, the main causes of increased load on such systems, and the dependence of the load on the system's scalability, the number of users, and the volume of processed data. Introducing these technologies implies the creation of a multi-level topological structure that supports the efficient operation of distributed corporate systems and computing networks. Modern approaches to load management are considered, the main factors affecting performance are investigated, and an optimization model is proposed that ensures a high level of system efficiency and resilience to peak loads while maintaining continuity and quality of service for end users. The methodology is based on a comprehensive approach: the analysis of existing problems, the proposal of innovative optimization solutions, and the application of architectural solutions based on IoT, cloud, fog, and edge computing to improve performance and reduce delays in high-load systems. The scientific novelty of this work lies in the development of a unique multi-level topological structure capable of integrating cloud, fog, and edge computing to optimize high-load systems. This structure improves performance, reduces delays, and allows the system to scale effectively while addressing the challenges of managing large data volumes and servicing many simultaneous requests. The conclusions of the study highlight the significant potential of IoT technology in improving production processes, demonstrating how the integration of modern technological solutions can increase productivity, product quality, and the manageability of risks.


Keywords:

High-load systems, Cloud computing, Fog computing, Edge computing, Performance optimization, Scalability, Internet of Things, Technology integration, Data management, Service continuity


Introduction

The term "highly loaded system" is usually applied to web services and sites experiencing intensive interaction with a large number of users at the same time. However, this concept also applies to a variety of information systems and business applications that work with significant amounts of data. Therefore, the tasks of optimizing highly loaded systems are relevant not only for web development, but also in the context of any projects involving the presence of server and client parts. Therefore, a highly loaded system is considered an application that is experiencing a high load due to the multitude of users, vast amounts of data, or the intensity of calculations. These aspects can manifest themselves both together and separately, but the presence of at least one of them indicates increased requirements for system resources.

In the design of modern distributed information systems, it is common to take advantage of Internet of Things (IoT) technologies and to apply architectural solutions based on cloud, fog, and edge computing. Cloud computing is a way of building a distributed system with access to a flexible, scalable pool of resources over the Internet. Fog computing functions as an intermediate layer between end devices and data centers, reducing latency and network traffic. Edge computing brings data processing even closer to the user by performing most operations on the periphery of the network, which enables an immediate response to incoming data, for example through programmable logic controllers used for process control [1].

The development of an effective architecture for such a complex of interacting components involves a multi-level approach to representing the topology of the computer network, which ensures optimization of the system as a whole.

Research methodology

The purpose of this article is to provide a detailed review of strategies and tools for improving the efficiency of high-load systems. The object of study is the systems behind websites that simultaneously process requests from many users. The methodological base comprises the procedures of analysis and synthesis, together with techniques for generalizing existing data and research in this field [2].

The results of the study and their discussion

When analyzing high-load systems, particular attention should be paid to their specific attributes:

1. High-load systems are characterized by a rigid architecture with limited room for modification of individual subsystems. Their complex internal structure restricts deep adaptation, and the uniqueness of each configuration makes the system less flexible. Data processing in such systems requires high stability and reliability, which calls for careful selection of database structures according to their specifics, data volume, and query intensity. Attempts to increase the flexibility of the system can entail significant resource costs, since they require a comprehensive rethink and possibly a reorganization of the basic principles of its operation [3].

2. One of the key attributes of high-load systems is the ability to respond instantly. When data is processed through queries, system performance is critical: delays in processing requests directly affect how long the user waits for the necessary information.

3. Scalability is a key aspect of high-load systems, since growth in the amount of processed data can significantly increase the load on the information infrastructure. Scalability is usually achieved by two main methods. The first is vertical scaling, which increases the performance of individual elements of the system to raise its overall capacity; it requires no structural changes to the architecture and is often implemented by upgrading equipment. The second is horizontal scaling, which distributes the load among several servers or nodes running in parallel; it requires adding new components to the system and configuring the software for effective interaction between them. Although horizontal scaling is more complex to implement, it provides a more flexible solution in the long term and is often the preferred option [4] (a minimal routing sketch follows this list). It is important to analyze the system's needs carefully and choose the scaling approach that matches its specifics and performance requirements.

4. A modular architecture for high-load systems organizes their structure as separate components that are then deployed across a variety of server platforms. This approach distributes part of the load by assigning particularly intensively used modules to multiple servers for parallel operation. Although this solution improves performance, parallel processing can cause data consistency problems [5]. As the overall load on the system grows, so does the risk of data inconsistency caused by simultaneous operations of different users. It is therefore often preferable to allocate intensively used processes to separate, more powerful servers without parallelization, which improves the overall efficiency and reliability of the system.

5. The intense load on the integration layer of the system follows directly from its modular structure. Extending the system with new modules increases the number of interactions between them, which demands high speed and reliability of the communication processes. As the number of modules grows, the complexity of these interactions grows with it, increasing the load on the integration layer, especially when the volume of communications grows exponentially.

6. Uniqueness is a key attribute of high-load applications: there are no unified, standard approaches to their development. Each solution is built around the specific needs and requirements of a particular business, making the system unique and precisely adapted to the customer's tasks [6].

7. Effective application of a redundancy strategy for the key elements of the system is critical for maintaining the continuity of business processes, especially in high-load systems. To ensure stable operation, such systems are equipped with duplicate nodes in both software and hardware. This does not imply constant parallel activity of all backup components; rather, they are kept ready to be put into operation immediately when critical loads occur, unloading the main system and keeping it running smoothly (a failover sketch also follows the list).
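To make points 3 and 7 concrete, here are two minimal Python sketches. The first illustrates horizontal scaling: incoming sessions are distributed across a pool of parallel nodes by a stable hash. The node names and the pick_node helper are hypothetical stand-ins for a real load balancer.

```python
import hashlib

NODES = ["app-01", "app-02", "app-03"]  # hypothetical pool of parallel servers

def pick_node(session_id: str) -> str:
    """Map a session to one node of the pool via a stable hash,
    spreading the load without any shared coordination state."""
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(pick_node("session-42"))  # the same session always lands on the same node
```

The second sketch outlines the redundancy strategy from point 7: a standby node is provisioned but idle and is switched in only when the primary fails. The primary/standby objects and their handle method are assumptions made for illustration; a production system would also need health checks and state synchronization.

```python
class RedundantService:
    """Hot-standby pair: the backup is kept ready but idle and is
    promoted only when the primary stops responding."""

    def __init__(self, primary, standby):
        self.primary = primary
        self.standby = standby

    def call(self, request):
        try:
            return self.primary.handle(request)
        except ConnectionError:
            # Immediate switchover to the pre-provisioned standby node.
            self.primary, self.standby = self.standby, self.primary
            return self.primary.handle(request)
```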

High loads are often determined not by the architecture of the system itself but by its operating conditions, which reflect the specifics of a particular business area. Among the causes of increased load are:

- An increase in the number of requests to customer relationship management (CRM) and enterprise resource planning (ERP) systems serving many users, as well as in customer support and contact centers, where a significant number of calls and requests are processed;

- The growth of the volume of processed information in monitoring systems with a large amount of connected equipment, in business intelligence tools, and in CRM and ERP systems that process vast amounts of data;

- Errors in system configuration, which often arise from flaws in the program code or a lack of proper optimization and lead to an increased load on server resources.

To reduce and manage the load on the system, various optimization strategies are used, including:

- The use of network protocols and external libraries to reduce the number of requests, including data caching both at the level of database queries and when receiving responses from the server, which helps reduce delays and increase system performance;

- Optimization of database operation through indexing, which speeds up data search and processing; replication, for load balancing and fault tolerance; and partitioning, for efficient management and storage of large amounts of information [7].

In detailing the methods for optimizing interaction with the database, several key areas should be highlighted:

1. Serialization and deserialization of data play an important role in the exchange of information between the server and the client. These procedures ensure the correct transmission of data but also carry a time cost. Optimizing them speeds up information processing and improves the overall performance of the system (minimal sketches of all three techniques follow this list).

2. Caching database queries is an effective way to reduce the load on the data store. A cache temporarily stores the results of frequently executed or rarely modified queries, significantly reducing the number of database accesses. Hash tables are often used to implement caching, as they provide quick access to the stored data.

3. Database indexing is the creation of specialized structures (indexes) that speed up data retrieval. Indexes not only increase the speed of access to information but also help maintain data integrity. Keep in mind, however, that operations on indexes, such as rebuilding them after data is deleted, may require additional resources. Indexes are justified in databases with a large volume of records, where they can significantly optimize search and processing.
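The following minimal Python sketch illustrates point 1: for responses that do not change between requests, serializing once and reusing the cached bytes removes the repeated serialization cost (the payload here is synthetic).

```python
import json
import timeit

payload = {"items": [{"id": i, "name": f"item-{i}"} for i in range(1000)]}

# Naive: the same response is serialized on every request.
each_time = timeit.timeit(lambda: json.dumps(payload).encode("utf-8"), number=200)

# Optimized: serialize once and reuse the cached bytes for identical responses.
cached = json.dumps(payload).encode("utf-8")
reuse = timeit.timeit(lambda: cached, number=200)

print(f"serialize each time: {each_time:.4f}s, reuse cached bytes: {reuse:.4f}s")
```

For point 2, a hash table (a Python dict) with a time-to-live is enough to sketch query caching; run_query below is a hypothetical function standing in for a real database call.

```python
import time
from typing import Any, Callable

class QueryCache:
    """Hash-table cache of query results with a time-to-live (TTL)."""

    def __init__(self, ttl_seconds: float = 30.0) -> None:
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, Any]] = {}  # key -> (expires_at, value)

    def get_or_compute(self, key: str, compute: Callable[[], Any]) -> Any:
        entry = self._store.get(key)
        if entry is not None and entry[0] > time.monotonic():
            return entry[1]    # cache hit: the database is not touched
        value = compute()      # cache miss: run the real query
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value

cache = QueryCache(ttl_seconds=60)
# result = cache.get_or_compute("top-customers", lambda: run_query("SELECT ..."))
```

Point 3 can be demonstrated with the standard sqlite3 module: the same filtering query switches from a full table scan to a B-tree index search once the index is created (the table and column names are invented for the example).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?"
print(conn.execute(query, (42,)).fetchall())  # typically reports: SCAN orders

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(conn.execute(query, (42,)).fetchall())  # typically: SEARCH orders USING INDEX
```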

These optimization methods improve the performance of high-load systems, reduce the response time to requests, and increase the overall efficiency of working with data [8].

Optimizing access to data in databases may also include a replication strategy, which serves to increase the throughput and availability of the system. Replication creates copies of the database and divides the roles between primary (master) and secondary (slave) nodes in order to balance the load of read operations and keep the data current through synchronization. The master node handles both reading and writing and distributes updates to the slave nodes. There are several replication schemes:

1. Synchronous replication ensures complete consistency of data between nodes by requiring confirmation of each write from all participants, which increases reliability but also increases response time.

2. Asynchronous replication speeds the system up by moving on to the next operations immediately after a write on the master node, without waiting for confirmation from the slaves, at the cost of guaranteed data freshness on all nodes.

3. The master-slaves model assumes one master node that processes all operations and many slaves that synchronize at fixed intervals. This simplifies the structure but creates risks if the master node fails.

4. The multi-master (master-master) model adds flexibility by introducing several master nodes, increasing fault tolerance at the cost of a more complicated data synchronization process between them.

Each of these approaches has its advantages and disadvantages, and the choice of replication scheme depends on the specific requirements for performance, availability, and data consistency in a particular system (a minimal read/write routing sketch follows).
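The sketch below shows, under the assumption of asynchronous master-slave replication, how such a scheme is typically used from application code: writes are routed to the master, reads are balanced across the replicas. The node objects and their execute method are hypothetical placeholders for real database connections.

```python
import itertools

class ReplicatedDatabase:
    """Route writes to the master and balance reads over the replicas."""

    def __init__(self, master, replicas):
        self.master = master
        self._read_cycle = itertools.cycle(replicas)  # round-robin over the slaves

    def execute_write(self, statement, params=()):
        # All writes go to the single master, which propagates them to the slaves.
        return self.master.execute(statement, params)

    def execute_read(self, statement, params=()):
        # Reads are spread across the replicas; with asynchronous replication
        # a replica may briefly return slightly stale data.
        return next(self._read_cycle).execute(statement, params)
```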

Database optimization through partitioning is a strategy of dividing data into smaller segments in order to improve performance and throughput [9]. This approach enables load balancing, prevents overloading of individual nodes, and contributes to more efficient data processing. Partitioning can be performed in different ways:

- By key range: the basic method, in which data is distributed according to key values. The main challenge is to choose a key that ensures optimal data separation.

- By hash function: a more advanced technique, in which a special hash function distributes data evenly across the partitions. This method provides a more balanced distribution of data and more efficient operation of the system.

To maintain optimal system performance after partitioning, regular rebalancing is necessary [10]: writes to different partitions can be uneven, which over time can slow down the processing of requests (both methods are sketched below).
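Both partitioning methods can be sketched as pure functions that map a record key to a partition number; the boundaries and partition count are arbitrary example values. A deterministic digest is used instead of Python's built-in hash(), which is randomized per process.

```python
import hashlib

RANGE_BOUNDARIES = [1_000, 10_000, 100_000]  # example key ranges -> 4 partitions
N_PARTITIONS = 8                             # example partition count for hashing

def range_partition(key: int) -> int:
    """Key-range partitioning: the partition is the interval the key falls into."""
    for partition, upper in enumerate(RANGE_BOUNDARIES):
        if key < upper:
            return partition
    return len(RANGE_BOUNDARIES)

def hash_partition(key: str) -> int:
    """Hash partitioning: a stable digest spreads keys evenly across partitions."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % N_PARTITIONS

print(range_partition(2_500))       # 1: falls into the second range
print(hash_partition("user:2500"))  # even spread regardless of key distribution
```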

Conclusion

Concluding the discussion of optimization strategies, it is important to emphasize that effective improvement of system performance requires an integrated approach combining a variety of techniques. Each has its own features and affects the architecture and interaction processes of the system to a different degree. The choice of a particular method, or a combination of them, should rest on a thorough analysis of the current state of the system, identification of the key points requiring optimization, and the specific needs and expectations of the customer.

Possible directions for improving performance include:

- Analysis and optimization of data-handling algorithms, including revision of database queries and the use of more efficient algorithms for serialization and deserialization of data.

- Caching of frequently requested information to reduce database load and speed up data access.

- Scaling the system, both vertically and horizontally, to provide flexibility and scalability in response to increasing loads.

- Database replication and partitioning to improve availability and load balancing between servers.

- Using modern development approaches, including microservice architecture, to increase modularity and simplify system scaling and updating.

Thus, the optimization strategy should be multidimensional, taking into account both the technical capabilities of the existing infrastructure and the long-term business goals of the customer. Developers need to be flexible in choosing tools and techniques, adapting them to the specific tasks and operating conditions of the system in order to achieve an optimal balance between performance, stability, and extensibility.

References
1. Catal, C., & Tekinerdogan, B. (2019). Aligning education for the life sciences domain to support digitalization and Industry 4.0. Procedia Computer Science, 158, 99-106. doi:10.1016/j.procs.2019.09.032
2. Patel, C., & Doshi, N. (2020). A novel MQTT security framework in generic IoT model. Procedia Computer Science, 171, 1399-1408. doi:10.1016/j.procs.2020.04.150
3. Subeesh, A., & Mehta, C.R. (2021). Automation and digitization of agriculture using artificial intelligence and internet of things. Artificial Intelligence in Agriculture, 5, 278-291. doi:10.1016/j.aiia.2021.11.004
4. Faridi, F., Sarwar, H., Ahtisham, M., Kumar, S., & Jamal, K. (2022). Cloud computing approaches in health care. Materials Today: Proceedings, 51, 1217-1223. doi:10.1016/j.matpr.2021.07.210
5. Tzounis, A., Katsoulas, N., Bartzanas, T., & Kittas, C. (2017). Internet of Things in agriculture, recent advances and future challenges. Biosystems Engineering, 164, 31-48. doi:10.1016/j.biosystemseng.2017.09.007
6. Tao, W., Zhao, L., Wang, G., & Liang, R. (2021). Review of the internet of things communication technologies in smart agriculture and challenges. Computers and Electronics in Agriculture, 189, 106352. doi:10.1016/j.compag.2021.106352
7. Moysiadis, V., Sarigiannidis, P., Vitsas, V., & Khelifi, A. (2021). Smart farming in Europe. Computer Science Review, 39, 100345. doi:10.1016/j.cosrev.2020.100345
8. Raj, M., Gupta, S., Chamola, V., Elhence, A., Garg, T., Atiquzzaman, M., & Niyato, D. (2021). A survey on the role of Internet of Things for adopting and promoting Agriculture 4.0. Journal of Network and Computer Applications, 187, 103107. doi:10.1016/j.jnca.2021.103107
9. Boursianis, A.D., Papadopoulou, M.S., Diamantoulakis, P., Liopa-Tsakalidi, A., Barouchas, P., Salahas, G., Karagiannidis, G., Wan, S., & Goudos, S.K. (2022). Internet of Things (IoT) and agricultural unmanned aerial vehicles (UAVs) in smart farming: A comprehensive review. Internet of Things, 18, 100187. doi:10.1016/j.iot.2020.100187
10. Singh, S., Chana, I., & Buyya, R. (2020). Agri-Info: Cloud based autonomic system for delivering agriculture as a service. Internet of Things, 9, 100131. doi:10.1016/j.iot.2019.100131

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The reviewed work is devoted to the study of the integration of cloud, fog, and edge technologies for optimizing high-load systems. The research methodology is based on the study and generalization of scientific publications on the topic under consideration, the application of analysis and synthesis procedures, and techniques for generalizing existing data and research in this field. The authors attribute the relevance of the work to the fact that the tasks of optimizing high-load systems are relevant not only to web development but to any project that has server and client parts. The scientific novelty of the reviewed study, in the reviewer's opinion, consists in the generalization of strategies and tools for improving the efficiency of systems with a high level of load.

The text of the article comprises the following sections: Introduction, Research Methodology, Research Results and Discussion, Conclusion, and Bibliography. In the article, a high-load system is understood as an application experiencing a high load due to a multitude of users, vast amounts of data, or the intensity of calculations. The authors pay special attention to the specific attributes of high-load systems: a rigid architecture with limited room for modification of individual subsystems; the ability to respond instantly; scalability; organization of the structure as separate components that are then integrated into a variety of server platforms; a modular structure; uniqueness; and the application of a redundancy strategy for the key elements of the system. The publication identifies the causes of increased system load; names the optimization strategies used for load management; highlights the key optimization areas that can improve the performance of high-load systems, reduce response time to queries, and increase the overall efficiency of working with data; and considers approaches to replication depending on the specific requirements for performance, availability, and data consistency in a particular system. In conclusion, it is noted that effective improvement of system performance requires an integrated approach, and possible directions for improving performance and achieving an optimal balance between performance, stability, and extensibility are outlined.

The bibliographic list includes 10 sources: scientific publications in English on the topic under consideration that are referenced in the text, which confirms an engagement with other authors. As comments, it should be noted that there are no references to scientific works published in Russian, although Russian-language publications deserving attention apparently exist, and that the text contains ungrammatical sentences, for example, "Analyzing highly loaded systems, key attention is paid to their specific attributes..." The reviewed material corresponds to the scope of the journal "Software Systems and Computational Methods", reflects the results of the work carried out by the authors, and may be of interest to readers, since it contains interesting information about the integration of cloud, fog, and edge technologies for optimizing high-load systems.