Resource management in cloud computing
Through cloud computing, customers can access different applications and services over the Internet. With the rapid adoption of cloud computing in business, academic, and industrial environments, cloud service providers offer consumers large storage space and computation capacity through virtualization technology, presenting the cloud as a unified system. However, allocating virtual machines (VMs) among physical machines (PMs) is a central issue in virtualization technology, and it is controlled by resource management. In resource management, the choice among different strategies affects cost, energy usage, and system efficiency. For example, if the resource manager cannot allocate resources among users at peak traffic times, efficiency degrades. Deciding how to select an appropriate strategy for managing resources is therefore a demanding task. In this section we focus on some important problems in resource management and review research related to these issues.
Resource management algorithms
Greedy algorithms
Traditional greedy algorithms have been used to solve resource management problems in cloud computing environments. These algorithms aim to find a locally best solution, which makes them good candidates for the VM placement and VM consolidation problems; because of this local search procedure, however, they cannot guarantee a globally optimal solution. On the other hand, they are simple to implement and have low polynomial time complexity (Feller, 2013). Greedy algorithms come in two kinds: offline and online. Offline algorithms know all VM requests in advance and can make decisions based on the complete set of requests, whereas online algorithms have no knowledge of future requests and allocate VMs to PMs as the VMs arrive. For example, First Fit Decreasing (FFD) is a well-known offline algorithm in which the VMs are sorted in descending order of their resource demands and then allocated to PMs in that order. First Fit (FF), by contrast, is an online greedy algorithm that places each VM on the first active PM with enough remaining capacity; if no current PM has enough capacity, a new PM is activated to host the VM (Yue, 1991).
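The FF and FFD heuristics described above can be sketched as follows. This is an illustrative, single-dimensional version (one scalar demand per VM); real placement is multi-dimensional (CPU, memory, bandwidth), and the function names are chosen here for clarity.

```python
def first_fit(vm_demands, pm_capacity):
    """Online FF: place each VM on the first PM with enough remaining
    capacity, activating a new PM when none fits."""
    pms = []        # remaining capacity of each active PM
    placement = []  # index of the PM hosting each VM, in arrival order
    for demand in vm_demands:
        for i, free in enumerate(pms):
            if free >= demand:
                pms[i] -= demand
                placement.append(i)
                break
        else:  # no active PM fits: activate a new one
            pms.append(pm_capacity - demand)
            placement.append(len(pms) - 1)
    return placement, len(pms)

def first_fit_decreasing(vm_demands, pm_capacity):
    """Offline FFD: sort all VM requests in descending order, then apply FF."""
    _, n_pms = first_fit(sorted(vm_demands, reverse=True), pm_capacity)
    return n_pms

# With PM capacity 10, FFD packs demands [1, 3, 7, 5, 4] into 2 PMs,
# while online FF on the same arrival order activates 3 PMs.
```

The gap between the two results illustrates why knowing all requests in advance (the offline setting) can reduce the number of active PMs.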
Mathematical programming algorithms
Constraint Programming (CP) (Rossi, Beek and Walsh, 2006) and Linear Programming (LP) (Schrijver, 1986) are examples of mathematical programming techniques that can find optimal solutions for the VM placement and VM consolidation problems. However, these algorithms may need exponential time to reach an optimal solution, and their execution time grows with the number of VMs and PMs. In addition, it is difficult to incorporate several different objectives into a mathematical programming formulation.
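As a sketch of such a formulation, VM placement with a single resource dimension can be written as a bin-packing style integer program. The notation here is assumed for illustration: \(x_{ij} = 1\) if VM \(i\) is placed on PM \(j\), \(y_j = 1\) if PM \(j\) is active, \(d_i\) is the demand of VM \(i\), and \(C_j\) the capacity of PM \(j\):

```latex
\min \sum_{j=1}^{m} y_j
\quad \text{s.t.} \quad
\sum_{j=1}^{m} x_{ij} = 1 \;\; \forall i,
\qquad
\sum_{i=1}^{n} d_i \, x_{ij} \le C_j \, y_j \;\; \forall j,
\qquad
x_{ij},\, y_j \in \{0,1\}
```

The first constraint places every VM exactly once, and the second forbids overloading a PM (and forces \(y_j = 1\) whenever PM \(j\) hosts any VM). Solving this exactly is NP-hard, which is why the exponential worst-case time mentioned above arises.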
Meta-heuristic approaches
Meta-heuristic algorithms have also been proposed for resource management in cloud computing. Being probabilistic, these algorithms find sub-optimal solutions. Genetic Algorithms (GA) (Goldberg, 1989), Ant Colony Optimization (ACO) (Dorigo, Caro and Gambardella, 1999), and the Imperialist Competitive Algorithm (Atashpaz-Gargari and Lucas, 2007) are a few examples of meta-heuristic approaches. Compared with mathematical programming and greedy algorithms, meta-heuristic algorithms make it possible to define a multi-objective approach. However, because they rely on randomness, they cannot guarantee finding optimal solutions.
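To make the GA idea concrete, the toy sketch below evolves placements that minimize the number of active PMs; it is a single-objective illustration under assumed parameters (population size, mutation rate, penalty weight), not the multi-objective algorithms of the cited works.

```python
import random

def fitness(chromosome, demands, capacity, n_pms):
    """A chromosome maps each VM index to a PM index. Lower is better:
    count active PMs and heavily penalize capacity violations."""
    load = [0] * n_pms
    for vm, pm in enumerate(chromosome):
        load[pm] += demands[vm]
    active = sum(1 for l in load if l > 0)
    overload = sum(max(0, l - capacity) for l in load)
    return active + 10 * overload

def genetic_placement(demands, capacity, n_pms, pop=30, gens=200, seed=1):
    rng = random.Random(seed)
    popn = [[rng.randrange(n_pms) for _ in demands] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(c, demands, capacity, n_pms))
        survivors = popn[: pop // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(demands))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # random mutation
                child[rng.randrange(len(child))] = rng.randrange(n_pms)
            children.append(child)
        popn = survivors + children
    return min(popn, key=lambda c: fitness(c, demands, capacity, n_pms))
```

The random crossover and mutation steps are exactly why such algorithms explore the search space well but cannot certify optimality.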
Energy-aware resource management
VM placement
The cloud computing model supports a variety of applications on shared hardware platforms. Owing to the popularity of cloud computing, large-scale data centers need thousands of computing servers to cover customers' needs, and the more servers a data center uses, the more energy it consumes. High performance has long been the main concern in the deployment of data centers, leaving aside energy consumption and its impact on the environment. According to Kaplan's research in 2008 (James M. Kaplan, 2008), data centers consume as much energy as twenty-five thousand households. Given this large energy footprint, green cloud computing has been introduced to minimize energy usage and achieve efficient management of the cloud computing infrastructure (Rajkumar Buyya, 2010). In fact, cloud providers need not only to reduce energy consumption but also to guarantee customer service delivery under QoS constraints (Beloglazov, Abawajy and Buyya, 2012). In this section, we discuss one of the main issues in cloud computing: energy-aware resource management.
Pinheiro and Bianchini (Eduardo Pinheiro, 2001) considered the issue of energy consumption in large clusters of PCs. Their approach is to develop systems that minimize energy consumption across the replicated nodes and resources of a cluster. To manage load balancing efficiently, they use a technique that concentrates load on a minimal set of cluster nodes and switches idle nodes off. They propose a load-distribution algorithm that reconfigures the cluster under a trade-off between performance (execution time and throughput) and power. Based on the performance expected of the system, the algorithm monitors resource load and decides dynamically to turn nodes on or off for each cluster configuration. Compared with a static cluster configuration, the authors claim that the proposed approach saves 43% and 86% of energy and power consumption, respectively.
This system can be implemented in multi-application environments. However, the algorithm executes on a primary node, which may become a performance bottleneck as well as a single point of failure. Moreover, the algorithm adds or removes only one node at a time, so it cannot react quickly in large-scale environments.
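The on/off decision logic discussed above can be sketched as a simple threshold rule. This is a hedged illustration in the spirit of that load-concentration approach; the thresholds and the one-node-at-a-time rule are assumptions for this sketch, not the authors' exact algorithm.

```python
def reconfigure(active_nodes, total_load, node_capacity,
                add_threshold=0.85, remove_threshold=0.5):
    """Return the new number of active nodes, changing by at most one
    node per decision period (as noted above, a source of slow reaction
    in large-scale environments)."""
    utilization = total_load / (active_nodes * node_capacity)
    if utilization > add_threshold:
        return active_nodes + 1          # performance at risk: power a node on
    if active_nodes > 1:
        # would utilization stay comfortable with one node fewer?
        if total_load / ((active_nodes - 1) * node_capacity) < remove_threshold:
            return active_nodes - 1      # concentrate load, switch a node off
    return active_nodes
```

The trade-off between performance and power appears directly in the two thresholds: a lower `remove_threshold` saves less energy but keeps more headroom for load spikes.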
The issue of energy consumption in Internet hosting centers has been analyzed by Chase et al. (Chase et al., 2001). The main objective of this work is to manage energy usage through resource management frameworks in data centers. The authors propose a resource management architecture, called Muse, for adaptive resource provisioning in data centers based on an economic approach. Muse relies on an executable utility function that measures the performance value of each service. The main challenge is to determine the resource demand of each customer and to allocate those resources efficiently. The system monitors resource load and allocates resources according to their effect on service performance. A greedy resource allocation algorithm is used to maximize profit by balancing the estimated revenue against the cost of each resource unit. To handle "noise" in web workload measurements and reduce the number of inefficient allocations, a statistical flip-flop filter is used. One advantage of this system is that the active set of servers can change by putting idle servers into sleep mode, saving power. The authors consider only CPU usage in this approach. For a typical, representative web workload, their experimental results show that energy consumption can be reduced by 29% to 78%.
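The greedy "balance revenue against resource cost" idea can be sketched as follows. This is an assumed toy version, not Muse's actual implementation: each service exposes a utility function, and one resource unit at a time goes to the service with the highest marginal revenue, stopping when no unit earns more than it costs.

```python
def greedy_allocate(utilities, total_units, unit_cost):
    """utilities: one function per service, u(n) = estimated revenue
    when the service holds n resource units (assumed concave).
    Returns the per-service allocation that greedily maximizes profit."""
    alloc = [0] * len(utilities)
    for _ in range(total_units):
        # marginal revenue of one extra unit for each service
        gains = [u(a + 1) - u(a) for u, a in zip(utilities, alloc)]
        best = max(range(len(gains)), key=lambda i: gains[i])
        if gains[best] <= unit_cost:
            break  # an extra unit would cost more than it earns
        alloc[best] += 1
    return alloc
```

Stopping early is what saves energy: units that cannot pay for themselves are never granted, so the corresponding servers can be left in sleep mode.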
Table of Contents
INTRODUCTION
CHAPTER 1 STATE OF THE ART
1.1 Introduction
1.2 Background and definition
1.2.1 Cloud computing overview
1.2.2 Definition
1.2.3 Cloud computing deployment models
1.2.4 Cloud computing service models
1.2.5 Comparison of cloud computing with similar technologies
1.3 Introduction to virtualization technology
1.3.1 Types of virtualization
1.4 Resource management in cloud computing
1.4.1 Resource management algorithms
1.4.2 Energy-aware resource management
1.4.3 Open issues of resource management in cloud computing
1.5 Conclusion
CHAPTER 2 MULTI-OBJECTIVE META-HEURISTIC VM PLACEMENT ALGORITHMS
2.1 Introduction
2.2 Problem statement and assumptions
2.3 Objective function formulation
2.3.1 Minimize energy consumption
2.3.2 Minimize resource wastage
2.3.3 Minimize energy communication cost
2.4 Methodologies
2.4.1 Multi objective optimization
2.4.2 Multi objective ACO placement (MACO)
2.4.3 Multi objective GA placement (MGA)
2.5 Conclusion
CHAPTER 3 MULTI-OBJECTIVE META-HEURISTIC CONSOLIDATION ALGORITHMS
3.1 Introduction
3.2 Problem statement and assumptions
3.3 Objective function formulation
3.3.1 Minimize energy consumption
3.3.2 Minimize number of SLA violations
3.3.3 Minimize number of migrations
3.3.4 Minimize number of active PMs
3.4 Methodologies
3.4.1 Multi-objective ACO consolidation algorithm
3.5 Conclusion
CHAPTER 4 RESULT ANALYSIS
4.1 Introduction
4.2 Setup of the simulation environment
4.3 Simulation results of placement algorithms
4.4 Performance analysis of consolidation algorithms
4.4.1 Multi objective ACO and GA algorithms performance analysis – First approach
4.4.2 Multi objective ACO and GA algorithms performance analysis – Second approach
CONCLUSION