Foundations and Trends® in Electronic Design Automation
2 total works
Datacenter Power Management in Smart Grids overviews recent work on managing and minimizing the electricity cost of datacenters in the context of smart grids. It starts by reviewing the operation of smart grids and analyzing how power is consumed in datacenters. It then presents various cost-minimization approaches drawn from optimization, algorithmics, and feedback control, focusing in particular on approaches that exploit time-of-use pricing and demand response to cut datacenter electricity costs. In a cloud computing environment, companies and individuals offload their computing to the cloud, which is supported by the computing infrastructure known as datacenters.
The operation of these datacenters consumes large amounts of electricity, incurring high costs and negatively impacting the environment. In the meantime, a new kind of electrical grid, the smart grid, is emerging. Smart grids enable two-way communication between power generators and power consumers, and smart grid technology brings many salient features that help deliver power efficiently and reliably.
While a great deal of research has been conducted on datacenters and smart grids separately, Datacenter Power Management in Smart Grids takes the novel approach of considering the two together, focusing on cost-aware datacenter power management in the presence of smart grids. The work reviews recent developments in this area and explains how a smart grid operates, where power goes in datacenters, and, most importantly, how to reduce the power cost and/or negative environmental impact of operating datacenters.
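To make the time-of-use idea surveyed above concrete, the sketch below greedily shifts a block of deferrable batch work into the cheapest hours of a hypothetical tariff. The prices, capacity figures, and the simple greedy policy are illustrative assumptions for this listing, not the approaches developed in the monograph.

# Minimal sketch (not from the monograph): shifting deferrable batch work
# to the cheapest hours under a time-of-use (ToU) tariff. All numbers and
# the greedy policy are illustrative assumptions.

def schedule_deferrable_load(prices_per_kwh, total_kwh, max_kwh_per_hour):
    """Greedily place deferrable energy demand into the cheapest hours.

    prices_per_kwh   -- ToU price for each hour of the planning horizon
    total_kwh        -- deferrable energy to be served within the horizon
    max_kwh_per_hour -- datacenter capacity available for deferred work
    """
    # Visit hours from cheapest to most expensive.
    order = sorted(range(len(prices_per_kwh)), key=lambda h: prices_per_kwh[h])
    plan = [0.0] * len(prices_per_kwh)
    remaining = total_kwh
    for hour in order:
        if remaining <= 0:
            break
        served = min(max_kwh_per_hour, remaining)
        plan[hour] = served
        remaining -= served
    cost = sum(p * e for p, e in zip(prices_per_kwh, plan))
    return plan, cost


if __name__ == "__main__":
    # Hypothetical 6-hour ToU tariff ($/kWh) and 90 kWh of deferrable work.
    prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]
    plan, cost = schedule_deferrable_load(prices, total_kwh=90, max_kwh_per_hour=40)
    print(plan, round(cost, 2))

In this toy instance the work lands in the three cheapest hours, which is the essence of the price-following strategies the monograph reviews; real formulations add deadlines, server power models, and demand-response constraints.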
Utilization Control and Optimization of Real-Time Embedded Systems
by Xue Liu, Xi Chen, and Fanxin Kong
Published 23 September 2015
Real-time embedded systems are widely deployed in mission-critical applications such as avionics mission computing, highway traffic control, remote patient monitoring, wireless communications, and navigation. These applications require their real-time and embedded components to operate in open and unpredictable environments, where the workload is volatile and unknown. To guarantee temporal correctness and avoid severe underutilization or overload, it is vital to measure, control, and optimize processor utilization adaptively.
This monograph examines utilization control and optimization in real-time embedded systems. In many practical real-time embedded applications, it is desirable to keep processor utilizations at their schedulable upper bounds: the system then delivers its best Quality of Service (QoS) while all real-time tasks remain schedulable. To achieve this goal, the authors present several effective solutions that adaptively adjust task rates and/or processor frequencies to enforce the desired utilization, leveraging feedback control and optimization techniques to ensure that the system is neither overloaded nor underutilized.
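As a minimal illustration of the feedback-control idea, the sketch below rescales task rates each control period so that measured utilization tracks a schedulable set point. The task set, controller gain, set point, and plant model are illustrative assumptions, not the authors' actual algorithms.

# Minimal sketch (illustrative only): a proportional-style utilization
# controller that scales task rates so CPU utilization tracks a
# schedulable set point (here an assumed rate-monotonic-style bound).

SET_POINT = 0.69   # desired utilization (assumed bound)
KP = 0.5           # proportional gain (assumed)

class Task:
    def __init__(self, wcet, rate, rate_min, rate_max):
        self.wcet = wcet          # estimated execution time per job (s)
        self.rate = rate          # current invocation rate (Hz)
        self.rate_min = rate_min  # QoS floor on the rate
        self.rate_max = rate_max  # QoS ceiling on the rate

def utilization(tasks):
    # Utilization of a periodic task set: sum of (execution time * rate).
    return sum(t.wcet * t.rate for t in tasks)

def control_step(tasks, measured_util):
    """One feedback iteration: compute the error against the set point and
    scale every task's rate multiplicatively, clamped to its QoS range."""
    error = SET_POINT - measured_util
    # Positive error raises rates (better QoS); negative error sheds load.
    factor = 1.0 + KP * error / max(measured_util, 1e-6)
    for t in tasks:
        t.rate = min(t.rate_max, max(t.rate_min, t.rate * factor))

if __name__ == "__main__":
    tasks = [Task(0.004, 50, 20, 120), Task(0.010, 30, 10, 60)]
    for step in range(5):
        u = utilization(tasks)       # in a real system this is measured
        control_step(tasks, u)
        print(f"step {step}: utilization = {utilization(tasks):.3f}")

Running the loop drives utilization from 0.50 toward the 0.69 set point over a few iterations; the monograph's solutions address the harder cases of uncertain execution times, multiprocessor constraints, and joint rate/frequency adaptation.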