On the Control of Electric Vehicle Charging in the Smart Grid
Abstract
Over the last decade, demand for electric vehicles (EVs) has surged across the globe. This has spurred an increase in the installation of public and private EV charging points, which are typically connected to low-voltage power distribution feeders. A high penetration of plug-in EVs in distribution networks is anticipated to cause several problems, such as transformer overloading, voltage limit violations, and increased heat losses. Hence, a demand-side management strategy is needed to control the real power drawn by the charging points. Such control strategies can be classified as model-based or model-free, depending on whether they rely on a model of the distribution network (i.e., the admittance matrix). This thesis investigates how to control the charging power of EVs from user-centric and grid-centric perspectives. We design and evaluate two control frameworks that are suitable for the smart grid.
Given an approximate model of the distribution grid, we first propose a reputation-based framework for allocating power to EVs in the smart grid. In this framework, the available capacity of the distribution network, measured by distribution-level phasor measurement units, is divided in a proportionally fair manner among the connected EVs, taking into account their demands and self-declared deadlines. To encourage users to estimate their deadlines more precisely and conservatively, a weight is assigned to each deadline based on the user's reputation, which is built from two kinds of evidence: the deadlines declared before, and those declared after, the actual departure times in the recent past. We design a decentralized algorithm that achieves quadratic convergence under specific conditions and evaluate it empirically on a test distribution network, comparing it with state-of-the-art algorithms.
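The core of such an allocation can be illustrated with a short sketch. The function below divides a capacity budget among EVs in proportion to per-EV weights (in the thesis, reputation-derived), capped at each EV's demand, with any surplus redistributed among still-unsatisfied EVs. The function name, the centralized form, and the surplus-redistribution loop are illustrative assumptions; the thesis's algorithm is decentralized and more involved.

```python
def proportional_allocation(capacity, demands, weights, eps=1e-9):
    """Split `capacity` (kW) among EVs in proportion to `weights`,
    never exceeding each EV's `demands[i]`; surplus from capped EVs
    is redistributed until exhausted. Centralized toy version."""
    alloc = {i: 0.0 for i in demands}
    active = set(demands)            # EVs whose demand is not yet met
    remaining = capacity
    while active and remaining > eps:
        total_w = sum(weights[i] for i in active)
        surplus = 0.0
        for i in list(active):
            share = remaining * weights[i] / total_w
            take = min(share, demands[i] - alloc[i])  # cap at demand
            alloc[i] += take
            surplus += share - take                   # unused portion
            if demands[i] - alloc[i] < eps:
                active.discard(i)                     # demand satisfied
        remaining = surplus
    return alloc
```

For example, with 10 kW of headroom, equal weights, and demands of 3 kW and 20 kW, the first EV receives its full 3 kW and the second receives the remaining 7 kW.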
In the second framework, we relax the assumption of having a model and propose a model-free, adaptive, additive-increase multiplicative-decrease (AIMD)-like algorithm for the controlled charging of EVs. This control algorithm is decentralized and relies solely on congestion signals generated by sensors deployed across the network. To adjust the parameter of this congestion control algorithm dynamically, we cast the problem as a multi-agent reinforcement learning problem in which each charging point is an independent agent that learns this parameter using an off-policy actor-critic deep reinforcement learning algorithm.
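The AIMD-like update at a single charging point can be sketched as follows. Each control step, the charger increases its rate additively when no congestion signal is received and backs off multiplicatively when one is. The class name and the specific parameter values (`alpha`, `beta`, a 7.4 kW rate limit) are illustrative assumptions; `alpha` here stands in for the parameter that the thesis adapts with reinforcement learning, whereas this sketch keeps it fixed.

```python
class AIMDCharger:
    """Toy AIMD-like charging-rate controller for one charging point.
    `alpha` (kW per step) is the additive-increase parameter -- the
    quantity the thesis tunes online via deep RL -- and `beta` is the
    multiplicative backoff factor applied on congestion."""

    def __init__(self, alpha=0.5, beta=0.5, rate_max=7.4, rate_min=0.0):
        self.alpha = alpha
        self.beta = beta
        self.rate_max = rate_max
        self.rate_min = rate_min
        self.rate = rate_min

    def step(self, congested: bool) -> float:
        if congested:
            self.rate *= self.beta       # multiplicative decrease
        else:
            self.rate += self.alpha      # additive increase
        # Clamp to the charger's feasible rate range.
        self.rate = min(max(self.rate, self.rate_min), self.rate_max)
        return self.rate
```

Because each charger reacts only to a broadcast congestion signal, no network model or inter-charger communication is needed, which is what makes the scheme decentralized and model-free.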
Simulation results for both control algorithms, on a test distribution network and on a parking station, corroborate that the proposed algorithms track the available capacity of the network in real time, prevent transformer overloading and voltage limit violations over extended periods of time, and outperform several other decentralized feedback control algorithms proposed in the literature.
