RLEM Workshop 2021


Previous Edition: RLEM Workshop 2020

Second ACM SIGEnergy Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (RLEM)

About

RLEM brings together researchers and industry practitioners to advance (deep) reinforcement learning (RL) in the built environment, as applied to managing energy in civil infrastructure systems (energy, water, transportation).

RLEM’21 will be held in conjunction with ACM BuildSys’21. Following BuildSys’s directive, the workshop will take place virtually on November 16th, 2021.

:tv: Watch RLEM’21 on YouTube: Part 1 and Part 2!

Registration

Register now via BuildSys 2021.

Important Dates

Call for Papers

Buildings account for 40% of global energy consumption and 30% of the associated greenhouse gas emissions, while also offering a 50–90% CO2 mitigation potential; the transportation sector is responsible for an additional 30%. Optimal decarbonization requires electrification of end-uses and concomitant decarbonization of electricity supply; efficient use of electricity for lighting, space heating, cooling and ventilation (HVAC), and domestic hot water generation; and upgrades to the thermal properties of buildings. Major drivers for decarbonization are the integration of renewable energy systems (RES) into the grid, and of photovoltaics (PV), solar-thermal collectors, and thermal and electric storage into residential and commercial buildings. Electric vehicles (EVs), with their storage capacity and inherent connectivity, hold great potential for integration with buildings.

The integration of these technologies must be done carefully to unlock their full potential. Artificial intelligence is regarded as a possible pathway to orchestrate these complexities of Smart Cities. In particular, (deep) reinforcement learning algorithms have seen increased interest and have demonstrated human-expert-level performance in other domains, e.g., computer games. Research in the buildings and cities domain has been fragmented, focusing on different problems and using a variety of frameworks. The purpose of this workshop is to build a growing community around this exciting topic, provide a platform for discussing future research directions, and share common frameworks.

Topics of Interest

Topics of interest include, but are not limited to:

Submission Instructions

Submitted papers must be unpublished and must not be currently under review for any other publication. Paper submissions must be at most 4 single-spaced US Letter (8.5”x11”) pages, including figures, tables, and appendices (excluding references). All submissions must use the LaTeX (preferred) or Word styles found at https://www.acm.org/publications/proceedings-template. Authors must make a good-faith effort to anonymize their submissions by (1) using the “anonymous” option for the class and (2) using the “anonsuppress” section where appropriate. Papers that do not meet the size, formatting, and anonymization requirements will not be reviewed. Please note that ACM uses 9-pt fonts in all conference proceedings, and the styles (both LaTeX and Word) implicitly set the font size to 9 pt.
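For orientation, a minimal LaTeX skeleton satisfying the anonymization requirements might look as follows. This is a sketch, not an official template excerpt; the title, author, and institution below are placeholders, and the acmart class handles hiding them when the "anonymous" option is set.

```latex
% Minimal sketch of an anonymized RLEM submission using the ACM template.
% "anonymous" hides author information; "sigconf" is the conference format.
\documentclass[sigconf,anonymous]{acmart}

\title{Your RLEM Submission Title}          % placeholder
\author{Jane Doe}                           % hidden while "anonymous" is active
\affiliation{\institution{Some University}\country{Somewhere}}

\begin{document}
\maketitle

Body text goes here (at most 4 pages, excluding references).

% Content that would reveal author identity (e.g., acknowledgements)
% goes in anonsuppress; it is omitted while "anonymous" is active.
\begin{anonsuppress}
This work was funded by ...
\end{anonsuppress}

\end{document}
```
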

Program

All times are in Greenwich Mean Time (GMT).

15:00-15:20 | Opening remarks | General Chair and TPC Chairs

15:20-16:00 | Keynote 1: Scalable Load Management System using Reinforcement Learning

Speaker: Helia Zandi (Modeling and Simulation Software Engineer, Oak Ridge National Laboratory)

Bio: Helia Zandi received her M.S. in Computer Engineering from the University of Florida in 2012. She is currently a Research and Development Staff member in the Computational Systems Engineering Group, Computational Sciences and Engineering Division, at Oak Ridge National Laboratory (ORNL), which she joined as a Modeling and Simulation Software Engineer in 2016. Prior to that, she worked at ProNova Solutions as a researcher and software engineer on the Imaging and Positioning team, where she developed sophisticated automation methodologies for accurate robot positioning and calibration. Her research expertise includes machine learning, robotics, advanced data analytics, cyber-physical systems, large-scale building-to-grid integration, and the design and development of optimization and control platforms for buildings and energy systems.

Abstract: The Internet of Things has revolutionized the interaction between devices, actuators, sensors, and individuals in a building. Effective integration of these assets plays a critical role in providing the load flexibility needed to support grid resiliency. In addition to load shapes that change frequently due to new demand patterns, the increasing penetration of renewable energy sources adds variability to power generation. The grid needs a reliable control system to coordinate the effects of these changes in real time and ensure safe and reliable operation. Many optimization methods have been developed to address this; however, most existing approaches cannot adapt to changes in the environment. Model-free reinforcement learning has gained considerable attention in recent years for creating optimal load schedules in buildings that benefit both building and utility stakeholders. In this talk, we will discuss a scalable software framework and the various deep model-free RL algorithms that we have developed and field-tested over the past few years for optimal operation of buildings.

16:00-16:50 | Session 1: Addressing challenges of applying RL in real-world buildings

Paper: Sensitivity and Robustness of End-to-end Data-Driven Approach for Building Performance Optimization
Authors: A. Naug, M. Quinones-Grueiro, G. Biswas (Vanderbilt University)
Abstract: This paper discusses an approach to optimizing the performance of an end-to-end data-driven control approach for building energy management. The proposed approach, designed for systems that exhibit non-stationary behavior, involves two primary components: (1) performance degradation detection, followed by (2) relearning a set of data-driven models of the system to update the controller policy using a reinforcement learning approach. The overall control framework involves a large hyperparameter space that has to be tuned for "optimal" performance. In this paper, we analyze the sensitivity and robustness to a small set of relevant hyperparameters that have a significant impact on the overall performance. We study the performance in terms of the accuracy of the derived data-driven models that support relearning and the speed of convergence of the reinforcement learning controller.

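As a rough illustration of the degradation-detection component this abstract describes, the sketch below flags relearning when recent controller cost drifts above a reference window. The window sizes, threshold, and data are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of "performance degradation detection": compare the
# mean cost over a recent window against a reference window and trigger
# relearning when the relative increase exceeds a threshold.

def needs_relearning(costs, ref_window=30, recent_window=7, threshold=0.15):
    """Return True when recent mean cost exceeds the reference mean by threshold."""
    if len(costs) < ref_window + recent_window:
        return False  # not enough history yet
    ref = sum(costs[-(ref_window + recent_window):-recent_window]) / ref_window
    recent = sum(costs[-recent_window:]) / recent_window
    return (recent - ref) / ref > threshold

daily_costs = [100.0] * 40 + [125.0] * 7   # synthetic non-stationary shift
print(needs_relearning(daily_costs))        # True -> relearn models, update policy
```
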
Paper: Addressing partial observability in reinforcement learning for energy management
Authors: M. Biemann (TU Denmark), X. Liu (TU Denmark), Y. Zeng (Northumbria University), L. Huang (Norwegian University of Science and Technology)
Abstract: Automatic control of energy systems is affected by the uncertainties of multiple factors, including weather, prices, and human activities. The literature relies on Markov-based control, taking into account only the current state. This impacts control performance, as previous states give additional context for decision making. We present two ways to learn non-Markovian policies, based on recurrent neural networks and variational inference. We evaluate the methods on a simulated data-centre HVAC control task. The results show that the off-policy stochastic latent actor-critic algorithm can maintain the temperature in the predefined range within three months of training without prior knowledge, while reducing energy consumption by more than 5% compared to Markovian policies.

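To make the recurrent-policy idea concrete, a non-Markovian policy can summarize the observation history with an LSTM so that actions depend on past context, not just the current state. This is a minimal sketch with illustrative layer sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Sketch of a non-Markovian policy: an LSTM compresses the observation
    history (weather, prices, activity) into a hidden state for the actor."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) window of past observations
        out, state = self.lstm(obs_seq, state)
        action = torch.tanh(self.head(out[:, -1]))  # act on the latest summary
        return action, state                        # carry state across steps

policy = RecurrentPolicy(obs_dim=10, act_dim=2)
history = torch.randn(1, 24, 10)                    # e.g., the last 24 hours
action, state = policy(history)
```
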
Paper: Deep Q-Network Boosted with External Knowledge for HVAC Control
Authors: K. Jneid (Université Grenoble Alpes, LIG), S. Ploix (Grenoble INP), P. Reignier (Université Grenoble Alpes, LIG), P. Jallon (eLichens)
Abstract: Heating, ventilation, and air conditioning (HVAC) systems consume nearly 40% of total energy consumption in developed countries. Traditional techniques such as rule-based control (RBC) fail to control these systems optimally. Model predictive control (MPC) has been widely explored in the literature as well, but it does not represent a practical solution because of the complexity of the building dynamics it relies on. Recently, deep reinforcement learning (DRL) has shown great success in optimal-control domains such as robotics and gaming. In this paper, we develop two model-free DRL approaches to optimize the energy consumption of an office while maintaining thermal comfort and good indoor air quality by controlling the radiator and the opening/closing of a window and a door in the office. Both approaches are based on the deep Q-network (DQN): the first is a DQN agent with no knowledge of the environment, and the second is a DQN agent with initial knowledge of the environment, i.e., a hybrid DQN+RBC approach. The goal of injecting external knowledge into the DQN agent is to speed up convergence by exploiting the RBC rules. We evaluate both approaches against an RBC baseline through simulations using a physical model of the office dynamics. Experiments show that the two DRL approaches maintain better thermal comfort and better indoor air quality than the RBC approach while consuming nearly the same energy. In addition, the DQN with knowledge outperforms the DQN without knowledge early in training and converges faster to the optimal value.

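One plausible reading of such a knowledge-boosted agent is to let the exploration branch of epsilon-greedy follow the RBC rules instead of acting uniformly at random, so early training already behaves sensibly. The rule thresholds and action encoding below are invented for illustration; this is not the authors' implementation.

```python
import random

def rbc_action(obs):
    """Toy rule set: heat when cold, ventilate when CO2 is high, else idle."""
    temp, co2 = obs
    if temp < 20.0:
        return 0      # radiator on
    if co2 > 1000.0:
        return 1      # open window
    return 2          # do nothing

def select_action(q_values, obs, epsilon):
    """Epsilon-greedy where exploration defers to the RBC rules."""
    if random.random() < epsilon:
        return rbc_action(obs)  # rule-guided exploration instead of random
    return max(range(len(q_values)), key=q_values.__getitem__)  # greedy

print(select_action([0.1, 0.5, 0.2], obs=(18.5, 800.0), epsilon=1.0))  # -> 0
```
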
16:50-17:00 | Break

17:00-17:40 | Keynote 2: Data-driven Control of Complex Engineering Systems

Speaker: Andrey Bernstein (Group Manager, National Renewable Energy Laboratory)

Bio: Andrey Bernstein received his B.Sc., M.Sc., and Ph.D. degrees in Electrical Engineering from the Technion - Israel Institute of Technology. Between 2010 and 2011, he was a visiting researcher at Columbia University, and during 2011-2012 a visiting Assistant Professor at Stony Brook University. From 2013 to 2016, he was a postdoctoral researcher at the Laboratory for Communications and Applications of the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. Since October 2016, he has been a Senior Researcher and Group Manager at the National Renewable Energy Laboratory, Golden, CO, USA. His research interests are in decision and control problems in complex environments and the related optimization and machine learning methods, with application to power and energy systems.

Abstract: Optimal control of complex engineering systems is an extremely hard task. The classical approach, based on optimal-control concepts (such as model predictive control), is infeasible for large-scale systems where an accurate model is unavailable or expensive to develop. Machine learning (ML) approaches are therefore becoming a popular alternative. In this talk, we will overview the two most popular ML-based approaches, one based on reinforcement learning and the other on data-driven predictive control, their variants, and their application to optimal control of grid-interactive buildings.

17:40-18:00 | Community Announcements: CityLearn Challenge Winners, Annex 81, Climate Change AI | General Chair and TPC Chairs

18:00-18:50 | Session 2: Benchmarking RL against other controls, including other RL methods

Paper: Comparative Analysis of Model-Free and Model-Based HVAC Control for Residential Demand Response
Authors: K. Kurte (ORNL), K. Amasyali (ORNL), J. Munk (ORNL), H. Zandi (ORNL)
Abstract: In this paper, we present a comparative analysis of model-free reinforcement learning (RL) and model predictive control (MPC) approaches for intelligent control of heating, ventilation, and air-conditioning (HVAC). A deep Q-network (DQN) is used as the candidate model-free RL algorithm. The two control strategies were developed for a residential demand-response (DR) HVAC system. We considered MPC our gold standard against which to compare DQN's performance. The question we tried to answer through this work was: what percentage of MPC's performance can be achieved by a model-free RL approach for intelligent HVAC control? Based on our test results, RL achieved on average ≈62% of MPC's daily cost saving. Considering the pure-optimization, model-based nature of MPC methods, the RL approach showed very promising performance. We believe the interpretations derived from this comparative analysis provide useful insights for choosing among various DR approaches and for further enhancing the performance of RL-based methods for building energy management.

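For concreteness, the "percentage of MPC's performance" figure can be read as the ratio of the two controllers' cost savings over a common baseline. A minimal numeric illustration (all numbers invented, not the paper's data):

```python
# Illustration of the relative-savings metric: each controller's saving is
# measured against a shared baseline, then RL's share of MPC's saving is taken.
baseline_cost = 10.00   # daily cost under a default schedule ($), invented
mpc_cost      = 8.00    # cost under model predictive control, invented
rl_cost       = 8.76    # cost under the model-free RL controller, invented

mpc_saving = baseline_cost - mpc_cost
rl_saving  = baseline_cost - rl_cost
print(f"RL achieves {100 * rl_saving / mpc_saving:.0f}% of MPC's saving")  # 62%
```
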
Paper: Sinergym: A Building Simulation and Control Framework for Training Reinforcement Learning Agents
Authors: J. Jiménez-Raboso, A. Campoy-Nieves, A. Manjavacas-Lucas, J. Gómez-Romero, M. Molina-Solana (Universidad de Granada)
Abstract: We introduce Sinergym, an open-source building simulation and control framework for training reinforcement learning agents. The proposed framework is compatible with EnergyPlus models and allows implementing Python-based controllers, facilitating the reproducibility of experiments and generalization to multiple scenarios. A comparison between Sinergym and other existing libraries for building control is included. We describe its design and main functionalities, such as offering a diverse set of environments with different buildings, weather types, and action spaces. The provided examples show the usage of the framework for benchmarking reinforcement learning methods for building control.

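Since Sinergym exposes a Gym-style interface, a typical interaction loop looks roughly like the sketch below. The environment ID and the four-tuple step() return follow the Gym API as of 2021; newer releases may differ, so treat this as illustrative and check the Sinergym documentation.

```python
import gym
import sinergym  # registers the Eplus-* environments on import

# Rough sketch of a random-agent episode in a Sinergym environment.
env = gym.make('Eplus-demo-v1')
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()         # replace with a trained agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print('episode return:', total_reward)
```
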
Paper: Collaborative energy demand response with decentralized actor and centralized critic
Authors: R. Glatt, F. Leno da Silva, B. Soper, W. A. Dawson, E. Rusu, R. A. Goldhahn (LLNL)
Abstract: The ongoing industrialization and rising technology adoption around the world are leading to ever higher energy consumption. The benefits of electrification are enormous, but the growing demand also brings challenges with respect to the associated greenhouse gas emissions. Although continuing progress in energy research has produced new technologies for energy generation, storage, and distribution, most of these technologies focus on increasing the efficiency of individual components. Work on integration and coordination between individual components in micro-grids will lead to further efficiency gains that are necessary to reduce carbon footprints and slow down climate change. To this end, the CityLearn environment provides a simulation framework that allows the control of energy components in buildings that are organized in districts. In this paper, we propose an energy management system based on the decentralized actor-critic reinforcement learning algorithm MARLISA, but integrate a centralized critic, and call it MARLISADSCC. In this way, we train a model to autonomously control the energy storage of individual buildings in a CityLearn district to improve demand response, guided by a better-informed training signal. We show performance increases over baseline control techniques for a district, and also discuss the resulting action selection for individual buildings.

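The decentralized-actor/centralized-critic split can be sketched as follows: each building's actor sees only its local observation, while one critic scores the joint observations and actions during training to provide the better-informed signal the abstract mentions. The network sizes and wiring below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

n_buildings, obs_dim, act_dim = 3, 8, 1  # illustrative dimensions

# Decentralized actors: one small policy network per building.
actors = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                  nn.Linear(32, act_dim), nn.Tanh())
    for _ in range(n_buildings)
)

# Centralized critic: sees all observations and all actions at once.
critic = nn.Sequential(
    nn.Linear(n_buildings * (obs_dim + act_dim), 64), nn.ReLU(),
    nn.Linear(64, 1)
)

obs = torch.randn(n_buildings, obs_dim)                     # one obs per building
actions = torch.stack([a(o) for a, o in zip(actors, obs)])  # decentralized acting
q_value = critic(torch.cat([obs.flatten(), actions.flatten()]))
print(q_value.item())  # joint value estimate used as the training signal
```
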
18:50-19:00 | Break

19:00-20:20 | Session 3: Getting started with RL

Tool: COBS: COmprehensive Building Simulator | T. Zhang, O. Ardakanian
Tool: Tools for the Design and Evaluation of Building Controls: Spawn of EnergyPlus, OpenBuildingControl, and BOPTEST | D. Blum, M. Wetter
Tool: ACTB: Advanced Controls Test Bed | T. Marzullo, S. Dey, N. Long, G. Henze
Tool: CityLearn: Demand response using multi-agent reinforcement learning | J. Vazquez-Canteli

20:20-20:30 | Closing remarks | General Chair and TPC Chairs

Organization

General Chairs

  1. Zoltan Nagy (The University of Texas at Austin)

Technical Program Committee Co-Chairs

  1. Jan Drgona (Pacific Northwest National Laboratory)
  2. June Young Park (University of Texas at Arlington)

Web Chair

  1. Matias Quintana (National University of Singapore)

Technical Program Committee

  1. Anand Krishnan Prakash (Lawrence Berkeley National Lab)
  2. Anjukan Kathirgamanathan (University College Dublin)
  3. Ankush Chakrabarty (MERL - Mitsubishi Electric Research Laboratories)
  4. Bharathan Balaji (Amazon)
  5. Bratislav Svetozarevic (Swiss Federal Laboratories for Materials Science and Technology)
  6. Giuseppe Pinto (Politecnico di Torino)
  7. Hari Prasanna Das (UC Berkeley)
  8. Helia Zandi (Oak Ridge National Laboratory)
  9. Jose Vazquez-Canteli (Mapped)
  10. Kuldeep Kurte (Oak Ridge National Laboratory)
  11. Ming Jin (Virginia Tech)
  12. Omid Ardakanian (University of Alberta)
  13. Silvio Brandi (Politecnico di Torino)
  14. Xin Jin (National Renewable Energy Laboratory)
  15. Zhe Wang (Lawrence Berkeley National Lab)
  16. Zhiang Zhang (University of Nottingham Ningbo China)

Location

RLEM Workshop 2021 will be held virtually, while ACM BuildSys’21 takes place in Coimbra, Portugal.