
Welcome to RLEM 2021 (YouTube link)

About RLEM'21

RLEM brings together researchers and industry practitioners to advance (deep) reinforcement learning (RL) in the built environment, as applied to managing energy in civil infrastructure systems (energy, water, transportation). Following BuildSys's directive, the workshop will be held virtually on November 17, 2021. More information about how to join the virtual sessions will be posted here soon.

RLEM'21 will be held in conjunction with ACM BuildSys'21

Important Dates

  • Abstract submission: September 6, 2021 (AOE)
  • Paper submission: September 6, 2021 (AOE)
  • Notifications: September 27, 2021 (AOE)
  • Camera Ready: October 1, 2021 (AOE)
  • Workshop date: November 16, 2021

Call for Papers

Buildings account for 40% of the global energy consumption and 30% of the associated greenhouse gas emissions, while also offering a 50–90% CO2 mitigation potential. The transportation sector is responsible for an additional 30%. Optimal decarbonization requires electrification of end-uses and concomitant decarbonization of electricity supply, efficient use of electricity for lighting, space heating, cooling and ventilation (HVAC), and domestic hot water generation, and upgrades to the thermal properties of buildings. A major driver of decarbonization is the integration of renewable energy systems (RES) into the grid, and of photovoltaics (PV), solar-thermal collectors, and thermal and electric storage into residential and commercial buildings. Electric vehicles (EVs), with their storage capacity and inherent connectivity, hold great potential for integration with buildings.

The integration of these technologies must be done carefully to unlock their full potential. Artificial intelligence is regarded as a possible pathway for orchestrating this complexity in smart cities. In particular, (deep) reinforcement learning algorithms have seen increased interest and have demonstrated human-expert-level performance in other domains, e.g., computer games. Research in the buildings and cities domain has been fragmented, focusing on different problems and using a variety of frameworks. The purpose of this workshop is to build a growing community around this exciting topic, provide a platform for discussing future research directions, and share common frameworks.

Topics of Interest

Topics of interest include, but are not limited to:

  • Challenges and Opportunities for RL in Buildings and Cities
  • Explorations of model-based vs. model-free RL algorithms and hybrids (see the minimal sketch after this list)
  • Comparisons of RL algorithms to other control solutions, e.g., model-predictive control
  • Frameworks and datasets for benchmarking algorithms
  • Theoretical contributions to the RL field brought about by constraints/challenges in the buildings/cities domain
  • Applications (demand response, HVAC control, occupant integration, traffic scheduling, EV/battery charging, DER integration)
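
To make the model-free end of this spectrum concrete, the sketch below trains a tabular Q-learning agent on a toy one-zone thermostat. The environment, its dynamics, the state discretization, and the reward are purely illustrative assumptions, not any benchmark from the program; real studies would typically build on the simulation frameworks presented in Session 3 (e.g., COBS, BOPTEST, ACTB, CityLearn).

```python
# Minimal, self-contained sketch: model-free (tabular Q-learning) control of a
# hypothetical one-zone thermostat. All dynamics and parameters are illustrative.
import numpy as np

class ToyThermostatEnv:
    """State = indoor temperature (discretized); action = heater off/on;
    reward penalizes both discomfort and energy use."""

    def __init__(self, outdoor=10.0, setpoint=21.0):
        self.outdoor, self.setpoint = outdoor, setpoint
        self.temp = 18.0

    def reset(self):
        self.temp = 18.0
        return self._state()

    def step(self, action):  # action: 0 = heater off, 1 = heater on
        heat_gain = 1.5 * action                 # heating power (degC per step)
        loss = 0.1 * (self.temp - self.outdoor)  # envelope losses
        self.temp += heat_gain - loss
        comfort_penalty = abs(self.temp - self.setpoint)
        energy_penalty = 0.5 * action
        return self._state(), -(comfort_penalty + energy_penalty)

    def _state(self):
        # Discretize temperature into integer buckets between 10 and 30 degC.
        return int(np.clip(round(self.temp), 10, 30)) - 10

# Tabular Q-learning with an epsilon-greedy policy: the simplest model-free baseline.
n_states, n_actions = 21, 2
q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
env = ToyThermostatEnv()
rng = np.random.default_rng(0)

for episode in range(500):
    s = env.reset()
    for t in range(96):  # e.g., one day of 15-minute control steps
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q[s]))
        s_next, r = env.step(a)
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next

print("Greedy action per temperature bucket:", np.argmax(q, axis=1))
```

A model-based alternative (or an MPC baseline, as in the second topic above) would instead exploit an explicit thermal model to plan ahead; comparing such approaches on shared benchmarks is exactly the kind of contribution this workshop invites.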

Submission Instructions

Submitted papers must be unpublished and must not be currently under review for any other publication. Paper submissions must be at most 4 single-spaced US Letter (8.5"x11") pages, including figures, tables, and appendices (excluding references). All submissions must use the LaTeX (preferred) or Word styles found at https://www.acm.org/publications/proceedings-template. Authors must make a good faith effort to anonymize their submissions by (1) using the "anonymous" option for the class and (2) using the "anonsuppress" environment where appropriate. Papers that do not meet the size, formatting, and anonymization requirements will not be reviewed. Please note that ACM uses 9-pt fonts in all conference proceedings, and the styles (both LaTeX and Word) implicitly define the font size to be 9 pt.
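
For LaTeX submissions, a minimal sketch of a skeleton that follows these requirements is shown below; the title, author, and bibliography file are placeholders, and the official ACM template documentation remains the authoritative reference for class options.

```latex
% Minimal sketch of a double-blind submission skeleton using the ACM template.
% The "anonymous" option and the anonsuppress environment are provided by acmart;
% all titles, names, and file names below are placeholders.
\documentclass[sigconf,anonymous]{acmart}  % "anonymous" hides author identities

\begin{document}

\title{Your Paper Title}

\author{Jane Doe}
\affiliation{\institution{Example University}\country{USA}}

\begin{abstract}
Abstract text goes here.
\end{abstract}

\maketitle

\section{Introduction}
Body text (at most 4 pages, excluding references).

\begin{anonsuppress}
% Content wrapped in anonsuppress (e.g., acknowledgments that would reveal
% author identity) is hidden while the "anonymous" option is active.
\end{anonsuppress}

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}
\end{document}
```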

Submission Link

Registration

Register now via BuildSys 2021.

Program

All times are in Western European Time (GMT).

15:00 - 15:20 Opening remarks by the General Chair and TPC Chairs
15:20 - 16:00 1st Keynote: Helia Zandi (Oak Ridge National Laboratory)
Session 1: Addressing challenges of applying RL in real-world buildings
16:00 - 16:50 A. Naug (Vanderbilt University), M. Quinones-Grueiro (Vanderbilt University), G. Biswas (Vanderbilt University) - Sensitivity and Robustness of End-to-end Data-Driven Approach for Building Performance Optimization
M. Biemann (TU Denmark), X. Liu (TU Denmark), Y. Zeng (Northumbria University), L. Huang (Norwegian University of Science and Technology) - Addressing partial observability in reinforcement learning for energy management
K. Jneid (Université Grenoble Alpes, LIG), S. Ploix (Grenoble INP), P. Reignier (Université Grenoble Alpes, LIG), P. Jallon (eLichens) - Deep Q-Network Boosted with External Knowledge for HVAC Control
16:50 - 17:00 Break
17:00 - 17:40 2nd Keynote: Andrey Bernstein (National Renewable Energy Laboratory)
17:40 - 18:00 Community Announcements (CityLearn Challenge Winners, Annex#81, Climate Change AI)
Session 2: Benchmarking RL with other controls including other RLs
18:00 - 18:50 K. Kurte (ORNL), K. Amasyali (ORNL), J. Munk (ORNL), H. Zandi (ORNL) - Comparative Analysis of Model-Free and Model-Based HVAC Control for Residential Demand Response
J. Jiménez-Raboso (Universidad de Granada), A. Campoy-Nieves (Universidad de Granada), A. Manjavacas-Lucas (Universidad de Granada), J. Gómez-Romero (Universidad de Granada), M. Molina-Solana (Universidad de Granada) - Sinergym: A Building Simulation and Control Framework for Training Reinforcement Learning Agents
R. Glatt (LLNL), F. Leno da Silva (LLNL), B. Soper (LLNL), W. A. Dawson (LLNL), E. Rusu (LLNL), R. A. Goldhahn (LLNL) - Collaborative energy demand response with decentralized actor and centralized critic
18:50 - 19:00 Break
Session 3: Getting started with RL
19:00 - 20:20 T. Zhang, O. Ardakanian - COBS: COmprehensive Building Simulator
D. Blum, M. Wetter - Tools for the Design and Evaluation of Building Controls: Spawn of EnergyPlus, OpenBuildingControl, and BOPTEST
T. Marzullo, S. Dey, N. Long, G. Henze - ACTB: Advanced Controls Test Bed
J. Vazquez-Canteli - CityLearn: Demand response using multi-agent reinforcement learning
20:20 - 20:30 Closing remarks

Keynote Speakers

Helia Zandi (Modeling and Simulation Software Engineer at Oak Ridge National Laboratory)

Scalable Load Management System using Reinforcement Learning

Abstract: The Internet of Things has revolutionized the interaction between devices, actuators, sensors, and individuals in a building. Effective integration of these assets in buildings plays a critical role in providing load flexibility to support grid resiliency. In addition to a frequently changing load shape due to new demand patterns, the increasing penetration of renewable energy sources adds variability to power generation. The grid needs a reliable control system to coordinate the effects of these changes in real time to ensure safe and reliable operation. To address this, many optimization methods have been developed and researched. However, many of the existing approaches cannot adapt to changes in the environment. Model-free reinforcement learning has gained a lot of attention in recent years for creating optimal load schedules in buildings that benefit both building and utility stakeholders. In this talk, we will discuss a scalable software framework and various deep model-free RL algorithms that we have developed and field-tested over the past few years for optimal operation of buildings.

Bio: Helia Zandi received her M.S. in Computer Engineering from the University of Florida in 2012. She is currently a Research and Development Staff member in the Computational Systems Engineering Group of the Computational Sciences and Engineering Division at Oak Ridge National Laboratory (ORNL). She joined ORNL as a Modeling and Simulation Software Engineer in 2016. Prior to that, she worked at ProNova Solutions as a Researcher and Software Engineer on the Imaging and Positioning team, where she developed sophisticated automation methodologies for accurate robot positioning and robot calibration. Her research expertise includes machine learning, robotics, advanced data analytics, cyber-physical systems, large-scale building-to-grid integration, and the design and development of optimization and control platforms with applications to buildings and energy systems.

Andrey Bernstein (Group Manager at National Renewable Energy Laboratory)

Data-driven Control of Complex Engineering Systems

Abstract: Optimal control of complex engineering systems is an extremely hard task. The classical approach, based on optimal control concepts (such as model-predictive control), is infeasible for large-scale systems where an accurate model is unavailable or expensive to develop. Thus, machine learning (ML) approaches are becoming a popular alternative. In this talk, we will overview two of the most popular ML-based approaches, one based on reinforcement learning and the other on data-driven predictive control, their variants, and their application to optimal control of grid-interactive buildings.

Bio: Andrey Bernstein received his B.Sc., M.Sc., and Ph.D. degrees in Electrical Engineering from the Technion - Israel Institute of Technology. Between 2010 and 2011, he was a visiting researcher at Columbia University. During 2011-2012, he was a visiting Assistant Professor at Stony Brook University. From 2013 to 2016, he was a postdoctoral researcher at the Laboratory for Communications and Applications of the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. Since October 2016, he has been a Senior Researcher and Group Manager at the National Renewable Energy Laboratory, Golden, CO, USA. His research interests are in decision and control problems in complex environments and related optimization and machine learning methods, with application to power and energy systems.

Organization

General Chair

  • Zoltan Nagy (University of Texas at Austin)

Technical Program Committee Co-Chairs

Web Chair

Technical Program Committee