
Welcome to RLEM 2020

About RLEM'20

RLEM brings together researchers and industry practitioners to advance (deep) reinforcement learning (RL) in the built environment as it is applied to managing energy in civil infrastructure systems (energy, water, transportation). Following BuildSys's directive, the workshop will be held virtually on November 17, 2020. More information about how to join the virtual sessions will be posted here soon.

RLEM'20 will be held in conjunction with ACM BuildSys'20


Important Dates

Abstract submission: August 10, 2020 (AOE)
Paper submission: August 17, 2020 (AOE)
Notifications: September 21, 2020 (AOE)
Camera ready: October 10, 2020 (AOE)
Workshop date: November 17, 2020

Call for Papers

Buildings account for 40% of global energy consumption and 30% of the associated greenhouse gas emissions, while also offering a 50–90% CO2 mitigation potential. The transportation sector is responsible for an additional 30%. Optimal decarbonization requires electrification of end uses with concomitant decarbonization of the electricity supply, efficient use of electricity for lighting, space heating, cooling and ventilation (HVAC), and domestic hot water generation, and upgrades to the thermal properties of buildings. Major drivers for decarbonization are the integration of renewable energy systems (RES) into the grid, and of photovoltaics (PV), solar-thermal collectors, and thermal and electric storage into residential and commercial buildings. Electric vehicles (EVs), with their storage capacity and inherent connectivity, hold great potential for integration with buildings.

The integration of these technologies must be done carefully to unlock their full potential. Artificial intelligence is regarded as a possible pathway to orchestrate the complexities of smart cities. In particular, (deep) reinforcement learning algorithms have seen increased interest and have demonstrated human-expert-level performance in other domains, e.g., computer games. Research in the buildings and cities domain has been fragmented, focusing on different problems and using a variety of frameworks. The purpose of this workshop is to build a growing community around this exciting topic, provide a platform for discussing future research directions, and share common frameworks.

Topics of Interest

Topics of interest include, but are not limited to:

  • Challenges and opportunities for RL in buildings and cities
  • Explorations of model-based vs. model-free RL algorithms and hybrids
  • Comparisons of RL algorithms to other control solutions, e.g., model-predictive control
  • Frameworks and datasets for benchmarking algorithms
  • Theoretical contributions to the RL field brought about by constraints/challenges in the buildings/cities domain
  • Applications (demand response, HVAC control, occupant integration, traffic scheduling, EV/battery charging, DER integration)

Submission Instructions

Submitted papers must be unpublished and must not be currently under review for any other publication. Paper submissions must be at most 4 single-spaced US Letter (8.5"x11") pages, including figures, tables, and appendices (excluding references). All submissions must use the LaTeX (preferred) or Word styles found at https://www.acm.org/publications/proceedings-template. Authors must make a good-faith effort to anonymize their submissions by (1) using the "anonymous" option for the class and (2) using the "anonsuppress" environment where appropriate. Papers that do not meet the size, formatting, and anonymization requirements will not be reviewed. Please note that ACM uses 9-pt fonts in all conference proceedings, and the styles (both LaTeX and Word) implicitly define the font size to be 9 pt.
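For orientation, a minimal sketch of an anonymized submission set up with the acmart class from the template linked above is shown below. The title, affiliation placeholders, and acknowledgment text are illustrative only; consult the template documentation for the authoritative settings.

    % Minimal sketch (not the official ACM sample file) of an anonymized
    % submission using the acmart class in the sigconf format.
    % The "anonymous" option hides author information; the optional
    % "review" option adds line numbers for reviewers.
    \documentclass[sigconf,anonymous,review]{acmart}

    \begin{document}

    \title{Reinforcement Learning for Building Energy Management}  % placeholder title

    % Author metadata must still be supplied for the camera-ready version;
    % the "anonymous" option keeps it out of the review PDF.
    \author{Anonymous Author(s)}
    \affiliation{
      \institution{Anonymous Institution}
      \city{Anytown}
      \country{Nowhere}
    }

    \begin{abstract}
    A short abstract goes here.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Body text goes here.

    % Content that would identify the authors (e.g., acknowledgments or
    % first-person self-citations) can be wrapped in the anonsuppress
    % environment, which hides it while the "anonymous" option is active.
    \begin{anonsuppress}
    This work was supported by the hypothetical XYZ grant.
    \end{anonsuppress}

    \end{document}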

Submission portal: https://rlem20.hotcrp.com

Registration

Registration: eventbrite link

Program

All times are in the Eastern time zone (New York).

Time            Presentation
12:00 - 12:20   Opening remarks by the General Chair and TPC Co-Chairs
12:20 - 13:00   Keynote: Incorporating robust control guarantees within (deep) reinforcement learning

Session 1: Demanding Response
13:00 - 13:50   Demand Response through Price-setting Multi-agent Reinforcement Learning
                Electricity Pricing aware Deep Reinforcement Learning based Intelligent HVAC Control
                A Centralised Soft Actor Critic Deep Reinforcement Learning Approach to District Demand Side Management through CityLearn
                Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms on a Building Energy Demand Coordination Task
                Discussion
13:50 - 14:00   Break

Session 2: Clash of Algorithms
14:00 - 14:50   Less is More: Simplified State-Action Space for Deep Reinforcement Learning based HVAC Control
                Continual adaptation in deep reinforcement learning-based control applied to non-stationary building environments
                A Comparison of Model-Free and Model Predictive Control for Price Responsive Water Heaters
                Flexible Reinforcement Learning Framework for Building Control using EnergyPlus-Modelica Energy Models
                Discussion
14:50 - 15:00   Break

Session 3: Keeping it ReaL
15:00 - 15:50   Augmenting Reinforcement Learning with a Planning Model for Optimizing Energy Demand Response in a Prospective Experiment
                Transferable Reinforcement Learning for Smart Homes
                Deep Reinforcement Learning in Buildings: Implicit Assumptions and their Impact
                Towards Off-policy Evaluation as a Prerequisite for Real-world Reinforcement Learning in Building Control
                Discussion
15:50 - 16:00   Closing remarks
                Happy Hour

Keynote Speaker

Zico Kolter (Associate Professor, Carnegie Mellon University)

Incorporating robust control guarantees within (deep) reinforcement learning

Abstract: Reinforcement learning methods have produced breakthrough results in recent years, but their application to safety-critical systems has been substantially limited by their lack of guarantees, such as those provided by modern robust control techniques. In this talk, I will discuss a technique we have recently developed that embeds robustness guarantees inside arbitrary RL policy classes. Using this approach, we can build deep RL methods that attain much of the performance advantage of modern deep RL (namely, superior performance in "average case" scenarios), while still maintaining robustness in worst-case adversarial settings. I will present experimental results on several simple control systems that highlight the benefits of the method, as well as on a larger-scale smart grid setting, and end by discussing future directions in this line of work.

Bio: Dr. Kolter is an Associate Professor in the Computer Science Department of the School of Computer Science at Carnegie Mellon University. He also serves as Chief Scientist of AI Research for the Bosch Center for AI (BCAI), working in its Pittsburgh office. His research group focuses on machine learning, optimization, and control. Much of this research aims at making deep learning algorithms safer, more robust, and more explainable; to these ends, the group has worked on methods for training provably robust deep learning systems and on including more complex "modules" (such as optimization solvers) within the loop of deep architectures. The group also works on several application domains, with a particular focus on smart energy and sustainability.

Organization

General Chair

  • Zoltan Nagy (University of Texas at Austin)

Technical Program Committee Co-Chairs

  • Mario Berges (Carnegie Mellon University)
  • Bingqing Chen (Carnegie Mellon University)
  • June Young Park (University of Texas at Arlington)

Technical Program Committee

  • Henning Lange (University of Washington)
  • Helia Zandi (Oak Ridge National Laboratory)
  • Jose Vazquez-Canteli (University of Texas at Austin)
  • Zhe Wang (Lawrence Berkeley National Laboratory)
  • Duc Van Le (NTU Singapore)
  • Wan Du (University of California, Merced)
  • Xin Jin (National Renewable Energy Laboratory)
  • Ming Jin (University of California, Berkeley)
  • Alex Vlachokostas (Pacific Northwest National Laboratory)
  • Hari Prasanna Das (University of California, Berkeley)
  • Lucas Spangher (University of California, Berkeley)
  • Kuldeep Kurte (Oak Ridge National Laboratory)
  • Ross May (Dalarna University)