Previous Edition: RLEM Workshop 2021
Third ACM SIGEnergy Workshop on Reinforcement Learning for Energy Management in Buildings & Cities (RLEM)
About
RLEM brings together researchers and industry practitioners for the advancement of (deep) reinforcement learning (RL) in the built environment, as applied to managing energy in civil infrastructure systems (energy, water, transportation).
RLEM’22 will be held in conjunction with ACM BuildSys’22.
Registration
Register now via BuildSys 2022.
Important Dates
- Abstract submission: September 5, 2022 (AOE)
- Paper submission: September 5, 2022 (AOE)
- Notifications: September 15, 2022 (AOE)
- Camera Ready: September 29, 2022 (AOE)
- Workshop date: November 11, 2022
Call for Papers
Buildings account for 40% of global energy consumption and 30% of the associated greenhouse gas emissions, while also offering a 50–90% CO2 mitigation potential. The transportation sector is responsible for an additional 30%. Optimal decarbonization requires electrification of end-uses and concomitant decarbonization of electricity supply, efficient use of electricity for lighting, space heating, cooling and ventilation (HVAC), and domestic hot water generation, and upgrades to the thermal properties of buildings. Major drivers for decarbonization are the integration of renewable energy systems (RES) into the grid, and of photovoltaics (PV), solar-thermal collectors, and thermal and electric storage into residential and commercial buildings. Electric vehicles (EVs), with their storage capacity and inherent connectivity, hold great potential for integration with buildings.
These technologies must be integrated carefully to unlock their full potential. Artificial intelligence is regarded as a possible pathway for orchestrating this complexity in Smart Cities. In particular, (deep) reinforcement learning algorithms have attracted increasing interest and have demonstrated human-expert-level performance in other domains, e.g., computer games. Research in the buildings and cities domain has been fragmented, focusing on different problems and using a variety of frameworks. The purpose of this workshop is to build a growing community around this exciting topic, provide a platform for discussing future research directions, and share common frameworks.
Topics of Interest
Topics of interest include, but are not limited to:
- Challenges and opportunities for RL in buildings and cities
- Explorations of model-based vs. model-free RL algorithms and hybrids
- Comparisons of RL algorithms to other control solutions, e.g., model-predictive control
- Frameworks and datasets for benchmarking algorithms
- Theoretical contributions to the RL field brought about by constraints/challenges in the buildings/cities domain
- Applications (demand response, HVAC control, occupant integration, traffic scheduling, EV/battery charging, DER integration)
- Predicting energy consumption for energy demand-side management
- Renewable energy forecasting for energy demand-side management
Submission Instructions
Submitted papers must be unpublished and must not be currently under review for any other publication. Paper submissions must be at most 4 single-spaced US Letter (8.5”x11”) pages, including figures, tables, and appendices (excluding references). All submissions must use the LaTeX (preferred) or Word styles available at https://www.acm.org/publications/proceedings-template. Authors must make a good-faith effort to anonymize their submissions by (1) using the “anonymous” option for the document class and (2) using the “anonsuppress” environment where appropriate. Papers that do not meet the size, formatting, and anonymization requirements will not be reviewed. Please note that ACM uses 9-pt fonts in all conference proceedings, and the styles (both LaTeX and Word) implicitly define the font size to be 9-pt.
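For authors using the LaTeX template, a minimal preamble consistent with these instructions might look like the sketch below. This is an illustrative example based on the standard acmart class, not part of the official instructions; the title and text are placeholders, so please check the ACM template documentation for details.

```latex
% Minimal sketch of an anonymized acmart submission (illustrative only).
\documentclass[sigconf,anonymous]{acmart}

\begin{document}

\title{Your RLEM'22 Submission Title}
% Author identities are hidden in the PDF by the "anonymous" class option;
% placeholder author metadata keeps the class from complaining.
\author{Anonymous Author(s)}

\begin{abstract}
  One-paragraph abstract of the submission.
\end{abstract}
\maketitle

\section{Introduction}
Body text, figures, tables, and appendices count toward the 4-page limit;
references do not.

% Identifying material (e.g., pointers to your own deployed systems) can be
% wrapped in anonsuppress so it appears only in the non-anonymous version.
\begin{anonsuppress}
This deployment was carried out at Example University.
\end{anonsuppress}

\end{document}
```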
Submission Link
Program
All times are in Eastern Standard Time (EST).
Time | Session | Title | Speaker | Abstract |
---|---|---|---|---|
08:00-08:10 | Introduction | - | Zoltan Nagy | - |
08:10-08:55 | Keynote 1 | Architecting a Path Toward Generalized Autonomy: Addressing the Biggest Opportunity in Decarbonization | Troy Harvey (CEO and co-founder of PassiveLogic) is the creator of the first platform for generalized autonomy. As architect of the Quantum Digital Twin standard and Deep Physics AI engine, his empathic, systems-oriented approach to technology development is transforming the way we control systems and equipment. Optimizing buildings, cities, and other controlled systems is the clearest opportunity Troy sees to contribute to the world's most pressing climate challenges. | - |
08:55-09:05 | Break | | | |
09:05-09:50 | Session 1 | Behavioural cloning based RL agents for district energy management | Sharath Ram Kumar, Arvind Easwaran, Benoit Delinchant, Rémy Rigo-Mariani | In this work, we discuss a method to incorporate domain knowledge into a Reinforcement Learning (RL) environment through the process of behavioral cloning, in the context of a district energy management system. Prior knowledge, encoded into heuristic rule-based programs, is used to initialize a policy network for an RL agent, after which an RL algorithm is used to improve on this by optimizing against a reward function. The key ideas are implemented in the CityLearn framework, where the resulting controller is used to manage the electrical energy storage for 5 buildings in a district. We demonstrate that the resulting agents offer measurable performance gains compared to existing baselines, offering a 3.8% improvement over the underlying rule-based controller, and a 20% improvement over a pure RL based controller. We also illustrate the possibility of using imitation learning to develop agents with desirable characteristics without explicit reward shaping. |
| | Deep reinforcement learning-based SOH-aware battery management for DER aggregation | Shotaro Nonaka, Daichi Watari, Ittetsu Taniguchi, Takao Onoye | In smart energy systems, batteries, which assume an important role in filling the temporal gap between generation and consumption, are expected to be a potential distributed energy resource (DER). A resource aggregator (RA) has emerged to collect various DERs to extract demand-side flexibility, and various methods have been proposed based on reinforcement learning. Since battery degradation is unavoidable during utilization, battery management is required to minimize it. This paper proposes state-of-health (SOH)-aware battery management based on deep reinforcement learning. Our experimental results demonstrate an average battery lifetime improvement of 11.2%. |
| | Deep reinforcement learning with online data augmentation to improve sample efficiency for intelligent HVAC control | Kuldeep R Kurte, Kadir Amasyali, Jeffrey Munk, Helia Zandi | Deep Reinforcement Learning (DRL) has started showing success in real-world applications such as building energy optimization. Much of the research in this space has utilized simulated environments to train RL agents in an offline mode. Very little research has used DRL-based control in real-world systems, due to two main reasons: 1) the sample efficiency challenge---DRL approaches need to perform many interactions with the environment to collect sufficient experiences to learn from, which is difficult in real systems, and 2) comfort- or safety-related constraints---the user's comfort must never, or at least rarely, be violated. In this work, we propose a novel deep Reinforcement Learning framework with online Data Augmentation (RLDA) to address the sample efficiency challenge of real-world RL. We used a time-series Generative Adversarial Network (TimeGAN) architecture as a data generator. We further evaluated the proposed RLDA framework using a case study of intelligent HVAC control. With a ≈28% improvement in sample efficiency, the RLDA framework paves the way toward increased adoption of DRL-based intelligent control in real-world building energy management systems. |
09:50-10:00 | Break | |||
10:00-10:45 | Keynote 2 | Opportunities and Challenges of Building Energy Management using Advanced Building Controls | Gregor P. Henze, Ph.D., P.E. (Professor and C.V. Schelke Chair in the Department of Civil, Environmental and Architectural Engineering at the University of Colorado). His teaching focuses on the building energy systems side of architectural engineering, i.e., thermal environmental engineering, HVAC and refrigeration systems, design of energy-efficient buildings, building control and automation systems, data science for building engineering applications, and sustainable building design. His research emphasizes model predictive control and reinforcement learning control of building energy systems, building thermal mass, refrigeration systems, model-based benchmarking of building operational performance, fault detection and diagnosis, control strategies for mixed-mode buildings, uncertainty quantification of occupant behavior and its impact, human presence detection, sensor fusion algorithms, energy analytics and decision analysis, as well as the integration of building energy system operations with the electric grid. He is the primary author of more than 150 research articles, four of which have received best paper awards, and holds three patents. He received the 2011 Colorado Cleantech Industry Association's Research and Commercialization Award. Prof. Henze is a professional mechanical engineer, certified high-performance building design professional (HBDP), editorial board member for the Journal of Building Performance Simulation, Fellow of the Renewable and Sustainable Energy Institute, joint professor at the National Renewable Energy Laboratory, collaborating with the power systems engineering and building research groups, as well as co-founder and chief scientist of QCoefficient, Inc., a startup developing real-time optimal control solutions for grid-interactive efficient buildings. Finally, he is the Fulbright Distinguished Chair in Science, Technology, and Innovation at CSIRO in Newcastle, Australia for 2022. | - |
10:45-11:15 | Session 2 | B2RL: an open-source dataset for building batch reinforcement learning | Hsinyu Liu, Xiaohan Fu, Bharathan Balaji, Rajesh K. Gupta, Dezhi Hong | Batch reinforcement learning (BRL) is an emerging research area in the RL community. It learns exclusively from static datasets (i.e., replay buffers) without interaction with the environment. In the offline setting, existing replay experiences are used as prior knowledge for BRL models to find the optimal policy. Thus, generating replay buffers is crucial for benchmarking BRL models. In our B2RL (Building Batch RL) dataset, we collected real-world data from our building management systems, as well as buffers generated by several behavioral policies in simulation environments. We believe it can help building experts in BRL research. To the best of our knowledge, we are the first to open-source building datasets for the purpose of BRL learning. |
| | ComfortLearn: enabling agent-based occupant-centric building controls | Matias Quintana, Zoltan Nagy, Federico Tartarini, Stefano Schiavon, Clayton Miller | The intersection of building controls and thermal comfort modeling may seem obvious, but there are still prevalent challenges in combining them. "Occupant-centric" control strategies are mainly trained using building data but rarely leverage occupants' feedback, while thermal comfort models are developed using occupants' data but are seldom integrated into building controls. To bridge this gap, we developed an open-source simulation tool named ComfortLearn. ComfortLearn is an OpenAI Gym-based environment that leverages historical building management system data from real buildings and existing longitudinal thermal comfort datasets for occupant-centric control strategies and benchmarking. We used an evaluation metric named 'exceedance' to evaluate occupants' thermal comfort and provide a more realistic picture than traditional evaluations such as comfort bands. This setup allows the analysis of different building control strategies and their effect on real occupants, based on empirical data, without the need for computationally expensive co-simulations. A theoretical case study implementation shows that an as-is schedule-based controller complies with its comfort band more than 93% of the time, but the simulated occupants are comfortable for only 25% of the occupied time. |
11:15-11:25 | Break | |||
11:25-12:10 | Panel Discussion | |||
12:10-12:15 | Closing | - | Zoltan Nagy | - |
Organization
General Chairs
- Zoltan Nagy (The University of Texas at Austin)
- Jan Drgona (Pacific Northwest National Laboratory)
Technical Program Committee Co-Chairs
- Kuldeep Kurte (Oak Ridge National Laboratory)
- Matias Quintana (National University of Singapore)
- Yaxin Bi (Ulster University at Jordanstown)
Web Chair
- Kingsley Nweye (The University of Texas at Austin)
Technical Program Committee
- Helia Zandi (Oak Ridge National Laboratory)
- Himanshu Sharma (Pacific Northwest National Laboratory)
- June Young Park (The University of Texas at Arlington)
- Kadir Amasyali (Oak Ridge National Laboratory)
Location
RLEM Workshop 2022 will be held in hybrid mode (virtual and in-person). The Zoom link will be communicated to virtual workshop participants. The location for in-person participants is:
Hilton Boston Back Bay
40 Dalton St, Boston, MA 02115
Room: Mariner
See the ACM BuildSys'22 website for further venue, travel, and visa information.