Solving Parameter-Robust Avoid Problems with Unknown Feasibility using Reinforcement Learning
Daily Information Board · 2026-02-17
2026-02-17T18:53:31Z
Published
AI Summary
- Deep RL performs well on high-dimensional control, but applying RL to reachability problems creates a fundamental mismatch: reachability seeks to maximize the set of states from which the system remains safe indefinitely, while RL optimizes expected return over a user-specified distribution.
- This mismatch can result in policies that perform poorly on low-probability states that are still within the safe set.
- A natural alternative is a robust optimization over a set of initial conditions (initial state, dynamics, and safe set), but whether that problem has a solution depends on the feasibility of the specified set, which is unknown a priori.
- The paper proposes Feasibility-Guided Exploration (FGE), which simultaneously identifies a subset of feasible initial conditions under which a safe policy exists and learns a policy that solves the reachability problem over that subset.
- Empirically, FGE learns policies with over 50% more coverage than the best existing method on challenging initial conditions, across MuJoCo tasks and Kinetix tasks with pixel observations.
#arXiv #paper #research/papers
Content Excerpt
Recent advances in deep reinforcement learning (RL) have achieved strong results on high-dimensional control tasks, but applying RL to reachability problems raises a fundamental mismatch: reachability seeks to maximize the set of states from which a system remains safe indefinitely, while RL optimizes expected returns over a user-specified distribution. This mismatch can result in policies that perform poorly on low-probability states that are still within the safe set. A natural alternative is to frame the problem as a robust optimization over a set of initial conditions that specify the initial state, dynamics and safe set, but whether this problem has a solution depends on the feasibility of the specified set, which is unknown a priori. We propose Feasibility-Guided Exploration (FGE), a method that simultaneously identifies a subset of feasible initial conditions under which a safe policy exists, and learns a policy to solve the reachability problem over this set of initial conditions. Empirical results demonstrate that FGE learns policies with over 50% more coverage than the best existing method for challenging initial conditions across tasks in the MuJoCo simulator and the Kinetix simulator with pixel observations.
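To make the mismatch concrete, here is a minimal sketch of the two objectives in standard notation; the symbols ($\pi$, $\rho$, $r$, $\gamma$, $\Theta$, $\mathcal{C}_\theta$) are illustrative and not taken from the paper.

```latex
% Standard RL objective: maximize expected discounted return over an
% initial-state distribution \rho (notation is illustrative, not the paper's).
\max_{\pi} \; \mathbb{E}_{s_0 \sim \rho,\; \pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right]

% Robust avoid formulation: a single policy must keep the trajectory inside the
% safe set \mathcal{C}_{\theta} for every initial condition
% \theta = (initial state, dynamics, safe set) in the specified set \Theta.
\exists\, \pi \;\; \text{s.t.} \;\; \forall\, \theta \in \Theta,\; \forall\, t \ge 0:\quad s_t^{\pi,\theta} \in \mathcal{C}_{\theta}
```

The first objective is indifferent to states that $\rho$ rarely samples, which is why a return-maximizing policy can fail on low-probability but still-safe states. The second is only well-posed if $\Theta$ is feasible, i.e., some safe policy exists for every $\theta \in \Theta$; per the abstract, FGE addresses this by identifying a feasible subset of initial conditions while learning the policy over that subset.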