Cognitive Robotics (English Edition)

Reactive Planning in Large State Spaces Through Decomposition and Serialization
Brian C. Williams, joint with Seung H. Chung
16.412J/6.834J
April 26th, 2004

Artificial Intelligence & Space Systems Laboratories, Massachusetts Institute of Technology

Outline
• Model-based programming
• The need for model-based reactive planning
• The Burton model-based reactive planner

Model-based Programs Interact Directly with State

Embedded programs interact with plant sensors/actuators:
• Read sensors
• Set actuators
The programmer must map between state and sensors/actuators.

Model-based programs interact with plant state:
• Read state
• Write state
The model-based executive maps between sensors/actuators and states.

[Figure: an embedded program connected to the plant through observations (Obs) and control (Cntrl), contrasted with a model-based embedded program connected to the plant through its state.]
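The distinction above can be sketched in a few lines of Python. This is a minimal illustration, not the Titan/RMPL API: the plant, the executive, and all method names are hypothetical, and mode estimation and reconfiguration are stubbed to one-line mappings.

```python
class Plant:
    """Idealized plant: an actuator command directly changes the mode."""
    def __init__(self):
        self.mode = {"valve": "closed"}

    def set_actuator(self, name, value):
        if name == "valve_cmd":
            self.mode["valve"] = value


class Executive:
    """Maps between state estimates/goals and sensors/actuators."""
    def __init__(self, plant):
        self.plant = plant

    def estimate(self, var):
        # Mode estimation, stubbed: read the plant mode directly.
        return self.plant.mode[var]

    def assert_goal(self, var, value):
        # Mode reconfiguration, stubbed: issue the command that
        # achieves the requested state.
        if self.estimate(var) != value:
            self.plant.set_actuator(var + "_cmd", value)


# The model-based program reads and writes *state*; the executive
# does the mapping to actuators behind the scenes.
executive = Executive(Plant())
executive.assert_goal("valve", "open")
print(executive.estimate("valve"))
```

An embedded program would instead call `set_actuator` itself, hard-coding the state-to-command mapping into the program text.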

Orbital Insertion Example

Goal: turn the camera off and an engine on.

[Figure: spacecraft with Engine A, Engine B, and a science camera, shown before and after the command.]

RMPL Model-based Program / Titan Model-based Executive

The control program executes concurrently, preempts, asserts and queries states, and chooses based on reward. The Control Sequencer generates goal states conditioned on state estimates and passes them to the Deductive Controller, whose Mode Estimation tracks likely states and whose Mode Reconfiguration tracks least-cost state goals. The controller reads observations from the plant and issues commands to it, guided by the System Model.

Control program (RMPL):

OrbitInsert() ::
    (do-watching ((EngineA = Firing) OR (EngineB = Firing))
        (parallel
            (EngineA = Standby)
            (EngineB = Standby)
            (Camera = Off)
            (do-watching (EngineA = Failed)
                (when-donext ((EngineA = Standby) AND (Camera = Off))
                    (EngineA = Firing)))
            (when-donext ((EngineA = Failed) AND (EngineB = Standby) AND (Camera = Off))
                (EngineB = Firing))))

The control program compiles to hierarchical constraint automata on state s.

[Automaton diagram: under MAINTAIN (EAR OR EBR), initial locations EAS, EBS, CO; guard (EAS AND CO) enables EAR; under MAINTAIN (EAF), guard (EAF AND EBS AND CO) enables EBR.]

LEGEND:
EAS (EngineA = Standby)   EAF (EngineA = Failed)   EAR (EngineA = Firing)
EBS (EngineB = Standby)   EBF (EngineB = Failed)   EBR (EngineB = Firing)
CO (Camera = Off)

Example: Valve Failure and Reactive Replanning

[Valve mode model: modes Closed, Open, Stuck open, and Stuck closed; Open and Close commands move between Closed and Open; each commanded transition fails into a stuck mode with probability 0.01; when stuck closed, inflow = outflow = 0.]

When the valve fails stuck closed, the executive fires the backup engine: starting from the current belief state, it finds a least-cost reachable goal state and executes the first action toward it.

[Search diagram: trajectories from the current belief state S through states X0, X1, …, XN-1, XN to the target T.]
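The belief-state tracking behind mode estimation can be sketched for the valve on this slide. The 0.01 failure probabilities are the slide's; the code structure, function names, and the assumption that stuck modes are permanent are illustrative simplifications.

```python
def transition(mode, cmd):
    """One commanded step: return {next_mode: probability}."""
    if mode.startswith("stuck"):
        return {mode: 1.0}  # assume failures are permanent
    nominal = "open" if cmd == "open" else "closed"
    # Nominal transition succeeds with the remaining probability mass;
    # each stuck failure mode is reached with probability 0.01.
    return {nominal: 0.98, "stuck open": 0.01, "stuck closed": 0.01}


def propagate(belief, cmd):
    """Push a belief state {mode: p} through one commanded transition."""
    new = {}
    for mode, p in belief.items():
        for nxt, q in transition(mode, cmd).items():
            new[nxt] = new.get(nxt, 0.0) + p * q
    return new


# Commanding a closed valve open leaves most mass on "open",
# with 1% each on the two stuck modes.
belief = propagate({"closed": 1.0}, "open")
print(belief)
```

A full mode estimator would also condition this prediction on observations (the `O(m')` term of the next slide); this sketch shows only the transition step.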

The Deductive Controller as Optimal CSPs

Mode Estimation (tracks likely states):
    arg max P_T(m')
    s.t. M(m') ∧ O(m') is satisfiable

Mode Reconfiguration (tracks least-cost state goals):
    arg min R_T*(m')
    s.t. M(m') entails G(m')
    s.t. M(m') is satisfiable

Both are instances of the Optimal CSP:
    arg min f(x)
    s.t. C(x) is satisfiable
    s.t. D(x) is unsatisfiable
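A tiny brute-force enumeration illustrates the Optimal CSP pattern: minimize f(x) over assignments where constraint C holds and constraint D fails. The domains, cost function, and constraints below are made up for illustration, and checking C and D per complete assignment is a simplification of the satisfiability conditions above (real solvers like OpSat search far more cleverly).

```python
from itertools import product

# Hypothetical two-variable problem.
domains = {"x": [0, 1, 2], "y": [0, 1, 2]}

def f(a):
    """Cost to minimize."""
    return a["x"] + 2 * a["y"]

def C(a):
    """Constraint that must be satisfied."""
    return a["x"] + a["y"] >= 2

def D(a):
    """Constraint that must NOT be satisfied."""
    return a["x"] == a["y"]

# Enumerate all complete assignments, filter, and take the cheapest.
candidates = [dict(zip(domains, vals)) for vals in product(*domains.values())]
feasible = [a for a in candidates if C(a) and not D(a)]
best = min(feasible, key=f)
print(best, f(best))
```

Here the minimum-cost feasible assignment is x = 2, y = 0 with cost 2: it satisfies x + y >= 2, violates x == y, and no cheaper assignment does both.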

A simple model-based executive (Livingstone) commanded NASA's Deep Space One probe (courtesy NASA JPL).
• Started: January 1996
• Launch: October 15th, 1998
• Remote Agent Experiment: May 1999

Remote Agent Experiment (see rax.arc.nasa.gov)

May 17-18th experiment:
• Generate plan for course correction and thrust
• Diagnose camera as stuck on
  – Power constraints violated; abort current plan and replan
• Perform optical navigation
• Perform ion propulsion thrust

May 21st experiment:
• Diagnose faulty device and repair by issuing a reset
• Diagnose switch sensor failure; determine it is harmless and continue the plan
• Diagnose thruster stuck closed and repair by switching to an alternate method of thrusting
• Back-to-back planning

Outline
• Model-based programming
• The need for model-based reactive planning
• The Burton model-based reactive planner