-
Lee, Zhao, Sawhney, Girdhar & Kroemer (2021)
-
Lee, T., Zhao, J., Sawhney, A., Girdhar, S. & Kroemer, O. (2021). Causal reasoning in simulation for structure and transfer learning of robot manipulation policies. IEEE. https://doi.org/10.1109/icra48506.2021.9561439
-
Hellström (2021)
-
Hellström, T. (2021). The relevance of causation in robotics: A review, categorization, and analysis. Paladyn, Journal of Behavioral Robotics, 12(1), 238–255. https://doi.org/10.1515/pjbr-2021-0017
-
Lundberg & Lee (2017)
-
Lundberg, S. & Lee, S. (2017). A unified approach to interpreting model predictions. Curran Associates, Inc.
-
Ribeiro, Singh & Guestrin (2016)
-
Ribeiro, M., Singh, S. & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. Association for Computing Machinery. https://doi.org/10.1145/2939672.2939778
-
Lake, Ullman, Tenenbaum & Gershman (2016)
-
Lake, B., Ullman, T., Tenenbaum, J. & Gershman, S. (2016). Building machines that learn and think like people. Behavioral and Brain Sciences, 40. https://doi.org/10.1017/s0140525x16001837
-
Zhang, Schölkopf, Spirtes & Glymour (2017)
-
Zhang, K., Schölkopf, B., Spirtes, P. & Glymour, C. (2017). Learning causality and causality-related learning: Some recent progress. National Science Review, 5(1), 26–29. https://doi.org/10.1093/nsr/nwx137
-
Schölkopf (2022)
-
Schölkopf, B. (2022). Causality for machine learning. In Probabilistic and causal inference: The works of Judea Pearl (1st ed., pp. 765–804). Association for Computing Machinery. https://doi.org/10.1145/3501714.3501755
-
Shrikumar, Greenside & Kundaje (2017)
-
Shrikumar, A., Greenside, P. & Kundaje, A. (2017). Learning important features through propagating activation differences. PMLR.
-
Lombard & Gärdenfors (2017)
-
Lombard, M. & Gärdenfors, P. (2017). Tracking the evolution of causal cognition in humans. Journal of Anthropological Sciences, 95, 219–234. https://doi.org/10.4436/JASS.95006
-
Peters, Janzing & Schölkopf (2017)
-
Peters, J., Janzing, D. & Schölkopf, B. (2017). Elements of causal inference: Foundations and learning algorithms. MIT Press.
-
Zhu, Gao, Fan, Huang, Edmonds, Liu, Gao, Zhang, Qi, Wu, Tenenbaum & Zhu (2020)
-
Zhu, Y., Gao, T., Fan, L., Huang, S., Edmonds, M., Liu, H., Gao, F., Zhang, C., Qi, S., Wu, Y., Tenenbaum, J. & Zhu, S. (2020). Dark, beyond deep: A paradigm shift to cognitive AI with humanlike common sense. Engineering, 6(3), 310–345. https://doi.org/10.1016/j.eng.2020.01.011
-
Nouri & Littman (2010)
-
Nouri, A. & Littman, M. (2010). Dimension reduction and its application to model-based exploration in continuous spaces. Machine Learning, 81(1), 85–98. https://doi.org/10.1007/s10994-010-5202-y
-
Dearden & Demiris (2005)
-
Dearden, A. & Demiris, Y. (2005). Learning forward models for robots. Morgan Kaufmann Publishers Inc.
-
Gilpin, Bau, Yuan, Bajwa, Specter & Kagal (2018)
-
Gilpin, L., Bau, D., Yuan, B., Bajwa, A., Specter, M. & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. IEEE. https://doi.org/10.1109/dsaa.2018.00018
-
Ciria, Schillaci, Pezzulo, Hafner & Lara (2021)
-
Ciria, A., Schillaci, G., Pezzulo, G., Hafner, V. & Lara, B. (2021). Predictive processing in cognitive robotics: A review. Neural Computation, 33(5), 1402–1432. https://doi.org/10.1162/neco_a_01383
-
Dillon, LaRiviere, Lundberg, Roth & Syrgkanis (2021)
-
Dillon, E., LaRiviere, J., Lundberg, S., Roth, J. & Syrgkanis, V. (2021). Be careful when interpreting predictive models in search of causal insights. Medium. Retrieved from https://towardsdatascience.com/be-careful-when-interpreting-predictive-models-in-search-of-causal-insights-e68626e664b6
-
Lake (2014)
-
Lake, B. (2014). Towards more human-like concept learning in machines: Compositionality, causality, and learning-to-learn (PhD thesis). Massachusetts Institute of Technology. Retrieved from https://dspace.mit.edu/handle/1721.1/95856
-
Kotseruba & Tsotsos (2018)
-
Kotseruba, I. & Tsotsos, J. (2018). 40 years of cognitive architectures: Core cognitive abilities and practical applications. Artificial Intelligence Review, 53(1), 17–94. https://doi.org/10.1007/s10462-018-9646-y
-
Rosenblatt (1958)
-
Rosenblatt, F. (1958). The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408. https://doi.org/10.1037/h0042519
-
McClelland, Rumelhart & PDP Research Group (1987)
-
McClelland, J., Rumelhart, D. & PDP Research Group (1987). Parallel distributed processing: Explorations in the microstructure of cognition, volume 2: Psychological and biological models. The MIT Press.
-
Rumelhart, Hinton & Williams (1986)
-
Rumelhart, D., Hinton, G. & Williams, R. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. https://doi.org/10.1038/323533a0
-
Hornik, Stinchcombe & White (1989)
-
Hornik, K., Stinchcombe, M. & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366. https://doi.org/10.1016/0893-6080(89)90020-8
-
Rosenblatt (1962)
-
Rosenblatt, F. (1962). Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Spartan Books.
-
Shapley (1953)
-
Shapley, L. (1953). A value for n-person games. In Kuhn, H. & Tucker, A. (Eds.), Contributions to the theory of games II (pp. 307–317). Princeton University Press. https://doi.org/10.1515/9781400881970-018
-
Chen, Lu, Rajeswaran, Lee, Grover, Laskin, Abbeel, Srinivas & Mordatch (2021)
-
Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A. & Mordatch, I. (2021). Decision Transformer: Reinforcement learning via sequence modeling. Curran Associates, Inc.
-
Janner, Li & Levine (2021)
-
Janner, M., Li, Q. & Levine, S. (2021). Offline reinforcement learning as one big sequence modeling problem. Curran Associates, Inc.
-
Vaswani, Shazeer, Parmar, Uszkoreit, Jones, Gomez, Kaiser & Polosukhin (2017)
-
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, Ł. & Polosukhin, I. (2017). Attention is all you need. Curran Associates, Inc.
-
Ghosh, Gupta, Reddy, Fu, Devin, Eysenbach & Levine (2019)
-
Ghosh, D., Gupta, A., Reddy, A., Fu, J., Devin, C., Eysenbach, B. & Levine, S. (2019). Learning to reach goals via iterated supervised learning. https://doi.org/10.48550/ARXIV.1912.06088
-
Oh, Guo, Singh & Lee (2018)
-
Oh, J., Guo, Y., Singh, S. & Lee, H. (2018). Self-imitation learning. PMLR.
-
Fisher (1925)
-
Fisher, R. (1925). Statistical methods for research workers. Oliver & Boyd.
-
Diehl & Ramirez-Amaro (2023)
-
Diehl, M. & Ramirez-Amaro, K. (2023). A causal-based approach to explain, predict and prevent failures in robotic tasks. Robotics and Autonomous Systems, 162, 104376. https://doi.org/10.1016/j.robot.2023.104376
-
Lee, Vats, Girdhar & Kroemer (2023)
-
Lee, T., Vats, S., Girdhar, S. & Kroemer, O. (2023). SCALE: Causal learning and discovery of robot manipulation skills using simulation. PMLR.
-
Stocking, Gopnik & Tomlin (2022)
-
Stocking, K., Gopnik, A. & Tomlin, C. (2022). From robot learning to robot understanding: Leveraging causal graphical models for robotics. PMLR.
-
Pearl (1985)
-
Pearl, J. (1985). Bayesian networks: A model of self-activated memory for evidential reasoning.
-
Sontakke, Mehrjou, Itti & Schölkopf (2021)
-
Sontakke, S., Mehrjou, A., Itti, L. & Schölkopf, B. (2021). Causal curiosity: RL agents discovering self-supervised experiments for causal representation learning. PMLR.
-
Sonar, Pacelli & Majumdar (2021)
-
Sonar, A., Pacelli, V. & Majumdar, A. (2021). Invariant policy optimization: Towards stronger generalization in reinforcement learning. PMLR.
-
Wang, Xiao, Xu, Zhu & Stone (2022)
-
Wang, Z., Xiao, X., Xu, Z., Zhu, Y. & Stone, P. (2022). Causal dynamics learning for task-independent state abstraction. PMLR.
-
Brandfonbrener, Bietti, Buckman, Laroche & Bruna (2022)
-
Brandfonbrener, D., Bietti, A., Buckman, J., Laroche, R. & Bruna, J. (2022). When does return-conditioned supervised learning work for offline reinforcement learning? Curran Associates, Inc.
-
Emmons, Eysenbach, Kostrikov & Levine (2021)
-
Emmons, S., Eysenbach, B., Kostrikov, I. & Levine, S. (2021). RvS: What is essential for offline RL via supervised learning? https://doi.org/10.48550/ARXIV.2112.10751
-
Wen, Kuba, Lin, Zhang, Wen, Wang & Yang (2022)
-
Wen, M., Kuba, J., Lin, R., Zhang, W., Wen, Y., Wang, J. & Yang, Y. (2022). Multi-agent reinforcement learning is a sequence modeling problem. Curran Associates, Inc.
-
Zare, Kebria, Khosravi & Nahavandi (2023)
-
Zare, M., Kebria, P., Khosravi, A. & Nahavandi, S. (2023). A survey of imitation learning: Algorithms, recent developments, and challenges. https://doi.org/10.48550/ARXIV.2309.02473
-
Mandlekar, Xu, Wong, Nasiriany, Wang, Kulkarni, Fei-Fei, Savarese, Zhu & Martín-Martín (2021)
-
Mandlekar, A., Xu, D., Wong, J., Nasiriany, S., Wang, C., Kulkarni, R., Fei-Fei, L., Savarese, S., Zhu, Y. & Martín-Martín, R. (2021). What matters in learning from offline human demonstrations for robot manipulation. https://doi.org/10.48550/ARXIV.2108.03298
-
Dogge, Custers & Aarts (2019)
-
Dogge, M., Custers, R. & Aarts, H. (2019). Moving forward: On the limits of motor-based forward models. Trends in Cognitive Sciences, 23(9), 743–753. https://doi.org/10.1016/j.tics.2019.06.008
-
Sperry (1950)
-
Sperry, R. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion. Journal of Comparative and Physiological Psychology, 43(6), 482–489. https://doi.org/10.1037/h0055479
-
Holst & Mittelstaedt (1950)
-
Holst, E. & Mittelstaedt, H. (1950). Das Reafferenzprinzip: Wechselwirkungen zwischen Zentralnervensystem und Peripherie [The reafference principle: Interactions between the central nervous system and the periphery]. Naturwissenschaften, 37(20), 464–476. https://doi.org/10.1007/bf00622503