Parameterized Explainer for Graph Neural Network
Dongsheng Luo, Wei Cheng, Dongkuan Xu, et al.
Published: 11/09/2020
Graph Classification
Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method independently addresses the local explanations (i.e., important subgraph structure …
f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Tianwei Ni, Harshit Sikchi, Yufei Wang, et al.
Published: 11/09/2020
Imitation Learning
Imitation learning is well-suited for robotic tasks where it is difficult to directly program the behavior or specify a cost for optimal control. In this work, we propose a method …
Multimodal Trajectory Prediction via Topological Invariance for Navigation at Uncontrolled Intersections
Junha Roh, Christoforos Mavrogiannis, Rishabh Madan, et al.
Published: 11/08/2020
Trajectory Prediction
We focus on decentralized navigation among multiple non-communicating rational agents at uncontrolled intersections, i.e., street intersections without traffic signs or signals. Avoiding collisions in such domains relies on the ability …
Learning-based 3D Occupancy Prediction for Autonomous Navigation in Occluded Environments
Lizi Wang, Hongkai Ye, Qianhao Wang, et al.
Published: 11/08/2020
Autonomous Navigation
In autonomous navigation of mobile robots, sensors suffer from massive occlusion in cluttered environments, leaving a significant amount of space unknown during planning. In practice, treating the unknown space in optimistic …
An HVS-Oriented Saliency Map Prediction Modeling
Qiang Li
Published: 11/08/2020
Saliency Prediction
Visual attention is one of the most significant characteristics for selecting and understanding the outside world. Natural complex scenes contain great redundancy, and the human visual system cannot process all …
Long Range Arena: A Benchmark for Efficient Transformers
Yi Tay, Mostafa Dehghani, Samira Abnar, et al.
Published: 11/08/2020
Transformers do not scale well to long sequence lengths, largely because of the quadratic complexity of self-attention. In recent months, a wide spectrum of efficient, fast Transformers has been proposed …