Research

Showing 229–234 of 897

Deeper or Wider Networks of Point Clouds with Self-attention?


Authors: Haoxi Ran, Li Lu, ...
Published date: 11/29/2020

Abstract: The prevalence of deeper networks driven by self-attention stands in stark contrast to underexplored point-based methods. In this paper, we propose groupwise self-attention as the basic block to construct our network: …
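
The snippet above cuts off before the block is defined. As a rough illustration only, the sketch below shows one plausible way to restrict self-attention to groups of points; the class name, tensor shapes, and contiguous grouping scheme are assumptions for this sketch, not the paper's actual design.

```python
# Minimal sketch of groupwise self-attention over point features.
# Assumption: points are split into equal-sized contiguous groups and
# attention is computed within each group only.
import torch
import torch.nn as nn

class GroupSelfAttention(nn.Module):
    def __init__(self, dim, num_groups):
        super().__init__()
        self.num_groups = num_groups
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, num_points, dim); num_points must divide evenly
        # into num_groups for this simplified grouping.
        b, n, d = x.shape
        g = self.num_groups
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape so attention scores stay within each group.
        q, k, v = (t.view(b, g, n // g, d) for t in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        out = (attn @ v).view(b, n, d)
        return self.proj(out)

blk = GroupSelfAttention(dim=64, num_groups=8)
y = blk(torch.randn(2, 1024, 64))  # 1024 points -> 8 groups of 128
```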

Latent Template Induction with Gumbel-CRFs


Authors: Yao Fu, Chuanqi Tan, Bin Bi, ...
Published date: 11/29/2020
Tasks: Data-to-Text Generation, Paraphrase Generation, Text Generation

Abstract: Learning to control the structure of sentences is a challenging problem in text generation. Existing work either relies on simple deterministic approaches or RL-based hard structures. We explore the use …
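
The abstract is truncated before the method is named, but the title points to a Gumbel-CRF relaxation. Below is a minimal sketch of the Gumbel-softmax reparameterization that such relaxations build on; it illustrates the general trick of differentiable sampling over discrete latent structure, not the paper's exact estimator.

```python
# Sketch of the Gumbel-softmax trick: perturb logits with Gumbel(0, 1)
# noise, then take a temperature-controlled softmax as a differentiable
# surrogate for sampling a discrete latent state.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    u = torch.rand_like(logits).clamp(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))  # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.randn(4, 10, requires_grad=True)  # e.g. 10 latent states
soft_sample = gumbel_softmax_sample(logits, tau=0.5)
soft_sample.sum().backward()  # gradients flow through the relaxed sample
```

PyTorch also ships this relaxation as `torch.nn.functional.gumbel_softmax`; a CRF version applies the same idea to sequences of structured latent variables rather than a single categorical.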

Scaling down Deep Learning


Authors: Sam Greydanus, ...
Published date: 11/29/2020

Abstract: Though deep learning models have taken on commercial and political relevance, many aspects of their training and operation remain poorly understood. This has sparked interest in a "science of deep learning" …

Differences between human and machine perception in medical diagnosis


Authors: Taro Makino, Stanislaw Jastrzebski, Witold Oleszkiewicz, ...
Published date: 11/28/2020
Tasks: Breast Cancer Detection, Medical Diagnosis

Abstract: Deep neural networks (DNNs) show promise in image-based medical diagnosis, but cannot be fully trusted since their performance can be severely degraded by dataset shifts to which human perception remains …

Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules


Authors: Johannes Klicpera, Shankari Giri, Johannes T. Margraf, ...
Published date: 11/28/2020

Abstract: Many important tasks in chemistry revolve around molecules during reactions. This requires predictions far from equilibrium, while most recent work in machine learning for molecules has focused on …

Understanding How BERT Learns to Identify Edits


Authors: Samuel Stevens, Yu Su, ...
Published date: 11/28/2020

Abstract: Pre-trained transformer language models such as BERT are ubiquitous in NLP research, leading to work on understanding how and why these models work. Attention mechanisms have been proposed as a …
