Research

Showing results 55-60 of 897

A Three-Stage Self-Training Framework for Semi-Supervised Semantic Segmentation


Authors: Rihuan Ke, Angelica Aviles-Rivero, Saurabh Pandey, ...
Published: 12/01/2020
Tasks: Semantic Segmentation, Semi-Supervised Semantic Segmentation

Abstract: Semantic segmentation has been widely investigated in the community, where the state-of-the-art techniques are based on supervised models. Those models have reported unprecedented performance at the …

Optimal visual search based on a model of target detectability in natural images


Authors: Shima Rashidi, Krista Ehinger, Andrew Turpin, ...
Published: 12/01/2020
Tasks: Eye Tracking, Foveation

Abstract: For analysing visual systems, the concept of an ideal observer promises an optimal response for a given task. Bayesian ideal observers can provide optimal responses under uncertainty if they are …

Sim2Real for Self-Supervised Monocular Depth and Segmentation


Authors: Nithin Raghavan, Punarjay Chakravarty, Shubham Shrivastava, ...
Published: 12/01/2020
Tasks: Domain Adaptation

Abstract: Image-based learning methods for autonomous vehicle perception tasks require large quantities of labelled real data to train properly without overfitting, which can often be incredibly costly. While leveraging …

VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain


Authors: Jinsung Yoon, Yao Zhang, James Jordon, ...
Published: 12/01/2020
Tasks: Data Augmentation, Imputation, Self-Supervised Learning

Abstract: Self- and semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in image and language domains. These methods heavily rely on the unique …

Learning sparse codes from compressed representations with biologically plausible local wiring constraints


Authors: Kion Fallah, Adam Willats, Ninghao Liu, ...
Published: 12/01/2020
Tasks: Dimensionality Reduction

Abstract: Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from …

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning


Authors: Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto, ...
Published: 12/01/2020
Tasks: Meta-Learning

Abstract: Biased regularization and fine tuning are two recent meta-learning approaches. They have been shown to be effective at tackling distributions of tasks in which the tasks’ target vectors are all …
