Research
A Three-Stage Self-Training Framework for Semi-Supervised Semantic Segmentation
Rihuan Ke, Angelica Aviles-Rivero, Saurabh Pandey, et al.
Published: 12/01/2020
Semantic Segmentation, Semi-Supervised Semantic Segmentation
Semantic segmentation has been widely investigated in the community, and the state-of-the-art techniques are based on supervised models. Those models have reported unprecedented performance at the …
Optimal visual search based on a model of target detectability in natural images
Shima Rashidi, Krista Ehinger, Andrew Turpin, et al.
Published: 12/01/2020
Eye Tracking, Foveation
In the analysis of visual systems, the concept of an ideal observer promises an optimal response for a given task. Bayesian ideal observers can provide optimal responses under uncertainty, if they are …
Sim2Real for Self-Supervised Monocular Depth and Segmentation
Nithin Raghavan, Punarjay Chakravarty, Shubham Shrivastava, et al.
Published: 12/01/2020
Domain Adaptation
Image-based learning methods for autonomous vehicle perception tasks require large quantities of labelled, real data to train properly without overfitting, which can often be incredibly costly. While leveraging …
VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain
Jinsung Yoon, Yao Zhang, James Jordon, et al.
Published: 12/01/2020
Data Augmentation, Imputation, Self-Supervised Learning
Self- and semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in image and language domains. These methods heavily rely on the unique …
Learning sparse codes from compressed representations with biologically plausible local wiring constraints
Kion Fallah, Adam Willats, Ninghao Liu, et al.
Published: 12/01/2020
Dimensionality Reduction
Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from …
The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning
Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto, et al.
Published: 12/01/2020
Meta-Learning
Biased regularization and fine tuning are two recent meta-learning approaches. They have been shown to be effective in tackling distributions of tasks in which the tasks’ target vectors are all …