
Research


MixSize: Training Convnets With Mixed Image Sizes for Improved Accuracy, Speed and Scale Resiliency


Authors: Anonymous
Published date: 01/01/2021

Abstract: Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. Although trained on images of a specific size, it is well established …

Hierarchical Meta Reinforcement Learning for Multi-Task Environments


Authors: Anonymous
Published date: 01/01/2021
Tasks: Hierarchical Reinforcement Learning, Meta Reinforcement Learning

Abstract: Deep reinforcement learning algorithms aim to achieve human-level intelligence by solving practical decision-making problems, which are often composed of multiple sub-tasks. Complex and subtle relationships between sub-tasks make traditional methods …

Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space


Authors: Anonymous
Published date: 01/01/2021
Tasks: Image Classification, Neural Architecture Search

Abstract: Random-sampling Neural Architecture Search (RandomNAS) has recently become a prevailing NAS approach because of its search efficiency and simplicity. There are two main steps in RandomNAS: the training step that …

Contrast to Divide: self-supervised pre-training for learning with noisy labels


Authors: Anonymous
Published date: 01/01/2021
Tasks: Image Classification, Learning with Noisy Labels

Abstract: Advances in semi-supervised methods for image classification significantly boosted performance in the learning with noisy labels (LNL) task. Specifically, by discarding the erroneous labels (and keeping the samples), the LNL …

Removing Undesirable Feature Contributions Using Out-of-Distribution Data


Authors: Anonymous
Published date: 01/01/2021
Tasks: Data Augmentation

Abstract: Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability …

Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis


Authors: Anonymous
Published date: 01/01/2021
Tasks: Image Generation

Abstract: Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for …
