MixSize: Training Convnets With Mixed Image Sizes for Improved Accuracy, Speed and Scale Resiliency
Anonymous
Published: 01/01/2021
Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. Although trained on images of a specific size, it is well established …
Hierarchical Meta Reinforcement Learning for Multi-Task Environments
Anonymous
Published: 01/01/2021
Hierarchical Reinforcement Learning, Meta Reinforcement Learning
Deep reinforcement learning algorithms aim to achieve human-level intelligence by solving practical decision-making problems, which are often composed of multiple sub-tasks. Complex and subtle relationships between sub-tasks make traditional methods …
Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space
Anonymous
Published: 01/01/2021
Image Classification, Neural Architecture Search
Random-sampling Neural Architecture Search (RandomNAS) has recently become a prevailing NAS approach because of its search efficiency and simplicity. There are two main steps in RandomNAS: the training step that …
Contrast to Divide: self-supervised pre-training for learning with noisy labels
Anonymous
Published: 01/01/2021
Image Classification, Learning with Noisy Labels
Advances in semi-supervised methods for image classification significantly boosted performance in the learning with noisy labels (LNL) task. Specifically, by discarding the erroneous labels (and keeping the samples), the LNL …
Removing Undesirable Feature Contributions Using Out-of-Distribution Data
Anonymous
Published: 01/01/2021
Data Augmentation
Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability …
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
Anonymous
Published: 01/01/2021
Image Generation
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for …