Conditional Generative Modeling for De Novo Hierarchical Multi-Label Functional Protein Design
Anonymous
Published: 01/01/2021
The availability of vast protein sequence information and rich functional annotations thereof has large potential for protein design applications in biomedicine and synthetic biology. To date, there exists …
Private Image Reconstruction from System Side Channels Using Generative Models
Anonymous
Published: 01/01/2021
Image Reconstruction
System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to …
WaveQ: Gradient-Based Deep Quantization of Neural Networks Through Sinusoidal Regularization
Anonymous
Published: 01/01/2021
Quantization
Deep quantization of neural networks below eight bits can lead to superlinear benefits in storage and compute efficiency. However, homogeneously quantizing all the layers to the same level does not …
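The title's sinusoidal regularization can be illustrated with a toy sketch: a penalty of the form sin²(πw/Δ) vanishes exactly when a weight w is an integer multiple of the quantization step Δ, so adding it to the training loss nudges weights toward quantization-friendly values. The function below is a hypothetical illustration of that idea, not the paper's actual method or API.

```python
import math

def sinusoidal_penalty(weights, step):
    """Illustrative regularizer: sin^2(pi * w / step) is zero exactly when w
    is an integer multiple of the quantization step, and positive otherwise,
    so it pulls weights toward the quantization grid during training."""
    return sum(math.sin(math.pi * w / step) ** 2 for w in weights)

# Weights already on a 0.25 grid incur (numerically) zero penalty;
# off-grid weights are penalized.
print(sinusoidal_penalty([0.0, 0.25, -0.5], 0.25))  # ~0.0
print(sinusoidal_penalty([0.1, 0.3, -0.45], 0.25))  # > 0
```

In a real training loop this term would be scaled by a coefficient and added to the task loss; per-layer step sizes would allow the heterogeneous bit-widths the abstract alludes to.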
On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes
Anonymous
Published: 01/01/2021
3D Shape Representation
A neural implicit outputs a number indicating whether the given query point in space is outside, inside, or on a surface. Many prior works have focused on _latent-encoded_ neural implicits, …
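The interface the abstract describes can be sketched with an analytic signed-distance function standing in for a trained network: the sign of the output classifies a query point as outside (positive), inside (negative), or on (zero) the surface. The sphere below is a hypothetical stand-in, not the paper's model.

```python
import math

def sphere_implicit(x, y, z, radius=1.0):
    """Analytic stand-in for a neural implicit: signed distance from a
    query point to a sphere of the given radius centered at the origin.
    Positive -> outside, negative -> inside, zero -> on the surface."""
    return math.sqrt(x * x + y * y + z * z) - radius

print(sphere_implicit(2.0, 0.0, 0.0))  # 1.0  -> outside
print(sphere_implicit(0.0, 0.0, 0.0))  # -1.0 -> inside
print(sphere_implicit(1.0, 0.0, 0.0))  # 0.0  -> on the surface
```

A weight-encoded implicit, as in the title, would train one small network per shape so the weights themselves store the geometry, in contrast to the latent-encoded variants the abstract mentions.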
Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space
Anonymous
Published: 01/01/2021
Image Classification, Neural Architecture Search
Random-sampling Neural Architecture Search (RandomNAS) has recently become a prevailing NAS approach because of its search efficiency and simplicity. There are two main steps in RandomNAS: the training step that …
Structure and randomness in planning and reinforcement learning
Anonymous
Published: 01/01/2021
Planning in large state spaces inevitably needs to balance the depth and breadth of the search. This balance has a crucial impact on planners' performance, and most manage the interplay implicitly. We …