Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations
Mina Konakovic Lukovic, Yunsheng Tian, Wojciech Matusik, et al.
Published: 12/01/2020
Many science, engineering, and design optimization problems require balancing the trade-offs between several conflicting objectives. The objectives are often black-box functions whose evaluations are time-consuming and costly. Multi-objective Bayesian optimization …
Learning to summarize with human feedback
Nisan Stiennon, Long Ouyang, Jeffrey Wu, et al.
Published: 12/01/2020
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to …
Model Class Reliance for Random Forests
Gavin Smith, Roberto Mansilla, James Goulding, et al.
Published: 12/01/2020
Variable Importance (VI) has traditionally been cast as the process of estimating each variable's contribution to a predictive model's overall performance. Analysis of a single model instance, however, guarantees no …
Learning Disentangled Representations and Group Structure of Dynamical Environments
Robin Quessard, Thomas Barrett, William Clements, et al.
Published: 12/01/2020
Learning disentangled representations is a key step towards effectively discovering and modelling the underlying structure of environments. In the natural sciences, physics has found great success by describing the universe …
Dual-Free Stochastic Decentralized Optimization with Variance Reduction
Hadrien Hendrikx, Francis Bach, Laurent Massoulié, et al.
Published: 12/01/2020
We consider the problem of training machine learning models on distributed data in a decentralized way. For finite-sum problems, fast single-machine algorithms for large datasets rely on stochastic updates combined …
Inverting Gradients - How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, et al.
Published: 12/01/2020
Federated Learning
The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates …