S Gunasekar, Y Zhang, J Aneja, CCT Mendes, A Del Giorno, S Gopi, et al. "Textbooks are all you need." arXiv preprint arXiv:2306.11644, 2023. Cited by 199.
S Chen, S Chewi, J Li, Y Li, A Salim, AR Zhang. "Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions." The Eleventh International Conference on Learning Representations (ICLR), 2023. Cited by 161.
M Arbel, A Korba, A Salim, A Gretton. "Maximum mean discrepancy gradient flow." Advances in Neural Information Processing Systems 32, 2019. Cited by 132.
A Korba, A Salim, M Arbel, G Luise, A Gretton. "A non-asymptotic analysis for Stein variational gradient descent." Advances in Neural Information Processing Systems 33, 4672-4682, 2020. Cited by 77.
D Kovalev, A Salim, P Richtárik. "Optimal and practical algorithms for smooth and strongly convex decentralized optimization." Advances in Neural Information Processing Systems 33, 18342-18352, 2020. Cited by 74.
K Balasubramanian, S Chewi, MA Erdogdu, A Salim, S Zhang. "Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo." Conference on Learning Theory, 2896-2923, 2022. Cited by 53.
Y Chen, S Chewi, A Salim, A Wibisono. "Improved analysis for a proximal algorithm for sampling." Conference on Learning Theory, 2984-3014, 2022. Cited by 43.
A Salim, A Korba, G Luise. "The Wasserstein proximal gradient algorithm." Advances in Neural Information Processing Systems 33, 12356-12366, 2020. Cited by 42.
S Chen, S Chewi, H Lee, Y Li, J Lu, A Salim. "The probability flow ODE is provably fast." Advances in Neural Information Processing Systems 36, 2023. Cited by 40.
M Javaheripi, S Bubeck, M Abdin, J Aneja, CCT Mendes, et al. "Phi-2: The surprising power of small language models." Microsoft Research Blog, 2023. Cited by 37.
A Salim, L Condat, K Mishchenko, P Richtárik. "Dualize, split, randomize: Toward fast nonsmooth optimization algorithms." Journal of Optimization Theory and Applications 195 (1), 102-130, 2022. Cited by 37.
A Salim, P Richtárik. "Primal dual interpretation of the proximal stochastic gradient Langevin algorithm." Advances in Neural Information Processing Systems 33, 3786-3796, 2020. Cited by 33.
P Bianchi, W Hachem, A Salim. "A constant step Forward-Backward algorithm involving random maximal monotone operators." Journal of Convex Analysis 26 (2), 387-436, 2019. Cited by 29.
A Salim, D Kovalev, P Richtárik. "Stochastic proximal Langevin algorithm: potential splitting and nonasymptotic rates." Advances in Neural Information Processing Systems 32, 2019. Cited by 25.
S Chraibi, A Khaled, D Kovalev, P Richtárik, A Salim, M Takáč. "Distributed fixed point methods with compressed iterates." arXiv preprint arXiv:1912.09925, 2019. Cited by 24.
A Salim, P Bianchi, W Hachem. "Snake: a stochastic proximal gradient algorithm for regularized problems over large graphs." IEEE Transactions on Automatic Control 64 (5), 1832-1847, 2019. Cited by 23.
A Salim, L Condat, D Kovalev, P Richtárik. "An optimal algorithm for strongly convex minimization under affine constraints." International Conference on Artificial Intelligence and Statistics, 4482-4498, 2022. Cited by 21.
A Salim, L Sun, P Richtárik. "A convergence theory for SVGD in the population limit under Talagrand's inequality T1." International Conference on Machine Learning, 19139-19152, 2022. Cited by 20*.
P Bianchi, W Hachem, A Salim. "Constant step stochastic approximations involving differential inclusions: stability, long-run convergence and applications." Stochastics 91 (2), 288-320, 2019. Cited by 19*.
MZ Diao, K Balasubramanian, S Chewi, A Salim. "Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein space." International Conference on Machine Learning, 7960-7991, 2023. Cited by 17.