Machine learning is a broad field in which cross-disciplinary collaboration is essential. To enable such collaboration, starting this fall we will hold weekly interdepartmental meetings in which faculty, postdocs, and senior PhD students from ECE, CS, STAT, and CAAM discuss their machine learning research.

Details about the ML lunch presentations are below. Lunch is provided for graduate students, postdoctoral scholars, and faculty.


Breaking the Computational Chicken-and-egg Loop in Adaptive Sampling via Hashing
Nov. 1, 12:00 pm - 1:00 pm in Herring 129
Speaker: Anshumali Shrivastava
Please indicate interest, especially if you want lunch, here.
Abstract: Stochastic Gradient Descent (SGD) is the most popular algorithm for large-scale optimization. In SGD, the gradient is estimated by uniform sampling with sample size one. Several results show that weighted, non-uniform sampling yields better gradient estimates and hence faster convergence. Unfortunately, the per-iteration cost of maintaining this adaptive distribution exceeds the cost of computing the exact gradient itself, creating a chicken-and-egg loop that renders the faster convergence useless. In this work, we break this chicken-and-egg loop and provide the first demonstration of a sampling scheme that leads to superior gradient estimation while keeping the per-iteration sampling cost comparable to uniform sampling. Such a scheme is made possible by recent advances in the Locality Sensitive Hashing (LSH) literature. As a consequence, we improve the running time of all existing gradient descent algorithms.
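The estimator the abstract refers to can be made concrete with a small sketch. The snippet below is an illustration only, not the speaker's LSH-based scheme: it implements importance-sampled SGD for a least-squares problem, where per-example weights proportional to gradient magnitudes give a lower-variance, unbiased gradient estimate, but computing those weights requires a full pass over the data, which is exactly the chicken-and-egg cost the talk addresses.

```python
import numpy as np

# Illustrative importance-sampled SGD step for least squares
#   f(w) = (1/n) * sum_i (x_i . w - y_i)^2.
# Computing the adaptive weights below requires a full pass over the data,
# i.e., it costs as much as an exact gradient -- the chicken-and-egg loop
# that the talk proposes to break with LSH (not shown here).
def sgd_step_nonuniform(w, X, y, lr=0.05):
    n = X.shape[0]
    residuals = X @ w - y                                   # full pass (expensive)
    weights = np.abs(residuals) * np.linalg.norm(X, axis=1) + 1e-12
    p = weights / weights.sum()                             # adaptive sampling distribution
    i = np.random.choice(n, p=p)
    grad_i = 2.0 * residuals[i] * X[i]                      # gradient of the sampled term
    grad_est = grad_i / (n * p[i])                          # unbiased estimate of grad f(w)
    return w - lr * grad_est

# Usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=500)
w = np.zeros(10)
for _ in range(2000):
    w = sgd_step_nonuniform(w, X, y)
print("parameter error:", np.linalg.norm(w - w_true))
```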

 

The Glass and its Cheap Knockoffs are Half Full: Optimization with Multi-Fidelity Evaluations
Nov. 8, 12:00 pm - 1:00 pm in DCH 1055 (McMurtry Auditorium)
Speaker: Gautam Dasarathy
Please indicate interest, especially if you want lunch, here.
Abstract: In many scientific and engineering applications, we are tasked with optimizing a black-box function that is expensive even to evaluate. In many cases, however, cheap approximations to this function are available. For example, the real-world behavior of an autonomous vehicle can be (possibly poorly) approximated by a significantly cheaper computer simulation, and the cross-validation performance of a neural network may be approximated using small representative samples of the training set. One might hope that these approximations can be used to efficiently eliminate vast regions of the optimization space and adaptively hone the search onto a small, promising region. In this talk, I will show how one can formalize this task as a multi-fidelity bandit problem in which the target function and its approximations are sampled from a Gaussian process. I will introduce a new meta-algorithm based on the principle of optimism in the face of uncertainty that (a) comes with theoretical guarantees capturing the intuitive behavior above, and (b) empirically outperforms other known methods by a wide margin on several synthetic and real experiments.
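For readers unfamiliar with the setup, the toy sketch below illustrates the flavor of an optimism-based two-fidelity strategy on a 1D problem: an upper confidence bound valid for the expensive target is built from both a cheap approximation and the target itself, and the cheap fidelity is queried while it remains informative. The functions f_low and f_high, the bias bound zeta, and the escalation threshold gamma are illustrative assumptions, not the speaker's algorithm or notation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# f_high is the expensive target, f_low a cheap biased approximation.
# zeta bounds the gap between them; gamma is the uncertainty level at which
# we escalate from the cheap fidelity to the expensive one.
f_high = lambda x: -np.sin(3.0 * x) - x ** 2 + 0.7 * x
f_low = lambda x: f_high(x) + 0.3 * np.cos(5.0 * x)

grid = np.linspace(-1.0, 2.0, 200).reshape(-1, 1)
X = {0: [], 1: []}
Y = {0: [], 1: []}
zeta, gamma, beta = 0.4, 0.1, 2.0

for t in range(30):
    mus, sds = [], []
    for m in (0, 1):                                    # posterior (or prior) per fidelity
        gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-3)
        if X[m]:
            gp.fit(np.array(X[m]).reshape(-1, 1), np.array(Y[m]))
        mu, sd = gp.predict(grid, return_std=True)
        mus.append(mu)
        sds.append(sd)
    # Optimistic bound on f_high: the low-fidelity UCB inflated by zeta,
    # intersected with the high-fidelity UCB.
    ucb = np.minimum(mus[0] + beta * sds[0] + zeta, mus[1] + beta * sds[1])
    i = int(np.argmax(ucb))
    m = 0 if sds[0][i] > gamma else 1                   # query cheap fidelity while uncertain
    x = float(grid[i, 0])
    X[m].append(x)
    Y[m].append(f_low(x) if m == 0 else f_high(x))

best = max(Y[1]) if Y[1] else None
print("best high-fidelity value found:", best)
```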

 

Partition Mixture of 1D Wavelets for Multidimensional Data
Nov. 15, 12:00 pm - 1:00 pm in Herring 129
Speaker: Meng Li
Please indicate interest, especially if you want lunch, here.
Abstract: Traditional statistical wavelet analysis that carries out modeling and inference based on wavelet coefficients under a given, predetermined wavelet transform can quickly lose efficiency in multivariate problems, because such wavelet transforms, which are typically symmetric with respect to the dimensions, cannot adaptively exploit the energy distribution in a problem-specific manner. We introduce a principled probabilistic framework for incorporating such adaptivity—by (i) representing multivariate functions using one-dimensional (1D) wavelet transforms applied to a permuted version of the original function, and (ii) placing a prior on the corresponding permutation, thereby forming a mixture of permuted 1D wavelet transforms. Such a representation can achieve substantially better energy concentration in the wavelet coefficients. In particular, when combined with the Haar basis, we show that exact Bayesian inference under the model can be achieved analytically through a recursive message passing algorithm with a computational complexity that scales linearly with sample size. In addition, we propose a sequential Monte Carlo (SMC) inference algorithm for other wavelet bases using the exact Haar solution as the proposal. We demonstrate that with this framework even simple 1D Haar wavelets can achieve excellent performance in both 2D and 3D image reconstruction via numerical experiments, outperforming state-of-the-art multidimensional wavelet-based methods especially in low signal-to-noise ratio settings, at a fraction of the computational cost.
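A quick numerical illustration (not the speaker's inference algorithm) of why the permutation matters: flattening the same 2D image under two different orderings and applying a plain 1D Haar transform can give very different energy concentration, which is precisely the quantity the proposed mixture of permuted 1D transforms adapts to.

```python
import numpy as np

def haar_1d(x):
    """Orthonormal 1D Haar transform (length must be a power of two)."""
    x = np.asarray(x, dtype=float)
    out = []
    while x.size > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2))   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)           # running approximation
    out.append(x)
    return np.concatenate(out[::-1])

def top_k_energy(coeffs, k=32):
    """Fraction of total energy captured by the k largest coefficients."""
    c2 = np.sort(coeffs ** 2)[::-1]
    return c2[:k].sum() / c2.sum()

# A 32x32 image that varies smoothly along rows only.
img = np.tile(np.linspace(0, 1, 32), (32, 1))
row_major = img.ravel(order="C")          # permutation 1: row by row
col_major = img.ravel(order="F")          # permutation 2: column by column

print("row-major top-32 energy:", top_k_energy(haar_1d(row_major)))
print("col-major top-32 energy:", top_k_energy(haar_1d(col_major)))
```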

 

Deep Compressed Sensing
Nov. 30, 12:00 pm - 1:00 pm in Herring 129
Speaker: Paul Hand
Please indicate interest, especially if you want lunch, here.
Abstract: Combining principles of compressed sensing with deep neural network-based generative image priors has recently been shown empirically to require 10X fewer measurements than traditional compressed sensing in certain scenarios. As deep generative priors (such as those obtained via generative adversarial training) improve, analogous improvements in the performance of compressed sensing and other inverse problems may be realized across the imaging sciences. In joint work with Vladislav Voroninski, we provide a theoretical framework for studying inverse problems subject to deep generative priors. In particular, we prove that, with high probability, the non-convex empirical risk objective for enforcing random deep generative priors subject to compressive random linear observations of the last layer of the generator has no spurious local minima, and that for a fixed network depth these guarantees hold at order-optimal sample complexity.
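The empirical risk objective mentioned in the abstract can be sketched in a few lines: given a fixed random ReLU generator G and compressive linear measurements y = A G(z*), one minimizes ||A G(z) - y||^2 over the latent code z by gradient descent. The network sizes, weight scalings, and step size below are illustrative assumptions, not the authors' construction or proof setting.

```python
import numpy as np

# Hedged sketch: recover a signal x* = G(z*) in the range of a fixed random
# one-hidden-layer ReLU generator from compressive linear measurements
# y = A x*, by plain gradient descent on L(z) = ||A G(z) - y||^2.
rng = np.random.default_rng(0)
k, d, n, m = 10, 100, 400, 80           # latent, hidden, signal, measurement dims

W1 = rng.normal(size=(d, k)); W1 /= np.linalg.norm(W1, 2)   # spectral normalization
W2 = rng.normal(size=(n, d)); W2 /= np.linalg.norm(W2, 2)   # keeps the step size safe
A = rng.normal(size=(m, n)); A /= np.linalg.norm(A, 2)

def G(z):                               # random expansive ReLU generator
    return W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.normal(size=k)
y = A @ G(z_true)                       # compressive observations of the last layer

z = 0.1 * rng.normal(size=k)            # random initialization
for _ in range(5000):
    pre = W1 @ z
    h = np.maximum(pre, 0.0)
    r = A @ (W2 @ h) - y                                      # residual
    grad_z = W1.T @ ((W2.T @ (2.0 * A.T @ r)) * (pre > 0))    # chain rule through the ReLU
    z -= 0.5 * grad_z

rel_err = np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true))
print("relative recovery error:", rel_err)
```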

 

If you are interested in speaking next semester, please indicate your interest here.

