This is the home page for the Machine Learning Lunch Series, sponsored by EOG Resources, at the School of Engineering. The coordinators are Michael Weylandt, Tan Nguyen, and Chen Luo, and the faculty coordinator is Reinhard Heckel. The mailing list is:
Unless otherwise noted, the meetings are on Wednesdays at 12:00pm. 

Details about the ML lunch presentation are below. Lunch is provided for graduate students, postdoctoral scholars, and faculty.

To Facebook, Cambridge Analytica, and (Let’s Hope) Beyond!
April 25, 12:00 pm - 1:00 pm in Herring 129
Speaker: Fred Oswald
Please indicate interest, especially if you want lunch, here.
Abstract: The tandem presence of ‘big data’ and machine learning has led to the promises and perils of large-scale, automated, and improved predictions about a wide range of human characteristics and behaviors. This lunchtime discussion will address a series of recent papers relevant to the general roles of data-driven vs. model-driven approaches in social science research (personality psychology, neuroscience), and to attendant ethical issues raised by recent news regarding Facebook and Cambridge Analytica (e.g., sampling, measurement, data collection, storage, interpretation, and implications for decision-making).


Farewell to Heavy Human Annotation: Learning from Web Data
April 18, 12:00 pm - 1:00 pm in Herring 129
Speaker: Li Niu
Please indicate interest, especially if you want lunch, here.
Abstract: With the rapid development of machine learning techniques, especially deep learning, myriad computer vision tasks have achieved great success and demonstrated excellent performance. However, conventional machine learning approaches, especially deep learning, require large amounts of well-labeled training data annotated by human labelers, which is extremely time-consuming and labor-intensive. To reduce human annotation effort, much existing research falls under transfer learning and webly supervised learning. In transfer learning, only a small set of source domains/categories is labeled by humans; the model learnt on this small set can then be adapted to a given set of target domains/categories, or generalize to an arbitrary set of target domains/categories. Note that a domain is a very broad concept: one camera viewpoint, one capture device, or one data source can each be treated as a domain. In webly supervised learning, large quantities of freely available web images/videos/text crawled from public websites serve as training data, which takes human annotation out of the loop.
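The transfer-learning recipe above — fit on a well-labeled source, then adapt only a small part of the model on scarce target labels — can be illustrated with a toy linear example. The data and the offset-only adaptation here are hypothetical and deliberately simple; they are not any specific method from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task: abundant labels for y = w . x + noise.
w_true = np.array([2.0, -1.0, 0.5])
Xs = rng.standard_normal((500, 3))
ys = Xs @ w_true + 0.1 * rng.standard_normal(500)

# Target task: same relationship plus an unknown shift, but only
# five labeled examples -- far too few to fit from scratch reliably.
Xt = rng.standard_normal((5, 3))
yt = Xt @ w_true + 3.0 + 0.1 * rng.standard_normal(5)

# "Transfer": estimate the weights on the source, then adapt only
# the offset on the tiny target set instead of refitting everything.
w_hat, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
offset = float(np.mean(yt - Xt @ w_hat))

# Evaluate on fresh target data.
Xe = rng.standard_normal((200, 3))
ye = Xe @ w_true + 3.0
mse = float(np.mean((Xe @ w_hat + offset - ye) ** 2))
```

The same idea scales up to deep networks, where the reused part is a pretrained feature extractor rather than a weight vector.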


Valid Statistical Inference for Clustered Data
April 11, 12:00 pm - 1:00 pm in Herring 129
Speaker: Fred Campbell
Please indicate interest, especially if you want lunch, here.
Abstract: We develop several statistical inference procedures for clustered data. Clustered data violate the assumptions of many classical statistical tests, requiring researchers to account for the clustering procedure when performing inference. We derive the exact distributions of two classical multivariate statistics conditional on having clustered the data with the Convex Clustering method. This allows us to quantify the uncertainty in the clusters through confidence regions and to test the effectiveness of our clustering assignments with Hotelling’s T^2 test.
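For reference, the classical (unconditional) one-sample Hotelling's T^2 statistic can be computed in a few lines of numpy on synthetic data. Note that the talk's point is precisely that this classical distribution is no longer valid after clustering the same data; the sketch below shows only the uncorrected statistic:

```python
import numpy as np

def hotelling_t2(X, mu0):
    """Classical one-sample Hotelling's T^2 statistic for H0: E[X] = mu0.

    X: (n, p) data matrix; mu0: hypothesized mean of length p.
    Returns T^2 and its rescaled version, which under H0 (with i.i.d.
    multivariate normal data) follows an F(p, n - p) distribution.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)               # sample covariance
    diff = xbar - np.asarray(mu0, dtype=float)
    t2 = float(n * diff @ np.linalg.solve(S, diff))
    f_stat = (n - p) / (p * (n - 1)) * t2
    return t2, f_stat

rng = np.random.default_rng(0)
X = rng.normal(loc=[1.0, -2.0], scale=1.0, size=(50, 2))
t2, f_stat = hotelling_t2(X, mu0=[1.0, -2.0])
```

Conditioning on a data-driven clustering changes the reference distribution of this statistic, which is what the derived exact distributions correct for.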


Neuronal Functional Connectivity Estimation
April 4, 12:00 pm - 1:00 pm in Herring 129
Speaker: Giuseppe Vinci
Please indicate interest, especially if you want lunch, here.
Abstract: One of the most important challenges of computational neuroscience is estimating functional connectivity, that is, inferring the dependence structure of neurons. Functional connectivity may be represented by a graph whose nodes represent neurons or brain areas, with an edge connecting two nodes if and only if they are dependent conditionally on the others. Neuroscientists can now record the activity of hundreds to thousands of neurons simultaneously, but only over a limited number of experiments. This high-dimensional setting requires regularized statistical methods to infer neural connectivity efficiently. Popular sparse Gaussian graphical models (GGMs), such as the Graphical Lasso, can provide sparse estimates of the dependence structure, but their performance in realistic scenarios of neural data can be unsatisfactory. We provide methods that incorporate neurophysiological information and outperform the Graphical Lasso and its existing variants in realistic scenarios of neural data.
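The conditional-independence structure a Gaussian graphical model encodes — an edge exists if and only if the corresponding precision-matrix entry is nonzero — can be illustrated with a small numpy sketch. The hand-built chain graph below is illustrative only, not the speaker's method:

```python
import numpy as np

# In a Gaussian graphical model, nodes i and j are conditionally
# independent given all other nodes iff precision[i, j] == 0.
# Build a chain graph 0 - 1 - 2: there is no edge between 0 and 2.
precision = np.array([[ 2.0, -0.8,  0.0],
                      [-0.8,  2.0, -0.8],
                      [ 0.0, -0.8,  2.0]])
cov = np.linalg.inv(precision)

# The marginal covariance between nodes 0 and 2 is NOT zero
# (dependence flows through node 1) ...
marginal_02 = cov[0, 2]

# ... but inverting the covariance recovers the missing edge.
recovered_02 = np.linalg.inv(cov)[0, 2]
```

The Graphical Lasso estimates such a sparse precision matrix from limited samples by penalizing its off-diagonal entries; the talk's methods add neurophysiological structure on top of that.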


Mapping Questions to Textbook Content
March 28, 12:00 pm - 1:00 pm in Herring 129
Speaker: Ryan Burmeister
Please indicate interest, especially if you want lunch, here.
Abstract: Learning is an iterative process consisting of knowledge acquisition, assessment of supposed knowledge, identification of misconceptions, and refinement of understanding. Within courses, teachers often employ textbook review questions to assess student knowledge retention. However, students’ attempts to resolve misconceptions or reinforce concepts following these questions may leave them searching large expanses of textbook content. This project aims to alleviate this problem by providing formative feedback in the form of textbook passages.

By utilizing question answering and reading comprehension methodologies, we develop a search algorithm for selecting relevant material within textbooks. The algorithm capitalizes on the inherent structure of textbooks, which intuitively reduces the search space. We evaluate question answering and information retrieval practices in the education domain beyond simple factoid questions, compare results to data collected from subject matter experts, and explore the limitations of these models as they pertain to textbooks.
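The talk's specific search algorithm is not described here, but a generic baseline for the passage-retrieval setting it addresses is TF-IDF ranking with cosine similarity. The passages and query below are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors (as dicts) for a list of passages."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))          # document frequency per term
    n = len(docs)
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

passages = [
    "photosynthesis converts light energy into chemical energy",
    "mitosis divides one cell nucleus into two identical nuclei",
    "light behaves as both a wave and a particle",
]
query = "how does photosynthesis use light energy"

vecs = tfidf_vectors(passages + [query])
qvec = vecs[-1]
scores = [cosine(qvec, v) for v in vecs[:-1]]
best = max(range(len(passages)), key=lambda i: scores[i])
```

Exploiting chapter and section structure, as the abstract suggests, would restrict this ranking to a subtree of the textbook rather than all passages.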


Efficient Mining and Indexing of Massive-Scale Temporal Data
March 21, 12:00 pm - 1:00 pm in Herring 129
Speaker: Chen Luo
Please indicate interest, especially if you want lunch, here.
Abstract: Temporal data is ubiquitous: time series and heterogeneous event streams are both temporal data. Mining and indexing temporal data is crucial for many real-world applications, such as system management for online services, automatic medical diagnosis, and analyzing HPC programs. In this talk, I will introduce two of my works on mining and indexing temporal data: (1) Correlating events with time series for incident diagnosis. In this work, we designed a framework for correlating event sequences with continuous time series data, which can be used for incident diagnosis in online services. (2) SSH (Sketch, Shingle, & Hash) for indexing massive-scale time series. In this work, we designed a hashing framework, named SSH, for indexing time series under the DTW measure. SSH can be around 20x faster than state-of-the-art methods.
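The SSH hashing scheme itself is not reproduced here, but the DTW measure it indexes under is standard dynamic programming. This minimal sketch (with illustrative sequences) shows why DTW, unlike pointwise distance, is robust to time shifts, and why its quadratic cost motivates indexing in the first place:

```python
import math

def dtw(a, b):
    """Dynamic-programming DTW distance between 1-D sequences a and b.
    O(len(a) * len(b)) time -- the cost that makes indexing hard."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],       # repeat a step in a
                                 D[i][j - 1],       # repeat a step in b
                                 D[i - 1][j - 1])   # advance both
    return D[n][m]

# The same bump shifted by one step: pointwise distance is large,
# but DTW aligns the shapes and reports zero distance.
x = [0, 0, 1, 2, 1, 0]
y = [0, 1, 2, 1, 0, 0]
pointwise = sum(abs(u - v) for u, v in zip(x, y))  # = 4
```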


PulseCam: Real-Time Personalized Health Monitoring via Machine Learning
March 7, 12:00 pm - 1:00 pm in Herring 129
Speaker: Mayank Kumar
Please indicate interest, especially if you want lunch, here.
Abstract: Over the past decade, deep learning and machine learning methods have brought about significant improvements in computer vision and speech recognition. These advances are usually attributed to deep models with higher learning capacity, a substantial increase in GPU-fueled computational power that made learning tens of millions of parameters possible, and the availability of large datasets essential to the generalizability of such deep models. But the collection of large-scale image and speech datasets in diverse settings is also fueled by the availability of cheap and ubiquitous sensors: mobile phone cameras and microphones. Comparable sensors are either unavailable or not deployable at large scale in a cost-effective manner for monitoring human health. There is therefore a lack of large-scale, population-level datasets in healthcare, and the goal of AI-enabled personalized and continuous health analytics remains a distant dream.

In this talk, I will present my Ph.D. research that enables CMOS cameras (e.g., smartphone cameras) to be used as non-invasive, clinically accurate, high-resolution blood flow imaging and vital sign monitoring sensors. The new imaging modality, named PulseCam, combines ideas from computational imaging, signal recovery, and computer vision to reliably measure spatial maps and temporal trends of peripheral blood flow. We tested our blood perfusion imaging modality by monitoring blood flow changes in the palms of patients undergoing surgery, and we show that we can detect when anesthesia is administered during surgery using only palm video recordings. In more controlled experiments in our lab, we show that PulseCam is more sensitive than existing contact-based measurement systems in detecting blood flow changes associated with small occlusion pressures. Camera-based blood flow imaging based on the PulseCam methodology therefore has the potential to be used both for real-time hemodynamic monitoring at the bedside in the ICU and operating room, and as a mobile-phone-based handheld imaging tool to visualize blood perfusion at surgical sites, wounds, and ulcers in an easy-to-use and low-cost manner.

Finally, I would like to motivate two new research directions. First, developing new inference algorithms that take the measured three-dimensional spatiotemporal blood flow maps as input to provide better insight into a patient’s hemodynamic state and cardiovascular health. Second, developing novel bio-sensors that can measure many more health parameters, such as blood hemoglobin level, blood cell count, blood oxygenation, and blood glucose level, in a cost-effective and scalable manner, aided by better computational inverse scattering algorithms, machine learning and deep learning ideas, and new computational cameras and illumination systems.


A Study of Neural Networks from a Generative Probabilistic Model Perspective
Feb. 28, 12:00 pm - 1:00 pm in Herring 129
Speaker: Tan Nguyen
Please indicate interest, especially if you want lunch, here.
Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks complicated by nuisance variation. For instance, visual object recognition involves unknown object position, orientation, and scale, while speech recognition involves unknown voice pronunciation, pitch, and speed. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks that routinely yields pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, a principled route to their improvement, and new avenues for exploration.


Degeneracy, Trainability, and Generalization in Neural Networks
Feb. 21, 12:00 pm - 1:00 pm in Herring 129
Speaker: Emin Orhan
Please indicate interest, especially if you want lunch, here.
Abstract: I will first discuss the reasons behind the difficulty of training deep neural networks. I will argue that the main difficulty is due to the “degeneracy problem”, which severely constrains the expressive capacity of neural networks. I will then argue that a very effective recent trick, adding skip connections between layers in a deep net, helps training at least partly by addressing this degeneracy problem. I will discuss how one can use this insight to design better skip connections. Finally, I will discuss some intriguing connections between degeneracy and generalization in neural networks.
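One common intuition for why skip connections help — the identity path keeps signal flowing through otherwise near-degenerate layers — can be demonstrated in a few lines of numpy. The toy untrained network below is illustrative only, not the speaker's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """One fully connected layer with ReLU activation."""
    return np.maximum(W @ x, 0.0)

def plain_net(x, weights):
    """Stack layers directly: x <- f(x)."""
    for W in weights:
        x = layer(x, W)
    return x

def skip_net(x, weights):
    """Identity skip connection around every layer: x <- x + f(x)."""
    for W in weights:
        x = x + layer(x, W)
    return x

d, depth = 8, 20
# Small random weights: a deliberately near-degenerate regime in
# which each plain layer shrinks the signal toward zero.
weights = [0.01 * rng.standard_normal((d, d)) for _ in range(depth)]
x = rng.standard_normal(d)

plain_out = plain_net(x, weights)   # signal collapses with depth
skip_out = skip_net(x, weights)     # identity path preserves it
```

The same contrast holds for gradients during backpropagation, which is the training-difficulty side of the argument.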


Data-Driven Computational Imaging
Feb. 14, 12:00 pm - 1:00 pm in Herring 129
Speaker: Chris Metzler
Please indicate interest, especially if you want lunch, here.
Abstract: Between ever-increasing pixel counts, ever-cheaper sensors, and the ever-expanding world wide web, natural image data has become plentiful. These vast quantities of data, be they high frame rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk, I will use data to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.


Energy-efficient Machine Learning Systems for Cloud and Edge Computing
Feb. 7, 12:00 pm - 1:00 pm in Duncan Hall 1055
Speaker: Yingyan Lin
Please indicate interest, especially if you want lunch, here.
Abstract: Machine learning (ML) algorithms are increasingly pervasive in tackling the data deluge of the 21st Century. Current ML systems adopt either a centralized cloud computing or a distributed mobile computing paradigm. In both paradigms, the challenge of energy efficiency has been drawing increased attention. In cloud computing, data transfer due to inter-chip, inter-board, inter-shelf and inter-rack communications (I/O interface) within data centers is one of the dominant energy costs. This will only intensify with the growing demand for increased I/O bandwidth for high-performance computing in data centers. On the other hand, in mobile computing, energy efficiency is the primary design challenge, as mobile devices have limited energy, computation and storage resources. This challenge is being exacerbated by the need to embed ML algorithms, such as convolutional neural networks (CNNs), for enabling local on-device inference capabilities.
In this talk, I will present holistic system-to-circuit approaches for addressing these energy efficiency challenges. First, I will describe the design of a 4 GS/s bit-error-rate optimal analog-to-digital converter in 90nm CMOS and its use in realizing an energy-efficient 4 Gb/s serial link receiver for I/O interface. Measurement results have shown that this technique provides a promising solution to the well-known interface power bottleneck problem in data centers. Next, I will describe two techniques that can potentially enable on-device deployment of CNNs by significantly reducing the energy consumption via algorithmic/architectural innovation. Finally, I will present some of our on-going research projects in the emerging area of machine learning on resource-constrained mobile platforms.


Closing the Loop on Learning and Acquisition: An Interactive Approach
Jan. 31, 12:00 pm - 1:00 pm in Duncan Hall 1055
Speaker: Gautam Dasarathy
Please indicate interest, especially if you want lunch, here.
Abstract: With rapid progress in our ability to acquire, process, and learn from data, the true democratization of data-driven intelligence has never seemed closer. However, there is a catch. Machine learning algorithms have traditionally been designed independently of the systems that acquire data. As a result, there is a stark disconnect between their promise and their real-world applicability. An urgent need has therefore emerged for integrating the design of learning and acquisition systems. In this talk, I will present my approach to addressing this disconnect using interactive, compressive, and multi-fidelity machine learning methods. In particular, I will consider a problem on learning structure in high-dimensional distributions, and highlight how traditional methods do not take into account constraints that arise in applications ranging from sensor networks to calcium imaging of the brain. I will then demonstrate how one can close this loop using interactive learning and will conclude with several fascinating directions for future exploration.


Recent Developments in Methods for Domain Adaptation
Jan. 24, 12:00 pm - 1:00 pm in Herring 129
Speaker: Tan Nguyen
Please indicate interest, especially if you want lunch, here.
Abstract: Deep learning models have achieved state-of-the-art performance on a wide range of computer vision tasks, including object recognition and image segmentation. The success of deep learning relies on massive amounts of labeled training data. In many applications, acquiring and annotating a large amount of real data is costly and sometimes even impossible. A solution to the lack of real annotated resources is to train a deep learning model on synthetic data and deploy it in real-life scenarios. Unfortunately, the differences between real and synthetic data significantly reduce the accuracy of the model at deployment. In this talk, I will review recent synthetic-to-real domain adaptation techniques that aim to reduce this performance gap. In particular, I will discuss feature-level methods such as ADDA, DAN, and Deep CORAL, and pixel-level methods such as SimGAN and PixelDA.
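As one concrete example of a feature-level method, the covariance-alignment term at the heart of Deep CORAL can be sketched in numpy. The feature batches below are hypothetical; in the full method this loss is added to a classification loss and minimized during training:

```python
import numpy as np

def coral_loss(source, target):
    """Covariance-alignment (CORAL) loss between two feature batches.

    source, target: (n, d) arrays of features. The loss is the squared
    Frobenius distance between their covariance matrices, scaled by
    1 / (4 d^2) as in the Deep CORAL formulation.
    """
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return float(np.sum((cs - ct) ** 2) / (4 * d * d))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 4))       # e.g. synthetic-domain features
tgt_far = rng.normal(0.0, 3.0, size=(200, 4))   # badly mismatched covariance
tgt_near = rng.normal(0.0, 1.0, size=(200, 4))  # well-matched covariance
```

Minimizing this term pushes the network to extract features whose second-order statistics agree across domains, so a classifier trained on synthetic features transfers better to real ones.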

If you are interested in speaking next semester, please indicate your interest here.

Photo Gallery: