This is the home page for the Machine Learning Lunch Series, sponsored by EOG Resources, at the School of Engineering. The coordinators are Michael Weylandt, Tan Nguyen, and Chen Luo, and the faculty coordinator is Reinhard Heckel. The mailing list is: ml-l@rice.edu.
Unless otherwise noted, the meetings are on Wednesdays at 12:00pm. 

Details about the ML lunch presentations are below. Lunch is provided for graduate students, postdoctoral scholars, and faculty.


Data-Driven Computational Imaging
Feb. 14, 12:00 pm - 1:00 pm in Herring 129
Speaker: Chris Metzler
Please indicate interest, especially if you want lunch, here.
Abstract: With ever-increasing pixel counts, ever-cheaper sensors, and the ever-expanding World Wide Web, natural image data has become plentiful. These vast quantities of data, be they high-frame-rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk, I will use data to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.

 

Energy-efficient Machine Learning Systems for Cloud and Edge Computing
Feb. 7, 12:00 pm - 1:00 pm in Duncan Hall 1055
Speaker: Yingyan Lin
Please indicate interest, especially if you want lunch, here.
Abstract: Machine learning (ML) algorithms are increasingly pervasive in tackling the data deluge of the 21st century. Current ML systems adopt either a centralized cloud computing or a distributed mobile computing paradigm. In both paradigms, the challenge of energy efficiency has been drawing increased attention. In cloud computing, data transfer due to inter-chip, inter-board, inter-shelf, and inter-rack communications (the I/O interface) within data centers is one of the dominant energy costs. This will only intensify with the growing demand for increased I/O bandwidth for high-performance computing in data centers. In mobile computing, on the other hand, energy efficiency is the primary design challenge, as mobile devices have limited energy, computation, and storage resources. This challenge is being exacerbated by the need to embed ML algorithms, such as convolutional neural networks (CNNs), to enable local on-device inference.
In this talk, I will present holistic system-to-circuit approaches for addressing these energy efficiency challenges. First, I will describe the design of a 4 GS/s bit-error-rate optimal analog-to-digital converter in 90nm CMOS and its use in realizing an energy-efficient 4 Gb/s serial link receiver for I/O interface. Measurement results have shown that this technique provides a promising solution to the well-known interface power bottleneck problem in data centers. Next, I will describe two techniques that can potentially enable on-device deployment of CNNs by significantly reducing the energy consumption via algorithmic/architectural innovation. Finally, I will present some of our on-going research projects in the emerging area of machine learning on resource-constrained mobile platforms.

 

Closing the Loop on Learning and Acquisition: An Interactive Approach
Jan. 31, 12:00 pm - 1:00 pm in Duncan Hall 1055
Speaker: Gautam Dasarathy
Please indicate interest, especially if you want lunch, here.
Abstract: With rapid progress in our ability to acquire, process, and learn from data, the true democratization of data-driven intelligence has never seemed closer. However, there is a catch. Machine learning algorithms have traditionally been designed independently of the systems that acquire data. As a result, there is a stark disconnect between their promise and their real-world applicability. An urgent need has therefore emerged for integrating the design of learning and acquisition systems. In this talk, I will present my approach to addressing this disconnect using interactive, compressive, and multi-fidelity machine learning methods. In particular, I will consider a problem on learning structure in high-dimensional distributions, and highlight how traditional methods do not take into account constraints that arise in applications ranging from sensor networks to calcium imaging of the brain. I will then demonstrate how one can close this loop using interactive learning and will conclude with several fascinating directions for future exploration.

 

Recent Developments in Methods for Domain Adaptation
Jan. 24, 12:00 pm - 1:00 pm in Herring 129
Speaker: Tan Nguyen
Please indicate interest, especially if you want lunch, here.
Abstract: Deep learning models have achieved state-of-the-art performance on a wide range of computer vision tasks, including object recognition and image segmentation. The success of deep learning relies on massive amounts of labeled training data. In many applications, acquiring and annotating large amounts of real data is costly and sometimes even impossible. A solution to the lack of real annotated resources is to train a deep learning model on synthetic data and deploy it in real-life scenarios. Unfortunately, the differences between real and synthetic data significantly reduce the accuracy of the model at deployment. In this talk, I will review recent synthetic-to-real domain adaptation techniques that aim to close this performance gap. In particular, I will discuss feature-level methods such as ADDA, DAN, and Deep CORAL, and pixel-level methods such as SimGAN and PixelDA.
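As a taste of what a feature-level method looks like: Deep CORAL adapts by aligning the second-order statistics (covariance matrices) of source and target features. The sketch below is an illustrative NumPy implementation of that covariance-alignment loss, not code from the talk; the function name and shapes are assumptions for the example.

```python
import numpy as np

def coral_loss(source, target):
    """Illustrative CORAL loss: squared Frobenius distance between the
    covariance matrices of source and target feature batches.
    source, target: (n_samples, d) arrays of extracted features."""
    d = source.shape[1]

    def cov(x):
        # Sample covariance of a batch of feature vectors.
        xm = x - x.mean(axis=0, keepdims=True)
        return xm.T @ xm / (x.shape[0] - 1)

    diff = cov(source) - cov(target)
    # Normalization by 4*d^2 follows the usual CORAL formulation.
    return np.sum(diff ** 2) / (4 * d * d)
```

Minimizing this loss (added to the task loss during training) pushes the network to produce features whose statistics match across synthetic and real domains.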

If you are interested in speaking next semester, please indicate your interest here.


Photo Gallery: