Colloquiums and Public Lectures 2018
Title: | Mathematical deep learning for drug discovery |
Speaker: | Professor Guowei Wei |
Date: | 17 December 2018 |
Time: | 10.30am - 11.30am |
Venue: | MAS Executive Classroom 2, MAS-03-07, School of Physical and Mathematical Sciences |
Host: | Division of Mathematical Sciences, School of Physical and Mathematical Sciences |
Abstract: | Designing efficient drugs to cure disease is essential to 21st-century life science. Computer-aided drug design and discovery has gained significant recognition in recent years. However, the geometric complexity of protein-drug complexes remains a grand challenge for conventional computational methods, including machine-learning algorithms. We assume that the intrinsic physics of interest in protein-drug complexes lies on low-dimensional manifolds or subspaces embedded in a high-dimensional data space. We devise topological abstraction, geometric simplification, graph reduction, and multiscale modeling to encode high-dimensional, massive, and diverse biological data into low-dimensional representations. These representations are integrated with advanced deep learning algorithms to predict protein-ligand binding affinity, drug toxicity, drug solubility, drug partition coefficients, and mutation-induced protein stability changes, and to discriminate active ligands from decoys. I will briefly discuss how this approach became the top performer in the D3R Grand Challenges, a worldwide competition series in computer-aided drug design and discovery (http://users.math.msu.edu/users/wei/D3R_GC3.pdf). |
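As a rough illustration of the general pipeline sketched above (low-dimensional structural descriptors fed into a learned regressor), the following Python snippet encodes random 3D point clouds as pairwise-distance histograms and fits a simple ridge regressor to a synthetic affinity. The histogram features, the synthetic data, and the ridge model are stand-ins chosen for illustration only, not the topological and deep learning machinery of the talk.

```python
# Hypothetical sketch: encode a 3D point cloud (a stand-in for a protein-ligand
# complex) as a low-dimensional pairwise-distance histogram, then fit a simple
# ridge regressor as a stand-in for the deep learning models discussed in the talk.
import numpy as np

rng = np.random.default_rng(0)

def distance_histogram(points, bins=20, r_max=15.0):
    """Crude geometric simplification: histogram of all pairwise distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(len(points), k=1)]          # unique pairs only
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
    return hist / hist.sum()                          # normalise to a descriptor

# Synthetic "complexes" and synthetic binding affinities (illustration only).
X = np.array([distance_histogram(rng.normal(scale=4.0, size=(60, 3)))
              for _ in range(200)])
y = X @ rng.normal(size=X.shape[1]) + 0.05 * rng.normal(size=len(X))

# Ridge regression in closed form: w = (X^T X + lam I)^{-1} X^T y
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("training RMSE:", np.sqrt(np.mean((X @ w - y) ** 2)))
```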
Title: | Lie group and homogeneous space variational integrators applied to geometric optimal control theory |
Speaker: | Professor Melvin Leok |
Date: | 3 December 2018 |
Time: | 4.00pm - 5.00pm |
Venue: | Lecture Theatre 4 (SPMS-03-09) |
Host: | Professor Bernhard Schmidt |
Abstract: | The geometric approach to mechanics serves as the theoretical underpinning of innovative control methodologies in geometric control theory. These techniques allow the attitude of a satellite to be controlled using changes in its shape, rather than chemical propulsion, and underlie our understanding of how a falling cat always lands on its feet, even when released in an inverted orientation. We will discuss the application of geometric structure-preserving numerical schemes to the optimal control of mechanical systems. In particular, we consider Lie group variational integrators, which are based on a discretization of Hamilton's principle that preserves the Lie group structure of the configuration space. In contrast to traditional Lie group integrators, issues of equivariance and order of accuracy are independent of the choice of retraction in the variational formulation. The importance of simultaneously preserving the symplectic and Lie group properties is also demonstrated. Recent extensions to homogeneous spaces yield intrinsic methods for Hamiltonian flows on the sphere, with potential applications to the simulation of geometrically exact rods, structures, and mechanisms. Extensions to Hamiltonian PDEs and uncertainty propagation on Lie groups using noncommutative harmonic analysis techniques will also be discussed. |
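To see why preserving the Lie group structure matters numerically, here is a minimal Python sketch (not the variational integrator of the talk): it propagates a rotation matrix with a naive additive Euler step and with an update through the exponential map, then measures how far each result drifts from SO(3). The constant angular velocity, step size, and step count are arbitrary illustrative choices.

```python
# Illustrative sketch: attitude propagation on SO(3), comparing a naive additive
# Euler step with an update through the exponential map, which keeps the
# rotation matrix exactly on the group.
import numpy as np

def hat(w):
    """Map a vector in R^3 to the corresponding skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def expm_so3(w):
    """Rodrigues' formula for exp(hat(w)) on SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

omega = np.array([0.3, -0.2, 0.5])      # assumed constant body angular velocity
h, n_steps = 0.1, 1000
R_naive = np.eye(3)
R_group = np.eye(3)
for _ in range(n_steps):
    R_naive = R_naive + h * R_naive @ hat(omega)   # additive step: drifts off SO(3)
    R_group = R_group @ expm_so3(h * omega)        # group step: stays on SO(3)

def orthogonality_error(R):
    return np.linalg.norm(R.T @ R - np.eye(3))

print("naive Euler drift from SO(3):", orthogonality_error(R_naive))
print("exponential-map drift       :", orthogonality_error(R_group))
```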
Title: | The beauty of mathematics shows itself to patient followers: The work of Maryam Mirzakhani |
Speaker: | Dr Daniel Mathews |
Date: | 26 September 2018 |
Time: | 1.30pm - 2.30pm |
Venue: | Lecture Theatre 2 (SPMS-03-03) |
Host: | Associate Professor Andrew James Kricker |
Abstract: | Maryam Mirzakhani was a brilliant, trailblazing mathematician, and the first woman to win a Fields Medal. She proved many incredible theorems across a range of fields on the cutting edge of pure mathematics. She was also an all-round excellent human being. Tragically, she passed away last year at the age of 40. In this talk I will discuss her life and work, and attempt to explain some of her mathematics and its implications. Along the way we'll see such things as moduli spaces, hyperbolic surfaces, the art of MC Escher, and fun facts about billiards. I will not assume any technical knowledge of these fields. |
Title: | Data-Driven Approach to Pricing Optimization |
Speaker: | Dr Yan Zhenzhen |
Date: | 14 September 2018 |
Time: | 1.30pm - 2.30pm |
Venue: | Lecture Theatre 3 (SPMS-03-02) |
Host: | Division of Mathematical Sciences, School of Physical and Mathematical Sciences |
Abstract: | We have developed an estimation and optimization framework for the multi-product pricing problem and the network pricing problem. The key feature is a convex model that approximates customers' choice response to price changes. The convex model exploits properties of the marginal distributions of the random shock in the customer's utility function. We show that, under the proposed choice model, the multi-product pricing problem becomes a convex optimization problem, assuming log-concavity of each marginal probability density function. With this approach, we use aggregate sales information from a set of pricing experiments to guide us to an appropriate consumer choice model without presuming a structural choice model, which partially addresses the problem of model misspecification in pricing. Extensive tests using both simulated and experimental data for two companies' multi-product pricing problems clearly demonstrate the benefits of the data-driven pricing approach. |
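As a toy illustration of pricing against a choice model, the sketch below evaluates expected revenue for two products under a standard multinomial logit model and optimizes prices by brute-force grid search. The logit model and the grid search are stand-ins for the talk's marginal-distribution-based convex model and convex reformulation; the attractiveness and price-sensitivity parameters are made up.

```python
# Hypothetical illustration: expected-revenue pricing under a multinomial logit
# choice model (standing in for the convex choice model developed in the talk),
# with a brute-force grid search instead of the convex reformulation.
import numpy as np

a = np.array([4.0, 3.0])       # assumed product attractiveness parameters
b = 1.0                        # assumed price sensitivity

def expected_revenue(p):
    """Revenue per arriving customer under MNL choice among two products + no-purchase."""
    u = np.exp(a - b * p)                     # exponentiated utilities of the products
    probs = u / (1.0 + u.sum())               # purchase probabilities (1 = outside option)
    return float(p @ probs)

grid = np.linspace(0.5, 8.0, 151)
best = max((expected_revenue(np.array([p1, p2])), p1, p2)
           for p1 in grid for p2 in grid)
print("best revenue %.3f at prices (%.2f, %.2f)" % best)
```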
Title: | Stories vs Statistics |
Speaker: | Professor John Allen Paulos |
Date: | 29 June 2018 |
Time: | 4.00pm - 5.00pm |
Venue: | Lecture Theatre 4 (SPMS-03-09) |
Host: | Division of Mathematical Sciences, School of Physical and Mathematical Sciences |
Abstract: | The talk will discuss the complex relationship between stories and statistics or, to vary the alliteration, between narratives and numbers. It will then turn to the most common mathematical mistakes in news reports and the media generally. No mathematical background is needed, just a bit of arithmetic, a little logic, and maybe a feel for probability. The talk is in two parts, the first a bit more theoretical, the second more topical and news-related. |
Title: | Branching diffusion representation for nonlinear Cauchy problems and Monte Carlo approximation |
Speaker: | Professor Nizar Touzi |
Date: | 28 February 2018 |
Time: | 1.30pm - 2.30pm |
Venue: | Lecture Theatre 4 (SPMS-03-09) |
Host: | Associate Professor Nicolas Privault |
Abstract: | We provide a probabilistic representation of the solutions of some semilinear hyperbolic and high-order PDEs based on branching diffusions. These representations pave the way for a Monte Carlo approximation of the solution, thus bypassing the curse of dimensionality. We illustrate the numerical implications in the context of some popular PDEs such as Burgers' equation, the nonlinear Klein-Gordon equation, a simplified scalar version of the Yang-Mills equation, a fourth-order nonlinear beam equation, and the Gross-Pitaevskii PDE as an example of nonlinear Schrödinger equations. |
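A classical special case of the branching-diffusion idea is McKean's representation of the KPP equation u_t = (1/2) u_xx + u^2 - u: the solution is the expected product of the initial condition evaluated at the particles of a binary branching Brownian motion. The Monte Carlo sketch below implements only this toy case with made-up parameters, not the general construction of the talk; for constant initial data the estimate can be checked against the logistic ODE.

```python
# Toy sketch of the branching-diffusion (McKean) representation for the KPP
# equation  u_t = (1/2) u_xx + u^2 - u,  u(0, x) = g(x):
#   u(t, x) = E[ prod_i g(X_i(t)) ],
# where the X_i(t) are the time-t positions of a binary branching Brownian
# motion started from x with unit branching rate. This is only a classical
# special case illustrating the idea, not the general construction of the talk.
import numpy as np

rng = np.random.default_rng(1)

def branching_estimate(g, t, x, n_samples=20000):
    total = 0.0
    for _ in range(n_samples):
        particles = [(x, t)]                         # (current position, remaining time)
        prod = 1.0
        while particles:
            pos, rem = particles.pop()
            tau = rng.exponential(1.0)               # time until this particle branches
            dt = min(tau, rem)
            pos = pos + np.sqrt(dt) * rng.normal()   # Brownian displacement over dt
            if tau >= rem:
                prod *= g(pos)                       # particle survives to time t
            else:
                particles.append((pos, rem - tau))   # binary branching: two offspring
                particles.append((pos, rem - tau))
        total += prod
    return total / n_samples

t = 1.0
g_const = lambda x: 0.3                              # constant data: exact solution known
exact = 0.3 / (0.3 + 0.7 * np.exp(t))                # logistic ODE u' = u^2 - u, u(0) = 0.3
print("Monte Carlo:", branching_estimate(g_const, t, 0.0), " exact:", exact)
print("sigmoid data at x=0:", branching_estimate(lambda x: 1 / (1 + np.exp(x)), t, 0.0))
```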
Title: | Singular values of matrices and facial recognition |
Speaker: | Professor John H. Hubbard |
Date: | 17 January 2018 |
Time: | 1.30pm - 2.30pm |
Venue: | Lecture Theatre 4 (SPMS-03-09) |
Host: | Associate Professor Andrew James Kricker |
Abstract: | The spectral theorem for symmetric matrices and its offspring, the singular value decomposition, are used by Facebook and no doubt many others in facial recognition. We have no clear notion of how we store and retrieve faces from memory; one may speculate that the brain also performs something like a singular value decomposition. This was pure speculation until Prof. Tsao (Caltech) showed that something similar is used by macaque monkeys. In Prof. Tsao's words: "In linear algebra, you learn that if you project a 50-dimensional vector space onto a one-dimensional subspace, this mapping has a 49-dimensional null space. We were stunned that, deep in the brain's visual system, the neurons are actually doing simple linear algebra. Each cell is literally taking a 50-dimensional vector space—face space—and projecting it onto a one-dimensional subspace. It was a revelation to see that each cell indeed has a 49-dimensional null space; this completely overturns the long-standing idea that single face cells are coding specific facial identities. Instead, what we've found is that these cells are beautifully simple linear projection machines." I will explain the connection between the variational proof of the spectral theorem, singular values, and the variance-covariance matrix, and discuss applications of these methods to facial recognition and other problems. |
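The "eigenface" flavour of this idea is easy to sketch in a few lines of numpy: centre a matrix of face vectors, take its singular value decomposition, and project each face onto the leading singular directions; each such projection is a rank-one map with a 49-dimensional null space when face space is 50-dimensional. The snippet below uses synthetic data and arbitrary dimensions purely for illustration; it is not a description of any system actually used in the work quoted above.

```python
# Minimal "eigenface"-style sketch in numpy (synthetic data, not a real face
# pipeline): centre the data, take the SVD, and project each face onto the
# leading singular directions -- the one-dimensional projections the quote refers to.
import numpy as np

rng = np.random.default_rng(2)

n_faces, n_dims, n_components = 200, 50, 5
# Synthetic "faces": a few latent factors plus noise, standing in for image vectors.
latent = rng.normal(size=(n_faces, n_components))
mixing = rng.normal(size=(n_components, n_dims))
faces = latent @ mixing + 0.1 * rng.normal(size=(n_faces, n_dims))

mean_face = faces.mean(axis=0)
X = faces - mean_face                        # centre so the SVD captures covariance
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Each row of Vt is a unit "axis" of face space; projecting onto one row maps the
# 50-dimensional face space to a single coordinate, so it has a 49-dimensional
# null space.
coords = X @ Vt[:n_components].T             # low-dimensional codes for each face
reconstruction = coords @ Vt[:n_components] + mean_face

print("variance captured by 5 components:", (s[:n_components]**2).sum() / (s**2).sum())
print("relative reconstruction error:",
      np.linalg.norm(reconstruction - faces) / np.linalg.norm(faces))
```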