Seminars 2012
Title: | Big Data in Observational Astronomy |
Speaker: | Professor Jogesh Babu |
Date: | 21 December 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Astronomy is among the first fields to encounter and learn from Big Data. Until a few decades ago, astronomers would typically compete for observation time on telescopes, spending cold nights at distant mountain-top observatories to collect data on a few stars and galaxies. This has changed substantially. Today, they pore over massive data sets delivered over high-speed internet connections to their office computers, devising automated procedures to identify objects. This shift is bringing information scientists, statisticians and astronomers together to collaborate on scientific investigations. Several ongoing and future astronomy projects will be discussed, where statisticians and applied mathematicians could contribute. |
Title: | Effective Modelling of Multi-Phase Continuum Systems |
Speaker: | Professor Arnaud Malan |
Date: | 11 December 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | The effective modelling of strongly coupled multi-phase continuum systems is currently an area of key interest in engineering. This talk presents novel computational fluid dynamics (CFD) technology by which to model complex systems which range from multi-species porous systems to fluid-structure-interaction and free-surface-interaction. A set of unified volume-averaged local thermal-disequilibrium governing equations is proposed to describe the general case. All non-linear variations in phenomenological coefficients are fully and accurately accounted for. Numerical techniques are developed to effect efficient discretization, and sparse solvers constructed to achieve fast and efficient solution on massively parallel computing platforms. The efficacy of the technology is demonstrated via application to engineering challenges of the day. These range from hygroscopic porous materials with and without phase change, to non-linear aeroelastics and low Mach number fluid-structure-interaction. High-resolution accuracy and robust, efficient modelling are demonstrated. |
Title: | New geometric applications of the linear programming method |
Speaker: | Professor Christine Bachoc |
Date: | 29 November 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Colloquium Room 1, MAS-05-36 |
Abstract: | The linear programming method is a well-known method widely used in coding theory to bound the size of codes with a given minimum distance. In this talk we will see an application of this method to a classical problem in Euclidean geometry, namely the so-called chromatic number of Euclidean space. When this method is combined with the Frankl-Wilson intersection theorems, it improves the previous asymptotic estimates for the measurable chromatic number of Euclidean space. |
Title: | Decoding Reed-Solomon codes: iterative methods and the finite ring case |
Speaker: | Associate Professor Margreta Kuijper |
Date: | 8 November 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | In this talk we address Reed-Solomon codes and related codes, such as Gabidulin codes. Such codes allow for algebraic decoding methods; minimal Gröbner bases are ideal tools for this. We consider minimal interpolation and advocate iterative algorithms that construct a minimal Gröbner basis at each step. Already in 1995, Fitzpatrick introduced such an algorithm, resembling the Berlekamp-Massey algorithm. In the talk we explicitly use what we call the "predictable leading monomial" property of minimal Gröbner bases. The talk then turns to the finite ring case Z_{p^r}. For this case minimal Gröbner bases are less useful due to a lack of uniqueness. To overcome this obstacle, we introduce the novel idea of Gröbner p-bases. By using the essential Gröbner idea in this way, it is possible to tackle a whole range of interpolation and recurrence problems over Z_{p^r}, as will be demonstrated in the talk. |
Title: | Degenerate Spatial-Temporal Process |
Speaker: | Dr Wanli Min |
Date: | 22 October 2012 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Many applied problems deal with information streaming over a fixed sensor network. Often the univariate time series from a given sensor exhibits heteroscedasticity. In the talk, we will show that heteroscedasticity, or dependence of innovations in a broader sense, may invalidate the asymptotic behavior of many popular classical statistics. We developed a theoretical framework to establish asymptotic properties (central limit theorem and invariance principle) under very mild conditions on the dependence structure, in particular without the celebrated mixing conditions. A model specification procedure for multivariate time series was developed accordingly, with robustness to dependence in the innovations. The theory is further applied to analyze information streaming over fixed network structures. We developed a statistical approach to uncovering the hidden degeneracy of a seemingly high-dimensional network. We will share an example of its application in network flow prediction and optimal intervention. |
Title: | Briefing of Computational Electromagnetics in Temasek Laboratories at NUS: Problem Oriented CEM Development for Modeling Wave Propagation and Scattering Phenomena |
Speaker: | Dr Chao-Fu WANG |
Date: | 17 October 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Efficient computational electromagnetic (CEM) modeling of wave propagation and scattering problems is becoming increasingly important in electrical engineering. Many CEM methods and techniques have been developed for simulating scattering from large and complex targets, modeling antenna systems, analyzing and designing RF/microwave devices, bio-medical applications, EMC/EMI evaluation, and so on. This talk will give a briefing on our research activities in developing fast integral equation solvers for realizing efficient CEM modeling of real problems in Temasek Laboratories (TL) @ NUS, which covers: (1) Background introduction of some real EM problems; (2) Introduction of some integral equations used in computational electromagnetics; (3) Methodologies developed in TL @ NUS; (4) Examples simulated to show our TL CEM capabilities; and (5) Methodologies to be developed for real applications. Moreover, this talk will also briefly discuss the limitations of current CEM methods and possible directions to go in my personal view. |
Title: | Fitting Mixtures of Skew Normal and Skew t-Distributions with Applications in Flow Cytometry |
Speaker: | Professor Geoff McLachlan |
Date: | 11 October 2012 |
Time: | 4.00pm - 5.00pm |
Venue: | MAS Executive Classroom 2 |
Abstract: | Flow cytometry is one of the fundamental research tools available to the life scientist. With modern-day machines now capable of providing measurements on at least 20 markers for a cell population, there is a need for an automated approach to the analysis of flow cytometric data. We consider here an automated approach based on mixture models using skew t-distributions. These components can handle clusters that are skewed and have outliers, as is the case for flow cytometric data. The performance of this approach is demonstrated by its application to some real data sets. |
Title: | Algorithms for Data Management and Migration |
Speaker: | Professor Samir Khuller |
Date: | 2 October 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | We will consider some fundamental optimization problems that arise in the context of data storage and management. In the first part of the talk we will address the following question: How should we store data in order to effectively cope with non-uniform demand for data? How many copies of popular data objects do we need? Where should we store them for effective load balancing? In the second part of the talk we will address the issue of moving data objects quickly, to react to changing demand patterns. We will develop approximation algorithms for both these problems. The first part of the talk is joint work with Golubchik, Khanna, Thurimella and Zhu. The second part is joint work with Kim and Wan. |
Title: | Envelope Models and Methods |
Speaker: | Professor Dennis Cook |
Date: | 28 September 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1 |
Abstract: | We will discuss a new approach to estimation in the classical multivariate linear model that yields estimators of the coefficient matrix with the potential to be substantially less variable asymptotically than the standard estimators. The new approach arises by recognizing that the response vector may contain information that is immaterial to the purpose of estimating the coefficients, but can still introduce substantial extraneous variation into estimation. This idea leads to a general construct, called an envelope, which uses invariant subspaces to link the conditional mean and covariance matrix, thereby removing the immaterial information and effectively reducing the dimension of the parameter space. Emphasis will be placed on the fundamental concepts and their potential impact on data analysis. Recent work in the area will be described briefly, including extensions beyond the multivariate linear model. |
Title: | Matroids and Network-Error Correcting Codes |
Speaker: | Professor B. Sundar Rajan |
Date: | 14 September 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is matroidal associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. This talk attempts to establish a connection between matroid theory and network-error correcting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of network-error correcting codes to arrive at the definition of a "matroidal error correcting network". An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error correcting network code if and only if it is a matroidal error correcting network associated with a representable matroid. Therefore, constructing such network-error correcting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms which enable the construction of scalar linearly solvable multicast and multiple-unicast networks with a specified capability of network-error correction. Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multicast and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters such as the number of information symbols, number of sinks, number of network coding nodes, and error correcting capability being arbitrary, limited only by computing power. (Joint work with my Ph.D. student Krishnan Prasad.) |
Title: | An alternating selection-optimization approach for an additive multi-index model |
Speaker: | Professor Lixing Zhu |
Date: | 27 August 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Title: | Minimax risk for sparse orthogonal regression revisited |
Speaker: | Professor Iain Johnstone |
Date: | 23 August 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1 |
Abstract: | Consider the classical problem of estimation of the mean of an $n$-variate normal distribution with identity covariance under squared error loss. In this talk, we review exact and approximate models of sparsity for the mean vector and some of the associated minimax mean squared error properties. In particular, we describe some results, developed for a forthcoming book, for the 'highly sparse' regime in which the number of non-zero components remains bounded as $n$ increases. |
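In the highly sparse regime described above, simple thresholding rules already come close to the minimax risk. As an illustration (our own sketch, not material from the talk), hard thresholding at the universal level $\sqrt{2\log n}$ dramatically beats the raw maximum-likelihood estimator when only a few components of the mean are non-zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 5                  # dimension and number of non-zero means
mu = np.zeros(n)
mu[:k] = 10.0                     # a few large non-zero components

y = mu + rng.standard_normal(n)   # observe y ~ N(mu, I_n)

t = np.sqrt(2 * np.log(n))        # universal threshold sqrt(2 log n)
hard = np.where(np.abs(y) > t, y, 0.0)   # hard-thresholding estimator

risk_hard = np.sum((hard - mu) ** 2)     # squared-error loss, thresholding
risk_mle = np.sum((y - mu) ** 2)         # squared-error loss of raw y (~ n)
print(risk_hard, risk_mle)
```

The loss of the thresholding estimator stays on the order of the number of non-zero components (times a log factor), while the MLE pays for all n coordinates.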
Title: | Confidence in nonparametric Bayesian credible sets? |
Speaker: | Professor Aad van der Vaart |
Date: | 22 August 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1 |
Abstract: | Nonparametric inference has the purpose of not making restrictive a priori assumptions. Nonparametric Bayesian inference starts with a prior on a function space, which should not restrict the shape of the unknown function too much. We illustrate this with the example of Gaussian process priors. The usual Bayesian machine produces a posterior distribution, also on the function space, which one would like to use both for estimating the unknown function and for quantifying the remaining uncertainty of the inference. We study the success of these procedures from a frequentist perspective. For the second this involves the frequentist coverage of a posterior credible set, a central set of prescribed posterior probability. We show that there is a danger of prior oversmoothing, and we ask some questions about preventing this by a hierarchical or empirical Bayes method. |
Title: | Rate of Adaptation under Selection and Recombination |
Speaker: | Dr Yuxin Yang |
Date: | 14 August 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Evolution is the result of many competing and balancing mechanisms including mutation, natural selection, genetic drift and recombination. The 'rate of adaptation' measures how quickly populations adapt to a new environment by incorporating beneficial mutations. The nonlinear effects of natural selection and recombination on the rate of adaptation are difficult to quantify. We discuss how the Girsanov transform can be applied to deal with such difficulties. This is based on joint work with Feng Yu. |
Title: | Generalised Clark-Ocone Formulae |
Speaker: | Dr Yuxin Yang |
Date: | 10 August 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | The Clark-Ocone formula for functions on the classical Wiener space can be generalised to give analogous representations for differential forms. Such formulae provide explicit expressions for closed and co-closed differential forms, hence a new proof for the triviality of the space of L2 harmonic forms on the Wiener space, alternative to Shigekawa's original approach (1986). This new approach has the potential of carrying over to curved path spaces. In particular, we discuss the triviality of the first L2 cohomology class of based path spaces over Riemannian manifolds furnished with Brownian motion measure, and the consequent vanishing of L2 harmonic one-forms. This is based on joint work with K. D. Elworthy. |
Title: | Estimating Population Eigenvalues From Large Dimensional Sample Covariance Matrices |
Speaker: | Professor Jack W. Silverstein |
Date: | 31 July 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Let $B_n = (1/N) T_n^{1/2} X_n X_n^* T_n^{1/2}$, where $X_n = (X_{ij})$ is $n \times N$ with i.i.d. complex standardized entries, and $T_n^{1/2}$ is a Hermitian square root of the nonnegative definite Hermitian matrix $T_n$. This matrix can be viewed as the sample covariance matrix of $N$ i.i.d. samples of the $n$-dimensional random vector $T_n^{1/2} (X_n)_{\cdot 1}$, the latter having $T_n$ for its population covariance matrix. Quite a bit is known about the behavior of the eigenvalues of $B_n$ when $n$ and $N$ are large but on the same order of magnitude. These results are relevant in situations in multivariate analysis where the vector dimension is large, but the number of samples needed to adequately approximate the population matrix (as prescribed in standard statistical procedures) cannot be attained. Work has been done in estimating the eigenvalues of $T_n$ from those of $B_n$. This talk will introduce a method devised by X. Mestre, and will present an extension of his method to another ensemble of random matrices important in wireless communications. |
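The phenomenon motivating this estimation problem is easy to see numerically. In the simplest case $T_n = I$ (all population eigenvalues equal to 1), the sample eigenvalues do not concentrate at 1 but spread over the Marchenko-Pastur support $[(1-\sqrt{c})^2, (1+\sqrt{c})^2]$ with $c = n/N$. A small illustrative experiment (ours, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 400, 1600                  # dimension n, sample size N; c = 0.25
c = n / N

X = rng.standard_normal((n, N))   # T_n = I; real entries for simplicity
B = X @ X.T / N                   # sample covariance matrix
ev = np.linalg.eigvalsh(B)        # its eigenvalues

# Marchenko-Pastur support for T_n = I
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(ev.min(), ev.max(), lo, hi)
```

All population eigenvalues are 1, yet the sample eigenvalues fill out roughly [0.25, 2.25]; recovering the population spectrum from such spread-out data is exactly the inverse problem the talk addresses.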
Title: | About the spectral analysis of large Markov kernels |
Speaker: | Professor Djalil Chafai |
Date: | 26 July 2012 |
Time: | 2.30pm - 3.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | This talk concerns the spectral analysis of models of random Markov kernels, centered on joint work with Charles Bordenave and Pietro Caputo, with a “large random matrices” point of view. These models can be seen as random walks or Markov chains in random environment. From the random matrix theory point of view, these random matrices have dependent entries, and what is known concerns mostly first-order asymptotics for empirical spectral distributions, the spectral edge, and the principal eigenvector. In particular, the study of the asymptotic fluctuations is completely open. |
Title: | Polar Codes: From Theory To Practice |
Speaker: | Professor Alexander Vardy |
Date: | 16 July 2012 |
Time: | 10.00am - 11.00am |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | The discovery of polar codes is widely regarded as THE major breakthrough in coding theory in the past decade. These codes provably achieve the capacity of any discrete memoryless symmetric channel, with low encoding and decoding complexity. We will start with a brief primer on polar codes, and some of their applications beyond point-to-point communications. Until recently, however, polar codes were mainly of theoretical interest, since two key obstacles prevented their utilization in practice. Polar codes certainly cannot be used in practice unless there is an efficient method to *construct* them. Although numerous heuristics for this problem have been proposed, none produced a construction algorithm that runs in polynomial time and provides explicit guarantees on the quality of its output. In this talk, we will describe an algorithm that does just that. Our algorithm is based on the idea of channel degrading/upgrading and runs in linear time. It works extremely well in practice, and has become the de facto standard for constructing polar codes. Another key problem in the theory of polar codes was that of improving their performance at short and moderate code lengths. It has been observed empirically that polar codes do not perform as well as turbo codes or LDPC codes at such lengths. However, we have recently shown that significant gains can be attained using a *list-decoding algorithm* for polar codes. Our algorithm retains the desirable properties of the conventional decoder, such as low complexity and recursive structure. Simulations on the BPSK-modulated AWGN channel show that, already at length 2048, list-decoding of polar codes outperforms the best-known LDPC codes (e.g. the LDPC code used in the WiMax standard). |
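For one channel, the binary erasure channel, the construction problem mentioned above has an exact closed-form answer that conveys the flavor of polarization: the erasure probabilities of the synthetic channels evolve as Z -> 2Z - Z^2 (degraded branch) and Z -> Z^2 (upgraded branch), and the code is built by freezing the least reliable positions. A minimal sketch (illustrative; the talk's degrading/upgrading algorithm handles general channels):

```python
import numpy as np

def bec_polar_z(m, eps):
    """Erasure probabilities of the 2**m synthetic channels obtained by
    polarizing a BEC(eps): Z -> 2Z - Z**2 (worse) and Z -> Z**2 (better)."""
    z = np.array([eps])
    for _ in range(m):
        znew = np.empty(2 * z.size)
        znew[0::2] = 2 * z - z ** 2   # "minus" (degraded) channels
        znew[1::2] = z ** 2           # "plus" (upgraded) channels
        z = znew
    return z

z = bec_polar_z(10, 0.5)              # 1024 synthetic channels of a BEC(1/2)
info = np.argsort(z)[:512]            # rate-1/2 code: 512 most reliable
print(z.min(), z.max(), z.mean())
```

Note that the average erasure probability is preserved at every step (capacity is conserved), while the individual values polarize toward 0 or 1; the all-upgraded channel (last index) is essentially noiseless and the all-degraded one (index 0) is useless.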
Title: | Factor Modelling for High-Dimensional Time Series: A Dimension-Reduction Approach |
Speaker: | Professor Qiwei Yao |
Date: | 14 June 2012 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Following a brief survey of the factor models for multiple time series in econometrics, we introduce a statistical approach from the viewpoint of dimension reduction. Our method can handle nonstationary factors. However, under stationary settings, the inference is simple in the sense that both the number of factors and the factor loadings are estimated in terms of an eigenanalysis of a non-negative definite matrix, and is therefore applicable when the dimension of the time series is in the order of a few thousand. Asymptotic properties of the proposed method are investigated under two settings: (i) the sample size goes to infinity while the dimension of the time series is fixed; and (ii) both the sample size and the dimension of the time series go to infinity together. In particular, our estimators for zero-eigenvalues enjoy faster convergence (or divergence) rates, which makes the estimation of the number of factors easier. Furthermore, the estimation of both the number of factors and the factor loadings shows the so-called "blessing of dimensionality" property. A two-step procedure is investigated for better identification of the number of factors when the factors are of different degrees of strength. Numerical illustration with both simulated and real data is also reported. |
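The eigenanalysis step can be sketched concretely. In the style of this line of work, one forms a non-negative definite matrix from lagged sample autocovariances, estimates the loading space by its top eigenvectors, and picks the number of factors by an eigenvalue-ratio criterion. A simulation sketch under our own assumptions (AR(1) factors, Gaussian loadings; not the talk's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, r = 2000, 20, 2             # length, dimension, true number of factors

# latent AR(1) factors and a random loading matrix
f = np.zeros((T, r))
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.standard_normal(r)
A = rng.standard_normal((p, r))
y = f @ A.T + rng.standard_normal((T, p))   # observed series y_t = A f_t + e_t

# M = sum over lags k of Sigma_y(k) Sigma_y(k)^T  (non-negative definite)
yc = y - y.mean(0)
M = np.zeros((p, p))
for k in range(1, 4):
    S = yc[k:].T @ yc[:-k] / T                # lag-k sample autocovariance
    M += S @ S.T

ev = np.linalg.eigvalsh(M)[::-1]              # eigenvalues, descending
ratios = ev[1:] / ev[:-1]
r_hat = int(np.argmin(ratios[:p // 2])) + 1   # eigenvalue-ratio estimate
print(r_hat)
```

The r leading eigenvalues are driven by the factors and dwarf the rest, so the sharpest drop in the eigenvalue ratios recovers the number of factors.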
Title: | Computing with Evolving Data |
Speaker: | Professor Eli Upfal |
Date: | 16 May 2012 |
Time: | 4.30pm - 5.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | We formulate and study a new computational model for dynamic data. In this model, the data changes gradually in time and the computation has access to only a small part of the data in each step. The goal is to design algorithms that output solutions to computational problems on the data at any given time. As the data is constantly changing and the algorithm may be unaware of these changes, it cannot be expected to always output the exact solution; we are interested in algorithms that guarantee good approximate solutions. We study fundamental computational problems, including sorting and selection, where the true ordering of the elements changes in time and the algorithm can only probe in each step the order of a few pairs; and connectivity and minimum spanning trees in graphs, where edges' existence and weights change over time and the algorithm can only track these changes by probing a few vertices or edges per step. This framework captures the inherent trade-off between the complexity of maintaining an up-to-date view of the data and the quality of results computed with the available view. (Joint work with Aris Anagnostopoulos, Ravi Kumar, Mohammad Mahdian, and Fabio Vandin). |
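The sorting version of this model is easy to simulate. In the toy experiment below (our own sketch, not code from the talk), the true order performs one random adjacent swap per step while the algorithm probes one adjacent pair of its current estimate per step, sweeping repeatedly as in bubble sort; the Kendall tau distance between estimate and truth then stays far below the ~n^2/4 distance of a random order:

```python
import random

random.seed(3)
n, steps = 50, 20000
truth = list(range(n))            # current true ranking
est = list(range(n))              # algorithm's estimated order

i = 0
for t in range(steps):
    # the data evolves: one random adjacent swap in the true order
    j = random.randrange(n - 1)
    truth[j], truth[j + 1] = truth[j + 1], truth[j]

    # the algorithm probes one adjacent pair of its estimate
    a, b = est[i], est[i + 1]
    if truth.index(a) > truth.index(b):   # simulated comparison oracle
        est[i], est[i + 1] = b, a
    i = (i + 1) % (n - 1)                 # sweep positions like bubble sort

# Kendall tau distance between the estimate and the current truth
pos = {item: k for k, item in enumerate(truth)}
inv = sum(1 for x in range(n) for y in range(x + 1, n)
          if pos[est[x]] > pos[est[y]])
print(inv, n * (n - 1) // 2)
```

The `truth.index` calls only simulate the comparison oracle; the algorithm itself sees nothing but the outcome of its one probe per step.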
Title: | Polynomial Mappings |
Speaker: | Professor Michael Zieve |
Date: | 14 May 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Superconvergence properties for some high-order orthogonal polynomial interpolations are studied. The results are twofold: When interpolating function values, we identify those points where the first and second derivatives of the interpolant converge faster; when interpolating the first derivative, we locate those points where the function value of the interpolant superconverges. For both cases we consider various Chebyshev polynomials, but for the latter case, we also include the counterpart Legendre polynomials. |
Title: | Bayesian Models for Variable Selection that Incorporate Biological Information |
Speaker: | Professor Marina Vannucci |
Date: | 9 May 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | The analysis of the high-dimensional genomic data generated by modern technologies, such as DNA microarrays, poses challenges to standard statistical methods. In this talk I will describe how Bayesian methodologies can be successfully employed in the analysis of such data. I will look at linear models that relate a phenotypic response to gene expression data and employ variable selection methods for the identification of the predictive genes. The vast amount of biological knowledge accumulated over the years has allowed researchers to identify various biochemical interactions and define different families of pathways. I will show how such information can be incorporated into the model for the identification of pathways and pathway elements involved in particular biological processes. |
Title: | Extending “Matrix” and “Inverse Matrix” Methods: Another look at Barron’s Approach |
Speaker: | Li Tang |
Date: | 25 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | The problem of misclassification is common in epidemiological and clinical research. Sometimes misclassification may exist in both exposure and outcome variables. It is well known that the validity of analytic results (e.g., estimates of odds ratios of interest) might be questionable when no correction effort is made. Therefore, valid and accessible methods with which to deal with these issues are still in high demand. Here we elucidate extensions of well-studied methods in order to facilitate misclassification adjustment when a binary outcome and binary exposure variable are both subject to misclassification. By formulating generalizations of assumptions underlying the original “matrix” method (Barron, 1977) and an alternative “inverse matrix” method (Marshall, 1990) into the framework of maximum likelihood, we establish methods to facilitate differential misclassification adjustment in 2×2 tables. We place heavy emphasis on the use of internal validation data in order to evaluate a richer set of misclassification mechanisms. The value and application of our approach are demonstrated by means of simulations and detailed analysis of bacterial vaginosis and trichomoniasis data in the HIV Epidemiology Research Study (HERS). |
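The core "matrix method" idea is simple linear algebra: with known sensitivity and specificity, the observed classification probabilities are a misclassification matrix times the true ones, so the true distribution is recovered by inverting that matrix. A minimal sketch with hypothetical numbers (the talk's methods embed this in a maximum-likelihood framework with validation data):

```python
import numpy as np

# Hypothetical sensitivity/specificity for a misclassified binary exposure
se, sp = 0.9, 0.8
M = np.array([[se, 1 - sp],          # columns: truly exposed / unexposed
              [1 - se, sp]])         # rows: classified exposed / unexposed

true_p = np.array([0.3, 0.7])        # true exposure distribution
obs_p = M @ true_p                   # distribution we would actually observe

corrected = np.linalg.solve(M, obs_p)   # misclassification-adjusted estimate
print(corrected)
```

Inverting the matrix exactly undoes the distortion here because se and sp are known; in practice they are estimated from internal validation data, which is why the likelihood-based formulation in the talk matters.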
Title: | CRRA Utility Maximization under Risk Constraint |
Speaker: | Professor Anthony Réveillac |
Date: | 17 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Colloquium Room 1, MAS-05-36 |
Abstract: | In this talk, we will study the problem of optimal investment with CRRA (constant relative risk aversion) preferences, subject to dynamic risk constraints on trading strategies. The market model considered is continuous in time and incomplete, and the prices of financial assets are modeled by Itô processes. The dynamic risk constraints, which are time and state dependent, are generated by risk measures. Optimal trading strategies are characterized by a quadratic Backward Stochastic Differential Equation. Numerical results will emphasize the effects of imposing risk constraints on trading. |
Title: | Data Transmission and Compression in Networks at Blocklength = 1000 |
Speaker: | Dr Vincent Tan Yan Fu |
Date: | 13 April 2012 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | We discuss the fundamental limits of distributed lossless source coding (Slepian-Wolf) when the blocklength is finite. The ideas are then extended to provide fundamental limits for communication over multiple-access and broadcast channels in the finite blocklength setting. |
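For the single-source case, the flavor of finite-blocklength results is captured by the normal (dispersion) approximation: the minimal compression rate at blocklength n and error probability eps is roughly H + sqrt(V/n) * Q^{-1}(eps), where V is the varentropy of the source. A sketch for an i.i.d. Bernoulli source (our own illustration of the general idea, not a result stated in the abstract):

```python
from math import log2, sqrt
from statistics import NormalDist

def rate_normal_approx(p, n, eps):
    """Normal-approximation to the minimal rate (bits/symbol) for lossless
    compression of an i.i.d. Bernoulli(p) source at blocklength n, error eps."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)           # entropy H(p)
    # varentropy V(p) = Var[-log2 P(X)]
    v = p * (-log2(p) - h) ** 2 + (1 - p) * (-log2(1 - p) - h) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)              # Q^{-1}(eps)
    return h + sqrt(v / n) * q_inv

r = rate_normal_approx(0.11, 1000, 1e-3)
print(r)
```

At blocklength 1000 the required rate sits visibly above the entropy (about 0.5 bits here), and the gap shrinks like 1/sqrt(n); the talk extends this style of analysis to Slepian-Wolf and to multiple-access and broadcast channels.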
Title: | Combinatorial, discretized and multivariate normal approximations by Stein’s method |
Speaker: | Xiao Fang |
Date: | 13 April 2012 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Since its introduction in 1972, Stein’s method has been widely used to prove distributional approximation results for random variables with complicated dependence structures. In this talk, we present some recent advances in normal approximation by Stein’s method. Combinatorial central limit theorem was motivated by permutation tests in non-parametric statistics. By proving a general bound on the Levy concentration function, we obtain a Berry-Esseen bound for a combinatorial central limit theorem where the components of the matrix are assumed to be independent random variables. Inspired by the idea of continuity correction, we prove total variation bounds between distributions of integer valued random variables and discretized normal distributions under the framework of Stein coupling. We apply the result to the number of vertices with a given degree in a random graph and the uniform multinomial occupancy model. By extending the concentration inequality approach and the recursive approach in Stein’s method to the multivariate setting, we provide new results for multivariate normal approximation on convex sets. As an application, we propose a homogeneity test in dense random graphs. |
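The combinatorial central limit theorem mentioned above concerns statistics of the form W = sum_i a_{i, pi(i)} for a uniform random permutation pi and a fixed matrix (a_{ij}), with exact mean A.sum()/n and Hoeffding's variance formula (1/(n-1)) * sum_{ij} d_{ij}^2, where d_{ij} = a_{ij} - row mean - column mean + grand mean. A simulation sketch of the approximation (illustrative only; the talk is about proving rates for it):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 30, 5000
A = rng.standard_normal((n, n))          # fixed matrix of independent entries

# W = sum_i a_{i, pi(i)} for a uniform random permutation pi
W = np.array([A[np.arange(n), rng.permutation(n)].sum() for _ in range(reps)])

# exact mean and Hoeffding variance of the combinatorial statistic
mean_W = A.sum() / n
d = A - A.mean(axis=1, keepdims=True) - A.mean(axis=0, keepdims=True) + A.mean()
var_W = (d ** 2).sum() / (n - 1)

Wstd = (W - mean_W) / np.sqrt(var_W)
frac = np.mean(np.abs(Wstd) < 1.96)      # should be close to 0.95
print(frac)
```

Already at n = 30 the standardized statistic is close to normal; Berry-Esseen-type bounds of the kind discussed in the talk quantify exactly how close.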
Title: | Size of orthogonal sets of exponentials for the disk |
Speaker: | Professor Mihalis Kolountzakis |
Date: | 12 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Suppose $\Lambda \subseteq \mathbb R^2$ has the property that any two exponentials with frequency from $\Lambda$ are orthogonal in the space $L^2(D)$, where $D \subseteq \mathbb R^2$ is the unit disk. Such sets $\Lambda$ are known to be finite (by work of Iosevich and Rudnev and by Fuglede) but it is not known if their size is uniformly bounded. We show that if there are two elements of $\Lambda$ which are distance $t$ apart then the size of $\Lambda$ is $O(t)$. As a consequence we improve a result of Iosevich and Jaming and show that $\Lambda$ has at most $O(R^{2/3})$ elements in any disk of radius $R$. As is usual in this field, our method works by studying properties of the zero set of the Fourier transform of the disk. This imposes constraints on the interpoint distances of points of $\Lambda$, essentially that they are nearly integers. This is joint work with Alex Iosevich (Rochester). |
Title: | Robust Recovery-Based a posteriori Error Estimators for Finite Element Methods |
Speaker: | Dr Shun Zhang |
Date: | 10 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | The Zienkiewicz-Zhu (ZZ) a posteriori error estimator, as an example of recovery-based error estimators, is extremely popular in the engineering community. But it is also well known that if we apply the ZZ error estimator naively to complicated problems like elliptic interface equations, it will over-refine regions with small error and is not robust. In this talk, by using the intrinsic continuities of the underlying problem and the properties of different finite element discretizations, we present a unifying framework for constructing robust recovery-based error estimators. Different types of finite element approximations (conforming, mixed, nonconforming, and DG) and partial differential equations will be discussed. |
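The basic ZZ mechanism is easy to demonstrate in one dimension: recover a continuous gradient by averaging the elementwise gradients at the nodes, and use the distance between recovered and raw gradients as the error estimate. A minimal sketch on a smooth model problem, where the global effectivity index comes out near 1 (our own illustration; the talk is about making such estimators robust for much harder problems):

```python
import numpy as np

# P1 finite element interpolant of u(x) = sin(pi x) on a uniform mesh of [0,1]
N = 32
x = np.linspace(0.0, 1.0, N + 1)
h = np.diff(x)
u = np.sin(np.pi * x)                      # nodal values
grad = np.diff(u) / h                      # piecewise-constant gradient

# ZZ-style recovery: average neighboring element gradients at interior nodes
g = np.empty(N + 1)
g[1:-1] = 0.5 * (grad[:-1] + grad[1:])
g[0], g[-1] = grad[0], grad[-1]

# estimator: elementwise L2 norm of (recovered - raw) gradient;
# for a linear with endpoint values a, b: integral = h/3 (a^2 + a b + b^2)
a, b = g[:-1] - grad, g[1:] - grad
eta2 = h / 3 * (a ** 2 + a * b + b ** 2)

# true gradient error via 2-point Gauss quadrature on each element
q = 1 / np.sqrt(3)
err2 = np.zeros(N)
for s in (-q, q):
    xm = x[:-1] + h * (1 + s) / 2
    err2 += h / 2 * (np.pi * np.cos(np.pi * xm) - grad) ** 2

eff = np.sqrt(eta2.sum() / err2.sum())     # global effectivity index
print(eff)
```

On this smooth problem the recovered gradient is superconvergent at the nodes, so the estimator tracks the true error closely; the failure of exactly this mechanism across interfaces is what the talk's robust framework addresses.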
Title: | A Semiparametric Approach to Secondary Analysis of Nested Case-Control Data |
Speaker: | Professor Agus Salim |
Date: | 4 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Many epidemiological studies use a nested case-control (NCC) design to reduce cost while maintaining study power. However, since the sampling was conditional on the primary outcome and matching variables, routine application of ordinary logistic regression to analyze a secondary outcome will generally produce biased odds-ratios. Recently, several authors have proposed methods that can be used for this secondary outcome. These methods are based on either weighted likelihood (Samuelsen, 1997; Salim et al. 2009) or full maximum likelihood (Saarela et al., 2008). A common feature of all methods developed so far is that they require the availability of survival time for the secondary outcomes for all cohort members, including those not selected into the nested case-control study. This requirement may not be feasible when the cohort is a hospital cohort where often we only have survival data for those selected into the study. An additional limitation of Saarela's method is that it assumes the hazards of the two outcomes are conditionally independent given a set of covariates. This assumption is not plausible when individuals have different levels of frailties not captured by the covariates. Here, we provide a maximum likelihood method that explicitly models the individual frailties and avoids the need for access to the full cohort data. The likelihood contribution is derived by respecting the original sampling procedure with respect to the primary outcome. Proportional hazards models are used for the marginal hazards, and Clayton's copula is used to model the individual frailties. We show that the new method is more efficient than the weighted likelihood method and, unlike Saarela's method, is unbiased in the presence of frailties. We apply the methodology to study risk factors of diabetes using nested case-control data that were originally collected to study risk factors of cardiovascular diseases in a cohort of Swedish twins. |
Title: | Assessing the Extent of Uniqueness of a Fingerprint Match |
Speaker: | Associate Professor Sarat C. Dass |
Date: | 2 April 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | It is possible for fingerprints from two different persons to be matched closely with each other. Such spurious matches should be statistically detected to avoid false positive identifications. A statistical assessment is possible if the variability of the observational processes is accounted for and modeled adequately. The extent of a match depends on two sources of variability: (1) inter-class variability arising from the spatial configuration of fingerprint features, and (2) intra-class variability arising from image quality, variability due to fingertip placement on the sensor, and elastic distortion of the skin. To adequately capture feature variability, marked point processes are developed on the feature space that are able to model clustering tendencies and spatial correlations between neighboring marks. Inference is carried out in a Bayesian MCMC framework. The proposed class of models is fitted to real fingerprint images to demonstrate the flexibility of fit to different kinds of fingerprint feature patterns arising in practice. Evidence of a Paired Impostor Correspondence (EPIC) is developed as a measure of fingerprint uniqueness, and its predictive value is obtained using simulation from the fitted models. More robust EPIC values obtained by clustering based on Dirichlet process priors will also be discussed. |
Title: | Taming the Data Deluge |
Speaker: | Dr Qin Zhang |
Date: | 30 March 2012 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | There has been a spectacular increase in the amount of data being collected and processed in many modern applications, and traditional algorithms theory is ill-suited to handling such amounts of data. In particular, traditional theory uses simplistic machine models that are not suitable for modern computer architectures and applications, and the inadequacy of these models translates directly into deficiencies in software that processes massive data. In this talk, I will try to address this issue by introducing several successful models for handling massive data. I will highlight the primary features they capture and the central issues we need to explore. I will then focus on one particular model - the distributed streaming model - provide an overview of the important and fundamental problems, and illustrate how to design and analyze efficient algorithms in this model. All of this work is strongly motivated by database, data-mining and network applications. |
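The streaming constraint - one pass over the data with sublinear memory - can be illustrated with a classic single-stream example (chosen here for illustration; not necessarily one covered in the talk): the Misra-Gries frequent-elements summary, which approximates item counts using only k-1 counters:

```python
def misra_gries(stream, k):
    """One-pass frequent-elements summary using at most k-1 counters.

    Each stored count underestimates the true count by at most n/k
    (n = stream length), and any item occurring more than n/k times
    is guaranteed to appear in the returned summary.
    """
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Counters are full: decrement all, dropping those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Summaries of this kind can also be merged, which is exactly the property that matters in the distributed streaming model, where each site summarizes its local stream and a coordinator combines the results.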
Title: | On complex analysis in Banach spaces |
Speaker: | Dr Imre Patyi |
Date: | 19 March 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | We define the basic notions of complex analysis on Banach spaces and review a few theorems in the global theory of complex Banach manifolds. In particular, we emphasize the role of holomorphic domination as an important tool in overcoming the lack of compactness inherent in infinite dimensional Banach spaces. |
Title: | Sparse blind signal separation methods of spectral sensing mixtures and applications |
Speaker: | Dr Yuanchang Sun |
Date: | 8 March 2012 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Spectral sensing involves a range of technologies for detecting and identifying chemical and biological agents. An important application is in homeland security, where a critical problem is the identification of unknown explosives. Though modern imaging and spectroscopy technologies have made it possible to classify pure chemicals by their spectra, realistic data are often composed of mixtures of chemicals and environmental noise, and are also subject to changing backgrounds. In most cases, one has to deal with a so-called blind signal (source) separation (BSS) problem. Conventional approaches such as non-negative matrix factorization (NMF) and independent component analysis (ICA) are non-convex and too general to be robust and reliable in real-world applications. Based on partial knowledge of the data (e.g. local spectral sparseness), we are able to reduce the problem to a series of convex sub-problems. The methods we developed consist of data clustering, model reduction, geometric sub-manifold identification, and $\ell_1$ optimization. Compressive sensing algorithms are also brought into play to recover more signals than the number of spectral measurements. The methods will be illustrated on real-world data sets in NMR, DOAS, and Raman spectroscopy. |
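To make the mixture model concrete: when the pure reference spectra are known (the non-blind case, in contrast to the BSS setting above), the mixing weights follow from an ordinary least-squares fit. The two-component sketch below is illustrative only, with made-up spectra; it is not the talk's sparse convex method:

```python
def unmix_two(s1, s2, mixture):
    """Least-squares weights (a, b) so that mixture ~= a*s1 + b*s2.

    Solves the 2x2 normal equations directly; s1 and s2 must be
    linearly independent. In the blind (BSS) setting the pure
    spectra s1 and s2 are themselves unknown, which is what makes
    the problem hard and non-convex.
    """
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    a11, a12, a22 = dot(s1, s1), dot(s1, s2), dot(s2, s2)
    b1, b2 = dot(s1, mixture), dot(s2, mixture)
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det
```

Real spectra add noise and many more candidate components, which is where the sparsity ($\ell_1$) machinery described in the abstract enters.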
Title: | Mutually Unbiased Bases |
Speaker: | Professor William M. Kantor |
Date: | 23 February 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Two orthonormal bases of an n-dimensional complex vector space are called "mutually unbiased" if all inner products of members of different bases have the same absolute value. The maximal possible number of bases pairwise behaving in this manner is n+1. Finding such large sets of bases arose independently in research in electrical engineering, quantum physics, Euclidean configurations and coding theory. This talk will assume nothing about any of those subjects, and focus instead on the relationships between such sets of bases and Gauss-type sums, affine planes and relative difference sets. |
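The defining condition is easy to verify numerically. This sketch checks that the standard basis and the normalized discrete-Fourier basis of C^n are mutually unbiased - a standard textbook example, not specific to the talk:

```python
import cmath
import math

def fourier_basis(n):
    """Rows of the normalized DFT matrix: an orthonormal basis of C^n."""
    w = cmath.exp(2j * math.pi / n)
    return [[w ** (j * k) / math.sqrt(n) for j in range(n)] for k in range(n)]

def standard_basis(n):
    return [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]

def mutually_unbiased(B1, B2, tol=1e-9):
    """True iff every cross inner product has absolute value 1/sqrt(n)."""
    n = len(B1)
    target = 1 / math.sqrt(n)
    inner = lambda u, v: sum(ui.conjugate() * vi for ui, vi in zip(u, v))
    return all(abs(abs(inner(u, v)) - target) < tol for u in B1 for v in B2)
```

For prime-power n, a full set of n+1 pairwise unbiased bases is known to exist; already for n = 6 the maximal number is a famous open problem.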
Title: | The Self-Avoiding Random Walk |
Speaker: | Professor Gregory F. Lawler |
Date: | 23 February 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | A self-avoiding random walk (SAW) is a simple random walk that is not allowed to visit any site more than once. The model arose in statistical physics as a model of polymer chains. Although it is simple to define, it is very difficult to analyze rigorously, and many questions remain open. I will give a survey of what is conjectured and what is known rigorously. |
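For small lengths the model can be explored by brute force. The sketch below, an illustrative aside rather than material from the talk, counts n-step self-avoiding walks on the square lattice:

```python
def count_saws(n, pos=(0, 0), visited=None):
    """Count self-avoiding walks of n steps on Z^2 starting at pos."""
    if visited is None:
        visited = frozenset([pos])
    if n == 0:
        return 1
    x, y = pos
    # Recurse over the unvisited nearest neighbors.
    return sum(count_saws(n - 1, step, visited | {step})
               for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if step not in visited)
```

The counts 4, 12, 36, 100, 284, ... are believed to grow like mu^n with connective constant mu approximately 2.638 for the square lattice, but no closed form is known - one of the open questions the abstract alludes to.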
Title: | Hermitian Positivity of Real Polynomials |
Speaker: | Professor Mihai Putinar |
Date: | 14 February 2012 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Real algebraic varieties embedded into a complex space carry a restrictive notion of hermitian positivity, defined by weighted sums of hermitian square certificates. Tarski’s elimination-of-quantifiers method still yields in this case a powerful, abstract Positivstellensatz, as was only recently revealed. The lecture will place the new Hermitian Positivstellensatz into a rich historical context, marked by Riesz-Fejer type factorizations, an early discovery of Quillen, and rigidity phenomena in complex geometry. A necessary quantization procedure will bring into the discussion non-commutative algebras and duality techniques. A series of applications will be briefly presented: a new invariant in Cauchy-Riemann geometry, a hyperbolicity/stability criterion for systems of differential equations with delay in the argument, and a study of the hermitian structure of the Unitary Extension Principle in wavelet theory, which in turn has concrete consequences. |
Title: | Topics in Actuarial Science, Financial Mathematics and Economics |
Speaker: | Dr Daniel Burren |
Date: | 31 January 2012 |
Time: | 11.00am - 12.00pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Actuarial science, mathematical finance and economics are disciplines which have many similarities. This talk gives an overview of my research, which touches on all three disciplines. One topic is concerned with optimizing risk capital for insurance companies. For risk-averse agents with exponential utility and claims described by a compound Poisson process, I first derive a critical capital requirement level above which the insurance market breaks down, and show how this level depends on insurance demand, claims intensity, risk aversion and the required return of the insurance companies' shareholders. In a dynamic set-up, the optimal insurance path will be derived. A different project deals with estimating the ultimate amount of claims which have occurred but are not yet settled, and analyses the robustness of the chain-ladder estimator and the nearest-neighbor estimator in the presence of breaks in the development pattern of claims. The accuracy of the estimators is studied analytically and by means of Monte Carlo simulations. Finally, I conclude with an existing paper (joint with Bäurle) which shows how to consistently implement the economic model selection device called Business Cycle Accounting. We characterize the class of models which are equivalent to the benchmark economic model and discuss means to enlarge this class of models. |
Title: | Can we determine tumor blood flow parameters from dynamic imaging data? |
Speaker: | Dr Jessica Libertini |
Date: | 18 January 2012 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Current cancer drug Phase II testing uses static tumor size measurements to determine efficacy; however, this process requires wait times of months. In an effort to decrease this time, radiologists and applied mathematicians have been working together since the 1990s to explore models for using dynamic imaging data to measure blood flow parameters directly, allowing anti-angiogenesis efficacy to be determined in just weeks. Mathematically, this involves solving an inverse problem for the coefficients in a system of coupled PDEs, which were simplified to ODEs. In 2008, it was shown that a key simplifying assumption underlying these newer methodologies was not applicable. Since then, work has been done to try to recover these parameters directly from the PDEs without the use of the simplification. Current efforts are evaluating the feasibility of a steepest descent approach with automatic differentiation by exploring the topology of the parameter space. |
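The flavor of the simplified ODE models can be sketched with a generic two-parameter compartment model (illustrative only; this is not the specific model from the talk): tissue tracer concentration c(t), driven by an arterial input cp(t) via dc/dt = k1*cp(t) - k2*c(t), integrated with forward Euler. The inverse problem is then to recover k1 and k2 from sampled c(t):

```python
def simulate_tissue(k1, k2, cp, t_end, dt=0.01):
    """Forward-Euler integration of dc/dt = k1*cp(t) - k2*c(t), c(0) = 0.

    k1 and k2 are the (unknown, to-be-recovered) exchange rates, and
    cp is the arterial input function, given as a callable of time.
    Returns the tissue concentration at t_end.
    """
    c, t = 0.0, 0.0
    while t < t_end:
        c += dt * (k1 * cp(t) - k2 * c)
        t += dt
    return c
```

With a constant input cp = 1, the solution settles at the steady state k1/k2 - a quick sanity check before attempting a gradient-based fit of (k1, k2) to measured curves.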
Title: | Introduction to twisted Alexander invariants of knots |
Speaker: | Dr Huynh Quang Vu |
Date: | 9 January 2012 |
Time: | 1.00pm - 2.00pm |
Venue: | MAS Executive Classroom 1 |
Abstract: | Twisted Alexander invariants associated to representations of the fundamental groups of knot complements are generalizations of the classical Alexander polynomial. The new invariants are more powerful, distinguishing knots which the Alexander polynomial could not distinguish. This talk will be aimed at undergraduates who have had a first course in Algebraic Topology. If they wish to prepare for this talk, they should find out what a tensor product is and review the basic theory of covering spaces. A good survey of twisted Alexander invariants can be found in the article arXiv:0905.0591 by Friedl and Vidussi. |