Seminars 2010
Title: | What is a logic for PTIME? |
Speaker: | Professor Yijia Chen |
Date: | 5 October 2010 |
Time: | 1.30pm - 2.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Whether there is a logic capturing PTIME is the central problem in descriptive complexity. In the first part of the talk I will give a very brief introduction to the problem: where it comes from, its connections to database theory, classical complexity, and graph theory, and its current status. Next, I will explain recent joint work with Joerg Flum showing that a major open problem in proof complexity turns out to be a special instance of the question of a logic capturing PTIME. Our result also links the problem to a question in parameterized complexity. |
Title: | Reverse Mathematics and Nonstandard Methods |
Speaker: | Professor Kazuyuki Tanaka |
Date: | 27 September 2010 |
Time: | 2.30pm - 3.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Reverse mathematics is a research program in the foundations of mathematics, initiated by H. Friedman in the 1970s and most notably advanced by S. Simpson in the 1980s. A nonstandard proof method for reverse mathematics was introduced by Tanaka in the 1990s and first presented at the fifth ALC in Singapore in 1993. In this talk, the speaker will survey recent progress on nonstandard methods in reverse mathematics. |
Title: | Error Analysis of Discontinuous Galerkin Methods for Time-Dependent Maxwell Equations |
Speaker: | Dr Xie Ziqing |
Date: | 15 September 2010 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Title: | Augmented Lagrangian Method for Image Processing by Piecewise Constant Level Set Method |
Speaker: | Dr Yao Chang-Hui |
Date: | 8 September 2010, Wednesday |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Rationale: Image science denotes a wide range of problems related to digital images, generally those arising in image processing, computer graphics and computer vision. The mathematical techniques involved range over discrete mathematics, linear algebra, statistics, approximation theory, PDEs, quasi-convexity analysis and even algebraic geometry. Tools: Level set methods for image processing often rely on PDE techniques involving one or more of the following features: 1) regarding an image as a function sampled on a given grid, with the grid values corresponding to pixel intensities in a suitable color space; 2) regularization of the solutions; 3) representation of boundaries; 4) numerics developed for the level set methods. Applications: We use the piecewise constant level set method (PCLSM) to segment images (possibly corrupted by noise and blur) in order to recover their boundaries. A new model is presented, and numerical experiments show that PCLSM segments images efficiently. PCLSM can also be applied to the 'history matching problem': with this tool we can recover a reservoir characterization using a regularization technique, and experiments show that the recovery is stable and efficient. |
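A toy rendering of the piecewise-constant idea, for concreteness: alternate between updating two region intensities and relabeling pixels. This two-phase caricature is a reader's sketch, not the speaker's model; the talk's full method adds a regularization term and enforces the level-set constraint via the augmented Lagrangian, both omitted here.

```python
# Minimal two-phase piecewise-constant segmentation sketch (assumed
# simplification: no regularization, no augmented Lagrangian step).
import numpy as np

def two_phase_segment(img, n_iter=20):
    labels = img > img.mean()            # initial guess: both phases nonempty
    for _ in range(n_iter):
        c1 = img[labels].mean()          # mean intensity, phase 1
        c0 = img[~labels].mean()         # mean intensity, phase 0
        labels = (img - c1)**2 < (img - c0)**2   # reassign each pixel
    return labels, c0, c1

# Synthetic demo: a noisy bright square on a dark background.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
img += 0.1 * np.random.default_rng(0).normal(size=img.shape)
labels, c0, c1 = two_phase_segment(img)
```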
Title: | Image Zooming Based on Bregmanized Nonlocal Total Variation Regularization |
Speaker: | Professor Yang Yu-Fei |
Date: | 23 August 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | In this talk, we explore the nonlocal total variation regularization technique for image zooming and apply the split Bregman iteration algorithm to solve the related nonlinear Euler-Lagrange equation. The convergence of the proposed algorithm is analyzed. Experimental results illustrate the effectiveness and reliability of the proposed algorithm by comparison with bilinear interpolation. |
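For orientation, here is the split Bregman iteration in its simplest habitat, 1D local TV denoising. The talk's zooming model replaces the local gradient with a nonlocal weighted one and works on the zooming functional; the parameters below are illustrative assumptions.

```python
# Split Bregman sketch for min_u |Du|_1 + (mu/2)||u - f||^2 in 1D.
import numpy as np

def shrink(x, gamma):
    # Closed-form proximal step for the l1 term.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_bregman(f, mu=10.0, lam=1.0, n_iter=100):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)         # forward-difference matrix
    A = mu * np.eye(n) + lam * D.T @ D     # operator of the u-subproblem
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / lam)   # splitting variable update
        b = b + D @ u - d                  # Bregman variable update
    return u
```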
Title: | Equilibrium Asset and Option Pricing under Jump Diffusion |
Speaker: | Assistant Professor Jin Zhang |
Date: | 23 August 2010 |
Time: | 11.00am - 12.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | This paper develops an equilibrium asset and option pricing model in a production economy under jump diffusion. The model provides analytical formulas for the equity premium and for a more general pricing kernel that links the physical and risk-neutral densities. The model explains two empirical phenomena, the negative variance risk premium and the implied volatility smirk, if market crashes are expected. Model estimation with the S&P 500 index from 1985 to 2005 shows that the jump size is indeed negative and that the risk aversion coefficient takes a reasonable value once jumps are taken into account. |
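For readers new to jump diffusion, a hedged illustration of the dynamics underlying such models: a Monte Carlo simulation of Merton-style log-price paths with compound Poisson jumps. All parameter values are demo assumptions, not estimates from the paper.

```python
# Simulate terminal prices under a jump-diffusion (Merton-style) model.
import numpy as np

def jump_diffusion_paths(S0=100.0, mu=0.05, sigma=0.2, lam=0.5,
                         jump_mean=-0.1, jump_std=0.15,
                         T=1.0, n_steps=252, n_paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_S = np.full(n_paths, np.log(S0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        # Exact compound-Poisson increment with normal jump sizes:
        jumps = (jump_mean * n_jumps
                 + jump_std * np.sqrt(n_jumps) * rng.normal(0.0, 1.0, n_paths))
        log_S += (mu - 0.5 * sigma**2) * dt + sigma * dW + jumps
    return np.exp(log_S)

# Negative mean jumps skew the terminal distribution to the left.
ST = jump_diffusion_paths()
```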
Title: | Singular Value Thresholding Algorithms for Low-rank Matrix Completion |
Speaker: | Professor Jian-Feng Cai |
Date: | 19 August 2010 |
Time: | 4.00pm - 5.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Low-rank matrix completion refers to recovering a low-rank matrix from a sampling of its entries. It routinely comes up whenever one collects partially filled-out surveys and would like to infer the many missing entries. Matrix completion is a natural extension of compressed sensing. Candès and his co-authors proved that one can solve the low-rank matrix completion problem exactly by minimizing the nuclear norm (the L1-norm of the vector of singular values) subject to linear constraints. In this talk, I will present singular value thresholding algorithms for the nuclear norm minimization arising from low-rank matrix completion. |
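The singular value thresholding iteration the abstract refers to admits a compact sketch. Below is a minimal illustrative version; the step size, threshold and fixed iteration count are assumed demo choices, not the tuned values from the speaker's work.

```python
# SVT sketch: alternate nuclear-norm shrinkage with a data-fit step.
import numpy as np

def svt_complete(M, mask, tau=5.0, delta=1.2, n_iter=200):
    """Recover a low-rank matrix from the entries where mask is True."""
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Shrink singular values by tau (proximal step for nuclear norm).
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient-type step enforcing agreement on observed entries.
        Y = Y + delta * mask * (M - X)
    return X

# Toy usage: a rank-1 matrix with roughly half its entries observed.
rng = np.random.default_rng(0)
A = np.outer(rng.normal(size=20), rng.normal(size=20))
mask = rng.random(A.shape) < 0.5
A_hat = svt_complete(mask * A, mask, tau=2.0)
```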
Title: | Some Mathematical Models in Biomedical Shape Processing and Analysis |
Speaker: | Professor Bin Dong |
Date: | 19 August 2010 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | I will first discuss a tight-frame-based segmentation model for general medical image segmentation problems. This model is motivated by the ideas behind total variation based segmentation models (the convexified Chan-Vese model). Then I will move to biological shape processing and analysis, which has lately become a popular topic in biomedical image analysis. Within this category, I will mainly discuss the following three topics: surface restoration via nonlocal means; brain aneurysm segmentation in 3D biomedical images; and multiscale representation of shapes with its applications to blood vessel recovery (surface inpainting) and others. Some future work and ongoing projects will be mentioned at the end. |
Title: | Applied Computability Theory and Effective Randomness |
Speaker: | Assistant Professor Ng Keng Meng |
Date: | 18 August 2010 |
Time: | 11.30am - 12.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Computability theory is devoted to measuring the complexity of sets of natural numbers by algorithmic means. We examine several ways to measure complexity. Effective randomness calibrates complexity by defining when an infinite binary string can occur by chance. I will discuss the different approaches one can take to calibrating randomness, and some of the recent developments in this area. In particular, I will focus on the interactions of randomness with computability and with other applied logic topics such as reverse mathematics. |
Title: | Weakened Weakform (W2) Methods for Certified Solutions, Adaptive Analysis, Real-time Computation and Inverse Analysis of Engineering Systems |
Speaker: | Professor LIU Gui-Rong |
Date: | 12 August 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 2, MAS-03-07 |
Abstract: | Rationale: Engineering systems are becoming more and more sophisticated. Computer modelling of such systems is essential for optimal design, health monitoring, NDE, assessment of existing strength, and service-life prediction. The necessary requirements for an effective computational method have now become stability, convergence, automation, solution certification, adaptation, and real-time computation. Theory: This talk first introduces the basic theory for a unified formulation of a wide class of compatible and incompatible methods in FEM and meshfree settings. Important properties and inequalities for G spaces are proven, leading to the so-called weakened weak (W2) formulation that guarantees stable and convergent solutions. We then present some possible W2 models that meet all these challenges: 1) linear conformability, ensuring stability and convergence; 2) softening effects, leading to certified solutions and real-time computational models; 3) insensitivity to mesh quality, allowing effective use of the triangular/tetrahedral meshes best suited to automatic adaptive analysis. Applications: A large number of benchmark and practical examples will be presented to examine the theory and the various numerical models, including material behaviour in various extreme situations, dynamic behaviour and interactions of red blood cells, inverse identification of material properties and cracks in engineering structural systems, and integrity assessment of dental implant systems via inverse analysis with real-time computation. |
Title: | On Transformation Methods and Their Induced Parallel Properties in Temporal Domain Computation for Parabolic Problems |
Speaker: | Professor Lai Choi-Hong |
Date: | 16 July 2010 |
Time: | 10.30am - 11.30am |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Many engineering and applied science problems are described by time-dependent nonlinear partial differential equations. Numerical methods for transient problems are usually based on temporal integration methods such as Euler's method, Runge-Kutta methods, multi-step methods, etc. Depending on whether a given problem requires fine solution details at intermediate time steps, one chooses a fine or a coarse time stepping. When fine details are required, the traditional approach is to use temporal integration with fine time steps. Such temporal integration methods are very difficult to parallelise because of their intrinsically sequential nature. When fine details are not required, it is still not possible to use a very large time step in an implicit scheme: restrictions are imposed on the temporal step size by the stability criteria of an explicit scheme or by the truncation errors of an implicit scheme in approximating the temporal derivatives. The computing time of such numerical methods inevitably becomes significant. There are also many problems which require solution details not at every step of the time-marching scheme but only at a few crucial steps and at the steady state, so the effort spent resolving the solution at many intermediate time steps is wasted. This effort becomes significant for nonlinear problems, where a linearisation process, amounting to an inner iterative loop within the time-marching scheme, is required. Significant computing time could be saved if the linearisation process and the time-marching scheme could both be carried out in parallel. The main objective of the present work is to remove the time stepping and to exploit parallel/distributed computers. To investigate parallelisation of the temporal domain, this talk begins with a concise overview of classical temporal integration methods, including the time-stepping restrictions of explicit schemes, truncation errors in implicit schemes, and other advantages and disadvantages of time-marching schemes, together with a brief discussion of several attempts by various researchers to parallelise temporal integration. Second, transformation methods and the parallel properties they can induce in certain intrinsically sequential problems are examined. These transformation methods include Boltzmann transformations, general stretch transformations, the Fourier transform, and the Laplace transform. Several examples related to these transformations are discussed; in particular, Laplace transform methods with applications to nonlinear parabolic problems are included. Finally, discussions and conclusions are presented. |
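To make the Laplace-transform route concrete, here is a minimal sketch for a 1D heat equation, assuming the Gaver-Stehfest algorithm as the numerical inverse transform (the talk surveys several transforms; this specific pairing and the test problem are a reader's assumptions). The transform-domain solves at different values of s are independent, which is exactly the parallelism being exploited.

```python
# Laplace-transform solution of u_t = u_xx: solve (s I - A) U(s) = u0
# at several independent s values, then invert with Stehfest weights.
import numpy as np
from math import factorial

def stehfest_weights(N=12):
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j**(N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j)
                     * factorial(j - 1) * factorial(k - j)
                     * factorial(2 * j - k)))
        V[k - 1] = (-1)**(k + N // 2) * s
    return V

def heat_laplace(u0, dx, t, N=12):
    n = len(u0)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2   # Dirichlet Laplacian
    V, ln2, u = stehfest_weights(N), np.log(2.0), np.zeros(n)
    for k in range(1, N + 1):          # independent solves: parallelisable
        s = k * ln2 / t
        u += V[k - 1] * np.linalg.solve(s * np.eye(n) - A, u0)
    return ln2 / t * u

# Usage: u0 = sin(pi x) on (0,1); exact solution exp(-pi^2 t) sin(pi x).
x = np.linspace(0, 1, 102)[1:-1]
u = heat_laplace(np.sin(np.pi * x), x[1] - x[0], t=0.1)
```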
Title: | Optimal Integer Transform for Image Coding |
Speaker: | Dr Pengwei Hao |
Date: | 7 July 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Optimal integer transforms are constructed for reversible integer transformation, low computational complexity, high image-coding performance, and other desirable properties. With co-histograms, a tool for transform visualization, we have found nonlinear transforms that are better than linear ones (e.g. DCT and KLT). Among those transforms, the infinity-norm rotation transform is the best: it is a rotation in the infinity-norm space, preserves the dynamic range, has very low computational complexity, and can be used for reversible data hiding and for both lossless and lossy image coding. |
Title: | PLUS Factorization of Matrices and Its Applications |
Speaker: | Dr Pengwei Hao |
Date: | 30 June 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | PLUS factorization was proposed as a new framework for matrix factorization, A=PLUS, where the matrices P, L and U are almost the same as in LU factorization (a permutation, a unit lower triangular and an upper triangular matrix, respectively), and S is a very special matrix: unit lower triangular with only a small number of non-zeros. Unlike in LU factorization, all the diagonal elements of U in a PLUS factorization are customizable, i.e., they can be assigned by users almost freely. With PLUS factorization, the matrix A is easily factorized further into a series of special matrices similar to S. The resulting computational scheme for transforms has several elegant properties, such as in-place computation and simple inversion. PLUS factorization also allows integers to be transformed reversibly and losslessly if the diagonal elements of U are all designated as Gaussian units: 1, -1, i, or -i. So far, it has applications in lossless source coding, reversible data hiding, fast image registration and fast volumetric data rendering. The method has been included in JPEG 2000, an international standard for image coding. |
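The reversible-integer flavour of such elementary-matrix factorizations can be illustrated with a classical lifting example: a 2x2 rotation written as three shears, each applied with rounding. This toy is not the PLUS algorithm itself, just the underlying reversibility mechanism it generalizes.

```python
# Integer rotation via three rounded shears: exactly invertible,
# because each shear modifies one coordinate using only the other.
import numpy as np

def int_rotate(x, y, theta):
    t, s = np.tan(theta / 2.0), np.sin(theta)
    x = x - int(np.round(t * y))   # shear 1
    y = y + int(np.round(s * x))   # shear 2
    x = x - int(np.round(t * y))   # shear 3
    return x, y

def int_rotate_inv(x, y, theta):
    t, s = np.tan(theta / 2.0), np.sin(theta)
    x = x + int(np.round(t * y))   # undo shear 3
    y = y - int(np.round(s * x))   # undo shear 2
    x = x + int(np.round(t * y))   # undo shear 1
    return x, y

assert int_rotate_inv(*int_rotate(7, -3, 0.3), 0.3) == (7, -3)
```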
Title: | Field-Failure and Warranty Prediction Based on Auxiliary Use-rate Information |
Speaker: | Professor Hong Yili |
Date: | 22 June 2010 |
Time: | 10.30am - 11.30am |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Usually the warranty data response used to make predictions of future failures is the number of weeks (or another unit of real time) in service. Use-rate information usually is not available (automobile warranty data are an exception, where both weeks in service and number of miles driven are available for units returned for warranty repair). With new technology, however, sensors and smart chips are being installed in many modern products ranging from computers and printers to automobiles and aircraft engines. Thus the coming generations of field data for many products will provide information on how the product has been used and the environment in which it was used. This paper was motivated by the need to predict warranty returns for a product with multiple failure modes. For this product, cycles-to-failure/use-rate information was available for those units that were connected to the network. We show how to use a cycles-to-failure model to compute predictions and prediction intervals for the number of warranty returns. We also present prediction methods for units not connected to the network. In order to provide insight into the reasons that use-rate models provide better predictions, we also present a comparison of asymptotic variances comparing the cycles-to-failure and time-to-failure models. |
Title: | Identifying QTLs and Epistasis in Structured Plant Populations Using Adaptive Mixed LASSO |
Speaker: | Professor WANG Dong |
Date: | 10 June 2010 |
Time: | 10.30am - 11.30am |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Association analysis in important crop species has generated heightened interest for its potential in dissecting complex traits by utilizing diverse mapping populations. However, the mixed linear model approach is currently limited to single marker analysis, which is not suitable for studying multiple QTL effects, epistasis and gene by environment interactions. We propose the adaptive mixed LASSO method that can incorporate a large number of predictors (genetic markers, epistatic effects, environmental covariates, and gene by environment interactions) while simultaneously accounting for the population structure. We show that the adaptive mixed LASSO estimator possesses the oracle property of adaptive LASSO. Algorithms are developed to iteratively estimate the regression coefficients and variance components. Our results demonstrate that the adaptive mixed LASSO method is very promising in modeling multiple genetic effects when a large number of markers are available and the population structure cannot be ignored. |
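The adaptive LASSO step at the core of the method can be sketched with the standard reweighting trick, shown below. The mixed-model layer that accounts for population structure and the iterative variance-component estimation are omitted, and the ridge-based initial weights are an assumed choice.

```python
# Adaptive LASSO sketch: weight each coefficient's penalty inversely
# to an initial consistent estimate of its magnitude.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    beta0 = Ridge(alpha=1.0).fit(X, y).coef_      # initial estimate
    w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)     # adaptive weights
    Xw = X / w                  # column rescaling makes the weighted
    fit = Lasso(alpha=alpha).fit(Xw, y)   # l1 penalty a plain Lasso
    return fit.coef_ / w        # map back to the original scale
```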
Title: | Tailored Unstructured Meshes for Efficient Simulations |
Speaker: | Professor Oubay Hassan |
Date: | 26 May 2010 |
Time: | 2.30pm - 3.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | The advantages of unstructured mesh methods are well known and centre, mainly, on the observation that they provide a powerful tool for the rapid discretisation of domains of complex shape. A standard method for generating appropriate meshes for viscous flows is to begin by constructing stretched structured layers adjacent to solid surfaces. The remainder of the computational domain is then discretised in a consistent fashion, using an automatic unstructured mesh generation procedure. The isotropic Delaunay method, with automatic point creation, provides the fastest method of generating high quality unstructured meshes. However, the outer layer of the elements generated by the advancing layers method often includes stretched element faces and these can cause significant boundary recovery problems when the mesh is presented to a standard Delaunay generator. The remedy, for problems of this type, is to employ an alternative Delaunay triangulation, with a modified in-circle criterion, and the effectiveness of this approach will be demonstrated. |
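For readers unfamiliar with the in-circle criterion mentioned in the abstract, here is the standard predicate used by Delaunay algorithms; the talk's modified criterion alters this test, and the variant itself is not reproduced here.

```python
# Standard in-circle predicate: point d lies inside the circumcircle
# of triangle (a, b, c), taken counterclockwise, iff the determinant
# of the lifted, d-translated coordinates is positive.
import numpy as np

def in_circle(a, b, c, d):
    rows = []
    for p in (a, b, c):
        px, py = p[0] - d[0], p[1] - d[1]
        rows.append([px, py, px * px + py * py])
    return np.linalg.det(np.array(rows)) > 0.0

# (0.5, 0.5) is the circumcentre of this right triangle, hence inside.
print(in_circle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))   # True
```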
Title: | Potential-based Finite Element Schemes for Eddy Current Problems |
Speaker: | Professor Tong Kang |
Date: | 19 May 2010 |
Time: | 4.15pm - 5.15pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Potential-based nodal finite element methods (A-φ/T-Ψ schemes) are used to solve 3D transient and harmonic eddy current problems. Although introducing a vector potential and a scalar potential increases the number of unknowns and equations, these apparent complications are justified by a better way of dealing with possible discontinuities in the numerical schemes. The schemes presented in this talk add penalty terms to the governing equations to guarantee the existence and uniqueness of the approximate solutions. Energy-norm error estimates for the schemes are given, and several computer simulation results for TEAM Workshop Problem 7 and the IEEJ model are shown to verify the validity of the schemes. |
Title: | Apéry Sequences, Modular Forms and Series for 1/π |
Speaker: | Dr Shaun Cooper |
Date: | 17 May 2010 |
Time: | 10.30am - 11.30am |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: |
Title: | A Linear Implementation of PACMAN |
Speaker: | Dr Alfio Giarlotta |
Date: | 6 May 2010 |
Time: | 3.00pm - 4.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | PACMAN (Passive and Active Compensability Multicriteria ANalysis) is a multiple criteria methodology based on a decision-maker-oriented notion of compensation, called compensability. A basic step of PACMAN is the construction of fuzzy compensatory functions, which model intercriteria relations for each pair of criteria on the basis of compensability. In this presentation we examine a simplified version of PACMAN, which uses so-called linear compensatory functions and considerably reduces the overall complexity of its implementation in practical cases. We use Mathematica to develop a computer-aided graphical interface that eases the interaction among the actors of the decision process at each stage of PACMAN. We also propose formulating sensitivity analysis in this simplified version of PACMAN as a nonlinear optimization problem. |
Title: | Moments and Positive Polynomials : Some Applications |
Speaker: | Dr Jean Bernard Lasserre |
Date: | 30 April 2010 |
Time: | 2.00pm - 3.00pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | Roughly speaking, the Generalized Problem of Moments (GPM) is an infinite-dimensional linear optimization problem (i.e., an infinite-dimensional linear program) over a convex set of measures with support on a given subset K of R^n. From a theoretical viewpoint, the GPM has developments and impact in various areas of mathematics such as algebra, Fourier analysis, functional analysis, operator theory, probability and statistics, to cite a few. In addition, and despite its rather simple and short formulation, the GPM has a large number of important applications in various fields such as optimization, probability, mathematical finance, optimal control, control and signal processing, chemistry, crystallography, tomography, quantum computing, etc. In its full generality, the GPM is numerically intractable. However, when the set K is compact and semi-algebraic, and the functions involved are polynomials (and in some cases piecewise polynomials or rational functions), the situation is much nicer. Indeed, one can define a systematic numerical scheme based on a hierarchy of semidefinite programs, which provides a monotone sequence converging to the optimal value of the GPM; sometimes finite convergence may even occur. In the talk, we will present the semidefinite programming methodology for solving the GPM with polynomial data and describe several applications (notably in optimization, probability, optimal control and mathematical finance). |
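A tiny concrete instance of the moment-relaxation machinery, assuming cvxpy with its bundled SDP solver is available: the global minimum of p(x) = x^4 - 3x^2 + 1 via the order-2 moment relaxation, which is exact for univariate polynomials.

```python
# Order-2 moment relaxation: minimize E[x^4 - 3x^2 + 1] over pseudo-
# moment sequences (1, y1, ..., y4) whose moment matrix is PSD.
import cvxpy as cp

y = cp.Variable(4)                         # moments y1..y4
M = cp.bmat([[1,    y[0], y[1]],
             [y[0], y[1], y[2]],
             [y[1], y[2], y[3]]])          # Hankel moment matrix
problem = cp.Problem(cp.Minimize(y[3] - 3 * y[1] + 1), [M >> 0])
problem.solve()
print(problem.value)   # -1.25, since p(x) + 1.25 = (x^2 - 1.5)^2 >= 0
```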
Title: | Invariance Principle and Fine Structure of Spatial- Temporal Processes |
Speaker: | Dr Wanli Min |
Date: | 14 April 2010 |
Time: | 11.30am - 12.30pm |
Venue: | MAS Executive Classroom 1, MAS-03-06 |
Abstract: | In this talk, a new theoretical framework will be introduced to establish asymptotic distributions of conventional statistics, such as (partial) autocorrelation functions, for linear processes with dependent innovations (LPDI). In particular, we establish the invariance principle under mild conditions; even for linear processes with iid innovations, we show the invariance principle under weaker conditions on the linear filter than those in the literature. The asymptotics turn out to differ from the conventional ones, and consequently the conventional statistics give inconsistent inference results in model identification and model selection for LPDI processes. In particular, we will illustrate that widely used model selection criteria, such as AICc and AIC, fail to live up to the nice properties originally established in the context of classical time series with white noise innovations. We propose a modified model selection criterion and show its efficiency and robustness through simulations. The invariance principle is applied to certain spatial-temporal processes to identify a parsimonious representation adapted to the local network structure. We will conclude with an example of real-time forecasting of network flow with composite periodicity and an autoregressive conditional heteroscedastic pattern. |
Title: | The Theory of Fusion Systems |
Speaker: | Professor Radu Stancu |
Date: | 14 April 2010 |
Time: | 10.30am - 11.30am |
Venue: | MAS Colloquium Room 1, MAS-05-36 |
Abstract: | The classification of finite simple groups is a gem of modern mathematics. One of the main tools used to achieve it is the p-local analysis of the structure of a group, where p is a prime number, i.e. the study of the invariants of the group related to its Sylow p-subgroups. An axiomatic and unified way to present p-local structures is via the theory of fusion systems. Fusion systems were introduced by Puig and refined by Broto-Levi-Oliver. The theory of fusion systems lies at the confluence of the representation theory of finite groups and algebraic topology. In my talk I will explain the basics of this theory and present the interactions between different areas of algebra that it makes possible. |
Title: | From Two-Player Games to Markets: On the Computation of Equilibria |
Speaker: | Dr Xi Chen |
Date: | 5 April 2010 |
Time: | 10.30am - 11.30am |
Venue: | Executive Classroom 1, SPMS-MAS-03-06 |
Abstract: | Recently, there has been tremendous interest in the study of Algorithmic Game Theory. This is a rapidly growing area that lies at the intersection of Computer Science, Game Theory, and Mathematical Economics, mainly due to the presence of selfish agents in highly decentralized systems, the Internet in particular. The computation of Nash equilibria in games and the computation of Market equilibria in exchange markets have received great attention. Both problems have a long intellectual history. In 1950, Nash showed that every game has an equilibrium. In 1954, Arrow and Debreu showed that under very mild conditions, every market has an equilibrium. While games and Nash equilibria are used to predict the behavior of selfish agents in conflicting situations, the study of markets and market equilibria laid down the foundation of competitive pricing. Other than the fact that both existence proofs heavily rely on fixed point theorems, the two models look very different from each other. In this talk, we will review some of the results that characterize how difficult it is to compute or to approximate Nash equilibria in two-player games. We will then show how these results also advanced our understanding about market equilibria. No prior knowledge of Game Theory will be assumed for this talk. |
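As context for the hardness results discussed in the talk: the classical tractable case is the two-player zero-sum game, where an equilibrium is computable by linear programming (von Neumann); the general-sum two-player case is where the hardness phenomena enter. Below is a sketch of the zero-sum case; the payoff matrix is an arbitrary example.

```python
# Row player's maximin strategy for a zero-sum game via the standard
# LP reformulation: with payoffs shifted positive, minimize sum(u)
# subject to B^T u >= 1, u >= 0; then value = 1/sum(u), x = u * value.
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    m, n = A.shape
    shift = 1.0 - A.min()            # make all payoffs >= 1
    B = A + shift
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    x = res.x / res.x.sum()
    return x, 1.0 / res.x.sum() - shift   # mixed strategy, game value

x, v = maximin_strategy(np.array([[0., 1.], [1., 0.]]))
print(x, v)    # optimal mix (0.5, 0.5), value 0.5 for this example
```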
Title: | Efficient CM-algorithms |
Speaker: | Professor Peter Stevenhagen |
Date: | 26 March 2010 |
Time: | 4.30pm - 5.30pm |
Venue: | Executive Classroom 1, SPMS-MAS-03-06 |
Abstract: | Complex multiplication (CM) provides beautiful algorithms to construct elliptic curves and more general abelian varieties by complex analytic methods. Due to their algebraic nature, such constructions can also be applied to obtain abelian varieties over finite fields. For these varieties to be useful in cryptographic settings, one likes to impose conditions that tend to make CM-methods impractical. I will discuss various recent theorems indicating what is feasible in algorithmic practice, and what is not. |
Title: | Modeling, algorithm and simulation of wave motion in quantum and plasma physics |
Speaker: | Professor Weizhu Bao |
Date: | 10 March 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | Executive Classroom 2, SPMS-MAS-03-07 |
Abstract: | In this talk, I begin with a review of several mathematical models describing wave motion in quantum and plasma physics. Computational difficulties in simulating wave propagation and interaction in quantum and plasma physics are discussed. Efficient and accurate numerical algorithms for computing ground and excited states, as well as the dynamics of the nonlinear Schrödinger equation, are presented. Applications to the collapse and explosion of Bose-Einstein condensates, transport of ultra-cold atoms in optical lattices, quantized vortices in superfluidity, and wave interaction in plasma physics are reported. Finally, some conclusions are drawn. |
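One workhorse scheme for the NLS dynamics mentioned in the abstract is time-splitting with a spectral discretization. The sketch below shows a single Strang step for the 1D cubic NLS i u_t = -0.5 u_xx + |u|^2 u on a periodic domain; the grid, time step and sech initial profile are illustrative assumptions.

```python
# Strang splitting for the 1D cubic NLS: the nonlinear part is an
# exact phase rotation (|u| is invariant), the kinetic part is exact
# in Fourier space.
import numpy as np

def tssp_step(u, k, dt, beta=1.0):
    u = u * np.exp(-1j * beta * np.abs(u)**2 * dt / 2)          # half step
    u = np.fft.ifft(np.exp(-1j * 0.5 * k**2 * dt) * np.fft.fft(u))
    return u * np.exp(-1j * beta * np.abs(u)**2 * dt / 2)       # half step

# Usage: sech initial profile on [-16, 16) with periodic boundaries.
n = 256
x = np.linspace(-16, 16, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
u = 1.0 / np.cosh(x)
for _ in range(1000):
    u = tssp_step(u, k, dt=1e-3)
```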
Title: | The Fat Strut Problem |
Speaker: | Professor Vinay A. Vaishampayan |
Date: | 9 March 2010 |
Time: | 10.30am - 11.30am |
Venue: | Executive Classroom 2, SPMS-MAS-03-07 |
Abstract: | Consider Z^n, the integer lattice in n dimensions. Given a point v in Z^n, an infinite cylinder of radius r whose axis contains v and the origin is called a strut if it contains no lattice points other than multiples of v. Given v, a strut of maximal radius is called a fat strut; our objective is to find v such that the fat strut radius is maximized. The problem, with origins in communication theory, arises in optimizing the performance of a specific class of continuous alphabet group codes. I will show that the problem can be reduced to selecting a vector v such that the lattice obtained by projecting Z^n onto the subspace orthogonal to v has a large packing density. I will then derive an achievable lower bound on the product lr^{n-1}, where l = (v.v)^(1/2) is the L2 norm of v. The arguments leading to this bound are non-constructive and leave open the question of constructing good vectors v. I will then describe a newly discovered construction, the Lifting Construction, which provides a general solution to the fat strut problem. |
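The strut radius in the abstract's definition can be checked by brute force for small vectors v: it is the smallest distance from an off-axis lattice point to the line through v. The sketch below scans a finite box of lattice points; the box size is a practical assumption, and v is taken primitive (gcd of its entries equal to 1).

```python
# Brute-force strut radius for v in Z^n: min distance to the axis
# over lattice points not lying on the line through 0 and v.
import numpy as np
from itertools import product

def strut_radius(v, box=6):
    v = np.asarray(v, dtype=float)
    best = np.inf
    for p in product(range(-box, box + 1), repeat=len(v)):
        p = np.asarray(p, dtype=float)
        perp = p - (p @ v / (v @ v)) * v   # component orthogonal to v
        d = np.linalg.norm(perp)
        if d > 1e-12:                      # skip points on the axis
            best = min(best, d)
    return best

print(strut_radius((1, 2, 3)))
```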
Title: | Network Resource Allocation Games |
Speaker: | Mr Thanh Nguyen |
Date: | 23 February 2010 |
Time: | 10.00am - 11.00am |
Venue: | Executive Classroom 1, SPMS-MAS-03-06 |
Abstract: | Resource allocation problems are important in engineering and applied mathematics. With the fast growth of the Internet and digital media, resource allocation problems are finding more and more applications, including sharing Internet bandwidth and allocating advertisements in digital media. These are complex problems involving the selfish behavior of multiple agents in the system. In this talk, I will introduce a game-theoretic framework for these systems and prove a result showing the surprising fact that, despite all of the competing forces, some of the very simple mechanisms widely used in practice work very efficiently. |
Title: | Electromagnetic Radiations as a Fluid Flow |
Speaker: | Professor Daniele Funaro |
Date: | 17 February 2010 |
Time: | 3.30pm - 4.30pm |
Venue: | Executive Classroom 2, SPMS-MAS-03-07 |
Abstract: | Since the advent of the theory of electromagnetic fields, more than a century ago, waves have been described as a kind of energy flow, governed by suitable transport equations in vector form, namely Maxwell's equations. In a vacuum, the electric and magnetic fields (E and B, respectively) are transversally oriented with respect to the direction of propagation, and their envelope produces a sequence of wave-fronts. This is in agreement with the fact that the energy develops according to the evolution of the vector product E times B, otherwise known as the Poynting vector. We are going to present a system of equations in three independent vector unknowns: E, B, V. In pure vacuum, the electric and magnetic fields satisfy Faraday's law together with Ampère's law, where a current, flowing at velocity V, is supposed to be naturally associated with the wave. In order to close the system, the third relation is Euler's equation for V, containing an added forcing term E + V times B, perfectly analogous to that expressing the Lorentz force. In this way, the three entities E, B, V turn out to be strictly entangled. |
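Transcribing the abstract's verbal description literally, the closed system has the schematic shape below. This is a reader's rendering: the signs and proportionality constants are assumptions, not the speaker's exact formulation.

```latex
% Schematic only: constants and signs are assumed, per the abstract's wording.
\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E}
  \quad \text{(Faraday)}, \qquad
\frac{\partial \mathbf{E}}{\partial t} = c^{2}\nabla \times \mathbf{B} - \rho\mathbf{V}
  \quad \text{(Amp\`ere, current at velocity } \mathbf{V}\text{)},
\qquad
\frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{V}
  = \mu\left(\mathbf{E} + \mathbf{V}\times\mathbf{B}\right)
  \quad \text{(Euler, with Lorentz-type forcing)}.
```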