Maseeh Colloquium Series Archive
The Maseeh Mathematics & Statistics Colloquium Series Archive.
Friday, 3:15 PM June 1, 2012 
Sir Michael Berry, H H Wills Physics Laboratory, University of Bristol, UK http://www.phy.bris.ac.uk/people/berry_mv/index.html Quantum mechanics, chaos and the singing of the primes Abstract: The Riemann hypothesis can be interpreted as stating that the prime numbers contain 'music', whose component frequencies are the Riemann zeros. The question "Frequencies of what?" leads to tantalizing connections with the energy levels of quantum systems whose corresponding classical motion is chaotic. At the level of statistics, predictions for the Riemann zeros based on semiclassical quantum asymptotics (with primes as periods of classical trajectories) have reached a high degree of accuracy and refinement. For the zeros themselves, the Riemann-Siegel formula and its improvements lead to new ways of calculating quantum levels. There are speculations about the underlying operator.

Monday, 3:15 PM May 21, 2012 
Olga Rossi, Department of Mathematics, University of Ostrava http://prf.osu.eu/kma/index.php?idc=14214 Geometric structures in the calculus of variations Abstract: The lecture is an introduction to the modern calculus of variations on manifolds. Against the background of the history of the subject, we recall the main results achieved during the past 40 years in understanding the geometric structure of extremal problems. We review the developments and generalizations of the traditional three pillars of the theory - the Lagrangian theory, the Hamilton and Hamilton-Jacobi theory, and the theory of symmetries and conservation laws - as well as the modern fourth pillar, the theory of variational sequences and bicomplexes. Applications and open problems are also discussed.

Friday, 3:15 PM May 18, 2012 
Donald B. Rubin, Department of Statistics, Harvard University http://www.stat.harvard.edu/faculty_page.php?page=rubin.html Direct Versus Indirect Causal Effects: An unhelpful distinction.

Friday, 3:15 PM May 11, 2012 
Raul Jimenez, Department of Statistics, Universidad Carlos III, Madrid http://www.uc3m.es/portal/page/portal/dpto_estadistica/home/members/raul_jimenez Estimating perimeters of irregular shapes effortlessly Abstract: Determining the perimeter of a shape from an image is a key common problem in many areas. We present a simple but powerful method for estimating these lengths, requiring only knowledge of whether points thrown at random fall within the shape, no matter how close they are to its edge. The method works well for both smooth and irregular shapes and provides normal confidence intervals that allow precision and sample size to be balanced, as is usual in elementary statistical methods. The estimation of the coastline length from a high resolution image captures the essence of the problem. We use this framework to discuss our method and consider simulations and real data, from the National Geographic Institute of Spain, to perform a comparative study.
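The abstract does not spell out the estimator, but the flavor of the problem can be illustrated with a classical Buffon-needle-style Monte Carlo sketch that uses only an inside/outside oracle (the `inside` function below stands in for the image and is hypothetical; this is not necessarily the speaker's method). Drop short needles at random and count how often the two endpoints disagree about membership: for a needle of length h, boundary length L, and sampling box of area A, the disagreement probability is approximately 2hL/(pi*A), which can be inverted for L.

```python
import math
import random

def estimate_perimeter(inside, box, h=0.05, n=400_000, seed=0):
    """Buffon-style length estimate from a membership oracle.

    Drop n needles of length h with uniform position and direction in the
    bounding box; the fraction whose endpoints straddle the boundary is
    ~ 2*h*L / (pi * A), so L ~ pi * A * fraction / (2*h).
    """
    rng = random.Random(seed)
    (x0, x1), (y0, y1) = box
    area = (x1 - x0) * (y1 - y0)
    hits = 0
    for _ in range(n):
        x, y = rng.uniform(x0, x1), rng.uniform(y0, y1)
        t = rng.uniform(0.0, 2.0 * math.pi)
        x2, y2 = x + h * math.cos(t), y + h * math.sin(t)
        if inside(x, y) != inside(x2, y2):
            hits += 1
    return math.pi * area * (hits / n) / (2.0 * h)

# Sanity check on a shape with known perimeter: the unit circle (2*pi).
unit_disk = lambda x, y: x * x + y * y <= 1.0
est = estimate_perimeter(unit_disk, box=((-2.0, 2.0), (-2.0, 2.0)))
```

Shrinking h reduces the bias from needles that cross the boundary twice, at the cost of more samples for the same precision, which is exactly the precision/sample-size trade-off the abstract alludes to.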

Friday, 3:15 PM May 4, 2012 
Jedrzej Sniatycki, Mathematics & Statistics, University of Calgary, Calgary, Canada http://math.ucalgary.ca/profiles/jedrzejsniatycki Differential Geometry of Singular Spaces Abstract: See: http://www.mth.pdx.edu/~marek/Abstract.pdf

Friday, 3:15 PM April 27, 2012 
Nilima Nigam, Department of Mathematics, Simon Fraser University, Burnaby BC, Canada http://www.math.sfu.ca/people/staff/faculty/nilima_nigam A mathematical model of bone remodeling at the cellular level Abstract: In this talk, we quickly review the physiological process of bone remodeling and some key characteristics of the process at the cellular level. We then construct a mathematical model which accounts for some of the observed features. We describe the (difficult) process of parameter estimation, and present some computational results. We end by discussing extensions to models of cancer metastasis in bone.

Friday, 3:15 PM April 20, 2012 
Alvaro Pelayo, Institute for Advanced Study/Washington University http://www.math.wustl.edu/~apelayo/ Integrable Systems: Symplectic and Spectral Properties Abstract: I will review the basics of symplectic geometry, group actions on symplectic manifolds, and integrable systems. Then I will explain some recent developments on the subject, emphasizing how the symplectic geometric aspects of integrable systems are related to the spectral theory of their quantum analogues.

Wednesday, 2:00 PM April 18, 2012 
Dr. Wilfred Pinfold, Intel Corporation Technical Computing Research at Intel Corporation Abstract: Dr. Pinfold will talk about innovation and technology transfer at Intel and how it has affected technical and high performance computing over the last 25 years. He will also discuss current research programs and government collaborations.

Friday, 3:15 PM April 13, 2012 
Bob Russell, Department of Mathematics, Simon Fraser University, Burnaby BC, Canada http://www.math.sfu.ca/people/staff/faculty/robert_russell Adaptive Mesh Generation and Moving Mesh Methods Abstract: Over the last several decades, many mesh generation methods and a plethora of adaptive methods for solving differential equations have been developed. In this talk, we take a general approach for describing the mesh generation problem, which can be considered as being in some sense equivalent to determining a coordinate transformation between physical space and a computational space. Some new theoretical results are given that provide insight into precisely what is accomplished using mesh equidistribution (which is a standard adaptivity tool used in practice). As well, we discuss two general types of moving mesh methods for solving time dependent PDEs, those based upon a variational formulation of the mesh generation problem and those which target mesh velocity. Among the methods in the latter class are those which solve the Monge-Ampère equation and the optimal mass transport problem, an area which has seen intense research activity of late.
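Equidistribution, the standard adaptivity tool mentioned in the abstract, is easy to state in one dimension: choose mesh points x_0 < ... < x_N so that the integral of a monitor function M is the same over every cell. A minimal sketch of the classical construction (the monitor function below is an illustrative choice, not from the talk):

```python
import math

def equidistribute(monitor, n_cells, a=0.0, b=1.0, fine=20_000):
    """Return n_cells+1 mesh points equidistributing the monitor function.

    Build the cumulative integral C(x) of the monitor on a fine grid by the
    trapezoid rule, then invert it at the levels i/n_cells by linear
    interpolation (the classical de Boor construction).
    """
    xs = [a + (b - a) * i / fine for i in range(fine + 1)]
    ms = [monitor(x) for x in xs]
    C = [0.0]
    for i in range(fine):
        C.append(C[-1] + 0.5 * (ms[i] + ms[i + 1]) * (xs[i + 1] - xs[i]))
    total = C[-1]
    mesh, j = [a], 0
    for i in range(1, n_cells):
        level = total * i / n_cells
        while C[j + 1] < level:
            j += 1
        frac = (level - C[j]) / (C[j + 1] - C[j])
        mesh.append(xs[j] + frac * (xs[j + 1] - xs[j]))
    mesh.append(b)
    return mesh

# A monitor peaked at x = 0.5 concentrates mesh points there.
M = lambda x: 1.0 + 50.0 * math.exp(-100.0 * (x - 0.5) ** 2)
mesh = equidistribute(M, 20)
```

In moving mesh methods the same idea is applied at every time step, with the monitor typically built from the evolving solution's gradient or curvature.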

Friday, 3:15 PM April 6, 2012 
Dimiter Vassilev, Department of Mathematics and Statistics, University of New Mexico http://www.unm.edu/~vassilev/ The quaternionic contact Yamabe problem Abstract: The origin of the Yamabe type equations can be traced to two questions in geometry and analysis. On the geometric side, Yamabe considered the problem of existence of a conformal transformation (i.e. a transformation that preserves angles) of a Riemannian metric on a given compact manifold to one with constant scalar curvature. The existence of such a conformal map is equivalent to the existence of a positive solution of a certain nonlinear partial differential equation, the Yamabe equation. A generalization of the Riemannian Yamabe problem can be obtained in the setting of geometries modeled on the Iwasawa type groups. In this way one obtains the corresponding Riemannian, CR, quaternionic contact and octonion Yamabe type problems. The solution of the Riemannian and CR cases is complete after the well known results of H. Yamabe, N. Trudinger, G. Talenti, Th. Aubin, R. Schoen, S.T. Yau in the Riemannian case and D. Jerison, J. Lee, N. Gamara and R. Yacoub in the CR case. The focus of this talk will be the known results in the quaternionic contact case. In addition, this concrete problem will be used to give an insight into the underlying sub-Riemannian geometry.
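The "certain nonlinear partial differential equation" has a standard explicit form worth recording. Writing the conformal change of metric as g~ = u^{4/(n-2)} g on a compact Riemannian manifold (M^n, g) with n >= 3, the metric g~ has constant scalar curvature R~ exactly when the positive conformal factor u solves

```latex
-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\,u \;=\; \tilde{R}\,u^{\frac{n+2}{n-2}},
\qquad u > 0,
```

where \Delta_g and R_g are the Laplace-Beltrami operator and the scalar curvature of the background metric g. The exponent (n+2)/(n-2) is the critical Sobolev exponent, which is what makes the existence theory delicate.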

Friday, 3:15 PM March 9, 2012 
Leszek Demkowicz, Aerospace Engineering and Engineering Mechanics, University of Texas at Austin http://www.ae.utexas.edu/faculty/members/demkowicz.html DISCRETE STABILITY, DPG METHOD AND LEAST SQUARES Abstract: Ever since the groundbreaking paper of Ivo Babuska [1], everybody in the Finite Element (FE) community has learned the famous phrase: ''Discrete stability and approximability imply convergence.'' In fact, Ivo gets only partial credit for the phrase, which had already been used for some time by the Finite Difference (FD) community since an equally well known result of Peter Lax [2]. The challenge in establishing convergence comes from the fact that, except for a relatively small class of ''safe'' coercive (elliptic) problems, continuous stability DOES NOT imply discrete stability. In other words, the problem of interest may be well posed at the continuous level, but this does not imply that the corresponding FD or FE discretization will automatically converge. No wonder, then, that the FE numerical analysis community has spent the last 40+ years coming up with different ideas for generating discretely stable schemes, producing such famous results as Mikhlin's theory of asymptotic stability for compact perturbations of coercive problems, Brezzi's theory for problems with constraints, the concept of stabilized methods starting with the SUPG method of Tom Hughes, the bubble methods, stabilization through least squares, stabilization through a proper choice of numerical flux (including a huge family of DG methods starting with the method of Cockburn and Shu), and the more recent use of exact sequences. In the first part of my presentation I will recall Babuska's theorem and briefly review the milestones in designing the various discretely stable methods listed above. In the second part of my presentation, I will present the Discontinuous Petrov-Galerkin method developed recently by Jay Gopalakrishnan and myself [3,4].
The main idea of the method is to employ (approximate) optimal test functions that are computed on the fly at the element level using Bubnov-Galerkin and an enriched space. If the error in approximating the optimal test functions is negligible, the method AUTOMATICALLY guarantees discrete stability, provided the continuous problem is well posed. And this holds for ANY linear problem. The result is shocking until one realizes that we are working with an unconventional least squares method. The twist lies in the fact that the residual lives in a dual space and is computed using dual norms. The method turns out to be especially suited for singular perturbation problems, where one strives not only for stability but also for ROBUSTNESS, i.e., stability UNIFORM with respect to the perturbation parameter. I will use two important model problems, convection-dominated diffusion and the linear acoustics equations, to illustrate the main points. [1] I. Babuska, Error-bounds for Finite Element Method. Numer. Math., 16, 1970/1971. [2] P. Lax, Numerical Solution of Partial Differential Equations. Amer. Math. Monthly, 72, 1965, no. 2, part II. [3] L. Demkowicz and J. Gopalakrishnan. A Class of Discontinuous Petrov-Galerkin Methods. Part II: Optimal Test Functions. Numer. Meth. Part. D. E., 27, 70-105, 2011. [4] L. Demkowicz and J. Gopalakrishnan. A New Paradigm for Discretizing Difficult Problems: Discontinuous Petrov-Galerkin Method with Optimal Test Functions. Expressions (publication of the International Association for Computational Mechanics), November 2010.

Friday, 3:15 PM February 24, 2012 
Kobi Abayomi, School of Industrial and Systems Engineering, Georgia Tech. http://www.isye.gatech.edu/facultystaff/profile.php?entry=kabayomi3 Statistics for re-identification in networked data models Abstract: Re-identification in networked data models involves testing procedures for the identification of similar observations. We consider this testing problem from first principles: we derive probability distributions for a version of a similarity score for three well-known network data models. Our method is unique in that it suggests a sufficiency property for (at least) these distributions, an unexplored area of network/graphical modeling.

Friday, 3:15 PM February 17, 2012 
Melissa Boston, School of Education, Duquesne University http://www.duq.edu/education/FacultyStaff/boston.cfm The Instructional Quality Assessment Toolkit in Mathematics Abstract: A decade of test-based accountability has not yielded the anticipated improvements in students' mathematical achievement, "enough to bring the United States close to the levels of the highest achieving countries" (National Research Council, 2011, p. S3). While standardized assessments can identify deficits in students' knowledge of mathematical content, they cannot provide schools and teachers with pathways for remedying those deficits, and they do not provide explanations for why the deficits occur. Furthermore, current standardized assessments do not provide an indication of the presence or quality of the ambitious instructional practices advocated by the National Council of Teachers of Mathematics (NCTM, 2000, 2006) and recent national reports (e.g., National Mathematics Advisory Panel, 2008), as such practices are often antithetical to the instructional activities that result from efforts to teach to the test (Le, 2009). Direct measures of instructional quality based on observations and artifacts of teaching can provide a necessary and feasible alternative to test-based accountability systems because they have the potential to go beyond simply measuring instruction and serve as a means of improving instruction (Pianta and Hamre, 2009; Stein and Matsumura, 2008). In this talk, I will describe the development of the Instructional Quality Assessment Toolkit in Mathematics, a set of rubrics for measuring the rigor and quality of mathematics instruction based on classroom observations and collections of students' written work.
I will present the conceptual foundation underlying the rubric dimensions and the tests of technical quality conducted internally (Matsumura, Slater, Garnier, and Boston, 2008) and externally (Quint, Akey, Rappaport, and Willner, 2007; Bell, Little, Croft, and Gitomer, 2009). I will discuss the types of research for which the IQA is best suited and, beyond research, how the IQA can serve as a tool for the professional learning of mathematics teachers and instructional leaders. By design, the rubrics identify specific aspects of ambitious instruction that 1) have been empirically shown to impact students' learning and 2) can be enhanced by engaging teachers in readily available professional development materials. For different stakeholders in the educational system, the rubrics can serve as "boundary objects" to communicate a shared vision of ambitious mathematics instruction between teachers and instructional leaders and to serve as a formative evaluation tool for classroom observations, thereby setting expectations and accountability for high-quality teaching. Ideally, the IQA and other direct measures of teaching will someday be incorporated into formal assessments of teaching quality at scale, supplementing or replacing the current national fixation on indirect assessments of teacher quality.

Friday, 3:15 PM February 10, 2012 
Artin Armagan, SAS Institute, Inc. http://sites.google.com/site/armaganartin On Shrinkage Priors Abstract: In recent years, a rich variety of shrinkage priors have been proposed that can also yield thresholding/variable selection procedures via maximum a posteriori estimation. In general, these priors can be expressed as scale mixtures of normals, but may possibly have more complex forms and better properties than more traditional shrinkage priors such as the double exponential. I will be presenting some of our recent work that includes the novel hierarchical modeling of such priors and some of their interesting and promising properties.
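The scale-mixture representation mentioned in the abstract has a classical concrete instance: the double exponential (Laplace) prior is a normal with an exponentially distributed variance. If W ~ Exp(rate theta) and X | W ~ N(0, W), then X is Laplace with scale b = 1/sqrt(2*theta), so Var(X) = 2b^2 and the kurtosis is 6 (versus 3 for a normal). A quick simulation check (parameter values are illustrative):

```python
import math
import random

rng = random.Random(42)

# theta = 0.5 gives Laplace scale b = 1, hence Var(X) = 2 and kurtosis 6.
theta, n = 0.5, 200_000
xs = [math.sqrt(rng.expovariate(theta)) * rng.gauss(0.0, 1.0)
      for _ in range(n)]

m2 = sum(x * x for x in xs) / n      # sample second moment (mean is 0)
m4 = sum(x ** 4 for x in xs) / n
kurtosis = m4 / m2 ** 2              # Laplace: 6; normal: 3
```

The heavier-than-normal tails visible in the kurtosis are what let such priors shrink small coefficients aggressively while leaving large ones relatively untouched.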

Friday, 3:15 PM February 3, 2012
Joachim Schoberl, Institute for Analysis and Scientific Computing, Vienna University of Technology http://www.asc.tuwien.ac.at/~schoeberl Hybrid Discontinuous Galerkin Methods with Vector Valued Finite Elements Abstract: In this talk we discuss some recent finite element methods for solid mechanics and fluid dynamics. Here, the primary unknowns are continuous vector fields. We show that it can be useful to treat the normal continuity and tangential continuity of the vector fields differently. One example is the construction of exactly divergence-free finite element spaces for incompressible flows, which leads to finite element spaces with continuous normal components. Another example is structural mechanics, where tangentially continuous finite elements lead to locking-free methods. Keeping one component continuous, we arrive either at H(curl)-conforming or H(div)-conforming methods. The other component is treated by a hybrid discontinuous Galerkin method. We discuss a generic technique to construct conforming high order finite element spaces for H(curl) and H(div), i.e., Raviart-Thomas and Nedelec type finite elements. By this construction, we can easily build divergence-free finite element subspaces.

Friday, 3:15 PM January 27, 2012 
Malgorzata Peszynska, Department of Mathematics, Oregon State University http://www.math.oregonstate.edu/people/view/mpesz Mathematical modeling of methane in the subsurface: toward hybrid models Abstract: Methane gas is both an energy resource and an environmental hazard. In the talk we describe two applications, methane hydrates and coalbed methane, whose understanding requires broad interdisciplinary collaborations and is important for global climate and energy studies. We first discuss various models based on partial differential equations with special constructs that incorporate multiple scales, multivalued operators, and inequality constraints. We indicate the mathematical and computational challenges, which include the paucity of analysis results available for comprehensive models. Then we indicate the need to go beyond these, e.g., to coarse-grained discrete models of statistical mechanics, for accurate modeling of complex dynamic phenomena at micropore and molecular scales. These eventually need to be coupled to continuum models. While such hybrid models offer great opportunities, they also present an entirely new set of challenges and research questions.

Friday, 3:15 PM January 20, 2012 
Lee Gibson, School of Natural Sciences, Indiana University Southeast http://dl.dropbox.com/u/144447/index.html Speed and Rate of Escape for Random Walks Abstract: A random walk on an infinite graph tends to wander farther and farther away from its starting point. On average, this displacement grows at different rates depending on both the properties of the walk and of the graph geometry. The volume growth, branching, or group structures of the graph have been used to determine whether the displacement grows as the number of steps, as the square root of the number of steps, or somewhere in between. We will survey many known results in this area and describe a new result for non-transitive graphs with exponential volume growth.
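The two extremes in the abstract, displacement growing as the square root of the number of steps versus as the number of steps, show up already in a toy simulation: on Z (polynomial volume growth) the mean displacement of simple random walk grows like sqrt(n), while on the 3-regular tree (exponential volume growth) the distance from the root grows linearly with speed 1/3, since away from the root two of the three neighbors increase the distance. A sketch:

```python
import math
import random

rng = random.Random(1)
n = 10_000

# Simple random walk on Z: E|S_n| ~ sqrt(2n/pi), i.e. diffusive behavior.
trials = 200
avg = sum(abs(sum(rng.choice((-1, 1)) for _ in range(n)))
          for _ in range(trials)) / trials
diffusive_rate = avg / math.sqrt(n)      # near sqrt(2/pi) ~ 0.80

# Distance from the root for random walk on the 3-regular tree: the walk
# steps away from the root with probability 2/3, so it is ballistic with
# speed (2/3) - (1/3) = 1/3.
d = 0
for _ in range(n):
    d = d + 1 if (d == 0 or rng.random() < 2.0 / 3.0) else d - 1
ballistic_rate = d / n                   # near 1/3
```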

Friday, 3:15 PM January 13, 2012 
Carlos Corvini, Carlos Corvini Internacional, Argentina http://www.corvini.com/en_ccorvini_cont.htm FAST APPROACH SYSTEM (FAS) FOR LEARNING CALCULUS Subject: Solving Ordinary Differential Equations Abstract: The main objective in teaching Calculus is the understanding of concepts. In this work I propose a FAS philosophy for learning Calculus; that is, a way of reaching the concepts from an equidistant equilibrium between a traditional and a reformist point of view. Because of its structure, this system may be as traditional and instructive (of the cookbook-recipe type) or as reformist (cutting out or taking different approaches) as you wish, thus stimulating logical and creative thinking. The system is aimed at the fast self-learning of Calculus, offering flexibility and interaction similar to one-on-one tutoring. It also complements and/or supplements syllabi developed by teachers. The philosophical key to FAS is access to knowledge by means of a set of different approaches, each of which has been carefully designed to be independent of the others. Graphically, we may picture the approaches as lying parallel to one another, like the tines of a fork. The idea is to gain understanding of the concepts using a strategy that surfs between the different approaches, consolidating the process of comprehension and so generating a way of thinking. Due to the independence of the approaches, every single one may be considered a book in itself. The new information provided by each approach is added to the cognitive structure and associated with preexisting knowledge. Each approach is used not as an end in itself but as a means to trigger a disintegration of the cognitive structure and its subsequent automatic restructuring. To illustrate the FAS we take up the concept of solving ordinary differential equations. Targets: 1. The system is aimed at teaching Calculus to those who need it as a work tool, when knowledge is needed promptly.
For example, engineers who wish to recall or broaden their knowledge of some matters. 2. It is also aimed at high school and college students, who benefit from the self-acquisition of knowledge through better academic performance, eventually allowing them to save money on tutoring. 3. Finally, it is aimed at instructors who wish to turn the relationship with their students into a beneficial symbiosis.

Friday, 3:15 PM December 2, 2011 
Kristen Bieda, Department of Teacher Education, Michigan State University https://www.msu.edu/~kbieda Enacting the Standards for Mathematical Practice from the Common Core State Standards: Challenges and Opportunities Abstract: The Common Core State Standards asserts that one facet of mathematical proficiency is the ability to generate and critique mathematical arguments. This recommendation is not new considering the history of mathematics education reform, but the wide adoption of the Common Core Standards, and their link to standardized assessments, brings new impetus for teachers to engage students in this fundamental mathematical practice. In this talk, I will discuss the implications of findings from three studies investigating the nature of students' experiences with generating arguments in math classrooms, including the role of curriculum, classroom discourse and classroom norms. The challenges and opportunities provided by the Standards for Mathematical Practice, for mathematics teachers and mathematics teacher educators, will be discussed.

Friday, 3:15 PM November 18, 2011 
Mark Levi, Department of Mathematics, Pennsylvania State University http://www.math.psu.edu/levi/Welcome.html Bicycle tracks, the symplectic group and forced pendula Abstract: I will describe the above-named objects and a surprisingly close connection between them, also mentioning a recent solution of Menzin's conjecture (1906). Some of this talk is based on joint work with Sergei Tabachnikov, Robert Foote and Warren Weckesser.

Friday, 3:15 PM November 4, 2011 
David Wright, Department of Mathematics, Washington University http://wumath.wustl.edu/people/wright_david Mathematics and Music Abstract: Many people intuitively sense that there is a connection between mathematics and music. If nothing else, both involve counting. There is actually much more to the association. Mathematics has been observed to be the most abstract of the sciences, music the most abstract of the arts. Mathematics attempts to understand conceptual and logical truth and appreciates the intrinsic beauty of such. Music evokes mood and emotion by the audio medium of tones and rhythms without appealing to circumstantial means of eliciting such innate human reactions. It is therefore not surprising that the symbiosis of the two disciplines is an age-old story. The Greek mathematician Pythagoras noted the integral relationships between frequencies of musical tones in a consonant interval; the 18th-century musician J. S. Bach studied the mathematical problem of finding a practical way to tune keyboard instruments. In today's world it is not at all unusual to encounter individuals who have significant interest in both subjects. In this talk, several musical and mathematical notions will be brought together, such as scales/modular arithmetic, octave identification/equivalence relation, intervals/logarithms, equal temperament/exponents, overtones/integers, tone/trigonometry, timbre/harmonic analysis, tuning/rationality. Music concepts to be discussed include scales, intervals, rhythm, meter, melody, chords, progressions, note classes, overtones, timbre, formants, equal temperament, and just intonation. Mathematical concepts covered include integers, rational and real numbers, equivalence relations, geometric transformations, modular arithmetic, logarithms, exponentials, and periodic functions. Each of these notions enters the scene because it is involved in one way or another with a point where mathematics and music converge.
A number of musical and sound examples will be played to demonstrate concepts.
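The intervals/logarithms and equal-temperament/exponents pairings in the list above come down to one formula: in twelve-tone equal temperament, the pitch n semitones above a reference of frequency f0 is f0 * 2^(n/12). A sketch (A4 = 440 Hz is the usual reference convention):

```python
A4 = 440.0  # reference pitch, Hz

def pitch(semitones_above_a4):
    """Equal-tempered frequency: each semitone multiplies by 2**(1/12)."""
    return A4 * 2.0 ** (semitones_above_a4 / 12.0)

octave = pitch(12) / pitch(0)   # exactly 2: octave identification
fifth = pitch(7) / pitch(0)     # 2**(7/12) ~ 1.4983, vs the just ratio 3/2
```

The roughly 0.1% gap between 2^(7/12) and the just-intonation ratio 3/2 is the sort of compromise behind the tuning/rationality pairing: no integer power of 2^(1/12) is exactly rational, so equal temperament trades pure intervals for free modulation between keys.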

Friday, 3:15 PM October 28, 2011 
Bala Krishnamoorthy, Department of Mathematics, Washington State University http://www.math.wsu.edu/math/faculty/bkrishna/welcome.html Computational approaches for protein mutagenesis Abstract: Mutagenesis is the process of changing one or more amino acids (AAs) in a protein to alternative AAs such that the resulting mutant protein has desirable properties the original one lacks, e.g., increased or decreased stability, reactivity, solubility, or temperature sensitivity. Biochemists often have to choose a small subset of mutations from a large set of candidates in order to identify the desirable ones. Computational approaches are invaluable for efficiently making these choices. This talk will give an overview of my work on computational approaches to predict stability and solubility mutagenesis, and to understand the mechanisms of temperature-sensitive (Ts) mutations. I employ techniques from several areas including computational geometry, optimization, machine learning, and statistics. The scoring functions for predicting changes to stability and solubility due to mutations are developed using the computational geometry construct of Delaunay tessellation. In the case of solubility, we optimize the scoring function using linear programming techniques. Our findings for the case of Ts mutations reveal that attributes of the neighborhood of the mutation site are as significant in determining Ts mutants as the properties of the site itself.

Monday, 2:00 PM October 17, 2011 
Teresa Graham, Department of Mathematics, Baldwin-Wallace College, Ohio http://www.bw.edu/academics/mth/faculty/ The Mathematical Preparation of K-6 Teachers Abstract: The following topics will be discussed as we explore the mathematical preparation of K-6 teachers: 1. The variation in programs across institutions. 2. What content should be included in programs? 3. At what level of rigor should these courses be taught? (Is licensure testing part of the solution?) 4. What instructional strategies should be utilized?

Friday, 3:15 PM October 14, 2011 
Sergei Ovchinnikov, Department of Mathematics, San Francisco State University http://userwww.sfsu.edu/~sergei BMC Elementary - math circles for grades 1-3 Abstract: The Berkeley Math Circle for grades 1-3 (BMC Elementary) was established in the fall of 2009. I ran this program for three semesters starting in January 2010. In this talk I will describe the program, introduce its leaders, present sample lessons, and share with you many things that I have learned while working with this age group of schoolchildren.

Friday, 3:15 PM October 7, 2011 
Loyce Adams, Department of Applied Mathematics, University of Washington http://www.amath.washington.edu/~adams Computing Factorizations of Matrices Perturbed by Small-Rank Updates Abstract: Computing factorizations of matrices is an important first step in the efficient solution of many scientific problems. A common application problem is that of producing a model that predicts known data, typically in the least squares sense. For this problem, a relevant factorization is the singular value decomposition (SVD), which can be used even if the problem matrix is rank-deficient. The computation of the SVD takes O(mn^2) work for an m x n matrix. If we change the matrix (hence the model), do we have to pay this same cost to factor the new matrix? Of course this depends on how we change the matrix. One could consider adding rows or columns. Instead, this talk addresses the question: if we know the SVD of a matrix, can we use it efficiently to compute the SVD of its sum with another matrix of low rank? We answer this question for various cases, starting with specific perturbations associated with known results and moving to more complicated cases. This work shows that two different strategies prove useful in finding the singular values of the new matrix in the cases involving nonsymmetric perturbations. One involves the solution of three symmetric perturbation problems for the singular values, coupled with a formula for the singular vectors. The other finds roots of a secular equation, coupled with the same formula for the singular vectors.
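The secular-equation strategy mentioned at the end has a classical prototype in the symmetric rank-one case: the eigenvalues of D + rho*z*z^T, with D diagonal with distinct increasing entries d_i and z having no zero components, are the roots of f(lam) = 1 + rho * sum_i z_i^2/(d_i - lam), one in each interval between consecutive d_i and one above d_n when rho > 0. A sketch of that classical device (illustrative only, not the speaker's new SVD results):

```python
import math

def secular_roots(d, z, rho, iters=200):
    """Eigenvalues of diag(d) + rho * z z^T via bisection on the secular
    function f(lam) = 1 + rho * sum(z_i^2 / (d_i - lam)).

    Assumes d strictly increasing, all z_i nonzero, rho > 0, so the i-th
    root lies in (d_i, d_{i+1}) and the last in (d_n, d_n + rho*|z|^2).
    """
    def f(lam):
        return 1.0 + rho * sum(zi * zi / (di - lam) for di, zi in zip(d, z))

    norm2 = sum(zi * zi for zi in z)
    brackets = [(d[i], d[i + 1]) for i in range(len(d) - 1)]
    brackets.append((d[-1], d[-1] + rho * norm2))
    roots = []
    for lo, hi in brackets:
        a, b = lo, hi   # f is increasing on the open interval (lo, hi)
        for _ in range(iters):
            mid = 0.5 * (a + b)
            if f(mid) < 0.0:
                a = mid
            else:
                b = mid
        roots.append(0.5 * (a + b))
    return roots

d = [1.0, 2.0, 3.0]
z = [1.0 / math.sqrt(3.0)] * 3
rho = 0.5
lams = secular_roots(d, z, rho)
```

Each bisection costs O(n) per evaluation, so all n eigenvalues come out in O(n^2) work instead of the O(n^3) of a fresh dense factorization, which is the whole point of exploiting a known factorization under a small-rank update.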

Friday, 3:15 PM June 3, 2011 
Keith Nabb, Department of Mathematics, Moraine Valley Community College http://www.keithnabb.com First-Person Accounts of Advanced Mathematical Thinking Abstract: The field of "advanced mathematical thinking" (AMT) has seen enormous growth and development since its infancy in the 1980s. Through the research of many working groups and independent thinkers, elaborate theories have developed about how humans think about and do mathematics. Currently, there are unresolved issues in the field, including (a) a lack of consensus regarding the nature of AMT, (b) a lack of attention to students, and (c) theoretical saturation. Each of these contributes, albeit in different ways, to deepening the rift between educational research and classroom practice. A study in its early stages recommends we gather student descriptions of AMT to press forward and contribute in three important ways: (a) sharpening what the community means by AMT, (b) determining how (or whether) these theories manifest themselves in problem-solving situations, and (c) infusing the results of student thinking into modern curricula.

Wednesday, 3:00 PM June 1, 2011 
Tom Ferguson, Department of Mathematics, University of California at Los Angeles http://www.math.ucla.edu/~tom Predicting the Last Success Abstract: We observe a finite sequence of independent Bernoulli trials with known but varying probabilities of success, and we want to predict when a given success will be the last. The problem of maximizing the probability of predicting correctly has been solved in a remarkably simple theorem by Thomas Bruss (2000). Among the applications of this theorem is the solution to the wellknown Secretary Problem. A general theorem will be presented that generalizes Bruss's theorem in several ways. This generalization involves predicting, one stage in advance, when a discrete time stochastic process will hit an absorbing state. Several applications of this theorem will be discussed.
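Bruss's theorem, often called the odds theorem, is simple enough to state in code: with success probabilities p_i and odds r_i = p_i/(1 - p_i), sum the odds backwards from the last trial and stop at the first success from the index s where the running sum first reaches 1; the win probability is then (prod q_i)(sum r_i) taken over that tail. Applied to the secretary problem, where the i-th candidate is a record with probability 1/i and predicting the last record means picking the best candidate, a sketch:

```python
def odds_strategy(p):
    """Bruss's odds algorithm: return (s, win_prob), where the optimal rule
    is 'stop at the first success at index >= s' (1-based).

    Assumes p_i < 1 for the scanned tail indices; for the secretary
    problem below the backward scan stops well before the first trial.
    """
    n = len(p)
    r_sum, q_prod, s = 0.0, 1.0, 1
    for i in range(n - 1, -1, -1):        # scan backwards from the end
        q = 1.0 - p[i]
        r_sum += p[i] / q
        q_prod *= q
        if r_sum >= 1.0 or i == 0:
            s = i + 1
            break
    return s, q_prod * r_sum

# Secretary problem with n = 100 candidates: p_i = 1/i.
n = 100
s, win = odds_strategy([1.0 / i for i in range(1, n + 1)])
```

For n = 100 this gives s = 38, close to the familiar n/e cutoff, and a win probability of about 0.371, slightly above the limiting value 1/e.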

Friday, 3:15 PM May 20, 2011 
Sanjib Basu, Division of Statistics, Northern Illinois University http://www.niu.edu/stat/people/faculty/basu.shtml A unified competing risks cure rate model with application to cancer survival data Abstract: A competing risks framework refers to multiple risks acting simultaneously on a subject or on a system. A cure rate, or limited-failure, model postulates a fraction of the subjects/systems to be cured or failure-free, and can be formulated as a mixture model or alternatively by a bounded cumulative hazard model. We develop models that unify the competing risks and limited-failure approaches. The identifiability of these unified models is studied in detail. We describe Bayesian analysis of these models, and discuss conceptual, methodological and computational issues related to model fitting and model selection. We describe detailed applications to survival data from breast cancer patients in the Surveillance, Epidemiology, and End Results (SEER) program of the National Cancer Institute (NCI) of the United States.

Friday, 3:15 PM May 13, 2011 
David Bressoud, President of the MAA, Macalester College http://www.macalester.edu/~bressoud Proofs and Confirmations: The Story of the Alternating Sign Matrix Conjecture Abstract: What is the role of proof in mathematics? Most of the time, the search for proof is less about establishing truth than it is about exploring unknown territory. In finding a route from what is known to the result one believes is out there, the mathematician often encounters unexpected insights into seemingly unrelated problems. I will illustrate this point with an example of recent research into a generalization of the permutation matrix known as the "alternating sign matrix." This is a story that began with Charles Dodgson (aka Lewis Carroll), matured at the Institute for Defense Analysis, drew in researchers from combinatorics, analysis, and algebra, and ultimately was solved with insights from statistical mechanics.
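The objects themselves are easy to define and count for small n: an alternating sign matrix has entries in {-1, 0, 1}, each row and column sums to 1, and the nonzero entries in every row and column alternate in sign. Brute force recovers the start of the famous sequence 1, 2, 7, 42, ... and matches the product formula prod_{k=0}^{n-1} (3k+1)!/(n+k)! that the conjecture (now theorem) asserts. A sketch:

```python
import itertools
import math

def is_asm(rows):
    """Check the alternating-sign-matrix conditions on rows and columns."""
    def ok(line):
        nz = [e for e in line if e != 0]
        # sum 1, nonzeros alternate, and the first nonzero is +1
        return (sum(line) == 1
                and all(a == -b for a, b in zip(nz, nz[1:]))
                and nz[0] == 1)
    return all(ok(r) for r in rows) and all(ok(c) for c in zip(*rows))

def count_asm(n):
    """Count n x n alternating sign matrices by exhaustive enumeration."""
    return sum(
        is_asm([cells[i * n:(i + 1) * n] for i in range(n)])
        for cells in itertools.product((-1, 0, 1), repeat=n * n)
    )

def asm_formula(n):
    """The product formula prod_{k=0}^{n-1} (3k+1)! / (n+k)!."""
    num = math.prod(math.factorial(3 * k + 1) for k in range(n))
    den = math.prod(math.factorial(n + k) for k in range(n))
    return num // den
```

Permutation matrices are exactly the ASMs with no -1 entries, which is the sense in which ASMs generalize them.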

Friday, 3:15 PM May 6, 2011 
Natalia Hritonenko, Department of Mathematics, Prairie View A&M University http://www.pvamu.edu/pages/2359.asp Modeling of Climate Change Impact on Timber Production and Carbon Sequestration Abstract: Adaptation and mitigation address climate change issues in various ways. Carbon sequestration in forestry and soil is one of the major mitigation techniques. An environmental-economic model of forest management is analyzed under different climate change scenarios. The model is formulated as an optimal control problem for a system of nonlinear partial differential equations with integral terms. The goal of the management problem considered is to find the optimal benefits from timber production and carbon sequestration and to estimate the change in tree growth rate caused by climate change. The objective function includes the revenue from timber production, operational expenses, and profit from carbon sequestration. The model takes into account the size structure of trees, intraspecies competition, and density effects, and treats changes of the parameters as a consequence of climate change. The possible dynamics of climate change are taken from known global scenarios, e.g. the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, and incorporated into the model. The qualitative analysis provided helps in understanding how environmental changes impact the biological processes of forests and carbon sequestration, shows the dynamics of the carbon price and the optimal logging time under various climate change scenarios, and suggests how management of carbon sequestration and timber production can be adapted to climate change. The model is validated on real data on forestry in Spain. Video Link ODIN username + password required

Friday, 3:15 PM April 29, 2011 
Raymond Carroll, Department of Statistics, Texas A&M University http://www.stat.tamu.edu/~carroll/index.php Deconvolution and Classification Abstract: In a series of papers on Lidar data, magically good classification rates are claimed once data are deconvolved and a dimension reduction technique applied. The latter can certainly be useful, but it is not clear a priori that deconvolution is a good idea in this context. After all, deconvolution adds noise, and added noise leads to lower classification accuracy. I will give a more or less formal argument that in a closely related class of deconvolution problems, what statisticians call "Measurement Error Models", deconvolution leads to increased classification error rates. An empirical example in a more classical deconvolution context illustrates the results.

Wednesday, 3:30 PM April 27, 2011 
Lew Lefton, School of Mathematics, Georgia Institute of Technology http://www.math.gatech.edu/users/llefton Infinity Bottles of Beer on the Wall, or What's So Funny about Mathematics Abstract: In addition to being a mathematician, Dr. Lefton has worked as a stand-up and improv comedian. Of course, this means that he's funny, and he can prove it! This talk will be a stand-up comedy set consisting of original material based on Lefton's experiences as a graduate student, professional mathematician, and college professor. WARNING: This presentation will include certain portions of Lefton's material that are only suitable for mathematically mature audiences!

Friday, 3:15 PM April 22, 2011 
Peter Hoff, Department of Statistics and Center for Statistics and the Social Sciences, University of Washington http://www.stat.washington.edu/people/people.php?id=67 Mean and covariance models for tensor-valued data Abstract: Modern datasets are often in the form of matrices or arrays, potentially having correlations along each set of data indices. For example, researchers often gather relational data measured on pairs of units, where the population of units may consist of people, genes, websites or some other set of objects. Multivariate relational data include multiple relational measurements on the same set of units, possibly gathered under different conditions or at different time points. Such data can be represented as a multiway array, or tensor. In this talk I consider models of mean and covariance structure for such array-valued datasets. I will present a model for mean structure based upon the idea of reduced-rank array decompositions, in which the elements of an array are expressed as products of low-dimensional latent factors. The model-based version extends the scope of reduced-rank methods to accommodate a variety of data types such as binary longitudinal network data. For modeling covariance structure I will discuss a class of array normal distributions, a generalization of the matrix normal class, and show how it is related to a popular version of a higher-order singular value decomposition for tensor-valued data. Video Link ODIN username + password required

Friday, 3:15 PM March 11, 2011 
Peter Olver, School of Mathematics, University of Minnesota http://www.math.umn.edu/~olver Moving Frames in Applications Abstract: In this talk, I will describe a new approach to the theory of moving frames. The method is completely algorithmic, and applies to very general Lie group actions and even to infinite-dimensional pseudo-groups. It has led to a wide range of new applications, ranging over differential equations, numerical methods, the calculus of variations, geometric flows, computer vision, differential geometry, invariant theory, and elsewhere. The talk will survey the key ideas and a range of applications. Video Link ODIN username + password required

Friday, 3:15 PM March 4, 2011 
Sterling Hilton, David O. McKay School of Education, Brigham Young University http://education.byu.edu/edlf/faculty/hilton_sterling.html The Comprehensive Mathematics Instruction Framework: A new lens for examining teaching and learning in the mathematics classroom Abstract: Research has documented the disparity between the vision of mathematical work espoused by the Principles and Standards for School Mathematics and the nature of actual mathematical activity in most classrooms in the United States. Similar studies suggest that this disparity arises in part because educators are often left "with no framework for the kinds of specific, constructive pedagogical moves that teachers might make" (Chazan & Ball, 1999, p. 2). The Comprehensive Mathematics Instruction (CMI) Framework was designed to provide access to reform-based pedagogical strategies for K-12 mathematics teachers. The CMI Framework is not textbook specific; rather, it is a system of instruction consisting of a learning cycle, a teaching cycle, and a continuum of understanding. It was developed over several years of collaborative efforts between professors from Brigham Young University and representatives from five surrounding school districts representing one-third of the students in Utah. The primary purpose of this presentation is to describe the CMI Framework and its creation; however, some preliminary data describing the effects of professional development in CMI on teachers and their students will also be shared. Video Link ODIN username + password required

Friday, 3:15 PM February 18, 2011 
Steven Schwager, Department of Biological Statistics and Computational Biology and Department of Statistical Science, Cornell University http://www.bscb.cornell.edu/~schwager Censored Geometric Regression with Application to Human Subfertility Abstract: The geometric distribution describes the number of failures observed in a Bernoulli sequence until the first success. We propose a model for geometrically distributed data in the framework of generalized linear models, which can incorporate right-censored responses. Covariates can be included to describe the parameters of this distribution. Diagnostics for this model include a test for overdispersion. These methods are applied to a model of the success of pregnancy induced by intrauterine insemination in 200 women who were treated for subfertility at Yale. The number of these insemination services performed before success (pregnancy) is assumed to follow a geometric distribution. Right censoring occurs when a woman and her doctor decide to discontinue treatment before success, for any of a variety of reasons. We discuss and develop an EM algorithm to estimate the parameters of the geometric model and the probability of discontinuing treatment. A Bayesian model is also developed to estimate the number of treatments expected before pregnancy is achieved. This is joint work with Daniel Zelterman at Yale University. Video Link ODIN username + password required (colloquium starts 6 min 00 sec in) Video Link Part Two
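To make the censoring mechanism concrete, here is a minimal sketch of the likelihood for right-censored geometric data, in the "number of trials until first success" parameterization. The function names and the toy data are illustrative assumptions, not the authors' model, which also includes covariates and the EM machinery described above.

```python
import math

def geometric_loglik(p, data):
    """Log-likelihood for right-censored geometric observations.
    `data` holds (k, observed) pairs: observed=True means success on
    trial k, contributing P(X = k) = (1-p)^(k-1) * p; observed=False
    means the subject discontinued after k unsuccessful trials,
    contributing P(X > k) = (1-p)^k."""
    ll = 0.0
    for k, observed in data:
        if observed:
            ll += (k - 1) * math.log(1.0 - p) + math.log(p)
        else:
            ll += k * math.log(1.0 - p)
    return ll

def geometric_mle(data):
    """Closed-form MLE: total successes over total trials attempted."""
    successes = sum(1 for _, obs in data if obs)
    trials = sum(k for k, _ in data)
    return successes / trials

# Toy data: success on trial 3, censored after 2 trials, success on trial 1.
data = [(3, True), (2, False), (1, True)]
p_hat = geometric_mle(data)
```

Setting the derivative of the log-likelihood to zero gives p = successes / total trials, so the censored observations still contribute information through the trials they accumulate.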

Friday, 3:15 PM February 4, 2011 
Robert delMas, Department of Educational Psychology, University of Minnesota http://www.cehd.umn.edu/edpsych/people/Faculty/delMas.html A different flavor of introductory statistics: Teaching students to really cook Abstract: The NSF-funded CATALST project is developing a radically different undergraduate introductory statistics course based on ideas presented by George Cobb and Danny Kaplan (Cobb, 2007a, b; Kaplan, 2007). Standard parametric tests of significance, such as the two-sample t-test and chi-square analyses, are not taught in the course. Instead, a carefully designed sequence of activities, based on research in mathematics and statistics education, helps students develop their understanding of randomness, chance models, randomization tests and bootstrap coverage intervals. For each unit in the course, students first engage in a Model-Eliciting Activity (MEA; Lesh & Doerr, 2003; Zawojewski, Bowman, & Diefes-Dux, 2008) that primes them for learning the statistical content of the unit (Schwartz, 2004). This is followed by activities in which students explore chance and chance models using modeling software such as TinkerPlots, and then transition to carrying out randomization tests and estimating bootstrap coverage intervals. The talk will present activities from different parts of the course to illustrate this approach, as well as results from preliminary data gathered in fall 2010. Video Link ODIN username + password required

Friday, 3:15 PM January 21, 2011 
Samuel Kou, Department of Statistics, Harvard University http://www.people.fas.harvard.edu/~skou Multiresolution inference of stochastic models from partially observed data Abstract: Stochastic models, diffusion models in particular, are widely used in science, engineering and economics. Inferring the parameter values from data is often complicated by the fact that the underlying stochastic processes are only partially observed. Examples include inference for discretely observed diffusion processes, stochastic volatility models, and doubly stochastic Poisson (Cox) processes. Likelihood-based inference faces the difficulty that the likelihood is usually not available, even numerically. The conventional approach discretizes the stochastic model to approximate the likelihood. To achieve the desired accuracy, one has to use a highly dense discretization. However, dense discretization usually imposes an unbearable computational burden. In this talk we will introduce the framework of Bayesian multiresolution inference to address this difficulty. By working on different resolution (discretization) levels simultaneously, and by letting the resolutions talk to each other, we substantially improve not only the computational efficiency but also the estimation accuracy. We will illustrate the strength of the multiresolution approach with examples. Video Link ODIN username + password required (colloquium starts 2 min 50 sec in)
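The discretization trade-off described above can be seen with the simplest scheme. Below is a minimal sketch, not Kou's multiresolution method, of an Euler-approximated likelihood for a discretely observed Ornstein-Uhlenbeck diffusion; the choice of process, the parameter values, and the function names are illustrative assumptions.

```python
import math
import random

def euler_loglik(theta, mu, sigma, path, dt):
    """Euler-approximated log-likelihood for observations of
    dX = theta * (mu - X) dt + sigma dW at spacing dt: each increment
    is treated as Gaussian with mean theta*(mu - x)*dt and variance
    sigma^2 * dt. Finer dt improves the approximation at greater cost,
    which is the tension multiresolution inference is designed to ease."""
    ll = 0.0
    for x0, x1 in zip(path, path[1:]):
        mean = x0 + theta * (mu - x0) * dt
        var = sigma ** 2 * dt
        ll += -0.5 * math.log(2.0 * math.pi * var) - (x1 - mean) ** 2 / (2.0 * var)
    return ll

# Simulate a path at the true parameters, then check that the
# approximate likelihood prefers them to badly wrong values.
rng = random.Random(1)
dt, theta, mu, sigma = 0.01, 2.0, 1.0, 0.5
x = 0.0
path = [x]
for _ in range(5000):
    x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    path.append(x)
```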

Friday, 4:00 PM January 14, 2011 
Sadie Bowman, Education Manager, Matheatre, Know Theatre of Cincinnati http://www.knowtheatre.com Calculus the Musical Abstract: Using musical parodies from light opera to hip-hop to introduce and illuminate such mathematical concepts as limits, integration and differentiation, this is a comic review of calculus. From Archimedes to Riemann, the quest for the instantaneous rate of change and the area under the curve comes to life through song.

Friday, 3:15 PM January 7, 2011 
Kenneth Stedman, Department of Biology, Portland State University http://web.pdx.edu/~kstedman The news from Mono Lake, where the conditions are extreme, the researchers are opinionated, and the microbes are all above average Abstract: On Dec. 2, 2010, NASA held a news conference on "The Discovery of New Life" http://science.nasa.gov/sciencenews/scienceatnasa/2010/02dec_monolake/ (announced about a week earlier) in which researchers from the USGS and the NASA Astrobiology Institute described a newly "discovered" microbe that, these researchers concluded, uses arsenic instead of phosphate in its biological molecules, including DNA. The online "buzz" regarding this announcement was tremendous and highly speculative; many thought that there would be an announcement of life discovered beyond Earth. A paper was published in Science simultaneously with the news conference. The stated results, which would "alter biology textbooks", were met with considerable skepticism, which has circulated on the WWW since the Dec. 2, 2010 news conference and publication. I will review some basic molecular biology, follow with a description of the unique ecosystem from which the microorganisms were isolated, discuss the claims of the researchers, and then turn to the data presented and the follow-up discussion, including the nature of scientific disclosure. Video Link ODIN username + password required

Friday, 3:15 PM December 3, 2010 
Mihailo Jovanovic, Department of Electrical & Computer Engineering, University of Minnesota http://www.ece.umn.edu/users/mihailo Design of structured optimal feedback gains for interconnected systems Abstract: We consider the design of optimal static feedback gains for interconnected systems subject to structural constraints on the distributed controller. These constraints are in the form of sparsity requirements for the feedback matrix, implying that each controller has access to information from only a limited number of subsystems. For this nonconvex constrained optimal control problem, we derive necessary conditions for optimality in the form of coupled matrix equations. For stable open-loop systems, we show that in the limit of expensive control, the optimal controller can be found analytically using perturbation techniques. We use this feedback gain to initialize a homotopy-based Newton iteration that finds an optimal solution to the original (non-expensive) control problem. We further employ the augmented Lagrangian method to alleviate the difficulty of finding a stabilizing structured gain to initialize efficient Newton and quasi-Newton methods that exploit the sparsity structure of the constraint set. The developed technique is used to design optimal localized controllers in large-scale vehicular formations. The issue of scaling, with respect to the number of vehicles, of global and local performance measures in the optimally controlled formation will be discussed in detail.

Friday, 3:15 PM November 19, 2010 
David Hammond, NeuroInformatics Center, University of Oregon Wavelets on Graphs via Spectral Graph Theory Abstract: Wavelet analysis has proved to be a very successful tool for signal analysis and processing. However, many interesting data sets are defined on domains with complicated network-like structure to which classical wavelets are not suited. In this talk I will describe a novel approach for generating a wavelet transform for data defined on the vertices of a finite weighted graph. A fundamental obstacle to extending classical wavelet analysis to graph domains is the question of how to define translations and scalings for functions defined on an irregular graph. We sidestep this by appealing to an analogy with the graph analogue of the Fourier transform, generated by the spectral decomposition of the discrete graph Laplacian $\mathcal{L}$. Using a bandpass-shaped generating function $g$ and scale parameter $t$, we define the scaled wavelet operator $T_g^t = g(t\mathcal{L})$. The spectral graph wavelets are then formed by localizing this operator by applying it to indicator functions at each vertex. An analysis at fine scales shows that this procedure yields localized wavelets. I will describe how spectral graph wavelet frames can be designed through sampling of the scale parameter, with controllable frame bounds. Additionally, I will describe a fast algorithm for computing the wavelet coefficients based on Chebyshev polynomial approximation, which avoids the need to diagonalize $\mathcal{L}$. I will conclude with illustrative examples of the spectral graph wavelets on several different problem domains.
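The operator $T_g^t = g(t\mathcal{L})$ can be computed directly on a small graph by diagonalizing the Laplacian (the Chebyshev route in the abstract exists precisely to avoid this step on large graphs). The sketch below, on a 5-vertex path graph with an assumed band-pass kernel $g(x) = x e^{-x}$, illustrates the construction; it is not Hammond's implementation.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial graph Laplacian L = D - A for a symmetric,
    nonnegative weighted adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def spectral_graph_wavelet(A, t, center, g=lambda x: x * np.exp(-x)):
    """Wavelet at scale t centered at vertex `center`: apply the
    operator g(t L) to the indicator function of that vertex, via the
    spectral decomposition L = U diag(lam) U^T."""
    L = graph_laplacian(A)
    lam, U = np.linalg.eigh(L)
    delta = np.zeros(A.shape[0])
    delta[center] = 1.0
    return U @ (g(t * lam) * (U.T @ delta))

# Path graph on 5 vertices, wavelet centered at the middle vertex.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
psi = spectral_graph_wavelet(A, t=1.0, center=2)
```

Because g(0) = 0, the wavelet is orthogonal to constant functions (its entries sum to zero), and the symmetry of the path graph about its middle vertex is inherited by the wavelet.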

Friday, 3:15 PM November 5, 2010 
Alexander Dimitrov, Department of Mathematics and Science Programs, Washington State University at Vancouver http://directory.vancouver.wsu.edu/people/alexanderdimitrov Soft Clustering Decoding of Neural Codes Abstract: Methods based on Rate Distortion theory have been successfully used to cluster stimuli and neural responses in order to study neural codes at a level of detail supported by the amount of available data. They approximate the joint stimulus-response distribution by soft-clustering paired stimulus-response observations into smaller reproductions of the stimulus and response spaces. An optimal soft clustering is found by maximizing an information-theoretic cost function subject to both equality and inequality constraints, in hundreds to thousands of dimensions.

Friday, 3:15 PM October 29, 2010 
Serge Preston, Fariborz Maseeh Department of Mathematics + Statistics, Portland State University http://www.mth.pdx.edu/~serge Incidentally on the Fractals Abstract: In this talk I will review some basic properties of fractal sets: different notions of dimension, some operations on fractals, and their behavior under bi-Lipschitz transformations. Then I will present two constructions of integration over fractals: one due to J. Harrison (Berkeley), who followed the approach of Hassler Whitney, and another which employs densities of fractal order.

Friday, 3:15 PM October 22, 2010 
Ian Dinwoodie, Fariborz Maseeh Department of Mathematics + Statistics, Portland State University Monte Carlo Methods for Statistical Analysis Abstract: We will give examples of how randomness and sampling have been used to solve applied problems for over 100 years. Special attention will be given to Markov Chain Monte Carlo methods and applications to exact analysis of discrete data.
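As one small, self-contained instance of the Markov Chain Monte Carlo methods mentioned above, here is a random-walk Metropolis sampler. The standard-normal target, proposal width, and chain length are arbitrary illustrative choices, not anything specific to the talk.

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=2.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Uniform(-step, step)
    and accept with probability min(1, target(x') / target(x)). The
    chain's stationary distribution is the (unnormalized) target."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        log_u = math.log(max(rng.random(), 1e-300))
        if log_u < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal density, known only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sampler never needs the normalizing constant of the target, which is exactly what makes MCMC useful for the exact conditional analyses of discrete data the talk describes.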

Friday, 3:15 PM October 15, 2010 
Finbarr [Barry] Sloane, School of Education, University of Colorado at Boulder http://www.colorado.edu/education/faculty/barrysloane/index.html Measurement in and for Mathematics Education Abstract: The goal of this presentation is to look carefully at the role of measurement in mathematics education. To do so, I will introduce Item Response Theory, specifically one version of this psychometric theory, in the context of the study of young children's learning of whole number concepts in inner-city school settings. The presentation will describe one hypothetical learning trajectory in the context of a method of instruction. The theoretical base for this trajectory was originally proffered by researchers at the University of Wisconsin, Madison. The central feature of this theory of learning asks how students interpret "word problems" and integrates students' chosen strategies into a theory of instruction. In this presentation, we ask how the theory of learning informs the construction of test items, the scaling of these test items, and the building of measures. We close by examining recent changes in psychometrics and their implications for measurement in mathematics education.

Friday, 3:15 PM October 8, 2010 
Tom Siegfried, Editor in Chief, Science News http://www.tomsiegfried.com/ Odds Are It's Wrong: Science and Statistics in the Media Abstract: I will talk about science news coverage and the statistical issues that arise when writing about scientific research for the public.

Friday, 3:15 PM October 1, 2010 
John Stillwell, Department of Mathematics, University of San Francisco https://web.usfca.edu/facultydetails.aspx?id=4294969536 From Perspective Drawing to the Eighth Dimension Abstract: The discovery of perspective drawing in the 15th century led to projective geometry, in which points and lines are the main ingredients. Even with this simple subject matter there are some surprises, where three points fall on the same line or three lines pass through the same point, seemingly for no good reason. The big surprises, or "coincidences", of projective geometry are the Pappus theorem, Desargues theorem, and the little Desargues theorem. Even more surprising, these purely geometric theorems were found (by David Hilbert and Ruth Moufang) to control what kind of *algebra* is compatible with the geometry. Compatible algebras live in 1, 2, 4, and 8 dimensions.

Friday, 3:15 PM May 14, 2010 
Alan Genz, Department of Mathematics, Washington State University Numerical Computation of Multivariate Normal and Multivariate t Probabilities Abstract: A review and comparison will be given of the use of Monte Carlo and quasi-Monte Carlo methods for multivariate normal and multivariate t distribution computation problems. Spherical-radial and separation-of-variables transformations of these problems will be considered. The use of various Monte Carlo and quasi-Monte Carlo methods will be discussed for different application problems.
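A plain Monte Carlo version of the computation, for one bivariate case with a closed-form answer, can be sketched in a few lines. The function name and sample size are illustrative assumptions, and Genz's actual methods (quasi-Monte Carlo combined with the transformations above) converge far faster than this naive estimator.

```python
import math
import random

def mvn_orthant_mc(rho, n=200000, seed=0):
    """Plain Monte Carlo estimate of P(X1 <= 0, X2 <= 0) for a
    bivariate normal with correlation rho, generated via a Cholesky
    factor: X1 = Z1, X2 = rho*Z1 + sqrt(1 - rho^2)*Z2."""
    rng = random.Random(seed)
    c = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        if z1 <= 0.0 and rho * z1 + c * z2 <= 0.0:
            hits += 1
    return hits / n

# For this orthant probability the exact value is 1/4 + arcsin(rho)/(2*pi),
# which gives a check on the estimate.
rho = 0.5
exact = 0.25 + math.asin(rho) / (2.0 * math.pi)
est = mvn_orthant_mc(rho)
```

The Monte Carlo error here shrinks like n^(-1/2); the appeal of the quasi-Monte Carlo methods in the talk is a markedly better rate for smooth transformed integrands.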

Monday, 3:00 PM May 10, 2010 
Doug Arnold, School of Mathematics, University of Minnesota Stability, Consistency, and Convergence: Modern Variations on a Classical Theme Abstract: Consistency and stability of numerical discretizations are the basic leitmotifs of numerical analysis, and the idea that consistency and stability imply convergence is a principal theme, particularly in the numerical solution of partial differential equations. But both consistency and stability can, in some situations, be more subtle and elusive than they might appear. Even seemingly simple examples of numerical discretizations can yield unexpected (and even, in real-life applications, tragic) results. The development of consistent, stable numerical methods remains elusive for important classes of problems. We will survey these ideas from their origins in the pre-computer era to the present day, via a variety of examples. We finish by describing a new approach, the finite element exterior calculus, which borrows tools from geometry and topology to design and elucidate stable finite element methods.

Friday, 3:15 PM May 7, 2010 
Michael Ernst, Information Systems Department, St. Cloud State University Active Learning? Not with My Syllabus! Abstract: It is widely accepted among statistics educators that engaging students with activities can help them learn important statistical concepts. But finding the time to do such activities in class can be challenging for college instructors with an overfull syllabus and limited student contact time. In this talk, I will focus on one part of the introductory statistics course: probability. I will discuss an approach to teaching probability that uses an activity to get students involved, but also minimizes the amount of time spent on the subject.

Friday, 3:15 PM April 30, 2010 
Xue Lan, Department of Statistics, Oregon State University Incorporating Correlation for Multivariate Failure Time Data When Cluster Size Is Large Abstract: We propose a new estimation method for multivariate failure time data using the quadratic inference function (QIF) approach. The proposed method efficiently incorporates within-cluster correlations. Therefore it is more efficient than those which ignore within-cluster correlation. Furthermore, the proposed method is easy to implement. Unlike the weighted estimating equations in Cai and Prentice (1995), it is not necessary to explicitly estimate the correlation parameters. This simplification is particularly useful in analyzing data with large cluster size where it is difficult to estimate intra-cluster correlation. Under certain regularity conditions, we show the consistency and asymptotic normality of the proposed QIF estimators. A chi-squared test is also developed for hypothesis testing. We conduct extensive Monte Carlo simulation studies to assess the finite sample performance of the proposed methods. We also illustrate the proposed methods by analyzing a data set from a kidney infection study.

Friday, 3:15 PM April 23, 2010 
Michael Perlman, Department of Statistics, University of Washington Two Statistical Vignettes: Simpson's Paradox and Shaved Dice Abstract: 1. Simpson's Paradox occurs for events A, B, and C if A and B are positively correlated given C, positively correlated given not-C, but are negatively correlated in the aggregate. If a 2x2x2 table is chosen "at random", what is the probability that it will exhibit Simpson's Paradox? 2. Persi Diaconis has fascinated audiences at all levels with the following question: If one face of a standard gaming die is shaved uniformly by a specified fraction $s$, express the new face probabilities as a function of $s$. This apparently simple problem appears to be intractable. However, it leads to an interesting statistical question: if the shaved dice are thrown in pairs, as is typical in the game of craps, what is the most efficient design for accurate estimation of the new face probabilities? (It will benefit the audience to review the properties of the Fisher information number beforehand.) (This is joint work with Marios Pavlides, with the assistance of Fred Bookstein.)
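Vignette 1 is easy to probe by simulation. The sketch below draws 2x2x2 probability tables uniformly from the simplex and counts how often Simpson's Paradox (in one fixed direction, with association measured by the sign of the odds ratio) appears; the function names, sample size, and these modeling choices are assumptions for illustration, not Perlman and Pavlides' analysis.

```python
import random

def exhibits_simpson(p):
    """p[a][b][c] gives the joint probability of (A=a, B=b, C=c).
    True if A and B are positively associated given C and given not-C,
    but negatively associated in the aggregate."""
    def assoc(cells):
        # Positive exactly when the 2x2 table's odds ratio exceeds 1.
        return cells[1][1] * cells[0][0] - cells[1][0] * cells[0][1]
    given_c1 = [[p[a][b][1] for b in (0, 1)] for a in (0, 1)]
    given_c0 = [[p[a][b][0] for b in (0, 1)] for a in (0, 1)]
    marginal = [[p[a][b][0] + p[a][b][1] for b in (0, 1)] for a in (0, 1)]
    return assoc(given_c1) > 0 and assoc(given_c0) > 0 and assoc(marginal) < 0

def simpson_rate(n=20000, seed=0):
    """Fraction of tables drawn uniformly from the 8-cell simplex
    (normalized exponentials) exhibiting the paradox in one direction."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        w = [rng.expovariate(1.0) for _ in range(8)]
        s = sum(w)
        p = [[[w[4 * a + 2 * b + c] / s for c in (0, 1)]
              for b in (0, 1)] for a in (0, 1)]
        count += exhibits_simpson(p)
    return count / n

# A uniform table has zero association everywhere, hence no paradox.
uniform_p = [[[0.125 for _ in (0, 1)] for _ in (0, 1)] for _ in (0, 1)]
rate = simpson_rate()
```

The simulation suggests the paradox is rare but far from negligible under this "at random" model, which is the question the vignette makes precise.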

Friday, 3:15 PM April 16, 2010 
Daniel Schwartz, School of Education, Stanford University Neuroscience, mathematics, and education Abstract: Cognitive neuroscience investigates intelligent behavior: for example, attention, visual search, memory, and decision making. It is just beginning to investigate mathematics. The goal of this talk is to give some idea of the constraints and benefits of brain-based methods, specifically fMRI, while also sharing some of the recent findings on mathematical thinking. A second goal is to consider three ways that neuroscience might influence mathematics education: clinical, direct to instruction, and theoretical. The talk will present some new findings as a test case of whether the theoretical approach is satisfying to mathematicians and educators. The question to be addressed through fMRI is how people make meaning of abstract mathematical concepts.

Friday, 3:15 PM April 9, 2010 
Songming Hou, Department of Mathematics and Statistics, Louisiana Tech University Shape Reconstruction and Classification Abstract: Applications such as medical imaging, nondestructive testing, seismic imaging, and target detection/recognition utilize active arrays of transducers that emit signals and record reflected and/or transmitted signals. Recording the inter-element response forms the response matrix of an active array. I will discuss reconstructing the shape of targets using multi-frequency response matrix data with direct and iterative imaging methods. I will also discuss shape classification using response matrix data.

Friday, 3:15 PM March 12, 2010 
Richard Gomulkiewicz, Department of Mathematics and School of Biological Sciences, Washington State University When is correlation coevolution? Abstract: Spatial correlations between traits of interacting species have long been used to identify putative cases of coevolution. Here we evaluate the utility of this approach using models to predict correlations that evolve between traits of interacting species for a broad range of interaction types. Our results reveal coevolution is neither a necessary nor sufficient condition for the evolution of spatially correlated traits between species. Specifically, our results show that coevolutionary selection fails to consistently generate statistically significant correlations and, conversely, that noncoevolutionary processes can readily cause significant correlations to evolve.

Friday, 3:15 PM February 19, 2010 
Jiming Jiang, Department of Statistics, University of California at Davis Invisible Fence Methods and the Identification of Differentially Expressed Gene Sets Abstract: The fence method (Jiang et al. 2008; Ann. Statist. 36, 1669-1692) is a recently developed strategy for model selection. The idea involves a procedure to isolate a subgroup of what are known as correct models (of which the optimal model is a member). This is accomplished by constructing a statistical fence, or barrier, to carefully eliminate incorrect models. Once the fence is constructed, the optimal model is selected from amongst those within the fence according to a criterion which can be made flexible. The construction of the fence can be made adaptive to improve finite sample performance. We extend the fence method to situations where a true model may not exist or may not be among the candidate models. Furthermore, another look at the fence methods leads to a new procedure, known as the invisible fence (IF). A fast algorithm is developed for IF in the case of a subtractive measure of lack-of-fit. The main application considered here is microarray gene set analysis. In particular, Efron and Tibshirani (2007; Ann. Appl. Statist. 1, 107-129) proposed a gene set analysis (GSA) method based on testing the significance of gene sets. In typical microarray experiments the number of genes is much larger than the number of microarrays. This special feature presents a real challenge to the implementation of IF for microarray gene set analysis. We show how to solve this problem, and carry out an extensive Monte Carlo simulation study that compares the performance of IF and GSA in identifying differentially expressed gene sets. The results show that IF outperforms GSA, in most cases significantly, uniformly across all the cases considered. Furthermore, we demonstrate both theoretically and empirically the consistency property of IF, while pointing out the inconsistency of GSA under certain situations. 
A real data example is considered. This work is joint with Dr. Thuan Nguyen of Oregon Health and Science University, and Dr. J. Sunil Rao of Case Western Reserve University.

Friday, 3:15 PM February 19, 2010 
Prabir Barooah, Department of Mechanical and Aerospace Engineering, University of Florida Stability and robustness issues in decentralized formation control Abstract: Formation control typically refers to the design of feedback control laws to move a group of agents along a given trajectory while maintaining a desired formation geometry. The agents can be man-made vehicles or animals such as birds and fish, and as such the problem is relevant to coordinated motion of robotic vehicles for search and surveillance, as well as to understanding group coordination in nature. The agents are modeled as point masses, or double integrators. Each agent's control (acceleration command) depends on the states of other vehicles. It has been known for some time that the coupled system exhibits poor stability margins and high sensitivity to external disturbances when the number of agents is large. Scalability becomes poorer when the control law is required to be decentralized, that is, when each agent is allowed to access information only from a small number of nearby agents to compute its control action. The difficulty in the decentralized formation control problem comes from several sources, one being the lack of appropriate control design and analysis tools. Classical control theory is useful for designing only centralized control laws. In this talk we will discuss some recent developments on the analysis and design fronts for the formation control problem. The first part of the talk will describe the application of graph-theoretic concepts to analyze the behavior of the system as a function of the number of agents and the topology of the interconnection network. The spectrum of the Dirichlet Laplacian matrix of the interconnection graph is seen to play a major role, which is then utilized to establish how performance scales with the size and structure of the network. 
The second part of the talk is about a novel design methodology called "mistuning", which is suggested by a PDE approximation of the formation dynamics. The advantage of the PDE formulation is that it reveals the mechanism for loss of performance, and ways to ameliorate it, much more clearly than the traditional state-space formulation does. The resulting mistuning design yields a large improvement in the stability margin and robustness to disturbances compared to traditional designs.
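The role of the Laplacian spectrum in this scaling can be illustrated with a toy computation (an illustration, not the talk's model: a nearest-neighbor path graph standing in for the interconnection network, using the ordinary rather than the Dirichlet Laplacian):

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian of a path on n nodes (nearest-neighbor information flow)."""
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return L

# The smallest nonzero eigenvalue (the spectral gap) is a proxy for the
# stability margin; for a path of n agents it scales like 1/n^2, so the
# margin collapses as the platoon grows.
for n in (10, 20, 40):
    gap = sorted(np.linalg.eigvalsh(path_laplacian(n)))[1]
    print(n, gap)
```

The printed gaps shrink roughly fourfold each time the number of agents doubles, which is the loss of scalability the abstract describes.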

Friday, 3:15 PM February 12, 2010 
Mike Jeffrey, Applied Nonlinear Mathematics Group, University of Bristol Hunting ducks and nondeterminism in nonsmooth dynamics Abstract: The effectiveness of physical theories depends heavily upon context: the parameter regime in which a particular theory/model makes a problem solvable. The real world of nature and engineering, however, consists of multiple systems, all with different structures and different scales, and therefore different models. Bring two different systems into contact and the results are often highly nonsmooth (e.g. impact, friction, and bursting) and are still barely understood. The theory of nonsmooth dynamical systems is young, but fruitful. A nonsmooth jump in phase space is something much more violent than familiar sudden events, like bifurcations. Particular interest lies in trajectories that stick to and slide along discontinuity surfaces in phase space (Filippov systems). These cause 'sliding bifurcations', which we recently showed can destroy stable and periodic dynamics without warning. As we will discuss: these were recently observed in a superconducting sensor, they pose problems for our understanding of fundamental mechanics and control by giving rise to nondeterministic chaos, and they offer new insight into the "canard" (French for duck) phenomenon. Canards are related to relaxation oscillations and the neuronal spiking responsible for epilepsy, and having taken a journey through the nonsmooth world we are led back, via a relation with "nonstandard analysis", towards an understanding of what puts the "non" into "nonsmooth".

Friday, 3:15 PM February 5, 2010 
Mai Zhou, Department of Statistics, University of Kentucky A Tale of Two Empirical Likelihoods Abstract: We first discuss and compare two general but different approaches to the empirical likelihood with right-censored data. Both approaches have appeared in the literature. We argue that one is better. We then examine a censored quantile regression model and propose an empirical likelihood based on the "case-weight" estimator. Simulations and an example are given at the end.

Friday, 3:15 PM January 22, 2010 
Helen Burn, Highline Community College Factors that Shape Curricular Reasoning Abstract: Curriculum development and implementation are the daily work of faculty. This talk introduces perspectives on curriculum from higher education research and presents findings from an empirical study of community college faculty engaged in reform of college algebra that demonstrates key influences and pressing issues that shape curricular reasoning.

Friday, 3:15 PM January 15, 2010 
Beth Andrews, Department of Statistics, Northwestern University Rank-Based Estimation for Time Series Model Parameters Abstract: In this talk, I consider a rank-based method for estimating time series model parameters. The parameter estimates minimize the sum of mean-corrected model residuals weighted by a function of residual rank, and are similar to the rank estimates proposed by L.A. Jaeckel [Estimating regression coefficients by minimizing the dispersion of the residuals, Ann. Math. Statist. 43 (1972) 1449-1458] for estimating linear regression parameters. Rank estimates are known to be robust and relatively efficient in general. It will be shown that this is true in the case of parameter estimation for standard linear and nonlinear time series processes. The estimation technique is robust because the rank estimates are n^{1/2}-consistent (n represents sample size) and asymptotically normal under mild conditions. Since the weight function can be chosen so that rank estimation has the same asymptotic efficiency as maximum likelihood estimation, rank estimation is also relatively efficient. The relative efficiency extends to the unknown noise distribution case since rank estimation with a single weight function can be nearly as efficient as maximum likelihood for a large class of noise distributions. In addition, rank estimation dominates traditional Gaussian quasi-maximum likelihood estimation with respect to both robustness and asymptotic efficiency.
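Jaeckel's dispersion criterion cited above can be sketched for simple linear regression (an illustrative reconstruction with Wilcoxon scores, not the talk's time-series version):

```python
import numpy as np

def jaeckel_dispersion(beta, x, y):
    """Jaeckel (1972) dispersion with Wilcoxon scores:
    D(beta) = sum_i a(R_i) * e_i, where e_i = y_i - beta*x_i,
    R_i is the rank of e_i, and a(i) = sqrt(12) * (i/(n+1) - 1/2)."""
    e = y - beta * x
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1            # ranks 1..n
    scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.standard_t(df=3, size=200)         # heavy-tailed noise

# D is convex and piecewise linear in beta, so a fine grid search suffices here.
grid = np.linspace(1.0, 3.0, 2001)
beta_hat = grid[np.argmin([jaeckel_dispersion(b, x, y) for b in grid])]
```

Even with heavy-tailed noise the minimizer stays close to the true slope of 2, which is the robustness the abstract refers to.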

Friday, 3:15 PM December 4, 2009 
Azhar Iqbal, School of Electrical and Electronic Engineering, University of Adelaide An Introduction to Quantum Games Abstract: The games of chess, poker, and rock-scissors-paper are known to everyone. In the year 1999 David Meyer asked what may happen to a game when we play it in the microworld, where the governing laws of physics are quantum mechanical and our everyday intuition fails. With an interesting example answering this question, he suggested that game theory can be a useful tool for a better understanding of quantum algorithms and quantum communication protocols. Meyer's work was received with great interest, and the last decade has seen a new research field emerge that is devoted to the study and analysis of various games in which the participating players share quantum mechanical resources. This talk introduces the field of quantum games and discusses some of its exciting examples.

Monday, 3:15 PM November 30, 2009 
Lan Wang, School of Statistics, University of Minnesota Locally Weighted Censored Quantile Regression Abstract: Censored quantile regression offers a valuable supplement to the Cox proportional hazards model for survival analysis. Existing work in the literature often requires stringent assumptions, such as unconditional independence of the survival time and the censoring variable or global linearity at all quantile levels. Moreover, some of the work uses recursive algorithms, making it challenging to derive asymptotic normality. To overcome these drawbacks, we propose a new locally weighted censored quantile regression approach that adopts the redistribution-of-mass idea and employs a local reweighting scheme. Its validity only requires conditional independence of the survival time and the censoring variable given the covariates, and linearity at the particular quantile level of interest. Our method leads to a simple algorithm that can be conveniently implemented with R software. Applying recent theory of M-estimation with infinite dimensional parameters, we establish the consistency and asymptotic normality of the proposed estimator. The proposed method is studied via simulations and is illustrated with the analysis of an acute myocardial infarction dataset. (This is joint work with Dr. Huixia Judy Wang from the Department of Statistics, North Carolina State University.)

Friday, 3:15 PM November 13, 2009 
Megan Staples, University of Connecticut Justification as a disciplinary practice and a learning practice: Negotiating its meaning in mathematics classrooms Abstract: In this talk, I explore the notion of justification as a disciplinary practice (central to the work of research mathematicians) and as a classroom learning practice (central to the work of mathematics teachers). As a disciplinary practice, justification has many purposes: it is used to validate claims, provide insight into a result, and systematize knowledge (Hanna, 2000). As a learning practice, justification may be used for these purposes, but also for other classroomrelevant purposes such as assessment, promoting mathematical dispositions, and pursuing content learning goals. Drawing on data from two research projects, I try to clarify the nature of justification as a learning practice and the relationship between its role in the classroom and the role of justification in the mathematicians’ community. I further analyze factors that shape these differences and explore the potential for greater alignment. Implications for mathematics education and teacher development will be discussed.

Thursday, 2:00 PM November 12, 2009 
Hiroshi Suzuki, International Christian University, Tokyo, Japan An Introduction to Distance-Regular Graphs: Groups, Graphs and Algebras Abstract: The completion of a classification of finite simple groups was announced in the 1980s, and recently the main gap in its proof was filled by Aschbacher and Smith. As a next step, several proposals were made to classify various finite combinatorial or geometrical objects with high symmetry or regularity. In this talk I give a brief introduction to distance-regular graphs, some recent developments, and the algebras associated to them.

Friday, 3:15 PM November 6, 2009 
Swaroop Darbha, Department of Mechanical Engineering, Texas A&M String Stability of Automatic Vehicle Following Systems Abstract: Automatic Vehicle Following Systems (AVFS) have been investigated for the past seventy years and have recently been deployed in passenger vehicles in the form of Adaptive Cruise Control (ACC) Systems. AVFS couple the motion of vehicles by feedback and consequently, errors in spacing and velocity can propagate in a collection of vehicles employing AVFS. Amplification of errors in spacing can cause accidents and hence, is an important topic to be studied from a practical viewpoint. The topic of string stability in automatic vehicles is precisely concerned with this problem and is the central focus of my talk. In this talk, I will relate how information flow among vehicles plays an important role in the propagation of errors by considering various types of information flow graphs. In particular, I will show that symmetric information flow graphs are not suited for automatic vehicle following systems that desire to maintain a fixed distance from other vehicles throughout their motion.

Friday, 3:15 PM October 30, 2009 
Yaming Yu, Department of Statistics, University of California, Irvine Weighted sums, stochastic orders, and entropy Abstract: Some stochastic inequalities are presented for weighted sums of i.i.d. random variables. The main inequalities are derived using majorization techniques under certain log-concavity assumptions. Related results comparing distributions according to Shannon entropy are also discussed.

Friday, 3:15 PM October 23, 2009 
Xuming He, Department of Statistics, University of Illinois at Urbana-Champaign On Dimensionality of Mean Structure from a Single Data Matrix Abstract: We consider inference from data matrices that have low dimensional mean structures. In educational testing and in probe-level microarray data, estimation and inference are often made from a single data matrix believed to have a unidimensional mean structure. In this talk, we focus on probe-level microarray data to examine the adequacy of a unidimensional summary for characterizing the data matrix of each probeset. To do so, we propose a low-rank matrix model, and develop a useful framework for testing the adequacy of unidimensionality against targeted alternatives. We analyze the asymptotic properties of the proposed test statistics as the number of rows (or columns) of the data matrix tends to infinity, and use Monte Carlo simulations to assess their small sample performance. Applications of the proposed tests to GeneChip data show that evidence against a unidimensional model is often indicative of practically relevant features of a probeset. The talk is based on joint work with Xingdong Feng, currently a postdoctoral fellow at the National Institute of Statistical Sciences.

Friday, 3:15 PM October 9, 2009 
John Caughman, Department of Mathematics & Statistics, Portland State University Lattice chains and Delannoy paths: A triumph of paper and pencil Abstract: Lattice chains and Delannoy paths represent two ways to progress through a finite integer lattice. Recent computational techniques reveal some deep and fascinating connections between the two, but the complexity of counting these objects rises quickly with the dimension, limiting even the most powerful computers to very modest results. However, using nothing more than paper, pencil, and some classical counting techniques, we gain new insight into these questions and prove a number of results that are computationally inaccessible.
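For context (the classical two-dimensional case, not the talk's new results), the Delannoy numbers satisfy a simple recurrence that is easy to compute:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def delannoy(m, n):
    """Number of lattice paths from (0,0) to (m,n) using steps
    east (1,0), north (0,1), and diagonal (1,1)."""
    if m == 0 or n == 0:
        return 1  # only one way: march straight along the axis
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

# Central Delannoy numbers D(n, n): 1, 3, 13, 63, 321, ...
central = [delannoy(n, n) for n in range(5)]
```

The rapid growth of these values, and of their higher-dimensional analogues, is what limits brute-force computation in larger dimensions.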

Friday, 3:15 PM October 2, 2009 
Alison Ahlgren, Department of Mathematics, University of Illinois at Urbana-Champaign Readiness Assessment and Course Placement through Introductory Calculus Abstract: The University of Illinois and the Department of Mathematics recently undertook a broad and massive implementation of ALEKS to test students for course placement readiness. ALEKS is used for advising and placement purposes, assessment, remediation, and as a core course component. The ALEKS implementation allows for better student placement, remediation, and retention, and has increased passing rates in mathematics courses through Calculus I. We are able to better place students, better educate students, and to save students, instructors, and advisors time, for students' time is crucial. ALEKS is a powerful artificial-intelligence-based assessment tool that zeros in on the strengths and weaknesses of a student's mathematical knowledge, reports its findings to the student, and then, if necessary, provides the student with a learning environment for bringing this knowledge up to an appropriate level for course placement. ALEKS is non-multiple-choice, and when a student first enters ALEKS a brief tutorial shows the student how to use the ALEKS input tools. The student then takes the ALEKS assessment. In approximately 45 minutes, ALEKS assesses the student's current mathematical knowledge by asking roughly 30 questions. ALEKS chooses each question on the basis of the student's answers to all the previous questions. Each set of assessment questions is unique. When the student has completed the assessment, ALEKS produces a precise report of the student's mathematical knowledge. Read more about the U of I Math Placement Exam through ALEKS at http://www.math.uiuc.edu/ALEKS/

Monday, 3:15 PM August 10, 2009 
Brian Alspach, Department of Mathematics and Statistics, Simon Fraser University Hamilton paths in vertex-transitive graphs Abstract: Hamilton cycles have a rich history in graph theory because of Hamilton's original game and relations with early attempts to settle the four-color problem. The importance of the travelling salesman problem became apparent in the mid-twentieth century, adding to their relevance. In the last twenty years computer scientists have become interested in Hamilton paths and cycles because of computer architecture. On the other hand, vertex-transitive graphs are important in computer architecture and network theory. The two concepts were united by Lovasz's famous 1969 question: Does every connected vertex-transitive graph have a Hamilton path? I shall present a survey of what is known about a stronger version of Lovasz's problem involving Hamilton paths in vertex-transitive graphs.

Friday, 3:15 PM June 12, 2009 
Jun Liu, Department of Statistics, Harvard University A Bayesian Partition Model for Detecting eQTL Modules Abstract: Studies of the relationship between genomic DNA variation and gene expression variation, often referred to as "expression quantitative trait loci (eQTL) mapping", have been conducted in many species and have resulted in many significant findings. Because of the large number of genes and genetic markers in such analyses, it is extremely challenging to discover how a small number of eQTLs interact with each other to affect mRNA expression levels for a set of (most likely co-regulated) genes. We present a Bayesian method to facilitate the task, in which co-expressed genes mapped to a common set of markers are treated as a module characterized by latent indicator variables. A Markov chain Monte Carlo algorithm is designed to search simultaneously for the module genes and their linked markers. We show by simulations that this method is much more powerful for detecting true eQTLs and their target genes than traditional QTL mapping methods. We applied the procedure to a data set consisting of gene expression and genotypes for 112 segregants of S. cerevisiae and identified modules containing genes mapped to previously reported eQTL hot spots, dissected these large eQTL hot spots into refined modules with biological implications, and discovered a few epistasis modules. If time permits, I will also discuss a few ideas regarding Bayesian modeling and discovery of interactions among a large number of variables in a classification or regression framework.

Monday, 3:00 PM June 1, 2009 
Jayant V. Deshpande, University of Pune Load sharing systems: Models and Applications Abstract: The strength and lifetime of load sharing parallel systems have been considered at least since Daniels (1945) considered bundles of fibers. The main characteristic of such systems is that upon the failure of any component the surviving components have to shoulder extra load, which changes the joint probability distribution of the component lifetimes at each component failure. Gross et al. (1971) observed and modeled the failure of two-organ systems in biological entities in a similar way. Recently, such systems have been studied under the names 'dynamic reliability systems' as well as 'sequential k-out-of-n systems' by Kvam and Pena (2005), Cramer and Kamps (2001), etc. In this talk we shall restrict attention to two-component parallel systems and consider some appropriate models to describe the load sharing rules and their properties. We shall examine whether common bivariate reliability distributions fit this model and propose a test of whether such a load sharing rule actually governs the system.

Friday, 3:15 PM May 29, 2009 
Charles Doering, Department of Mathematics, University of Michigan Twist & Shout: Enstrophy generation, regularity, and uniqueness of solutions to the 3D Navier-Stokes equations Abstract: It is still not known whether solutions to the 3D Navier-Stokes equations for incompressible flows in a finite periodic box can become singular in finite time. Indeed, this question is the subject of one of the $1M Clay Prize problems. It is known that a solution remains smooth as long as the enstrophy, i.e., the mean-square vorticity, is finite. The generation rate of enstrophy is given by a functional that can be bounded using elementary functional estimates, and those estimates establish short-time regularity but do not rule out finite-time singularities in the solutions. In this work we formulate and solve the variational problem for the maximal growth rate of enstrophy and display flows that generate enstrophy at the greatest possible rate. Implications for questions of regularity or singularity in solutions of the 3D Navier-Stokes equations are discussed. This is joint work with Lu Lu, published in Indiana University Mathematics Journal Vol. 57, pp. 2693-2727 (2008).
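In standard notation (a reconstruction for the reader, not quoted from the paper), the enstrophy of a velocity field u with vorticity ω = ∇ × u on the periodic box is

```latex
\mathcal{E}(t) \;=\; \int_{[0,L]^3} \lvert \boldsymbol{\omega}(\mathbf{x},t) \rvert^{2} \, d\mathbf{x},
\qquad \boldsymbol{\omega} \;=\; \nabla \times \mathbf{u},
```

and the regularity criterion cited in the abstract says the solution stays smooth on any time interval on which \(\mathcal{E}(t)\) remains finite.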

Friday, 3:15 PM May 22, 2009 
Natasha Speer, University of Maine Teacher Knowledge and the Complexity of Orchestrating Discussions in a Differential Equations Classroom Abstract: The literature on mathematics education reform contains many examples of the valuable learning opportunities created for students by teachers who are masterful at orchestrating discussions. Over the past several decades, however, the research community has also amassed examples of how well-intentioned, otherwise capable, teachers can sometimes lead students in discussions that fail to provide ample opportunities to learn the intended mathematical ideas. In an effort to better understand factors that influence teachers' capacities to create rich learning opportunities via whole-class discussion, in this research project a Ph.D. mathematician with 17 years of teaching experience was studied as he used an inquiry-oriented curriculum for an undergraduate differential equations course for the first time. Classroom episodes when discussions fell short of their intended mathematical objectives were analyzed, and the types of teaching-related knowledge that might have enabled him to be more successful in orchestrating the discussions were examined. Findings will be presented along with implications for undergraduate mathematics instruction and the professional development of college mathematics instructors.

Friday, 3:15 PM May 15, 2009 
Jiming Peng, University of Illinois at Urbana-Champaign Matrix Splitting: A Journey from Linear Equations to Discrete Optimization Abstract: The matrix splitting technique has provided a powerful tool for solving classes of problems ranging from linear equation systems to linear complementarity problems. In this talk, we discuss how the idea of matrix splitting can be used to attack a class of extremely challenging discrete optimization problems, the so-called quadratic assignment problem (QAP). The talk starts with a brief introduction to matrix splitting and a few generic optimization models, followed by a short discussion of the class of QAPs and the computational challenge in solving this class of problems. Then we propose a new relaxation model for QAPs based on various matrix splitting schemes. A new mechanism to construct valid cuts that can further improve the relaxation model will be introduced. Preliminary experimental results based on the new relaxation models will be reported. The talk is based on joint work with my student X. Li and with H. Mittelmann (ASU), supported by NSF and AFOSR.
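As background for the opening of the talk, the classical splitting iteration for a linear system can be sketched as follows (a generic Jacobi example for illustration, not the QAP relaxation itself):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Classical matrix splitting: write A = M - N with M easy to invert,
    then iterate x_{k+1} = M^{-1} (N x_k + b).  Jacobi takes M = diag(A)."""
    M = np.diag(np.diag(A))
    N = M - A
    Minv = np.linalg.inv(M)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = Minv @ (N @ x + b)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant => convergence
b = np.array([1.0, 2.0])
x = jacobi(A, b)                          # approaches np.linalg.solve(A, b)
```

The iteration converges because the spectral radius of M⁻¹N is below one for diagonally dominant A; the talk's theme is how the freedom in choosing the splitting A = M − N can be exploited in relaxations of discrete problems.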

Friday, 3:15 PM May 8, 2009 
Sergei Ovchinnikov, Mathematics Department, San Francisco State University What is a piecewise linear function? Abstract: Piecewise linear functions (PL-functions) are important, albeit auxiliary, tools in many branches of mathematics and its applications. However, there are little-known properties of these functions that are hard to find in the standard literature. Even definitions of PL-functions differ from one source to another. In this talk we present a 'micro' theory of PL-functions and discuss its connections with problems in analysis, combinatorics, electrical engineering, and genetic networks. The talk will be accessible to undergraduate and graduate students.
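One example of the kind of little-known property the abstract alludes to is the max-min representation: every continuous PL-function can be written using only max, min, and its finitely many linear pieces. A minimal one-dimensional sketch (an illustration chosen here, not an example from the talk):

```python
# A continuous piecewise linear function with linear pieces g1(x) = -x,
# g2(x) = x, g3(x) = 1, written in min/max form:
#   f(x) = min(max(-x, x), 1) = min(|x|, 1).
def f(x):
    return min(max(-x, x), 1.0)

# The formula recovers each linear piece on its own region of the line.
values = [f(-3.0), f(-0.5), f(0.0), f(0.5), f(3.0)]
```

No case analysis or region bookkeeping is needed: the min/max expression encodes the breakpoints implicitly, which is what makes such representations useful in, e.g., electrical engineering.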

Friday, 11:30 AM May 1, 2009 
Chris Rasmussen, Department of Mathematics, San Diego State University The Inquiry Oriented Differential Equations Project: Addressing Challenges Facing Undergraduate Mathematics Education Abstract: Undergraduate mathematics education today faces a number of new challenges and difficulties. One way to address these challenges is to build on promising theoretical advances and instructional approaches, even those not originally developed with undergraduate mathematics in mind. The Inquiry Oriented Differential Equations Project (IODE) is one such effort, which can serve as a model for other undergraduate course innovations. In this presentation I describe central characteristics of the IODE approach, report on results of a comparison study, and detail the emergence of a bifurcation diagram, a surprising and illustrative example of student reinvention. I use the bifurcation diagram reinvention example to develop the notion of brokering, which speaks to the unique role of the instructor in student reinvention of significant mathematical ideas. The notion of brokering, which generalizes beyond differential equations, highlights how teaching and learning mathematics is a cultural practice, one that is mediated by and coordinated with the broader mathematics community, the local classroom community, and the small groups that comprise the classroom community.

Friday, 3:15 PM April 17, 2009 
Ming Yuan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Tech Sparse Gaussian Graphical Model Estimation using l1 Regularization Abstract: We propose penalized likelihood methods for estimating the concentration matrix in the Gaussian graphical model. The methods lead to a sparse and shrinkage estimator of the concentration matrix that is positive definite, and thus conduct model selection and estimation simultaneously. The implementation of the methods is nontrivial because of the positive definite constraint on the concentration matrix, but we show that the computation can be done effectively by taking advantage of recent advances in convex optimization. Simulations and real examples demonstrate the competitive performance of the new methods.
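The penalized likelihood in question is typically written as follows (standard notation supplied here for the reader; the exact treatment of the diagonal penalty varies across the proposed methods):

```latex
\hat{\Theta} \;=\; \arg\max_{\Theta \succ 0}\;
\Big\{ \log\det\Theta \;-\; \operatorname{tr}(S\Theta)
\;-\; \lambda \sum_{j \neq k} \lvert \theta_{jk} \rvert \Big\},
```

where \(S\) is the sample covariance matrix, \(\Theta\) the concentration (inverse covariance) matrix, and \(\lambda > 0\) the regularization parameter. A zero entry \(\theta_{jk} = 0\) corresponds to conditional independence of variables \(j\) and \(k\), which is how the estimator performs model selection and estimation simultaneously.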

Friday, 3:15 PM April 10, 2009 
Yuedong Wang, Department of Statistics & Applied Probability, UC Santa Barbara Nonparametric Nonlinear Regression Models Abstract: Almost all of the current nonparametric regression methods, such as smoothing splines, generalized additive models and varying coefficients models, assume a linear relationship when nonparametric functions are regarded as parameters. In this talk we present a general class of nonparametric nonlinear models that allow nonparametric functions to act nonlinearly. They arise in many fields as either theoretical or empirical models. We propose new estimation methods based on an extension of the Gauss-Newton method to infinite dimensional spaces and the backfitting procedure. We extend the generalized cross validation and the generalized maximum likelihood methods to estimate smoothing parameters. Connections between nonlinear nonparametric models and nonlinear mixed effects models are established. Approximate Bayesian confidence intervals are derived for inference. We will also present a user-friendly R function for fitting these models. The methods will be illustrated using two real data examples.

Friday, 3:15 PM April 3, 2009 
Andrew Rich, Department of Mathematics and Computer Science, Manchester College Binary Representations, Bijections, and Minkowski's Question Mark Abstract: Count the number of base 2 representations of a natural number, allowing digits 0, 1, and 2. This sequence has the surprising property that every positive rational occurs exactly once as a ratio of consecutive terms. The bijection between integers and rationals is explained using the Euclidean algorithm and continued fraction expansions. Flipping the bijection "across the decimal point" leads to an amazing continuous function with wonderful settheoretic, analytic, and fractal properties, first discovered by Minkowski.
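The counting sequence the abstract describes (hyperbinary representations, closely related to Stern's diatomic sequence and the Calkin-Wilf enumeration of the rationals) satisfies a short recurrence; a sketch:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def b(n):
    """Number of ways to write n as a sum of powers of 2, each used at
    most twice (base-2 representations with digits 0, 1, 2)."""
    if n == 0:
        return 1
    if n % 2:                          # odd: the units digit must be 1
        return b(n // 2)
    return b(n // 2) + b(n // 2 - 1)   # even: units digit is 0 or 2

# Consecutive ratios b(n)/b(n+1) run through the positive rationals,
# each appearing exactly once: 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, ...
ratios = [Fraction(b(n), b(n + 1)) for n in range(8)]
```

For example, b(4) = 3 because 4 = 4 = 2 + 2 = 2 + 1 + 1, and the surprising property quoted in the abstract is that the list of ratios above, continued forever, hits every positive rational exactly once.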

Friday, 3:15 PM March 20, 2009 
Judith Grabiner, Flora Sanborn Pitzer Professor of Mathematics, Pitzer College "It's All for the Best: Optimization in the History of Science" Abstract: Many problems, from optics to economics, can be solved mathematically by finding the highest, the quickest, the shortest  the best of something. This has been true from antiquity to the present. Why did people start looking for such explanations, and how did we come to conclude that we could productively do so? Scientific examples will include problems from ancient optics, and more modern questions in optics and classical mechanics, drawing on ideas from Newton's and Leibniz's calculus and from the EulerLagrange calculus of variations. A surprising role will also be played by philosophical and theological ideas, including those of Leibniz, Maupertuis, MacLaurin, and Adam Smith.

Friday, 3:15 PM March 13, 2009 
Keith Weber, Graduate School of Education, Department of Learning and Teaching, Rutgers University Undergraduates' comprehension of mathematical arguments Abstract: In reform-oriented mathematics classrooms, students are expected to learn from hearing the arguments of their classmates. In advanced mathematics classrooms, mathematics majors are expected to learn from observing their professors present proofs and from reading proofs in their textbooks. Both modes of instruction rest on the specific assumption that students are able to learn from the arguments of others. Unfortunately, research suggests that most students are not convinced by, and do not comprehend, the arguments that they observe. In this talk, I will present two studies that investigate the reasoning processes that mathematics majors use when they read mathematical arguments. The data from these studies will be used for three purposes: (1) to provide data on what competences mathematics majors have and lack when reading arguments, (2) to explain why mathematics majors have difficulty with comprehending and evaluating mathematical arguments, and (3) to delineate strategies that successful students use to learn when reading mathematical text.

Monday, 2:00 PM March 9, 2009 
Barbara Edwards, Department of Mathematics, Oregon State University Revitalizing College Algebra: Reconsidering Math 95, Math 111, Math 112

Friday, 3:15 PM March 6, 2009 
Joan Garfield, College of Education and Human Development, Educational Psychology, University of Minnesota Change Agents in Statistics Education Abstract: This presentation will describe the newly emerging field of statistics education and outline some of its current challenges. I will present an argument for changes that are needed in the teaching of statistics, and share information on several projects aimed at addressing these challenges and needed changes.

Friday, 3:15 PM February 27, 2009 
Erning Li, Department of Statistics, Texas A&M University Joint Models for a Primary Endpoint and Longitudinal Covariate Processes Abstract: The relationship between a primary endpoint and longitudinal processes is often of interest in medical and public health research. Joint models that represent the association through shared dependence of the primary and longitudinal data on random effects are increasingly popular. Implementation by imputing subject-specific effects from individual regression fits yields biased inference, and several methods for reducing this bias have been proposed. These methods require a parametric (normality) assumption on the random effects, which may be unrealistic. Moreover, the existing methods routinely assume independent within-subject measurement errors in the longitudinal covariate processes. We propose conditional estimation procedures that require neither a distributional nor a covariance-structure assumption on the random effects, nor an independence assumption on within-subject measurement errors. The new procedures readily cover scenarios that have multivariate longitudinal covariate processes and can be calculated using available software. We evaluate the performance of the new estimators through simulations and analysis of data from a hypertension study. Alternatively, we consider a semiparametric joint model that makes only mild assumptions on the random effects distribution and develop likelihood-based inference on the association and distribution. The estimated distribution can reveal interesting population features, as we demonstrate for a study of the association between longitudinal hormone levels and bone status in perimenopausal women. An extension to nonparametric joint models will be discussed.

Friday, 3:15 PM February 20, 2009 
Randy Philipp, Center for Research in Mathematics & Science Education, San Diego State University Focusing on Children's Mathematical Thinking to Support the Development of Prospective and Practicing Elementary School Teachers Abstract: The knowledge required for teaching elementary school mathematics proficiently to students is extensive and complex and cannot be mastered during teacher preparation. As such, teachers must learn to learn from their practices. One particularly promising approach to teacher preparation and professional development, focused on children's mathematical thinking, will be described, and results of two large-scale studies will be shared. In one study, prospective teachers who learned about children's mathematical thinking while learning mathematics learned more mathematics and developed richer beliefs than those who did not learn about children's thinking. In the other study, professional development focused on children's mathematical thinking led to teachers developing deeper mathematical understanding, richer beliefs, and more sophisticated ways of noticing than experienced teachers who had not learned about children's mathematical thinking.

Friday, 3:15 PM February 13, 2009 
Ricardo Todling, Global Modeling and Assimilation Office, NASA/GSFC, Greenbelt, Maryland A Brief Overview of the GEOS-5 4D-Var and its Adjoint Tools Abstract: The fifth generation of the Goddard Earth Observing System (GEOS-5) Atmospheric Data Assimilation System (ADAS) is a three-dimensional variational (3D-Var) system that uses the Gridpoint Statistical Interpolation (GSI) analysis developed in collaboration with NOAA/NCEP, and a general circulation model (GCM) developed at Goddard that combines the finite-volume hydrodynamics of GEOS-4 wrapped in the Earth System Modeling Framework (ESMF) and physical packages tuned to provide a reliable hydrological cycle for the integration of the Modern Era Retrospective-analysis for Research and Applications (MERRA). The MERRA system is currently operational, and the next-generation GEOS-ADAS is under intense development, with a prototype now available and undergoing scientific evaluation. The prototype system adds the time dimension to GEOS-ADAS, turning it into a 4D-Var system. It implements various modifications to GSI such as time-binning the observations, an observation-operator capability, an interface to the tangent linear and adjoint models of the GCM, and a Lanczos-based conjugate gradient minimization method. The 4D-Var development also implements the adjoint of GSI, an effective tool for studying analysis sensitivity to the observations, background fields, and the underlying error statistics. The prototype system brings together the adjoints of both GSI and the GCM to add the capability to perform observation impact studies with GEOS-ADAS. This presentation discusses the progress of the GMAO 4D-Var development. It also gives an introduction to some of the adjoint-based tools in GEOS-ADAS, and shows preliminary results of various applications.

Friday, 3:15 PM February 6, 2009 
Mario Livio, Senior Astronomer, Space Telescope Science Institute Is God a Mathematician? Abstract: Nobel Laureate Eugene Wigner once wondered about "the unreasonable effectiveness of mathematics" in the formulation of the laws of nature. In this talk, I consider why mathematics is as powerful as it is. From ancient times to the present, scientists and philosophers have marveled at how such a seemingly abstract discipline could so perfectly explain the natural world. More than that, mathematics has often made predictions, for example, about subatomic particles or cosmic phenomena that were unknown at the time, but later were proven to be true. Is mathematics ultimately invented or discovered? If, as Einstein insisted, mathematics is "a product of human thought that is independent of experience," how can it so accurately describe and even predict the world around us? Mathematicians often insist that their work has no practical effect. The British mathematician G. H. Hardy went so far as to describe his own work this way: "No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world." He was wrong. The Hardy-Weinberg law allows population geneticists to predict how genes are transmitted from one generation to the next, and Hardy's work on the theory of numbers found unexpected implications in the development of codes.
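The Hardy-Weinberg law mentioned in the abstract is itself a one-line computation; a minimal sketch (the allele frequency p = 0.7 below is purely illustrative):

```python
def hardy_weinberg(p):
    """Expected genotype frequencies (AA, Aa, aa) under random mating,
    given the frequency p of allele A (Hardy-Weinberg equilibrium)."""
    q = 1.0 - p
    return p * p, 2.0 * p * q, q * q

# With p = 0.7: roughly 49% AA, 42% Aa, 9% aa, summing to 1.
print(hardy_weinberg(0.7))
```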

Friday, 3:15 PM January 30, 2009 
Michail Myagkov, Department of Political Science, University of Oregon Experimental Design and Methods in Behavioral Economics Abstract: The main topic of the talk will be the study of the form and shape of the "human utility function". For many years economists assumed (and used in their models) that individuals make decisions based on a concave utility function that represented such important features as, for example, decreasing returns to scale and risk aversion. It was noted, however, that many human economic decisions do not follow the standard models. For the most part, the failures of classic microeconomic theory were attributed to emotions and feelings that people have when making decisions. A whole variety of very important discoveries about how humans make economic decisions have been made in recent years. A Nobel Prize was recently awarded for the discovery of Prospect Theory, a theory that introduces the issue of "prospects" or "framing" into decision making. The focus of the talk will be twofold. First, I will talk in general terms (and show examples) about why the classic utility function is not working, and compare it to the function proposed by prospect theory. Second, I will talk about my recent project (sponsored by the NSF) that focuses on the "relative" effects of utilities, and the role of sociality in that process. In other words, humans would not mind getting $1 if they play against a computer that wins $100. But humans would be very unhappy to get a dollar if the other human they are playing gets that same $100. I will talk about challenges in modeling such utilities, and the experimental design that I am using.
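The asymmetry the abstract describes is captured by the prospect-theory value function: concave over gains, convex over losses, and steeper for losses. A sketch using Kahneman and Tversky's commonly cited parameter estimates (α = β = 0.88, λ = 2.25; the exact form and parameters are illustrative assumptions, not part of the talk):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference
    point: x**alpha for gains, -lam * (-x)**beta for losses. With
    lam > 1, losses loom larger than equal-sized gains."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: a $100 loss outweighs a $100 gain in magnitude.
print(abs(prospect_value(-100)) > prospect_value(100))
```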

Friday, 3:15 PM January 23, 2009 
Juha Pohjanpelto, Department of Mathematics, Oregon State University Group Actions, Moving Frames, and Variational Principles Abstract: Continuous pseudogroups appear as the infinite dimensional counterparts of local Lie groups of transformations in various physical and geometrical contexts, including gauge theories, Hamiltonian mechanics, symplectic and Poisson geometries, conformal field theory, symmetry groups of various partial differential equations, such as the Navier-Stokes and Kadomtsev-Petviashvili equations of fluid mechanics and plasma physics, image recognition, and geometric numerical integration. In this talk I will give an accessible overview of my recent joint work with Peter Olver on extending Cartan's classical moving frames method to infinite dimensional pseudogroups. As in the finite dimensional case, moving frames can be employed to produce complete sets of differential invariants for a pseudogroup action and to analyze the algebraic structure of the invariants, thus providing an effective tool for analyzing and constructing objects invariant under the group action. As an additional application, the moving frames method yields a novel reduction process for finding particular solutions to systems of differential equations.

Friday, 3:15 PM January 16, 2009 
Tom Fielden, Portland State University The California Market Analyzer: Modeling policy implications for an emerging Carbon Emissions Trading System Abstract: Since the passage of California Assembly Bill 32, California has set a goal of reducing its greenhouse gas emissions to 1990 levels by the year 2020. The California Air Resources Board (CARB) is tasked with designing a regulatory system to reach this goal. Many companies across a range of industries will be subject to the new regulations. A "Carbon Market" is expected, whereby companies in a better position to reduce their emissions can sell to other regulated entities reductions achieved below their regulated levels as "allowances" in a managed financial market. The California Market Analyzer (CAMA) is the first-generation answer to the question "Among the policies that achieve the overall goal, which one is least costly to California?" There are many policy components on the table, and there is a significant amount of publicly available data on the costs and availability of emissions reduction opportunities. As part of my Ph.D. work at Portland State, I worked with EcoSecurities to produce CAMA at the request of PG&E of California for use by CARB to support their policy decisions. I am also a member of the PSU team that is constructing the second generation of CAMA. As a first-generation tool, CAMA assumes perfect information and uses the World War II-era mathematical tool of Linear Programming to combine policy options, costs and availability and report the least-cost portfolio. Linear Programming is a well-understood mathematical tool that sometimes produces non-intuitive results. Policy makers, as well as regulators and participants, benefit from the insight such a seemingly simple tool provides.
In this talk I will demonstrate the California Market Analyzer tool itself and discuss the Portland State team's plan for a second-generation tool, incorporating contemporary mathematical techniques such as Convex Optimization, Nonparametric Statistics, and Game-Theoretic modeling to address risk management and decision support.
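The least-cost-portfolio idea can be illustrated with a toy abatement-cost curve. For a single cap constraint the linear program reduces to taking the cheapest reductions first; the option numbers below are invented for illustration and are not CAMA data:

```python
def least_cost_portfolio(options, target):
    """Greedy selection over an abatement-cost curve: cheapest reductions
    first. options is a list of (cost_per_ton, tons_available). For a
    single aggregate cap constraint this greedy order coincides with the
    linear-programming optimum."""
    total_cost, remaining, chosen = 0.0, target, []
    for cost, avail in sorted(options):
        if remaining <= 0:
            break
        take = min(avail, remaining)  # buy as much of the cheap option as needed
        chosen.append((cost, take))
        total_cost += cost * take
        remaining -= take
    if remaining > 0:
        raise ValueError("target reduction is not achievable")
    return total_cost, chosen

options = [(50.0, 10.0), (10.0, 20.0), (30.0, 15.0)]  # ($/ton, tons available)
cost, portfolio = least_cost_portfolio(options, target=30.0)
```

A full LP formulation (e.g. with per-policy interactions) needs a real solver, but this captures why the least-cost answer follows the marginal abatement cost curve.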

Friday, 3:15 PM January 9, 2009 
Francisco Samaniego, Department of Statistics, University of California, Davis On Conjugacy, Consensus and Self Consistency in Bayesian Inference Abstract: Until fairly recently, conjugate prior distributions served as essential tools in the implementation of Bayesian inference. Today, they occupy a much less prominent place in Bayesian theory and practice. We discuss the reasons for this, and argue that the devaluation of the role of conjugacy may be somewhat premature. To assist in this argument, we introduce a Bayesian version of the notion of “self consistency” in the context of point estimation, relative to squared error loss, of the parameter θ of an exponential family of distributions. In this setting, a prior distribution π with mean θ* (or the corresponding Bayes estimator) is said to be self consistent (SC) if the equation E(θ | θ̂ = θ*) = θ* is satisfied, where θ̂ is assumed to be a sufficient and unbiased estimator of θ. The SC condition simply states that if your experimental outcome agrees with your prior opinion about the parameter, then the experiment should not change your opinion about it. Surprisingly, many prior distributions do not enjoy this property. We will study self consistency and its extended form (the estimator T(θ̂) of θ is generalized SC relative to a prior π with mean θ* if T(θ*) = θ*). The problem of estimating θ based on the prior opinion received from k experts is examined, and the properties of a particular class of “consensus estimators” are studied. Conditions ensuring generalized self-consistency and an important convexity property of these estimators are identified. We conclude by applying Samaniego and Reneau's (JASA, 1994) results to generalized self-consistent consensus estimators, characterizing the circumstances in which such estimators outperform classical procedures.
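The self-consistency condition can be checked numerically in the textbook conjugate case of a normal mean with a normal prior, where the Bayes estimator is a precision-weighted average; when the sample mean equals the prior mean θ*, the estimate stays at θ*. This is a sketch of that one check, not the talk's general exponential-family treatment:

```python
def posterior_mean(xbar, n, sigma2, mu0, tau2):
    """Posterior mean of a normal mean theta, given xbar ~ N(theta, sigma2/n)
    and a conjugate prior theta ~ N(mu0, tau2): a precision-weighted
    average of the data mean and the prior mean."""
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the data
    return w * xbar + (1.0 - w) * mu0

# Data that agree with the prior mean leave the estimate unchanged (SC):
print(posterior_mean(5.0, 10, 4.0, 5.0, 2.0))
```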

Friday, 3:15 PM December 5, 2008 
Brian Greer, Graduate School of Education, Portland State University The Isis problem as a probe for understanding students' adaptive expertise and ideas about proof Abstract: The Isis problem, which has a link with the Isis cult of ancient Egypt, asks: “Find which rectangles with sides of integral length (in some unit) have area and perimeter (numerically) equal, and prove the result.” (You are invited to tackle this (simple) problem using as many different forms of argument as you can find.) Since the solution requires minimal technical mathematics, the problem is accessible to a wide range of students. Further, it is notable for the variety of proofs (empirically grounded, algebraic, geometrical) using different forms of argument, and their associated representations, and it thus provides an instrument for probing students' ideas about proof, and the interplay between routine and adaptive expertise. Results from small-scale studies of students in Belgium and Portland will be reported.
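The abstract's invitation is easy to take up computationally; a brute-force search (one of the "empirically grounded" forms of argument) confirms the two solutions before one attempts a proof:

```python
def isis_solutions(limit=100):
    """Rectangles with integer sides a <= b whose area equals perimeter:
    a*b == 2*(a + b). A finite search suffices: since a <= b, the
    equation gives a*b = 2(a+b) <= 4b, so a <= 4."""
    return [(a, b) for a in range(1, limit + 1)
            for b in range(a, limit + 1) if a * b == 2 * (a + b)]

print(isis_solutions())  # [(3, 6), (4, 4)]
```

The 3-by-6 and 4-by-4 rectangles both have area and perimeter equal (18 and 16 respectively); an algebraic proof rewrites the condition as (a - 2)(b - 2) = 4.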

Friday, 3:15 PM November 21, 2008 
Ying Wei, Department of Biostatistics, Mailman School of Public Health, Columbia University An approach to multivariate covariate-dependent quantile contours with application Abstract: Multivariate quantile contours are useful in a number of applications and have been studied in different contexts. However, no easy solutions exist when dynamic and conditional quantile contours are needed without strong distributional assumptions. We propose a new form of bivariate quantile contours and a two-stage estimation procedure to take the time effect into account. The proposed procedure relies on quantile regression for longitudinal data, and is flexible enough to include potentially important covariates as needed. In addition, we propose a visual model assessment tool, and discuss a practical guideline for model selection. The performance of the proposed methodology is demonstrated by a simulation study, as well as an application to joint height-weight screening of young children in the United States. We construct bivariate growth charts by a nested sequence of age-dependent and covariate-varying quantile contours of height and weight, and use them to locate an individual subject's percentile rank with respect to a reference population. Our work shows that the proposed method is valuable for pediatric growth monitoring and provides more informative readings than the conventional approach based on univariate growth charts.

Friday, 3:15 PM November 7, 2008 
Gunnar Carlsson, Department of Mathematics, Stanford University Topology and Data Abstract: As most people are aware, all branches of science and engineering are producing enormous amounts of data, much faster than it can be analyzed. In addition, the data comes in many different forms, such as genomic sequence data, network data, binary data, as well as Euclidean data. Often a notion of distance can be constructed, which is very useful in understanding the data, but the notions of distance are not reliable and are not backed up by solid theoretical models. This means that one needs methods of data analysis which are robust to small changes of distance, and which are flexible in the sense that they can be adapted to the various different kinds of data in a straightforward way. It turns out that topology can often supply this kind of method, and in this talk I plan to discuss some of these methods, the underlying topology from which they arise, as well as some examples of applications of the methods.

Friday, 3:15 PM October 31, 2008 
Ke Wu, Department of Mathematics, California State University, Fresno Minimum Distance Estimation in the Two-Sample Scale Problem Under Different Censoring Models of Survival Data Abstract: In survival analysis, one is often interested in studying the difference between the effects of two treatments in a clinical trial, leading to the two-sample problem. In the two-sample scale model a specific treatment extends the life of the subject, in the sense that the lifetime is multiplied by a scale parameter. A large number of papers have been written on the estimation of the scale parameter with uncensored and censored survival data. During this talk we will describe some Cramér-von Mises type minimum distance estimators of the scale parameter when the data are under different censoring models, including the general right censoring model, the Koziol-Green model, and the partial Koziol-Green model. We will present the large-sample properties of the estimators and study the efficiencies of the estimators in the two-sample scale problem under these different censoring models through theoretical results and simulation study results.

Friday, 3:15 PM October 10, 2008 
Oleg Roderick, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL Uncertainty Quantification: Improved Stochastic Finite Element Approach Abstract: In our work, we introduce a stochastic finite element-based approach to describing the uncertainty of a complex system of differential-algebraic equations with random inputs. For our test system, we take a 3-dimensional steady-state model of heat distribution in the core of a nuclear reactor. The heat exchange between the fuel and the coolant depends on thermodynamical properties of the involved materials; the estimation of the temperature dependence of these properties includes experimental error. The problem of uncertainty quantification may be solved by sampling methods, or through the creation of a surrogate model (a valid simplified version of the system). The choices for representing the output of the system include linear approximation, or a projection onto a set of interpolating functions. A generic stochastic finite element method (SFEM) approach uses a complete set of orthogonal polynomials (a polynomial chaos) of degrees up to 3-5. Given statistical information on the inputs, the SFEM model provides an explicit description of the distribution of the output. If the input uncertainty structure is not provided, the surrogate model can still be used to estimate the range and variance of the output. One disadvantage of the approach is the large dimension of the model, requiring many full integrations of the system for interpolation. We construct the surrogate model as a goal-oriented projection onto an incomplete space of interpolating polynomials, determine the coordinates of the projection by collocation, and use derivative information to significantly reduce the number of required collocation sample points. The basis may be trimmed to linear functions in some variables, and extended to high-order polynomials in the others.
Derivatives of the output with respect to random parameters are obtained using an efficient adjoint method with elements of automatic differentiation; the relative magnitudes of the derivatives are also used to decide on the importance of the variables. The resulting model is more computationally efficient than random sampling or generic SFEM, and has significantly greater precision than linear models. Currently, we work on applying the analysis to an extended model of the reactor core, with additional uncertainties coming from the description of neutron interaction, non-uniform flow of the coolant, and structural deformation of the core elements. We will investigate possibilities to further improve the approach through the optimal choice of collocation points and a more sophisticated sensitivity analysis resulting in an optimal polynomial basis. We are open to suggestions of future collaboration on the mathematical and applied aspects of the study.

Friday, 3:15 PM October 3, 2008 
Jianhui Zhou, Department of Statistics, University of Virginia Feature Identification in Functional Linear Regression Model Abstract: We propose a shrinkage method to estimate the coefficient function and identify its null interval in a functional linear regression model. Our proposal has two steps. In the first step, the Dantzig selector is employed to obtain an initial estimate of the null interval. In the second step, the null interval is refined and the coefficient function is estimated simultaneously by the proposed group SCAD method. Theoretically, we show that our estimator enjoys the desired oracle property: it identifies the null interval with probability tending to 1, and estimates the coefficient function on the non-null interval with asymptotic normality. A simulation study motivated by real data is conducted to show the performance of the proposed method.

Friday, 3:15 PM June 6, 2008 
David Tyler, Department of Statistics, Rutgers University Exploring Multivariate Data via Multiple Scatter Matrices Abstract: When sampling from a multivariate normal distribution, the sample mean vector and sample variance-covariance matrix are a sufficient summary of the data set. To protect against nonnormality, and in particular against longer-tailed distributions and outliers, one can replace the sample mean and covariance matrix with robust estimates of multivariate location and scatter. Outliers can often be detected by examining the corresponding robust Mahalanobis distances. Such an approach is appropriate if the bulk of the data arises from a multivariate normal distribution, or more generally from an elliptically symmetric distribution. However, if the data arises otherwise, then different location/scatter estimates do not estimate the same population quantities, but rather reflect different aspects of the underlying distribution. This suggests that comparing different estimates of multivariate location/scatter may reveal interesting structures in the data, ones which may not be apparent from a plot of robust Mahalanobis distances. In this talk, new multivariate methods based upon the comparison of different estimates of multivariate scatter are introduced. These methods are based on the eigenvalue-eigenvector decomposition of one estimate of scatter relative to another. An important property of this decomposition is that the corresponding eigenvectors generate an affine invariant coordinate system (ICS) for the multivariate data. Consequently, this leads to the development of a wide class of affine equivariant coordinate-wise multivariate methods. In particular, by plotting the data with respect to this new invariant coordinate system, various data structures can be revealed.
Under certain independent component analysis models, which are currently popular within computer science and engineering disciplines, the invariant coordinates correspond to the independent components. When the data comes from a mixture of elliptical distributions, a subset of the invariant coordinates corresponds to Fisher's linear discriminant subspace, even though the class identification of the data points is unknown. Several examples are given to illustrate the utility of the proposed methods.

Friday, 3:15 PM May 30, 2008 
Joris Vankerschaver, Control and Dynamical Systems, California Institute of Technology The geometry behind solid bodies interacting with perfect fluids Abstract: Ever since the pioneering work of Arnol'd in the 1960s, it has been known that the dynamics of a perfect fluid can be described geometrically by an ordinary differential equation on the space of all diffeomorphisms of a given manifold. In doing so, the tools of modern differential geometry can be used to gain insight into numerous aspects of fluid dynamics, and this has led to a number of remarkable insights, ranging from the short-time existence of solutions of the Euler equations to an explanation of the essential unpredictability of weather forecasts. In this talk, I will give an overview of this interplay between geometry and fluid dynamics; as an application I will focus on our recent work on the dynamics of rigid bodies immersed in perfect fluid flows. It turns out that the geometric formulation sheds new light on a number of classical concepts, such as the Kutta-Joukowski force on a rigid body with circulation, and suggests interesting new analogies with other areas of physics. This is joint work with E. Kanso and J. E. Marsden.

Friday, 3:15 PM May 23, 2008 
Martin Golubitsky, Department of Mathematics, University of Houston Symmetry Breaking and Synchrony Breaking Abstract: A coupled cell system is a network of interacting dynamical systems. Coupled cell models assume that the output from each cell is important and that signals from two or more cells can be compared so that patterns of synchrony can emerge. We ask: Which part of the qualitative dynamics observed in coupled cells is the product of network architecture and which part depends on the specific equations? In our theory, local network symmetries replace symmetry as a way of organizing network dynamics, and synchrony breaking replaces symmetry breaking as a basic way in which transitions to complicated dynamics occur. Background on symmetry breaking and some of the more interesting examples will be presented.

Friday, 3:15 PM May 16, 2008 
2008 Maseeh Lecture: Mathematical Problems of Liquid Crystals Abstract: Most mathematical work on nematic liquid crystals has been in the context of the Oseen-Frank theory, which models the mean orientation of the constituent rod-like molecules by means of a director field consisting of unit vectors. However, nowadays most physicists use the Landau-de Gennes theory, whose basic variable is a tensor-valued order parameter. Unlike the Oseen-Frank theory, that of Landau-de Gennes does not assign an unphysical orientation to the director field. The lecture will describe the two theories and the relationship between them, as well as other interesting mathematical problems related to the Landau-de Gennes theory.

Friday, 3:15 PM May 9, 2008 
Lyndia Brumback, Department of Biostatistics, University of Washington Using the ROC curve for gauging treatment effects in clinical trials Abstract: Nonparametric procedures such as the Wilcoxon rank-sum test, or equivalently the Mann-Whitney test, are often used to analyze data from clinical trials. These procedures enable testing for treatment effect, but traditionally do not account for covariates. We adapt recently developed methods for receiver operating characteristic (ROC) curve regression analysis to extend the Mann-Whitney test to accommodate covariate adjustment and evaluation of effect modification. Our approach naturally extends use of the Mann-Whitney statistic in a fashion that is analogous to how linear models extend the t-test. We illustrate the methodology with data from clinical trials of a therapy for cystic fibrosis.
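The link between the Mann-Whitney statistic and the ROC curve is direct: U divided by the number of pairs is the empirical area under the ROC curve. A minimal sketch of the statistic itself (ties counted as 1/2; the sample data are invented):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U: count of pairs (x_i, y_j) with x_i > y_j, ties
    counted as 1/2. U / (len(x) * len(y)) is the empirical AUC, i.e.
    the probability a random x-observation exceeds a random y-observation."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

treated, control = [3, 4, 5], [1, 2, 3]
auc = mann_whitney_u(treated, control) / (len(treated) * len(control))
```

The covariate-adjusted extension in the talk works at the level of ROC regression models, not this raw statistic.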

Friday, 3:15 PM May 2, 2008 
Richard Canary, Department of Mathematics, University of Michigan The Classification of hyperbolic 3-manifolds Abstract: Recent developments have allowed for a complete classification, up to isometry, of hyperbolic 3-manifolds with finitely generated fundamental group. In this talk, we will begin by introducing hyperbolic space and discussing the relative merits of playing various sports there. We will then discuss hyperbolic 3-manifolds and survey the results which have led to their classification.

Friday, 3:15 PM April 25, 2008 
Zonghui Hu, Mathematical Statistician, National Institute of Allergy and Infectious Diseases Semiparametric two-sample changepoint model with application to HIV studies Abstract: A two-sample changepoint model is proposed to investigate the difference between two treatments or devices. Under this semiparametric approach, no assumptions are made about the underlying distributions of the measurements from the two treatments or devices, but a parametric link is assumed between the two. The parametric link contains the possible changepoint where the two distributions start to differ. We apply the maximum empirical likelihood for model estimation, and show the consistency of the changepoint estimator. An extended changepoint model is studied to handle censored data due to detection limits in HIV viral load assays. Permutation and bootstrap procedures are proposed to test the existence of a changepoint and the goodness-of-fit of the model. The performance of the semiparametric changepoint model is compared with that of parametric models in a simulation study. We provide two applications in HIV studies: one is a randomized placebo-controlled study to evaluate the effects of a recombinant glycoprotein 120 vaccine on HIV viral load; the other is a study to compare two types of tubes in handling plasma samples for viral load determination.

Friday, 3:15 PM April 18, 2008 
Héctor J. Sussmann, Department of Mathematics, Rutgers University http://www.math.rutgers.edu/~sussmann Multivalued generalized differentials, approximating multicones, and the Pontryagin Maximum Principle Abstract: We present two examples of notions of multivalued generalized differentials, and use them to construct two theories of "approximating multicones" to a set at a point, generalizing the classical theories of Boltyanskii approximating cones and the Clarke tangent cone. We explain how in each of these theories there is a set separation theorem, giving a necessary condition for two sets to be such that their intersection consists of a single point. We describe how these conditions can be applied to prove versions of the finite-dimensional Pontryagin Maximum Principle, and give reasons why these versions cannot be expected to be unified into a single general result.

Friday, 3:15 PM April 11, 2008 
Babette Brumback, Division of Biostatistics, Department of Health Services Research, Management, and Policy, University of Florida On Effect-Measure Modification: Relations Among Changes in the Relative Risk, Odds Ratio, and Risk Difference Abstract: It is well known that the presence or absence of effect-measure modification depends upon the chosen measure. What is perhaps more disconcerting is that a positive change in one measure may be accompanied by a negative change in another. Therefore, research demonstrating that an effect is 'stronger' in one population when compared to another, but based on only one measure, for example the odds ratio, may be difficult to interpret for researchers interested in another measure. This talk reports on an investigation of relations among changes in the relative risk, odds ratio, and risk difference from one stratum to another. Analytic and simulated results are presented concerning conditions under which the measures can and cannot change in opposite directions. For example, when exposure increases risk but all risks are less than 0.5, it is impossible for the relative risk and risk difference to change in the same direction but opposite to that of the odds ratio. Data-analytic and hypothetical examples are used for demonstration, including an examination of how the relationship between physical quality of life and body mass index differs across women and men, based on data from the 2005 Behavioral Risk Factor Surveillance System survey. This work is joint with Arthur Berg (Department of Statistics, Institute of Food and Agricultural Sciences, University of Florida).

Friday, 3:15 PM April 4, 2008 
Jing Shi, Department of Mathematics, Wayne State University Numerical Methods for Multidimensional Tunneling Transitions: A Quantum Transition State Theory Abstract: Tunneling describes the penetration of particles through potential barriers with negative kinetic energy. It arises from the quantum mechanical wave nature of matter and is classically forbidden. It plays a crucial yet subtle role in some physical and chemical processes, ranging from nuclear physics and condensed matter to chemical reactions. The past decade witnessed an explosive interest in tunneling in enzymatic reactions. The quantum nature of the motion often comes along with multidimensionality, namely the coupled motion of many degrees of freedom. The high dimensionality of the potential energy surface poses a great challenge both theoretically and numerically. Numerical simulation based on the Schrodinger equation is often prohibitively expensive. We present an efficient and accurate numerical method to compute the tunneling transition, based on the Feynman path integral formulation of quantum transition state theory. It utilizes the intrinsic connection between quantum mechanics and statistical physics and employs some recently developed tools in multiscale modeling. The application to hydrogen tunneling transfer in polyatomic molecules will be demonstrated.

Friday, 3:15 PM March 14, 2008 
Elena Erosheva, Department of Statistics, University of Washington Extended Grade of Membership mixture model Abstract: The class of Hierarchical Bayesian Mixed Membership Models has recently gained popularity in many fields. The mixed membership assumption in these models allows each object to belong to more than one class, group or subpopulation, in contrast to the full membership assumption in traditional mixture models. Using the example of the Grade of Membership (GoM) model, we first show the equivalence between mixed membership and classic mixture models. We then develop an extended GoM mixture model that combines components of mixed and full membership. We motivate and illustrate our developments with two examples, using data from laboratory tests for Chlamydia trachomatis and from survey questionnaires for functional disability.

Friday, 3:15 PM March 7, 2008 
Judith Lok, Department of Biostatistics, Harvard School of Public Health, Harvard University Optimal start of treatment based on time-dependent covariates Abstract: Using observational data, we estimate the effects of treatment regimes that start treatment once a covariate, X, drops below a certain level, x. This type of analysis is difficult to carry out using experimental data, because the number of possible values of x may be large. In addition, we estimate the optimal value of x, which maximizes the expected value of the outcome of interest within the class of treatment regimes studied in this paper. Our identifying assumption is that there are no unmeasured confounders. We illustrate our methods using the French Hospital Database on HIV. The best moment to start Highly Active Antiretroviral Therapy (HAART) in HIV-positive patients is unknown. It may be the case that withholding HAART in the beginning is beneficial, because it postpones the time at which patients develop drug resistance, and hence might improve the patients' long-term prognosis.

Friday, 3:15 PM February 22, 2008 
Adrian Dobra, Department of Statistics, University of Washington The mode-oriented stochastic search algorithm for log-linear models with conjugate priors Abstract: We describe a novel stochastic search algorithm for rapidly identifying regions of high posterior probability in the space of decomposable, graphical and hierarchical log-linear models. Our approach is based on the conjugate priors for log-linear parameters introduced in Massam, Liu and Dobra (2008). We discuss the computation of Bayes factors through Laplace approximations and the Bayesian Iterative Proportional Fitting algorithm for sampling model parameters. We also present a clustering algorithm for discrete data and develop regressions derived from log-linear models. We compare our model determination approach with similar results based on multivariate normal priors for log-linear parameters. The examples concern six-way, eight-way and sixteen-way contingency tables. This is joint work with Helene Massam from York University.

Friday, 3:15 PM February 15, 2008 
Eric Shea-Brown, Department of Applied Mathematics, University of Washington, Correlation transfer in spiking neurons and consequences for coding Abstract: How do neurons encode information about sensory inputs and motor outputs? In many cases, the averaged spiking rates of "tuned" neuronal populations carry these signals. We use linear response and asymptotic methods and an intuitive statistical model to show that, if pairs of neurons receive overlapping inputs, a tuning of spike rates implies a tuning of their correlations. Thus, correlations carry signals as well, a fact which can have strong consequences for the neural code. We illustrate these consequences via Fisher information, which quantifies the accuracy of encoding.
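The "overlapping inputs imply correlations" idea can be illustrated with a toy statistical model (our illustration, not the talk's spiking model): two responses share a common Gaussian input plus independent noise, so their correlation equals the shared fraction of variance — here 0.5:

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(0)
n = 20000
shared = [random.gauss(0, 1) for _ in range(n)]      # common input to both "neurons"
x = [c + random.gauss(0, 1) for c in shared]          # private noise for neuron 1
y = [c + random.gauss(0, 1) for c in shared]          # private noise for neuron 2
r = pearson(x, y)   # expected ~ 0.5 = shared variance / total variance
```

Scaling the shared input up or down tunes the correlation along with the responses, which is the qualitative point of the abstract.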

Wednesday, 2:00 PM February 13, 2008 
Michael Wolf, Department of Mathematics, Rice University, Minimal Desingularizations of Planes in Space Abstract: We prove that there is only one way to 'desingularize' the intersection of two planes in space and to obtain a periodic minimal surface as a result. The proof is mostly an exercise in, and an introduction to, the basic theory of spaces of Riemann surfaces: we translate the geometry of the possible minimal surface in space into a statement about a moduli space of flat structures on very simple Riemann surfaces, and then study deformation theory and degenerations in this moduli space to prove the result. I hope to make the talk accessible to graduate students.

Friday, 3:15 PM February 8, 2008 
James Demmel, Professor of Mathematics and Computer Science, University of California, Berkeley, http://www.cs.berkeley.edu/~demmel/ The Future of High Performance and High Accuracy Numerical Linear Algebra Abstract: The algorithms of numerical linear algebra for solving linear systems of equations, least squares problems, and finding eigenvalues and eigenvectors, are all undergoing significant changes. New asymptotically faster and numerically stable versions of all these algorithms have been developed, which have the same complexity as the fastest matrix multiplication algorithm that will ever exist (whether that matrix multiplication algorithm is stable or not!). But minimizing arithmetic operations is no longer the most important aspect of algorithm design on modern processors; minimizing data movement, either between processors over a network or between levels of a memory hierarchy, now matters more. It is possible to reorganize many classical algorithms to optimally reduce data movement; again, maintaining numerical stability is a critical challenge. Next, practical problems often have additional structure that can be exploited, e.g. Vandermonde matrices arising from polynomial interpolation problems depend on just n parameters, rather than having n^{2} independent entries. It is possible to systematically exploit such structures to develop algorithms that are arbitrarily more accurate than conventional algorithms that ignore such structure. Beyond these theoretical results, we survey our recent efforts to incorporate the best new algorithms in the widely used LAPACK and ScaLAPACK libraries.
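The Vandermonde remark — n parameters instead of n² entries — can be made concrete: Newton's divided-difference interpolation works directly on the n nodes and never forms the Vandermonde matrix. This is a generic structured-algorithm sketch (our example, not one of the speaker's algorithms):

```python
def newton_interp(xs, ys):
    """Interpolating polynomial via divided differences: O(n^2) arithmetic
    on the n node parameters, with no n-by-n Vandermonde matrix formed."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):                     # build divided-difference table in place
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])

    def p(x):                                 # Horner-style evaluation in Newton form
        acc = coef[-1]
        for i in range(n - 2, -1, -1):
            acc = acc * (x - xs[i]) + coef[i]
        return acc

    return p

# Interpolate y = x^3 + x + 1 at four nodes; four values determine the cubic exactly.
p = newton_interp([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 11.0, 31.0])
```

Evaluating p(1.5) returns 5.875 = 1.5³ + 1.5 + 1, confirming the interpolant reproduces the cubic.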

Friday, 3:15 PM February 1, 2008 
Eric Knuth, Department of Curriculum & Instruction, University of Wisconsin-Madison Middle School Students' Production of Mathematical Justifications Abstract: Proof is considered to be central to the discipline of mathematics and to the practice of mathematicians, yet surprisingly, its role in school mathematics has traditionally been peripheral at best. More recently, however, the nature and role of proof in school mathematics has been receiving increased attention in the mathematics education community, with many mathematics educators advocating that proof should be a central part of the mathematics education of students at all grade levels. In this talk, I will present results concerning middle grade students' understandings of proof, in general, and their production of mathematical justifications, in particular. In addition, I will discuss implications regarding instructional conditions necessary to promote the development of students' proving competencies.

Friday, 3:15 PM January 25, 2008 
Kenneth Rice, Department of Biostatistics, University of Washington, http://www.biostat.washington.edu/people/faculty.php?netid=kenrice Optimal inference from intervals: a decision-theoretic approach Abstract: Most Bayesian analyses conclude with a summary of the posterior distribution, thus summarizing uncertainty about parameters of interest. For purposes of inference, this is not enough, as it avoids stating what it is about the parameters that we actually want to know. Formally, deciding our criteria for a 'good' answer defines a loss function, or utility, and is usually only considered for point estimation problems. We present several new results for interval estimation, where simple and interpretable loss functions provide formal justification and optimality for two-sided p-values, Bonferroni correction, the Benjamini-Hochberg algorithm, and standard sample-size calculations. Other consequences of this work will be discussed, including a resolution of Lindley's Paradox that is neither Bayesian nor frequentist.
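Of the procedures the abstract names, the Benjamini-Hochberg step-up algorithm is simple enough to sketch in a few lines (the p-values below are made-up illustration data):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Step-up BH procedure: find the largest rank k with p_(k) <= q*k/m and
    reject the k hypotheses with the smallest p-values (FDR control at q)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])   # indices of rejected hypotheses

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9])
```

Here only the two smallest p-values clear their step-up thresholds (0.00625 and 0.0125), so hypotheses 0 and 1 are rejected; note p = 0.039 fails its threshold 0.01875 even though it is below 0.05.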

Friday, 3:15 PM January 18, 2008 
Gilbert Strang, Massachusetts Institute of Technology Teaching and Research in Computational Science Abstract: The name "Computational Science and Engineering" describes the mixture of mathematics and algorithms that is at the center of modern applied mathematics. I believe that our teaching must go beyond the formula-based courses of the past to a **solution-based** course. I will describe how that course can start (with great matrices that come from calculus; the talk begins right at our basic courses). In research, medical imaging and other applications use the maximum-flow minimum-cut theorem for image segmentation in the plane. This theorem connects to the isoperimetric problem of minimizing perimeter/area. The new constraint is that the set must stay inside a given region (the minimum ratio is the "Cheeger constant"). So the usual winner (the Greeks knew it would be a circle) is no longer best. There are solved and unsolved problems in this mixture of geometry, optimization, duality, and applications.
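The max-flow min-cut theorem the abstract invokes can be demonstrated on a tiny network with the classic Edmonds-Karp algorithm (a generic textbook implementation, not code from the talk; the graph is an arbitrary example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap is a dict-of-dicts of edge capacities; returns the max-flow value,
    which by the max-flow min-cut theorem equals the min-cut capacity."""
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse edges start at 0
    flow = 0
    while True:
        parent = {s: None}                          # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v, bott = t, float('inf')                   # bottleneck along the path
        while parent[v] is not None:
            bott = min(bott, res[parent[v]][v])
            v = parent[v]
        v = t                                       # push flow along the path
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= bott
            res[v][u] += bott
            v = u
        flow += bott

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
f = max_flow(cap, 's', 't')
```

The edges out of the source have total capacity 5, and the algorithm finds a flow of value 5, so {s} versus the rest is a minimum cut — the equality that image-segmentation methods exploit.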

Friday, 3:15 PM January 11, 2008 
Heinz Schaettler, Department of Electrical and Systems Engineering, Washington University in St. Louis Optimal and Suboptimal Protocols for Mathematical Models of Tumor Growth under Angiogenic Inhibitors Abstract: Tumor antiangiogenesis is a novel approach to cancer treatment that aims at preventing a tumor from developing the blood vessel network it needs for its growth. In this talk we shall show how tools from optimal control theory can be used to analyze a class of mathematical models for tumor antiangiogenesis that are based on a paper by Hahnfeldt et al. (Cancer Research, 59, 1999). In these models the state of the system represents the primary tumor volume and the carrying capacity of the vasculature related to the endothelial cells. The nonlinear dynamics models how control functions representing angiogenic inhibitors affect the growth of these variables. The objective is to minimize the tumor volume with a given total amount of inhibitors. In the talk a full theoretical solution to the problem will be presented in terms of a synthesis of optimal controls and trajectories. Using tools of geometric control theory (based on Lie bracket computations), analytic formulas for the theoretically optimal solutions will be given. Optimal controls are concatenations of bang-bang controls (representing therapies of full dose and rest periods) and singular controls (therapies with specific feedback-type, time-varying partial doses). These, however, are not practically realizable. Properties of the dynamics and knowledge of the theoretically optimal solution are used to formulate realistic suboptimal protocols and evaluate their efficiency.

Friday, 3:15 PM November 30, 2007 
Yi Li, Department of Biostatistics, Harvard University, and Dana-Farber Cancer Institute http://www.hsph.harvard.edu/faculty/yili/ A New Class of Models for Spatially Correlated Survival Data Abstract: There is an emerging interest in modeling spatially correlated survival data in biomedical and epidemiological studies. In this talk, I discuss a new class of semiparametric normal transformation models for right-censored spatially correlated survival data. This class of models assumes that survival outcomes marginally follow a Cox proportional hazards model with unspecified baseline hazard, and their joint distribution is obtained by transforming survival outcomes to normal random variables, whose joint distribution is assumed to be multivariate normal with a spatial correlation structure. A key feature of the class of semiparametric normal transformation models is that it provides a rich class of spatial survival models where regression coefficients have population average interpretation and the spatial dependence of survival times is conveniently modeled using the transformed variables by flexible normal random fields. We study the relationship of the spatial correlation structure of the transformed normal variables and the dependence measures of the original survival times. Direct nonparametric maximum likelihood estimation in such models is practically prohibited due to the high-dimensional intractable integration of the likelihood function and the infinite-dimensional nuisance baseline hazard parameter. We hence develop a class of spatial semiparametric estimating equations, which conveniently estimate the population-level regression coefficients and the dependence parameters simultaneously. We study the asymptotic properties of the proposed estimators, and show that they are consistent and asymptotically normal. The proposed method is illustrated with an analysis of data from an Asthma Study and its performance is evaluated using simulations.

Friday, 3:15 PM November 16, 2007 
Sheldon Axler, College of Science & Engineering, San Francisco State University http://www.axler.net/ Harmonic Polynomials Abstract: A function of n real variables is called harmonic if its Laplacian (the sum of the second derivatives with respect to each variable) equals 0. This talk discusses the interesting theoretical and computational problems that arise from the following question: Given a polynomial of n variables, how can we find a harmonic function that agrees with it on some specified surface such as a sphere or an ellipsoid?
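The defining property in the abstract — Laplacian equals zero — is easy to check numerically. A sketch with central differences (exact for quadratics, up to floating-point error): x² − y² is harmonic, while x² + y² has Laplacian 4.

```python
def laplacian(f, x, y, h=1e-3):
    """Central-difference approximation of f_xx + f_yy at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

def harmonic(x, y):
    return x**2 - y**2        # f_xx = 2, f_yy = -2, so Laplacian = 0

def not_harmonic(x, y):
    return x**2 + y**2        # f_xx = 2, f_yy = 2, so Laplacian = 4

lap_h = laplacian(harmonic, 0.3, 0.7)        # ~ 0
lap_n = laplacian(not_harmonic, 0.3, 0.7)    # ~ 4
```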

Friday, 3:15 PM November 2, 2007 
Hal Smith, Department of Mathematics & Statistics, Arizona State University http://math.la.asu.edu/~halsmith Dynamical Systems in Biology Abstract: We identify a class of dynamical systems which typically arise in modeling in the biological sciences. Roughly, these systems are characterized by defined feedback relations among the dependent variables. A natural goal then is to understand the asymptotic behavior of this class of systems. We provide some results in this direction, largely from the theory of monotone dynamical systems, and also indicate why a complete understanding of global dynamics may be impossible. Many examples from the biological sciences will be discussed, drawn from population dynamics, gene regulation, and epidemiology.

Friday, 3:15 PM October 26, 2007 
Marianthi Markatou, Department of Biostatistics, Columbia University. Variance Estimators of Cross-Validation Estimators of the Generalization Error Abstract: We bring together methods from two different disciplines, machine learning and statistics, in order to address the problem of estimating the variance of cross-validation estimators of the generalization error. Specifically, we approach the problem of variance estimation of the CV estimators of the generalization error of computer algorithms as a problem in approximating the moments of a statistic. The approximation illustrates the role of training and test sets in the performance of the algorithm. It provides a unifying approach to evaluation of various methods used in obtaining training and test sets and it takes into account the variability due to different training and test sets. For the simple problem of predicting the sample mean and in the case of smooth loss functions, we show that the variance of the CV estimator of the generalization error is a function of the moments of the random variables Y, Z, where Y denotes the cardinality of the intersection of two different training sets and Z denotes the cardinality of the intersection of two different test sets. We prove that the distribution of these two random variables is hypergeometric and we compare our estimator with the estimator proposed by Nadeau and Bengio (2003). We extend these results to the regression case and the case of absolute error loss, and indicate how the methods can be extended to the classification case and the general case of kernel regression.
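The hypergeometric claim can be checked by simulation: draw two independent uniform size-k training sets from n items and compare the empirical distribution of their intersection size with the hypergeometric pmf (a sanity-check sketch with arbitrary n and k, not the paper's derivation):

```python
import math
import random

def hypergeom_pmf(n, k, j):
    """P(|S1 ∩ S2| = j) for two independent uniform size-k subsets of n items."""
    return math.comb(k, j) * math.comb(n - k, k - j) / math.comb(n, k)

random.seed(1)
n, k, trials = 20, 10, 20000
items = list(range(n))
counts = [0] * (k + 1)
for _ in range(trials):
    s1 = set(random.sample(items, k))
    s2 = set(random.sample(items, k))
    counts[len(s1 & s2)] += 1

emp5 = counts[5] / trials          # empirical P(intersection = 5)
th5 = hypergeom_pmf(n, k, 5)       # theoretical value ~ 0.344
```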

Friday, 3:15 PM October 19, 2007 
Krishna Jandhyala, Mathematics Department, Washington State University, Pullman, WA River stream flows in the Northern Québec Labrador region: A Multivariate Changepoint Analysis Abstract: Spring stream flows of five rivers in the northern Québec Labrador region are modeled in a multivariate Gaussian framework and analyzed for possible changepoints in their average flows. The multivariate formulation takes into account correlations among the flows of the five rivers, which flow in the same region. Significant change was detected in the multivariate mean vector, and the unknown changepoint is estimated by maximum likelihood. We then establish the asymptotic distribution of the changepoint MLE under both abrupt-change and smooth-change formulations. The asymptotic distributions allow us to compute confidence interval estimates of the changepoint in the river flows. The decrease in the mean river flows in 1984 may have been a consequence of a sharp decline in the region's snow cover that occurred about a year or two earlier.
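The univariate version of the changepoint MLE is short enough to sketch: for a Gaussian sequence with one abrupt mean shift and common variance, maximizing the likelihood over the changepoint is equivalent to minimizing the within-segment sum of squares (a generic textbook illustration with synthetic data, not the talk's multivariate analysis):

```python
import statistics

def changepoint_mle(xs):
    """MLE of a single mean-shift changepoint: the split index that minimizes
    the total within-segment sum of squared deviations."""
    best_tau, best_cost = None, float('inf')
    for tau in range(1, len(xs)):
        left, right = xs[:tau], xs[tau:]
        cost = (sum((x - statistics.fmean(left)) ** 2 for x in left)
                + sum((x - statistics.fmean(right)) ** 2 for x in right))
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Synthetic series: mean jumps from ~0 to ~5 after the fifth observation.
series = [0.1, -0.2, 0.0, 0.3, -0.1, 5.2, 4.8, 5.1, 4.9, 5.0]
tau = changepoint_mle(series)
```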

Friday, 3:15 PM October 12, 2007 
Sergei Ovchinnikov, San Francisco State University, http://userwww.sfsu.edu/~sergei/welcome.htm Cubical Token Systems Abstract: A cubical token system is a particular instance of a general algebraic structure, called 'token system', describing a mathematical, physical, or behavioral system as it evolves from one 'state' to another. This structure is formalized as a pair (S,T) consisting of a set S of states and a set T of tokens. Tokens are transformations of the set of states. Strings of tokens are 'messages' of the token system.
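The (S, T) formalism is concrete enough to sketch as a toy (our illustration, not from the talk): take the states to be the vertices of a 2-cube (binary strings) and the tokens to be coordinate flips; a message is a string of tokens applied in order.

```python
# Toy token system (S, T): S = vertices of the 2-cube, tokens = coordinate flips.
S = {'00', '01', '10', '11'}

def flip_first(s):
    """Token flipping the first coordinate."""
    return ('1' if s[0] == '0' else '0') + s[1]

def flip_second(s):
    """Token flipping the second coordinate."""
    return s[0] + ('1' if s[1] == '0' else '0')

def apply_message(state, message):
    """Apply a string (list) of tokens to a state, left to right."""
    for token in message:
        state = token(state)
    return state

final = apply_message('00', [flip_first, flip_second, flip_first])
```

Here the system evolves 00 → 10 → 11 → 01, tracing a path along cube edges — the "cubical" structure the title refers to.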

Friday, 3:15 PM October 5, 2007 
Violetta Piperigou, Department of Mathematics, University of Patras. http://www.math.upatras.gr/~vpiperig/englishversion.html Maximum Likelihood Estimators for a Class of Discrete Distributions Abstract: The method of maximum likelihood (ML) yields estimators which, asymptotically, are normally distributed, unbiased and of minimum variance. However, computational difficulties are encountered, since for families of discrete distributions such as convolutions and compound distributions the probabilities are given through recurrence relations, and hence the ML estimators require iterative procedures. It is shown that in a large class of such distributions the system of ML equations can be reduced by one equation, which is replaced by the first equation of the method of moments. Thus, for a two-parameter distribution such as the Charlier and the Neyman, only a single equation need be solved iteratively to derive the estimators.
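The reduction — replace one ML equation by the first moment equation, then solve a single equation — can be illustrated with the negative binomial as a stand-in (our choice of distribution, not one from the talk; for it, the sample-mean equation is in fact one of the ML equations). Setting mean = r(1−p)/p eliminates p, leaving a one-dimensional maximization in r:

```python
import math

def nb_logpmf(x, r, p):
    """log P(X = x) for the negative binomial with size r and success prob p."""
    return (math.lgamma(x + r) - math.lgamma(r) - math.lgamma(x + 1)
            + r * math.log(p) + x * math.log(1 - p))

def profile_mle_r(data, r_grid):
    """Moment equation mean = r(1-p)/p gives p = r/(r + mean); maximize the
    resulting profile log-likelihood over r alone (crude grid search)."""
    mean = sum(data) / len(data)
    best_r, best_ll = None, -float('inf')
    for r in r_grid:
        p = r / (r + mean)
        ll = sum(nb_logpmf(x, r, p) for x in data)
        if ll > best_ll:
            best_r, best_ll = r, ll
    return best_r

data = [0, 1, 1, 2, 2, 3, 3, 4, 6, 8]        # illustrative counts, sample mean 3
r_hat = profile_mle_r(data, [0.5 + 0.1 * i for i in range(100)])
```

A grid search stands in for the single iterative solve the abstract describes; in practice one would use a 1-D root-finder or optimizer.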

Friday, 3:15 PM September 28, 2007 
Roger Nelsen, Department of Mathematical Sciences, Lewis & Clark College. http://www.lclark.edu/~mathsci/nelsen.html Presidential Primaries: Is Democracy Possible? Abstract: American presidential primaries are examples of multicandidate elections in which plurality usually determines the winner. Is this the "best" way to decide the winner? While plurality is a common procedure, it has serious flaws. Are there alternative procedures which are in some sense more "fair"? How do we determine the "fairness" of an election procedure? With no more mathematics than arithmetic (to count votes!), we'll examine some alternative procedures and some fairness criteria.
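The flaw in plurality is easy to exhibit with pure vote-counting (a standard textbook profile, not an example from the talk): below, A wins by plurality with 4 first-place votes, yet the Borda count — and a head-to-head majority — prefer B.

```python
from collections import Counter

# Each ballot ranks candidates best-first; values are numbers of voters.
profile = {('A', 'B', 'C'): 4, ('B', 'C', 'A'): 3, ('C', 'B', 'A'): 2}

def plurality_winner(profile):
    """Winner by first-place votes only."""
    tally = Counter()
    for ballot, n in profile.items():
        tally[ballot[0]] += n
    return tally.most_common(1)[0][0]

def borda_winner(profile):
    """Borda count: a candidate ranked i-th on a ballot gets (m-1-i) points."""
    tally = Counter()
    for ballot, n in profile.items():
        for pos, cand in enumerate(ballot):
            tally[cand] += n * (len(ballot) - 1 - pos)
    return tally.most_common(1)[0][0]

pw = plurality_winner(profile)   # A, with 4 first-place votes
bw = borda_winner(profile)       # B, with Borda score 12 vs A's 8 and C's 7
```

Note also that B beats A head-to-head 5 votes to 4, so plurality here elects a candidate a majority would reject in a two-way race.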

Friday, 3:15 PM June 8, 2007 
Joanne Lobato, San Diego State University, http://www.sci.sdsu.edu/crmse/new_site/personal_pages/lobato.html "How Is What We 'Notice' in Mathematics Classrooms Socially Organized?" Abstract: I am interested in how people attend to and notice particular mathematical properties when many sources of information compete for one's attention. In the study presented in this talk, students from two 7th grade classes came to 'see' and reason about situations involving linear functions in distinctly different ways despite the fact that both classes utilized reform-oriented practices and addressed the same mathematical content. Analyses of the classroom data demonstrate relationships between these different reasoning patterns and the nature of attention-focusing in each classroom. In particular, the analyses reveal ways in which noticing is socially organized and tied to the practices that developed in each classroom. By building upon our previous notion of 'focusing phenomena' and upon Chuck Goodwin's work on 'professional vision,' we present what we call the 'focusing interactions' framework as a way of coordinating individual and social aspects of attention-focusing. The results will demonstrate how micro-features of instruction related to attention-focusing can lead to significantly different reasoning patterns for students even when many macro-features of reform are in place.

Friday, 4:00 PM June 1, 2007 
Dev Sinha, University of Oregon http://www.uoregon.edu/~dps/index.php Counting Lie products, with applications in topology Abstract: The central object of study in our talk is a collection of vector spaces obtained by "Lie multiplying together n variables in all possible ways, using each variable exactly once," which are known by the name Lie(n). As a warm-up, we consider the corresponding counting question in other settings. We answer our question by developing canonical dual spaces to this collection. Since the Lie(n) spaces control Lie algebras, these duals control Lie coalgebras. These duals arise naturally in understanding the topology of configuration spaces. Our main application at the very end of the talk will be to Hopf invariants in topology.
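The counting question has a clean answer: dim Lie(n) = (n−1)!, and one standard basis is indexed by multilinear Lyndon words — words on n distinct letters, each used once, that are strictly smaller than all of their proper rotations. A brute-force count (our illustration of this standard fact, not the talk's argument):

```python
from itertools import permutations
from math import factorial

def count_multilinear_lyndon(n):
    """Count length-n words using n distinct letters once each that are
    strictly smaller than every proper rotation (Lyndon words).  These index
    a basis of the multilinear part of the free Lie algebra, of dim (n-1)!."""
    letters = list(range(n))
    count = 0
    for w in permutations(letters):
        if all(w < w[i:] + w[:i] for i in range(1, n)):
            count += 1
    return count

counts = [count_multilinear_lyndon(n) for n in range(1, 6)]   # 1, 1, 2, 6, 24
```

For distinct letters a word beats all its rotations exactly when it starts with the smallest letter, which is why the count collapses to (n−1)!.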

Friday, 3:15 PM May 25, 2007 
Rina Zazkis, Simon Fraser University, http://www.sfu.ca/~zazkis/ Number Theory in Mathematics Education: A Queen and a Servant Abstract: Gauss referred to Number Theory as the "Queen of Mathematics". In the last decade my research has focused on preservice teachers' understanding of Number Theory. This included studies on the understanding of divisibility, factors and multiples, prime numbers, and the fundamental theorem of arithmetic. I will present snapshots from this research, suggesting that insufficient attention is paid to this area in mathematics education, in both curriculum and research. I will argue that Number Theory can serve a dual purpose, as a queen and also as a servant, and that attending to this duality is beneficial for acquiring understanding of mathematical practice.

Thursday, 4:00 PM May 17, 2007 
Rebecca Goldstein, Radcliffe Institute for Advanced Study, Harvard University http://www.rebeccagoldstein.com/ Spinoza's Mind Abstract: There is at the moment a resurgence of interest in Spinoza and his rigorous universalism. This seems to be connected to the current tension between Reason and Religion. Spinoza's specific obsession with Reason, from mathematical logic and scientific determinism to ethics and politics, might have grown out of the obsessions of his Jewish community.

Friday, 3:15 PM May 11, 2007 
Tevian Dray, Oregon State University http://oregonstate.edu/~drayt/ The Geometry of the Octonions Abstract: The octonions are the last in the sequence of 4 division algebras generalizing the real and complex numbers. The octonions play a somewhat surprising role in a wide range of geometric phenomena, some of which will be described in this talk. For instance, the octonions provide a natural description of the rotation groups SO(7) and SO(8), and explain why the vector cross product only exists in 3 and 7 dimensions, while 2×2 octonionic Hermitian matrices provide a natural description of the Lorentz group SO(9,1) (and the last Hopf fibration), and explain why supersymmetry only exists in 3, 4, 6, and 10 dimensions. Furthermore, the exceptional groups G2, F4, and E6 have natural octonionic descriptions, with applications to both quantum mechanics and particle physics.
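The cross-product remark can be made concrete in the 3-dimensional case using quaternions, the previous algebra in the sequence: for pure quaternions u and v, the Hamilton product satisfies uv = −u·v + u×v, so the cross product falls out of the algebra (the 7-dimensional story works the same way with octonions). A small sketch:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def cross(u, v):
    """3D cross product extracted as the vector part of a pure-quaternion
    product: uv = -(u . v) + u x v."""
    prod = qmul((0,) + tuple(u), (0,) + tuple(v))
    return prod[1:]

c = cross((1, 0, 0), (0, 1, 0))   # e1 x e2 = e3
```

The scalar part of the same product is −u·v, so a single quaternion multiplication encodes both the dot and cross products.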

Friday, 3:15 PM May 4, 2007 
Jere Confrey, Washington University, http://www.artsci.wustl.edu/~educ/edu_confrey.htm Integrating Mathematics and Animation: Catching Content Instruction Up with 6th Graders' Interests Abstract: Children live in a world filled with animation, from video games to television commercials to cell phone images. Yet few understand what makes these images work, and therefore miss the invitation to learn science and mathematics as a means to pursue compelling careers using these ideas. In this talk, we report on a summer workshop for urban sixth graders using software called Graphs N Glyphs, which is designed to make the mathematics behind animation visible. In the workshop, students explored how animations recreate motion, time and space as they learned ideas of measurement, operations with integers, graphing, similarity and scaling, and a variety of transformations. We present examples of student projects and discuss the underlying concept of professional transitional software as a means to entice urban youth into advanced study in mathematics.

Friday, 3:15 PM April 27, 2007 
Jeremy Kilpatrick, University of Georgia http://math.coe.uga.edu/GradCoord/KilpatHomePg.html Seeking Common Ground Abstract: In this talk, I'll describe the math wars of the new math era and of recent years, contrasting some of the issues and participants in each set of math wars. I'll discuss why mathematicians and mathematics educators might want to seek common ground and what that common ground might consist of. I'll end with some thoughts on what might be coming next, especially given the reception of the Curriculum Focal Points document from the National Council of Teachers of Mathematics (NCTM), as well as the interim report from the National Mathematics Advisory Panel.

Thursday, 4:00 PM April 26, 2007 
James Glimm, AMS president, University at Stony Brook and Brookhaven National Laboratory http://www.ams.sunysb.edu/~glimm/glimm.html Turbulent Mixing in Real (Nonideal) Fluids Abstract: We begin with an overview of the theory and computational practice for nonlinear systems of conservation laws. These equations are widely used in numerical studies of gas dynamics. We explain how this very classical subject still has open research problems. This introduction leads us to the main topic of this talk. Turbulent mixing is an important but difficult problem in fluid dynamics. We consider the mixing zone generated by the acceleration of a fluid interface between two fluids of different densities. The case of steady acceleration is called the Rayleigh-Taylor instability, and it gives rise to a mixing zone of the two fluids, growing in thickness as time increases. The mixing rate, experimentally, shows a universal growth rate, as a multiple of t^2, but simulations in most cases disagree with experiments by a factor of 2 or more. We report on a new class of simulations (based on (a) an improved front tracking algorithm and (b) inclusion of real fluid effects) which agree with experiment. Here the real fluid effect is surface tension for immiscible fluids, and mass diffusion, initial (t = 0) mass diffusion, or viscosity for miscible fluids. We document significant dependence of the mixing rate on both physical effects (e.g. surface tension) and on numerical artifacts such as mass diffusion (for untracked simulations). We conclude that modeling of turbulent mixing is sensitive to details of transport, surface tension, and to their numerical analogues. The full dependence of the mixing rates on these physical phenomena and their numerical analogues is only partially understood.
When we consider the averaged equations, which remove all of the microphysical complexity of the mixing process and replace it with mean or averaged flow properties, serious difficulties emerge in the problem of closure, or the proper definition of the new nonlinear terms arising from the averaging process to define the closed equations. The averaged equations are never in conservation form, and for this reason the proper meaning of the nonlinear terms and their discretization and regularization is still a research issue. The numerical data base resulting from experimentally validated simulations becomes invaluable in providing a rational and systematic way to evaluate proposed closures. We find excellent agreement between the direct average of the simulated data and closures we have proposed. We also find a fair degree of insensitivity in comparing the data to other closures. Finally, we discuss briefly plans at Stony Brook and Brookhaven National Laboratory for computational science.

Monday, 3:15 PM April 23, 2007 
Ian McKeague, Columbia University, http://biostat.columbia.edu/~mckeague/ Analyzing Trajectories: Functional Predictors of Univariate Responses? Abstract: In recent years epidemiologists have embarked on numerous studies of early determinants of adult health. It is increasingly believed that health "trajectories" from gestation through childhood can have a profound impact on a range of adult health outcomes. For example, one question of interest is whether there are sensitive periods in childhood during which growth rate influences the incidence of neuropsychological disease. Data have only recently become available to address such questions, e.g., in follow-up studies of individuals recruited from the National Collaborative Perinatal Program (1959-1974), which collected data on approximately 58,000 study pregnancies. New statistical methodology needs to be devised to address the challenging statistical issues involved. A particularly difficult problem is to reduce the dimension of the complex growth trajectories in an interpretable way, for use as predictors in regression modeling. This talk discusses a new class of functional regression models that can provide inference for sensitive growth periods. The basic idea is to include time points among the parameters of interest. The new methodology involves nonstandard asymptotic theory; specifically, non-Gaussian limit distributions and rates of convergence that are determined in some sense by the smoothness of the trajectory.

Thursday, 3:30 PM April 19, 2007 
Mario Livio, Hubble Space Telescope Institute, http://www.stsci.edu/institute/ The equation that couldn't be solved Abstract: For thousands of years mathematicians solved progressively more difficult algebraic equations, from the simple quadratic to the more complex quartic equation, yielding important insights along the way. Then they were stumped by the quintic equation, which resisted solution for three centuries until two great prodigies independently proved that quintic equations cannot be solved by a simple formula. These geniuses, a young Norwegian named Niels Henrik Abel and an even younger Frenchman named Evariste Galois, both died tragically. Galois, in fact, spent the last night before his fatal duel (at the age of twenty) scribbling a brief summary of his proof, occasionally writing in the margin of his notebook “I have no time.” Some of the mysteries surrounding his death, which have lingered for more than 170 years, are finally resolved in this book. Galois' work gave rise to group theory, the “language” that describes symmetry. Group theory explains much about the esthetics of our world, from the choosing of mates to Rubik's cube, Bach's musical compositions, the physics of subatomic particles, and the popularity of Anna Kournikova.

Friday, 3:15 PM April 13, 2007 
Diana Fisher, Wilson High School Modeling Dynamic Systems at Wilson High School Abstract: Population change, sustainability policies, the spread of epidemics, pharmacokinetics, unemployment and city growth, and supply and demand are all topics students study as they create models using the STELLA software in the "Modeling Dynamic Systems" course at Wilson High School. Students must also select an original modeling topic to research, design a working model, and write a technical paper. Student work will be displayed. Additionally, students have used the STELLA software to experiment with systems models in traditional algebra, precalculus, and calculus classes, reinforcing the characteristic behavior of different functions over time. Because the software is icon-based and dependencies are displayed visually, students are able to study more sophisticated topics using another representational approach (in addition to the symbolic, graphical, and numeric). Visual modeling tools allow a more inclusive strategy for students who are visual learners. The design of the model structures for each function (linear, quadratic, exponential, convergent, logistic, and sinusoidal) introduces a simple conceptual calculus (differential equation) approach to the study of those functions long before students study calculus formally. It is a wonderful, powerful learning tool. Fisher has used STELLA and a system dynamics modeling approach to support the study of mathematics in her high school math and modeling classes for 17 years (10 years at Franklin High School and 7 years at Wilson High School).
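The stock-and-flow diagrams such software compiles down to are just Euler updates of a differential equation. A minimal sketch of the logistic (population-growth) model mentioned above, with made-up parameter values:

```python
def euler_logistic(p0, r, K, dt, steps):
    """Euler's method for dP/dt = r*P*(1 - P/K): each step adds the flow
    (growth rate times dt) to the stock P, the update a STELLA-style
    stock-and-flow model performs."""
    p = p0
    history = [p]
    for _ in range(steps):
        p += dt * r * p * (1 - p / K)
        history.append(p)
    return history

# Start at 10, growth rate 0.5, carrying capacity 1000, 400 steps of 0.1.
traj = euler_logistic(p0=10, r=0.5, K=1000, dt=0.1, steps=400)
```

The trajectory rises in the familiar S-shape and levels off at the carrying capacity — the "characteristic behavior over time" students compare across function families.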

Thursday, 3:15 PM April 12, 2007 
Patrick D. Roberts, Neurological Sciences Institute, Oregon Health & Science University http://www.ohsu.edu/nsi/faculty/robertpa/ Stability of Adaptive Sensory Processing in Mormyrid Electric Fish Abstract: Many sensory systems adapt to repeated stimuli in order to ignore predictable sensory patterns and emphasize novel experience. An experimentally tractable sensory system that exhibits sensory adaptation is the electrosensory system of mormyrid electric fish. Electrosensory adaptation has been shown to rely on spike-timing-dependent synaptic plasticity, where the change in synaptic strength depends on the exact relative timing of pre- and postsynaptic spikes. In this talk, we will use a mathematical representation of the electrosensory system to show that the particular synaptic learning rule found in this system drives the output to a stable equilibrium that cancels predictable stimuli. Conditions will be derived that ensure that the equilibrium is not just locally, but globally asymptotically stable. The results demonstrate that the synaptic mechanism for adaptation of the electrosensory system's output would generate a stable cancellation of expected sensory patterns.
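The "negative image" that cancels a predictable stimulus can be caricatured with a toy anti-Hebbian rule (our stand-in for the actual spike-timing-dependent model, with arbitrary signal and learning rate): each weight is nudged opposite to the residual response at its time step, and the response converges to zero.

```python
# Toy negative-image learning: w[i] tracks the negative of the predictable
# input, so the summed response p[i] + w[i] is driven to zero.
predictable = [0.0, 1.0, 2.0, 1.0, 0.0]   # stimulus locked to the fish's own discharge
w = [0.0] * len(predictable)
lr = 0.2                                   # learning rate; must satisfy 0 < lr < 2 for stability

for _ in range(200):
    response = [p + wi for p, wi in zip(predictable, w)]
    w = [wi - lr * r for wi, r in zip(w, response)]   # anti-Hebbian update

final_response = [p + wi for p, wi in zip(predictable, w)]
```

Each residual shrinks by the factor (1 − lr) per iteration, so the fixed point w = −predictable is globally attracting for 0 < lr < 2 — a linear shadow of the global-stability result the talk establishes for the full model.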

Friday, 3:15 PM April 6, 2007 
Jianying Zhang, Western Washington University Numerical and analytical approaches to some problems on viscoplastic fluids Abstract: Viscoplastic fluids are a family of non-Newtonian fluids with yield stress. A viscoplastic fluid behaves as an elastic solid if the applied shear stress is under the yield value, while it flows as a non-Newtonian viscous fluid when the yield value is exceeded. The mechanics, as well as the mass and heat transport, of viscoplastic materials are becoming increasingly important with the discovery that many multicomponent fluids, such as foams, slurries, suspensions and emulsions, are viscoplastic. The main challenges in understanding a viscoplastic fluid are the nonlinearities caused by its non-Newtonian features and the multiphase flow patterns due to the existence of the yield surfaces. How does the yield stress affect the fluid stability? How can the yield surfaces be accurately located? These are interesting research questions. I will outline the progress that has been made in answering them.

Friday, 3:15 PM March 16, 2007 
Moshe Shaked, University of Arizona, http://math.arizona.edu/~shaked/ Conditional Ordering and Positive Dependence Abstract: Every univariate random variable is smaller, with respect to the ordinary stochastic order and with respect to the hazard rate order, than a right-censored version of it. In this paper we attempt to generalize these facts to the multivariate setting. It turns out that in general such comparisons do not hold in the multivariate case, but they do under some assumptions of positive dependence. First we obtain results that compare the underlying random vectors with respect to the usual multivariate stochastic order. A larger set of results, which yield comparisons of the underlying random vectors with respect to various multivariate hazard rate orders, is given next. Some comparisons with respect to the orthant orders are also discussed. Finally, some remarks about applications are highlighted.

Friday, 3:15 PM March 9, 2007 
Loki Natarajan, Moores UCSD Cancer Center http://cancer.ucsd.edu/Research/summaries/lnatarajan.asp Measurement Error Models Abstract: The impact of mismeasured covariates on parameter estimates in commonly used regression models will be discussed. Two well-known approaches for correcting for measurement error, (i) regression calibration and (ii) likelihood-based methods, will be presented and compared via simulations. These methods will also be compared to multiple imputation, which is applicable to missing-data problems and is gaining favor as a method for measurement error correction. Data from a study examining the associations between dietary intake of carotenoids and breast cancer recurrence will be used to illustrate the ideas.
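A minimal simulation makes the attenuation effect concrete. The sketch below is illustrative only and is not from the talk: the classical error model, the parameter values, and the assumption that the error variance is known are all my own choices. It shows a naive slope shrinking by the reliability ratio and regression calibration roughly undoing the shrinkage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b0, b1 = 1.0, 2.0          # true regression parameters (assumed for illustration)
sigma_u = 0.5              # measurement-error SD, taken as known

x = rng.normal(0.0, 1.0, n)            # true covariate (unobserved in practice)
w = x + rng.normal(0.0, sigma_u, n)    # mismeasured covariate actually recorded
y = b0 + b1 * x + rng.normal(0.0, 1.0, n)

def ols_slope(u, v):
    """Least-squares slope of v regressed on u."""
    return np.cov(u, v)[0, 1] / np.var(u)

naive = ols_slope(w, y)    # attenuated toward zero by the reliability ratio

# Regression calibration: replace w with an estimate of E[X | W].
lam = (np.var(w) - sigma_u**2) / np.var(w)   # estimated reliability ratio
x_hat = w.mean() + lam * (w - w.mean())
corrected = ols_slope(x_hat, y)              # roughly undoes the attenuation
```

With these values the naive slope lands near b1 times the reliability ratio, i.e. about 1.6 rather than 2.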

Friday, 3:15 PM March 2, 2007 
Christine Escher, Oregon State University http://oregonstate.edu/~escherc/ The topology of nonnegatively curved manifolds. Abstract: An important question in the study of Riemannian manifolds is how to distinguish smooth manifolds that admit a metric with positive sectional curvature from those that admit one of nonnegative curvature. Surprisingly, if the manifolds are compact and simply connected, all known obstructions to positive curvature are already obstructions to nonnegative curvature. On the other hand, there are very few known examples of manifolds with positive curvature. In contrast to the positive curvature setting, there exist comparatively many examples with nonnegative sectional curvature. Hence it is natural to ask whether, among the known examples, it is possible to topologically distinguish manifolds with nonnegative curvature from those admitting positive curvature. In joint work with Wolfgang Ziller we address this question. In this talk, after reviewing some of the history, I will describe the topology of two specific families of nonnegatively curved manifolds in dimension seven and compare them to known examples of manifolds of positive curvature.

Friday, 3:15 PM February 23, 2007 
Michael Miller, University of Victoria http://www.cs.uvic.ca/~mmiller/ Decision Diagrams for Reversible and Quantum Circuits Abstract: The behaviour of reversible and quantum gates can be described in matrix form. The matrices are permutations for reversible logic gates and complex-valued for the quantum case. The behaviour of a circuit (cascade) of such gates can be computed as the product of the appropriate matrices. However, the size of the matrices grows exponentially with the number of lines in the circuit. This talk will present decision diagram techniques for representing and manipulating the matrices encountered for binary and multiple-valued reversible and quantum circuits. QuIDDPro for binary circuits, developed at the University of Michigan, is outlined. A new approach called Quantum Multiple-valued Decision Diagrams (QMDD) will be described in detail. New results on variable reordering for QMDD will be presented.

Friday, 3:15 PM February 16, 2007 
Professor Haijun Li, Washington State University, http://www.sci.wsu.edu/math/faculty/lih/welcome.html Tail Dependence of Multivariate Distributions Abstract: The tail dependence of a multivariate distribution describes the limiting proportion of exceedance of some margins over a large threshold given that the other margins have already exceeded that threshold, and can be used in the analysis of dependence among extremal events. Multivariate tail dependence is frequently studied via the method of copulas. In this talk, we discuss an alternative method to derive tractable formulas of tail dependence for distributions whose copulas are not explicitly available. Our method depends only on the tail analysis and does not involve the marginal transforms on the entire distributions. Combined with closure properties of total positivity, our method also enables us to establish the monotonicity of tail dependence with respect to the heavy-tail index. The bivariate elliptical distribution and bivariate Pareto distribution are discussed throughout to illustrate the results.

Friday, 3:15 PM February 2, 2007 
Linda Petzold, UCSB, member of the National Academy of Engineering http://www.me.ucsb.edu/dept_site/people/new_faculty_pages/petzold_page.html Multiscale Simulation of Biochemical Systems Abstract: In microscopic systems formed by living cells, the small numbers of some reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA). Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) the presence of multiple timescales (both fast and slow reactions); and (2) the need to include in the simulation both chemical species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation. We will describe several recently developed techniques for multiscale simulation of biochemical systems, along with a new software toolkit called StochKit.
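For readers unfamiliar with the SSA, a minimal sketch of Gillespie's direct method for a single decay reaction (each of n molecules of species A disappears independently at rate k) looks like the following. The toy system and parameter values are mine, chosen for illustration; they are not from the talk or from StochKit.

```python
import numpy as np

def ssa_decay(n0, k, t_max, rng):
    """Gillespie SSA for the single reaction A -> (nothing), rate constant k.

    At each step the total propensity is a = k*n; the waiting time to the
    next reaction is exponential with mean 1/a, and the reaction removes
    one molecule of A.  Returns the event times and population sizes.
    """
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0 and t < t_max:
        a = k * n                        # total propensity
        t += rng.exponential(1.0 / a)    # exponential time to next reaction
        n -= 1                           # fire the only reaction
        times.append(t)
        counts.append(n)
    return times, counts

rng = np.random.default_rng(1)
times, counts = ssa_decay(n0=100, k=0.5, t_max=50.0, rng=rng)
```

Each simulated trajectory is one discrete, stochastic realization; the inefficiency the abstract describes comes from the fact that every single reaction event costs one loop iteration.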

Friday, 3:15 PM January 26, 2007 
Bostwick Wyman, Ohio State University http://www.math.ohiostate.edu/~wyman/ What is a Reciprocity Law? (Reprise) Abstract: In 1972 I published a rather naïve paper in the Monthly called “What is a Reciprocity Law,” concluding with the comment “I still don't know what a reciprocity law is, or what one should be.” In another Monthly article (published in 1978), Emma Lehmer reported on new results on “Rational Reciprocity Laws” and presented the classical approach as an answer to my question. This year, motivated by the new book Fearless Symmetry by Avner Ash and Robert Gross, I decided to try again. Since I have been out of the number theory business for many years, I will have to give a talk for amateurs and plead for help: what is a reciprocity law?

Friday, 3:15 PM January 19, 2007 
Patricio Herbst, University of Michigan Creating representations of mathematical work and using them to study teaching Abstract: Reform in mathematics education has usually been predicated on the need to make students' work in classrooms more authentic—to engage students in inquiries that resemble those of mathematicians. Little attention has been given to understanding the complexities that such work creates for teachers who have to manage students' learning in classrooms. I present the approach followed by project ThEMaT (Thought Experiments in Mathematics Teaching) to study the perspective of teachers. ThEMaT has created representations of teaching (in the form of animated cartoons, comic books, and slideshows) that sketch classroom episodes where students are involved in mathematical work—specifically, episodes that show how students' work on a problem might lead to the statement of a theorem or how a class could work together in doing a proof. These representations are currently being used as prompts for conversations among teachers of high school geometry, whose reactions to those representations help us understand the predicaments of teaching geometry, particularly the tensions that geometry teachers need to manage in orchestrating mathematical work.

Friday, 3:15 PM December 1, 2006 
Hal Sadofsky, University of Oregon, http://darkwing.uoregon.edu/~sadofsky/ Homology and Limits Abstract: One of the key invariants in algebraic topology is homology, which provides a useful invariant for distinguishing between spaces which aren't homotopy equivalent, and provides a first guess for when spaces are homotopy equivalent. Two of the key constructions involved in building topological spaces from basic building blocks are union and cartesian product. Homology is defined so that the homology of a union is easily calculated, and the homology of finite products can be calculated with the Künneth Theorem. By contrast, the homology of infinite products (or more generally sequential limits) is not accessible with the standard tools of homology. We describe a "solution" to this problem for homology (or generalized homology such as K-theory) of spectra, and give examples. This solution involves calculating higher derived functors of the "finite module" functor G. Let k be a field, and A a k-algebra. If M is an A-module, we define G(M) = {m in M : Am is a finitely generated k-vector space}. G is left-exact, but not exact. If A is the ring of operations for a homology theory, then the key to understanding the homology of a sequential limit of spectra is understanding the higher derived functors of G. I'll describe what I know about these functors in the cases given by Z/2 homology and by complex K-theory.

Monday, 3:15 PM November 27, 2006 
Diane Whitfield, Casio MRD Center and Portland Community College Using Technology to Enhance a Lecture and Generate Discussions Abstract: Using technology during a lecture can help us address a variety of learning styles quickly and encourage students to ask questions. During this presentation, I will use the ClassPad Manager software to show how easy it is to incorporate technology into a lecture and how this simple technique improves student understanding. I will also show examples used by other instructors and one written by a student. Examples will range from basic algebra to calculus.

Friday, 3:15 PM November 17, 2006 
Hammou El Barmi, Zicklin School of Business, Baruch College, City University of New York http://stat.baruch.cuny.edu/~helbarmi Inference under Stochastic Orderings: the k-sample Case Abstract: see the PDF "Inference under Stochastic Orderings: the ksample Case.pdf"

Saturday, 11:00 AM November 11, 2006 
Richard Brualdi, University of Wisconsin, http://www.math.wisc.edu/~brualdi The Bruhat order for (0,1)matrices Abstract: We discuss the classical Bruhat order on the set of permutations of {1,2,...,n} and two possible extensions to more general matrices of 0's and 1's.
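One standard way to compare permutations in the Bruhat order works directly with their (0,1)-matrices: p is below q exactly when every leading-submatrix sum for p's matrix dominates the corresponding sum for q's. The brute-force check below on S_3 is my own illustration of that well-known criterion, not of the speaker's extensions to more general (0,1)-matrices.

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation (0,1)-matrix with a 1 in row i, column p[i]."""
    m = np.zeros((len(p), len(p)), dtype=int)
    for i, j in enumerate(p):
        m[i, j] = 1
    return m

def bruhat_leq(p, q):
    """p <= q in the Bruhat order iff every leading-submatrix sum of
    p's matrix is >= the corresponding sum for q's matrix."""
    sp = perm_matrix(p).cumsum(axis=0).cumsum(axis=1)
    sq = perm_matrix(q).cumsum(axis=0).cumsum(axis=1)
    return bool((sp >= sq).all())

ident, rev = (0, 1, 2), (2, 1, 0)   # minimum and maximum of the order on S_3
```

The identity sits below every permutation and the reversal above every permutation, while the two transpositions (1,0,2) and (0,2,1) are incomparable.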

Friday, 3:15 PM October 27, 2006 
Xuelin Huang, University of Texas, M.D. Anderson Cancer Center http://gsbs.uth.tmc.edu/tutorial/huang_x.html Proportional Hazards Models with an Assumed Copula for Dependent Censoring: A Sensitivity Analysis Approach Abstract: In clinical studies, when censoring is caused by competing risks or patient withdrawal, there is always a concern about the validity of treatment effects that are obtained under the assumption of independent censoring. Since dependent censoring is nonidentifiable without additional information, the best we can do is a sensitivity analysis to evaluate the changes of parameter estimates under different degrees of assumed dependent censoring. Such analysis is especially useful when knowledge about the degree of dependent censoring is available, e.g. through literature review or expert opinion. The effects of dependent censoring depend not only on the fraction of subjects being censored, but also on the imbalance of this fraction between different treatment groups. Due to this complexity, the effects of dependent censoring on parameter estimates are usually not clear. We provide an approach to assess these effects for the Cox proportional hazards models. A copula function is assumed to specify the joint distribution of failure and censoring times, while their marginal distributions are specified by semiparametric Cox models. An iteration algorithm is proposed to estimate the regression parameters and marginal survival functions under the assumed copula model. Simulation studies show that the proposed method works well. The method is applied to the data analysis for an AIDS clinical trial, in which 27% of the patients withdrew due to toxicity or request from the patient or investigator.

Friday, 3:15 PM October 13, 2006 
Joel H. Shapiro, Emeritus Professor, Michigan State University http://www.mth.pdx.edu/~shapiroj Composition operators and analytic function theory Abstract: Any holomorphic (i.e., analytic) mapping phi that takes a plane domain G into itself induces, via composition on the right, a linear mapping C(phi) on Hol(G), the space of all functions holomorphic on G: C(phi)f := f ∘ phi (f in Hol(G)). The challenge here is to make connections between the operator-theoretic behavior of C(phi) and the function-theoretic properties of phi. I'll show how this plays out in dramatic fashion for the problem of compactness in the setting of the Hardy space H^2, the simplest Hilbert space of analytic functions on the open unit disc. Time permitting, I'll briefly indicate some other directions, e.g.: eigenvalues, cyclicity, matrices...
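As a toy illustration of the compactness question (my example, not the speaker's): for the dilation phi(z) = lam*z with |lam| < 1, the operator C(phi) acts diagonally on the orthonormal monomial basis of H^2, since C(phi)(z^n) = lam^n z^n, and its singular values lam^n decay to zero, which for a diagonal operator is exactly compactness.

```python
import numpy as np

# Toy symbol phi(z) = lam * z with |lam| < 1 (an assumed example).
# On the orthonormal monomial basis {z^n} of H^2, C(phi)(z^n) = lam^n z^n,
# so the truncated matrix of C(phi) is diagonal.
lam, N = 0.5, 20
C = np.diag(lam ** np.arange(N))   # C(phi) restricted to span{1, z, ..., z^(N-1)}

sing = np.linalg.svd(C, compute_uv=False)   # singular values lam^n, n = 0..N-1
# They decay geometrically to 0; for a rotation (|lam| = 1) every singular
# value would equal 1 and C(phi) would not be compact.
```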

Friday, 3:15 PM October 6, 2006 
Karen Willcox, Dept. of Aeronautics and Astronautics, MIT Model-constrained optimization methods for reduction of large-scale systems Abstract: Model reduction entails the systematic generation of cost-efficient representations of large-scale systems that result, for example, from discretization of partial differential equations. Considerable progress in model reduction methodologies for large-scale systems has seen successful application to many fields, such as computational fluid dynamics, structural dynamics, atmospheric modeling, and circuit design. Of current interest are challenges associated with optimal design, optimal control and inverse problem applications. For these applications, a key challenge is deriving reduced models to capture variation over a parametric input space, which, for many optimization applications, is of high dimension. This talk presents recent methodology developments in which the task of determining a suitable reduced basis is formulated as a sequence of optimization problems. Applications of the methodology to a turbomachinery design problem and a large-scale contaminant transport inverse problem will be presented.

Monday, 3:15 PM June 5, 2006 
Dr. Srinivasa Rao, Institute of Mathematical Sciences, Madras, India Life and Work of Srinivasa Ramanujan Abstract: Srinivasa Ramanujan has been hailed as a 'natural' mathematical genius and compared to the all-time greats Euler and Gauss. In his short life span of 32 years, 4 months and 4 days, he left behind an incredibly vast number of theorems (more than 3200) in his celebrated Notebooks, without proofs, which hence became a source of research for generations of mathematicians. In 1976 George Andrews discovered, in the estate of G. N. Watson, 100 sheets containing about 600 theorems, and gave them the name the 'Lost' Notebook of Ramanujan; it was released at the birth centenary of Ramanujan on December 22, 1987, in Chennai. Bruce Berndt spent more than two decades on these Notebooks, guided about a score of students working on Ramanujan's entries, and published a five-part work entitled "Ramanujan's Notebooks", Parts I to V (Springer-Verlag, 1975 to 1997). Ratan P. Agrwala produced a three-part work entitled "Resonance of Ramanujan's mathematics" (New Age International, 1996). The CD-ROMs on the life and work of Ramanujan now provide a unique opportunity to see the original entries in the Notebooks along with the work of Bruce Berndt. The two-part work was released in December 2003 in India. Glimpses into the CD-ROMs will be provided along with the relevant details on the life and work of Ramanujan.

Friday, 3:15 PM June 2, 2006 
Terry Wood, Purdue University Working in Groups: What Matters?

Friday, 3:15 PM May 26, 2006 
Patrick De Leenheer Coexistence, bistability and oscillations in the feedback-controlled chemostat Abstract: The chemostat is a biological reactor used to study the dynamics of species competing for nutrients. If there are n > 1 competitors and a single nutrient, then at most one species survives, provided the control variables of the reactor are constant. This result is known as the competitive exclusion principle. I will review what happens if one of the control variables, the dilution rate, is treated as a feedback variable. Several species can coexist for appropriate choices of the feedback. Also, the dynamical behavior can be more complicated, exhibiting oscillations or bistability.

Thursday, 4:00 PM May 18, 2006 
Ronald Graham, University of California, San Diego Old and New Problems and Results in Ramsey Theory Abstract: I will discuss some of the classic unsolved problems in this branch of combinatorics, as well as some of the very recent developments. As usual, I don't expect the audience to have any special prior knowledge of this subject. see also The Inaugural Fariborz Maseeh Distinguished Lecture

Friday, 3:15 PM May 12, 2006 
Iva Stavrov, Lewis and Clark College On Osserman Problems in Semi-Riemannian Geometry Abstract: Spectral geometry of the Riemann curvature tensor studies relationships between the geometry and the spectrum of operators arising from this tensor. Osserman problems, classic problems in this field, involve the Jacobi operator J(v)x := R(x,v)v. It is known that a Riemannian manifold (whose dimension is not 16) is a rank 1 symmetric space if and only if the spectrum of the Jacobi operator is constant on the set of unit vectors v. Generalizing this result to semi-Riemannian geometry has so far not been very successful. In this talk I will give a short introduction to Osserman problems and discuss powerful polynomial methods I've been using lately.

Friday, 3:15 PM May 5, 2006 
Jerry Keating, University of Texas at San Antonio Using Multivariate Methods to Analyze Texas Well Water Data Abstract: In this talk we introduce data from 127 water wells in the Texas Panhandle. We obtain data at each well for the presence of 11 known toxins and also for three chemical properties. We use cluster analysis to group the wells by chemical assay, with the result that the wells are geographically clustered by water source. We determine that two principal components explain 95% of the variation in the study. These linear combinations of chemicals can be attributed to an overall abundance of toxins and to a curious combination that leads to the source of toxicity. MANOVA methods are used to detect differences in toxin levels by cluster. This analysis is used as a basis for a larger study funded by the state.
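The "two principal components explain 95% of the variation" computation can be sketched with a synthetic stand-in for the well data. Everything below is invented for illustration; only the dimensions (127 wells by 14 measured variables) echo the study, and the two hypothetical latent factors are built in so that two components dominate.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wells, n_vars = 127, 14            # dimensions echo the study; data are synthetic
factors = rng.normal(size=(n_wells, 2))      # two hypothetical latent drivers
loadings = rng.normal(size=(2, n_vars))
data = factors @ loadings + 0.1 * rng.normal(size=(n_wells, n_vars))

# PCA via the SVD of the column-centered data matrix.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / (s**2).sum()      # proportion of variance per component
top2 = explained[:2].sum()           # share captured by the first two PCs
```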

Friday, 3:15 PM April 28, 2006 
Andrew Izsak, University of Georgia

Friday, 3:15 PM April 21, 2006 
Dorothy Wallace, Dartmouth College Unique Factorization Abstract: Unique factorization of the integers is a key element in cryptography, but many mathematical structures besides the integers admit factorizations of various kinds. Sometimes a type of factorization that is not unique can be made unique by specifying the algorithm used to factor the element. In this talk I will give several examples of these kinds of theorems for matrices, and some examples of how they might be used in cryptography schemes.

Friday, 3:15 PM April 14, 2006 
Prof. Adrian Sandu, Virginia Tech Recent developments in chemical data assimilation Abstract: The task of providing an optimal analysis of the state of the atmosphere requires the development of data assimilation tools that efficiently integrate the observational data and the models. We will discuss recent developments in both the variational (4D-Var) approach and the ensemble Kalman filter approach to the assimilation of chemical and particulate data. We will illustrate the application of these tools on the assimilation of real data sets.

Friday, 3:15 PM April 7, 2006 
Leo Kadanoff, Departments of Physics and Mathematics, University of Chicago Making a Splash; Breaking a Neck: The Development of Complexity in Physical Systems (This talk is cosponsored by the Dept of Physics) Abstract: The fundamental laws of physics are very simple. They can be written on the top half of an ordinary piece of paper. The world about us is very complex. Whole libraries hardly serve to describe it. Indeed, any living organism exhibits a degree of complexity quite beyond the capacity of our libraries. This complexity has led some thinkers to suggest that living things are not the outcome of physical law but instead the creation of a (super)intelligent designer. In this talk, we examine the development of complexity in fluid flow. Examples include splashing water, necking of fluids, swirls in heated gases, and jets thrown up from beds of sand. We watch complexity develop in front of our eyes. Mostly, we are able to understand and explain what we are seeing. We do our work by following a succession of very specific situations. In following these specific problems, we soon get to broader issues: predictability and chaos, mechanisms for the generation of complexity and of simple laws, and finally the question of whether there is a natural tendency toward the formation of complex ‘machines'.

Monday, 3:15 PM March 20, 2006 
Dr. Xiaohu Li, Lanzhou University, China Negative Dependence in Frailty Models Abstract: The frailty random variable and the overall population random variable in frailty models are proved to be negatively likelihood ratio dependent. It is also shown that the likelihood ratio order and the reversed hazard rate order are the orders under which a stochastic increase of the frailty random variable results in a stochastic decrease of the overall population random variable, in terms of the likelihood ratio order and the hazard rate order, respectively.

Friday, 3:15 PM March 17, 2006 
Paul Terwilliger, University of Wisconsin, Madison Distance-Regular Graphs and the Quantum Affine sl2 Algebra Abstract: Combinatorial objects, such as graphs, can often be used to construct representations of abstract algebras. In this talk we will consider a graph possessing a high degree of regularity, known as distance-regularity. For this graph we define an algebra generated by the adjacency matrix and a certain diagonal matrix. There exists a set of elements in this algebra that, under a minor assumption, satisfy some attractive relations. Using these relations we obtain a representation of the quantum affine sl2 algebra.

Friday, 3:15 PM March 10, 2006 
Pat Thompson, Arizona State University Where is the mathematics in mathematics education? Abstract: Recent reform efforts by NCTM and NSF have engendered strong reactions from members of the mathematics community that mathematical integrity has often been lost. Representatives of the math education community have replied that the nature of students' engagement is the crucial issue, and that engaging them in mathematical activity at a level appropriate for their backgrounds is the surest way to accomplish this. I will develop the argument that both camps are right in important ways and both camps are wrong in important ways. I will argue that neither camp understands that the central problem is incoherence in the mathematics that teachers teach and students learn, and that addressing this problem requires an effort similar to what was achieved in the foundations of mathematics in the 1800s as a result of the arithmetization of the calculus.

Thursday, 4:00 PM March 9, 2006 
Sir Roger Penrose, Oxford University Fashion, Faith, and Fantasy in Modern Physical Theory Abstract: The greatest advances in science and mathematics occur on the frontiers, at the interface between ignorance and knowledge, where the most profound questions are posed. There's no better way to assess the current condition of science and mathematics than exploring some of the questions that cannot be answered. Unsolved mysteries provide motivation and direction. Gaps in the road to knowledge are not potholes to be avoided, but opportunities to be exploited. One of the most creative qualities a researcher can have is the ability to ask the right questions.

Wednesday, 2:00 PM March 8, 2006 
Robert D. Luneski, Tektronix Inc. Application of Statistics to Operations and Supply Chain Challenges at Tektronix Abstract: The talk/information session will highlight some of the technical challenges within operations at Tektronix. Skill sets required to tackle these challenges and the associated job opportunities and career paths will also be discussed. Tektronix is currently recruiting candidates for a job opening in this department.

Friday, 3:15 PM March 3, 2006 
Professor Jens Harlander, Western Kentucky University Car Crashes on a Sphere and Group Theory Abstract: Given a tessellated 2-sphere (i.e., a subdivision of the surface of a ball into faces or, equivalently, a planar graph), the Euler characteristic "vertices - edges + regions" is always equal to 2. This combinatorial invariant captures the topology of the 2-sphere. Geometric, analytic and dynamical properties of the 2-sphere can also be expressed combinatorially. This talk consists of an overview and a discussion of applications to questions in low-dimensional topology and group theory. For example, around 1993 Klyachko observed the following property and called it "suitable for a school mathematics tournament": Given a tessellated 2-sphere, let a car drive around the boundary of each region in an anticlockwise direction. The cars travel at arbitrary speed, never stop, and visit each point on the boundary infinitely often. Then there must be at least two places on the sphere where complete crashes occur. Klyachko used this observation to prove the Kervaire Conjecture for torsion-free groups, a problem which had been open for 30 years.
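Euler's formula itself is easy to verify computationally. The sketch below (a standard textbook check, not material from the talk) confirms V - E + F = 2 for the five Platonic solids, each of which tessellates the 2-sphere.

```python
# Vertex, edge, and face counts for the five Platonic solids.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

def euler_characteristic(v, e, f):
    """Vertices - edges + faces."""
    return v - e + f

chis = {name: euler_characteristic(*vef) for name, vef in platonic.items()}
```

Every value in `chis` equals 2, as the formula predicts for any tessellation of the sphere.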

Friday, 3:15 PM February 24, 2006 
Michael Navon, Florida State University On adjoint error correction or goal-oriented methods Abstract: The refinement of quantities of interest (goal or cost functionals) using adjoint (dual) parameters and a residual is at present a well-established technology. The truncation error may be estimated via the value of the residual engendered by the action of a differential operator on some extrapolation of the numerical solution. The adjoint approach allows accounting for the impact of truncation error on a target functional by summation over the entire calculation domain. Numerical tests demonstrate the efficiency of this approach for smooth enough physical fields (the heat conduction equation and the Parabolized Navier-Stokes (PNS) equations). The impact of solution smoothness on the error estimation is found to be significant. However, the extension of this approach to discontinuous fields is also feasible. We can handle the error of a discontinuous solution (Euler equations) using the solution for viscous flow (PNS) as a reference. The influence of viscous terms may be accounted for using adjoint parameters. Results of numerical tests demonstrate the applicability of this approach. For error estimates we use numerical results that are significantly less smooth than the computed physical field. For non-monotonic finite-difference schemes error bounds may be too large. Thus, the applicability of the method considered above is restricted to numerical schemes which do not exhibit non-physical oscillations. Finally, applicability of the approach to POD model reduction with dual weighted residuals will be briefly addressed, as well as an implementation in adaptive mesh ocean modelling.

Friday, 3:15 PM February 17, 2006 
Pedro Ontaneda, SUNY Binghamton Is the Space of Negatively Curved Metrics Connected? Abstract: A geometry on a space is specified by the choice of a (Riemannian) metric. A geometry is negatively curved if geodesic rays emanating from a point diverge faster than they would in our usual Euclidean space. We will discuss the following question: can any two negatively curved metrics be joined by a path of negatively curved metrics?

Friday, 3:15 PM February 10, 2006 
Gareth P. Parry, University of Nottingham Group structure and properties of discrete defective crystals Abstract: I consider, in the continuum context, a crystal that has dislocation density constant in space. A simple iteration procedure generates an infinite set of points which is associated with such crystals. When certain necessary conditions are met, there is a minimum nonzero separation of points in this set, so the set is discrete. I describe joint work with Paolo Cermelli (Torino) which gives the structure of such sets explicitly.

Friday, 3:15 PM February 3, 2006 
Cameron Gordon, University of Texas at Austin The Unknotting Number of a Knot Abstract: The unknotting number u(K) of a knot K is the minimal number of times you must allow K to pass through itself in order to unknot it. Although this is one of the oldest and most natural knot invariants, it remains mysterious. We will survey known results on u(K), including some recent joint work with John Luecke on when so-called algebraic knots have u(K) = 1, and discuss several open questions.

Friday, 3:15 PM January 27, 2006 
Ery Arias-Castro, Univ. of California, San Diego Fast Multiscale Detection of Parametric and Nonparametric Geometric Objects Abstract: The scan statistic, also known as the matched filter, is used to detect patterns, for example in point clouds or pixel images. We describe a multiscale approach to approximating the scan statistic that dramatically improves the computation speed without significantly compromising the detection performance. We illustrate the method with parametric and nonparametric examples. Emphasis will be on theory. Joint work with David L. Donoho (Stanford) and Xiaoming Huo (UC Riverside).

Friday, 3:15 PM January 20, 2006 
S.R. Jammalamadaka, University of California, Santa Barbara Statistical Analysis of Directional Data Abstract: The talk will provide a general introduction to this novel area of statistics, where the observations are directions. After discussing some applications, new descriptive measures as well as statistical models will be introduced for such data. Problems of estimation and hypothesis testing will be briefly outlined.
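A first hint of why directional data need their own descriptive measures: angles that straddle the 0/2π seam average badly. The sketch below (the standard circular-mean formula with a made-up example, not material from the talk) embeds each angle on the unit circle and takes the direction of the resultant vector.

```python
import math

def circular_mean(angles):
    """Mean direction: place each angle on the unit circle, sum the unit
    vectors, and return the angle of the resultant."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

# Four directions all close to 0 radians, one of them written as ~2*pi:
angles = [0.1, -0.1, 6.28, 0.05]
naive = sum(angles) / len(angles)    # about 1.58 rad: badly misleading
circ = circular_mean(angles)         # close to 0, as it should be
```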

Wednesday, 3:15 PM January 18, 2006 
Mervyn Silvapulle, Monash University, Australia Constrained Statistical Inference in Parametric Models: an overview Abstract: Statistical inference problems with inequality constraints on unknown parameters arise in many areas of application, such as biology, medicine and economics. Examples include testing that (i) a treatment is better than a control with univariate/multivariate responses, and (ii) a regression function is concave. A unique feature of this type of problem is that the relevant parameter space is not open. Consequently, the behavior of estimators and test statistics depends crucially on the local geometry of the parameter space at the true value. The standard asymptotic results for parametric models, namely that (i) the asymptotic distribution of an estimator is normal and (ii) the asymptotic null distribution of a test statistic is chi-squared, do not hold. Construction of a confidence region, even in an apparently simple model, may not be easy. In this seminar, an overview of this area will be provided; in particular, a range of problems for which solutions are available and the nature of the solutions will be discussed.

Friday, 3:15 PM December 2, 2005 
Yongmin Zhang, SUNY Stony Brook Modeling and Simulation of Fluid Mixing for Laser Experiments Abstract: Recently, laboratory astrophysics has been playing an important role in the study of astrophysical systems, especially in the case of supernova explosions, through the creation of scaled reproductions of astrophysical systems in the laboratory. In collaboration with a team centered at U. Michigan and LLNL, we have conducted front tracking simulations for axisymmetrically perturbed spherical explosions relevant to supernovae as performed in NOVA laser experiments, with excellent agreement with experiments. We have extended the algorithm and its physical basis to pre-shock interface evolution due to radiation preheat. The preheat simulations motivate direct experimental measurements of preheat as part of any complete study of shock-driven instabilities by such experimental methods. Our second focus is the study of turbulent combustion in a Type Ia supernova (SN Ia), which is driven by Rayleigh-Taylor mixing. We have extended our front tracking to allow modeling of a reactive front in SN Ia. Our 2D axisymmetric simulations show a successful level of burning. Our front model contains no adjustable parameters, so that variations of the explosion outcome can be linked directly to changes in the initial conditions.

Friday, 3:15 PM November 18, 2005 
Lisa Madsen, Oregon State University Maximum Likelihood Estimation of Regression Parameters with Spatially Misaligned Data Abstract: Suppose X(s), Y(s), and e(s) are stationary spatially autocorrelated Gaussian processes that are related as Y(s) = b_0 + b_1 X(s) + e(s) for any location s. Our problem is to estimate the b's when Y and X are not necessarily observed at the same locations. This situation may arise when the data are recorded by different agencies or when there are missing X values. A natural but naive approach is to predict ("krige") the missing X's at the locations where Y is observed, and then use least squares to estimate ("regress") the b's as if these X's were actually observed. This krige-and-regress estimator b_KR is consistent, even when the spatial covariance parameters are estimated. If we use b_KR as a starting value for a Newton-Raphson maximization of the likelihood, the resulting maximum likelihood estimator b_ML is asymptotically efficient. We can then use the asymptotic distribution of b_ML for inference. As an illustration, we relate ozone levels observed at EPA monitoring stations to economic variables observed for most counties in the United States.
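The krige-and-regress step can be sketched in a few lines. Everything below (the exponential covariance, the 1-D locations, the true values b_0 = 1, b_1 = 2, the noise level) is an illustrative assumption, not data or code from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(a, b, sill=1.0, rng_par=2.0):
    # exponential covariance between 1-D location arrays a and b
    return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / rng_par)

# misaligned designs: X and Y are observed at different locations
s_x = np.linspace(0.0, 20.0, 60)
s_y = np.linspace(0.1, 19.9, 40)

# simulate the X process jointly at both sets of locations
s_all = np.concatenate([s_x, s_y])
C = exp_cov(s_all, s_all) + 1e-8 * np.eye(s_all.size)
X_all = rng.multivariate_normal(np.zeros(s_all.size), C)
X_obs, X_at_y = X_all[:s_x.size], X_all[s_x.size:]

b0, b1 = 1.0, 2.0
Y = b0 + b1 * X_at_y + 0.2 * rng.standard_normal(s_y.size)

# "krige": simple-kriging prediction of X at the Y locations
K = exp_cov(s_x, s_x) + 1e-8 * np.eye(s_x.size)
X_hat = exp_cov(s_y, s_x) @ np.linalg.solve(K, X_obs)

# "regress": ordinary least squares of Y on the kriged X values
A = np.column_stack([np.ones(s_y.size), X_hat])
b_KR, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(b_KR)  # roughly recovers (b0, b1); a starting value for the MLE iteration
```

In the talk's setup this b_KR would then seed the Newton-Raphson maximization of the full likelihood.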

Friday, 3:15 PM November 4, 2005 
Jennifer Christian Smith, University of Texas at Austin Paths to Proof: Strategies for Constructing Proofs in the Context of a Problem-Based Course Abstract: In order to more closely examine the strategies and processes students employ when constructing mathematical proofs, we followed six students enrolled in a problem-based undergraduate number theory course. During a series of interviews, the students were asked to construct proofs of various number theory statements. The students' strategies for constructing proofs varied depending on context, but in each case they were primarily engaged in making sense of the mathematics rather than applying a previously known proof strategy. In addition, their strategies varied greatly from one proof to another; they did not appear to show a preference for a particular approach to constructing a proof. We claim that the problem-based structure of the course facilitated the development of these students' relatively fluid approaches to proof construction.

Wednesday, 2:00 PM November 2, 2005 
Dr. Robert B. Laughlin, Department of Physics, Stanford University The Physical Limits of Computation Abstract: Simulations work in practice because they exploit higher-level organizing principles in nature. Good code writing requires faithfulness to these principles and the discipline not to exceed their limits of validity. An important exception is the use of simulation to search for new kinds of emergence.

Friday, 3:15 PM October 28, 2005 
Michael Schilmoeller, Northwest Power and Conservation Council One Practitioner's View of Financial Mathematics Abstract: What mathematical tools are utilized in financial mathematics? What types of questions are addressed? What breakthroughs in theory have shaped our lives? Find out the answers to these questions and more!

Monday, 3:15 PM October 24, 2005 
Professor Hira Koul, Michigan State University Goodness-of-fit testing in the interval censoring case Abstract: In interval censoring case 1, or the so-called current status data, instead of observing an event occurrence time X, one observes an inspection time T and I(X \le T). Here we shall discuss asymptotically distribution-free tests of goodness-of-fit hypotheses pertaining to the d.f. F of X. The proposed tests are shown to be consistent against a large class of fixed alternatives and to have nontrivial asymptotic power against a large class of local alternatives. A simulation study is included to exhibit the finite-sample level and power behavior.

Friday, 3:15 PM October 14, 2005 
Edward Formanek, Penn State University The Automorphism Group of a Free Group is not Linear Abstract: A group G is linear if for some field K and integer r there is an embedding of G into GL(r,K), the group of invertible r x r matrices over K. In 1980, W. Magnus and C. Tretkoff raised the question: Is Aut(F_n), the automorphism group of a free group of rank n, linear? In 1988, A. Lubotzky found necessary and sufficient group-theoretic conditions for a finitely generated group to be linear over a field of characteristic zero, but his conditions are difficult to check, and the proof of the following theorem does not use his result. Theorem (E. Formanek and C. Procesi, 1992): For n > 2, Aut(F_n) is not a linear group. I will outline its proof, which depends on the representation theory of algebraic groups. The remaining case n = 2 was not decided until 2000, when D. Krammer established the linearity of B_4 and B_4/Z, the braid group on four strings and the braid group on four strings modulo its center. Since B_4/Z is isomorphic to a subgroup of index two in Aut(F_2), his result implies the linearity of Aut(F_2).

Friday, 3:15 PM October 7, 2005 
Christian Genest, Université Laval Tests of independence based on the empirical copula process Abstract: After arguing that the dependence between continuous random variables is characterized by their underlying copula, the speaker will show how it is possible to construct powerful graphical and formal tests of independence using the empirical copula process, which is based on the ranks of the observations. The determination of the limiting behavior of this process under the null and under sequences of contiguous alternatives will allow him to compare the asymptotic efficiency of tests of independence based on a Cramér-von Mises functional of the process to the locally most powerful rank test of independence associated with any given parametric copula model for dependence. This work was done in collaboration with J.-F. Quessy (UQTR), B. Rémillard (HEC Montréal) and F. Verret (Statistics Canada).

Friday, 3:15 PM October 7, 2005 
Roman Dwilewicz, University of Missouri-Rolla Cauchy-Riemann Theory: An Overview Abstract: Cauchy-Riemann (CR) theory nicely combines Complex Analysis, Partial Differential Equations, Geometric Analysis, Geometry (Algebraic and Differential) and other areas. In the talk I present basic CR problems, their connections to the above-mentioned areas, and state some approximation and extension theorems.

Friday, 3:15 PM June 3, 2005 
Jong Sung Kim and Solange Mongoue, Department of Mathematics & Statistics, Portland State University TBA

Friday, 3:15 PM May 27, 2005 
Juan Campos, University of Granada Periodic Solutions of Autonomous 'Pendulum-Type' Equations Abstract: Old-fashioned clocks are among the many mechanical contraptions that can be modeled by a periodic Lagrangian equation. As an example we will consider the classical pendulum equation, where there are periodic solutions of minimal period T if and only if T is bigger than 2π. The main topic of the talk is a generalization of this property to higher-dimensional pendula.
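The stated fact about the classical pendulum x'' + sin x = 0 can be checked numerically: the minimal period at amplitude a is T(a) = 4 K(sin(a/2)), where K is the complete elliptic integral of the first kind, so T exceeds 2π and grows with amplitude. The sketch below evaluates K by the arithmetic-geometric mean; it is an illustration, not the talk's method:

```python
import math

def agm(x, y, tol=1e-15):
    # arithmetic-geometric mean of x and y
    while abs(x - y) > tol:
        x, y = (x + y) / 2, math.sqrt(x * y)
    return x

def period(a):
    # minimal period of x'' + sin(x) = 0 at amplitude a (0 < a < pi),
    # via T(a) = 4 K(k) with k = sin(a/2) and K(k) = pi / (2 agm(1, sqrt(1-k^2)))
    k = math.sin(a / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 4 * K

print(period(0.01))  # close to 2*pi: small oscillations
print(period(3.0))   # much larger: the period grows with amplitude
```

This matches the statement in the abstract: periodic solutions exist exactly for minimal periods T > 2π.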

Friday, 3:15 PM May 20, 2005 
Serge Preston and Jim Vargo The Thermodynamical Metric of R. Mrugala and the Heisenberg Group Abstract: We present results of the study of an (n+1, n) pseudo-Riemannian metric G on the contact phase space (P, \theta) of homogeneous thermodynamics suggested by R. Mrugala in 1996. Calculations of the curvature, the second fundamental form of the Legendre submanifolds (constitutive surfaces of concrete thermodynamical systems), Killing vector fields, and a nonholonomic frame in which this metric is constant are presented for (P, G) as well as for its symplectization (\tilde{P}, \tilde{G}). It is shown that the space (P, \theta, G) can be naturally identified with the Heisenberg nilpotent Lie group H_n endowed with a right-invariant pseudo-Riemannian metric and contact form. Possible directions of further work are indicated.

Friday, 3:15 PM May 13, 2005 
Marina Meila, Department of Statistics, University of Washington Spectral Clustering Meets Machine Learning

Friday, 3:15 PM May 6, 2005 
Marcel Dettling, Division of Oncology, Johns Hopkins School of Medicine Finding Pathways from Microarray Gene Expression Data

Friday, 3:15 PM April 29, 2005 
M. M. Rao, UC Riverside Poisson Measure and Integral Abstract: After recalling the elementary concept of the Poisson distribution from three different angles, I shall show how a review of the same concept leads to a measure and a Stieltjes-type (random) integral of interest in many applications. One application is to Fourier series, another to continuous-parameter time series, which touches both stationary and some nonstationary classes. The presentation is a broad outline and will be aimed at a general audience.

Friday, 3:15 PM April 22, 2005 
Carolyn Kieran, University of Quebec at Montreal Thirty Years of Research on the Learning of Algebra Abstract: The learning of algebra has always been a fundamental and vibrant stream of the research carried out within the mathematics education community. While the themes of the past are still to be found in the algebra research of the present, there have been some major shifts. Earlier research tended to focus on students' algebraic concepts and procedures, and on their difficulties in making the transition from arithmetic to algebra and in solving word problems. The letter-symbolic was the primary algebraic form that was investigated, and theoretical frameworks for analyzing research data rarely went beyond the Piagetian. However, over time, algebra-learning research broadened to encompass other modes of representation, the use of technological tools, and a wider variety of theoretical frameworks. My talk will synthesize the main themes of interest of algebra-learning researchers over the years, and will include some highlights that reflect these themes. The concluding part of the talk will integrate the evolution that has taken place in school algebra research within a discussion of the various sources from which students derive meaning in algebra: (i) meaning from the algebraic structure itself, involving the letter-symbolic form, (ii) meaning from various mathematical representations, (iii) meaning from the problem context, and (iv) meaning from that which is exterior to the mathematics/problem context (e.g., linguistic activity, gestures and bodily language, metaphors, lived experience, image building).

Friday, 3:15 PM April 15, 2005 
Robert Stein, Professor Emeritus, Cal State San Bernardino The Remarkable History of Logarithms Abstract: Today we think of logarithms either as integrals or as exponents, but their origin had nothing to do with either of these ideas. It took nearly a century after logarithms were invented for anyone to recognize their connection to exponents. The connection between logarithms and integrals came a bit earlier, but the person whose work established that connection did not recognize his accomplishment! We will revisit this history and some of the fascinating people and events involved with it. The talk will be readily understandable by anyone who knows first-year calculus and should be of particular interest to teachers of calculus, exponential functions, and logarithms at any level.

Friday, 3:15 PM April 15, 2005 
Nandini Kannan, University of Texas at San Antonio Inference for the Simple Step-stress Model Abstract: In several applications in Survival Analysis and Reliability, the experimenter is often interested in the effects of extreme or varying stress levels (temperature, voltage, load, pollution levels) on the lifetimes of the experimental units. Step-stress tests allow the experimenter to increase the stress levels at pre-specified times during the experiment. The most common model used to analyze these experiments is the "cumulative damage" or "cumulative exposure" model. In this talk, we consider a simple step-stress model with only two stress levels. We assume the failure times at the two stress levels follow exponential distributions. When the stress level changes, the distribution is assumed to be linear exponential, which allows us to model a "lag" effect of the stress. We obtain the maximum likelihood estimators of the unknown parameters and propose different techniques for constructing confidence intervals. Keywords and Phrases: accelerated testing; step-stress models; cumulative exposure model; maximum likelihood estimation; bootstrap method; Type-II censoring; spacings; exponential distribution.

Friday, 3:15 PM April 8, 2005 
Julia Parrish, John Toner, Peter Veerman Formations, Flocking, and Consensus

Friday, 3:15 PM April 8, 2005 
Bertrand Clarke, University of British Columbia Sample Size and Effective Samples Abstract: We distinguish between two classes of sample size problems. The first is the actual sample size needed to achieve a specific inference goal. The second is an effective sample, where we try to interpret one sample under one model as another sample under a different model. An effective sample leads to an effective sample size, which may be more important than the sample itself. For the actual case, we give asymptotic expressions for the expected value, under a fixed parameter, of certain types of functionals of the posterior density in a Bayesian analysis. The generality of our approach permits us to choose functionals that encapsulate different inference criteria. The dependence of our expressions on the sample size means that we can pre-experimentally obtain adequate sample sizes to get inferences with a pre-specified level of accuracy. For the effective sample case, we find a 'virtual' sample under one model that gives the same inferences as an actual sample under another model. We use the same prior for both models, but this is not necessary. We show these effective samples exist and give some examples to show that their behavior is consistent with statistical intuition and that the procedure can be extended to give a notion of effective number of parameters as well.

Friday, 3:15 PM April 1, 2005 
James Pommersheim, Reed College Quantum Computation and Quantum Learning Abstract: In recent years, computer scientists, physicists, and mathematicians alike have become excited by the idea of building a quantum computer, a computer that operates at the logical level according to quantum mechanical principles. Over the past decade, we have learned that such a computer would be capable of tasks, such as factoring large integers, which are widely believed to be difficult to solve on a classical computer. This technology would also enable two parties to communicate securely over a bugged channel.

Friday, 3:15 PM March 11, 2005 
David Oury, Portland State University & McGill University Building Free Braided Pivotal Categories Abstract: The talk begins with a description of the relevant category theory. In particular, a strict monoidal category, braiding, and pivotal category are defined. The language of strict monoidal categories is then constructed using the concept of a graph with relations. A specific set of relations, which correspond to the Reidemeister moves of knot theory, is then used to define, with this language, a specific strict monoidal category. This category is in fact a free braided pivotal category. I indicate briefly the reasoning behind this claim. In the remainder of the talk tangles are defined and tangle categories are constructed. I then indicate, again briefly, the argument which shows they are free braided pivotal categories. To conclude I'll hint at the way in which these categories can be used to find knot invariants.

Friday, 3:15 PM March 4, 2005 
Sarah Reznikoff, Reed College The Temperley-Lieb algebra and its representations Abstract: The abstract Temperley-Lieb algebra, TL, is defined to be the algebra generated by planar diagrams on "strings" in a rectangle. This algebra arises naturally in the study of operator algebras: the projections propagating the Jones tower of subfactors generate a quotient of TL. We will familiarize the audience with these notions, and also describe how Kauffman diagrams can be used to realize the representations of TL. We will next introduce the annular Temperley-Lieb algebra, ATL, which is given by planar diagrams on strings in an annulus, and explain how the representations of this algebra are relevant to subfactors. We conclude by discussing a recent result with Vaughan Jones characterizing the irreducible representations of this annular algebra.

Friday, 3:15 PM February 25, 2005 
Robert Devaney, Boston University Mandelbrot, Farey, and Fibonacci Abstract: In this lecture we describe several folk theorems concerning the Mandelbrot set. While this set is extremely complicated from a geometric point of view, we will show that, as long as you know how to add and how to count, you can understand this geometry completely. We will encounter many famous mathematical objects in the Mandelbrot set, like the Farey tree and the Fibonacci sequence. And we will find many soon-to-be-famous objects as well, like the "Devaney" sequence. There might even be a joke or two in the talk.
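One "add and count" rule alluded to here is the Farey arithmetic of the Mandelbrot set's bulbs: between two bulbs, the largest bulb has rotation number equal to the mediant of theirs. A minimal check (the period-2 and period-3 bulbs are a standard example, not taken from the abstract):

```python
from fractions import Fraction

def mediant(a: Fraction, b: Fraction) -> Fraction:
    # Farey "addition": add numerators and add denominators
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

# between the period-2 bulb (rotation number 1/2) and the period-3 bulb (1/3),
# the largest bulb has rotation number equal to the mediant, 2/5
print(mediant(Fraction(1, 2), Fraction(1, 3)))  # 2/5
```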

Friday, 3:15 PM February 18, 2005 
Ralph E. Showalter, Oregon State University Hysteresis Models of Adsorption and Deformation Abstract: We shall illustrate by a sequence of examples the representation of various classes of hysteresis functionals by singular differential equations. These representations lead to a very natural formulation of initial-boundary-value problems which contain free boundaries or memory functionals. These constructions require that we extend our notion of 'function' in a very natural way, and similarly lead easily to nice theories for such problems with hysteresis.

Friday, 3:15 PM February 11, 2005 
Don Pierce, Radiation Effects Research Foundation The Effect of Choice of Reference Set on Frequency Inferences Abstract: We employ second-order likelihood asymptotics to investigate how ideal frequency inferences depend on more than the likelihood function. This is of foundational interest in quantifying how frequency inferences violate the Likelihood Principle. It is of practical interest in quantifying how ordinary likelihood-based inferences, frequency-valid only to first order, are affected by second-order corrections depending on the probability model or the chosen reference set. It has been noted that there are two aspects of higher-order corrections to first-order likelihood methods: (i) that involving effects of fitting nuisance parameters and leading to the modified profile likelihood, and (ii) another part pertaining to limitation in adjusted information. Generally, second-order corrections to likelihood-based methods involve first-order adjustments of each of these types. However, we show that for some important settings, likelihood-irrelevant model specifications have second-order effect on both of these adjustments; this result includes specification of the censoring model for survival data. On the other hand, for sequential experiments the likelihood-irrelevant specification of the stopping rule has second-order effect on adjustment (i) but first-order effect on adjustment (ii). Thus in both of these settings the modified profile likelihood does not depend on extra-likelihood model specifications. These matters raise the issue of what are 'ideal' frequency inferences, since consideration of 'exact' frequency inferences will not suffice. Although this is a deep issue, we indicate, somewhat surprisingly, that to second order ideal frequency inferences may be based on the distribution of the ordinary signed likelihood ratio statistic, without commonly considered adjustments to this.

Friday, 3:15 PM February 4, 2005 
Richard A. Levine, San Diego State University Implementing Componentwise Hastings Algorithms Abstract: Markov chain Monte Carlo (MCMC) routines have revolutionized the application of Monte Carlo methods in statistical applications and in statistical computing methodology. The Hastings sampler, encompassing both the Gibbs and Metropolis samplers as special cases, is the most commonly applied MCMC algorithm. The performance of the Hastings sampler relies heavily on the choice of sweep strategy, that is, the method by which the components or blocks of the random variable X of interest are visited and updated, and on the choice of proposal distribution, that is, the distribution from which candidate variates are drawn for the accept-reject rule at each iteration of the algorithm. We focus on the random sweep strategy, where the components of X are updated in a random order, and on random proposal distributions, where the proposal distribution is characterized by a randomly generated parameter. We develop an adaptive Hastings sampler which learns from and adapts to the random variates generated during the algorithm so as to choose the optimal random sweep strategy and proposal distribution for the problem at hand. As part of the development, we prove convergence of the random variates to the distribution of interest and discuss practical implementations of the methods. We illustrate the results by applying the adaptive componentwise Hastings samplers developed here to sample multivariate Gaussian target distributions and Bayesian frailty models.
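A non-adaptive baseline of the random-sweep componentwise update described above can be sketched as follows; the bivariate Gaussian target, proposal scale, and chain length are illustrative assumptions, and the talk's adaptive layer is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative target: bivariate Gaussian with correlation 0.8
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def log_target(x):
    return -0.5 * x @ Sigma_inv @ x

x = np.zeros(2)
draws = []
for _ in range(20000):
    i = rng.integers(2)                # random sweep: pick one component at random
    prop = x.copy()
    prop[i] += rng.normal(scale=1.0)   # random-walk proposal for that component
    # componentwise Metropolis accept-reject step
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    draws.append(x.copy())

C = np.cov(np.array(draws[5000:]).T)   # discard burn-in
print(C)                               # close to Sigma
```

The adaptive sampler of the talk would additionally tune the sweep probabilities and the proposal parameter from the generated variates.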

Friday, 3:15 PM January 14, 2005 
Barry C. Arnold, University of California, Riverside Robin Hood and Majorization: Not just for Economists after all Abstract: Classical majorization admits a colorful interpretation as an ordering of wealth inequality that is essentially determined by acceptance of the axiom that robbing a little from the rich and giving it to the poor will decrease inequality. However, majorization and its first cousin, the Lorenz order, have a role to play in an enormously diverse array of settings. Any problem whose solution is a vector of the form (c,c,…,c) may well be susceptible to rephrasing in terms of some cleverly chosen Schur-convex function and its extreme value under the majorization ordering. A sampling of such cases will be provided, with the hope that audience members will be able to think up even more settings in which Robin Hood can play a perhaps unexpected role.
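The axiom can be checked on a toy example: a Robin Hood transfer decreases any Schur-convex inequality measure, here the sum of squares (the wealth vector and amounts are illustrative, not from the talk):

```python
def robin_hood(w, i, j, d):
    # move amount d from a richer entry i to a poorer entry j;
    # assumes w[i] - d >= w[j] + d, so the transfer does not overshoot
    w = list(w)
    w[i] -= d
    w[j] += d
    return w

wealth = [10, 4, 1]
after = robin_hood(wealth, 0, 2, 2)   # rich gives 2 to poor
print(after)                          # [8, 4, 3]
# the Schur-convex function sum-of-squares strictly decreases
print(sum(x * x for x in wealth), sum(x * x for x in after))  # 117 89
```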

Friday, 3:15 PM December 3, 2004 
Professor Mara Tableman, Dept. of Mathematics and Statistics, Portland State University What Only Time Can Tell: The Story of Two Survival Curves Abstract: TBA

Friday, 3:15 PM November 19, 2004 
Professor Nichole Carlson, Dept. of Public Health & Preventive Medicine, OHSU A Bayesian Approach to Bivariate Modeling Abstract: TBA

Friday, 3:15 PM November 19, 2004 
Brian Hunt, University of Maryland Determining the initial conditions for a weather forecast Abstract: While it is generally agreed that the earth's atmosphere is chaotic, applications of the theory of chaotic dynamical systems are most successful for low-dimensional systems, those that can be modeled with only a few state variables. By contrast, current numerical weather models keep track of over 1,000,000 variables. Nonetheless, our group at the University of Maryland has found that in a local region (say, the Eastern United States) over a time span of a few days, the model can behave much like a low-dimensional system. I will describe how we are using this perspective to attack a fundamental problem in weather forecasting: how to set the initial conditions for the forecast model. Our methodology can be used more generally to forecast spatiotemporally chaotic systems for which a model is available but initial conditions cannot be measured accurately.

Monday, 3:15 PM November 8, 2004 
Dusan Repovs, University of Ljubljana, Slovenia History of the problem of detecting topological manifolds (1954-2004) Abstract: We shall present a historical survey of the geometric topology of generalized manifolds, i.e. ENR homology manifolds, from their early beginnings in the 1930s to the present day, concentrating on those geometric properties of these spaces which are particular to dimensions 3 and 4, in comparison with generalized (n>4)-manifolds. In the second part of the talk we shall present the current state of the two main problems concerning this class of spaces: the Resolution problem (the work of Bestvina-Daverman-Venema-Walsh, Bryant-Lacher, Brin-McMillan, Lacher-Repovs, Thickstun, and others) and the General Position problem (the work of Bing, Brahm, Lambert-Sher, Daverman-Eaton, Lacher-Repovs, Daverman-Thickstun, Daverman-Repovs, and others). We shall list open problems and related conjectures.

Friday, 3:15 PM November 5, 2004 
Raphael Gottardo, Ph.D. Candidate, Department of Statistics, University of Washington Robust Bayesian Analysis of Gene Expression Microarray Data Abstract: TBA

Wednesday, 3:15 PM November 3, 2004 
James A. Yorke, University of Maryland Mathematical models of HIV/AIDS infectiousness Abstract: The "infectiousness" of HIV is a measure of how easy it is to transmit the virus from one person to another. It can only be estimated using considerable mathematical analysis, since direct experimental measurements are not available. There has been a major flaw in the published results, and our work corrects it. I will report on our results, which are to be published in the journal JAIDS.

Friday, 3:15 PM October 22, 2004 
Dr. Din Chen, Pacific Northwest Halibut Commission, Seattle, Washington A Statistical Mark-Recapture Model for Fish Movement (CANCELLED) Abstract: TBA

Friday, 3:15 PM October 8, 2004 
Don Johnson, Professor Emeritus, New Mexico State University A Representation Theorem Revisited Abstract: An old result of mine showed that the members of a certain class of lattice-ordered rings can be represented as rings of extended real-valued functions on a locally compact space. The existence of this representation has been useful; however, the abstractness of the proof has obscured the identities of the representing functions from all but a few hardy souls, thus limiting the usefulness of the result. Herewith, a new, more transparent proof, one which finds ready application. First, it yields a strong statement of the uniqueness of the representation, something that had not been observed before. Second, some insight is gained into the situation with regard to the question that prodded my return to this topic: "What can one say about the functoriality of this representation?"

Friday, 3:15 PM June 4, 2004 
Jay Buckingham, Microsoft Corp. Combining weak classifiers for image categorization

Friday, 3:15 PM May 28, 2004 
TBA

Friday, 3:15 PM May 21, 2004 
TBA

Friday, 12:00 AM May 14, 2004 
"Pattern Selection in Consequence of Finiteness"

Friday, 3:15 PM May 7, 2004 
David Birkes, OSU Likelihood Estimation in Normal Variance-Components Models Abstract: Maximum likelihood is an appealing approach to estimation because of its wide applicability, its optimal asymptotic properties under fairly general conditions, and its good performance in many of the situations in which it has been applied. Typically there is no explicit formula for a maximum likelihood estimator (MLE), and it must be computed by an iterative algorithm. This raises several questions: Does the MLE exist? Will the algorithm converge? Will it converge to the MLE? We investigate these questions in a normal variance-components model. Besides MLEs, we also consider REMLEs (restricted maximum likelihood estimators).

Friday, 3:15 PM April 30, 2004 
G. L. Vasconcelos, Physics Dept., UFPE, Brazil The mathematics of two-dimensional bubbles

Friday, 3:15 PM April 30, 2004 
Matthew Stephens The Haplotype Inference Problem

Friday, 3:15 PM April 23, 2004 
Rochelle Fu

Friday, 3:15 PM April 9, 2004 
Herbert Hethcote, University of Iowa (Talk at Cramer Hall 371) A predator-prey model with infected prey Abstract: A predator-prey model with logistic growth in the prey is modified to include an SIS parasitic infection in the prey, with infected prey being more vulnerable to predation. Thresholds are identified which determine when the prey and predator populations survive and when the disease remains endemic. Depending on the parameter values, the greater vulnerability of the infected prey can allow the predator population to persist, or the predation of the more vulnerable prey can cause the disease to die out. This is joint work with Wendi Wang, Litao Han, and Zhien Ma.

Friday, 3:15 PM April 2, 2004 
Brad Crain, 4th floor NH Mouse Calculus Abstract: Consider a mouse that is trying to escape from a maze. Assume the mouse has no memory and randomly tries different paths to escape. We discuss probabilities P(Xi=k), P(Xi
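The abstract is cut off, but the stated setup (a memoryless mouse trying paths at random) suggests a geometric distribution for the number of tries. The version below, with m paths and a single exit chosen uniformly on every attempt, is purely an illustrative assumption:

```python
from fractions import Fraction

def p_escape_on_try(k, m):
    # memoryless mouse: m paths, one exit, uniform choice on every try,
    # so the try number X on which it escapes is geometric with p = 1/m
    p = Fraction(1, m)
    return (1 - p) ** (k - 1) * p   # P(X = k)

m = 3
print(p_escape_on_try(1, m))  # 1/3
print(p_escape_on_try(2, m))  # 2/9
print(float(sum(p_escape_on_try(k, m) for k in range(1, 200))))  # ~1.0
```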

Friday, 3:15 PM March 19, 2004 
TBA TBA

Friday, 3:15 PM March 12, 2004 
Martin Bohner, University of Missouri-Rolla Dynamic Equations on Time Scales Abstract: Time scales have been introduced in order to unify continuous and discrete analysis and in order to extend those theories to cases "in between". We will offer a brief introduction to the calculus involved, including the so-called delta derivative of a function on a time scale. This delta derivative is equal to the usual derivative if the time scale is the set of all real numbers, and it is equal to the usual forward difference operator if the time scale is the set of all integers. However, in general, a time scale may be any closed subset of the reals. We present some basic facts concerning dynamic equations on time scales (these are differential and difference equations, respectively, in the two cases mentioned above) and initial value problems involving them. We introduce the exponential function on a general time scale and use it to solve initial value problems involving first-order linear dynamic equations. We also present a unification of the Laplace and Z-transforms, which serves to solve higher-order linear dynamic equations with constant coefficients. Throughout the talk, many examples of time scales will be offered.
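The two special cases described above (delta derivative equal to the ordinary derivative on the reals, and to the forward difference on the integers) can be checked numerically. This sketch uses the delta-derivative formula at a point t with successor sigma(t) on a discrete time scale; the function and grids are illustrative:

```python
def delta_derivative(f, ts, i):
    # delta derivative of f at ts[i]: (f(sigma(t)) - f(t)) / (sigma(t) - t),
    # where sigma(t) = ts[i+1] is the next point of the (discrete) time scale
    t, sigma_t = ts[i], ts[i + 1]
    return (f(sigma_t) - f(t)) / (sigma_t - t)

f = lambda t: t * t

# time scale = integers: the delta derivative is the forward difference
ints = list(range(10))
print(delta_derivative(f, ints, 3))   # f(4) - f(3) = 7 = 2*3 + 1

# a fine grid near t = 3 approximating the reals: close to f'(3) = 6
reals = [3.0 + k * 1e-6 for k in range(10)]
print(delta_derivative(f, reals, 0))  # approximately 6
```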

Friday, 3:15 PM March 5, 2004 
Matthew Kudzin, SUNY Stony Brook Cutting up Manifolds without making a Mess.

Friday, 12:00 AM February 27, 2004 
TBA

Friday, 3:15 PM February 20, 2004 
Stat Talk TBA

Friday, 3:15 PM February 13, 2004 
Malgo Peszynska, Oregon State University Mathematics of flow in porous media Abstract: In this talk we discuss a few mathematical and numerical issues for multiphase and multicomponent flow and transport in porous media. Such phenomena are the underlying mechanism for transport in aquifers as well as for oil and gas recovery. Their mathematical models are coupled nonlinear partial differential equations which are locally of degenerate parabolic, elliptic, or hyperbolic type. In the talk we discuss some recent developments in the analysis of such models and review appropriate numerical techniques, with a focus on adaptive and conservative methods. In the second part of the talk we present our results in two major research thrusts in the field. As the flow of fluids in a porous reservoir is rarely an isolated phenomenon, it is necessary to find stable and convergent methods for (multiphysics) couplings of models of flow in space and in time. A second issue is the handling of uncertainty and of the multiple scales present in the geological data. The talk is illustrated with numerical results.

Friday, 3:15 PM February 6, 2004 
Carol Overdeep, Western Oregon University Using Markov Chains to Estimate Torpedo Performance Abstract: We use a finite Markov chain to estimate the probability that a torpedo successfully completes its mission.
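The underlying idea can be sketched with a purely hypothetical four-state chain (the states and transition probabilities below are invented for illustration, not taken from the talk): the mission-success probability is the probability of eventual absorption in a designated "success" state.

```python
# Hypothetical 4-state mission chain (transition probabilities invented):
# 0 = searching, 1 = tracking (transient); 2 = hit, 3 = lost (absorbing).
P = [
    [0.5, 0.3, 0.0, 0.2],
    [0.1, 0.4, 0.4, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

def absorption_probability(P, start, target, iters=5000):
    """Probability of eventual absorption in `target`, found by iterating
    the first-step equations p_i = sum_j P[i][j] * p_j to a fixed point.
    Absorbing rows of P are identity rows, so p stays pinned there."""
    n = len(P)
    p = [1.0 if i == target else 0.0 for i in range(n)]
    for _ in range(iters):
        p = [sum(P[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p[start]

print(absorption_probability(P, start=0, target=2))  # 4/9 ≈ 0.4444
```

Solving the two first-step equations by hand for this chain gives a success probability of exactly 4/9 from the "searching" state, which the iteration reproduces.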

Friday, 3:15 PM January 30, 2004 
Stephen Bricher, Linfield College The asymptotics of blowup for semilinear heat equations via center manifold theory. Abstract: In this talk we will discuss center manifold theory for differential equations, which is a type of "reduction principle" used to determine the behavior of solutions near a fixed point whose linearization has an eigenvalue with zero real part. As an application, we will discuss how center manifold techniques can be used to determine the asymptotic behavior of solutions to semilinear heat equations that become unbounded in finite time.

Friday, 3:15 PM January 23, 2004 
Alexander Chudnovsky, CEMM, University of Illinois, Chicago FRACTURE OF SOLIDS: Challenges in Understanding and Modeling Abstract: Fracture of solids and structures has been known since the very beginning of engineering practice as an inevitable part of human experience. For thousands of years, the study of fracture consisted of observation and the accumulation of empirical data. After the industrial revolution, the quest to prevent structural failure led to the development of the theory of structures (beams, plates, shells) and of general continuum mechanics, which was a part of applied mathematics from the very beginning. Significant progress in the understanding of fracture phenomena took place in the second half of the last century. However, many fundamental problems of fracture still remain unresolved. Two types of laws of nature, deterministic and statistical, control fracture phenomena. Dynamic equations of motion, energy balance, and compatibility equations exemplify the deterministic side of the phenomena. The uncertainty in fracture initiation time and location, stochastic fracture paths, and the large scatter of observed fracture parameters under even the most controlled test conditions reflect the significant role of chance in fracture. Modeling these two sides of fracture requires different formalisms. Variational principles, the concept of symmetries, and the resulting constitutive and balance equations are the modern language of deterministic fracture mechanics. Random fields of parameters, stochasticity of the fracture path (crack trajectory), and procedures for averaging over an ensemble of random crack trajectories are the features of statistical fracture mechanics. A brief history of fracture, the formalism of the contemporary theory, examples of unsolved problems, and the need for an adequate mathematical formalism for the problem at hand will be discussed.

Friday, 3:15 PM January 16, 2004 
Anthony Olsen, NHEERL, Western Ecology Division Estimating amphibian occupancy rates in ponds under complex survey Abstract: Monitoring the occurrence of specific amphibian species in ponds is one component of the US Geological Survey's Amphibian Monitoring and Research Initiative. Two collaborative studies were conducted in Olympic National Park and the southeastern region of Oregon. The number of ponds in each study region precludes visiting each one to determine the presence of particular amphibian species. Consequently, a probability survey design was implemented to select a subset of ponds for monitoring. A complete list, or geographic information system coverage, of all the ponds was not available. To overcome this limitation, a two-stage cluster sample was used, where the first-stage primary sampling units are 5th-field hydrologic units and the second stage consists of individual ponds located within each selected hydrologic unit. A common problem is that during a single visit to a pond it is possible not to detect an amphibian species even when it is present; that is, the probability of detection is less than one. The objective of the survey is to estimate the proportion of ponds in each region that are occupied. Estimation of site occupancy rates when detection probabilities are less than one has been developed by MacKenzie et al. (2002) under the assumption of a simple random sample. Using the notion of generalized estimating functions, their procedures are generalized to cover not only two-stage cluster samples but more general complex survey designs. The presentation will describe the two collaborative study survey designs, present the statistical estimation for complex survey designs, and illustrate the estimation with data from the two studies.
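For the simple-random-sample case the abstract starts from, the MacKenzie et al. (2002) likelihood can be sketched as follows (the detection histories below are invented for illustration, and a crude grid search stands in for a proper optimizer):

```python
import math
from itertools import product

# Hypothetical detection histories: one tuple per pond, 1 = species detected
# on that visit. Ponds never detected may be unoccupied, or occupied but missed.
histories = [(1, 0, 1), (0, 0, 0), (1, 1, 0), (0, 0, 0), (0, 1, 0), (0, 0, 0)]

def neg_log_lik(psi, p, histories):
    """Negative log-likelihood of occupancy rate psi and detection
    probability p under the MacKenzie et al. (2002) model, assuming a
    simple random sample of ponds and constant detection probability."""
    ll = 0.0
    for h in histories:
        K, d = len(h), sum(h)
        if d > 0:                      # occupied, detected on d of K visits
            site = psi * p ** d * (1 - p) ** (K - d)
        else:                          # never detected: unoccupied, or missed
            site = psi * (1 - p) ** K + (1 - psi)
        ll += math.log(site)
    return -ll

# Crude grid search for the maximum-likelihood estimates of (psi, p).
psi_hat, p_hat = min(
    ((a / 100, b / 100) for a, b in product(range(1, 100), repeat=2)),
    key=lambda t: neg_log_lik(t[0], t[1], histories),
)
print(psi_hat, p_hat)
```

The talk's contribution is the generalization of this estimation, via generalized estimating functions, from the simple random sample assumed here to two-stage cluster samples and other complex survey designs.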

Friday, 3:15 PM January 9, 2004 
Peter Veerman, Portland State University The Parsimonious Cleaver II (Cutting manifolds along sparse level sets)
