Talks (unless otherwise indicated) are in Davis 301 from 4–5 PM on Mondays. Refreshments begin at 3:30 on the second floor of Davis.
To make sure you get email updates, add yourself to the mathstu (if a student) or mathothers (if not) email group, or check the Colby Math&Stats Facebook page.
You can see last semester’s schedule.
Evan Randles, Colby
On the range and periodic structure of random walks on the integer lattice
In the study of random walks on the d-dimensional integer lattice, it is common to assume that a given random walk is both irreducible and aperiodic. Though these hypotheses make the analysis particularly simple, they rule out interesting behavior concerning the periodic structure of generic random walks, a subject first approached by G. Pólya from a combinatorial perspective. In this talk, I will take a harmonic analysis viewpoint on the range and periodic structure of random walks on the lattice. With the help of the Fourier transform, I will investigate a function which, for a given random walk, describes the set of possible lattice sites for the random walk at step n. This function will then be seen to appear naturally in a statement of local limit theorems for random walks on the integer lattice in which one makes no assumptions concerning aperiodicity or irreducibility.
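The simplest instance of the periodic structure the abstract alludes to is the simple symmetric walk on Z, whose step-n support alternates in parity. The following minimal sketch (an illustration only, not material from the talk) computes the step-n law by repeated convolution of the step distribution and prints its support:

```python
import numpy as np

# Step distribution of the simple symmetric walk on Z: +1 or -1, each w.p. 1/2,
# represented on the sites -1, 0, +1.
step = np.array([0.5, 0.0, 0.5])

dist = np.array([1.0])  # the walk starts at the origin
for n in range(1, 5):
    dist = np.convolve(dist, step)        # law of the walk at step n
    sites = np.arange(-n, n + 1)
    support = sites[dist > 0]
    print(f"step {n}: reachable sites {support.tolist()}")
# step 1: [-1, 1]; step 2: [-2, 0, 2]; step 3: [-3, -1, 1, 3]; ...
# The support alternates in parity with n: exactly the kind of periodic
# behavior that the usual aperiodicity assumption rules out.
```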
A panel of mathematics and mathematical sciences majors will discuss the joys and challenges of being a major.
Lucia Petito, Harvard School of Public Health
An Exploration into Misclassified Group Tested Current Status Data
Group testing, first introduced by a military doctor in 1943, has been used as a method to reduce costs when estimating the prevalence of a binary characteristic based on a screening test of m groups that include n independent individuals in total. If the unknown prevalence in question is low, and the screening test suffers from misclassification, more precise prevalence estimates can be obtained from group testing than from testing all n samples separately. In some applications, the individual binary response corresponds to whether an underlying “time to incidence” variable T is less than an observed screening time C. This data structure at the individual level is known as current status data. Given sufficient variation in the observed Cs, it is possible to estimate the distribution function F of T nonparametrically using the pool-adjacent-violators algorithm. Here, we develop a nonparametric estimator of F based on group-tested current status data for groups of size k = n/m, where a group tests “positive” if and only if some individual unobserved T is less than its corresponding observed C. We will investigate the performance of the group-based estimator as compared to the individual-test nonparametric maximum likelihood estimator, and show that the former can be more precise in the presence of misclassification for low values of F(t). We then apply this estimator to estimate the age-at-incidence curve for hepatitis C infection in a sample of U.S. women who gave birth to a child in 2014, where group assignment is done both at random and based on maternal age.
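For readers unfamiliar with the pool-adjacent-violators step mentioned above, here is a minimal sketch of the individual-test benchmark estimator the abstract compares against: since E[1{T ≤ C} | C = c] = F(c) is monotone in c, isotonic regression of the current status indicators on the screening times recovers F. The distributions, sample size, and use of scikit-learn below are assumptions made purely for illustration; the group-tested, misclassified setting is the subject of the talk itself.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 2000
T = rng.exponential(scale=2.0, size=n)   # unobserved incidence times (assumed F)
C = rng.uniform(0.0, 6.0, size=n)        # observed screening times
delta = (T <= C).astype(float)           # current status: only 1{T <= C} is seen

# NPMLE of F from individual current status data: isotonic (pool-adjacent-
# violators) regression of the indicators on the screening times.
order = np.argsort(C)
iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
F_hat = iso.fit_transform(C[order], delta[order])

t = 2.0
idx = np.searchsorted(C[order], t)
print("F_hat(2.0) ≈", F_hat[idx], " true value:", 1 - np.exp(-t / 2.0))
```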
Starting at 4:00 pm in Olin 1. Refreshments at 3:30 pm outside of Davis 216.
Xiao-Li Meng, Dean of the Harvard Graduate School of Arts and Sciences
The Law of Large Populations: The return of the long-ignored N and how it can affect our 2020 vision
For over a century now, we statisticians have successfully convinced ourselves, and almost everyone else, that in statistical inference the size of the population, N, can be ignored, especially when it is large. Instead, we focused on the size of the sample, n, the key driving force behind both the Law of Large Numbers and the Central Limit Theorem. We were thus taught that the statistical error (standard error) goes down with n, typically at the rate 1/√n. However, all of this relies on the presumption that our data have perfect quality, in the sense of being equivalent to a probabilistic sample. A largely overlooked statistical identity, a potential counterpart to the Euler identity in mathematics, reveals a Law of Large Populations (LLP), a law that we should all be afraid of. That is, once we lose control over data quality, the systematic error (bias) in the usual estimators, relative to the benchmark standard error from simple random sampling, goes up with N at the rate √N. The coefficient in front of √N can be viewed as a data defect index: the simple Pearson correlation between the reporting/recording indicator and the value reported/recorded. Because of the multiplier √N, a seemingly tiny correlation, say 0.005, can have a detrimental effect on the quality of inference. Without an understanding of this LLP, “big data” can do more harm than good because of the drastically inflated precision assessment, and hence gross overconfidence, setting us up to be caught by surprise when reality unfolds, as we all experienced during the 2016 US presidential election. Data from the Cooperative Congressional Election Study (CCES, conducted by Stephen Ansolabehere, Douglas Rivers, and others, and analyzed by Shiro Kuriwaki) are used to estimate the data defect index for the 2016 US election, with the aim of gaining a clearer vision for the 2020 election and beyond.
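To see the scale of the effect described in the abstract, one can plug the stated correlation of 0.005 into the bias-to-standard-error ratio implied by the identity: as paraphrased above, the bias is ρ·√((N−n)/n)·σ, so its ratio to the SRS standard error σ/√n is roughly ρ·√(N−n). The population and sample sizes in this short numeric sketch are assumptions chosen only for illustration:

```python
import math

# Ratio of the bias rho * sqrt((N - n)/n) * sigma (as paraphrased from the
# abstract) to the SRS standard error sigma / sqrt(n): rho * sqrt(N - n).
def bias_to_srs_se_ratio(rho, N, n):
    return rho * math.sqrt(N - n)

# rho = 0.005 is the abstract's "seemingly tiny" correlation; the population
# sizes N and the sample size n = 400,000 are assumed for illustration.
for N in [1e6, 1e8, 2.3e8]:
    ratio = bias_to_srs_se_ratio(0.005, N, 400_000)
    print(f"N = {N:.0e}: bias is {ratio:.0f}x the SRS standard error")
```

The ratio grows like √N: a correlation of 0.005 that would be harmless in a small population swamps the nominal standard error by nearly two orders of magnitude at electorate scale.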
Robert Benedetto, Amherst
Joe Chen, Stanford
Ernst Linder, University of New Hampshire
Tom Bellsky, University of Maine
Sam Wagstaff, Purdue