Washington Statistical Society Seminars 2001

January 2001
   5 Fri.    Multipoint Linkage Analysis Using Affected Sib Pairs: Incorporating Linkage Evidence from Unlinked Regions
   9 Tues.   Hierarchical Bayesian Model with Nonresponse
  16 Tues.   An Integrated Framework for Database Privacy Protection
  17 Wed.    Meeting the Sampling Needs of Business
  31 Wed.    Data on the Racial Profiling of Travelers
February 2001
   5 Mon.    Federal Statistics and Statistical Ethics: The Role of the ASA's "Ethical Guidelines for Statistical Practice"
   8 Thur.   Evaluation of the Brady Handgun Violence Prevention Act
  13 Tues.   Statistical Aspects of Neural Networks
  14 Wed.    WSS Seminar & Julius Shiskin Award Presentation: Bias in Aggregate Productivity Trends Revisited
  22 Thur.   A Prototype Data Dissemination System for the 2002 Census of Agriculture
March 2001
   8 Thur.   Small-Area Poverty Estimates and Public Policy: Looking to the Future
  21 Wed.    The Role of Questionnaire Design in Medicaid Estimates: Results from an Experiment
  22 Thur.   Mass Imputation of Agricultural Economic Data Missing by Design - A Simulation Study of Two Regression Based Techniques
  30 Fri.    Oatmeal Cookies, Weather Modification, and Organ Failure: The Art of Combining Results from Independent Statistical Studies
April 2001
   4 Wed.    NIST Statistical Engineering Division Symposium: Gibbs, MCMC, and Importance Sampling
  12 Thur.   Interviewer Refusal Aversion Training to Increase Survey Participation
  12 Thur.   Effect of Using Priority Mail on Response Rates in a Panel Survey
  12 Thur.   New Standards for a New Decade: The Standards for Defining Metropolitan and Micropolitan Statistical Areas
  19 Thur.   Creating and Rejuvenating Statisticians: What DC Area Universities Can Do - A Panel Discussion
May 2001
   2 Wed.    Modern Regression Methods: Differences and Similarities
  24 Thur.   Producing Small Area Estimates from National Surveys: Methods for Minimizing Use of Indirect Estimators
  30 Wed.    Data Mining in Classification and Cluster Analysis
June 2001
   4 Mon.    The 2001 Roger Herriot Award For Innovation In Federal Statistics
   6 Wed.    New GAO Report on Research Record Linkage
   7 Thur.   Interagency Activities to Address Nonresponse in Household Surveys (part 2)
  11 Mon.    WSS President's Day: Sam Greenhouse Memorial Symposium / The Funding Opportunity In Survey Research
  25 Mon.    Spatial Modeling of Age, Period and Cohort Effects
  27 Wed.    The Federal Reserve Board's 1998 Survey of Small Business Finances: Methodological Issues and Cost Considerations
July 2001
  12 Thur.   Revising Statistical Standards: An Exercise in Quality Improvement
  16 Mon.    WSS Seminar and Julius Shiskin Award Presentation: Time Series Decomposition and Seasonal Adjustment
  17 Tues.   The Data Web Project: Confidentiality Issues
  26 Thur.   The Statistical Power of National Data to Evaluate Welfare Reform
August 2001
   1 Thur.   Converting Your Technical Paper into the Best Presentation of Your Life!
September 2001
  13 Thur.   Evaluating Welfare Reform in an Era of Transition: Report of the Committee on National Statistics Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs
  13 Thur.   New Concepts in Test Equating and Linking
  28 Fri.    A New Alternative to Bayes Factors: The Resolution of Lindley's Paradox through the Posterior Distribution of the Likelihood Ratio
October 2001
  17 Wed.    Informing America's Policy on Illegal Drugs: What We Don't Know Keeps Hurting Us. Report of the Committee on Data and Research for Policy on Illegal Drugs
  23 Thur.   Likelihood Analysis of Neural Network Models
November 2001
   1 Thur.   University of Maryland, Statistics Program, Department of Mathematics Seminar: The Fisher Information on a Location Parameter Under Additive and Multiplicative Perturbations
   2 Fri.    The George Washington University, Department of Statistics Seminar: A New Alternative to Bayes Factors: The Resolution of Lindley's Paradox through the Posterior Distribution of the Likelihood Ratio
   7 Wed.    Why Mathematics Is Needed to Understand Disease-Gene Associations
   7 Wed.    The Empirical Role of Young Versus Old Gestational Ages in the Abortion Debate
   8 Thur.   University of Maryland, Statistics Program, Department of Mathematics Seminar: Multidimensional Time Markov Processes
   9 Fri.    The George Washington University, Department of Statistics Seminar: Modified Maximum Likelihood Estimators Based on Ranked Set Samples
  13 Tues.   Evaluation of Score Functions to Aid in the 2002 Census of Agriculture Review Process
  13 Tues.   The Morris Hansen Lecture: Election Night Estimation
  15 Thur.   University of Maryland, Statistics Program, Department of Mathematics Seminar: Clinical Outcome Prediction in Diffuse Large B-cell Lymphoma via Microarray
  20 Tues.   U.S. Bureau of the Census, Statistical Research Division Seminar: From Single-Race Reporting to Multiple-Race Reporting: Using Imputation Methods to Bridge the Transition
  27 Tues.   U.S. Bureau of the Census, Statistical Research Division Seminar: An Exploratory Data Analysis Retrospective
  28 Wed.    U.S. Bureau of the Census, Statistical Research Division Seminar: Residential Mobility and Census Coverage
  29 Thur.   The George Washington University, Department of Statistics Seminar: Urban Heat Island Effect in the Greater Washington Metropolitan Area
December 2001
  19 Wed.    University of Maryland, Statistics Program, Department of Mathematics Seminar: Bioinformatics for HIV Genomics
  19 Wed.    Outlier Selection for RegARIMA Models




Title: Multipoint Linkage Analysis Using Affected Sib Pairs: Incorporating Linkage Evidence from Unlinked Regions

  • Speaker: Dr. Kung-Yee Liang, Johns Hopkins University, Department of Biostatistics
  • Date/Time: Friday, January 5, 2001, 11:00 a.m.
  • Location: Executive Plaza North, Conference Room G, 6310 Executive Plaza Blvd., Rockville, MD. Additional Info: Susan Winer, Office of Preventive Oncology, 301-496-8640, winers@mail.nih.gov. Directions: Beltway to 270 North; take Exit 14a (Montrose Rd. east); right on Jefferson (quickly becomes Executive Blvd.); right on Executive Plaza (first stoplight). The building is the rightmost of the two twin buildings ahead to your right. Parking validation available.
  • Sponsor: Public Health and Biostatistics Section




Abstract:

We will start the talk with a brief discussion of some central questions in genetic epidemiological research. We will then focus on one of the main questions concerning localizing susceptibility genes. We present a multipoint linkage method to assess evidence of linkage to one region by incorporating linkage evidence from other regions. This method is especially useful for complex diseases such as asthma, diabetes and psychiatric disorders, in which the effect of each gene is likely to be small or modest. Our approach uses affected sib pairs, in which the number of alleles shared identical by descent is the primary statistic. The method proposed uses data from all available families to simultaneously test the hypothesis of statistical interaction between regions and to estimate the location of the susceptibility gene in the target region. As an illustration, we have applied this method to an asthma sib pair study (Wjst et al., 1999, Genomics) which earlier reported evidence of linkage to chromosome 6 but showed no evidence for chromosome 20. Our results yield strong evidence of linkage to chromosome 20 after incorporating linkage information from chromosome 6. In addition, the method estimates with 95% confidence that the map location of the susceptibility gene is flanked by markers D20S186 and D20S101, which are approximately 16.3 cM apart.
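
For readers new to allele-sharing methods, here is a minimal sketch of the classical single-marker "mean test" for affected sib pairs, the simplest use of the IBD-sharing statistic described above. It is an illustration only, with invented data, and is not Dr. Liang's multipoint method. Under the null hypothesis of no linkage, a sib pair shares 0, 1, or 2 alleles identical by descent with probabilities 1/4, 1/2, and 1/4, so mean sharing is 1 with variance 1/2 per pair.

    # Minimal "mean test" for affected sib pairs (ASP) at a single marker.
    # Under no linkage, IBD counts follow (1/4, 1/2, 1/4): mean 1, variance 1/2.
    import numpy as np
    from scipy.stats import norm

    def asp_mean_test(ibd_counts):
        """ibd_counts: 0/1/2 IBD allele counts, one per affected sib pair."""
        ibd = np.asarray(ibd_counts, dtype=float)
        z = (ibd.mean() - 1.0) / np.sqrt(0.5 / ibd.size)  # standardized excess sharing
        return z, norm.sf(z)                              # one-sided p-value

    # Invented data: 100 pairs with mildly elevated sharing
    rng = np.random.default_rng(0)
    counts = rng.choice([0, 1, 2], size=100, p=[0.20, 0.50, 0.30])
    z, p = asp_mean_test(counts)
    print(f"z = {z:.2f}, one-sided p = {p:.4f}")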

Title: Hierarchical Bayesian Model with Nonresponse

  • Speaker: Geunshik Han, Ph.D., Professor of Computer Science & Statistics, Hanshin University
  • Chair: Jai Choi, Ph.D., Office of Research and Methodology, National Center for Health Statistics
  • Date: Tuesday, January 9, 2001, 10:00-11:30 a.m.
  • Location: National Center for Health Statistics, Presidential Building, 11th Floor Auditorium, Room 1110 6525 Belcrest Road, Hyattsville, Maryland (Metro: Green Line, Prince George's Plaza, then approximately 2 blocks).
  • Sponsor: Public Health and Biostatistics Section


Abstract:

We describe a hierarchical Bayesian model to analyze multinomial nonignorable nonresponse data from small areas. We use a Dirichlet prior on the multinomial probabilities and a beta prior on the response probabilities, which permits a pooling of data from different areas. This pooling is needed because of the weak identifiability of the parameters in the model. Inference is sampling based, and Markov chain Monte Carlo methods are used to perform the computations. We apply our method to Body Mass Index (BMI) data from the Third National Health and Nutrition Examination Survey (NHANES III).
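
As a toy illustration of the conjugate machinery behind such a model, the sketch below draws posterior samples area by area. All names, counts, and priors are invented, and the sketch covers only the ignorable case with a fixed shared prior; the nonignorable coupling of response probabilities to categories, and the hierarchical estimation that produces genuine pooling, require the full MCMC treatment described in the talk.

    # Toy conjugate sketch (ignorable nonresponse only; invented data).
    import numpy as np

    rng = np.random.default_rng(1)

    # Respondent category counts (e.g., three BMI classes) for three small areas,
    # plus the number of nonrespondents in each area.
    counts = {"area1": [12, 30, 8], "area2": [3, 9, 2], "area3": [40, 95, 25]}
    n_nonresp = {"area1": 10, "area2": 15, "area3": 20}

    alpha = np.array([1.0, 1.0, 1.0])  # Dirichlet prior on cell probabilities
    a, b = 1.0, 1.0                    # beta prior on the response probability

    for area, c in counts.items():
        c = np.array(c)
        theta = rng.dirichlet(alpha + c, size=2000)            # cell probabilities
        pi = rng.beta(a + c.sum(), b + n_nonresp[area], 2000)  # response probability
        print(area, theta.mean(axis=0).round(3), round(pi.mean(), 3))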

Title: An Integrated Framework for Database Privacy Protection

  • Speaker: LiWu Chang, Naval Research Laboratory, Center for High Assurance Computer Systems
  • Discussant: Stephen Roehrig, Carnegie Mellon University
  • Chair: Laura Zayatz, Census Bureau
  • Date/Time: Tuesday, January 16, 2001, 12:30 - 1:45 p.m. (NOTE SPECIAL TIME)
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Room G440 - Room 9, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section
Abstract:

One of the central objectives of database privacy protection is to keep sensitive information held in a database from being inferred by a generic database user. We propose a framework to assist in the formal analysis of the database inference problem. In this work, the inference problem is dealt with from two perspectives, which are characterized by different properties of database attributes. One perspective concerns the probabilistic dependency relationships among attributes. The other concerns the identification of an individual data item. The proposed framework involves two-tier processing. First, a similarity analysis is used to examine attributes that can be used to effectively identify individual information; attributes selected by this analysis are processed by aggregation. The second tier mitigates the inference induced by probabilistic dependency relationships. In our present approach, a blocking strategy is adopted to reduce the amount of released information, with the attribute values to be blocked determined by the corresponding sample probability. Our framework is based on an association network composed of a similarity measure and a (Bayesian) probabilistic network model. This provides a unified framework for database inference analysis.

Topic: Meeting the Sampling Needs of Business

  • Speaker: Yan Liu and Mary Batcher, Ernst & Young LLP
  • Discussant: Fritz Scheuren, The Urban Institute
  • Chair: Linda Atkinson, Economic Research Service
  • Day/Time: Wednesday, January 17, 2001, 12:30-2:00 p.m.
  • Location: BLS, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street.
  • Sponsor: Economics Section
Abstract:

Sampling for business uses often requires efforts to keep the sample small. There are situations where there are major cost implications associated with an increase in sample size. Generally, in these circumstances, the increased precision of ratio or regression estimation is useful. For small samples, the design-based ratio estimate under simple random sampling can be seriously biased unless the sample is balanced with respect to the covariate. Stratification on the covariate can achieve the effect of balancing, but the sample size needs to be reasonably large. We propose a deep stratification method at the design stage such that only one unit is drawn from each of the equal-sized strata. We then use the regular design-based ratio estimate and its variance estimate. This method substantially reduces the bias and also gives good variance estimates and coverage rates. Simulation results are presented.
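
The following simulation sketch conveys the design idea (the population, covariate, and sample sizes are invented, and no variance estimation is shown): sort the frame on the covariate, form equal-sized strata, draw one unit per stratum, and apply the usual ratio estimate of the total.

    # Deep stratification vs. SRS for the design-based ratio estimate (a sketch).
    import numpy as np

    rng = np.random.default_rng(2)
    N, n = 5000, 20
    x = rng.lognormal(mean=2.0, sigma=0.8, size=N)   # skewed covariate
    y = 3.0 * x + rng.normal(0.0, x)                 # y roughly proportional to x

    order = np.argsort(x)                            # deep stratification:
    strata = np.array_split(order, n)                # n equal-sized strata on x

    def ratio_est(idx):                              # ratio estimate of the total
        return y[idx].mean() / x[idx].mean() * x.sum()

    srs, deep = [], []
    for _ in range(2000):
        srs.append(ratio_est(rng.choice(N, n, replace=False)))
        deep.append(ratio_est(np.array([rng.choice(s) for s in strata])))

    for name, est in [("SRS ratio", np.array(srs)), ("deep-stratified", np.array(deep))]:
        print(f"{name}: bias = {est.mean() - y.sum():.0f}, sd = {est.std():.0f}")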

Title: Data on the Racial Profiling of Travelers

  • Speakers: Eva Rezmozic & Wendy Ahmed, U.S. General Accounting Office
  • Chair: Ruth McKay, U.S. General Accounting Office
  • Date/Time: Wednesday, January 31, 2001, 12:00 - 1:30 p.m.
  • Location: BLS, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street.
  • Sponsor: Social Demographic Section
Abstract:

Law enforcement officers are prohibited from engaging in discriminatory behavior on the basis of individuals' race, ethnicity, or national origin. However, there have been numerous allegations of such behavior from minorities who travel the nation's roadways and those who transit through the nation's airports.

This presentation will discuss two reviews that GAO undertook at the request of Congress. One review examined the available quantitative research on racial profiling of motorists, as well as data that federal, state, and local law enforcement agencies collect on motorist stops. The other review examined the U.S. Customs Service's policies and procedures for conducting personal search and what controls Customs had in place to ensure that airline passengers were not inappropriately subjected to personal searches because of their race or sex.

With respect to racial profiling of motorists, we found five quantitative analyses as of March 2000. All contained significant limitations, the most prominent being the use of inappropriate benchmarks to assess whether minorities were stopped proportionately more often than whites. Most analyses examined the racial composition of motorists who were stopped, but not the racial composition of motorists at risk of being stopped. The best studies collected data on both the population of travelers as well as the traffic violators on specific roadways. However, even the well-designed studies had methodological limitations, so we could not conclusively determine whether and to what extent racial profiling of motorists may occur. There is a clear need for more and better data on the subject, which law enforcement agencies and additional researchers are attempting to collect.

With respect to personal search practices of the U.S. Customs Service, we obtained data on all Customs personal searches, the outcome of the search, and some passenger characteristics, such as race and sex. Using a series of logistic regression models, we examined which types of passengers were more or less likely to be searched and the results of the searches. We found that certain groups of passengers were selected for more intrusive searches at rates that were not consistent with the rates of finding contraband. While this finding may or may not be evidence of profiling, it indicates that there is room for improvement in the efficiency of the Customs Service's targeting criteria.

Title: Federal Statistics and Statistical Ethics: The Role of the ASA's "Ethical Guidelines for Statistical Practice"

  • Speaker: William Seltzer, Senior Research Scholar, Fordham University
  • Discussant: John Gardenier, National Center for Health Statistics
  • Chair: Nancy Gordon, US Census Bureau
  • Date/Time: Monday, February 5, 2001, 12:30 - 1:45 p.m. (NOTE SPECIAL TIME)
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Room G440 - Room 1, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. To gain entrance to BLS, please see "Notice" at the top of this announcement.
  • Sponsor: WSS Methodology Section
Abstract:

In their daily work, government statisticians must reconcile a host of different requirements and demands. Among these are: user needs, statistical and subject-matter understandings and constraints, agency policies and traditions, supervisory instructions, political priorities, budgetary imperatives, federal law, and personal beliefs and friendships. These and other categories of requirements and constraints often provide conflicting guidance. Moreover, many of the categories themselves do not offer monolithic or even consistent instruction.

Needless to say, the process of sorting out all this conflicting advice on what to do and what not to do is usually implicit. In these circumstances, mistakes, big and small, can and do happen. Ethics provide a tool to help examine and proceed through this maze of "do's" and "don'ts" and thereby reduce the likelihood of such mistakes. While ethics are helpful in avoiding all sorts of mistakes in coping with conflicting demands, they are particularly helpful in avoiding ethical mistakes. Three sets of ethical standards are currently available for use in connection with the federal statistical system: (1) the American Statistical Association's (ASA) "Ethical Guidelines for Statistical Practice," (2) the International Statistical Institute's "Declaration on Professional Ethics," and (3) the United Nations Statistical Commission's "Fundamental Principles of Official Statistics." Although the focus and language of each is slightly different, certainly with respect to official statistics, they provide consistent guidance. The ASA guidelines, the subject of today's talk, consist of an executive summary, a preamble, and nine subsections that constitute the body of the guidelines. The ASA guidelines provide guidance on normal work issues as well as on the extraordinary challenges that one may sometimes face. The ASA guidelines, moreover, can be an important starting point for fostering ethical awareness in a statistical agency and ethical activism among agency leadership and staff. Indeed, recent research has demonstrated that, without such awareness and activism, the single-minded pursuit of other goals has led to serious lapses.

Title: Evaluation of the Brady Handgun Violence Prevention Act

  • Speaker: Philip J. Cook, ITT/Terry Sanford Professor of Public Policy, Sanford Institute of Public Policy, Duke University
  • Discussant: David Cantor, Westat
  • Chair: Elizabeth Martin, U.S. Census Bureau
  • Date/Time: Thursday, February 8, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room 7, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be Video-conferenced to the Census Bureau and other interested sites.
  • Sponsor: WSS Public Policy Section
Abstract:

The Brady Act established a nationwide requirement that licensed firearms dealers observe a waiting period and initiate a background check for handgun sales. As it turned out, 18 states already met these requirements at the time of the Act's implementation in February 1994. They serve as a control group in analyzing the effect of the Act on homicide and suicide rates. An evaluation written by Jens Ludwig and Philip Cook that reported largely negative findings was published in the Journal of the American Medical Association of August 2, 2000, and has been attacked from both sides since then. The pro-control folks argue that the "control" states in fact benefited from Brady, while the anti-control folks argue that if we had done the evaluation right we would have found that Brady increased the homicide rate.

In this talk, the speaker will review the methods and findings and respond to the critics. He will also discuss the gaping loopholes in the Act that most likely account for its ineffectiveness.

Title: Statistical Aspects of Neural Networks

  • Speaker: Sandy Balkin, Ernst & Young
  • Discussant: David Banks, BTS/DOT
  • Moderator: Charlie Hallahan, ERS/USDA
  • Date/Time: Tuesday, February 13, 2001, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Rooms 7 & 8, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be video-conferenced to the Census Bureau and other interested sites.
  • Sponsor: Statistical Computing Section
Abstract:

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way the brain processes information. ANNs have been vigorously promoted in the computer science literature for tackling a wide variety of scientific problems. Recently, statisticians have started to investigate whether they are useful for tackling various statistical problems. Most of the interest in ANNs is motivated by their use as a universal function approximator. However, comparative studies with traditional statistical methods have given mixed results as to the added benefit they might provide.

This talk will explain what neural networks are and how they relate to statistical models. Model selection, estimation, and validation will be discussed. Advantages and disadvantages of their use and successful applications will be presented.

WSS Seminar & Julius Shiskin Award Presentation

Topic: Bias in Aggregate Productivity Trends Revisited

  • Speakers: William Gullickson and Michael J. Harper, Bureau of Labor Statistics
  • Chair: Howard Hogan, U.S. Bureau of the Census
  • Date/Time: Wednesday, February 14, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE, Conference Center Room 9. Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station). To gain entrance to BLS, please see "Notice" at the top of this announcement.
  • Sponsor: Economics Section and Shiskin Award Committee
Abstract:

This paper develops measures of U.S. multifactor productivity (MFP) growth for aggregate sectors and for industries. MFP is designed to measure the joint influences on economic growth of factors such as technological change, efficiency improvements, returns to scale, and reallocation of resources. The paper updates results from earlier work. We continue to find a number of industries outside of manufacturing with negative MFP trends during the 1980s and 1990s. These include insurance, construction, banking, and health care. The paper considers the possibility that these negative trends reflect problems in measuring outputs and inputs. Attention is focused on the fact that output trends in the service sectors are low relative to input trends. This may reflect the fact that we are better able to measure quality change for goods, especially high tech goods, than for services.

Following the seminar presentation, Marilyn Manser, Associate Commissioner for Productivity and Technology, BLS, will present the 2000 Julius Shiskin Award to Edwin R. Dean, formerly of the Bureau of Labor Statistics, for his important contributions to the improvement and understanding of productivity measures, and also to programs on international comparisons of labor statistics and international technical cooperation. The Julius Shiskin Award honors original and important contributions in the development of economic statistics and in their use in interpreting economic events. It is jointly sponsored by the Washington Statistical Society and the National Association of Business Economists. Edwin Dean's expertise and innovation have placed increased emphasis on the Bureau of Labor Statistics' international technical cooperation program. These steps have helped foster the reputation of the United States as a leader in the world's increasingly global economy.

Please join the Washington Statistical Society on February 14, 2001, at 12:30 p.m. to honor Edwin Dean as we present the award to him and celebrate at a reception following the presentation.

Title: A Prototype Data Dissemination System for the 2002 Census of Agriculture

  • Speaker: Irwin Anolik, USDA - National Agricultural Statistics Service - Research and Development Division, 3251 Old Lee Hwy., Room 305, Fairfax, VA 22030-1504, (703) 235-5218 x114, (703) 235-3386 FAX, ianolik@nass.usda.gov
  • Chair: Dan Beckler, USDA - NASS
  • Date/Time: February 22, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Room 2990, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. To gain entrance to BLS, please see "Notice" at the top of this announcement.
  • Sponsor: WSS Agricultural & Natural Resources Section
Abstract:

Historically, the Census of Agriculture represents the leading source of local area statistics about U.S. agriculture. In 1997, responsibility for the Census of Agriculture was transferred from the Bureau of the Census to the National Agricultural Statistics Service (NASS). In large part, this responsibility involves collecting, analyzing, and publishing data regarding all places defined as farms.

This prototype system uses data from the 1997 Census of Agriculture to demonstrate methods of graphical display that could give NASS data customers a better understanding of patterns and structure in the data. These methods could also give data customers enhanced ability to view, analyze, and interact with summary data previously available only in tables.

Using concepts and methods developed and inspired by Tukey, Cleveland, Tufte, Carr, and others, the system demonstrates how NASS data can be more effectively displayed and disseminated using dots, lines, arrows, and maps. While this system uses data from the 1997 Census of Agriculture, the applicability to other sources of survey and census data will hopefully be apparent.

The prototype system presented uses a Web browser to display and disseminate data, and will ultimately feature the ability to dynamically generate and interact with various charts and maps.

Whether the customer's intent is to print a given display or view it on a screen, the system is structured to make NASS published data easier to access and understand.

Topic: Small-Area Poverty Estimates and Public Policy: Looking to the Future

  • Speakers: Graham Kalton, Westat; Constance Citro, Committee on National Statistics
  • Discussant: Charles Alexander, U.S. Census Bureau
  • Chair: Daniel Kasprzyk, National Center for Education Statistics
  • Date/Time: Thursday, March 8, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room 7, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be video-conferenced to the Census Bureau and other interested sites.
  • Sponsor: WSS Public Policy Section
Abstract:

More than $130 billion of federal funds are allocated each year to states and localities by means of formulas that include estimates of poverty or income. States also use small-area income and poverty estimates to allocate their own and federal funds to substate areas. The funds support a wide range of activities and services, including child care, community development, education, job training, nutrition, public health and others.

The newest source of estimates is the Census Bureau's Small Area Income and Poverty Estimates (SAIPE) Program, which was begun in the early 1990s to provide estimates that would be more timely than those from the decennial census. 1994 legislation specified the use of SAIPE estimates to allocate more than $7 billion each year of Title I Elementary and Secondary Education Act funds for disadvantaged children, pending a review by a Committee on National Statistics (CNSTAT) panel, and the SAIPE estimates are used for other federal programs as well.

The CNSTAT Panel on Estimates of Poverty for Small Geographic Areas evaluated the currently used SAIPE methods and estimates and outlined an agenda for future research and development. Graham Kalton, chair of the panel, and Connie Citro, the panel's study director, will review the panel's findings and recommendations, with a particular emphasis on future R&D and the role that new surveys, such as the 2000 census long form and the American Community Survey, and administrative records can play in improving the estimates for public policy use in the next decade and beyond.

Topic: The Role of Questionnaire Design in Medicaid Estimates: Results from an Experiment

  • Speaker: Joanne Pascale, U.S. Bureau of the Census, Washington, DC
  • Chair: Manuel de la Puente, U.S. Census Bureau, Washington, DC
  • Date/Time: Wednesday, March 21, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room G440, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Public Health and Biostatistics Section
Abstract:

Two Congressional acts of the 1990s -- the welfare reform law (Personal Responsibility and Work Opportunity Reconciliation Act of 1996) and the Children's Health Insurance Program (CHIP) in 1997 -- have profound implications for the Medicaid target population. While researchers and public policy experts have monitored Medicaid since its inception, these recent federal initiatives have heightened the demand for reliable statistics on the number and characteristics of people on Medicaid, both to detect unintended consequences of welfare reform and to track the success of the CHIP program. Figures on Medicaid, however, vary significantly depending on the survey used to generate the estimates.

This session will present findings from an experimental survey (the Census Bureau's 1999 Questionnaire Design Experimental Research Survey) which measured Medicaid participation under four different survey designs. The designs contain key features of several important health surveys, including the Current Population Survey (the survey used by the federal government to generate official statistics on health insurance coverage), the Survey of Income and Program Participation, and the Robert Wood Johnson Foundation's Community Tracking Survey. Results will focus on the levels and characteristics of Medicaid participants as measured under these four different designs. Attendees can expect to come away able to recognize associations between survey design features and the estimates those surveys generate, and to analyze existing survey data and design new surveys in light of these associations.

Topic: Mass Imputation of Agricultural Economic Data Missing by Design - A Simulation Study of Two Regression Based Techniques

  • Speaker: Matt Fetter, USDA/NASS/Research and Development Division
  • Chair: Dan Beckler, USDA/NASS
  • Date/Time: Thursday, March 22, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center Room 2, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be video-conferenced to the Census Bureau and other interested sites.
  • Sponsor: WSS Agriculture and Natural Resources Section
Abstract:

One approach to reducing respondent burden is to significantly reduce the amount of data collected. Ideally, this would be done in a manner that facilitates the use of methodology that lessens the impact of the reduced data collection on both the point estimates and the analyses obtained from the reduced data set.

Two imputation techniques are investigated: one based on a Markov Chain Monte Carlo algorithm (Schafer, 1997) under a conditional multivariate normal assumption, the other using a simple least squares regression with an added random empirical residual (RER). Computer simulation was used to study the statistical characteristics of agricultural economic data sets completed by imputing data using these methods. Of particular interest is the performance of these methods when applied to situations where, by design, as much as 60% of the records require an imputed value for a given variable.

Particularly problematic are the extreme skewness and semi-continuous nature of many of these data: the former is the result of a relatively few large values, the latter is caused by a preponderance of legitimate zero values. First, power transformations are used to create more normal-like marginal distributions for each variable. Logistic regression is then used to determine whether a positive value or a zero is to be imputed for a given variable on a given record. Then, the aforementioned modeling techniques are applied to obtain the necessary positive values to impute.

For estimates of means, moderate gains in precision were achieved for some variables, but for other variables, the imputations appeared to add nothing but noise and/or bias. The random empirical residual (RER) method preserved correlations quite well, whereas the Markov Chain Monte Carlo approach was less effective in this regard.
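
A minimal sketch of the RER idea on invented data with a single covariate (omitting the power-transformation and logistic-regression stages described above) might look like this:

    # Regression imputation with a random empirical residual (RER) -- a sketch.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1000
    x = rng.lognormal(2.0, 0.7, n)                 # always-collected covariate
    y = 10.0 + 2.5 * x + rng.normal(0.0, 5.0, n)   # item collected for only 40%
    miss = rng.random(n) < 0.6                     # 60% missing by design

    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)  # OLS on complete cases
    resid = y[~miss] - X[~miss] @ beta             # pool of empirical residuals

    y_imp = y.copy()
    y_imp[miss] = X[miss] @ beta + rng.choice(resid, miss.sum())  # prediction + RER

    print("true mean:", y.mean().round(2), " imputed-data mean:", y_imp.mean().round(2))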

Title: Oatmeal Cookies, Weather Modification, and Organ Failure:
The Art of Combining Results from Independent Statistical Studies

  • Speaker: Ingram Olkin, Stanford University
  • Discussant: To be announced
  • Chair: Steven B. Cohen, Agency for Healthcare Research and Quality
  • Date/Time: Friday, March 30, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Rooms 1 and 2, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section
Abstract:

The global information explosion in almost all areas of science, coupled with the evidence-based medicine movement, has generated the need for the synthesis and assessment of evidence. There is now a huge body of studies that deal with specific problems. For example, there are over 750 experiments on the effect of cloud seeding, some of which may use different seeding agents, may seed in different months, and so on. In the health sciences there are many studies concerned with the effects of a particular drug or treatment. For example, is a combination of estrogen and progesterone effective in reducing osteoporosis in women, or is aspirin effective in diminishing heart attacks? In each case, different studies may use different populations, different concentrations, different frequencies, and so on. Statisticians are being challenged to define procedures for combining the results of such uncoordinated studies. The set of such procedures has been called meta-analysis, in contrast to primary or secondary analyses.
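
As a concrete illustration, the simplest such combination is the fixed-effect, inverse-variance weighted average of independent study estimates; the sketch below uses invented effect sizes and standard errors.

    # Fixed-effect (inverse-variance) meta-analysis of independent studies.
    import numpy as np
    from scipy.stats import norm

    effects = np.array([0.30, 0.10, 0.45, 0.22])  # per-study estimates (invented)
    se = np.array([0.12, 0.20, 0.15, 0.10])       # their standard errors (invented)

    w = 1.0 / se**2                               # weight each study by its precision
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = pooled / pooled_se

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} "
          f"(z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.4f})")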

An area of current interest is how to model dependencies. The mechanisms for such modeling may arise from statistical concepts, physical structures, characterizations, mixtures, etc. Thus, we might expect models for the joint failure distributions of the engines on an airplane that differ from the models of failure distributions of organs in the body. This area is fascinating in that it permits one to study various physical phenomena statistically, and that it requires a combination of both physical and statistical insight.

Note: The talk is similar to Dr. Olkin's Fisher Lecture at the ASA meetings last August.

NIST Statistical Engineering Division Symposium

As part of the project at NIST to update the Handbook of Mathematical Functions by Abramowitz and Stegun and to make it available electronically as the Digital Library of Mathematical Functions, conferences are being held at NIST for some of the principal topics. The Mathematical and Computational Sciences Division and the Statistical Engineering Division are jointly sponsoring a symposium on Gibbs, MCMC, and Importance Sampling, specific topics that are part of the expansion of coverage for the Digital Library. A demonstration of the NIST-SEMATECH handbook, also a web-based publication, will be included.

The symposium is open to the public, WSS members in particular are invited, and there is no fee. Reservations are requested in order to guarantee space and to take the NIST shuttle from Metro. Call Stephany Bailey at 301-975-2839 (Statistical Engineering Division) or email stephany.bailey@nist.gov for advance registration or for additional information.
  • Date/Time: Wednesday, April 4, 2001, 8:30 am - 2:30 pm
  • Topics:
    "The Handbook of Mathematical Functions Goes Digital"
    Dan Lozier, Mathematical and Computational Sciences Division, NIST

    "Statistics in the Digital Library"
    Ingram Olkin, Stanford University
    David Kemp, St. Andrews University

    "Gibbs, MCMC, Importance Sampling"
    George Casella, University of Florida

    "Monte Carlo Sampling - Real Application"
    Jun Liu, Harvard University

    "Introducing the NIST-SEMATECH Handbook on the WEB"
    Will Guthrie, Statistical Engineering Division, NIST
    Alan Heckert, Statistical Engineering Division, NIST

    Roundtable Discussion
    (Lunch available at the NIST cafeteria.) NIST can be reached from the Metro Red Line stop at Shady Grove by taking the NIST shuttle.

Topic 1: Interviewer Refusal Aversion Training to Increase Survey Participation

  • Speakers: Thomas S. Mayer and Eileen M. O'Brien, U.S. Census Bureau

Topic 2: Effect of Using Priority Mail on Response Rates in a Panel Survey

  • Speaker: Stephanie Smilay, Demographic Surveys Division, U.S. Census Bureau

  • Chair: Nancy A. Bates, U.S. Census Bureau
  • Date/Time: Thursday, April 12, 2001, 12:00 - 3:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Room G440 - Room 3, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Note: The Interagency Household Survey Nonresponse Group will be holding its bi-monthly meeting immediately following the WSS Methodology seminar in the same room (2:00 - 3:00 p.m.); all are welcome to attend.
  • Sponsor: WSS Methodology Section
Abstracts:

Topic 1:
Initial nonresponse rates for continuing Federal household surveys are increasing (e.g., Atrostic, Bates, Burt, Silberstein, & Winters, 1999). Interviewer-administered surveys allow interviewers to influence respondent participation because they can generate person-level, customized appeals. Interviewer training, however, is often inadequate in developing skills to effectively engage respondents in this way. Interviewers do not feel prepared to answer respondents' questions, communicate the purpose of the survey, or establish and maintain rapport with the respondent (e.g., Doughty et al., 2000). Based on the concepts of tailoring and maintaining interaction (Groves and Couper, 1998), Groves and McGonagle (in press) have recently examined a theory-guided training protocol designed to enhance interviewers' skills in avoiding refusals. The current paper describes a similar training protocol tested in an omnibus RDD household survey conducted by the Census Bureau. Results indicate that interviewers who received training had significantly higher first-contact cooperation rates than their own pre-training rates and than interviewers who did not receive training.

Topic 2:
The Consumer Expenditure (CE) Quarterly Survey, a panel survey with monthly data collection, mails several types of introductory letters to eligible sample addresses; the wording varies depending on the month in sample. Reluctant respondents and those who are difficult to find at home may also receive a "Better Understanding Letter" (BUL) from the regional office which coordinates data collection. The results of a 1998 experiment with the Survey of Income and Program Participation (SIPP) showed that the use of priority mail envelopes in association with introductory letters improved response rates.

For the CE Quarterly Survey, we tested the use of priority mail envelopes over a three-month period, using a subsample of all eligible addresses. Half of the subsample received the treatment and half did not. The subsample included (1) addresses that were eligible noninterviews during the previous month in sample or (2) addresses that received a BUL during the month in sample. The half receiving the treatment was selected at random across all areas in sample. The control group received their introductory letter or BUL in standard envelopes; the treatment group received theirs in Priority Mail envelopes. This paper presents the results.

Title: New Standards for a New Decade: The Standards for Defining Metropolitan and Micropolitan Statistical Areas

  • Speaker: James D. Fitzsimmons, U.S. Census Bureau and Chair, Metropolitan Area Standards Review Committee
  • Discussant: Ed Spar, COPAFS
  • Chair: Ed Spar, COPAFS
  • Date/Time: April 12, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room 7, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be video-conferenced to the Census Bureau and other interested sites.
  • Sponsor: WSS Public Policy Section
Abstract:

The Metropolitan Area Standards Review Project concluded with the publication of new standards in late December of last year. Under way since the early 1990s, the project yielded standards that will be applied with Census 2000 data to define new Metropolitan Statistical Areas and Micropolitan Statistical Areas in 2003. In addition to these areas that are the first focus of the standards, the Office of Management and Budget also will apply the standards to define Metropolitan Divisions, components within the largest and most complex Metropolitan Statistical Areas, as well as Combined Statistical Areas for those seeking a larger, regional perspective. The new standards are simpler than the 1990 standards that they replace, yet provide enhanced flexibility to data producers and users and address more of the nation's settlement and activity patterns.

Topic: Creating and Rejuvenating Statisticians: What DC Area Universities Can Do - A Panel Discussion

  • Panelists:
    Richard Bolstein, George Mason University
    Robert Groves, Joint Program in Survey Methodology
    Phillip Kott, USDA Graduate School
    Hosam Mahmoud, George Washington University
    John Nolan, American University
    Paul J. Smith, University of Maryland
  • Moderator: Brenda G. Cox, Mathematica Policy Research, Inc.
  • Date/Time: Thursday, April 19, 2001, 12:30 p.m. to 2:00 p.m.
  • Location: Bureau of Labor Statistics, Room 2990-Cognitive Lab, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: Washington Statistical Society
Abstract:

In this panel discussion, representatives from local universities discuss their statistics programs and the types of degrees and courses offered. Two different groups will be targeted in the discussion of what DC area universities offer for statistics education. The first group is local statisticians who have been thinking about pursuing an additional degree or taking additional coursework and want to understand their choices in the local area. The second group is area employers or managers who need to hire statisticians with specific training or to have current employees retrained in specific areas. The discussion will focus on the nature of current programs, and then the audience will be invited to comment.

Title: Modern Regression Methods: Differences and Similarities

  • Speaker: Kenneth Cogger, Professor Emeritus, University of Kansas
  • Discussant: Efstathia Bura, George Washington University
  • Moderator: Charlie Hallahan, ERS/USDA
  • Date/time: Wednesday, May 2, 2001, 12:30 - 2:00 p.m.
  • Place: BLS Conference Center, Room 8, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: Statistical Computing Section
Talk to be video-conferenced

Abstract:

In the last few years, many new methods have been developed and applied to regression problems. Some of the major competitors include artificial neural networks (ANN), multivariate adaptive regression splines (MARS), generalized additive models (GAM), hinged hyperplanes, adaptive logic networks (ALN), and regression trees (CART). There are similarities and differences between these methods that are not widely understood. This talk will illustrate the technical differences and also show how these aspects carry over into an actual analysis of some real data from the public domain, the Boston Housing sample. In the past, software availability has been limited, but within the past year, there have been several developments worthy of note, and these are also described.
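
As a small taste of such a comparison (a sketch using scikit-learn stand-ins and Friedman's synthetic benchmark function, not the specific methods or Boston data of the talk), one might cross-validate a linear model against a regression tree:

    # Cross-validated comparison of two regression methods on Friedman's function.
    from sklearn.datasets import make_friedman1
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)
    for name, model in [("linear regression", LinearRegression()),
                        ("regression tree (CART)", DecisionTreeRegressor(max_depth=5))]:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name}: mean CV R^2 = {r2.mean():.3f}")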

Title: Producing Small Area Estimates from National Surveys:
Methods for Minimizing Use of Indirect Estimators

  • Speaker: David A. Marker, Westat
  • Chair and Discussant: Steven B. Cohen, Agency for Healthcare Research and Quality
  • Date/Time: Thursday, May 24, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Rooms 1 and 2, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section
Abstract:

National surveys conducted by governments are usually designed to produce estimates for the country as a whole and for major geographical regions. There is, however, a growing demand for small area estimates on the same attributes measured in these surveys. For example, many countries in transition are moving away from centralized decision-making, and western countries like the United States are devolving programs such as welfare from Federal to state responsibilities. Direct estimates for small areas from national surveys are frequently too unstable to be useful, resulting in the desire to find ways to improve estimates for small areas.

While it is always possible to produce indirect, model-dependent estimates for small areas, it is desirable to produce direct estimates where possible. Through stratification and oversampling, it is possible to increase the number of small areas for which accurate direct estimation is possible. When estimates are required for other small areas, it is possible to use forms of dual-frame estimation to combine the national survey with supplements in specific areas to produce direct estimates. This presentation reviews the methods that may be used to produce direct estimates for small areas. It is neither possible nor economical, however, to use such methods to produce direct estimates for all variables and small areas. In these situations, model-dependent, indirect estimators are the best methodology.

Title: Data Mining in Classification and Cluster Analysis

  • Speaker: David Banks, BTS/DOT
  • Discussant: Murray Aitken, ESSI/AIR
  • Moderator: Charlie Hallahan, ERS/USDA
  • Date/time: Wednesday, May 30, 2001, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Room 8, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: Statistical Computing Section
Talk to be video-conferenced

Abstract:

Many federal agencies have large data sets and research problems that involve classification and cluster analysis. For example, one might want to use results from follow-up surveys to predict the non-responding households in future surveys (classification). Or one might attempt to find clusters of automobile fatalities, perhaps corresponding to drunk drivers, tire failures, and weather conditions. This talk describes the data mining issues that arise in these applications, especially the Curse of Dimensionality, and reviews methods such as bagging, boosting, and racing that can improve statistical inference.
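
As an illustration of one of the methods mentioned, the sketch below bags decision trees on a synthetic classification problem (scikit-learn, invented data, not one of the federal applications discussed): averaging many trees fit to bootstrap resamples typically beats a single tree.

    # Bagging: bootstrap-aggregated trees vs. a single tree (a sketch).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    models = [("single tree", DecisionTreeClassifier(random_state=0)),
              ("bagged trees", BaggingClassifier(DecisionTreeClassifier(),
                                                 n_estimators=100, random_state=0))]
    for name, clf in models:
        acc = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean CV accuracy = {acc.mean():.3f}")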

Title: The 2001 Roger Herriot Award For Innovation In Federal Statistics

  • Recipient: Jeanne Griffith, Consultant and former employee of the National Center for Education Statistics, the National Science Foundation, the Office of Management and Budget, the Census Bureau and the Congressional Research Service
  • Speakers: Katherine Wallman, Office of Management and Budget, Emerson Elliott, former Commissioner of the National Center for Education Statistics, and Norman Bradburn, National Science Foundation
  • Chair: Lynda Carlson, National Science Foundation
  • Date/Time: Monday, June 4, 2001, 12:30 - 2:00 p.m.; Reception to follow
  • Location: Bureau of Labor Statistics, Conference Center, Room 1, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB. This session will be video-conferenced to the Census Bureau and other interested sites.
  • Cosponsor of the Award: Washington Statistical Society and the American Statistical Association's Section on Government Statistics and Section on Social Statistics


Abstract:

On June 4, 2001, the Washington Statistical Society will present the Roger Herriot Award to Jeanne Griffith. Jeanne, currently an international education consultant and formerly Acting Commissioner of the National Center for Education Statistics, is the 2001 recipient of the Roger Herriot Award for Innovation in Federal Statistics. She is the eighth recipient, and has had a long and distinguished career in the federal government. During her twenty-eight years in government, Jeanne specialized in the coordination of statistics and policy in several agencies of both the Executive and Legislative branches. Her innovative work in improving the collection and dissemination of education statistics has encompassed management, executive liaison and representation, research and reporting, and statistical policy. She has had an impact in the fields of education statistics, social demography, aging and retirement, labor force, and income and poverty. Jeanne's contributions reflect key elements of Roger Herriot's career, most prominently finding innovative ways to improve the quality and integrity of federal and international statistics.

Three speakers will discuss Jeanne's various contributions to federal statistics. Invited speakers include: Katherine Wallman, Office of Management and Budget, Emerson Elliott, former Commissioner of the National Center for Education Statistics, and Norm Bradburn, National Science Foundation.

The Roger Herriot award is sponsored by the Washington Statistical Society, the ASA's Social Statistics Section and the ASA's Government Statistics Section. Roger Herriot was the Associate Commissioner for Statistical Standards and Methodology at the National Center for Education Statistics (NCES) before he died in 1994. Throughout his career at NCES and the Census Bureau, Roger developed unique approaches to the solution of statistical problems in federal data collection programs. Jeanne Griffith truly exemplifies this tradition.

Please join the Washington Statistical Society on Monday, June 4, 2001 at 12:30 p.m. to honor Jeanne as we present the award to her and celebrate at a reception following the presentation.

Title: New GAO Report on Research Record Linkage

  • Speakers: Judy Droitcour and Nancy Donovan, U.S. General Accounting Office
  • Chair: Fritz Scheuren, Urban Institute
  • Discussants: Katherine Wallman, OMB, and Dave Williams, IRS
  • Date/Time: Wednesday, June 6, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Room 10, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section


Abstract:

The increasing ability to store, retrieve, cross-reference, and link electronic records brings information benefits as well as new responsibilities and concerns. Record linkage--a computer-based process that combines multiple sources of data to produce new research and statistical information--is a case in point. On one hand, federally sponsored linkage projects can inform policy debates, help government and business planning, and contribute knowledge that might benefit millions of people. On the other hand, new information about individuals is created as part of the linkage process: linkages occur at the person level, and oftentimes "the whole is greater than the sum of the parts." Thus, personal privacy is a potential concern--even though research and statistical projects do not involve government action towards any data subject.

This overview of issues concerning record linkage and privacy in the federal arena includes the following general findings. Federal projects generate new research and statistical information by tapping into--and linking--survey responses, existing records, and "contextual data." Privacy issues raised by linkages like these include, among others: (1) whether consent to linkage was obtained, (2) whether data-sharing between organizations was required to "make the link," and (3) whether "de-identified" linked data are made available to the public--and might be vulnerable to reidentification risks. Various techniques to help address these privacy issues include signed consent forms, tools for masking personal data, and secure data centers where researchers analyze linked data under controlled conditions. Strategies for enhancing data stewardship could include, among others, developing agency systems for accountability and fostering an organizational culture that emphasizes the values of personal privacy, confidentiality, and security. Questions for further study include: How extensive is federal record linkage? How do legal and regulatory privacy protections vary across federal agencies? What new privacy "tools" and stewardship strategies can be developed?

Topic: Interagency Activities to Address Nonresponse in Household Surveys (part 2)

  • Speakers: Roberta L. Sangster, U.S. Bureau of Labor Statistics
    Pei-Lu Chiu, National Center for Health Statistics
  • Discussant: John Dixon, Bureau of Labor Statistics
  • Chair: Nancy Bates, U.S. Census Bureau
  • Date/Time: Thursday, June 7, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room 2990, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Public Policy Section


Abstract:

Members of the Interagency Household Survey Nonresponse Group (IHSNG) continue their debate on the appropriate policies to adopt in response to declining response rates in household surveys. Two papers are featured: "Comparison of Outcome Codes Used for Attrition Calculations for Telephone Samples" by Roberta L. Sangster, Charles Mason, and Cassandra Wirth; and "A comparison of Household-level Characteristics in the Late/Difficult and Non-Late/Difficult Interviews in the National Health Interview Survey (NHIS)" by Pei-Lu Chiu, Howard Riddick and Ann Hardy. Attendees are welcome to stay after the WSS session and participate in the regular IHSNG meeting.

The Telephone Point of Purchase Survey (TPOPS) is one of several surveys used by the Bureau of Labor Statistics to create the Consumer Price Index. TPOPS is conducted to create the establishment frame for the pricing of goods and services used for the market basket of goods. It is conducted quarterly over a one-year cycle. A sample for each panel is drawn via random digit dialing. The quarterly target sample size is approximately 24,000 households. Twenty-five percent of this is new RDD sample, along with some supplemental sample added due to attrition. This study focuses on the outcome variables needed to compute an attrition rate for the TPOPS. Outcome codes consist of all major work actions taken on each case that may affect its work progress and final disposition. This includes interviewer actions, supervisors' actions, and programs set up in the instrument (e.g., maximum call attempt rules). Many decisions are made before a final outcome code is assigned to a case. We compare distributions based on what data are included in the outcome codes that are used to calculate attrition. The limitations of the current outcome codes are also discussed.

Recent studies have shown that interviews completed very late in the field period and/or cases that require multiple contacts are significantly different from interviews completed earlier in the interview cycle and/or cases that require fewer contacts. Presumably, these cases reflect "late/difficult" interviews, and they comprise the last few percent of the survey interviews. With tight closeout dates imposed on surveys such as NHIS and increasing reluctance toward surveys by the public, the late/difficult cases may potentially become nonrespondents, and may subsequently affect the estimates. In the 1998 NHIS, 1,197 interviews out of 38,209 interviewed households were completed after the official interviewer closeout date (15 days from the assignment starting date) and identified as "late" interviews; 1,475 interviews that were completed before the closeout date were considered "difficult" interviews because they required more than 9 contacts. The objective of this study is to examine (1) whether there is a difference in household-level characteristics between the late/difficult and non-late/difficult interviews, and further between the late and difficult interviews; (2) whether there are predictive factors associated with these late/difficult interviews; and (3) the impact of these late/difficult cases on the health data estimates if they become non-interviews.

Topic: The Funding Opportunity In Survey Research

  • Organizer: Research Subcommittee of the Federal Committee On Statistical Methodology (Monroe Sirken, NCHS 301-458-4505, mgs2@CDC.gov)
  • Date/Time: Monday, June 11, 2001, 9:30 AM - 4:00 PM   (NOTE SPECIAL TIME)
  • Location: Bureau of Labor Statistics, Conference and Training Center, Rooms 1, 2, and 3, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use First St. NE, entrance to the PSB.
  • SPECIAL NOTE: RSVP by May 18 to Barbara Hetzler (bhetzler@cdc.gov or 301-458-4267).
  • Sponsors: WSS Methodology and Data Collection Sections and AAPOR-DC


Abstract:



In 1998, a consortium of 12 Federal statistical agencies, collaborating with the Methodology, Measurement and Statistics Program, National Science Foundation, initiated a 3-year program that funds basic research in survey methods of potential value to Federal agencies. This Seminar features reports by the principal investigators of four research projects that are being supported by the Funding Opportunity In Survey Research. The four reports are: The Cognitive Basis for Seam Effects In Panel Surveys; A Computer Tool To Critique Survey Questionnaires; Cognitive Issues In Designing Web Surveys; and Small Area Estimation.

There are 3 morning and 3 afternoon sessions. Session 1 describes the "Funding Opportunity In Survey Research" -- its origins, current status, and arrangements. The four research projects are discussed in consecutive sessions 2-5. In each of these sessions, the principal investigators highlight key research findings, and staff of Federal agencies discuss the relevance of the reported research to the work of their respective agencies. In session 6, staff of Federal agencies discuss the future prospects for interagency collaboration in supporting basic survey research. Return to top

Topic: Spatial Modeling of Age, Period and Cohort Effects

  • Speaker: Theodore R. Holford, Yale University and National Cancer Institute
  • Chair: Linda Pickle, National Cancer Institute
  • Day/Time: Monday, June 25, 2001, 11:00 a.m. - 12:30 p.m.
  • Location: Executive Plaza North, conference room J, 6130 Executive Blvd., Rockville, MD. The EPN building is a short walk from the White Flint Metro stop (Red Line) or there is a free shuttle bus available. Call 301-435-7739 for more detailed directions.
  • Sponsor: Public Health and Biostatistics Section


Abstract:

Space and time are two fundamental elements of disease etiology; therefore, effective data analytic methods for descriptive epidemiology should provide simultaneous summaries of disease rate patterns by time and geographic region. One approach to the analysis of spatial-temporal trends focuses on the standardized mortality/morbidity ratio (SMR), which provides a way of adjusting for the effect of age while displaying trends with date of diagnosis or period. The difficulty with this approach is that for many chronic diseases like cancer, generational or cohort effects are usually more important than period effects. Existing age-period-cohort models concentrate on the analysis of rates constructed using equal age and period intervals, and these are extended to the unequal case. In addition, a spatial model for the effects of age, period and cohort is developed, thus providing an approach for obtaining estimable functions of the corresponding effects. Bayes estimates may be obtained based on a conditional autoregressive (CAR) prior that provides nonparametric smoothing for space and time, and the calculations make use of Markov Chain Monte Carlo (MCMC) techniques. Alternative approaches to summarizing the time trends that result from fitting these models will be described and illustrated using data on U.S. cancer mortality by state. Return to top
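For readers unfamiliar with why only "estimable functions" of the effects are recoverable, the standard age-period-cohort log-linear model (textbook background, not the speaker's exact formulation) for the rate in age group a and period p is

\[
\log \lambda_{ap} = \mu + \alpha_a + \pi_p + \gamma_c , \qquad c = p - a ,
\]

and because cohort c is a linear function of age and period, the three linear trends are confounded, so only certain contrasts (estimable functions) of the effects are identified.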

Title: The Federal Reserve Board's 1998 Survey of Small Business Finances: Methodological Issues and Cost Considerations

  • Speaker: John D. Wolken, Federal Reserve Board
  • Discussant: Brenda Cox, Mathematica Policy Research
  • Chair: Arthur Kennickell, Federal Reserve Board
  • Date/Time: Wednesday, June 27, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE; Conference Center Room 8. Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station).
  • Sponsor: Economics Section


Abstract:

The Federal Reserve Board has just completed the third Survey of Small Business Finances. This survey provides data on issues of direct importance to the Board and helps to fill a void in the availability of financial data for small firms. Indeed, the data from this survey are probably the most extensive available on small U.S.-owned businesses. The survey collects an extensive set of information from a nationally representative sample of small businesses, including firm and owner demographics, detailed information on the sources and types of financial services used by businesses, and an abbreviated income statement and balance sheet. Collecting these types of data for small businesses has proven to be quite challenging. The businesses surveyed tend to be quite small on average (fewer than 5 employees), are often not very financially sophisticated, and, importantly, participate in the survey on a voluntary basis.

The talk will briefly describe the motivations and possible uses of the survey data, and then describe the implementation of the 1998 survey. Topics covered will include a description of the sampling frame, issues associated with the use of the frame, and the sampling design necessitated by a requirement to over-represent minority owned businesses. Information on the results of the use of incentives for a subset of the interviewed firms will be reported, as well as some practical considerations of data collection associated with data cleaning and imputation. The talk will conclude with a discussion of how the data are made available to the general research community in light of concerns about confidentiality. Return to top

Title: Revising Statistical Standards: An Exercise in Quality Improvement

  • Speaker: Marilyn M. McMillen, The National Center for Education Statistics
  • Discussant: Fritz Scheuren, Urban Institute
  • Chair: Amrut Champaneri, Bureau of Transportation Statistics, Department of Transportation
  • Date/time: Thursday, July 12, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Room 7, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: Quality Assurance Section


Abstract:

The National Center for Education Statistics collects and disseminates data on all aspects of American education, from pre-school through adult education. NCES Statistical Standards provide a unifying framework of best practices, and minimum requirements, to help ensure high-quality data and reports. NCES has had written statistical standards since 1987. The set currently in use was published in 1992. The Center is committed to periodic evaluation of the implementation of the standards and periodic review of the standards' operational feasibility. NCES is currently engaged in a multi-stage review process leading to the revision of the 1992 standards. The Statistical Standards Program is leading an agency-wide project to produce an updated set of statistical standards. This project includes 15 working groups and over one half of the NCES work force.

The topics covered range from survey planning and management, file formats, data documentation, graphic displays and product dissemination to data confidentiality, disclosure risk control, nonresponse, imputations, multiple comparisons, and variance estimation. The process we are following includes three levels of internal review, two rounds of external review, and finally an independent review by an expert panel convened by the National Institute of Statistical Sciences.

The efforts of the working groups have uncovered a number of difficult issues in areas such as disclosure risk control, the measurement and use of response rates, the analysis of nonresponse bias, and the imputation of data from longitudinal studies. This presentation will outline the steps involved in the standards revision process and discuss the issues and proposed resolutions for some of the data quality issues uncovered by the revision process. Return to top

WSS Seminar and Julius Shiskin Award Presentation

Topic: Time Series Decomposition and Seasonal Adjustment

  • Speaker: George C. Tiao, The University of Chicago
  • Chair: David Findley, U.S. Bureau of the Census
  • Date/Time: July 16, 2001; 12:30 PM - 2:00 PM
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE, Conference Center Rooms 1 and 2. Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station).
  • Sponsor: Economics Section and Shiskin Award Committee


Abstract:

Economic time series often exhibit a strong seasonal behavior. It is common practice to remove the seasonal pattern from the data, as this may make it easier to study the 'underlying' trend movement. In this presentation, we present a concise discussion of two approaches to seasonal adjustment of economic time series: an empirically developed filtering method used by the U.S. Bureau of the Census, Findley et al. (1998); and an ARIMA-model-based canonical decomposition method, Hillmer and Tiao (1982) and Maravall (1995). Empirical examples will be presented to highlight the similarities and distinctions between these two methods. We will conclude by discussing recent work on fusing the two approaches together.
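To make the filtering idea concrete, here is a minimal sketch of classical additive decomposition with a simple moving average; it conveys only the spirit of filter-based adjustment and is in no way the actual X-11/X-12-ARIMA or SEATS algorithm.

```python
import numpy as np

def seasonally_adjust(y, period=12):
    """Crude additive adjustment: estimate the trend with a moving
    average, average the detrended values by calendar month to get a
    seasonal pattern, and subtract that pattern from the series."""
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    seasonal = np.array([detrended[m::period].mean() for m in range(period)])
    seasonal -= seasonal.mean()                # constrain to sum to zero
    return y - np.tile(seasonal, len(y) // period + 1)[: len(y)]

t = np.arange(120)                             # ten years of monthly data
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 1, 120)
adjusted = seasonally_adjust(y)
```

The model-based alternative instead fits an ARIMA model, decomposes it canonically into seasonal and nonseasonal components, and derives the filters from the fitted model rather than fixing them in advance.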

Shiskin Award:

George C. Tiao, the W. Allen Wallis Professor of Econometrics and Statistics of the Graduate School of Business at the University of Chicago, is the recipient of the 2001 Julius Shiskin Award for Economic Statistics sponsored by the National Association for Business Economics, the Washington Statistical Society, and the Business and Economic Statistics Section of the American Statistical Association. The award recognizes Tiao's research and leadership contributions to the methodological foundations of the first model-based approach to seasonal adjustment that has been adopted by several national statistical offices and central banks.

Tiao received his Ph.D. in Economics from the University of Wisconsin in Madison and then joined the Statistics and Business faculties of that university, where he collaborated on Bayesian statistics and time series analysis with George Box and others. While teaching a short course on time series modeling for DuPont, he was asked about the connection between time series models and the X-11 seasonal adjustment program developed by the Census Bureau under Julius Shiskin's leadership. This led Tiao to make contact with Shiskin and to begin a program of research on possible roles for time series models in seasonal adjustment that has been underway for two decades, carried out in part with Ph.D. students, some of whom have become leading researchers in the field of seasonal adjustment. His work with Steven Hillmer showed how to go from an autoregressive moving average model to a seasonal adjustment in a systematic way that has been flexibly implemented in the TRAMO/SEATS program of Agustin Maravall and Victor Gomez, software that is becoming widely used in Europe and is the object of much attention at statistical offices in the U.S.

Please join the Washington Statistical Society on July 16, 2001, at 12:30 p.m. to honor George Tiao as we present the award to him and celebrate at a reception following the presentation. Return to top

Title: The Data Web Project: Confidentiality Issues

  • Speakers: Cavan Capps, U.S. Census Bureau, and Robert Chapman, Centers for Disease Control
  • Discussant: TBA
  • Chair: Virginia de Wolf, Committee on National Statistics, National Academy of Sciences
  • Date/Time: Tuesday, July 17, 2001, 12:30 - 1:45 p.m. (NOTE SPECIAL TIME)
  • Place: BLS Conference Center, Rooms 7 & 8, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section


Abstract:

The Data Web is a project that is networking databases from all over the World Wide Web into a single dispersed database, creating in effect the infrastructure for a virtual data library spanning multiple municipal, state, and federal agencies. This virtual data library includes data from many subject domains, including but not limited to economic, demographic, labor, health, crime, transportation, education, and agricultural data. The online "reference librarian" tool, DataFERRETT, provides a means to search the multiple data sources over the web and to manipulate the data using standard spreadsheet-like tools, allowing analysts to preview the data before subsetting and downloading or to quickly create descriptive analyses for reports. Survey data, census data, administrative data from national and local sources, and data that are aggregated from these sources are included.

This Internet technology creates new opportunities for access to locally maintained administrative data as they are created by local government processes. For example, real-time access to community crime statistics on muggings in a given neighborhood would be important to affected citizens. Crop statistics that integrate real-time weather data and at the same time present a time series of crop production for a given area could be made available to analysts as local and national administrative data sources are posted on the World Wide Web. Advances in treating health epidemics might be made if real-time access to health treatment administrative data were available.

Internet technology has made each of these scenarios increasingly practical. To make such data available in any meaningful way, however, confidentiality is a fundamental issue that must be deliberately considered and effectively protected. We will attempt to describe some of the technologies available to implement confidentiality filters in such a distributed environment and outline some of the policy dilemmas that we expect to face as data on the Internet mature.

About the speakers: The Data Web project is being jointly developed by the Centers for Disease Control in Atlanta, Georgia, and the U.S. Census Bureau in Washington, D.C. Robert Chapman is Chief of the Programming Branch responsible for developing the CDC Wonder data access system in the Division of Public Health Surveillance and Informatics, Epidemiology Program Office, Centers for Disease Control. Cavan Capps is Chief of the Survey Modernization Programming Branch, Demographic Surveys Branch, U.S. Census Bureau. Return to top

Title: The Statistical Power of National Data to Evaluate Welfare Reform

  • Speaker: John Adams, Rand
  • Co-author: Joe Hotz of UCLA
  • Chair: Shelly Ver Ploeg, National Academy of Sciences
  • Date/Time: Thursday, July 26, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Room 8, 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Methodology Section


Abstract:

Changes in the Aid to Families with Dependent Children (AFDC) program that have occurred over the past decade (with the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 and pre-1996 waivers granted to states to alter their AFDC programs) have spawned great interest in the effects of the reforms. A number of recent studies have used the cross-state variation in waivers granted to states to assess the extent to which these particular reforms could account for the decline in the AFDC caseloads that occurred during the 1990s, as well as trends in labor force participation, earnings, and poverty rates among welfare-prone groups in the population. This approach takes as the unit of analysis a state in a given year. A key question that must be addressed in evaluating the usefulness of such analyses is their statistical power to detect the effect of a feature of a state's welfare policy (or any other state-specific provision) on a particular outcome being analyzed.

This paper examines the statistical power of analyses to detect the effects of these indicators of state-level welfare policy reforms. Mirroring the existing literature, we examine the effects of state-level AFDC waivers on several different outcome measures with data for the pre-PRWORA era. We find that while the data sets used in this literature have reasonably good levels of power for detecting the overall effects of reform on low variance outcomes like the AFDC participation rates, they do not have sufficient power for detecting the overall effects of reform on higher variance outcomes like employment, earnings and family income. Further, power for detecting the effects for specific components of reform, such as time limits, is quite low, even for the Current Population Survey, the largest national survey available. Return to top
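The power question can be illustrated with a back-of-the-envelope normal-approximation calculation; the effect sizes and standard errors below are made-up inputs for illustration, not figures from the paper.

```python
from scipy.stats import norm

def power_two_sided(effect, se, alpha=0.05):
    """Power of a two-sided z-test to detect a true effect of the given
    size when the policy-effect estimator has the given standard error."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect / se - z) + norm.cdf(-effect / se - z)

# The same policy effect is much harder to detect for a high-variance
# outcome (e.g., earnings) than for a low-variance one (participation).
print(power_two_sided(effect=0.02, se=0.015))   # noisy outcome: low power
print(power_two_sided(effect=0.02, se=0.005))   # precise outcome: high power
```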

Topic: Converting Your Technical Paper into the Best Presentation of Your Life!

  • Speaker: David DesJardins
    Statistical Research Division
    U.S. Bureau of the Census
  • Location: U.S. Bureau of Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Date/Time: August 1, 2001, 10:30 a.m. - 12:00 p.m.
  • Sponsor: U.S. Bureau of Census, Statistical Research Division


Abstract:

This seminar will focus on a number of fundamental tools and techniques for presenting technical data. It will review the ten key factors of a good presentation (the "Do's and the Don'ts") and will then highlight seven key methods of presenting complex technical data (using examples of Economic, Demographic, and Industry data).

This seminar is an outgrowth of the special 4-day/4-part hands-on, practical application course currently taught to Census Bureau employees by Mr. DesJardins. The content of many current "good presentation" courses ignores the enhanced communication capabilities of "live" graphs -- as well as the use of new computer presentation tools like PowerPoint. These courses could just as well have been taught a century ago. Students who take the 4-part course currently taught by the author first learn the actual tools (SAS Insight & JMP graphics software, and PowerPoint software). They then use these graphs in conjunction with a number of key presentation techniques to effectively communicate highly technical data. Graphs, the natural language of mankind, communicate concepts across many scientific disciplines and often make even complex statistical concepts easy to understand.

Graphical Exploratory Data Analysis (EDA) techniques have proven to be excellent not only at editing/analyzing Census data, but also at quickly finding hidden methodology problems. However, the next, fundamental step is to move these EDA graphs from our day-to-day data analyses into the mainstream -- to make our data highly visible and clearly understandable to the general public. Accordingly, as part of his course, the author has also generated a "cookbook" of easily learned, quickly generated (point-and-click), very powerful graphs that can then be easily inserted into technical papers and PowerPoint presentations.

This seminar is physically accessible to persons with disabilities. Requests for sign language interpretations or other auxiliary aids should be directed to Barbara Palumbo (SRD) (301) 457-4974 (v), (301) 457-3675 (TDD). Return to top

Title: Evaluating Welfare Reform in an Era of Transition. Report of the Committee on National Statistics Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs

  • Speaker: Robert Moffitt, Johns Hopkins University and chair of the panel
  • Chair: Shelly Ver Ploeg, National Research Council
  • Discussant: TBD
  • Date/Time: Thursday, September 13, 2001, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Policy Section


Abstract:

Welfare reform in the 1990s fundamentally altered safety net programs for the poor, and there is great interest in knowing the consequences of these far-reaching program changes. The National Academy of Sciences' Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs was charged with assessing the strengths and limitations of evaluation methods and data for measuring the effects of changes in welfare policy. The panel, which was sponsored by the Office of the Assistant Secretary for Planning and Evaluation in the Department of Health and Human Services, recently issued its final report entitled Evaluating Welfare Reform in an Era of Transition.

The report notes that there has been a tremendous and unprecedented volume of research on the effects of welfare reform, with support from private foundations and the federal government. However, the panel concludes that there are serious gaps in what is known about the effects of the reform, and that methodological and data limitations hamper the ability to conduct needed evaluations. The report gives the panel's recommendations for improving the evaluation infrastructure so that important questions about the effects of reform can be addressed. This seminar will include an overview and discussion of the findings and recommendations of the report. Return to top

Title: New Concepts in Test Equating and Linking

  • Speaker: Professor R. Darrell Bock, University of Chicago
  • Time: 11:00-12:00 pm, Thursday, September 13, 2001
  • Location: Funger Hall 321, 2201 G Street NW. Foggy Bottom metro stop on the blue and orange line.
  • Sponsor: The George Washington University, Department of Statistics


Abstract:

Test equating is a critical step in the development and maintenance of standardized tests. It is required in many different contexts: random parallel forms equating, vertical equating of forms for use in successive age groups, equating congeneric tests (i.e., tests measuring the same underlying factor), and linking of tests that are not strictly congeneric (i.e., predicting scores on one test from scores on one or more other tests). In classical test theory, equating is limited to the equipercentile method applied to test scores from randomly equivalent groups of examinees; prediction requires calibrating data from groups of examinees who have taken both tests in counter-balanced order. In modern item response theory, equating can be extended to non-equivalent groups when the test forms have some items in common; prediction can be calibrated at the item level rather than the score level. Discussion of these topics will be illustrated by results from the equating of the paper-and-pencil version of the Armed Services Vocational Aptitude Battery (ASVAB) and the prediction of scores from the National Assessment of Educational Progress (NAEP) linked to state educational achievement test scores.
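A minimal sketch of the classical equipercentile method mentioned above, assuming score samples from randomly equivalent groups; operational equating smooths and interpolates far more carefully than this.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_new):
    """Map a form-X score to the form-Y score with the same percentile
    rank, using the two empirical score distributions."""
    pct = np.searchsorted(np.sort(scores_x), x_new, side="right") / len(scores_x)
    return np.quantile(scores_y, pct)

rng = np.random.default_rng(0)
form_x = rng.normal(50, 10, 1000)   # simulated scores on form X
form_y = rng.normal(55, 12, 1000)   # simulated scores on form Y
print(equipercentile_equate(form_x, form_y, 60.0))
```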

Note: For a complete list of upcoming seminars check the dept's seminar web site: http://www.gwu.edu/~stat/seminars/Fall2001.htm. The campus map is at: http://www.gwu.edu/Map/. The contact person is Reza Modarres at Reza@gwu.edu or 202-994-6359. Return to top

Title: A new alternative to Bayes factors: the resolution of Lindley's paradox through the posterior distribution of the likelihood ratio
  • Speaker: Professor Murray Aitkin, Department of Statistics, University of Newcastle and Education Statistics Services Institute
  • Time: 11:00-12:00 pm, Friday, September 28, 2001
  • Location: Funger Hall 222, 2201 G Street NW. Foggy Bottom metro stop on the blue and orange line.
  • Sponsor: The George Washington University, Department of Statistics


Abstract:

The Lindley paradox (correctly formulated by Bartlett) is the basis for the claim that Bayes factors (or the Schwarz BIC criterion), unlike P-values, can provide strong support for a point null hypothesis against a general alternative hypothesis. A difficulty of Bayes factors is well known: that as the sample size increases they can give strong support to any point null hypothesis, regardless of the data or the hypothesis. This talk points to a basic inconsistency between Bayes factors and posterior distributions of the parameter; the latter do not show the paradoxical feature of the former. By transforming the posterior distribution from the parameter to the likelihood ratio, the Bayes conclusions become consistent with P-value conclusions, though the latter need reformulation as measures of strength of evidence. The posterior distribution of the likelihood ratio can be extended to general models with nuisance parameters, providing a general theory of Bayesian point null hypothesis testing which does not suffer from the Lindley paradox and gives conclusions consistent with P-value conclusions, when the latter are correctly reformulated.
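A small simulation makes the contrast concrete for a normal mean with known variance and point null theta = 0; the flat prior, the fixed borderline z-statistic, and the BIC approximation to the Bayes factor are illustrative choices, not the speaker's exact development.

```python
import numpy as np

rng = np.random.default_rng(1)
z = 1.96                            # hold the z-statistic at borderline
for n in [10, 100, 10_000]:
    xbar = z / np.sqrt(n)
    # Schwarz/BIC approximation to 2*log(Bayes factor) for H1 over H0:
    # it drifts toward favoring the point null as n grows (Lindley).
    bic_2logBF = z**2 - np.log(n)
    # Posterior of theta under a flat prior, and the induced posterior
    # distribution of the likelihood ratio L(theta)/L(0):
    theta = rng.normal(xbar, 1 / np.sqrt(n), 100_000)
    log_lr = 0.5 * n * (xbar**2 - (xbar - theta) ** 2)
    print(n, round(bic_2logBF, 2), np.mean(log_lr > 0))
```

The BIC-style Bayes factor swings toward the null as n grows, while the posterior probability that the likelihood ratio favors the alternative stays near 0.95 at fixed z, which is the sense in which the posterior of the likelihood ratio agrees with the P-value.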

Note: For a complete list of upcoming seminars check the dept's seminar web site: http://www.gwu.edu/~stat/seminars/Fall2001.htm. The campus map is at: http://www.gwu.edu/Map/. The contact person is Reza Modarres at Reza@gwu.edu or 202-994-6359. Return to top

Title: Informing America's Policy on Illegal Drugs: What We Don't Know Keeps Hurting Us. Report of the Committee on Data and Research for Policy on Illegal Drugs

  • Speaker: Charles F. Manski, Northwestern University and chair of the panel
  • Chair: Shelly Ver Ploeg, National Research Council
  • Discussant: TBA
  • Date/Time: October 17, 2001,12:30 - 1:30 p.m.
  • Location: Bureau of Labor Statistics, Conference Center, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, DC. Please use the First St., NE, entrance to the PSB.
  • Sponsor: WSS Policy Section


Abstract:

The consumption of illegal drugs and the design of efforts to control drug use pose some of the most difficult and divisive problems confronting the American public, and adequate data and research are essential to judge the effectiveness of the nation's efforts to cope with them. The National Research Council's Committee on Data and Research for Policy on Illegal Drugs was charged with assessing data and research sources that support drug policy analysis, identifying new data and research needs, and exploring ways to integrate theory and research findings to increase understanding of drug use and the operation of illegal drug markets. The committee, which was sponsored by the Office of National Drug Control Policy, Executive Office of the President, recently issued its final report entitled Informing America's Policy on Illegal Drugs: What We Don't Know Keeps Hurting Us.

The report observes that although the nation has spent hundreds of billions of dollars throughout the 1990s to implement its drug policy, there has been a lack of investment in programs of data collection and research that would enable evaluation of this investment. The committee found that the country is particularly hampered in its ability to evaluate the effectiveness of law enforcement programs and strategies that receive the bulk of government resources to control the sale and use of illegal drugs. The report provides recommendations for building a data and research infrastructure to support new programs of data collection and research on enforcement and for improving the scientific underpinnings of drug control policy, generally. This seminar will include an overview and discussion of the findings and recommendations of the report. Return to top

Title: Likelihood Analysis Of Neural Network Models

  • Speaker: Murray Aitkin, ESSI/AIR
  • Discussant: David Banks, BTS/DOT
  • Moderator: Charlie Hallahan, ERS/USDA
  • Place: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Room 7, 2 Massachusetts Ave., NE, Washington, DC. Please use the First Street entrance to the PSB.
  • Date: October 23, 2001, 12:30 - 2:00 p.m.
  • Sponsor: Statistical Computing Section
  • Talk to be video-conferenced.


Abstract:

The multilayer perceptron (MLP) is a type of artificial neural network (ANN) widely used in computer science and engineering for object recognition, discrimination and classification, and process monitoring and control. "Training" of the network is not a straightforward optimization problem.

This talk examines features of the implicit network model which contribute to the optimization difficulties. We examine the likelihood surface of the model and describe its singularities, which create the difficulties for optimization routines. The form of the model allows a simple iteratively weighted least squares algorithm to be used for maximum likelihood analysis, although the multiple singularities in the likelihood require a large number of random starting points for the algorithm to give the global maximum in even simple problems.

We reformulate the model as an explicit latent variable model. This has the same parameters and mean structure as the MLP but a different variance structure. The likelihood for this model does not have the singularities of the MLP though it may have local maxima. An EM algorithm can be used for ML in this model, which is a finite mixture model, equivalent to a special form of the "mixture of experts" model.

(co-author Rob Foxall, University of East Anglia) Return to top
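To illustrate why a large number of random starting points is needed on a multimodal likelihood surface, here is a toy sketch: direct maximum likelihood for a single-hidden-node regression "network" from many starts. It illustrates the multimodality issue only and is not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 200)
y = np.tanh(2 * x - 1) + rng.normal(0, 0.1, 200)   # one-hidden-node truth

def neg_log_lik(p):
    """Gaussian negative log-likelihood (up to constants, noise variance
    profiled out) of the model y = a * tanh(w * x + b) + error."""
    a, w, b = p
    resid = y - a * np.tanh(w * x + b)
    return 0.5 * len(y) * np.log(np.mean(resid ** 2))

# Many random starts; different starts land in different local optima,
# so we keep the best solution found.
fits = [minimize(neg_log_lik, rng.normal(0, 2, 3)) for _ in range(30)]
best = min(fits, key=lambda f: f.fun)
print(best.x, best.fun)
```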

Title: The Fisher Information on a Location Parameter Under Additive and Multiplicative Perturbations

  • Speaker: Abram Kagan, University of Maryland
  • Time: Thursday, November 1 , 2001, 3:30 pm
  • Place: Room 1313, Mathematics Building, University of Maryland College Park. For directions, please visit the Mathematics Web Site: http://www.math.umd.edu/dept/contact.html
  • Sponsor: University of Maryland, Statistics Program, Department of Mathematics


Abstract:

Let the distribution of a random variable X depend on a location parameter and let U be an ancillary random element. The behavior of the Fisher information I(X+Y, U) and I(X/Y, U) on the location parameter contained in (X+Y, U) and (X/Y, U), respectively, is studied for a random variable Y independent of (X, U). A resonance-type phenomenon, in which I(X+tY) is not monotone decreasing in |t|, will also be discussed.
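As classical background on the additive case (a standard result, not taken from the abstract), Stam's inequality bounds the Fisher information of a sum of independent random variables:

\[
\frac{1}{I(X+Y)} \;\ge\; \frac{1}{I(X)} + \frac{1}{I(Y)} ,
\]

with equality when X and Y are normal, so convolution can only reduce location information; the bound does not, however, force I(X+tY) to decrease monotonically in |t|, which is what makes the resonance phenomenon possible.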

Return to top

Title: A new alternative to Bayes factors: the resolution of Lindley's paradox through the posterior distribution of the likelihood ratio

  • Speaker: Professor Murray Aitkin, Department of Statistics, University of Newcastle and Education Statistics Services Institute
  • Time: 11:00-12:00 pm, Friday, November 2, 2001
  • Location: Funger Hall 222, 2201 G Street NW. Foggy Bottom metro stop on the blue and orange line.
  • Sponsor: The George Washington University, Department of Statistics


Abstract:

The Lindley paradox (correctly formulated by Bartlett) is the basis for the claim that Bayes factors (or the Schwarz BIC criterion), unlike P-values, can provide strong support for a point null hypothesis against a general alternative hypothesis. A difficulty of Bayes factors is well known: that as the sample size increases they can give strong support to any point null hypothesis, regardless of the data or the hypothesis. This talk points to a basic inconsistency between Bayes factors and posterior distributions of the parameter; the latter do not show the paradoxical feature of the former. By transforming the posterior distribution from the parameter to the likelihood ratio, the Bayes conclusions become consistent with P-value conclusions, though the latter need reformulation as measures of strength of evidence. The posterior distribution of the likelihood ratio can be extended to general models with nuisance parameters, providing a general theory of Bayesian point null hypothesis testing which does not suffer from the Lindley paradox and gives conclusions consistent with P-value conclusions, when the latter are correctly reformulated.

Note: For a complete list of upcoming seminars check the dept's seminar web site: http://www.gwu.edu/~stat/seminars/Fall2001.htm. The campus map is at: http://www.gwu.edu/Map/. The contact person is Reza Modarres at Reza@gwu.edu or 202-994-6359. Return to top

Title: Why Mathematics Is Needed to Understand Disease-Gene Associations

  • Speaker: Marek Kimmel, Rice University
  • Date/Time: Wednesday, November 7, 2001, 11:00 a.m.
  • Location: National Cancer Institute, Executive Plaza North, Conference Room G, 6116 Executive Plaza Drive, Rockville, MD.
  • Sponsor: WSS Biostatistics Section
For information contact Susan Winer 301-496-8640 or winers@mail.nih.gov. Return to top

Title: The Empirical Role of Young Versus Old Gestational Ages in the Abortion Debate

  • Speaker: M.K. Moran, M.S.P.H.
  • Co-author: Professor James M. Lepkowski, PhD
  • Chair: Mary Batcher, Ernst & Young
  • Date/Time: Wednesday, November 7, 2001, 12:30 to 2:00 p.m.
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Room 1, 2 Massachusetts Ave., NE, Washington, DC. Please use the First Street entrance to the PSB.
  • Sponsor: WSS Methodology Section
About the Investigators

M.K. Moran earned a B.S. from Virginia Tech in 1985 and an M.S.P.H. in Biostatistics from the University of South Carolina in 1998. M.K. Moran is a statistician/researcher for the federal government and has been published in Social Science Computer Review (2000) and in the British Journal of Nursing (2001). The research described in this talk is not endorsed by or related to the employer of M.K. Moran.

Professor James M. Lepkowski is Senior Research Scientist at the Survey Research Center and Associate Professor of Biostatistics at the University of Michigan. He is also Research Professor at the Joint Program in Survey Methodology at the University of Maryland. He received his Ph.D. from the University of Michigan in 1980. He currently directs the Summer Institute in Survey Research Techniques. His research examines sampling methods for varied populations, methods for compensating for missing data in surveys, estimation strategies for complex sample survey data, telephone sampling methods, and the role of interviewer and respondent behavior on the quality of survey data.



Abstract:

In large numbers, authorities attuned to the differences among gestational ages will infer a pro-choice rule of law from the "youngest" of pregnancies, infer a pro-life rule of law from the "oldest" of pregnancies, and defend the conventional wisdom that one must cross camps at some gestational age in between. No research has exposed, much less challenged, their assumption. If woman-doctor partnerships individually applied their own arbitrary profile for which pregnancies to abort (implying a second profile for non-intervention), then, cued by facets of pregnancies besides race, sex in utero, and confounders, woman-doctor partnerships as a group would necessarily experience different prospects, before and after the gestational age threshold, for labeling a given pregnancy for the intervention to which it qualifies according to profile. Pregnancies younger than the threshold would more or less fit the one or other profile, but pregnancies older than the threshold would in relatively more cases defy everyone's profiles. This talk describes a novel survey in which women and their doctors assort real pregnancies according to how well each pregnancy matches their own ideas on abortion, a survey probing whether an objectively measurable profiling cross-over, above and beyond popularities, occurs. A gestational age "tau" is proposed, representing the last gestational age for which pregnancies profiled for and not for abortion are uniquely (noncentrally) distributed. Tau reflects a threshold in the fit between pregnancies and the woman-doctor subject's personal profiles for ever and never aborting. For hypothesis tests of the authorities' conjecture, values of tau across a wide latitude are equivalent, from slightly past 0 to slightly before 40 weeks pregnant. While we cannot estimate tau directly, we can evaluate parameters redundant with tau, particularly noncentrality.

Disclaimer: The current employers of M.K. Moran and Prof. Lepkowski are not sponsoring, funding, facilitating, reviewing, or in any way responsible for any of the research described. Therefore, the opinions expressed are specifically the authors' and do not necessarily reflect those of any institution. Return to top

Title: Multidimensional Time Markov Processes

  • Speaker: Prof. Tomasz Downarowicz, Wroclaw University of Technology, Poland
  • Date/Time: Thursday, November 8, 2001, 3:30 pm
  • Location: Room 1313, Mathematics Building, University of Maryland College Park. For directions to the Mathematics Building, University of Maryland, please visit the Mathematics web site.
  • Sponsor: University of Maryland, Statistics Program, Department of Mathematics
Abstract:

We will consider Markov chains with multidimensional (d=2 or 3) discrete time, i.e., a semigroup of Markov transition probabilities induced by 2 or 3 commuting ones. We will show that for d=2 there is a full analogy to the Ionescu-Tulcea theorem about the existence of compatible measures on trajectories. For d=3 (and larger) the theorem fails. Moreover, it may happen that there are no trajectories at all! We also discuss a topological analog of transition probability, the so-called transition system. For dimension 1 there is a full correspondence between transition probabilities and transition systems. This correspondence fails already for d=2. Return to top

Title: Modified Maximum Likelihood Estimators Based on Ranked Set Samples

  • Speaker: Dr. Gang Zheng, Office of Biostatistics Research, National Heart, Lung and Blood Institute
  • Time: 11:00-12:00 pm, Friday, November 9, 2001
  • Location: Funger Hall 222, 2201 G Street NW. Foggy Bottom metro stop on the blue and orange line
  • Sponsor: The George Washington University, Department of Statistics



Note: For a complete list of upcoming seminars check the dept's seminar web site: http://www.gwu.edu/~stat/seminars/Fall2001.htm. The campus map is at: http://www.gwu.edu/Map/. The contact person is Reza Modarres at Reza@gwu.edu or 202-994-6359. Return to top

Title: Evaluation of Score Functions to Aid in the 2002 Census of Agriculture Review Process

  • Speaker: Denise Abreu
    USDA/NASS Research and Development Division
    3251 Old Lee Hwy. Suite 305
    Fairfax, VA 22030
    (703) 235-5213 ext. 170
    (703) 235-3386 - Fax
    dabreu@nass.usda.gov
  • Chair: Dan Beckler, USDA/NASS
  • Date/Time: Tuesday, November 13, 2001, 12:30 - 2:00 pm
  • Location: Bureau of Labor Statistics, Postal Square Building (PSB), Conference Center, Conference Room 6, 2 Massachusetts Ave., NE, Washington, DC. Please use the First Street entrance to the PSB.
  • Sponsor: WSS Agriculture and Natural Resources

  • This seminar will be video-conferenced.


Abstract:

The National Agricultural Statistics Service (NASS) has expressed concern about the cost effectiveness of survey editing. This is particularly true since 1997, when NASS assumed responsibility for the census of agriculture, for which hand-editing of all questionnaires is not feasible. Survey estimates can be improved by priority-sorting the records so that the records most likely to contain errors with a significant effect on county estimates are reviewed first. This priority sorting of records for review is accomplished by first applying a score function and then sorting the records. This paper presents an evaluation of three score functions thought to be most suitable for data collected by NASS. The ultimate goal is to reduce the labor-intensive manual review of data without damaging data quality. The results indicate that using a score function is an improvement over the previous way of reviewing census records. Return to top
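The score-and-sort strategy can be sketched in a few lines; the field names, the influence measure, and the particular score function below are hypothetical, not NASS's actual candidates.

```python
import pandas as pd

# Hypothetical census-of-agriculture records: a count of failed edit
# checks (suspicion of error) and a weight (influence on county totals).
records = pd.DataFrame({
    "record_id":  [101, 102, 103, 104],
    "edit_flags": [3, 0, 1, 5],
    "weight":     [120.0, 15.0, 40.0, 200.0],
})

# One candidate score function: suspicion of error times influence, so
# errors likely to move county estimates are reviewed first.
records["score"] = records["edit_flags"] * records["weight"]
review_order = records.sort_values("score", ascending=False)
print(review_order[["record_id", "score"]])
```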

The Morris Hansen Lecture

Title: Election Night Estimation

  • Chair: Joseph Waksberg, Westat
  • Speakers: Warren Mitofsky, Mitofsky International, and Murray Edelman, Voter News Service
  • Discussant: Martin Frankel, Statistics and Computer Information Systems, Baruch College, City University of New York and Abt Associates, Inc.
  • Date/Time: Tuesday, November 13, 2001: 3:30-5:30 pm.
  • Location: The Jefferson Auditorium, USDA South Building, between 12th and 14th Streets on Independence Avenue S.W., Washington DC. The Independence Avenue exit from the Smithsonian METRO stop is at the 12th Street corner of the building, which is also where the handicapped entrance is located. Except for handicapped access, all attendees should enter at the 5th wing, along Independence Avenue. Please bring a photo ID to facilitate gaining access to the building.
  • Sponsors: The Washington Statistical Society, Westat, and the National Agricultural Statistics Service.
  • Reception: The lecture will be followed by a reception from 5:30 to 6:30 p.m. in the patio of the Jamie L. Whitten Building, across Independence Avenue S.W.


Abstract:

Network election estimation has been one of the most visible statistical applications of the last 50 years. Unlike other sampling applications, the accuracy of the estimates is widely known to the public within hours after they are made. While accuracy over the last twenty years has been very good, the mistakes of the 2000 election have called into question the confidence the public and the networks may have had. We will examine the mistakes and the subsequent methodological revisions. We also will review the various methods that have been used over the years, the networks' motivations for broadcasting projections, and the controversies that have ensued. Return to top

Title: Clinical Outcome Prediction in Diffuse Large B-cell Lymphoma via Microarray

  • Speaker: Dr. George Wright, National Cancer Institute
  • Date/Time: Thursday, November 15, 2001, 3:30 pm
  • Location: Room 1313, Mathematics Building, University of Maryland College Park. For directions, please visit the Mathematics web site.
  • Sponsor: University of Maryland, Statistics Program, Department of Mathematics
Abstract:

Diffuse large B-cell lymphoma (DLBCL) is the most common type of lymphoma and also the fifth most common cancer overall. Patients with this disease have also been found to be very heterogeneous in their response to treatment. Through the use of microarray technology it is possible to analyze the genetic expression of a tumor sample to help predict patient outcome, and to do so in a way that is biologically interpretable. In the first part of the talk, a brief introduction to the technology and statistical issues of microarrays will be presented, followed by a description of the method used to analyze the DLBCL samples. Return to top

Topic: From Single-Race Reporting to Multiple-Race Reporting: Using Imputation Methods to Bridge the Transition

  • Speaker: Nathaniel Schenker
    Senior Scientist for Research and Methodology
    National Center for Health Statistics
    (Joint work with Jennifer D. Parker)
  • Date/Time: November 20, 2001, 10:30 - 11:30 a.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Sponsor: U.S. Bureau Of Census, Statistical Research Division


Abstract:

In 1997, the Office of Management and Budget issued a revised standard for the collection of race information within the federal statistical system. One revision in this standard allows individuals to choose more than one race group when responding to federal surveys and other federal data collections. This talk will explore methods that impute single-race categories for those who have given multiple-race responses. Such imputations would be useful when it is desired to conduct analyses involving only single-race categories, such as when trends over time are being examined by race group so that data collected under the old and new standards are being combined.

The National Health Interview Survey has allowed multiple-race responses for several years, while also asking respondents to specify one race as their "main" race. Exploratory analyses of data from the survey suggest that imputation methods that use demographic and contextual covariate information to predict the main race can have advantages with respect to lower bias and improved variance estimation compared to simpler methods discussed by the Office of Management and Budget. It also appears, however, that the relationships between the main race and covariates might be changing over time. Thus, caution should be exercised if an imputation model fitted to data from one time period is to be applied to data from another time period.
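A minimal sketch of the kind of covariate-based imputation described above, using a multinomial logistic model to assign a single "main" race; every variable name and category here is hypothetical, and the NCHS methodology is not implied to be this simple.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: multiple-race respondents who also named
# a single "main" race, plus demographic/contextual covariates.
train = pd.DataFrame({
    "age":       [25, 40, 33, 58, 19, 45, 30, 62],
    "region":    [1, 2, 1, 3, 2, 3, 1, 2],
    "main_race": ["A", "B", "A", "B", "A", "B", "A", "B"],
})

model = LogisticRegression(max_iter=1000)
model.fit(train[["age", "region"]], train["main_race"])

# Impute a single-race category for respondents with no "main" response.
new = pd.DataFrame({"age": [37, 22], "region": [3, 1]})
print(model.predict(new))
```

The caution at the end of the abstract corresponds to refitting such a model by time period rather than assuming the fitted relationships are stable.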

Note: This program is physically accessible to persons with disabilities. For interpreting services, contact Yvonne Moore at TTY 301-457-2540 or 301-457-2853 (voice mail) or Sherry.Y.Moore@census.gov. Return to top

Topic: An Exploratory Data Analysis Retrospective

  • Speaker: Dr. James Filliben
    National Institute of Standards and Technology
  • Date/Time: November 27, 2001, 10:45 - 11:45 a.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Sponsor: U.S. Bureau Of Census, Statistical Research Division


Abstract:

The purpose of this seminar is to provide a retrospective of EDA - with special side comments about John Tukey. I was fortunate enough to have received my PhD under the Father of EDA, John Tukey. His former students recently held a special celebration in his honor in Denver, Colorado, as part of the ASA CO-WY Fall Chapter meeting. The symposium was entitled Exploratory Data Analysis in the 21st Century: Research in Modern EDA: A Symposium in Honor of John W. Tukey. As part of this seminar I will highlight sections of my presentation at Tukey's symposium.

My talk has three parts. In the first part, I share a collection of personal reflections of John Tukey from the vantage point of a graduate student working under him in the early Princeton years.

In the second part, I discuss core principles of data analysis that John practiced, taught, and instilled into every one of his "disciples", and which serve daily as the backdrop for the what, why, and how of exploratory data analysis.

In part three of my talk, I put these general principles into action. I discuss the "classical" approach (taught as standard fare in many university courses), and I contrast it with a more Tukey-esque approach -- maximizing insight and squeezing the data dry of all relevant information -- by the judicious application of one after another of Tukey's principles.

Note: For many years I have been a close colleague of Dave DesJardins of the Census Bureau. We first met when Dave was a student in my (Tukey inspired) DATAPLOT EDA software package class (for which I was awarded the Department of Commerce's Gold Medal). As a special feature of this retrospective, I have also agreed to provide a Tukey-esque analysis of a Bureau data set.

Note: This seminar is also being sponsored by the Bureau's Graphics Users group.

Note: This program is physically accessible to persons with disabilities. For interpreting services, contact Yvonne Moore at TTY 301-457-2540 or 301-457-2853 (voice mail) or Sherry.Y.Moore@census.gov. Return to top

Topic: Residential Mobility and Census Coverage

  • Date/Time: November 28, 2001, 10:30 a.m. - Noon
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Sponsor: U.S. Bureau Of Census, Statistical Research Division

  • Speakers:
    • What We Know About Residential Mobility From Surveys
      Jason Schacter (Population Division)

    • The General Method Of The Ethnographic Social Network Tracing Evaluation
      Leslie Ann Brownrigg (Statistical Research Division)

    • The Impact Of Mobility On Coverage In Census 2000
      Among Interacting Social Networks Of:
      1. Haitian Migrant And Seasonal Farm Workers Based In South Florida
        L. Marcelin (School of Medicine, University of Miami)

      2. Mexican Former Migrant Workers In The Midwest
        Alicia Chavira-Prado (University of Illinois)

      3. Young Adult Seasonal Park Workers
        Nancy Murray (Hatpin Communications)

      4. Commercial Fishermen Docking On The South Atlantic
        Kathi Kitner (National Marine Fisheries Service, NOAA)

      5. An American Indian Men's Society In Oklahoma
        Brian Gilley (North Central College, Naperville, IL)

      6. Homeless American Indians In The San Francisco Bay Area
        Susan Lobo (Intertribal Friendship House/ University of Arizona)


Abstract:

This session summarizes information on residential mobility from the Current Population Survey, ethnographic tracing of interacting social networks that include highly mobile people in the context of the 2000 Census, and the search for matching census records.

Note: This program is physically accessible to persons with disabilities. For interpreting services, contact Yvonne Moore at TTY 301-457-2540 or 301-457-2853 (voice mail) or Sherry.Y.Moore@census.gov. Return to top

Title: Urban Heat Island Effect in the Greater Washington Metropolitan Area

  • Speaker: Dr. Ivan Cheung, Department of Geography, The George Washington University
  • Time: 3:00-4:00 pm, Thursday, November 30, 2001
  • Location: Funger Hall 321, 2201 G Street NW. Foggy Bottom metro stop on the blue and orange line.
  • Sponsor: The George Washington University, Department of Statistics


Abstract:

The Greater Washington Metropolitan Area has grown tremendously in recent decades. As in other rapidly growing metropolitan areas, sprawl is most evident in the suburban counties of Virginia and Maryland (such as Prince George's, Montgomery, Frederick, and Loudoun counties). Large tracts of forests and pastures have been cleared for roads, office buildings, shopping malls, and residential developments. Using a combined network of NCDC and Automated Weather Source (AWS) 4WIND weather observation sites, the urban heat island morphology is studied based upon hourly and daily meteorological observations. Using nocturnal minimum temperature, a typical "cliff-plateau-peak-plateau-cliff" morphology traverses the area from west to east. The striking emergence of a secondary peak in Tysons Corner (Fairfax County) correlates with the drastic land surface changes in that locality. To further evaluate the thermal effects of urban sprawl in Washington, land surface characteristics are established using remotely sensed information obtained from the Landsat 7 ETM+ sensor. Normalized difference vegetation indices (NDVI) are computed using the band 3 and band 4 signatures, whereas surface temperatures are estimated from band 6. The former characterizes the density of vegetative cover at a 30m x 30m spatial resolution. This information is then combined with the 1998 Mid-Atlantic Regional Science Application Center (RESAC) landcover map and Loudoun County Geographic Information System (LOGIS) dataset to assess the thermal responses at the observation sites.
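For reference, the NDVI computation from ETM+ band 3 (red) and band 4 (near infrared) is a normalized band ratio; the tiny arrays below stand in for real calibrated 30m reflectance rasters.

```python
import numpy as np

# Stand-in reflectance grids for Landsat 7 ETM+ band 3 (red) and
# band 4 (near infrared).
red = np.array([[0.10, 0.25], [0.08, 0.30]])
nir = np.array([[0.40, 0.30], [0.45, 0.32]])

# NDVI = (NIR - red) / (NIR + red): dense vegetation pushes NDVI toward
# 1, while bare or built-up surfaces sit near 0 or below.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```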

Note: For a complete list of upcoming seminars check the dept's seminar web site: http://www.gwu.edu/~stat/seminars/Fall2001.htm. The campus map is at: http://www.gwu.edu/Map/. The contact person is Reza Modarres at Reza@gwu.edu or 202-994-6359. Return to top

Topic: Bioinformatics for HIV Genomics

  • Speaker: Professor Francoise Seillier-Moiseiwitsch, Director, Bioinformatics Research Center, UMBC
  • Time: Thursday, December 6th, 2001, 3:30 pm
  • Place: Room 1313, Mathematics Building, University of Maryland College Park. For directions, please visit the Mathematics Web Site: http://www.math.umd.edu/dept/contact.html
  • Sponsor: University of Maryland, Statistics Program, Department of Mathematics


Abstract:

The inefficiency of the replication process in HIV, as in any retrovirus, gives rise to many variants. The observed variability reflects both viability of the mutant and selection pressure from the immune system. This talk will review some recently developed methodology to study various aspects of the molecular evolution of HIV. One of the central themes is to quantify heterogeneity and to compare subgroups. Another focus is to detect correlated mutations and to incorporate them into phylogenetic reconstructions. Finally, it is becoming increasingly important to identify the link between the sequence information and some specific biological characteristic. Return to top

Topic: Outlier Selection for RegARIMA Models

  • Speakers:
    Kathleen M. McDonald-Johnson, U.S. Census Bureau
    Catherine C. Hood, U.S. Census Bureau
  • Discussant: Stuart Scott, Bureau of Labor Statistics
  • Chair: Linda Atkinson, Economic Research Service, USDA
  • Date/Time: December 19, 2001; 12:30 PM - 2:00 PM
  • Location: Bureau of Labor Statistics, Conference Center Room 1, Postal Square Building (PSB), 2 Massachusetts Ave. NE, Washington, D.C. Please use the First St., NE, entrance to the PSB.
  • Sponsor: Economics Section


Abstract:

In most data applications, statisticians must identify and estimate outlier effects. When doing seasonal adjustment, we are concerned that outliers may interfere with estimation of seasonal effects. By removing outlier effects, we hope to produce the best possible seasonal adjustment. The autocorrelation structure of time series differs from that of other types of data, so the outlier selection techniques also must be different. Using a large sample of economic time series from the U.S. Census Bureau, we fit regARIMA models (regression models with ARIMA errors) to the data with the X-12-ARIMA seasonal adjustment program. Our research simulated production as we added data one month at a time, refitting the regARIMA models for each run. We looked at the performance of automatic outlier identification when we raised or lowered the critical value, and we compared that to visual outlier selection methods.
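The role of the critical value can be seen in a stripped-down additive-outlier search: compute a t-like statistic for an outlier dummy at each time point and flag values beyond the threshold. X-12-ARIMA's actual procedure works through the fitted regARIMA structure and is far more elaborate; this is only a caricature of the thresholding step.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0, 1, 120)
y[60] += 6.0                        # plant one additive outlier

def ao_t_stats(series):
    """t-statistic for an additive-outlier dummy at each time point
    under a white-noise model: standardized residual from the mean."""
    resid = series - series.mean()
    return resid / resid.std(ddof=1)

critical = 3.8                      # raising/lowering this changes hits
flags = np.nonzero(np.abs(ao_t_stats(y)) > critical)[0]
print(flags)                        # expect index 60 to be flagged
```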

We expected our visual selection methods to improve on automatic outlier identification, but we concluded that the current automatic identification procedure was generally the best method. Return to top
