Science Publications

Publication Lists

Dr. Isabel E. Allen: UCSF Profile
Dr. Julia Seaman: PubMed Bibliography

Recent Peer-reviewed Research and Clinical Publications


Tutorials for researchers

A series of short papers, part of a continuing outreach effort to introduce researchers in the quality field to new statistical techniques for data analysis, emphasizing the availability of extremely large databases. These are published a few times a year in ASQ's Quality Progress [https://asq.org/quality-progress/].

Survey Setup

Know what questions to ask when reading and interpreting survey results.
Does the sample represent the population it is summarizing and is it clear what population was included? Was the data collected with a random sample or a convenience sample? How are the percentages calculated?
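How the percentages are calculated is easy to illustrate: the same count yields very different percentages depending on the denominator. A toy example with invented survey tallies:

```python
# Hypothetical survey tallies: the choice of denominator changes the "percentage".
sampled = 1000          # people invited to take the survey
responded = 400         # people who actually answered
said_yes = 220          # respondents answering "yes"

pct_of_respondents = 100 * said_yes / responded   # base: those who answered
pct_of_sample = 100 * said_yes / sampled          # base: everyone invited

print(f"{pct_of_respondents:.1f}% of respondents said yes")   # 55.0%
print(f"{pct_of_sample:.1f}% of the full sample said yes")    # 22.0%
```

A report quoting "55% said yes" and one quoting "22% said yes" can both be describing this same survey, which is why the base must always be stated.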

Is Heterogeneity Your Friend?

Using sensitivity analyses to design improved studies.
Identifying, eliminating, or controlling heterogeneity is a fundamental principle in many statistical techniques. Heterogeneity and its opposite, homogeneity, refer to how consistent or stable a particular data set or variable relationship is.
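A minimal sketch of checking homogeneity across subgroups, using invented measurements from three hypothetical production lines: similar means can hide very different stability.

```python
import statistics

# Hypothetical measurements from three production lines (numbers invented).
production_lines = {
    "A": [10.1, 9.9, 10.0, 10.2, 9.8],
    "B": [10.0, 10.1, 9.9, 10.0, 10.1],
    "C": [12.0, 8.1, 11.5, 8.4, 12.2],   # mean near the others, far less stable
}

# Comparing spread within each subgroup is a first, crude sensitivity check.
for name, xs in production_lines.items():
    print(name, "mean:", round(statistics.mean(xs), 2),
          "sd:", round(statistics.stdev(xs), 2))
```

Line C's standard deviation is an order of magnitude larger than A's or B's, so pooling all three lines into one "homogeneous" sample would be misleading.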

Know there are unknowns

How suspected confounding variables influence models.
Do the items people carry in their pockets or purses make them more likely to develop cancer? In an epidemiological study of the development of lung cancer in the population, is carrying matches an important variable? No, but it points to a variable that, if not measured, may confound the results. The confounder is whether the individual smokes, not whether they carry matches, although there probably is a strong relationship between the two.
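The matches example can be simulated. In this sketch (all probabilities invented), smoking drives both match-carrying and cancer; the crude comparison shows a large spurious gap that largely disappears once we stratify by the confounder:

```python
import random

random.seed(1)

def person():
    # Smoking causes both match-carrying and cancer; matches cause nothing.
    smoker = random.random() < 0.3
    carries_matches = random.random() < (0.8 if smoker else 0.1)
    cancer = random.random() < (0.2 if smoker else 0.02)
    return smoker, carries_matches, cancer

people = [person() for _ in range(50_000)]

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

matches = [p for p in people if p[1]]
no_matches = [p for p in people if not p[1]]
print("crude rates:", cancer_rate(matches), "vs", cancer_rate(no_matches))

# Stratify by the confounder (smoking): the apparent effect of matches vanishes.
for s in (True, False):
    m = cancer_rate([p for p in people if p[0] == s and p[1]])
    n = cancer_rate([p for p in people if p[0] == s and not p[1]])
    print("smokers:" if s else "nonsmokers:", m, "vs", n)
```

The crude comparison makes match-carriers look several times more cancer-prone; within each smoking stratum the two rates are essentially equal.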

Likert Scales and Data Analyses

Surveys are consistently used to measure quality. For example, surveys might be used to gauge customer perception of product quality or quality performance in service delivery. Likert scales are a common ratings format for surveys. Respondents rank quality from high to low or best to worst using five or seven levels. Data analyses using nominal, interval and ratio data are generally straightforward and transparent. Analyses of ordinal data, particularly as it relates to Likert or other scales in surveys, are not.
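One reason ordinal analyses are not straightforward: the mean of Likert codes can hide the response pattern entirely. A small illustration with invented ratings for two hypothetical products:

```python
import statistics

# Hypothetical Likert responses (1 = worst ... 5 = best).
product_a = [1, 1, 5, 5, 5]   # polarized: people love it or hate it
product_b = [3, 3, 3, 4, 4]   # consistently middling

# Treating ordinal codes as interval data gives identical means...
print("means:  ", statistics.mean(product_a), statistics.mean(product_b))
# ...while order-based summaries reveal very different distributions.
print("medians:", statistics.median(product_a), statistics.median(product_b))
```

Both products average 3.4, yet the medians (5 versus 3) show the two response patterns are nothing alike, which is why medians, modes, and frequency tables are usually safer summaries for ordinal data.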

The significance of power

Avoid mistakenly accepting the null hypothesis in statistical trials.
The concept of “power” has long been overshadowed in statistical circles by its big brother, “significance.” Both parameters, chosen before a test, dictate the sample size and the likelihood of making an erroneous conclusion when comparing two groups.
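A rough sketch of how the two parameters jointly set sample size, using the standard normal approximation for a two-sided, two-sample comparison of means (the function name and default values here are ours, for illustration only):

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting a mean
    difference `delta` with common standard deviation `sigma`, at
    two-sided significance `alpha` and the given power (1 - Type II error)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # guards against false rejection (Type I)
    z_beta = z(power)            # guards against false acceptance (Type II)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

print(sample_size_per_group(delta=1.0, sigma=1.0))   # 16 per group
print(sample_size_per_group(delta=0.5, sigma=1.0))   # smaller effect, larger n
```

Halving the detectable difference roughly quadruples the required sample size, which is why both significance and power must be fixed before data collection, not after.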

Making data manageable

Fold changes: ratios of means, not the mean of ratios.
Often in testing groups or treatments, you need to know what has changed and how it has changed during the test or experiment. In comparing an outcome of two conditions, X and Y, you can look at the difference in the outcomes in several ways. The simplest is the difference (X - Y) of the outcomes.
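Another way is a ratio, and there the distinction in the subtitle matters: the ratio of means and the mean of ratios generally disagree. A tiny invented example:

```python
# Paired outcomes under conditions X and Y (numbers invented).
x = [2.0, 4.0, 8.0]
y = [1.0, 2.0, 8.0]

# Mean of per-pair ratios: average each X/Y first.
mean_of_ratios = sum(xi / yi for xi, yi in zip(x, y)) / len(x)

# Ratio of means: average each condition first, then divide.
ratio_of_means = (sum(x) / len(x)) / (sum(y) / len(y))

print(mean_of_ratios)   # (2 + 2 + 1) / 3  = 1.666...
print(ratio_of_means)   # (14/3) / (11/3) = 1.2727...
```

The two summaries tell different stories about "how much X exceeds Y" from the same data, so a reported fold change should always say which one it is.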

So many variables, so few observations

A look at five variable reduction techniques to avoid troubles in a prediction model.
Collapsing variables in a factor analysis can have dangerous implications, and it also can be difficult to interpret the resulting factors. But what if you have no choice? Increasingly in many fields, the number of variables collected can dwarf the observations available for each variable. The aims of your overall analysis will play a role in how you deal with this problem statistically.
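One of the simpler reduction strategies, dropping near-constant columns and then one member of each nearly collinear pair, can be sketched in a few lines. The thresholds and data below are invented; this is an illustration of the general idea, not the article's method:

```python
import math
import statistics

def pearson(a, b):
    # Pearson correlation, pure Python.
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def reduce_variables(data, var_floor=1e-8, corr_ceiling=0.95):
    """Drop columns with (near-)zero variance, then drop any column
    nearly collinear with one already kept. `data` maps variable
    name -> list of observations, all the same length."""
    informative = [k for k, v in data.items()
                   if statistics.pvariance(v) > var_floor]
    kept = []
    for k in informative:
        if all(abs(pearson(data[k], data[j])) < corr_ceiling for j in kept):
            kept.append(k)
    return kept

data = {
    "x1": [1.0, 2.0, 3.0, 4.0],
    "x2": [2.0, 4.0, 6.0, 8.0],   # exact multiple of x1 -> redundant
    "x3": [5.0, 5.0, 5.0, 5.0],   # constant -> carries no information
    "x4": [1.0, 0.0, 1.0, 0.0],
}
print(reduce_variables(data))   # ['x1', 'x4']
```

This kind of filtering is greedy and blunt compared with factor analysis or other model-based reductions, but when observations are scarce it at least removes variables that cannot help a prediction model.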