“McCloskey and Ziliak have been pushing this very elementary, very correct, very important argument through several articles over several years and for reasons I cannot fathom it is still resisted. If it takes a book to get it across, I hope this book will do it. It ought to.”
—Thomas Schelling, Distinguished University Professor, School of Public Policy, University of Maryland, and 2005 Nobel Prize Laureate in Economics
“With humor, insight, piercing logic and a nod to history, Ziliak and McCloskey show how economists—and other scientists—suffer from a mass delusion about statistical analysis. The quest for statistical significance that pervades science today is a deeply flawed substitute for thoughtful analysis. . . . Yet few participants in the scientific bureaucracy have been willing to admit what Ziliak and McCloskey make clear: the emperor has no clothes.”
—Kenneth Rothman, Professor of Epidemiology, Boston University School of Public Health
The Cult of Statistical Significance shows, field by field, how “statistical significance,” a technique that dominates many sciences, has been a huge mistake. The authors find that researchers in a broad spectrum of fields, from agronomy to zoology, employ “testing” that doesn’t test and “estimating” that doesn’t estimate. The facts will startle the outside reader: how could a group of brilliant scientists wander so far from scientific magnitudes? This study will encourage scientists who want to know how to get the statistical sciences back on track and fulfill their quantitative promise. The book shows for the first time how wide the disaster is, and how bad for science, and it traces the problem to its historical, sociological, and philosophical roots.
Stephen T. Ziliak is the author or editor of many articles and two books. He currently lives in Chicago, where he is Professor of Economics at Roosevelt University. Deirdre N. McCloskey, Distinguished Professor of Economics, History, English, and Communication at the University of Illinois at Chicago, is the author of twenty books and three hundred scholarly articles. She has held Guggenheim and National Humanities Fellowships. She is best known for How to Be Human* Though an Economist (University of Michigan Press, 2000) and her most recent book, The Bourgeois Virtues: Ethics for an Age of Commerce (2006).
This magnificent book is the first comprehensive history of statistics from its beginnings around 1700 to its emergence as a distinct and mature discipline around 1900. Stephen M. Stigler shows how statistics arose from the interplay of mathematical concepts and the needs of several applied sciences including astronomy, geodesy, experimental psychology, genetics, and sociology. He addresses many intriguing questions: How did scientists learn to combine measurements made under different conditions? And how were they led to use probability theory to measure the accuracy of the result? Why were statistical methods used successfully in astronomy long before they began to play a significant role in the social sciences? How could the introduction of least squares predate the discovery of regression by more than eighty years? On what grounds can the major works of men such as Bernoulli, De Moivre, Bayes, Quetelet, and Lexis be considered partial failures, while those of Laplace, Galton, Edgeworth, Pearson, and Yule are counted as successes? How did Galton’s probability machine (the quincunx) provide him with the key to the major advance of the last half of the nineteenth century?
Stigler’s emphasis is upon how, when, and where the methods of probability theory were developed for measuring uncertainty in experimental and observational science, for reducing uncertainty, and as a conceptual framework for quantitative studies in the social sciences. He describes with care the scientific context in which the different methods evolved and identifies the problems (conceptual or mathematical) that retarded the growth of mathematical statistics and the conceptual developments that permitted major breakthroughs.
Statisticians, historians of science, and social and behavioral scientists will gain from this book a deeper understanding of the use of statistical methods and a better grasp of the promise and limitations of such techniques. The product of ten years of research, The History of Statistics will appeal to all who are interested in the humanistic study of science.
A daily glass of wine prolongs life—yet alcohol can cause life-threatening cancer. Some say raising the minimum wage decreases inequality; others say it increases unemployment. Scientists once confidently claimed that hormone replacement therapy reduced the risk of heart disease, but now they equally confidently claim that it raises that risk. What should we make of this endless barrage of conflicting claims?
Observation and Experiment is an introduction to causal inference by one of the field’s leading scholars. An award-winning professor at Wharton, Paul Rosenbaum explains key concepts and methods through lively examples that make abstract principles accessible. He draws his examples from clinical medicine, economics, public health, epidemiology, clinical psychology, and psychiatry to explain how randomized controlled trials are conceived and designed, how they differ from observational studies, and what techniques are available to mitigate the biases of observational studies.
“Carefully and precisely written…reflecting superb statistical understanding, all communicated with the skill of a master teacher.”
—Stephen M. Stigler, author of The Seven Pillars of Statistical Wisdom
“An excellent introduction…Well-written and thoughtful…from one of causal inference’s noted experts.”
—Journal of the American Statistical Association
“Rosenbaum is a gifted expositor…an outstanding introduction to the topic for anyone who is interested in understanding the basic ideas and approaches to causal inference.”
—Psychometrika
“A very valuable contribution…Highly recommended.”
—International Statistical Review
This book is meant to be a primer, that is, an introduction, to probability logic, a subject that appears to be in its infancy. Probability logic is a subject envisioned by Hans Reichenbach and largely created by Adams. It treats conditionals as bearers of conditional probabilities and discusses an appropriate sense of validity for arguments that take such conditionals, as well as ordinary statements, as premisses.
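In rough outline (a standard formulation of Adams’ criterion, stated here for orientation rather than quoted from the book): an argument from premisses P_1, ..., P_n to conclusion C is probabilistically valid just in case the uncertainty of its conclusion can never exceed the combined uncertainty of its premisses, where the uncertainty of a statement A is u(A) = 1 - Pr(A); in symbols, u(C) \le \sum_{i=1}^{n} u(P_i) under every admissible probability assignment.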
This is a clear, well-written text on the subject of probability logic, suitable for advanced undergraduates or graduates, but also of interest to professional philosophers. There are well-thought-out exercises, and a number of advanced topics are treated in appendices, while others are brought up in exercises or alluded to only in footnotes. By this means, it is hoped that the reader will at least be made aware of most of the important ramifications of the subject and its tie-ins with current research, and will have some pointers to the recent and relevant literature.
From the ancients’ first readings of the innards of birds to your neighbor’s last bout with the state lottery, humankind has put itself into the hands of chance. Today life itself may be at stake when probability comes into play—in the chance of a false negative in a medical test, in the reliability of DNA findings as legal evidence, or in the likelihood of passing on a deadly congenital disease—yet as few people as ever understand the odds. This book takes aim at the trouble we have in trying to learn about probability. A story of the misconceptions and difficulties civilization overcame in progressing toward probabilistic thinking, Randomness is also a skillful account of what makes the science of probability so daunting in our own day.
To acquire a (correct) intuition of chance is not easy to begin with, and moving from an intuitive sense to a formal notion of probability presents further problems. Author Deborah Bennett traces the path this process takes in an individual trying to come to grips with concepts of uncertainty and fairness, and also charts the parallel path by which societies have developed ideas about chance. Why, from ancient to modern times, have people resorted to chance in making decisions? Is a decision made by random choice “fair”? What role has gambling played in our understanding of chance? Why do some individuals and societies refuse to accept randomness at all? If understanding randomness is so important to probabilistic thinking, why do the experts disagree about what it really is? And why are our intuitions about chance almost always dead wrong?
Anyone who has puzzled over a probability conundrum is struck by the paradoxes and counterintuitive results that occur at a relatively simple level. Why this should be, and how it has been the case through the ages, for bumblers and brilliant mathematicians alike, is the entertaining and enlightening lesson of Randomness.
What gives statistics its unity as a science? Stephen Stigler sets forth the seven foundational ideas of statistics—a scientific discipline related to but distinct from mathematics and computer science.
Even the most basic idea—aggregation, exemplified by averaging—is counterintuitive. It allows one to gain information by discarding information, namely, the individuality of the observations. Stigler’s second pillar, information measurement, challenges the importance of “big data” by noting that observations are not all equally important: the amount of information in a data set is often proportional to only the square root of the number of observations, not the absolute number. The third idea is likelihood, the calibration of inferences with the use of probability. Intercomparison is the principle that statistical comparisons do not need to be made with respect to an external standard. The fifth pillar is regression, both a paradox (tall parents on average produce shorter children; tall children on average have shorter parents) and the basis of inference, including Bayesian inference and causal reasoning. The sixth concept captures the importance of experimental design—for example, by recognizing the gains to be had from a combinatorial approach with rigorous randomization. The seventh idea is the residual: the notion that a complicated phenomenon can be simplified by subtracting the effect of known causes, leaving a residual phenomenon that can be explained more easily.
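As a quick illustration of the second pillar (a sketch of our own, not an example drawn from the book), the precision of an average grows only with the square root of the sample size: quadrupling the number of observations merely halves the standard error of the mean. A few lines of Python make the point empirically:

    import random, statistics

    random.seed(1)

    def se_of_mean(n, trials=2000):
        # Empirical standard deviation of the mean of n standard-normal draws.
        means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
                 for _ in range(trials)]
        return statistics.pstdev(means)

    for n in (100, 400, 1600):
        # Prints roughly 0.100, 0.050, 0.025: the error halves as n quadruples.
        print(n, round(se_of_mean(n), 3))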
The Seven Pillars of Statistical Wisdom presents an original, unified account of statistical science that will fascinate the interested layperson and engage the professional statistician.
A long-overdue guide on how to use statistics to bring clarity, not confusion, to policy work.
Statistics are an essential tool for making, evaluating, and improving public policy. Statistics for Public Policy is a crash course in wielding these unruly tools to bring maximum clarity to policy work. Former White House economist Jeremy G. Weber offers an accessible voice of experience for the challenges of this work, focusing on seven core practices.