
## The Seven Pillars of Statistical Wisdom (originally published 2016; 2016 edition) by Stephen M. Stigler (Author)
## Work details: The Seven Pillars of Statistical Wisdom by Stephen M. Stigler (2016)
An overview of the foundational concepts of modern statistics. I liked the way the author organized the topics, but I wish he had taken less of a historical approach and written more about the "pillars'" role in contemporary practice. Still, as a text on the antecedents of the field, this book is one of the best and has some surprising discoveries. It is well illustrated, too. The book is about seven themes of statistics (aggregation [means, mostly], information [Fisher information], likelihood [the root-n rule and its exceptions], intercomparison [Student's t-test], regression [regression to the mean and its implications], design [randomization], and residual [residual plots and nested models]). I'm not sure who the audience is supposed to be. Judging from the organization around seven themes and the size of the book, it seems targeted at a general audience. Some of the material supports this: the earlier chapters start with averages and the emergence of the arithmetic mean from astronomy. However, the author occasionally lapses into highly technical writing that I found difficult to follow (e.g., "In its simplest form in parametric estimation problems, the Fisher information is the expected value of the square of the score function, defined to be the derivative of the logarithm of the probability density function of the data"), though there are a few interesting examples and historical facts (the Trial of the Pyx, the origin of Student's t-distribution, and the Cauchy distribution as an exception to the central limit theorem). Other times the author name-drops a method as the solution to a technical problem without further explanation. It seems the book would only make sense to someone very well versed in statistics -- in which case, why would they be reading this book?
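The Fisher-information definition quoted in the review above can be made concrete with a small numeric sketch (my own Monte Carlo check, not code from the book; the function name is illustrative). For a Normal(mu, sigma) model, the score with respect to mu is (x - mu)/sigma^2, and the expected square of the score works out to 1/sigma^2:

```python
import random

def fisher_information_normal_mean(mu, sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Fisher information about mu in a
    Normal(mu, sigma) model: the expected value of the square of the
    score function, where the score is the derivative of log p(x; mu)
    with respect to mu, i.e. (x - mu) / sigma**2.
    Analytically this equals 1 / sigma**2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(mu, sigma)
        score = (x - mu) / sigma ** 2
        total += score ** 2
    return total / n_samples

# With sigma = 2 the analytic value is 1/4; the estimate should land nearby.
estimate = fisher_information_normal_mean(0.0, 2.0)
```

Higher Fisher information means the data pin down the parameter more tightly, which is the sense in which it measures "information" in the likelihood pillar.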
I did like the explanation of Galton's discovery of regression toward the mean, and the role of the Galton machine in relegating the rule of three to the historical dustbin.

I guess my understanding of "wisdom" differs a little from the author's. I interpreted the title as something like "seven common statistical fallacies," but the author wrote a book that could just as well have been titled "the history of seven important ideas in statistics." It's hard for me to see how this book could be of any interest to people who are not professional statisticians. The first couple of chapters are easy enough to understand, but after that the discussion becomes increasingly mathematical. Judging from the theoretical concepts the author freely uses, he seems to assume that his readers have completed at least one university-level course in statistics. I have taken a few such courses and could more or less follow what each chapter was about, but I can't say it was particularly interesting. I don't think the author did a particularly good job of explaining why each of these scientific innovations in statistics was so ground-breaking. In each chapter he reports how that pillar grew from an uncertain conjecture into a rigorous theory, but the real significance of each idea is lost somewhere between the lines as he jumps from one experiment to the next. In conclusion, this book can only be recommended to people with a fair bit of background in statistics and a keen interest in the history of the field.

Statistics is a great subject because it draws on so many fields: probability, calculus, the sciences, and computing. Yet those ingredients don't define statistics. This book presents the uniquely and independently statistical ideas, along with their history, and claims that there are seven of them -- and possibly an eighth on the way, due to advances in storing large, multivariate datasets.
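Galton's regression to the mean, mentioned in the reviews above, is easy to demonstrate with a short simulation (my own illustrative sketch, not code from the book): when standardized parent and child heights are correlated with r < 1, the children of unusually tall parents are, on average, closer to the mean than their parents.

```python
import random

def regression_to_mean(r=0.5, n=100_000, seed=1):
    """Simulate standardized (parent, child) height pairs with correlation r,
    then compare averages among pairs whose parent is more than 1 sd above
    the mean. The child average regresses toward the mean: it comes out
    roughly r times the parent average."""
    rng = random.Random(seed)
    noise_sd = (1 - r * r) ** 0.5  # keeps the child variance equal to 1
    tall_parents, their_children = [], []
    for _ in range(n):
        parent = rng.gauss(0, 1)
        child = r * parent + rng.gauss(0, noise_sd)
        if parent > 1.0:
            tall_parents.append(parent)
            their_children.append(child)
    mean_parent = sum(tall_parents) / len(tall_parents)
    mean_child = sum(their_children) / len(their_children)
    return mean_parent, mean_child

mean_parent, mean_child = regression_to_mean()
```

The selected parents average well above 1 sd, while their children average only about half that (for r = 0.5), with no causal mechanism involved -- which is exactly the fallacy the regression pillar guards against.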
The seven ideas (or pillars) are aggregation, information, likelihood, intercomparison, regression, design, and residuals. The author explores the central problem of each pillar and the historical context from which it emerged. He uses history to show that each idea was hard-won and required a big shift from conventional thinking. For example, the statistical mean was not used consistently to summarize data until the 1600s (post-Renaissance). This history lends weight to the idea that each pillar is uniquely a part of the subject of statistics. The history is by far the most interesting part of this book, because the chosen examples highlight these seven ideas so well, and sometimes a case appears in multiple chapters to add new perspectives. Whether it's learning that Newton was involved in the Trial of the Pyx (and may have taken advantage of tolerances that were not based on likelihood for weighing bullion); or that the origin of Nightingale's causes-of-mortality chart was a chart by William Farr; or that one of the earliest contingency tables was inscribed on a Sumerian clay tablet from 3000 BCE (and it contained raw data on the back side!); or that Galton was related to Darwin and both discovered and corrected a potential contradiction in the Origin of Species three years after Darwin's death, thanks to the discovery of regression -- one understands that the problems of statistics have been around for a long time and that many characters have been involved. In fact, the most important part of the history is that there were many examples of human behavior that either caused someone to disregard an idea or to miss it completely.
Some examples of this were Gosset's stubborn belief that structured designs were better than randomization, even though he used randomization himself (and his inability to comprehend the multivariate models that Fisher presented to him), as well as Gellibrand's inability to see that he had actually discovered that the magnetic field of the earth changes over time, because he thought his predecessor, Borough, had made a measurement mistake. I was also really pleased to see mentions of signal processing and Shannon's information; unfortunately, these were no more than mentions, and I was left wanting more in order to understand the connection between statistical information and signal information. (Anyone who can recommend a book on this should contact me.)

I did not enjoy the derivations in this book, in part because they are very cursory (and expect prior knowledge), but mostly because the typesetting is so bad in the print book. (The print book itself is a strange size and is hard to keep open for note-taking. I would recommend the digital edition.) Coincidentally, I found this book *by chance* in the library stacks, and I'm glad I took the time to read it. It has inspired me to continue searching for my own discoveries and to add to the historical record -- one never knows the unexpected ways that one's work might influence others. This book is not a substitute for a statistics textbook, but it did tie together many concepts in a way that makes you appreciate that statistics is definitely a subject in its own right. The book is clearly infused with the author's deep research; however, I would likely not recommend it to anyone unfamiliar with statistics in the first place, because many chapters assume knowledge of the subject. This is unfortunate, because it could otherwise be a great introduction for new students of statistics.
There is also a defensive undertone to the book that I've experienced many statisticians share -- let the subject speak for itself, because the ideas are clearly important.
"A summary of the seven most consequential ideas in the history of statistics, ideas that have proven their importance over a century or more and yet still define the basis of statistical science in the present day. Separately each was a radical idea when introduced, and most remain radical today when they are extended to new territory. Together they define statistics as a scientific field in a way that differentiates it from mathematics and computer science, fields which partner with statistics today but also maintain their separate identities. These "pillars" are presented in their historical context, and some flavor of their development and variety of forms is also given in historical context. The framework of these seven is quite different from the usual ways statistical ideas are arranged, such as in most courses on the subject, and thus they give a new way to think about statistics."-- Inga biblioteksbeskrivningar kunde hittas. |

Stigler notes that the title *Seven Pillars of Statistical Wisdom* is borrowed from T.E. Lawrence's own *Seven Pillars of Wisdom*, and that both he and Lawrence of Arabia drew on Proverbs 9:1 as a source: "Wisdom hath built her house, she hath hewn out her seven pillars" (3). With this bow to tradition, Stigler goes on to note that an eighth pillar may well be forthcoming, without commenting on what it might be (203). While we await this possible eighth pillar, the seven current pillars are: Aggregation, Information, Likelihood, Intercomparison, Regression, Design, and Residual. While the delineation is subjective, Stigler shows a strong grasp of the material by tracing the history of each of these "bins." He finds interesting things to say about each pillar (Aggregation "inherently involves the discarding of information, an act of 'creative destruction'" (196)), but the treatment lacks fuller development.

There is a strong References section, but for a book published by an academic press, the footnotes are a little light (the 33 notes in the fifth chapter, "Regression," are an outlier). While the book is not dryly "academic," neither is it an introductory text; the reader needs a general understanding of numerous statistical concepts in order to grasp the subjectivity of Stigler's slicing. It thus occupies a sort of middle ground: too short and subjective for the specialist, yet a bit too specialized and abstruse for the lay reader. I would very much like to see a fuller treatment.