Popular Statistical Quotes

We've searched our database for all the quotes and captions related to Popular Statistical. Here they are! All 45 of them:

Why was I the Most Popular President Who Ever Lived? I castrated the IRS, implemented the National Sales Tax (Fair Tax) and brought an end to parasitic government - all through the use of numbers, statistics, business metrics, graphs, pie charts, efficiency - in short - results.
Nancy Omeara (The Most Popular President Who Ever Lived [So Far])
Future Politics

Effecting change in national politics was mostly a matter of making better use of online forums, encouraging voters to press forth with hard questions, providing statistics and solutions. Direct-to-voter referendums became an increasingly common way of effecting national policy. If Congress were deadlocked over a particular issue, the voters would be asked to make up their minds for them in the form of an online referendum.
Nancy Omeara (The Most Popular President Who Ever Lived [So Far])
Representative democracy, however, harmonizes marvelously with the capitalist economic system. This new statist system, basing itself on the alleged sovereignty of the so-called will of the people, as supposedly expressed by their alleged representatives in mock popular assemblies, incorporates the two principal and necessary conditions for the progress of capitalism: state centralization, and the actual submission of the sovereign people to the intellectual governing minority, who, while claiming to represent the people, unfailingly exploits them.
Mikhail Bakunin
Wasn't there some statistic somewhere she'd read, about where most people meet their spouse, that claimed weddings were the third most popular place, after university and the workplace? She was sure that she had. Something to do with all that romantic optimism in the air, and too much champagne, no doubt.
Elizabeth Noble (Things I Want My Daughters to Know)
When I see two little Jewish old ladies giggling over coffee at a Manhattan diner, it makes me smile, because I hear my own mother’s laughter beneath theirs. Conversely, when I hear black “leaders” talking about “Jewish slave owners” I feel angry and disgusted, knowing that they’re inflaming people with lies and twisted history, as if all seven of the Jewish slave owners in the antebellum South, or however few there were, are responsible for the problems of African-Americans now. Those leaders are no better than their Jewish counterparts who spin statistics in marvelous ways to make African-Americans look like savages, criminals, drags on society, and “animals” (a word quite popular when used to describe blacks these days). I don’t belong to any of those groups. I belong to the world of one God, one people.
James McBride (The Color of Water)
My flight was announced by Donald Duck noises from a loudspeaker; I arose and shuffled off towards the statistical improbability of dying in an airplane crash. Personally, the thought of such a death appalls me little – what civilized man would not rather die like Icarus than be mangled to death on a Motorway by a Ford Popular?
Kyril Bonfiglioli (Don't Point That Thing at Me (Charlie Mortdecai #1))
Teenage drinking has been declining since 1999, but students vastly overestimate their classmates' use of alcohol, drugs, and cigarettes. For example, a study conducted at a Midwestern high school when teenage alcohol use was peaking found that students believed that 92% of their peers drank alcohol and 85% smoked cigarettes. When researchers surveyed the school to unearth the actual statistics, they learned that 47% of students had consumed alcohol and 17% smoked.
Alexandra Robbins (The Geeks Shall Inherit the Earth: Popularity, Quirk Theory and Why Outsiders Thrive After High School)
Popular sovereignty may be fine in theory but not when the citizenry are so obviously in need of "re-education" by their betters. The alliance of political statists and judicial statists is moving us into a land beyond law--a land of apostasy trials. The Conformicrats have made a bet that the populace will willingly submit to subtle but pervasive forms of re-education camp. Over in England, London's transportation department has a bureaucrat whose very title sums up our rulers' general disposition toward us: Head of Behavior Change.
Mark Steyn (After America: Get Ready for Armageddon)
The Christian writer will feel that in the greatest depth of vision, moral judgment will be implicit, and that when we are invited to represent the country according to survey, what we are asked to do is to separate mystery from manners and judgment from vision, in order to produce something a little more palatable to the modern temper. We are asked to form our consciences in the light of statistics, which is to establish the relative as absolute. For many this may be a convenience, since we don't live in an age of settled belief; but it cannot be a convenience, it cannot even be possible, for the writer who is a Catholic. He will feel that any long-continued service to it will produce a soggy, formless, and sentimental literature, one that will provide a sense of spiritual purpose for those who connect the spirit with romanticism and a sense of joy for those who confuse that virtue with satisfaction. The storyteller is concerned with what is; but if what is is what can be determined by survey, then the disciples of Dr. Kinsey and Dr. Gallup are sufficient for the day thereof.
Flannery O'Connor (Mystery and Manners: Occasional Prose (FSG Classics))
The Unknown Citizen by W. H. Auden
(To JS/07 M 378 This Marble Monument Is Erected by the State)

He was found by the Bureau of Statistics to be
One against whom there was no official complaint,
And all the reports on his conduct agree
That, in the modern sense of an old-fashioned word, he was a saint,
For in everything he did he served the Greater Community.
Except for the War till the day he retired
He worked in a factory and never got fired,
But satisfied his employers, Fudge Motors Inc.
Yet he wasn't a scab or odd in his views,
For his Union reports that he paid his dues,
(Our report on his Union shows it was sound)
And our Social Psychology workers found
That he was popular with his mates and liked a drink.
The Press are convinced that he bought a paper every day
And that his reactions to advertisements were normal in every way.
Policies taken out in his name prove that he was fully insured,
And his Health-card shows he was once in hospital but left it cured.
Both Producers Research and High-Grade Living declare
He was fully sensible to the advantages of the Instalment Plan
And had everything necessary to the Modern Man,
A phonograph, a radio, a car and a frigidaire.
Our researchers into Public Opinion are content
That he held the proper opinions for the time of year;
When there was peace, he was for peace: when there was war, he went.
He was married and added five children to the population,
Which our Eugenist says was the right number for a parent of his generation.
And our teachers report that he never interfered with their education.
Was he free? Was he happy? The question is absurd:
Had anything been wrong, we should certainly have heard.
W.H. Auden
Personality typing is immensely popular among laypeople. It is not pop psychology, however, nor is it New Age philosophy. It is a time-honored, statistically valid theory that explains difference and that helps people identify their place in the world. These scholars have bequeathed to the world a starting point for achieving self-knowledge and self-acceptance, life balance, purpose, and fulfillment.
Sandra Nichols (INFP: A Flower in the Shade)
Recently a group of researchers conducted a computer analysis of three decades of hit songs. The researchers reported a statistically significant trend toward narcissism and hostility in popular music. In line with their hypothesis, they found a decrease in usages such as we and us and an increase in I and me. The researchers also reported a decline in words related to social connection and positive emotions, and an increase in words related to anger and antisocial behavior, such as hate or kill.
Brené Brown (Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead)
The post-totalitarian system touches people at every step, but it does so with its ideological gloves on. This is why life in the system is so thoroughly permeated with hypocrisy and lies: government by bureaucracy is called popular government; the working class is enslaved in the name of the working class; the complete degradation of the individual is presented as his or her ultimate liberation; depriving people of information is called making it available; the use of power to manipulate is called the public control of power, and the arbitrary abuse of power is called observing the legal code; the repression of culture is called its development; the expansion of imperial influence is presented as support for the oppressed; the lack of free expression becomes the highest form of freedom; farcical elections become the highest form of democracy; banning independent thought becomes the most scientific of world views; military occupation becomes fraternal assistance. Because the regime is captive to its own lies, it must falsify everything. It falsifies the past. It falsifies the present, and it falsifies the future. It falsifies statistics. It pretends not to possess an omnipotent and unprincipled police apparatus. It pretends to respect human rights. It pretends to persecute no one. It pretends to fear nothing. It pretends to pretend nothing. Individuals need not believe all these mystifications, but they must behave as though they did, or they must at least tolerate them in silence, or get along well with those who work with them. For this reason, however, they must live within a lie. They need not accept the lie. It is enough for them to have accepted their life with it and in it. For by this very fact, individuals confirm the system, fulfil the system, make the system, are the system.
Václav Havel (The Power of the Powerless (Vintage Classics))
In 1963, the chaos theorist Edward Lorenz presented an often-referenced lecture entitled “Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” Lorenz’s main point was that chaotic mathematical functions are very sensitive to initial conditions. Slight differences in initial conditions can lead to dramatically different results after many iterations. Lorenz believed that this sensitivity to slight differences in the beginning made it impossible to determine an answer to his question. Underlying Lorenz’s lecture was the assumption of determinism, that each initial condition can theoretically be traced as a cause of a final effect. This idea, called the “Butterfly Effect,” has been taken by the popularizers of chaos theory as a deep and wise truth. However, there is no scientific proof that such a cause and effect exists. There are no well-established mathematical models of reality that suggest such an effect. It is a statement of faith. It has as much scientific validity as statements about demons or God.
David Salsburg (The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century)
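A minimal Python sketch (not from Salsburg's book, and not Lorenz's weather model) of the sensitivity to initial conditions described above, using the chaotic logistic map with invented starting values:

```python
# Hypothetical illustration: the logistic map x_{n+1} = r*x*(1 - x) in its
# chaotic regime (r = 4). Two trajectories that start almost identically
# diverge completely after a few dozen iterations.
r = 4.0
x, y = 0.2, 0.2 + 1e-9  # initial conditions differing by one part in a billion

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: x = {x:.6f}  y = {y:.6f}  |difference| = {abs(x - y):.2e}")
```

By around step forty the two series bear no resemblance to each other, which illustrates the point about slight differences in initial conditions; it says nothing about whether any particular butterfly causes any particular tornado.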
Still, I think that one of the most fundamental problems is want of discipline. Homes that severely restrict viewing hours, insist on family reading, encourage debate on good books, talk about the quality and the morality of television programs they do see, rarely or never allow children to watch television without an adult being present (in other words, refusing to let the TV become an unpaid nanny), and generally develop a host of other interests, are not likely to be greatly contaminated by the medium, while still enjoying its numerous benefits. But what will produce such families, if not godly parents and the power of the Holy Spirit in and through biblical preaching, teaching, example, and witness? The sad fact is that unless families have a tremendously strong moral base, they will not perceive the dangers in the popular culture; or, if they perceive them, they will not have the stamina to oppose them. There is little point in preachers disgorging all the sad statistics about how many hours of television the average American watches per week, or how many murders a child has witnessed on television by the age of six, or how a teenager has failed to think linearly because of the twenty thousand hours of flickering images he or she has watched, unless the preacher, by the grace of God, is establishing a radically different lifestyle, and serving as a vehicle of grace to enable the people in his congregation to pursue it with determination, joy, and a sense of adventurous, God-pleasing freedom. Meanwhile, the harsh reality is that most Americans, including most of those in our churches, have been so shaped by the popular culture that no thoughtful preacher can afford to ignore the impact. The combination of music and visual presentation, often highly suggestive, is no longer novel. Casual sexual liaisons are everywhere, not least in many of our churches, often with little shame. “Get even” is a common dramatic theme. Strength is commonly confused with lawless brutality. Most advertising titillates our sin of covetousness. This is the air we breathe; this is our culture.
D.A. Carson (The Gagging of God: Christianity Confronts Pluralism)
In the general US population, fewer than one in two hundred people hold a law degree. In the House of Representatives, it is over one in three. In the Senate, it is over one in two. Statistics on wealth are just as striking. The median net worth of an average American is just under $45,000. The median net worth of an average member of Congress, by contrast, is over ten times as high, and that of senators higher still. Marked by the growing role of courts, of bureaucratic agencies, of central banks, and of supranational institutions. At the same time, there has been a rapid growth in the influence of lobbyists, in the money spent on political campaigns, and in the gulf that separates political elites from the people they are supposed to represent. Taken together, this has effectively insulated the political system from the popular will.
Yascha Mounk (The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It)
it’s claimed that they can tell us most things about life. This trend isn’t just found in popular science books. At universities, economists analyse ever greater parts of existence as if it were a market. From suicide (the value of a life can be calculated like the value of a company, and now it’s time to shut the doors) to faked orgasms (he doesn’t have to study how her eyes roll back, her mouth opens, her neck reddens and her back arches – he can calculate whether she really means it). The question is what Keynes would think about an American economist like David Galenson. Galenson has developed a statistical method to calculate which works of art are meaningful. If you ask him what the most renowned work of the last century is, he’ll say ‘Les Demoiselles d’Avignon’. He has calculated it. Things put into numbers immediately become certainties. Five naked female prostitutes on Carrer d’Avinyó in Barcelona. Threatening, square, disconnected bodies, two with faces like African masks. The large oil painting that Picasso completed in 1907 is, according to Galenson, the most important artwork of the twentieth century, because it appears most often as an illustration in books. That’s the measure he uses. The same type of economic analysis that explains the price of leeks or green fuel is supposed to be able to explain our experience of art.
Katrine Marçal (Who Cooked Adam Smith's Dinner? A Story About Women and Economics)
The Ultimate Guide To SEO In The 21st Century Search engine optimization is a complex and ever changing method of getting your business the exposure that you need to make sales and to build a solid reputation on line. To many people, the algorithms involved in SEO are cryptic, but the basic principle behind them is impossible to ignore if you are doing any kind of business on the internet. This article will help you solve the SEO puzzle and guide you through it, with some very practical advice! To increase your website or blog traffic, post it in one place (e.g. to your blog or site), then work your social networking sites to build visibility and backlinks to where your content is posted. Facebook, Twitter, Digg and other news feeds are great tools to use that will significantly raise the profile of your pages. An important part of starting a new business in today's highly technological world is creating a professional website, and ensuring that potential customers can easily find it is increased with the aid of effective search optimization techniques. Using relevant keywords in your URL makes it easier for people to search for your business and to remember the URL. A title tag for each page on your site informs both search engines and customers of the subject of the page while a meta description tag allows you to include a brief description of the page that may show up on web search results. A site map helps customers navigate your website, but you should also create a separate XML Sitemap file to help search engines find your pages. While these are just a few of the basic recommendations to get you started, there are many more techniques you can employ to drive customers to your website instead of driving them away with irrelevant search results. One sure way to increase traffic to your website, is to check the traffic statistics for the most popular search engine keywords that are currently bringing visitors to your site. Use those search words as subjects for your next few posts, as they represent trending topics with proven interest to your visitors. Ask for help, or better yet, search for it. There are hundreds of websites available that offer innovative expertise on optimizing your search engine hits. Take advantage of them! Research the best and most current methods to keep your site running smoothly and to learn how not to get caught up in tricks that don't really work. For the most optimal search engine optimization, stay away from Flash websites. While Google has improved its ability to read text within Flash files, it is still an imperfect science. For instance, any text that is part of an image file in your Flash website will not be read by Google or indexed. For the best SEO results, stick with HTML or HTML5. You have probably read a few ideas in this article that you would have never thought of, in your approach to search engine optimization. That is the nature of the business, full of tips and tricks that you either learn the hard way or from others who have been there and are willing to share! Hopefully, this article has shown you how to succeed, while making fewer of those mistakes and in turn, quickened your path to achievement in search engine optimization!
search rankings
Basically, action is, and always will be, faster than reaction. Thus, the attacker is the one that dictates the fight. They are forcing the encounter with technique after technique that are designed to overcome any defensive techniques initiated by the defender. Much of this exchange, and determining which of the adversaries is victorious, is all a matter of split seconds. That is the gap between action and reaction. The attacker acts; the defender reacts. Military history is saturated with an uneven amount of victorious attackers compared to victorious defenders. It is common to observe the same phenomenon in popular sports, fighting competitions, and in the corporate world of big business. The list goes on and on. So, how do we effectively defend ourselves when we can easily arrive at the conclusion that the defender statistically loses? It is by developing the mentality that once attacked you immediately counter-attack. That counter-attack has to be ferocious and unrelenting. If someone throws a punch, or otherwise initiates battle with you, putting you, for a split second, on the wrong side of the action-versus-reaction gap, your best chance of victory is to deflect, smother, parry, or otherwise negate their attack and then immediately launch into a vicious counter-attack. Done properly, this forces your adversary into a reactive state, rather than an active one. You turn the table on them and become the aggressor. That is how to effectively conceptualize being in a defensive situation. Utilizing this method will place you in a greater position to be victorious. Dempsey, Sun Tzu and General Patton would agree. Humans are very violent animals. As a species, we are capable of high levels of extreme violence. In fact, approaching the subject of unarmed combatives, or any form of combatives, involves the immersion into a field that is inherently violent to the extreme of those extremes. It is one thing to find yourself facing an opponent across a field, or ring, during a sporting match. Those contests still pit skill versus skill, but lack the survival aspects of an unarmed combative encounter. The average person rarely, if ever, ponders any of this and many consider various sporting contests as the apex of human competition. It is not. Finding yourself in a life-or-death struggle against an opponent that is completely intent on ending your life is the greatest of all human competitions. Understanding that and acknowledging that takes some degree of courage in today’s society.
Rand Cardwell (36 Deadly Bubishi Points: The Science and Technique of Pressure Point Fighting - Defend Yourself Against Pressure Point Attacks!)
For a long period of human history, most of the world thought swans were white and black swans didn’t exist inside the confines of mother nature. The null hypothesis that swans are white was later dispelled when Dutch explorers discovered black swans in Western Australia in 1697. Prior to this discovery, “black swan” was a euphemism for “impossible” or “non-existent,” but after this finding, it morphed into a term to express a perceived impossibility that might become an eventuality and therefore be disproven. In recent times, the term “black swan” has been popularized by the literary work of Nassim Taleb to explain unforeseen events such as the invention of the Internet, World War I, and the breakup of the Soviet Union.
Oliver Theobald (Statistics for Absolute Beginners: A Plain English Introduction)
Today yoga is virtually synonymous in the West with the practice of āsana, and postural yoga classes can be found in great number in virtually every city in the Western world, as well as, increasingly, in the Middle East, Asia, South and Central America, and Australasia. "Health club" types of yoga are even seeing renewed popularity among affluent urban populations in India. While exact practitioner statistics are hard to come by, it is clear that postural yoga is booming.
Mark Singleton (Yoga Body: The Origins of Modern Posture Practice)
Measuring replication rates across different experiments requires that research be reviewed in some fashion. Research reviews can be classified into four types. A type 1 review simply identifies and discusses recent developments in a field, usually focusing on a few exemplar experiments. Such reviews are often found in popular-science magazines such as Scientific American. They are also commonly used in skeptical reviews of psi research because one or two carefully selected exemplars can provide easy targets to pick apart. The type 2 review uses a few research results to highlight or illustrate a new theory or to propose a new theoretical framework for understanding a phenomenon. Again, the review is not designed to be comprehensive but only to illustrate a general theme. Type 3 reviews organize and synthesize knowledge from various areas of research. Such narrative reviews are not comprehensive, because the entire pool of combined studies from many disciplines is typically too large to consider individually. So again, a few exemplars of the “best” studies are used to illustrate the point of the synthesis. Type 4 is the integrative review, or meta-analysis, which is a structured technique for exhaustively analyzing a complete body of experiments. It draws generalizations from a set of observations about each experiment.1

Integration

Meta-analysis has been described as “a method of statistical analysis wherein the units of analysis are the results of independent studies, rather than the responses of individual subjects.”2 In a single experiment, the raw data points are typically the participants’ individual responses. In meta-analysis, the raw data points are the results of separate experiments.
Dean Radin (The Conscious Universe: The Scientific Truth of Psychic Phenomena)
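As a rough illustration of the idea that in meta-analysis the data points are whole experiments, here is a minimal fixed-effect pooling sketch in Python; the effect sizes and standard errors are invented, and this is not code from Radin's book:

```python
# Hypothetical fixed-effect meta-analysis: each study contributes one effect
# size and its standard error; studies are combined with inverse-variance weights.
effects = [0.30, 0.12, 0.45, 0.22]      # invented effect sizes, one per experiment
std_errors = [0.10, 0.08, 0.20, 0.15]   # invented standard errors

weights = [1 / se ** 2 for se in std_errors]   # more precise studies count for more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f}, 95% CI = ({pooled - 1.96 * pooled_se:.3f}, "
      f"{pooled + 1.96 * pooled_se:.3f})")
```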
Social Media Advertising - Different Options & Their Benefits How To Use Social Media Paid Ads Ideally? What is the most effective way to make use of social media ads? Choosing which social media platform to advertise on depends on your target audience. You need to understand which platforms are being used, the type of campaigns that can run on each platform, and what investment you’ll be required to make. Pew Research Center’s report helps give us an idea of the most preferred platform for various demographics. For example, if your product caters to the teenage group, consider advertising on Instagram, TikTok, or Snapchat. If you’re catering to a more B2B client, you can consider LinkedIn. Once you understand where your audience spends the most time, you can narrow down the platforms. However, we’d still advise on A/B testing various platforms. You’d be surprised by how many B2B clients you can find on TikTok! What Are The Most Popular Social Media Ads? Here is a brief rundown of the various social media ad options available. 1. Facebook Ads Facebook Ads are the most successful form of social media advertising. Statistics show that Facebook paid ads have an average conversion rate of 9.21%. They’re easy to set up and track, and allow you to measure campaign performance easily, giving insights into how well your ads are performing. They also offer a wide range of targeting options that help you reach people who might be interested in what you’re selling, which is why they’re so effective at generating sales leads. Facebook Ads are also highly targeted. You can target specific demographics or audiences based on gender, age range, location, and other details such as interests and behaviors or job titles. This helps ensure that only people who are interested in what you’re offering, see your ad on Facebook. 2. Twitter Ads Twitter ads are a great way to reach your target audience, especially if your company already has a presence on the platform. They’re easy to set up and manage so you can focus on other aspects of your business. As of 2022, they have an average conversion rate of 0.77%. Twitter ads also offer simple targeting options that let you get more followers, increase engagement with existing customers and gain new followers interested in what you have to offer. There are multiple ad options to choose from for accomplishing various advertising goals, including promoted ads, follower ads, amplify ads, and takeover ads. Promoted and follower ads have a much wider average cost range than their takeover counterparts. 3. LinkedIn Ads LinkedIn is a professional networking site, so it’s not as casual as other social media platforms like Instagram and Facebook. As a result, users are more likely to be interested in what you are promoting on the platform because they’re looking for something related to their professional lives. LinkedIn has an average click-through rate of 0.65%. In addition, the conversion rate for LinkedIn ads is also fairly decent (2.35%). They can have high or low conversion rates depending on factors like interests and demographics. But if your ad is effectively targeted, it will have more chances of enjoying a higher conversion rate. 4. Instagram Ads As a younger demographic, Instagram users make up a great target audience for social media advertising. They are highly engaged in the platform and are more likely to respond to call-to-action than other demographics. 5. YouTube Ads YouTube ads are excellent for marketers with video content to promote their business. 
Furthermore, the advertising options offered by this platform ensure that you needn't bother with YouTuber fame or even a large number of subscribers on your channel to spread the word on this platform.
David parkyd
Furthermore, it is not the people or the citizens who decide on what to vote, on which political program, at what time, and so on. It is the oligarchs and the oligarchic system that decide on this and that submit their choice to the vote of the electorate (in certain very specific cases). One could legitimately wonder, for instance, why there are not more referendums, and in particular referendums of popular initiative, in “democracy.” Cornelius Castoriadis perfectly described this state of affairs when he wrote: “The election is rigged, not because the ballot boxes are being stuffed, but because the options are determined in advance. They are told, ‘vote for or against the Maastricht Treaty,’ for example. But who made the Maastricht Treaty? It isn’t us.” It would thus be naive to believe that elections reflect public opinion or even the preferences of the electorate. For these oligarchic principles dominate our societies to such an extent that the nature of the choice is decided in advance. In the case of elections, it is the powerful media apparatus—financed in the United States by private interests, big business, and the bureaucratic machinery of party politics—that presents to the electorate the choices to be made, the viable candidates, the major themes to be debated, the range of possible positions, the questions to be raised and pondered, the statistical tendencies of “public opinion,” the viewpoint of experts, and the positions taken by the most prominent politicians. What we call political debate and public space (which is properly speaking a space of publicity) are formatted to such an extent that we are encouraged to make binary choices without ever asking ourselves genuine questions: we must be either for or against a particular political star, a specific publicity campaign, such or such “societal problem.” “One of the many reasons why it is laughable to speak of ‘democracy’ in Western societies today,” asserts Castoriadis, “is because the ‘public’ sphere is in fact private—be it in France, the United States, or England.”The market of ideas is saturated, and the political consumer is asked to passively choose a product that is already on the shelves. This is despite the fact that the contents of the products are often more or less identical, conjuring up in many ways the difference that exists between a brand-name product on the right, with the shiny packaging of the tried-and-true, and a generic product on the left, that aspires to be more amenable to the people. “Free elections do not necessarily express ‘the will of the people,’ ” Erich Fromm judiciously wrote. “If a highly advertised brand of toothpaste is used by the majority of the people because of some fantastic claims it makes in its propaganda, nobody with any sense would say that people have ‘made a decision’ in favor of the toothpaste. All that could be claimed is that the propaganda was sufficiently effective to coax millions of people into believing its claims.
Gabriel Rockhill (Counter-History of the Present: Untimely Interrogations into Globalization, Technology, Democracy)
The test statistics of a t-test can be positive or negative, although this depends merely on which group has the larger mean; the sign of the test statistic has no substantive interpretation. Critical values (see Chapter 10) of the t-test are shown in Appendix C as (Student’s) t-distribution.4 For this test, the degrees of freedom are defined as n – 1, where n is the total number of observations for both groups. The table is easy to use. As mentioned below, most tests are two-tailed tests, and analysts find critical values in the columns for the .05 (5 percent) and .01 (1 percent) levels of significance. For example, the critical value at the 1 percent level of significance for a test based on 25 observations (df = 25 – 1 = 24) is 2.797 (and 2.064 at the 5 percent level of significance). Though the table also shows critical values at other levels of significance, these are seldom if ever used. The table shows that the critical value decreases as the number of observations increases, making it easier to reject the null hypothesis.

The t-distribution shows one- and two-tailed tests. Two-tailed t-tests should be used when analysts do not have prior knowledge about which group has a larger mean; one-tailed t-tests are used when analysts do have such prior knowledge. This choice is dictated by the research situation, not by any statistical criterion. In practice, two-tailed tests are used most often, unless compelling a priori knowledge exists or it is known that one group cannot have a larger mean than the other. Two-tailed testing is more conservative than one-tailed testing because the critical values of two-tailed tests are larger, thus requiring larger t-test test statistics in order to reject the null hypothesis.5 Many statistical software packages provide only two-tailed testing. The above null hypothesis (men and women do not have different mean incomes in the population) requires a two-tailed test because we do not know, a priori, which gender has the larger income.6 Finally, note that the t-test distribution approximates the normal distribution for large samples: the critical values of 1.96 (5 percent significance) and 2.58 (1 percent significance), for large degrees of freedom (∞), are identical to those of the normal distribution.

Getting Started: Find examples of t-tests in the research literature.

T-Test Assumptions

Like other tests, the t-test has test assumptions that must be met to ensure test validity. Statistical testing always begins by determining whether test assumptions are met before examining the main research hypotheses. Although t-test assumptions are a bit involved, the popularity of the t-test rests partly on the robustness of t-test conclusions in the face of modest violations. This section provides an in-depth treatment of t-test assumptions, methods for testing the assumptions, and ways to address assumption violations. Of course, t-test statistics are calculated by the computer; thus, we focus on interpreting concepts (rather than their calculation).

Key Point: The t-test is fairly robust against assumption violations.

Four t-test test assumptions must be met to ensure test validity:
1. One variable is continuous, and the other variable is dichotomous.
2. The two distributions have equal variances.
3. The observations are independent.
4. The two distributions are normally distributed.

The first assumption, that one variable is continuous and the other dichotomous,
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
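A small sketch (not from Berman's text) of how the critical values cited above can be reproduced with SciPy's t-distribution, assuming a two-tailed test with df = 24:

```python
# Hypothetical illustration: two-tailed critical t values for df = 24.
from scipy import stats

df = 24  # 25 observations, as in the example above
for alpha in (0.05, 0.01):
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # split alpha across both tails
    print(f"alpha = {alpha}: critical t = {t_crit:.3f}")
# Prints roughly 2.064 (5 percent) and 2.797 (1 percent); for very large df
# these approach the normal-distribution values 1.96 and 2.58.
```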
categorical and the dependent variable is continuous. The logic of this approach is shown graphically in Figure 13.1. The overall group mean is the mean of the group means. The boxplots represent the scores of observations within each group. (As before, the horizontal lines indicate means, rather than medians.) Recall that variance is a measure of dispersion. In both parts of the figure, w is the within-group variance, and b is the between-group variance. Each graph has three within-group variances and three between-group variances, although only one of each is shown. Note in part A that the between-group variances are larger than the within-group variances, which results in a large F-test statistic using the above formula, making it easier to reject the null hypothesis. Conversely, in part B the within-group variances are larger than the between-group variances, causing a smaller F-test statistic and making it more difficult to reject the null hypothesis.

The hypotheses are written as follows:

H0: No differences between any of the group means exist in the population.
HA: At least one difference between group means exists in the population.

Note how the alternate hypothesis is phrased, because the logical opposite of “no differences between any of the group means” is that at least one pair of means differs. H0 is also called the global F-test because it tests for differences among any means. The formulas for calculating the between-group variances and within-group variances are quite cumbersome for all but the simplest of designs.1 In any event, statistical software calculates the F-test statistic and reports the level at which it is significant.2

When the preceding null hypothesis is rejected, analysts will also want to know which differences are significant. For example, analysts will want to know which pairs of differences in watershed pollution are significant across regions. Although one approach might be to use the t-test to sequentially test each pair of differences, this should not be done. It would not only be a most tedious undertaking but would also inadvertently and adversely affect the level of significance: the chance of finding a significant pair by chance alone increases as more pairs are examined. Specifically, the probability of rejecting the null hypothesis in one of two tests is [1 – 0.95² =] .098, the probability of rejecting it in one of three tests is [1 – 0.95³ =] .143, and so forth. Thus, sequential testing of differences does not reflect the true level of significance for such tests and should not be used. Post-hoc tests test all possible group differences and yet maintain the true level of significance. Post-hoc tests vary in their methods of calculating test statistics and holding experiment-wide error rates constant. Three popular post-hoc tests are the Tukey, Bonferroni, and Scheffe tests.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
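A minimal sketch of the global F-test described above, with invented "watershed pollution" scores for three regions (an illustration, not the book's data):

```python
# Hypothetical one-way ANOVA: do mean pollution scores differ across regions?
from scipy import stats

region_a = [12.1, 14.3, 13.8, 15.0, 12.9]
region_b = [16.2, 17.1, 15.8, 16.9, 17.4]
region_c = [13.5, 12.8, 14.1, 13.0, 13.7]

f_stat, p_value = stats.f_oneway(region_a, region_b, region_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < .05, reject H0 (no differences among group means) and follow up with
# a post-hoc test such as Tukey's, rather than running repeated pairwise t-tests.
```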
usually does not present much of a problem. Some analysts use t-tests with ordinal rather than continuous data for the testing variable. This approach is theoretically controversial because the distances among ordinal categories are undefined. This situation is avoided easily by using nonparametric alternatives (discussed later in this chapter). Also, when the grouping variable is not dichotomous, analysts need to make it so in order to perform a t-test. Many statistical software packages allow dichotomous variables to be created from other types of variables, such as by grouping or recoding ordinal or continuous variables.

The second assumption is that the variances of the two distributions are equal. This is called homogeneity of variances. The use of pooled variances in the earlier formula is justified only when the variances of the two groups are equal. When variances are unequal (called heterogeneity of variances), revised formulas are used to calculate t-test test statistics and degrees of freedom.7 The difference between homogeneity and heterogeneity is shown graphically in Figure 12.2. Although we needn’t be concerned with the precise differences in these calculation methods, all t-tests first test whether variances are equal in order to know which t-test test statistic is to be used for subsequent hypothesis testing. Thus, every t-test involves a (somewhat tricky) two-step procedure. A common test for the equality of variances is the Levene’s test. The null hypothesis of this test is that variances are equal. Many statistical software programs provide the Levene’s test along with the t-test, so that users know which t-test to use—the t-test for equal variances or that for unequal variances. The Levene’s test is performed first, so that the correct t-test can be chosen.

Figure 12.2 Equal and Unequal Variances

The term robust is used, generally, to describe the extent to which test conclusions are unaffected by departures from test assumptions. T-tests are relatively robust for (hence, unaffected by) departures from assumptions of homogeneity and normality (see below) when groups are of approximately equal size. When groups are of about equal size, test conclusions about any difference between their means will be unaffected by heterogeneity.

The third assumption is that observations are independent. (Quasi-) experimental research designs violate this assumption, as discussed in Chapter 11. The formula for the t-test test statistic, then, is modified to test whether the difference between before and after measurements is zero. This is called a paired t-test, which is discussed later in this chapter.

The fourth assumption is that the distributions are normally distributed. Although normality is an important test assumption, a key reason for the popularity of the t-test is that t-test conclusions often are robust against considerable violations of normality assumptions that are not caused by highly skewed distributions. We provide some detail about tests for normality and how to address departures thereof. Remember, when nonnormality cannot be resolved adequately, analysts consider nonparametric alternatives to the t-test, discussed at the end of this chapter. Box 12.1 provides a bit more discussion about the reason for this assumption. A combination of visual inspection and statistical tests is always used to determine the normality of variables.
Two tests of normality are the Kolmogorov-Smirnov test (also known as the K-S test) for samples with more than 50 observations and the Shapiro-Wilk test for samples with up to 50 observations. The null hypothesis of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
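A brief sketch of the two-step procedure described above (normality check, Levene's test, then the matching t-test), using invented samples rather than the book's data:

```python
# Hypothetical two-step t-test: check assumptions first, then test the means.
from scipy import stats

group1 = [52, 48, 61, 55, 49, 58, 60, 47]
group2 = [44, 51, 42, 47, 49, 45, 43, 50]

# Normality check for small samples (Shapiro-Wilk, used for n up to 50).
for name, data in (("group1", group1), ("group2", group2)):
    _, p_norm = stats.shapiro(data)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

# Step 1: Levene's test; H0 is that the two variances are equal.
_, p_levene = stats.levene(group1, group2)
equal_var = p_levene > 0.05

# Step 2: pick the matching t-test (Welch's version when variances are unequal).
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=equal_var)
print(f"equal variances assumed: {equal_var}, t = {t_stat:.2f}, p = {p_value:.3f}")
```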
different from 3.5. However, it is different from larger values, such as 4.0 (t = 2.89, df = 9, p = .019). Another example of this is provided in Box 12.2. Finally, note that the one-sample t-test is identical to the paired-samples t-test for testing whether the mean D = 0. Indeed, the one-sample t-test for D = 0 produces the same results (t = 2.43, df = 9, p = .038).

In Greater Depth … Box 12.2 Use of the T-Test in Performance Management: An Example

Performance benchmarking is an increasingly popular tool in performance management. Public and nonprofit officials compare the performance of their agencies with performance benchmarks and draw lessons from the comparison. Let us say that a city government requires its fire and medical response unit to maintain an average response time of 360 seconds (6 minutes) to emergency requests. The city manager has suspected that the growth in population and demands for the services have slowed down the responses recently. He draws a sample of 10 response times in the most recent month: 230, 450, 378, 430, 270, 470, 390, 300, 470, and 530 seconds, for a sample mean of 392 seconds. He performs a one-sample t-test to compare the mean of this sample with the performance benchmark of 360 seconds. The null hypothesis of this test is that the sample mean is equal to 360 seconds, and the alternate hypothesis is that they are different. The result (t = 1.030, df = 9, p = .330) shows a failure to reject the null hypothesis at the 5 percent level, which means that we don’t have sufficient evidence to say that the average response time is different from the benchmark 360 seconds. We cannot say that current performance of 392 seconds is significantly different from the 360-second benchmark. Perhaps more data (samples) are needed to reach such a conclusion, or perhaps too much variability exists for such a conclusion to be reached.

NONPARAMETRIC ALTERNATIVES TO T-TESTS

The tests described in the preceding sections have nonparametric alternatives. The chief advantage of these tests is that they do not require continuous variables to be normally distributed. The chief disadvantage is that they are less likely to reject the null hypothesis. A further, minor disadvantage is that these tests do not provide descriptive information about variable means; separate analysis is required for that. Nonparametric alternatives to the independent-samples test are the Mann-Whitney and Wilcoxon tests. The Mann-Whitney and Wilcoxon tests are equivalent and are thus discussed jointly. Both are simplifications of the more general Kruskal-Wallis’ H test, discussed in Chapter 11.19 The Mann-Whitney and Wilcoxon tests assign ranks to the testing variable in the exact manner shown in Table 12.4. The sum of the ranks of each group is computed, shown in the table. Then a test is performed to determine the statistical significance of the difference between the sums, 22.5 and 32.5. Although the Mann-Whitney U and Wilcoxon W test statistics are calculated differently, they both have the same level of statistical significance: p = .295. Technically, this is not a test of different means but of different distributions; the lack of significance implies that groups 1 and 2 can be regarded as coming from the same population.20

Table 12.4 Rankings of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
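The response-time example in Box 12.2 above can be checked with a one-sample t-test; here is a sketch using SciPy (the ten response times and the 360-second benchmark come from the quote, the code itself does not):

```python
# One-sample t-test against the 360-second performance benchmark.
from scipy import stats

response_times = [230, 450, 378, 430, 270, 470, 390, 300, 470, 530]
t_stat, p_value = stats.ttest_1samp(response_times, popmean=360)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # roughly t = 1.03, p = 0.33
# p > .05, so we fail to reject H0: no evidence the mean differs from 360 seconds.
# A nonparametric alternative for two independent groups, as noted above, is
# stats.mannwhitneyu(group1, group2).
```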
regression as dummy variables
Explain the importance of the error term plot
Identify assumptions of regression, and know how to test and correct assumption violations

Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables. This chapter uses multiple regression to examine such relationships, and thereby extends the discussion in Chapter 14. The popularity of multiple regression is due largely to the ease with which it takes control variables (or rival hypotheses) into account. In Chapter 10, we discussed briefly how contingency tables can be used for this purpose, but doing so is often a cumbersome and sometimes inconclusive effort. By contrast, multiple regression easily incorporates multiple independent variables. Another reason for its popularity is that it also takes into account nominal independent variables. However, multiple regression is no substitute for bivariate analysis. Indeed, managers or analysts with an interest in a specific bivariate relationship will conduct a bivariate analysis first, before examining whether the relationship is robust in the presence of numerous control variables. And before conducting bivariate analysis, analysts need to conduct univariate analysis to better understand their variables. Thus, multiple regression is usually one of the last steps of analysis. Indeed, multiple regression is often used to test the robustness of bivariate relationships when control variables are taken into account.

The flexibility with which multiple regression takes control variables into account comes at a price, though. Regression, like the t-test, is based on numerous assumptions. Regression results cannot be assumed to be robust in the face of assumption violations. Testing of assumptions is always part of multiple regression analysis. Multiple regression is carried out in the following sequence: (1) model specification (that is, identification of dependent and independent variables), (2) testing of regression assumptions, (3) correction of assumption violations, if any, and (4) reporting of the results of the final regression model. This chapter examines these four steps and discusses essential concepts related to simple and multiple regression. Chapters 16 and 17 extend this discussion by examining the use of logistic regression and time series analysis.

MODEL SPECIFICATION

Multiple regression is an extension of simple regression, but an important difference exists between the two methods: multiple regression aims for full model specification. This means that analysts seek to account for all of the variables that affect the dependent variable; by contrast, simple regression examines the effect of only one independent variable. Philosophically, the phrase identifying the key difference—“all of the variables that affect the dependent variable”—is divided into two parts. The first part involves identifying the variables that are of most (theoretical and practical) relevance in explaining the dependent
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
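A minimal multiple-regression sketch in the spirit of this excerpt, with an invented continuous predictor and a dummy (nominal) variable; it uses statsmodels and is an illustration, not the book's example:

```python
# Hypothetical multiple regression: continuous outcome, one continuous
# independent variable, and one dummy variable, estimated by OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
budget = rng.normal(50, 10, n)        # continuous independent variable (invented)
urban = rng.integers(0, 2, n)         # dummy variable: 1 = urban, 0 = rural
outcome = 2.0 + 0.5 * budget + 3.0 * urban + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([budget, urban]))
model = sm.OLS(outcome, X).fit()
print(model.summary())                # coefficients, t-tests, R-squared

# Checking assumptions: plot the residuals (error term) against fitted values,
# e.g., using model.resid and model.fittedvalues.
```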
popular religion produces shallow people. Several years ago, Bill McKibben wrote an article in Harper’s magazine that described the current condition of American Christianity:   Only 40 percent of Americans can name more than four of the Ten Commandments, and a scant half can cite any of the four authors of the Gospels. Twelve percent believe Joan of Arc was Noah’s wife. This failure to recall the specifics of our Christian heritage may be further evidence of our nation’s educational decline, but it probably doesn’t matter all that much in spiritual or political terms. Here is a statistic that does matter: Three quarters of Americans believe the Bible teaches that, “God helps those who help themselves.” That is, three out of four Americans believe that this uber-American idea, a notion at the core of our current individualist politics and culture, which was in fact uttered by Ben Franklin, actually appears in Holy Scripture. The thing is, not only is Franklin’s wisdom not biblical; it’s counterbiblical. Few ideas could be further from the gospel message, with its radical summons to love of neighbor. On this essential matter, most Americans—most American Christians—are simply wrong, as if 75 percent of American scientists believed that Newton proved gravity causes apples to fly up.6
Judson Edwards (Quiet Faith: An Introvert's Guide to Spiritual Survival)
The Mantle of Science

For a decade or so, A.A. grew modestly. But, lacking scientific confirmation, it remained a relatively small sectarian movement, occasionally receiving a boost in popular magazines. The great surge in the popularity of the A.A. disease concept came when it received what seemed to be impeccable scientific support. Two landmark articles by E. M. Jellinek, published in 1946 and 1952, proposed a scientific understanding of alcoholism that seemed to confirm major elements of the A.A. view.12 Jellinek, then a research professor in applied physiology at Yale University, was a distinguished biostatistician and one of the early leaders in the field of alcohol studies. In his first paper he presented some eighty pages of elaborately detailed description, statistics, and charts that depicted what he considered to be a typical or average alcoholic career. Jellinek cautioned his readers about the limited nature of his data, and he explicitly acknowledged differences among individual drinkers. But from the data's "suggestive" value, he proceeded to develop a vividly detailed hypothesis.
Herbert Fingarette (Heavy Drinking: The Myth of Alcoholism as a Disease)
Speaking on Stage Speakers and presenters have only a few short seconds before their audience members begin forming opinions. True professionals know that beginning with impact determines audience engagement, the energy in the room, positive feedback, the quality of the experience, and whether or not their performance will be a success. A few of the popular methods which you can use to break the ice from the stage are: • Using music. • Using quotes. • Telling a joke. • Citing statistics. • Showing a video. • Asking questions. • Stating a problem. • Sharing acronyms. • Sharing a personal story. • Laying down a challenge. • Using analogies and comparisons. • Taking surveys; raise your hand if . . . Once you refine, define, and discover great conversation starters, you will enjoy renewed confidence for communicating well with new people.
Susan C. Young (The Art of Communication: 8 Ways to Confirm Clarity & Understanding for Positive Impact(The Art of First Impressions for Positive Impact, #5))
The theory of descent presents biological science with a task that it attempts to fulfill by diligent and complicated experimental investigations. Mendel's law, verified by all observations, offers a basic statistical proposition about the facts of heredity. Further progress will most likely be connected with a precise delineation of the concepts involved and with the construction of axiomatic systems in the sense of the exact natural sciences. The short popular formulas and slogans and the philosophical and political generalizations based upon them are of no significance in this structure.
Richard von Mises (Positivism: A Study in Human Understanding)
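To make the "basic statistical proposition" concrete: a Mendelian monohybrid cross predicts a 3:1 ratio of dominant to recessive phenotypes, and a goodness-of-fit test checks observed counts against that prediction. A short Python sketch, using the commonly cited counts from Mendel's seed-shape experiment (our illustrative choice, not figures from the quote):

    from scipy.stats import chisquare

    observed = [5474, 1850]                    # round vs. wrinkled seeds (commonly cited counts)
    total = sum(observed)
    expected = [total * 3 / 4, total * 1 / 4]  # Mendel's law predicts a 3:1 ratio

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.3f}, p = {p_value:.3f}")
    # A large p-value means the counts are statistically consistent with the 3:1 prediction.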
In 1703, Gottfried von Leibniz commented to the Swiss scientist and mathematician Jacob Bernoulli that “[N]ature has established patterns originating in the return of events, but only for the most part,” thereby prompting Bernoulli to invent the Law of Large Numbers and methods of statistical sampling that drive modern activities as varied as opinion polling, wine tasting, stock picking, and the testing of new drugs. Leibniz’s admonition—“but only for the most part”—was more profound than he may have realized, for he provided the key to why there is such a thing as risk in the first place: without that qualification, everything would be predictable, and in a world where every event is identical to a previous event no change would ever occur. In 1730, Abraham de Moivre suggested the structure of the normal distribution—also known as the bell curve—and discovered the concept of standard deviation. Together, these two concepts make up what is popularly known as the Law of Averages and are essential ingredients of modern techniques for quantifying risk. Eight years later, Daniel Bernoulli, Jacob’s nephew and an equally distinguished mathematician and scientist, first defined the systematic process by which most people make choices and reach decisions. Even more important, he propounded the idea that the satisfaction resulting from any small increase in wealth “will be inversely proportionate to the quantity of goods previously possessed.” With that innocent-sounding assertion, Bernoulli explained why King Midas was an unhappy man, why people tend to be risk-averse, and why prices must fall if customers are to be persuaded to buy more.
Peter L. Bernstein (Against the Gods: The Remarkable Story of Risk)
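Two of the ideas Bernstein compresses into this passage are easy to see in a few lines of simulation. The sketch below reads Daniel Bernoulli's remark as logarithmic utility, which is the standard textbook gloss rather than a formula given in the quote; the coin-flip half illustrates the Law of Large Numbers and Leibniz's "only for the most part":

    import numpy as np

    rng = np.random.default_rng(0)

    # Law of Large Numbers: the running average of fair coin flips settles toward 0.5,
    # yet any finite sample still wobbles -- "but only for the most part".
    flips = rng.integers(0, 2, size=100_000)
    for n in (100, 1_000, 100_000):
        print(f"mean of first {n:>7,} flips: {flips[:n].mean():.4f}")

    # Daniel Bernoulli's remark read as log utility: an extra 1,000 of wealth adds
    # less satisfaction the richer you already are (the increment shrinks like 1/wealth).
    for wealth in (1_000, 10_000, 100_000):
        gain = np.log(wealth + 1_000) - np.log(wealth)
        print(f"utility gain of +1,000 at wealth {wealth:>7,}: {gain:.4f}")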
The emergence of trans-exclusionary radical feminism [TERF] in the 1970s, with its own version of trans panic, is only one of many trans-misogynistic echoes in recent history. TERFs... didn't invent trans misogyny, nor did they put a particularly novel spin on it... portrayal of trans femininity as violent and depressed could have been lifted from the British denunciation of hijras in the 1870s, or from Nazi propaganda about transvestites in the 1930s... Recent work by historians has cast doubt on whether these popular TERF beliefs were ever held outside a few loud agitators... If anything, TERFs, whether in the 1970s or in their contemporary "gender-critical" guise, are better understood as conventional boosters of statist and racist political institutions... TERFs, like the right-wing evangelicals or white supremacists who agree with them politically, are not the lynchpin to trans misogyny; rather, they are at best one of its latest manifestations.
Jules Gill-Peterson (A Short History of Trans Misogyny)
Kirsch and his colleagues did a second meta-analysis, this time on the 35 clinical trials conducted for four of the six most widely prescribed antidepressants approved between 1987 and 1999. Now looking at data from more than 5,000 patients, the researchers found again that placebos worked just as well as the popular antidepressant drugs Prozac, Effexor, Serzone, and Paxil a whopping 81 percent of the time. In most of the remaining cases where the drug did perform better, the benefit was so small that it wasn’t statistically significant. Only with severely depressed patients were the prescription drugs clearly better than placebo.
Joe Dispenza (You Are the Placebo: Making Your Mind Matter)
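The point that a small average benefit can fail to reach statistical significance is easy to illustrate with invented numbers (nothing below comes from Kirsch's trials): with realistic patient-to-patient variation, a modest drug-versus-placebo difference can easily fall short of the conventional p < 0.05 threshold.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)

    # Hypothetical improvement scores on a depression scale; the drug group is only
    # slightly better on average, while individual responses vary widely.
    placebo = rng.normal(loc=10.0, scale=8.0, size=60)
    drug = rng.normal(loc=11.5, scale=8.0, size=60)

    t_stat, p_value = ttest_ind(drug, placebo)
    print(f"mean difference: {drug.mean() - placebo.mean():.2f} points")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A difference this small relative to the spread usually yields p > 0.05,
    # i.e., it is not statistically significant.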
But the secret of Japan's success is simple - laissez faire. The Emperor may have initiated his capitalist Restoration in 1868, but after 1884 it became State policy to leave industry to its own devices. Having germinated it, the Japanese government understood that capitalism runs best unfettered. This is ironic, since the popular perception abroad has been that Japan has prospered because of MITI’s direction of the economy, whereas the reality is that since 1884 Japan's government has seen itself as the servant of industry, anxious to tax it and its customers as lightly as possible. A 1988 survey by the British Central Statistical Office showed that Japan was one of the three most lightly-taxed industrialised countries, the other two being the USA and Switzerland. Those three countries' governments only sequester 32-35 per cent of GNP compared to the British State's 44 per cent, the West German's 45 per cent, and the French 52 per cent.
Terence Kealey (The Economic Laws of Scientific Research)
great Belgian statistician Adolphe Quetelet. Quetelet was the person who popularized the idea of taking the “average” or “arithmetic mean” of a group, which was a revolutionary way to summarize complex data with a single number.
Tim Harford (The Data Detective: Ten Easy Rules to Make Sense of Statistics)
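For what it's worth, the summary Quetelet popularized takes one line of Python to compute; the heights below are invented for illustration:

    heights_cm = [162.0, 171.5, 168.2, 180.4, 175.1, 169.8]

    # The arithmetic mean: one number standing in for the whole group.
    mean_height = sum(heights_cm) / len(heights_cm)
    print(f"average height: {mean_height:.1f} cm")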
In 2017, three academics—Christian Deutscher, Eugen Dimant, and Brad Humphreys—caused uproar in the German parliament when they published a working paper claiming to have statistical evidence that there were irregular betting patterns associated with two Bundesliga referees officiating between 2011 and 2015.
Simon Kuper (Soccernomics: Why England Loses; Why Germany, Spain, and France Win; and Why One Day Japan, Iraq, and the United States Will Become Kings of the World's ... the Kings of the World's Most Popular Sport)
Adolf Hitler despised smoking. The Führer was no doubt pleased when German doctors discovered that cigarettes caused cancer. For obvious reasons, though, “hated by Nazis” was no impediment to the popularity of tobacco.
Tim Harford (The Data Detective: Ten Easy Rules to Make Sense of Statistics)
Grace Canceled: How Outrage is Destroying Lives, Ending Debate, and Endangering Democracy by Dana Loesch 4/5 stars Great book! Book summary: “Popular talk radio host and political activist Dana Loesch confronts the Left's zero-tolerance, accept-no-apologies ethos with a powerful call for a return to core American principles of grace, redemption, justice, and empathy. Diving deep into recent cases where public and private figures were shamed, fired, or boycotted for social missteps, Loesch shows us how the politics of outrage is fueling the breakdown of the American community. How do we find common ground without compromising? Loesch urges readers to meet the face of fury with grace, highlighting inspiring examples like Congressman Dan Crenshaw's appearance on Saturday Night Live.” “Socialists’ two favorite rhetorical tools are envy and shame, and the platform they build on is identity politics. It’s culturally sanctioned prejudice… Identity politics is a tactic of statists, who foster resentment and envy and then peddle the lie that a bigger government can make everything FAIRER. These feelings justify the cruelty inherent in identity politics. Democrats’ favorite tactic is smearing as a ‘racist’ anyone who disagrees with them, challenges their opinion, or simply exists while thinking different thoughts.” -p. 20 “Democrats still need the socialists to maintain power, but it’s a dangerous trade. Going explicitly socialist would doom the Democrats to the dustbin of history. Instead, they’re refashioning the party: It believes wealth is evil, government is your church and savior, and independence is selfishness. Virtue is extinct- ‘virtue signaling’ has replaced actual virtue.” -p. 24 “The socialist definition of social justice ignores merit, neuters ambition, and diminishes the equity of labor. Equal rewards for unequal effort is unjust and fosters resentment.” -pp. 26-7 “The state purports to act on behalf of ‘the common good’. But who defines the common good? It has long been the justification for monstrous acts by totalitarian governments. ...In this way, the common good becomes an excuse for total state control. That was the excuse on which totalitarianism was built. You can achieve the common goal better if there is a total authority, and you must then limit the desires and wishfulness of individuals.” -p. 27 “Socialism is the enemy of charity because it outsources all compassion and altruism to the state. Out of sight, out of mind, they may think-- an overarching theme throughout socialism and communism (and one is just a stepping-stone to the other)... What need is there for personal ambition if government will provide, albeit meagerly, for all your needs from cradle to grave?” -pp. 32-3
Dana Loesch (Grace Canceled: How Outrage is Destroying Lives, Ending Debate, and Endangering Democracy)
My second decision, to leave out statistical esoterica, was made with much less regret. I don’t mention confidence intervals, sample sizes, p values, and similar devices in Dataclysm because the book is above all a popularization of data and data science. Mathematical wonkiness wasn’t what I wanted to get across.
Christian Rudder (Dataclysm: Love, Sex, Race, and Identity--What Our Online Lives Tell Us about Our Offline Selves)