Popular Hypothesis Quotes

We've searched our database for all the quotes and captions related to Popular Hypothesis. Here they are! All 42 of them:

The process which, if not checked, will abolish Man goes on apace among Communists and Democrats no less than among Fascists. The methods may (at first) differ in brutality. But many a mild-eyed scientist in pince-nez, many a popular dramatist, many an amateur philosopher in our midst, means in the long run just the same as the Nazi rulers of Germany: 'Traditional values are to be debunked' and mankind to be cut out into some fresh shape at the will (which must, by hypothesis, be an arbitrary will) of some few lucky people in one lucky generation which has learned how to do it.
C.S. Lewis (The Abolition of Man)
Hypothesis, my dear young friend, establishes itself by a cumulative process: or, to use popular language, if you make the same guess often enough it ceases to be a guess and becomes a Scientific Fact.
C.S. Lewis (The Pilgrim's Regress)
It matters not to an empiricist from what quarter an hypothesis may come to him: he may have acquired it by fair means or by foul; passion may have whispered or accident suggested it; but if the total drift of thinking continues to confirm it, that is what he means by its being true.
William James (The Will to Believe, Human Immortality and Other Essays in Popular Philosophy)
The process which, if not checked, will abolish Man goes on apace among Communists and Democrats no less than among Fascists. The methods may (at first) differ in brutality. But many a mild-eyed scientist in pince-nez, many a popular dramatist, many an amateur philosopher in our midst, means in the long run just the same as the Nazi rulers of Germany. Traditional values are to be ‘debunked’ and mankind to be cut out into some fresh shape at the will (which must, by hypothesis, be an arbitrary will) of some few lucky people in one lucky generation which has learned how to do it. The belief that we can invent ‘ideologies’ at pleasure, and the consequent treatment of mankind as mere ὕλη, specimens, preparations, begins to affect our very language. Once we killed bad men: now we liquidate unsocial elements. Virtue has become integration and diligence dynamism, and boys likely to be worthy of a commission are ‘potential officer material’. Most wonderful of all, the virtues of thrift and temperance, and even of ordinary intelligence, are sales-resistance.
C.S. Lewis (The Abolition of Man)
There was, for example, the theory that A'Tuin had come from nowhere and would continue at a uniform crawl, or steady gait, into nowhere, for all time. This theory was popular among academics. An alternative, favoured by those of a religious persuasion, was that A'Tuin was crawling from the Birthplace to the Time of Mating, as were all the stars in the sky which were, obviously, also carried by giant turtles. When they arrived they would briefly and passionately mate, for the first and only time, and from that fiery union new turtles would be born to carry a new pattern of worlds. This was known as the Big Bang hypothesis.
Terry Pratchett (The Color of Magic (Discworld, #1; Rincewind, #1))
Optimists Optimism is normal, but some fortunate people are more optimistic than the rest of us. If you are genetically endowed with an optimistic bias, you hardly need to be told that you are a lucky person—you already feel fortunate. An optimistic attitude is largely inherited, and it is part of a general disposition for well-being, which may also include a preference for seeing the bright side of everything. If you were allowed one wish for your child, seriously consider wishing him or her optimism. Optimists are normally cheerful and happy, and therefore popular; they are resilient in adapting to failures and hardships, their chances of clinical depression are reduced, their immune system is stronger, they take better care of their health, they feel healthier than others and are in fact likely to live longer. A study of people who exaggerate their expected life span beyond actuarial predictions showed that they work longer hours, are more optimistic about their future income, are more likely to remarry after divorce (the classic “triumph of hope over experience”), and are more prone to bet on individual stocks. Of course, the blessings of optimism are offered only to individuals who are only mildly biased and who are able to “accentuate the positive” without losing track of reality. Optimistic individuals play a disproportionate role in shaping our lives. Their decisions make a difference; they are the inventors, the entrepreneurs, the political and military leaders—not average people. They got to where they are by seeking challenges and taking risks. They are talented and they have been lucky, almost certainly luckier than they acknowledge. They are probably optimistic by temperament; a survey of founders of small businesses concluded that entrepreneurs are more sanguine than midlevel managers about life in general. Their experiences of success have confirmed their faith in their judgment and in their ability to control events. 
Their self-confidence is reinforced by the admiration of others. This reasoning leads to a hypothesis: the people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realize.
Daniel Kahneman (Thinking, Fast and Slow)
The media-contamination hypothesis usually focuses on the book Michelle Remembers (Smith and Pazder, 1980) and the movie Rosemary's Baby. These images were in the popular culture for centuries before survivor memories started to surface in therapy; therefore, the media-contamination hypothesis fails to account for the time lag and cannot provide a full account of the phenomenon.
Colin A. Ross (Satanic Ritual Abuse: Principles of Treatment)
The idea that depression is caused by low serotonin levels in the brain is now deeply embedded in popular folklore, and people with no neuroscience background at all will routinely incorporate phrases about it into everyday discussion of their mood, just to keep their serotonin levels up. Many people also know that this is how antidepressant drugs work: depression is caused by low serotonin, so you need drugs which raise the serotonin levels in your brain, like SSRI antidepressants, which are 'selective serotonin reuptake inhibitors'. But this theory is wrong. The 'serotonin hypothesis' for depression, as it is known, was always shaky, and the evidence now is hugely contradictory ... But in popular culture the depression-serotonin theory is proven and absolute, because it has been marketed so effectively.
Ben Goldacre (Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients)
Recently a group of researchers conducted a computer analysis of three decades of hit songs. The researchers reported a statistically significant trend toward narcissism and hostility in popular music. In line with their hypothesis, they found a decrease in usages such as we and us and an increase in I and me. The researchers also reported a decline in words related to social connection and positive emotions, and an increase in words related to anger and antisocial behavior, such as hate or kill.
Brené Brown (Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead)
The process which, if not checked, will abolish Man goes on apace among Communists and Democrats no less than among Fascists. The methods may (at first) differ in brutality. But many a mild-eyed scientist in pince-nez, many a popular dramatist, many an amateur philosopher in our midst, means in the long run just the same as the Nazi rulers of Germany. Traditional values are to be ‘debunked’ and mankind to be cut out into some fresh shape at the will (which must, by hypothesis, be an arbitrary will) of some few lucky people in one lucky generation which has learned how to do it.
C.S. Lewis (The Abolition of Man)
A pithier version of the Bayesian argument against paranormal claims was stated by the astronomer and science popularizer Carl Sagan (1934–1996) in the slogan that serves as this chapter’s epigraph: “Extraordinary claims require extraordinary evidence.” An extraordinary claim has a low Bayesian prior. For its posterior credence to be higher than the posterior credence in its opposite, the likelihood of the data given that the hypothesis is true must be far higher than the likelihood of the data given that the hypothesis is false. The evidence, in other words, must be extraordinary.
Steven Pinker (Rationality: What It Is, Why It Seems Scarce, Why It Matters)
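Pinker's Bayesian point can be made concrete with a few lines of arithmetic. In the sketch below the prior and likelihoods are illustrative assumptions (not from the text): even evidence nine times likelier under the claim than under its negation barely moves a very low prior.

```python
# Sagan's slogan in Bayes' rule terms: an extraordinary claim has a low prior,
# so merely ordinary evidence leaves the posterior small.
prior = 0.001                # prior credence in the extraordinary claim
p_data_given_h = 0.90        # likelihood of the evidence if the claim is true
p_data_given_not_h = 0.10    # likelihood if the claim is false (likelihood ratio = 9)

posterior = (p_data_given_h * prior) / (
    p_data_given_h * prior + p_data_given_not_h * (1 - prior))
print(round(posterior, 4))   # 0.0089: still under 1 percent
```

To overturn such a prior, the likelihood ratio itself must be extraordinary, which is Sagan's slogan restated.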
different subject. The story of the serotonin hypothesis for depression, and its enthusiastic promotion by drug companies, is part of a wider process that has been called ‘disease-mongering’ or ‘medicalisation’, where diagnostic categories are widened, whole new diagnoses are invented, and normal variants of human experience are pathologised, so they can be treated with pills. One simple illustration of this is the recent spread of ‘checklists’ enabling the public to diagnose, or help diagnose, various medical conditions. In 2010, for example, the popular website WebMD launched a new test: ‘Rate your risk for depression: could you be depressed?’ It was funded by Eli Lilly, manufacturers of the antidepressant duloxetine, and this was duly declared on the page, though that doesn’t reduce the absurdity of what followed. The test consisted of ten questions, such as: ‘I feel sad or down most of the time’; ‘I feel tired almost every day’; ‘I have trouble concentrating’; ‘I feel worthless or hopeless’; ‘I find myself thinking a lot about dying’; and so on. If you answered ‘no’ to every single one of these questions – every single one – and then pressed ‘Submit’, the response was clear: ‘You may be at risk for major depression’.
Ben Goldacre (Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients)
Presumably, it won’t be only one way. Even before the age of climate change, the literature of conservation furnished many metaphors to choose from. James Lovelock gave us the Gaia hypothesis, which conjured an image of the world as a single, evolving quasi-biological entity. Buckminster Fuller popularized “spaceship earth,” which presents the planet as a kind of desperate life raft in what Archibald MacLeish called “the enormous, empty night”; today, the phrase suggests a vivid picture of a world spinning through the solar system barnacled with enough carbon capture plants to actually stall out warming, or even reverse it, restoring as if by magic the breathability of the air between the machines. The Voyager 1 space probe gave us the “Pale Blue Dot”—the inescapable smallness, and fragility, of the entire experiment we’re engaged in, together, whether we like it or not. Personally, I think that climate change itself offers the most invigorating picture, in that even its cruelty flatters our sense of power, and in so doing calls the world, as one, to action. At least I hope it does. But that is another meaning of the climate kaleidoscope. You can choose your metaphor. You can’t choose the planet, which is the only one any of us will ever call home.
David Wallace-Wells (The Uninhabitable Earth: Life After Warming)
In the meantime they could only speculate about the revealed cosmos. There was, for example, the theory that A'Tuin had come from nowhere and would continue at a uniform crawl, or steady gait, into nowhere, for all time. This theory was popular among academics. An alternative, favoured by those of a religious persuasion, was that A'Tuin was crawling from the Birthplace to the Time of Mating, as were all the stars in the sky which were, obviously, also carried by giant turtles. When they arrived they would briefly and passionately mate, for the first and only time, and from that fiery union new turtles would be born to carry a new pattern of worlds. This was known as the Big Bang hypothesis.
Terry Pratchett
Since belief is measured by action, he who forbids us to believe religion to be true, necessarily also forbids us to act as we should if we did believe it to be true. The whole defence of religious faith hinges upon action. If the action required or inspired by the religious hypothesis is in no way different from that dictated by the naturalistic hypothesis, then religious faith is a pure superfluity, better pruned away, and controversy about its legitimacy is a piece of idle trifling, unworthy of serious minds. I myself believe, of course, that the religious hypothesis gives to the world an expression which specifically determines our reactions, and makes them in a large part unlike what they might be on a purely naturalistic scheme of belief.
William James (The Will to Believe, Human Immortality and Other Essays in Popular Philosophy)
Probably the most popular and attractive hypothesis is that modern humans had developed advanced language capabilities and therefore were able to talk the Neanderthals to death. This idea has a lot going for it. It’s easy to imagine ways in which superior language abilities could have conferred advantages, particularly at the level of the band or tribe. For example, hunter-gatherers today are well known for having a deep knowledge of the local landscape and of the appearance and properties of many local plants and animals. This includes knowledge of rare but important events that happened more than a human lifetime ago, which may have been particularly important in the unstable climate of the Ice Age. It is hard to see how that kind of information transmission across generations would be possible in the absence of sophisticated language. Without it, there may have been distinct limits on cultural complexity, which, among other things, would have meant limits on the sophistication of tools and weapons.
Gregory Cochran (The 10,000 Year Explosion: How Civilization Accelerated Human Evolution)
Page 25: …Maimonides was also an anti-Black racist. Towards the end of the [Guide to the Perplexed], in a crucial chapter (book III, chapter 51) he discusses how various sections of humanity can attain the supreme religious value, the true worship of God. Among those who are incapable of even approaching this are: "Some of the Turks [i.e., the Mongol race] and the nomads in the North, and the Blacks and the nomads in the South, and those who resemble them in our climates. And their nature is like the nature of mute animals, and according to my opinion they are not on the level of human beings, and their level among existing things is below that of a man and above that of a monkey, because they have the image and the resemblance of a man more than a monkey does." Now, what does one do with such a passage in a most important and necessary work of Judaism? Face the truth and its consequences? God forbid! Admit (as so many Christian scholars, for example, have done in similar circumstances) that a very important Jewish authority held also rabid anti-Black views, and by this admission make an attempt at self-education in real humanity? Perish the thought. I can almost imagine Jewish scholars in the USA consulting among themselves, ‘What is to be done?’ – for the book had to be translated, due to the decline in the knowledge of Hebrew among American Jews. Whether by consultation or by individual inspiration, a happy ‘solution’ was found: in the popular American translation of the Guide by one Friedlander, first published as far back as 1925 and since then reprinted in many editions, including several in paperback, the Hebrew word Kushim, which means Blacks, was simply transliterated and appears as ‘Kushites’, a word which means nothing to those who have no knowledge of Hebrew, or to whom an obliging rabbi will not give an oral explanation. 
During all these years, not a word has been said to point out the initial deception or the social facts underlying its continuation – and this throughout the excitement of Martin Luther King’s campaigns, which were supported by so many rabbis, not to mention other Jewish figures, some of whom must have been aware of the anti-Black racist attitude which forms part of their Jewish heritage. Surely one is driven to the hypothesis that quite a few of Martin Luther King’s rabbinical supporters were either anti-Black racists who supported him for tactical reasons of ‘Jewish interest’ (wishing to win Black support for American Jewry and for Israel’s policies) or were accomplished hypocrites, to the point of schizophrenia, capable of passing very rapidly from a hidden enjoyment of rabid racism to a proclaimed attachment to an anti-racist struggle – and back – and back again.
Israel Shahak (Jewish History, Jewish Religion: The Weight of Three Thousand Years)
different from 3.5. However, it is different from larger values, such as 4.0 (t = 2.89, df = 9, p = .019). Another example of this is provided in the Box 12.2. Finally, note that the one-sample t-test is identical to the paired-samples t-test for testing whether the mean D = 0. Indeed, the one-sample t-test for D = 0 produces the same results (t = 2.43, df = 9, p = .038). In Greater Depth … Box 12.2 Use of the T-Test in Performance Management: An Example Performance benchmarking is an increasingly popular tool in performance management. Public and nonprofit officials compare the performance of their agencies with performance benchmarks and draw lessons from the comparison. Let us say that a city government requires its fire and medical response unit to maintain an average response time of 360 seconds (6 minutes) to emergency requests. The city manager has suspected that the growth in population and demands for the services have slowed down the responses recently. He draws a sample of 10 response times in the most recent month: 230, 450, 378, 430, 270, 470, 390, 300, 470, and 530 seconds, for a sample mean of 392 seconds. He performs a one-sample t-test to compare the mean of this sample with the performance benchmark of 360 seconds. The null hypothesis of this test is that the sample mean is equal to 360 seconds, and the alternate hypothesis is that they are different. The result (t = 1.030, df = 9, p = .330) shows a failure to reject the null hypothesis at the 5 percent level, which means that we don’t have sufficient evidence to say that the average response time is different from the benchmark 360 seconds. We cannot say that current performance of 392 seconds is significantly different from the 360-second benchmark. Perhaps more data (samples) are needed to reach such a conclusion, or perhaps too much variability exists for such a conclusion to be reached. 
NONPARAMETRIC ALTERNATIVES TO T-TESTS The tests described in the preceding sections have nonparametric alternatives. The chief advantage of these tests is that they do not require continuous variables to be normally distributed. The chief disadvantage is that they are less likely to reject the null hypothesis. A further, minor disadvantage is that these tests do not provide descriptive information about variable means; separate analysis is required for that. Nonparametric alternatives to the independent-samples test are the Mann-Whitney and Wilcoxon tests. The Mann-Whitney and Wilcoxon tests are equivalent and are thus discussed jointly. Both are simplifications of the more general Kruskal-Wallis’ H test, discussed in Chapter 11.19 The Mann-Whitney and Wilcoxon tests assign ranks to the testing variable in the exact manner shown in Table 12.4. The sum of the ranks of each group is computed, shown in the table. Then a test is performed to determine the statistical significance of the difference between the sums, 22.5 and 32.5. Although the Mann-Whitney U and Wilcoxon W test statistics are calculated differently, they both have the same level of statistical significance: p = .295. Technically, this is not a test of different means but of different distributions; the lack of significance implies that groups 1 and 2 can be regarded as coming from the same population.20 Table 12.4 Rankings of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
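The one-sample t-test in Box 12.2 can be reproduced with only Python's standard library. The data and benchmark are taken from the excerpt; the critical value 2.262 is the standard two-tailed 5 percent figure for df = 9.

```python
import math
from statistics import mean, stdev

# Response times (seconds) from Box 12.2; the benchmark is 360 seconds
times = [230, 450, 378, 430, 270, 470, 390, 300, 470, 530]
benchmark = 360

n = len(times)
xbar = mean(times)                  # sample mean, 391.8
se = stdev(times) / math.sqrt(n)    # standard error of the mean
t = (xbar - benchmark) / se         # one-sample t statistic, about 1.030

# Two-tailed 5 percent critical value for df = 9 (from a t table)
critical = 2.262
reject = abs(t) > critical
print(round(xbar, 1), round(t, 3), reject)  # 391.8 1.03 False
```

As in the excerpt, |t| falls well short of the critical value, so the null hypothesis (that the mean response time equals the benchmark) is not rejected.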
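The rank-assignment step behind the Mann-Whitney and Wilcoxon tests described above can also be sketched. The scores below are hypothetical (the excerpt's Table 12.4 is not reproduced); the midrank rule, with tied values sharing the average of their ranks, is the standard one.

```python
def midranks(values):
    """Rank 1..n over the pooled sample; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j as 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

group1 = [12, 15, 15, 18, 20]   # hypothetical scores, group 1
group2 = [15, 21, 22, 24, 25]   # hypothetical scores, group 2
pooled = group1 + group2
r = midranks(pooled)
w1 = sum(r[: len(group1)])      # rank sum, group 1
w2 = sum(r[len(group1):])       # rank sum, group 2
u1 = w1 - len(group1) * (len(group1) + 1) / 2  # Mann-Whitney U for group 1
print(w1, w2, u1)               # 18.0 37.0 3.0
```

The rank sums always total n(n + 1)/2 over the pooled sample (here 55), a handy check; the statistic's significance would then be read from a table or computed by software.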
usually does not present much of a problem. Some analysts use t-tests with ordinal rather than continuous data for the testing variable. This approach is theoretically controversial because the distances among ordinal categories are undefined. This situation is avoided easily by using nonparametric alternatives (discussed later in this chapter). Also, when the grouping variable is not dichotomous, analysts need to make it so in order to perform a t-test. Many statistical software packages allow dichotomous variables to be created from other types of variables, such as by grouping or recoding ordinal or continuous variables. The second assumption is that the variances of the two distributions are equal. This is called homogeneity of variances. The use of pooled variances in the earlier formula is justified only when the variances of the two groups are equal. When variances are unequal (called heterogeneity of variances), revised formulas are used to calculate t-test test statistics and degrees of freedom.7 The difference between homogeneity and heterogeneity is shown graphically in Figure 12.2. Although we needn’t be concerned with the precise differences in these calculation methods, all t-tests first test whether variances are equal in order to know which t-test test statistic is to be used for subsequent hypothesis testing. Thus, every t-test involves a (somewhat tricky) two-step procedure. A common test for the equality of variances is the Levene’s test. The null hypothesis of this test is that variances are equal. Many statistical software programs provide the Levene’s test along with the t-test, so that users know which t-test to use—the t-test for equal variances or that for unequal variances. The Levene’s test is performed first, so that the correct t-test can be chosen. [Figure 12.2: Equal and Unequal Variances] The term robust is used, generally, to describe the extent to which test conclusions are unaffected by departures from test assumptions. 
T-tests are relatively robust for (hence, unaffected by) departures from assumptions of homogeneity and normality (see below) when groups are of approximately equal size. When groups are of about equal size, test conclusions about any difference between their means will be unaffected by heterogeneity. The third assumption is that observations are independent. (Quasi-) experimental research designs violate this assumption, as discussed in Chapter 11. The formula for the t-test test statistic, then, is modified to test whether the difference between before and after measurements is zero. This is called a paired t-test, which is discussed later in this chapter. The fourth assumption is that the distributions are normally distributed. Although normality is an important test assumption, a key reason for the popularity of the t-test is that t-test conclusions often are robust against considerable violations of normality assumptions that are not caused by highly skewed distributions. We provide some detail about tests for normality and how to address departures thereof. Remember, when nonnormality cannot be resolved adequately, analysts consider nonparametric alternatives to the t-test, discussed at the end of this chapter. Box 12.1 provides a bit more discussion about the reason for this assumption. A combination of visual inspection and statistical tests is always used to determine the normality of variables. Two tests of normality are the Kolmogorov-Smirnov test (also known as the K-S test) for samples with more than 50 observations and the Shapiro-Wilk test for samples with up to 50 observations. The null hypothesis of
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
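The homogeneity-of-variances idea can be illustrated with a small sketch. In practice one uses a statistics package's Levene test; the code below, with hypothetical data, shows only the core transformation of the closely related Brown-Forsythe variant: replace each observation with its absolute deviation from the group median, after which a difference in spread becomes a difference in means that an ordinary mean-comparison test can detect.

```python
from statistics import median, mean

group1 = [4, 5, 5, 6, 7]    # hypothetical, tightly clustered
group2 = [1, 3, 6, 9, 11]   # hypothetical, widely spread

def abs_devs(group):
    """Absolute deviations of each observation from its group median."""
    m = median(group)
    return [abs(x - m) for x in group]

d1, d2 = abs_devs(group1), abs_devs(group2)
# Clearly unequal mean deviations signal heterogeneity of variances
print(mean(d1), mean(d2))   # 0.8 3.2
```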
The test statistics of a t-test can be positive or negative, although this depends merely on which group has the larger mean; the sign of the test statistic has no substantive interpretation. Critical values (see Chapter 10) of the t-test are shown in Appendix C as (Student’s) t-distribution.4 For this test, the degrees of freedom are defined as n – 1, where n is the total number of observations for both groups. The table is easy to use. As mentioned below, most tests are two-tailed tests, and analysts find critical values in the columns for the .05 (5 percent) and .01 (1 percent) levels of significance. For example, the critical value at the 1 percent level of significance for a test based on 25 observations (df = 25 – 1 = 24) is 2.797 (and 2.064 at the 5 percent level of significance). Though the table also shows critical values at other levels of significance, these are seldom if ever used. The table shows that the critical value decreases as the number of observations increases, making it easier to reject the null hypothesis. The t-distribution shows one- and two-tailed tests. Two-tailed t-tests should be used when analysts do not have prior knowledge about which group has a larger mean; one-tailed t-tests are used when analysts do have such prior knowledge. This choice is dictated by the research situation, not by any statistical criterion. In practice, two-tailed tests are used most often, unless compelling a priori knowledge exists or it is known that one group cannot have a larger mean than the other. Two-tailed testing is more conservative than one-tailed testing because the critical values of two-tailed tests are larger, thus requiring larger t-test test statistics in order to reject the null hypothesis.5 Many statistical software packages provide only two-tailed testing.
The above null hypothesis (men and women do not have different mean incomes in the population) requires a two-tailed test because we do not know, a priori, which gender has the larger income.6 Finally, note that the t-test distribution approximates the normal distribution for large samples: the critical values of 1.96 (5 percent significance) and 2.58 (1 percent significance), for large degrees of freedom (∞), are identical to those of the normal distribution. [Getting Started: Find examples of t-tests in the research literature.] T-Test Assumptions: Like other tests, the t-test has test assumptions that must be met to ensure test validity. Statistical testing always begins by determining whether test assumptions are met before examining the main research hypotheses. Although t-test assumptions are a bit involved, the popularity of the t-test rests partly on the robustness of t-test conclusions in the face of modest violations. This section provides an in-depth treatment of t-test assumptions, methods for testing the assumptions, and ways to address assumption violations. Of course, t-test statistics are calculated by the computer; thus, we focus on interpreting concepts (rather than their calculation). [Key Point: The t-test is fairly robust against assumption violations.] Four t-test test assumptions must be met to ensure test validity: One variable is continuous, and the other variable is dichotomous. The two distributions have equal variances. The observations are independent. The two distributions are normally distributed. The first assumption, that one variable is continuous and the other dichotomous,
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
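The excerpt's observation that for large degrees of freedom the t critical values match the normal distribution's 1.96 and 2.58 can be checked directly with Python's standard library:

```python
from statistics import NormalDist

z = NormalDist()                  # standard normal distribution
crit_5 = z.inv_cdf(0.975)         # two-tailed 5 percent critical value
crit_1 = z.inv_cdf(0.995)         # two-tailed 1 percent critical value
print(round(crit_5, 2), round(crit_1, 2))  # 1.96 2.58
```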
The causes of many mismatch diseases, such as smoking cigarettes or drinking too much soda, are popular because they provide immediate pleasures that override concerns about or rational valuations of their long-term consequences. In addition, there is a strong incentive for manufacturers and advertisers to cater to our evolved desires and sell us products that increase our convenience, comfort, efficiency, and pleasure—or that carry the illusion of being advantageous. Junk food is popular for a reason. If you are like me, you use commercial products nearly twenty-four hours a day, even when you are asleep. Many of these products, like the chair I am sitting on, make me feel good, but not all of them are healthy for my body. The hypothesis of dysevolution predicts that as long as we accept or cope with the symptoms of the problems these products create, often thanks to other products, and as long as the benefits exceed the costs, then we will continue to buy and use them and pass them on to our children, keeping the cycle going long after we are gone.
Daniel E. Lieberman (The Story of the Human Body: Evolution, Health and Disease)
In this book, you will encounter various interesting geometries that have been thought to hold the keys to the universe. Galileo Galilei (1564-1642) suggested that "Nature's great book is written in mathematical symbols." Johannes Kepler (1571-1630) modeled the solar system with Platonic solids such as the dodecahedron. In the 1960s, physicist Eugene Wigner (1902-1995) was impressed with the "unreasonable effectiveness of mathematics in the natural sciences." Large Lie groups, like E8 (which is discussed in the entry "The Quest for Lie Group E8 (2007)"), may someday help us create a unified theory of physics. In 2007, Swedish American cosmologist Max Tegmark published both scientific and popular articles on the mathematical universe hypothesis, which states that our physical reality is a mathematical structure; in other words, our universe is not just described by mathematics, it is mathematics.
Clifford A. Pickover (The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics (Union Square & Co. Milestones))
...The other picture is of three Europeans in India looking at a great new star in the Milky Way. These were apparently all of the guests at a large dance who were interested in such matters. Amongst those who were at all competent to form views as to the origin of this cosmoclastic explosion, the most popular theory attributed it to a collision between two stars, or a star and a nebula. There seem, however, to be at least two possible alternatives to this hypothesis. Perhaps it was the last judgement of some inhabited world, perhaps a too successful experiment in induced radioactivity on the part of some of the dwellers there. And perhaps also these two hypotheses are identical, and what we were watching that evening was the detonation of a world on which too many men came out to look at the stars when they should have been dancing.
J.B.S. Haldane
Moral questions immediately present themselves as questions whose solution cannot wait for sensible proof. A moral question is a question not of what sensibly exists, but of what is good, or would be good if it did exist. Science can tell us what exists; but to compare the worths, both of what exists and of what does not exist, we must consult not science, but what Pascal calls our heart. Science herself consults her heart when she lays it down that the infinite ascertainment of fact and correction of false belief are the supreme goods for man. Challenge the statement, and science can only repeat it oracularly, or else prove it by showing that such ascertainment and correction bring man all sorts of other goods which man's heart in turn declares. The question of having moral beliefs at all or not having them is decided by our will. Are our moral preferences true or false, or are they only odd biological phenomena, making things good or bad for us, but in themselves indifferent? How can your pure intellect decide? If your heart does not want a world of moral reality, your head will assuredly never make you believe in one. Mephistophelian skepticism, indeed, will satisfy the head's play-instincts much better than any rigorous idealism can. Some men (even at the student age) are so naturally cool-hearted that the moralistic hypothesis never has for them any pungent life, and in their supercilious presence the hot young moralist always feels strangely ill at ease. The appearance of knowingness is on their side, of naïveté and gullibility on his. Yet, in the inarticulate heart of him, he clings to it that he is not a dupe, and that there is a realm in which (as Emerson says) all their wit and intellectual superiority is no better than the cunning of a fox. Moral skepticism can no more be refuted or proved by logic than intellectual skepticism can. 
When we stick to it that there is truth (be it of either kind), we do so with our whole nature, and resolve to stand or fall by the results. The skeptic with his whole nature adopts the doubting attitude; but which of us is the wiser, Omniscience only knows.
William James (The Will to Believe and Other Essays in Popular Philosophy, and Human Immortality)
The Mantle of Science For a decade or so, A.A. grew modestly. But, lacking scientific confirmation, it remained a relatively small sectarian movement, occasionally receiving a boost in popular magazines. The great surge in the popularity of the A.A. disease concept came when it received what seemed to be impeccable scientific support. Two landmark articles by E. M. Jellinek, published in 1946 and 1952, proposed a scientific understanding of alcoholism that seemed to confirm major elements of the A.A. view.12 Jellinek, then a research professor in applied physiology at Yale University, was a distinguished biostatistician and one of the early leaders in the field of alcohol studies. In his first paper he presented some eighty pages of elaborately detailed description, statistics, and charts that depicted what he considered to be a typical or average alcoholic career. Jellinek cautioned his readers about the limited nature of his data, and he explicitly acknowledged differences among individual drinkers. But from the data's "suggestive" value, he proceeded to develop a vividly detailed hypothesis.
Herbert Fingarette (Heavy Drinking: The Myth of Alcoholism as a Disease)
Framework hypothesis. Dr. Meredith Kline (1922–2007), who accepted many evolutionary ideas, popularized this view in America.24 It is very common in many seminaries today. Those who hold to framework treat Genesis 1 as a literary device (think poetic or semi-poetic), with the first three days paralleling and equating to the last three days of creation. These days are not seen as 24-hour days but are taken as metaphorical or allegorical to allow for ideas like evolution/millions of years to be entertained. Hence, Genesis 1 is treated as merely being a literary device to teach that God created everything (essentially in 3 days25).26 However, Genesis 1 is not written as poetry but as literal history.
Ken Ham (A Flood of Evidence: 40 Reasons Noah and the Ark Still Matter)
Overcrowding works in a different way for creators than for viewers. For creators, the problem becomes—how do you stand out? How do you get your videos watched? This is particularly acute for new creators, who face a “rich get richer” phenomenon. Across many categories of networked products, when early users join a network and start producing value, algorithms naturally reward them—and this is a good thing. When they do a good job, perhaps they earn five-star ratings, or they quickly gain lots of followers. Perhaps they get featured, or are ranked highly in popularity lists. This helps consumers find what they want, quickly, but the downside is that the already popular just get more popular. Eventually, the problem becomes, how does a new member of the network break in? If everyone else has millions of followers, or thousands of five-star reviews, it can be hard. Eugene Wei, former CTO of Hulu and noted product thinker, writes about the “Old Money” in the context of social networks, arguing that established networks are harder for new users to break into: Some networks reward those who gain a lot of followers early on with so much added exposure that they continue to gain more followers than other users, regardless of whether they’ve earned it through the quality of their posts. One hypothesis on why social networks tend to lose heat at scale is that this type of old money can’t be cleared out, and new money loses the incentive to play the game. It’s not that the existence of old money or old social capital dooms a social network to inevitable stagnation, but a social network should continue to prioritize distribution for the best content, whatever the definition of quality, regardless of the vintage of user producing it. 
Otherwise a form of social capital inequality sets in, and in the virtual world, where exit costs are much lower than in the real world, new users can easily leave for a new network where their work is more properly rewarded and where status mobility is higher.75 This is true for social networks and also true for marketplaces, app stores, and other networked products as well. Ratings systems, reviews, followers, advertising systems all reinforce this, giving the most established members of a network dominance over everyone else. High-quality users hogging all of the attention is the good version of the problem, but the bad version is much more problematic: What happens, particularly for social products, when the most controversial and opinionated users are rewarded with positive feedback loops? Or when purveyors of low-quality apps in a developer platform—like the Apple AppStore’s initial proliferation of fart apps—are downloaded by users and ranked highly in charts? Ultimately, these loops need to be broken; otherwise your network may go in a direction you don’t want.
Andrew Chen (The Cold Start Problem: How to Start and Scale Network Effects)
Fringe (2008–2013) and Counterpart (2017–2018)— In the twenty-first century, two popular TV shows demonstrate the idea of a single parallel world that has somehow split off from this world, but retains many similarities, including a shared history. The source of the divergence is never explained fully, but the existence of a parallel world with alternate versions of the main characters is a key plot point in both. Both shows reveal that some physics phenomenon was responsible for either (1) breaching a way into the other universe or (2) causing a branch off the main universe to create the second one.
Rizwan Virk (The Simulated Multiverse: An MIT Computer Scientist Explores Parallel Universes, The Simulation Hypothesis, Quantum Computing and the Mandela Effect)
But the sovereign cannot make the whole of his presence felt to keep his regents in their duty. Therefore he needs a controlling body; this body, whether its place is above the government or at his side, will in time try to seize it, thus joining in one the two capacities of regent and overseer, and thereby securing for itself unlimited authority of command. This danger leads to a multiplication of precautions; the Power and its controller are, by a division of functions, or a rapid succession of officeholders, crumbled up into small pieces, a cause of weakness and disorder in the administration of society's business. Then, inevitably, the disorder and weakness, becoming at length intolerable, bring together again the crumbled pieces of sovereignty - and there is Power, armed now with despotic authority. The wider the conception held in the time when the monopoly of it seemed a vain imagining, of the right of sovereignty, the harsher will be the despotism. If the view is that a community's laws admit of no modification whatsoever, the laws will contain the despot. Or if the view is that something of these laws, corresponding to the ordinances of God, is immutable, that part at least will remain fast. And now we begin to see that popular sovereignty may give birth to a more formidable despotism than divine sovereignty. For a tyrant, whether he be one or many, who has, by hypothesis, successfully usurped one or the other sovereignty, cannot avail himself of the Divine Will, which shows itself to men under the forms of Law Eternal, to command whatever he pleases. Whereas the popular will has no natural stability, but is changeable. So far from being tied to a law, its voice may be heard in laws which change and succeed each other. So that a usurping Power has, in such a case, more elbow-room; it enjoys more liberty, and its liberty is the name of arbitrary power.
Bertrand De Jouvenel (ON POWER: The Natural History of Its Growth)
It cannot be assumed that the analyst is a superman who is above such differences, just because he is a doctor who has acquired a psychological theory and a corresponding technique. He can only imagine himself to be superior in so far as he assumes that his theory and technique are absolute truths, capable of embracing the whole of the human psyche. Since such an assumption is more than doubtful, he cannot really be sure of it. Consequently, he will be assailed by secret doubts if he confronts the human wholeness of his patient with a theory or technique (which is merely a hypothesis or an attempt) instead of with his own living wholeness.
Carl G. Jung (Man and His Symbols: A Popular Presentation of the Essential Ideas of Jungian Psychology with Over 500 Illustrations)
The postmodern approach to social critique is to make their ideas intangible, because then they’re unfalsifiable—that is, they can’t be disproved. Due to postmodernism’s rejection of objective truth and reason, it can’t be argued with. The postmodern perception, Lyotard writes, makes no claim to be true: “Our hypothesis … should not be accorded predictive value in relation to reality, but strategic value in relation to the question raised.” Postmodern Theory seeks to be strategically useful in bringing about its own aims, not factually true about reality.
Helen Pluckrose (Social (In)justice: Why Many Popular Answers to Important Questions of Race, Gender, and Identity Are Wrong--and How to Know What's Right: A Reader-Friendly Remix of Cynical Theories)
Dart initially echoed Darwin’s theory that bipedalism freed the hands of early hominins to make and use hunting tools, which in turn selected for big brains, hence better hunting abilities. Then, in a famous 1953 paper, clearly influenced by his war experiences, Dart proposed that the first humans were not just hunters but also murderous predators.18 Dart’s words are so astonishing, you have to read them: The loathsome cruelty of mankind to man forms one of his inescapable characteristics and differentiative features; and it is explicable only in terms of his carnivorous, and cannibalistic origin. The blood-bespattered, slaughter-gutted archives of human history from the earliest Egyptian and Sumerian records to the most recent atrocities of the Second World War accord with early universal cannibalism, with animal and human sacrificial practices or their substitutes in formalized religions and with the world-wide scalping, head-hunting, body-mutilating and necrophilic practices of mankind in proclaiming this common bloodlust differentiator, this predaceous habit, this mark of Cain that separates man dietetically from his anthropoidal relatives and allies him rather with the deadliest of Carnivora. Dart’s killer-ape hypothesis, as it came to be known, was popularized by the journalist Robert Ardrey in a best-selling book, African Genesis, that found a ready audience in a generation disillusioned by two world wars, the Cold War, the Korean and Vietnam Wars, political assassinations, and widespread political unrest.19 The killer-ape hypothesis left an indelible stamp on popular culture including movies like Planet of the Apes, 2001: A Space Odyssey, and A Clockwork Orange. But the Rousseauians weren’t dead yet. Reanalyses of bones in the limestone pits from which fossils like the Taung Baby came showed they were killed by leopards, not early humans.20 Further studies revealed these early hominins were mostly vegetarians. 
And as a reaction to decades of bellicosity, many scientists in the 1970s embraced evidence for humans’ nicer side, especially gathering, food sharing, and women’s roles. The most widely discussed and audacious hypothesis, proposed by Owen Lovejoy, was that the first hominins were selected to become bipeds to be more cooperative and less aggressive.21 According to Lovejoy, early hominin females favored males who were better at walking upright and thus better able to carry food with which to provision them. To entice these tottering males to keep coming back with food, females encouraged exclusive long-term monogamous relationships by concealing their menstrual cycles and having permanently large breasts (female chimps advertise when they ovulate with eye-catching swellings, and their breasts shrink when they are not nursing). Put crudely, females selected for cooperative males by exchanging sex for food. If so, then selection against reactive aggression and frequent fighting is as old as the hominin lineage.22
Daniel E. Lieberman (Exercised: Why Something We Never Evolved to Do Is Healthy and Rewarding)
More radically, how can we be sure that the source of consciousness lies within our bodies at all? You might think that because a blow to the head renders one unconscious, the ‘seat of consciousness’ must lie within the skull. But there is no logical reason to conclude that. An enraged blow to my TV set during an unsettling news programme may render the screen blank, but that doesn’t mean the news reader is situated inside the television. A television is just a receiver: the real action is miles away in a studio. Could the brain be merely a receiver of ‘consciousness signals’ created somewhere else? In Antarctica, perhaps? (This isn’t a serious suggestion – I’m just trying to make a point.) In fact, the notion that somebody or something ‘out there’ may ‘put thoughts in our heads’ is a pervasive one; Descartes himself raised this possibility by envisaging a mischievous demon messing with our minds. Today, many people believe in telepathy. So the basic idea that minds are delocalized is actually not so far-fetched. In fact, some distinguished scientists have flirted with the idea that not all that pops up in our minds originates in our heads. A popular, if rather mystical, idea is that flashes of mathematical inspiration can occur by the mathematician’s mind somehow ‘breaking through’ into a Platonic realm of mathematical forms and relationships that not only lies beyond the brain but beyond space and time altogether. The cosmologist Fred Hoyle once entertained an even bolder hypothesis: that quantum effects in the brain leave open the possibility of external input into our thought processes and thus guide us towards useful scientific concepts. He proposed that this ‘external guide’ might be a superintelligence in the far cosmic future using a subtle but well-known backwards-in-time property of quantum mechanics in order to steer scientific progress.
Paul Davies (The Demon in the Machine: How Hidden Webs of Information Are Finally Solving the Mystery of Life)
Forging Mettle In popular depictions of Musashi’s life, he is portrayed as having played a part in the decisive Battle of Sekigahara on October 21, 1600, which preceded the establishment of the Tokugawa shogunate. A more likely hypothesis is that he was in Kyushu fighting as an ally of Tokugawa Ieyasu under Kuroda Yoshitaka Jōsui at the Battle of Ishigakibaru on September 13, 1600. Musashi was linked to the Kuroda clan through his biological birth family who were formerly in the service of the Kodera clan before Harima fell to Hideyoshi.27 In the aftermath of Sekigahara, Japan was teeming with unemployed warriors (rōnin). There are estimates that up to 500,000 masterless samurai roamed the countryside. Peace was tenuous and warlords sought out skilled instructors in the arts of war. The fifteen years between Sekigahara and the first siege of Osaka Castle in 1615 was a golden age for musha-shugyō, the samurai warrior’s ascetic walkabout, but was also a perilous time to trek the country roads. Some rōnin found employment as retainers under new masters, some hung up their swords altogether to become farmers, but many continued roving the provinces looking for opportunities to make a name for themselves, which often meant trouble. It was at this point that Musashi embarked on his “warrior pilgrimage” and made his way to Kyoto. Two years after arriving in Kyoto, Musashi challenged the very same Yoshioka family that Munisai had bettered years before. In 1604, he defeated the head of the family, Yoshioka Seijūrō. In a second encounter, he successfully overpowered Seijūrō’s younger brother, Denshichirō. His third and last duel was against Seijūrō’s son, Matashichirō, who was accompanied by followers of the Yoshioka-ryū school. Again, Musashi was victorious, and this is where his legend really starts to escalate. Such exploits against a celebrated house of martial artists did not go unnoticed. 
Allies of the Yoshioka clan wrote unflattering accounts of how Musashi used guile and deceit to win with dishonorable ploys. Meanwhile, Musashi declared himself Tenka Ichi (“Champion of the Realm”) and must have felt he no longer needed to dwell in the shadow of his father. On the Kokura Monument, Iori wrote that the Yoshioka disciples conspired to ambush Musashi with “several hundred men.” When confronted, Musashi dealt with them with ruthless resolve, one man against many. Although this representation is thought to be relatively accurate, the idea of hundreds of men lying in wait was obviously an exaggeration. Several men, however, would not be hard to believe. Tested and triumphant, Musashi was now confident enough to start his own school. He called it Enmei-ryū. He also wrote, as confirmed by Uozumi, his first treatise, Heidōkyō (1605), to record the techniques and rationale behind them. He included a section in Heidōkyō on fighting single-handedly against “multiple enemies,” so presumably the third duel was a multi-foe affair.
Alexander Bennett (Complete Musashi: The Book of Five Rings and Other Works: The Definitive Translations of the Complete Writings of Miyamoto Musashi--Japan's Greatest Samurai)
Page 25: …Maimonides was also an anti-Black racist. Towards the end of the [Guide to the Perplexed], in a crucial chapter (book III, chapter 51) he discusses how various sections of humanity can attain the supreme religious value, the true worship of God. Among those who are incapable of even approaching this are: Some of the Turks [i.e., the Mongol race] and the nomads in the North, and the Blacks and the nomads in the South, and those who resemble them in our climates. And their nature is like the nature of mute animals, and according to my opinion they are not on the level of human beings, and their level among existing things is below that of a man and above that of a monkey, because they have the image and the resemblance of a man more than a monkey does. Now, what does one do with such a passage in a most important and necessary work of Judaism? Face the truth and its consequences? God forbid! Admit (as so many Christian scholars, for example, have done in similar circumstances) that a very important Jewish authority held also rabid anti-Black views, and by this admission make an attempt at self-education in real humanity? Perish the thought. I can almost imagine Jewish scholars in the USA consulting among themselves, ‘What is to be done?’ – for the book had to be translated, due to the decline in the knowledge of Hebrew among American Jews. Whether by consultation or by individual inspiration, a happy ‘solution’ was found: in the popular American translation of the Guide by one Friedlander, first published as far back as 1925 and since then reprinted in many editions, including several in paperback, the Hebrew word Kushim, which means Blacks, was simply transliterated and appears as ‘Kushites’, a word which means nothing to those who have no knowledge of Hebrew, or to whom an obliging rabbi will not give an oral explanation. 
During all these years, not a word has been said to point out the initial deception or the social facts underlying its continuation – and this throughout the excitement of Martin Luther King’s campaigns, which were supported by so many rabbis, not to mention other Jewish figures, some of whom must have been aware of the anti-Black racist attitude which forms part of their Jewish heritage. Surely one is driven to the hypothesis that quite a few of Martin Luther King’s rabbinical supporters were either anti-Black racists who supported him for tactical reasons of ‘Jewish interest’ (wishing to win Black support for American Jewry and for Israel’s policies) or were accomplished hypocrites, to the point of schizophrenia, capable of passing very rapidly from a hidden enjoyment of rabid racism to a proclaimed attachment to an anti-racist struggle – and back – and back again.
Israel Shahak (Jewish History, Jewish Religion: The Weight of Three Thousand Years)
For a long period of human history, most of the world thought swans were white and black swans didn’t exist inside the confines of mother nature. The null hypothesis that swans are white was later dispelled when Dutch explorers discovered black swans in Western Australia in 1697. Prior to this discovery, “black swan” was a euphemism for “impossible” or “non-existent,” but after this finding, it morphed into a term to express a perceived impossibility that might become an eventuality and therefore be disproven. In recent times, the term “black swan” has been popularized by the literary work of Nassim Taleb to explain unforeseen events such as the invention of the Internet, World War I, and the breakup of the Soviet Union.
Oliver Theobald (Statistics for Absolute Beginners: A Plain English Introduction)
In order to work, the Jesus tomb hypothesis has to claim that the disciples died for something they knew was a lie—in fact, something they themselves had fabricated. Further, it has to acknowledge that none of the disciples defected, even when faced with suffering and horrible deaths, including stoning and crucifixion. Is that likely?
Darrell L. Bock (Dethroning Jesus: Exposing Popular Culture's Quest to Unseat the Biblical Christ)
What our assessment of the Jesus tomb hypothesis has shown is the danger of publicity-driven efforts that aren’t carefully checked.
Darrell L. Bock (Dethroning Jesus: Exposing Popular Culture's Quest to Unseat the Biblical Christ)
categorical and the dependent variable is continuous. The logic of this approach is shown graphically in Figure 13.1. The overall group mean is the mean of means. The boxplots represent the scores of observations within each group. (As before, the horizontal lines indicate means, rather than medians.) Recall that variance is a measure of dispersion. In both parts of the figure, w is the within-group variance, and b is the between-group variance. Each graph has three within-group variances and three between-group variances, although only one of each is shown. Note in part A that the between-group variances are larger than the within-group variances, which results in a large F-test statistic using the above formula, making it easier to reject the null hypothesis. Conversely, in part B the within-group variances are larger than the between-group variances, causing a smaller F-test statistic and making it more difficult to reject the null hypothesis. The hypotheses are written as follows: H0: No differences between any of the group means exist in the population. HA: At least one difference between group means exists in the population. Note how the alternate hypothesis is phrased, because the logical opposite of “no differences between any of the group means” is that at least one pair of means differs. H0 is also called the global F-test because it tests for differences among any means. The formulas for calculating the between-group variances and within-group variances are quite cumbersome for all but the simplest of designs.1 In any event, statistical software calculates the F-test statistic and reports the level at which it is significant.2 When the preceding null hypothesis is rejected, analysts will also want to know which differences are significant. For example, analysts will want to know which pairs of differences in watershed pollution are significant across regions. 
Although one approach might be to use the t-test to sequentially test each pair of differences, this should not be done. It would not only be a most tedious undertaking but would also inadvertently and adversely affect the level of significance: the chance of finding a significant pair by chance alone increases as more pairs are examined. Specifically, the probability of rejecting the null hypothesis in one of two tests is [1 – 0.95² =] .098, the probability of rejecting it in one of three tests is [1 – 0.95³ =] .143, and so forth. Thus, sequential testing of differences does not reflect the true level of significance for such tests and should not be used. Post-hoc tests test all possible group differences and yet maintain the true level of significance. Post-hoc tests vary in their methods of calculating test statistics and holding experiment-wide error rates constant. Three popular post-hoc tests are the Tukey, Bonferroni, and Scheffe tests.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
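The familywise error arithmetic in the Berman excerpt above (the .098 and .143 figures) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book; the function name is hypothetical.

```python
# Familywise error rate: the probability of at least one false rejection
# across k independent tests, each run at significance level alpha.
# This is the inflation the excerpt warns about when t-tests are applied
# sequentially instead of using a post-hoc test (Tukey, Bonferroni, Scheffe).
def familywise_error(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

if __name__ == "__main__":
    for k in (1, 2, 3, 6):
        print(f"{k} tests: P(at least one false rejection) = {familywise_error(k):.3f}")
```

With three groups there are already three pairwise comparisons, so the chance of one spurious "significant" pair is roughly 14 percent (1 − 0.95³ = 0.142625), which is why the excerpt insists on post-hoc tests that hold the experiment-wide error rate constant.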
As an ideal of intellectual inquiry and a strategy for the advancement of knowledge, the scientific method is essentially a monument to the utility of error. Most of us gravitate toward trying to verify our beliefs, to the extent that we bother investigating their validity at all. But scientists gravitate toward falsification; as a community if not as individuals, they seek to disprove their beliefs. Thus, the defining feature of a hypothesis is that it has the potential to be proven wrong (which is why it must be both testable and tested), and the defining feature of a theory is that it hasn’t been proven wrong yet. But the important part is that it can be—no matter how much evidence appears to confirm it, no matter how many experts endorse it, no matter how much popular support it enjoys. In fact, not only can any given theory be proven wrong; as we saw in the last chapter, sooner or later, it probably will be. And when it is, the occasion will mark the success of science, not its failure. This was the pivotal insight of the Scientific Revolution: that the advancement of knowledge depends on current theories collapsing in the face of new insights and discoveries.
Kathryn Schulz (Being Wrong: Adventures in the Margin of Error)
Different cultures have different systems for learning in part because of the philosophers who influenced the approach to intellectual life in general and science in particular. Although Aristotle, a Greek, is credited with articulating applications-first thinking (induction), it was British thinkers, including Roger Bacon in the thirteenth century and Francis Bacon in the sixteenth century, who popularized these methodologies among modern scholars and scientists. Later, Americans, with their pioneer mentality and disinclination toward theoretical learning, came to be even more applications-first than the British. By contrast, philosophy on the European continent has been largely driven by principles-first approaches. In the seventeenth century, Frenchman René Descartes spelled out a method of principles-first reasoning in which the scientist first formulates a hypothesis, then seeks evidence to prove or disprove it. Descartes was deeply skeptical of data based on mere observation and sought a deeper understanding of underlying principles. In the nineteenth century, the German Friedrich Hegel introduced the dialectic model of deduction, which reigns supreme in schools in Latin and Germanic countries. The Hegelian dialectic begins with a thesis, or foundational argument; this is opposed by an antithesis, or conflicting argument; and the two are then reconciled in a synthesis.
Erin Meyer (The Culture Map: Breaking Through the Invisible Boundaries of Global Business)