Data Interpretation Quotes

We've searched our database for all the quotes and captions related to Data Interpretation. Here they are! All 100 of them:

All knowledge that is about human society, and not about the natural world, is historical knowledge, and therefore rests upon judgment and interpretation. This is not to say that facts or data are nonexistent, but that facts get their importance from what is made of them in interpretation… for interpretations depend very much on who the interpreter is, who he or she is addressing, what his or her purpose is, at what historical moment the interpretation takes place.
Edward W. Said
Properly speaking, the unconscious is the real psychic; its inner nature is just as unknown to us as the reality of the external world, and it is just as imperfectly reported to us through the data of consciousness as is the external world through the indications of our sensory organs.
Sigmund Freud (The Interpretation of Dreams)
In short, physicians are getting more and more data, which requires more sophisticated interpretation and which takes more time. AI is the solution, enhancing every stage of patient care from research and discovery to diagnosis and therapy selection. As a result, clinical practice will become more efficient, convenient, personalized, and effective.
Ronald M. Razmi (AI Doctor: The Rise of Artificial Intelligence in Healthcare - A Guide for Users, Buyers, Builders, and Investors)
There are periods of history when the visions of madmen and dope fiends are a better guide to reality than the common-sense interpretation of data available to the so-called normal mind. This is one such period, if you haven't noticed already.
Robert Anton Wilson
What we call our data are really our own constructions of other people’s constructions of what they and their compatriots are up to.
Clifford Geertz (The Interpretation of Cultures)
The point is, the brain talks to itself, and by talking to itself changes its perceptions. To make a new version of the not-entirely-false model, imagine the first interpreter as a foreign correspondent, reporting from the world. The world in this case means everything out- or inside our bodies, including serotonin levels in the brain. The second interpreter is a news analyst, who writes op-ed pieces. They read each other's work. One needs data, the other needs an overview; they influence each other. They get dialogues going.
INTERPRETER ONE: Pain in the left foot, back of heel.
INTERPRETER TWO: I believe that's because the shoe is too tight.
INTERPRETER ONE: Checked that. Took off the shoe. Foot still hurts.
INTERPRETER TWO: Did you look at it?
INTERPRETER ONE: Looking. It's red.
INTERPRETER TWO: No blood?
INTERPRETER ONE: Nope.
INTERPRETER TWO: Forget about it.
INTERPRETER ONE: Okay.
Mental illness seems to be a communication problem between interpreters one and two. An exemplary piece of confusion.
INTERPRETER ONE: There's a tiger in the corner.
INTERPRETER TWO: No, that's not a tiger- that's a bureau.
INTERPRETER ONE: It's a tiger, it's a tiger!
INTERPRETER TWO: Don't be ridiculous. Let's go look at it.
Then all the dendrites and neurons and serotonin levels and interpreters collect themselves and trot over to the corner. If you are not crazy, the second interpreter's assertion, that this is a bureau, will be acceptable to the first interpreter. If you are crazy, the first interpreter's viewpoint, the tiger theory, will prevail. The trouble here is that the first interpreter actually sees a tiger. The messages sent between neurons are incorrect somehow. The chemicals triggered are the wrong chemicals, or the impulses are going to the wrong connections. Apparently, this happens often, but the second interpreter jumps in to straighten things out.
Susanna Kaysen (Girl, Interrupted)
We do not realize how deeply our starting assumptions affect the way we go about looking for and interpreting the data we collect. We should recognize that nonhuman organisms need not meet every new definition of human language, tool use, mind, or consciousness in order to have versions of their own that are worthy of serious study. We have set ourselves too much apart, grasping for definitions that will distinguish man from all other life on the planet. We must rejoin the great stream of life from whence we arose and strive to see within it the seeds of all we are and all we may become.
Sue Savage-Rumbaugh (Kanzi: The Ape at the Brink of the Human Mind)
materialism is a fantasy. It’s based on unnecessary postulates, circular reasoning and selective consideration of evidence and data. Materialism is by no stretch of the imagination a scientific conclusion, but merely a metaphysical opinion that helps some people interpret scientific conclusions.
Bernardo Kastrup (Brief Peeks Beyond: Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture)
Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.
Harold Abelson (Structure and Interpretation of Computer Programs)
To tell an honest story, it is not enough for numbers to be correct. They need to be placed in an appropriate context so that a reader or listener can properly interpret them.
Carl T. Bergstrom (Calling Bullshit: The Art of Skepticism in a Data-Driven World)
ideologues of every stripe, as well as folks with interests economic, political, or personal, can interpret data and statistics to suit their own purposes...
Peter Benchley (Shark Trouble)
It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
Alan J. Perlis (Structure and Interpretation of Computer Programs)
The worst thing that contemporary qualitative research can imply is that, in this post-modern age, anything goes. The trick is to produce intelligent, disciplined work on the very edge of the abyss.
David Silverman (Interpreting Qualitative Data)
The Scientific Method is a wonderful tool as long as you don't care which way the outcome turns; however, this process fails the second one's perception interferes with the interpretation of data. This is why I don’t take anything in life as an absolute…even if someone can “prove” it “scientifically.”
Kent Marrero
The ego is definitely an advancement, but it can be compared to the bark of the tree in many ways. The bark of the tree is flexible, extremely vibrant, and grows with the growth beneath. It is a tree’s contact with the outer world, the tree’s interpreter, and to some degree the tree’s companion. So should man’s ego be. When man’s ego turns instead into a shell, when instead of interpreting outside conditions it reacts too violently against them, then it hardens, becomes an imprisoning form that begins to snuff out important data, and to keep enlarging information from the inner self. The purpose of the ego is protective. It is also a device to enable the inner self to inhabit the physical plane. It is in other words a camouflage.
Jane Roberts (The Early Sessions: Book 1 of The Seth Material)
All so-called ‘quantitative’ data, when scrutinized, turn out to be composites of ‘qualitative’ – i.e., contextually located and indexical – interpretations produced by situated researchers, coders, government officials and others.
Anthony Giddens (The Constitution of Society: Outline of the Theory of Structuration)
We never know any data before interpreting it through theories. All observations are, as Popper put it, theory-laden,* and hence fallible, as all our theories are.
David Deutsch (The Beginning of Infinity: Explanations That Transform the World)
For Adamatzky, the point of fungal computers is not to replace silicon chips. Fungal reactions are too slow for that. Rather, he thinks humans could use mycelium growing in an ecosystem as a “large-scale environmental sensor.” Fungal networks, he reasons, are monitoring a large number of data streams as part of their everyday existence. If we could plug into mycelial networks and interpret the signals they use to process information, we could learn more about what was happening in an ecosystem.
Merlin Sheldrake (Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures)
The other buzzword that epitomizes a bias toward substitution is “big data.” Today’s companies have an insatiable appetite for data, mistakenly believing that more data always creates more value. But big data is usually dumb data. Computers can find patterns that elude humans, but they don’t know how to compare patterns from different sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst (or the kind of generalized artificial intelligence that exists only in science fiction).
Peter Thiel (Zero to One: Notes on Startups, or How to Build the Future)
Our emotional reactions depend on the story we tell ourselves, the running commentary in the mind that interprets the data we receive through our senses.
J. Mark G. Williams (The Mindful Way through Depression: Freeing Yourself from Chronic Unhappiness)
Your relevance as a data custodian is your ability to analyse and interpret it. If you can’t, your replacement is due.
Wisdom Kwashie Mensah
The phrase 'the fossil record' sounds impressive and authoritative. As used by some persons it becomes, as intended, intimidating, taking on the aura of esoteric truth as expounded by an elite class of specialists. But what is it, really, this fossil record? Only data in search of interpretation. All claims to the contrary that I know, and I know of several, are so much superstition.
Gareth J. Nelson
If you summarily rule out any single sensation and do not make a distinction between the element of belief that is superimposed on a percept that awaits verification and what is actually present in sensation or in the feelings or some percept of the mind itself, you will cast doubt on all other sensations by your unfounded interpretation and consequently abandon all the criteria of truth. On the other hand, in cases of interpreted data, if you accept as true those that need verification as well as those that do not, you will still be in error, since the whole question at issue in every judgment of what is true or not true will be left intact.
Epicurus (Lettera sulla felicità)
Clark also maintained that someone with no excavation experience was not equipped to interpret archaeological data, thereby implicitly denying the distinction that some British culture-historical archaeologists were drawing between archaeologists and prehistorians.
Bruce G. Trigger (A History of Archaeological Thought)
But all this doesn't happen effortlessly, as demonstrated by patients who surgically recover their eyesight after decades of blindness: they do not suddenly see the world, but instead must learn to see again. At first the world is a buzzing, jangling barrage of shapes and colors, and even when the optics of their eyes are perfectly functional, their brain must learn how to interpret the data coming in.
David Eagleman (Incognito: The Secret Lives of the Brain)
it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times,34 and it’s part of the accountability system of science—you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.
Jonathan Haidt (The Righteous Mind: Why Good People are Divided by Politics and Religion)
The forlorn state of consciousness in our world is due primarily to loss of instinct, and the reason for this lies in the development of the human mind over the past aeon. The more power man had over nature, the more his knowledge and skill went to his head, and the deeper became his contempt for the merely natural and accidental, for all irrational data—including the objective psyche, which is everything that consciousness is not.
C.G. Jung (The Undiscovered Self/Symbols and the Interpretation of Dreams)
NOBODY knows better than you what's right for you. NOBODY. Let me say what I really mean: NOBODY. Advice? Get some. Oracles? Consult them. Friends? Worship them. Actual gurus? Honour them. Final say? YOU. All you. No matter what. No matter how psychic that psychic is, or how rich the business consultant is, or how magical the healer, or bendy the yoga instructor. All that experts offer you is data for you to take into consideration. YOU are the centrifugal force that must filter, interpret, and give meaning to that data.
Danielle LaPorte (White Hot Truth: Clarity for Keeping It Real on Your Spiritual Path from One Seeker to Another)
What the ethnographer is in fact faced with—except when (as, of course, he must do) he is pursuing the more automatized routines of data collection—is a multiplicity of complex conceptual structures, many of them superimposed upon or knotted into one another, which are at once strange, irregular, and inexplicit, and which he must contrive somehow first to grasp and then to render. And this is true at the most down-to-earth, jungle field work levels of his activity; interviewing informants, observing rituals, eliciting kin terms, tracing property lines, censusing households … writing his journal. Doing ethnography is like trying to read (in the sense of “construct a reading of”) a manuscript—foreign, faded, full of ellipses, incoherencies, suspicious emendations, and tendentious commentaries, but written not in conventionalized graphs of sound but in transient examples of shaped behavior.
Clifford Geertz (The Interpretation of Cultures)
Huge volumes of data may be compelling at first glance, but without an interpretive structure they are meaningless.
Tom Boellstorff (Ethnography and Virtual Worlds: A Handbook of Method)
[...] a good interpretation makes sense of such an impressive amount of mutually corroborating data that coincidence must be excluded. Such a result cannot be reached without sound practice and good method.
Y. Duhoux (A Companion to Linear B: Mycenean Greek Texts and Their World (A Companion to Linear B, #2))
As with good history, good psychoanalytic interpretations must also make sense, pull together as much of the known data as possible, provide a coherent and persuasive account, and also facilitate personal growth.
Stephen A. Mitchell (Freud and Beyond: A History of Modern Psychoanalytic Thought)
In so far as your metaphysical beliefs are implicit, you vaguely interpret the past on the lines of the present. But when it comes to the primary metaphysical data, the world of which you are immediately conscious is the whole datum.
Alfred North Whitehead (Religion in the Making: Lowell Lectures, 1926)
Stay inquisitive. Question the potential interpretation of every collected data point. Remember that every successful idea has a life cycle, and a bad idea yesterday might be reformed under changing market forces as a good idea tomorrow.
Ken Goldstein
David Buss has amassed a lot of evidence that human females across many cultures tend to prefer males who have high social status, good income, ambition, intelligence, and energy--contrary to the views of some cultural anthropologists, who assume that people vary capriciously in their sexual preferences across different cultures. He interpreted this as evidence that women evolved to prefer good providers who could support their families by acquiring and defending resources. I respect his data enormously, but disagree with his interpretation. The traits women prefer are certainly correlated with male abilities to provide material benefits, but they are also correlated with heritable fitness. If the same traits can work both as fitness indicators and as wealth indicators, so much the better. The problem comes when we try to project wealth indicators back into a Pleistocene past when money did not exist, when status did not imply wealth, and when bands did not stay in one place long enough to defend piles of resources. Ancestral women may have preferred intelligent, energetic men for their ability to hunt more effectively and provide their children with more meat. But I would suggest it was much more important that intelligent men tended to produce intelligent, energetic children more likely to survive and reproduce, whether or not their father stayed around. In other words, I think evolutionary psychology has put too much emphasis on male resources instead of male fitness in explaining women's sexual preferences.
Geoffrey Miller (The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature)
From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system...
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
One of the first shrinks I went to after Cass died told me that the brain has a hardwired need to find correlations, to make sense of nonsensical data by making connections between unrelated things. Humans have evolved a universal tendency to seek patterns in random information, hence the existence of fortune-tellers and dream interpreters and people who see the face of Jesus in a piece of toast. But the cold, hard truth is that there are no connections between anything. Life—all of existence—is totally random. Your lucky lottery numbers aren’t really lucky, because there’s no such thing as luck. The black cat that crosses your path isn’t a bad omen, it’s just a cat out for a walk. An eclipse doesn’t mean that the gods are angry, just as a bus narrowly missing you as you cross the street doesn’t mean there’s a guardian angel looking out for you. There are no gods. There are no angels. Superstitions aren’t real, and no amount of wishing, praying, or rationalizing can change the fact that life is just one long sequence of random events that ultimately have no meaning. I really hated that shrink.
J.T. Geissinger (Midnight Valentine)
Despite all their surface diversity, most jokes and funny incidents have the following logical structure: Typically you lead the listener along a garden path of expectation, slowly building up tension. At the very end, you introduce an unexpected twist that entails a complete reinterpretation of all the preceding data, and moreover, it's critical that the new interpretation, though wholly unexpected, makes as much "sense" of the entire set of facts as did the originally "expected" interpretation. In this regard, jokes have much in common with scientific creativity, with what Thomas Kuhn calls a "paradigm shift" in response to a single "anomaly." (It's probably not coincidence that many of the most creative scientists have a great sense of humor.) Of course, the anomaly in the joke is the traditional punch line and the joke is "funny" only if the listener gets the punch line by seeing in a flash of insight how a completely new interpretation of the same set of facts can incorporate the anomalous ending. The longer and more tortuous the garden path of expectation, the "funnier" the punch line when finally delivered.
V.S. Ramachandran
There are people who learn political information for reasons other than becoming better voters. Just as sports fans love to follow their favorite teams even if they cannot influence the outcomes of games, so there are also “political fans” who enjoy following political issues and cheering for their favorite candidates, parties, or ideologies. Unfortunately, much like sports fans, political fans tend to evaluate new information in a highly biased way. They overvalue anything that supports their preexisting views, and to undervalue or ignore new data that cuts against them, even to the extent of misinterpreting simple data that they could easily interpret correctly in other contexts. Moreover, those most interested in politics are also particularly prone to discuss it only with others who agree with their views, and to follow politics only through like-minded media.
Ilya Somin
sense data are what they are, and they are infallible, being mechanically transmitted to us by atomic images from the outer world. They may be overlaid with misleading interpretations and lead to “false opinions,” but they are true if confirmed by close inspection or if they are not contradicted.
Epicurus (The Philosophy of Epicurus)
This is our recurring temptation—to live within our camp’s caves, taking turns both as the shadow-puppeteers and the audience. We chant our camp’s mantras repeatedly so they continue reverberating in our skulls. When we stay entrenched within our belief-camps, we create the illusion of secure reality by reinforcing each other’s presuppositions and paradigms. We choose specific watering holes of information and evidence, and we influence each other in interpreting that data in accordance with the conclusions we desire. Our camps reinforce our existing cognitive biases, making cheating all the more common and easy.
Daniel Jones (Shadow Gods)
Vision isn’t about photons that can be readily interpreted by the visual cortex. Instead it’s a whole body experience. The signals coming into the brain can only be made sense of by training, which requires cross-referencing the signals with information from our actions and sensory consequences. It’s the only way our brains can come to interpret what the visual data actually means.
David Eagleman (The Brain: The Story of You)
One of the most significant aspects of our current situation, it should be noted, is the "crisis of meaning." Perspectives on life and the world, often of a scientific temper, have so proliferated that we face an increasing fragmentation of knowledge. This makes the search for meaning difficult and often fruitless. Indeed, still more dramatically, in this maelstrom of data and facts in which we live and which seem to comprise the very fabric of life, many people wonder whether it still makes sense to ask about meaning. The array of theories which vie to give an answer, and the different ways of viewing and of interpreting the world of human life, serve only to aggravate this radical doubt, which can easily lead to skepticism, indifference or to various forms of nihilism.
Pope John Paul II (Fides et Ratio: On the Relationship Between Faith and Reason)
Information freedom is the freedom to access, participate in, understand and benefit from knowledge creation. This includes access to raw data, transparent auditing processes which include both elite knowledge and complete and immediate feedback from user groups and anyone else impacted, and interpretation which flows directly from the audit without interference from coercive manipulation.
Heather Marsh (The Creation of Me, Them and Us)
Science begins with the world we have to live in, accepting its data and trying to explain its laws. From there, it moves toward the imagination: it becomes a mental construct, a model of a possible way of interpreting experience. The further it goes in this direction, the more it tends to speak the languages of mathematics, which is really one of the languages of the imagination, along with literature and music.
Northrop Frye (The Educated Imagination (Midland Book))
The problem is that many authors of papers in the medical literature allow statistics to become their master rather than their servant: numbers are plugged into a statistical program and the results are interpreted in a cut-and-dried fashion. Statistical significance (that two sets of data are not from the same population) is confused with clinical significance (that differences are sufficiently large to have a biological effect).
Richard David Feinman (The World Turned Upside Down: The Second Low-Carbohydrate Revolution)
The materialist interpretation of the world and of science itself is protected not by the facts or by the data of our honest experiences, but by what is essentially social and professional peer pressure, something more akin to the grade-school playground or high school prom. The world is preserved through eyes rolling back, snide remarks, arrogant smirks and subtle, or not so subtle, social cues, and a kind of professional (or conjugal) shaming.
Jeffrey J. Kripal (The Flip: Epiphanies of Mind and the Future of Knowledge)
If scientists can be fooled on the question of the simple interpretation of straightforward data of the sort that they are routinely obtaining from other kinds of astronomical objects, when the stakes are high, when the emotional predispositions are working, what must be the situation where the evidence is much weaker, where the will to believe is much greater, where the skeptical scientific tradition has hardly made a toehold - namely, in the area of religion?
Carl Sagan (The Varieties of Scientific Experience: A Personal View of the Search for God)
Reality is far too diverse, broad, elusive, ambiguous and complex for us to pin down. Even the limited empirical data we do manage to collect can only be interpreted within the framework of a subjective paradigm. It is, therefore, not really neutral. But in our desperate search for closure and reassurance we confabulate entities and explanations to construct huge edifices of assumed truths. They make up the world we actually experience; a self-woven cocoon of stories, not facts.
Bernardo Kastrup (Brief Peeks Beyond: Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture)
Tuesday, when you asked me if I would rather be smart or happy. I would rather be smart.” “Why?” “Because intellect can be proven scientifically with machines and litmus tests and IQ evaluations, but happiness is only based on a loose pool of interpretive data drawn from perception and emotion. It’s a theory, see? And I’d rather put my faith in something real than something that’s inconclusive.” “So, you don’t think happiness is real?” “I think it’s tolerable pain. Happy people have a really high tolerance, that’s all.
Whitney Taylor
Schemas, whether you’re aware of yours or not, powerfully influence your thoughts, actions and behaviour. They’re the filter through which you interpret other people’s behaviour and they help you decide how to act with friends and strangers. They also help you anticipate and plan for the future, and they even govern what you notice and what you remember when new information comes along. We all pay more attention to things that reinforce our schemas and downplay or negate incoming data that conflicts with them – something known as confirmation bias.
Leigh Sales (Any Ordinary Day)
I know very little with anything approaching certainty. I know that I was born, that I exist, and that I will die. For the most part, I can trust my brain's interpretation of the data presented to my senses: this is a rose, that is a car, she is my wife. I do not doubt the reality of the thoughts and emotions and impulses I experience in response to these things. . . . Yet apart from these primary perceptions, intuitions, inferences, and bits of information, the views that I hold about the things that really matter to me--meaning, truth, happiness, goodness, beauty--are finely woven tissues of belief and opinion.
Stephen Batchelor (Confession of a Buddhist Atheist)
Note that there’s no option to answer “all of the above.” Prospective workers must pick one option, without a clue as to how the program will interpret it. And some of the analysis will draw unflattering conclusions. If you go to a kindergarten class in much of the country, for example, you’ll often hear teachers emphasize to the children that they’re unique. It’s an attempt to boost their self-esteem and, of course, it’s true. Yet twelve years later, when that student chooses “unique” on a personality test while applying for a minimum-wage job, the program might read the answer as a red flag: Who wants a workforce peopled with narcissists?
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
Research has established that, oftentimes, when kids are struggling, it is not therapy for the child himself but coaching or therapy for the parent that leads to the most significant changes in the child. This is powerful research, because it suggests that a child’s behavior—which is an expression of a child’s emotion regulation patterns—develops in relation to a parent’s emotional maturity. There are two ways to interpret this data. The first is, “Oh no, I’m messing up my kid because I’m messed up. I’m the worst!” But there’s another, more optimistic and encouraging interpretation: “Wow, this is amazing. If I can work on some of my own emotion regulation abilities—which will feel good for me anyway!”
Becky Kennedy (Good Inside: A Guide to Becoming the Parent You Want to Be)
The Scientific Revolution proposed a very different formula for knowledge: Knowledge = Empirical Data × Mathematics. If we want to know the answer to some question, we need to gather relevant empirical data, and then use mathematical tools to analyse the data. For example, in order to gauge the true shape of the earth, we can observe the sun, the moon and the planets from various locations across the world. Once we have amassed enough observations, we can use trigonometry to deduce not only the shape of the earth, but also the structure of the entire solar system. In practice, that means that scientists seek knowledge by spending years in observatories, laboratories and research expeditions, gathering more and more empirical data, and sharpening their mathematical tools so they could interpret the data correctly. The scientific formula for knowledge led to astounding breakthroughs in astronomy, physics, medicine and countless other disciplines. But it had one huge drawback: it could not deal with questions of value and meaning. Medieval pundits could determine with absolute certainty that it is wrong to murder and steal, and that the purpose of human life is to do God’s bidding, because scriptures said so. Scientists could not come up with such ethical judgements. No amount of data and no mathematical wizardry can prove that it is wrong to murder. Yet human societies cannot survive without such value judgements.
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
Probability theory naturally comes into play in what we shall call situation 1: When the data-point can be considered to be generated by some randomizing device, for example when throwing dice, flipping coins, or randomly allocating an individual to a medical treatment using a pseudo-random-number generator, and then recording the outcomes of their treatment. But in practice we may be faced with situation 2: When a pre-existing data-point is chosen by a randomizing device, say when selecting people to take part in a survey. And much of the time our data arises from situation 3: When there is no randomness at all, but we act as if the data-point were in fact generated by some random process, for example in interpreting the birth weight of our friend’s baby.
David Spiegelhalter (The Art of Statistics: Learning from Data)
G. Stanley Hall, a creature of his times, believed strongly that adolescence was determined – a fixed feature of human development that could be explained and accounted for in scientific fashion. To make his case, he relied on Haeckel's faulty recapitulation idea, Lombroso's faulty phrenology-inspired theories of crime, a plethora of anecdotes and one-sided interpretations of data. Given the issues, theories, standards and data-handling methods of his day, he did a superb job. But when you take away the shoddy theories, put the anecdotes in their place, and look for alternate explanations of the data, the bronze statue tumbles hard. I have no doubt that many of the street teens of Hall's time were suffering or insufferable, but it's a serious mistake to develop a timeless, universal theory of human nature around the peculiarities of the people of one's own time and place.
Robert Epstein (Teen 2.0: Saving Our Children and Families from the Torment of Adolescence)
The data that drives algorithms isn't just a few numbers now. It's monstrous tables of millions of numbers, thousands upon thousands of rows and columns of numbers....Matrices are created and refined by computers endlessly churning through Big Data's records on everyone, and everything they've done. No human can read those matrices; even with computers helping you interpret them, they are simply too large and complex to fully comprehend. But the computers can use them, applying the appropriate matrix to show us the appropriate video that will eventually lead us to make an appropriate purchase. We are not living in "The Matrix," but there's still a matrix controlling us. What does this have to do with the rabbit hole of conspiracy theories? It has everything to do with it. These algorithms are quickly becoming the primary route down the rabbit hole. To a large extent this has already happened, but it's going to get far, far worse. Tufekci described what happened when she tried watching different types of content on YouTube. She started watching videos of Donald Trump rallies. 'I wanted to write about one of [Donald Trump]'s rallies, so I watched it a few times on YouTube. YouTube started recommending to me, and autoplaying to me, white supremacist videos, in increasing order of extremism. If I watched one, it served up one even more extreme. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays [left-wing] conspiracy videos, and it goes downhill from there." Downhill, into the rabbit hole....Without human intervention the algorithm has evolved to perfect a method of gently stepping up the intensity of the conspiracy videos that it shows you so that you don't get turned off, and so you continue to watch. They get more intense because the algorithm has found (not in any human sense, but found nonetheless) that the deeper it can guide people down the rabbit hole, the more revenue it can make.
Mick West (Escaping the Rabbit Hole: How to Debunk Conspiracy Theories Using Facts, Logic, and Respect)
We are about to study the idea of a computational process. Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells. A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform. A computational process, in a correctly working computer, executes programs precisely and accurately. Thus, like the sorcerer's apprentice, novice programmers must learn to understand and to anticipate the consequences of their conjuring. Even small errors (usually called bugs or glitches) in programs can have complex and unanticipated consequences.
Harold Abelson (Structure and Interpretation of Computer Programs)
A few years ago my friend Jon Brooks supplied this great illustration of skewed interpretation at work. Here’s how investors react to events when they’re feeling good about life (which usually means the market has been rising):
Strong data: economy strengthening—stocks rally
Weak data: Fed likely to ease—stocks rally
Data as expected: low volatility—stocks rally
Banks make $4 billion: business conditions favorable—stocks rally
Banks lose $4 billion: bad news out of the way—stocks rally
Oil spikes: growing global economy contributing to demand—stocks rally
Oil drops: more purchasing power for the consumer—stocks rally
Dollar plunges: great for exporters—stocks rally
Dollar strengthens: great for companies that buy from abroad—stocks rally
Inflation spikes: will cause assets to appreciate—stocks rally
Inflation drops: improves quality of earnings—stocks rally
Of course, the same behavior also applies in the opposite direction. When psychology is negative and markets have been falling for a while, everything is capable of being interpreted negatively. Strong economic data is seen as likely to make the Fed withdraw stimulus by raising interest rates, and weak data is taken to mean companies will have trouble meeting earnings forecasts. In other words, it’s not the data or events; it’s the interpretation. And that fluctuates with swings in psychology.
Howard Marks (Mastering The Market Cycle: Getting the Odds on Your Side)
Several recent studies (Bliss, 1980; Boon & Draijer, 1993a; Coons & Milstein, 1986; Coons, Bowman, & Milstein, 1988; Putnam et al., 1986; Ross et al., 1989b) are largely consistent in terms of the general trends that they demonstrate. At the time of diagnosis (prior to exploration) approximately two to four personalities are in evidence. In the course of treatment an average of 13 to 15 are encountered, but this figure is deceptive. The mode in virtually all series is three, and median number of alters is eight to ten. Complex cases, with 26 or more alters (described in Kluft, 1988), constitute 15-25% of such series and unduly inflate the mean. Series currently being studied in tertiary referral centers appear to be more complex still (Kluft, Fink, Brenner, & Fine, unpublished data). This is subject to a number of interpretations. It is likely that the complexity of the more difficult and demanding cases treated in such settings may be one aspect of what makes them require such specialized care. It is also possible that the staff of such centers is differentially sensitive to the need to probe for previously undiscovered complexity in their efforts to treat patients who have failed to improve elsewhere. However, it is also possible that patients unduly interested in their disorders and who generate factitious complexity enter such series differently, or that some factor in these units or in those who refer to them encourages such complexity or at least the subjective report thereof.
Richard P. Kluft
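Kluft's point about the deceptive mean — a mode of three, a median of eight to ten, and a mean of 13 to 15 unduly inflated by a minority of complex cases — is a standard illustration of how outliers distort averages. The toy counts below are invented to reproduce the pattern, not drawn from any of the cited series.

```python
from statistics import mean, median, mode

# Hypothetical alter counts for 13 cases: most cluster near the mode of 3,
# while a few complex cases (26 or more alters) pull the mean well above
# the median -- the pattern Kluft describes. The numbers are invented.
alters = [3, 3, 3, 3, 4, 5, 8, 9, 10, 12, 14, 30, 45]

print(mode(alters))            # 3
print(median(alters))          # 8
print(round(mean(alters), 1))  # 11.5
```

With skewed data like this, the median and mode describe the typical case far better than the mean does.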
The motor activities we take for granted—getting out of a chair and walking across a room, picking up a cup and drinking coffee, and so on—require integration of all the muscles and sensory organs working smoothly together to produce coordinated movements that we don't even have to think about. No one has ever explained how the simple code of impulses can do all that. Even more troublesome are the higher processes, such as sight—in which somehow we interpret a constantly changing scene made of innumerable bits of visual data—or the speech patterns, symbol recognition, and grammar of our languages. Heading the list of riddles is the "mind-brain problem" of consciousness, with its recognition, "I am real; I think; I am something special." Then there are abstract thought, memory, personality, creativity, and dreams. The story goes that Otto Loewi had wrestled with the problem of the synapse for a long time without result, when one night he had a dream in which the entire frog-heart experiment was revealed to him. When he awoke, he knew he'd had the dream, but he'd forgotten the details. The next night he had the same dream. This time he remembered the procedure, went to his lab in the morning, did the experiment, and solved the problem. The inspiration that seemed to banish neural electricity forever can't be explained by the theory it supported! How do you convert simple digital messages into these complex phenomena? Latter-day mechanists have simply postulated brain circuitry so intricate that we will probably never figure it out, but some scientists have said there must be other factors.
Robert O. Becker (The Body Electric: Electromagnetism and the Foundation of Life)
To claim that mathematics is purely a human invention and is successful in explaining nature only because of evolution and natural selection ignores some important facts in the nature of mathematics and in the history of theoretical models of the universe. First, while the mathematical rules (e.g., the axioms of geometry or of set theory) are indeed creations of the human mind, once those rules are specified, we lose our freedom. The definition of the Golden Ratio emerged originally from the axioms of Euclidean geometry; the definition of the Fibonacci sequence from the axioms of the theory of numbers. Yet the fact that the ratio of successive Fibonacci numbers converges to the Golden Ratio was imposed on us; humans had no choice in the matter. Therefore, mathematical objects, albeit imaginary, do have real properties. Second, the explanation of the unreasonable power of mathematics cannot be based entirely on evolution in the restricted sense. For example, when Newton proposed his theory of gravitation, the data that he was trying to explain were at best accurate to three significant figures. Yet his mathematical model for the force between any two masses in the universe achieved the incredible precision of better than one part in a million. Hence, that particular model was not forced on Newton by existing measurements of the motions of planets, nor did Newton force a natural phenomenon into a preexisting mathematical pattern. Furthermore, natural selection in the common interpretation of that concept does not quite apply either, because it was not the case that five competing theories were proposed, of which one eventually won. Rather, Newton's was the only game in town!
Mario Livio (The Golden Ratio: The Story of Phi, the World's Most Astonishing Number)
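Livio's central example — that once the Fibonacci rule is specified, the ratio of successive terms converges to the Golden Ratio whether we choose so or not — is easy to verify directly. A minimal sketch (the function name is my own):

```python
def fib_ratios(n):
    """Ratios of successive Fibonacci numbers. Once the defining rule
    a, b -> b, a + b is fixed, these converge to the Golden Ratio
    whether we like it or not."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + 5 ** 0.5) / 2          # the Golden Ratio, about 1.6180
print(fib_ratios(20)[-1] - phi)   # difference is already tiny
```

After just twenty terms the ratio agrees with phi to better than one part in a billion — an imposed property of the sequence, not a human choice.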
..."facts" properly speaking are always and never more than interpretations of the data... the Gospel accounts are themselves such data or, if you like, hard facts. But the events to which the Gospels refer are not themselves "hard facts"; they are facts only in the sense that we interpret the text, together with such other data as we have, to reach a conclusion regarding the events as best we are able. They are facts in the same way that the verdict of a jury establishes the facts of the case, the interpretation of the evidence that results in the verdict delivered. Here it is as well to remember that historical methodology can only produce probabilities, the probability that some event took place in such circumstances being greater or smaller, depending on the quality of the data and the perspective of the historical enquirer. The jury which decides what is beyond reasonable doubt is determining that the probability is sufficiently high for a clear-cut verdict to be delivered. Those who like "certainty" in matters of faith will always find this uncomfortable. But faith is not knowledge of "hard facts"...; it is rather confidence, assurance, trust in the reliability of the data and in the integrity of the interpretations derived from that data... It does seem important to me that those who speak for evangelical Christians grasp this nettle firmly, even if it stings! – it is important for the intellectual integrity of evangelicals. Of course any Christian (and particularly evangelical Christians) will want to get as close as possible to the Jesus who ministered in Galilee in the late 20s of the first century. If, as they believe, God spoke in and through that man, more definitively and finally than at any other time and by any other medium, then of course Christians will want to hear as clearly as possible what he said, and to see as clearly as possible what he did, to come as close as possible to being an eyewitness and earwitness for themselves. 
If God revealed himself most definitively in the historical particularity of a Galilean Jew in the earliest decades of the Common Era, then naturally those who believe this will want to inquire as closely into the historical particularity and actuality of that life and of Jesus’ mission. The possibility that later faith has in some degree covered over that historical actuality cannot be dismissed as out of the question. So a genuinely critical historical inquiry is necessary if we are to get as close to the historical actuality as possible. Critical here, and this is the point, should not be taken to mean negatively critical, hermeneutical suspicion, dismissal of any material that has overtones of Easter faith. It means, more straightforwardly, a careful scrutiny of all the relevant data to gain as accurate or as historically responsible a picture as possible. In a day when evangelical, and even Christian, is often identified with a strongly right-wing, conservative and even fundamentalist attitude to the Bible, it is important that responsible evangelical scholars defend and advocate such critical historical inquiry and that their work display its positive outcome and benefits. These include believers growing in maturity
• to recognize gray areas and questions to which no clear-cut answer can be given (‘we see in a mirror dimly/a poor reflection’),
• to discern what really matters and distinguish them from issues that matter little,
• and be able to engage in genuine dialogue with those who share or respect a faith inquiring after truth and seeking deeper understanding.
In that way we may hope that evangelical (not to mention Christian) can again become a label that men and women of integrity and good will can respect and hope to learn from more than most seem to do today.
James D.G. Dunn (The Historical Jesus: Five Views)
But states have difficulty evaluating cybersecurity threats. If a state does detect an intrusion in one of its vital networks and if that intrusion looks to be from another state, what should the state suffering the intrusion conclude? On the one hand, it might be a defensive-minded intrusion, only checking out the intruded-upon state’s capabilities and providing reassuring intelligence to the intruding state. This might seem unsettling but not necessarily threatening, presuming the state suffering the intrusion was not developing capabilities for attack or seeking conflict. On the other hand, the intrusion might be more nefarious. It could be a sign of some coming harm, such as a cyber attack or an expanding espionage operation. The state suffering the intrusion will have to decide which of these two possibilities is correct, interpreting limited and almost certainly insufficient amounts of data to divine the intentions of another state. Thus Chapter Four’s argument is vitally important: intrusions into a state’s strategically important networks pose serious risks and are therefore inherently threatening. Intrusions launched by one state into the networks of another can cause a great deal of harm at inopportune times, even if the intrusion at the moment of discovery appears to be reasonably benign. The intrusion can also perform reconnaissance that enables a powerful and well-targeted cyber attack. Even operations launched with fully defensive intent can serve as beachheads for future attack operations, so long as a command and control mechanism is set up. Depending on its target, the intrusion can collect information that provides great insight into the communications and strategies of policy-makers. Network intrusions can also pose serious counterintelligence risks, revealing what secrets a state has learned about other states and provoking a damaging sense of paranoia. 
Given these very real threats, states are likely to view any serious intrusion with some degree of fear. They therefore have significant incentive to respond strongly, further animating the cybersecurity dilemma.
Ben Buchanan (The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations)
The textbooks of history prepared for the public schools are marked by a rather naive parochialism and chauvinism. There is no need to dwell on such futilities. But it must be admitted that even for the most conscientious historian abstention from judgments of value may offer certain difficulties. As a man and as a citizen the historian takes sides in many feuds and controversies of his age. It is not easy to combine scientific aloofness in historical studies with partisanship in mundane interests. But that can and has been achieved by outstanding historians. The historian's world view may color his work. His representation of events may be interlarded with remarks that betray his feelings and wishes and divulge his party affiliation. However, the postulate of scientific history's abstention from value judgments is not infringed by occasional remarks expressing the preferences of the historian if the general purport of the study is not affected. If the writer, speaking of an inept commander of the forces of his own nation or party, says "unfortunately" the general was not equal to his task, he has not failed in his duty as a historian. The historian is free to lament the destruction of the masterpieces of Greek art provided his regret does not influence his report of the events that brought about this destruction. The problem of Wertfreiheit must also be clearly distinguished from that of the choice of theories resorted to for the interpretation of facts. In dealing with the data available, the historian needs all the knowledge provided by the other disciplines, by logic, mathematics, praxeology, and the natural sciences. If what these disciplines teach is insufficient or if the historian chooses an erroneous theory out of several conflicting theories held by the specialists, his effort is misled and his performance is abortive. It may be that he chose an untenable theory because he was biased and this theory best suited his party spirit.
But the acceptance of a faulty doctrine may often be merely the outcome of ignorance or of the fact that it enjoys greater popularity than more correct doctrines. The main source of dissent among historians is divergence in regard to the teachings of all the other branches of knowledge upon which they base their presentation. To a historian of earlier days who believed in witchcraft, magic, and the devil's interference with human affairs, things had a different aspect than they have for an agnostic historian. The neomercantilist doctrines of the balance of payments and of the dollar shortage give an image of present-day world conditions very different from that provided by an examination of the situation from the point of view of modern subjectivist economics.
Ludwig von Mises (Theory and History: An Interpretation of Social and Economic Evolution)
Dr. Hobson (with Dr. Robert McCarley) made history by proposing the first serious challenge to Freud’s theory of dreams, called the “activation synthesis theory.” In 1977, they proposed the idea that dreams originate from random neural firings in the brain stem, which travel up to the cortex, which then tries to make sense of these random signals. The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert. As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state. As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves. These waves travel up the brain stem into the visual cortex, stimulating it to create dreams. Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams. This system also emits chemicals that decouple parts of the brain involved with reason and logic. The lack of checks coming from the prefrontal and orbitofrontal cortices, along with the brain becoming extremely sensitive to stray thoughts, may account for the bizarre, erratic nature of dreams. Studies have shown that it is possible to enter the cholinergic state without sleep. Dr. Edgar Garcia-Rill of the University of Arkansas claims that meditation, worrying, or being placed in an isolation tank can induce this cholinergic state. Pilots and drivers facing the monotony of a blank windshield for many hours may also enter this state. In his research, he has found that schizophrenics have an unusually large number of cholinergic neurons in their brain stem, which may explain some of their hallucinations.
To make his studies more efficient, Dr. Allan Hobson had his subjects put on a special nightcap that can automatically record data during a dream. One sensor connected to the nightcap registers the movements of a person’s head (because head movements usually occur when dreams end). Another sensor measures movements of the eyelids (because REM sleep causes eyelids to move). When his subjects wake up, they immediately record what they dreamed about, and the information from the nightcap is fed into a computer. In this way, Dr. Hobson has accumulated a vast amount of information about dreams. So what is the meaning of dreams? I asked him. He dismisses what he calls the “mystique of fortune-cookie dream interpretation.” He does not see any hidden message from the cosmos in dreams. Instead, he believes that after the PGO waves surge from the brain stem into the cortical areas, the cortex is trying to make sense of these erratic signals and winds up creating a narrative out of them: a dream.
Michio Kaku (The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind)
What are these substances? Medicines or drugs or sacramental foods? It is easier to say what they are not. They are not narcotics, nor intoxicants, nor energizers, nor anaesthetics, nor tranquilizers. They are, rather, biochemical keys which unlock experiences shatteringly new to most Westerners. For the last two years, staff members of the Center for Research in Personality at Harvard University have engaged in systematic experiments with these substances. Our first inquiry into the biochemical expansion of consciousness has been a study of the reactions of Americans in a supportive, comfortable naturalistic setting. We have had the opportunity of participating in over one thousand individual administrations. From our observations, from interviews and reports, from analysis of questionnaire data, and from pre- and postexperimental differences in personality test results, certain conclusions have emerged. (1) These substances do alter consciousness. There is no dispute on this score. (2) It is meaningless to talk more specifically about the “effect of the drug.” Set and setting, expectation, and atmosphere account for all specificity of reaction. There is no “drug reaction” but always setting-plus-drug. (3) In talking about potentialities it is useful to consider not just the setting-plus-drug but rather the potentialities of the human cortex to create images and experiences far beyond the narrow limitations of words and concepts. Those of us on this research project spend a good share of our working hours listening to people talk about the effect and use of consciousness-altering drugs. If we substitute the words human cortex for drug we can then agree with any statement made about the potentialities—for good or evil, for helping or hurting, for loving or fearing. Potentialities of the cortex, not of the drug. The drug is just an instrument. 
In analyzing and interpreting the results of our studies we looked first to the conventional models of modern psychology—psychoanalytic, behavioristic—and found these concepts quite inadequate to map the richness and breadth of expanded consciousness. To understand our findings we have finally been forced back on a language and point of view quite alien to us who are trained in the traditions of mechanistic objective psychology. We have had to return again and again to the nondualistic conceptions of Eastern philosophy, a theory of mind made more explicit and familiar in our Western world by Bergson, Aldous Huxley, and Alan Watts. In the first part of this book Mr. Watts presents with beautiful clarity this theory of consciousness, which we have seen confirmed in the accounts of our research subjects—philosophers, unlettered convicts, housewives, intellectuals, alcoholics. The leap across entangling thickets of the verbal, to identify with the totality of the experienced, is a phenomenon reported over and over by these persons.
Alan W. Watts (The Joyous Cosmology: Adventures in the Chemistry of Consciousness)
Well before the end of the 20th century, however, print had lost its former dominance. This resulted in, among other things, a different kind of person getting elected as leader. One who can present himself and his programs in a polished way, as Lee Kuan Yew observed in 2000, adding, “Satellite television has allowed me to follow the American presidential campaign. I am amazed at the way media professionals can give a candidate a new image and transform him, at least superficially, into a different personality. Winning an election becomes, in large measure, a contest in packaging and advertising.” Just as the benefits of the printed era were inextricable from its costs, so it is with the visual age. With screens in every home, entertainment is omnipresent and boredom a rarity. More substantively, injustice visualized is more visceral than injustice described. Television played a crucial role in the American civil rights movement, yet the costs of television are substantial, privileging emotional display over self-command, changing the kinds of people and arguments that are taken seriously in public life. The shift from print to visual culture continues with the contemporary entrenchment of the Internet and social media, which bring with them four biases that make it more difficult for leaders to develop their capabilities than in the age of print. These are immediacy, intensity, polarity, and conformity. Although the Internet makes news and data more immediately accessible than ever, this surfeit of information has hardly made us individually more knowledgeable, let alone wiser. As the cost of accessing information becomes negligible, as with the Internet, the incentives to remember it seem to weaken. While forgetting any one fact may not matter, the systematic failure to internalize information brings about a change in perception, and a weakening of analytical ability.
Facts are rarely self-explanatory; their significance and interpretation depend on context and relevance. For information to be transmuted into something approaching wisdom it must be placed within a broader context of history and experience. As a general rule, images speak at a more emotional register of intensity than do words. Television and social media rely on images that inflame the passions, threatening to overwhelm leadership with the combination of personal and mass emotion. Social media, in particular, have encouraged users to become image-conscious spin doctors. All this engenders a more populist politics that celebrates utterances perceived to be authentic over the polished sound bites of the television era, not to mention the more analytical output of print. The architects of the Internet thought of their invention as an ingenious means of connecting the world. In reality, it has also yielded a new way to divide humanity into warring tribes. Polarity and conformity rely upon, and reinforce, each other. One is shunted into a group, and then the group polices one's thinking. Small wonder that on many contemporary social media platforms, users are divided into followers and influencers. There are no leaders. What are the consequences for leadership? In our present circumstances, Lee's gloomy assessment of visual media's effects is relevant. From such a process, I doubt if a Churchill or Roosevelt or a de Gaulle can emerge. It is not that changes in communications technology have made inspired leadership and deep thinking about world order impossible, but that in an age dominated by television and the Internet, thoughtful leaders must struggle against the tide.
Henry Kissinger (Leadership : Six Studies in World Strategy)
Our patients predict the culture by living out consciously what the masses of people are able to keep unconscious for the time being. The neurotic is cast by destiny into a Cassandra role. In vain does Cassandra, sitting on the steps of the palace at Mycenae when Agamemnon brings her back from Troy, cry, “Oh for the nightingale’s pure song and a fate like hers!” She knows, in her ill-starred life, that “the pain flooding the song of sorrow is [hers] alone,” and that she must predict the doom she sees will occur there. The Mycenaeans speak of her as mad, but they also believe she does speak the truth, and that she has a special power to anticipate events. Today, the person with psychological problems bears the burdens of the conflicts of the times in his blood, and is fated to predict in his actions and struggles the issues which will later erupt on all sides in the society. The first and clearest demonstration of this thesis is seen in the sexual problems which Freud found in his Victorian patients in the two decades before World War I. These sexual topics‒even down to the words‒were entirely denied and repressed by the accepted society at the time. But the problems burst violently forth into endemic form two decades later after World War II. In the 1920's, everybody was preoccupied with sex and its functions. Not by the furthest stretch of the imagination can anyone argue that Freud "caused" this emergence. He rather reflected and interpreted, through the data revealed by his patients, the underlying conflicts of the society, which the “normal” members could and did succeed in repressing for the time being. Neurotic problems are the language of the unconscious emerging into social awareness. A second, more minor example is seen in the great amount of hostility which was found in patients in the 1930's. This was written about by Horney, among others, and it emerged more broadly and openly as a conscious phenomenon in our society a decade later. 
A third major example may be seen in the problem of anxiety. In the late 1930's and early 1940's, some therapists, including myself, were impressed by the fact that in many of our patients anxiety was appearing not merely as a symptom of repression or pathology, but as a generalized character state. My research on anxiety, and that of Hobart Mowrer and others, began in the early 1940's. In those days very little concern had been shown in this country for anxiety other than as a symptom of pathology. I recall arguing in the late 1940's, in my doctoral orals, for the concept of normal anxiety, and my professors heard me with respectful silence but with considerable frowning. Predictive as the artists are, the poet W. H. Auden published his Age of Anxiety in 1947, and just after that Bernstein wrote his symphony on that theme. Camus was then writing (1947) about this “century of fear,” and Kafka already had created powerful vignettes of the coming age of anxiety in his novels, most of them as yet untranslated. The formulations of the scientific establishment, as is normal, lagged behind what our patients were trying to tell us. Thus, at the annual convention of the American Psychopathological Association in 1949 on the theme “Anxiety,” the concept of normal anxiety, presented in a paper by me, was still denied by most of the psychiatrists and psychologists present. But in the 1950's a radical change became evident; everyone was talking about anxiety and there were conferences on the problem on every hand. Now the concept of "normal" anxiety gradually became accepted in the psychiatric literature. Everybody, normal as well as neurotic, seemed aware that he was living in the “age of anxiety.” What had been presented by the artists and had appeared in our patients in the late 30's and 40's was now endemic in the land.
Rollo May (Love and Will)
Simple Regression   CHAPTER OBJECTIVES After reading this chapter, you should be able to Use simple regression to test the statistical significance of a bivariate relationship involving one dependent and one independent variable Use Pearson’s correlation coefficient as a measure of association between two continuous variables Interpret statistics associated with regression analysis Write up the model of simple regression Assess assumptions of simple regression This chapter completes our discussion of statistical techniques for studying relationships between two variables by focusing on those that are continuous. Several approaches are examined: simple regression; the Pearson’s correlation coefficient; and a nonparametric alternative, Spearman’s rank correlation coefficient. Although all three techniques can be used, we focus particularly on simple regression. Regression allows us to predict outcomes based on knowledge of an independent variable. It is also the foundation for studying relationships among three or more variables, including control variables mentioned in Chapter 2 on research design (and also in Appendix 10.1). Regression can also be used in time series analysis, discussed in Chapter 17. We begin with simple regression. SIMPLE REGRESSION Let’s first look at an example. Say that you are a manager or analyst involved with a regional consortium of 15 local public agencies (in cities and counties) that provide low-income adults with health education about cardiovascular diseases, in an effort to reduce such diseases. The funding for this health education comes from a federal grant that requires annual analysis and performance outcome reporting. In Chapter 4, we used a logic model to specify that a performance outcome is the result of inputs, activities, and outputs. 
Following the development of such a model, you decide to conduct a survey among participants who attend such training events to collect data about the number of events they attended, their knowledge of cardiovascular disease, and a variety of habits such as smoking that are linked to cardiovascular disease. Some things that you might want to know are whether attending workshops increases
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
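The mechanics Berman describes can be sketched numerically. The snippet below is a minimal illustration of bivariate least-squares regression, not his health-education dataset: the workshop counts, knowledge scores, and all names are invented for the example.

```python
# Simple (bivariate) least-squares regression: y = a + b*x.
# Illustrative data only -- not the health-education survey in the excerpt.
def simple_regression(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                    # slope (regression coefficient)
    a = my - b * mx                  # intercept
    # R-squared: percent of variation in y explained by x
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

xs = [1, 2, 3, 4, 5]           # e.g., workshops attended (invented)
ys = [52, 55, 61, 64, 68]      # e.g., knowledge score (invented)
a, b, r2 = simple_regression(xs, ys)
```

The slope b is what a test of significance would evaluate; the R-squared value carries the "percent variation explained" interpretation the chapter relies on.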
COEFFICIENT The nonparametric alternative, Spearman’s rank correlation coefficient (r, or “rho”), looks at correlation among the ranks of the data rather than among the values. The ranks of data are determined as shown in Table 14.2 (adapted from Table 11.8): Table 14.2 Ranks of Two Variables In Greater Depth … Box 14.1 Crime and Poverty An analyst wants to examine empirically the relationship between crime and income in cities across the United States. The CD that accompanies the workbook Exercising Essential Statistics includes a Community Indicators dataset with assorted indicators of conditions in 98 cities such as Akron, Ohio; Phoenix, Arizona; New Orleans, Louisiana; and Seattle, Washington. The measures include median household income, total population (both from the 2000 U.S. Census), and total violent crimes (FBI, Uniform Crime Reporting, 2004). In the sample, household income ranges from $26,309 (Newark, New Jersey) to $71,765 (San Jose, California), and the median household income is $42,316. Per-capita violent crime ranges from 0.15 percent (Glendale, California) to 2.04 percent (Las Vegas, Nevada), and the median violent crime rate per capita is 0.78 percent. There are four types of violent crimes: murder and nonnegligent manslaughter, forcible rape, robbery, and aggravated assault. A measure of total violent crime per capita is calculated because larger cities are apt to have more crime. The analyst wants to examine whether income is associated with per-capita violent crime. The scatterplot of these two continuous variables shows that a negative relationship appears to be present: The Pearson’s correlation coefficient is –.532 (p < .01), and the Spearman’s correlation coefficient is –.552 (p < .01). The simple regression model shows R2 = .283. The regression model is as follows (t-test statistic in parentheses): The regression line is shown on the scatterplot. 
Interpreting these results, we see that the R-square value of .283 indicates a moderate relationship between these two variables. Clearly, some cities with modest median household incomes have a high crime rate. However, removing these cities does not greatly alter the findings. Also, an assumption of regression is that the error term is normally distributed, and further examination of the error shows that it is somewhat skewed. The techniques for examining the distribution of the error term are discussed in Chapter 15, but again, addressing this problem does not significantly alter the finding that the two variables are significantly related to each other, and that the relationship is of moderate strength. With this result in hand, further analysis shows, for example, by how much violent crime decreases for each increase in household income. For each increase of $10,000 in average household income, the violent crime rate drops 0.25 percent. For a city experiencing the median 0.78 percent crime rate, this would be a considerable improvement, indeed. Note also that the scatterplot shows considerable variation in the crime rate for cities at or below the median household income, in contrast to those well above it. Policy analysts may well wish to examine conditions that give rise to variation in crime rates among cities with lower incomes. Because Spearman’s rank correlation coefficient examines correlation among the ranks of variables, it can also be used with ordinal-level data.9 For the data in Table 14.2, Spearman’s rank correlation coefficient is .900 (p = .035).10 Spearman’s rho-squared coefficient has a “percent variation explained” interpretation, similar
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
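The contrast the excerpt draws between Pearson's coefficient (correlation of values) and Spearman's coefficient (correlation of ranks) can be sketched directly. The income and crime figures below are invented stand-ins, not the Community Indicators data cited in Box 14.1.

```python
# Pearson's r correlates the values themselves; Spearman's rho correlates
# their ranks, so it also suits ordinal data and monotone relationships.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def ranks(vals):
    # Rank 1 = smallest value; tied values receive their average rank.
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    r = [0.0] * len(vals)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

income = [26, 30, 35, 42, 55, 72]     # invented, in $1,000s
crime = [2.0, 1.6, 1.1, 0.8, 0.5, 0.2]  # invented, % violent crime
```

On this toy data the relationship is perfectly monotone, so Spearman's rho is exactly -1.0 while Pearson's r is slightly weaker, since the decline is not perfectly linear.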
to the measures described earlier. Hence, 90 percent of the variation in one variable can be explained by the other. For the variables given earlier, the Spearman’s rank correlation coefficient is .274 (p < .01), which is comparable to r reported in preceding sections. Box 14.1 illustrates another use of the statistics described in this chapter, in a study of the relationship between crime and poverty. SUMMARY When analysts examine relationships between two continuous variables, they can use simple regression or the Pearson’s correlation coefficient. Both measures show (1) the statistical significance of the relationship, (2) the direction of the relationship (that is, whether it is positive or negative), and (3) the strength of the relationship. Simple regression assumes a causal and linear relationship between the continuous variables. The statistical significance and direction of the slope coefficient is used to assess the statistical significance and direction of the relationship. The coefficient of determination, R2, is used to assess the strength of relationships; R2 is interpreted as the percent variation explained. Regression is a foundation for studying relationships involving three or more variables, such as control variables. The Pearson’s correlation coefficient does not assume causality between two continuous variables. A nonparametric alternative to testing the relationship between two continuous variables is the Spearman’s rank correlation coefficient, which examines correlation among the ranks of the data rather than among the values themselves. As such, this measure can also be used to study relationships in which one or both variables are ordinal. 
KEY TERMS   Coefficient of determination, R2 Error term Observed value of y Pearson’s correlation coefficient, r Predicted value of the dependent variable y, ŷ Regression coefficient Regression line Scatterplot Simple regression assumptions Spearman’s rank correlation coefficient Standard error of the estimate Test of significance of the regression coefficient Notes   1. See Chapter 3 for a definition of continuous variables. Although the distinction between ordinal and continuous is theoretical (namely, whether or not the distance between categories can be measured), in practice ordinal-level variables with seven or more categories (including Likert variables) are sometimes analyzed using statistics appropriate for interval-level variables. This practice has many critics because it violates an assumption of regression (interval data), but it is often
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
other and distinct from other groups. These techniques usually precede regression and other analyses. Factor analysis is a well-established technique that often aids in creating index variables. Earlier, Chapter 3 discussed the use of Cronbach alpha to empirically justify the selection of variables that make up an index. However, in that approach analysts must still justify that variables used in different index variables are indeed distinct. By contrast, factor analysis analyzes a large number of variables (often 20 to 30) and classifies them into groups based on empirical similarities and dissimilarities. This empirical assessment can aid analysts’ judgments regarding variables that might be grouped together. Factor analysis uses correlations among variables to identify subgroups. These subgroups (called factors) are characterized by relatively high within-group correlation among variables and low between-group correlation among variables. Most factor analysis consists of roughly four steps: (1) determining that the group of variables has enough correlation to allow for factor analysis, (2) determining how many factors should be used for classifying (or grouping) the variables, (3) improving the interpretation of correlations and factors (through a process called rotation), and (4) naming the factors and, possibly, creating index variables for subsequent analysis. Most factor analysis is used for grouping of variables (R-type factor analysis) rather than observations (Q-type). Often, discriminant analysis is used for grouping of observations, mentioned later in this chapter. The terminology of factor analysis differs greatly from that used elsewhere in this book, and the discussion that follows is offered as an aid in understanding tables that might be encountered in research that uses this technique. An important task in factor analysis is determining how many common factors should be identified. 
Theoretically, there are as many factors as variables, but only a few factors account for most of the variance in the data. The percentage of variation explained by each factor is defined as the eigenvalue divided by the number of variables, whereby the
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
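The eigenvalue arithmetic the excerpt describes (percent of variation explained equals the eigenvalue divided by the number of variables) can be sketched on a toy correlation matrix. The matrix values and the eigenvalue-greater-than-one cutoff used here are illustrative assumptions, not taken from Berman's text.

```python
import numpy as np

# Toy correlation matrix for four variables: v1 and v2 correlate strongly,
# as do v3 and v4 -- suggesting two underlying factors. Numbers are invented.
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
])
eigenvalues = np.linalg.eigvalsh(R)[::-1]     # sorted largest first
pct_explained = eigenvalues / R.shape[0]      # eigenvalue / # of variables
# A common retention rule (Kaiser) keeps factors with eigenvalue > 1.
n_factors = int((eigenvalues > 1).sum())
```

Because the eigenvalues of a correlation matrix sum to the number of variables, the pct_explained entries sum to 1, which is exactly the "percent variation explained" reading in the excerpt.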
Facts are merely data; the interpretation of them, however, is knowledge.
Joshua V. Scher (Here & There)
Agendas are what get people, even historians, out of bed in the mornings, though one might hope that, once at the desk, they allow the data to challenge the hypotheses they have dreamed up overnight.
N.T. Wright (Paul and His Recent Interpreters)
I once had a foreign exchange trader who worked for me who was an unabashed chartist. He truly believed that all the information you needed was reflected in the past history of a currency. Now it's true there can be less to consider in trading currencies than individual equities, since at least for developed country currencies it's typically not necessary to pore over their financial statements every quarter. And in my experience, currencies do exhibit sustainable trends more reliably than, say, bonds or commodities. Imbalances caused by, for example, interest rate differentials that favor one currency over another (by making it more profitable to invest in the higher-yielding one) can persist for years. Of course, another appeal of charting can be that it provides a convenient excuse to avoid having to analyze financial statements or other fundamental data. Technical analysts take their work seriously and apply themselves to it diligently, but it's also possible for a part-time technician to do his market analysis in ten minutes over coffee and a bagel. This can create the false illusion of being a very efficient worker. The FX trader I mentioned was quite happy to engage in an experiment whereby he did the trades recommended by our in-house market technician. Both shared the same commitment to charts as an under-appreciated path to market success, a belief clearly at odds with the in-house technician's avoidance of trading any actual positions so as to provide empirical proof of his insights with trading profits. When challenged, he invariably countered that managing trading positions would challenge his objectivity, as if holding a losing position would induce him to continue recommending it in spite of the chart's contrary insight. But then, why hold a losing position if it's not what the chart said? I always found debating such tortured logic a brief but entertaining use of time when lining up to get lunch in the trader's cafeteria. 
To the surprise of my FX trader if not to me, the technical analysis trading account was unprofitable. In explaining the result, my Kool-Aid drinking trader even accepted partial responsibility for at times misinterpreting the very information he was analyzing. It was along the lines of that he ought to have recognized the type of pattern that was evolving but stupidly interpreted the wrong shape. It was almost as if the results were not the result of the faulty religion but of the less than completely faithful practice of one of its adherents. So what use to a profit-oriented trading room is a fully committed chartist who can't be trusted even to follow the charts? At this stage I must confess that we had found ourselves in this position as a last-ditch effort on my part to salvage some profitability out of a trader I'd hired who had to this point been consistently losing money. His own market views expressed in the form of trading positions had been singularly unprofitable, so all that remained was to see how he did with somebody else's views. The experiment wasn't just intended to provide a “live ammunition” record of our in-house technician's market insights, it was my last best effort to prove that my recent hiring decision hadn't been a bad one. Sadly, his failure confirmed my earlier one and I had to fire him. All was not lost though, because he was able to transfer his unsuccessful experience as a proprietary trader into a new business advising clients on their hedge fund investments.
Simon A. Lack (Wall Street Potholes: Insights from Top Money Managers on Avoiding Dangerous Products)
The integrity of any theory, Kuhn argued, lies in its falsifiability - that is, its openness to the possibility of repudiation in the light of more evidence, fresh insights or a more creative interpretation of data whose significance was not previously understood.
Hugh Mackay (The Good Life)
The Scheffe test is the most conservative, the Tukey test is best when many comparisons are made (when there are many groups), and the Bonferroni test is preferred when few comparisons are made. However, these post-hoc tests often support the same conclusions.3 To illustrate, let’s say the independent variable has three categories. Then, a post-hoc test will examine hypotheses for whether μ1 = μ2, μ1 = μ3, and μ2 = μ3. In addition, these tests will also examine which categories have means that are not significantly different from each other, hence, providing homogeneous subsets. An example of this approach is given later in this chapter. Knowing such subsets can be useful when the independent variable has many categories (for example, classes of employees). Figure 13.1 ANOVA: Significant and Insignificant Differences Eta-squared (η2) is a measure of association for mixed nominal-interval variables and is appropriate for ANOVA. Its values range from zero to one, and it is interpreted as the percentage of variation explained. It is a directional measure, and computer programs produce two statistics, alternating specification of the dependent variable. Finally, ANOVA can be used for testing interval-ordinal relationships. We can ask whether the change in means follows a linear pattern that is either increasing or decreasing. For example, assume we want to know whether incomes increase according to the political orientation of respondents, when measured on a seven-point Likert scale that ranges from very liberal to very conservative. If a linear pattern of increase exists, then a linear relationship is said to exist between these variables. Most statistical software packages can test for a variety of progressive relationships. 
ANOVA Assumptions ANOVA assumptions are essentially the same as those of the t-test: (1) the dependent variable is continuous, and the independent variable is ordinal or nominal, (2) the groups have equal variances, (3) observations are independent, and (4) the variable is normally distributed in each of the groups. The assumptions are tested in a similar manner. Relative to the t-test, ANOVA requires a little more concern regarding the assumptions of normality and homogeneity. First, like the t-test, ANOVA is not robust for the presence of outliers, and analysts examine the presence of outliers for each group. Also, ANOVA appears to be less robust than the t-test for deviations from normality. Second, regarding groups having equal variances, our main concern with homogeneity is that there are no substantial differences in the amount of variance across the groups; the test of homogeneity is a strict test, testing for any departure from equal variances, and in practice, groups may have neither equal variances nor substantial differences in the amount of variances. In these instances, a visual finding of no substantial differences suffices. Other strategies for dealing with heterogeneity are variable transformations and the removal of outliers, which increase variance, especially in small groups. Such outliers are detected by examining boxplots for each group separately. Also, some statistical software packages (such as SPSS), now offer post-hoc tests when equal variances are not assumed.4 A Working Example The U.S. Environmental Protection Agency (EPA) measured the percentage of wetland loss in watersheds between 1982 and 1992, the most recent period for which data are available (government statistics are sometimes a little old).5 An analyst wants to know whether watersheds with large surrounding populations have
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
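A minimal sketch of the one-way ANOVA F statistic the excerpt discusses, computed from scratch: the ratio of between-group to within-group variance. The wetland-loss figures and group labels below are invented, not the EPA data from the working example.

```python
# One-way ANOVA F statistic: mean square between groups divided by
# mean square within groups. Large F => group means differ.
def anova_f(groups):
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)

# Invented % wetland loss, grouped by surrounding population class
small = [3, 4, 5, 4]
medium = [6, 7, 6, 5]
large = [9, 8, 10, 9]
F = anova_f([small, medium, large])
```

The F value would then be compared against the F distribution with (k - 1, n - k) degrees of freedom to get a p-value; post-hoc tests such as Tukey's take over from there.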
factual data, for example, that by selecting every fiftieth letter from the Book of Genesis, you could spell the word Torah. What meaning to attach to such a message was open to interpretation, but many people agreed that it could not have happened by chance, especially in view of the fact that the same exercise practiced in the Book of Exodus produces the same result.
J.C. Ryan (The 10th Cycle (Rossler Foundation, #1))
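The letter-skipping procedure the novel describes is, mechanically, an equidistant letter sequence. A toy sketch follows; the input string and function name are invented, and nothing here bears on whether such patterns in Genesis are statistically meaningful.

```python
# Equidistant letter sequence (ELS): take every k-th letter from a text,
# ignoring spaces and punctuation. The toy string below is constructed so
# that a skip of 3 spells "TORA" -- purely to show the mechanics.
def els(text, start, skip, length):
    letters = [c for c in text.upper() if c.isalpha()]
    return "".join(letters[start : start + skip * length : skip])

toy = "TxxOxxRxxAxx"
word = els(toy, 0, 3, 4)
```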
When running a Python program, the interpreter spends most of its time figuring out what low-level operation to perform, and extracting the data to give to this low-level operation. Given Python’s design and flexibility, the Python interpreter always has to determine the low-level operation in a completely general way, because a variable can have any type at any time. This is known as dynamic dispatch, and for many reasons, fully general dynamic dispatch is slow.[5] For example, consider what happens when the Python runtime evaluates a + b: The interpreter inspects the Python object referred to by a for its type, which requires at least one pointer lookup at the C level. The interpreter asks the type for an implementation of the addition method, which may require one or more additional pointer lookups and internal function calls. If the method in question is found, the interpreter then has an actual function it can call, implemented either in Python or in C. The interpreter calls the addition function and passes in a and b as arguments. The addition function extracts the necessary internal data from a and b, which may require several more pointer lookups and conversions from Python types to C types. If successful, only then can it perform the actual operation that adds a and b together. The result then must be placed inside a (perhaps new) Python object and returned. Only then is the operation complete. The situation for C is very different. Because C is compiled and statically typed, the C compiler can determine at compile time what low-level operations to perform and what low-level data to pass as arguments. At runtime, a compiled C program skips nearly all steps that the Python interpreter must perform. For something like a + b with a and b both being fundamental numeric types, the compiler generates a handful of machine code instructions to load the data into registers, add them, and store the result.
Anonymous
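The dynamic dispatch described above can be made visible from within Python itself: defining __add__ on a class shows the interpreter locating and calling the operand type's addition method at runtime. The Tracked class is a made-up illustration, not part of any library.

```python
# Python resolves `a + b` at runtime by consulting the operands' types.
# Defining __add__ on a class exposes that lookup: the interpreter finds
# type(a).__add__ and calls it with a and b, exactly as the excerpt says.
class Tracked:
    def __init__(self, value):
        self.value = value
        self.lookups = 0

    def __add__(self, other):
        self.lookups += 1          # the interpreter found and called this
        return Tracked(self.value + other.value)

a, b = Tracked(2), Tracked(3)
c = a + b                          # dispatches to type(a).__add__(a, b)
```

Every one of those steps (type inspection, method lookup, boxing the result in a new object) happens on each addition, which is the per-operation overhead a compiled C program avoids.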
2. Abstract concepts. It is extremely difficult to explain how any set of purely physical actions and interactions could possibly invest consciousness with the immaterial—which is to say, purely abstract—concepts by which all experience is necessarily interpreted and known. It is almost impossible to say how a purely material system of stimulus and response could generate universal categories of understanding, especially if (and one hopes that most materialists would grant this much) those categories are not mere idiosyncratic personal inflections of experience, but real forms of knowledge about reality. In fact, they are the very substance of our knowledge of reality. As Hegel argued perhaps more persuasively than any other philosopher, simple sense-knowledge of particular things, in itself, would be utterly vacuous. My understanding of anything, even something as humbly particular as that insistently red rose in my garden, is composed not just of a collection of physical data but of the conceptual abstractions that my mind imposes upon them: I know the rose as a discrete object, as a flower, as a particular kind of flower, as a kind of vegetation, as a horticultural achievement, as a biological system, as a feature of an ecology, as an object of artistic interest, as a venerable and multi-faceted symbol, and so on; some of the concepts by which I know it are eidetic, some taxonomic, some aesthetic, some personal, and so on. All of these abstractions belong to various kinds of category and allow me, according to my interests and intentions, to situate the rose in a vast number of different sets: I can associate it eidetically not only with other flowers, but also with pictures of flowers; I can associate it biologically not only with other flowers, but also with non-floriferous sorts of vegetation; and so on. 
It is excruciatingly hard to see how any mechanical material system could create these categories, or how any purely physical system of interactions, however precisely coordinated, could produce an abstract concept. Surely no sequence of gradual or particulate steps, physiological or evolutionary, could by itself overcome the qualitative abyss between sense experience and mental abstractions.
David Bentley Hart (The Experience of God: Being, Consciousness, Bliss)
scripting language is a programming language that provides you with the ability to write scripts that are evaluated (or interpreted) by a runtime environment called a script engine (or an interpreter). A script is a sequence of characters that is written using the syntax of a scripting language and used as the source for a program executed by an interpreter. The interpreter parses the scripts, produces intermediate code, which is an internal representation of the program, and executes the intermediate code. The interpreter stores the variables used in a script in data structures called symbol tables. Typically, unlike in a compiled programming language, the source code (called a script) in a scripting language is not compiled but is interpreted at runtime. However, scripts written in some scripting languages may be compiled into Java bytecode that can be run by the JVM. Java 6 added scripting support to the Java platform that lets a Java application execute scripts written in scripting languages such as Rhino JavaScript, Groovy, Jython, JRuby, Nashorn JavaScript, and so on. Two-way communication is supported. It also lets scripts access Java objects created by the host application. The Java runtime and a scripting language runtime can communicate and make use of each other’s features. Support for scripting languages in Java comes through the Java Scripting API. All classes and interfaces in the Java Scripting API are in the javax.script package. Using a scripting language in a Java application provides several advantages: Most scripting languages are dynamically typed, which makes it simpler to write programs. They provide a quicker way to develop and test small applications. Customization by end users is possible. A scripting language may provide domain-specific features that are not available in Java. Scripting languages have some disadvantages as well. 
For example, dynamic typing is good to write simpler code; however, it turns into a disadvantage when a type is interpreted incorrectly and you have to spend a lot of time debugging it. Scripting support in Java lets you take advantage of both worlds: it allows you to use the Java programming language for developing statically typed, scalable, and high-performance parts of the application and use a scripting language that fits the domain-specific needs for other parts. I will use the term script engine frequently in this book. A script engine is a software component that executes programs written in a particular scripting language. Typically, but not necessarily, a script engine is an implementation of an interpreter for a scripting language. Interpreters for several scripting languages have been implemented in Java. They expose programming interfaces so a Java program may interact with them.
Kishori Sharan (Scripting in Java: Integrating with Groovy and JavaScript)
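Since the excerpt's examples are JVM-based, the host-and-script-engine pattern it describes can be sketched in Python rather than javax.script: the host evaluates a script string against a shared symbol table (a plain dict here), and each side can read the other's bindings. The variable names and script contents are invented for illustration.

```python
# Analogy to a script engine's symbol table: the host seeds a dict with
# bindings, "runs" a script string against it, and reads results back.
symbols = {"greeting": "Hello"}                   # host-provided binding
script = "message = greeting + ', script!'\nlength = len(message)"
exec(script, symbols)                             # the "engine" run
result = symbols["message"]                       # script output, read by host
```

This two-way flow (host objects visible to the script, script results visible to the host) is the core of what the Java Scripting API standardizes across engines such as Nashorn, Jython, and JRuby.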
open coding; development of concepts; grouping concepts into categories; formation of a theory. In the open coding stage, we analyze the text and identify any interesting phenomena in the data. Normally each unique phenomenon is given a distinctive name or code. The procedure and methods for identifying coding items are discussed in section 11.5.2. In the second stage, collections of codes that describe similar contents are grouped together to form higher level “concepts.” In the third stage, broader groups of similar concepts are identified to form “categories” and there is a detailed interpretation of each category. In this process, we are constantly searching for and refining the conceptual construct that may explain the relationship between the concepts and categories (Glaser, 1978). In the last stage, theory formulation, we aim at creating inferential and predictive statements about the phenomena recorded in the data.
Jonathan Lazar (Research Methods in Human-Computer Interaction)
INTEGRITY IS: Empirical knowledge is one of the critical ingredients of integrity, followed by truth, which is not necessarily moral, hence the evaluation of the truth. Empirical knowledge, coupled with the evaluation of truth for determination of its morality, equals factual righteousness and subsequently perpetuates integrity. Perceptions can at times influence our evaluation of truth, which may ultimately influence our identification of a subject’s morality. Our perception of what is truthful and righteous is not necessarily others’ interpretations. Assumption and credulity is the archenemy—the nemesis, if you will—of truth and ultimately of morality. We sometimes have an unwillingness or aversion toward researching a subject before formulating an opinion. But the more knowledge we learn about a subject, the more beneficial it will be. Understanding based on factual data will enable a more realistic and enhanced resolution of questions about a subject’s morality. Do some investigation of facts, situations, ideology, and belief systems for your empirical knowledge. As you do so, the truth shall be brought to the surface. As you analyze empirical knowledge, the truth and its morality will determine the factual righteousness of the subject, and ultimately, its integrity. Integrity Equation: Empirical knowledge + truth + morality = factual righteousness = INTEGRITY
I.Alan Appt
Besides, what is human consciousness if not a data stream? A means for human beings to harvest information and interpret the world around them?
Pippa DaCosta (Girl From Above: Escape (The 1000 Revolution, #2))
Exploratory wells located invariably by a set of deterministic seismic interpretations are drilled into reservoirs under uncertainty that is invariably poorly quantified, the geologic models yawning to be optimized by a mindset that is educated in a data-driven methodology.
Keith Holdaway (Harness Oil and Gas Big Data with Analytics: Optimize Exploration and Production with Data-Driven Models (Wiley and SAS Business Series))
But Jacobs does not seem to have any awareness of how severely the Kroegers’ arguments have been criticized by competent New Testament scholars. Compare Jacobs’s trust in the Kroegers’ writings to the scholarly analyses of Thomas Schreiner, Robert W. Yarbrough, Albert Wolters, and S. M. Baugh mentioned above. (Schreiner is professor of New Testament at The Southern Baptist Theological Seminary in Louisville, Kentucky; Yarbrough is chairman of the New Testament department at Trinity Evangelical Divinity School in Deerfield, Illinois; Wolters is professor of religion and theology/classical languages at Redeemer University College, Ancaster, Ontario, Canada; and Baugh is professor of New Testament at Westminster Theological Seminary in Escondido, California.) These New Testament scholars do not simply say they disagree with the Kroegers (for scholars will always differ in their interpretation of data), but they say that again and again the Kroegers are not even telling the truth about much of the historical data that they claim. But in spite of this widespread rejection of the Kroegers’ argument, evangelical leaders like Cindy Jacobs accept it as true.
Wayne Grudem (Evangelical Feminism: A New Path to Liberalism?)
There’s a phrase that critics of economic forecasting like to use: Give an economist a result you want, and he’ll find the numbers to justify it. This entire city is filled with number crunchers who look at the exact same data and interpret it in widely disparate ways on everything from the federal budget deficit to the Social Security surplus.” “Meaning that data can be manipulated.” “Of course it can, depending on who’s paying the meter and whose political agenda is being furthered,
David Baldacci (Total Control)
Now there is nothing unscientific in utilising, for the interpretation of data, any model that seems promising; and there is therefore nothing unscientific either in Freud’s introduction of his model or in his own or others’ employment of it. Nevertheless, the question arises whether there may by now be an alternative better suited for the purpose in hand.
John Bowlby (Attachment (Attachment & Loss #1))
Apart from outright fraud, there are all those “benevolent mistakes” that scientists make more or less unwittingly: poor experiment design, sloppy data management, bias in the interpretation of facts and inadequate communication of results and methods. Then, of course, there is the devilish complexity of reality itself, which withholds more than it reveals to the prying eyes of science.
Anonymous
From the earliest of times, production of metal from ore (a stone) in the furnace was interpreted as an act of creation of matter (this explains why metallurgists were generally considered as men with divine powers). Interestingly, the name Cain derives from the Semitic root (QN) that formerly referred to acts of creation. Accordingly, it is not surprising that Cain is the common name of the smelters in ancient Canaanite, and that Tubal-cain is regarded in the book of Genesis as 'the father of every smith' (Gen. 4.22). The Kenites (sons of Cain), a small tribe mentioned in the Bible, have been identified for a long time as the Canaanite copper metallurgists. Bringing together data from many biblical sources reveals that this small tribe originated from the land of Edom, and especially to the area of Bozrah-Sela-Punon, the homeland of the Canaanite copper metallurgy. (p. 393) from 'Yahweh, the Canaanite God of Metallurgy?', JSOT 33.4 (2009): 387-404
Nissim Amzallag
More generally, a data scientist is someone who knows how to extract meaning from and interpret data, which requires both tools and methods from statistics and machine learning, as well as being human. She spends a lot of time in the process of collecting, cleaning, and munging data, because data is never clean. This process requires persistence, statistics, and software engineering skills — skills that are also necessary for understanding biases in the data, and for debugging logging output from code. Once she gets the data into shape, a crucial part is exploratory data analysis, which combines visualization and data sense. She’ll find patterns, build models, and algorithms — some with the intention of understanding product usage and the overall health of the product, and others to serve as prototypes that ultimately get baked back into the product. She may design experiments, and she is a critical part of data-driven decision making. She’ll communicate with team members, engineers, and leadership in clear language and with data visualizations so that even if her colleagues are not immersed in the data themselves, they will understand the implications.
Rachel Schutt (Doing Data Science)
…their interpretation of the data, to the tendency to "suspend our disbelief" in order to have a more immersive play experience. Kurt Squire found similar patterns when he sought to integrate the commercial game Civilization III into…
Henry Jenkins (Confronting the Challenges of Participatory Culture: Media Education for the 21st Century)
Depending on how one interpreted the data and how one weighed all the caveats, the dots could be connected to point in different directions. The ambiguities inherent to nutrition studies opened the door for their interpretation to be influenced by bias—which hardened into a kind of faith.
Nina Teicholz (The Big Fat Surprise: Why Butter, Meat and Cheese Belong in a Healthy Diet)
it’s hard to avoid the suspicion that once the government began advocating fat reduction in the American diet it changed the way many investigators in this science perceived their obligations. Those who believed that dietary fat caused heart disease had always preferentially interpreted their data in the light of that hypothesis.
Gary Taubes (Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease)
The structure of de Prony’s computing office cannot be easily seen in Smith’s example. His computing staff had two distinct classes of workers. The larger of these was a staff of nearly ninety computers. These workers were quite different from Smith’s pin makers or even from the computers at the British Nautical Almanac and the Connaissance des Temps. Many of de Prony’s computers were former servants or wig dressers, who had lost their jobs when the Revolution rendered the elegant styles of Louis XVI unfashionable or even treasonous.35 They were not trained in mathematics and held no special interest in science. De Prony reported that most of them “had no knowledge of arithmetic beyond the two first rules [of addition and subtraction].”36 They were little different from manual workers and could not discern whether they were computing trigonometric functions, logarithms, or the orbit of Halley’s comet. One labor historian has described them as intellectual machines, “grasping and releasing a single piece of ‘data’ over and over again.”37 The second class of workers prepared instructions for the computation and oversaw the actual calculations. De Prony had no special title for this group of workers, but subsequent computing organizations came to use the term “planning committee” or merely “planners,” as they were the ones who actually planned the calculations. There were eight planners in de Prony’s organization. Most of them were experienced computers who had worked for either the Bureau du Cadastre or the Paris Observatory. A few had made interesting contributions to mathematical theory, but the majority had dealt only with the problems of practical mathematics.38 They took the basic equations for the trigonometric functions and reduced them to the fundamental operations of addition and subtraction. From this reduction, they prepared worksheets for the computers. 
Unlike Nevil Maskelyne’s worksheets, which gave general equations to the computers, these sheets identified every operation of the calculation and left nothing for the workers to interpret. Each step of the calculation was followed by a blank space for the computers to fill with a number. Each table required hundreds of these sheets, all identical except for a single unique starting value at the top of the page. Once the computers had completed their sheets, they returned their results to the planners. The planners assembled the tables and checked the final values. The task of checking the results was a substantial burden in itself. The group did not double-compute, as that would have obviously doubled the workload. Instead the planners checked the final values by taking differences between adjacent values in order to identify miscalculated numbers. This procedure, known as “differencing,” was an important innovation for human computers. As one observer noted, differencing removed the “necessity of repeating, or even of examining, the whole of the work done by the [computing] section.”39 The entire operation was overseen by a handful of accomplished scientists, who “had little or nothing to do with the actual numerical work.” This group included some of France’s most accomplished mathematicians, such as Adrien-Marie Legendre (1752–1833) and Lazare-Nicolas-Marguerite Carnot (1753–1823).40 These scientists researched the appropriate formulas for the calculations and identified potential problems. Each formula was an approximation, as no trigonometric function can be written as an exact combination of additions and subtractions. The mathematicians analyzed the quality of the approximations and verified that all the formulas produced values adequately close to the true values of the trigonometric functions.
David Alan Grier (When Computers Were Human)
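The "differencing" check Grier describes can be sketched in a few lines: for a smooth function tabulated at equal steps, successive differences vary slowly, so a single miscalculated entry shows up as a sharp spike, and the planners could locate it without recomputing the table. A minimal illustration (the table, step size, and injected error are invented for the example):

```python
# Spot a miscalculated entry in a table of sin(x) by differencing,
# in the spirit of de Prony's planners: rather than recomputing the
# whole table, look for a spike in the second differences.
import math

step = 0.01
table = [math.sin(i * step) for i in range(100)]
table[40] += 0.001  # a computer's error: one corrupted entry

def diffs(values):
    """First differences of a sequence."""
    return [b - a for a, b in zip(values, values[1:])]

second = diffs(diffs(table))

# For a smooth function the second differences are tiny and smooth;
# a single error e in entry k produces the pattern +e, -2e, +e in the
# second differences around index k - 1.
suspect = max(range(len(second)), key=lambda i: abs(second[i]))
print("largest spike at second-difference index", suspect)
print("corrupted table entry:", suspect + 1)
```

Here the baseline second differences are on the order of the step size squared (about 1e-4), so the injected error of 1e-3 stands out immediately, which is exactly why differencing removed "the necessity of repeating, or even of examining, the whole of the work."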
How can you say one thing when your data shows something else? One doesn't know what was on the authors' minds, and maybe they interpreted things differently, but the sense is that the literature maintains an attitude somewhat like the approach of lawyers. If the jury buys it, it doesn't matter whether or not it's true. In scientific publishing, the jury are the reviewers and the editors. If they are already convinced of the conclusion, if there is no voir dire, you will surely win the case.
Richard David Feinman (The World Turned Upside Down: The Second Low-Carbohydrate Revolution)
Memory has been discussed here as though it consisted mainly of a body of data. But experts possess skills as well as knowledge. They acquire not only the ability to recognize situations or to provide information about them; they also acquire powerful special skills for dealing with situations as they encounter them. Physicians prescribe and operate as well as diagnose. The boundary between knowledge and skill is subtle. For example, when we write a computer program in any language except machine language, we are really not writing down processes but data structures. These data structures are then interpreted or compiled into processes, that is, into machine-language instructions that the computer can understand and execute. Nevertheless for most purposes it is convenient for us simply to ignore the translation step and to treat the computer programs in higher-level languages as representing processes.
Herbert A. Simon (The Sciences of the Artificial)
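Simon's observation that a program is really a data structure, becoming a process only through interpretation, can be made concrete with a toy stack-machine interpreter (the instruction set here is invented purely for illustration):

```python
# A "program" as a plain data structure: a list of (opcode, argument)
# pairs. It does nothing on its own; only the interpreter below turns
# it into a process the machine actually carries out.
program = [
    ("push", 2),
    ("push", 3),
    ("add", None),   # pop two values, push their sum
    ("push", 4),
    ("mul", None),   # pop two values, push their product
]

def run(program):
    """Interpret the data structure, producing a process."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

print(run(program))  # computes (2 + 3) * 4
```

Until `run` executes it, `program` is just a list that could equally well be printed, stored, or transformed, which is Simon's point about ignoring the translation step.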
The model of semantic interpretation we construct should reflect the particular properties and difficulties of natural language, and not simply be an application of a ready-to-wear logical formalism to a new body of data
James Pustejovsky (The Generative Lexicon)