Accurate Data Quotes

In deep learning, there’s no data like more data. The more examples of a given phenomenon a network is exposed to, the more accurately it can pick out patterns and identify things in the real world.
Kai-Fu Lee (AI Superpowers: China, Silicon Valley, and the New World Order)
When the tragedies of others become for us diversions, sad stories with which to enthrall our friends, interesting bits of data to toss out at cocktail parties, a means of presenting a pose of political concern, or whatever…when this happens we commit the gravest of sins, condemn ourselves to ignominy, and consign the world to a dangerous course. We begin to justify our casual overview of pain and suffering by portraying ourselves as do-gooders incapacitated by the inexorable forces of poverty, famine, and war. “What can I do?” we say, “I’m only one person, and these things are beyond my control. I care about the world’s trouble, but there are no solutions.” Yet no matter how accurate this assessment, most of us are relying on it to be true, using it to mask our indulgence, our deep-seated lack of concern, our pathological self-involvement.
Lucius Shepard (The Best of Lucius Shepard)
Perception requires imagination because the data people encounter in their lives are never complete and always equivocal. For example, most people consider that the greatest evidence of an event one can obtain is to see it with their own eyes, and in a court of law little is held in more esteem than eyewitness testimony. Yet if you asked to display for a court a video of the same quality as the unprocessed data captured on the retina of a human eye, the judge might wonder what you were trying to put over. For one thing, the view will have a blind spot where the optic nerve attaches to the retina. Moreover, the only part of our field of vision with good resolution is a narrow area of about 1 degree of visual angle around the retina’s center, an area the width of our thumb as it looks when held at arm’s length. Outside that region, resolution drops off sharply. To compensate, we constantly move our eyes to bring the sharper region to bear on different portions of the scene we wish to observe. And so the pattern of raw data sent to the brain is a shaky, badly pixilated picture with a hole in it. Fortunately the brain processes the data, combining input from both eyes, filling in gaps on the assumption that the visual properties of neighboring locations are similar and interpolating. The result - at least until age, injury, disease, or an excess of mai tais takes its toll - is a happy human being suffering from the compelling illusion that his or her vision is sharp and clear. We also use our imagination and take shortcuts to fill gaps in patterns of nonvisual data. As with visual input, we draw conclusions and make judgments based on uncertain and incomplete information, and we conclude, when we are done analyzing the patterns, that our “picture” is clear and accurate. But is it?
Leonard Mlodinow (The Drunkard's Walk: How Randomness Rules Our Lives)
I can’t extrapolate a theory of what people would do based on the limited data set of what one person—myself—would do. That’s why I need clones, so I can more accurately gauge what large crowds of people would do in a given situation.
Jarod Kintz (The Days of Yay are Here! Wake Me Up When They're Over.)
Like so many of the decisions to exclude women in the interests of simplicity, from architecture to medical research, this conclusion could only be reached in a culture that conceives of men as the default human and women as a niche aberration. To distort a reality you are supposedly trying to measure makes sense only if you don’t see women as essential. It makes sense only if you see women as an added extra, a complicating factor. It doesn’t make sense if you’re talking about half of the human race. It doesn’t make sense if you care about accurate data.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
It’s not accurate that your memory works like a container, cup, or hard drive in that once it’s full of data no more can fit. It’s more like a muscle in that the more you train it, the stronger it gets and the more you can store.
Jim Kwik (Limitless: Upgrade Your Brain, Learn Anything Faster, and Unlock Your Exceptional Life)
A forecaster should almost never ignore data, especially when she is studying rare events like recessions or presidential elections, about which there isn’t very much data to begin with. Ignoring data is often a tip-off that the forecaster is overconfident, or is overfitting her model—that she is interested in showing off rather than trying to be accurate.
Nate Silver (The Signal and the Noise: Why So Many Predictions Fail-but Some Don't)
Yet like many other human traits that made sense in past ages but cause trouble in the modern age, the knowledge illusion has its downside. The world is becoming ever more complex, and people fail to realise just how ignorant they are of what’s going on. Consequently some who know next to nothing about meteorology or biology nevertheless propose policies regarding climate change and genetically modified crops, while others hold extremely strong views about what should be done in Iraq or Ukraine without being able to locate these countries on a map. People rarely appreciate their ignorance, because they lock themselves inside an echo chamber of like-minded friends and self-confirming newsfeeds, where their beliefs are constantly reinforced and seldom challenged. Providing people with more and better information is unlikely to improve matters. Scientists hope to dispel wrong views by better science education, and pundits hope to sway public opinion on issues such as Obamacare or global warming by presenting the public with accurate facts and expert reports. Such hopes are grounded in a misunderstanding of how humans actually think. Most of our views are shaped by communal groupthink rather than individual rationality, and we hold on to these views out of group loyalty. Bombarding people with facts and exposing their individual ignorance is likely to backfire. Most people don’t like too many facts, and they certainly don’t like to feel stupid. Don’t be so sure that you can convince Tea Party supporters of the truth of global warming by presenting them with sheets of statistical data.
Yuval Noah Harari (21 Lessons for the 21st Century)
Synthesis is the process of converting a lot of data into an accurate picture.
Ray Dalio (Principles: Life and Work)
Lanier is interested in the ways in which people “reduce themselves” in order to make a computer’s description of them appear more accurate. “Information systems,” he writes, “need to have information in order to run, but information underrepresents reality” (my italics). .... When a human being becomes a set of data on a Web site like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it's a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.
Zadie Smith (Feel Free: Essays)
The longer someone ignores an email before finally responding, the more relative social power that person has. Map these response times across an entire organization and you get a remarkably accurate chart of the actual social standing. The boss leaves emails unanswered for hours or days; those lower down respond within minutes. There’s an algorithm for this, a data mining method called “automated social hierarchy detection,” developed at Columbia University.8 When applied to the archive of email traffic at Enron Corporation before it folded, the method correctly identified the roles of top-level managers and their subordinates just by how long it took them to answer a given person’s emails. Intelligence agencies have been applying the same metric to suspected terrorist gangs, piecing together the chain of influence to spot the central figures.
Daniel Goleman (Focus: The Hidden Driver of Excellence)
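A minimal sketch of the response-time idea Goleman describes: take a log of reply latencies and rank people by how slowly they answer. This illustrates only the latency heuristic, not the actual Columbia "automated social hierarchy detection" algorithm; the log format and names below are hypothetical.

```python
from collections import defaultdict
from statistics import median

# Hypothetical reply-latency log: (replier, hours taken to answer an email).
# In a real corpus these would be reconstructed by matching reply headers
# against the send times of the original messages.
reply_log = [
    ("ceo", 49.0), ("ceo", 31.5), ("ceo", 72.0),
    ("manager", 6.0), ("manager", 11.0), ("manager", 3.5),
    ("analyst", 0.3), ("analyst", 0.8), ("analyst", 0.2),
]

latencies = defaultdict(list)
for person, hours in reply_log:
    latencies[person].append(hours)

# Slower median response = higher inferred social standing.
ranking = sorted(latencies, key=lambda p: median(latencies[p]), reverse=True)
for rank, person in enumerate(ranking, start=1):
    print(rank, person, f"median reply: {median(latencies[person]):.1f}h")
```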
Thanks to BLSA data, researchers now know that natural walking speed is one of the most accurate predictors of mortality that we have. The slower you walk, statistically speaking, the sooner you are likely to check out.
Bill Gifford (Spring Chicken: Stay Young Forever (or Die Trying))
Monte Carlo is able to discover practical solutions to otherwise intractable problems because the most efficient search of an unmapped territory takes the form of a random walk. Today’s search engines, long descended from their ENIAC-era ancestors, still bear the imprint of their Monte Carlo origins: random search paths being accounted for, statistically, to accumulate increasingly accurate results. The genius of Monte Carlo—and its search-engine descendants—lies in the ability to extract meaningful solutions, in the face of overwhelming information, by recognizing that meaning resides less in the data at the end points and more in the intervening paths.
George Dyson (Turing's Cathedral: The Origins of the Digital Universe)
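The statistical accumulation Dyson describes is easiest to see in the simplest possible Monte Carlo estimate: individually meaningless random samples converge on an accurate answer as they pile up. A toy sketch (estimating pi, not a reconstruction of any ENIAC-era code):

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

# Accuracy improves, statistically, as the random samples accumulate.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} samples -> pi ~ {estimate_pi(n):.4f}")
```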
In 2016, Rachael Tatman, a research fellow in linguistics at the University of Washington, found that Google’s speech-recognition software was 70% more likely to accurately recognise male speech than female speech – and it’s currently the best on the market.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
All I ask is to see accurate and authentic data, analyzed from all directions—free of bias and tunnel vision—before I layer my emotions upon it. In the end, we must live with the consequences of our decisions. After all input of facts and statistical analysis, our emotions may defy reconciliation with data.
Neil deGrasse Tyson (Starry Messenger: Cosmic Perspectives on Civilization)
During sustained stress, the amygdala processes emotional sensory information more rapidly and less accurately, dominates hippocampal function, and disrupts frontocortical function; we’re more fearful, our thinking is muddled, and we assess risks poorly and act impulsively out of habit, rather than incorporating new data.
Robert M. Sapolsky (Behave: The Biology of Humans at Our Best and Worst)
The good news is that these descriptive statistics give us a manageable and meaningful summary of the underlying phenomenon. That’s what this chapter is about. The bad news is that any simplification invites abuse. Descriptive statistics can be like online dating profiles: technically accurate and yet pretty darn misleading.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
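A made-up illustration of Wheelan's point: both numbers below are technically accurate summaries of the same data, yet one is pretty darn misleading about the typical case.

```python
from statistics import mean, median

# Hypothetical salaries at a small firm: one founder's pay skews the mean.
salaries = [42_000, 45_000, 47_000, 50_000, 52_000, 480_000]

print(f"mean:   ${mean(salaries):,.0f}")    # about $119,333: accurate, but misleading
print(f"median: ${median(salaries):,.0f}")  # $48,500: closer to the typical worker
```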
They somehow managed to persuade themselves that computer models constitute data. That very complicated guesses become facts. They made themselves believe they had the power to accurately model, not merely something as inconceivably complex as, say, a single zygote…but a national economy, a weather system, a planetary ecosphere, a multiplanet society—even a universe.
Robert A. Heinlein (Variable Star (Tor Science Fiction))
This is a promising new source of insight that can supplement survey data but can’t replace it for the foreseeable future. That’s because the tools have a ways to go before they can accurately gauge sentiment about specific customer interactions as precisely and consistently as a survey. You should consider this option when your measurement program matures, but start out with the tried-and-true approach of fielding surveys.
Harley Manning (Outside In: The Power of Putting Customers at the Center of Your Business)
But why should we accept that the way men do things, the way men see themselves, is the correct way? Recent research has emerged showing that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people. This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
Drift. Down through deltas of former girlfriends, degrees of confirmation of girlfriendhood, personal sightings of Rez or Lo together with whichever woman in whatever public place, each account illuminated with the importance the event had held for whoever had posted it. This being for Laney the most peculiar aspect of this data, the perspective in which these two loomed. Human in every detail but then not so. Everything scrupulously, fanatically accurate, probably, but always assembled around the hollow armature of celebrity. He could see celebrity here, not like Kathy’s idea of a primal substance, but as a paradoxical quality inherent in the substance of the world. He saw that the quantity of data accumulated here by the band’s fans was much greater than everything the band themselves had ever generated. And their actual art, the music and the videos, was the merest fragment of that.
William Gibson (Idoru (Bridge, #2))
The master propagandist, like the advertising expert, avoids obvious emotional appeals and strives for a tone that is consistent with the prosaic quality of modern life—a dry, bland matter-of-factness. Nor does the propagandist circulate "intentionally biased" information. He knows that partial truths serve as more effective instruments of deception than lies. Thus he tries to impress the public with statistics of economic growth that neglect to give the base year from which growth is calculated, with accurate but meaningless facts about the standard of living—with raw and uninterpreted data, in other words, from which the audience is invited to draw the inescapable conclusion that things are getting better and the present régime therefore deserves the people's confidence, or on the other hand that things are getting worse so rapidly that the present régime should be given emergency powers to deal with the developing crisis.
Christopher Lasch (The Culture of Narcissism: American Life in An Age of Diminishing Expectations)
Of those who did do the science fair, when asked whether they enjoyed it, 21 percent said they did—I’ll call these people the Nerds—and 21 percent said they didn’t. I know Twitter polls aren’t the most accurate sources of data, but I think it’s safe to say this issue is pretty evenly split. I’m in the 21 percent of people who didn’t enjoy the science fair. Don’t get me wrong, I love science! Well, that may not be completely true. I really hated biology. It was just memorizing vocabulary words.
James Rallison (The Odd 1s Out: How to Be Cool and Other Things I Definitely Learned from Growing Up)
In 1654, Archbishop James Usher (1580–1656) published his Annales Veteris et Novi Testamenti, in which he proposed 4004 BC as the date of the Creation of Heaven and Earth. One of his pupils took the calculations even further, and was able to announce triumphantly that the Earth was created on Sunday, 21 October 4004 BC, at exactly nine o'clock in the morning, because God liked to work in the early hours of the day, while He was still feeling fresh. This date, too, is wrong, by a good quarter of an hour.
Terry Pratchett (Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch)
One of the best-kept secrets in all of health care — understood by few doctors — is that the peer reviewers, medical journal editors, and guideline writers, who are assumed to be performing due diligence to ensure the accuracy and completeness of the data reported from company-sponsored studies, do not have access to the real data from these trials. The published reports that doctors accept as fully vetted scientific evidence can be more accurately described as unverified data summaries prepared largely by or for the sponsoring drug companies.
John Abramson (Sickening: How Big Pharma Broke American Health Care and How We Can Repair It)
Listen, Google,’ I will say, ‘both John and Paul are courting me. I like both of them, but in a different way, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?’ And Google will answer: ‘Well, I know you from the day you were born. I have read all your emails, recorded all your phone calls, and know your favourite films, your DNA and the entire history of your heart. I have exact data about each date you went on, and if you want, I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. If necessary, I can even provide you with accurate mathematical ranking of every sexual encounter you had with either of them. And naturally enough, I know them as well as I know you. Based on all this information, on my superb algorithms, and on decades’ worth of statistics about millions of relationships – I advise you to go with John, with an 87 per cent probability of being more satisfied with him in the long run. Indeed, I know you so well that I also know you don’t like this answer. Paul is much more handsome than John, and because you give external appearances too much weight, you secretly wanted me to say “Paul”. Looks matter, of course; but not as much as you think. Your biochemical algorithms – which evolved tens of thousands of years ago in the African savannah – give looks a weight of 35 per cent in their overall rating of potential mates. My algorithms – which are based on the most up-to-date studies and statistics – say that looks have only a 14 per cent impact on the long-term success of romantic relationships. So, even though I took Paul’s looks into account, I still tell you that you would be better off with John.
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
It is well-known that a big percentage of all marriages in the United States end in divorce or separation (about 39 percent, according to the latest data).[30] But staying together is not what really counts. Analysis of the Harvard Study data shows that marriage per se accounts for only 2 percent of subjective well-being later in life.[31] The important thing for health and well-being is relationship satisfaction. Popular culture would have you believe the secret to this satisfaction is romantic passion, but that is wrong. On the contrary, a lot of unhappiness can attend the early stages of romance. For example, researchers find that it is often accompanied by rumination, jealousy, and “surveillance behaviors”—not what we typically associate with happiness. Furthermore, “destiny beliefs” about soul mates or love being meant to be can predict low forgiveness when paired with attachment anxiety.[32] Romance often hijacks our brains in a way that can cause the highs of elation or the depths of despair.[33] You might accurately say that falling in love is the start-up cost for happiness—an exhilarating but stressful stage we have to endure to get to the relationships that actually fulfill us. The secret to happiness isn’t falling in love; it’s staying in love, which depends on what psychologists call “companionate love”—love based less on passionate highs and lows and more on stable affection, mutual understanding, and commitment.[34] You might think “companionate love” sounds a little, well, disappointing. I certainly did the first time I heard it, on the heels of great efforts to win my future wife’s love. But over the past thirty years, it turns out that we don’t just love each other; we like each other, too. Once and always my romantic love, she is also my best friend.
Arthur C. Brooks (From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life)
Clearly, just imprinting a document in clay is not enough to guarantee efficient, accurate and convenient data processing. That requires methods of organisation like catalogues, methods of reproduction like photocopy machines, methods of rapid and accurate retrieval like computer algorithms, and pedantic (but hopefully cheerful) librarians who know how to use these tools. Inventing such methods proved to be far more difficult than inventing writing. Many writing systems developed independently in cultures distant in time and place from each other. Every decade archaeologists discover another few forgotten scripts. Some of them might prove to be even older than the Sumerian scratches in clay. But most of them remain curiosities because those who invented them failed to invent efficient ways of cataloguing and retrieving data. What set apart Sumer, as well as pharaonic Egypt, ancient China and the Inca Empire, is that these cultures developed good techniques of archiving, cataloguing and retrieving written records. They obviously had no computers or photocopying machines, but they did have catalogues, and far more importantly, they did create special schools in which professional scribes, clerks, librarians and accountants were rigorously trained in the secrets of data-processing.
Yuval Noah Harari (Sapiens: A Brief History of Humankind)
Even so, putting all exaggerations aside, sound neuroscience really is providing us with an ever richer picture of the brain and its operations, and in some far distant epoch may actually achieve something like a comprehensive survey of what is perhaps the single most complex physical object in the universe. That is all entirely irrelevant to my argument, however. My claim here is that, whatever we may learn about the brain in the future, it will remain in principle impossible to produce any entirely mechanistic account of the conscious mind, for a great many reasons (many of which I shall soon address), and that therefore consciousness is a reality that defeats mechanistic or materialist thinking. For the intuitions of folk psychology are in fact perfectly accurate; they are not merely some theory about the mind that is either corrigible or dispensable. They constitute nothing less than a full and coherent phenomenological description of the life of the mind, and they are absolutely “primordial data,” which cannot be abandoned in favor of some alternative description without producing logical nonsense. Simply said, consciousness as we commonly conceive of it is quite real (as all of us, apart from a few cognitive scientists and philosophers, already know—and they know it too, really). And this presents a problem for materialism, because consciousness as we commonly conceive of it is also almost certainly irreconcilable with a materialist view of reality.
David Bentley Hart (The Experience of God: Being, Consciousness, Bliss)
The human brain runs first-class simulation software. Our eyes don’t present to our brains a faithful photograph of what is out there, or an accurate movie of what is going on through time. Our brains construct a continuously updated model: updated by coded pulses chattering along the optic nerve, but constructed nevertheless. Optical illusions are vivid reminders of this.47 A major class of illusions, of which the Necker Cube is an example, arise because the sense data that the brain receives are compatible with two alternative models of reality. The brain, having no basis for choosing between them, alternates, and we experience a series of flips from one internal model to the other. The picture we are looking at appears, almost literally, to flip over and become something else.
Richard Dawkins (The God Delusion)
Who among us can predict the future? Who would dare to? The answer to the first question is no one, really, and the answer to the second is everyone, especially every government and business on the planet. This is what that data of ours is used for. Algorithms analyze it for patterns of established behavior in order to extrapolate behaviors to come, a type of digital prophecy that’s only slightly more accurate than analog methods like palm reading. Once you go digging into the actual technical mechanisms by which predictability is calculated, you come to understand that its science is, in fact, anti-scientific, and fatally misnamed: predictability is actually manipulation. A website that tells you that because you liked this book you might also like books by James Clapper or Michael Hayden isn’t offering an educated guess as much as a mechanism of subtle coercion.
Edward Snowden (Permanent Record)
TRUST IN ONE’S ORGANISM A second characteristic of the persons who emerge from therapy is difficult to describe. It seems that the person increasingly discovers that his own organism is trustworthy, that it is a suitable instrument for discovering the most satisfying behavior in each immediate situation. If this seems strange, let me try to state it more fully. Perhaps it will help to understand my description if you think of the individual as faced with some existential choice: “Shall I go home to my family during vacation, or strike out on my own?” “Shall I drink this third cocktail which is being offered?” “Is this the person whom I would like to have as my partner in love and in life?” Thinking of such situations, what seems to be true of the person who emerges from the therapeutic process? To the extent that this person is open to all of his experience, he has access to all of the available data in the situation, on which to base his behavior. He has knowledge of his own feelings and impulses, which are often complex and contradictory. He is freely able to sense the social demands, from the relatively rigid social “laws” to the desires of friends and family. He has access to his memories of similar situations, and the consequences of different behaviors in those situations. He has a relatively accurate perception of this external situation in all of its complexity. He is better able to permit his total organism, his conscious thought participating, to consider, weigh and balance each stimulus, need, and demand, and its relative weight and intensity. Out of this complex weighing and balancing he is able to discover that course of action which seems to come closest to satisfying all his needs in the situation, long-range as well as immediate needs.
Carl R. Rogers (On Becoming a Person)
Search engine query data is not the product of a designed statistical experiment and finding a way to meaningfully analyse such data and extract useful knowledge is a new and challenging field that would benefit from collaboration. For the 2012–13 flu season, Google made significant changes to its algorithms and started to use a relatively new mathematical technique called Elasticnet, which provides a rigorous means of selecting and reducing the number of predictors required. In 2011, Google launched a similar program for tracking Dengue fever, but they are no longer publishing predictions and, in 2015, Google Flu Trends was withdrawn. They are, however, now sharing their data with academic researchers... Google Flu Trends, one of the earlier attempts at using big data for epidemic prediction, provided useful insights to researchers who came after them... The Delphi Research Group at Carnegie Mellon University won the CDC’s challenge to ‘Predict the Flu’ in both 2014–15 and 2015–16 as the most accurate forecasters. The group successfully used data from Google, Twitter, and Wikipedia for monitoring flu outbreaks.
Dawn E. Holmes (Big Data: A Very Short Introduction (Very Short Introductions))
We are about to study the idea of a computational process. Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells. A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform. A computational process, in a correctly working computer, executes programs precisely and accurately. Thus, like the sorcerer's apprentice, novice programmers must learn to understand and to anticipate the consequences of their conjuring. Even small errors (usually called bugs or glitches) in programs can have complex and unanticipated consequences.
Harold Abelson (Structure and Interpretation of Computer Programs)
How do you commit the perfect crime in science? We’re handicapped from the start because it’s a question we never ask. For more than thirty years, Frank taught me and many others to record our data accurately, compare them with collaborators around the world, discard the outliers, and come to a consensus. We understand there are variations, but if the bulk of the evidence goes in a certain direction, we are confident we have a better understanding of human biological processes. If only that were what happened in the real world. In the real world there are corporations, be they pharmaceutical, agricultural, petroleum, or chemical companies, that have billions of dollars at stake in the work of scientists. If one has billions of dollars, he can use the dark arts of persuasion to hire public relations firms to tout your products, sow the seeds of doubt about those who question your products, buy advertising on news networks so they don’t publicize negative stories unless they have no other choice, and donate to politicians of all ideologies. Then, once those politicians have been elected, they can write laws for the benefit of their generous donors. As it was put so eloquently in the seventeenth century by a prominent member of Queen Elizabeth’s court, “If it prospers, none dare call it treason.
Kent Heckenlively (Plague of Corruption: Restoring Faith in the Promise of Science)
One of the central elements of resilience, Bonanno has found, is perception: Do you conceptualize an event as traumatic, or as an opportunity to learn and grow? “Events are not traumatic until we experience them as traumatic,” Bonanno told me, in December. “To call something a ‘traumatic event’ belies that fact.” He has coined a different term: PTE, or potentially traumatic event, which he argues is more accurate. The theory is straightforward. Every frightening event, no matter how negative it might seem from the sidelines, has the potential to be traumatic or not to the person experiencing it. Take something as terrible as the surprising death of a close friend: you might be sad, but if you can find a way to construe that event as filled with meaning—perhaps it leads to greater awareness of a certain disease, say, or to closer ties with the community—then it may not be seen as a trauma. The experience isn’t inherent in the event; it resides in the event’s psychological construal. It’s for this reason, Bonanno told me, that “stressful” or “traumatic” events in and of themselves don’t have much predictive power when it comes to life outcomes. “The prospective epidemiological data shows that exposure to potentially traumatic events does not predict later functioning,” he said. “It’s only predictive if there’s a negative response.” In other words, living through adversity, be it endemic to your environment or an acute negative event, doesn’t guarantee that you’ll suffer going forward. What matters is whether that adversity becomes traumatizing.
Maria Konnikova
To claim that mathematics is purely a human invention and is successful in explaining nature only because of evolution and natural selection ignores some important facts in the nature of mathematics and in the history of theoretical models of the universe. First, while the mathematical rules (e.g., the axioms of geometry or of set theory) are indeed creations of the human mind, once those rules are specified, we lose our freedom. The definition of the Golden Ratio emerged originally from the axioms of Euclidean geometry; the definition of the Fibonacci sequence from the axioms of the theory of numbers. Yet the fact that the ratio of successive Fibonacci numbers converges to the Golden Ratio was imposed on us - humans had no choice in the matter. Therefore, mathematical objects, albeit imaginary, do have real properties. Second, the explanation of the unreasonable power of mathematics cannot be based entirely on evolution in the restricted sense. For example, when Newton proposed his theory of gravitation, the data that he was trying to explain were at best accurate to three significant figures. Yet his mathematical model for the force between any two masses in the universe achieved the incredible precision of better than one part in a million. Hence, that particular model was not forced on Newton by existing measurements of the motions of planets, nor did Newton force a natural phenomenon into a preexisting mathematical pattern. Furthermore, natural selection in the common interpretation of that concept does not quite apply either, because it was not the case that five competing theories were proposed, of which one eventually won. Rather, Newton's was the only game in town!
Mario Livio (The Golden Ratio: The Story of Phi, the World's Most Astonishing Number)
..."facts" properly speaking are always and never more than interpretations of the data... the Gospel accounts are themselves such data or, if you like, hard facts. But the events to which the Gospels refer are not themselves "hard facts"; they are facts only in the sense that we interpret the text, together with such other data as we have, to reach a conclusion regarding the events as best we are able. They are facts in the same way that the verdict of a jury establishes the facts of the case, the interpretation of the evidence that results in the verdict delivered. Here it is as well to remember that historical methodology can only produce probabilities, the probability that some event took place in such circumstances being greater or smaller, depending on the quality of the data and the perspective of the historical enquirer. The jury which decides what is beyond reasonable doubt is determining that the probability is sufficiently high for a clear-cut verdict to be delivered. Those who like "certainty" in matters of faith will always find this uncomfortable. But faith is not knowledge of "hard facts"...; it is rather confidence, assurance, trust in the reliability of the data and in the integrity of the interpretations derived from that data... It does seem important to me that those who speak for evangelical Christians grasp this nettle firmly, even if it stings! – it is important for the intellectual integrity of evangelicals. Of course any Christian (and particularly evangelical Christians) will want to get as close as possible to the Jesus who ministered in Galilee in the late 20s of the first century. If, as they believe, God spoke in and through that man, more definitively and finally than at any other time and by any other medium, then of course Christians will want to hear as clearly as possible what he said, and to see as clearly as possible what he did, to come as close as possible to being an eyewitness and earwitness for themselves. If God revealed himself most definitively in the historical particularity of a Galilean Jew in the earliest decades of the Common Era, then naturally those who believe this will want to inquire as closely into the historical particularity and actuality of that life and of Jesus’ mission. The possibility that later faith has in some degree covered over that historical actuality cannot be dismissed as out of the question. So a genuinely critical historical inquiry is necessary if we are to get as close to the historical actuality as possible. Critical here, and this is the point, should not be taken to mean negatively critical, hermeneutical suspicion, dismissal of any material that has overtones of Easter faith. It means, more straightforwardly, a careful scrutiny of all the relevant data to gain as accurate or as historically responsible a picture as possible. In a day when evangelical, and even Christian, is often identified with a strongly right-wing, conservative and even fundamentalist attitude to the Bible, it is important that responsible evangelical scholars defend and advocate such critical historical inquiry and that their work display its positive outcome and benefits. 
These include believers growing in maturity • to recognize gray areas and questions to which no clear-cut answer can be given (‘we see in a mirror dimly/a poor reflection’), • to discern what really matters and distinguish them from issues that matter little, • and be able to engage in genuine dialogue with those who share or respect a faith inquiring after truth and seeking deeper understanding. In that way we may hope that evangelical (not to mention Christian) can again become a label that men and women of integrity and good will can respect and hope to learn from more than most seem to do today.
James D.G. Dunn (The Historical Jesus: Five Views)
All addictions — whether to drugs or to nondrug behaviours — share the same brain circuits and brain chemicals. On the biochemical level the purpose of all addictions is to create an altered physiological state in the brain. This can be achieved in many ways, drug taking being the most direct. So an addiction is never purely “psychological”; all addictions have a biological dimension. And here a word about dimensions. As we delve into the scientific research, we need to avoid the trap of believing that addiction can be reduced to the actions of brain chemicals or nerve circuits or any other kind of neurobiological, psychological or sociological data. A multilevel exploration is necessary because it’s impossible to understand addiction fully from any one perspective, no matter how accurate. Addiction is a complex condition, a complex interaction between human beings and their environment. We need to view it simultaneously from many different angles — or, at least, while examining it from one angle, we need to keep the others in mind. Addiction has biological, chemical, neurological, psychological, medical, emotional, social, political, economic and spiritual underpinnings — and perhaps others I haven’t thought about. To get anywhere near a complete picture we must keep shaking the kaleidoscope to see what other patterns emerge. Because the addiction process is too multifaceted to be understood within any limited framework, my definition of addiction made no mention of “disease.” Viewing addiction as an illness, either acquired or inherited, narrows it down to a medical issue. It does have some of the features of illness, and these are most pronounced in hardcore drug addicts like the ones I work with in the Downtown Eastside. But not for a moment do I wish to promote the belief that the disease model by itself explains addiction or even that it’s the key to understanding what addiction is all about. Addiction is “all about” many things. Note, too, that neither the textbook definitions of drug addiction nor the broader view we’re taking here includes the concepts of physical dependence or tolerance as criteria for addiction. Tolerance is an instance of “give an inch, take a mile.” That is, the addict needs to use more and more of the same substance or engage in more and more of the same behaviour, to get the same rewarding effects. Although tolerance is a common effect of many addictions, a person does not need to have developed a tolerance to be addicted.
Gabor Maté (In the Realm of Hungry Ghosts: Close Encounters with Addiction)
Similarly, the computers used to run the software on the ground for the mission were borrowed from a previous mission. These machines were so out of date that Bowman had to shop on eBay to find replacement parts to get the machines working. As systems have gone obsolete, JPL no longer uses the software, but Bowman told me that the people on her team continue to use software built by JPL in the 1990s, because they are familiar with it. She said, “Instead of upgrading to the next thing we decided that it was working just fine for us and we would stay on the platform.” They have developed so much over such a long period of time with the old software that they don’t want to switch to a newer system. They must adapt to using these outdated systems for the latest scientific work. Working within these constraints may seem limiting. However, building tools with specific constraints—from outdated technologies and low bitrate radio antennas—can enlighten us. For example, as scientists started to explore what they could learn from the wait times while communicating with deep space probes, they discovered that the time lag was extraordinarily useful information. Wait times, they realized, constitute an essential component for locating a probe in space, calculating its trajectory, and accurately locating a target like Pluto in space. There is no GPS for spacecraft (they aren’t on the globe, after all), so scientists had to find a way to locate the spacecraft in the vast expanse. Before 1960, the location of planets and objects in deep space was established through astronomical observation, placing an object like Pluto against a background of stars to determine its position.15 In 1961, an experiment at the Goldstone Deep Space Communications Complex in California used radar to more accurately define an “astronomical unit” and help measure distances in space much more accurately.16 NASA used this new data as part of creating the trajectories for missions in the following years. Using the data from radio signals across a wide range of missions over the decades, the Deep Space Network maintained an ongoing database that helped further refine the definition of an astronomical unit—a kind of longitudinal study of space distances that now allows missions like New Horizons to create accurate flight trajectories. The Deep Space Network continued to find inventive ways of using the time lag of radio waves to locate objects in space, ultimately finding that certain ways of waiting for a downlink signal from the spacecraft were less accurate than others. It turned to using the antennas from multiple locations, such as Goldstone in California and the antennas in Canberra, Australia, or Madrid, Spain, to time how long the signal took to hit these different locations on Earth. The time it takes to receive these signals from the spacecraft works as a way to locate the probes as they are journeying to their destination. Latency—or the different time lag of receiving radio signals on different locations of Earth—is the key way that deep space objects are located as they journey through space. This discovery was made possible during the wait times for communicating with these craft alongside the decades of data gathered from each space mission. Without the constraint of waiting, the notion of using time as a locating feature wouldn’t have been possible.
Jason Farman (Delayed Response: The Art of Waiting from the Ancient to the Instant World)
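The core of the ranging technique Farman describes is simple: radio waves travel at the speed of light, so the waiting time itself measures distance (and comparing arrival times at stations such as Goldstone and Canberra further constrains the craft's position). A rough sketch; the 4.5-hour figure is the approximate one-way light time to New Horizons at Pluto:

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def range_from_round_trip(rtt_seconds: float) -> float:
    """Distance to a spacecraft from the round-trip light time of a radio signal."""
    return C_KM_PER_S * rtt_seconds / 2.0

# One-way light time of roughly 4.5 hours, so a 9-hour round trip.
rtt = 2 * 4.5 * 3600
print(f"range: {range_from_round_trip(rtt):,.0f} km")  # about 4.86 billion km
```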
Peter Steiner’s 1993 cartoon in the New Yorker captured the promise of freedom and anonymity that the Internet once offered: “On the Internet, nobody knows you’re a dog.” But our era of big data—especially as more data are captured and at finer-grain resolutions—flips this premise around. Our behaviors online, in virtual worlds, and when using smart mobile devices allow others to make accurate inferences about who we are and what we like. On the Internet, everybody knows you’re a dog. And as marketers and advertisers compete over this growing flood of data, the facts they learn about each of us are likely to become more and more unsettling.
Nick Yee (The Proteus Paradox: How Online Games and Virtual Worlds Change Us - and How They Don't)
Instead, we need to keep in mind that the representation is just a shortcut for presenting a lot of information, or even worse, a distortion of that information.

Reliability bias: People assume that something is accurate when it may not be. For example, market data that you use in your historical testing or that come to you live are often filled with errors. Unless you assume that errors can and do exist, you may make lots of mistakes in your trading and investing decisions.

Lotto bias: People want to control the market, and so they tend to focus on entry, where they can “force” the market to do a lot of things before they enter. Unfortunately, once they enter, the market is going to do what the market is going to do.
Van K. Tharp (Trade Your Way to Financial Freedom)
Today we aren’t quite to the place that H. G. Wells predicted years ago, but society is getting closer out of necessity. Global businesses and organizations are being forced to use statistical analysis and data mining applications in a format that combines art and science–intuition and expertise in collecting and understanding data in order to make accurate models that realistically predict the future that lead to informed strategic decisions thus allowing correct actions ensuring success, before it is too late . . . today, numeracy is as essential as literacy. As John Elder likes to say: ‘Go data mining!’ It really does save enormous time and money.
Anonymous
The adjective “efficient” in “efficient markets” refers to how investors use information. In an efficient market, every titbit of new information is processed correctly and immediately by investors. As a result, market prices react instantly and appropriately to any relevant news about the asset in question, whether it is a share of stock, a corporate bond, a derivative, or some other vehicle. As the saying goes, there are no $100 bills left on the proverbial sidewalk for latecomers to pick up, because asset prices move up or down immediately. To profit from news, you must be jackrabbit fast; otherwise, you’ll be too late. This is one rationale for the oft-cited aphorism “You can’t beat the market.” An even stronger form of efficiency holds that market prices do not react to irrelevant news. If this were so, prices would ignore will-o’-the-wisps, unfounded rumors, the madness of crowds, and other extraneous factors—focusing at every moment on the fundamentals. In that case, prices would never deviate from fundamental values; that is, market prices would always be “right.” Under that exaggerated form of market efficiency, which critics sometimes deride as “free-market fundamentalism,” there would never be asset-price bubbles. Almost no one takes the strong form of the efficient markets hypothesis (EMH) as the literal truth, just as no physicist accepts Newtonian mechanics as 100 percent accurate. But, to extend the analogy, Newtonian physics often provides excellent approximations of reality. Similarly, economists argue over how good an approximation the EMH is in particular applications. For example, the EMH fits data on widely traded stocks rather well. But thinly traded or poorly understood securities are another matter entirely. Case in point: Theoretical valuation models based on EMH-type reasoning were used by Wall Street financial engineers to devise and price all sorts of exotic derivatives. History records that some of these calculations proved wide of the mark.
Alan S. Blinder (After the Music Stopped: The Financial Crisis, the Response, and the Work Ahead)
Four reasons have been cited for maintaining accurate inventory records:

1. To provide data for cost control
2. To assist in identifying purchasing needs
3. To provide accurate information on type and quantity of food and supplies on hand
4. To monitor usage of products and prevent theft and pilferage
Ruby Parker Puckett (Foodservice Manual for Health Care Institutions (J-B AHA Press Book 150))
Location is also predictive. Researchers at Microsoft found that location data can be used to predict fairly accurately where people will be located in the future.
Julia Angwin (Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance)
A perfect pattern or model is one that (a) accurately describes a situation, (b) is broadly applicable, and (c) can be described in a simple manner.
Anil Maheshwari (Data Analytics Made Accessible)
The factors that usually decide presidential elections—the economy, likability of the candidates, and so on—added up to a wash, and the outcome came down to a few key swing states. Mitt Romney’s campaign followed a conventional polling approach, grouping voters into broad categories and targeting each one or not. Neil Newhouse, Romney’s pollster, said that “if we can win independents in Ohio, we can win this race.” Romney won them by 7 percent but still lost the state and the election. In contrast, President Obama hired Rayid Ghani, a machine-learning expert, as chief scientist of his campaign, and Ghani proceeded to put together the greatest analytics operation in the history of politics. They consolidated all voter information into a single database; combined it with what they could get from social networking, marketing, and other sources; and set about predicting four things for each individual voter: how likely he or she was to support Obama, show up at the polls, respond to the campaign’s reminders to do so, and change his or her mind about the election based on a conversation about a specific issue. Based on these voter models, every night the campaign ran 66,000 simulations of the election and used the results to direct its army of volunteers: whom to call, which doors to knock on, what to say. In politics, as in business and war, there is nothing worse than seeing your opponent make moves that you don’t understand and don’t know what to do about until it’s too late. That’s what happened to the Romney campaign. They could see the other side buying ads in particular cable stations in particular towns but couldn’t tell why; their crystal ball was too fuzzy. In the end, Obama won every battleground state save North Carolina and by larger margins than even the most accurate pollsters had predicted. The most accurate pollsters, in turn, were the ones (like Nate Silver) who used the most sophisticated prediction techniques; they were less accurate than the Obama campaign because they had fewer resources. But they were a lot more accurate than the traditional pundits, whose predictions were based on their expertise. You might think the 2012 election was a fluke: most elections are not close enough for machine learning to be the deciding factor. But machine learning will cause more elections to be close in the future. In politics, as in everything, learning is an arms race. In the days of Karl Rove, a former direct marketer and data miner, the Republicans were ahead. By 2012, they’d fallen behind, but now they’re catching up again.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
Rather, you take the data you have and randomly divide it into a training set, which you give to the learner, and a test set, which you hide from it and use to verify its accuracy. Accuracy on held-out data is the gold standard in machine learning. You can write a paper about a great new learning algorithm you’ve invented, but if your algorithm is not significantly more accurate than previous ones on held-out data, the paper is not publishable. Accuracy on previously unseen data is a pretty stringent test; so much so, in fact, that a lot of science fails it. That does not make it useless, because science is not just about prediction; it’s also about explanation and understanding. But ultimately, if your models don’t make accurate predictions on new data, you can’t be sure you’ve truly understood or explained the underlying phenomena. And for machine learning, testing on unseen data is indispensable because it’s the only way to tell whether the learner has overfit or not.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
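A minimal sketch of the held-out evaluation Domingos describes: shuffle the data, divide it into a training set and a hidden test set, fit on one, and score on the other. The toy dataset and the nearest-neighbour rule are stand-ins; the point is only that accuracy is measured on examples the learner never saw.

```python
import random

def train_test_split(data, test_fraction=0.3, seed=0):
    """Randomly divide data into a training set and a held-out test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def predict_1nn(train, x):
    """Classify x by the label of its nearest training example (1-D feature)."""
    nearest = min(train, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Toy dataset: class 0 clusters near 0, class 1 near 5, with noise.
rng = random.Random(42)
data = ([(rng.gauss(0, 1.5), 0) for _ in range(100)]
        + [(rng.gauss(5, 1.5), 1) for _ in range(100)])

train, test = train_test_split(data)
correct = sum(predict_1nn(train, x) == y for x, y in test)
print(f"held-out accuracy: {correct / len(test):.2%}")
```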
• Confidentiality - seeks to prevent the unauthorized disclosure of information: it keeps data secret
• Integrity - seeks to prevent unauthorized modification of information. In other words, integrity seeks to prevent unauthorized write access to data. Integrity also seeks to ensure data that is written in an authorized manner is complete and accurate.
• Availability - ensures that information is available when needed
• Subject - an active entity on an information system
• Object - a passive data file
• Annualized Loss Expectancy - the cost of loss due to a risk over a year
• Threat - a potentially negative occurrence
• Vulnerability - a weakness in a system
• Risk - a matched threat and vulnerability
• Safeguard - a measure taken to reduce risk
• Total Cost of Ownership - the cost of a safeguard
• Return on Investment - money saved by deploying a safeguard
Eric Conrad (CISSP Study Guide)
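The risk terms in Conrad's list combine into standard formulas that the excerpt itself doesn't spell out: single loss expectancy is asset value times exposure factor, annualized loss expectancy (ALE) is that times the annualized rate of occurrence, and a safeguard pays off when the ALE it removes exceeds its total cost of ownership. A sketch with invented numbers:

```python
def ale(asset_value: float, exposure_factor: float, aro: float) -> float:
    """Annualized Loss Expectancy = (asset value * exposure factor) * ARO."""
    sle = asset_value * exposure_factor  # Single Loss Expectancy
    return sle * aro

# Hypothetical figures: a $1M data set, a breach destroying 40% of its value,
# expected once every two years (ARO = 0.5).
ale_before = ale(1_000_000, 0.40, 0.5)   # $200,000 per year
ale_after = ale(1_000_000, 0.40, 0.1)    # safeguard cuts ARO to 0.1
tco = 50_000                             # yearly cost of the safeguard

roi = (ale_before - ale_after) - tco     # money saved by deploying the safeguard
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}, ROI: ${roi:,.0f}")
```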
However, in other circumstances, such as with PSR 1913 + 16, the situation is very different, and gravitational radiation from the system indeed has a significant role to play. Here, Einstein's theory provides a firm prediction of the detailed nature of the gravitational radiation that the system ought to be emitting, and of the energy that should be carried away. This loss of energy should result in a slow spiralling inwards of the two neutron stars, and a corresponding speeding up of their orbital rotation period. Joseph Taylor and Russell Hulse first observed this binary pulsar at the enormous Arecibo radio telescope in Puerto Rico in 1974. Since that time, the rotation period has been closely monitored by Taylor and his colleagues, and the speed-up is in precise agreement with the expectations of general relativity (cf. Fig. 4.11). For this work, Hulse and Taylor were awarded the 1993 Nobel Prize for Physics. In fact, as the years have rolled by, the accumulation of data from this system has provided a stronger and stronger confirmation of Einstein's theory. Indeed, if we now take the system as a whole and compare it with the behaviour that is computed from Einstein's theory as a whole - from the Newtonian aspects of the orbits, through the corrections to these orbits from standard general relativity effects, right up to the effects on the orbits due to loss of energy in gravitational radiation - we find that the theory is confirmed overall to an error of no more than about 10^-14. This makes Einstein's general relativity, in this particular sense, the most accurately tested theory known to science!
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
Despite the messiness of the input, Google’s service works the best. Its translations are more accurate than those of other systems (though still highly imperfect).
Viktor Mayer-Schönberger (Big Data: A Revolution That Will Transform How We Live, Work and Think)
The art of accounting and finance is the art of using limited data to come as close as possible to an accurate description of how well a company is performing.
Karen Berman (Financial Intelligence for Entrepreneurs: What You Really Need to Know About the Numbers)
Remember, accounting is the art of using limited data to come as close as possible to an accurate description of how well a company is performing.
Karen Berman (Financial Intelligence: A Manager's Guide to Knowing What the Numbers Really Mean)
You need to get some basic data and you need to get it accurately. One option is to inform the client that recording information accurately is the purpose of your note-taking and ask her if she feels comfortable with your doing that. Most clients will say “yes”; however, if one doesn’t you will simply be confronted with the need to cultivate an essential habit: that is, making some notes after every interview. The word some is emphasized because you will not always have time to write down everything. If you make it a practice to note five or six key phrases or observations, you will probably be able to reconstruct much of what happened.
Susan Lukas (Where to Start and What to Ask: An Assessment Handbook)
And here a word about dimensions. As we delve into the scientific research, we need to avoid the trap of believing that addiction can be reduced to the actions of brain chemicals or nerve circuits or any other kind of neurobiological, psychological, or sociological data. A multilevel exploration is necessary because it’s impossible to understand addiction fully from any one perspective, no matter how accurate. Addiction is a complex condition, a complex interaction between human beings and their environment. We need to view it simultaneously from many different angles—or, at least, while examining it from one angle, we need to keep the others in mind. Addiction has biological, chemical, neurological, psychological, medical, emotional, social, political, economic, and spiritual underpinnings—and perhaps others I haven’t thought about. To get anywhere near a complete picture we must keep shaking the kaleidoscope to see what other patterns emerge.
Gabor Maté (In the Realm of Hungry Ghosts: Close Encounters with Addiction)
Specifically, the Clinton team could see Trump closing ground across the Rust Belt. It was the area that he had targeted, despite conventional wisdom that held the Democratic “blue wall” would come through for Hillary in the end. Suddenly, Trump’s quixotic ride into the heartland looked a lot more strategically sound, and Hillary’s attention to expanding the electoral map seemed misguided. Her aides knew the trend lines were bad, and they had no way of making sure their survey data was accurate.
Jonathan Allen (Shattered: Inside Hillary Clinton's Doomed Campaign)
Since the summer of 2017, the Trump administration has taken at least 5,556 kids from their parents. But still today, nobody knows for sure exactly how many families have been separated. In February 2020, the United States Government Accountability Office noted, “it is unclear the extent to which Border Patrol has accurate records of separated [families] in its data system.” Scarce few of their stories have been told.
Jacob Soboroff (Separated: Inside an American Tragedy)
For an enterprise operating in a volatile, uncertain, complex, and ambiguous (VUCA) business environment, digital readiness rests on an accurate, reliable, and timely flow of information, together with customer trust. Destructive and demoralising, the financial impact of experiencing a data breach continues to increase year over year for businesses. A data breach, ransomware, or malware attack (to name a few cyberthreats) leads to even more complex and challenging reputational damage, potentially keeping the costs of a cyber-attack running for years. As threat actors innovate, cybersecurity experts assert their own unique interpretations of trust. The Zero Trust approach is therefore a powerful and promising concept.
Ludmila Morozova-Buss
So for a survey of 1,000 people (the industry standard), the margin of error is generally quoted as ± 3%: if 400 of them said they preferred coffee, and 600 of them said they preferred tea, then you could roughly estimate the underlying percentage of people in the population who prefer coffee as 40 ± 3%, or between 37% and 43%. Of course, this is only accurate if the polling company really did take a random sample, and everyone replied, and they all had an opinion either way and they all told the truth. So although we can calculate margins of error, we must remember that they only hold if our assumptions are roughly correct. But can we rely on these assumptions?
David Spiegelhalter (The Art of Statistics: Learning from Data)
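The ± 3% figure follows from the normal-approximation formula for a sample proportion. A quick check for the n = 1,000 survey (assuming, as the quote stresses, a true random sample):

```python
# Rough margin-of-error check for the n = 1000 survey in the quote.
import math

n = 1000
p = 0.5  # worst case: p * (1 - p) is largest at 0.5

# 95% margin of error ~ 1.96 * standard error of a proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # about +/- 3.1%, i.e. the quoted 3%
```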
A common feature of epidemiological data is that they are almost certain to be biased, of doubtful quality, or incomplete (and sometimes all three),” explained the epidemiologist John Bailar in The New England Journal of Medicine in 1980. “Problems do not disappear even if one has flawless data, since the statistical associations in almost any nontrivial set of observations are subject to many interpretations. This ambiguity exists because of the difficulty of sorting out causes, effects, concomitant variables, and random fluctuations when the causes are multiple or diffuse, the exposure levels low, irregular, or hard to measure, and the relevant biologic mechanisms poorly understood. Even when the data are generally accepted as accurate, there is much room for individual judgment, and the considered conclusions of the investigators on these matters determine what they will label ‘cause’…
Gary Taubes (Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease)
Second, even if the underlying data could accurately predict future risk, the 99 percent assurance offered by the VaR model was dangerously useless, because it’s the 1 percent that is going to really mess you up.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
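A hedged sketch of why that last 1 percent matters: a 99% historical VaR is just a percentile cutoff, and it says nothing about how severe the losses beyond it can get. The returns below are simulated from a fat-tailed distribution, not real market data:

```python
# Minimal sketch of a 99% historical value-at-risk on simulated returns.
import numpy as np

rng = np.random.default_rng(0)
# Fat-tailed daily returns (Student's t) stand in for real portfolio data.
returns = rng.standard_t(df=3, size=10_000) * 0.01

var_99 = -np.percentile(returns, 1)  # loss exceeded on 1% of days
tail = returns[returns < -var_99]    # the days the model waves away

print(f"99% VaR: {var_99:.2%} daily loss")
print(f"average loss beyond VaR: {-tail.mean():.2%}")  # the part that hurts
```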
Using an Internet search engine to learn more about these sayings can be both a revelatory and exasperating experience. Search engines contain link after link to websites with faulty information, repetitive text, and incomplete data. Moving beyond this mélange of misinformation is nearly impossible for the average web user. It’s no wonder, then, that such mistakes are perpetuated and duplicated to the extreme. Many truth-seekers have struggled with the cacophony of conflicting information online; too often, accurate data is overwhelmed by inaccurate data.
Garson O'Toole (Hemingway Didn't Say That: The Truth Behind Familiar Quotations)
Accurate data on shark attacks on World War II servicemen may never be known since medical records did not note them. In fact, the navy was sufficiently concerned about loss of morale that it discouraged public mention of the menace.
Doug Stanton (In Harm's Way: The Sinking of the USS Indianapolis and the Extraordinary Story of Its Survivors)
We might be excused our ignorance in this case, because ocean-atmosphere systems are, after all, almost inconceivably complex. Less easy to excuse is our astounding lack of knowledge of much more visible features of our planet’s natural resources and ecology—features that have a direct impact on our well-being. For instance, we know surprisingly little about the state of the planet’s soils. While we have good information for some areas, like the Great Plains of the United States, soil data are sketchy for vast tracts of Africa, Asia, and Latin America, where billions of people depend directly on agriculture for survival. So we can’t accurately judge how badly we’ve degraded these soils through overuse and poor husbandry, though we do have patchy evidence that the damage is severe and getting worse in many places. Similarly, despite extensive satellite photography, our estimates of the rate and extent of tropical deforestation are rudimentary. We know even less about the natural ecology and species diversity inside these forests, where biologists presume most animal and plant species live. As a result, credible figures on the number of Earth’s species range from 5 to 30 million. And when it comes to broader questions—questions of how all these components of the planet’s ecology fit together; how they interact to produce Earth’s grand cycles of energy, carbon, oxygen, nitrogen, and sulfur; and how we’re perturbing these components and cycles—we find a deep and pervasive lack of knowledge, with unknown unknowns everywhere. Our ignorance, for all practical purposes, knows no bounds.
Thomas Homer-Dixon (The Ingenuity Gap: How Can We Solve the Problems of the Future?)
Today’s overwhelming volume and variety of information makes it possible—by selecting and connecting data points carefully—to paint practically any picture of the world and make it seem accurate. So the pictures we paint are often more a reflection of our deepest personal orientation, especially of our basic optimism or pessimism, than of empirical evidence. All the same, amidst the welter of information that sometimes seems to point in every direction, certain facts about long-term trends around the world ultimately shift the balance of evidence, in my mind, against the economic optimists. These facts indicate that there are chronic and widening ingenuity gaps in a number of domains of human activity. Significant problems, some of them fundamentally new in their character and scope, remain unsolved or are getting worse, in part because we haven’t generated and delivered enough ingenuity to address them. For instance, although average incomes and quality of life around the world are improving, these statistics—which are, again, highly aggregated—hide extreme and growing differences in wealth. Income per person, averaged globally, currently rises by about 0.8 percent per year, but in more than one hundred countries in the last fifteen years income has actually dropped. Some 1.3 billion people—about 30 percent of the population of the developing world—remain in absolute poverty, living on less than a dollar a day. And the gulf between the poorest and wealthiest people on the planet is widening very fast.
Thomas Homer-Dixon (The Ingenuity Gap: How Can We Solve the Problems of the Future?)
Do not assume that a source agrees with a writer when the source summarizes that writer’s line of reasoning. Quote only what a source believes, not its account of someone else’s beliefs, unless that account is relevant.
2. Record why sources agree, because why they agree can be as important as why they don’t. Two psychologists might agree that teenage drinking is caused by social influences, but one might cite family background, the other peer pressure.
3. Record the context of a quotation. When you note an important conclusion, record the author’s line of reasoning:
Not: Bartolli (p. 123): The war was caused … by Z.
But: Bartolli: The war was caused by Y and Z (p. 123), but the most important was Z (p. 123), for two reasons: First, … (pp. 124–26); Second, … (p. 126)
Even if you care only about a conclusion, you’ll use it more accurately if you record how a writer reached it.
4. Record the scope and confidence of each statement. Do not make a source seem more certain or expansive than it is. The second sentence below doesn’t report the first fairly or accurately.
One study on the perception of risk (Wilson 1988) suggests a correlation between high-stakes gambling and single-parent families.
Wilson (1988) says single-parent families cause high-stakes gambling.
5. Record how a source uses a statement. Note whether it’s an important claim, a minor point, a qualification or concession, and so on. Such distinctions help you avoid mistakes like this:
Original by Jones: We cannot conclude that one event causes another because the second follows the first. Nor can statistical correlation prove causation. But no one who has studied the data doubts that smoking is a causal factor in lung cancer.
Misleading report: Jones claims “we cannot conclude that one event causes another because the second follows the first. Nor can statistical correlation prove causation.” Therefore, statistical evidence is not a reliable indicator that smoking causes lung cancer.
Kate L. Turabian (A Manual for Writers of Research Papers, Theses, and Dissertations: Chicago Style for Students and Researchers)
If, as I believe, the conceptual structures we construct today are too complicated to be accurately specified in advance, and too complex to be built faultlessly, then we must take a radically different approach. Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built. So it must be with our software systems. Some years ago Harlan Mills proposed that any software system should be grown by incremental development.[11] That is, the system should first be made to run, even though it does nothing useful except call the proper set of dummy subprograms. Then, bit by bit it is fleshed out, with the subprograms in turn being developed into actions or calls to empty stubs in the level below. I have seen the most dramatic results since I began urging this technique on the project builders in my software engineering laboratory class. Nothing in the past decade has so radically changed my own practice, or its effectiveness. The approach necessitates top-down design, for it is a top-down growing of the software. It allows easy backtracking. It lends itself to early prototypes. Each added function and new provision for more complex data or circumstances grows organically out of what is already there. The morale effects are startling. Enthusiasm jumps when there is a running system, even a simple one. Efforts redouble when the first picture from a new graphics software system appears on the screen, even if it is only a rectangle. One always has, at every stage in the process, a working system. I find that teams can grow much more complex entities in four months than they can build.
Frederick P. Brooks Jr. (The Mythical Man-Month: Essays on Software Engineering)
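A minimal sketch of the growing-not-building approach Brooks credits to Mills: the skeleton runs end to end from the start, calling dummy subprograms that are fleshed out later. The function names are illustrative, not from Brooks's text:

```python
# Mills-style incremental growth: the program runs from day one,
# calling stubs that are later developed into real behavior.

def load_input():
    return "raw data"         # stub: later, real file or network I/O

def transform(data):
    return data               # stub: later, the actual processing logic

def report(result):
    print("result:", result)  # stub: later, formatted output

def main():
    # The top-down skeleton always runs, so each new piece of real
    # behavior grows organically inside an already-working system.
    report(transform(load_input()))

if __name__ == "__main__":
    main()
```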
At my “office” I wear an AR visor on my forehead. The visor is a curved band about hand width wide that is held a few inches away from my eyes for extra comfort during daylong use. The powerful visor throws up virtual screens all around me. I have about 12 virtual screens of all sizes and large data sets I can wrestle with my hands. The visor provides enough resolution and speed that most of my day I am communicating with virtual colleagues. But I see them in a real room, so I am fully present in reality as well. Their photorealistic 3-D avatar captures their life-size likeness accurately. My coworkers and I usually sit at a virtual table in a real room while we work independently, but we can walk around each other’s avatar. We converse and overhear each other just as if we are in the same room. It is so convenient to pop up an avatar that even if my real coworker is on the other side of the real room, we’ll just meet in the AR rather than walk across the gap.
Kevin Kelly (The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future)
Match. A successful platform creates efficiencies by matching the right users with one another and ensuring that the most relevant goods and services are exchanged. It accomplishes this by using data about producers, consumers, the value units created, and the goods and services to be exchanged. The more data the platform has to work with—and the better designed the algorithms used to collect, organize, sort, parse, and interpret the data—the more accurate the filters, the more relevant and useful the information exchanged, and the more rewarding the ultimate match between producer and consumer.
Geoffrey G. Parker (Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You: How Networked Markets Are Transforming the Economy―and How to Make Them Work for You)
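A hypothetical sketch of the matching step described above: score producer listings against a consumer's context and surface the most relevant one. The fields and weights are invented for illustration:

```python
# Invented ride-matching example: rank producer listings for one consumer.
listings = [
    {"id": "driver-1", "distance_km": 1.2, "rating": 4.9},
    {"id": "driver-2", "distance_km": 0.4, "rating": 4.1},
    {"id": "driver-3", "distance_km": 2.5, "rating": 4.8},
]

def score(listing, max_km=5.0):
    # Closer and better-rated listings score higher; the weights are
    # arbitrary stand-ins for a platform's tuned matching algorithm.
    proximity = 1 - listing["distance_km"] / max_km
    return 0.6 * proximity + 0.4 * (listing["rating"] / 5)

best = max(listings, key=score)
print("best match:", best["id"])
```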
What does a merchant do when a potential customer walks into a store and wants to purchase a ton of goods on credit? A solution was offered by “The Society of Guardians for the Protection of Trade against Swindlers and Sharpers,” established in 1776. This society pooled data from 550 merchants to collect information on the reputation of customers. This would make it much harder for a bad customer to defraud multiple merchants. Its key principle: “Every member is bound to communicate to the Society without delay, the Name and Description of any Person who may be unfit to trust.” In other words, this was the beginning of credit scores as a means to assess the trustworthiness of a customer for loans—no swindlers or sharpers allowed. This Society of Guardians was not the only credit bureau—thousands of similar small organizations were formed over the years, collecting individual names and publishing books with various comments and gossip. Modern giants Experian and Equifax grew from these small, local bureaus. Experian started as the Manchester Guardian Society in the early 1800s, eventually acquiring other bureaus to become one of the world’s largest. And Equifax grew from a Tennessee grocery store in the late 1800s, where the owners started compiling their own lists of creditworthy consumers. These bureaus tended to combine into larger bureaus over time because of what’s often described as a “data network effect.” When a bureau works with more merchants, it means more data, which means the risk predictions on loans will be more accurate. This makes it more attractive for additional merchants to join, who contribute even more data, and so on. Being able to accurately assess lending risk allows the rest of the network to function—consumers can borrow to get the goods they want, merchants can sell their products profitably, and banks can help underwrite the loans. This network is held together by credit bureaus like Equifax and Experian, who centralize consumer data.
Andrew Chen (The Cold Start Problem: How to Start and Scale Network Effects)
The conclusion that “all cryo-organisms like dark places” is not mathematically justified, because the cryo-organism that actually did sit in a dark place only represented one data point, one representation of a cryo-organism that prefers dark places. Statistically, having a larger amount of data points that fit the criteria of a conclusion would make the conclusion more accurate, but having only one data point does not substantiate the veracity of the conclusion! And who said that any definition of the term “cold” can be used in any context?! This reasoning, unfortunately, did not occur to the minds of any of the extraterrestrials.
Lucy Carter (Logicalard Fallacoid)
Whatever its other purposes, positive emotion is strongly correlated with good health and a longer life expectancy. A 2010 review of dozens of studies concluded that there are several pathways through which positive emotion exerts its beneficial effects—your hormonal, immune, and anti-inflammatory systems.[34] In one study health experts in London collected data on the well-being of hundreds of men and women between the ages of forty-five and sixty.[35] They assessed their subjects’ positive emotion using a method designed by the Nobel Prize–winning psychologist Daniel Kahneman, author of Thinking, Fast and Slow. Kahneman realized that you don’t get a very accurate picture by asking people if they are happy in life. Instead, you tend to get an answer that is reflective of how they feel at that moment, or of whatever event has just happened, or whether the sun is out. What they are reporting is a momentary feeling and not their general state.
Leonard Mlodinow (Emotional: How Feelings Shape Our Thinking)
Small data is big data in disguise. The reason we can often make good predictions from a small number of observations—or just a single one—is that our priors are so rich. Whether we know it or not, we appear to carry around in our heads surprisingly accurate priors about movie grosses and running times, poem lengths, and political terms of office, not to mention human life spans. We don’t need to gather them explicitly; we absorb them from the world.
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
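A hedged sketch of prediction from a single observation using a rich prior, in the spirit of the passage: condition a prior over life spans on the one fact you have, then take the posterior mean. The prior below is a synthetic normal distribution, not real actuarial data:

```python
# Bayesian prediction from one data point, given an (assumed, toy) prior.
import numpy as np

rng = np.random.default_rng(1)
# Toy prior over human life spans; a real prior would come from life tables.
prior_lifespans = rng.normal(loc=78, scale=12, size=100_000).clip(1)

def predict_total(observed_age):
    # Condition the prior on "lifespan exceeds the observed age",
    # then take the posterior mean as the prediction.
    survivors = prior_lifespans[prior_lifespans > observed_age]
    return survivors.mean()

for age in (6, 40, 90):
    print(f"met a {age}-year-old -> predict lives to ~{predict_total(age):.0f}")
```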
. . . the only thing I can tell you about systems like yours is that they are rare because they are unstable. Dynamite flows through their veins. A single jolt and they're gone. If my data sets are accurate, you are rare, fragile, and precious . . . In other words, someone somewhere was pretty sure you were going to destroy yourself, and they felt like you were worth saving, so they sent me.
Hank Green (A Beautifully Foolish Endeavor (The Carls, #2))
More than just vehicles on a map, CompassCom empowers GIS-centric fleet tracking and management that supports data-driven decisions, bringing efficiencies and accuracy across all departments of your operation. It enhances command and control with real-time asset tracking and after-action analytics, leveraging the power of ArcGIS. The knowledge base of records supports continuous improvement, using location intelligence to empower results you can trust. An effective fleet tracking solution can help improve fleet operations in a number of ways. For example, it can reduce engine idling time and harsh cornering, make smart routing decisions for drivers, improve customer satisfaction with accurate ETAs, and track vehicle maintenance costs.
CompassCom
Data Extraction Services at Rely Services help to grow your business by providing accurate, timely, and relevant data.
Rely Services
Tell me about data fiction.” “It’s what can happen if we’re so reliant on technology that we become completely dependent on things we can’t see. Therefore we can no longer judge for ourselves what’s true, what’s false, what’s accurate, what isn’t. In other words if reality is defined by software that does all the work for us then what if this software lies? What if everything we believe isn’t true but is a facade, a mirage? What if we go to war, pull the plug, make life-and-death decisions based on data fiction?”
Patricia Cornwell (Depraved Heart (Kay Scarpetta, #23))
Mija Survey provides highly qualified and experienced Setting-Out Engineers, Surveyors, and site engineering specialists in Norfolk to clients throughout the construction industry. Offering a reliable and professional yet flexible service, we supply East Anglia’s architects, designers, planners, property developers, civil engineers, land agents, construction professionals, and local authorities with reliable and accurate data.
Land Surveyor Norfolk
YouTube: "Jordan Peterson | The Most Terrifying IQ Statistic" JORDAN PETERSON: One of the most terrifying statistics I ever came across was one detailing out the rationale of the United States Armed Forces for not allowing the induct … you can't induct anyone into the Armed Forces into the Armed Forces in the U.S. if they have an IQ of less than 83. Okay, so let's just take that apart for a minute, because it's a horrifying thing. So, the U.S. Armed Forces have been in the forefront of intelligence research since World War I because they were onboard early with the idea that, especially during war time when you are ramping up quickly that you need to sort people effectively and essentially without prejudice so that you can build up the officer corps so you don't lose the damned war, okay. So, there is real motivation to get it right, because it's a life-and-death issue, so they used IQ. They did a lot of the early psychometric work on IQ. Okay, so that's the first thing, they are motivated to find an accurate predictor, so they settled on IQ. The second thing was, the United States Armed Forces is also really motivated to get people into the Armed Forces, peacetime or wartime. Wartime, well, for obvious reasons. Peacetime, because, well, first of all you've got to keep the Armed Forces going and second you can use the Armed Forces during peacetime as a way of taking people out of the underclass and moving them up into the working class or the middle class, right. You can use it as a training mechanism, and so left and right can agree on that, you know. It's a reasonable way of promoting social mobility. So again, the Armed Forces even in peacetime is very motivated to get as many people in as they possibly can. And it's difficult as well. It's not that easy to recruit people, so you don't want to throw people out if you don't have to. So, what's the upshot of all that? Well, after one hundred years, essentially, of careful statistical analysis, the Armed Forces concluded that if you had an IQ of 83 or less there wasn't anything you could possibly be trained to do in the military at any level of the organization that wasn't positively counterproductive. Okay, you think, well, so what, 83, okay. Yeah, one in ten! One in ten! That's one in ten people! And what that really means, as far as I can tell, is if you imagine that the military is approximately as complex as the broader society, which I think is a reasonable proposition, then there is no place in our cognitively complex society for one in ten people. So what are we going to do about that? The answer is, no one knows. You say, "well, shovel money down the hierarchy." It's like, the problem isn't lack of money. I mean sometimes that's the problem, but the problem is rarely absolute poverty. It's rarely that. It is sometimes, but rarely. It's not that easy to move money down the hierarchy. So, first of all, it's not that easy to manage money. So, it's a vicious problem, man. And so... INTERVIEWER: It's hard to train people to become creative, adaptive problem solvers. PETERSON: It's impossible! You can't do it! You can't do it! You can interfere with their cognitive ability, but you can't do that! The training doesn't work. INTERVIEWER: It's not going to work in six months, but it could have worked in six years. PETERSON: No, it doesn't work. Sorry, it doesn't work. The data on that is crystal clear. [note that “one in ten” applies to a breeding group with an average IQ of 100]
Jordan B. Peterson
For example, a bank might want to make decisions about financial loans in a more scientific way using data-based models. A decision-tree-based model could provide consistently accurate loan decisions. Developing such decision tree models is one of the main applications of data mining techniques.
Anil Maheshwari (Data Analytics Made Accessible)
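A minimal sketch of such a decision-tree loan model; the applicant features and outcomes are fabricated, and scikit-learn stands in for whatever data mining tool a bank would actually use:

```python
# Fit a tiny decision tree on invented loan applications.
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: income (thousands), debt ratio, years employed
X = [[45, 0.40, 2], [90, 0.10, 8], [30, 0.60, 1],
     [70, 0.20, 5], [55, 0.50, 3], [85, 0.15, 10]]
y = [0, 1, 0, 1, 0, 1]  # 1 = loan repaid, 0 = defaulted

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The fitted tree is a human-readable set of consistent lending rules.
print(export_text(tree, feature_names=["income", "debt_ratio", "years_employed"]))
print("approve?", tree.predict([[60, 0.30, 4]]))
```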
At South West Surveys we travel throughout the UK delivering fast, effective, highly detailed and accurate surveys. Our staff have a minimum of 15 years experience using Trimble robotic total stations, GPS and Faro 3D laser scanners, along with the latest data collection technologies available. Our client’s requirements are paramount, therefore we are happy to tailor our services to suit your individual needs or simply use a predetermined specification.
Land Surveys Wiltshire
We became the commodity. Our data. This is the secret of modern life. We went from being citizens to consumers, and now to commodities. Our personality profiles, our social and financial history, our likes and dislikes, all used to accurately predict future behavior. How will we vote? Will we take to the streets or roll over? The data knows all, which is why today our data is more valuable than our bodies.
Noah Hawley (Anthem)
Much of the signal system was installed in the 1930s and transit employees now have to fabricate their own replacement parts for obsolete equipment. While subway riders have to rely on this century-old technology, New York's automobile drivers take advantage of traffic signals that are part of a sophisticated information network. Above the streets, the city's Department of Transportation monitors data from sensors and video cameras to identify congestion choke points, and then remotely adjusts computerized traffic signals to optimize the flow of vehicles. Drivers obtain accurate, real-time traffic condition information via electronic signals, computers and smartphones.
Philip Mark Plotch (Last Subway: The Long Wait for the Next Train in New York City)
Our mission is to provide you with accurate data, tools, and resources so you can make smart decisions about improving your golf game. With accurate data we look to provide complete answers to every golf-related question. We make it easily digestible so that every reader leaves our site golf-smarter. Golf equipment is a procrastinator's dream marketplace: an array of products giving you the same result. At Golf Gifted we simplify the jargon and create detailed golf product reviews that will make your buying decision stress-free.
Golf Gifted
Even PayPal’s millions of dollars in bad transactions could be justified for the extensive data set they generated. “Losing a lot of money to fraud was a necessary byproduct in gathering the data needed to understand the problem and build good predictive models,” Greenfield later wrote on a personal blog. “With millions of transactions and tens of thousands of fraudulent transactions, our fraud analytics team could find subtler patterns and detect fraud more accurately.” Taken together, PayPal turned fraud from an existential threat to one of the company’s defining triumphs. It also had the unexpected benefit of thinning out the competition. “As the Russian mobsters got better and better,” Thiel said, “they got better and better at destroying all our competitors.” Thieves forced to work ever harder to fleece PayPal customers moved on to easier prey. “We’d also find that fraudsters were kind of lazy, right? They want to do just the least amount of work… So we just kind of hoped to push them off onto [our competitors],” Miller observed.
Jimmy Soni (The Founders: The Story of Paypal and the Entrepreneurs Who Shaped Silicon Valley)
Scientific method is a set of data collection procedures that have been developed over a several-hundred-year period, procedures designed to collect data as cleanly and accurately as we possibly can.
Dean Richards (Psychology in Plain English)
In 1965, Daniel Patrick Moynihan, then an official in the U.S. Department of Labor, called the inner cities after the arrival of the southern migrants “a tangle of pathology.” He argued that what had attracted southerners like Ida Mae, George, and Robert was welfare: “the differential in payments between jurisdictions has to encourage some migration toward urban centers in the North,” he wrote, adding his own italics. Their reputation had preceded them. It had not been good. Neither was it accurate. The general laws of migration hold that the greater the obstacles and the farther the distance traveled, the more ambitious the migrants. “It is the higher status segments of a population which are most residentially mobile,” the sociologists Karl and Alma Taeuber wrote in a 1965 analysis of census data on the migrants, published the same year as the Moynihan Report. “As the distance of migration increases,” wrote the migration scholar Everett Lee, “the migrants become an increasingly superior group.” Any migration takes some measure of energy, planning, and forethought. It requires not only the desire for something better but the willingness to act on that desire to achieve it. Thus the people who undertake such a journey are more likely to be either among the better educated of their homes of origin or those most motivated to make it in the New World, researchers have found. “Migrants who overcome a considerable set of intervening obstacles do so for compelling reasons, and such migrations are not taken lightly,” Lee wrote. “Intervening obstacles serve to weed out some of the weak or the incapable.” The South had erected some of the highest barriers to migration of any people seeking to leave one place for another in this country. By the time the migrants made it out, they were likely willing to do whatever it took to make it, so as not to have to return south and admit defeat. It would be decades before census data could be further analyzed and bear out these observations. One myth they had to overcome was that they were bedraggled hayseeds just off the plantation. Census figures paint a different picture. By the 1930s, nearly two out of every three colored migrants to the big cities of the North and West were coming from towns or cities in the South, as did George Starling and Robert Foster, rather than straight from the field. “The move to northern cities was dominated by urban southerners,” wrote the scholar J. Trent Alexander. Thus the latter wave of migrants brought a higher level of sophistication than was assumed at the time. “Most Negro migrants to northern metropolitan areas have had considerable previous experience with urban living,” the Taeuber study observed. Overall, southern migrants represented the most educated segment of the southern black population they left, the sociologist Stewart Tolnay wrote in 1998. In 1940 and 1950, colored people who left the South “averaged nearly two more years of completed schooling than those who remained in the South.” That middle wave of migrants found themselves, on average, more than two years behind the blacks they encountered
Isabel Wilkerson (The Warmth of Other Suns: The Epic Story of America's Great Migration)
You should see that a bigger sample makes for a shrinking standard error, which is how large national polls can end up with shockingly accurate results.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
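A quick check of that claim, using the standard error of a sample proportion (assuming a true proportion of 0.5):

```python
# The standard error shrinks with the square root of the sample size,
# which is why big national polls can be so sharp.
import math

p = 0.5  # assumed true proportion
for n in (100, 1_000, 10_000, 100_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n={n:>7}: standard error = {se:.4f} (+/- {1.96 * se:.1%} at 95%)")
```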
What is a good correlation? How high should it be? These are commonly asked questions. I have seen several schemes that attempt to classify correlations as strong, medium, and weak. However, there is only one correct answer. The correlation coefficient should accurately reflect the strength of the relationship. Take a look at the correlation between the height and weight data, 0.705. It’s not a very strong relationship, but it accurately represents our data. An accurate representation is the best-case scenario for using a statistic to describe an entire dataset.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
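Computing a Pearson correlation like the 0.705 height/weight figure is a one-liner; the data points below are made up, and the statistic simply summarizes whatever linear relationship exists in the data it is given:

```python
# Pearson correlation between two made-up variables.
import numpy as np

height = np.array([63, 65, 66, 68, 70, 71, 74])         # inches
weight = np.array([127, 140, 135, 160, 155, 180, 190])  # pounds

r = np.corrcoef(height, weight)[0, 1]
print(f"Pearson r = {r:.3f}")
```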
The machine takes all of these multitudinous pulls and forces which are fed in as data, and quickly computes the course of action which would be the most economical vector of need satisfaction in this existential situation. This is the behavior of our hypothetical person. The defects which in most of us make this process untrustworthy are the inclusion of information which does not belong to this present situation, or the exclusion of information which does. It is when memories and previous learnings are fed into the computations as if they were this reality, and not memories and learnings, that erroneous behavioral answers arise. Or when certain threatening experiences are inhibited from awareness, and hence are withheld from the computation or fed into it in distorted form, this too produces error. But our hypothetical person would find his organism thoroughly trustworthy, because all of the available data would be used, and it would be present in accurate rather than distorted form. Hence his behavior would come as close as possible to satisfying all his needs—for enhancement, for affiliation with others, and the like.
Carl R. Rogers (On Becoming a Person: A Therapist's View of Psychotherapy)
Considering the Scottish census through these theoretical lenses, where the census is not a neutral representation of a reality but a tool to construct a governable population, raises questions as to whether the census is an exercise in knowledge construction or a tool to bolster the state’s capacity to manage its population. These two objectives are not exclusive: improved knowledge likely facilitates the design of more efficient ways to coerce, control and discipline people who live within a state's jurisdiction. However, if the construction of knowledge is no longer the primary purpose of a census, this throws into doubt the need for a census to collect accurate information that authentically represents the lives and experiences of the people about whom the data relates.
Kevin Guyan (Queer Data: Using Gender, Sex and Sexuality Data for Action (Bloomsbury Studies in Digital Cultures))
In deep learning, there’s no data like more data. The more examples of a given phenomenon a network is exposed to, the more accurately it can pick out patterns and identify things in the real world. Given much more data, an algorithm designed by a handful of mid-level AI engineers usually outperforms one designed by a world-class deep-learning researcher. Having a monopoly on the best and the brightest just isn’t what it used to be. Elite AI researchers still have the potential to push the field to the next level, but those advances have occurred once every several decades. While we wait for the next breakthrough, the burgeoning availability of data will be the driving force behind deep learning’s disruption of countless industries around the world.
Kai-Fu Lee (AI Superpowers: China, Silicon Valley, and the New World Order)
Think of your project as “one of those,” gather data, and learn from all the experience those numbers represent by making reference-class forecasts. Use the same focus to spot and mitigate risks. Switching the focus from your project to the class your project belongs to will lead, paradoxically, to a more accurate understanding of your project.
Bent Flyvbjerg (How Big Things Get Done: The Surprising Factors That Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything In Between)
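A hedged sketch of a reference-class forecast: rather than estimating from the inside, look up the distribution of outcomes for similar past projects and read your forecast off its quantiles. The overrun ratios below are invented:

```python
# Reference-class forecasting with fabricated historical overrun data.
import numpy as np

# Cost-overrun ratios (actual / estimated) for 12 hypothetical past
# projects of the same kind as yours.
reference_class = np.array([1.0, 1.1, 1.2, 1.2, 1.3, 1.4,
                            1.5, 1.6, 1.8, 2.0, 2.4, 3.1])

base_estimate = 500_000  # your inside-view cost estimate

median_forecast = base_estimate * np.median(reference_class)
p80_forecast = base_estimate * np.percentile(reference_class, 80)
print(f"median outcome: {median_forecast:,.0f}")
print(f"80th-percentile contingency: {p80_forecast:,.0f}")
```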
The deck is usually owned by someone in finance. Or more accurately, the data in the deck are certified as accurate by finance.
Colin Bryar (Working Backwards: Insights, Stories, and Secrets from Inside Amazon)
Exploration is about formulating hypotheses or best guesses; confirmation is about rigorously testing preliminary conclusions. Confirmation turns best guesses into sure bets. As in scientific discovery, the less we know about a phenomenon, the more open-ended our questions. As relevant knowledge builds up, we become more precise about what we seek to learn, and we start to anticipate (more and more accurately) what we will find. Because hypothesis-testing experiments (for example, taking a new job on a provisional basis) are usually more costly than exploratory experiments (for example, working on a side project without leaving one’s job), we prefer to defer the former until we have solid data suggesting that we are going in the right direction. Variety for its own sake is not enough. In fact, a prolonged exploratory phase can be a defense mechanism against changing, and it can signal to others that we are not serious about making change. A true experimental method almost always leads to formulating new goals and new means to achieve them. As we learn from experience, we have to be willing to close avenues of exploration, to accept that what we thought we knew was wrong and that what we were hoping to find no longer suits
Herminia Ibarra (Working Identity: Unconventional Strategies for Reinventing Your Career)
• Amazon Comprehend is a natural language processing (NLP) solution that uses machine learning to find and extract insights and relationships from documents.
• Amazon Forecast combines your historical data with other variables, such as weather, to forecast outcomes.
• Amazon Kendra is an intelligent search service powered by machine learning.
• Amazon Lex is a solution for building conversational interfaces that can understand user intent and enable humanlike interactions.
• Amazon Lookout for Metrics detects and diagnoses anomalies in business and marketing data, such as unexpected drops in sales or unusual spikes in customer churn rates.
• Amazon Personalize powers personalized recommendations using the same machine-learning technology as Amazon.com.
• Amazon Polly converts text into natural-sounding speech, enabling you to create applications that talk.
• Amazon Rekognition makes it possible to identify objects, people, text, scenes, and activities in images and videos.
• Amazon Textract automatically reads and processes scanned documents to extract text, handwriting, tables, and data.
• Amazon Transcribe converts speech to text.
• Amazon Translate uses deep-learning models to deliver accurate, natural-sounding translation.
Paul Roetzer (Marketing Artificial Intelligence: Ai, Marketing, and the Future of Business)