Data Analysis Quotes

We've searched our database for all the quotes and captions related to Data Analysis. Here they are! All 200 of them:

If you torture the data long enough, it will confess.
Ronald H. Coase (Essays on Economics and Economists)
When it comes to riding a trend for business growth, there are three important steps that we should always remember: data analysis, trend identification, and fast and effective decision making.
Pooja Agnihotri (17 Reasons Why Businesses Fail: Unscrew Yourself From Business Failure)
I would defer to your expertise in shooting and killing things. You should defer to mine in data analysis.
Martha Wells (Artificial Condition (The Murderbot Diaries, #2))
We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.
Pierre-Simon Laplace
Once you have analyzed data, you have to mine that data to find insights from it. At this point, you can involve your marketing or product team to work with the data analysis team.
Pooja Agnihotri (Market Research Like a Pro)
In current times, we have access to so much data. Having said that, data analysis can uncover so many hidden patterns about customer behavior and how they interact with various products.
Pooja Agnihotri (Market Research Like a Pro)
Serving humanity intelligently is held up as the “gold standard” of AI-based systems. But, with the emergence of new technologies and AI systems with biometric data storage, surveillance, tracking and big data analysis, humanity and society are facing a threat today from evilly designed AI systems in the hands of monster governments and irresponsible people. Humanity is on the verge of digital slavery.
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
An analysis of 248 performance reviews collected from a variety of US-based tech companies found that women receive negative personality criticism that men simply don’t. Women are told to watch their tone, to step back. They are called bossy, abrasive, strident, aggressive, emotional and irrational. Out of all these words, only aggressive appeared in men’s reviews at all – ‘twice with an exhortation to be more of it’.
Caroline Criado Pérez (Invisible Women: Exposing Data Bias in a World Designed for Men)
To find signals in data, we must learn to reduce the noise - not just the noise that resides in the data, but also the noise that resides in us. It is nearly impossible for noisy minds to perceive anything but noise in data.
Stephen Few (Signal: Understanding What Matters in a World of Noise)
Data is a form of capital. And as is the case with all capital - it has to be efficiently utilized.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
That’s the chain of thinking: D-A-D-A. Getting data leads to analysis. Analysis leads to a decision. A decision leads to an action. Simple. That’s how thinking works.
John Braddock (A Spy's Guide to Thinking)
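Braddock's D-A-D-A chain reads naturally as a small pipeline. Below is a minimal, hypothetical Python sketch of one pass through the chain; the sales figures and the 2% decline threshold are invented purely for illustration and are not from the book.

```python
# A toy walk through the Data -> Analysis -> Decision -> Action chain.
# All numbers and the -2% threshold are hypothetical, purely for illustration.

def get_data():
    # Data: weekly sales figures (made-up values)
    return [1200, 1150, 1100, 1080, 1020, 990]

def analyze(sales):
    # Analysis: average week-over-week change
    changes = [(b - a) / a for a, b in zip(sales, sales[1:])]
    return sum(changes) / len(changes)

def decide(avg_change, threshold=-0.02):
    # Decision: flag a problem if sales shrink faster than the threshold
    return "investigate" if avg_change < threshold else "hold"

def act(decision):
    # Action: in a real business this would trigger research, pricing changes, etc.
    print(f"Decision: {decision}")

if __name__ == "__main__":
    data = get_data()
    act(decide(analyze(data)))
```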
In business, people with expertise, experience and evidence will make more profitable decisions than people with instinct, intuition and imagination.
Amit Kalantri (Wealth of Words)
In this era of fake news and paid news artificial intelligence is more and more used as a political tool to manipulate and dictate common people, through big data, biometric data, and AI analysis of online profiles and behaviors in social media and smart phones. But the days are not far when AI will also control the politicians and the media too.
Amit Ray
Computers bootstrap their own offspring, grow so wise and incomprehensible that their communiqués assume the hallmarks of dementia: unfocused and irrelevant to the barely-intelligent creatures left behind. And when your surpassing creations find the answers you asked for, you can't understand their analysis and you can't verify their answers. You have to take their word on faith.
Peter Watts (Blindsight (Firefall, #1))
The truth is that every business answer that you’re looking for is out there—covered and surrounded by many layers of unnecessary data.
Pooja Agnihotri (Market Research Like a Pro)
Market research gives you enough time to learn from your competitors’ mistakes, take inspiration from their strengths, and exploit their weaknesses.
Pooja Agnihotri (Market Research Like a Pro)
Good market research would have made you aware of the real picture at the right time, and maybe you would have been able to write a different future because of that.
Pooja Agnihotri (Market Research Like a Pro)
Let market research be a permanent, ongoing part of your business strategy.
Pooja Agnihotri (Market Research Like a Pro)
If you are going to use the results of market research to make a big business decision, then it’s a good idea to do quantitative research rather than qualitative.
Pooja Agnihotri (Market Research Like a Pro)
The more numbers you know through market research, the more you will be able to cut down your business risk.
Pooja Agnihotri (Market Research Like a Pro)
The biggest advice I can give for setting up your market research objectives is to be very clear and concise.
Pooja Agnihotri (Market Research Like a Pro)
Two things happen when your objectives are too broad—you don’t achieve the right results and you lose a lot of your resources. You want to avoid both of those.
Pooja Agnihotri (Market Research Like a Pro)
Interviews are a qualitative form of collecting data. The reason it generates good responses is because it’s way more personal than other data-gathering techniques.
Pooja Agnihotri (Market Research Like a Pro)
Your market research project is of no use if you are not going to use the results of it in your decision-making process.
Pooja Agnihotri (Market Research Like a Pro)
The best way to improve your decision-making and decrease your business risk is to use data to guide your decisions.
Pooja Agnihotri (Market Research Like a Pro)
There is no better tool to bring you closer to your competitors than market research. So, keep your friends close and your competitors even closer with the help of market research.
Pooja Agnihotri (Market Research Like a Pro)
You don’t have to wait until you have a heart attack to understand the importance of a healthy lifestyle.
Pooja Agnihotri (Market Research Like a Pro)
A market research project starts when you have the answers to the following questions: 1. Why are you researching? 2. What are you going to do with the results?
Pooja Agnihotri (Market Research Like a Pro)
Discovering a cure for a disease is one thing, but making that cure available to everyone so it can actually be used to eradicate the problem is another thing.
Pooja Agnihotri (Market Research Like a Pro)
Having an objective for any project is highly important as we are living in a world full of data—some useful but mostly useless.
Pooja Agnihotri (Market Research Like a Pro)
For identifying the objective of your market research project, it is highly advisable to zero in on the exact information you want to collect and from whom.
Pooja Agnihotri (Market Research Like a Pro)
If your objectives are too broad, they can dilute your project.
Pooja Agnihotri (Market Research Like a Pro)
One market research project should have only one objective. More than one objective can affect the effectiveness of your research.
Pooja Agnihotri (Market Research Like a Pro)
Never guess anything. You will make bad business decisions if you do that. If you don’t have data on something, start a research project on that topic.
Pooja Agnihotri (Market Research Like a Pro)
If you can't understand a study, the problem is with the study, not with you.
Seth Stephens-Davidowitz (Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are)
Your market research objectives need to fit into your marketing strategy. If your objectives are not supporting your marketing strategy, then it’s going to be a waste of your resources.
Pooja Agnihotri (Market Research Like a Pro)
All the data you have collected is of no use if you don’t know how to gain insights from it, how to make profitable decisions with the help of this, and how to put your data into action.
Pooja Agnihotri (Market Research Like a Pro)
Refining a data set implies getting rid of unnecessary, duplicate, or misleading data. If you don't do that, you can see results that are not actually true, thus hurting your decision-making.
Pooja Agnihotri (Market Research Like a Pro)
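In practice, refining a data set usually means deduplicating records and dropping rows that cannot be trusted before anything is reported. The following is a minimal pandas sketch; the column names, the sample rows, and the rule for what counts as a misleading row are all assumptions made for illustration.

```python
import pandas as pd

# Hypothetical survey responses; columns and values are assumptions for illustration.
raw = pd.DataFrame({
    "respondent_id": [1, 1, 2, 3, 4],
    "age":           [34, 34, 29, 41, 250],   # 250 is an implausible entry
    "would_buy":     ["yes", "yes", "no", "yes", "no"],
})

deduped = raw.drop_duplicates(subset="respondent_id")              # remove duplicate responses
plausible = deduped[(deduped["age"] > 0) & (deduped["age"] < 120)]  # drop implausible (misleading) rows
refined = plausible.reset_index(drop=True)

print(refined)
```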
To save ourselves from getting lost in this sea of data and ending up directionless, it becomes vital for every business owner to not just set up their market research objectives but also to stick to those.
Pooja Agnihotri (Market Research Like a Pro)
The real point of the matter is that what we call a 'wrong datum' is one which is inconsistent with all other known data. It is our only criterion for right and wrong.
Isaac Asimov (Robot Visions (Robot, #0.5))
By the time your perfect information has been gathered, the world has moved on.
Phil Dourado (The 60 Second Leader: Everything You Need to Know about Leadership, in One Minute Bites)
The male-unless-otherwise-indicated approach to research seems to have infected all sorts of ethnographic fields. Cave paintings, for example, are often of game animals and so researchers have assumed they were done by men - the hunters. But new analysis of handprints that appear alongside such paintings in cave sites in France and Spain has suggested that the majority were actually done by women.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
If the research is done for brand awareness and the results don't show any rosy picture, then maybe you want to share those results with your marketing head who can include more brand awareness strategies in her marketing plan.
Pooja Agnihotri (Market Research Like a Pro)
Break down your problem as much as you can, but don’t do it on the basis of your guesses. If you don’t know exactly what is wrong, use market research to break down your problem further and further until you reach the very point of your trouble.
Pooja Agnihotri (Market Research Like a Pro)
Without solid insights gained through market research, any kind of marketing you do is like throwing your pamphlets at Times Square and hoping somebody will pick them up, read them, become interested, and get the product. That happens… just not so often.
Pooja Agnihotri (Market Research Like a Pro)
Data is one of the most valuable resources of a bank.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
The latest technologies are often sexy, but beware of solutions that vendors dress up like trollops, unless you're looking for a one-night stand.
Stephen Few (Signal: Understanding What Matters in a World of Noise)
To her, data analysis was the ugly love child of science and Kafka,
Kim Stanley Robinson (New York 2140)
What I need I carry in my head. Everything in that machine came from me. My fat burned into knowledge. My calories pedaled into data analysis" -- The Calorie Man
James Patrick Kelly (Rewired: The Post-Cyberpunk Anthology)
If you have no idea whether the problem is with your product, price, or something else, then it’s a good idea to start with a little research. It’s going to help you understand what the main problem is, or why people are not buying more of your products.
Pooja Agnihotri (Market Research Like a Pro)
In this new age where data is so abundant, our task as a civilization now is effective beneficial utilization. The challenge now is doing good things with that data - things that make our lives and the lives of future generations of people more fulfilling and more joyful and more prosperous.
Hendrith Vanlon Smith Jr
We've passed through the era of data accumulation. We're entering now into the era of data amalgamation - data combined and directed in service of a greater purpose.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
Use the data from the clients you already have to help you find new clients just like them. Eventually you'll be tapped into a whole market segment.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
So it is with statistics; no amount of fancy analysis can make up for fundamentally flawed data. Hence the expression “garbage in, garbage out.”
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
Regression analysis is the hydrogen bomb of the statistics arsenal.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
Data is the blood of any organization; coming from everywhere, used everywhere, connecting the whole body, transferring messages, and when analyzed it reflects the whole picture of the body.
Khalid Abulmajd
Many modern businesses have become proficient at mining data. In fact the mining of data is becoming almost routine. But as we advance further into the 21st century and the 22nd century, the utilization of data begins to take priority. So it's not just about collecting all this data, but also about getting really creative with generating new ways to utilize that data in the quest to add value.
Hendrith Vanlon Smith Jr
Signals always point to something. In this sense, a signal is not a thing but a relationship. Data becomes useful knowledge of something that matters when it builds a bridge between a question and an answer. This connection is the signal.
Stephen Few (Signal: Understanding What Matters in a World of Noise)
Hobbes's analysis of the causes of violence, borne out by modern data on crime and war, shows that violence is not a primitive, irrational urge, nor is it a "pathology" except in the metaphorical sense of a condition that everyone would like to eliminate. Instead, it is a near-inevitable outcome of the dynamics of self-interested, rational social organisms.
Steven Pinker (The Blank Slate: The Modern Denial of Human Nature)
Fear of this uncertainty motivates people to spin their wheels for days considering all the possible outcomes, calculating them in a spreadsheet using utility cost analysis or some other fancy method that even the guy who invented it doesn't use. But all that analysis just keeps you on the sidelines. Often you're better off flipping a coin and moving in any clear direction. Once you start moving, you get new data regardless of where you're trying to go. And the new data makes the next decision and the next better than staying on the sidelines desperately trying to predict the future without that time machine.
Scott Berkun (The Year Without Pants: WordPress.com and the Future of Work)
Writing programs (or programming) is a very creative and rewarding activity. You can write programs for many reasons ranging from making your living to solving a difficult data analysis problem to having fun to helping someone else solve a problem.
Charles Severance (Python for Informatics: Exploring Information)
We think that if we have made a clever and thoughtful argument, based on data and smart analysis, then people will change their minds. This isn’t true. If you want to change people’s behavior, you need to touch their hearts, not just win the argument.
Eric Schmidt (How Google Works)
Blogging, writing conventional articles, and being science consultant and pocket protector ninja to various web portals and TV programs, quite often trying to promote the penicillin of hard data to people who had no interest in being cured of their ignorance.
Stephen L. Burns (Analog Science Fiction and Fact, 2012 December)
A dispassionate conceptual development of the typology of violence must by definition ignore its traumatic impact. Yet there is a sense in which a cold analysis of violence somehow reproduces and participates in its horror. A distinction needs to be made, as well, between (factual) truth and truthfulness: what renders a report of a raped woman (or any other narrative of a trauma) truthful is its very factual unreliability, its confusion, its inconsistency. If the victim were able to report on her painful and humiliating experience in a clear manner, with all the data arranged in a consistent order, this very quality would make us suspicious of its truth.
Slavoj Žižek (Violence: Six Sideways Reflections)
An analysis of G-rated (suitable for children) films released between 1990 and 2005 found that only 28% of speaking roles went to female characters – and perhaps even more tellingly in the context of humans being male by default, women made up only 17% of crowd scenes.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.
Harvard Business Review (HBR's 10 Must Reads on AI, Analytics, and the New Machine Age (with bonus article "Why Every Company Needs an Augmented Reality Strategy" by Michael E. Porter and James E. Heppelmann))
Individual data points are of minuscule value. In the first twenty years of this century, data has become a common commodity. But the next level is amalgamation - bringing hundreds or thousands or millions of data points together and then making of them something greater than the sum of the parts.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
Here is one of the most important things to remember when doing research that involves regression analysis: Try not to kill anyone. You can even put a little Post-it note on your computer monitor: “Do not kill people with your research.” Because some very smart people have inadvertently violated that rule.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
Investors look at economic fundamentals; traders look at each other; ‘quants’ look at the data. Dealing on the basis of historic price series was once described as technical analysis, or chartism (and there are chartists still). These savants identify visual patterns in charts of price data, often favouring them with arresting names such as ‘head and shoulders’ or ‘double bottoms’. This is pseudo-scientific bunk, the financial equivalent of astrology. But more sophisticated quantitative methods have since proved profitable for some since the 1970s’ creation of derivative markets and the related mathematics.
John Kay (Other People's Money: The Real Business of Finance)
The more time I spent in Finland, the more I started to worry that the reforms sweeping across the United States had the equation backwards. We were trying to reverse engineer a high-performance teaching culture through dazzlingly complex performance evaluations and value-added data analysis. It made sense to reward, train, and dismiss more teachers based on their performance, but that approach assumed that the worst teachers would be replaced with much better ones, and that the mediocre teachers would improve enough to give students the kind of education they deserved. However, there was not much evidence that either scenario was happening in reality.
Amanda Ripley (The Smartest Kids in the World: And How They Got That Way)
The information he did not get was formal information. The data. The details. The options. The analysis. He didn’t do PowerPoint. For anything that smacked of a classroom or of being lectured to—“professor” was one of his bad words, and he was proud of never going to class, never buying a textbook, never taking a note—he got up and left the room.
Michael Wolff (Fire and Fury: Inside the Trump White House)
All revolutions are impossible till they happen, then they become inevitable.
Randy Bartlett (A PRACTITIONER'S GUIDE TO BUSINESS ANALYTICS: Using Data Analysis Tools to Improve Your Organization's Decision Making and Strategy)
Most people use statistics the way a drunkard uses a lamp post, more for support than illumination.
Randy Bartlett (A PRACTITIONER'S GUIDE TO BUSINESS ANALYTICS: Using Data Analysis Tools to Improve Your Organization's Decision Making and Strategy)
opinion-based decision making, statistical malfeasance, and counterfeit analysis are pandemic. We are swimming in make-believe analytics.
Randy Bartlett (A PRACTITIONER'S GUIDE TO BUSINESS ANALYTICS: Using Data Analysis Tools to Improve Your Organization's Decision Making and Strategy)
Reducing intelligence to the statistical analysis of large data sets “can lead us,” says Levesque, “to systems with very impressive performance that are nonetheless idiot-savants.
Nicholas Carr (The Glass Cage: Automation and Us)
Data-Analysis-Decision-Action chain.
John Braddock (A Spy's Guide to Thinking)
Your relevance as a data custodian is your ability to analyse and interpret it. If you can’t, your replacement is due.
Wisdom Kwashie Mensah
America has already taken in more than one-quarter of Mexico’s entire population, according to the Pew Research Center’s analysis of census data.
Ann Coulter (¡Adios, America!: The Left's Plan to Turn Our Country into a Third World Hellhole)
Data levels all arguments.
Anthony W. Richardson (Full-Scale: How to Grow Any Startup Without a Plan or a Clue)
Everything that informs us of something useful that we didn't already know is a potential signal. If it matters and deserves a response, its potential is actualized.
Stephen Few
There's a strand of the data viz world that argues that everything could be a bar chart. That's possibly true but also possibly a world without joy.
Amanda Cox
The life of a visual communicator should be one of systematic and exciting intellectual chaos.
Alberto Cairo (The Functional Art: An Introduction to Information Graphics and Visualization)
One way to appreciate the brilliance of this acquisition is to look at Instagram’s “Power Index,” the number of people a platform reaches times their level of engagement. This social index reveals Instagram as the world’s most powerful platform, as it has 400 million users, a third of Facebook’s, but garners fifteen times the level of engagement. L2 Analysis of Unmetric Data.
Scott Galloway (The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google)
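Galloway's "Power Index" is just reach multiplied by engagement. As a rough, hedged back-of-the-envelope check in Python, using only the figures quoted above and treating Facebook's engagement level as the unit of comparison (an assumption; the underlying L2/Unmetric methodology is not described here):

```python
# Power Index = users x engagement level (relative units).
# Engagement is indexed to Facebook = 1.0, per the "fifteen times" claim above.
instagram_users, instagram_engagement = 400e6, 15.0
facebook_users,  facebook_engagement  = 1200e6, 1.0   # "a third of Facebook's" users

instagram_power = instagram_users * instagram_engagement   # 6.0e9
facebook_power  = facebook_users * facebook_engagement     # 1.2e9

print(instagram_power / facebook_power)  # Instagram ~5x Facebook on this index
```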
No data are excluded on subjective or arbitrary grounds. No one piece of data is more highly valued than another. The consequences of this policy have to be accepted, even if they prove awkward.
Jennifer K. McArthur (Place-Names in the Knossos Tablets Identification and Location (Suplementos a MINOS, #9))
Data, investigation, analysis, news, rumor: A dystopia flattens those terms into one. There is what the state says, and then there is everything else, and that everything else falls into one category: information.
Hanya Yanagihara (To Paradise)
The success of college towns and big cities is striking when you just look at the data. But I also delved more deeply to undertake a more sophisticated empirical analysis. Doing so showed that there was another variable that was a strong predictor of a person’s securing an entry in Wikipedia: the proportion of immigrants in your county of birth. The greater the percentage of foreign-born residents in an area, the higher the proportion of children born there who go on to notable success. (Take that, Donald Trump!) If two places have similar urban and college populations, the one with more immigrants will produce more prominent Americans.
Seth Stephens-Davidowitz (Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are)
One analysis of 2013 financial reports calculated that the value of each user to Google is $40 per year, and only $6 to Facebook, LinkedIn, and Yahoo. This is why companies like Google and Facebook keep raising the ante.
Bruce Schneier (Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World)
In a different direction, the necessity to model the analysis of noisy incomplete sensory data not by logic but by Bayesian inference first came to the forefront in the robotics community with their use of Kalman filters.
Ulf Grenander (A Calculus of Ideas:A Mathematical Study of Human Thought)
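A one-dimensional Kalman filter makes the idea concrete: each noisy reading is blended with the current estimate, weighted by their respective uncertainties, which is Bayesian updating rather than logical deduction. This is only a minimal sketch, not the robotics-grade formulation; the noise variances and the simulated sensor below are arbitrary illustrations.

```python
import random

def kalman_1d(measurements, process_var=1e-4, meas_var=0.5):
    """Estimate a constant (or slowly drifting) value from noisy readings."""
    estimate, error = 0.0, 1.0            # initial state and its variance
    estimates = []
    for z in measurements:
        error += process_var               # predict: uncertainty grows a little
        gain = error / (error + meas_var)  # how much to trust the new reading
        estimate += gain * (z - estimate)  # update toward the measurement
        error *= (1 - gain)
        estimates.append(estimate)
    return estimates

if __name__ == "__main__":
    random.seed(0)
    true_value = 3.0
    readings = [true_value + random.gauss(0, 0.7) for _ in range(50)]
    print(kalman_1d(readings)[-1])   # converges near 3.0 despite the noise
```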
Everything points to the same conclusion: that Twitter hasn’t so much altered our writing as just gotten it to fit into a smaller place. Looking through the data, instead of a wasteland of cut stumps, we find a forest of bonsai. This kind of in-depth analysis (lexical density, word frequency) hints at the real nature of the transformation under way. The change Twitter has wrought on language itself is nothing compared with the change it is bringing to the study of language. Twitter gives us a sense of words not only as the building blocks of thought but as a social connector, which indeed has been the purpose of language since humanity hunched its way across the Serengeti.
Christian Rudder (Dataclysm: Who We Are (When We Think No One's Looking))
The inadequacy of unidimensional plotting along a continuum (in this case the diagonal of a symmetric matrix) inevitably would make "buffer" elements appear non-conformist when in fact they may be part of an interconnected pattern.
Jennifer K. McArthur (Place-Names in the Knossos Tablets Identification and Location (Suplementos a MINOS, #9))
Liberals are more likely to see people as victims of circumstance and oppression, and doubt whether individuals can climb without governmental help. My own analysis using 2005 survey data from Syracuse University shows that about 90 percent of conservatives agree that “While people may begin with different opportunities, hard work and perseverance can usually overcome those disadvantages.” Liberals — even upper-income liberals — are a third less likely to say this.
Arthur C. Brooks
Throughout the primary, he’d report back from the field on what he was hearing at campaign events and from friends across the country. Mook’s response was always a variation on the same analysis: the data run counter to your anecdotes.
Jonathan Allen (Shattered: Inside Hillary Clinton's Doomed Campaign)
Analysis of how gender affected support for Trump revealed that ‘the more hostile voters were toward women, the more likely they were to support Trump’. In fact, hostile sexism was nearly as good at predicting support for Trump as party identification.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
We never know as much as we think we do, and we are never as good as we think we are. When we forget that, the market will remind us. Approach this work with a sense of humility, realizing that however much we learn and however much we know, much more remains undiscovered.
Adam H. Grimes (Quantitative Analysis of Market Data)
Digital analytics is the analysis of qualitative and quantitative data from your business and the competition to drive a continual improvement of the online experience that your customers and potential customers have which translates to your desired outcomes (both online and offline).
Anonymous
Web Analytics 2.0 is: the analysis of qualitative and quantitative data from your website and the competition, to drive a continual improvement of the online experience that your customers, and potential customers have, which translates into your desired outcomes (online and offline).
Anonymous
Democrats need to catch up and leapfrog ahead. And this isn’t just about data. We need an “always-on” content distribution network that can match what the right-wing has built. That means an array of loosely connected Facebook pages, Instagram accounts, Twitter feeds, Snapchat stories, and Reddit communities churning out memes, graphics, and videos. More sophisticated data collection and analysis can support and feed this network. I’m no expert in these matters, but I know enough to understand that most people get their news from screens, so we have to be there 24/7.
Hillary Rodham Clinton (What Happened)
At the highest levels of authority, we will probably retain human figureheads, who will give us the illusion that the algorithms are only advisors, and that ultimate authority is still in human hands. We will not appoint an AI to be the chancellor of Germany or the CEO of Google. However, the decisions taken by the chancellor and the CEO will be shaped by AI. The chancellor could still choose between several different options, but all these options will be the outcome of Big Data analysis, and they will reflect the way AI views the world more than the way humans view
Yuval Noah Harari (21 Lessons for the 21st Century)
All good decisions are data-dependent. To make good decisions, you need good data. And you need that good data to be organized according to its applicable use value. So every business should be mining data and organizing data to enable business leaders to make good decisions on behalf of the business.
Hendrith Vanlon Smith Jr, CEO of Mayflower-Plymouth
Dr. Helen Fisher divides love into three categories that correspond to different hormones and brain systems. Her analysis of the data suggests that high androgen and estrogen levels generate lust, romantic love correlates with high dopamine and norepinephrine and low serotonin, and attachment is driven by oxytocin and vasopressin. To make matters more complicated, these three systems interact. For example, testosterone can “kickstart the two love neurotransmitters while an orgasm can elevate the attachment hormone,” according to Fisher. “Don’t copulate with people you don’t want to fall in love with,” she warns.
Deborah Anapol (Polyamory in the 21st Century: Love and Intimacy With Multiple Partners)
Avoid succumbing to the gambler’s fallacy or the base rate fallacy. Anecdotal evidence and correlations you see in data are good hypothesis generators, but correlation does not imply causation—you still need to rely on well-designed experiments to draw strong conclusions. Look for tried-and-true experimental designs, such as randomized controlled experiments or A/B testing, that show statistical significance. The normal distribution is particularly useful in experimental analysis due to the central limit theorem. Recall that in a normal distribution, about 68 percent of values fall within one standard deviation, and 95 percent within two. Any isolated experiment can result in a false positive or a false negative and can also be biased by myriad factors, most commonly selection bias, response bias, and survivorship bias. Replication increases confidence in results, so start by looking for a systematic review and/or meta-analysis when researching an area.
Gabriel Weinberg (Super Thinking: The Big Book of Mental Models)
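Both numerical claims in that passage are easy to check directly: the 68/95 coverage of the normal distribution, and the use of a two-proportion test to judge significance in an A/B experiment. A minimal sketch using only the Python standard library follows; the conversion counts are invented for illustration.

```python
import math
import random

random.seed(1)

# 1. Empirical check of the 68% / 95% rule for a normal distribution.
samples = [random.gauss(0, 1) for _ in range(100_000)]
within_1sd = sum(abs(x) < 1 for x in samples) / len(samples)
within_2sd = sum(abs(x) < 2 for x in samples) / len(samples)
print(f"within 1 sd: {within_1sd:.3f}, within 2 sd: {within_2sd:.3f}")  # ~0.683, ~0.954

# 2. Two-proportion z-test for a hypothetical A/B test (counts are made up).
conv_a, n_a = 120, 2400   # control: 5.0% conversion
conv_b, n_b = 156, 2400   # variant: 6.5% conversion
p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
print(f"z = {z:.2f}")     # |z| > 1.96 would be significant at the 5% level
```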
That so far the material has been dealt with in a rather subjective way provokes the question whether a means can be found of handling it objectively. [...] This chapter considers the applicability of the statistical tests employed by Wilson and the general problem whether the Linear B data are suited to statistical analysis.
Jennifer K. McArthur (Place-Names in the Knossos Tablets Identification and Location (Suplementos a MINOS, #9))
There is a mistake technical and scientific people make. We think that if we have made a clever and thoughtful argument, based on data and smart analysis, then people will change their minds. This isn't true. If you want to change people's behavior, you need to touch their hearts, not just win the argument. We call this the Oprah Winfrey rule.
Eric Schmidt
In May 2014 Deep Knowledge Ventures – a Hong Kong venture-capital firm specialising in regenerative medicine – broke new ground by appointing an algorithm named VITAL to its board. Like the other five board members, VITAL gets to vote on whether or not the firm invests in a specific company, basing its opinions on a meticulous analysis of huge amounts of data.
Yuval Noah Harari (Homo Deus: A Brief History of Tomorrow)
Storytellers need their stories to resonate with their listeners. So when you envision a goal and describe the action needed to reach that goal, you transform your vision into a portrayal that depicts what’s truly possible to move people to action. While you need enough material (data and reasoned analysis) to flesh out the tale, you don’t want to bombard people with charts and tables.
Steven Haines (The Product Manager's Survival Guide: Everything You Need to Know to Succeed as a Product Manager)
The display, which was called 'Can Democracy Survive the Internet?' was dedicated to a 'global election management' company called Cambridge Analytica. Cambridge Analytica claimed to have gathered 5,000 data points on every American voter online: what you liked and what you shared on social media; how and where you shopped; who your friends were... They claimed to be able to take this imprint of your online self, use it to understand your deepest drives and desires, and then draw on that analysis to change your voting behaviour. The boast seemed to be backed up by success: Cambridge Analytica had worked on the victorious American presidential campaign of Donald Trump; it had also run successful campaigns for US Senator Ted Cruz (twice); and others all across Africa, Asia, the Caribbean, Latin America.
Peter Pomerantsev (This Is Not Propaganda: Adventures in the War Against Reality)
In a way, science might be described as paranoid thinking applied to Nature: we are looking for natural conspiracies, for connections among apparently disparate data. Our objective is to abstract patterns from Nature (right-hemisphere thinking), but many proposed patterns do not in fact correspond to the data. Thus all proposed patterns must be subjected to the sieve of critical analysis (left-hemisphere thinking).
Carl Sagan (Dragons of Eden: Speculations on the Evolution of Human Intelligence)
The results of decades of neurotransmitter-depletion studies point to one inescapable conclusion: low levels of serotonin, norepinephrine or dopamine do not cause depression. Here is how the authors of the most complete meta-analysis of serotonin-depletion studies summarized the data: "Although previously the monoamine systems were considered to be responsible for the development of major depressive disorder (MDD), the available evidence to date does not support a direct causal relationship with MDD. There is no simple direct correlation of serotonin or norepinephrine levels in the brain and mood." In other words, after a half-century of research, the chemical-imbalance hypothesis as promulgated by the drug companies that manufacture SSRIs and other antidepressants is not only without clear and consistent support, but has been disproved by experimental evidence.
Irving Kirsch (The Emperor's New Drugs: Exploding the Antidepressant Myth)
I like to ensure that I have music and art all around me. My personal favorite is old maps. What I love about old maps is that they are both beautiful and imperfect. These imperfections represent that some of the most talented in history were still very wrong (early cartography was very difficult). As the majority of my work is analysis and advisory, I find it a valuable reminder that my knowledge is limited. No matter how much data or insight I have, I can never fully “map out” any business. Yet, despite the incompleteness of these early cartographers, so much was learned of the world. So much done and accomplished. Therefore, these maps, or art pieces, serve as something to inspire both humility and achievement. This simple environmental factor helps my productivity and the overall quality of my work. Again, it’s like adding positive dice to my hand that are rolled each day.
Evan Thomsen (Don’t Chase The Dream Job, Build It: The unconventional guide to inventing your career and getting any job you want)
Take the issue of women being interrupted. An analysis of fifteen years of Supreme Court oral arguments found that ‘men interrupt more than women, and they particularly interrupt women more than they interrupt other men’. This goes for male lawyers (female lawyers weren’t found to interrupt at all) as well as judges, even though lawyers are meant to stop speaking when a justice starts speaking. And, as in the political sphere, the problem seems to have got worse as female representation on the bench has increased. An individualist solution might be to tell women to interrupt right back – perhaps working on their ‘polite interrupting’ skills. But there’s a problem with this apparently gender-neutral approach, which is that it isn’t gender-neutral in effect: interrupting simply isn’t viewed the same way when women do it. In June 2017 US Senator Kamala Harris was asking an evasive Attorney General Jeff Sessions some tough questions. When he prevaricated once too often, she interrupted him and pressed him to answer. She was then in turn (on two separate occasions) interrupted and admonished by Senator John McCain for her questioning style. He did not do the same to her colleague Senator Ron Wyden, who subjected Sessions to similarly dogged questioning, and it was only Harris who was later dubbed ‘hysterical’.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
Promote a high-performance culture: "Stretch" goals, and value those that exceed them. Promote the culture of decision-making based on facts and data. Encourage the practice of analysis and synthesis as the main element in planning and as being fundamental to the learning process. Require the presentation of analyses at your meetings. Value intellectual honesty. Value the search for truth in facts and data (see Chapters 5 and 6). Promote a "facing facts
Vicente Falconi (TRUE POWER)
In general, software engineering teams and IT departments seemed to be at the mercy of other groups who would negotiate, cajole, intimidate, and overrule even the most defensible and objectively derived plans. Even plans based on thorough analysis and backed by years of historical data were vulnerable. Most teams, which had neither a thorough analysis method nor any historical data, were powerless at the hands of others who would push them to commit to unknown (and often completely unreasonable) deliverables.
David J. Anderson (Kanban)
If we had enough data then this statistical approach would undoubtedly sort out these things, and a lot of problems are arising precisely because we haven't got enough documents for the statistical approach to be wholly valid. I know you can calculate levels of probability and so forth, but to establish this really clearly we want a lot more information than we have actually got available. This is surely our major problem that we are still at the very limits at which you can use a technique of this sort. - John Chadwick
Jennifer K. McArthur (Place-Names in the Knossos Tablets Identification and Location (Suplementos a MINOS, #9))
Many researchers have sought the secret of successful education by identifying the most successful schools in the hope of discovering what distinguishes them from others. One of the conclusions of this research is that the most successful schools, on average, are small. In a survey of 1,662 schools in Pennsylvania, for instance, 6 of the top 50 were small, which is an overrepresentation by a factor of 4. These data encouraged the Gates Foundation to make a substantial investment in the creation of small schools, sometimes by splitting large schools into smaller units. At least half a dozen other prominent institutions, such as the Annenberg Foundation and the Pew Charitable Trust, joined the effort, as did the U.S. Department of Education’s Smaller Learning Communities Program. This probably makes intuitive sense to you. It is easy to construct a causal story that explains how small schools are able to provide superior education and thus produce high-achieving scholars by giving them more personal attention and encouragement than they could get in larger schools. Unfortunately, the causal analysis is pointless because the facts are wrong. If the statisticians who reported to the Gates Foundation had asked about the characteristics of the worst schools, they would have found that bad schools also tend to be smaller than average. The truth is that small schools are not better on average; they are simply more variable. If anything, say Wainer and Zwerling, large schools tend to produce better results, especially in higher grades where a variety of curricular options is valuable. Thanks to recent advances in cognitive psychology,
Daniel Kahneman (Thinking, Fast and Slow)
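Kahneman's point, that small schools crowd both the best and worst lists simply because small samples are more variable, can be reproduced in a few lines. The simulation below is a sketch under invented assumptions (every student drawn from the same score distribution; school sizes chosen arbitrarily), not an analysis of real education data.

```python
import random

random.seed(42)

def simulate_school(n_students):
    # Every student's score comes from the SAME distribution,
    # so any ranking differences arise from sample size alone.
    scores = [random.gauss(500, 100) for _ in range(n_students)]
    return sum(scores) / len(scores)

schools = [("small", simulate_school(50)) for _ in range(500)] + \
          [("large", simulate_school(2000)) for _ in range(500)]

schools.sort(key=lambda s: s[1], reverse=True)
top50, bottom50 = schools[:50], schools[-50:]
print("small schools in top 50:   ", sum(kind == "small" for kind, _ in top50))
print("small schools in bottom 50:", sum(kind == "small" for kind, _ in bottom50))
# Small schools dominate BOTH ends: they are more variable, not better.
```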
The general laws of migration hold that the greater the obstacles and the farther the distance traveled, the more ambitious the migrants. “It is the higher status segments of a population which are most residentially mobile,” the sociologists Karl and Alma Taeuber wrote in a 1965 analysis of census data on the migrants, published the same year as the Moynihan Report. “As the distance of migration increases,” wrote the migration scholar Everett Lee, “the migrants become an increasingly superior group.” Any migration takes some measure of energy, planning, and forethought. It requires not only the desire for something better but the willingness to act on that desire to achieve it. Thus the people who undertake such a journey are more likely to be either among the better educated of their homes of origin or those most motivated to make it in the New World, researchers have found. “Migrants who overcome a considerable set of intervening obstacles do so for compelling reasons, and such migrations are not taken lightly,” Lee wrote. “Intervening obstacles serve to weed out some of the weak or the incapable.”
Isabel Wilkerson (The Warmth of Other Suns: The Epic Story of America's Great Migration)
Even revolution, particularly revolution, which claims to be materialist, is only a limitless metaphysical crusade. But can totality claim to be unity? That is the question which this book must answer. So far we can only say that the purpose of this analysis is not to give, for the hundredth time, a description of the revolutionary phenomenon, nor once more to examine the historic or economic causes of great revolutions. Its purpose is to discover in certain revolutionary data the logical sequence, the explanations, and the invariable themes of metaphysical rebellion.
Albert Camus (The Rebel)
Analysis of your social network and its members can also be highly revealing of your life, politics, and even sexual orientation, as demonstrated in a study carried out at MIT. In an analysis known as Gaydar, researchers studied the Facebook profiles of fifteen hundred students at the university, including those whose profile sexual orientation was either blank or listed as heterosexual. Based on prior research that showed gay men have more friends who are also gay (not surprising), the MIT investigators had a valuable data point to review the friend associations of their fifteen hundred students. As a result, researchers were able to predict with 78 percent accuracy whether or not a student was gay. At least ten individuals who had not previously identified as gay were flagged by the researchers’ algorithm and confirmed via in-person interviews with the students. While these findings might not be troubling in liberal Cambridge, Massachusetts, they could prove problematic in the seventy-six countries where homosexuality remains illegal, such as Sudan, Iran, Yemen, Nigeria, and Saudi Arabia, where such an “offense” is punished by death.
Marc Goodman (Future Crimes)
Besides increasing or decreasing the stimulation level of the environment, you can also achieve an optimal level of arousal by drinking beverages that have a direct impact on neocortical arousal. Alcohol, at least initially, has the effect of lowering arousal. After a couple of glasses of wine the extraverts are more likely to dip below the optimal arousal level, whereas their introverted friends, nudged closer to optimal arousal, may appear unexpectedly garrulous. Coffee, being a stimulant, has the opposite effect. After ingesting about two cups of coffee, extraverts carry out tasks more efficiently, whereas introverts perform less well. This deficit is magnified if the task they are engaged in is quantitative and if it is done under time pressure. For an introvert, an innocent couple of cups of coffee before a meeting may prove challenging, particularly if the purpose of the meeting is a rapid-fire discussion of budget projections, data analysis, or similar quantitative concerns. In the same meeting an extraverted colleague is likely to benefit from a caffeine kick that creates, in the eyes of the introverts, the illusion of competency.
Brian Little (Me, Myself, and Us: The Science of Personality and the Art of Well-Being)
They asked forty-two experienced investors in the firm to estimate the fair value of a stock (the price at which the investors would be indifferent to buying or selling). The investors based their analysis on a one-page description of the business; the data included simplified profit and loss, balance sheet, and cash flow statements for the past three years and projections for the next two. Median noise, measured in the same way as in the insurance company, was 41%. Such large differences among investors in the same firm, using the same valuation methods, cannot be good news.
Daniel Kahneman (Noise: A Flaw in Human Judgment)
More generally, a data scientist is someone who knows how to extract meaning from and interpret data, which requires both tools and methods from statistics and machine learning, as well as being human. She spends a lot of time in the process of collecting, cleaning, and munging data, because data is never clean. This process requires persistence, statistics, and software engineering skills — skills that are also necessary for understanding biases in the data, and for debugging logging output from code. Once she gets the data into shape, a crucial part is exploratory data analysis, which combines visualization and data sense. She’ll find patterns, build models, and algorithms — some with the intention of understanding product usage and the overall health of the product, and others to serve as prototypes that ultimately get baked back into the product. She may design experiments, and she is a critical part of data-driven decision making. She’ll communicate with team members, engineers, and leadership in clear language and with data visualizations so that even if her colleagues are not immersed in the data themselves, they will understand the implications.
Rachel Schutt (Doing Data Science: Straight Talk from the Frontline)
Political correspondent Jim Rutenberg’s New York Times account of the data scientists’ seminal role in the 2012 Obama victory offers a vivid picture of the capture and analysis of behavioral surplus as a political methodology. The campaign knew “every single wavering voter in the country that it needed to persuade to vote for Obama, by name, address, race, sex, and income,” and it had figured out how to target its television ads to these individuals. One breakthrough was the “persuasion score” that identified how easily each undecided voter could be persuaded to vote for the Democratic candidate.
Shoshana Zuboff (The Age of Surveillance Capitalism)
Due to the various pragmatic obstacles, it is rare for a mission-critical analysis to be done in the “fully Bayesian” manner, i.e., without the use of tried-and-true frequentist tools at the various stages. Philosophy and beauty aside, the reliability and efficiency of the underlying computations required by the Bayesian framework are the main practical issues. A central technical issue at the heart of this is that it is much easier to do optimization (reliably and efficiently) in high dimensions than it is to do integration in high dimensions. Thus the workhorse machine learning methods, while there are ongoing efforts to adapt them to the Bayesian framework, are almost all rooted in frequentist methods. A work-around is to perform MAP inference, which is optimization based. Most users of Bayesian estimation methods, in practice, are likely to use a mix of Bayesian and frequentist tools. The reverse is also true—frequentist data analysts, even if they stay formally within the frequentist framework, are often influenced by “Bayesian thinking,” referring to “priors” and “posteriors.” The most advisable position is probably to know both paradigms well, in order to make informed judgments about which tools to apply in which situations.
Jake Vanderplas (Statistics, Data Mining, and Machine Learning in Astronomy: A Practical Python Guide for the Analysis of Survey Data)
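The optimization-versus-integration contrast can be shown with the simplest possible model: estimating a rate with a Beta prior. The MAP estimate is the mode of the posterior (an optimization), while the "fully Bayesian" summary averages over the posterior (an integration, available here in closed form). The counts and the Beta(2, 2) prior below are assumptions chosen for illustration.

```python
# Beta-Bernoulli example: 7 successes in 20 trials, Beta(2, 2) prior (illustrative).
successes, trials = 7, 20
alpha0, beta0 = 2.0, 2.0

alpha = alpha0 + successes                         # posterior is Beta(alpha, beta)
beta = beta0 + (trials - successes)

mle = successes / trials                           # frequentist point estimate
map_estimate = (alpha - 1) / (alpha + beta - 2)    # posterior mode (optimization)
posterior_mean = alpha / (alpha + beta)            # average over posterior (integration)

print(f"MLE:            {mle:.3f}")            # 0.350
print(f"MAP:            {map_estimate:.3f}")   # 0.364
print(f"Posterior mean: {posterior_mean:.3f}") # 0.375
```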
Computer cores made of liquid crystal that can re-form itself into any configuration, creating the ultimate efficiency for any particular piece of cybernetic business that needs doing, shifting from storage of data to moving it to analyzing it and then altering it to a form most efficient for acting on the analysis. Hearts that can make minds, from little bits of brightness in Cowboy's skull that let him move his panzer, to large models that create working analogs of the human brain, the vast artificial intelligences that keep things moving smoothly for the Orbitals and the governments of the planet. All in miniature potential, here in the cardboard box.
Walter Jon Williams (Hardwired (Hardwired, #1))
In a rigorous statistical analysis linking county-level slave ownership from the 1860 US census and public opinion data collected between 2016 and 2011 by the Cooperative Congressional Election Study (CCES), a large-scale national survey of the American electorate conducted by nearly forty universities, they find that whites residing in areas that had the highest levels of slavery in 1860 demonstrate significantly different attitudes today from whites who reside in areas that had lower historical levels of slavery: (1) they are more politically conservative and Republican leaning; (2) they are more opposed to affirmative action; and (3) they score higher on questions measuring racial resentment.
Robert P. Jones (White Too Long: The Legacy of White Supremacy in American Christianity)
Scalable Social Network Analysis. The SSNA would monitor telephone calls, conference calls, and ATM withdrawals, but it also sought to develop a far more invasive surveillance technology, one that could “capture human activities in surveillance environments.” The Activity Recognition and Monitoring program, or ARM, was modeled after England’s CCTV camera. Surveillance cameras would be set up across the nation, and through the ARM program, they would capture images of people as they went about their daily lives, then save these images to massive data storage banks for computers to examine. Using state-of-the-art facial recognition software, ARM would seek to identify who was behaving outside the computer’s pre-programmed threshold for “ordinary.” The parameters for “ordinary” remain classified.
Annie Jacobsen (The Pentagon's Brain: An Uncensored History of DARPA, America's Top-Secret Military Research Agency)
When heuristics don’t yield the results we expect, you’d think we would eventually realize that something’s wrong. Even if we don’t locate the biases, we should be able to see the discrepancy between what we wanted and what we got, right? Well, not necessarily. As it turns out, we have biases that support our biases! If we’re partial to one option—perhaps because it’s more memorable, or framed to minimize loss, or seemingly consistent with a promising pattern—we tend to search for information that will justify choosing that option. On the one hand, it’s sensible to make choices that we can defend with data and a list of reasons. On the other hand, if we’re not careful, we’re likely to conduct an imbalanced analysis, falling prey to a cluster of errors collectively known as “confirmation biases.
Sheena Iyengar (The Art of Choosing)
In 2010, the dominance of inclusive fitness theory was finally broken. After struggling as a member of the small but still muted contrarian school for a decade, I joined two Harvard mathematicians and theoretical biologists, Martin Nowak and Corina Tarnita, for a top-to-bottom analysis of inclusive fitness. Nowak and Tarnita had independently discovered that the foundational assumptions of inclusive fitness theory were unsound, while I had demonstrated that the field data used to support the theory could be explained equally well, or better, with direct natural selection—as in the sex-allocation case of ants just described. Our joint report was published on August 26, 2010, as the cover article of the prestigious journal Nature. Knowing the controversy involved, the Nature editors had proceeded with unusual caution. One of them familiar with the subject and the mode of mathematical analysis came from London to Harvard to hold a special meeting with Nowak, Tarnita, and myself. He approved, and the manuscript was next examined by three anonymous experts. Its appearance, as we expected, caused a Vesuvian explosion of protest—the kind cherished by journalists. No fewer than 137 biologists committed to inclusive fitness theory in their research or teaching signed a protest in a Nature article published the following year. When I repeated part of my argument as a chapter in the 2012 book The Social Conquest of Earth, Richard Dawkins responded with the indignant fervor of a true believer. In his review for the British magazine Prospect, he urged others not to read what I had written, but instead to cast the entire book away, “with great force,” no less.
Edward O. Wilson (The Meaning of Human Existence)
An analysis of 15 years of Supreme Court oral arguments found that men interrupt more than women. And they particularly interrupt women more than they interrupt other men... An individualist solution might be to tell women to interrupt right back... But there’s a problem... interrupting simply isn’t viewed the same way when women do it. In June 2017, US senator Kamala Harris was asking an evasive attorney general Jeff Sessions some tough questions. When he prevaricated once too often, she interrupted him and pressed him to answer. She was then in turn, on two separate occasions, interrupted and admonished by senator John McCain for her questioning style. He did not do the same to her (male) colleague Senator Ron Wyden who subjected Sessions to similarly dogged questioning. And it was only Harris who was then dubbed ‘hysterical’.
Caroline Criado Pérez (Invisible Women: Data Bias in a World Designed for Men)
G. Stanley Hall, a creature of his times, believed strongly that adolescence was determined – a fixed feature of human development that could be explained and accounted for in scientific fashion. To make his case, he relied on Haeckel's faulty recapitulation idea, Lombroso's faulty phrenology-inspired theories of crime, a plethora of anecdotes and one-sided interpretations of data. Given the issues, theories, standards and data-handling methods of his day, he did a superb job. But when you take away the shoddy theories, put the anecdotes in their place, and look for alternate explanations of the data, the bronze statue tumbles hard. I have no doubt that many of the street teens of Hall's time were suffering or insufferable, but it's a serious mistake to develop a timeless, universal theory of human nature around the peculiarities of the people of one's own time and place.
Robert Epstein (Teen 2.0: Saving Our Children and Families from the Torment of Adolescence)
The need for managers with data-analytic skills: The consulting firm McKinsey and Company estimates that “there will be a shortage of talent necessary for organizations to take advantage of big data. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” (Manyika, 2011). Why 10 times as many managers and analysts as those with deep analytical skills? Surely data scientists aren’t so difficult to manage that they need 10 managers! The reason is that a business can get leverage from a data science team for making better decisions in multiple areas of the business. However, as McKinsey is pointing out, the managers in those areas need to understand the fundamentals of data science to effectively get that leverage.
Foster Provost (Data Science for Business: What you need to know about data mining and data-analytic thinking)
Ron Rivest, one of the inventors of RSA, thinks that restricting cryptography would be foolhardy: It is poor policy to clamp down indiscriminately on a technology just because some criminals might be able to use it to their advantage. For example, any U.S. citizen can freely buy a pair of gloves, even though a burglar might use them to ransack a house without leaving fingerprints. Cryptography is a data-protection technology, just as gloves are a hand-protection technology. Cryptography protects data from hackers, corporate spies, and con artists, whereas gloves protect hands from cuts, scrapes, heat, cold, and infection. The former can frustrate FBI wiretapping, and the latter can thwart FBI fingerprint analysis. Cryptography and gloves are both dirt-cheap and widely available. In fact, you can download good cryptographic software from the Internet for less than the price of a good pair of gloves.
Simon Singh (The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography)
While poorhouses have been physically demolished, their legacy remains alive and well in the automated decision-making systems that encage and entrap today's poor. For all their high-tech polish, our modern systems of poverty management - automated decision-making, data mining, and predictive analysis - retain a remarkable kinship with the poorhouses of the past. Our new digital tools spring from punitive, moralistic views of poverty and create a system of high-tech containment and investigation. The digital poorhouse deters the poor from accessing public resources; polices their labor, spending, sexuality, and parenting; tries to predict their future behavior; and punishes and criminalizes those who do not comply with its dictates. In the process, it creates ever-finer moral distinctions between the 'deserving' and 'undeserving' poor, categorizations that rationalize our national failure to care for one another.
Virginia Eubanks (Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor)
If we look at the way an industrial producer creates new products, we see a long list of trials and errors and eventually improvement in quality at a lower cost. Urban policies and strategies, by contrast, often do not follow this logic; they are often repeated even when it is well known that they failed. For instance, policies like rent control, greenbelts, new light rail transports, among others, are constantly repeated in spite of a near consensus on their failure to achieve their objectives. A quantitative evaluation of the failure of these policies is usually well documented through special reports or academic papers; it is seldom produced internally by cities, however, and the information does not seem to reach urban decision makers. Only a systematic analysis of data through indicators allows urban policies to be improved over time and failing policies to be abandoned. But as Angus Deaton wrote: ‘without data, anyone who does anything is free to claim success.’
Alain Bertaud (Order Without Design: How Markets Shape Cities)
The issue of antidepressant-associated suicide has become front-page news, the result of an analysis suggesting a link between medication use and suicidal ideation among children, adolescents, and adults up to age 24 in short-term (4 to 16 weeks), placebo-controlled trials of nine newer antidepressant drugs. The data from trials involving more than 4,400 patients suggested that the average risk of suicidal thinking or behavior (suicidality) during the first few months of treatment in those receiving antidepressants was 4 percent, twice the placebo risk of 2 percent. No suicides occurred in these trials. The analysis also showed no increase in suicide risk among the 25 to 65 age group. Antidepressants reduced suicidality among those over age 65. Following public hearings on the subject, in October 2004, the FDA requested the addition of “black box” warnings—the most serious warning placed on the labeling of a prescription medication—to all antidepressant drugs, old and new.
Benjamin James Sadock (Kaplan & Sadock's Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry)
As a thought experiment, von Neumann's analysis was simplicity itself. He was saying that the genetic material of any self-reproducing system, whether natural or artificial, must function very much like a stored program in a computer: on the one hand, it had to serve as live, executable machine code, a kind of algorithm that could be carried out to guide the construction of the system's offspring; on the other hand, it had to serve as passive data, a description that could be duplicated and passed along to the offspring. As a scientific prediction, that same analysis was breathtaking: in 1953, when James Watson and Francis Crick finally determined the molecular structure of DNA, it would fulfill von Neumann's two requirements exactly. As a genetic program, DNA encodes the instructions for making all the enzymes and structural proteins that the cell needs in order to function. And as a repository of genetic data, the DNA double helix unwinds and makes a copy of itself every time the cell divides in two. Nature thus built the dual role of the genetic material into the structure of the DNA molecule itself.
M. Mitchell Waldrop (The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal)
I will show that this spectacular increase in inequality largely reflects an unprecedented explosion of very elevated incomes from labor, a veritable separation of the top managers of large firms from the rest of the population. One possible explanation of this is that the skills and productivity of these top managers rose suddenly in relation to those of other workers. Another explanation, which to me seems more plausible and turns out to be much more consistent with the evidence, is that these top managers by and large have the power to set their own remuneration, in some cases without limit and in many cases without any clear relation to their individual productivity, which in any case is very difficult to estimate in a large organization. This phenomenon is seen mainly in the United States and to a lesser degree in Britain, and it may be possible to explain it in terms of the history of social and fiscal norms in those two countries over the past century. The tendency is less marked in other wealthy countries (such as Japan, Germany, France, and other continental European states), but the trend is in the same direction. To expect that the phenomenon will attain the same proportions elsewhere as it has done in the United States would be risky until we have subjected it to a full analysis—which unfortunately is not that simple, given the limits of the available data.
Thomas Piketty (Capital in the Twenty-First Century)
Military analysis is not an exact science. To return to the wisdom of Sun Tzu, and paraphrase the great Chinese political philosopher, it is at least as close to art. But many logical methods offer insight into military problems - even if solutions to those problems ultimately require the use of judgement and of broader political and strategic considerations as well. Military affairs may not be as amenable to quantification and formal methodological treatment as economics, for example. However, even if our main goal in analysis is generally to illuminate choices, bound problems, and rule out bad options - rather than arrive unambiguously at clear policy choices - the discipline of military analysis has a great deal to offer. Moreover, simple back-of-the-envelope methodologies often provide substantial insight without requiring the churning of giant computer models or access to the classified data of official Pentagon studies, allowing generalists and outsiders to play important roles in defense analytical debates. We have seen all too often (in the broad course of history as well as in modern times) what happens when we make key defense policy decisions based solely on instinct, ideology, and impression. To avoid cavalier, careless, and agenda-driven decision-making, we therefore need to study the science of war as well - even as we also remember the cautions of Clausewitz and avoid hubris in our predictions about how any war or other major military endeavor will ultimately unfold.
Michael O'Hanlon
The largest and most rigorous study that is currently available in this area is the third one commissioned by the British Home Office (Kelly, Lovett, & Regan, 2005). The analysis was based on the 2,643 sexual assault cases (where the outcome was known) that were reported to British police over a 15-year period of time. Of these, 8% were classified by the police department as false reports. Yet the researchers noted that some of these classifications were based simply on the personal judgments of the police investigators, based on the victim’s mental illness, inconsistent statements, drinking or drug use. These classifications were thus made in violation of the explicit policies of their own police agencies. The researchers therefore supplemented the information contained in the police files by collecting many different types of additional data, including: reports from forensic examiners, questionnaires completed by police investigators, interviews with victims and victim service providers, and content analyses of the statements made by victims and witnesses. They then proceeded to evaluate each case using the official criteria for establishing a false allegation, which was that there must be either “a clear and credible admission by the complainant” or “strong evidential grounds” (Kelly, Lovett, & Regan, 2005). On the basis of this analysis, the percentage of false reports dropped to 2.5%." Lonsway, Kimberly A., Joanne Archambault, and David Lisak. "False reports: Moving beyond the issue to successfully investigate and prosecute non-stranger sexual assault." The Voice 3.1 (2009): 1-11.
David Lisak
It is well-known that a big percentage of all marriages in the United States end in divorce or separation (about 39 percent, according to the latest data).[30] But staying together is not what really counts. Analysis of the Harvard Study data shows that marriage per se accounts for only 2 percent of subjective well-being later in life.[31] The important thing for health and well-being is relationship satisfaction. Popular culture would have you believe the secret to this satisfaction is romantic passion, but that is wrong. On the contrary, a lot of unhappiness can attend the early stages of romance. For example, researchers find that it is often accompanied by rumination, jealousy, and “surveillance behaviors”—not what we typically associate with happiness. Furthermore, “destiny beliefs” about soul mates or love being meant to be can predict low forgiveness when paired with attachment anxiety.[32] Romance often hijacks our brains in a way that can cause the highs of elation or the depths of despair.[33] You might accurately say that falling in love is the start-up cost for happiness—an exhilarating but stressful stage we have to endure to get to the relationships that actually fulfill us. The secret to happiness isn’t falling in love; it’s staying in love, which depends on what psychologists call “companionate love”—love based less on passionate highs and lows and more on stable affection, mutual understanding, and commitment.[34] You might think “companionate love” sounds a little, well, disappointing. I certainly did the first time I heard it, on the heels of great efforts to win my future wife’s love. But over the past thirty years, it turns out that we don’t just love each other; we like each other, too. Once and always my romantic love, she is also my best friend.
Arthur C. Brooks (From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life)
In April, Dr. Vladimir (Zev) Zelenko, M.D., an upstate New York physician and early HCQ adopter, reproduced Dr. Didier Raoult’s “startling successes” by dramatically reducing expected mortalities among 800 patients Zelenko treated with the HCQ cocktail.29 By late April of 2020, US doctors were widely prescribing HCQ to patients and family members, reporting outstanding results, and taking it themselves prophylactically. In May 2020, Dr. Harvey Risch, M.D., Ph.D. published the most comprehensive study, to date, on HCQ’s efficacy against COVID. Risch is Yale University’s super-eminent Professor of Epidemiology, an illustrious world authority on the analysis of aggregate clinical data. Dr. Risch concluded that evidence is unequivocal for early and safe use of the HCQ cocktail. Dr. Risch published his work—a meta-analysis reviewing five outpatient studies—in affiliation with the Johns Hopkins Bloomberg School of Public Health in the American Journal of Epidemiology, under the urgent title, “Early Outpatient Treatment of Symptomatic, High-Risk COVID-19 Patients that Should be Ramped-Up Immediately as Key to Pandemic Crisis.”30 He further demonstrated, with specificity, how HCQ’s critics—largely funded by Bill Gates and Dr. Tony Fauci31—had misinterpreted, misstated, and misreported negative results by employing faulty protocols, most of which showed HCQ efficacy administered without zinc and Zithromax which were known to be helpful. But their main trick for ensuring the protocols failed was to wait until late in the disease process before administering HCQ—when it is known to be ineffective. Dr. Risch noted that evidence against HCQ used late in the course of the disease is irrelevant. While acknowledging that Dr. Didier Raoult’s powerful French studies favoring HCQ efficacy were not randomized, Risch argued that the results were, nevertheless, so stunning as to far outweigh that deficit: “The first study of HCQ + AZ [ . . . ] showed a 50-fold benefit of HCQ + AZ vs. standard of care . . . This is such an enormous difference that it cannot be ignored despite lack of randomization.”32 Risch has pointed out that the supposed need for randomized placebo-controlled trials is a shibboleth. In 2014 the Cochrane Collaboration proved in a landmark meta-analysis of 10,000 studies, that observational studies of the kind produced by Didier Raoult are equal
Robert F. Kennedy Jr. (The Real Anthony Fauci: Bill Gates, Big Pharma, and the Global War on Democracy and Public Health)
This is not a hypothetical example. In the middle of the nineteenth century Karl Marx reached brilliant economic insights. Based on these insights he predicted an increasingly violent conflict between the proletariat and the capitalists, ending with the inevitable victory of the former and the collapse of the capitalist system. Marx was certain that the revolution would start in countries that spearheaded the Industrial Revolution – such as Britain, France and the USA – and spread to the rest of the world. Marx forgot that capitalists know how to read. At first only a handful of disciples took Marx seriously and read his writings. But as these socialist firebrands gained adherents and power, the capitalists became alarmed. They too perused Das Kapital, adopting many of the tools and insights of Marxist analysis. In the twentieth century everybody from street urchins to presidents embraced a Marxist approach to economics and history. Even diehard capitalists who vehemently resisted the Marxist prognosis still made use of the Marxist diagnosis. When the CIA analysed the situation in Vietnam or Chile in the 1960s, it divided society into classes. When Nixon or Thatcher looked at the globe, they asked themselves who controls the vital means of production. From 1989 to 1991 George Bush oversaw the demise of the Evil Empire of communism, only to be defeated in the 1992 elections by Bill Clinton. Clinton’s winning campaign strategy was summarised in the motto: ‘It’s the economy, stupid.’ Marx could not have said it better. As people adopted the Marxist diagnosis, they changed their behaviour accordingly. Capitalists in countries such as Britain and France strove to better the lot of the workers, strengthen their national consciousness and integrate them into the political system. Consequently when workers began voting in elections and Labour gained power in one country after another, the capitalists could still sleep soundly in their beds. As a result, Marx’s predictions came to naught. Communist revolutions never engulfed the leading industrial powers such as Britain, France and the USA, and the dictatorship of the proletariat was consigned to the dustbin of history. This is the paradox of historical knowledge. Knowledge that does not change behaviour is useless. But knowledge that changes behaviour quickly loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes outdated.
Yuval Noah Harari (Homo Deus: A Brief History of Tomorrow)
What are these substances? Medicines or drugs or sacramental foods? It is easier to say what they are not. They are not narcotics, nor intoxicants, nor energizers, nor anaesthetics, nor tranquilizers. They are, rather, biochemical keys which unlock experiences shatteringly new to most Westerners. For the last two years, staff members of the Center for Research in Personality at Harvard University have engaged in systematic experiments with these substances. Our first inquiry into the biochemical expansion of consciousness has been a study of the reactions of Americans in a supportive, comfortable naturalistic setting. We have had the opportunity of participating in over one thousand individual administrations. From our observations, from interviews and reports, from analysis of questionnaire data, and from pre- and postexperimental differences in personality test results, certain conclusions have emerged. (1) These substances do alter consciousness. There is no dispute on this score. (2) It is meaningless to talk more specifically about the “effect of the drug.” Set and setting, expectation, and atmosphere account for all specificity of reaction. There is no “drug reaction” but always setting-plus-drug. (3) In talking about potentialities it is useful to consider not just the setting-plus-drug but rather the potentialities of the human cortex to create images and experiences far beyond the narrow limitations of words and concepts. Those of us on this research project spend a good share of our working hours listening to people talk about the effect and use of consciousness-altering drugs. If we substitute the words human cortex for drug we can then agree with any statement made about the potentialities—for good or evil, for helping or hurting, for loving or fearing. Potentialities of the cortex, not of the drug. The drug is just an instrument. In analyzing and interpreting the results of our studies we looked first to the conventional models of modern psychology—psychoanalytic, behavioristic—and found these concepts quite inadequate to map the richness and breadth of expanded consciousness. To understand our findings we have finally been forced back on a language and point of view quite alien to us who are trained in the traditions of mechanistic objective psychology. We have had to return again and again to the nondualistic conceptions of Eastern philosophy, a theory of mind made more explicit and familiar in our Western world by Bergson, Aldous Huxley, and Alan Watts. In the first part of this book Mr. Watts presents with beautiful clarity this theory of consciousness, which we have seen confirmed in the accounts of our research subjects—philosophers, unlettered convicts, housewives, intellectuals, alcoholics. The leap across entangling thickets of the verbal, to identify with the totality of the experienced, is a phenomenon reported over and over by these persons.
Alan W. Watts (The Joyous Cosmology: Adventures in the Chemistry of Consciousness)
A famous British writer is revealed to be the author of an obscure mystery novel. An immigrant is granted asylum when authorities verify he wrote anonymous articles critical of his home country. And a man is convicted of murder when he’s connected to messages painted at the crime scene. The common element in these seemingly disparate cases is “forensic linguistics”—an investigative technique that helps experts determine authorship by identifying quirks in a writer’s style. Advances in computer technology can now parse text with ever-finer accuracy. Consider the recent outing of Harry Potter author J.K. Rowling as the writer of The Cuckoo’s Calling, a crime novel she published under the pen name Robert Galbraith. England’s Sunday Times, responding to an anonymous tip that Rowling was the book’s real author, hired Duquesne University’s Patrick Juola to analyze the text of Cuckoo, using software that he had spent over a decade refining. One of Juola’s tests examined sequences of adjacent words, while another zoomed in on sequences of characters; a third test tallied the most common words, while a fourth examined the author’s preference for long or short words. Juola wound up with a linguistic fingerprint—hard data on the author’s stylistic quirks. He then ran the same tests on four other books: The Casual Vacancy, Rowling’s first post-Harry Potter novel, plus three stylistically similar crime novels by other female writers. Juola concluded that Rowling was the most likely author of The Cuckoo’s Calling, since she was the only one whose writing style showed up as the closest or second-closest match in each of the tests. After consulting an Oxford linguist and receiving a concurring opinion, the newspaper confronted Rowling, who confessed. Juola completed his analysis in about half an hour. By contrast, in the early 1960s, it had taken a team of two statisticians—using what was then a state-of-the-art, high-speed computer at MIT—three years to complete a project to reveal who wrote 12 unsigned Federalist Papers. Robert Leonard, who heads the forensic linguistics program at Hofstra University, has also made a career out of determining authorship. Certified to serve as an expert witness in 13 states, he has presented evidence in cases such as that of Christopher Coleman, who was arrested in 2009 for murdering his family in Waterloo, Illinois. Leonard testified that Coleman’s writing style matched threats spray-painted at his family’s home. Coleman was convicted and is serving a life sentence. Since forensic linguists deal in probabilities, not certainties, it is all the more essential to further refine this field of study, experts say. “There have been cases where it was my impression that the evidence on which people were freed or convicted was iffy in one way or another,” says Edward Finegan, president of the International Association of Forensic Linguists. Vanderbilt law professor Edward Cheng, an expert on the reliability of forensic evidence, says that linguistic analysis is best used when only a handful of people could have written a given text. As forensic linguistics continues to make headlines, criminals may realize the importance of choosing their words carefully. And some worry that software also can be used to obscure distinctive written styles. “Anything that you can identify to analyze,” says Juola, “I can identify and try to hide.”
Anonymous
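The stylometric tests described in that passage can be approximated with a short script. The following is an illustrative sketch only, not Juola's actual software: it builds character 4-gram frequency profiles for a disputed text and two candidate authors' samples and compares them with cosine similarity. The file names, the n-gram size, and the similarity measure are all assumptions made for the example.

```python
from collections import Counter
from math import sqrt

def char_ngram_profile(text, n=4):
    """Relative frequencies of character n-grams: a crude stylistic fingerprint."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return {gram: c / total for gram, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in p.keys() & q.keys())
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical files standing in for the disputed text and known writing samples.
disputed = char_ngram_profile(open("disputed.txt").read())
for name, path in [("Author A", "author_a_sample.txt"), ("Author B", "author_b_sample.txt")]:
    sample = char_ngram_profile(open(path).read())
    print(f"{name}: similarity = {cosine_similarity(disputed, sample):.3f}")
```

A real stylometric comparison would use many more features (word n-grams, function-word frequencies, word-length distributions) and report probabilities rather than a single score.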
Two observations take us across the finish line. The Second Law ensures that entropy increases throughout the entire process, and so the information hidden within the hard drives, Kindles, old-fashioned paper books, and everything else you packed into the region is less than that hidden in the black hole. From the results of Bekenstein and Hawking, we know that the black hole's hidden information content is given by the area of its event horizon. Moreover, because you were careful not to overspill the original region of space, the black hole's event horizon coincides with the region's boundary, so the black hole's entropy equals the area of this surrounding surface. We thus learn an important lesson. The amount of information contained within a region of space, stored in any objects of any design, is always less than the area of the surface that surrounds the region (measured in square Planck units). This is the conclusion we've been chasing. Notice that although black holes are central to the reasoning, the analysis applies to any region of space, whether or not a black hole is actually present. If you max out a region's storage capacity, you'll create a black hole, but as long as you stay under the limit, no black hole will form. I hasten to add that in any practical sense, the information storage limit is of no concern. Compared with today's rudimentary storage devices, the potential storage capacity on the surface of a spatial region is humongous. A stack of five off-the-shelf terabyte hard drives fits comfortable within a sphere of radius 50 centimeters, whose surface is covered by about 10^70 Planck cells. The surface's storage capacity is thus about 10^70 bits, which is about a billion, trillion, trillion, trillion, trillion terabytes, and so enormously exceeds anything you can buy. No one in Silicon Valley cares much about these theoretical constraints. Yet as a guide to how the universe works, the storage limitations are telling. Think of any region of space, such as the room in which I'm writing or the one in which you're reading. Take a Wheelerian perspective and imagine that whatever happens in the region amounts to information processing-information regarding how things are right now is transformed by the laws of physics into information regarding how they will be in a second or a minute or an hour. Since the physical processes we witness, as well as those by which we're governed, seemingly take place within the region, it's natural to expect that the information those processes carry is also found within the region. But the results just derived suggest an alternative view. For black holes, we found that the link between information and surface area goes beyond mere numerical accounting; there's a concrete sense in which information is stored on their surfaces. Susskind and 'tHooft stressed that the lesson should be general: since the information required to describe physical phenomena within any given region of space can be fully encoded by data on a surface that surrounds the region, then there's reason to think that the surface is where the fundamental physical processes actually happen. Our familiar three-dimensional reality, these bold thinkers suggested, would then be likened to a holographic projection of those distant two-dimensional physical processes. 
If this line of reasoning is correct, then there are physical processes taking place on some distant surface that, much like a puppeteer pulls strings, are fully linked to the processes taking place in my fingers, arms, and brain as I type these words at my desk. Our experiences here, and that distant reality there, would form the most interlocked of parallel worlds. Phenomena in the two-I'll call them Holographic Parallel Universes-would be so fully joined that their respective evolutions would be as connected as me and my shadow.
Brian Greene (The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos)
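Greene's figure of roughly 10^70 Planck cells for a sphere of radius 50 centimeters can be checked with back-of-the-envelope arithmetic. A minimal worked example, assuming the standard value of the Planck length, l_P ≈ 1.6 × 10^-35 m:

```latex
A = 4\pi r^{2} = 4\pi (0.5\,\mathrm{m})^{2} \approx 3.1\,\mathrm{m}^{2},
\qquad
\ell_P^{2} \approx \left(1.6\times 10^{-35}\,\mathrm{m}\right)^{2} \approx 2.6\times 10^{-70}\,\mathrm{m}^{2},
\qquad
\frac{A}{\ell_P^{2}} \approx \frac{3.1}{2.6\times 10^{-70}} \approx 1.2\times 10^{70},
```

which is consistent with the quoted storage bound of about 10^70 bits.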
Data sources All these components give you feedback and insight into how best to configure your campaigns, although the data sources are often spread around in different places and sometimes difficult to find and interpret. Campaign types Search & Partner Dynamic Search Display Network Remarketing & Dynamic Remarketing Google Shopping for eCommerce Google Merchant Center Data feeds Google Shopping Campaigns Device selection PC / Tablets Mobiles & Smartphones Location Targets & Exclusions Country Metro State City Custom and Radius Daily Budgets Manual CPC Enhanced CPC Flexible Bidding strategies Conversion Optimizer (CPA) Return on Ad Spend (ROAS) Conversion Tracking Setup and configuration Transaction-Specific Conversion Tracking Offline Conversion import Phone call tracking - website call conversions Conversion Rates Conversion Costs Conversion Values Ad Groups Default Bids Keyword Themes Ads Ad Messaging & Demographics Creative Text & Formatting Images* Display Ad Builder* Ad Preview and Diagnosis Account, Campaign and Ad Group Ad Extensions Sitelinks Locations Calls Reviews Apps Callouts Ad Rotation & Frequency Capping Rotate Optimise for Clicks Optimise for Conversions Keywords Bids Broad Modified Broad Phrase Exact Destination urls Keyword Diagnosis User Search Queries Keyword Opportunities Negative Keywords & Match Types Shared Library Shared Budgets* Automated Rules Flexible Bid Strategies Audiences & Exclusions* Campaign Negative Keywords Display Campaign Placement Exclusions* NEW! Business Data and Ad Customizers Advanced Delivery Methods Standard Accelerated Impression Share Lost IS (Budget) Lost IS (Rank) Search Funnels Assisted Impressions & Clicks Assisted Conversions Segmentation Analysis Device performance Network performance Top vs Other position performance Dimension Analysis Days & Times Shopping Geographic User Locations & Distance Search Terms Automatic Placements* Call Details (Call Extensions) Tools Change history Keyword Planner* Display Planner* Opportunities* Scheduling & Day Parting Automated Rules Competitor Ad Auction Insights Reporting* AdWords Campaign Experiments* Browser Languages* *indicates an item not covered in this version of the book
David Rothwell ("Clicks Customers Cashflow: AdWords Marketing that Pays - Why there's No More Excuses": or "How I Learned to Stop Worrying about Budgets and Love Conversions")
All I ask is to see accurate and authentic data, analyzed from all directions—free of bias and tunnel vision—before I layer my emotions upon it. In the end, we must live with the consequences of our decisions. After all input of facts and statistical analysis, our emotions may defy reconciliation with data.
Neil deGrasse Tyson (Starry Messenger: Cosmic Perspectives on Civilization)
Entering college students show the same trend: in 2016, only 37% said that “becoming successful in a business of my own” was important, down from 50% in 1984 (adjusted for relative centrality). So, compared to GenX college students, iGen’ers are less likely to be drawn to entrepreneurship. These beliefs are affecting actual behavior. A Wall Street Journal analysis of Federal Reserve data found that only 3.6% of households headed by adults younger than 30 owned at least part of a private company in 2013, down from 10.6% in 1989. All the talk about the young generation being attracted to entrepreneurship turns out to be just that—talk.
Jean M. Twenge (iGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy--and Completely Unprepared for Adulthood--and What That Means for the Rest of Us)
Kirsch and his colleagues did a second meta-analysis, this time on the 35 clinical trials conducted for four of the six most widely prescribed antidepressants approved between 1987 and 1999.16 Now looking at data from more than 5,000 patients, the researchers found again that placebos worked just as well as the popular antidepressant drugs Prozac, Effexor, Serzone, and Paxil a whopping 81 percent of the time. In most of the remaining cases where the drug did perform better, the benefit was so small that it wasn’t statistically significant. Only with severely depressed patients were the prescription drugs clearly better than placebo.
Joe Dispenza (You Are the Placebo: Making Your Mind Matter)
in the end, I found that the proportions obtaining in Colebrooke (British Orientalist, d. 1837)’s 1818 donation to the India Office Library generally held up. Out of a total of some twenty thousand manuscripts listed in these catalogs on Yoga, Nyaya-Vaisheshika, and Vedanta philosophy, a mere 260 were Yoga Sutra manuscripts (including commentaries), with only thirty-five dating from before 1823; 513 were manuscripts on Hatha or Tantric Yoga, manuscripts of works attributed to Yajnavalkya, or of the Yoga Vasistha; 9,032 were Nyaya manuscripts, and 10,320 were Vedanta manuscripts. (...) What does this quantitative analysis tell us? For every manuscript on Yoga philosophy proper (excluding Hatha and Tantric Yoga) held in major Indian manuscript libraries and archives, there exist some forty Vedanta manuscripts and nearly as many Nyaya-Vaisheshika manuscripts. Manuscripts of the Yoga Sutra and its commentaries account for only one-third of all manuscripts on Yoga philosophy, the other two-thirds being devoted mainly to Hatha and Tantric Yoga. But it is the figure of 1.27 percent that stands out in highest relief, because it tells us that after the late sixteenth century virtually no one was copying the Yoga Sutra because no one was commissioning Yoga Sutra manuscripts, and no one was commissioning Yoga Sutra manuscripts because no one was interested in reading the Yoga Sutra. Some have argued that instruction in the Yoga Sutra was based on rote memorization or chanting: this is the position of Krishnamacharya’s biographers as well as of a number of critical scholars. But this is pure speculation, undercut by the nineteenth-century observations of James Ballantyne, Dayananda Saraswati, Rajendralal Mitra, Friedrich Max Müller, and others. There is no explicit record, in either the commentarial tradition itself or in the sacred or secular literatures of the past two thousand years, of adherents of the Yoga school memorizing, chanting, or claiming an oral transmission for their traditions. Given these data, we may conclude that Colebrooke’s laconic, if not hostile, treatment of the Yoga Sutra undoubtedly stemmed from the fact that by his time, Patanjali’s system had become an empty signifier, with no traditional schoolmen to expound or defend it and no formal or informal outlets of instruction in its teachings. It had become a moribund tradition, an object of universal indifference. The Yoga Sutra had for all intents and purposes been lost until Colebrooke found it.
David Gordon White (The Yoga Sutra of Patanjali: A Biography)
A recent analysis tried to figure out how much some of these services are worth to people by asking them how much they’d have to be paid to give them up. They estimated that search engines are worth $17,530 every year to the average American; email is worth $8,414; digital maps $3,648; and social media $322. We pay $0 for these services. Pretty amazing!
Seth Stephens-Davidowitz (Don't Trust Your Gut: Using Data to Get What You Really Want in LIfe)
And while the details of how it is implemented vary somewhat from company to company, the core elements of the method are: the creation of a cross-functional team, or a set of teams that break down the traditional silos of marketing and product development and combine talents; the use of qualitative research and quantitative data analysis to gain deep insights into user behavior and preferences; and the rapid generation and testing of ideas, and the use of rigorous metrics to evaluate—and then act on—those results.
Sean Ellis (Hacking Growth: How Today's Fastest-Growing Companies Drive Breakout Success)
Jakob Bernoulli had shown that through mathematical analysis one could learn how the inner hidden probabilities that underlie natural systems are reflected in the data those systems produce.
Leonard Mlodinow (The Drunkard's Walk: How Randomness Rules Our Lives)
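Bernoulli's point, that hidden probabilities show through in the data a system produces, can be illustrated with a minimal simulation. The sketch below is purely illustrative; the hidden probability of 0.3 and the sample sizes are arbitrary choices.

```python
import random

random.seed(42)
hidden_p = 0.3  # the "inner hidden probability", chosen arbitrarily for illustration

for n in (10, 100, 1_000, 10_000, 100_000):
    observed = sum(random.random() < hidden_p for _ in range(n)) / n
    print(f"n = {n:>6}: observed frequency = {observed:.4f}")
# As n grows, the observed frequency settles near 0.3: the data reflect the hidden probability.
```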
Solvay Business School Professor Paul Verdin and I developed a perspective that frames an organization's strategy as a hypothesis rather than a plan.62 Like all hypotheses, it starts with situation assessment and analysis –strategy's classic tools. Also, like all hypotheses, it must be tested through action. When strategy is seen as a hypothesis to be continually tested, encounters with customers provide valuable data of ongoing interest to senior executives.
Amy C. Edmondson (The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth)
Clustering analysis developed originally from anthropology in 1932, before it was introduced to psychology in 1938 and was later adopted by personality psychology in 1943 for trait theory classification. Today, clustering analysis is used in data mining, information retrieval, machine learning, text mining, web analysis, marketing, medical diagnosis, and numerous other fields.
Oliver Theobald (Statistics for Absolute Beginners: A Plain English Introduction)
One of those 48 studies is the Danish analysis published in November 2020 in the world-renowned journal Annals of Internal Medicine, which concluded: „The trial found no statistically significant benefit of wearing a face mask.“1416 Shortly before, U.S. researcher Yinon Weiss updated his charts on cloth face masks mandates in various countries and U.S. states—and they also showed that mask mandates have made no difference or may even have been counterproductive.1417 The aforementioned website „Ärzte klären auf“ showed a graph with data going until December 4, 2020, which also refutes the effectiveness of the mask obligation.
Torsten Engelbrecht (Virus Mania: Corona/COVID-19, Measles, Swine Flu, Cervical Cancer, Avian Flu, SARS, BSE, Hepatitis C, AIDS, Polio, Spanish Flu. How the Medical Industry ... Billion-Dollar Profits At Our Expense)
Leaders are particularly prone to loneliness, in no small part because real friendships at work are difficult or impossible with people under one’s authority and supervision. Work friendships are so important that 70 percent of people say friendship at work is the most important element to a happy work life, and 58 percent say they would turn down a higher-paying job if it meant not getting along with coworkers.[21] According to a data analysis conducted by Gallup in 2020, employees who say they have a best friend at work are almost twice as likely as others to enjoy their workday and almost 50 percent more likely to report high social well-being.
Arthur C. Brooks (From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life)
Why is Shiba Inu losing all its popularity? Several cryptocurrencies are losing their lustre as a result of the crypto crisis. Shiba Inu (SHIB) is one such case, created as an Ethereum-compatible alternative to Dogecoin (DOGE), previously in the top ten digital tokens but currently struggling to pass DOGE in market cap. The dog-based meme coin has declined almost 90% from its high value of ₹0.0061 in late October 2021. Currently, it is trading around ₹0.000950. Furthermore, the price has not shown any exponential growth in the last several months. Shiba Inu's overall market capitalization has plummeted to ₹519B. During its peak, it was around ₹3,031B, up to five times what it is now. However, because of the token's cheap pricing, trade volumes have remained steady. Last year, Shiba Inu (SHIB) became a worldwide sensation, and everyone wanted to get on board the hype train. Now, that hype is non-existent, and interest in Shiba has vanished. In this article, we will explain why this happened. But first, let's go through some analysis, starting with Google searches and social media mentions. As per Google Trends, searches for "Shiba Inu" had returned to pre-hype levels in October and November of last year. It's also worth mentioning that "Shiba Inu" refers to a dog breed, after which the cryptocurrency is named. As a result, those interested in the coin itself are likely to be even fewer. While many may argue that this is due to the overall downturn in the crypto market (and they would be somewhat correct), Bitcoin and Ethereum did not see such a significant drop in retail interest. According to data, searches for "bitcoin" are down approximately 65% from their 12-month peak, while searches for "Ethereum" are down around 60%. These are significant drops, but nowhere near the 95% drop in interest that Shiba Inu suffered from its 12-month peak.
coingabbar
I came up with three classifications of potential gain and added that information to all the previously entered data. The classifications were: Meaningless Synchronicity, Synchronicity of Need, and Synchronicity of Want. I had some three thousand predictions already in the database when I re-ran the analysis. "That analysis showed me I was predicting Meaningless Synchronicities with a really high degree of statistical certainty.
John Aubrey (Enoch's Thread)
We broke out of this mess by changing our positioning. It started with a customer telling me he didn’t believe we were a database at all. “We aren’t?” I said, completely baffled. “What the heck are we?” He went on to explain that in his eyes we were more of a business intelligence tool, or even more specifically, a data warehouse (a specialized system used for data analysis). This wasn’t exactly true in our minds — we lacked some of the features associated with a data warehouse. We did, however, deliver value that was much more clearly aligned with that category of solutions than it was with databases.
April Dunford (Obviously Awesome: How to Nail Product Positioning so Customers Get It, Buy It, Love It)
When assessing and prioritizing the opportunity space, it’s important that we find the right balance between being data-informed and not getting stuck in analysis paralysis. It’s easy to fall into the trap of wanting more data, spending just a little bit more time, trying to get to a more perfect decision. However, we’ll learn more by making a decision and then seeing the consequences of having made that decision than we will from trying to think our way to the perfect decision. Jeff Bezos, founder and CEO of Amazon, made this exact argument in his 2015 letter to shareholders,33 where he introduced the idea of Level 1 and Level 2 decisions. He describes a Level 1 decision as one that is hard to reverse, whereas a Level 2 decision is one that is easy to reverse. Bezos argues that we should be slow and cautious when making Level 1 decisions, but that we should move fast and not wait for perfect data when making Level 2 decisions.
Teresa Torres (Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value)
2. MIGRATE YOUR PRODUCT LEK had to move away from ‘standard’ strategy towards analysis of competitors. This led to ‘relative cost position’ and ‘acquisition analysis’. Your task is to find a unique product or service, one not offered in that form by anyone else. Your raw material is, of course, what you and the rest of your industry do already. Tweak it in ways that could generate an attractive new product. The ideal product is: ★ close to something you already do very well, or could do very well; ★ something customers are already groping towards or you know they will like; ★ capable of being ‘automated’ or otherwise done at low cost, by using a new process (cutting out costly steps, such as self-service), a new channel (the phone or Internet), new lower-cost employees (LEK’s ‘kids’, highly educated people in India), new raw materials (cheap resins, free data from the Internet), excess capacity from a related industry (especially manufacturing capacity), new technology or simply new ideas; ★ able to be ‘orchestrated’ by your firm while you yourself are doing as little as possible; ★ really valuable or appealing to a clearly defined customer group - therefore commanding fatter margins; ★ difficult for any rival to provide as well or as cheaply - ideally something they cannot or would not want to do. Because you are already in business, you can experiment with new products in a way that someone thinking of starting a venture cannot do. Sometimes the answer is breathtakingly simple. The Filofax system didn’t start to take off until David Collischon provided ‘filled organisers’ - a wallet with a standard set of papers installed. What could you do that is simple, costs you little or nothing and yet is hugely attractive to customers? Ask customers if they would like something different. Mock up a prototype; show it around. Brainstorm new ideas. Evolution needs false starts. If an idea isn’t working, don’t push it uphill. If a possible new product resonates at all, keep tweaking it until you have a winner. At the same time . . .
Richard Koch (The Star Principle: How it Can Make You Rich)
Analysis of M & E data aids in verifying whether project goals and targets are achieved, and in assessing the effectiveness of project completion and delivery. In the pursuit of project analysis, project processes or stages of implementation are also analyzed, along with the milestone achievement markers.
Henrietta Newton Martin (Project Monitoring & Evaluation: A Primer)
Analysis of project monitoring and evaluation data aids in verifying whether project goals and targets are achieved, and in assessing the effectiveness of project completion and delivery. In the pursuit of project analysis, project processes or stages of implementation are also analyzed, along with the milestone achievement markers.
Henrietta Newton Martin (Project Monitoring & Evaluation: A Primer)
The credit card companies are at the forefront of this kind of analysis, both because they are privy to so much data on our spending habits and because their business model depends so heavily on finding customers who are just barely a good credit risk.
Charles Wheelan (Naked Statistics: Stripping the Dread from the Data)
here are some steps to identify and track code that should be reviewed carefully: Tagging user stories for security features or business workflows which handle money or sensitive data. Grepping source code for calls to dangerous function calls like crypto functions. Scanning code review comments (if you are using a collaborative code review tool like Gerrit). Tracking code check-in to identify code that is changed often: code with a high rate of churn tends to have more defects. Reviewing bug reports and static analysis to identify problem areas in code: code with a history of bugs, or code that has high complexity and low automated test coverage. Looking out for code that has recently undergone large-scale “root canal” refactoring. While day-to-day, in-phase refactoring can do a lot to simplify code and make it easier to understand and safer to change, major refactoring or redesign work can accidentally change the trust model of an application and introduce regressions.
Laura Bell (Agile Application Security: Enabling Security in a Continuous Delivery Pipeline)
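Two of the steps in that list, grepping for dangerous calls and tracking which files change most often, can be combined in a small triage script. This is a generic sketch rather than the book's tooling; the list of risky function names, the Python-only file filter, and the use of git history as a churn proxy are all assumptions.

```python
import re
import subprocess
from collections import Counter
from pathlib import Path

# Hypothetical patterns worth a careful review (command execution, weak crypto, unsafe deserialization).
RISKY_CALLS = re.compile(r"\b(eval|exec|pickle\.loads|subprocess\.call|hashlib\.md5)\b")

def risky_hits(root="."):
    """Count risky-call matches per Python source file."""
    return Counter({str(p): len(RISKY_CALLS.findall(p.read_text(errors="ignore")))
                    for p in Path(root).rglob("*.py")})

def churn(root="."):
    """Count how often each file appears in git history; high churn suggests more review attention."""
    log = subprocess.run(["git", "-C", root, "log", "--name-only", "--pretty=format:"],
                         capture_output=True, text=True).stdout
    return Counter(line.strip() for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    hits, changes = risky_hits(), churn()
    ranked = sorted(hits, key=lambda f: (hits[f], changes.get(f, 0)), reverse=True)
    for f in ranked[:10]:
        print(f"{f}: risky calls = {hits[f]}, churn = {changes.get(f, 0)}")
```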
Pearson’s correlation measures only linear relationships. Consequently, if your data contain a curvilinear relationship, the correlation coefficient will not detect it.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
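A quick numerical check makes the point concrete: data with a perfect but curvilinear (here quadratic) relationship can yield a Pearson correlation of essentially zero. The sketch below assumes NumPy and SciPy are available; the x-range is an arbitrary choice.

```python
import numpy as np
from scipy.stats import pearsonr

x = np.linspace(-5, 5, 101)
y = x ** 2  # a perfectly determined, but curvilinear, relationship

r, p = pearsonr(x, y)
print(f"Pearson r = {r:.3f}")  # approximately 0: the linear measure misses the curve entirely
```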
Correlations have a hypothesis test. As with any hypothesis test, this test takes sample data and evaluates two mutually exclusive statements about the population from which the sample was drawn. For Pearson correlations, the two hypotheses are the following: Null hypothesis: There is no linear relationship between the two variables. ρ = 0. Alternative hypothesis: There is a linear relationship between the two variables. ρ ≠ 0. A correlation of zero indicates that no linear relationship exists. If your p-value is less than your significance level, the sample contains sufficient evidence to reject the null hypothesis and conclude that the correlation does not equal zero. In other words, the sample data support the notion that the relationship exists in the population.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
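The same hypothesis test is easy to run in code. A minimal sketch, with invented height and weight values standing in for real measurements and the conventional 0.05 significance level:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical height (cm) and weight (kg) measurements.
height = np.array([140, 145, 150, 152, 155, 158, 160, 163, 165, 170])
weight = np.array([35, 38, 41, 40, 45, 47, 46, 52, 51, 58])

r, p_value = pearsonr(height, weight)
alpha = 0.05  # significance level
print(f"r = {r:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the sample supports a nonzero linear relationship.")
else:
    print("Fail to reject the null hypothesis: no evidence of a linear relationship.")
```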
What is a good correlation? How high should it be? These are commonly asked questions. I have seen several schemes that attempt to classify correlations as strong, medium, and weak. However, there is only one correct answer. The correlation coefficient should accurately reflect the strength of the relationship. Take a look at the correlation between the height and weight data, 0.705. It’s not a very strong relationship, but it accurately represents our data. An accurate representation is the best-case scenario for using a statistic to describe an entire dataset.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
When the value is between 0 and ±1, there is a relationship, but the points don’t all fall on a line. As r approaches -1 or 1, the strength of the relationship increases and the data points tend to fall closer to a line.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
Pearson’s correlation takes all of the data points on this graph and represents them with a single summary statistic. In this case, the statistical output below indicates that the correlation is 0.705.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
R-squared is a primary measure of how well a regression model fits the data. This statistic represents the percentage of variation in one variable that other variables explain. For a pair of variables, R-squared is simply the square of the Pearson’s correlation coefficient. For example, squaring the height-weight correlation coefficient of 0.705 produces an R-squared of 0.497, or 49.7%. In other words, height explains about half the variability of weight in preteen girls.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
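The arithmetic in that passage, written out as a formula:

```latex
R^{2} = r^{2} = (0.705)^{2} \approx 0.497 \quad (49.7\%).
```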
This graph shows all the observations together with a line that represents the fitted relationship. As is traditional, the Y-axis displays the dependent variable, which is weight. The X-axis shows the independent variable, which is height. The line is the fitted line. If you enter the full range of height values that are on the X-axis into the regression equation that the chart displays, you will obtain the line shown on the graph. This line produces a smaller SSE than any other line you can draw through these observations. Visually, we see that that the fitted line has a positive slope that corresponds to the positive correlation we obtained earlier. The line follows the data points, which indicates that the model fits the data. The slope of the line equals the coefficient that I circled. This coefficient indicates how much mean weight tends to increase as we increase height. We can also enter a height value into the equation and obtain a prediction for the mean weight. Each point on the fitted line represents the mean weight for a given height. However, like any mean, there is variability around the mean. Notice how there is a spread of data points around the line. You can assess this variability by picking a spot on the line and observing the range of data points above and below that point. Finally, the vertical distance between each data point and the line is the residual for that observation.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
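The pieces described there, the slope coefficient, a prediction of mean weight at a given height, and the residuals around the fitted line, can be reproduced with a minimal ordinary-least-squares fit. The height and weight values below are invented for illustration.

```python
import numpy as np

# Hypothetical height (cm) and weight (kg) observations.
height = np.array([140, 145, 150, 152, 155, 158, 160, 163, 165, 170])
weight = np.array([35, 38, 41, 40, 45, 47, 46, 52, 51, 58])

slope, intercept = np.polyfit(height, weight, deg=1)  # OLS fit of a straight line
fitted = intercept + slope * height                   # points on the fitted line
residuals = weight - fitted                           # vertical distances from each point to the line

print(f"weight ≈ {intercept:.1f} + {slope:.2f} * height")
print(f"predicted mean weight at 160 cm: {intercept + slope * 160:.1f} kg")
print(f"sum of squared errors (SSE): {np.sum(residuals ** 2):.1f}")
```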
Continuous variables can take on almost any numeric value and can be meaningfully divided into smaller increments, including fractional and decimal values. You often measure a continuous variable on a scale. For example, when you measure height, weight, and temperature, you have continuous data. Categorical variables have values that you can put into a countable number of distinct groups based on a characteristic. Categorical variables are also called qualitative variables or attribute variables. For example, college major is a categorical variable that can have values such as psychology, political science, engineering, biology, etc.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
Graphically, residuals are the vertical distances between the observed values and the fitted values. On the graph, the line represents the fitted values from the regression model. We call this line . . . the fitted line! The lines that connect the data points to the fitted line represent the residuals. The length of the line is the value of the residual.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
SSE is a measure of variability. As the points spread out further from the fitted line, SSE increases. Because the calculations use squared differences, the variance is in squared units rather than the original units of the data. While higher values indicate greater variability, there is no intuitive interpretation of specific values. However, for a given data set, smaller SSE values signal that the observations fall closer to the fitted values. OLS minimizes this value, which means you’re getting the best possible line.
Jim Frost (Regression Analysis: An Intuitive Guide for Using and Interpreting Linear Models)
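Written as a formula (the standard definition, not notation specific to this book), the quantity that OLS minimizes is

```latex
\mathrm{SSE} = \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^{2},
```

where each residual y_i - ŷ_i is the vertical distance between an observed value and the fitted line.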
In 2013, on the auspicious date of April 1, I received an email from Tetlock inviting me to join what he described as “a major new research program funded in part by Intelligence Advanced Research Projects Activity, an agency within the U.S. intelligence community.” The core of the program, which had been running since 2011, was a collection of quantifiable forecasts much like Tetlock’s long-running study. The forecasts would be of economic and geopolitical events, “real and pressing matters of the sort that concern the intelligence community—whether Greece will default, whether there will be a military strike on Iran, etc.” These forecasts took the form of a tournament with thousands of contestants; the tournament ran for four annual seasons. “You would simply log on to a website,” Tetlock’s email continued, “give your best judgment about matters you may be following anyway, and update that judgment if and when you feel it should be. When time passes and forecasts are judged, you could compare your results with those of others.” I did not participate. I told myself I was too busy; perhaps I was too much of a coward as well. But the truth is that I did not participate because, largely thanks to Tetlock’s work, I had concluded that the forecasting task was impossible. Still, more than 20,000 people embraced the idea. Some could reasonably be described as having some professional standing, with experience in intelligence analysis, think tanks, or academia. Others were pure amateurs. Tetlock and two other psychologists, Barbara Mellers (Mellers and Tetlock are married) and Don Moore, ran experiments with the cooperation of this army of volunteers. Some were given training in some basic statistical techniques (more on this in a moment); some were assembled into teams; some were given information about other forecasts; and others operated in isolation. The entire exercise was given the name Good Judgment Project, and the aim was to find better ways to see into the future. This vast project has produced a number of insights, but the most striking is that there was a select group of people whose forecasts, while they were by no means perfect, were vastly better than the dart-throwing-chimp standard reached by the typical prognosticator. What is more, they got better over time rather than fading away as their luck changed. Tetlock, with an uncharacteristic touch of hyperbole, called this group “superforecasters.” The cynics were too hasty: it is possible to see into the future after all. What makes a superforecaster? Not subject-matter expertise: professors were no better than well-informed amateurs. Nor was it a matter of intelligence; otherwise Irving Fisher would have been just fine. But there were a few common traits among the better forecasters.
Tim Harford (The Data Detective: Ten Easy Rules to Make Sense of Statistics)
No matter how intelligent and advanced the analytical tools we use, they will still process data from the past to predict the future.
Sukant Ratnakar (Quantraz)
Every bit of data is obsolete. How relevant is data in predicting the future whose strings to the past do not exist?
Sukant Ratnakar (Quantraz)
In the midst of World War II, Quincy Wright, a leader in the quantitative study of war, noted that people view war from contrasting perspectives: “To some it is a plague to be eliminated; to others, a crime which ought to be punished; to still others, it is an anachronism which no longer serves any purpose. On the other hand, there are some who take a more receptive attitude toward war, and regard it as an adventure which may be interesting, an instrument which may be legitimate and appropriate, or a condition of existence for which one must be prepared.” Despite the millions of people who died in that most deadly war, and despite widespread avowals for peace, war remains a mechanism of conflict resolution. Given the prevalence of war, the importance of war, and the enormous costs it entails, one would assume that substantial efforts would have been made to comprehensively study war. However, the systematic study of war is a relatively recent phenomenon. Generally, wars have been studied as historically unique events, which are generally utilized only as analogies or examples of failed or successful policies. There has been resistance to conceptualizing wars as events that can be studied in the aggregate in ways that might reveal patterns in war or its causes. For instance, in the United States there is no governmental department of peace with funding to scientifically study ways to prevent war, unlike the millions of dollars that the government allocates to the scientific study of disease prevention. This reluctance has even been common within the peace community, where it is more common to deplore war than to systematically figure out what to do to prevent it. Consequently, many government officials and citizens have supported decisions to go to war without having done their due diligence in studying war, without fully understanding its causes and consequences. The COW Project has produced a number of interesting observations about wars. For instance, an important early finding concerned the process of starting wars. A country’s goal in going to war is usually to win. Conventional wisdom was that the probability of success could be increased by striking first. However, a study found that the rate of victory for initiators of inter-state wars (or wars between two countries) was declining: “Until 1910 about 80 percent of all interstate wars were won by the states that had initiated them. . . . In the wars from 1911 through 1965, however, only about 40 percent of the war initiators won.” A recent update of this analysis found that “pre-1900, war initiators won 73% of wars. Since 1945 the win rate is 33%.” In civil war the probability of success for the initiators is even lower. Most rebel groups, which are generally the initiators in these wars, lose. The government wins 57 percent of the civil wars that last less than a year and 78 percent of the civil wars lasting one to five years. So, it would seem that those initiating civil and inter-state wars were not able to consistently anticipate victory. Instead, the decision to go to war frequently appears less than rational. Leaders have brought on great carnage with no guarantee of success, frequently with no clear goals, and often with no real appreciation of the war’s ultimate costs. This conclusion is not new.
Studying the outbreak of the first carefully documented war, which occurred some 2,500 years ago in Greece, historian Donald Kagan concluded: “The Peloponnesian War was not caused by impersonal forces, unless anger, fear, undue optimism, stubbornness, jealousy, bad judgment and lack of foresight are impersonal forces. It was caused by men who made bad decisions in difficult circumstances.” Of course, wars may also serve leaders’ individual goals, such as gaining or retaining power. Nonetheless, the very government officials who start a war are sometimes not even sure how or why a war started.
Frank Wayman (Resort to War: 1816 - 2007)
YouTube: "Jordan Peterson | The Most Terrifying IQ Statistic" JORDAN PETERSON: One of the most terrifying statistics I ever came across was one detailing out the rationale of the United States Armed Forces for not allowing the induct … you can't induct anyone into the Armed Forces into the Armed Forces in the U.S. if they have an IQ of less than 83. Okay, so let's just take that apart for a minute, because it's a horrifying thing. So, the U.S. Armed Forces have been in the forefront of intelligence research since World War I because they were onboard early with the idea that, especially during war time when you are ramping up quickly that you need to sort people effectively and essentially without prejudice so that you can build up the officer corps so you don't lose the damned war, okay. So, there is real motivation to get it right, because it's a life-and-death issue, so they used IQ. They did a lot of the early psychometric work on IQ. Okay, so that's the first thing, they are motivated to find an accurate predictor, so they settled on IQ. The second thing was, the United States Armed Forces is also really motivated to get people into the Armed Forces, peacetime or wartime. Wartime, well, for obvious reasons. Peacetime, because, well, first of all you've got to keep the Armed Forces going and second you can use the Armed Forces during peacetime as a way of taking people out of the underclass and moving them up into the working class or the middle class, right. You can use it as a training mechanism, and so left and right can agree on that, you know. It's a reasonable way of promoting social mobility. So again, the Armed Forces even in peacetime is very motivated to get as many people in as they possibly can. And it's difficult as well. It's not that easy to recruit people, so you don't want to throw people out if you don't have to. So, what's the upshot of all that? Well, after one hundred years, essentially, of careful statistical analysis, the Armed Forces concluded that if you had an IQ of 83 or less there wasn't anything you could possibly be trained to do in the military at any level of the organization that wasn't positively counterproductive. Okay, you think, well, so what, 83, okay. Yeah, one in ten! One in ten! That's one in ten people! And what that really means, as far as I can tell, is if you imagine that the military is approximately as complex as the broader society, which I think is a reasonable proposition, then there is no place in our cognitively complex society for one in ten people. So what are we going to do about that? The answer is, no one knows. You say, "well, shovel money down the hierarchy." It's like, the problem isn't lack of money. I mean sometimes that's the problem, but the problem is rarely absolute poverty. It's rarely that. It is sometimes, but rarely. It's not that easy to move money down the hierarchy. So, first of all, it's not that easy to manage money. So, it's a vicious problem, man. And so... INTERVIEWER: It's hard to train people to become creative, adaptive problem solvers. PETERSON: It's impossible! You can't do it! You can't do it! You can interfere with their cognitive ability, but you can't do that! The training doesn't work. INTERVIEWER: It's not going to work in six months, but it could have worked in six years. PETERSON: No, it doesn't work. Sorry, it doesn't work. The data on that is crystal clear. [note that “one in ten” applies to a breeding group with an average IQ of 100]
Jordan B. Peterson
In 2012, a World Economic Forum analysis found that countries with gender-inflected languages, which have strong ideas of masculine and feminine present in almost every utterance, are the most unequal in terms of gender. 33 But here’s an interesting quirk: countries with genderless languages (such as Hungarian and Finnish) are not the most equal. Instead, that honour belongs to a third group, countries with ‘natural gender languages’ such as English. These languages allow gender to be marked (female teacher, male nurse) but largely don’t encode it into the words themselves. The study authors suggested that if you can’t mark gender in any way you can’t ‘correct’ the hidden bias in a language by emphasising ‘women’s presence in the world’. In short: because men go without saying, it matters when women literally can’t get said at all.
Caroline Criado Pérez (Invisible Women: Exposing Data Bias in a World Designed for Men)
A 2007 international study of 25,439 children’s TV characters found that only 13% of non-human characters are female (the figure for female human characters was slightly better, although still low at 32%).43 An analysis of G-rated (suitable for children) films released between 1990 and 2005 found that only 28% of speaking roles went to female characters – and perhaps even more tellingly in the context of humans being male by default, women made up only 17% of crowd scenes.44
Caroline Criado Pérez (Invisible Women: Exposing Data Bias in a World Designed for Men)
We are SPSS tutors who have been providing the best data analysis help across the globe for the last 10 years. Our experts have 20 years of experience in data analysis and research methodology. We have assisted 4,000 clients and students with a well-qualified team of experts.
davidjonesstc
More information means less ignorance and a greater chance of rational and better decisions and not those based on illusions, hope, preconceived notions or perceptions. The danger from so much data—there is no definition of what is optimum—is that there are chances of overanalysis or falling into a conspiracy theory trap.
Vikram Sood (The Unending Game: A Former R&AW Chief’s Insights into Espionage)
In the eyes of Travis Kalanick, Uber’s co-founder and chief executive, the entire system was rigged against startups like his. Like many in Silicon Valley, he believed in the transformative power of technology. His service harnessed the incredible powers of code—smartphones, data analysis, real-time GPS readings—to improve people’s lives, to make services more efficient, to connect people who wanted to buy things with people who wanted to sell them, to make society a better place. He grew frustrated by people with cautious minds, who wanted to uphold old systems, old structures, old ways of thinking. The corrupt institutions that controlled and upheld the taxi industry had been built in the nineteenth and twentieth centuries, he thought. Uber was here to disrupt their outmoded ideas and usher in the twenty-first.
Mike Isaac (Super Pumped: The Battle for Uber)
After collecting a stool sample from its customers, Viome (which Peter and I invested in through his venture firm, BOLD Capital Partners) uses its genetic sequencing technology to identify trillions of microbes in the gut and analyze their activities, including their biochemical interactions with the foods you eat. (Another great company that does biome analysis is called GI Map.) “There wasn’t even a supercomputer that was built ten years ago that could have analyzed this massive set of data,” says Viome’s CEO, Naveen Jain. Using advanced artificial intelligence, Viome crunches that data to offer individualized advice on which foods and supplements may positively or negatively affect your microbiome.
Tony Robbins (Life Force: How New Breakthroughs in Precision Medicine Can Transform the Quality of Your Life & Those You Love)
Let us look at the correlation between temperature, humidity and wind speed and all other features. Since the data also contains categorical features, we cannot rely only on the Pearson correlation coefficient, which works only if both features are numerical. Instead, I train a linear model to predict, for example, temperature based on one of the other features as input. Then I measure how much variance the other feature in the linear model explains and take the square root. If the other feature was numerical, then the result is equal to the absolute value of the standard Pearson correlation coefficient. But this model-based approach of “variance-explained” (also called ANOVA, which stands for ANalysis Of VAriance) works even if the other feature is categorical. The “variance-explained” measure always lies between 0 (no association) and 1 (temperature can be perfectly predicted from the other feature). We calculate the explained variance of temperature, humidity and wind speed with all the other features. The higher the explained variance (correlation), the more (potential) problems with PD plots. The following figure visualizes how strongly the weather features are correlated with other features.
Christoph Molnar (Interpretable Machine Learning: A Guide For Making Black Box Models Explainable)
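A hedged sketch of the model-based “variance-explained” idea described above, using made-up data (the feature names and values are assumptions for illustration, not Molnar's actual bike-sharing dataset): regress temperature on one other feature at a time, take the in-sample R², then its square root. For a numerical predictor this matches the absolute Pearson correlation; for a categorical predictor it still works after one-hot encoding.

```python
# A sketch of model-based "variance explained" (ANOVA-style) association,
# using made-up data; feature names and values are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "temperature": rng.normal(20, 5, n),
    "season": rng.choice(["winter", "spring", "summer", "fall"], n),  # categorical
})
# Humidity made loosely (negatively) related to temperature
df["humidity"] = 80 - 1.5 * df["temperature"] + rng.normal(0, 5, n)

def variance_explained(target: pd.Series, feature: pd.Series) -> float:
    """R^2 of a linear model predicting `target` from a single feature.
    Categorical features are one-hot encoded first."""
    X = pd.get_dummies(feature) if feature.dtype == object else feature.to_frame()
    model = LinearRegression().fit(X, target)
    return model.score(X, target)  # lies between 0 and 1 in-sample

for col in ["humidity", "season"]:
    r2 = variance_explained(df["temperature"], df[col])
    print(f"{col}: sqrt(variance explained) = {np.sqrt(r2):.3f}")

# Sanity check: for the numerical feature this matches |Pearson correlation|
pearson = np.corrcoef(df["temperature"], df["humidity"])[0, 1]
print("abs Pearson (humidity):", round(abs(pearson), 3))
```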
Data analysis has become a natural part of the company's culture and processes, which has led to the fact-based approach being seen as natural. At the same time, increased insight has led to improved intuition for the decision makers. This leads to higher-quality intuitive decisions that save time. The most successful are good at finding the right combination between intuitive and fact-based decision-making. Businesses can now better predict what will happen, when it will happen, and why.
Joakim Jansson (Leading Digital Transformation: You can't stop the waves but you can learn to surf)
When the US Department of Labor conducted an analysis of Google’s pay practices in 2017 it found ‘systemic compensation disparities against women pretty much across the entire workforce’, with ‘six to seven standard deviations between pay for men and women in nearly every job category’.
Caroline Criado Pérez (Invisible Women: Exposing Data Bias in a World Designed for Men)
(Bernie) Sanders came to think all (Hillary) Clinton said was ‘vote for me I’m a woman’. The data shows that she certainly didn’t. A word frequency analysis of her speeches by Vox revealed that Clinton mostly talked about workers, jobs, education, and the economy... She mentioned jobs almost 600 times... and women’s rights a few dozen times.
Caroline Criado Pérez (Invisible Women: Exposing Data Bias in a World Designed for Men)
As William Davies has argued, its unique innovation is to make social interactions visible and susceptible to data analytics and sentiment analysis.
Richard Seymour (The Twittering Machine)
meta-analyses of the same data can come to different conclusions depending on their design and inclusion criteria, so bias among the designers of the meta-analysis is highly important but often overlooked.
Elisabeth Askin (The Health Care Handbook: A Clear and Concise Guide to the United States Health Care System)
Point 1: Four years of software engineering experience. Point 2: Data analysis was his major job function. Point 3: Had C# and C++ programming experience. Point 4: Supervisor constantly said he had excellent report writing skills. Point 5: Trained new team members.
Robin Ryan (60 Seconds and You're Hired!)
As I sat one day with the president in an advisory capacity, he said, “Stephen, I just can’t believe what this man has done. He’s not only given me the information I requested, but he’s provided additional information that’s exactly what we needed. He even gave me his analysis of it in terms of my deepest concerns, and a list of his recommendations. “The recommendations are consistent with the analysis, and the analysis is consistent with the data. He’s remarkable! What a relief not to have to worry about this part of the business.” At the next meeting, it was “go for this” and “go for that” to all the executives… but one. To this man, it was “What’s your opinion?” His Circle of Influence had grown.
Stephen R. Covey (The 7 Habits of Highly Effective People)
The current opinion is that the human brain is better at comparing relative sizes of rectangles than pie slices or donut sections.
Brian Larson (Data Analysis with Microsoft Power BI)
Do the current keepers of the organization’s data feel threatened by a BI implementation? Those who know the organization’s data best may feel their value depends on their being the sole possessor of that inside information. Getting these information gatekeepers to cooperate in sharing inside information and explaining those calculations is essential. These key individuals must be shown how, with a functioning BI infrastructure, they will be able to move beyond shepherding data to shepherding the organization by concentrating on making the key business decisions they were hired to make.
Brian Larson (Data Analysis with Microsoft Power BI)
Political correspondent Jim Rutenberg’s New York Times account of the data scientists’ seminal role in the 2012 Obama victory offers a vivid picture of the capture and analysis of behavioral surplus as a political methodology. The campaign knew “every single wavering voter in the country that it needed to persuade to vote for Obama, by name, address, race, sex, and income,” and it had figured out how to target its television ads to these individuals. One breakthrough was the “persuasion score” that identified how easily each undecided voter could be persuaded to vote for the Democratic candidate.103
Shoshana Zuboff (The Age of Surveillance Capitalism)