Experimental Design Quotes

We've searched our database for all the quotes and captions related to Experimental Design. Here they are! All 100 of them:

"O Deep Thought computer," he said, "the task we have designed you to perform is this. We want you to tell us...." he paused, "The Answer." "The Answer?" said Deep Thought. "The Answer to what?" "Life!" urged Fook. "The Universe!" said Lunkwill. "Everything!" they said in chorus. Deep Thought paused for a moment's reflection. "Tricky," he said finally. "But can you do it?" Again, a significant pause. "Yes," said Deep Thought, "I can do it." "There is an answer?" said Fook with breathless excitement. "Yes," said Deep Thought. "Life, the Universe, and Everything. There is an answer. But, I'll have to think about it." ... Fook glanced impatiently at his watch. “How long?” he said. “Seven and a half million years,” said Deep Thought. Lunkwill and Fook blinked at each other. “Seven and a half million years...!” they cried in chorus. “Yes,” declaimed Deep Thought, “I said I’d have to think about it, didn’t I?" [Seven and a half million years later.... Fook and Lunkwill are long gone, but their descendants continue what they started] "We are the ones who will hear," said Phouchg, "the answer to the great question of Life....!" "The Universe...!" said Loonquawl. "And Everything...!" "Shhh," said Loonquawl with a slight gesture. "I think Deep Thought is preparing to speak!" There was a moment's expectant pause while panels slowly came to life on the front of the console. Lights flashed on and off experimentally and settled down into a businesslike pattern. A soft low hum came from the communication channel. "Good Morning," said Deep Thought at last. "Er..good morning, O Deep Thought," said Loonquawl nervously, "do you have...er, that is..." "An Answer for you?" interrupted Deep Thought majestically. "Yes, I have." The two men shivered with expectancy. Their waiting had not been in vain. "There really is one?" breathed Phouchg. "There really is one," confirmed Deep Thought. "To Everything? To the great Question of Life, the Universe and everything?" "Yes." Both of the men had been trained for this moment, their lives had been a preparation for it, they had been selected at birth as those who would witness the answer, but even so they found themselves gasping and squirming like excited children. "And you're ready to give it to us?" urged Loonquawl. "I am." "Now?" "Now," said Deep Thought. They both licked their dry lips. "Though I don't think," added Deep Thought, "that you're going to like it." "Doesn't matter!" said Phouchg. "We must know it! Now!" "Now?" inquired Deep Thought. "Yes! Now..." "All right," said the computer, and settled into silence again. The two men fidgeted. The tension was unbearable. "You're really not going to like it," observed Deep Thought. "Tell us!" "All right," said Deep Thought. "The Answer to the Great Question..." "Yes..!" "Of Life, the Universe and Everything..." said Deep Thought. "Yes...!" "Is..." said Deep Thought, and paused. "Yes...!" "Is..." "Yes...!!!...?" "Forty-two," said Deep Thought, with infinite majesty and calm.
Douglas Adams (The Hitchhiker’s Guide to the Galaxy (The Hitchhiker's Guide to the Galaxy, #1))
I suspect God is more glorified in a humble room of earnest worshipers than a massive production designed to sound "relevant" to the listeners but no longer relevant to God. When the worship of God turns into a "worship experience," we have derailed as the body of Christ.
Jen Hatmaker (7: An Experimental Mutiny Against Excess)
He had found a Nutri-Matic machine which had provided him with a plastic cup filled with a liquid that was almost, but not quite, entirely unlike tea. The way it functioned was very interesting. When the Drink button was pressed it made an instant but highly detailed examination of the subject’s taste buds, a spectroscopic analysis of the subject’s metabolism and then sent tiny experimental signals down the neural pathways to the taste centers of the subject’s brain to see what was likely to go down well. However, no one knew quite why it did this because it invariably delivered a cupful of liquid that was almost, but not quite, entirely unlike tea. The Nutri-Matic was designed and manufactured by the Sirius Cybernetics Corporation whose complaint department now covers all the major landmasses of the first three planets in the Sirius Tau Star system.
Douglas Adams (The Hitchhiker’s Guide to the Galaxy (The Hitchhiker's Guide to the Galaxy, #1))
The most effective experimenters are usually those who give much thought to the problem beforehand and resolve it into crucial questions and then give much thought to designing experiments to answer the questions.
William Ian Beardmore Beveridge (The Art of Scientific Investigation)
The major goal of the Cold War mind control programs was to create dissociative symptoms and disorders, including full multiple personality disorder. The Manchurian Candidate is fact, not fiction, and was created by the CIA in the 1950’s under BLUEBIRD and ARTICHOKE mind control programs. Experiments with LSD, sensory deprivation, electro-convulsive treatment, brain electrode implants and hypnosis were designed to create amnesia, depersonalization, changes in identity and altered states of consciousness. (p. iii) “Denial of the reality of multiple personality by these doctors [See page 114 for names] in the mind control network, who are also on the FMSF [False Memory Syndrome Foundation] Scientific and Professional Advisory Board, could be disinformation. The disinformation could be amplified by attacks on specialists in multiple personality as CIA conspiracy lunatics” (p. 10) “If clinical multiple personality is buried and forgotten, then the Manchurian Candidate Programs will be safe from public scrutiny. (p. 141)
Colin A. Ross (Bluebird: Deliberate Creation of Multiple Personality by Psychiatrists)
They don’t know what it’s about, or…you know, at least they don’t know what’s going to happen. This isn’t even built like a torture chamber. It’s all being watched, right? Water and air sensors. It’s a Petri dish. They don’t know what that s*** that killed Julie does, and this is how they’re finding out.” Holden frowned. “Don’t they have laboratories? Places where you could maybe put that crap on some animals or something? Because as experimental design goes, this seems a little messed up.
James S.A. Corey (Leviathan Wakes (The Expanse, #1))
Some people treat systems of human origin [and maintenance] as if they existed above and beyond any human agent, beyond the control of whim or human feeling. The human element behind agencies and institutions is denied. Thus, when the experimenter says, "This experiment requires that you continue," the subject feels this to be an imperative that goes beyond any merely human command. He does not ask the seemingly obvious question, "Whose experiment? Why should the designer be served while the victim suffers?
Stanley Milgram (Obedience to Authority)
The seven men pressed on. They were tired of the designation of “capsule” for the Mercury vehicle. The term as much as declared that the man inside was not a pilot but an experimental animal in a pod. Gradually, everybody began trying to work the term “spacecraft” into NASA publications and syllabuses.
Tom Wolfe (The Right Stuff)
Avoid succumbing to the gambler’s fallacy or the base rate fallacy. Anecdotal evidence and correlations you see in data are good hypothesis generators, but correlation does not imply causation—you still need to rely on well-designed experiments to draw strong conclusions. Look for tried-and-true experimental designs, such as randomized controlled experiments or A/B testing, that show statistical significance. The normal distribution is particularly useful in experimental analysis due to the central limit theorem. Recall that in a normal distribution, about 68 percent of values fall within one standard deviation, and 95 percent within two. Any isolated experiment can result in a false positive or a false negative and can also be biased by myriad factors, most commonly selection bias, response bias, and survivorship bias. Replication increases confidence in results, so start by looking for a systematic review and/or meta-analysis when researching an area.
Gabriel Weinberg (Super Thinking: The Big Book of Mental Models)
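A minimal sketch of two of the ideas above: a randomized A/B test evaluated with a two-proportion z-test, and the 68/95 rule recovered from the normal CDF. All counts are hypothetical, invented purely for illustration.

```python
# Hedged illustration of A/B testing and the normal distribution,
# as discussed in the quote above. All counts below are invented.
import math
from scipy import stats

# Hypothetical randomized A/B test: conversions out of visitors per variant.
conv_a, n_a = 120, 2400   # control
conv_b, n_b = 156, 2400   # treatment

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate under H0
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * stats.norm.sf(abs(z))                       # two-sided p-value
print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")

# The 68/95 rule quoted above, recovered from the normal CDF:
for k in (1, 2):
    coverage = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"within {k} standard deviation(s): {coverage:.1%}")  # ~68.3%, ~95.4%
```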
The English team’s revisions showed that the Cambrian had been a time of unparalleled innovation and experimentation in body designs. For almost four billion years life had dawdled along without any detectable ambitions in the direction of complexity, and then suddenly, in the space of just five or ten million years, it had created all the basic body designs still in use today. Name a creature, from a nematode worm to Cameron Diaz, and they all use architecture first created in the Cambrian party.
Bill Bryson (A Short History of Nearly Everything)
Mrs. Watanabe loved hand painting, quilting, and the discipline of woven textiles, but she worried these techniques were a dying art. “Computers make everything too easy,” she said with a sigh. “People design very quickly on a monitor, and they print on some enormous industrial printer in a warehouse in a distant country, and the designer hasn’t touched a piece of fabric at any point in the process or gotten her hands dirty with ink. Computers are great for experimentation, but they’re bad for deep thinking.”
Gabrielle Zevin (Tomorrow, and Tomorrow, and Tomorrow)
Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge.
Sir Ronald Fisher
Whatever Web 1.0 might’ve lacked in user-friendliness and design sensibility, it more than made up for by its fostering of experimentation and originality of expression, and by its emphasis on the creative primacy of the individual.
Edward Snowden (Permanent Record)
Some experimenters have reported that an angry face “pops out” of a crowd of happy faces, but a single happy face does not stand out in an angry crowd. The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.
Daniel Kahneman
Any chemist reading this book can see, in some detail, how I have spent most of my mature life. They can become familiar with the quality of my mind and imagination. They can make judgements about my research abilities. They can tell how well I have documented my claims of experimental results. Any scientist can redo my experiments to see if they still work—and this has happened! I know of no other field in which contributions to world culture are so clearly on exhibit, so cumulative, and so subject to verification.
Donald J. Cram (From Design to Discovery (Profiles, Pathways, and Dreams))
When I examine my experimental research, I find to my embarrassment I rarely provided a control condition. What could I have possibly learned from these ill-designed experiments? The answer (it surprised me) is that you can test theoretical models without a control condition.
Herbert A. Simon (Models of My Life (MIT Press))
How much ever we may underpin cognitive learning theories in technical communication and document design, the users invariably learn more when they are unknowingly involved in the learning process: users learn more when they aren’t learning. Conclusively, we must focus on experimentation and empowerment, and not on learning alone.
Suyog Ketkar (The Write Stride)
That morals, including especially our institutions of property, freedom and justice, are not a creation of man’s reason but a distinct second endowment conferred on him by cultural evolution - runs counter to the main intellectual outlook of the twentieth century. The influence of rationalism has indeed been so profound and pervasive that, in general, the more intelligent an educated person is, the more likely he or she now is not only to be a rationalist, but also to hold socialist views (regardless of whether he or she is sufficiently doctrinal to attach to his or her views any label, including ‘socialist’). The higher we climb up the ladder of intelligence, the more we talk with intellectuals, the more likely we are to encounter socialist convictions. Rationalists tend to be intelligent and intellectual; and intelligent intellectuals tend to be socialist. One’s initial surprise at finding that intelligent people tend to be socialist diminishes when one realises that, of course, intelligent people will tend to overvalue intelligence, and to suppose that we must owe all the advantages and opportunities that our civilisation offers to deliberate design rather than to following traditional rules, and likewise to suppose that we can, by exercising our reason, eliminate any remaining undesired features by still more intelligent reflection, and still more appropriate design and ‘rational coordination’ of our undertakings. This leads one to be favorably disposed to the central economic planning and control that lie at the heart of socialism… And since they have been taught that constructivism and scientism are what science and the use of reason are all about, they find it hard to believe that there can exist any useful knowledge that did not originate in deliberate experimentation, or to accept the validity of any tradition apart from their own tradition of reason. Thus [they say]: ‘Tradition is almost by definition reprehensible, something to be mocked and deplored’.
Friedrich A. Hayek (The Fatal Conceit: The Errors of Socialism)
Well . . . anyhow, I’ve got another idea. This place is an electronics man’s dream—all that vacuum! I’m going to try to gimmick up some really big power tubes—only they won’t be tubes. I can just mount the elements out in the open without having to bother with glass. It’s the easiest way to do experimental tube design anybody ever heard of.
Robert A. Heinlein (Rocket Ship Galileo)
The biggest problem is that the vast majority of studies are not experimental, randomized designs. Simply by observing what people eat—or even worse, what they recall they ate—and trying to link this to disease outcomes is moreover a waste of effort. These studies need to be largely abandoned. We’ve wasted enough resources and caused enough confusion.” —Professor John Ioannidis, MD, 2018
Shawn Baker (The Carnivore Diet)
…that there is no agreement on what a programming language really is and what its main purpose is supposed to be. Is a programming language a tool for instructing machines? A means of communicating between programmers? A vehicle for expressing high-level designs? A notation for algorithms? A way of expressing relationships between concepts? A tool for experimentation? A means of controlling computerized devices?
Anonymous
We can roll up two-dimensional graphene to make one-dimensional tubes, the so-called nanotubes. This can be done in many ways, giving nanotubes with different radii and pitches (see plate FF). Nanotubes that differ only slightly in geometry can have radically different physical properties. It is a triumph of quantum theory that these delicate properties can be predicted unambiguously, purely through calculation, and that they agree with experimental measurements.
Frank Wilczek (A Beautiful Question: Finding Nature's Deep Design)
Jacobson had been told to remove the Fourth Amendment protections from an experimental surveillance system, one of the most powerful spying programs the NSA had ever developed. The advanced system was still just a pilot project, but top NSA officials wanted to make it operational immediately—and use it to collect data on Americans. They had ordered Jacobson to strip away the carefully calibrated restrictions built into the system, which were designed to prevent it from illegally collecting information on U.S. citizens.
James Risen (Pay Any Price: Greed, Power, and Endless War)
The natural man receiveth not the things of the Spirit of God: for they are foolishness unto him. 1 Corinthians 2:14 It has pleased God to say many things which leave room for misunderstanding, and not to explain them. Often in the Bible there seem to be conflicting statements or statements that seem to violate the known facts of life, and it has pleased Him to leave them there. There are many scriptures we cannot clearly explain. Had we been writing, we would have put things far more plainly so that men should have before them all the doctrine in foolproof systematic order. But would they have had the life? The mighty eternal truths of God are half obscured in Scripture so that the natural man may not lay hold of them. God has hidden them from the wise to reveal them to babes, for they are spiritually discerned. His Word is not a study book. It is intended to meet us in the course of our day-to-day walk in the Spirit and to speak to us there. It is designed to give us knowledge that is experimental because related to life. If we are trying through systematic theology to know God, we are absolutely on the wrong road.
Watchman Nee (A Table in the Wilderness)
It has always been asked in the spirit of: ‘What are the best sources of our knowledge – the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist – no more than ideal rulers – and that all ‘sources’ are liable to lead us into errors at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’ The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. …. The proper answer to my question ‘How can we hope to detect and eliminate error?’ is I believe, ‘By criticizing the theories or guesses of others and – if we can train ourselves to do so – by criticizing our own theories or guesses.’ …. So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring – there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.
Karl Popper
Even in the 1950s, computers were described in the popular press as “super-brains” that were “faster than Einstein.” So can we say now, finally, that computers are as powerful as the human brain? No. Focusing on raw computing power misses the point entirely. Speed alone won’t give us AI. Running a poorly designed algorithm on a faster computer doesn’t make the algorithm better; it just means you get the wrong answer more quickly. (And with more data there are more opportunities for wrong answers!) The principal effect of faster machines has been to make the time for experimentation shorter, so that research can progress more quickly. It’s not hardware that is holding AI back; it’s software. We don’t yet know how to make a machine really intelligent—even if it were the size of the universe.
Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control)
A common refrain among theoretical physicists is that the fields of quantum field theory are the “real” entities while the particles they represent are images like the shadows in Plato's cave. As one who did experimental particle physics for forty years before retiring in 2000, I say, “Wait a minute!” No one has ever measured a quantum field, or even a classical electric, magnetic, or gravitational field. No one has ever measured a wavicle, the term used to describe the so-called wavelike properties of a particle. You always measure localized particles. The interference patterns you observe in sending light through slits are not seen in the measurements of individual photons, just in the statistical distributions of an ensemble of many photons. To me, it is the particle that comes closest to reality. But then, I cannot prove it is real either.
Victor J. Stenger (The Fallacy of Fine-Tuning: Why the Universe Is Not Designed for Us)
Have you ever thought, not only about the airplane but whatever man builds, that all of man’s industrial efforts, all his computations and calculations, all the nights spent working over draughts and blueprints, invariably culminate in the production of a thing whose sole and guiding principle is the ultimate principle of simplicity? It is as if there were a natural law which ordained that to achieve this end, to refine the curve of a piece of furniture, or a ship’s keel, or the fuselage of an airplane, until gradually it partakes of the elementary purity of the curve of the human breast or shoulder, there must be experimentation of several generations of craftsmen. In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away, when a body has been stripped down to its nakedness.
Antoine de Saint-Exupéry
Living organisms were not independently created, but have descended and diversified over time from common ancestors. And thus, no other biological theory so elegantly explains this. Evolutionary theory has withstood the test of time—by way of vicarious experimentation, observation, analysis, and relentless criticism, though opposing viewpoints still cling to the concept of "design." As a person of the biological sciences, I cannot subscribe to such misguided notions that suggest static biological states. Clearly, proper examination of the natural world reveals evolutionary trajectories—some random, others nonrandom—and all having observable genetic implications. It is only when we apply evolutionary explanations to living systems that it becomes ever so clear. The world was not specifically designed with us in mind, but rather we long since adapted and conformed to our surroundings, only giving it the illusionary appearance of "design."
Tommy Rodriguez (Diaries of Dissension: A Case Against the Irrational and Absurd)
Today, the 4-billion-year-old regime of natural selection is facing a completely different challenge. In laboratories throughout the world, scientists are engineering living beings. They break the laws of natural selection with impunity, unbridled even by an organism’s original characteristics. Eduardo Kac, a Brazilian bio-artist, decided in 2000 to create a new work of art: a fluorescent green rabbit. Kac contacted a French laboratory and offered it a fee to engineer a radiant bunny according to his specifications. The French scientists took a run-of-the-mill white rabbit embryo, implanted in its DNA a gene taken from a green fluorescent jellyfish, and voilà! One green fluorescent rabbit for le monsieur. Kac named the rabbit Alba. It is impossible to explain the existence of Alba through the laws of natural selection. She is the product of intelligent design. She is also a harbinger of things to come. If the potential Alba signifies is realised in full – and if humankind doesn’t annihilate itself meanwhile – the Scientific Revolution might prove itself far greater than a mere historical revolution. It may turn out to be the most important biological revolution since the appearance of life on earth. After 4 billion years of natural selection, Alba stands at the dawn of a new cosmic era, in which life will be ruled by intelligent design. If this happens, the whole of human history up to that point might, with hindsight, be reinterpreted as a process of experimentation and apprenticeship that revolutionised the game of life. Such a process should be understood from a cosmic perspective of billions of years, rather than from a human perspective of millennia. Biologists the world over are locked in battle with the intelligent-design movement, which opposes the teaching of Darwinian evolution in schools and claims that biological complexity proves there must be a creator who thought out all biological details in advance. The biologists are right about the past, but the proponents of intelligent design might, ironically, be right about the future. At the time of writing, the replacement of natural selection by intelligent design could happen in any of three ways: through biological engineering, cyborg engineering (cyborgs are beings that combine organic with non-organic parts) or the engineering of in-organic life.
Yuval Noah Harari (Sapiens: A Brief History of Humankind)
The story of the Internet's origins departs from the explanations of technical innovation that center on individual inventors or on the pull of markets. Cerf and Kahn were neither captains of industry nor "two guys tinkering in a garage." The Internet was not built in response to popular demand, real or imagined; its subsequent mass appeal had no part in the decisions made in 1973. Rather, the project reflected the command economy of military procurement, where specialized performance is everything and money is no object, and the research ethos of the university, where experimental interest and technical elegance take precedence over commercial application. This was surely an unlikely context for the creation of what would become a popular and profitable service. Perhaps the key to the Internet's later commercial success was that the project internalized the competitive forces of the market by bringing representatives of diverse interest groups together and allowing them to argue through design issues. Ironically, this unconventional approach produced a system that proved to have more appeal for potential "customers"—people building networks—than did the overtly commercial alternatives that appeared soon after.
Janet Abbate (Inventing the Internet (Inside Technology))
Before we conclude that human cognitive mechanisms are riddled with biases and errors of judgment, we need to ask which adaptive problems human cognitive mechanisms evolved to solve and what would be “sound judgment” or “successful reasoning” from an evolutionary perspective. If humans have trouble locating their cars by color at night in parking lots illuminated with sodium vapor lamps, we would not conclude that our visual system is riddled with errors. Our eyes were designed to perceive the color of objects under natural, not artificial, light (Shepard, 1992). Many of the research programs that have documented “biases” in judgment, it turns out, have used artificial, evolutionarily unprecedented experimental stimuli that are analogous to sodium vapor lamps. Many, for example, require subjects to make probability judgments based on a single event (Gigerenzer, 1991, 1998). “Reliable numerical statements about the probability of a single event were rare or nonexistent in the Pleistocene—a conclusion reinforced by the relative poverty of number terms in modern band level societies” (Tooby & Cosmides, 1998 p. 40). A specific woman cannot have a 35 percent chance of being pregnant; she either is pregnant or is not, so probabilities hardly make sense when applied to a single case.
David M. Buss (Evolutionary Psychology: The New Science of the Mind)
Former member of CSICOP Marcello Truzzi summed up the history of laboratory parapsychology: As proponents of anomalies produce stronger evidence, critics have sometimes moved the goal posts further away. . . . To convince scientists of what had merely been supported by widespread but weak anecdotal evidence, parapsychologists moved psychical research into the laboratory. When experimental results were presented, designs were criticized. When protocols were improved, a “fraud proof” or “critical experiment” was demanded. When those were put forward, replications were demanded. When those were produced, critics argued that new forms of error might be the cause (such as the “file drawer” error that could result from unpublished negative studies). When meta-analyses were presented to counter that issue, these were discounted as controversial, and ESP was reduced to being some present but unspecified “error some place” in the form of what Ray Hyman called the “dirty test tube argument” (claiming dirt was in the tube making the seeming psi result a mere artifact). And in one instance, when the scoffer found no counter-explanations, he described the result as a “mere anomaly” not to be taken seriously so just belonging on a puzzle page. The goal posts have now been moved into a zone where some critics hold unfalsifiable positions.
Christopher David Carter (Science and Psychic Phenomena: The Fall of the House of Skeptics)
Using graphite as a moderator can be highly dangerous, as it means that the nuclear reaction will continue - or even increase - in the absence of cooling water or the presence of steam pockets (called ‘voids’). This is known as a positive void coefficient and its presence in a reactor is indicative of very poor design. Graphite moderated reactors were used in the USA in the 1950s for research and plutonium production, but the Americans soon realised their safety disadvantages. Almost all western nuclear plants now use either Pressurised Water Reactors (PWRs) or Boiling Water Reactors (BWRs), which both use water as a moderator and coolant. In these designs, the water that is pumped into the reactor as coolant is the same water that is enabling the chain reaction as a moderator. Thus, if the water supply is stopped, fission will cease because the chain reaction cannot be sustained; a much safer design. Few commercial reactor designs still use a graphite moderator. Other than the RBMK and its derivative, the EGP-6, Britain’s Advanced Gas-Cooled Reactor (AGR) design is the only other graphite-moderated reactor in current use. The AGR will soon be joined by a new type of experimental reactor at China’s Shidao Bay Nuclear Power Plant, which is currently under construction. The plant will house two graphite-moderated ‘High Temperature Reactor-Pebble-bed Modules’ reactors, the first of which is undergoing commissioning tests as of mid-2019.
Andrew Leatherbarrow (Chernobyl 01:23:40: The Incredible True Story of the World's Worst Nuclear Disaster)
Life within a Templar house was designed where possible to resemble that of a Cistercian monastery. Meals were communal and to be eaten in near silence, while a reading was given from the Bible. The rule accepted that the elaborate sign language monks used to ask for necessities while eating might not be known to Templar recruits, in which case "quietly and privately you should ask for what you need at table, with all humility and submission." Equal rations of food and wine were to be given to each brother and leftovers would be distributed to the poor. The numerous fast days of the Church calendar were to be observed, but allowances would be made for the needs of fighting men: meat was to be served three times a week, on Tuesdays, Thursdays and Saturdays. Should the schedule of annual fast days interrupt this rhythm, rations would be increased to make up for lost sustenance as soon as the fasting period was over. It was recognized that the Templars were killers. "This armed company of knights may kill the enemies of the cross without sinning," stated the rule, neatly summing up the conclusion of centuries of experimental Christian philosophy, which had concluded that slaying humans who happened to be "unbelieving pagans" and "the enemies of the son of the Virgin Mary" was an act worthy of divine praise and not damnation. Otherwise, the Templars were expected to live in pious self-denial. Three horses were permitted to each knight, along with one squire whom "the brother shall not beat." Hunting with hawks—a favorite pastime of warriors throughout Christendom—was forbidden, as was hunting with dogs. The only beasts Templars were permitted to kill were the mountain lions of the Holy Land. They were forbidden even to be in the company of hunting men, for the reason that "it is fitting for every religious man to go simply and humbly without laughing or talking too much." Banned, too, was the company of women, which the rule scorned as "a dangerous thing, for by it the old devil has led man from the straight path to paradise.... The flower of chastity is always [to be] maintained among you.... For this reason none of you may presume to kiss a woman, be it widow, young girl, mother, sister, aunt or any other.... The Knighthood of Christ should avoid at all costs the embraces of women, by which men have perished many times." Although married men were permitted to join the order, they were not allowed to wear the white cloak and wives were not supposed to join their husbands in Templar houses.
Dan Jones (The Templars: The Rise and Spectacular Fall of God's Holy Warriors)
Are vegetarian diets an effective alternative, or complement, to drugs and surgery? Although studies designed to answer this question are limited in number and small in size, their results are encouraging. In 1990, Dr. Dean Ornish demonstrated that a very low-fat vegetarian diet (less than 10 per cent calories from fat) and lifestyle changes (stress management, aerobic exercise, and group therapy) could not only slow the progression of atherosclerosis, but significantly reverse it. After one year, 82 per cent of the experimental group participants experienced regression of their disease, while in the control group the disease continued to progress. The control group followed a “heart healthy” diet commonly prescribed by physicians that provided less than 30 per cent calories from fat and less than 200 milligrams of cholesterol a day. Over the next four years, people in the experimental group continued to reverse their arterial damage, while those in the control group became steadily worse and had twice as many cardiac events. In 1999, Dr. Caldwell Esselstyn reported on a twelve-year study of eleven patients following a very low-fat vegan diet, coupled with cholesterol-lowering medication. Approximately 70 per cent experienced reversal of their disease. In the eight years prior to the study, these patients experienced a total of forty-eight cardiac events, while in over a decade of the trial, only one non-compliant patient experienced an event.
Vesanto Melina (Becoming Vegetarian, Revised: The Complete Guide to Adopting a Healthy Vegetarian Diet)
Imagine you’re a playwright on an experimental theater production. You get to write the lines for every character — except one. The protagonist is played by a random audience member who is pulled on stage and thrust into the role with no script or training. Think that sounds hard? Now imagine that this audience member is drunk. And he’s distracted because he’s texting on his cellphone. And he’s decided to amuse himself by deliberately interfering with the story. He randomly tosses insults at other cast members, steals objects off the stage, and doesn’t even show up for the climactic scene. For a playwright, this is a writing nightmare. The fool on stage will disrupt his finely crafted turns of dialogue, contradict his characterization, and break his story. Game designers face this every day because games give players agency.
Tynan Sylvester (Designing Games: A Guide to Engineering Experiences)
When you honestly assess the strengths of human and machine workers, and what they do well when they collaborate, a whole new world of possibilities emerges for running a business and designing your processes—that is, the important mindset part of MELDS. And by exploring those possibilities, companies can often develop novel businesses, like vertical farms. Indeed, it’s through the experimentation part of MELDS that executives will be able to discover game-changing innovations that could potentially transform their company, if not their entire industry.
Paul R. Daugherty (Human + Machine: Reimagining Work in the Age of AI)
The five most highly correlated factors are:
Organizational culture. Strong feelings of burnout are found in organizations with a pathological, power-oriented culture. Managers are ultimately responsible for fostering a supportive and respectful work environment, and they can do so by creating a blame-free environment, striving to learn from failures, and communicating a shared sense of purpose. Managers should also watch for other contributing factors and remember that human error is never the root cause of failure in systems.
Deployment pain. Complex, painful deployments that must be performed outside of business hours contribute to high stress and feelings of lack of control. With the right practices in place, deployments don’t have to be painful events. Managers and leaders should ask their teams how painful their deployments are and fix the things that hurt the most.
Effectiveness of leaders. Responsibilities of a team leader include limiting work in process and eliminating roadblocks for the team so they can get their work done. It’s not surprising that respondents with effective team leaders reported lower levels of burnout.
Organizational investments in DevOps. Organizations that invest in developing the skills and capabilities of their teams get better outcomes. Investing in training and providing people with the necessary support and resources (including time) to acquire new skills are critical to the successful adoption of DevOps.
Organizational performance. Our data shows that Lean management and continuous delivery practices help improve software delivery performance, which in turn improves organizational performance. At the heart of Lean management is giving employees the necessary time and resources to improve their own work. This means creating a work environment that supports experimentation, failure, and learning, and allows employees to make decisions that affect their jobs. This also means creating space for employees to do new, creative, value-add work during the work week—and not just expecting them to devote extra time after hours. A good example of this is Google’s 20% time policy, where the company allows employees 20% of their week to work on new projects, or IBM’s “THINK Friday” program, where Friday afternoons are designated for time without meetings and employees are encouraged to work on new and exciting projects they normally don’t have time for.
Nicole Forsgren (Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations)
Things may even be worse than that, however. There’s some reason to think that the rise in ethical consumerism could even be harmful for the world, on balance. Psychologists have discovered a phenomenon that they call moral licensing, which describes how people who perform one good action often compensate by doing fewer good actions in the future. For example, in a recent experiment, participants were told to choose a product from either a selection of mostly “green” items (like an energy-efficient lightbulb) or from a selection of mostly conventional items (like a regular lightbulb). They were then told to perform a supposedly unrelated visual perception task: a square box with a diagonal line across it was displayed on a computer screen, and a pattern of twenty dots would flash up on the screen; the subjects had to press a key to indicate whether there were more dots on the left or right side of the line. It was always obvious which was the correct answer, and the experimenters emphasized the importance of being as accurate as possible, telling the subjects that the results of the test would be used in designing future experiments. However, the subjects were told that, whether or not their answers were correct, they’d be paid five cents every time they indicated there were more dots on the left-hand side of the line and half a cent every time they indicated there were more dots on the right-hand side. They therefore had a financial incentive to lie, and they were alone, so they knew they wouldn’t be caught if they did so. Moreover, they were invited to pay themselves out of an envelope, so they had an opportunity to steal as well. What happened? People who had previously purchased a “green” product were significantly more likely to both lie and steal than those who had purchased the conventional product.
William MacAskill (Doing Good Better: How Effective Altruism Can Help You Make a Difference)
Leaders should encourage experimentation and accept that there is nothing wrong with failure as long as it happens early and becomes a source of learning.
Tim Brown (Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation)
What they all shared was optimism, openness to experimentation, a love of storytelling, a need to collaborate, and an instinct to think with their hands—to build, to prototype, and to communicate complex ideas with masterful simplicity. They did not just do design, they lived design.
Tim Brown (Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation)
Specifications for the new fighter had been very clear—two liquid-cooled engines and a speed of 367 miles per hour. We advised the Air Corps that our design would fly faster than 400 miles per hour, a speed unequaled then. Lockheed received a contract for such a plane in 1937, with construction of the first beginning in July 1938. First flight of the XP-38—X for experimental, P for pursuit—was scheduled for early 1939.
Clarence L. Johnson (Kelly: More Than My Share of It All)
Vaccine trials in general, and childhood vaccine trials specifically, are purposely designed to obscure the true incidence of adverse events of the vaccine being tested. How do they do this? By using a two-step scheme: First, a new vaccine (one which does not have a predecessor), is always tested in a Phase 3 RCT in which the control group receives another vaccine (or a compound very similar to the experimental vaccine, see explanation below). A new pediatric vaccine is never tested during its formal approval process against a neutral solution (placebo). Comparing a trial group to a control group that was given a compound that is likely to cause a similar rate of adverse events facilitates the formation of a false safety profile.
Anonymous (Turtles All The Way Down: Vaccine Science and Myth)
Resilience versus Robustness. Typically when we want to improve a system’s ability to avoid outages, handle failures gracefully when they occur and recover quickly when they happen, we often talk about resilience. (…) Robustness is the ability of a system that is able to react to expected variations; Resilience is having an organisation capable of adapting to things that have not been thought of, which could very well include creating a culture of experimentation through things like chaos engineering. For example, we are aware a specific machine could die, so we might bring redundancy into our system by load-balancing an instance, that is an example of addressing Robustness. Resiliency is the process of an organisation preparing itself to the fact that it cannot anticipate all potential problems. An important consideration here is that microservices do not necessarily give you robustness for free, rather they open up opportunities to design a system in such a way that it can better tolerate network partitions, service outages, and the like. Just spreading your functionality over multiple separate processes and separate machines does not guarantee improved robustness, quite the contrary, it may just increase your surface area of failure.
Sam Newman (Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith)
Die-Face Analysis
In the 1930s, J. B. Rhine and his colleagues recognized and took into account the possibility that some dice studies may have been flawed because the probabilities of die faces are not equal. With some dice, it is slightly more likely that one will roll a 6 face than a 1 face because the die faces are marked by scooping out bits of material. The 6 face, for example, has six scoops removed from the surface of that side of the die, so it has slightly less mass than the other die faces. On any random toss, that tiny difference in mass will make the 6 slightly more likely to land face up, followed in decreasing probability by the 5, 4, 3, 2, and 1 faces. Thus, an experiment that relied exclusively upon the 6 face as the target may have been flawed because, unless there were also control tosses with no mental intention applied, we could not tell whether above-chance results were due to a mind-matter interaction or to the slightly higher probability of rolling a 6. To see whether this bias was present in these dice studies, we sifted out all reports for which the published data allowed us to calculate the effective hit rate separately for each of the six die faces used under experimental and control conditions. In fact, the suspected biases were found, as shown in figure 8.3. The hit rates for both experimental and control tosses tended to increase from die faces 1 to 6. However, most of the experimental hit rates were also larger than the corresponding control hit rates, suggesting something interesting beyond the artifacts caused by die-face biases. For example, for die face 6 the experimental condition was significantly larger than the control with odds against chance of five thousand to one.
Figure 8.3. Relationship between die face and hit rates for experimental and control conditions. The error bars are 65 percent confidence intervals.
Because of the evidence that the die faces were slightly biased, we examined a subset of studies that controlled for these dice biases—studies using design protocols where die faces were equally distributed among the six targets. We referred to such studies as the “balanced-protocol subset.” Sixty-nine experiments met the balanced-protocol criteria. Our examination of those experiments resulted in three notable points: there was still highly significant evidence for mind-matter interaction, with odds against chance of greater than a trillion to one; the effects were constant across different measures of experimental quality; and the selective-reporting “file drawer” required a twenty-to-one ratio of unretrieved, nonsignificant studies for each observed study. Thus chance, quality, and selective reporting could not explain away the results.
Dice Conclusions
Our meta-analysis findings led us to conclude that a genuine mind-matter interaction did exist with experiments testing tossed dice. The effect had been successfully replicated in more than a hundred experiments by more than fifty investigators for more than a half-century.
Dean Radin (The Conscious Universe: The Scientific Truth of Psychic Phenomena)
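A sketch of the die-face comparison Radin describes, with invented counts. The point of the design: a bias-heavy face beats the 1/6 chance rate in both conditions, so the informative test compares experimental against control tosses rather than against chance alone.

```python
# Per-face hit rates under experimental vs. control conditions, with a
# two-proportion comparison via a 2x2 chi-squared test. All counts below
# are hypothetical, invented to illustrate the logic of the passage.
from scipy import stats

CHANCE = 1 / 6
# Hypothetical (hits, tosses): {die face: (experimental, control)}
data = {
    6: ((1850, 10000), (1760, 10000)),  # most-scooped face, most biased
    1: ((1630, 10000), (1625, 10000)),
}
for face, ((hx, nx), (hc, nc)) in data.items():
    table = [[hx, nx - hx], [hc, nc - hc]]          # hits vs. misses per condition
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"face {face}: exp {hx / nx:.3f} vs ctl {hc / nc:.3f} "
          f"(chance {CHANCE:.3f}), p = {p:.3g}")
```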
Science has an established tradition when it comes to aspects of nature that defy logical explanation and resist experimentation. It ignores them. This is actually a proper response, since no one wants researchers offering fudgy guesses. Official silence may not be helpful, but it is respectable. However, as a result, the very word “consciousness” may seem out of place in science books or articles, despite the fact that, as we’ll see, most famous names in quantum mechanics regarded it as central to the understanding of the cosmos.
Robert Lanza (The Grand Biocentric Design: How Life Creates Reality)
The first phase is from birth until the age of 28.6 years. During this first stage, your life is all about experiences and experimentation. You act very similarly to a third line profile. The second phase goes from 28.6 years until approximately fifty years of age. Most people don't really feel this phase until they are about thirty-five years old. During this second phase, you may notice that life isn't as “edgy” and sharp as it was in your
Karen Curry (Understanding Human Design: The New Science of Astrology: Discover Who You Really Are)
[Evidence that] punishment is in part non-strategic comes from the public goods experiment of Drew Fudenberg and Parag Pathak (2010). As in the standard game, following each round of contributions subjects were given information on the contributions of fellow group members and had the opportunity to deduct some of their own payoffs in order to lower the payoffs of another in the group. But unlike the usual treatment, in which the targets of punishment were informed of the level of punishment they received after each round, in the Fudenberg and Pathak experiment the levels of punishment were not to be revealed until the experiment was over, and those who punished others knew this. Thus the experimental design ruled out modifying the behavior of shirkers as a motive for punishment.
Samuel Bowles (A Cooperative Species: Human Reciprocity and Its Evolution)
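A toy model of the incentive structure behind such public goods experiments, to make the quoted design concrete. The parameters and payoff rules are conventional textbook choices, not Fudenberg and Pathak's actual protocol.

```python
# One round of a standard public goods game with costly punishment.
# Multiplier, endowment, and punishment costs are illustrative assumptions.
def payoffs(contributions, multiplier=1.6, endowment=20):
    """Each player keeps what they don't contribute, plus an equal
    share of the multiplied common pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def punish(base, punishments, cost=1, impact=3):
    """punishments[i][j] = points player i spends punishing player j.
    Each point costs the punisher `cost` and the target `impact`."""
    out = list(base)
    n = len(base)
    for i in range(n):
        for j in range(n):
            out[i] -= cost * punishments[i][j]
            out[j] -= impact * punishments[i][j]
    return out

base = payoffs([20, 20, 0])              # two cooperators, one free rider
pun = [[0, 0, 2], [0, 0, 0], [0, 0, 0]]  # player 0 punishes the free rider
print(base)                # free rider earns the most before punishment
print(punish(base, pun))   # punishing is costly for the punisher too
```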
This life? It is yours. Anyone who suggests that it is not or that it should not be is not here for you. You get to decide how you want to live it, what you want to call it, how and when and why you want to change it. No matter how many times you shift, no matter how often you adjust, no matter the experimentation or the wild exploration, no matter how many times you've been lost or how many times you've been found or any of the missteps you took along the way. Be willing to reinvent yourself fiercely, relentlessly, endlessly in the face of their anger, in response to their fear, in righteous rebellion. A holy(r)evolution. Take to the streets if you wish. Paint the protest sign with your own name. You are not required to stay who you were, or who you are, or even who you will be. You were made for metamorphosis. Designed for course correction. Built for shifting trajectories and smashing paradigms. You are here to become. And nobody can write the terms of your contract but you.
Jeanette LeBlanc
Here, in a pedagogical context, we might think of difficulty not simply as a problem to overcome but an ambivalent space from which to experiment with our historical present through critical play.
Patrick Jagoda (Experimental Games: Critique, Play, and Design in the Age of Gamification)
Instead of looking at leadership as decision making—as a rational process of sifting through data, analyzing trends, and making decisions based on predicting futures—a design framework emphasizes pragmatic experimentation.
Frank J. Barrett (Yes to the Mess: Surprising Leadership Lessons from Jazz)
Beyond One-Way ANOVA
The approach described in the preceding section is called one-way ANOVA. This scenario is easily generalized to accommodate more than one independent variable. These independent variables are either discrete (called factors) or continuous (called covariates). These approaches are called n-way ANOVA or ANCOVA (the “C” indicates the presence of covariates). Two-way ANOVA, for example, allows for testing of the effect of two different independent variables on the dependent variable, as well as the interaction of these two independent variables. An interaction effect between two variables describes the way that variables “work together” to have an effect on the dependent variable. This is perhaps best illustrated by an example. Suppose that an analyst wants to know whether the number of health care information workshops attended, as well as a person’s education, are associated with healthy lifestyle behaviors. Although we can surely theorize how attending health care information workshops and a person’s education can each affect an individual’s healthy lifestyle behaviors, it is also easy to see that the level of education can affect a person’s propensity for attending health care information workshops, as well. Hence, an interaction effect could also exist between these two independent variables (factors). The effects of each independent variable on the dependent variable are called main effects (as distinct from interaction effects). To continue the earlier example, suppose that in addition to population, an analyst also wants to consider a measure of the watershed’s preexisting condition, such as the number of plant and animal species at risk in the watershed. Two-way ANOVA produces the results shown in Table 13.4, using the transformed variable mentioned earlier. The first row, labeled “model,” refers to the combined effects of all main and interaction effects in the model on the dependent variable. This is the global F-test. The “model” row shows that the two main effects and the single interaction effect, when considered together, are significantly associated with changes in the dependent variable (p < .000). However, the results also show a reduced significance level of “population” (now, p = .064), which seems related to the interaction effect (p = .076). Although neither effect is significant at conventional levels, the results do suggest that an interaction effect is present between population and watershed condition (of which the number of at-risk species is an indicator) on watershed wetland loss. Post-hoc tests are only provided separately for each of the independent variables (factors), and the results show the same homogeneous grouping for both of the independent variables.
Table 13.4 Two-Way ANOVA Results
As we noted earlier, ANOVA is a family of statistical techniques that allow for a broad range of rather complex experimental designs. Complete coverage of these techniques is well beyond the scope of this book, but in general, many of these techniques aim to discern the effect of variables in the presence of other (control) variables. ANOVA is but one approach for addressing control variables. A far more common approach in public policy, economics, political science, and public administration (as well as in many other fields) is multiple regression (see Chapter 15). Many analysts feel that ANOVA and regression are largely equivalent.
Historically, the preference for ANOVA stems from its uses in medical and agricultural research, with applications in education and psychology. Finally, the ANOVA approach can be generalized to allow for testing on two or more dependent variables. This approach is called multiple analysis of variance, or MANOVA. Regression-based analysis can also be used for dealing with multiple dependent variables, as mentioned in Chapter 17.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
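A minimal sketch of the two-way ANOVA with interaction that Berman describes, run on synthetic data. The variable names mirror the workshops-and-education example in the quote and are purely illustrative, as are the built-in effect sizes.

```python
# Synthetic two-way ANOVA with main effects and an interaction term.
# All data and effect sizes below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
n = 200
workshops = rng.integers(0, 2, n)   # factor: attended workshops (0/1)
education = rng.integers(0, 2, n)   # factor: higher education (0/1)
healthy = (50 + 5 * workshops + 4 * education
           + 3 * workshops * education          # built-in interaction effect
           + rng.normal(0, 5, n))               # noise

df = pd.DataFrame({"workshops": workshops,
                   "education": education,
                   "healthy": healthy})
# The `*` in the formula expands to both main effects plus the interaction.
model = ols("healthy ~ C(workshops) * C(education)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))          # F-tests for each effect
```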
usually does not present much of a problem. Some analysts use t-tests with ordinal rather than continuous data for the testing variable. This approach is theoretically controversial because the distances among ordinal categories are undefined. This situation is avoided easily by using nonparametric alternatives (discussed later in this chapter). Also, when the grouping variable is not dichotomous, analysts need to make it so in order to perform a t-test. Many statistical software packages allow dichotomous variables to be created from other types of variables, such as by grouping or recoding ordinal or continuous variables. The second assumption is that the variances of the two distributions are equal. This is called homogeneity of variances. The use of pooled variances in the earlier formula is justified only when the variances of the two groups are equal. When variances are unequal (called heterogeneity of variances), revised formulas are used to calculate t-test test statistics and degrees of freedom. The difference between homogeneity and heterogeneity is shown graphically in Figure 12.2. Although we needn’t be concerned with the precise differences in these calculation methods, all t-tests first test whether variances are equal in order to know which t-test test statistic is to be used for subsequent hypothesis testing. Thus, every t-test involves a (somewhat tricky) two-step procedure. A common test for the equality of variances is the Levene’s test. The null hypothesis of this test is that variances are equal. Many statistical software programs provide the Levene’s test along with the t-test, so that users know which t-test to use—the t-test for equal variances or that for unequal variances. The Levene’s test is performed first, so that the correct t-test can be chosen.
Figure 12.2 Equal and Unequal Variances
The term robust is used, generally, to describe the extent to which test conclusions are unaffected by departures from test assumptions. T-tests are relatively robust for (hence, unaffected by) departures from assumptions of homogeneity and normality (see below) when groups are of approximately equal size. When groups are of about equal size, test conclusions about any difference between their means will be unaffected by heterogeneity. The third assumption is that observations are independent. (Quasi-) experimental research designs violate this assumption, as discussed in Chapter 11. The formula for the t-test test statistic, then, is modified to test whether the difference between before and after measurements is zero. This is called a paired t-test, which is discussed later in this chapter. The fourth assumption is that the distributions are normally distributed. Although normality is an important test assumption, a key reason for the popularity of the t-test is that t-test conclusions often are robust against considerable violations of normality assumptions that are not caused by highly skewed distributions. We provide some detail about tests for normality and how to address departures thereof. Remember, when nonnormality cannot be resolved adequately, analysts consider nonparametric alternatives to the t-test, discussed at the end of this chapter. Box 12.1 provides a bit more discussion about the reason for this assumption. A combination of visual inspection and statistical tests is always used to determine the normality of variables.
Two tests of normality are the Kolmogorov-Smirnov test (also known as the K-S test) for samples with more than 50 observations and the Shapiro-Wilk test for samples with up to 50 observations. The null hypothesis of both tests is that the variable is normally distributed.
Evan M. Berman (Essential Statistics for Public Managers and Policy Analysts)
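The two-step procedure described above maps directly onto a few SciPy calls. A minimal sketch with invented data (the sample sizes, means, and spreads are assumptions, not Berman's examples):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    group1 = rng.normal(100, 10, 40)   # hypothetical program A outcomes
    group2 = rng.normal(105, 20, 45)   # program B: note the larger spread

    # Normality checks (Shapiro-Wilk suits samples of up to ~50 observations).
    for name, g in [("group1", group1), ("group2", group2)]:
        print(f"{name}: Shapiro-Wilk p = {stats.shapiro(g).pvalue:.3f}")

    # Step 1: Levene's test; its null hypothesis is that variances are equal.
    lev = stats.levene(group1, group2)
    equal_var = lev.pvalue > 0.05

    # Step 2: the matching t-test (Welch's version when variances differ).
    t, p = stats.ttest_ind(group1, group2, equal_var=equal_var)
    print(f"equal variances assumed: {equal_var}; t = {t:.2f}, p = {p:.4f}")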
Complex environments for social interventions and innovations are those in which what to do to solve problems is uncertain and key stakeholders are in conflict about how to proceed. Informed by systems thinking and sensitive to complex nonlinear dynamics, developmental evaluation supports social innovation and adaptive management. Evaluation processes include asking evaluative questions, applying evaluation logic, and gathering real-time data to inform ongoing decision making and adaptations. The evaluator is often part of a development team whose members collaborate to conceptualize, design, and test new approaches in a long-term, ongoing process of continuous development, adaptation, and experimentation, keenly sensitive to unintended results and side effects. The evaluator’s primary function in the team is to infuse team discussions with evaluative questions, thinking, and data, and to facilitate systematic data-based reflection
Michael Quinn Patton (Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use)
Here is how HCI projects are typically done: 1. Observe people to find out what their real problems are. 2. Design innovative tools to help alleviate those problems. 3. Experimentally evaluate the tools to see if they actually help people
Philip J. Guo (The Ph.D. Grind: A Ph.D. Student Memoir)
Three major points are: You get probabilities, not definite answers. You don't get access to the wave function itself, but only a peek at processed versions of it. Answering different questions may require processing the wave function in different ways. Each of those three points raises big issues. The first raises the issue of determinism. Is calculating probabilities really the best we can do? The second raises the issue of many worlds. What does the full wave-function describe, when we're not peeking? Does it represent a gigantic expansion of reality, or is it just a mind tool, no more real than a dream? The third raises the issue of complementarity. To address different questions, we must process information in different ways. In important examples, those methods of processing prove to be mutually incompatible. Thus no one approach, however clever, can provide answers to all possible questions. To do full justice to reality, we must engage it from different perspectives. That is the philosophical principle of complementarity. It is a lesson in humility that quantum theory forces to our attention. We have, for example, Heisenberg's uncertainty principle: You can't measure both the position and the momentum of particles at the same time. Theoretically, it follows from the mathematics of wave functions. Experimentally, it arises because measurement requires active involvement with the object being measured. To probe is to interact, and to interact is potentially to disturb. Each of these issues is fascinating, and the first two have gotten a lot of attention. To me, however, the third seems especially well-grounded and meaningful. Complementarity is both a feature of physical reality and a lesson in wisdom, to which we shall return.
Frank Wilczek (A Beautiful Question: Finding Nature's Deep Design)
I am very fortunate to know T. Colin Campbell, PhD, professor emeritus of Cornell University and coauthor of the ground-breaking The China Study. I strongly recommend this book; it’s an expansive and hugely informative work on the effects of food on health. Campbell’s work is regarded by many as the definitive epidemiological examination of the relationship between diet and disease. He has received more than seventy grant years of peer-reviewed research funding (the gold standard of research), much of it from the National Institutes of Health (NIH), and he has authored more than 300 research papers. Dr. Campbell grew up on a dairy farm and believed wholeheartedly in the health value of eating animal protein. Indeed, he set out in his career to investigate how to produce more and better animal protein. Troublesome to his preconceived opinion about the goodness of dairy, Campbell kept running up against results that pointed to a different truth: that animal protein is disastrous to human health. Through a variety of experimental study designs, epidemiological evidence (studies of what affects the illness and health of populations), and observation of real-life conditions that had rational, biological explanations, Dr. Campbell has made a direct and powerful correlation between cancer and animal protein. For this book I asked Dr. Campbell to explain a little about how and why nutrition (both good and bad) affects cancer in our bodies.
Kathy Freston (Veganist: Lose Weight, Get Healthy, Change the World)
When I work with experimental gadgets, like new variations on virtual reality, in a lab environment, I am always reminded of how small changes in the details of a digital design can have profound unforeseen effects on the experiences of the humans who are playing with it. The slightest change in something as seemingly trivial as the use of a button can sometimes completely alter behavior patterns. For instance, Stanford University researcher Jeremy Bailenson has demonstrated that changing the height of one's avatar in immersive virtual reality transforms self-esteem and social self-perception. Technologies are extensions of ourselves, and, like the avatars in Jeremy's lab, our identities can be shifted by the quirks of gadgets. It is impossible to work with information technology without also engaging in social engineering.
Jaron Lanier (You Are Not a Gadget)
When technology enables such a huge leap in the prototyping process, it empowers the designer—who can immediately see all the implications of any idea he or she attempts. At the same time, the process removes so much guessing and therefore so many mistakes, and so much lost time and money. It also invites more experimentation and creativity. And
Thomas L. Friedman (Thank You for Being Late: An Optimist's Guide to Thriving in the Age of Accelerations)
Evolution is a formidable process that brings forth unfathomable beauty and complexity not through a grand design, but by means of relentless, small-scale, parallel experimentation. Evolution
Frederic Laloux (Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness)
Aluminum Hulls. In 1931, in England, aluminum was used in the construction of the Diana II, a 55-foot express cruiser which is still in use. Aluminum is non-magnetic and is almost never found in the elemental state. As a ductile metal it is malleable and has about one-third the density and stiffness of steel. Aluminum is corrosion resistant and easily machined, cast, drawn, and extruded; however, the procedure for welding it is more difficult and differs from that for other metals. In 1935, the Bath Iron Works in Maine built an experimental hull for Alcoa. Named the Alumette, it was floated to the James River in Newport News, Virginia, for the purpose of testing its structural properties. The MV Sacal Borincano was an all-aluminum roll-on/roll-off, or Ro-Ro, ship designed to carry 40 highway trailers between Miami, FL, and San Juan, PR. The relatively small ship was 226 feet in length and had a displacement of 2,000 tons. The South Atlantic and Caribbean Line Inc. operated the vessel, which was constructed by American Marine in 1967, with help from the Reynolds Metal Company. The vessel was constructed completely of heli-arced aluminum plates to achieve a working speed of 14 knots with a diesel-electric power plant of 3,000 hp generating 2,240 kW.
Hank Bracker (Suppressed I Rise)
Manage Your Team’s Collective Time. Time management is a group endeavor. The payoff goes far beyond morale and retention. By Leslie Perlow. Most professionals approach time management the wrong way. People who fall behind at work are seen to be personally failing—just as people who give up on diet or exercise plans are seen to be lacking self-control or discipline. In response, countless time management experts focus on individual habits, much as self-help coaches do. They offer advice about such things as keeping better to-do lists, not checking e-mail incessantly, and not procrastinating. Of course, we could all do a better job managing our time. But in the modern workplace, with its emphasis on connectivity and collaboration, the real problem is not how individuals manage their own time. It’s how we manage our collective time—how we work together to get the job done. Here is where the true opportunity for productivity gains lies. Nearly a decade ago I began working with a team at the Boston Consulting Group to implement what may sound like a modest innovation: persuading each member to designate and spend one weeknight out of the office and completely unplugged from work. The intervention was aimed at improving quality of life in an industry that’s notorious for long hours and a 24/7 culture. The early returns were positive; the initiative was expanded to four teams of consultants, and then to 10. The results, which I described in a 2009 HBR article, “Making Time Off Predictable—and Required,” and in a 2012 book, Sleeping with Your Smartphone, were profound. Consultants on teams with mandatory time off had higher job satisfaction and a better work/life balance, and they felt they were learning more on the job. It’s no surprise, then, that BCG has continued to expand the program: As of this spring, it has been implemented on thousands of teams in 77 offices in 40 countries. During the five years since I first reported on this work, I have introduced similar time-based interventions at a range of companies—and I have come to appreciate the true power of those interventions. They put the ownership of how a team works into the hands of team members, who are empowered and incentivized to optimize their collective time. As a result, teams collaborate better. They streamline their work. They meet deadlines. They are more productive and efficient. Teams that set a goal of structured time off—and, crucially, meet regularly to discuss how they’ll work together to ensure that every member takes it—have more open dialogue, engage in more experimentation and innovation, and ultimately function better. CREATING “ENHANCED PRODUCTIVITY” DAYS. One of the insights driving this work is the realization that many teams stick to tried-and-true processes that, although familiar, are often inefficient. Even companies that create innovative products rarely innovate when it comes to process. This realization came to the fore when I studied three teams of software engineers working for the same company in different cultural contexts. The teams had the same assignments and produced the same amount of work, but they used very different methods. One, in Shenzhen, had a hub-and-spokes org chart—a project manager maintained control and assigned the work. Another, in Bangalore, was self-managed and specialized, and it assigned work according to technical expertise.
The third, in Budapest, had the strongest sense of being a team; its members were the most versatile and interchangeable. Although, as noted, the end products were the same, the teams’ varying approaches yielded different results. For example, the hub-and-spokes team worked fewer hours than the others, while the most versatile team had much greater flexibility and control over its schedule. The teams were completely unaware that their counterparts elsewhere in the world were managing their work differently. My research provide
Anonymous
The multiple and dissident lifestyles emerging in the 1960s also indicated to Revel that the United States had more internal flexibility to tolerate change than did any other country and that this diversity would produce sufficient human vitality to make the United States a society of experimentation in new expressions of human experience. But the most important quality Revel found in Americans was their willingness to admit collective guilt in the treatment of racial minorities. Pointing out that the educational system of the Western nations from the time of the Greeks until the present had been designed to justify crimes committed against humanity in the name of national honor or religion, Revel noted that “the Germans refused to admit the crimes of the Nazi; and the English, the French, and the Italians all refused to admit the atrocities committed during their colonial wars.”4 The United States, as Revel saw it, was the first nation in history to confront seriously its own misdeeds and to make some effort to change national policy to make amends for acknowledged wrongs. This manifestation of a collective conscience indicated a greater sensitivity to human needs and an ability to empathetically deal with foreign cultures and values. This was the vital characteristic needed to provide a stance of moral leadership to support a planetary transformation of cultures. Rather
Vine Deloria Jr. (Metaphysics of Modern Existence)
Tesla applied for a patent on an electrical coil that is the most likely candidate for a non-mechanical successor of his energy extractor. This is his “Coil for Electro-Magnets,” patent #512,340. It is a curious design: unlike an ordinary coil made by turning wire on a tube form, this one uses two wires laid next to each other on a form but with the end of the first one connected to the beginning of the second one. In the patent Tesla explains that this double coil will store many times the energy of a conventional coil. The patent, however, gives no hint of what might have been its more unusual capability. In an article for Century Magazine, Tesla compares extracting energy from the environment to the work of other scientists who were, at that time, learning to condense atmospheric gases into liquids. In particular, he cited the work of a Dr. Karl Linde, who had discovered what Tesla described as a self-cooling method for liquefying air. As Tesla said, “This was the only experimental proof which I was still wanting that energy was obtainable from the medium in the manner contemplated by me.” What ties the Linde work to Tesla's electromagnet coil is that both of them used a double path for the material they were working with. Linde had a compressor to pump the air to a high pressure, let the pressure fall as it traveled through a tube, and then used that cooled air to reduce the temperature of the incoming air by having it travel back up the first tube through a second tube enclosing the first. The already cooled air added to the cooling process of the machine and quickly condensed the gases to a liquid. Tesla's intent was to condense the energy trapped between the earth and its upper atmosphere and to turn it into an electric current. He pictured the sun as an immense ball of electricity, positively charged with a potential of some 200 billion volts. The Earth, on the other hand, is charged with negative electricity. The tremendous electrical force between these two bodies constituted, at least in part, what he called cosmic energy. It varied from night to day and from season to season, but it was always present. Tesla's patents for electrical generators and motors were granted in the late 1880's. During the 1890's the large electric power industry, in the form of Westinghouse and General Electric, came into being. With tens of millions of dollars invested in plants and equipment, the industry was not about to abandon a very profitable ten-year-old technology for yet another new one. Tesla saw that profits could be made from the self-acting generator, but somewhere along the line it was pointed out to him what a negative impact the device would have on the newly emerging technological revolution of the late 19th and early 20th centuries. At the end of his article in Century he wrote: “I worked for a long time fully convinced that the practical realization of the method of obtaining energy from the sun would be of incalculable industrial value, but the continued study of the subject revealed the fact that while it will be commercially profitable if my expectations are well founded, it will not be so to an extraordinary degree.”
Tim R. Swartz (The Lost Journals of Nikola Tesla: Time Travel - Alternative Energy and the Secret of Nazi Flying Saucers)
The history of Unix should have prepared us for what we’re learning from Linux (and what I’ve verified experimentally on a smaller scale by deliberately copying Linus’s methods [Note 12]). That is, while coding remains an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire communities. The developer who uses only his or her own brain in a closed project is going to fall behind the developer who knows how to create an open, evolutionary context in which feedback exploring the design space, code contributions, bug-spotting, and other improvements come from hundreds (perhaps thousands) of people.
Eric S. Raymond (The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary)
A psychologist has noted the “garbage in/garbage out” circularity of “elegant experimental designs and statistical analyses applied to biologically meaningless racial categories.”20
Barbara J. Fields (Racecraft: The Soul of Inequality in American Life)
Prefer experimentation over analysis. It’s far more reliable to get good at cheap validation than it is to get great at consistently picking the right solution. Even if you’re brilliant, you are almost always missing essential information when you begin designing. Analysis can often uncover missing information, but it depends on knowing where to look, whereas experimentation allows you to find problems that you didn’t anticipate.
Will Larson (An Elegant Puzzle: Systems of Engineering Management)
How Should I Structure My Pricing? Pricing is the biggest lever in SaaS, and almost no one gets it right out of the gate. Fortunately, you don’t need a PhD to structure your pricing well. Like most things in SaaS, finding the right pricing structure is one part theory, one part experimentation, and one part founder intuition. I wish I could tell you a single “correct” structure, but it varies based on your customer base, the value provided, and the competitive landscape. Most founders price their product too low or create confusing tiers that don’t align with the value a customer receives from the product. On the low end, if you have a product aimed at consumers, you can get away with charging $10 to $15 a month. The problem is at that price point, you’re going to be dealing with high churn, and you won’t have much budget to acquire customers. That can be brutal, but if you have a no-touch sign-up process with a product that sells itself, you can get away with it. Castos’s podcasting software and Snappa’s quick graphic design software are good examples of products that do well with a low average revenue per account (ARPA). You’ll have more breathing room (and less churn) if you aim for an ARPA of $50 a month or more. In niche markets—or where a demo is required or sales cycles are longer—aim higher (e.g., $250 a month and up). If you have a high-touch sales process that involves multiple calls, you need to charge enough to justify the cost of selling it. For example, $1,000 a month and up is a reasonable place to start. If you’re making true enterprise sales that require multiple demos and a procurement process, aim for $30,000 a year and up (into six figures). One of the best signals to guide your pricing is other SaaS tools, and I don’t just mean competition. Any SaaS tool a company in your space might replace you with, a complementary tool or a tool similar to yours in a different vertical can offer guidance, but make sure you don’t just compare features; compare how it’s sold. As mentioned above, the sales process has tremendous influence over how a product should be priced. There are so many SaaS tools out now that a survey of competitive and adjacent tools can give you a mental map of the range of prices you can charge. No matter where your business sits, one thing is true: “If no one’s complaining about your price, you’re probably priced too low.
Rob Walling (The SaaS Playbook: Build a Multimillion-Dollar Startup Without Venture Capital)
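For readers who want the guidance above in an executable form, here is the passage's rough pricing ladder restated as a lookup table. The category labels and the $2,500-a-month figure (about $30,000 a year) are my paraphrase of the numbers quoted above, not Walling's own code or framework.

    # Rough ARPA floors implied by the passage, in USD per month.
    PRICE_FLOORS = {
        "no-touch, consumer, product sells itself": 10,       # expect high churn
        "no-touch with breathing room": 50,
        "niche market / demo or longer sales cycle": 250,
        "high-touch, multi-call sales process": 1_000,
        "enterprise, multiple demos and procurement": 2_500,  # ~$30k+/year
    }

    def suggested_floor(sales_motion: str) -> int:
        """Return the minimum monthly price the passage suggests for a motion."""
        return PRICE_FLOORS[sales_motion]

    print(suggested_floor("high-touch, multi-call sales process"))  # 1000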
anything said about how useful the material was.” The material was not just useful; it was priceless. In August 1941, another spy for the Soviet Union, the British civil servant John Cairncross, gave his handler a copy of the Maud Committee report outlining the aims of the nuclear weapons program. Fuchs provided the detailed reality of the bomb’s development, step by experimental step: the designs for a diffusion plant, estimates of the critical mass for explosive U-235, the measurement of fission, and the increasing British cooperation with American nuclear scientists. At the end of 1941, Fuchs co-authored two important papers on the separation of the isotopes of U-235
Ben Macintyre (Agent Sonya: Moscow's Most Daring Wartime Spy)
Happiness hacking is the experimental design practice of translating positive-psychology research findings into game mechanics.
Jane McGonigal (Reality Is Broken: Why Games Make Us Better and How They Can Change the World)
Clearly, written in Standard English "The photon is a wave" and "The photon is a particle" contradict each other, just like the sentences "Robin is a boy" and "Robin is a girl." Nonetheless, all through the nineteenth century physicists found themselves debating about this and, by the early 1920s, it became obvious that the experimental evidence could not resolve the question, since the experimental evidence depended on the instruments or the instrumental set-up (design) of the total experiment. One type of experiment always showed light traveling in waves, and another type always showed light traveling as discrete particles.
Robert Anton Wilson (Quantum Psychology: How Brain Software Programs You and Your World)
Two men are travelling together along a road. One of them believes that it leads to a Celestial City, the other that it leads nowhere; but since this is the only road there is, both must travel it. Neither has been this way before, and therefore neither is able to say what they will find around each next corner. During their journey they meet both with moments of refreshment and delight, and with moments of hardship and danger. All the time one of them thinks of his journey as a pilgrimage to the Celestial City and interprets the pleasant parts as encouragements and the obstacles as trials of his purpose and lessons in endurance, prepared by the king of that city and designed to make of him a worthy citizen of the place when at last he arrives there. The other, however, believes none of this and sees their journey as an unavoidable and aimless ramble. Since he has no choice in the matter, he enjoys the good and endures the bad. But for him there is no Celestial City to be reached, no all-encompassing purpose ordaining their journey; only the road itself and the luck of the road in good weather and in bad. During the course of the journey the issue between them is not an experimental one. They do not entertain different expectations about the coming details of the road, but only about its ultimate destination. And yet when they do turn the last corner it will be apparent that one of them has been right all the time and the other wrong. Thus although the issue between them has not been experimental, it has nevertheless from the start been a real issue. They have not merely felt differently about the road; for one was feeling appropriately and the other inappropriately in relation to the actual state of affairs. Their opposed interpretations of the road constituted genuinely rival assertions, though assertions whose assertion-status has the peculiar characteristic of being guaranteed retrospectively by a future crux.
John Hick
A physicist decides to demonstrate the inaccuracy of a proposition; in order to deduce from this proposition the prediction of a phenomenon and institute the experiment which is to show whether this phenomenon is or is not produced, in order to interpret the results of this experiment and establish that the predicted phenomenon is not produced, he does not confine himself to making use of the proposition in question; he makes use also of a whole group of theories accepted by him as beyond dispute. The prediction of the phenomenon, whose nonproduction is to cut off debate, does not derive from the proposition challenged if taken by itself, but from the proposition at issue joined to that whole group of theories; if the predicted phenomenon is not produced, the only thing the experiment teaches us is that among the propositions used to predict the phenomenon and to establish whether it would be produced, there is at least one error; but where this error lies is just what it does not tell us... In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed.
Pierre Duhem (The Aim and Structure of Physical Theory (Princeton Science Library))
The Most Important Thing About a Technology Is How It Changes People When I work with experimental digital gadgets, like new variations on virtual reality, in a lab environment, I am always reminded of how small changes in the details of a digital design can have profound unforeseen effects on the experiences of the humans who are playing with it. The slightest change in something as seemingly trivial as the ease of use of a button can sometimes completely alter behavior patterns.
Jaron Lanier (You Are Not A Gadget)
Stochastic and Reactive Effects Replication may be difficult to achieve if the phenomenon under study is inherently stochastic, that is, if it changes with time. Moreover, the phenomenon may react to the experimental situation, altering its characteristics because of the experiment. These are particularly sticky problems in the behavioral and social sciences, for it is virtually impossible to guarantee that an individual tested once will be exactly the same when tested later. In fact, when dealing with living organisms, we cannot realistically expect strict stability of behavior over time. Researchers have developed various experimental designs that attempt to counteract this problem of large fluctuations in behavior. Replication is equally problematic in medical research, for the effects of a drug as well as the symptoms of a disease change with time, confounding the observed course of the illness. Was the cure accelerated or held back by the introduction of the test drug? Often the answer can only be inferred based on what happens on average to a group of test patients compared to a group of control patients. Even attempts to keep experimenters and test participants completely blind to the experimental manipulations do not always address the stochastic and reactive elements of the phenomena under study. Besides the possibility that an effect may change over time, some phenomena may be inherently statistical; that is, they may exist only as probabilities or tendencies to occur. Experimenter Effects In a classic book entitled Pitfalls in Human Research, psychologist Theodore X. Barber discusses ten ways in which behavioral research can go wrong.11 These include such things as the “investigator paradigm effect,” in which the investigator’s conceptual framework biases the way an experiment is conducted and interpreted, and the “experimenter personal attributes effect,” where variables such as age, sex, and friendliness interact with the test participants’ responses. A third pitfall is the “experimenter unintentional expectancy effect”; that is, the experimenter’s prior expectations can influence the outcome of an experiment. Researchers’ expectations and prior beliefs affect how their experiments are conducted, how the data are interpreted, and how other investigators’ research is judged. This topic, discussed in chapter 14, is relevant to understanding the criticisms of psi experiments and how the evidence for psi phenomena has often been misinterpreted.
Dean Radin (The Conscious Universe: The Scientific Truth of Psychic Phenomena)
The results of these experiments also bear some resemblance to Jung’s concept of “synchronicity,” or meaningful coincidences in time.21 As with synchronicity, we seem to be witnessing meaningful relationships between mind and matter at certain times. But synchronicity, according to Jung, involves acausal relationships, and here we were able to predict synchronistic-like events. Jung believed that people could experience but not understand in causal terms how synchronicities occurred: We delude ourselves with the thought that we know much more about matter than about a “metaphysical” mind or spirit and so we overestimate material causation and believe that it alone affords us a true explanation of life. But matter is just as inscrutable as mind. As to the ultimate things we can know nothing, and only when we admit this do we return to a state of equilibrium.22 We are more confident than Jung about what may be possible because it appears that with clever experimental designs, some aspects of Jung’s unus mundus (one world) are in fact responsive to experimental probes, and some forms of synchronistic events can be—paradoxically—planned. We expect that Nature will reveal to us anything we are clever enough to ask for, but we also know that the revealed information is usually shrouded in unstated (and often unexamined) assumptions. At a minimum, we’re beginning to glimpse that past assumptions about rigid separations between mind and matter were probably wrong.
Dean Radin (The Conscious Universe: The Scientific Truth of Psychic Phenomena)
The significance of shared arousal was demonstrated in an ingenious experiment designed by researcher Joshua Conrad Jackson and published in the journal Scientific Reports in 2018. Jackson and his colleagues set out “to simulate conditions found in actual marching rituals”—which, they noted, “required the use of a larger venue than a traditional psychology laboratory.” They chose as the setting for their study a professional sports stadium, with a high-definition camera mounted twenty-five meters above the action. After gathering 172 participants in the stadium and dividing them into groups, the experimenters manipulated their experience of both synchrony and arousal: one group was directed to walk with their fellow members in rank formation, while a second group walked in a loose and uncoordinated fashion; a third group speed-walked around the stadium, boosting their physiological arousal, while a fourth group strolled at a leisurely pace. Jackson and his collaborators then had each group engage in the same set of activities, asking them to gather themselves into cliques, to disperse themselves as they wished across the stadium’s playing field, and finally to cooperate in a joint task (collecting five hundred metal washers scattered across the field). The result: when participants had synchronized with one another, and when they had experienced arousal together, they then behaved in a distinctive way—forming more inclusive groups, standing closer to one another, and working together more efficiently (observations made possible by analyzing footage recorded by the roof-mounted camera). The findings suggest that “behavioral synchrony and shared physiological arousal in small groups independently increase social cohesion and cooperation,” the researchers write; they help us understand “why synchrony and arousal often co-occur in rituals around the world.
Annie Murphy Paul (The Extended Mind: The Power of Thinking Outside the Brain)
As he shifted from one foot to the other, he recalled the fully mechanized saloon he, Finnerty, and Shepherd had designed when they'd been playful young engineers. To their surprise, the owner of a restaurant chain had been interested enough to give the idea a try. They'd set up the experimental unit about five doors down from where Paul now stood, with coin machines and an endless belt to do the serving, with germicidal lamps cleaning the air, with uniform, healthful light, with continuous soft music from a tape recorder, with seats scientifically designed by an anthropologist to give the average man the absolute maximum in comfort. The first day had been a sensation, with a waiting line extending blocks. Within a week of the opening, curiosity had been satisfied, and it was a big day when five customers stopped in. Then this place had opened up almost next door, with a dust-and-germ trap of a Victorian bar, bad light, poor ventilation, and an unsanitary, inefficient, and probably dishonest bartender. It was an immediate and unflagging success.
Kurt Vonnegut Jr. (Player Piano)
During the past decade, observant readers have seen many news items about stunning breakthroughs in battery designs, but I cannot find any ever-accelerating growth in the performance of these portable energy storage devices in the past fifty years. In 1900 the best battery (lead-acid) had an energy density of 25 watt-hours per kilogram; in 2022 the best lithium-ion batteries deployed on a large commercial scale (not the best experimental devices) had an energy density twelve times higher—and this gain corresponds to exponential growth of just 2 percent a year. That is very much in line with the growth of performances of many other industrial techniques and devices—and an order of magnitude below Moore’s law expectations. Moreover, even batteries with ten times the 2022 (commercial) energy density (that is, approaching 3,000 Wh/kg) would store only about a quarter of the energy contained in a kilogram of kerosene, making it clear that jetliners energized by batteries are not on any practical horizon.
Vaclav Smil (Invention and Innovation: A Brief History of Hype and Failure)
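The growth-rate claim in the passage is easy to verify with a back-of-envelope calculation. In the sketch below, the ~12,000 Wh/kg specific energy of kerosene is a standard figure I am assuming; it is not stated in the excerpt.

    # 25 Wh/kg in 1900 to roughly 12x that by 2022: implied annual growth.
    ratio, years = 12.0, 2022 - 1900
    annual_growth = ratio ** (1 / years) - 1
    print(f"implied growth: {annual_growth:.1%} per year")    # ~2.1%

    # Even a battery 10x better than 2022's (~3,000 Wh/kg) holds about a
    # quarter of kerosene's ~12,000 Wh/kg specific energy.
    print(f"battery/kerosene: {3_000 / 12_000:.0%}")          # 25%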
One philosopher, Karl Popper, contended that the limitations of the inferential, experimental method, which characterized science since Bacon, could not establish the truth of a proposition; it could only eliminate the alternative explanations that were tested.13 Thus, “truth” was tentative, waiting to be modified or even upended by the next set of experiments. The other, Thomas Kuhn, contended that, in fact, scientists were not objective seekers of truth, but rather, engaged in confirming the current “truth,” what Kuhn called the prevailing “paradigm” in the discipline. In the practice of what Kuhn called “normal science,” scientists were merely elaborating on this paradigm or using it to explain away any anomalies in their findings. It was only when anomalies accumulate to the point of crisis, when the current paradigm can no longer hold up, that the science opens to new, revolutionary ways of thinking that replace the old.
Robert Kozma (Make the World a Better Place: Design with Passion, Purpose, and Values)
Like every tokamak, ITER has central solenoid coils, large toroidal and poloidal magnets (respectively around and along the doughnut shape). The basic specifications are a vacuum vessel plasma of 6.2 meter radius and 830 cubic meters in volume, with a confining magnetic field of 5.3 tesla and a rated fusion power of 500 MW (thermal). This heat output would correspond to Q ≥ 10 (it would require the injection of 50 MW to heat the hydrogen plasma to about 150 million degrees) and hence would achieve, for the first time on Earth, a burning plasma of the kind required for any continuously operating fusion reactor. ITER would generate burning plasmas during pulses lasting 400 to 600 seconds, time spans sufficient to demonstrate the feasibility of building an actual electricity-generating fusion power plant. But it is imperative to understand that ITER is an experimental device designed to demonstrate the feasibility of net energy generation and to provide the foundation for larger, and eventually commercial, fusion designs, not to be a prototype of an actual energy-generating device.
Vaclav Smil (Invention and Innovation: A Brief History of Hype and Failure)
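A quick sanity check of the fusion-gain arithmetic in the passage. The per-pulse energy figures are my own extension from the stated 400 to 600 second pulse lengths, not numbers Smil gives.

    p_fusion_mw, p_heating_mw = 500.0, 50.0
    print(f"Q = {p_fusion_mw / p_heating_mw:.0f}")       # Q = 10

    for pulse_s in (400, 600):
        energy_gj = p_fusion_mw * 1e6 * pulse_s / 1e9    # watts x seconds -> GJ
        print(f"{pulse_s} s pulse -> {energy_gj:.0f} GJ of thermal energy")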
Modern science has not created anything that does not exist in the universe. Rather, it has made use of matter which already exists in the universe. The properties in matter exist not because humans have created them. Science is knowledge established by observation and experimentation through an objective process. Scientific knowledge substantiates that the design, variety and balance found in the universe illustrate complexity, intricacy and detail. Science tries to disentangle useful knowledge about the matter so that this knowledge can be put to effective use. But, as Nobel Laureate Richard Feynman explains ‘science cannot be an arbiter in moral matters’.
Salman Ahmed Shaikh (Reflections on the Origins in the Post COVID-19 World)
THE FIRST STEP of forecasting the future requires a trip to the fringes of science, technology, design, and society, to where unusual experimentation is taking place. For it’s at the fringe that all trends are born.
Amy Webb (The Signals Are Talking: Why Today's Fringe Is Tomorrow's Mainstream)
There was a crossover study in which women were instructed to eat plant-based foods for a few months to see how it would affect their menstrual cycles. But then they were to switch back to their baseline diets to note the contrast, a so-called A-B-A study design where you reverse the experimental variable. The problem is that some participants felt so good eating healthfully—they were losing weight without any calorie counting or portion control, they had more energy, their periods got better, and they experienced better digestion and better sleep—that some refused to go back to their regular diets, which kind of messes up the study.2462
Michael Greger (How Not to Diet)
By year’s end Diebner had dozens of scientists under his watch across Germany refining the uranium-machine theory and building the first small experimental designs.
Neal Bascomb (The Winter Fortress: The Epic Mission to Sabotage Hitler's Atomic Bomb)
design – It means applying, in real life, everything that has been learned about the way of thinking and the processes. Knowing is not enough; you have to practice. So pay attention to experiences, including emotional ones; try out potential solutions; tolerate mistakes; encourage the expression of different ways of thinking; learn throughout the process. If I were to sum up much of what has been learned about this new way of thinking, I would say: “Don't settle
Marcelo Pimenta (Economia da Paixão: Como ganhar dinheiro e viver mais e melhor fazendo o que ama (Portuguese Edition))
PayPal’s big challenge was to get new customers. They tried advertising. It was too expensive. They tried BD [business development] deals with big banks. Bureaucratic hilarity ensued. … the PayPal team reached an important conclusion: BD didn’t work. They needed organic, viral growth. They needed to give people money. So that’s what they did. New customers got $10 for signing up, and existing ones got $10 for referrals. Growth went exponential, and PayPal wound up paying $20 for each new customer. It felt like things were working and not working at the same time; 7 to 10 percent daily growth and 100 million users was good. No revenues and an exponentially growing cost structure were not. Things felt a little unstable. PayPal needed buzz so it could raise more capital and continue on. (Ultimately, this worked out. That does not mean it’s the best way to run a company. Indeed, it probably isn’t.)2 Thiel’s account captures both the desperation of those early days and the almost random experimentation the company resorted to in an effort to get PayPal off the ground. But in the end, the strategy worked. PayPal dramatically increased its base of consumers by incentivizing new sign-ups. Most important, the PayPal team realized that getting users to sign up wasn’t enough; they needed them to try the payment service, recognize its value to them, and become regular users. In other words, user commitment was more important than user acquisition. So PayPal designed the incentives to tip new customers into the ranks of active users. Not only did the incentive payments make joining PayPal feel riskless and attractive, they also virtually guaranteed that new users would start participating in transactions—if only to spend the $10 they’d been gifted in their accounts. PayPal’s explosive growth triggered a number of positive feedback loops. Once users experienced the convenience of PayPal, they often insisted on paying by this method when shopping online, thereby encouraging sellers to sign up. New users spread the word further, recommending PayPal to their friends. Sellers, in turn, began displaying PayPal logos on their product pages to inform buyers that they were prepared to honor this method of online payment. The sight of those logos informed more buyers of PayPal’s existence and encouraged them to sign up. PayPal also introduced a referral fee for sellers, incentivizing them to bring in still more sellers and buyers. Through these feedback loops, the PayPal network went to work on its own behalf—it served the needs of users (buyers and sellers) while spurring its own growth.
Geoffrey G. Parker (Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You: How Networked Markets Are Transforming the Economy―and How to Make Them Work for You)
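To make the quoted growth figures concrete: at 7 to 10 percent daily growth, a user base doubles roughly every week to ten days. A small check (my arithmetic, not Parker's):

    import math

    for daily_rate in (0.07, 0.10):
        doubling_days = math.log(2) / math.log(1 + daily_rate)
        print(f"{daily_rate:.0%}/day -> doubles every {doubling_days:.1f} days")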
From ancient times and into the Middle Ages, man had dreamed of taking to the sky, of soaring into the blue like the birds. One savant in Spain in the year 875 is known to have covered himself with feathers in the attempt. Others devised wings of their own design and jumped from rooftops and towers—some to their deaths—in Constantinople, Nuremberg, Perugia. Learned monks conceived schemes on paper. And starting about 1490, Leonardo da Vinci made the most serious studies. He felt predestined to study flight, he said, and related a childhood memory of a kite flying down onto his cradle. According to brothers Wilbur and Orville Wright of Dayton, Ohio, it began for them with a toy from France, a small helicopter brought home by their father, Bishop Milton Wright, a great believer in the educational value of toys. The creation of a French experimenter of the nineteenth century, Alphonse Pénaud, it was little more than a stick with twin propellers and twisted rubber bands, and probably cost 50 cents. “Look here, boys,” said the Bishop, something concealed in his hands. When he let go it flew to the ceiling. They called it the “bat.” Orville’s first teacher in grade school, Ida Palmer, would remember him at his desk tinkering with bits of wood. Asked what he was up to, he told her he was making a machine of a kind that he and his brother were going to fly someday.
David McCullough (The Wright Brothers)
Each of the sub-experiments was run similarly—with each engineer owning his mission and design and collaborating at will with their peers. The primary management pull is simply to ensure that their work is reusable, shareable, and avoids limitations. Continually ask them to think bigger even as they code individual features. Google’s culture of sharing ideas—supporting bottom-up experimentation—and organizational flexibility creates a fertile world for test innovation. Whether it ends up valuable or not, you will never know unless you build it and try it out on real-world engineering problems. Google gives engineers the ability to try if they want to take the initiative, as long as they know how to measure success.
James A. Whittaker (How Google Tests Software)
Successful architects tend to be highly skilled in presenting and arguing for a proposed design, both to clients and to colleagues. The rhetoric of these presentations is often filled with metaphors that capture the leading concepts as well as the important experimental or sensual qualities of the design.
Christoph Grafe (OASE 70: Architecture and Literature)
In his landmark 1966 book, The Triumph of the Therapeutic, Rieff said the death of God in the West had given birth to a new civilization devoted to liberating the individual to seek his own pleasures and to managing emergent anxieties. Religious Man, who lived according to belief in transcendent principles that ordered human life around communal purposes, had given way to Psychological Man, who believed that there was no transcendent order and that life’s purpose was to find one’s own way experimentally. Man no longer understood himself to be a pilgrim on a meaningful journey with others, but as a tourist who traveled through life according to his own self-designed itinerary, with personal happiness his ultimate goal.
Rod Dreher (Live Not by Lies: A Manual for Christian Dissidents)
Encouraged by the success of free-response methods, investigators began to reason that because foreknowledge is commonly associated with visions, dreams, and other nonordinary states of awareness,183 then perhaps conscious precognitive impressions are indistinct or distorted versions of information that is filtered through psychological biases. This speculation led to experiments designed to monitor bodily responses to future targets before those responses reached conscious awareness. I began to conduct this type of experiment in the early 1990s, after reading about a few promising studies published decades before but apparently not followed up. I called these studies of presentiment rather than precognition to highlight the distinction between unconscious prefeeling as opposed to conscious preknowing. I also decided to use experimental designs that were virtually identical to thousands of studies conducted within the conventional discipline of psychophysiology, anticipating that this might make the experimental paradigm more palatable to the mainstream, and because I could also employ commonly used methods of analysis. In its simplest form, a presentiment experiment predicts that if the immediate future contains an emotional response, then that will cause more nervous system activity to occur before that response than it would if the future response was going to be calm. That is, the concept of presentiment hypothesizes that aspects of our future experience that we pay special attention to, like emotional upsets or startling events, “ripple backward” in time and can affect us now. A classical real-world example is when you’re driving along a street, approaching an intersection, and you get a bad feeling.
Dean Radin (Supernormal: Science, Yoga and the Evidence for Extraordinary Psychic Abilities)
Recognition of the danger of drawing false inferences from incomplete, though correct, information has led scientists to a preference for designed experimentation above mere observation of natural phenomena.
John Mandel (The Statistical Analysis of Experimental Data (Dover Books on Mathematics))
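Mandel's point can be made concrete with a toy simulation: when a hidden variable drives both "treatment" uptake and the outcome, passive observation yields a confident but false inference, while random assignment does not. This sketch is entirely my illustration, with an invented confounder; it is not from Mandel's book.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    confounder = rng.normal(size=n)            # e.g., underlying health
    true_effect = 0.0                          # the treatment does nothing

    # Passive observation: better-off units self-select into treatment.
    treated_obs = (confounder + rng.normal(size=n)) > 0
    outcome_obs = confounder + rng.normal(size=n)
    naive = outcome_obs[treated_obs].mean() - outcome_obs[~treated_obs].mean()

    # Designed experiment: random assignment breaks the confounding link.
    treated_rct = rng.random(n) < 0.5
    outcome_rct = confounder + true_effect * treated_rct + rng.normal(size=n)
    randomized = outcome_rct[treated_rct].mean() - outcome_rct[~treated_rct].mean()

    print(f"observational estimate: {naive:.2f}")       # biased well above 0
    print(f"randomized estimate:    {randomized:.2f}")  # near the true 0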
From one test session to the next, the interference patterns tended to differ because of slight variations in ambient temperature and vibration. So for the sake of simplicity I based the formal statistical analysis not on a change in the precise shape of the interference pattern, but rather on a decrease in the average illumination level over the entire camera image during the concentration or “mental blocking” condition as compared to the relaxed or “mental passing” condition. To test the design and analytical procedures for possible problems, I also included control runs to allow the system to record interference patterns automatically without anyone being present in the laboratory or paying attention to the interferometer. Data from those control sessions were analyzed in the same way as in the experimental sessions. Results: I was fortunate to recruit five meditators, four of whom had many decades of daily meditative practice. Those five contributed nine test sessions. Five other individuals with no meditation experience, or less than two years of practice, contributed nine additional sessions. I referred to the latter group as nonmeditators. I predicted an overall negative score for each experimental session (illustrated by the idealized negative curve shown in Figure 15). The combined results were in fact significantly negative, with odds against chance of 500 to 1. The identical analysis across all the control sessions resulted in odds against chance of close to 1 to 1, indicating that the experimental results were not due to procedural or analytical biases. Figure 16 shows the cumulative score (in terms of standard normal deviates, or z-scores) for the nine sessions contributed by experienced meditators and nine other sessions involving nonmeditators. The experienced meditators resulted in a combined odds against chance of 107,000 to 1, and the nonmeditators obtained results close to chance expectation. This supported my conjecture that meditators would be better at this task than nonmeditators. [Figure 16. Experienced meditators (more than two years of daily practice) obtained combined odds against chance of 107,000 to 1. Nonmeditators obtained results close to chance.]
Dean Radin (Supernormal: Science, Yoga and the Evidence for Extraordinary Psychic Abilities)
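For readers unfamiliar with the reporting convention, "odds against chance" is a transformed p-value. A sketch of the conversion follows; the one-tailed convention and the specific z-scores of roughly 2.88 and 4.28, which reproduce the passage's 500-to-1 and 107,000-to-1 figures, are my reverse-engineering, not numbers Radin states here.

    from scipy import stats

    def odds_against_chance(z: float) -> float:
        """One-tailed p for a standard normal z, expressed as odds (1/p - 1)."""
        p = stats.norm.sf(z)       # survival function: P(Z > z)
        return 1.0 / p - 1.0

    for z in (2.88, 4.28):         # roughly matches the quoted odds
        print(f"z = {z:.2f} -> about {odds_against_chance(z):,.0f} to 1")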
So far, we’ve discussed evidence indicating that one person’s intention can influence the physiology of a distant person, and that the intentions of trained meditators have a somewhat larger effect than nonmeditators. Through the TM-Sidhi program we found that meditators’ intentions may affect the behavior of the local population as measured through reduced indices of violence. A third class of studies, called the “attention focusing facilitation” design, takes a variation of the TM-Sidhi claim into a controlled laboratory context to see if one person’s focused attention can remotely help a distant person to focus his or her attention. In a “distant facilitation of attention” experiment, the distant person—let’s call her Holly—is asked to focus her attention on a candle flame. As soon as Holly notices that her mind is wandering from the candle, she is asked to press a button and return her attention to the candle. The frequency of her button presses is used to measure Holly’s level of focused attention and is the measurement of interest. A second participant, let’s call him Vernon, is located in a distant, isolated room. Holly and Vernon are strictly isolated to exclude any normal means of communication. Vernon’s role is to act as the “attention facilitator.” He has a computer monitor in front of him displaying an experimental condition, either “control” or “help.” During help periods, Vernon focuses his attention on a candle flame in front of him while simultaneously holding the intention to enhance Holly’s ability to focus on her candle. During control periods, Vernon withdraws his attention from the candle and Holly and thinks about other matters.
Dean Radin (Supernormal: Science, Yoga and the Evidence for Extraordinary Psychic Abilities)
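One way to see how such a design would be scored: compare Holly's mind-wandering rate (button presses per block) between help and control conditions. The sketch below simulates that comparison end to end; the effect size, block counts, and Poisson model of button presses are all invented for illustration and are not Radin's actual analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_blocks = 40                          # hypothetical 1-minute blocks each
    base_rate, helped_rate = 6.0, 5.2      # assumed mean presses per block

    control = rng.poisson(base_rate, n_blocks)
    helped = rng.poisson(helped_rate, n_blocks)

    # Fewer presses during help blocks would support distant facilitation;
    # a one-sided Welch t-test compares the two conditions.
    t, p = stats.ttest_ind(helped, control, equal_var=False, alternative="less")
    print(f"mean control = {control.mean():.2f}, mean help = {helped.mean():.2f}")
    print(f"t = {t:.2f}, one-sided p = {p:.4f}")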
Within three years, a Russian diplomat in Saint Petersburg who was an amateur experimenter, Baron Pavel L’vovitch Schilling, had begun designing a telegraph system based on Oersted’s discoveries. Schilling demonstrated the system to Czar Alexander I sometime before the Czar’s death in 1825.
Richard Rhodes (Energy: A Human History)