Computer Scientist Quotes

We've searched our database for all the quotes and captions related to Computer Scientist. Here they are! All 100 of them:

Science is what we understand well enough to explain to a computer; art is everything else.
Donald Ervin Knuth (Things a Computer Scientist Rarely Talks About (Volume 136) (Lecture Notes))
The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’—that, somehow, is much harder!”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Even the best strategy sometimes yields bad results—which is why computer scientists take care to distinguish between “process” and “outcome.” If you followed the best possible process, then you’ve done all you can, and you shouldn’t blame yourself if things didn’t go your way.
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
computer scientists call validation: whereas verification asks “Did I build the system right?,” validation asks “Did I build the right system?”
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
A computer is like a violin. You can imagine a novice trying first a phonograph and then a violin. The latter, he says, sounds terrible. That is the argument we have heard from our humanists and most of our computer scientists. Computer programs are good, they say, for particular purposes, but they aren’t flexible. Neither is a violin, or a typewriter, until you learn how to use it.
Marvin Minsky
If you’re a lazy and not-too-bright computer scientist, machine learning is the ideal occupation, because learning algorithms do all the work but let you take all the credit.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
When economists insist that they too are scientists because they use mathematics, they are no different from astrologists protesting that they are just as scientific as astronomers because they also use computers and complicated charts.
Yanis Varoufakis (Talking to My Daughter About the Economy: A Brief History of Capitalism)
I've never believed that they're separate. Leonardo da Vinci was a great artist and a great scientist. Michelangelo knew a tremendous amount about how to cut stone at the quarry. The finest dozen computer scientists I know are all musicians. Some are better than others, but they all consider that an important part of their life. I don't believe that the best people in any of these fields see themselves as one branch of a forked tree. I just don't see that. People bring these things together a lot. Dr. Land at Polaroid said, "I want Polaroid to stand at the intersection of art and science," and I've never forgotten that. I think that that's possible, and I think a lot of people have tried.
Steve Jobs
Meaning is like pornography, you know it when you see it.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
We can pull atoms apart, peer back at the first light and predict the end of the universe with just a handful of equations, squiggly lines and arcane symbols that normal people cannot fathom, even though they hold sway over their lives. But it's not just regular folks; even scientists no longer comprehend the world. Take quantum mechanics, the crown jewel of our species, the most accurate, far-ranging and beautiful of all our physical theories. It lies behind the supremacy of our smartphones, behind the Internet, behind the coming promise of godlike computing power. It has completely reshaped our world. We know how to use it, it works as if by some strange miracle, and yet there is not a human soul, alive or dead, who actually gets it. The mind cannot come to grips with its paradoxes and contradictions. It's as if the theory had fallen to earth from another planet, and we simply scamper around it like apes, toying and playing with it, but with no true understanding.
Benjamín Labatut (When We Cease to Understand the World)
The computing scientist’s main challenge is not to get confused by the complexities of his own making.
Edsger W. Dijkstra
The American punctuation rule sticks in the craw of every computer scientist, logician, and linguist, because any ordering of typographical delimiters that fails to reflect the logical nesting of the content makes a shambles of their work.
Steven Pinker (The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century)
Programmed by quanta, physics gave rise first to chemistry and then to life; programmed by mutations and recombination, life gave rise to Shakespeare; programmed by experience and imagination, Shakespeare gave rise to Hamlet.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
They took one look at Zip2’s code and began rewriting the vast majority of the software. Musk bristled at some of their changes, but the computer scientists needed just a fraction of the lines of code that Musk used to get their jobs done. They had a knack for dividing software projects into chunks that could be altered and refined whereas Musk fell into the classic self-taught coder trap of writing what developers call hairballs—big, monolithic hunks of code that could go berserk for mysterious reasons.
Ashlee Vance (Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future)
While we may continue to use the words smart and stupid, and while IQ tests may persist for certain purposes, the monopoly of those who believe in a single general intelligence has come to an end. Brain scientists and geneticists are documenting the incredible differentiation of human capacities, computer programmers are creating systems that are intelligent in different ways, and educators are freshly acknowledging that their students have distinctive strengths and weaknesses.
Howard Gardner (Intelligence Reframed: Multiple Intelligences for the 21st Century)
Even there, something inside me (and, I suspect, inside many other computer scientists!) is suspicious of those parts of mathematics that bear the obvious imprint of physics, such as partial differential equations, differential geometry, Lie groups, or anything else that's “too continuous.”
Scott Aaronson (Quantum Computing since Democritus)
The computer scientist Christopher Langton observed several decades ago that innovative systems have a tendency to gravitate toward the “edge of chaos”:
Steven Johnson (Where Good Ideas Come From)
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge, author, professor, computer scientist
James Barrat (Our Final Invention: Artificial Intelligence and the End of the Human Era)
Boys who cry can work for Google. Boys who trash computers cannot. I once was at a science conference, and I saw a NASA scientist who had just found out that his project was canceled—a project he’d worked on for years. He was maybe sixty-five years old, and you know what? He was crying. And I thought, Good for him. That’s why he was able to reach retirement age working in a job he loved.
Temple Grandin (The Autistic Brain: Thinking Across the Spectrum)
But what if the universe was always there, in a state or condition we have yet to identify—a multiverse, for instance, that continually births universes? Or what if the universe just popped into existence from nothing? Or what if everything we know and love were just a computer simulation rendered for entertainment by a superintelligent alien species? These philosophically fun ideas usually satisfy nobody. Nonetheless, they remind us that ignorance is the natural state of mind for a research scientist. People who believe they are ignorant of nothing have neither looked for, nor stumbled upon, the boundary between what is known and unknown in the universe. What we do know, and what we can assert without
Neil deGrasse Tyson (Astrophysics for People in a Hurry (Astrophysics for People in a Hurry Series))
Google gets $59 billion, and you get free search and e-mail. A study published by the Wall Street Journal in advance of Facebook’s initial public offering estimated the value of each long-term Facebook user to be $80.95 to the company. Your friendships were worth sixty-two cents each and your profile page $1,800. A business Web page and its associated ad revenue were worth approximately $3.1 million to the social network. Viewed another way, Facebook’s billion-plus users, each dutifully typing in status updates, detailing his biography, and uploading photograph after photograph, have become the largest unpaid workforce in history. As a result of their free labor, Facebook has a market cap of $182 billion, and its founder, Mark Zuckerberg, has a personal net worth of $33 billion. What did you get out of the deal? As the computer scientist Jaron Lanier reminds us, a company such as Instagram—which Facebook bought in 2012—was not valued at $1 billion because its thirteen employees were so “extraordinary. Instead, its value comes from the millions of users who contribute to the network without being paid for it.” Its inventory is personal data—yours and mine—which it sells over and over again to parties unknown around the world. In short, you’re a cheap date.
Marc Goodman (Future Crimes)
I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level.
Sabine Hossenfelder (Existential Physics: A Scientist's Guide to Life's Biggest Questions)
Literature is the equivalent of the climate scientist’s computer simulations: set up some new starting conditions, run the whole complicated process and see what happens.
Alison Gopnik
The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
The significance of a bit depends not just on its value but on how that value affects other bits over time, as part of the continued information processing that makes up the dynamical evolution of the universe.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
We now know that groups of neurons create new connections and pathways among themselves every time we acquire a new skill. Computer scientists use the term "open architecture" to describe a system that is versatile enough to change--or rearrange--to accommodate the varying demands on it.
Maryanne Wolf (Proust and the Squid: The Story and Science of the Reading Brain)
“Aristotle said that happiness is the settling of the soul into its most appropriate spot.” I doubled down on my belief that computer scientists should never dabble in philosophy. “What does that mean, exactly?” “What makes you happy, Todd Keane? What’s your work? How do you define a day well spent?”
Richard Powers (Playground)
The difficulty in making sense of even simple speech is well appreciated by computer scientists who struggle to create machines that can respond to natural language. Their frustration is illustrated by a possibly apocryphal story of the early computer that was given the task of translating the homily “The spirit is willing but the flesh is weak” into Russian and then back to English. According to the story, it came out: “The vodka is strong but the meat is rotten.”
Leonard Mlodinow (Subliminal: How Your Unconscious Mind Rules Your Behavior)
Moore’s law is a law not of nature, but of human ingenuity.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
the best data scientists tend to be “hard scientists,” particularly physicists, rather than computer science majors.
Mike Loukides (What Is Data Science?)
Let me tell you as a brain scientist and a computer engineering dropout - transhumanism is to brain computer interface, what nuclear weapons are to nuclear physics.
Abhijit Naskar (Amantes Assemble: 100 Sonnets of Servant Sultans)
The distinction that only sciences are useful and only arts are spirit-enhancing is a nonsensical one. I couldn't write much without scientists designing my computer. And some of them must want to read about Greek myth after a long day at work. These Muses always remind me that scientists and artists should disregard the idiotic attempts to separate us. We are all nerds, in the end.
Natalie Haynes (Divine Might: Goddesses in Greek Myth)
Computer scientists would call this a “ping attack” or a “denial of service” attack: give a system an overwhelming number of trivial things to do, and the important things get lost in the chaos.
Brian Christian (Algorithms to Live By: The Computer Science of Human Decisions)
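The mechanism is easy to see in miniature. Below is a minimal sketch (plain Python with made-up task names, not anything from the book): in a queue with no notion of priority, one important task submitted behind a flood of trivial ones must wait for all of them.

```python
from collections import deque

# Sketch of the failure mode the quote describes: a plain FIFO queue
# has no priorities, so the important task runs only after every
# trivial one ahead of it has been processed.
queue = deque(f"trivial-{i}" for i in range(10_000))  # the flood
queue.append("IMPORTANT")                             # the real work

position = 0
while queue:
    task = queue.popleft()
    position += 1
    if task == "IMPORTANT":
        print(f"Important task finally ran at position {position:,}")
```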
All you have to do is fool Google—because if you can fool Google, you can fool everybody,” said Wellesley College computer scientist Panagiotis Takis Metaxas, who has researched the spread of online rumors.
Nathan Bomey (After the Fact: The Erosion of Truth and the Inevitable Rise of Donald Trump)
By the early 1960’s America had reluctantly come to realize that it possessed, as a nation, the most potent scientific complex in the history of the world. Eighty per cent of all scientific discoveries in the preceding three decades had been made by Americans. The United States had 75 per cent of the world’s computers, and 90 per cent of the world’s lasers. The United States had three and a half times as many scientists as the Soviet Union and spent three and a half times as much money on research; the U.S. had four times as many scientists as the European Economic Community and spent seven times as much on research.
Michael Crichton (The Andromeda Strain)
Of a techno-human culture that wants to be more than a successful barbarism, two things above all are required: psychological cultural formation and the cultural capacity for translation. Mathematicians must become poets, cyberneticists must become philosophers of religion, doctors must become composers, computer scientists must become shamans. Was humanity ever something other than the art of managing transitions?
Peter Sloterdijk
Turing’s vision was shared by his fellow computer scientists in America, who codified their curiosity in 1956 with a now famous Dartmouth College research proposal in which the term “artificial intelligence” was coined.
Fei-Fei Li (The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI)
Using MRI scans, scientists can now read thoughts circulating in our brains. Scientists can also insert a chip into the brain of a patient who is totally paralyzed and connect it to a computer, so that through thought alone that patient can surf the web, read and write e-mails, play video games, control their wheelchair, operate household appliances, and manipulate mechanical arms. In fact, such patients can do anything a normal person can do via a computer.
Michio Kaku (The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind)
Did you know that Jacques Benveniste, one of the world's leading homeopathic "scientists," now claims that you can *email* homeopathic remedies? Yeah, see, what you do is you can take the "memory" of the diluted substance out of the water electromagnetically, put it on your computer, email it, and play it back on a sound card into new water. I mean, that could work, right? (Nick's thoughts after reading Francis Wheen's book "How Mumbo-Jumbo Conquered the World")
Nick Hornby (The Polysyllabic Spree)
AI scientists tried to program computers to act like humans without first answering what intelligence is and what it means to understand. They left out the most important part of building intelligent machines, the intelligence! "Real intelligence" makes the point that before we attempt to build intelligent machines, we have to first understand how the brain thinks, and there is nothing artificial about that. Only then can we ask how we can build intelligent machines.
Jeff Hawkins
It is our glory to use artificial intelligence, swarm drones, quantum computing and other modern technologies for removing the pain of the humanity. But it is a big disaster for the scientists and the humanity to use these technologies for developing mass destruction weapons.
Amit Ray (Compassionate Artificial Intelligence: Frameworks and Algorithms)
We know this because finding an apartment belongs to a class of mathematical problems known as “optimal stopping” problems. The 37% rule defines a simple series of steps—what computer scientists call an “algorithm”—for solving these problems. And as it turns out, apartment hunting is just one of the ways that optimal stopping rears its head in daily life. Committing to or forgoing a succession of options is a structure that appears in life again and again, in slightly different incarnations. How many times to circle the block before pulling into a parking space? How far to push your luck with a risky business venture before cashing out? How long to hold out for a better offer on that house or car? The same challenge also appears in an even more fraught setting: dating. Optimal stopping is the science of serial monogamy.
Brian Christian (Algorithms To Live By: The Computer Science of Human Decisions)
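For readers who want to see the rule in motion, here is a minimal simulation sketch (the candidate values and trial count are arbitrary assumptions, not figures from the book): look at the first 37% of options without committing, then take the first one that beats everything seen so far.

```python
import random

def apply_37_rule(candidates):
    """Reject the first ~37% of options outright, then accept the
    first later option that beats everything seen so far (settling
    for the last option if nothing ever does)."""
    n = len(candidates)
    cutoff = int(n * 0.37)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value
    return candidates[-1]

# Rough check: over many random orderings, the rule lands on the
# single best option about 37% of the time, matching the theory.
trials, n = 100_000, 100
wins = 0
for _ in range(trials):
    candidates = random.sample(range(10_000_000), n)
    if apply_37_rule(candidates) == max(candidates):
        wins += 1
print(f"picked the overall best in {wins / trials:.1%} of trials")
```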
What's the point of talking about philosophical questions? Because we're going to be doing a fair bit of it here – I mean, of philosophical bullshitting. Well, there's a standard answer, and it's that philosophy is an intellectual clean-up job – the janitors who come in after the scientists have made a mess, to try and pick up the pieces. So in this view, philosophers sit in their armchairs waiting for something surprising to happen in science – like quantum mechanics, like the Bell inequality, like Gödel's Theorem – and then (to switch metaphors) swoop in like vultures and say, ah, this is what it really meant. Well, on its face, that seems sort of boring. But as you get more accustomed to this sort of work, I think what you'll find is...it's still boring!
Scott Aaronson (Quantum Computing since Democritus)
In this way the extortion game is similar to the economics of sending spam e-mail. When receiving an e-mail promising a share of a lost Nigerian inheritance or cheap Viagra, nearly everyone clicks delete. But a tiny number takes the bait. Computer scientists at the University of California–Berkeley and UC–San Diego hijacked a working spam network to see how the business operated. They found that the spammers, who were selling fake “herbal aphrodisiacs,” made only one sale for every 12.5 million e-mails they sent: a response rate of 0.00001 percent. Each sale was worth an average of less than $100. It doesn’t look like much of a business. But sending out the e-mails was so cheap and easy—it was done using a network of hijacked PCs, which the fraudsters used free of charge—that the spammers made a healthy profit. Pumping out hundreds of millions of e-mails a day, they had a daily income of about $7,000, or more than $2.5 million a year, the researchers figured.3
Tom Wainwright (Narconomics: How to Run a Drug Cartel)
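The quoted figures hang together arithmetically. A quick back-of-the-envelope check (a sketch using the rounded numbers in the passage, not the researchers' raw data):

```python
# Sanity check of the spam economics quoted above.
emails_per_sale = 12.5e6             # one sale per 12.5 million e-mails
print(f"response rate: {1 / emails_per_sale:.6%}")
# ~0.000008%, which rounds to the 0.00001 percent quoted

daily_income = 7_000                 # dollars per day, per the study
revenue_per_sale = 100               # "less than $100" on average
sales_per_day = daily_income / revenue_per_sale      # ~70 sales a day
emails_per_day = sales_per_day * emails_per_sale
print(f"implied volume: {emails_per_day:,.0f} e-mails per day")
# ~875 million/day, consistent with "hundreds of millions a day"
print(f"annual income: about ${daily_income * 365:,}")  # ~$2.5 million
```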
Some research suggests that collecting vast amounts of data simply can’t predict rare events like terrorism. A 2006 paper by Jeff Jonas, an IBM research scientist, and Jim Harper, the director of information policy at the Cato Institute, concluded that terrorism events aren’t common enough to lend themselves to large-scale computer data mining.
Julia Angwin (Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance)
If you could understand impermanence deeply, you would develop more equanimity. You would not get too excited about either the ups or downs of life. And only then would you be ready to develop that deeper sense of empathy and compassion for everything around you. The computer scientist in me loved this compact instruction set for life. Don’t get me wrong.
Satya Nadella (Hit Refresh)
The illiteracy of the 21st century will no longer be defined by the inability to read or write, but by the incapacity to adapt and innovate through the language of computer code.
Norbertus Krisnu Prabowo
Ultimately, information and energy play complementary roles in the universe: Energy makes physical systems do things. Information tells them what to do.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
Quantum fluctuations are the monkeys that program the universe.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
not only the whats of the design, but also the whys by which it was reached.
Frederick P. Brooks Jr. (Design of Design, The: Essays from a Computer Scientist)
“Computer science is no more about computers than astronomy is about telescopes.” —Edsger W. Dijkstra
Peter Gottschling (Discovering Modern C++: An Intensive Course for Scientists, Engineers, and Programmers (C++ In-Depth))
Everything has been composed, just not yet written down. LETTER TO LEOPOLD MOZART [1780]
Frederick P. Brooks Jr. (Design of Design, The: Essays from a Computer Scientist)
Douglas Adams amusingly satirized computer addiction of exactly the kind that hit me. The target of his satire was the programmer who had a particular problem X, which needed solving. He could have written a program in five minutes to solve X and then got on and used his solution. But instead of just doing that, he spent days and weeks writing a more general program that could be used by anybody at any time to solve all similar problems of the general class of X. The fascination lies in the generality and in the purveying of an aesthetically pleasing, user-friendly product for the benefit of a population of hypothetical and very probably non-existent users – not in actually finding the answer to the particular problem X.
Richard Dawkins (An Appetite For Wonder: The Making Of A Scientist)
An even more advanced form of uploading your mind into a computer was envisioned by computer scientist Hans Moravec. When I interviewed him, he claimed that his method of uploading the human mind could even be done without losing consciousness. First you would be placed on a hospital gurney, next to a robot. Then a surgeon would take individual neurons from your brain and create a duplicate of these neurons (made of transistors) inside the robot. A cable would connect these transistorized neurons to your brain. As time goes by, more and more neurons are removed from your brain and duplicated in the robot. Because your brain is connected to the robot brain, you are fully conscious even as more and more neurons are replaced by transistors. Eventually, without losing consciousness, your entire brain and all its neurons are replaced by transistors. Once all one hundred billion neurons have been duplicated, the connection between you and the artificial brain is finally cut. When you gaze back at the stretcher, you see your body, lacking its brain, while your consciousness now exists inside a robot.
Michio Kaku (The Future of Humanity: Terraforming Mars, Interstellar Travel, Immortality and Our Destiny Beyond Earth)
Scientists have found that the amount of time spent milkshake-multitasking among American young people has increased by 120 percent in the last ten years. According to a report in the Archives of General Psychiatry, simultaneous exposure to electronic media during the teenage years—such as playing a computer game while watching television—appears to be associated with increased depression and anxiety in young adulthood, especially among men.[1] Considering that teens are exposed to an average of eight and a half hours of multitasking electronic media per day, we need to change something quickly.[2] Social Media Enthusiast or Addict? Another concern this raises is whether you are or your teen is a social media enthusiast or simply a
Caroline Leaf (Switch On Your Brain: The Key to Peak Happiness, Thinking, and Health (Includes the '21-Day Brain Detox Plan'))
The combination of Bayes and Markov Chain Monte Carlo has been called "arguably the most powerful mechanism ever created for processing data and knowledge." Almost instantaneously MCMC and Gibbs sampling changed statisticians' entire method of attacking problems. In the words of Thomas Kuhn, it was a paradigm shift. MCMC solved real problems, used computer algorithms instead of theorems, and led statisticians and scientists into a world where "exact" meant "simulated" and repetitive computer operations replaced mathematical equations. It was a quantum leap in statistics.
Sharon Bertsch McGrayne (The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy)
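For the curious, the core loop is surprisingly small. Here is a minimal sketch of the Metropolis flavor of MCMC (not the Gibbs sampler specifically, and with an arbitrary target distribution chosen purely for illustration): repetitive computer operations stand in for exact mathematics, just as the passage describes.

```python
import math
import random

def metropolis(log_density, start, steps, step_size=1.0):
    """Minimal Metropolis sampler: propose a random step, accept it
    with probability min(1, p(new)/p(old)); over many iterations the
    visited points are distributed according to the target density."""
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)
        log_alpha = log_density(proposal) - log_density(x)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

# Example target: a standard normal, used without ever computing its
# normalizing constant -- the whole point of the method.
samples = metropolis(lambda x: -0.5 * x * x, start=0.0, steps=50_000)
kept = samples[10_000:]                 # discard burn-in
print(f"sample mean: {sum(kept) / len(kept):.3f} (true mean 0.0)")
```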
Most of the messaging and chatting I did was in search of answers to questions I had about how to build my own computer, and the responses I received were so considered and thorough, so generous and kind, they’d be unthinkable today. My panicked query about why a certain chipset for which I’d saved up my allowance didn’t seem to be compatible with the motherboard I’d already gotten for Christmas would elicit a two-thousand-word explanation and note of advice from a professional tenured computer scientist on the other side of the country. Not cribbed from any manual, this response was composed expressly for me, to troubleshoot my problems step-by-step until I’d solved them. I was twelve years old, and my correspondent was an adult stranger far away, yet he treated me like an equal because I’d shown respect for the technology. I attribute this civility, so far removed from our current social-media sniping, to the high bar for entry at the time. After all, the only people on these boards were the people who could be there—who wanted to be there badly enough—who had the proficiency and passion, because the Internet of the 1990s wasn’t just one click away. It took significant effort just to log on.
Edward Snowden (Permanent Record)
You can buy a clock, but you cannot buy time. You can buy a bed, but you cannot buy sleep. You can buy excitement, but you cannot buy bliss. You can buy luxuries, but you cannot buy satisfaction. You can buy pleasure, but you cannot buy peace. You can buy possessions, but you cannot buy contentment. You can buy entertainment, but you cannot buy fulfillment. You can buy amusement, but you cannot buy happiness. You can buy books, but you cannot buy intelligence. You can buy degrees, but you cannot buy wisdom. You can buy fame, but you cannot buy honor. You can buy a reputation, but you cannot buy character. You can buy a priest, but you cannot buy a miracle. You can buy a doctor, but you cannot buy health. You can buy a scientist, but you cannot buy discoveries. You can buy a leader, but you cannot buy power. You can buy acceptance, but you cannot buy friendship. You can buy companions, but you cannot buy loyalty. You can buy allies, but you cannot buy dependability. You can buy partners, but you cannot buy fidelity. You can buy clothes, but you cannot buy class. You can buy toys, but you cannot buy youth. You can buy women, but you cannot buy love. You can buy houses, but you cannot buy homes. You can buy a computer, but you cannot buy intellect. You can buy makeup, but you cannot buy beauty. You can buy a pen, but you cannot buy imagination. You can buy a paintbrush, but you cannot buy inspiration. You can buy opinions, but you cannot buy truth. You can buy assumptions, but you cannot buy facts. You can buy evidence, but you cannot buy faith. You can buy fantasies, but you cannot buy reality.
Matshona Dhliwayo
For example, if you believe that 3% of graduate students are enrolled in computer science (the base rate), and you also believe that the description of Tom W is 4 times more likely for a graduate student in that field than in other fields, then Bayes’s rule says you must believe that the probability that Tom W is a computer scientist is now 11%. If the base rate had been 80%, the new degree of belief would be 94.1%.
Daniel Kahneman (Thinking, Fast and Slow)
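The passage's numbers are easy to reproduce. A minimal sketch of the calculation in odds form, using the same figures as the quote:

```python
def posterior(base_rate, likelihood_ratio):
    """Bayes's rule in odds form: prior odds times the likelihood
    ratio gives posterior odds; convert back to a probability."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

print(f"{posterior(0.03, 4):.0%}")   # 11%, as in the passage
print(f"{posterior(0.80, 4):.1%}")   # 94.1%, as in the passage
```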
Consider a cognitive scientist concerned with the empirical study of the mind, especially the cognitive unconscious, and ultimately committed to understanding the mind in terms of the brain and its neural structure. To such a scientist of the mind, Anglo-American approaches to the philosophy of mind and language of the sort discussed above seem odd indeed. The brain uses neurons, not languagelike symbols. Neural computation works by real-time spreading activation, which is neither akin to prooflike deductions in a mathematical logic, nor like disembodied algorithms in classical artificial intelligence, nor like derivations in a transformational grammar.
George Lakoff (Philosophy In The Flesh: The Embodied Mind and Its Challenge to Western Thought)
Our mind is nothing but accumulated thoughts, good or evil, recorded from the day the child is born. For memory or thought to work, a brain is needed. Software cannot work without hardware. When a computer is damaged can we believe that its software is still somewhere in the sky? How can memory or thinking faculty exist outside the brain? The neurotransmitters are responsible for the thought process and memory retention and retrieval. All are electrochemical impulses which cannot travel to the sky. Our personality, individuality etc. are the result of the accumulated thoughts in our brain. It is the quality and nature of accumulated thoughts which decides if one is to become a scientist, poet or a terrorist. A guitar in the hands of a layman does not make any sense. If it is in the hands of a musician melodious tunes can come out. A child in the hands of lovable and intelligent parents goes to great heights.
V.A. Menon
Sliding Doors and Run Lola Run (1998)—These two movies, neither of which is technically science fiction, were released in the same year. We see the idea of timelines branching from a single point which lead to different outcomes. In the example of Sliding Doors, a separate timeline branches off of the first timeline and then exists in parallel for some time, overlapping the main timeline, before merging back in. In Run Lola Run, on the other hand, we see Lola trying to rescue her boyfriend Manni by rewinding what happened and making different choices multiple times. We see visually what running our Core Loop might look like in a real-world, high-stress situation.
Rizwan Virk (The Simulated Multiverse: An MIT Computer Scientist Explores Parallel Universes, The Simulation Hypothesis, Quantum Computing and the Mandela Effect)
philosopher Archie Bahm: “Nature can never be completely described, for such a description of nature would have to duplicate nature.” That is, a perfect description of the universe is indistinguishable from the universe itself.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
In a classic study of how names impact people’s experience on the job market, researchers show that, all other things being equal, job seekers with White-sounding first names received 50 percent more callbacks from employers than job seekers with Black-sounding names.5 They calculated that the racial gap was equivalent to eight years of relevant work experience, which White applicants did not actually have; and the gap persisted across occupations, industry, employer size – even when employers included the “equal opportunity” clause in their ads.6 With emerging technologies we might assume that racial bias will be more scientifically rooted out. Yet, rather than challenging or overcoming the cycles of inequity, technical fixes too often reinforce and even deepen the status quo. For example, a study by a team of computer scientists at Princeton examined whether a popular algorithm, trained on human writing online, would exhibit the same biased tendencies that psychologists have documented among humans. They found that the algorithm associated White-sounding names with “pleasant” words and Black-sounding names with “unpleasant” ones.7 Such findings demonstrate what I call “the New Jim Code”: the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.
Ruha Benjamin (Race After Technology: Abolitionist Tools for the New Jim Code)
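The study's test can be sketched in a few lines. The version below is purely illustrative (the tiny 2-D vectors and the name vector are made up; the Princeton team used real word embeddings trained on web text): it scores how much closer a name's vector sits to "pleasant" words than to "unpleasant" ones.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to
    unpleasant words; positive means the word 'leans pleasant'."""
    p = sum(cosine(word_vec, v) for v in pleasant) / len(pleasant)
    u = sum(cosine(word_vec, v) for v in unpleasant) / len(unpleasant)
    return p - u

pleasant = [(0.9, 0.1), (0.8, 0.3)]     # hypothetical embeddings
unpleasant = [(0.1, 0.9), (0.2, 0.8)]
name_vec = (0.7, 0.2)                   # hypothetical name vector
print(f"association score: {association(name_vec, pleasant, unpleasant):+.3f}")
```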
Just as the theory of information led to the theory of universal computation, this theory I am envisaging could be the seed for designing a machine that generalises the universal computer, which scientists call the universal constructor.
Chiara Marletto (The Science of Can and Can't: A Physicist's Journey Through the Land of Counterfactuals)
In collaboration with Sharif Razzaque. There are many ways of making a fool of yourself with a digital computer, and to have one more can hardly make any difference. SIR MAURICE WILKES [1959], “THE EDSAC”. Be careful how you fix what you don’t understand.
Frederick P. Brooks Jr. (Design of Design, The: Essays from a Computer Scientist)
THE MYSTERY OF LANGUAGE EVOLUTION It seems that eight heavyweight Evolutionists—linguists, biologists, anthropologists, and computer scientists—had published an article announcing they were giving up, throwing in the towel, folding, crapping out when it came to the question of where speech—language—comes from and how it works. “The most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever,” they concluded. Not only that, they sounded ready to abandon all hope of ever finding the answer. Oh, we’ll keep trying, they said gamely…but we’ll have to start from zero again. One of the eight was the biggest name in the history of linguistics, Noam Chomsky. “In the last 40 years,” he and the other seven were saying, “there has been an explosion of research on this problem,” and all it had produced was a colossal waste of time by some of the greatest minds in academia. Now, that was odd…I had never heard of a group of experts coming together to announce what abject failures they were…
Tom Wolfe (The Kingdom of Speech)
But the “jobs of the future” do not need scientists who have memorized the periodic table. In fact, business leaders say they are looking for creative, independent problem solvers in every field, not just math and science. Yet in most schools, STEM subjects are taught as a series of memorized procedures and vocabulary words, when they are taught at all. In 2009, only 3% of high school graduates had any credits in an engineering course. (National Science Board, 2012) Technology is increasingly being relegated to using computers for Internet research and test taking.
Sylvia Libow Martinez (Invent To Learn: Making, Tinkering, and Engineering in the Classroom)
I still remember the day I first came across the Internet. It was back in 1993, when I was in high school. I went with a couple of buddies to visit our friend Ido (who is now a computer scientist). We wanted to play table tennis. Ido was already a huge computer fan, and before opening the ping-pong table he insisted on showing us the latest wonder. He connected the phone cable to his computer and pressed some keys. For a minute all we could hear were squeaks, shrieks and buzzes, and then silence. It didn’t succeed. We mumbled and grumbled, but Ido tried again. And again. And again. At last he gave a whoop and announced that he had managed to connect his computer to the central computer at the nearby university. ‘And what’s there, on the central computer?’ we asked. ‘Well,’ he admitted, ‘there’s nothing there yet. But you could put all kinds of things there.’ ‘Like what?’ we questioned. ‘I don’t know,’ he said, ‘all kinds of things.’ It didn’t sound very promising. We went to play ping-pong, and for the following weeks enjoyed a new pastime, making fun of Ido’s ridiculous idea. That was less than twenty-five years ago (at the time of writing).
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
The computational universe is not an alternative to the physical universe. The universe that evolves by processing information and the universe that evolves by the laws of physics are one and the same. The two descriptions, computational and physical, are complementary ways of capturing the same phenomena.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
Tower and get up to the top floor. There’s a room up there with a computer in it where you can turn off the incinerator. There’s another computer that will override the lockdown system. It’s pretty simple. The hard part is getting in there. My card will get you into The Alpha Tower, but once you’re in, you’ll need a scientist’s card to get to the last room. As far as I know, they’ve all been bitten. It’s a tower full of diseased now, but if you can kill one in a lab coat, you may find a card. I think it’s suicide though, Rhys.” When Rhys looked at Flynn, the light glistened off his tear-streaked cheeks. “Can
Michael Robertson (The Alpha Plague)
The nuclear arms race is over, but the ethical problems raised by nonmilitary technology remain. The ethical problems arise from three "new ages" flooding over human society like tsunamis. First is the Information Age, already arrived and here to stay, driven by computers and digital memory. Second is the Biotechnology Age, due to arrive in full force early in the next century, driven by DNA sequencing and genetic engineering. Third is the Neurotechnology Age, likely to arrive later in the next century, driven by neural sensors and exposing the inner workings of human emotion and personality to manipulation.
Freeman Dyson (The Scientist as Rebel)
But Mandelbrot continued to feel oppressed by France’s purist mathematical establishment. “I saw no compatibility between a university position in France and my still-burning wild ambition,” he writes. So, spurred by the return to power in 1958 of Charles de Gaulle (for whom Mandelbrot seems to have had a special loathing), he accepted the offer of a summer job at IBM in Yorktown Heights, north of New York City. There he found his scientific home. As a large and somewhat bureaucratic corporation, IBM would hardly seem a suitable playground for a self-styled maverick. The late 1950s, though, were the beginning of a golden age of pure research at IBM. “We can easily afford a few great scientists doing their own thing,” the director of research told Mandelbrot on his arrival. Best of all, he could use IBM’s computers to make geometric pictures. Programming back then was a laborious business that involved transporting punch cards from one facility to another in the backs of station wagons.
Jim Holt (When Einstein Walked with Gödel: Excursions to the Edge of Thought)
Schopenhauer’s framing kicked the problem of consciousness onto a much larger playing field. The mind, with all of its rational processes, is all very well but the “will,” the thing that gives us our “oomph,” is the key: “The will … again fills the consciousness through wishes, emotions, passions, and cares.”14 Today, the subconscious rumblings of the “will” are still unplumbed; only a few inroads have been made. As I write these words, enthusiasts for the artificial intelligence (AI) agenda, the goal of programming machines to think like humans, have completely avoided and ignored this aspect of mental life. That is why Yale’s David Gelernter, one of the leading computer scientists in the world, says the AI agenda will always fall short, explaining, “As it now exists, the field of AI doesn’t have anything that speaks to emotions and the physical body, so they just refuse to talk about it.” He asserts that the human mind includes feelings, along with data and thoughts, and each particular mind is a product of a particular person’s experiences, emotions, and memories hashed and rehashed over a lifetime: “The mind is in a particular body, and consciousness is the work of the whole body.” Putting it in computer lingo, he declares, “I can run an app on any device, but can I run someone else’s mind on your brain? Obviously not.”15
Michael S. Gazzaniga (The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind)
Paul Graham, computer scientist and cofounder of Y Combinator—the start-up funder of Airbnb, Dropbox, Stripe, and Twitch—encapsulated Ibarra’s tenets in a high school graduation speech he wrote, but never delivered: It might seem that nothing would be easier than deciding what you like, but it turns out to be hard, partly because it’s hard to get an accurate picture of most jobs. . . . Most of the work I’ve done in the last ten years didn’t exist when I was in high school. . . . In such a world it’s not a good idea to have fixed plans. And yet every May, speakers all over the country fire up the Standard Graduation Speech, the theme of which is: don’t give up on your dreams. I know what they mean, but this is a bad way to put it, because it implies you’re supposed to be bound by some plan you made early on. The computer world has a name for this: premature optimization. . . . . . . Instead of working back from a goal, work forward from promising situations. This is what most successful people actually do anyway. In the graduation-speech approach, you decide where you want to be in twenty years, and then ask: what should I do now to get there? I propose instead that you don’t commit to anything in the future, but just look at the options available now, and choose those that will give you the most promising range of options afterward.
David Epstein (Range: Why Generalists Triumph in a Specialized World)
Perhaps we are all living inside a giant computer simulation, Matrix-style. That would contradict all our national, religious and ideological stories. But our mental experiences would still be real. If it turns out that human history is an elaborate simulation run on a super-computer by rat scientists from the planet Zircon, that would be rather embarrassing for Karl Marx and the Islamic State. But these rat scientists would still have to answer for the Armenian genocide and for Auschwitz. How did they get that one past the Zircon University’s ethics committee? Even if the gas chambers were just electric signals in silicon chips, the experiences of pain, fear and despair were not one iota less excruciating for that. Pain is pain, fear is fear, and love is love – even in the matrix. It doesn’t matter if the fear you feel is inspired by a collection of atoms in the outside world or by electrical signals manipulated by a computer. The fear is still real. So if you want to explore the reality of your mind, you can do that inside the matrix as well as outside it.
Yuval Noah Harari (21 Lessons for the 21st Century)
It’s easy to raise graduation rates, for example, by lowering standards. Many students struggle with math and science prerequisites and foreign languages. Water down those requirements, and more students will graduate. But if one goal of our educational system is to produce more scientists and technologists for a global economy, how smart is that? It would also be a cinch to pump up the income numbers for graduates. All colleges would have to do is shrink their liberal arts programs, and get rid of education departments and social work departments while they’re at it, since teachers and social workers make less money than engineers, chemists, and computer scientists. But they’re no less valuable to society.
Cathy O'Neil (Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy)
Today the Catholic Church continues to enjoy the loyalties and tithes of hundreds of millions of followers. Yet it and the other theist religions have long since turned from creative into reactive forces. They are busy with rearguard holding operations more than with pioneering novel technologies, innovative economic methods or groundbreaking social ideas. They now mostly agonise over the technologies, methods and ideas propagated by other movements. Biologists invent the contraceptive pill – and the Pope doesn’t know what to do about it. Computer scientists develop the Internet – and rabbis argue whether orthodox Jews should be allowed to surf it. Feminist thinkers call upon women to take possession of their bodies – and learned muftis debate how to confront such incendiary ideas.
Yuval Noah Harari (Homo Deus: A Brief History of Tomorrow)
Computers certainly possess the ability to reason and the capacity for self-reference. And just because they do, their actions are intrinsically inscrutable. Consequently, as they become more powerful and perform a more varied set of tasks, computers exhibit an unpredictability approaching that of human beings. Indeed, by Averroës’s standards, they possess the same degree of immortality as humans.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
Without sex, the only way for bacteria to adapt is through mutation, which is caused by reproductive error or environmental damage. Most mutations are hurtful; they make for even less successful bacteria—though eventually, with luck, a mutation would arise that made for a more heat-resistant bacterium. Asexual adaptation is problematic because the dictate of the world, “Change or die,” runs directly counter to one of the primary dictates of life: “Maintain the integrity of the genome.” In engineering, this type of clash is called a coupled design. Two functions of a system clash so that it is not possible to adjust one without negatively affecting the other. In sexual reproduction, by contrast, the inherent scrambling, or recombination, affords a vast scope for change, yet still maintains genetic integrity.
Seth Lloyd (Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos)
If life has accelerated, and we have become overwhelmed by information to the point that we are less and less able to focus on any of it, why has there been so little pushback? Why haven’t we tried to slow things down to a pace where we can think clearly? I was able to find the first part of an answer to this—and it’s only the first part—when I went to interview Professor Earl Miller. He has won some of the top awards in neuroscience in the world, and he was working at the cutting edge of brain research when I went to see him in his office at the Massachusetts Institute of Technology (MIT). He told me bluntly that instead of acknowledging our limitations and trying to live within them, we have—en masse—fallen for an enormous delusion. There’s one key fact, he said, that every human being needs to understand—and everything else he was going to explain flows from that. “Your brain can only produce one or two thoughts” in your conscious mind at once. That’s it. “We’re very, very single-minded.” We have “very limited cognitive capacity.” This is because of the “fundamental structure of the brain,” and it’s not going to change. But rather than acknowledge this, Earl told me, we invented a myth. The myth is that we can actually think about three, five, ten things at the same time. To pretend this was the case, we took a term that was never meant to be applied to human beings at all. In the 1960s, computer scientists invented machines with more than one processor, so they really could do two things (or more) simultaneously. They called this machine-power “multitasking.” Then we took the concept and applied it to ourselves.
Johann Hari (Stolen Focus: Why You Can't Pay Attention—and How to Think Deeply Again)
Progress in science and technology is real, but it builds on past truths without rejecting them. Computers don’t have to be re-invented in order to keep getting better; innovations expand what they already do. Knowledge accumulates, so it can increase. Scientists and engineers know this, but artists, authors, and philosophers keep trying to start over from ground zero in the humanities. Thus, they don’t really progress—they become primitive.
Gene Edward Veith Jr.
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The best intention of genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers. Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage. We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers. When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
Stephen Hawking (Brief Answers to the Big Questions)
Chaos and disruption, I later learned, are central tenets of Bannon's animating ideology. Before catalyzing America's dharmic rebalancing, his movement would first need to instill chaos through society so that a new order could emerge. He was an avid reader of a computer scientist and armchair philosopher who goes by the name Mencius Moldbug, a hero of the alt-right who writes long-winded essays attacking democracy and virtually everything about how modern societies are ordered. Moldbug’s views on truth influenced Bannon, and what Cambridge Analytica would become. Moldbug has written that “nonsense is a more effective organizing tool than the truth,” and Bannon embraced this. “Anyone can believe in the truth,” Moldbug writes, “to believe in nonsense is an unforgettable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army.”
Christopher Wylie (Mindf*ck: Cambridge Analytica and the Plot to Break America)
Running? Jumping?" Anthony turned an anxious face to William. "He'll hurt himself. You can handle the Computer. Override. Make him stop." And William said sharply, "No. I won't. I'll take the chance of his hurting himself. Don't you understand? He's happy. He was on Earth, a world he was never equipped to handle. Now he's on Mercury with a body perfectly adapted to its environment, as perfectly adapted as a hundred specialized scientists could make it be. It's paradise for him; let him enjoy it." "Enjoy? He's a robot." "I'm not talking about the robot. I'm talking about the brain-the brain-that's living here." The Mercury Computer, enclosed in glass, carefully and delicately wired, its integrity most subtly preserved, breathed and lived. "It's Randall who's in paradise," said William. "He's found the world for whose sake he autistically fled this one. He has a world his new body fits perfectly in exchange for the world his old body did not fit at all.
Isaac Asimov (The Bicentennial Man and Other Stories)
The Institute had explored the behavior of a great variety of complex systems—corporations in the marketplace, neurons in the human brain, enzyme cascades within a single cell, the group behavior of migratory birds—systems so complex that it had not been possible to study them before the advent of the computer. The research was new, and the findings were surprising. It did not take long before the scientists began to notice that complex systems showed certain common behaviors. They started to think of these behaviors as characteristic of all complex systems. They realized that these behaviors could not be explained by analyzing the components of the systems. The time-honored scientific approach of reductionism—taking the watch apart to see how it worked—didn’t get you anywhere with complex systems, because the interesting behavior seemed to arise from the spontaneous interaction of the components. The behavior wasn’t planned or directed; it just happened. Such behavior was therefore called “self-organizing.”
Michael Crichton (The Lost World (Jurassic Park, #2))
Only years later—as an investigative journalist writing about poor scientific research—did I realize that I had committed statistical malpractice in one section of the thesis that earned me a master’s degree from Columbia University. Like many a grad student, I had a big database and hit a computer button to run a common statistical analysis, never having been taught to think deeply (or at all) about how that statistical analysis even worked. The stat program spit out a number summarily deemed “statistically significant.” Unfortunately, it was almost certainly a false positive, because I did not understand the limitations of the statistical test in the context in which I applied it. Nor did the scientists who reviewed the work. As statistician Doug Altman put it, “Everyone is so busy doing research they don’t have time to stop and think about the way they’re doing it.” I rushed into extremely specialized scientific research without having learned scientific reasoning. (And then I was rewarded for it, with a master’s degree, which made for a very wicked learning environment.) As backward as it sounds, I only began to think broadly about how science should work years after I left it.
David Epstein (Range: Why Generalists Triumph in a Specialized World)
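The mistake Epstein describes is mechanical enough to demonstrate. A minimal sketch, assuming pure-noise data (nothing here is from his thesis): run a standard t-test across many comparisons and "statistically significant" results appear by chance alone.

```python
import numpy as np
from scipy import stats

# "Hit a button, get significance" gone wrong: compare 100 pairs of
# groups that are pure noise. At alpha = 0.05, roughly 5 comparisons
# will come out "statistically significant" despite no real effect.

rng = np.random.default_rng(42)
n_tests, n_samples, alpha = 100, 30, 0.05

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(size=n_samples)  # group A: noise, no real effect
    b = rng.normal(size=n_samples)  # group B: drawn the same way
    _, p = stats.ttest_ind(a, b)
    false_positives += (p < alpha)

print(f"{false_positives} of {n_tests} null comparisons were 'significant'")
```

The test is working exactly as designed; it is the user who, without understanding its limitations, reads a guaranteed background rate of false positives as discovery.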
In theory, if some holy book misrepresented reality, its disciples would sooner or later discover this, and the text’s authority would be undermined. Abraham Lincoln said you cannot deceive everybody all the time. Well, that’s wishful thinking. In practice, the power of human cooperation networks depends on a delicate balance between truth and fiction. If you distort reality too much, it will weaken you, and you will not be able to compete against more clear-sighted rivals. On the other hand, you cannot organise masses of people effectively without relying on some fictional myths. So if you stick to unalloyed reality, without mixing any fiction with it, few people will follow you. If you used a time machine to send a modern scientist to ancient Egypt, she would not be able to seize power by exposing the fictions of the local priests and lecturing the peasants on evolution, relativity and quantum physics. Of course, if our scientist could use her knowledge in order to produce a few rifles and artillery pieces, she could gain a huge advantage over pharaoh and the crocodile god Sobek. Yet in order to mine iron ore, build blast furnaces and manufacture gunpowder the scientist would need a lot of hard-working peasants. Do you really think she could inspire them by explaining that energy divided by mass equals the speed of light squared? If you happen to think so, you are welcome to travel to present-day Afghanistan or Syria and try your luck. Really powerful human organisations – such as pharaonic Egypt, the European empires and the modern school system – are not necessarily clear-sighted. Much of their power rests on their ability to force their fictional beliefs on a submissive reality. That’s the whole idea of money, for example. The government makes worthless pieces of paper, declares them to be valuable and then uses them to compute the value of everything else. The government has the power to force citizens to pay taxes using these pieces of paper, so the citizens have no choice but to get their hands on at least some of them. Consequently, these bills really do become valuable, the government officials are vindicated in their beliefs, and since the government controls the issuing of paper money, its power grows. If somebody protests that ‘These are just worthless pieces of paper!’ and behaves as if they are only pieces of paper, he won’t get very far in life.
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
Eyebrows were raised in 1994 when Peter Shor, working at Bell Labs, came up with a quantum algorithm that could break most modern encryption. Today’s encryption is based on the difficulty of factoring large numbers. Although no quantum computer can yet implement Shor’s algorithm in full, there is worry that most of our encryption will be broken in a few years as more capable quantum computers come along. When this happens, there will be a rush to quantum-safe encryption algorithms (which cannot be broken quickly by either classical or quantum computers).
Rizwan Virk (The Simulated Multiverse: An MIT Computer Scientist Explores Parallel Universes, The Simulation Hypothesis, Quantum Computing and the Mandela Effect)
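The quote's premise, that today's encryption stands on the difficulty of factoring, can be shown in miniature. Below is a hedged, toy-sized Python sketch with brute-force trial division standing in for Shor's algorithm (which performs the same factoring step efficiently on a quantum computer): whoever factors the public modulus can rebuild the private key.

```python
# Toy illustration: factoring the public modulus of an RSA-style key
# recovers the private key. Trial division stands in for Shor's
# algorithm; it is hopeless at real key sizes, trivial at toy sizes.

def factor(n):
    """Brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

# Public key: modulus n = p * q and exponent e (p and q kept secret).
p, q = 61, 53
n, e = p * q, 17

# An attacker who can factor n rebuilds the private exponent d.
fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)        # encrypt with the public key
recovered = pow(cipher, d, n)  # decrypt with the derived private key
assert recovered == msg
print(f"factored n={n} into {fp}*{fq}, recovered message {recovered}")
```

The quantum-safe schemes mentioned at the end replace factoring with problems, such as those on lattices, for which no fast quantum algorithm is known.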
As I became older, I was given many masks to wear. I could be a laborer laying railroad tracks across the continent, with long hair in a queue to be pulled by pranksters; a gardener trimming the shrubs while secretly planting a bomb; a saboteur before the day of infamy at Pearl Harbor, signaling the Imperial Fleet; a kamikaze pilot donning his headband somberly, screaming 'Banzai' on my way to my death; a peasant with a broad-brimmed straw hat in a rice paddy on the other side of the world, stooped over to toil in the water; an obedient servant in the parlor, a houseboy too dignified for my own good; a washerman in the basement laundry, removing stains using an ancient secret; a tyrant intent on imposing my despotism on the democratic world, opposed by the free and the brave; a party cadre alongside many others, all of us clad in coordinated Mao jackets; a sniper camouflaged in the trees of the jungle, training my gunsights on G.I. Joe; a child running with a body burning from napalm, captured in an unforgettable photo; an enemy shot in the head or slaughtered by the villageful; one of the grooms in a mass wedding of couples, having met my mate the day before through our cult leader; an orphan in the last airlift out of a collapsed capital, ready to be adopted into the good life; a black belt martial artist breaking cinderblocks with his head, in an advertisement for Ginsu brand knives with the slogan 'but wait--there's more' as the commercial segued to show another free gift; a chef serving up dog stew, a trick on the unsuspecting diner; a bad driver swerving into the next lane, exactly as could be expected; a horny exchange student here for a year, eager to date the blonde cheerleader; a tourist visiting, clicking away with his camera, posing my family in front of the monuments and statues; a ping pong champion, wearing white tube socks pulled up too high and batting the ball with a wicked spin; a violin prodigy impressing the audience at Carnegie Hall, before taking a polite bow; a teen computer scientist, ready to make millions on an initial public offering before the company stock crashes; a gangster in sunglasses and a tight suit, embroiled in a turf war with the Sicilian mob; an urban greengrocer selling lunch by the pound, rudely returning change over the counter to the black patrons; a businessman with a briefcase of cash bribing a congressman, a corrupting influence on the electoral process; a salaryman on my way to work, crammed into the commuter train and loyal to the company; a shady doctor, trained in a foreign tradition with anatomical diagrams of the human body mapping the flow of life energy through a multitude of colored points; a calculus graduate student with thick glasses and a bad haircut, serving as a teaching assistant with an incomprehensible accent, scribbling on the chalkboard; an automobile enthusiast who customizes an imported car with a supercharged engine and Japanese decals in the rear window, cruising the boulevard looking for a drag race; an illegal alien crowded into the cargo hold of a smuggler's ship, defying death only to crowd into a New York City tenement and work as a slave in a sweatshop. My mother and my girl cousins were Madame Butterfly from the mail order bride catalog, dying in their service to the masculinity of the West, and the dragon lady in a kimono, taking vengeance for her sisters. They became the television newscaster, look-alikes with their flawlessly permed hair. Through these indelible images, I grew up.
But when I looked in the mirror, I could not believe my own reflection because it was not like what I saw around me. Over the years, the world opened up. It has become a dizzying kaleidoscope of cultural fragments, arranged and rearranged without plan or order.
Frank H. Wu (Yellow)
As a society we are only now getting close to where Dogen was eight hundred years ago. We are watching all our most basic assumptions about life, the universe, and everything come undone, just like Dogen saw his world fall apart when his parents died. Religions don’t seem to mean much anymore, except maybe to small groups of fanatics. You can hardly get a full-time job, and even if you do, there’s no stability. A college degree means very little. The Internet has leveled things so much that the opinions of the greatest scientists in the world about global climate change are presented as being equal to those of some dude who read part of the Bible and took it literally. The news industry has collapsed so that it’s hard to tell a fake headline from a real one. Money isn’t money anymore; it’s numbers stored in computers. Everything is changing so rapidly that none of us can hope to keep up. All this uncertainty has a lot of us scrambling for something certain to hang on to. But if you think I’m gonna tell you that Dogen provides us with that certainty, think again. He actually gives us something far more useful. Dogen gives us a way to be okay with uncertainty. This isn’t just something Buddhists need; it’s something we all need. We humans can be certainty junkies. We’ll believe in the most ridiculous nonsense to avoid the suffering that comes from not knowing something. It’s like part of our brain is dedicated to compulsive dot-connecting. I think we’re wired to want to be certain. You have to know if that’s a rope or a snake, if the guy with the chains all over his chest is a gangster or a fan of bad seventies movies. Being certain means being safe. The downfall is that we humans think about a lot of stuff that’s not actually real. We crave certainty in areas where there can never be any. That’s when we start in with believing the crazy stuff. Dogen is interesting because he tries to cut right to the heart of this. He gets into what is real and what is not. Probably the main reason he’s so difficult to read is that Dogen is trying to say things that can’t actually be said. So he has to bend language to the point where it almost breaks. He’s often using language itself to show the limitations of language. Even the very first readers of his writings must have found them difficult. Dogen understood both that words always ultimately fail to describe reality and that we human beings must rely on words anyway. So he tried to use words to write about that which is beyond words. This isn’t really a discrepancy. You use words, but you remain aware of their limitations. My teacher used to say, “People like explanations.” We do. They’re comforting. When the explanation is reasonably correct, it’s useful.
Brad Warner (It Came from Beyond Zen!: More Practical Advice from Dogen, Japan's Greatest Zen Master (Treasury of the True Dharma Eye Book 2))
The Xerox Corporation’s Palo Alto Research Center, known as Xerox PARC, had been established in 1970 to create a spawning ground for digital ideas. It was safely located, for better and for worse, three thousand miles from the commercial pressures of Xerox corporate headquarters in Connecticut. Among its visionaries was the scientist Alan Kay, who had two great maxims that Jobs embraced: “The best way to predict the future is to invent it” and “People who are serious about software should make their own hardware.” Kay pushed the vision of a small personal computer, dubbed the “Dynabook,” that would be easy enough for children to use. So Xerox PARC’s engineers began to develop user-friendly graphics that could replace all of the command lines and DOS prompts that made computer screens intimidating. The metaphor they came up with was that of a desktop. The screen could have many documents and folders on it, and you could use a mouse to point and click on the one you wanted to use.
Walter Isaacson (Steve Jobs)
Although some scientists questioned the validity of these studies, others went along willingly. People from a wide range of disciplines were recruited, including psychics, physicists, and computer scientists, to investigate a variety of unorthodox projects: experimenting with mind-altering drugs such as LSD, asking psychics to locate the position of Soviet submarines patrolling the deep oceans, etc. In one sad incident, a U.S. Army scientist was secretly given LSD. According to some reports, he became so violently disoriented that he committed suicide by jumping out a window. Most of these experiments were justified on the grounds that the Soviets were already ahead of us in terms of mind control. The U.S. Senate was briefed in another secret report that the Soviets were experimenting with beaming microwave radiation directly into the brains of test subjects. Rather than denouncing the act, the United States saw “great potential for development into a system for disorienting or disrupting the behavior pattern of military or diplomatic personnel.” The U.S. Army even claimed that it might be able to beam entire words and speeches into the minds of the enemy: “One decoy and deception concept … is to remotely create noise in the heads of personnel by exposing them to low power, pulsed microwaves.… By proper choice of pulse characteristics, intelligible speech may be created.… Thus, it may be possible to ‘talk’ to selected adversaries in a fashion that would be most disturbing to them,” the report said. Unfortunately, none of these experiments was peer-reviewed, so millions of taxpayer dollars were spent on projects like this one, which most likely violated the laws of physics, since the human brain cannot receive microwave radiation and, more important, does not have the ability to decode microwave messages. Dr. Steve Rose, a biologist at the Open University, has called this far-fetched scheme a “neuro-scientific impossibility.” But for all the millions of dollars spent on these “black projects,” apparently not a single piece of reliable science emerged. The use of mind-altering drugs did, in fact, create disorientation and even panic among the subjects who were tested, but the Pentagon failed to accomplish the key goal: control of the conscious mind of another person. Also, according to psychologist Robert Jay Lifton, brainwashing by the communists had little long-term effect. Most of the American troops who denounced the United States during the Korean War reverted back to their normal personalities soon after being released. In addition, studies done on people who have been brainwashed by certain cults also show that they revert back to their normal personality after leaving the cult. So it seems that, in the long run, one’s basic personality is not affected by brainwashing.
Michio Kaku (The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind)
When General Genius built the first mentar [Artificial Intelligence] mind in the last half of the twenty-first century, it based its design on the only proven conscious material then known, namely, our brains. Specifically, the complex structure of our synaptic network. Scientists substituted an electrochemical substrate for our slower, messier biological one. Our brains are an evolutionary hodgepodge of newer structures built on top of more ancient ones, a jury-rigged system that has gotten us this far, despite its inefficiency, but was crying out for a top-to-bottom overhaul. Or so the General Genius engineers presumed. One of their chief goals was to make minds as portable as possible, to be easily transferred, stored, and active in multiple media: electronic, chemical, photonic, you name it. Thus there didn't seem to be a need for a mentar body, only for interchangeable containers. They designed the mentar mind to be as fungible as a bank transfer. And so they eliminated our most ancient brain structures for regulating metabolic functions, and they adapted our sensory/motor networks to the control of peripherals. As it turns out, intelligence is not limited to neural networks, Merrill. Indeed, half of human intelligence resides in our bodies outside our skulls. This was intelligence the mentars never inherited from us. ... The genius of the irrational... ... We gave them only rational functions -- the ability to think and feel, but no irrational functions... Have you ever been in a tight situation where you relied on your 'gut instinct'? This is the body's intelligence, not the mind's. Every living cell possesses it. The mentar substrate has no indomitable will to survive, but ours does. Likewise, mentars have no 'fire in the belly,' but we do. They don't experience pure avarice or greed or pride. They're not very curious, or playful, or proud. They lack a sense of wonder and spirit of adventure. They have little initiative. Granted, their cognition is miraculous, but their personalities are rather pedantic. But probably their chief shortcoming is the lack of intuition. Of all the irrational faculties, intuition is the most powerful. Some say intuition transcends space-time. Have you ever heard of a mentar having a lucky hunch? They can bring incredible amounts of cognitive and computational power to bear on a seemingly intractable problem, only to see a dumb human with a lucky hunch walk away with the prize every time. Then there's luck itself. Some people have it, most don't, and no mentar does. So this makes them want our bodies... Our bodies, ape bodies, dog bodies, jellyfish bodies. They've tried them all. Every cell knows some neat tricks for survival, but the problem with cellular knowledge is that it's not at all fungible; nor are our memories. We're pretty much trapped in our containers.
David Marusek (Mind Over Ship)
Similarly, the computers used to run the software on the ground for the mission were borrowed from a previous mission. These machines were so out of date that Bowman had to shop on eBay to find replacement parts to get the machines working. As systems have gone obsolete, JPL no longer uses the software, but Bowman told me that the people on her team continue to use software built by JPL in the 1990s, because they are familiar with it. She said, “Instead of upgrading to the next thing we decided that it was working just fine for us and we would stay on the platform.” They have developed so much over such a long period of time with the old software that they don’t want to switch to a newer system. They must adapt to using these outdated systems for the latest scientific work. Working within these constraints may seem limiting. However, building tools with specific constraints—from outdated technologies and low bitrate radio antennas—can enlighten us. For example, as scientists started to explore what they could learn from the wait times while communicating with deep space probes, they discovered that the time lag was extraordinarily useful information. Wait times, they realized, constitute an essential component for locating a probe in space, calculating its trajectory, and accurately locating a target like Pluto in space. There is no GPS for spacecraft (they aren’t on the globe, after all), so scientists had to find a way to locate the spacecraft in the vast expanse. Before 1960, the location of planets and objects in deep space was established through astronomical observation, placing an object like Pluto against a background of stars to determine its position.15 In 1961, an experiment at the Goldstone Deep Space Communications Complex in California used radar to more accurately define an “astronomical unit” and help measure distances in space much more accurately.16 NASA used this new data as part of creating the trajectories for missions in the following years. Using the data from radio signals across a wide range of missions over the decades, the Deep Space Network maintained an ongoing database that helped further refine the definition of an astronomical unit—a kind of longitudinal study of space distances that now allows missions like New Horizons to create accurate flight trajectories. The Deep Space Network continued to find inventive ways of using the time lag of radio waves to locate objects in space, ultimately finding that certain ways of waiting for a downlink signal from the spacecraft were less accurate than others. It turned to using the antennas from multiple locations, such as Goldstone in California and the antennas in Canberra, Australia, or Madrid, Spain, to time how long the signal took to hit these different locations on Earth. The time it takes to receive these signals from the spacecraft works as a way to locate the probes as they are journeying to their destination. Latency—or the different time lag of receiving radio signals on different locations of Earth—is the key way that deep space objects are located as they journey through space. This discovery was made possible during the wait times for communicating with these craft alongside the decades of data gathered from each space mission. Without the constraint of waiting, the notion of using time as a locating feature wouldn’t have been possible.
Jason Farman (Delayed Response: The Art of Waiting from the Ancient to the Instant World)
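The navigation idea in this passage reduces to light-travel-time arithmetic. A minimal, illustrative Python sketch with made-up numbers (this is not JPL's navigation software): a round-trip delay gives range, and the difference in arrival times at two separated antennas constrains direction.

```python
import math

# Illustrative "latency as location" arithmetic -- made-up numbers,
# not the Deep Space Network's actual processing.

C = 299_792_458.0            # speed of light, m/s
AU = 1.495978707e11          # astronomical unit, m

# Ranging: the measured round-trip light time to a probe gives its range.
measured_rtt_s = 2 * 33 * AU / C        # pretend this came off the antenna
range_m = C * measured_rtt_s / 2
print(f"round trip: {measured_rtt_s / 3600:.1f} h -> range {range_m / AU:.1f} AU")

# Direction: two stations separated by a baseline b receive the same
# downlink. For a distant source, the far station hears it later by
# b * cos(theta) / c, where theta is the angle to the probe.
baseline_m = 1.0e7                      # rough Earth-scale separation
theta_true = math.radians(60)           # the probe's (unknown) direction
delay_s = baseline_m * math.cos(theta_true) / C   # what we would measure
theta_inferred = math.degrees(math.acos(delay_s * C / baseline_m))
print(f"arrival-time difference {delay_s * 1e3:.2f} ms -> {theta_inferred:.0f} degrees")
```

Combining range from one station with arrival-time differences across stations like Goldstone, Canberra, and Madrid is what lets the wait itself act as a locating instrument.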
To their surprise, they found that dopamine actively regulates both the formation and the forgetting of new memories. In the process of creating new memories, the dCA1 receptor was activated. By contrast, forgetting was initiated by the activation of the DAMB receptor. Previously, it was thought that forgetting might be simply the degradation of memories with time, which happens passively by itself. This new study shows that forgetting is an active process, requiring intervention by dopamine. To prove their point, they showed that by interfering with the action of the dCA1 and DAMB receptors, they could, at will, increase or decrease the ability of fruit flies to remember and forget. A mutation in the dCA1 receptor, for example, impaired the ability of the fruit flies to remember. A mutation in the DAMB receptor decreased their ability to forget. The researchers speculate that this effect, in turn, may be partially responsible for savants’ skills. Perhaps there is a deficiency in their ability to forget. One of the graduate students involved in the study, Jacob Berry, says, “Savants have a high capacity for memory. But maybe it isn’t memory that gives them this capacity; maybe they have a bad forgetting mechanism. This might also be the strategy for developing drugs to promote cognition and memory—what about drugs that inhibit forgetting as cognitive enhancers?” Assuming that this result holds up in human experiments as well, it could encourage scientists to develop new drugs and neurotransmitters that are able to dampen the forgetting process. One might thus be able to selectively turn on photographic memories when needed by neutralizing the forgetting process. In this way, we wouldn’t have the continuous overflow of extraneous, useless information, which hinders the thinking of people with savant syndrome. What is also exciting is the possibility that the BRAIN project, which is being championed by the Obama administration, might be able to identify the specific pathways involved with acquired savant syndrome. Transcranial magnetic fields are still too crude to pin down the handful of neurons that may be involved. But using nanoprobes and the latest in scanning technologies, the BRAIN project might be able to isolate the precise neural pathways that make possible photographic memory and incredible computational, artistic, and musical skills. Billions of research dollars will be channeled into identifying the specific neural pathways involved with mental disease and other afflictions of the brain, and the secret of savant skills may be revealed in the process. Then it might be possible to take normal individuals and make savants out of them. This has happened many times in the past because of random accidents. In the future, this may become a precise medical process.
Michio Kaku (The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind)
As the subject watches the movies, the MRI machine creates a 3-D image of the blood flow within the brain. The MRI image looks like a vast collection of thirty thousand dots, or voxels. Each voxel represents a pinpoint of neural energy, and the color of the dot corresponds to the intensity of the signal and blood flow. Red dots represent points of large neural activity, while blue dots represent points of less activity. (The final image looks very much like thousands of Christmas lights in the shape of the brain. Immediately you can see that the brain is concentrating most of its mental energy in the visual cortex, which is located at the back of the brain, while watching these videos.) Gallant’s MRI machine is so powerful it can identify two to three hundred distinct regions of the brain and, on average, can take snapshots that have one hundred dots per region of the brain. (One goal for future generations of MRI technology is to provide an even sharper resolution by increasing the number of dots per region of the brain.) At first, this 3-D collection of colored dots looks like gibberish. But after years of research, Dr. Gallant and his colleagues have developed a mathematical formula that begins to find relationships between certain features of a picture (edges, textures, intensity, etc.) and the MRI voxels. For example, if you look at a boundary, you’ll notice it’s a region separating lighter and darker areas, and hence the edge generates a certain pattern of voxels. By having subject after subject view such a large library of movie clips, this mathematical formula is refined, allowing the computer to analyze how all sorts of images are converted into MRI voxels. Eventually the scientists were able to ascertain a direct correlation between certain MRI patterns of voxels and features within each picture. At this point, the subject is then shown another movie trailer. The computer analyzes the voxels generated during this viewing and re-creates a rough approximation of the original image. (The computer selects images from one hundred movie clips that most closely resemble the one that the subject just saw and then merges images to create a close approximation.) In this way, the computer is able to create a fuzzy video of the visual imagery going through your mind. Dr. Gallant’s mathematical formula is so versatile that it can take a collection of MRI voxels and convert it into a picture, or it can do the reverse, taking a picture and then converting it to MRI voxels. I had a chance to view the video created by Dr. Gallant’s group, and it was very impressive. Watching it was like viewing a movie with faces, animals, street scenes, and buildings through dark glasses. Although you could not see the details within each face or animal, you could clearly identify the kind of object you were seeing. Not only can this program decode what you are looking at, it can also decode imaginary images circulating in your head.
Michio Kaku (The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind)
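The two-way mapping Kaku describes (picture features to voxels, and voxels back to a best-matching clip) is, at its core, a regression model plus a nearest-neighbor search over a candidate library. Below is a deliberately tiny Python sketch on synthetic data, with ridge regression standing in for Gallant's actual mathematical formula; all sizes, names, and numbers are illustrative.

```python
import numpy as np

# Toy sketch of the encode-then-match idea on synthetic data.
# 1) Fit a linear "encoding model" W: image features -> voxel responses.
# 2) Decode a new voxel pattern by picking the library clip whose
#    predicted voxels are closest to what was measured.

rng = np.random.default_rng(1)
n_train, n_feat, n_vox = 200, 20, 50

X = rng.normal(size=(n_train, n_feat))        # features of training clips
W_true = rng.normal(size=(n_feat, n_vox))     # the unknown brain mapping
Y = X @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))  # noisy voxels

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# A library of 100 candidate clips; the subject actually watched clip 37.
library = rng.normal(size=(100, n_feat))
watched = 37
measured = library[watched] @ W_true + 0.1 * rng.normal(size=n_vox)

# Decode: predict voxels for every candidate, choose the nearest.
predictions = library @ W
guess = np.argmin(((predictions - measured) ** 2).sum(axis=1))
print(f"decoded clip {guess}, subject watched clip {watched}")
```

Note that the decoder never reconstructs pixels directly; like the merging step described in the quote, it works by comparing predicted voxel patterns against a library of candidate clips, which is why the resulting videos look like blended approximations rather than sharp recordings.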