Artificial Neural Network Quotes

We've searched our database for all the quotes and captions related to Artificial Neural Network. Here they are! All 38 of them:

As advances in AI, machine learning, and neural networks evolve, incomprehensibility will reach even higher levels - exposing these complex systems to both human and machine errors.
Roger Spitz (The Definitive Guide to Thriving on Disruption: Volume I - Reframing and Navigating Disruption)
A neural network, also known as an artificial neural network, is a type of machine learning algorithm that is inspired by the biological brain.
Michael Taylor (Machine Learning with Neural Networks: An In-depth Visual Introduction with Python: Make Your Own Neural Network in Python: A Simple Guide on Machine Learning with Neural Networks.)
In 1958 a Cornell professor, Frank Rosenblatt, attempted to do this by devising a mathematical approach for creating an artificial neural network like that of the brain, which he called a Perceptron.
Walter Isaacson (The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution)
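For readers curious what Rosenblatt's "mathematical approach" actually looks like, here is a minimal sketch of the perceptron learning rule. It is illustrative only: the AND-gate training data, learning rate, and epoch count are assumptions, not anything from the quote.

```python
# Minimal sketch of Rosenblatt-style perceptron learning (toy example).
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that step(w.x + b) matches y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0  # step activation
            error = target - pred
            w += lr * error * xi                      # nudge weights toward target
            b += lr * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # assumed toy inputs: AND gate
y = np.array([0, 0, 0, 1])                      # desired outputs
w, b = train_perceptron(X, y)
print(w, b)  # converges to a separating line, e.g. w = [0.2, 0.1], b = -0.2
```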
The intense effort to develop artificial intelligence has increased our understanding of neural networks because at its core, AI is but an attempt to improve artificially what the brain already does effortlessly.
Leonard Shlain (Leonardo's Brain: Understanding Da Vinci's Creative Genius)
Information contains an almost mystical power of free flow and self-replication, just as water seeks its own level or sparks fly upward.
Neal Stephenson (The Diamond Age: Or, a Young Lady's Illustrated Primer)
Is a mind a complicated kind of abstract pattern that develops in an underlying physical substrate, such as a vast network of nerve cells? If so, could something else be substituted for the nerve cells – something such as ants, giving rise to an ant colony that thinks as a whole and has an identity – that is to say, a self? Or could something else be substituted for the tiny nerve cells, such as millions of small computational units made of arrays of transistors, giving rise to an artificial neural network with a conscious mind? Or could software simulating such richly interconnected computational units be substituted, giving rise to a conventional computer (necessarily a far faster and more capacious one than we have ever seen) endowed with a mind and a soul and free will? In short, can thinking and feeling emerge from patterns
Andrew Hodges (Alan Turing: The Enigma)
Our brain is therefore not simply passively subjected to sensory inputs. From the get-go, it already possesses a set of abstract hypotheses, an accumulated wisdom that emerged through the sift of Darwinian evolution and which it now projects onto the outside world. Not all scientists agree with this idea, but I consider it a central point: the naive empiricist philosophy underlying many of today's artificial neural networks is wrong. It is simply not true that we are born with completely disorganized circuits devoid of any knowledge, which later receive the imprint of their environment. Learning, in man and machine, always starts from a set of a priori hypotheses, which are projected onto the incoming data, and from which the system selects those that are best suited to the current environment. As Jean-Pierre Changeux stated in his best-selling book Neuronal Man (1985), “To learn is to eliminate.”
Stanislas Dehaene (How We Learn: Why Brains Learn Better Than Any Machine . . . for Now)
He postulated that many neurons can combine into a coalition, becoming a single processing unit. The connection patterns of these units, which can change, make up the algorithms (which can also change with the changing connection patterns) that determine the brain’s response to a stimulus. From this idea came the mantra “Cells that fire together wire together.” According to this theory, learning has a biological basis in the “wiring” patterns of neurons. Hebb noted that the brain is active all the time, not just when stimulated; inputs from the outside can only modify that ongoing activity. Hebb’s proposal made sense to those designing artificial neural networks, and it was put to use in computer programs.
Michael S. Gazzaniga (The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind)
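Hebb's rule lends itself to a very short sketch: strengthen the synapse between any two neurons that fire in the same time step. In the toy illustration below, the random activity patterns, network size, and learning rate are all assumptions, not anything from the book.

```python
# Minimal sketch of Hebbian learning: "cells that fire together wire together".
import numpy as np

rng = np.random.default_rng(0)
n = 5                                        # number of neurons (assumed)
W = np.zeros((n, n))                         # synaptic strengths
lr = 0.1

for _ in range(100):
    x = (rng.random(n) > 0.5).astype(float)  # which neurons fire this step
    W += lr * np.outer(x, x)                 # co-active pairs "wire together"
np.fill_diagonal(W, 0.0)                     # ignore self-connections

print(W)  # pairs that often fired together end up with the largest weights
```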
Two years later, DeepMind engineers used what they had learned from game playing to solve an economic problem of vital interest: How should Google optimize the management of its computer servers? The artificial neural network remained similar; the only things that changed were the inputs (date, time, weather, international events, search requests, number of people connected to each server, etc.), the outputs (turn on or off this or that server on various continents), and the reward function (consume less energy). The result was an instant drop in power consumption. Google reduced its energy bill by up to 40 percent and saved tens of millions of dollars—even after myriad specialized engineers had already tried to optimize those very servers. Artificial intelligence has truly reached levels of success that can turn whole industries upside down.
Stanislas Dehaene (How We Learn: Why Brains Learn Better Than Any Machine . . . for Now)
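The recipe the quote describes, in which only the observations, the actions, and the reward function change between tasks, can be sketched with a toy reinforcement-learning loop. Everything below (the hypothetical CoolingEnv class, the load model, tabular Q-learning) is a greatly simplified stand-in, not DeepMind's actual system.

```python
# Hedged sketch of a reward-driven learning loop with a made-up environment.
import random

class CoolingEnv:
    """Toy stand-in: state is a discretized server load; actions toggle capacity."""
    def __init__(self):
        self.load = 5
    def step(self, action):                 # action: 0 = power down, 1 = power up
        self.load = max(0, min(9, self.load + (1 if action else -1)))
        return self.load, -self.load        # reward function: consume less energy

env = CoolingEnv()
Q = [[0.0, 0.0] for _ in range(10)]         # one value per (state, action) pair
state, lr, gamma, eps = env.load, 0.5, 0.9, 0.1

for _ in range(2000):
    if random.random() < eps:               # occasional exploration
        action = random.randrange(2)
    else:                                   # otherwise act greedily
        action = max((0, 1), key=lambda a: Q[state][a])
    nxt, reward = env.step(action)
    Q[state][action] += lr * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print(max((0, 1), key=lambda a: Q[5][a]))   # learned choice at mid load: 0 (power down)
```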
Here are some practical Dataist guidelines for you: ‘You want to know who you really are?’ asks Dataism. ‘Then forget about mountains and museums. Have you had your DNA sequenced? No?! What are you waiting for? Go and do it today. And convince your grandparents, parents and siblings to have their DNA sequenced too – their data is very valuable for you. And have you heard about these wearable biometric devices that measure your blood pressure and heart rate twenty-four hours a day? Good – so buy one of those, put it on and connect it to your smartphone. And while you are shopping, buy a mobile camera and microphone, record everything you do, and put it online. And allow Google and Facebook to read all your emails, monitor all your chats and messages, and keep a record of all your Likes and clicks. If you do all that, then the great algorithms of the Internet-of-All-Things will tell you whom to marry, which career to pursue and whether to start a war.’ But where do these great algorithms come from? This is the mystery of Dataism. Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms. At present, of course, the algorithms are mostly written by human hackers. Yet the really important algorithms – such as the Google search algorithm – are developed by huge teams. Each member understands just one part of the puzzle, and nobody really understands the algorithm as a whole. Moreover, with the rise of machine learning and artificial neural networks, more and more algorithms evolve independently, improving themselves and learning from their own mistakes. They analyse astronomical amounts of data that no human can possibly encompass, and learn to recognise patterns and adopt strategies that escape the human mind. The seed algorithm may initially be developed by humans, but as it grows it follows its own path, going where no human has gone before – and where no human can follow.
Yuval Noah Harari (Homo Deus: A History of Tomorrow)
Yann LeCun's strategy provides a good example of a much more general notion: the exploitation of innate knowledge. Convolutional neural networks learn better and faster than other types of neural networks because they do not learn everything. They incorporate, in their very architecture, a strong hypothesis: what I learn in one place can be generalized everywhere else. The main problem with image recognition is invariance: I have to recognize an object, whatever its position and size, even if it moves to the right or left, farther or closer. It is a challenge, but it is also a very strong constraint: I can expect the very same clues to help me recognize a face anywhere in space. By replicating the same algorithm everywhere, convolutional networks effectively exploit this constraint: they integrate it into their very structure. Innately, prior to any learning, the system already “knows” this key property of the visual world. It does not learn invariance, but assumes it a priori and uses it to reduce the learning space-clever indeed!
Stanislas Dehaene (How We Learn: Why Brains Learn Better Than Any Machine . . . for Now)
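The weight sharing Dehaene describes can be made concrete in a few lines: one small kernel is slid across every image position (strictly, cross-correlation, as in most deep-learning libraries), so a detector learned in one place works everywhere. The toy image and 3x3 edge kernel below are assumptions for illustration.

```python
# Minimal sketch of convolutional weight sharing over a toy image.
import numpy as np

def slide_kernel(image, kernel):
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The *same* weights are reused at every location: translation
            # invariance is built into the architecture, not learned.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                      # a vertical edge, wherever it appears...
kernel = np.array([[-1, 0, 1]] * 3)    # ...triggers the same 3x3 detector
print(slide_kernel(image, kernel))     # strong responses wherever the edge falls
```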
Thiel, the PayPal cofounder who had invested in SpaceX, holds a conference each year with the leaders of companies financed by his Founders Fund. At the 2012 gathering, Musk met Demis Hassabis, a neuroscientist, video-game designer, and artificial intelligence researcher with a courteous manner that conceals a competitive mind. A chess prodigy at age four, he became the five-time champion of an international Mind Sports Olympiad that includes competition in chess, poker, Mastermind, and backgammon. In his modern London office is an original edition of Alan Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” which proposed an “imitation game” that would pit a human against a ChatGPT–like machine. If the responses of the two were indistinguishable, he wrote, then it would be reasonable to say that machines could “think.” Influenced by Turing’s argument, Hassabis cofounded a company called DeepMind that sought to design computer-based neural networks that could achieve artificial general intelligence. In other words, it sought to make machines that could learn how to think like humans.
Walter Isaacson (Elon Musk)
When General Genius built the first mentar [Artificial Intelligence] mind in the last half of the twenty-first century, it based its design on the only proven conscious material then known, namely, our brains. Specifically, the complex structure of our synaptic network. Scientists substituted an electrochemical substrate for our slower, messier biological one. Our brains are an evolutionary hodgepodge of newer structures built on top of more ancient ones, a jury-rigged system that has gotten us this far, despite its inefficiency, but was crying out for a top-to-bottom overhaul. Or so the General Genius engineers presumed. One of their chief goals was to make minds as portable as possible, to be easily transferred, stored, and active in multiple media: electronic, chemical, photonic, you name it. Thus there didn't seem to be a need for a mentar body, only for interchangeable containers. They designed the mentar mind to be as fungible as a bank transfer. And so they eliminated our most ancient brain structures for regulating metabolic functions, and they adapted our sensory/motor networks to the control of peripherals. As it turns out, intelligence is not limited to neural networks, Merrill. Indeed, half of human intelligence resides in our bodies outside our skulls. This was intelligence the mentars never inherited from us. ... The genius of the irrational... ... We gave them only rational functions -- the ability to think and feel, but no irrational functions... Have you ever been in a tight situation where you relied on your 'gut instinct'? This is the body's intelligence, not the mind's. Every living cell possesses it. The mentar substrate has no indomitable will to survive, but ours does. Likewise, mentars have no 'fire in the belly,' but we do. They don't experience pure avarice or greed or pride. They're not very curious, or playful, or proud. They lack a sense of wonder and spirit of adventure. They have little initiative. Granted, their cognition is miraculous, but their personalities are rather pedantic. But probably their chief shortcoming is the lack of intuition. Of all the irrational faculties, intuition is the most powerful. Some say intuition transcends space-time. Have you ever heard of a mentar having a lucky hunch? They can bring incredible amounts of cognitive and computational power to bear on a seemingly intractable problem, only to see a dumb human with a lucky hunch walk away with the prize every time. Then there's luck itself. Some people have it, most don't, and no mentar does. So this makes them want our bodies... Our bodies, ape bodies, dog bodies, jellyfish bodies. They've tried them all. Every cell knows some neat tricks for survival, but the problem with cellular knowledge is that it's not at all fungible; nor are our memories. We're pretty much trapped in our containers.
David Marusek (Mind Over Ship)
One neural network called bidirectional associative memory (BAM) allows you to provide the value and receive the key.
Jeff Heaton (Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks)
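A minimal sketch of a BAM, under the usual textbook formulation (bipolar key/value pairs stored as a sum of outer products), shows the bidirectional recall Heaton mentions. The two stored pairs below are made-up patterns.

```python
# Minimal sketch of bidirectional associative memory (BAM) with toy patterns.
import numpy as np

keys   = np.array([[1, -1,  1, -1],
                   [1,  1, -1, -1]])                   # bipolar "keys" (assumed)
values = np.array([[ 1,  1, 1],
                   [-1, -1, 1]])                       # bipolar "values" (assumed)

W = sum(np.outer(k, v) for k, v in zip(keys, values))  # Hebbian-style storage

def recall_value(key):                                 # forward pass: key -> value
    return np.sign(key @ W)

def recall_key(value):                                 # backward pass: value -> key
    return np.sign(W @ value)

print(recall_value(keys[0]))    # recovers values[0]
print(recall_key(values[0]))    # recovers keys[0]: give the value, get the key
```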
There are four main predictive modeling techniques detailed in this book as important upstream O&G data-driven analytic methodologies: decision trees; regression (linear regression and logistic regression); neural networks (artificial neural networks and self-organizing maps (SOMs)); and k-means clustering.
Keith Holdaway (Harness Oil and Gas Big Data with Analytics: Optimize Exploration and Production with Data-Driven Models (Wiley and SAS Business Series))
one of the names that deep learning has gone by is artificial neural networks (ANNs).
Ian Goodfellow (Deep Learning (Adaptive Computation and Machine Learning series))
The technicians added links in the neural network, dumped data, waited for the AI to respond. It looked uncannily lifelike, from the waist up; sitting behind that table as if they were reporting to it. He always walked the other way, behind the thing’s back, so he could see the mess of wires and glinting microchips. He wondered if that disturbed it, if it was capable of feeling the same uncanny itch down its spine as a human with their back unprotected.
Sara Barkat (The Shivering Ground & Other Stories)
Inspired by the tangled webs of neurons in our brains, deep learning constructs software layers of artificial neural networks with input and output layers. Data is fed into the input layer of the network, and a result emerges from the output layer of the network. In between the input and output layers may be up to thousands of other layers, hence the name “deep” learning.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
The first academic paper describing deep learning dates all the way back to 1967. It took almost fifty years for this technology to blossom. The reason it took so long is that deep learning requires large amounts of data and computing power for training the artificial neural network. If computing power is the engine of AI, data is the fuel. Only in the last decade has computing become fast enough and data sufficiently plentiful. Today, your smartphone holds millions of times more processing power than the NASA computers that sent Neil Armstrong to the moon in 1969. Similarly, the Internet of 2020 is almost one trillion times larger than the Internet of 1995.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
Thanks to the 'universality of intelligence', reusable pre-trained models are set to rule the AI world.
Mukesh Borar (The Secrets of AI: a Math-Free Guide to Thinking Machines)
The universal nature of intelligence is the driving force behind most human interactions.
Mukesh Borar (The Secrets of AI: a Math-Free Guide to Thinking Machines)
There is no such tiny “Cartesian Theater” in the brain; conscious experience is generated by a vastly complex, distributed network that synchronizes and adjusts its activity by the millisecond. As far as we can tell, certain patterns of activity in this distributed network give rise to conscious experience. But fundamentally, this network’s activity is self-contained and the feeling of a unified flow of consciousness you have is not just from the processing of sensory information. The experience you have right now is a unique creation of your brain that has transformed data from your body into something closer to a hallucination. To break down this seemingly obvious point that we will deal with very often in this book and that I myself struggle to understand: the existence of our experience is real, but the contents of this experience exist only in your brain. Some philosophers call this “irreducible subjectivity,” which means that no totally objective theory of human experience may be possible. The contents of your experience are not representations of the world, but your experience is part of the world. By altering this process with molecules like psilocybin or LSD we can become aware of different aspects of our perceptions. By perturbing consciousness and observing the consequences, we can gain insight into its normal functioning. This is again not to say that consciousness is not real; there can be no doubt that I am conscious as I write this sentence. However, it is the relationship between consciousness and the external world that is more mysterious than one might assume. It is often supposed that cognition and consciousness result from processing the information from our sensory systems (like vision), and that we use neural computations to process this information. However, following Riccardo Manzotti and others such as the cognitive neuroscientist Stanislas Dehaene, I will argue that computations are not natural things that can cause a physical phenomenon like consciousness. When I read academic papers on artificial or machine intelligence, or popular books on the subject, I have not found anyone grappling with these strange “facts” about human consciousness. Either consciousness is not mentioned, or if it is, it is assumed to be a computational problem.
Andrew Smart (Beyond Zero and One: Machines, Psychedelics, and Consciousness)
By the time I began my Ph.D., the field of artificial intelligence had forked into two camps: the “rule-based” approach and the “neural networks” approach. Researchers in the rule-based camp (also sometimes called “symbolic systems” or “expert systems”) attempted to teach computers to think by encoding a series of logical rules: If X, then Y. This approach worked well for simple and well-defined games (“toy problems”) but fell apart when the universe of possible choices or moves expanded. To make the software more applicable to real-world problems, the rule-based camp tried interviewing experts in the problems being tackled and then coding their wisdom into the program’s decision-making (hence the “expert systems” moniker). The “neural networks” camp, however, took a different approach. Instead of trying to teach the computer the rules that had been mastered by a human brain, these practitioners tried to reconstruct the human brain itself. Given that the tangled webs of neurons in animal brains were the only thing capable of intelligence as we knew it, these researchers figured they’d go straight to the source. This approach mimics the brain’s underlying architecture, constructing layers of artificial neurons that can receive and transmit information in a structure akin to our networks of biological neurons. Unlike the rule-based approach, builders of neural networks generally do not give the networks rules to follow in making decisions. They simply feed lots and lots of examples of a given phenomenon—pictures, chess games, sounds—into the neural networks and let the networks themselves identify patterns within the data. In other words, the less human interference, the better.
Kai-Fu Lee (AI Superpowers: China, Silicon Valley, and the New World Order)
Isaac Asimov’s short story “The Fun They Had” describes a school of the future that uses advanced technology to revolutionize the educational experience, enhancing individualized learning and providing students with personalized instruction and robot teachers. Such science fiction has gone on to inspire very real innovation. In a 1984 Newsweek interview, Apple’s co-founder Steve Jobs predicted computers were going to be a bicycle for our minds, extending our capabilities, knowledge, and creativity, much the way a ten-speed amplifies our physical abilities. For decades, we have been fascinated by the idea that we can use computers to help educate people. What connects these science fiction narratives is that they all imagined computers might eventually emulate what we view as intelligence. Real-life researchers have been working for more than sixty years to make this AI vision a reality. In 1962, the checkers master Robert Nealey played the game against an IBM 7094 computer, and the computer beat him. A few years prior, in 1957, the psychologist Frank Rosenblatt created Perceptron, the first artificial neural network, a computer simulation of a collection of neurons and synapses trained to perform certain tasks. In the decades following such innovations in early AI, we had the computation power to tackle systems only as complex as the brain of an earthworm or insect. We also had limited techniques and data to train these networks. The technology has come a long way in the ensuing decades, driving some of the most common products and apps today, from the recommendation engines on movie streaming services to voice-controlled personal assistants such as Siri and Alexa. AI has gotten so good at mimicking human behavior that oftentimes we cannot distinguish between human and machine responses. Meanwhile, not only has the computation power developed enough to tackle systems approaching the complexity of the human brain, but there have been significant breakthroughs in structuring and training these neural networks.
Salman Khan (Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing))
proof of the social genealogy of AI: the first artificial neural network – the perceptron – was born not as the automation of logical reasoning but of a statistical method originally used to measure intelligence in cognitive tasks and to organise social hierarchies accordingly.
Matteo Pasquinelli (The Eye of the Master: A Social History of Artificial Intelligence)
Prepackaged software can circumvent the explosion of possibilities that a blank-slate brain would immediately run up against. A system that begins with a blank slate would be unable to learn all the complex rules of the world with only the impoverished input that babies receive. It would have to try everything, and it would fail. We know this, if for no other reason than from the long history of failure of artificial neural networks that start off knowledge-free and attempt to learn the rules of the world.
David Eagleman (Incognito: The Secret Lives of the Brain)
For many years, it was believed that the connections between neurons in an adult brain were fixed. Learning, it was believed, involved increasing or decreasing the strength of synapses. This is still how learning occurs in most artificial neural networks. However, over the past few decades, scientists have discovered that in many parts of the brain, including the neocortex, new synapses form and old ones disappear. Every day, many of the synapses on an individual neuron will disappear and new ones will replace them. Thus, much of learning occurs by forming new connections between neurons that were previously not connected. Forgetting happens when old or unused connections are removed entirely.
Jeff Hawkins (A Thousand Brains: A New Theory of Intelligence)
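The contrast Hawkins draws, strength changes versus rewiring, can be sketched roughly as follows. The matrix size, pruning threshold, and growth probability are arbitrary assumptions for illustration.

```python
# Rough sketch: adjusting existing synapse strengths vs. structural rewiring.
import numpy as np

rng = np.random.default_rng(1)
n = 6
weights = rng.random((n, n)) * (rng.random((n, n)) > 0.5)  # sparse synapses

# Mode 1: classic ANN learning -- nudge the strengths of existing connections.
weights[weights > 0] += 0.05

# Mode 2: structural plasticity -- weak or unused synapses vanish (forgetting)...
weights[weights < 0.2] = 0.0
# ...and brand-new connections form between previously unconnected neurons.
new = (weights == 0) & (rng.random((n, n)) < 0.1)
weights[new] = 0.1

print(int((weights > 0).sum()), "active synapses after rewiring")
```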
The data mining tools used are nearest neighbor, genetic algorithms, rule induction, decision trees, and artificial neural networks.
Vince Reynolds (Big Data For Beginners: Understanding SMART Big Data, Data Mining & Data Analytics For improved Business Performance, Life Decisions & More!)
I think it would be fair to say that only with certain instances of top-down (or primarily top-down) organization have computers exhibited a significant superiority over humans. The most obvious example is in straightforward numerical calculation, where computers would now win hands down – and also in 'computational' games, such as chess or draughts (checkers), where there may be only a very few human players able to beat the best machines. With bottom-up (artificial neural network) organization, the computers can, in a few limited instances, reach about the level of ordinary well-trained humans.
Roger Penrose (Shadows of the Mind: A Search for the Missing Science of Consciousness)
Neural networks have now transformed both biological and artificial intelligence, and have recently started dominating the AI subfield known as machine learning (the study of algorithms that improve through experience).
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
In essence, this means neural networks can think using analogies, which is something that could never be said for classic AI.
Luke Dormehl (Thinking Machines: The inside story of Artificial Intelligence and our race to build the future)
fourth epoch, driven by technological advances such as cognitive computing. Whereas the previous eras developed efficiencies in an established process, the technologies available or in development today enable us fundamentally to rethink the process. Rather than simply replicating the established methods of managing wealth, they offer an opportunity to augment these in new ways. We are at a point where advances in computing power and lower computing costs enable us to apply artificial intelligence, machine learning, natural language processing, neural networks and a host of other tools to everyday tasks. The opportunities for wealth management are genuinely epoch-making.
Susanne Chishti (The WEALTHTECH Book: The FinTech Handbook for Investors, Entrepreneurs and Finance Visionaries)
It’s easy for you to tell what it’s a photo of, but to program a function that inputs nothing but the colors of all the pixels of an image and outputs an accurate caption such as “A group of young people playing a game of frisbee” had eluded all the world’s AI researchers for decades. Yet a team at Google led by Ilya Sutskever did precisely that in 2014. Input a different set of pixel colors, and it replies “A herd of elephants walking across a dry grass field,” again correctly. How did they do it? Deep Blue–style, by programming handcrafted algorithms for detecting frisbees, faces and the like? No, by creating a relatively simple neural network with no knowledge whatsoever about the physical world or its contents, and then letting it learn by exposing it to massive amounts of data. AI visionary Jeff Hawkins wrote in 2004 that “no computer can…see as well as a mouse,” but those days are now long gone.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
I’m not suggesting that neural networks are easy. You need to be an expert to make these things work. But that expertise serves you across a broader spectrum of applications. In a sense, all of the effort that previously went into feature design now goes into architecture design and loss function design and optimization scheme design. The manual labor has been raised to a higher level of abstraction.
Stefano Soatto
However, AI researchers have shown that neural networks can still attain human-level performance on many remarkably complex tasks even if one ignores all these complexities and replaces real biological neurons with extremely simple simulated ones that are all identical and obey very simple rules. The currently most popular model for such an artificial neural network represents the state of each neuron by a single number and the strength of each synapse by a single number. In this model, each neuron updates its state at regular time steps by simply averaging together the inputs from all connected neurons, weighting them by the synaptic strengths, optionally adding a constant, and then applying what’s called an activation function to the result to compute its next state. The easiest way to use a neural network as a function is to make it feedforward, with information flowing only in one direction, as in figure 2.9, plugging the input to the function into a layer of neurons at the top and extracting the output from a layer of neurons at the bottom.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
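Tegmark's verbal description maps almost line for line onto code. Below is a minimal sketch of that update rule in a feedforward arrangement; the layer sizes, tanh activation, and random weights are placeholder assumptions.

```python
# Minimal sketch of a feedforward network with the update rule described above.
import numpy as np

def feedforward(x, layers, activation=np.tanh):
    """Propagate x through (weights, bias) pairs, one pair per layer."""
    for W, b in layers:
        x = activation(W @ x + b)   # weighted inputs + constant, then activation
    return x

rng = np.random.default_rng(42)
layers = [
    (rng.normal(size=(4, 3)), rng.normal(size=4)),  # input layer -> hidden layer
    (rng.normal(size=(2, 4)), rng.normal(size=2)),  # hidden layer -> output layer
]
print(feedforward(rng.normal(size=3), layers))      # information flows one way only
```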
Even if AI can be made robust enough for us to trust that a robojudge is using the legislated algorithm, will everybody feel that they understand its logical reasoning enough to respect its judgment? This challenge is exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability. If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than “we trained the system on lots of data, and this is what it decided”? Moreover, recent studies have shown that if you train a deep neural learning system with massive amounts of prisoner data, it can predict who’s likely to return to crime (and should therefore be denied parole) better than human judges. But what if this system finds that recidivism is statistically linked to a prisoner’s sex or race—would this count as a sexist, racist robojudge that needs reprogramming? Indeed, a 2016 study argued that recidivism-prediction software used across the United States was biased against African Americans and had contributed to unfair sentencing. These are important questions that we all need to ponder and discuss to ensure that AI remains beneficial.
Max Tegmark (Life 3.0: Being Human in the Age of Artificial Intelligence)
in this way, artificial neural networks are like the brain’s very real neural network. These networks can be insanely complicated.
David Weinberger (Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility)
God Bias: When an algorithm or neural network inherits flaws of its human creator.
Clyde DeSouza (Maya)