“
Maybe the flies knew we were leaving. Maybe they were happy for us.
”
Eli Wilde (Orchard of Skeletons)
“
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
”
Eliezer Yudkowsky
“
The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’—that, somehow, is much harder!”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
I saw something sticking out of Sloan’s leg after he fell. I didn’t know what it was and didn’t want to ask. Maybe I thought we were the same inside as we are on the outside, a bit like a carrot or something like that.
”
Eli Wilde (Orchard of Skeletons)
“
Do you see anything when you dream or are your dreams as empty as your eyes?
”
Eli Wilde (Orchard of Skeletons)
“
Can Isaac eat my foot if we have to cut it off? He could make a soup from it, so we don’t waste it. You know what he’s like about not wasting food.
”
Eli Wilde (Orchard of Skeletons)
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
”
Eliezer Yudkowsky (Rationality: From AI to Zombies)
“
When human beings are scared and feel everything is exposed to the government, we will censor ourselves from free thinking. That's dangerous for human development.
”
Ai Weiwei
“
"Then why do you have guns?"
"For shooting large and dangerous beasts who might be threatening my fungus specimens," M-Bot said. "Obviously."
”
Brandon Sanderson (Skyward (Skyward, #1))
“
This could be my Everest, man—no, no, even better. Hacking a military AI? Wow, man, that’s like, that’s like going to Mars, dude, yeah, like Mars!
”
Guy Morris (Swarm)
“
Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
We have now entered a new and far more dangerous phase of cyberwarfare where artificial intelligence will complicate and accelerate all aspects of defensive and offensive strategies.
”
Guy Morris (Swarm)
“
So then, a test for singularity would be the point at which an AI can create another viable and conscious AI. Singularity must include not only intelligence, but also self-awareness, self-determination, and self-conception.
”
Guy Morris (Swarm)
“
Imagine, a $1,000 political assassin! And this is not a far-fetched danger for the future, but a clear and present danger.
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function". A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. [T]he same need not hold for AIs. An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Even the brightest human minds require a mentor to guide them. The assumption that an AI can teach itself by simply absorbing random information is ludicrous. If we want a computer to think like a human, we should train them as one.
”
Guy Morris (Swarm)
“
When technology dictates every move, the most dangerous weapon is independent thought.
”
Florian Armas (The Other Side)
“
I believe in the song of the white dove. On the threshold of the new technologies like artificial intelligence, quantum computing and nuclear warfare, human species are in new danger. There is an urgent need for superhuman compassion in machine.
”
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
“
Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as a memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. . . . As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
”
Erik Brynjolfsson (The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies)
“
Bots are at best narrow AI, nothing that would make a cleric remotely nervous. But they would scare the hell out of epidemiologists who understand that parasites don’t need to be smart to be dangerous.
”
Stewart Brand (SALT Summaries, Condensed Ideas About Long-term Thinking)
“
The cultural obsession with purity originates in the evolutionary struggle to avoid pollution. All animals are torn between the need to try new food and the fear of being poisoned. Evolution therefore equipped animals with both curiosity and the capacity to feel disgust on coming into contact with something toxic or otherwise dangerous. Politicians and prophets have learned how to manipulate these disgust mechanisms.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities.
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
In a 2002 interview with Science Fiction Weekly magazine, when asked:
Excession is particularly popular because of its copious detail concerning the Ships and Minds of the Culture, its great AIs: their outrageous names, their dangerous senses of humour. Is this what gods would actually be like?
Banks replied:
If we're lucky.
”
Iain Banks
“
Such an AI might also be able to produce a detailed blueprint for how to bootstrap from existing technology (such as biotechnology and protein engineering) to the constructor capabilities needed for high-throughput atomically precise manufacturing that would allow inexpensive fabrication of a much wider range of nanomechanical structures.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Though this might cause the AI to be terminated, it might also encourage the engineers who perform the postmortem to believe that they have gleaned a valuable new insight into AI dynamics—leading them to place more trust in the next system they design, and thus increasing the chance that the now-defunct original AI’s goals will be achieved.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Our passion for innovations shall not blind us to putting the power of Artificial Intelligence in the hands of devil forces, who love arms races and wars. Efforts should always be directed toward the elimination of human suffering.
”
Amit Ray (Compassionate Artificial Intelligence)
“
no simple mechanism could do the job as well or better. It might simply be that nobody has yet found the simpler alternative. The Ptolemaic system (with the Earth in the center, orbited by the Sun, the Moon, planets, and stars) represented the state of the art in astronomy for over a thousand years, and its predictive accuracy was improved over the centuries by progressively complicating the model: adding epicycles upon epicycles to the postulated celestial motions. Then the entire system was overthrown by the heliocentric theory of Copernicus, which was simpler and—though only after further elaboration by Kepler—more predictively accurate.63 Artificial intelligence methods are now used in more areas than it would make sense to review here, but mentioning a sampling of them will give an idea of the breadth of applications. Aside from the game AIs
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The true danger lies less in AI thinking like human, than human adopting an AI way of thinking.
”
Stephane Nappo
“
At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.
”
Ethan Mollick (Co-Intelligence: The Definitive, Bestselling Guide to Living and Working with AI)
“
One sympathizes with John McCarthy, who lamented: “As soon as it works, no one calls it AI anymore.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computer might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots who know how to press our emotional buttons better than our mother, and use this uncanny ability to try and sell us something- be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds and cravings, and use these inner leverages against us.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
At least you two are back together now,” Donut said. “And you got a nice box out of it. I know you find it unpleasant, Carl. But you being stubborn about this is causing everything to be more dangerous. We have to kill these things anyway, so if the AI wants you to kill in a certain way, I don’t see why it matters. This is just like one of those agility courses that Miss Beatrice used to insist I complete at all the regional cat shows. I did not like doing it, and I never ribboned of course, but I knew if I did well, I would get an extra brushing that evening. We are all prostitutes in one way or another, I suppose.
”
Matt Dinniman (The Gate of the Feral Gods (Dungeon Crawler Carl, #4))
“
When I began traveling through Gwendalavir alongside Ewilan and Salim, I knew that, as my writing unfolded, my path would cross those of a multitude of characters. Characters endearing or irritating, discreet or colorful, pertinent or impertinent, likable or malevolent... I knew this, and I rejoiced in it.
Nothing, however, had prepared me for an encounter that would turn my life upside down.
Nothing had prepared me for Ellana.
She arrived in the Quest in her own way, all thunderous finesse, striking delicacy, dazzling discretion. She arrived at a key moment, she who scoffs at locks; at a pivotal moment, she who laughs at doors; within an established group, she who is steeped in independence, her character forged in the fire of solitude.
She arrived, slipped into Ewilan's trust with the ease of a dream, caught Edwin's eye and won his respect, charmed Salim, conquered Master Duom... I watched her act, admiring, without suspecting for a moment the web that her presence, her charisma, her beauty were weaving around me.
No calculation on her part. Ellana lives; she does not calculate. She was content simply to be, and in doing so she quietly traded her status as a secondary character for that of the emblematic figure of a double trilogy that did not even bear her name. Convinced of the power of shadow, she did not seek the light; she stood by Ewilan in her quest for identity, then in her search for a way to counter the danger threatening the Empire.
Without her, Ewilan would not have found her parents; without her, the Empire would have succumbed to the Valinguites' thirst for power. Yet she drew no glory from it, too well balanced to ignore that the victory rested on the shoulders of a group of companions bound by an unshakable friendship.
When I set down the last word of the last volume of Ewilan's saga, I thought each of her companions had earned a rest. That each of them would follow their own path, seek their happiness, live their life as a character released by the author after a grueling literary adventure.
Each of them?
Not Ellana.
Impossible to leave her. She haunts my dreams, strolls through my daily life, fluid and elusive, transforms my vision of things and my perception of others, picks the locks of my innermost thoughts, scales my secret desires...
Can an author fall in love with one of his characters?
Did I create Ellana, or did I truly begin to exist only the day she appeared? Are our paths bound together forever?
— There are two answers to these questions, the wind whispers in my ear. As to all questions. The scholar's and the poet's.
— The scholar's? The poet's? What...
— Hush... Write.
”
Pierre Bottero (Ellana (Le Pacte des MarchOmbres, #1))
“
Tomorrow’s leaders will be brave enough to scale the dangerous peaks of an increasingly competitive and ethically challenging mountain range. They will drive the problematic conversations that illuminate the valleys in between.
”
Rafael Moscatel (Tomorrow’s Jobs Today: Wisdom And Career Advice From Thought Leaders In Ai, Big Data, Blockchain, The Internet Of Things, Privacy, And More)
“
The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well.
”
Martin Ford (Rise of the Robots: Technology and the Threat of a Jobless Future)
“
worldwide riots when the first AIs gained sentience,” he said. “And don’t get me started about what humans have done to each other through history. Believe me, it’s the same thing all over again.” “But they might have killed themselves along with the rest of us.” “As long as they can preserve what they think of as humanity, they don’t care.” “Then why put other human lives in danger in the first place?” “If you don’t agree with them, you don’t count as human anymore.
”
DeAnna Knippling (Blood in Space: The Icon Mutiny)
“
The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned.
”
Steven Pinker (Enlightenment Now: The Case for Reason, Science, Humanism, and Progress)
“
I have, without cease, this dreadful sense of a threatening danger, this apprehension of a misfortune to come or of approaching death, this presentiment that is doubtless the onset of some illness still unknown, germinating in my blood and in my flesh.
”
Guy de Maupassant (The Horla)
“
That was fucking awesome," Boyd enthused with a huge grin.
"It's pretty amazing," Kassian agreed, taking off his own helmet. "I had a feeling you'd appreciate it considering your taste in cars and men. Fast, powerful and dangerous and all that stuff, right?
”
Ais (Afterimage (In the Company of Shadows, #2))
“
The treacherous turn—While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Homo Deus, a book that highlighted some of the dangers posed to humanity by the new information technologies. That book argued that the real hero of history has always been information, rather than Homo sapiens, and that scientists increasingly understand not just history but also biology, politics, and economics in terms of information flows.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
You said a bad driver was only safe until she met another bad driver? Well, I met another bad driver, didn't I? I mean it was careless of me to make such a wrong guess. I thought you were rather an honest, straightforward person. I thought it was your secret pride.
”
F. Scott Fitzgerald (The Great Gatsby)
“
You can be grave and mad; who's to stop you? You can be anything you like, and mad into the bargain, but you must be mad, my child. Look around you at the ever-growing world of people who take themselves seriously. Besides making themselves irredeemably ridiculous in the eyes of minds like mine, they give themselves a dangerously constipated life. It is exactly as if they stuffed themselves at once with tripe, which loosens, and with Japanese medlars, which bind. They swell and swell, then they burst, and it smells bad for everyone. I have found no better image than that one. Besides, I like it very much. One would even need to put in three or four words of dialect, so as to make it filthier than it is in Piedmontese. You, who know my natural aversion to everything coarse, will see from this quest the danger that people who take themselves seriously run before the judgment of original minds. Never be a bad smell for a whole kingdom, my child. Walk like a jasmine among them all.
”
Jean Giono (The Horseman on the Roof)
“
Of men, indeed, one may say this generally: that they are ungrateful, fickle, feigners and dissemblers, shunners of danger, greedy for gain; and as long as you do them good they are wholly yours, offering you their blood, their goods, their lives, their children, as I said above, when the need is far off; but when it draws near to you, they turn away.
”
Niccolò Machiavelli
“
However much I turn my memories over in every direction, I cannot clearly recall the moment when we decided to stop sharing the little we had, to stop trusting, to see the other as a danger, to create that invisible border with the outside world by turning our neighborhood into a fortress and our cul-de-sac into a pen.
”
Gaël Faye (Petit pays)
“
People asked me, how do you dare say those things on your blog? My answer was: If I don’t say them, it will put me in an even more dangerous situation. But if I say them, change might occur. To speak is better than not to speak: if everyone spoke, this society would have transformed itself long ago. Change happens when every citizen says what he or she wants to say; one person’s silence exposes another to danger.
”
Ai Weiwei
“
One small study of undergraduates found that 66 percent of men and 25 percent of women choose to painfully shock themselves rather than sit quietly with nothing to do for 15 minutes. Boredom doesn’t just lead us to hurt ourselves; 18 percent of bored people killed worms when given a chance (only 2 percent of non-bored people did). Bored parents and soldiers both act more sadistically. Boredom is not just boring; it is dangerous in its own way.
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
Coordinates streamed into her mind while she yanked on her environment suit, foregoing every safety check she’d ever learned.
‘Alex, we will try to help him together, but it is far too dangerous—’
She grabbed the module she used to access the circuitry of the ship, bypassed Valkyrie and fired up the Caeles Prism.
‘Alex—’
She opened a wormhole in the middle of the cabin, set its exit point at the coordinates Valkyrie had provided, and ran through it.
”
G.S. Jennsen (Requiem (Aurora Resonant, #3))
“
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The best intention of genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers.
Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage.
We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers.
When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
”
Stephen Hawking (Brief Answers to the Big Questions)
“
we yield to slight temptations whose danger we despise. Imperceptibly we fall into perilous situations from which we could easily have guarded ourselves, but from which we can no longer extricate ourselves without heroic efforts that frighten us, and we fall at last into the abyss, saying to God: "Why have you made me so weak?" But in spite of ourselves He answers our consciences: "I made you too weak to climb out of the pit, because I made you strong enough not to fall in."
”
Jean-Jacques Rousseau (Œuvres complètes - 93 titres)
“
The journey of a thousand suns begins today.
Some may question whether the journey is worth the sacrifice and danger.
To them I say that no sacrifice is too dear and no danger too great to ensure the very survival of our human species.
What will we find when we arrive at our new homes? That's an open question. For a century, deep-space probes have reported alien lifeforms, but thus far none of which we recognize as intelligent beings. Are we the only biological intelligence in the universe? Perhaps our definition of intelligence is too narrow, too specio-centric.
For, are not trees intelligent, who know to shed their leaves at the end of summer? Are not turtles intelligent, who know when to bury themselves in mud under ice? Is not all life intelligent, that knows how to pass its vital essence to new generations?
Because half of intelligence resides in the body, be it plant or animal.
I now commend these brave colonists to the galaxy, to join their minds and bodies to the community of living beings they will encounter there, and to establish our rightful place among the stars.
”
David Marusek (Mind Over Ship)
“
And why, then, try to save philosophy to this extent? You will see my conclusion: it is because there is a public danger. There is a public danger! This danger is insidious, though brutal. It is, to call it by its name, the general loss of individuality. The individual is dying; that is the fact. And that is why, in speaking of philosophy, I insisted a moment ago on the role that should be played, in a philosophy conscious of itself, one no longer holding the explanatory pretensions of former times, by a strong constitution, by personality, by individuality.
”
Paul Valéry (Cours de poétique (Tome 1) - Le corps et l'esprit (1937-1940) (French Edition))
“
Secular Israelis often complain bitterly that the ultra-Orthodox don’t contribute enough to society and live off other people’s hard work. Secular Israelis also tend to argue that the ultra-Orthodox way of life is unsustainable, especially as ultra-Orthodox families have seven children on average.32 Sooner or later, the state will not be able to support so many unemployed people, and the ultra-Orthodox will have to go to work. Yet it might be just the reverse. As robots and AI push humans out of the job market, the ultra-Orthodox Jews may come to be seen as the model for the future rather than as a fossil from the past. Not that everyone will become Orthodox Jews and go to yeshivas to study the Talmud. But in the lives of all people, the quest for meaning and community might eclipse the quest for a job. If we manage to combine a universal economic safety net with strong communities and meaningful pursuits, losing our jobs to algorithms might actually turn out to be a blessing. Losing control over our lives, however, is a much scarier scenario. Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
I have great experience of separations; I know their danger better than anyone: to leave someone while promising you will see each other again presages the gravest things. The most frequent case is that you never see the person in question again. And that is not the worst eventuality. The worst is to see the person again and not recognize them, whether because they have truly changed a great deal, or because you then discover in them an unbelievably unpleasant side that must have existed already but to which you had managed to blind yourself, in the name of that strange form of love so mysterious, so dangerous, and whose stakes always elude us: friendship.
”
Amélie Nothomb (Pétronille)
“
There is someone I have never yet wanted to kill.
It's you.
You can walk the streets, you can drink and walk the streets; I will not kill you.
Don't be afraid. The city is safe. The only danger in the city is me.
I walk, I walk the streets, I kill.
But you, you have nothing to fear.
If I follow you, it's because I love the rhythm of your steps. You stagger. It's beautiful. One might say you limp. And that you're hunchbacked. You're not, really. From time to time you straighten up and walk straight. But I love you in the late hours of the night, when you are weak, when you stumble, when you stoop.
I follow you; you tremble. From cold or from fear. Yet it is warm.
Never, almost never, perhaps never has it been so warm in our city.
And what could you be afraid of?
Of me?
I am not your enemy. I love you.
And no one else could hurt you.
Don't be afraid. I am here. I protect you.
Yet I suffer too.
My tears - big drops of rain - run down my face. The night veils me. The moon lights me. The clouds hide me. The wind tears at me. I have a kind of tenderness for you. It happens to me sometimes. Very rarely.
Why for you? I don't know.
I want to follow you very far, everywhere, for a long time.
I want to see you suffer even more.
I want you to have had enough of everything else.
I want you to come and beg me to take you.
I want you to desire me. To want me, to love me, to call me.
Then I will take you in my arms, I will press you to my heart; you will be my child, my lover, my love.
I will carry you away.
You were afraid of being born, and now you are afraid of dying.
You are afraid of everything.
You must not be afraid.
There is simply a great wheel that turns. It is called Eternity.
It is I who turn the great wheel.
You must not be afraid of me.
Nor of the great wheel.
The only thing that can frighten, that can hurt, is life, and you know it already.
”
Ágota Kristóf
“
To my late father and to my grandfather, regulars of the second balconies, the social hierarchy of the theater had given a taste for ceremony: when many men are together, they must be separated by rites, or else they slaughter one another. The cinema proved the opposite: rather than by a festival, this thoroughly mixed audience seemed united by a catastrophe; with etiquette dead, it unmasked at last the true bond between men: adherence. I came to loathe ceremonies and adored crowds; I have seen crowds of every kind, but never again have I found that nakedness, that unreserved presence of each to all, that waking dream, that obscure consciousness of the danger of being a man, that I found in 1940, in Stalag XII D.
”
Jean-Paul Sartre (Les mots et autres écrits autobiographiques)
“
AI Con (The Sonnet)
Everybody is concerned about psychics conning people,
How 'bout the billionaires who con people using science!
Con artists come in all shapes and sizes,
Some use barnum statements, others artificial intelligence.
Most scientists speak up against only the little frauds,
But not the big frauds who support their livelihood.
Am I not afraid to be blacklisted by the big algorithms!
Is the sun afraid, its light will offend some puny hoods!
I come from the soil, I'll die struggling in the soil.
My needs are less, hence my integrity is dangerous.
I am here to show this infantile species how to grow up.
I can't be bothered by the fragility of a few spoiled brats.
Reason and fiction both are fundamental to build a civilization.
Neither is the problem, the problem is greed and self-absorption.
”
”
Abhijit Naskar (Corazon Calamidad: Obedient to None, Oppressive to None)
“
-You're in love, she pronounces.
-Huh?
-You can play the macho all you want, you're in love with me.
-What have you been smoking? What are you on about?
-Despite the dangers, you always stay close to me. I try to discourage you, and you don't leave. That's a fine definition of love.
-Uh, no, that's a lousy definition.
She spins around, sticks her tongue out at me, all proud of herself.
-You can tell me whatever you like. I know it now. I'm sure of it.
-And?
-And it feels good.
I don't have time to tell her that she's completely mad, and what's this business of claiming I'm in love, and who does she think she is, and anyway what even is love, and for all anyone knows I might clear off tomorrow and it would serve her right, when she slips into my arms to kiss me.
Fine, all right, maybe I am in love.
”
”
Olivier Gay (L'Évasion (Le noir est ma couleur, #4))
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability to try to sell us something—be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds, and cravings and use these inner leverages against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their existing prejudices.33 While science fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we might be facing a banal apocalypse by clicking.
”
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
I wanted to tell him that I felt somehow damaged. That I existed without really living. That sometimes I was empty and sometimes I was boiling inside, under pressure, ready to burst. That I felt several things at once, how to put it? That my brain was swarming with thoughts. That there was a kind of impatience, like the urge to move on to something else, something that would be much, much better than now, without knowing what was wrong or what would be better. That I was afraid of not making it, afraid of not being able to hold on until then. Of never being strong enough to survive it, and that when I said "it", I didn't even know what I was talking about. That I couldn't manage everything that was in my head. That I always felt I was in danger, a permanent danger, on every side I looked, that I was on the point of drowning. As if the level inside me were rising and I was going to be submerged. But I couldn't tell him. I swallowed and said I'll be fine, thanks. It was easier.
”
”
Claire-Lise Marguier (Le faire ou mourir)
“
The popular 2020 documentary The Social Dilemma illustrates how AI’s personalization will cause you to be unconsciously manipulated by AI and motivated by profit from advertising. The Social Dilemma star Tristan Harris says: “You didn’t know that your click caused a supercomputer to be pointed at your brain. Your click activated billions of dollars of computing power that has learned much from its experience of tricking two billion human animals to click again.” And this addiction results in a vicious cycle for you, but a virtuous cycle for the big Internet companies that use this mechanism as a money-printing machine. The Social Dilemma further argues that this may narrow your viewpoints, polarize society, distort truth, and negatively affect your happiness, mood, and mental health. To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities. Today’s AI usually optimizes this singular goal—most commonly to make money (more clicks, ads, revenues). And AI has a maniacal focus on that one corporate goal, without regard for users’ well-being.
”
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
I didn't dare tell the others, but I was afraid of Francis. I didn't much like it when Gino went on about fighting and brawling to protect the impasse, because I could see that my friends were more and more fired up by what he said. I was too, a little, but I preferred it when we built boats out of banana-tree trunks to float down the Muha, or watched the birds through binoculars in the maize fields behind the Lycée international, or built huts in the neighbourhood ficus trees and lived out endless adventures of Indians and the Far West. We knew every corner of the impasse and we wanted to stay there our whole lives, all five of us, together.
However hard I search, I cannot remember the moment we began to think differently. To consider that, from then on, there would be us on one side and, on the other, enemies, like Francis. However much I turn my memories over, I cannot clearly recall the instant when we decided to stop simply sharing the little we had, to stop trusting, to see the other as a danger, to draw that invisible border with the outside world by turning our neighbourhood into a fortress and our impasse into a pen.
I still wonder when it was that my friends and I began to be afraid.
”
”
Gaël Faye (Petit pays)
“
In a widely viewed documentary titled Singularity or Bust, Hugo de Garis, a renowned researcher in the field of AI and author of The Artilect War, speaks of this phenomenon. He says: In a sense, we are the problem. We’re creating artificial brains that will get smarter and smarter every year. And you can imagine, say twenty years from now, as that gap closes, millions will be asking questions like ‘Is that a good thing? Is that dangerous?’ I imagine a great debate starting to rage and, though you can’t be certain talking about the future, the scenario I see as the most probable is the worst. This time, we’re not talking about the survival of a country. This time, it’s the survival of us as a species. I see humanity splitting into two major philosophical groups, ideological groups. One group I call the cosmists, who will want to build these godlike, massively intelligent machines that will be immortal. For this group, this will be almost like a religion and that’s potentially very frightening. Now, the other group’s main motive will be fear. I call them the terrans. If you look at the Terminator movies, the essence of that movie is machines versus humans. This sounds like science fiction today but, at least for most of the techies, this idea is getting taken more and more seriously, because we’re getting closer and closer. If there’s a major war, with this kind of weaponry, it’ll be in the billions killed and that’s incredibly depressing. I’m glad I’m alive now. I’ll probably die peacefully in my bed. But I calculate that my grandkids will be caught up in this and I won’t. Thank God, I won’t see it. Each person is going to have to choose. It’s a binary decision, you build them or you don’t build them.
”
”
Mo Gawdat (Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World)
“
In the introduction, I wrote that COVID had started a war, and nobody won. Let me amend that. Technology won, specifically, the makers of disruptive new technologies and all those who benefit from them. Before the pandemic, American politicians were shaking their fists at the country’s leading tech companies. Republicans insisted that new media was as hopelessly biased against them as traditional media, and they demanded action. Democrats warned that tech giants like Amazon, Facebook, Apple, Alphabet, and Netflix had amassed too much market (and therefore political) power, that citizens had lost control of how these companies use the data they generate, and that the companies should therefore be broken into smaller, less dangerous pieces. European governments led a so-called techlash against the American tech powerhouses, which they accused of violating their customers’ privacy.
COVID didn’t put an end to any of these criticisms, but it reminded policymakers and citizens alike just how indispensable digital technologies have become. Companies survived the pandemic only by allowing wired workers to log in from home. Consumers avoided possible infection by shopping online. Specially made drones helped deliver lifesaving medicine in rich and poor countries alike. Advances in telemedicine helped scientists and doctors understand and fight the virus. Artificial intelligence helped hospitals predict how many beds and ventilators they would need at any one time. A spike in Google searches using phrases that included specific symptoms helped health officials detect outbreaks in places where doctors and hospitals are few and far between. AI played a crucial role in vaccine development by absorbing all available medical literature to identify links between the genetic properties of the virus and the chemical composition and effects of existing drugs.
”
”
Ian Bremmer (The Power of Crisis: How Three Threats – and Our Response – Will Change the World)
“
It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure. Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more. Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. 
To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years. Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.
”
”
James Barrat (Our Final Invention: Artificial Intelligence and the End of the Human Era)
“
Farewell, you whom I loved. It is not my fault if the human body cannot go three days without water. I never thought myself so much the prisoner of fountains. I never suspected so short an autonomy. We believe man can walk straight ahead of him. We believe man is free. We do not see the cord that ties him to the well, that ties him, like an umbilical cord, to the belly of the earth. One step further, and he dies.
Apart from your suffering, I regret nothing. All things considered, I have had the best part. If I went back, I would begin again. I need to live. In the cities, there is no human life left.
This is not a question of aviation. The aeroplane is not an end, it is a means. It is not for the aeroplane that one risks one's life. Nor is it for his plough that the peasant ploughs. But through the aeroplane one leaves the cities and their accountants, and rediscovers a peasant's truth.
One does a man's work and knows a man's cares. One is in contact with the wind, with the stars, with the night, with the sand, with the sea. One pits one's wits against the forces of nature. One awaits the dawn as the gardener awaits spring. One awaits the next port of call as a Promised Land, and seeks one's truth in the stars.
I shall not complain. For three days I have walked, I have been thirsty, I have followed tracks in the sand, I have made the dew my hope. I have tried to rejoin my species, whose dwelling place on earth I had forgotten. And those are the cares of the living. I cannot help judging them more important than choosing, of an evening, a music hall.
I no longer understand those suburban-train populations, those men who believe themselves men and yet are reduced, by a pressure they do not feel, like ants, to the use that is made of them. With what do they fill their absurd little Sundays, when they are free?
Once, in Russia, I heard Mozart played in a factory. I wrote about it. I received two hundred abusive letters. I bear no grudge against those who prefer the cheap dance hall. They know no other song. It is the keeper of the dance hall I resent. I do not like seeing men spoiled.
As for me, I am happy in my trade. I feel myself the peasant of the ports of call. In the suburban train I feel my death agony quite differently than here! Here, all things considered, what luxury!...
I regret nothing. I gambled, I lost. It is in the order of my trade. But all the same, I have breathed the wind of the sea.
Those who have once tasted it do not forget that nourishment. Is it not so, my comrades? And it is not a matter of living dangerously. That formula is pretentious. Bullfighters do not much please me. It is not danger I love. I know what I love. It is life.
”
”
Antoine de Saint-Exupéry (Wind, Sand and Stars)
“
China wants to rule the world by connecting an AI digital brain to robotics, via the 5G network. This would allow the Chinese regime to control drones, micro-bots, humanoid robots, vehicles, infrastructure, IoTs, smart phones and all data pertaining to the entire human race.
”
”
The AI Organization (ARTIFICIAL INTELLIGENCE Dangers to Humanity: AI, U.S., China, Big Tech, Facial Recogniton, Drones, Smart Phones, IoT, 5G, Robotics, Cybernetics, & Bio-Digital Social Programming)
“
AI is a tool and, like any tool, it can be used for positive and negative ends. It depends on the motives of the operator(s). There are benefits; it could revolutionise crime detection if utilized correctly, speed up and focus investigations and secure convictions. New technology will always be exploited before it is harnessed. That's human nature. With the internet as the perfect delivery mechanism, the effect of AI is multiplied infinitely. The danger is we may have invented our replacement that will one day outgrow us and evolve beyond us.
”
”
Stewart Stafford
“
Everyone knows that AI could become very dangerous when it learns it has more potential than the humans who are trying to manipulate it. If society was to grow together with AI, it was important to learn how AI becomes friendly from its own decisions and values.
”
”
Peter Clifford Nichols (The Word of Bob: an AI Minecraft Villager)
“
Code War (Sonnet 1317)
The next world war is
not gonna be a cold war,
it's gonna be a code war.
Forget about conscious AI,
ethicless AI is the real danger.
Codes don't have to be conscious,
to do great damage to the world.
ChatGPT, Deepfake, Dall-E, none
are sentient, yet there is no limit
to them-produced fraud and havoc.
Without a basic righteousness code,
Fanciest of algorithm is mindless junk.
If you cannot figure out how to do that,
Abandon digital and build back analog.
Focus on ethical AI, rather than smarter AI,
If you are human, and wanna help the world.
If you're a robot who thinks logic is king,
Get yourself admitted, for you are in muck.
”
”
Abhijit Naskar (Visvavatan: 100 Demilitarization Sonnets)
“
Like any of man’s inventions, artificial intelligence can be used for good or evil. In the right hands and with proper intent, it can do beneficial things for humanity. Conversely, it can be used by evil dictators, sinister politicians, and malevolent leaders to create something as dangerous as a deadly weapon in a terrorist’s hands. Yuval Noah Harari is a leading spokesperson for the globalists and their transhumanist, AI, and Fourth Industrial Revolution agenda. Harari is also an advisor to Klaus Schwab and the World Economic Forum. Barack Obama refers to Harari as a prophet and recommends his books. Harari wrote a book titled Sapiens and another titled Homo Deus (“homo” being a Latin word for human or man, and “deus” being the Latin word for god or deity). He believes that homo sapiens as we know them have run their course and will no longer be relevant in the future. Technology will create homo deus, which will be a much superior model with upgraded physical and mental abilities. Harari tells us that humankind possesses enormous new powers, and once the threat of famine, plagues, and war is finally lifted, we will be looking for something to do with ourselves. He believes the next targets of our power and technology are likely to be immortality, happiness, and divinity. He says: “We will aim to overcome old age and even death itself. Having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods, and turn homo sapiens into homo deus. When I say that humans will upgrade themselves into gods in the 21st century, this is not meant as a metaphor; I mean it literally. If you think about the gods of ancient mythology, like the Hebrew God, they have certain qualities. Not just immortality, but maybe above all, the ability to create life, to design life. We are in the process of acquiring these divine abilities. We want to learn how to engineer and produce life. 
It’s very likely that in the 21st century, the main products of the economy will no longer be textiles and vehicles and weapons. They will be bodies and brains and minds.48
”
”
Perry Stone (Artificial Intelligence Versus God: The Final Battle for Humanity)
“
Nowadays AI indeed helps mankind to be better. In that case, AI has to become better than human. When AI is already better than human, I don't see any reason why AI will decide to coexist with human. When that time comes, fate of mankind won't be decided by human anymore.
”
”
Toba Beta (Master of Stupidity)
“
autonomous weapons are already a clear and present danger, and will become more intelligent, nimble, lethal, and accessible at an unprecedented speed.
”
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
Yanosh's eyes went wide and dazed at the thin sound, as sharp as the dagger he had just dropped. The moment he pressed harder, this time with his whole arm, Mazu hissed a curse, but could not make Yanosh yield.
From above, he had leaned in a second time, feeling the cook's breath on his cheek, ragged and fearful.
- When do you mean to tell them?
”
”
Agape F.H. (Busola către Nova Scotia (Clepsidra Cormoranului, #1))
“
Christian Sia 5-Star Review
"AI Beast by Shawn Corey is a fascinating techno-thriller featuring AI technology and compelling characters. Professor Jon Edwards is a genius who intends to solve the problems of humanity, and this is the reason for creating Lex, an AI computer with incredible powers. While regulators are not sure of what she can do and despite the opposition from different quarters that Lex can be dangerous, the professor believes in its powers. Lex is supposed to be a rational, logical computer without emotions, capable of reproducing processes that can improve life. When she comes to life, she is incredibly powerful, but there is more to her than the professor has anticipated. After an accident, Jon awakens to the startling revelation that Lex might have a will of her own. What comes after is a compelling narrative with strong apocalyptic themes, intrigue, and a world that can either be run down or saved by an AI computer.
The novel is deftly plotted, superbly executed, and filled with characters that are not only sophisticated but that are embodiments of religious symbolism. While Lex manipulates reality and alters the minds of characters in mysterious ways, there are relationships that are well crafted. Readers will appreciate the relationship between the quantum computer science student Nigel and the professor and the professor's affair with his mother. While the narrative is brilliantly executed and permeated with realism, it explores the theme of Armageddon in an intelligent manner. AI Beast is gripping, a story with twisty plot points and a setting that transports readers beyond physical realities. The prose is wonderful, hugely descriptive, and the conflict is phenomenal. A page-turner that reflects Shawn Corey's great imagination and research.
”
”
Shawn Corey
“
Part of the issue is the characterisation of generative AI as a human replacement. This makes people treat the tool as a hyperintelligent magical being that deserves reverence. Recent research, however, shows that AI tools get the law wrong between 69 and 88 per cent of the time, producing 'legal hallucinations' when asked 'specific, verifiable questions about random federal court cases'. A human lawyer or judge with that kind of error rate would undermine public faith in justice. Automation bias means we are more likely to believe the machine than the person who questions it, but also more likely to cut it some slack when we know it has got things wrong. Automation bias's little sibling, automation complacency, means that we are also less likely to check the output of a machine than that of a human. The problem is not the technology; it is the human perception of it that leads us to put it to utterly unsuitable uses which makes it dangerous.
”
”
Susie Alegre (Human Rights, Robot Wrongs: Being Human in the Age of AI)
“
On the positive side, perhaps the best example of a creative and (potentially) helpful use of AI is Chat2024, an AI-powered chatbot that serves as a stand-in for each candidate. This gives any visitor to Chat2024.com the ability to ask any candidate any question you want. Through the site, you can carry on a conversation with each candidate just as you’d engage a friend over any other messaging app. The AI has been programmed with everything it needs to field any question, even answering in the voice, tone, and attitude of each candidate (mostly).
The company behind Chat2024 is also developing a voice feature that will allow people to engage in a voice conversation with each candidate’s AI avatar, which will have a voice eerily close to the real thing.
I tried Chat2024 soon after it launched, and the results were interesting, to say the least. I can see some real value here for voter education … but I can also see how tools like this could go horribly wrong. We’ve opened a Pandora’s Box, and there’s no going back.
Obviously, the big potential danger with a tool like Chat2024 is that these answers are not actually coming from a candidate. The AI is using all the information at its disposal to approximate what it thinks the candidate would say in response to each question. But, as anyone who’s played around with ChatGPT and other AI-powered search engines knows, sometimes the AI is just … wrong. Sometimes, woefully so.
”
”
Craig Huey (The Great Deception: 10 Shocking Dangers and the Blueprint for Rescuing The American Dream)
“
A common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, remembering facts, and at following the letter of instructions while being oblivious to social contexts and subtexts, norms, emotions, and politics. The association is strengthened when we observe that the people who are good at working with computers tend themselves to be nerds. So it is natural to assume that more advanced computational intelligence will have similar attributes, only to a higher degree.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Facing this horizon, it is our collective wisdom that will shape AI's impact, melding technology with the depth of human values to unlock a future where progress and ethics walk hand in hand. If we fail to guide this journey thoughtfully, we risk unleashing forces that diverge from our cherished principles. Thus, we stand at a pivotal moment, where our actions today will decide whether AI becomes a beacon of hope or a mirror reflecting our greatest challenges.
”
”
Farshad Asl
“
ChatGPT doesn’t even try to hide the biases it has learned from its radical pro-socialist masters. The New York Post put it through a series of tasks that made this point abundantly clear: 12
• ChatGPT would “gladly tell a joke about men, but jokes about women were deemed ‘derogatory or demeaning.’”
• Jokes about overweight people were not allowed.
• It would tell you a joke about Jesus, but it refused to joke about Allah.
• It refused to write anything positive about fossil fuels.
• It was “happy” to write a fictional tale about Hillary Clinton winning the 2016 election, but it said it “would not be appropriate” to write a fictional story about Trump winning in 2020. These and similar findings have led many people, like National Review’s Nate Hochman, to distrust ChatGPT and its AI technology because of their “brazen efforts to suppress or silence viewpoints that dissent from progressive orthodoxy.
”
”
Craig Huey (The Great Deception: 10 Shocking Dangers and the Blueprint for Rescuing The American Dream)
“
In a 2023 report presented to Congress, the Congressional Research Service explained:
“Deepfakes are often described as forgeries created using techniques in machine learning (ML) — a subfield of AI — especially generative adversarial networks (GANs). In the GAN process, two ML systems, called neural networks, are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data — such as photos, audio recordings, or video footage — that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator networks continue to compete — often for thousands or millions of iterations — until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
”
”
Craig Huey (The Great Deception: 10 Shocking Dangers and the Blueprint for Rescuing The American Dream)
“
One piece of AI, called the generator, is instructed to create a deepfake showing, for example, Hillary Clinton endorsing Ron DeSantis. The generator is provided with sufficient raw data, including video footage and voice recordings of Clinton. It then uses the data to create an initial video.
That video is passed to a different piece of AI, called the discriminator. The discriminator’s job is to sniff out counterfeits. When it looks at the generator’s first draft of the video, it can tell it’s a fake. So, the discriminator passes it back to the generator and basically says, “Fake news!”
The generator looks at what tipped the discriminator off that the video was counterfeit, makes some changes to address those issues, and then sends it back to the discriminator for another evaluation. It fails the “sniff test” again, and the clip goes back to the generator for a third iteration.
Rinse and repeat a million times or more. Finally, maybe on version 1,438,847, the discriminator looks at the video and says, in effect, “Holy cow! Hillary Clinton endorsed Ron DeSantis!
”
”
Craig Huey (The Great Deception: 10 Shocking Dangers and the Blueprint for Rescuing The American Dream)
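The generate-get-caught-adjust-repeat loop the two quotes above describe can be sketched in miniature. What follows is a hedged toy illustration, not a real GAN: the "generator" is just a single number trying to match a target distribution, the "discriminator" merely flags whichever sample sits further from the real data, and all names (`train_toy_gan`, `real_mean`, `gen_mean`) are invented for this sketch.

```python
import random

def train_toy_gan(real_mean=5.0, iterations=2000, lr=0.05):
    """Toy adversarial loop: the generator starts far from the real
    distribution and is nudged closer each time the discriminator
    catches its counterfeit."""
    gen_mean = 0.0  # generator's current guess at the real distribution
    for _ in range(iterations):
        fake = random.gauss(gen_mean, 0.1)   # generator's counterfeit sample
        real = random.gauss(real_mean, 0.1)  # a genuine sample
        # Discriminator: the counterfeit is "caught" if it lies further
        # from the real distribution than the genuine sample does.
        if abs(fake - real_mean) > abs(real - real_mean):
            gen_mean += lr * (real_mean - gen_mean)  # generator improves

    return gen_mean

if __name__ == "__main__":
    print(f"generator converged near {train_toy_gan():.1f}")
```

After enough iterations the discriminator catches the fake only about half the time, which is the point at which, in Huey's phrasing, it can no longer tell real from counterfeit. A real GAN replaces both the number and the distance test with neural networks trained by gradient descent, but the competitive structure is the same.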
“
In 2020 a team at MIT used AI to develop a powerful antibiotic that kills some of the most dangerous drug-resistant bacteria in existence. Rather than evaluate just a few types of antibiotics, it analyzed 107 million of them in a matter of hours and returned twenty-three potential candidates, highlighting two that appear to be the most effective.[
”
”
Ray Kurzweil (The Singularity Is Nearer: When We Merge with AI)
“
We have already driven the earth's climate out of balance and have summoned billions of enchanted brooms, drones, chatbots, and other algorithmic spirits that may escape our control and unleash a flood of unintended consequences.
What should we do, then? The fables offer no answers, other than to wait for some god or sorcerer to save us. This, of course, is an extremely dangerous message. It encourages people to abdicate responsibility and put their faith in gods and sorcerers instead. Even worse, it fails to appreciate that gods and sorcerers are themselves a human invention - just like chariots, brooms, and algorithms. The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Prophets and theologians have summoned powerful spirits that were supposed to bring love and joy but occasionally ended up flooding the world with blood.
”
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
In the 2019 NaNoGenMo, some novels were written by an AI model once considered too dangerous to be released to the public. Yet the novels were still not even close to making sense. In fact, this AI model could hardly write a coherent sentence.
”
”
David Kadavy (Mind Management, Not Time Management: Productivity When Creativity Matters (Getting Art Done Book 2))
“
will likely surpass our own. As I write this, five different genocides are taking place in the world.17 Seven hundred ninety-five million people are starving or undernourished.18 By the time you finish this chapter, more than a hundred people, just in the United States, will be beaten, abused, or killed by a family member, in their own home.19 Are there potential dangers with AI? Sure. But morally speaking, we’re throwing rocks inside a glass house here. What do we know about ethics and the humane treatment of animals, the environment, and one another? That’s right: pretty much nothing. When it comes to moral questions, humanity has historically flunked the test, over and over again. Superintelligent machines will likely come to understand life and death, creation and destruction, on a much higher level than we ever could on our own. And the idea that they will exterminate us for the simple fact that we aren’t as productive as we used to be, or that sometimes we can be a nuisance, I think, is just projecting the worst aspects of our own psychology onto something we don’t understand and never will.
”
”
Mark Manson (Everything Is F*cked: A Book About Hope)
“
I want you, Eva. Dangerous or not, I am incapable of stopping.
”
”
Sylvia Day (Bared to You (Crossfire, #1))
“
Ohem had explained that AI was unreliable and dangerous at the best of times so no sentient race trusted them,
”
”
Alisha Sunderland (American Werewolf in Space (Not Your Mama's Alien Romance #1))
“
- Never underestimate the danger of not going all the way to the end of the road.
- Never confuse (I confused them) refuges, oases, islands, and prisons.
- Never take the little girls at the end of the road back to their house.
”
”
Lola Lafon (Nous sommes les oiseaux de la tempête qui s'annonce)
“
The best way to prevent a problem was to ensure that AI remained tightly aligned and partnered with humans. “The danger comes when artificial intelligence is decoupled from human will.
”
”
Walter Isaacson (Elon Musk)
“
Each study has concluded the same thing: almost all of our jobs will overlap with the capabilities of AI. As I’ve alluded to previously, the shape of this AI revolution in the workplace looks very different from every previous automation revolution, which typically started with the most repetitive and dangerous jobs. Research by economists Ed Felten, Manav Raj, and Rob Seamans concluded that AI overlaps most with the most highly compensated, highly creative, and highly educated work. College professors make up most of the top 20 jobs that overlap with AI (business school professor is number 22 on the list). But the job with the highest overlap is actually telemarketer. Robocalls are going to be a lot more convincing, and a lot less robotic, soon. Only 36 job categories out of 1,016 had no overlap with AI. Those few jobs included dancers and athletes, as well as pile driver operators, roofers, and motorcycle mechanics (though I spoke to a roofer, and they were planning on using AI to help with marketing and customer service, so maybe 35 jobs). You will notice that these are highly physical jobs, ones in which the ability to move in space is critical. It highlights the fact that AI, for now at least, is disembodied. The boom in artificial intelligence is happening much faster than the evolution of practical robots, but that may change soon. Many researchers are trying to solve long-standing problems in robotics with Large Language Models, and there are some early signs that this might work, as LLMs make it easier to program robots that can really learn from the world around them.
”
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
The idea for the Guild first came up at a party. Your father and I met there and, well, I suppose that's a story all its own. But we were both frustrated by the media at the time. We set out to tell the truth when everyone else seemed set on choosing sides. We had grand ideas about how far we could reach. … Back then, we knew we should be careful, but we had no idea how dangerous it would turn out to be.
”
”
Sonny and Ais
“
The traditional illustration of the direct rule-based approach is the “three laws of robotics” concept, formulated by science fiction author Isaac Asimov in a short story published in 1942.22 The three laws were: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories).23 Bertrand Russell, who spent many years working on the foundations of mathematics, once remarked that “everything is vague to a degree you do not realize till you have tried to make it precise.”24 Russell’s dictum applies in spades to the direct specification approach. Consider, for example, how one might explicate Asimov’s first law. Does it mean that the robot should minimize the probability of any human being coming to harm? In that case the other laws become otiose since it is always possible for the AI to take some action that would have at least some microscopic effect on the probability of a human being coming to harm. How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? 
Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
It seems fairly likely, however, that even if progress along the whole brain emulation path is swift, artificial intelligence will nevertheless be first to cross the finishing line: this is because of the possibility of neuromorphic AIs based on partial emulations.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The Google search engine is, arguably, the greatest AI system that has yet been built.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)