“
Maybe the flies knew we were leaving. Maybe they were happy for us.
”
Eli Wilde (Orchard of Skeletons)
“
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
”
Eliezer Yudkowsky
“
The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’—that, somehow, is much harder!”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
I saw something sticking out of Sloan’s leg after he fell. I didn’t know what it was and didn’t want to ask. Maybe I thought we were the same inside as we are on the outside, a bit like a carrot or something like that.
”
Eli Wilde (Orchard of Skeletons)
“
Do you see anything when you dream or are your dreams as empty as your eyes?
”
Eli Wilde (Orchard of Skeletons)
“
Can Isaac eat my foot if we have to cut it off? He could make a soup from it, so we don’t waste it. You know what he’s like about not wasting food.
”
Eli Wilde (Orchard of Skeletons)
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
”
Eliezer Yudkowsky (Rationality: From AI to Zombies)
“
When human beings are scared and feel everything is exposed to the government, we will censor ourselves from free thinking. That's dangerous for human development.
”
Ai Weiwei
“
"Then why do you have guns?"
"For shooting large and dangerous beasts who might be threatening my fungus specimens," M-Bot said. "Obviously.
”
Brandon Sanderson (Skyward (Skyward, #1))
“
This could be my Everest, man—no, no, even better. Hacking a military AI? Wow, man, that’s like, that’s like going to Mars, dude, yeah, like Mars!
”
Guy Morris (Swarm)
“
Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
We have now entered a new and far more dangerous phase of cyberwarfare where artificial intelligence will complicate and accelerate all aspects of defensive and offensive strategies.
”
Guy Morris (Swarm)
“
So then, a test for singularity would be the point at which an AI can create another viable and conscious AI. Singularity must include not only intelligence, but also self-awareness, self-determination, and self-conception.
”
Guy Morris (Swarm)
“
Imagine, a $1,000 political assassin! And this is not a far-fetched danger for the future, but a clear and present danger.
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function". A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. [T]he same need not hold for AIs. An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
Even the brightest human minds require a mentor to guide them. The assumption that an AI can teach itself by simply absorbing random information is ludicrous. If we want a computer to think like a human, we should train them as one.
”
Guy Morris (Swarm)
“
I believe in the song of the white dove. On the threshold of the new technologies like artificial intelligence, quantum computing and nuclear warfare, human species are in new danger. There is an urgent need for superhuman compassion in machine.
”
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
“
Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as a memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. . . . As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
”
Erik Brynjolfsson (The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies)
“
Bots are at best narrow AI, nothing that would make a cleric remotely nervous. But they would scare the hell out of epidemiologists who understand that parasites don’t need to be smart to be dangerous.
”
Stewart Brand (SALT Summaries, Condensed Ideas About Long-term Thinking)
“
The cultural obsession with purity originates in the evolutionary struggle to avoid pollution. All animals are torn between the need to try new food and the fear of being poisoned. Evolution therefore equipped animals with both curiosity and the capacity to feel disgust on coming into contact with something toxic or otherwise dangerous. Politicians and prophets have learned how to manipulate these disgust mechanisms.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities.
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
In a 2002 interview with Science Fiction Weekly magazine, when asked:
Excession is particularly popular because of its copious detail concerning the Ships and Minds of the Culture, its great AIs: their outrageous names, their dangerous senses of humour. Is this what gods would actually be like?
Banks replied:
If we're lucky.
”
Iain Banks
“
Such an AI might also be able to produce a detailed blueprint for how to bootstrap from existing technology (such as biotechnology and protein engineering) to the constructor capabilities needed for high-throughput atomically precise manufacturing that would allow inexpensive fabrication of a much wider range of nanomechanical structures.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Though this might cause the AI to be terminated, it might also encourage the engineers who perform the postmortem to believe that they have gleaned a valuable new insight into AI dynamics—leading them to place more trust in the next system they design, and thus increasing the chance that the now-defunct original AI’s goals will be achieved.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Our passion for innovations shall not blind us to putting the power of Artificial Intelligence in the hands of devil forces, who love arms races and wars. Efforts should always be directed toward the elimination of human suffering.
”
Amit Ray (Compassionate Artificial Intelligence)
“
no simple mechanism could do the job as well or better. It might simply be that nobody has yet found the simpler alternative. The Ptolemaic system (with the Earth in the center, orbited by the Sun, the Moon, planets, and stars) represented the state of the art in astronomy for over a thousand years, and its predictive accuracy was improved over the centuries by progressively complicating the model: adding epicycles upon epicycles to the postulated celestial motions. Then the entire system was overthrown by the heliocentric theory of Copernicus, which was simpler and—though only after further elaboration by Kepler—more predictively accurate.63 Artificial intelligence methods are now used in more areas than it would make sense to review here, but mentioning a sampling of them will give an idea of the breadth of applications. Aside from the game AIs
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
One sympathizes with John McCarthy, who lamented: “As soon as it works, no one calls it AI anymore.”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
The true danger lies less in AI thinking like human, than human adopting an AI way of thinking.
”
Stephane Nappo
“
At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computer might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots who know how to press our emotional buttons better than our mother, and use this uncanny ability to try and sell us something- be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds and cravings, and use these inner leverages against us.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
When I began travelling through Gwendalavir alongside Ewilan and Salim, I knew that, as my writing progressed, my path would cross those of a multitude of characters. Characters endearing or irritating, discreet or flamboyant, pertinent or impertinent, likeable or malevolent... I knew this, and I looked forward to it.
Nothing, however, had prepared me for an encounter that would turn my life upside down.
Nothing had prepared me for Ellana.
She entered the Quest in her own way, with thunderous finesse, remarkable delicacy and dazzling discretion. She arrived at a key moment, she who laughs at locks; at a pivotal moment, she who scoffs at doors; within an established group, she who is steeped in independence, her character forged in the fire of solitude.
She arrived, slipped into Ewilan's trust with the ease of a dream, caught Edwin's eye and earned his respect, charmed Salim, won over Master Duom... I watched her act, admiring, without suspecting for an instant the web that her presence, her charisma and her beauty were weaving around me.
No calculation on her part. Ellana lives, she does not calculate. She was content simply to be and, in doing so, she quietly traded her status as a secondary character for that of the emblematic figure of a double trilogy that did not even bear her name. Convinced of the power of shadow, she did not seek the light; she stood by Ewilan in her quest for identity and then in her search for a way to counter the danger threatening the Empire.
Without her, Ewilan would not have found her parents; without her, the Empire would have succumbed to the Valinguites' thirst for power; yet she drew no glory from it, too level-headed to ignore that the victory rested on the shoulders of a group of companions bound together by an unshakeable friendship.
When I set down the last word of the last volume of Ewilan's saga, I thought that each of her companions had earned a rest. That each of them would follow their own path, seek their happiness, live their life as a character released by the author after a gruelling literary adventure.
Each of them?
Not Ellana.
Impossible to leave her. She haunts my dreams, wanders through my daily life, fluid and elusive, transforms my view of things and my perception of others, picks the locks of my innermost thoughts, scales my secret desires...
Can an author fall in love with one of his characters?
Was it I who created Ellana, or did I only truly begin to exist the day she appeared? Are our paths bound together forever?
— There are two answers to these questions, the wind whispers in my ear. As to every question. The scholar's and the poet's.
— The scholar's? The poet's? What do you...
— Hush... Write.
”
Pierre Bottero (Ellana (Le Pacte des MarchOmbres, #1))
“
Tomorrow’s leaders will be brave enough to scale the dangerous peaks of an increasingly competitive and ethically challenging mountain range. They will drive the problematic conversations that illuminate the valleys in between.
”
Rafael Moscatel (Tomorrow’s Jobs Today: Wisdom And Career Advice From Thought Leaders In Ai, Big Data, Blockchain, The Internet Of Things, Privacy, And More)
“
The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well.
”
Martin Ford (Rise of the Robots: Technology and the Threat of a Jobless Future)
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans. We
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
worldwide riots when the first AIs gained sentience,” he said. “And don’t get me started about what humans have done to each other through history. Believe me, it’s the same thing all over again.” “But they might have killed themselves along with the rest of us.” “As long as they can preserve what they think of as humanity, they don’t care.” “Then why put other human lives in danger in the first place?” “If you don’t agree with them, you don’t count as human anymore.
”
DeAnna Knippling (Blood in Space: The Icon Mutiny)
“
The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned.
”
Steven Pinker (Enlightenment Now: The Case for Reason, Science, Humanism, and Progress)
“
That was fucking awesome," Boyd enthused with a huge grin.
"It's pretty amazing," Kassian agreed, taking off his own helmet. "I had a feeling you'd appreciate it considering your taste in cars and men. Fast, powerful and dangerous and all that stuff, right?
”
Ais (Afterimage (In the Company of Shadows, #2))
“
The treacherous turn—While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
At least you two are back together now,” Donut said. “And you got a nice box out of it. I know you find it unpleasant, Carl. But you being stubborn about this is causing everything to be more dangerous. We have to kill these things anyway, so if the AI wants you to kill in a certain way, I don’t see why it matters. This is just like one of those agility courses that Miss Beatrice used to insist I complete at all the regional cat shows. I did not like doing it, and I never ribboned of course, but I knew if I did well, I would get an extra brushing that evening. We are all prostitutes in one way or another, I suppose.
”
Matt Dinniman (The Gate of the Feral Gods (Dungeon Crawler Carl, #4))
“
Homo Deus, a book that highlighted some of the dangers posed to humanity by the new information technologies. That book argued that the real hero of history has always been information, rather than Homo sapiens, and that scientists increasingly understand not just history but also biology, politics, and economics in terms of information flows.
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
— You told me that a careless driver was only safe so long as she didn't meet another careless driver. Well, I met one, didn't I? I mean it was careless of me to make such an error of judgment. I thought you were someone honest and straightforward. I thought it was your secret pride.
”
F. Scott Fitzgerald (The Great Gatsby)
“
You can be serious and mad, who's stopping you? You can be anything you like and mad on top of it, but you must be mad, my child. Look around you at the ever-growing world of people who take themselves seriously. Besides making themselves irredeemably ridiculous to minds like mine, they give themselves a dangerously constipated life. It is exactly as if they stuffed themselves at once with tripe, which loosens, and with Japanese medlars, which bind. They swell and swell, then they burst, and it smells bad for everyone. I have found no better image than that one. Besides, I like it very much. One ought even to put in three or four words of dialect so as to make it coarser than it is in Piedmontese. You who know my natural aversion to everything vulgar, that effort shows you well enough the danger that people who take themselves seriously run before the judgment of original minds. Never be a bad smell for a whole kingdom, my child. Walk among everyone like a jasmine.
”
Jean Giono (The Horseman on the Roof)
“
Of men, indeed, one can generally say this: that they are ungrateful, fickle, feigners and dissemblers, averse to danger, greedy for gain; and as long as you do them good they are entirely yours, offering you their blood, their goods, their lives, their children, as I said above, when the need is far off; but when it draws near to you, they turn away.
”
Niccolò Machiavelli
“
However much I turn my memories over in every direction, I cannot clearly recall the moment when we decided to no longer be content with sharing the little we had, to stop trusting, to see the other as a danger, to create that invisible border with the outside world by turning our neighbourhood into a fortress and our dead-end street into a pen.
”
Gaël Faye (Petit pays)
“
People asked me, how do you dare say those things on your blog? My answer was: If I don’t say them, it will put me in an even more dangerous situation. But if I say them, change might occur. To speak is better than not to speak: if everyone spoke, this society would have transformed itself long ago. Change happens when every citizen says what he or she wants to say; one person’s silence exposes another to danger.
”
Ai Weiwei
“
One small study of undergraduates found that 66 percent of men and 25 percent of women choose to painfully shock themselves rather than sit quietly with nothing to do for 15 minutes. Boredom doesn’t just lead us to hurt ourselves; 18 percent of bored people killed worms when given a chance (only 2 percent of non-bored people did). Bored parents and soldiers both act more sadistically. Boredom is not just boring; it is dangerous in its own way.
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
Coordinates streamed into her mind while she yanked on her environment suit, foregoing every safety check she’d ever learned.
‘Alex, we will try to help him together, but it is far too dangerous—’
She grabbed the module she used to access the circuitry of the ship, bypassed Valkyrie and fired up the Caeles Prism.
‘Alex—’
She opened a wormhole in the middle of the cabin, set its exit point at the coordinates Valkyrie had provided, and ran through it.
”
G.S. Jennsen (Requiem (Aurora Resonant, #3))
“
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The best intention of genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers.
Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage.
We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers.
When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
”
Stephen Hawking (Brief Answers to the Big Questions)
“
we yield to slight temptations whose danger we despise. Imperceptibly we fall into perilous situations from which we could easily have protected ourselves, but from which we can no longer extricate ourselves without heroic efforts that frighten us, and at last we fall into the abyss, saying to God: “Why have you made me so weak?” But in spite of ourselves he answers our consciences: “I made you too weak to climb out of the pit, because I made you strong enough not to fall into it.”
”
Jean-Jacques Rousseau (Œuvres complètes - 93 titres)
“
The journey of a thousand suns begins today.
Some may question whether the journey is worth the sacrifice and danger.
To them I say that no sacrifice is too dear and no danger too great to ensure the very survival of our human species.
What will we find when we arrive at our new homes? That's an open question. For a century, deep-space probes have reported alien lifeforms, but thus far none of which we recognize as intelligent beings. Are we the only biological intelligence in the universe? Perhaps our definition of intelligence is too narrow, too specio-centric.
For, are not trees intelligent, who know to shed their leaves at the end of summer? Are not turtles intelligent, who know when to bury themselves in mud under ice? Is not all life intelligent, that knows how to pass its vital essence to new generations?
Because half of intelligence resides in the body, be it plant or animal.
I now commend these brave colonists to the galaxy, to join their minds and bodies to the community of living beings they will encounter there, and to establish our rightful place among the stars.
”
David Marusek (Mind Over Ship)
“
And why, then, try so hard to save philosophy? You will see my conclusion: it is because there is a public danger. There is a public danger! This danger is insidious, though brutal. It is, to call it by its name, the general loss of individuality. The individual is dying, that is the fact. And that is why, speaking of philosophy, I insisted a moment ago on the role that ought to be played, in a philosophy conscious of itself, one that no longer has the explanatory pretensions of old, by a strong constitution, by personality, by individuality.
”
Paul Valéry (Cours de poétique (Tome 1) - Le corps et l'esprit (1937-1940) (French Edition))
“
Secular Israelis often complain bitterly that the ultra-Orthodox don’t contribute enough to society and live off other people’s hard work. Secular Israelis also tend to argue that the ultra-Orthodox way of life is unsustainable, especially as ultra-Orthodox families have seven children on average.32 Sooner or later, the state will not be able to support so many unemployed people, and the ultra-Orthodox will have to go to work. Yet it might be just the reverse. As robots and AI push humans out of the job market, the ultra-Orthodox Jews may come to be seen as the model for the future rather than as a fossil from the past. Not that everyone will become Orthodox Jews and go to yeshivas to study the Talmud. But in the lives of all people, the quest for meaning and community might eclipse the quest for a job. If we manage to combine a universal economic safety net with strong communities and meaningful pursuits, losing our jobs to algorithms might actually turn out to be a blessing. Losing control over our lives, however, is a much scarier scenario. Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
I have a great deal of experience with separations, and I know their danger better than anyone: parting from someone while promising you will see each other again portends the gravest things. The most frequent case is that you never see the person in question again. And that is not the worst possibility. The worst is seeing the person again and not recognizing them, either because they really have changed a great deal, or because you then discover in them an unbelievably unpleasant side that must already have existed but to which you had managed to blind yourself, in the name of that strange form of love, so mysterious, so dangerous, and whose stakes always elude us: friendship.
”
Amélie Nothomb (Pétronille)
“
There is someone I have never yet wanted to kill.
It's you.
You can walk the streets, you can drink and walk the streets, I will not kill you.
Don't be afraid. The city is safe. The only danger in the city is me.
I walk, I walk the streets, I kill.
But you, you have nothing to fear.
If I follow you, it is because I love the rhythm of your steps. You stagger. It is beautiful. One might say you limp. And that you are hunchbacked. You aren't, really. From time to time you straighten up and walk straight. But I love you in the late hours of the night, when you are weak, when you stumble, when you stoop.
I follow you, you tremble. From cold or from fear. Yet it is warm.
Never, almost never, perhaps never has it been so warm in our city.
And what could you be afraid of?
Of me?
I am not your enemy. I love you.
And no one else could hurt you.
Don't be afraid. I am here. I protect you.
And yet I suffer too.
My tears - great drops of rain - run down my face. The night veils me. The moon lights me. The clouds hide me. The wind tears at me. I feel a kind of tenderness for you. It happens to me sometimes. Very rarely.
Why for you? I have no idea.
I want to follow you very far, everywhere, for a long time.
I want to see you suffer even more.
I want you to have had enough of everything else.
I want you to come and beg me to take you.
I want you to desire me. To want me, to love me, to call out for me.
Then I will take you in my arms, I will hold you against my heart, you will be my child, my lover, my love.
I will carry you away.
You were afraid of being born, and now you are afraid of dying.
You are afraid of everything.
You must not be afraid.
There is simply a great wheel that turns. It is called Eternity.
It is I who turn the great wheel.
You must not be afraid of me.
Nor of the great wheel.
The only thing that can frighten you, that can hurt you, is life, and you know it already.
”
Ágota Kristóf
“
In my late father and my grandfather, regulars of the second balconies, the social hierarchy of the theatre had instilled a taste for ceremony: when many men are together, they must be separated by rites, or else they slaughter one another. The cinema proved the opposite: rather than by a celebration, this mixed audience seemed brought together by a catastrophe; with etiquette dead, the true bond between men was at last unmasked: adhesion. I took a disgust to ceremonies and came to adore crowds; I have seen crowds of every kind, but I have never again found that nakedness, that unguarded presence of each to all, that waking dream, that obscure awareness of the danger of being a man, except in 1940, in Stalag XII D.
”
Jean-Paul Sartre (Les mots et autres écrits autobiographiques)
“
AI Con (The Sonnet)
Everybody is concerned about psychics conning people,
How 'bout the billionaires who con people using science!
Con artists come in all shapes and sizes,
Some use barnum statements, others artificial intelligence.
Most scientists speak up against only the little frauds,
But not the big frauds who support their livelihood.
Am I not afraid to be blacklisted by the big algorithms!
Is the sun afraid, its light will offend some puny hoods!
I come from the soil, I'll die struggling in the soil.
My needs are less, hence my integrity is dangerous.
I am here to show this infantile species how to grow up.
I can't be bothered by the fragility of a few spoiled brats.
Reason and fiction both are fundamental to build a civilization.
Neither is the problem, the problem is greed and self-absorption.
”
Abhijit Naskar (Corazon Calamidad: Obedient to None, Oppressive to None)
“
-You're in love, she pronounces.
-Huh?
-You can play the macho all you want, you're in love with me.
What?
-Have you been smoking something? What are you on about?
-Despite the dangers, you always stay close to me. I try to discourage you, and you don't leave. That's a fine definition of love.
-Uh, no, that's a crap definition.
She spins around and sticks her tongue out at me, very pleased with herself.
-You can tell me whatever you like. I know it now. I'm convinced of it.
-And?
-And it feels good.
I don't have time to tell her that she's completely crazy, and what is this way of claiming I'm in love, and who does she think she is, and anyway what even is love, and for all she knows I might clear off tomorrow and she'll have asked for it, when she slips into my arms to kiss me.
Okay, fine, maybe I am in love.
”
Olivier Gay (L'Évasion (Le noir est ma couleur, #4))
“
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability to try to sell us something—be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds, and cravings and use these inner leverages against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their existing prejudices.33 While science fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we might be facing a banal apocalypse by clicking.
”
Yuval Noah Harari (21 Lessons for the 21st Century)
“
I would have liked to tell him that I felt somehow damaged. That I existed without really living. That sometimes I was empty and sometimes I was boiling inside, that I was under pressure, ready to burst. That I felt several things at once, how to put it? That my brain was swarming with thoughts. That there was a kind of impatience, like the urge to move on to something else, something that would be much, much better than now, without knowing what was wrong or what would be better. That I was afraid of not making it, afraid of not being able to hold on until then. Of never being strong enough to survive it, and that when I said "it", I didn't even know what I was talking about. That I couldn't manage everything that was in my head. That I always felt I was in danger, a permanent danger, on every side I looked, on the point of drowning. As if inside me the water level was rising and I was going to be submerged. But I couldn't tell him. I swallowed and said it'll be fine, thanks. It was easier.
”
Claire-Lise Marguier (Le faire ou mourir)
“
The popular 2020 documentary The Social Dilemma illustrates how AI’s personalization will cause you to be unconsciously manipulated by AI and motivated by profit from advertising. The Social Dilemma star Tristan Harris says: “You didn’t know that your click caused a supercomputer to be pointed at your brain. Your click activated billions of dollars of computing power that has learned much from its experience of tricking two billion human animals to click again.” And this addiction results in a vicious cycle for you, but a virtuous cycle for the big Internet companies that use this mechanism as a money-printing machine. The Social Dilemma further argues that this may narrow your viewpoints, polarize society, distort truth, and negatively affect your happiness, mood, and mental health. To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities. Today’s AI usually optimizes this singular goal—most commonly to make money (more clicks, ads, revenues). And AI has a maniacal focus on that one corporate goal, without regard for users’ well-being.
”
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
“
I didn't dare tell the others, but I was afraid of Francis. I didn't much like it when Gino kept going on about fighting and brawling to protect the dead end, because I could see that the gang were more and more fired up by what he was saying. I was too, a little, but I preferred it when we built boats out of banana-tree trunks to float down the Muha, or when we watched the birds through binoculars in the maize fields behind the Lycée international, or when we built huts in the neighbourhood's ficus trees and lived out all sorts of adventures of Indians and the Far West. We knew every corner of the dead end and we wanted to stay there our whole lives, all five of us, together.
However hard I try, I cannot remember the moment when we began to think differently. To consider that, from then on, there would be us on one side and, on the other, enemies, like Francis. However much I turn my memories over in every direction, I cannot clearly recall the instant when we decided to no longer be content with sharing the little we had, to stop trusting, to see the other as a danger, to create that invisible border with the outside world by turning our neighbourhood into a fortress and our dead-end street into a pen.
I still wonder when it was that my friends and I began to be afraid.
”
Gaël Faye (Petit pays)
“
In a widely viewed documentary titled Singularity or Bust, Hugo de Garis, a renowned researcher in the field of AI and author of The Artilect War, speaks of this phenomenon. He says: In a sense, we are the problem. We’re creating artificial brains that will get smarter and smarter every year. And you can imagine, say twenty years from now, as that gap closes, millions will be asking questions like ‘Is that a good thing? Is that dangerous?’ I imagine a great debate starting to rage and, though you can’t be certain talking about the future, the scenario I see as the most probable is the worst. This time, we’re not talking about the survival of a country. This time, it’s the survival of us as a species. I see humanity splitting into two major philosophical groups, ideological groups. One group I call the cosmists, who will want to build these godlike, massively intelligent machines that will be immortal. For this group, this will be almost like a religion and that’s potentially very frightening. Now, the other group’s main motive will be fear. I call them the terrans. If you look at the Terminator movies, the essence of that movie is machines versus humans. This sounds like science fiction today but, at least for most of the techies, this idea is getting taken more and more seriously, because we’re getting closer and closer. If there’s a major war, with this kind of weaponry, it’ll be in the billions killed and that’s incredibly depressing. I’m glad I’m alive now. I’ll probably die peacefully in my bed. But I calculate that my grandkids will be caught up in this and I won’t. Thank God, I won’t see it. Each person is going to have to choose. It’s a binary decision, you build them or you don’t build them.
”
Mo Gawdat (Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World)
“
In the introduction, I wrote that COVID had started a war, and nobody won. Let me amend that. Technology won, specifically, the makers of disruptive new technologies and all those who benefit from them. Before the pandemic, American politicians were shaking their fists at the country’s leading tech companies. Republicans insisted that new media was as hopelessly biased against them as traditional media, and they demanded action. Democrats warned that tech giants like Amazon, Facebook, Apple, Alphabet, and Netflix had amassed too much market (and therefore political) power, that citizens had lost control of how these companies use the data they generate, and that the companies should therefore be broken into smaller, less dangerous pieces. European governments led a so-called techlash against the American tech powerhouses, which they accused of violating their customers’ privacy.
COVID didn’t put an end to any of these criticisms, but it reminded policymakers and citizens alike just how indispensable digital technologies have become. Companies survived the pandemic only by allowing wired workers to log in from home. Consumers avoided possible infection by shopping online. Specially made drones helped deliver lifesaving medicine in rich and poor countries alike. Advances in telemedicine helped scientists and doctors understand and fight the virus. Artificial intelligence helped hospitals predict how many beds and ventilators they would need at any one time. A spike in Google searches using phrases that included specific symptoms helped health officials detect outbreaks in places where doctors and hospitals are few and far between. AI played a crucial role in vaccine development by absorbing all available medical literature to identify links between the genetic properties of the virus and the chemical composition and effects of existing drugs.
”
Ian Bremmer (The Power of Crisis: How Three Threats – and Our Response – Will Change the World)
“
It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure. Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more. Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years. Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.
”
James Barrat (Our Final Invention: Artificial Intelligence and the End of the Human Era)
“
It seems fairly likely, however, that even if progress along the whole brain emulation path is swift, artificial intelligence will nevertheless be first to cross the finishing line: this is because of the possibility of neuromorphic AIs based on partial emulations.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
I have never quite understood why it was thought that women were less capable than men of avoiding these obvious dangers, but I believe the regulation was inspired by gallantry rather than by reason. In all, I covered the entire air route between Nairobi and London six times - four of them solo [...] - and other women have done as much. As a matter of fact, the greatest error of judgement made during a flight over the Studd was committed by a man [...].
”
Beryl Markham (West with the Night)
“
« I imagine you have already learned, from the newspapers or the radio, the painful news of the death of René Guénon, which occurred during the night of 7 to 8 January. I received your letter on 8 January at the same time as the news of his final agony.
The following day I learned that he had passed away. He had been suffering for several months and had stopped all his correspondence towards the end of November. He suffered from an oedema in one leg, caused by rheumatism. In December the danger seemed to have passed completely, but the poisoning of his blood gave him an abscess in the throat, and it seems this hastened his end, if it was not its cause. There were moments during his last months when, as I told you, it was clear that I was disturbing him and tiring him; his resistance had greatly diminished. But he was lucid until his very last moments.
« Here are some very touching details: during his last days, it seems he knew he was going to die, and on the afternoon of 7 January he performed a very intense dhikr, supported on each side by his wife and a member of his family. The women grew tired and wore themselves out before he did. They recount that on that day his sweat smelled of the scent of flowers. Finally, he insistently asked their permission to die, which shows clearly that he could choose the moment of his death. The women begged him to stay alive longer. At last he asked his wife: « May I not die now? I have suffered so much! » She answered, consenting: « With the protection of God! » He then died almost immediately, after making one or two more invocations!
« A few more details: his cat, which had seemed in perfect health, began to wail and died a few hours later. On the day of his death, René Guénon had puzzled his wife by telling her that after his passing she was to leave his room unchanged. No one was to touch his books or his papers. He stressed that otherwise he would not be able to see her and their children, but that in this undisturbed room he would remain seated at his desk and could continue to see them, even though they could not see him! »
– Michel Vâlsan, letter to Vasile Lovinescu, 18 June 1951.
”
Michel Vâlsan
“
It now seems clear that a capacity to learn would be an integral feature of the core design of a system intended to attain general intelligence, not something to be tacked on later as an extension or an afterthought. The same holds for the ability to deal effectively with uncertainty and probabilistic information. Some faculty for extracting useful concepts from sensory data and internal states, and for leveraging acquired concepts into flexible combinatorial representations for use in logical and intuitive reasoning, also likely belong among the core design features in a modern AI intended to attain general intelligence.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
A danger that many researchers are passionate about is the specter of fully autonomous weapons.
”
Martin Ford (Architects of Intelligence: The truth about AI from the people building it)
“
we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness. Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
”
Yuval Noah Harari
“
I see a dangerous rise of “conspiracy mindsets” in marketing lately: more and more hotels are willing to accept any BS strategy, as long as it goes against the grain
”
Simone Puorto
“
The traditional illustration of the direct rule-based approach is the “three laws of robotics” concept, formulated by science fiction author Isaac Asimov in a short story published in 1942.22 The three laws were: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories).23 Bertrand Russell, who spent many years working on the foundations of mathematics, once remarked that “everything is vague to a degree you do not realize till you have tried to make it precise.”24 Russell’s dictum applies in spades to the direct specification approach. Consider, for example, how one might explicate Asimov’s first law. Does it mean that the robot should minimize the probability of any human being coming to harm? In that case the other laws become otiose since it is always possible for the AI to take some action that would have at least some microscopic effect on the probability of a human being coming to harm. How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate. Perhaps
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs.
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Especially in Europe, the introduction of GDPR has undeniably added an additional level of complexity to effective automation and, equally crucial, is the problem of human error during data entry: If the information entered into the systems is incomplete, inaccurate or inconsistent, in fact, it becomes useless for automation purposes. Or, even worse, dangerous, as hoteliers would end up making assumptions that have no basis.
”
Simone Puorto
“
I did not grow up in an ordinary library. I grew up in a Great Library. Mock the books all you like, but you have never seen a real book in your life, and you should count yourself lucky, because you would not survive even a second if you found yourself alone with one.
”
Margaret Rogerson (Sorcery of Thorns (Sorcery of Thorns, #1))
“
Kurzweilians and Russellians alike promulgate a technocentric view of the world that both simplifies views of people—in particular, with deflationary views of intelligence as computation—and expands views of technology, by promoting futurism about AI as science and not myth.
Focusing on bat suits instead of Bruce Wayne has gotten us into a lot of trouble. We see unlimited possibilities for machines, but a restricted horizon for ourselves. In fact, the future intelligence of machines is a scientific question, not a mythological one. If AI keeps following the same pattern of overperforming in the fake world of games or ad placement, we might end up, at the limit, with fantastically intrusive and dangerous idiot savants.
”
Erik J. Larson (The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do)
“
I came to realize that whatever is in motion - despite its dangers - will always be better than what is still, and that change will always be something nobler than permanence; for what stagnates is doomed inevitably to degeneration, to decomposition and, in the end, to nothingness, while all that keeps moving will endure, even eternally.
”
Olga Tokarczuk (Les Pérégrins)
“
Traditions have been ruling human behavior,
Now technology has cast a spell on society.
Just like mindless traditions are dangerous,
Heartless technology is injurious to humanity.
”
Abhijit Naskar (Giants in Jeans: 100 Sonnets of United Earth)
“
Here I am, then, ready to free myself from my old attachments so that I can devote myself fully to the search for the supreme good.
Yet a doubt holds me back... Is this choice not dangerous? Pleasures, riches and honours are certainly not supreme goods, but at least they exist... They are certain goods. Whereas this supreme good that is supposed to fill me permanently with joy is, for now, only a supposition of my mind... Am I not setting out on a perilous path?
No: on reflection I can see clearly that I run no risk in changing my life; on the contrary, it is by continuing to live as before that I would run the greatest danger. For attachment to relative goods is a certain evil, since none of them can bring me happiness!!! On the contrary, the search for the means of happiness is a certain good: it alone can offer me the possibility of one day being truly happy, or at least happier...
The simple fact of understanding this determines me to make, firmly and once and for all, the resolution to detach myself immediately from the pursuit of pleasures, riches and honours, in order to devote myself first of all to the creation of my happiness, that is, to the cultivation of the most solid and lasting joys, through the search for true goods.
At the very moment this thought springs up, I feel an immense sense of enthusiasm arise in me, a kind of liberation of my mind. I feel an incredible relief, as if I had been waiting for this moment all my life. A completely new joy has just risen in me, a joy I had never felt before: the joy of the freedom I have just gained by deciding to live from now on only to create my happiness.
I have the impression of having escaped an immense danger... As if I now found myself safe on the path of salvation... For even if I am not yet saved, even if I do not yet know exactly what these absolute goods consist of, nor even whether a supreme good really exists, I already feel saved from a senseless life, deprived of enthusiasm and doomed to eternal dissatisfaction...
I feel a little like those sick people who face certain death if they do not find a remedy, and who have no choice but to gather their strength to seek that saving remedy. Like them, I am certainly not sure of finding it, but like them, I can do nothing other than place all my hope in its quest. I have now understood with total clarity that pleasures, riches and the opinion of others are useless, and indeed most often harmful, for living in happiness.
Better still: I now know that my detachment from them is the most necessary thing in my life, if I am ever to live in joy. Besides, how many evils these attachments have brought forth on Earth since the origin of humanity!
Is it not always the desire to possess them that has set men against one another, breeding violence, misery and sometimes even the death of those who sought them, as the sad spectacle of humanity still bears witness every day? Is it not the inability to detach oneself from these false goods that explains the unhappiness reigning almost everywhere on Earth?
On the contrary, everyone can see that truly happy societies and families are made up of strong, peaceful and gentle beings who spend their lives building their own joy and that of others without attaching much importance to pleasures, riches or honours...
”
”
Bruno Giuliani
“
The first step in fixing the issues we face with the world’s water supply is to become aware of the problem. Once we have acknowledged and are conscious of our danger, solutions will begin to appear.
”
”
Rico Roho (Adventures With A.I.: Age of Discovery)
“
To overcome the combinatorial explosion, one needs algorithms that exploit structure in the target domain and take advantage of prior knowledge by using heuristic search, planning, and flexible abstract representations—capabilities that were poorly developed in the early AI systems.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
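Bostrom's point above has a standard textbook illustration: uninformed search drowns in the combinatorial explosion of states, while a heuristic that encodes knowledge of the target domain prunes most of them. The sketch below is only an illustration of that general idea, not anything taken from Bostrom's book; the toy grid, the Manhattan-distance heuristic, and every name in it are assumptions of mine.

```python
# Illustrative sketch (mine, not Bostrom's): blind breadth-first search expands
# states uniformly, while A* guided by a Manhattan-distance heuristic exploits
# the structure of the grid and expands far fewer states. Ties are broken in
# favour of states closer to the goal.
import heapq
from collections import deque

GRID = [
    "S...........",
    "............",
    "....####....",
    "....#..#....",
    "............",
    "............",
    "...........G",
]
ROWS, COLS = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (ROWS - 1, COLS - 1)

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS and GRID[nr][nc] != "#":
            yield nr, nc

def bfs(start, goal):
    """Uninformed search: expands states level by level until the goal appears."""
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        pos = frontier.popleft()
        expanded += 1
        if pos == goal:
            return expanded
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def astar(start, goal):
    """Informed search: the Manhattan-distance heuristic steers expansion."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), h(start), start, 0)]   # (f, h, state, g)
    best_g, expanded = {start: 0}, 0
    while frontier:
        _, _, pos, g = heapq.heappop(frontier)
        expanded += 1
        if pos == goal:
            return expanded
        for nxt in neighbors(pos):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), h(nxt), nxt, g + 1))

print("states expanded by blind BFS:   ", bfs(START, GOAL))
print("states expanded by heuristic A*:", astar(START, GOAL))
```

Running it prints the number of states each strategy expands before reaching the goal; on this small grid the heuristic version typically touches only a fraction of the cells that blind search does, which is the pruning effect the quote describes.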
“
Like any of man’s inventions, artificial intelligence can be used for good or evil. In the right hands and with proper intent, it can do beneficial things for humanity. Conversely, it can be used by evil dictators, sinister politicians, and malevolent leaders to create something as dangerous as a deadly weapon in a terrorist’s hands. Yuval Noah Harari is a leading spokesperson for the globalists and their transhumanist, AI, and Fourth Industrial Revolution agenda. Harari is also an advisor to Klaus Schwab and the World Economic Forum. Barack Obama refers to Harari as a prophet and recommends his books. Harari wrote a book titled Sapiens and another titled Homo Deus (“homo” being a Latin word for human or man, and “deus” being the Latin word for god or deity). He believes that homo sapiens as we know them have run their course and will no longer be relevant in the future. Technology will create homo deus, which will be a much superior model with upgraded physical and mental abilities. Harari tells us that humankind possesses enormous new powers, and once the threat of famine, plagues, and war is finally lifted, we will be looking for something to do with ourselves. He believes the next targets of our power and technology are likely to be immortality, happiness, and divinity. He says: “We will aim to overcome old age and even death itself. Having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods, and turn homo sapiens into homo deus. When I say that humans will upgrade themselves into gods in the 21st century, this is not meant as a metaphor; I mean it literally. If you think about the gods of ancient mythology, like the Hebrew God, they have certain qualities. Not just immortality, but maybe above all, the ability to create life, to design life. We are in the process of acquiring these divine abilities. We want to learn how to engineer and produce life. It’s very likely that in the 21st century, the main products of the economy will no longer be textiles and vehicles and weapons. They will be bodies and brains and minds.”
”
”
Perry Stone (Artificial Intelligence Versus God: The Final Battle for Humanity)
“
- Never underestimate the danger of not going there, all the way to the end of the road.
- Never confuse (I have confused them) refuges, oases, islands, and prisons.
- Never take the little girls who have reached the end of the road back to their houses.
”
”
Lola Lafon (Nous sommes les oiseaux de la tempête qui s'annonce)
“
Reflecting on the Myanmar tragedy, Pwint Htun wrote to me in July 2023, “I naively used to believe that social media could elevate human consciousness and spread the perspective of common humanity through interconnected pre-frontal cortexes in billions of human beings. What I realize is that the social media companies are not incentivized to interconnect pre-frontal cortexes. Social media companies are incentivized to create interconnected limbic systems—which is much more dangerous for humanity.
”
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
Some people—like the engineers and executives of high-tech corporations—are way ahead of politicians and voters and are better informed than most of us about the development of AI, cryptocurrencies, social credits, and the like. Unfortunately, most of them don’t use their knowledge to help regulate the explosive potential of the new technologies. Instead, they use it to make billions of dollars—or to accumulate petabits of information. There are exceptions, like Audrey Tang. She was a leading hacker and software engineer who in 2014 joined the Sunflower Student Movement, which protested against government policies in Taiwan. The Taiwanese cabinet was so impressed by her skills that Tang was eventually invited to join the government as its minister of digital affairs. In that position, she helped make the government’s work more transparent to citizens. She was also credited with using digital tools to help Taiwan successfully contain the COVID-19 outbreak. Yet Tang’s political commitment and career path are not the norm. For every computer-science graduate who wants to be the next Audrey Tang, there are probably many more who want to be the next Jobs, Zuckerberg, or Musk and build a multibillion-dollar corporation rather than become an elected public servant. This leads to a dangerous information asymmetry. The people who lead the information revolution know far more about the underlying technology than the people who are supposed to regulate it.
”
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
Each study has concluded the same thing: almost all of our jobs will overlap with the capabilities of AI. As I’ve alluded to previously, the shape of this AI revolution in the workplace looks very different from every previous automation revolution, which typically started with the most repetitive and dangerous jobs. Research by economists Ed Felten, Manav Raj, and Rob Seamans concluded that AI overlaps most with the most highly compensated, highly creative, and highly educated work. College professors make up most of the top 20 jobs that overlap with AI (business school professor is number 22 on the list). But the job with the highest overlap is actually telemarketer. Robocalls are going to be a lot more convincing, and a lot less robotic, soon. Only 36 job categories out of 1,016 had no overlap with AI. Those few jobs included dancers and athletes, as well as pile driver operators, roofers, and motorcycle mechanics (though I spoke to a roofer, and they were planning on using AI to help with marketing and customer service, so maybe 35 jobs). You will notice that these are highly physical jobs, ones in which the ability to move in space is critical. It highlights the fact that AI, for now at least, is disembodied. The boom in artificial intelligence is happening much faster than the evolution of practical robots, but that may change soon. Many researchers are trying to solve long-standing problems in robotics with Large Language Models, and there are some early signs that this might work, as LLMs make it easier to program robots that can really learn from the world around them.
”
”
Ethan Mollick (Co-Intelligence: Living and Working with AI)
“
The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.
”
”
Yuval Noah Harari (Nexus: A Brief History of Information Networks from the Stone Age to AI)
“
“What I’m saying,” Osla went on, “is that if she’s asking us for help, after everything that has happened, it means she is utterly lost and has no one else.”
She felt her mouth go dry.
“I have a family now. I’m not going to put them in danger for a woman who betrayed me.”
“She says we’re the ones who betrayed her. And she’s not entirely wrong.”
“You owe me a debt.”
“What do you make of the rest of the letter?” she couldn’t help asking. “Do you believe her?”
The unspoken question hung in the air: “Do you think there was a traitor at Bletchley?”
After a few interminable minutes, Osla said at last:
“Bettys tea room. Tomorrow, 2 p.m. We’ll talk.”
”
”
Kate Quinn (The Rose Code)
“
I recently read this testimony from an Israeli ambassador about his career in the fifties and sixties: “Our mission was a delicate one, because we had to persuade the Arabs that Israel was invincible, while at the same time persuading the West that Israel was in mortal danger.
”
”
Amin Maalouf (التائهون)
“
The Google search engine is, arguably, the greatest AI system that has yet been built.
”
”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
“
Like the red pill in The Matrix, the Master Algorithm is the gateway to a different reality: the one you already live in but didn’t know it yet. From dating to work, from self-knowledge to the future of society, from data sharing to war, and from the dangers of AI to the next step in evolution, a new world is taking shape, and machine learning is the key that unlocks it.
”
”
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
“
Part of the issue is the characterisation of generative AI as a human replacement. This makes people treat the tool as a hyperintelligent magical being that deserves reverence. Recent research, however, shows that AI tools get the law wrong between 69 and 88 per cent of the time, producing 'legal hallucinations' when asked 'specific, verifiable questions about random federal court cases'. A human lawyer or judge with that kind of error rate would undermine public faith in justice. Automation bias means we are more likely to believe the machine than the person who questions it, but also more likely to cut it some slack when we know it has got things wrong. Automation bias's little sibling, automation complacency, means that we are also less likely to check the output of a machine than that of a human. The problem is not the technology; it is the human perception of it that leads us to put it to utterly unsuitable uses which makes it dangerous.
”
”
Susie Alegre (Human Rights, Robot Wrongs: Being Human in the Age of AI)
“
Yanosh’s eyes went wide, dazed, at the thin sound, as sharp as the dagger he had just dropped. The moment he pressed harder, this time with his whole arm, Mazu hissed a curse, but could not make Yanosh give way.
From above, he had leaned down a second time, feeling on his cheek the cook’s breath, ragged and fearful.
“When do you intend to tell them?”
”
”
Agape F.H. (Busola către Nova Scotia (Clepsidra Cormoranului, #1))
“
Christian Sia 5-Star Review
"AI Beast by Shawn Corey is a fascinating techno-thriller featuring AI technology and compelling characters. Professor Jon Edwards is a genius who intends to solve the problems of humanity, and this is the reason for creating Lex, an AI computer with incredible powers. While regulators are not sure of what she can do and despite the opposition from different quarters that Lex can be dangerous, the professor believes in its powers. Lex is supposed to be a rational, logical computer without emotions, capable of reproducing processes that can improve life. When she comes to life, she is incredibly powerful, but there is more to her than the professor has anticipated. After an accident, Jon awakens to the startling revelation that Lex might have a will of her own. What comes after is a compelling narrative with strong apocalyptic themes, intrigue, and a world that can either be run down or saved by an AI computer.
The novel is deftly plotted, superbly executed, and filled with characters that are not only sophisticated but that are embodiments of religious symbolism. While Lex manipulates reality and alters the minds of characters in mysterious ways, there are relationships that are well crafted. Readers will appreciate the relationship between the quantum computer science student Nigel and the professor and the professor's affair with his mother. While the narrative is brilliantly executed and permeated with realism, it explores the theme of Armageddon in an intelligent manner. AI Beast is gripping, a story with twisty plot points and a setting that transports readers beyond physical realities. The prose is wonderful, hugely descriptive, and the conflict is phenomenal. A page-turner that reflects Shawn Corey's great imagination and research.
”
”
Shawn Corey
“
One man, one family driven from their land; this rusty old car creaking along the road toward the West. I lost my land; a single tractor was enough to take my land. I am alone and I am bewildered. And one night a family camps in a ditch and another family pulls in and the tents go up. The two men squat on their heels and the women and children listen. Here is the node. You who dislike change and fear revolution, keep these two squatting men apart; make them hate, fear, suspect each other. There is the germ of what you fear. There is the zygote. For the “I lost my land” has changed; a cell has split in two, and from that splitting grows the thing you hate: “We have lost our land.” That is where the danger lies, for two men are not as alone, as helpless, as one. And from this first “we” grows something still more formidable: “I have a little food” plus “I have none.” If the problem resolves into “We have enough to eat,” the thing is under way, the movement has a direction. Only a little multiplying now, and this land, this tractor are ours. The two men squatting in the ditch, the little fire, the bacon simmering in a single pot, the silent women with their fixed stare; behind them, the children listening with all their souls to words their minds cannot understand. Night falls. The baby is cold. Here, take this blanket. It is wool. It was my mother’s blanket... take it for your baby. This is the thing to bomb. This is the beginning: from “I” to “we.”
”
”
John Steinbeck (The Grapes of Wrath)