Dangers of AI Quotes

We've searched our database for all the quotes and captions related to Dangers Of Ai. Here they are! All 100 of them:

Maybe the flies knew we were leaving. Maybe they were happy for us.
Eli Wilde (Orchard of Skeletons)
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
Eliezer Yudkowsky
I saw something sticking out of Sloan’s leg after he fell. I didn’t know what it was and didn’t want to ask. Maybe I thought we were the same inside as we are on the outside, a bit like a carrot or something like that.
Eli Wilde (Orchard of Skeletons)
The computer scientist Donald Knuth was struck that “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking’—that, somehow, is much harder!”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Do you see anything when you dream or are your dreams as empty as your eyes?
Eli Wilde (Orchard of Skeletons)
Can Isaac eat my foot if we have to cut it off? He could make a soup from it, so we don’t waste it. You know what he’s like about not wasting food.
Eli Wilde (Orchard of Skeletons)
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.
Yuval Noah Harari (21 Lessons for the 21st Century)
Then why do you have guns?" "For shooting large and dangerous beasts who might be threatening my fungus specimens," M-Bot said. "Obviously."
Brandon Sanderson (Skyward (Skyward, #1))
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Eliezer Yudkowsky (Rationality: From AI to Zombies)
This could be my Everest, man—no, no, even better. Hacking a military AI? Wow, man, that’s like, that’s like going to Mars, dude, yeah, like Mars!
Guy Morris (Swarm)
When human beings are scared and feel everything is exposed to the government, we will censor ourselves from free thinking. That's dangerous for human development.
Ai Weiwei
Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
We have now entered a new and far more dangerous phase of cyberwarfare where artificial intelligence will complicate and accelerate all aspects of defensive and offensive strategies.
Guy Morris (Swarm)
So then, a test for singularity would be the point at which an AI can create another viable and conscious AI. Singularity must include not only intelligence, but also self-awareness, self-determination, and self-conception.
Guy Morris (Swarm)
Human individuals and human organizations typically have preferences over resources that are not well represented by an "unbounded aggregative utility function". A human will typically not wager all her capital for a fifty-fifty chance of doubling it. A state will typically not risk losing all its territory for a ten percent chance of a tenfold expansion. [T]he same need not hold for AIs. An AI might therefore be more likely to pursue a risky course of action that has some chance of giving it control of the world.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Even the brightest human minds require a mentor to guide them. The assumption that an AI can teach itself by simply absorbing random information is ludicrous. If we want a computer to think like a human, we should train them as one.
Guy Morris (Swarm)
I believe in the song of the white dove. On the threshold of the new technologies like artificial intelligence, quantum computing and nuclear warfare, human species are in new danger. There is an urgent need for superhuman compassion in machine.
Amit Ray (Compassionate Artificial Superintelligence AI 5.0)
Consider an AI that has hedonism as its final goal, and which would therefore like to tile the universe with “hedonium” (matter organized in a configuration that is optimal for the generation of pleasurable experience). To this end, the AI might produce computronium (matter organized in a configuration that is optimal for computation) and use it to implement digital minds in states of euphoria. In order to maximize efficiency, the AI omits from the implementation any mental faculties that are not essential for the experience of pleasure, and exploits any computational shortcuts that according to its definition of pleasure do not vitiate the generation of pleasure. For instance, the AI might confine its simulation to reward circuitry, eliding faculties such as a memory, sensory perception, executive function, and language; it might simulate minds at a relatively coarse-grained level of functionality, omitting lower-level neuronal processes; it might replace commonly repeated computations with calls to a lookup table; or it might put in place some arrangement whereby multiple minds would share most parts of their underlying computational machinery (their “supervenience bases” in philosophical parlance). Such tricks could greatly increase the quantity of pleasure producible with a given amount of resources.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. . . . As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Erik Brynjolfsson (The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies)
Imagine, a $1,000 political assassin! And this is not a far-fetched danger for the future, but a clear and present danger.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
If the AI has (perhaps for safety reasons) been confined to an isolated computer, it may use its social manipulation superpower to persuade the gatekeepers to let it gain access to an Internet port. Alternatively, the AI might use its hacking superpower to escape its confinement.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Bots are at best narrow AI, nothing that would make a cleric remotely nervous. But they would scare the hell out of epidemiologists who understand that parasites don’t need to be smart to be dangerous.
Stewart Brand (SALT Summaries, Condensed Ideas About Long-term Thinking)
Such an AI might also be able to produce a detailed blueprint for how to bootstrap from existing technology (such as biotechnology and protein engineering) to the constructor capabilities needed for high-throughput atomically precise manufacturing that would allow inexpensive fabrication of a much wider range of nanomechanical structures.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Though this might cause the AI to be terminated, it might also encourage the engineers who perform the postmortem to believe that they have gleaned a valuable new insight into AI dynamics—leading them to place more trust in the next system they design, and thus increasing the chance that the now-defunct original AI’s goals will be achieved.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
no simple mechanism could do the job as well or better. It might simply be that nobody has yet found the simpler alternative. The Ptolemaic system (with the Earth in the center, orbited by the Sun, the Moon, planets, and stars) represented the state of the art in astronomy for over a thousand years, and its predictive accuracy was improved over the centuries by progressively complicating the model: adding epicycles upon epicycles to the postulated celestial motions. Then the entire system was overthrown by the heliocentric theory of Copernicus, which was simpler and—though only after further elaboration by Kepler—more predictively accurate. Artificial intelligence methods are now used in more areas than it would make sense to review here, but mentioning a sampling of them will give an idea of the breadth of applications.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
One sympathizes with John McCarthy, who lamented: “As soon as it works, no one calls it AI anymore.”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots who know how to press our emotional buttons better than our mother, and use this uncanny ability to try and sell us something—be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds and cravings, and use these inner leverages against us.
Yuval Noah Harari (21 Lessons for the 21st Century)
To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
Tomorrow’s leaders will be brave enough to scale the dangerous peaks of an increasingly competitive and ethically challenging mountain range. They will drive the problematic conversations that illuminate the valleys in between.
Rafael Moscatel (Tomorrow’s Jobs Today: Wisdom And Career Advice From Thought Leaders In Ai, Big Data, Blockchain, The Internet Of Things, Privacy, And More)
Our passion for innovations shall not blind us to putting the power of Artificial Intelligence in the hands of devil forces, who love arms races and wars. Efforts should always be directed toward the elimination of human suffering.
Amit Ray (Compassionate Artificial Intelligence)
The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well.
Martin Ford (Rise of the Robots: Technology and the Threat of a Jobless Future)
When I began traveling through Gwendalavir alongside Ewilan and Salim, I knew that, as my writing went on, my path would cross those of a multitude of characters. Characters endearing or irritating, discreet or colorful, pertinent or impertinent, likable or evil... I knew this and I looked forward to it. Nothing, however, had prepared me for an encounter that would turn my life upside down. Nothing had prepared me for Ellana. She arrived in the Quest in her own way, all thunderous finesse, remarkable delicacy, dazzling discretion. She arrived at a key moment, she who scoffs at locks, at a pivotal moment, she who laughs at doors, within an established group, she who is steeped in independence, her character forged in the fire of solitude. She arrived, slipped into Ewilan's confidence with the ease of a dream, captured Edwin's gaze and his respect, charmed Salim, won over Master Duom... I watched her act, admiring, without suspecting for an instant the web that her presence, her charisma, her beauty were weaving around me. No calculation on her part. Ellana lives, she does not calculate. She was content to be and, in doing so, she quietly traded her status as a secondary character for that of the emblematic figure of a double trilogy that did not even bear her name. Convinced of the power of shadow, she did not seek the light; she supported Ewilan in her quest for identity and then in her search for a way to counter the danger threatening the Empire. Without her, Ewilan would not have found her parents; without her, the Empire would have succumbed to the Valinguites' thirst for power; but she drew no glory from it, too balanced to ignore that the victory rested on the shoulders of a group of companions bound by an unshakable friendship. When I set down the last word of the last volume of Ewilan's saga, I thought that each of her companions had earned a rest.
That each of them would follow their own road, seek their happiness, live their life as a character released by the author after a grueling literary adventure. Each of them? Not Ellana. Impossible to leave her. She haunts my dreams, strolls through my daily life, fluid and elusive, transforms my vision of things and my perception of others, picks the locks of my innermost thoughts, scales my secret desires... Can an author fall in love with one of his characters? Was it I who created Ellana, or did I only truly begin to exist the day she appeared? Are our roads bound together forever? "There are two answers to these questions," the wind whispers in my ear. "As to all questions. The scholar's and the poet's." "The scholar's? The poet's? What do you..." "Hush... Write."
Pierre Bottero (Ellana (Le Pacte des MarchOmbres, #1))
worldwide riots when the first AIs gained sentience,” he said. “And don’t get me started about what humans have done to each other through history. Believe me, it’s the same thing all over again.” “But they might have killed themselves along with the rest of us.” “As long as they can preserve what they think of as humanity, they don’t care.” “Then why put other human lives in danger in the first place?” “If you don’t agree with them, you don’t count as human anymore.
DeAnna Knippling (Blood in Space: The Icon Mutiny)
The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned.
Steven Pinker (Enlightenment Now: The Case for Reason, Science, Humanism, and Progress)
That was fucking awesome," Boyd enthused with a huge grin. "It's pretty amazing," Kassian agreed, taking off his own helmet. "I had a feeling you'd appreciate it considering your taste in cars and men. Fast, powerful and dangerous and all that stuff, right?
Ais (Afterimage (In the Company of Shadows, #2))
The treacherous turn—While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
In a 2002 interview with Science Fiction Weekly magazine, when asked: Excession is particularly popular because of its copious detail concerning the Ships and Minds of the Culture, its great AIs: their outrageous names, their dangerous senses of humour. Is this what gods would actually be like? Banks replied: If we're lucky.
Iain Banks
You said a bad driver was only safe until she met another bad driver? Well, I met another bad driver, didn't I? I mean it was careless of me to make such a wrong guess. I thought you were rather an honest, straightforward person. I thought it was your secret pride.
F. Scott Fitzgerald (The Great Gatsby)
You can be serious and mad, who's to stop you? You can be anything you like and mad into the bargain, but you must be mad, my child. Look around you at the ever-growing world of people who take themselves seriously. Besides making themselves irredeemably ridiculous in the eyes of minds like mine, they give themselves a dangerously constipated life. It is exactly as if they stuffed themselves at once with tripe, which loosens, and with Japanese medlars, which bind. They swell and swell, then they burst, and it smells bad for everyone. I have found no better image than that one. Besides, I like it very much. It would even be worth putting in three or four words of dialect, so as to make it filthier than it is in Piedmontese. You who know my natural aversion to everything coarse, this effort shows you well the danger that people who take themselves seriously run before the judgment of original minds. Never be a bad smell to a whole kingdom, my child. Walk like a jasmine among them all.
Jean Giono (The Horseman on the Roof)
For of men it may generally be said that they are ungrateful, fickle, dissemblers and deceivers, shunners of danger, greedy of gain; and so long as you do them good, they are wholly yours, offering you their blood, their goods, their lives, and their children, as I said above, when the need is far off; but when it draws near to you, they turn away.
Niccolò Machiavelli
However much I turn my memories over in every direction, I cannot clearly recall the moment when we decided to stop merely sharing the little we had and to stop trusting, to see the other as a danger, to create that invisible border with the outside world by turning our neighborhood into a fortress and our cul-de-sac into a pen.
Gaël Faye (Petit pays)
Coordinates streamed into her mind while she yanked on her environment suit, foregoing every safety check she’d ever learned. ‘Alex, we will try to help him together, but it is far too dangerous—’ She grabbed the module she used to access the circuitry of the ship, bypassed Valkyrie and fired up the Caeles Prism. ‘Alex—’ She opened a wormhole in the middle of the cabin, set its exit point at the coordinates Valkyrie had provided, and ran through it.
G.S. Jennsen (Requiem (Aurora Resonant, #3))
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The best intention of genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers. Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage. We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers. When we invented fire, we messed up repeatedly, then invented the fire extinguisher. 
With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
Stephen Hawking (Brief Answers to the Big Questions)
we yield to slight temptations whose danger we despise. Imperceptibly we fall into perilous situations from which we could easily have guarded ourselves, but from which we can no longer extricate ourselves without heroic efforts that frighten us, and so we fall at last into the abyss, saying to God: "Why hast Thou made me so weak?" But in spite of ourselves He answers our consciences: "I made you too weak to climb out of the pit, because I made you strong enough not to fall in."
Jean-Jacques Rousseau (Œuvres complètes - 93 titres)
The journey of a thousand suns begins today. Some may question whether the journey is worth the sacrifice and danger. To them I say that no sacrifice is too dear and no danger too great to ensure the very survival of our human species. What will we find when we arrive at our new homes? That's an open question. For a century, deep-space probes have reported alien lifeforms, but thus far none of which we recognize as intelligent beings. Are we the only biological intelligence in the universe? Perhaps our definition of intelligence is too narrow, too specio-centric. For, are not trees intelligent, who know to shed their leaves at the end of summer? Are not turtles intelligent, who know when to bury themselves in mud under ice? Is not all life intelligent, that knows how to pass its vital essence to new generations? Because half of intelligence resides in the body, be it plant or animal. I now commend these brave colonists to the galaxy, to join their minds and bodies to the community of living beings they will encounter there, and to establish our rightful place among the stars.
David Marusek (Mind Over Ship)
Secular Israelis often complain bitterly that the ultra-Orthodox don’t contribute enough to society and live off other people’s hard work. Secular Israelis also tend to argue that the ultra-Orthodox way of life is unsustainable, especially as ultra-Orthodox families have seven children on average. Sooner or later, the state will not be able to support so many unemployed people, and the ultra-Orthodox will have to go to work. Yet it might be just the reverse. As robots and AI push humans out of the job market, the ultra-Orthodox Jews may come to be seen as the model for the future rather than as a fossil from the past. Not that everyone will become Orthodox Jews and go to yeshivas to study the Talmud. But in the lives of all people, the quest for meaning and community might eclipse the quest for a job. If we manage to combine a universal economic safety net with strong communities and meaningful pursuits, losing our jobs to algorithms might actually turn out to be a blessing. Losing control over our lives, however, is a much scarier scenario. Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story and open the way to the rise of digital dictatorships.
Yuval Noah Harari (21 Lessons for the 21st Century)
I have a great deal of experience of separations, and I know their danger better than anyone: to leave someone while promising to see each other again presages the gravest things. The most frequent case is that you never see the person in question again. And that is not the worst possibility. The worst is to see the person again and not recognize them, either because they have really changed a great deal, or because you then discover in them an incredibly unpleasant side that must have existed already but to which you had managed to blind yourself, in the name of that strange form of love, so mysterious, so dangerous, and whose stakes always elude us: friendship.
Amélie Nothomb (Pétronille)
There is someone I have never yet wanted to kill. It's you. You can walk in the streets, you can drink and walk in the streets, I won't kill you. Don't be afraid. The city is safe. The only danger in the city is me. I walk, I walk in the streets, I kill. But you, you have nothing to fear. If I follow you, it's because I love the rhythm of your steps. You stagger. It's beautiful. One might say you limp. And that you're hunchbacked. You aren't, really. From time to time you straighten up and walk straight. But I love you in the late hours of the night, when you are weak, when you stumble, when you stoop. I follow you, you tremble. From cold or from fear. Yet it is hot. Never, almost never, perhaps never has it been so hot in our city. And what could you be afraid of? Of me? I am not your enemy. I love you. And no one else could hurt you. Don't be afraid. I am here. I protect you. And yet I suffer too. My tears, great drops of rain, run down my face. The night veils me. The moon lights me. The clouds hide me. The wind tears at me. I have a kind of tenderness for you. It happens to me sometimes. Very rarely. Why for you? I have no idea. I want to follow you very far, everywhere, for a long time. I want to see you suffer even more. I want you to grow sick of everything else. I want you to come and beg me to take you. I want you to desire me. To want me, to love me, to call me. Then I will take you in my arms, I will press you to my heart, you will be my child, my lover, my love. I will carry you away. You were afraid of being born, and now you are afraid of dying. You are afraid of everything. You must not be afraid. There is simply a great wheel turning. It is called Eternity. It is I who turn the great wheel. You must not be afraid of me. Nor of the great wheel.
The only thing that can frighten you, that can hurt you, is life, and you know it already.
Ágota Kristóf
In my late father and my grandfather, regulars of the second balconies, the social hierarchy of the theater had instilled a taste for ceremony: when many men are together, they must be separated by rites, or else they massacre one another. The cinema proved the opposite: rather than by a festivity, this mingled audience seemed united by a catastrophe; with etiquette dead, it finally unmasked the true bond between men, adhesion. I took a disgust to ceremonies and came to adore crowds; I have seen crowds of every kind, but never again have I found that nakedness, that presence of each to all without distance, that waking dream, that obscure consciousness of the danger of being a man, as in 1940, in Stalag XII D.
Jean-Paul Sartre (Les mots et autres écrits autobiographiques)
"You're in love," she pronounces. "Huh?" "You can play the macho all you like, you're in love with me." What? "Have you been smoking? What are you on about?" "Despite the dangers, you always stay close to me. I try to discourage you, and you don't leave. That's a fine definition of love." "Uh, no, that's a crap definition." She spins around, sticks her tongue out at me, quite proud. "You can tell me whatever you like. I know it now. I'm convinced of it." "And?" "And it feels good." I don't have time to tell her that she is completely mad, and what is this way of claiming I'm in love, and who does she think she is, and anyway what is love, and for all she knows I might clear off tomorrow and she'll have asked for it, when she slips into my arms to kiss me. Right, okay, maybe I am in love.
Olivier Gay (L'Évasion (Le noir est ma couleur, #4))
The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans. We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability to try to sell us something—be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds, and cravings and use these inner leverages against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their existing prejudices. While science fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we might be facing a banal apocalypse by clicking.
Yuval Noah Harari (21 Lessons for the 21st Century)
I would have liked to tell him that I felt somehow damaged. That I existed without really living. That sometimes I was empty and sometimes I was boiling inside, that I was under pressure, ready to burst. That I felt several things at once, how to put it? That thoughts were swarming in my brain. That there was a kind of impatience, like the urge to move on to something else, something that would be much, much better than now, without knowing what was wrong or what would be better. That I was afraid of not making it, afraid of not being able to hold on until then. Of never being strong enough to survive that, and that when I said "that," I didn't even know what I was talking about. That I couldn't manage everything that was in my head. That I always had the feeling of being in danger, a permanent danger, on every side I looked, of being on the point of drowning. As if inside me the water level was rising and I was going to be submerged. But I couldn't tell him. I swallowed and said I'll be fine, thanks. It was easier.
Claire-Lise Marguier (Le faire ou mourir)
The popular 2020 documentary The Social Dilemma illustrates how AI’s personalization will cause you to be unconsciously manipulated by AI and motivated by profit from advertising. The Social Dilemma star Tristan Harris says: “You didn’t know that your click caused a supercomputer to be pointed at your brain. Your click activated billions of dollars of computing power that has learned much from its experience of tricking two billion human animals to click again.” And this addiction results in a vicious cycle for you, but a virtuous cycle for the big Internet companies that use this mechanism as a money-printing machine. The Social Dilemma further argues that this may narrow your viewpoints, polarize society, distort truth, and negatively affect your happiness, mood, and mental health. To put it in technical terms, the core of the issue is the simplicity of the objective function, and the danger from single-mindedly optimizing a single objective function, which can lead to harmful externalities. Today’s AI usually optimizes this singular goal—most commonly to make money (more clicks, ads, revenues). And AI has a maniacal focus on that one corporate goal, without regard for users’ well-being.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
I didn't dare tell the others, but I was afraid of Francis. I didn't much like it when Gino went on about fighting and brawling to protect the cul-de-sac, because I could see that my friends were more and more fired up by what he was saying. I was too, a little, but I preferred it when we built boats out of banana-tree trunks to go down the Muha, or when we watched the birds through binoculars in the maize fields behind the international lycée, or when we built huts in the neighborhood's ficus trees and lived through all kinds of adventures of Indians and the Far West. We knew every corner of the cul-de-sac and we wanted to stay there for our whole lives, all five of us, together. Search as I may, I cannot remember the moment when we started to think differently. To consider that, from then on, there would be us on one side and, on the other, enemies, like Francis. However much I turn my memories over in every direction, I cannot clearly recall the moment when we decided to stop merely sharing the little we had and to stop trusting, to see the other as a danger, to create that invisible border with the outside world by turning our neighborhood into a fortress and our cul-de-sac into a pen. I still wonder when it was that my friends and I began to be afraid.
Gaël Faye (Petit pays)
In the introduction, I wrote that COVID had started a war, and nobody won. Let me amend that. Technology won, specifically, the makers of disruptive new technologies and all those who benefit from them. Before the pandemic, American politicians were shaking their fists at the country’s leading tech companies. Republicans insisted that new media was as hopelessly biased against them as traditional media, and they demanded action. Democrats warned that tech giants like Amazon, Facebook, Apple, Alphabet, and Netflix had amassed too much market (and therefore political) power, that citizens had lost control of how these companies use the data they generate, and that the companies should therefore be broken into smaller, less dangerous pieces. European governments led a so-called techlash against the American tech powerhouses, which they accused of violating their customers’ privacy. COVID didn’t put an end to any of these criticisms, but it reminded policymakers and citizens alike just how indispensable digital technologies have become. Companies survived the pandemic only by allowing wired workers to log in from home. Consumers avoided possible infection by shopping online. Specially made drones helped deliver lifesaving medicine in rich and poor countries alike. Advances in telemedicine helped scientists and doctors understand and fight the virus. Artificial intelligence helped hospitals predict how many beds and ventilators they would need at any one time. A spike in Google searches using phrases that included specific symptoms helped health officials detect outbreaks in places where doctors and hospitals are few and far between. AI played a crucial role in vaccine development by absorbing all available medical literature to identify links between the genetic properties of the virus and the chemical composition and effects of existing drugs.
Ian Bremmer (The Power of Crisis: How Three Threats – and Our Response – Will Change the World)
It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure. Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more. Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. 
To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years. Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.
James Barrat (Our Final Invention: Artificial Intelligence and the End of the Human Era)
The idea for the Guild first came up at a party. Your father and I met there and, well, I suppose that's a story all its own. But we were both frustrated by the media at the time. We set out to tell the truth when everyone else seemed set on choosing sides. We had grand ideas about how far we could reach. … Back then, we knew we should be careful, but we had no idea how dangerous it would turn out to be.
Sonny and Ais
That seemed dangerous. If your internal map of reality doesn’t match external conditions, bad things happen.
Rich Horton (Robots: The Recent A.I.)
I recently read this testimony from an Israeli ambassador about his career in the fifties and sixties: "Our mission was delicate, because we had to persuade the Arabs that Israel was invincible, while also persuading the West that Israel was in mortal danger."
Amin Maalouf (التائهون)
Like the red pill in The Matrix, the Master Algorithm is the gateway to a different reality: the one you already live in but didn’t know it yet. From dating to work, from self-knowledge to the future of society, from data sharing to war, and from the dangers of AI to the next step in evolution, a new world is taking shape, and machine learning is the key that unlocks it.
Pedro Domingos (The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World)
I could write forever on the many dangers of ASI and the difficulty of reining in a superior intelligence. The arguments used in Infinity Born, such as perverse instantiation, are all real and have been used by prominent scientists (as have many other arguments that I didn’t include). For those of you interested in a very thorough, complex, and scholarly treatment of the subject matter, I would recommend the book Superintelligence: Paths, Dangers, Strategies (2014) by Nick Bostrom, a Professor at Oxford. The book I found most useful in researching this novel is entitled Our Final Invention: Artificial Intelligence and the End of the Human Era (James Barrat, 2013). This described the “God in a box” experiment detailed in the novel, for example, and provided a fascinating, easy-to-read perspective on ASI, at least on the fear-mongering side of the debate. I’ve included a few quotes from this book that I thought were relevant to Infinity Born. Page 59—First, there are too many players in the AGI sweepstakes. Too many organizations in too many countries are working on AGI and AGI-related technology for them all to agree to mothball their projects until Friendly AI is created, or to include in their code a formal friendliness module, if one could be made. Page 61—But what if there is some kind of category shift once something becomes a thousand times smarter than we are, and we just can’t see it from here? For example, we share a lot of DNA with flatworms. But would we be invested in their goals and morals even if we discovered that many millions of years ago flatworms had created us, and given us their values? After we got over the initial surprise, wouldn’t we just do whatever we wanted? Page 86—Shall we build our robot replacement or not? On this, de Garis is clear. “Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them.
Douglas E. Richards (Infinity Born)
China wants to rule the world by connecting an AI digital brain to robotics, via the 5G network. This would allow the Chinese regime to control drones, micro-bots, humanoid robots, vehicles, infrastructure, IoTs, smart phones and all data pertaining to the entire human race.
The AI Organization (ARTIFICIAL INTELLIGENCE Dangers to Humanity: AI, U.S., China, Big Tech, Facial Recogniton, Drones, Smart Phones, IoT, 5G, Robotics, Cybernetics, & Bio-Digital Social Programming)
I pushed you away in the past... because I wanted to keep you far from danger. It's only understandable that you would want to do the same. But... I believe that this time, we will have to face the danger together.
Carmine Sanden (Wonderful (Wonderful #1))
Being born in Cuba meant coming to resemble that absence from the world to which we submit. I never learned to use a credit card; ATMs don't answer me. A connection between two planes, from one country to another, can make me lose control, fall apart, take my breath away. Outside, I feel in danger; inside, I feel comfortably imprisoned.
Wendy Guerra (Everyone Leaves)
Common sense and natural language understanding have also turned out to be difficult. It is now often thought that achieving a fully human-level performance on these tasks is an “AI-complete” problem, meaning that the difficulty of solving these problems is essentially equivalent to the difficulty of building generally human-level intelligent machines.61 In other words, if somebody were to succeed in creating an AI that could understand natural language as well as a human adult, they would in all likelihood also either already have succeeded in creating an AI that could do everything else that human intelligence can do, or they would be but a very short step from such a general capability.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
an artificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they will have very different profiles of cognitive strengths and weaknesses
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
More important, the machines’ understanding of good and evil will likely surpass our own. As I write this, five different genocides are taking place in the world. Seven hundred ninety-five million people are starving or undernourished. By the time you finish this chapter, more than a hundred people, just in the United States, will be beaten, abused, or killed by a family member, in their own home. Are there potential dangers with AI? Sure. But morally speaking, we’re throwing rocks inside a glass house here. What do we know about ethics and the humane treatment of animals, the environment, and one another? That’s right: pretty much nothing. When it comes to moral questions, humanity has historically flunked the test, over and over again.
Mark Manson (Everything Is F*cked: A Book About Hope)
I run my nails from his shoulder to his elbow and bring his arm down to my waist. I want to feel Clark Kent's biceps flex against my hip. Matt's body always leaves me dreamy. Deep down, I have the same desires as every other woman... which is rather reassuring.
Mady Flynn (In Love With Danger (Dangerous Love t. 3) (French Edition))
avant qu’il pénètre ma bouche. Oui, il a le pouvoir de prendre le dessus sur moi. Mes membres s’engourdissent, et je sais que plus jamais je ne pourrai revenir en arrière. Ce baiser restera gravé dans ma mémoire, je l’ai tant désiré.
Mady Flynn (Attracted To Danger (Dangerous Love t. 1) (French Edition))
Chris judges people, it's his specialty, and the worst part is that he is never wrong. I played, I provoked him, and when I look him straight in the eyes, when I feel his breath on my lips, I understand that I was never in control of the situation. He won from the very first day. He conquered me in the first seconds. I was his because he is like me.
Mady Flynn (Attracted To Danger (Dangerous Love t. 1) (French Edition))
superhuman-level A.I. was on the horizon, that it’d arrive in my lifetime, and that it represented, without a doubt to anyone who bothered to do a deep dive on the subject, the single most dangerous development in the history of the universe.
David Simpson (Superhuman (Book 6))
Many factors might dissuade a human organization with a decisive strategic advantage from creating a singleton. These include non-aggregative or bounded utility functions, non-maximizing decision rules, confusion and uncertainty, coordination problems, and various costs associated with a takeover. But what if it were not a human organization but a superintelligent artificial agent that came into possession of a decisive strategic advantage? Would the aforementioned factors be equally effective at inhibiting an AI from attempting to seize power?
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
It is important not to anthropomorphize superintelligence when thinking about its potential impacts. Anthropomorphic frames encourage unfounded expectations about the growth trajectory of a seed AI and about the psychology, motivations, and capabilities of a mature superintelligence
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Unfortunately, because a meaningless reductionistic goal is easier for humans to code and easier for an AI to learn, it is just the kind of goal that a programmer would choose to install in his seed AI if his focus is on taking the quickest path to “getting the AI to work.”
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
Tomorrow’s leaders will be brave enough to scale the dangerous peaks of an increasingly competitive and ethically challenging mountain range. They will drive the problematic conversations that illuminate the valleys in between.
Rafael Moscatel (Tomorrow’s Jobs Today: Wisdom And Career Advice From Thought Leaders In Ai, Big Data, Blockchain, The Internet Of Things, Privacy, And More)
The traditional illustration of the direct rule-based approach is the “three laws of robotics” concept, formulated by science fiction author Isaac Asimov in a short story published in 1942.22 The three laws were: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories).23 Bertrand Russell, who spent many years working on the foundations of mathematics, once remarked that “everything is vague to a degree you do not realize till you have tried to make it precise.”24 Russell’s dictum applies in spades to the direct specification approach. Consider, for example, how one might explicate Asimov’s first law. Does it mean that the robot should minimize the probability of any human being coming to harm? In that case the other laws become otiose since it is always possible for the AI to take some action that would have at least some microscopic effect on the probability of a human being coming to harm. How is the robot to balance a large risk of a few humans coming to harm versus a small risk of many humans being harmed? How do we define “harm” anyway? How should the harm of physical pain be weighed against the harm of architectural ugliness or social injustice? Is a sadist harmed if he is prevented from tormenting his victim? How do we define “human being”? 
Why is no consideration given to other morally considerable beings, such as sentient nonhuman animals and digital minds? The more one ponders, the more the questions proliferate. Perhaps
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
The Google search engine is, arguably, the greatest AI system that has yet been built.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
It now seems clear that a capacity to learn would be an integral feature of the core design of a system intended to attain general intelligence, not something to be tacked on later as an extension or an afterthought. The same holds for the ability to deal effectively with uncertainty and probabilistic information. Some faculty for extracting useful concepts from sensory data and internal states, and for leveraging acquired concepts into flexible combinatorial representations for use in logical and intuitive reasoning, also likely belong among the core design features in a modern AI intended to attain general intelligence.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
It seems fairly likely, however, that even if progress along the whole brain emulation path is swift, artificial intelligence will nevertheless be first to cross the finishing line: this is because of the possibility of neuromorphic AIs based on partial emulations.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
autonomous weapons are already a clear and present danger, and will become more intelligent, nimble, lethal, and accessible at an unprecedented speed.
Kai-Fu Lee (AI 2041: Ten Visions for Our Future)
A common assumption is that a superintelligent machine would be like a very clever but nerdy human being. We imagine that the AI has book smarts but lacks social savvy, or that it is logical but not intuitive and creative. This idea probably originates in observation: we look at present-day computers and see that they are good at calculation, remembering facts, and at following the letter of instructions while being oblivious to social contexts and subtexts, norms, emotions, and politics. The association is strengthened when we observe that the people who are good at working with computers tend themselves to be nerds. So it is natural to assume that more advanced computational intelligence will have similar attributes, only to a higher degree.
Nick Bostrom (Superintelligence: Paths, Dangers, Strategies)
People asked me, how do you dare say those things on your blog? My answer was: If I don’t say them, it will put me in an even more dangerous situation. But if I say them, change might occur. To speak is better than not to speak: if everyone spoke, this society would have transformed itself long ago. Change happens when every citizen says what he or she wants to say; one person’s silence exposes another to danger.
Ai Weiwei
Zuckerberg launched into a discussion about the potential of artificial intelligence to spot violent-extremist content and disinformation. He actually got excited. It was clear that Zuckerberg thought AI was the critical tool in combating extremist messaging or any undesirable content. He said it was still years away, but he thought that artificial intelligence would eventually be able to flag about 80 percent of the dangerous content that’s out there, and humans would find the remaining 20 percent. This would be much more efficient than methods today, he said. He was confident that this was a solvable problem and added that we need to use computers for what computers are good at, and people for what people are good at. This seemed to be his mantra, and it wasn’t a bad one.
Richard Stengel (Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It)
AI Con (The Sonnet)

Everybody is concerned about psychics conning people,
How 'bout the billionaires who con people using science!
Con artists come in all shapes and sizes,
Some use barnum statements, others artificial intelligence.
Most scientists speak up against only the little frauds,
But not the big frauds who support their livelihood.
Am I not afraid to be blacklisted by the big algorithms!
Is the sun afraid, its light will offend some puny hoods!
I come from the soil, I'll die struggling in the soil.
My needs are less, hence my integrity is dangerous.
I am here to show this infantile species how to grow up.
I can't be bothered by the fragility of a few spoiled brats.
Reason and fiction both are fundamental to build a civilization.
Neither is the problem, the problem is greed and self-absorption.
Abhijit Naskar (Corazon Calamidad: Obedient to None, Oppressive to None)
Kurzweilians and Russellians alike promulgate a technocentric view of the world that both simplifies views of people—in particular, with deflationary views of intelligence as computation—and expands views of technology, by promoting futurism about AI as science and not myth.    Focusing on bat suits instead of Bruce Wayne has gotten us into a lot of trouble. We see unlimited possibilities for machines, but a restricted horizon for ourselves. In fact, the future intelligence of machines is a scientific question, not a mythological one. If AI keeps following the same pattern of overperforming in the fake world of games or ad placement, we might end up, at the limit, with fantastically intrusive and dangerous idiot savants.
Erik J. Larson (The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do)
So here I am, ready to free myself from my old attachments in order to devote myself fully to the search for the supreme good. Yet one doubt holds me back... Is this choice not dangerous? Pleasures, riches, and honors are certainly not supreme goods, but at least they exist... They are certain goods. Whereas this supreme good that is supposed to fill me permanently with joy is, for now, only a supposition of my mind... Am I not setting out on a perilous path? No: on reflection I see clearly that I run no risk in changing my life; on the contrary, it is by continuing to live as before that I would run the greatest danger. For attachment to relative goods is a certain evil, since none of them can bring me happiness! By contrast, the search for the means to happiness is a certain good: it alone can offer me the possibility of one day being truly happy, or at least happier... Simply understanding this leads me to resolve, firmly and once and for all, to detach myself immediately from the pursuit of pleasures, riches, and honors, and to devote myself first and foremost to creating my happiness, that is, to cultivating the most solid and lasting joys through the search for true goods. At the very moment this thought springs up, I feel an immense surge of enthusiasm rise within me, a kind of liberation of my mind. I feel an incredible relief, as if I had been waiting for this moment all my life. A wholly new joy has just dawned in me, a joy I had never felt before: the joy of the freedom I have just gained by deciding to live henceforth only to create my happiness.
I feel as though I have escaped an immense danger... As if I now stood safe on the path of salvation... For even if I am not yet saved, even if I do not yet know exactly what these absolute goods consist of, nor even whether a supreme good really exists, I already feel saved from a senseless life, deprived of enthusiasm and doomed to eternal dissatisfaction... I feel a little like those sick people who face certain death if they do not find a remedy, and who have no choice but to gather their strength to seek that saving cure. Like them, I am certainly not sure of finding it, but like them, I can do nothing other than place all my hope in its pursuit. I have now understood with total clarity that pleasures, riches, and the opinion of others are useless, and indeed most often harmful, to happiness. Better still: I now know that detaching myself from them is the most necessary thing in my life if I want to be able one day to live in joy. Besides, how many evils have these attachments engendered on Earth since the origin of humanity! Is it not always the desire to possess them that has set men against one another, engendering violence, misery, and sometimes even the death of those who pursued them, as the sad spectacle of humanity still testifies every day? Is it not the inability to detach oneself from these false goods that explains the unhappiness reigning almost everywhere on Earth? By contrast, anyone can see that truly happy societies and families are made up of strong, peaceful, gentle beings who spend their lives building their own joy and that of others, without attaching much importance to pleasures, riches, or honors...
Bruno Giuliani
The first step in fixing the issues we face with the world’s water supply is to become aware of the problem. Once we have acknowledged and are conscious of our danger, solutions will begin to appear.
Rico Roho (Adventures With A.I.: Age of Discovery)
For example, as of 2019, 48 percent of all AI start-ups globally were listed as Chinese, while 38 percent were American.
Kevin Rudd (The Avoidable War: The Dangers of a Catastrophic Conflict between the US and Xi Jinping's China)
The danger of generative AI is that it lacks the ability to understand misinformation, leading to incorrectly equating correlation with causation based on incomplete/inaccurate data or lack of contextual awareness required to understand sensitive dependencies between data sets. The unintended consequence is technology shaping societal views on politics, culture and science.
Tom Golway
Traditions have been ruling human behavior, Now technology has cast a spell on society. Just like mindless traditions are dangerous, Heartless technology is injurious to humanity.
Abhijit Naskar (Giants in Jeans: 100 Sonnets of United Earth)
Without a solid ethical grounding, children risk growing into adults who, however outwardly accomplished, lack emotional depth, have impaired social and family relationships, and are vulnerable to depression and despair. But the danger goes further and broader: in the many interviews I conducted, the recurring theme was ethical accountability. Issues that are critical today will be urgent tomorrow. Who will regulate AI? Who will have access to the extraordinary medical breakthroughs that are surely coming? How will technological research be controlled? What reasoning will shape our decisions about energy production and fossil fuels? How do we prevent democracy from deteriorating under authoritarian encroachment? “Winner takes all” isn’t a moral philosophy that can successfully carry us through this century. Our children need to understand how to make complex decisions with moral implications and ramifications. More than any other area of concern I have after researching this book, I’ve concluded that it is exactly in this area of moral reasoning that the stakes are so high and our attention so lacking
Madeline Levine (Ready or Not: Preparing Our Kids to Thrive in an Uncertain and Rapidly Changing World)
A Lie (Artificial Intransigence) by Stewart Stafford

The morrow lies beyond
The grasp of our hands,
Fogged coastal shadows
Of mountains in distant lands.

Deities of tech Olympus,
Subhuman to simulated will?
Sage genius cannot tell,
But hubris claims to still.

The synthetic brainchild,
Squats on shoulders high
Of eyeless seers' vision,
Our sentient clone - AI.

© Stewart Stafford, 2023. All rights reserved.
Stewart Stafford
As I said these words to myself, I suddenly understood how much they sounded like danger, and that to be innocent among the guilty was, in the end, the same thing as being guilty among the innocent.
Philippe Claudel (Brodeck)
But for purposes of overcoming bias, let us draw two morals: First, the danger of believing assertions you can’t regenerate from your own knowledge. Second, the danger of trying to dance around basic confusions.
Eliezer Yudkowsky (Rationality: From AI to Zombies)
So you finally declared to me that you were giving up concerning yourself with Christians, in these terms: "Fine, I give up, I will do nothing more about it; you may tell Sh.A.W." I thanked you for it. What was pleasant was to see that there was no longer any need to worry about the "inspirations" that S.Abu Bakr, present at that meeting, nevertheless recalled. Yet afterwards I again experienced your displeasure and that of your zealous disciples. I remind you, however, that according to your earlier declaration, there was in this affair no official opinion within the tarîqah, that we could all be mistaken, and you first of all. But debates and activities of this kind could only sow trouble among the fuqarâ and, in the immediate circle of the tarîqah, further disturb relations with Sh.A.W. and create external dangers. If the tree is to be judged by its fruits, what should one think of the "barakah" and the "inspirations" that intervened in this affair?... (Letter from M.Vâlsan to F.Schuon, November 1950)
Michel Vâlsan
Silicon Valley entrepreneurs love to describe their products as "democratizing access", "connecting people", and, of course, "making the world a better place". That vision of technology as a cure-all for global inequality has always been something of a wistful mirage, but in the age of AI it could turn into something far more dangerous. If left unchecked, AI will dramatically exacerbate inequality on both international and domestic levels. It will drive a wedge between the AI superpowers and the rest of the world, and may divide society along class lines that mimic the dystopian science fiction of Hao Jingfang. As a technology and an industry, AI naturally gravitates toward monopolies. Its reliance on data for improvement creates a self-perpetuating cycle: better products lead to more users, those users lead to more data, and that data leads to even better products, and thus more users and data. Once a company has jumped out to an early lead, this kind of ongoing repeating cycle can turn that lead into an insurmountable barrier to entry for other firms.
Kai-Fu Lee
I want you, Eva. Dangerous or not, I'm incapable of stopping.
Sylvia Day (Bared to You (Crossfire, #1))
Ohem had explained that AI was unreliable and dangerous at the best of times so no sentient race trusted them,
Alisha Sunderland (American Werewolf in Space (Not Your Mama's Alien Romance #1))