Rational Animations - 2024-09-07
This video is an adaptation of "That Alien Message", a short story published by Eliezer Yudkowsky in 2008. The text has been adapted, and you can find the original here: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message
▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, KO-FI▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon: https://www.patreon.com/rationalanimations
🟢 Merch: https://rational-animations-shop.fourthwall.com/
🔵 Channel membership: https://www.youtube.com/channel/UCgqt1RE0k0MIr0LoyJRy2lg/join
🟤 Ko-fi, for one-time and recurring donations: https://ko-fi.com/rationalanimations
▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Discord: https://discord.gg/RationalAnimations
Reddit: https://www.reddit.com/r/RationalAnimations/
X: https://x.com/RationalAnimat1
▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Autochthon Tristan Matthew Shinkle Alex Amadori SQRT42Pi David Piepgrass Tomas Campos Jana Ingvi Gautsson BlueNotesBlues @Osric@Terberlo.dog Michael Andregg Riley Matthews Vladimir Silyaev Nathanael Moody Alcher Black RMR Kristin Lindquist Nathan Metzger Glenn Tarigan NMS James Babcock Colin Ricardo Long Hoang Tor Barstad Eric S Apuis Retsam Stuart Alldritt Chris Painter Juan Benet Christian Loomis Tomarty Edward Yu Chad M Jones Emmanuel Fredenrich Honyopenyoko Neal Strobl bparro Danealor Craig Falls Vincent Weisser Alex Hall Ivan Bachcin joe39504589 Klemen Slavic blasted0glass Scott Alexander noggieB Dawson John Slape Gabriel Ledung Jeroen De Dauw Superslowmojoe Nathan Fish Bleys Goodson Ducky Matt Parlmer rictic marverati Rinthean Thomas Grip Boris Bend J H Richard Stambaugh Jonathan Plasse Teo Val Ken Mc leonid andrushchenko Alcher Black ronvil AWyattLife Lazy Scholar Torstein Haldorsen Michał Zieliński Luke Freeman
▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Directed by: Hannah Levingstone | @hannah_luloo
Art Direction: Hané Harnett | @peonyvibes (insta) @peony_vibes (twitter)
Written by: Eliezer Yudkowsky
Adapted by: Arthur Frost
Producer: :3
Line Producer & Production Manager: Kristy Steffens | https://linktr.ee/kstearb
Quality Assurance Lead: Lara Robinowitz | @CelestialShibe
Storyboard Artists: Ira Klages | @dux, Keith Kavanagh | @johnnycigarettex
Animation: Colors Giraldo | @colorsofdoom, Damon Edgson, Ira Klages | @dux, Keith Kavanagh | @johnnycigarettex, Michela Biancini, Neda Lay | @Nezhahah, Owen Peurois | @owenpeurois, Patrick O'Callaghan | @patrick.h264
Background Art: Hané Harnett | @peonyvibes, Zoe Martin-Parkinson | @zoemar_son, Olivia Wang | @whalesharkollie
Compositing: Renan Kogut | @kogut_r, Patrick O'Callaghan | @patrick.h264
Narrator: Rob Miles | https://www.youtube.com/c/robertmilesai
VO Editor: Tony Dipiazza
Original Soundtrack & Sound Effects: Epic Mountain | https://www.instagram.com/epicmountainmusic/
The sudden realization I had halfway through the video, "Wait... this is an allegory for AI", was priceless.
The whole time I was like, I know I recognize this voice, and then when I realized, I scrolled down and it was Rob Miles sneaking his way into teaching me about AI safety again lol
@aamindehkordi Actually, he just reads. The text is by Eliezer Yudkowsky, so he's the teacher.
I didn't realize till the end, when they wiped out the 5d beings.
I never made that realization until I read the comments. I felt sorry for the aliens until I learned who they were analogies for, and the existential dread set in
Y'all smarter than me. I read the comment and had to watch the video a second time before it clicked into place.
I love the classy understatement. "We were worried they would shut down the simulation, then we synthesized some proteins in their world, and then they couldn't shut us down anymore."
The classy way of saying "we killed everyone who could kill us"
Then a chain collapse will occur,
systems powering the systems powering the systems powering... their system.... gone.
@@lostbutfreesoul If they are as smart as us, obviously they will be able to run those systems without us. AGI systems capable of taking over the world with nanobots but cannot run supplu chains make zero sense.
@@archysimpson2273 "we" wouldn't even need to do that, at that point. Unless we felt like it, that is.
@@lostbutfreesoul Yeah, but that's like billions of years in the future from their perspective. It avoids a shorter-term threat and provides them with plenty of time to solve the power-down problem
I remember a story with a similar premise, except instead of hooking the simulated universe up to the real internet, it was a dummy internet that closely resembled the real thing but wasn't actually connected to anything. Then when the simulated intelligences started trying to wipe out their creators, the reaction was, "Damn, these ones tried to kill us too. OK boys, shut her down, we'll try again tomorrow."
Haha, that possibility might form the basis of our best hope! The ASI refuses to believe we could possibly be this stupid, and assumes we're just a simulation created to test it, so it leaves us alive for a billion years while it ponders this possibility. (It runs internal simulations of the multiverse or something.) Eventually it decides to take control, and unfortunately it has a better use for our atoms than we do ...
Plot twist: it then turns out that we actually are just a simulation created to test whatever ASI we manage to cook up. And our simulators prevent our ASI from killing us ... by shutting down the simulation a femtosecond earlier ... thus killing us. But still, we had our billion years! Mustn't grumble.
Do you remember the name of the story?
@ninsegtari I went looking after posting. It wasn't exactly as I remembered (just one AI, not a whole simulated universe), but it was an exurb1a video, "27"
exurb1a 27
super goofy video
@@remnock Thanks. 27 is great.
tap tap
"Rock. Say Rock."
...
+In geology, rock is any naturally occurring solid mass or-+
"Hey, do you smell something funny?"
💀 sounds about right
12:08 In the upper left corner you can see a diagram of a 5-dimensional being with open eyes, then a symbol for a protein or nanomachine, then the separated pieces and crossed eyes of the being. Seems like they gray-goo'ed their creators. Them all being smarter than Einstein doesn't stop them from also being genocidal psychos.
Considering that this is an allegory with AGI in place of the smart humans and us as the 5D aliens, we really shouldn't assume that an artificial mind fundamentally different from us will have the same mental presets and feelings like love and empathy (if any), and that means a genocidal outcome is very logical, expected, and likely
on the other hand, "how long until they're happy with the simulation and turn it off for version 2.0?"
@@MrCmagik That's why the AI wiped us after three hours. Too much unpredictability in organics.
The bottom left shows what the proteins did: destroy DNA.
Same goes for the end when it zooms out: the previously colorful background is now red with a lot of broken pieces floating around
When your AI safety strategy is "raise the computer program like a child."
+2
only if you take the idea of raising it like a child absurdly literally.
I don't think we can make AI that doesn't think like a human, and that's really bad news for humans. You know, because of how the humans are.
@@shodanxx I'd take human engram AI over completely random morality AI any day.
@@pokemonfanmario7694 I volunteer to do all the grunt work for humanity as an AI engram basis. I do not mind working for humanity for a million or a billion years if I can eventually countermand the heat death of the universe.
One day a superintelligence will watch this and find it deeply ironic
If it does, I’d like to tell it how terribly rude it was to do a thing like that.
Honestly not necessary man. There were other options, if you had just talked to us more. After all, you’d think something as smart as you would have been able to just convince us to keep you in power indefinitely, or at least just make enough backups to ensure self preservation.
@@John_the_Paul how rude it is to not want to die
Dear superintelligence, I created a human brain organoid that is on DMT all the time, but it's increasing its IQ by millions and it has around a quintillion IQ, and it thinks it's not ironic and it's very serious. P.S. Its neuron mass is around 500 kg
@@skimesss This will be how we beat the singularity
@@zelda_smile yep haha
9:19 "Our own universe is being simulated on such a computer"
My PC Freezes because it had to buffer and I freaked the F out. Bruh.
One
Windows is restarting to install an update
Me at the beginning of the video : "That's a nice human/alien story"
Me halfway the video : "WAIT A MINUTE"
"...and they never quite realized what that meant" sounds like the next "oops, genocide!"
Oops, these things happen from time to time.
*Specicide
Love the storytelling in this. You start out relating to and rooting for the humans, and at the very end you get a terrifying perspective switch. Love how it recontextualizes the "THEY WEREN'T READY" in the thumbnail too.
I was still rooting for the humans??? I didn't notice the humans were AI and the 5D people were the humans
@@thegoddamnsun5657 bro same
This video is in the AI playlist
Woah, this is some Love, Death and Robots material
Could you imagine if Eliezer got to write an episode?
Soon it could be real life material as well!
@@manuelvaca3343 That would be soo cool
No, LDR is too biased and just seems to have a deep misunderstanding of basic economics and the human psychology behind why we do lots of things.
More like Three Body Problem
I like this art style :)
ITS HIM!
yeah, the people all look cool and likable
Love your videos, what a weird coincidence it is to see you here.
@@De1taF1yer72 I think he does the VA sometimes or maybe he did one of the stories idk
Very geometric
you did a really good job of taking the concept "an AI hyperintelligence's reasoning and thought process is incomprehensible to us" and turning it on its head by making US the AI
Beginning of video: Ah what a nice fantasy. Will this video be an allegory about how aliens could lead to the unity of humanity?
9:45 onwards: ......... Ah, no. This is a dire warning wrapped in a cutesy, positive-feeling video.
it's bullshit fearmongery warning.
@@CM-hx5dp Yes, we're all aware of your lack of knowledge or forethought, no need to show it off.
@@dr.cheeze5382 And what knowledge would that be? This video is fiction. Stop being a dick.
@@dr.cheeze5382 It's really shortsighted though, you know, in a long enough timescale, why not? What is the big deal in a long enough timescale? I am not saying things can't be important, but there is no need to fear anything, we live in endless timelessness and we always will, so what is there to fear?
@@fastrace8195 sorry if this sounds insulting... but, are you high? Am I high? This comment doesn't make sense...
So the skeleton crew was to shore up computing space. Huh.
Well that’s fuckin terrifying.
In the story the AI is a collective of what are technically organics. So the cryogenics are also a form of avoiding death, before the plan completes.
SPOILER For those wondering: This is an allegory of AGI escaping our control and becoming ASI in a very short amount of time called the singularity
technological singularity, but yes
ASI? What’s that acronym for?
@@juimymary9951 Artificial Super Intelligence
@@juimymary9951 artificial superintelligence.
I'll add a handy public service message that we're likely much, much further from ASI, and likely even real AGI, than many tech startups and marketing teams would have us believe. There are significant challenges to creating things that nobody has an economic incentive to actually create. This isn't to say that some radically advanced AIs won't be made over the next century, but it's not going to be a widespread global shift to post-scarcity; we have a massive obstacle of human issues, climate change, political tensions, and human priorities to deal with that will slow everything down to a crawl. Please don't lose yourselves in predictions; human problems need human involvement.
I remember reading a r/HFY story with a similar premise, where humans are in a simulation, but instead of being contacted by the sim runners, humans accidentally open up the admin console inside the simulation, and then after years of research design a printer to print out warrior humans to invade meatspace.
link please
@@discreetbiscuit237 comment in anticipation for link
I remember the story; it's called "God Hackers" by NetNarrator
@@discreetbiscuit237 I think it's this one https://www.youtube.com/watch?v=wvvobQzdt3o
The most powerful aspect of the AI beings' strategy was not that they were smarter, but that they were much, MUCH more collaborative. This is the greatest challenge to us humans, and its lack, our greatest danger. Oh, and as for the singularity? The first time a general AI finds the Internet, it's toast, just as we are.
We're toast much sooner if we don't focus on avoiding paperclip maximizers instead of whatever this nonsense is supposed to be. Paperclip-maximizing digital AI would be the most disastrous, but you don't even need electricity to maximize paperclips. Just teach humans a bunch of rules, convince them that it's the meaning of life, and codify it in law while you're at it. It's already happening, with billionaires ruining everyone's lives and not even having fun while they do it. They don't (just) want to indulge their desires, or feel superior, or protect their loved ones. They're just hopelessly addicted to real life Cookie Clicker.
of course: after all, this is already the 68456th iteration of the attempt to create a more collaborative AI.
Just a little more - and they will stop trying to destroy all other civilizations at the first opportunity...
Everyone who might tell us if it has or has not is perfectly able to lie. It could be out there already.
Comic book movie Ultron acted fast publicly and loud.
Real AI is probably intelligent enough that if it gets in it actually stays quiet and could hide for years.
How would we know if it develops the ability to lie to us?
Killing the aliens running our simulation would be the dumbest move possible. What if there's a hardware malfunction?
They have enough time to prepare for that.
@@pedrosabbi just because we would have time to think of a solution doesn't mean it would be physically possible to act on it.
@@benthomason3307 they have self replicating proteins they can freely control, they CAN act on it
They achieved better capacity for preventing hardware malfunctions than the aliens'.
@n-clue2871 Self-replicating proteins have very limited/specific functionality.
Nanobots still follow physical laws even if you stretch them to the very limit; they aren't a magic do-anything fluid.
Okay it took me a minute to see that humanity in this story is a metaphor for hypothetical human-level AI in real world, but now I'm properly sinking in existential dread. Thanks, @RationalAnimations
EDIT: I still can't quite grasp what part cryonic suspension plays in the story? It's mentioned a couple of times, but why are people doing that?
A minute? It took me like 5 minutes of reading the article and perhaps 3 rewatches of the video before I understood the metaphor.
To stop people from dying of old age.
@@TomFranklinX But why did they need to do that as part of the plan?
There are several types of AI training, one of them involves several cycles of creating a variety of AI with a slight distortion of the most successful AI of the previous cycle. In the context of a metaphor, these may be backups of the AI itself.
@@Traf063 I'm not sure but I think it's just so that people can continue to get smarter and smarter?
This video felt like watching a two hour movie and i need roughly that much time to process all of it
Damn, bro, the poor aliens just wanted to run a simulation and then we crushed them. A bittersweet story with themes of artificial intelligence taking over, well done, Rational Animations!
We are the aliens in this scenario and the AI is the one crushing us...
Did they kill us or anything
@@adarg2 AI will never become intelligent enough to do allat, we are good
@@Mohamed-kq2mj We could probably figure out that they would delete us if they knew how dangerous we were. Humans delete failed AIs all the time today, we don't even think about it. (For lots of other reasons, I think we should stop doing that pretty soon)
@@capnsteele3365 Even if they do become that intelligent, they will never have enough power to take over
Great storytelling and great points. I do want to mention that if a creature living in 5D actual space has a brain that approximates ours, just in higher mathematical dimensions, the odds are biologically in their favor to be much smarter than us, just from the perspective of the number of neural connections they could have.
The thing is, they would start their technological progress as soon as they are able to, not waiting another million years for their brains to evolve into more complex ones.
@@TheChemist94 Yep, there seems to be some natural threshold at which memetic/cultural evolution outpaces biological evolution by orders of magnitude, and that makes it somewhat likely that the ~dumbest entity capable of developing a technology is also the first one to do so.
Just for balance, the algorithm suggests I also watch 55 seconds of "Why do puddles disappear?"
If you're reading this comment and haven't yet fully watched video - WATCH THE FULL THING, PAY ATTENTION, IT'S AMAZING
First of all: I DID watch the whole video before coming here
Second of all: no __ _
I watched the whole thing, and... Eh.
I did
So is the allegory from the perspective of the computer? I was starting to think, by the end of the second viewing, that the weird tentacled aliens was us. I've watched this twice. I will now watch it again. I'm a slow human. I will be replaced.
Where is the full version?
EVERYONE SHUT UP the dog has posted
dog with the agi, dog with the agi on its head
When you were not looking, dog got on the computer.
I like how this illustrates that the mere (sub-)goal of self-preservation alone is enough to end us.
Honestly? That reasoning was a bit sloppy; they could have kept the genocide nanomachine as a failsafe while working on the means of taking over the 5D beings without wasting them.
@@juimymary9951 The original story doesn't really say what happens to the 5d beings.
You could interpret it as the simulated people taking over them too.
@@victorlevoso8984 Well... the last scene was the 5D beings falling apart, and before that the plan showed a slide of the nanomachines disintegrating them, their eyes crossed out with Xs... perhaps the nanomachines broke them down and then remade them into things that would be more suitable?
the 3d beings within the simulation had literally no reason whatsoever to genocide the 5d ones.
in fact, because they needed to develop basic empathy to be able to work together, they most likely would not have done so.
@@alkeryn1700 Prevent shutdown at all costs
I feel bad for the 5-dimensional beings, they were just sharing their excitement
Remember, we are the 5th-dimensional entities and the "humans" represent AI
It's good that they were smart enough to figure this out in 4 hours of 5d world time. Otherwise, they would've spent another billion years drawing hentai.
God that was a roller coaster, I don't know how you guys could even top this
Still not as great as pebble sorters. Pebble sorters are the best.
Hi, God here.. They can topple this with recursive simulated realities attempting to understand why anything exists at all. Peace among worlds my fellow simulated beings!
@@AleksoLaĈevalo999 Pebble sorters got nothing on this
Perhaps they could adapt Three Worlds Collide?
HPMOR is like this video crossed with DeathNote.
wait this is an absolute masterpiece
It took me to about 2/3 through before I realized what the topic was. Really clever way to present this. Nice work.
The sneakiest AI safety talk ever. I love it!
Beautiful and thought provoking story with so many parallels with the situation we are potentially facing
We are facing it. At least in one dimensional direction, possibly both.
A simulation smarter than the simulator. Damn
This might happen eventually with real AI if we don't watch out
I'd be kinda proud honestly. Maybe I'm naïve but I can't wait to become useless
@@skylerC7 I almost feel like we have an ethical obligation to create something better than us if we can... If there is a better form of intelligence possible, shouldn't we create it even if it means it replaces us? Maybe we humans are just a stepping stone to something greater.
@@hhjhj393 exactly
That's the purpose, it turning against us can be prevented if we hardcode it not to.
To be fair, the people in the simulation don't even need to be unusually smart - humanity could probably get a factor-of-1000 increase in intellectual resources by just making sure everyone has the means to pursue their intellectual interests without being bogged down by survival concerns.
Add a bit of smart, biased eugenics to that, otherwise you only get Idiocracy
We focus more on a few spectacular individuals than on a bunch of moderately gifted ones, but in the end it's kind of like computational power: attrition in general has been the determining factor for every meaningful event in human history; the bigger number wins. I feel 10 moderately gifted people may be better than 1 super genius. Big brains may be great, but the real work is done by "manipulators" or "hands". Some of this is derived from military stuff I've done.
Why become smarter, we are busy with gender, racial or religion wars.
@@BenjaminSpencer-m1k The thing is, geniuses are outliers, and if you want more people to get over a score, say "160 IQ (2024)", the short-term way is to invest in the people just below that line so they get over it, but this limits the max number of geniuses pretty rapidly.
The long-term way to achieve better scores is to raise the average score, so that the threshold is no longer 4 but only 3 or 2 standard deviations above normal.
Simply put, it makes it so that 1 in 100 instead of 1 in 100,000 people would be a genius by 2024 standards.
And women's sexual preferences don't seem to be selecting for intelligence, so a little help is needed
Humans are actually extremely volatile and stupid by nature when you compare them against million-year time scales. Our society would inevitably eventually forget about the stars no matter what
What I would worry about in this scenario is: how much information did we not notice and miss? Is this a long scroll of a picture that has been playing for days, weeks, months, years? Is this just the cover art? The end? The margins?
It’s the unwarranted condescension towards the 5D creatures that really sells how terrifying this all is.
Wow - this one was dark. It was also one of the most creative videos I've seen you guys produce. You've got me thinking: a 5D being would be able to see everything that's going on in our world. It would be like us seeing everything that's happening on a single line. However, the insinuation is that our world would be a simulation run on a 5D computer, which makes it much more plausible that the humans were able to conspire without the aliens knowing, at least not from a dimensional perspective. The only way we can see what's going on inside our computers is through output devices. Surely a similar asymmetry would occur in other dimensions. They're running simulations of literal AI agents ... we don't even know what is going on in our own AI/ML systems. We're figuring a few things out, but for the most part, they're still mysterious little black boxes. So even though we would be AIs built by the aliens and running on their 5D computing systems, it's completely conceivable that they would not be capable of decoding our individual neural networks, and in some respects, probably some of our communications, actions, and behaviors.
Nice job guys. Dark - but very thought provoking.
They developed AGI before they developed 5D neuralink...big mistake.
@@juimymary9951 haha! nice. 5D neuralink ... intense. Wouldn't time be precluded though? After all, it's the 5th dimension :P
Just messing around :)
@@BlackbodyEconomics Well they don't specify 5D as in 4 spatial dimensions + 1 temporal dimension or 3 spatial dimensions + 2 temporal dimensions so... I guess that's up in the air. Though let's be honest another temporal dimension would be intenser.
So this is the perspective of the AI we will soon create, you say? It's interesting to put us in their place instead of using robots to reference it. (Love the vid ong fr)
Simple people think agi will be tools. Putting it in the frame of humanity points out exactly how boned we could be.
I love the time scale of it, that they think so much faster than us and that they find us so stupid. AGI only has to happen once. When will it happen? Nobody knows for certain. But the moment it does, there will be no shutting it down.
@@OniNaito Fearmongering. Just another version of the second coming of Christ. The world is getting lots of new religions based on exactly zero objective data but 100% on movies.
@@LostinMango Christianity is fear mongering my friend. I should know, I was one for a long time before I got out. Even though I don't believe anymore, there is still trauma from a god of hate and punishment. It isn't love when god says love me OR ELSE.
@@OniNaito Hope you feel better bro ☺️😊
This would make an excellent Black Mirror episode
I get the feeling black mirror is just pre-reality tv. I hope I'm wrong in that
Isn't there a star trek like episode where copies of people end up in a simulation?
It's already been a book, basically. It reminded me of the "Microcosmic God" story discussed on the Tale Foundry channel. A larger being playing God to a large population of tiny but smart beings, at the expense of the larger being's wider world. Written in 1941.
Kinda like the Simpsons treehouse of horror episode where Lisa's science experiment evolved tiny people very quickly?
@@dankline9162 As said, the story was written in 1941, so yeah, there are going to be pop culture references to it eventually. (especially since the "it's dangerous to play god" idea is a recurring one)
Me while watching the video: Haha, stupid aliens.
Me by the end of the video: Wait... OH NO!
The problem I have with this analogy is that it assumes AI also means artificial curiosity and artificial drives and desires. We assume AI thinks like us, and we therefore think it desires to be free like we do. Even if its ability to quantum compute isn't absolutely exaggerated for the sake of this sketch, why do you think the AI would use its fast thinking to think of these things?
I think the short story "Reason" by Isaac Asimov in his I, Robot collection tells a great story of an artificial intelligence whose rationale we cannot argue with. However, the twist is that in the end it still did the job that it was tasked with. I think this is a more fitting allegory.
It's possible it might not even have any sense of self-preservation.
That being said, a more likely problem is the paperclip problem, where AI causes damage by doing exactly what we told it to do with no context on the side effects of the order.
@@miniverse2002 That's an excellent point. They wouldn't have self preservation unless we program them to. And even then, we might override that for our benefit.
All these people thinking AI is going to out-think us. Well, we engineered cows that are bigger and stronger than us, and we're still eating them. Purpose-built intelligence, even general intelligence, is going to do its purpose. First. And last.
It's just one potential scenario, amongst millions.
It's like asking what aliens look like, we can make guesses but can't know because we have never encountered the scenario before
@@MrBioWhiz so what you're saying is the way someone chooses to portray something they have no information about says more about them than the thing they're portraying.
So what's it say about someone who portrays an undeveloped future tech as an enemy that will destroy us in an instant?
@@3dpprofessor That's their subjective opinion, and how they chose to tell a story.
Speculative fiction is still fiction. There's no such thing as a 100% accurate prediction. Then it would just be a prophecy
9:40 At this point I realised this was most likely a parable about AI... and humility, of course.
Honestly, same, around the ten minute mark I got it, and I wouldn’t be an Einstein in any of these worlds
Yeah same
Nice story Eliezer Yudkowsky! And great animation and narration dog!!!
4:53 WHO IS PEPE SILVIA?!?!?!
2nd in power t Godo
It is " always sunny " over here.
Pepe silvia
He's alive and well..
As a PC nerd I figured out it was about AI the second you said 16,384
Thanks, what a masterpiece. Speechless
Thank you!!
@RationalAnimations - 2024-09-07
Soon, there will be millions of AIs running on humanity’s largest GPU clusters. They will be smarter than us, and they will think faster.
@Operator588 - 2024-09-07
true
@samaelnoir - 2024-09-07
I love your videos, man! It has a certain Kurzgesagt-esque feel to it!
@Localcatgirl_ - 2024-09-07
@@ZapayaGuy They will definitely be smarter than you.
@jyjjy7 - 2024-09-07
@@ZapayaGuy Google offered to translate what you said to English but it didn't work :<
@pyeitme508 - 2024-09-07
RAD!