AI Ruined My Year

Robert Miles AI Safety - 2024-06-01

How to Help: https://aisafety.info/questions/8TJV/How-can-I-help
https://www.aisafety.com/

AI Safety Talks: https://www.youtube.com/@aisafetytalks

There's No Rule That Says We'll Make It: https://www.youtube.com/watch?v=JD_iA7imAPs
The other "Killer Robot Arms Race" Elon Musk should worry about: https://www.youtube.com/watch?v=7FCEiCnHcbo

Rob's Reading List:
Podcast: https://rmrlp.libsyn.com/
YouTube Channel: https://www.youtube.com/@RobMilesReadingList
The FLI Open Letter: https://www.youtube.com/watch?v=3GHjhG6Vo40
Yudkowsky in TIME: https://www.youtube.com/watch?v=a6m7JynBp-0
Ian Hogarth in the FT: https://www.youtube.com/watch?v=Z8VvF82T6so

Links:
The CAIS Open Letter: https://www.safe.ai/work/statement-on-ai-risk
The FLI Open Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The Bletchley Declaration: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
US Executive Order: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Some analysis of the EO: https://thezvi.substack.com/p/on-the-executive-order
"Sparks of AGI" Paper: https://arxiv.org/abs/2303.12712
Yudkowsky in TIME: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Hogarth in the FT: https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2
The AI Safety Institute: https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
Responsible Scaling Policies: https://metr.org/blog/2023-09-26-rsp/
The EU AI Act: https://artificialintelligenceact.eu/the-act/
Hinton on CBS: https://youtu.be/qpoRO378qRY

Sources:
"Sparks of AGI" Talk: https://www.youtube.com/watch?v=qbIk7-JPB2c
Yann LeCun on Lex Fridman's Podcast: https://www.youtube.com/watch?v=SGzMElJ11Cc
White House Press Briefings: https://x.com/TVNewsNow/status/1663640562363252742
https://www.youtube.com/watch?v=JHNkyHl5FpY
King Chuck on AI: https://www.youtube.com/watch?v=0_jw40Ga_mA

"Equally sharing a cake between three people - Numberphile": https://www.youtube.com/watch?v=kaMKInkV7Vs

Community, various screenshots
The Simpsons
Sneakers (1992)

Thanks to Rational Animations for the train sequence!
https://www.youtube.com/@RationalAnimations

With enormous thanks to my wonderful patrons:
- Tor Barstad
- Timothy Lillicrap
- Juan Benet
- Sarah Howell
- Kieryn 
- Mazianni
- Scott Worley
- Jason Hise
- Clemens Arbesser
- Francisco Tolmasky
- David Reid
- Andrew Blackledge
- Cam MacFarlane
- Olivier Coutu
- CaptObvious
- Ze Shen Chin
- ikke89
- Isaac
- Erik de Bruijn
- Jeroen De Dauw
- Ludwig Schubert
- Eric James
- Owen Campbell-Moore
- Raf Jakubanis
- Esa Koskinen
- Nathan Metzger
- Jonatan R
- Gunnar
- Laura Olds
- Paul Hobbs
- Bastiaan Cnossen
- Eric Scammell
- Alexare
- Reslav Hollós
- Jérôme Beaulieu
- Nathan Fish
- Taras Bobrovytsky
- Jeremy 
- Vaskó Richárd
- Andrew Harcourt
- Chris Beacham
- Zachary Gidwitz
- Art Code Outdoors
- Abigail Novick
- Edmund Fokschaner
- DragonSheep 
- Richard Newcombe
- Joshua Michel
- Richard 
- ttw
- Sophia Michelle Andren
- Alan J. Etchings
- James Vera
- Stumbleboots
- Peter Lillian
- Grimrukh
- Colin Ricardo
- DN
- Mr Cats
- Robert Paul Schwin
- Roland G. McIntosh
- Benjamin Mock
- Emiliano Hodges
- Maxim Kuzmich
- Joanny Raby
- Tom Miller
- Eran Glicksman
- CheeseBerry
- Hoyskedotte
- Alexey Malafeev
- Jeff Starr
- Justin 
- Liviu Macovei
- Javier Soto
- David Christal
- Jam
- Just Me
- Sebastian Zimmer
- Matt Thompson
- Xan Atkinson
- Andy
- Albert Higgins
- Alexander230
- Clay Upton
- Alex Ander
- Carolyn
- Nathan Rogowski
- David Morgan
- little Bang
- Chad M Jones
- Dmitri Afanasjev
- Christian Oehne
- Marcel Ward
- Andrew Weir
- Miłosz Wierzbicki
- Tendayi Mawushe
- Kees 
- loopuleasa
- Marco Tiraboschi
- Fraser Cain
- Patrick Henderson
- Daniel Munter
- Ian
- James Fowkes
- Len 
- Yuchong Li
- Diagon 
- Puffjanga
- Daniel Eickhardt
- 14zRobot 
- Stuart Alldritt
- DeepFriedJif 
- Garrett Maring
- Stellated Hexahedron
- Jim Renney
- Edison Franklin
- Piers Calderwood
- Matt Brauer
- Mihaly Barasz
- Rajeen Nabid
- Iestyn bleasdale-shepherd
- Marek Belski
- Luke Peterson
- Eric Rogstad
- Max Chiswick
- slindenau
- Nicholas Turner
- Jannis Funk
- This person's name is too hard to pronounce
- Jon Wright
- Andrei Trifonov
- Bren Ehnebuske
- Martin Frassek
- Matthew Shinkle
- Robby Gottesman
- Ohelig
- Sarah 
- Nikola Tasev
- Tapio Kortesaari
- Soroush Pour
- Boris Badinoff
- DangerCat 
- Jack Phelps
- Kyle Green
- Lexi X
- John Slape
- Joel Gardner
- Christopher Creutzig
- Johann Puzik
- Pindex 
- RMR
- Andrew Edstrom
https://www.patreon.com/robertskmiles

@WolfDGreyJr - 2024-06-01

7:00 GPT-4 did not score 90th percentile on the bar exam. That figure is relative to test-takers who had already failed the bar at least once; it would be 48th percentile compared to the general test-taking population. GPT-4's exam was also not graded by people trained in scoring bar exams.

For further info and methodological criticism, refer to Eric Martínez's paper "Re-evaluating GPT-4's bar exam performance" in Artificial Intelligence and Law.

@gabrote42 - 2024-06-01

Doing the good work right there. Have a bump

@Fs3i - 2024-06-01

It still beats half of lawyers, roughly. Half of them!

@taragnor - 2024-06-01

@@Fs3i It beats people at test-taking, not at practicing law. There's a difference. AI is naturally well suited to test-taking because there's a lot of training data from previous tests and questions, so it can come preloaded with those answers already trained into it. It's the same with AI coders and their ability to pass coding interviews. The thing is that tests and the real world are very different things. Computers have been better at being databases than humans for a long time, so the fact that they can do information lookup isn't all that impressive.

@WolfDGreyJr - 2024-06-01

@@Fs3i I should clarify something I misconstrued after editing: the 48th percentile figure refers to the essay section. In total, the exam score it was evaluated to have would be 69th percentile (nice), which is still barely passing. And the population isn't lawyers, it's people trying to become lawyers; about a third don't pass in the first place.

That still puts us at beating half of lawyers, because maths, but I needed to correct myself before moving on to the bigger issues. Plus, when the reported essay performance specifically is scored against those who passed the exam, it comes out to a rather unimpressive 15th percentile, without even questioning whether that score is a fair assessment.

There are also significant doubts about the scoring methodology by which the exam score of 297 (still really good for an LLM) was arrived at. The essays were not graded according to NCBE guidelines or by people specifically trained in grading bar exams, which is especially problematic for the MPT and MEE parts, which are actually intended to test one's ability to practice law or explain how the law would apply to a given set of circumstances.
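
To make the reference-population point concrete, here is a minimal sketch in Python (the score distributions are made up for illustration; they are not Martínez's data): the same raw score lands at very different percentiles depending on which pool you compare it against.

    def percentile(score, population):
        # Percentile = share of the reference population scoring below `score`.
        below = sum(1 for s in population if s < score)
        return 100 * below / len(population)

    gpt4_score = 297  # the reported exam score

    # Hypothetical pools: repeat test-takers skew weaker than all test-takers.
    repeat_takers = [250, 260, 265, 270, 275, 280, 285, 290, 300, 305]
    all_takers    = [270, 280, 285, 290, 295, 300, 305, 310, 315, 320]

    print(percentile(gpt4_score, repeat_takers))  # 80.0 - looks impressive
    print(percentile(gpt4_score, all_takers))     # 50.0 - far less so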

@badabing3391 - 2024-06-02

@@WolfDGreyJr I now wonder what the exact details are behind statements like various LLMs doing well on graduate-level physics exams and contest-level mathematics.

@MrMpakobec - 2024-06-01

"Man Who Thought He'd Lost All Hope Loses Last Additional Bit Of Hope He Didn't Even Know He Still Had" LOL

@krakow10 - 2024-06-01

The Onion doesn't miss

@tristenarctician6910 - 2024-06-01

Gone into hope debt

@mikeuk1927 - 2024-06-04

@@tristenarctician6910 Nah, there is still more hope to lose. Just let reality do its job :3

@Kenjuudo - 2024-06-04

@@mikeuk1927 I don't think you necessarily want that.

@Respectable_Username - 2024-06-04

"Who the hell am I?" Well, you're a person with good reasoning skills who isn't tied to a corporate profit motive, who knows the topic very well, and who has been actively communicating about it for years! It can be intimidating being the subject matter expert for a really important topic, and it can weigh heavily on your shoulders, but you feel that weight because you care . And what we need more than anything else is rational thinkers who have years of study in the topic who don't have a profit motive and who care. And you won't get it right 100% of the time. But you've got the highest proficiency in this area in the current party, and so are more likely to roll above the DC than most others in this world! ❤

@clray123 - 2024-06-06

Actually he is a tool with a much too high opinion of himself.

@imveryangryitsnotbutter - 2024-06-07

@@clray123 Well you two should get along swimmingly then

@clray123 - 2024-06-07

@@imveryangryitsnotbutter You are trying to insult me, but your attempt is not making any sense. Try again harder.

@inaim2 - 2024-06-08

yes mentor the ppl with potential and try to get involved with the growth of AI we believe in you :)

@ivoryas1696 - 2024-06-20

​@@clray123
My question is: why are you trying to insult him? I mean... he pretty clearly knows he doesn't know it all (otherwise, this comment wouldn't be responding to a direct quote), and he's willing to learn and improve himself... What's the problem?

@RationalAnimations - 2024-05-31

WE ARE SO BACK

@WoolyCow - 2024-06-01

not me being like 'hey that voice kinda sounds familiar...oh its the ai guy, wait doesn't he do stuff with [checks comments] yeah that makes more sense'

@En1Gm4A - 2024-06-01

As for AI alignment, my dream is a neurosymbolic task-execution system with a knowledge graph, and a visualization for the user of the suggested problem-solving plan, with alternatives to choose from. Human in the driver's seat. Misuse management by eliminating risky topics from the knowledge graph.

@acethirtysix8378 - 2024-06-01

finishes watching video
It's so over!

@AtomosNucleous - 2024-06-01

Proposal:
Cut the "random assignment team" part into a short. It could go viral and bring more attention to this channel and its topics.

@NicholasWilliams-uk9xu - 2024-06-01

Doesn't seem to remotely care about personal data harvesting, or about YouTube and its influencer trolls using it to harass individuals and leverage it for psyops.

@AtilaElari - 2024-06-01

The horrible feeling of "Are you saying that I am the most qualified person for the task? Are you saying that everyone else is even worse than I am?!"
It is dreadful when the task in question is mundane. Its hard to comprehend when the implication of said task is the possibility of an extinction event.

For what it's worth, I think you are as qualified for this task as anyone can be in our circumstances. Go save the world! We are rooting for you! No pressure...

Seriously though, seeing multiple comments where people say they started doing AI safety as a career thanks to you shows that you ARE the right person for the job.

@buddatobi - 2024-06-01

You can help too!

@krishp1104 - 2024-06-01

this reminds me of the last two episodes of the three body problem on Netflix lmao

@JorgetePanete - 2024-06-01

it's*

@gabrote42 - 2024-06-01

Absolutely true

@gavinjenkins899 - 2024-06-01

I mean, by DEFINITION, whoever the most qualified person is has that feeling; that doesn't really change the "implications" for us in general.

@XIIchiron78 - 2024-06-03

"maybe I can hide my misunderstanding of moral philosophy behind my misunderstanding of physics" lmao

@XIIchiron78 - 2024-06-05

I have seen a confusing number of people unironically hold the view that "humans should be replaced because AI will be better"

At what??

@kirktown2046 - 2024-06-05

@@XIIchiron78 Starcraft. What else matters?

@SianaGearz - 2024-06-08

I'd love to be able to understand your comment but I'm struggling. Any help?

Edit: oh, the post-it at 19:39. It wasn't legible when I first watched.

@XIIchiron78 - 2024-06-08

@@SianaGearz it was a little joke he put in during the Overton window segment

@xXCindellaXx - 2024-06-22

@@XIIchiron78 impact on nature maybe

@Badspot - 2024-06-01

LLMs are particularly good at telling lies because that's how we train them. The RLHF step isn't gauged against truth, it's gauged against "can you convince this person to approve your response". Sometimes that involves being correct, sometimes it involves sycophancy, sometimes it involves outright falsehood - but it needs to convince a human evaluator before it can move from dev to production. The "AI could talk its way out of confinement" scenario has moved from a toy scenario that no one took seriously to standard operating procedure, and no one even noticed.
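
For anyone who wants to see why this falls out of the training setup, here is a minimal sketch of the pairwise preference loss typically used to train RLHF reward models (a toy linear scorer with made-up sizes; real reward models are fine-tuned LLMs). The label is "which response the human evaluator approved", not "which response is true":

    import torch
    import torch.nn.functional as F

    # Toy reward model: a linear layer scoring a 768-dim response embedding.
    reward_model = torch.nn.Linear(768, 1)

    def preference_loss(emb_approved, emb_rejected):
        # Bradley-Terry pairwise loss: push the score of the response the
        # evaluator approved above the one they rejected. Nothing in this
        # signal distinguishes "convincing" from "correct".
        r_approved = reward_model(emb_approved)
        r_rejected = reward_model(emb_rejected)
        return -F.logsigmoid(r_approved - r_rejected).mean()

    # Random stand-in embeddings for a batch of 4 comparison pairs:
    loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
    loss.backward()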

@mindrages - 2024-06-01

Your second sentence is quotably spot-on.

@devoNo2good - 2024-06-01

This

@ChristianIce - 2024-06-01

An LLM can just be coincidentally right or wrong, it can't "lie".
It doesn't know what the words mean, it repeats words like a parrot.

@lwmburu5 - 2024-06-01

@@ChristianIce the stochastic parrot model is disfavored by mech interp

@LadyTink - 2024-06-01

I noticed, obviously. When your fav AI safety channel disappears right when AI safety suddenly seems like the most important thing xD

@Kenionatus - 2024-06-01

In today's news: dozens of influential AI safety researchers and journalists killed in series of plane crashes

@Eddie-th8ei - 2024-06-01

"just when the world needed him the most, he stopped uploading to his youtube channel"

@someguycalledcerberus9805 - 2024-06-02

I had been wondering if he's busy because he's working in one of the teams and simply doesn't have time or signed an NDA.

@darkzeroprojects4245 - 2024-06-02

@@Kenionatus I'd not be surprised if it came true.

@RoulDukeGonzo - 2024-06-03

I honestly thought he looked at GPT output being charming and was like, "oh, I guess I was wrong".

@johannesdolch - 2024-06-08

"The cat thinks that it is in the Box, since that it is where it is."
"The Box and the Basket think nothing because they are not sentient." Wow. That is some reasoning.

@Mrpersonman0 - 2024-06-24

It's entirely accurate I agree.

@NitFlickwick - 2024-06-01

Just remember, Rob, Y2K was mocked as the disaster that didn’t happen. But it didn’t happen because a lot of people realized how big of a deal it was and fixed it before the end of 1999. I really hope we are able to do the same thing with AI safety!

@wojtek4p4 - 2024-06-02

The scary thing to me: with Y2K, almost all people wanted it not to happen.
But with the Y2K 2: Electric Boogaloo AGI risks, there are some people (three-letter agencies, companies, and independents) who want AI threats to happen, but controllably. That means that instead of all efforts focusing on mitigating the issue, we're fumbling around implementing random measures in the hope they help, while those groups focus on making sure we're not doing that to them.

@duytdl - 2024-06-02

Add the ozone disaster to that list too. We barely got away with it. If it had happened today, dollars to donuts we'd never have been able to convince enough people to ditch even their hairsprays. The internet (social media particularly) has done more harm than good.

@ChrisBigBad - 2024-06-02

"If our hygiene measure work, we will not get sick and the measures taken will look like they were never necessary in the first place."

@XenoCrimson-uv8uz - 2024-06-02

@@duytdl I disagree with that. Without the internet I wouldn't have known climate change was real, because everyone's attitude was normal and not panicking.

@GabrielPettier - 2024-06-02

I'm pretty sure it's one of the things he hints at at 44:15

@caleblarson6925 - 2024-05-30

Hey Rob! I just wanted you to know that I've been watching your videos for several years (all the way back to the stop button problem). This year you inspired me to apply to MATS to finally get involved in AI safety research directly, to which I've now been accepted! Thanks for making these videos over the years, you've definitely influenced at least one more mind to work on solving these extremely important problems.

@alexbistagne1713 - 2024-06-01

Congrats!

@deifiedtitan - 2024-06-01

That’s great, well done

@NicholasWilliams-uk9xu - 2024-06-01

If he actually wanted people to speak out, he wouldn't have said "autism" then split off into useless skinny nerdery talk. (He doesn't care; he sucks up to YouTube for a paycheck, and harvests personal data and intellectual property for his content, btw - your intellectual property and personal data.)

@timothy6966 - 2024-07-01

God, it’s like looking in a goddamn mirror. I switch between “near” and “far” mode on a daily basis. If I stay in near mode I’ll be committed to an insane asylum in a week or so.

@naptime_riot - 2024-06-01

I started watching your videos years ago, and you're the person I trust the most with these questions. I absolutely noticed you disappeared. This is not some parasocial BS, just the truth. You should post as much as you want, but know that your voice is wanted and needed now.

@Alorand - 2024-06-01

The problem with fixing the AI alignment problem is that we are already dealing with government and corporate alignment problems...
And those governments and corporations are accelerating the integration of AI into their internal structures.

@ZappyOh - 2024-06-01

Yes ... all the money goes toward aligning AI to governments and corporations.
It is hard to envision that as anything other than extreme dystopia :(

@Frommerman - 2024-06-01

The way I put it is that we know for a fact misaligned AI will kill us all because we've already created one and it is currently doing that. It's called capitalism, and it does all the things the people in this community have been predicting malign superintelligences would do. Has been doing them for centuries. And it's not going to stop until it IS STOPPED.

@Shabazza84 - 2024-06-01

Can't happen in my country yet. They're still figuring out how to "govern" without using a fax machine,
and how to introduce ways to actually use the electronic ID you got 14 years ago.

@cortanathelawless1848 - 2024-06-01

I mean, Israel is literally using AI to kill enemy combatants in their family homes.

@nastrimarcello - 2024-06-03

Autism for the win 20:40

@Gaswafers - 2024-06-01

Suffering from "fringe researcher in a Hollywood disaster movie" syndrome.

@MetsuryuVids - 2024-06-01

Don't look up.

@endlessvoid7952 - 2024-06-01

And like a Hollywood movie, the risks aren’t real 😅

@flickwtchr - 2024-06-01

Do you also refer to Connor Leahy as having a "Messiah complex"? Why is it so many AI bros go straight to the ad hominem attack rather than engage arguments?

@endlessvoid7952 - 2024-06-01

@@flickwtchr I mean... Kinda. Have you seen interviews with him?

@D_Cragoon - 2024-06-01

​@@endlessvoid7952
This video includes an example of an AI, even in its current state, being able to formulate a strategy that involved misleading a human. That's what AI can already do.
Many common objections to taking AI safety seriously are addressed in this other video from this channel: https://m.youtube.com/watch?v=9i1WlcCudpU

@NitFlickwick - 2024-06-01

First, the cat joke. Then the “depends on autism” Overton window joke. Glad to have you back! - Signed “a guy who is happy to ignore (ok, doesn’t even see) the Overton window”

@gavinjenkins899 - 2024-06-01

You can also just be privileged/arrogant instead of autistic. It's your chance to put those things to good use! Like, at my current job, I know full well I can easily get another job with my education and background, so I don't care at all about just slamming home brusque comments that are clearly true.

@kaitlyn__L - 2024-06-02

@@gavinjenkins899 that’s part of the thing though, isn’t it? In everyone else, it requires that kind of personality. That’s part of why us autistic people often get called arrogant when we’re trying to help others!

@ejayAD - 2024-06-02

Great Rob love this thank you!

@MrDoboz - 2024-06-02

also Elon jumping on changing planet lol

@AtomicVertigo_Comics - 2024-06-02

@@kaitlyn__L so true!

@evrimagaci - 2024-06-04

It's good to see you back, Robert. This video confirms what I've been seeing in the field too: things are changing, drastically. Even those who were much more reserved about how AI will change our lives seem to have changed their points of view. By that I mean: how "The Future of AI" was talked about a mere 1.5 years ago vs. today is drastically different among the scientists who know the field. I am not saying this to take a stab at the community; I think it is honorable to adapt to the landscape as it changes without our control. It just signals that AI (and its safety) is actually way more important than what has been portrayed to the public in the past couple of decades. We need to talk more about it, and we need to underestimate it much less.

@UnPetitPoulet - 2024-06-01

5:24 Is this the death sound from the game Noita?
In Noita, players kill themselves a LOT while playing god, combining dangerous (and often really dumb) spell combinations to make their magic wand as f*ing powerful and game-breaking as possible.
Now I can't help but see AI as a wand we are collectively tinkering with and testing randomly. What could go wrong?

Spoiler: I had a run where I cast infinite spontaneous explosions that spawned on my enemies. At one point I ran out of enemies, so it relocated onto my character... Funniest shit, I'll do it again

@Frommerman - 2024-06-01

Lmao I literally just finished the sun quest for the first time. Nice to see a fellow Minä.

@awadafuk4863 - 2024-06-01

It definitely is. Had me shouting at my phone in Finnish 😤😤

@cameronforester8413 - 2024-06-01

Homing rocks are pacifist 🪨 ✌️

@cerocero2817 - 2024-06-01

After all, why not? Why shouldn't I put a larpa on my rapid fire nuke wand?

@x11tech45 - 2024-06-01

35:14 "OpenAI's super alignment team (that was dissolved) seems promising and deserves its own video" - combined with the horn blow - combined with the visual stimulus ("nevermind") - made understanding the spoken words difficult to understand. Thankfully, closed captioning was available.

@JB52520 - 2024-06-01

I think that was intentional, since the words became irrelevant. Anyone just listening might have heard outdated information without getting the joke.

@x11tech45 - 2024-06-02

@@JB52520 oh, I got the joke once I read the words in closed captioning... But the horn stopped me from even hearing the joke.

@Reaperance - 2024-06-04

I wrote a complex research paper for my Abitur (the German equivalent of something like A-Levels) about the possibility and threats of AGI and ASI in late 2019. In recent years, with the incredibly fast-paced development of things like GPT, Stable Diffusion, etc., I find myself (often to a silly degree) incredibly validated.

And terrified.

That aside, it's great to see there are people (much smarter than me) who understand this very real concern, and are working to find solutions and implementations to avoid a catastrophe... working against gigantic corporations funneling massive amounts of money into accelerating the opposite. Oh boy, this is gonna be fun.

@carrotylemons1190 - 2024-06-01

Noita death noise made this even more terrifying than it already was

@SaffronMilkChap - 2024-06-01

Thank you - it was tickling my brain and I couldn’t place it.

@MetallicMutalisk - 2024-06-01

I noticed that too lol

@Brunoenribeiro - 2024-06-01

I thought it was a modification of the Majora's Mask noise. Maybe Noita took some inspiration?

@huhabab - 2024-06-01

I'm so conditioned to that sound that I felt rage and disappointment in myself as soon as it played. Noita ruins lives.

@kriskolish6423 - 2024-06-01

AGI Extinction = Skill Issue

@pafnutiytheartist - 2024-06-01

The problem both sides of the argument seem to be mostly dismissing is economic:
We will almost certainly create systems that can automate a large enough percentage of human labor before we create any superintelligent agents posing existential risks.
This will lead to unemployment, inequality, and a small number of people reaping most of the benefits of the new technology. OpenAI was a nonprofit organization with the aim of benefiting humanity in general, until it achieved some success in its goals and restructured to be a company optimizing shareholder income.

@juliahenriques210 - 2024-06-01

The main overlap is that the same economic pressures that drive the obsolescence of jobs, the coercive laws of competition, also drive the premature deployment of deficient AI systems to control tasks they're not yet ready for. The current example is autonomous vehicles, which still have a hard time functioning outside standard parameters, and thus have been documented to... run over people. On a larger scale, a limited AI system can already do lethal harm when put in charge of, say, an electrical grid or a drone swarm. It's the same root cause leading to different problems.

@arthurdefreitaseprecht2648 - 2024-06-03

Very very well said, up!

@reverse_engineered - 2024-06-02

Thank you Robert. I understand how difficult this must be for you. Imposter syndrome is very real and anyone with the knowledge and self-awareness you have would be well served by being cautious and skeptical of your ability to answer the question properly. But as far fetched as it may seem, you have all the right qualities to help: you are very knowledgeable about the area, you carefully consider your words and actions in an attempt to do the least harm as possible, and you are a recognizable and influential person within the public sphere.

We need you and more people like you to be strong influencers on perception, awareness, and policy making. For anyone working in AI Safety, alignment is the clear problem, and we already know how governments' and corporations' alignments often prioritize their own success over the good of society. Folks like Sam Altman can sign all the open letters they want, but their actions show that they still want to race to be the first and treat safety as a distant third priority.

I think the only hope we have is that enough emphasis is put into research and policy that we can figure out safety before the corporations figure out AGI. There is no way we are going to get them to stop or even slow down much, since that directly opposes their shareholders' interests. Policy and law aren't going to stop them; we have seen that numerous times throughout history and in many areas even today. Perhaps people on the inside could choose to defect or prioritize safety over advancement, but there are so many people excited to make AGI that all of the cautious folks who would blockade or walk out in order to stop things would quickly be replaced by those who are too excited to care.

What we need is knowledgeable and influential people making their way into the higher ranks of these corporations. We need people with real decision making power to be there to make decisions that better align with the good of society and not just short-term profit seeking. People like you.

Godspeed, sir, and thank you for stepping up to help save humanity from itself.

@genegray9895 - 2024-06-01

1:15 No no. We noticed

@arbitool - 2024-06-01

True

@mellowsign - 2024-06-01

We cared.

@ClaimClam - 2024-06-01

i didnt

@LeoCage - 2024-06-01

I definitely noticed, but I figured he was an actual expert and busy.

@adfaklsdjf - 2024-06-01

I was sad

@fritt_wastaken - 2024-06-01

"Sh*t's getting real"
> Noita death sound is playing.

Yeah, I feel you

@Zicore47 - 2024-06-02

That's funny, because I'm playing Noita while watching this...

@selectionn - 2024-06-02

dying to fire and getting noita'd sounds more dangerous than AI

@Marquis-Sade - 2024-06-05

@@Zicore47 What is Noita?

@rehenaziasmen4603 - 2024-06-22

​@@Marquis-Sade
It's a 2D pixelated game of magic and alchemy and lots of dying.

@Marquis-Sade - 2024-07-01

@@rehenaziasmen4603 Lots of dying? Sounds dark

@bennie_pie - 2024-06-02

Rob, thank you for this video! I noticed your absence, but there is more to life than YouTube, and I'm glad your talents are being put to good use advising the UK government. I'm as surprised as you are that the UK seems to be doing something right, considering the mess our government seems to make of everything else it touches. Thanks for levelling with us re your concerns/considerations. AI alignment has been looming more and more, and it's good to have your well-considered views on it. I have a UK-specific question: we've got elections coming up next month, and I wondered if you had any views on how that might affect the work the UK is doing, and whether any particular party seems to be more tuned in to AI safety than the others. I will pose the question to the candidates I can vote for, but thought I'd ask as you are likely more in the know than I am!

@jamieclarke321 - 2024-06-03

I'd be interested to hear Rob's take on this as well.

@zoggoth - 2024-06-01

39:11 I appreciate the joke of saying that companies have to follow EU law while showing a pop-up that still doesn't follow EU law

@Nulley0 - 2024-06-01

Even the camera lost its focus.

@Hexanitrobenzene - 2024-06-03

Doesn't follow?

@zoggoth - 2024-06-03

@@Hexanitrobenzene
The one I was thinking of was that you can't emphasise "I agree" to get people to click on it, but I'm not 100% sure that's in every EU country
However, basically every version of that pop-up breaks "You must [...] Make it as easy for users to withdraw their consent as it was for them to give their consent in the first place." (from GDPR.eu, so definitely EU-wide)
But who knows, maybe that website gives you a pop-up to delete your cookies too!

@FoxtrotYouniform - 2024-06-01

I posit that the reason AI Safety has taken so long to hit the mainstream is that it forces us to confront the uncomfortable reality that nobody is in charge, there are no grand values in governance, and even the most individually powerful among us have no idea what really makes the world tick day to day. Confronting AI Safety, which could be reworded as Superorganism Safety, makes us realize that we even have yet to solve the alignment problem in our governments and other human-staffed organizations like corporations, churches, charities, etc.

The powers that be have very little incentive in that context to push forward in the arena of AI Safety because it inherently raises the question of Superorganism Safety which includes those organizations, and thus puts them at the forefront of the same "is this really what we want from these things" question.

@NealHoltschulte - 2024-06-02

How do I upvote this twice?

@Sal1981 - 2024-06-02

AI alignment is probably more about human alignment.

@tristan7216 - 2024-06-03

"what we want from these things" - there is no we any more, maybe there never was. There's a bunch of people who don't like or trust each other but happen to be geographically co located. This is the fundamental alignment problem no matter what you're trying to to align. Maybe they could align governments and AIs in Finland or Japan, I don't know. Maybe I'm just pessimistic because I'm in the US.

@Hexanitrobenzene - 2024-06-03

You raise a good point, but I don't think it's the main reason at all. Only for the philosophically oriented, maybe.

The problem is that most people are practically oriented and consider things only when they confront them.

@elfpi55-bigB0O85 - 2024-06-03

"there are no grand values in governance"

That's not true. Capitalism, colonial management and the expectations of a society derived from White Supremacist economic theory. There you go.

@coltenh581 - 2024-06-06

That scene around the “Community” table was so great. Awesome work.

@maxwinga839 - 2024-05-31

Hey Rob,

I just finished watching this video with tears streaming down my face. Watching your transition from casual youtuber talking about fun thought experiments to stepping up as they materialize into reality was incredibly moving. What hit me especially was the way in which you captured the internal conflict around being ahead of the Overton window on AI risk.

While I may be just some random person on the internet, I want you to know that you've had a serious impact on my life and are one of the catalysts for my career shift into AI safety, and I deeply appreciate you for that. I was midway through my Bachelor's degree in Physics at the University of Illinois (2020-24) when Midjourney and ChatGPT released in 2022. As a physicist, learning about AI from a mathematical perspective was fascinating and seeing the unbelievable results (that seem so unnervingly mainstream now) really hammered home how impactful AI would be. I started to have some concerns as I learned more, and eventually stumbled upon your channel in December 2022, quickly binging all of your videos and internalizing the true scale of the danger we face as a species. Shortly after, GPT-4 was released while I was staying on campus over spring break with a close friend. I remember distinctly the true pit of existential dread I felt in my stomach as I read the technical report and realized that this was no longer some abstract future problem. Since then, I added a computer science minor to my degree and used it to take every upper-level course on AI and specifically two on trustworthy AI, including a graduate course as a capstone project. I'm now going to be interviewing at Conjecture AI soon, with the goal of contributing to solve the alignment problem.

I've missed your videos over the last year, and often wondered what you were up to (rational animations is great btw!) During this last year I've felt so many of the same conflicts and struggles that you articulate here. I've felt sadness seeing children frolicking with no idea of what is coming, I've been the one to bear the news about the immense dangers we're facing to those close to me, and I've struggled with believing these things intellectually while the world still seems much the same mundane place around me. Hearing you put these thoughts to words and seeing the same struggle reflected elsewhere means a lot to me, and I'm incredibly grateful to you for that. Your rousing speech at the end really moved me and was an important reminder that no matter how lost our cause may feel as yet more bad news rolls in, the only way for our species to prevail is for us to be willing to stand up and fight for a better world. I don't know where my future will lead just yet, but my life's work will be fighting for humanity until the bitter end.

Thank you for everything Rob.

@flickwtchr - 2024-06-01

What a great comment and good luck with your interview at Conjecture. Connor Leahy and Rob Miles are my top favorite thinkers/voices regarding AI safety/alignment issues.

@tonyduncan9852 - 2024-06-01

That's Life, as expressed in the present, made available to all. It should be quite useful, one would think. Causality is inexorable, so hold on to your hat. Best wishes.

@cemacmillan - 2024-06-01

Great to see another person describe the personal side of witnessing and coming to understand an emerging problem, and saying: "I'm going to drop what I am doing, retool myself, and change the center of what I'm doing for reasons other than mammon and the almighty currency unit."

As Rob demonstrates in the video, there is paltry funding for research into AI safety in all of its subdomains, and a correspondingly small number of people actively working on the problem, set against the enormous problem space presented by circumstances. We are living, after all, in a world where a fairly small elite with disproportionate influence in a super-heated segment of the economy are optimizing for a different goal: crafting the successful model in a free-market economy, a target very different from safety, as the histories of automation and scaled, process-modeled industry optimizing return on investment show us.

I'll stop there as I mean to be encouraging. :)

Smart thinking, collaboration and effort remain our best tools to confront the challenge by asymmetric means.

@gavinjenkins899 - 2024-06-01

This is too eloquently written; I'm actually concerned it is ChatGPT lol

@tonyduncan9852 - 2024-06-01

@@gavinjenkins899 You should be concerned that you might be the same. Or something.

@drkalamity4518 - 2024-06-01

20:35 legit had me rollin, nice

@pegatrisedmice - 2024-06-01

😂

@TheOmzee - 2024-06-06

same lmao
The ironic thing is that I failed the theory of mind test; I legit thought Sally would go to the box first, before I thought more about it. T_T

@KelseyHigham - 2024-06-17

ahahaha

@lioedevon4275 - 2024-07-23

I’m glad people are finally taking this shit seriously. As an artist it’s been incredibly frustrating because we’ve been losing our jobs and trying to tell people “ai will come for you next and it’s dangerous” but it feels like people haven’t been listening because they still don’t consider art a real job

@DarkestMirrored - 2024-06-01

I actually have a pair of questions I'm curious to see your take on answering.

1.) Is any serious work on AI alignment considering the possibility that we can't solve it for the same reason that /human/ alignment is an unsolved problem? We can't seem to reliably raise kids that do exactly what their parents want, either. Or even reliably instill "societal values" on people over their whole lives, for that matter.

2.) What do you say to the viewpoint that these big open letters and such warning about the risks of AI are, effectively, just marketing fluff? That companies like OpenAI are incentivized to fearmonger about the risks of what they're creating to make it seem more capable to potential investors? "Oh, sure, we should be REALLY careful with AI! We're worried the product we're making might be able to take over the world within the century, it's that good at doing things!"

@fartface8918 - 2024-06-01

It's less marketing fluff and more trying to trick lawmakers into letting laws be made with OpenAI at the top of the pile, the same way regulations around search engines made with Google at the top favor Google, because they raise the barrier to entry for a competitor. If regulation is made 5-10 years from now when OpenAI is doing worse off, the company would have less influence, so it must make letters like this now, as is its legal obligation to maximize shareholder profits. This is in addition to the normal big-company thing: regulations lose you less money if you're in the lawmaker's ear rather than an activist trying to do what's right/safe/good. Because of these factors, in addition to what you said, no PR statements by OpenAI should be taken as fact.

@taragnor - 2024-06-01

Yeah honestly most of what's going on with OpenAI is a ton of hype. That is what the stock prices of companies like OpenAI and NVIDIA thrive on.

@MisterNohbdy - 2024-06-01

1) I wouldn't say human alignment is "unsolved". Most people are sufficiently aligned to human values that they are not keen on annihilating the entire species; the exceptions are generally diagnosable and controllable. That would be a good state in which to find ourselves with regard to AGI.

2) The letters are mostly not written by such companies; Robert goes through many of the names of neutral experts who signed them in the video. Some hypothetically bad actors in the bunch don't negate the overwhelming consensus of those who have no such motivations.

@juliahenriques210 - 2024-06-01

Both are very good points, and while the first might remain forever undecided, the second one has already been proven factual by autonomous vehicles. While in this case it's more a matter of artificial stupidity, it's still proof that AI safety standards for deployment in the real world are faaaaar below any acceptable level.

@taragnor - 2024-06-01

@@juliahenriques210 Well when you're talking about AI safety, there's two types. There's the "How do we stop this thing from becoming Skynet and taking over the world?" and there's "How do I keep my Tesla from driving me into oncoming traffic".

They're very different problems.

@ZevIsert - 2024-06-01

Attempting to finish the sentence (I think intentionally) left in that cut following 20:30, it'd be "The ability of our society to respond to such things basically depends on aut[ism existing in our species, so that these kind of things are more often said out loud]." Which, if thats actually what Rob said in that cut, would be a really beautiful easter egg to this video.

Edit: "can be said" -> "are more often said".

@DevinDTV - 2024-06-01

This is certainly a virtue of autism, but it lets non-autistic people off the hook too much, imo. You don't have to have autism to reject conformity in favor of rationality. Conforming with an irrational belief or behavior is actually self-destructive, and people only do it out of an avoidance of discomfort.

@singularityscan - 2024-06-01

I am autistic, and the need to inform a group so the collective knows all the facts is a strong urge and motivation. As is being wrong or corrected by the group; it's not an attack on me, it's just me getting the same info as the group.

@anthonybailey4530 - 2024-06-01

It's truly a spectrum. "If you know one autistic person, you know one autistic person" etc.

But the insight holds, and I loved the joke.

More generally, huge ❤ for the whole video.

@pierrebilley276 - 2024-06-01

Guys, don't forget to watch the video, not just listen!

@drone_video9849 - 2024-08-28

Robert, not sure if you will see this but I was the one who spoke to you at the train station two weeks ago (leaving out station name and city on purpose) - just wanted to say thanks for sharing your time despite rushing to your meeting. You were very pleasant and generous with your time. Great content also! Looking forward to getting home and catching up on the last few weeks of videos I have missed while on the road.

@eldarad - 2024-06-01

04:26 I just enjoy thinking about the day Robert set up his camera and was like... "right, I'm now going to film myself looking deep in thought for one minute"

@test-sc2iy - 2024-06-01

OMG WELCOME BACK I LOVE YOU

edit: ahem I mean, I'm very happy to see another video from you. continue to make them please ❤

You got me so much cred reppin' OpenAI since you said the graphs ain't plateauing back when OpenAI was worried about GPT-2. I have been touting "AI is here" since that vid.

@Maxime-fo8iv - 2024-06-04

13:48 Honestly, I wouldn't be so quick to dismiss GPT-4's answer when it comes to transparent boxes. It's true that you can see the inside of the boxes, but you still need to look at them for that. And since Sarah put the cat in the carrier, that's probably where she'll look for it first ^^
To be precise, I think the answer depends on how close the containers are to each other; it's possible that they are so close that you can immediately see where the cat is without "looking for it", but I don't think it's obvious either way.
So, my ratings:
- human: incomplete answer
- GPT-4: incomplete answer

@aa.bb.9053 - 2024-06-05

…or the GPT answer describes the immediate period when Sarah “comes back”, which has an infinite number of moments in which she is realistically “looking for” the cat where she left it. It’s only upon updating herself on her surroundings that her expectation should change. Such tests are testing for utility to human users, not for accurate modeling.

There are innumerable scenarios similar to the one you mention… for example, is Sarah visually impaired (despite being able to “play tennis”)? Is the transparent carrier floating in the air in front of her, or is it sitting on one of a number of objects that could distract one’s visual processing for a few moments, as in the real world? Are there such distracting objects in the line of sight or field of view generally (as in the real world)? Is the cat’s stillness & coat pattern blending into that background? We are notoriously bad at visual processing & retention; nature mainly selected us to recognize faces & to notice movement in the tall grass. Many such real-world factors would severely alter Robert’s model… but wouldn’t alter GPT’s, because it’s probably optimizing for the whole range (making GPT’s answer more realistic… beyond even the initial moments of “coming back” to look for the cat, which imo GPT modeled correctly & it’s the average human who presumes too much).

Sarah & Bob probably have a social understanding (given they occupy the same place where a cat is being “stored”) which extends to the care of cats… does that influence where Sarah might initially look for the cat?

The tendency to reinforce in GPT responses that reflect our social histories & our evolutionary history, both of which streamline & simplify our intuitions about the world & each other… will this tendency make AI’s better at offering us a mirror to ourselves, while effectively understanding us better than we understand ourselves? Doesn’t bode well.

@pooroldnostradamus - 2024-06-01

4:27 I like how choosing to wear a red shirt in the main video meant that wearing it for the role of the "devil" wouldn't be viable, so a dull, no less malicious looking grey was given the nod.

@RobertMilesAI - 2024-06-01

Oh, he's not the devil, he's the voice of conformity, of course he's in inoffensive grey :)

@pooroldnostradamus - 2024-06-01

@@RobertMilesAI It's the conformity that's going to get us in the end. I stand by my initial guess;)

@christophstahl8169 - 2024-06-01

everybody knows that redshirts are the first to die...

@CopingwithAI - 2024-06-01

"Admittedly, this particular researcher has a pretty poor track record predicting this kind of thing."
I died😂

@-Rook- - 2024-06-02

That's pretty much everybody though!

@hellfiresiayan - 2024-06-02

​@@-Rook- Yann is uniquely bad tho lol

@Sal1981 - 2024-06-02

@@hellfiresiayan The reason being he has this view of human faculties as being special. We're basically just pattern-prediction machines, with added reasoning lodged in our prefrontal cortex. AGI systems would, for instance, not be fooled by optical illusions.

@darkzeroprojects4245 - 2024-06-02

@@Sal1981 "pattern prediction machines"
Don't like people comparing people to machines.

@clintonbehrends4659 - 2024-06-03

@@darkzeroprojects4245 But that's how biology works, though: it's a cascade of chemical and electrical systems optimized by the environment to survive and reproduce. Now, that's not to say it's alright to justify genocide on the basis of "oh, humans are just pattern-recognition machines", but I would say nothing, or at the very least something so infinitesimally little as to be negligible, is a good justification for dehumanization. (P.S. I wonder if we'll eventually have to change the term "de-humanization" to be more encompassing of things other than humans.)

@EternalKernel - 2024-06-02

The problem is capitalism. I agree it's important to slow down and take on AGI in a more deliberate manner, but because of capitalism, this is just not going to happen. 90% of the people who would like to work on slowing things down, on alignment, etc. simply cannot, because they do not have the economic freedom to do so. And probably 50% of the people who decided "Woohoo! Pedal to the metal, let's get to AGI!" decided that because they know the circumstances of being poor and under the boot of the system are going to stay the same unless something big and disruptive comes along.

Add in the people who think the world is fine and AI is going to make them wealthier/happier/more powerful, and you have our current state, right? We as a species have sown these seeds; our very own creations will (possibly) be our judge, jury, and executioner. This train is not in a stoppable state, not unless people with real power suddenly all grow a freaking brain. Which they won't, because one of the features capitalism likes to reinforce is giving power to people who are good at being figureheads (look a certain way, have confidence, have a certain pedigree, and are more likely to be actual psychopaths). Just look at musky boy.

Me? It doesn't matter what I think. I'm nobody, just like everyone else. I have no money/power/influence, just like 99% of the world.

@SeamusCameron - 2024-06-01

The whiplash of LLMs being bumbling hallucination machines a lot of the time, while also showing surprising moments of lucidity and capability, has been the worst part. It's hard to take a potential existential threat seriously when you keep catching it trying to put its metaphorical pants on backwards.

@flickwtchr - 2024-06-01

Over and over and over again, people like Rob Miles, Connor Leahy, Geoffrey Hinton and others have repeated that they don't believe the current most advanced LLMs pose an existential threat. They do, however, point to the coming AGI/ASI in that regard.

@ClaimClam - 2024-06-01

@@flickwtchr advanced AI will SAVE lives, people that stand in the way are guilty of murder

@ekki1993 - 2024-06-01

It's always hard to be reasonable with small chances of extreme risks because humans are intrinsically incapable of properly gauging that. It's why casinos exist.

@DeruwynArchmage - 2024-06-01

@@flickwtchr you’re absolutely right. And so is @SeamusCameron (and many other commenters here).

But it doesn’t matter. In some ways, the very thing that Seamus pointed out is precisely the problem. It was powerful enough to get everyone’s attention. The people who really understood got very concerned.

But people paid attention… and they saw it occasionally “putting its pants on backwards”.

They didn’t draw the conclusion, “Holy crap! It’s getting more powerful really fast. This is the stupidest they’ll ever be. Soon (single digit years), a future one may be smarter than any of us, or all of us put together. That has the chance to turn out really bad.”

Most didn’t even think, “Wow! I see where this is going. It really might take almost everyone’s jobs!”

They thought, “El oh El! Look how dumb it is! I saw people talking about this one way that will make it look dumb every time. And oh look, there’s another. I can’t believe they made me worry for a moment. Clearly, all of these people are crazy and watched too much SciFi. If there was a real problem, then the government and the big corps would be doing something about it. It’d be in the news all the time. Even if I ever thought things could go bad, it’s easy to let it slide to the back of my mind and just live my life. Surely nothing bad could really happen.”

Maybe that’s not everyone, but I hear it enough, or just see the apathy, that I’m pretty convinced most people aren’t taking it seriously.

If it were foreigners who had banded together and were marching towards our country with the stated plan of working for essentially nothing, we’d be freaking the ** out.

If we knew aliens were on their way and had told us they'd blow us all up, and the governments all said, "Gee, we think no matter what we do, we're almost certainly going to lose", people would be losing their minds.

But we’re not. We’re hubristic. I can’t say how many people have said machines can’t be smarter. Or argued how they don’t have a soul (as if that would make any difference, even if souls were a thing.)

And we don’t like thinking about really bad things. That’s why religion is such a thing. People are scared of dying. So we dress it up. We try not to think about it. We find ways to cope. And that’s just thinking about our own personal mortality.

It’s almost impossible to truly wrap your mind around everyone dying. It’s hard to truly feel the gravity of real people dying by the 10s of thousands right now because it’s half way around the world. It seems so distant. So abstract. And it’s happening. Right this second. You can watch the videos.

The only way I can even approach coming to grips with it is thinking about the people I love being impacted by it (whether it’s merely their careers or their very lives).

It’s a hard thing. I know how Rob feels. I’ve got some ideas that might work (mechanistic interpretability stuff), and it’s hard for me to even pursue them.

@gavinjenkins899 - 2024-06-01

I don't think LLMs are EVER a threat; however, they've already moved on from LLMs. Like he mentioned, the new "Chat" GPT is cross-trained on images as well, so it's not an LLM. So we aren't protected by the limitations of how smart you can get by reading books alone. If it can get books, pictures, videos, touch, sound, whatever, then there's no obvious limit anymore.

@GermanTopGameTV - 2024-06-01

We have been building huge AI models that now run into power-consumption limitations. I think the way forwards is to build small agents, capable of doing simple tasks, called up by superseding, nested models, similar to how our biology works.

Instead of one huge model that can do all tasks, you'll have models that do specific small tasks really well, and have their neurons called only if a higher-level model needs their output. Our brain does this by having certain areas of neuron bundles that do certain tasks, such as "keeping us alive by regulating breathing", "keeping us balanced", "producing speech" and "understanding speech" and many more, all governed by the hippocampus, which can do reasoning.

People who have strokes can retrain their brains to do some of these tasks in different places again, and regain some of their cognitive ability. This leads me to believe that the governing supernetwork does not have the capacity and ability to actually learn the fine details the specialised areas handle very well. A stroke victim who lost a significant part of their Wernicke's area may be able to relearn language, but will always have issues working out exact meaning.

I'd bet our AGIs will receive similar structures, as it could significantly speed up the processing of inputs by doing only a trained scan of "which specialised sub-AI will produce the best output for this question?" and then assigning the task there, while also noticing when a task doesn't fit any of the assigned areas and then, and only then, using the hippocampus equivalent to formulate an answer.

This architecture might also provide a solution to safety: by training network components solely for certain tasks, we can use the side channel of energy consumption to detect unpredicted model behavior. If the model were trying to do things it's not supposed to, like trying to escape its current environment, it wouldn't find a pretrained sub-AI that can do this task well, and would need to use its expensive high-level processes to try to formulate a solution. This would lead to higher energy usage and could be used to trigger a shutdown.

I might be wrong though. I probably am.
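
As a rough illustration of the routing-plus-energy-monitoring idea above, here is a toy sketch (all names, costs, and thresholds are invented for illustration): cheap specialists handle known task types, unknown tasks fall through to an expensive general model, and a running energy budget acts as the side-channel tripwire.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class SubModel:
        name: str
        cost: float                   # energy units per call
        handle: Callable[[str], str]  # stand-in for real inference

    SPECIALISTS = {
        "speech": SubModel("speech", 1.0, lambda t: f"speech({t})"),
        "balance": SubModel("balance", 0.5, lambda t: f"balance({t})"),
    }
    GENERAL = SubModel("general", 50.0, lambda t: f"general({t})")
    ENERGY_BUDGET = 100.0  # side-channel threshold before shutdown

    def route(task_type: str, task: str, spent: list) -> str:
        # Unknown task types fall through to the expensive general model,
        # which is exactly what drives energy consumption up.
        model = SPECIALISTS.get(task_type, GENERAL)
        spent[0] += model.cost
        if spent[0] > ENERGY_BUDGET:
            raise RuntimeError("energy anomaly: possible off-task behavior")
        return model.handle(task)

    spent = [0.0]
    print(route("speech", "hello", spent))    # cheap specialist path
    print(route("escape-plan", "??", spent))  # no specialist: costly path
    print(f"energy spent: {spent[0]}")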

@napdogs - 2024-06-02

I want to see this idea explored. I think the most difficult thing would be the requirement to outline and program the consciousness and subconsciousness of these separate elements, to facilitate true learning while allowing noninvasive intervention. As the video showed, the language model can display a "train of thought" to make decisions, so there would need to be multiple layers of "thought", subconscious decision making, and micro-agent triggers to effectively operate as this fauxbrain AGI. Ensuring essential functions only operate with no awareness sounds like a strong AI safety feature to me. Like how "You are now breathing manually" triggers obviously measurable unnatural breathing patterns. Very compelling.

@NoName-zn1sb - 2024-06-02

way forward

@elfpi55-bigB0O85 - 2024-06-03

that's just a computer program but with an inefficient word processor tacked onto it

@edwardmitchell6581 - 2024-06-06

I think this is possible if we can extract out the parts of these large models. The first part to extract would be encyclopedic knowledge. Imagine if you could swap this out to have the model have only knowledge available in 1800. Or if you wanted to update it with the most recent year. Or if you wanted it to only know what the average Republican from Indiana knows.

@Thespikedballofdoom - 2024-06-07

God dammit, you invented literal AI cancers

@manark1234 - 2024-06-03

1:53 It's worth noting that there are likely shockingly few AI safety researchers because it costs so much to get to the point where anyone would consider you a genuine researcher, and so it creates the perverse incentive to try to make that money back.

@humanaku9135 - 2024-06-01

The Overton window self-reinforcement was a scary thought I'd never considered before. It must be terribly annoying to be an expert who has to temper his opinion to "fit in".

@jameslincs - 2024-06-01

Maybe experts need more courage

@Jablicek - 2024-06-01

@@jameslincs Maybe they need not to be shouted down/mocked for raising concerns, and especially we need real protections for whistleblowers.

@gasdive - 2024-06-01

See also climate change...

What climate scientists say off the record isn't what makes it into IPCC reports.

@TomFranklinX - 2024-06-01

@@gasdive See also IQ research.

@useodyseeorbitchute9450 - 2024-06-01

It's a common problem. Cancel culture is not only very good at fighting any heresy but also at fighting reality.

@TheInsideView - 2024-05-31

"it's 2024 and I'll you who the hell I am
I am robert miles
and I'm not dead
not yet
we're not dead yet
we're not doomed
we're not done yet
and there's a hell of a lot to do
so I accept whatever responsibility falls to me
I accept that I might make...
I mean, I will make mistakes
I don't really know what I'm doing
But humanity doesn't seem to know what it's doing either
So I will do my best
I'll do my best
That's all any of us can do
And that's all I ask of you"

goosebumps here

welcome back king

(i mean rob, not charles)

@waththis - 2024-06-21

Nothing is funnier to me than an "other other hand" joke in a video about generative AI.

@bazoo513 - 2024-05-31

22:08 - Heh, kudos for both ignoring Musk and calling Wozniak "the actually good Steve from Apple" 😀

@Z3nt4 - 2024-06-01

Elon is out the window.

@totalermist - 2024-06-01

@@Z3nt4 Could have something to do with Musk being the biggest hypocrite on that list. Warning about AI, yet collecting billions to build the biggest AI supercomputer... He basically did a full 180 on the topic.

@shayneweyker - 2024-06-01

The bit where Elon started to raise his hand when Rob asked if he could get another planet was comedy gold.

@svenhoek - 2024-06-01

Ketamine is bad, kids, mkay?

@anchor83 - 2024-06-01

So funny. 😄

@azaria2977 - 2024-06-01

Literally, this channel just popped into my head. When I looked it up, there's a video from 10 hours ago, after a year. How lucky am I?

@RoulDukeGonzo - 2024-06-03

He was waiting for you