The Doomer's Error: Why AGI Is An Incoherent Concept

What's the strongest anti-AGI case — the argument that exposes the fallacies behind the belief that AGI is a viable goal, and behind the AI doomerism that belief so often spawns? Princeton professor Arvind Narayanan recently made a statement we feel deserves amplification: for real-world problems, machines face some of the same fundamental limits and challenges that humans face.

Listen to Luba and Eric unpack, explore, and expound. #noAGI

Copyright 2022 All rights reserved.

Podcast Powered By Podbean
