

AI question: Are reasoning language models like DeepSeek R1 and OpenAI o3 inherently inefficient due to their tendency to engage in extensive internal dialogue before providing an (often short) answer? Do the computational resources required to support such reasoning capabilities outweigh any potential benefits they may offer?

I don't know the details of DeepSeek R1 or OpenAI o3, but I know that even older models have to run the entire network once for every token (roughly, every word) they produce, so a model that emits thousands of "reasoning" tokens before a short answer does many times the work of one that answers directly; the rough sketch below makes this concrete. That seems pretty inefficient. I've also heard that LLMs now consume a nontrivial share of the world's electricity (and hence carbon footprint), reportedly as much as some small countries.

Add to that that their apparent benefits are dubious: they take human jobs, they produce believable-sounding inaccuracies and programs with subtle bugs, and we can no longer tell what's human-produced and what's AI-produced (news articles, for example). Some students' essays are even being incorrectly flagged as probable AI, which threatens their entire education. And so on.
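To put a rough number on that, here's a minimal back-of-envelope sketch in Python. It leans on the common rule of thumb that one autoregressive decoding step costs about 2 × parameter-count FLOPs; the 70B model size and the token counts are made-up illustrations, and real deployments use KV caching, batching, and quantization, so the absolute figures would differ.

```python
# Back-of-envelope decoding cost. Assumptions (not measured): one
# autoregressive step costs roughly 2 * parameter_count FLOPs, the
# model size is a hypothetical 70B parameters, and the token counts
# are invented for illustration.

PARAMS = 70e9  # hypothetical 70-billion-parameter model

def decode_flops(num_tokens: int, params: float = PARAMS) -> float:
    """Rough FLOPs to generate num_tokens: the entire model runs
    once for each token produced."""
    return 2 * params * num_tokens

direct = decode_flops(50)            # short answer only
reasoning = decode_flops(2000 + 50)  # long internal dialogue, then the answer

print(f"direct answer:   {direct:.2e} FLOPs")
print(f"with reasoning:  {reasoning:.2e} FLOPs")
print(f"cost multiplier: {reasoning / direct:.0f}x")
```

The point isn't the absolute numbers but the ratio: because the whole model runs once per generated token, a long hidden reasoning trace multiplies the cost of every answer, even a short one.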
