There are parts of it I like and parts that I don't. I don't like when people assume that they can just replace humans with AI, which it seems like a lot of CEOs want to do with things like ChatGPT and Bard. I do like that these tools can give you ideas and suggestions - for instance, ChatGPT was helping me edit my resume and write examples of technical writing.
as an artist. who likes art. hate it burn it let it die kill it with fire and throw it to the deepest pits of hell
in most other uses... meh. don't care much. if it's doing the hard, monotonous, repetitive stuff for the humans, cool beans. leave my art and my music and my everything artistic alone though
It's really neat. It's been about a year since ChatGPT and GPT-4 were released, and I'm still amazed every time I see one or the other in action.
I'm skeptical that true AGI is actually achievable, though, since I think that would require actual consciousness, and AI can't be conscious because consciousness/life is not an algorithm; it's cosmic and ineffable.
So I also think it's ridiculous whenever people bring up possible ethical concerns about the use of AI itself - something that isn't conscious can't suffer. Though we should definitely consider the ethics of possible robotics that make use of actual biological neural networks.
Why can biological neural networks harbor experience while computer algorithms can't? I've written some about that here: https://philosophy.inhahe.com/2019/09/13/on-the-possibility-of-artificial-general-intelligence/ Basically, I don't think either can actually be the seat of consciousness, but biology can be a vessel for life/consciousness, while computer algorithms can't for reasons I explain in the essay. It's an interesting question what types of systems other than biology or biological neural networks can also be a vessel for life/consciousness.
Another thought I have about AI is that it's going to cost a lot of people their jobs. Ideally, automating labor would be a good thing, because that's more product society gets for less overall labor, but unfortunately the system more or less requires that you work in order to receive any goods or services that you need to live, so in practice it's very problematic. It's essentially the same problem we have with cheap overseas labor.
Another problem with the proliferation of AI is the low quality of its output, in two senses: (a) it lacks the actual touch of life, which makes it deleterious for very subtle reasons, and (b) it tends to hallucinate in very convincing ways, so people get misinformed, and we don't even always know when we're reading the products of AI.
I also suspect that in some mystical/metaphysical way, simulating intelligence on a non-living machine is actually diabolical and deadly in the bigger scheme of things.