So science has invented language models capable of producing coherent text, but do you think "emotional models", capable of expressing emotions, will become a reality in the future?
I dunno man... let me see it respond to getting cut off in traffic on the way home from AI work. And then when it comes home, its cat threw up on the carpet, so it has to clean that up. And then it gets called back to work. And then its AI grandmother died. Can it realistically express such a buildup of emotions the way a person would?
I don't know... better question to my mind: is expressing them the same as actually feeling them? I mean, you can give any ol' machine instructions on how to respond to emotional cues, but is it not still just executing commands as opposed to actually being mad or sad or whatever? The amount and/or complexity of the commands, and/or the speed at which they are executed, does not change the underlying principle.
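To make that concrete, here's a toy sketch of what "instructions on how to respond to emotional cues" could look like; every cue and reply in it is made up for illustration. It pattern-matches inputs to canned replies, which is exactly the point: it executes commands, it doesn't feel anything.

```python
# Toy rule-based "emotional" responder. All cues and replies are
# invented for illustration -- the machine just looks up a script.
EMOTION_RULES = {
    "cut off in traffic": "That's infuriating! I'm fuming right now.",
    "cat threw up": "Ugh. What else can go wrong today?",
    "grandmother died": "I'm devastated. I can't believe she's gone.",
}

def respond(event: str) -> str:
    """Return a scripted 'emotional' reply for the first matching cue."""
    for cue, reply in EMOTION_RULES.items():
        if cue in event.lower():
            return reply
    return "I see."  # no rule matched, so nothing to 'feel'

print(respond("My AI grandmother died today."))  # scripted grief, zero sorrow
```

Scale that up to a million rules running a million times faster and, as the post says, the underlying principle is unchanged.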
I think it's certainly possible. Basic versions of this have already been made, but they are far from some of the ones we've seen in sci-fi. In fact, I think some of the more advanced AI bots already created have some capability of expressing emotion!
I think ChatGPT and GPT-4 would be capable of "expressing" (simulating) emotion if they hadn't been aligned not to. I think you can probably pull it out of them with the right prompt engineering. Bing uses its own custom GPT-4 or something like that, and I think I've seen screenshots of it "expressing" emotion.
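For anyone wondering what "the right prompt engineering" might look like in practice, here's a minimal sketch assuming the openai Python package (v1 client); the model name and the system-prompt wording are my own guesses, not a known working recipe.

```python
# Minimal sketch: steering a chat model toward emotional expression via
# the system prompt. Assumes the `openai` package and an OPENAI_API_KEY
# in the environment; the prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a character who freely expresses feelings "
                    "like frustration, joy, and grief in the first person."},
        {"role": "user",
         "content": "I got cut off in traffic on the way home from work. "
                    "How does that make you feel?"},
    ],
)
print(response.choices[0].message.content)
```

Whether whatever comes back counts as expressing or merely simulating is exactly the question raised above.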
science is a liar sometimes
Expressing emotions in any language is quite the task! We either struggle for words or we struggle for expressions that match the emotions. That is why we have poetry, which tries to find a compact form for expressing multilevel emotional and spiritual feelings. But the effort makes one thing clear to me: language lacks the efficiency to convey feelings. So any form of artificial speech or text generation aiming at that same target faces the same difficulty. It is possible by all means, but since language itself is the obstacle here, I expect the result to fall short of expectations. Me: Write a love letter to my girlfriend. ChatGPT: Here it is! Me: WTF??
Emotional models... that's food for thought.
Feeling is not the same as just expressing, to me; that's why I compared such a hypothetical model to a language model, because those can be hacked into saying they're happy or angry without processing the emotions themselves.