Computational linguistics is divided into several parts.
- The first is GPT-3.
- Then... there is nothing else.
AI-Generated Text Is the Scariest Deepfake of All - Slashdot
There are a few problems with the GPT-3 AI the article is referring to:
1) It has no understanding of context.
2) It can't stay on topic.
3) It doesn't even understand topic sentences.
4) It doesn't understand anything; it picks words by interpolating what it has seen before.
5) It is barely coherent, sometimes producing sentences that don't even parse.
So a typical forum poster. Carry on.
Are We in an AI Overhang? - Slashdot
I find the hype around GPT-3 greatly exaggerated.
It's basically just a giant, glorified "auto-correct"-like text predictor. It takes text modelling (a concept that
existed two decades ago [eblong.com]) and just throws an insanely vast amount of processing power at it.
Of course, given the sheer size of the neural net powering it underneath, the results are incredible in terms of the styles the AI can write in.
It can keep enough context to produce whole paragraphs that are actually coherent (as opposed to the "complete this sentence using your phone's autocomplete" game that has been making the rounds on social media). And those paragraphs will have a style (text messaging, prose, computer code) that matches the priming you gave it.
But it's still just text prediction. The AI is good at modelling the context, but it doesn't have any concept of the subject it's writing about; it's just completing paragraphs in the most realistic manner, without tracking the actual things it's talking about.
It will write a nice prose paragraph, but do not count on it to complete the missing volumes of GRR Martin's Songs book series - it lacks the concept of "characters" and of tracking their inner motivations.
It will write a nice-looking C function, but it is unable to write the Linux kernel, because it doesn't have the mental map of the architecture that is needed.
It's not really transformative or revolutionary. It's just the natural evolution of what two decades of research in AI (neural nets, and giant clouds able to train hyper-large nets) can add to the simplistic Markov toy example I linked above.
It's basically Deep Drumpf [twitter.com] on steroids.
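The "Markov toy example" the poster refers to is the classic n-gram text model: record which words follow each window of n words, then generate by repeatedly sampling a continuation of the current window. A minimal sketch (the function names and parameters here are illustrative, not taken from the linked eblong.com page):

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Walk the chain: repeatedly sample a continuation of the current window."""
    state = seed or random.choice(list(model))
    out = list(state)
    for _ in range(length):
        choices = model.get(tuple(out[-len(state):]))
        if not choices:  # dead end: this window was never continued in training
            break
        out.append(random.choice(choices))
    return " ".join(out)
```

GPT-3 differs in scale and in having a learned, soft notion of context rather than a literal lookup table, but the basic contract is the same: given what came before, emit a plausible next token.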
As long as you don't have:
- An ontology.
- A model of the mind.
- A model of the self.
It is hard to do anything linguistic.
And that is hard AI, i.e., genuine artificial intelligence.
Here is a Turing-style evaluation.
It is given nonsense, and it doesn't know how to react.
The reason?
It lacks understanding.
Giving GPT-3 a Turing Test
What happens is that the cognitive load of supplying a coherent frame of reference is carried by the human, not the machine.
The human sees what he wants to see, because he supplies the frame of reference CONSTANTLY.
As soon as he breaks it, the machine gets lost, because it is an empty repeater (like most human beings, especially the progressive and emotional ones).
An article on the subject:
OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless
It is not much better than RACTER; it just knows about more topics.
racter - Google Search