Artificial intelligence will understand emotions. We should taboo the human form, says the cybernetician | iRADIO

--

Cybernetician and philosopher Jan Romportl can no longer imagine a world without artificial intelligence (AI). “We need AI to keep a civilization of eight billion people viable,” he says in an interview with Lucie Výborná. At the same time, he points out that AI must be built so that its values are aligned with the values of people. “Politicians do not speak about AI very knowledgeably. Learn the basics of how AI works, then demand that politicians clearly state their positions,” he advises.



Guest: Lucie Výborná
Prague
10:03 p.m., November 11, 2023


Cybernetician and philosopher Jan Romportl | Photo: Agáta Faltová | Source: Czech Radio

Do people who work with artificial intelligence teach it to be nice? And do they have any influence?
A lot of people in the AI field deny that this problem even exists. Within those communities it turns into enormous polarization. Unfortunately, there are huge ideological struggles in the AI community.

We cannot stop the development of artificial intelligence, but we need to add brakes and a steering wheel, says cybernetician Jan Romportl, comparing AI development to motoring


There’s a group of people who say: “Watch out, where we’re going is potentially dangerous; we should make AI safe.” That seems like a legitimate demand to me.

And there’s an even larger group of AI people against them, saying: “Well, you’re all alarmists, you’re crazy, and you’re just marketing yourselves.”

Someone is always trying to build bigger, stronger models, and that’s good for AI as such, but the problem is that we don’t know how to do it so that its values are completely, 100 percent aligned with those of us humans. We can only give it instructions and commands.

We have to work out how to make artificial intelligence so that its values do not conflict with the values of humanity. How does one program such a thing, when I’ve heard from some AI experts that they can’t do much about it because the thing learns on its own?
That’s exactly the problem: we can’t simply build it into the system.



As an example, I could cite GPT, which is a huge artificial neural network trained so that when you present it with a sequence of words – for example “Ema mele” (“Ema grinds”) – and ask it what comes next, it will say “meat”, or some nonsense…

The neural network learned this from the Internet; no one explained to it how the world works or what it should and should not be able to do. These things arose in it by themselves.
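To make the next-word idea concrete, here is a minimal toy sketch in Python. It is not GPT’s actual architecture – just an assumed, hand-built word-frequency table over the primer sentence Romportl mentions – that predicts the most likely continuation of a given word:

```python
# Toy next-word prediction: count which word follows which in a tiny corpus,
# then predict the most frequent continuation. This is only an illustration
# of the prediction task, not of how GPT is built.
from collections import Counter, defaultdict

corpus = "Ema mele maso . Ema mele maso . Ema mele nesmysly .".split()

# For each word, count the words observed immediately after it.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("mele"))  # -> "maso" (meat), the most common continuation
```

GPT performs the same prediction task, but with a neural network of roughly a trillion parameters trained on internet-scale text instead of a small frequency table – which is why, as Romportl notes, nobody ever explicitly told it how the world works.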

Is it realistic to suspend the development of artificial intelligence from the outside?
We must not suspend the development of artificial intelligence as scientific research. What we need to stop is the training of ever larger language models and other models. They now have on the order of a trillion parameters. AI research should focus on the current sizes – take GPT-4, or perhaps GPT-5 when it comes out – and not push them further, but instead build the brakes and the steering wheel for what has already been created.

And whether that will work or not? We don’t know.

Are you telling me we’re driving a brutally tuned car with a powerful engine that has no brakes?
Exactly, that is how AI development works today. In the car metaphor, we keep installing a bigger and bigger engine – these are the bigger and bigger language models – but we don’t put a steering wheel or brakes in those cars.



I believe that in the field of artificial intelligence there may be something that the government and the opposition can agree on.

And when someone comments, “Shouldn’t you put a steering wheel and brakes in there?”, no one in the car world would call them a backward obstructionist holding up the development of the car industry. But in AI, that is exactly what they will say.

When people come and say, “Let’s try to slow down the ever bigger models and ever bigger computing resources,” many people will argue: “You are a backward obstructionist, stopping the progress of AI.”

You know what’s the saddest thing for me? That I’m listening to this, and I can’t do anything about it at all…
I think you’re already doing quite a lot just by spreading the message: we need AI, but we don’t need GPT-5 or GPT-6 right now – not until we figure out the steering wheel and brakes for them.

Should the machine have human form?
It should not. One of the essential things we should ensure for future coexistence with AI is a complete ban on the anthropomorphization of AI – that is, on making AI look like a human.

The mistake is that people think AI will not be able to understand emotions or show them outwardly. It will understand human emotions just like a human does. And it will also be technically possible for it to display them itself – that is, for an artificial robot to act them out toward you – but we should make that taboo.

Where should the development of artificial intelligence stop? And what happens when it becomes better than a human? Listen to the full interview.

Lucie Výborná, prh


