Humans risk being overrun by artificial superintelligence in 30 years

A MACHINE with human-level intelligence could be built in the next 30 years and could represent a threat to life on Earth, some experts believe.

AI researchers and technology executives like Elon Musk are openly concerned about human extinction caused by machines.


The next stage of artificial intelligence could be the last humans develop
The tiers of artificial intelligence

Smart computers make smarter computers

The Law of Accelerating Returns is a concept popularized by futurist Ray Kurzweil, which holds that the rate of technological improvement accelerates exponentially.

As technology gets more advanced, society and industry are better equipped to improve technology faster and more drastically.

“With more powerful computers and related technology, we have the tools and the knowledge to design yet more powerful computers, and to do so more quickly,” Kurzweil wrote in his famous 2001 essay.

The pattern is borne out by history: the first United States patent was issued in 1836, and the millionth was issued 75 years later, in 1911.

The US had two million patents by 1936 – it took just 25 years to match the production, ingenuity and creativity of the previous 75.

At today’s pace, one million patents are issued every three years, and it’s only getting faster.

Apply this improvement principle to the artificial intelligence revolution and you can see why scientists think AI could become very capable, and possibly threatening, in our lifetimes.

Present and future threats

According to Dr Lewis Liu, CEO of an AI-driven company called Eigen Technologies, some artificial intelligence has already “gone dark”.

“Even the ‘dumb, non-conscious’ models we have today may have ethical issues around inclusion,” Dr Liu told The US Sun. “That kind of stuff is already happening today.”

Research from Johns Hopkins University shows that artificial intelligence algorithms tend to exhibit biases that could unfairly target people of color and women.

The American Civil Liberties Union also warns that AI could “deepen racial inequality” as more selective processes like hiring and housing become automated.

“General AI or AI Superintelligence is just going to be a much broader, larger propagation of these problems,” Dr Liu said.

An all-out, Terminator-style war of man versus machine is not considered an impossibility either.

A poll of experts cited in philosopher Nick Bostrom’s book Superintelligence found that almost 10% believe a computer with human-level intelligence would be a life-threatening crisis for humanity.

One common misconception about AI is that it is confined to a box that can simply be unplugged if it intends to hurt us.

“It’s much more likely that AGI is going to emerge in the Web itself and not from some human constructed box just because of the requirement of complexity,” Dr Liu said. “And if that is the case, then you can’t unplug the Web.”

Relatedly, AI is intertwined with several military programs, and “killer robots” capable of taking a life without human input have already been developed.

Some experts believe threat assessments should take sentient AI into account, because we can’t know for sure when it will come online or how it will react to humans.

Preventing Judgement Day

Dr Liu sadly conceded “it’s going to be a pretty s***y world” if we achieve artificial superintelligence with today’s lax style of technology regulation.

He advocates oversight in which the data that powers AI models is scoured for bias.

If the data training a model is sourced from the public, then programmers should have to gain users’ consent to apply it.

Regulation in the US stops short of requiring “a human check on the outputs”, but recent regulatory developments in China have begun to emphasize keeping artificial intelligence under human control.
