Is technology taking over humanity?

Anyone who wants a scare should look into artificial intelligence at the South by Southwest digital festival in Texas: scientists there report that people can control prostheses with their brains. Artificial intelligence (AI) will soon control artificial body parts. Robotic brains will hire workers, predict crimes, direct drones, and sort health records.

The futurologist Ray Kurzweil predicts that by 2029 machines will be as intelligent as we are - and that no one will be able to tell whether they are talking to a machine or a person. Elon Musk, billionaire tech fan and head of Tesla, considers artificial intelligence more dangerous than nuclear weapons.

Dismissing such worries as mere fear of the new, like the fears once voiced about the railroad, falls short and is driven too much by technology optimism. Unlike any invention in history, artificial intelligence has a new dimension: nobody understands it. It evolves and makes decisions that its inventors cannot explain. Rules for robot brains are therefore urgently needed.

Technology develops faster than laws

Yet amid all the rhetoric about the robocalypse, the risk of overregulation is even greater than the risk posed by AI itself. AI also harbors gigantic opportunities for progress, for example in curing diseases, because it can sort through large amounts of data far better than the human brain. Such applications are already a reality or close to it. An AI that takes over the world, by contrast, has so far remained science fiction.

The regulation of artificial intelligence faces a fundamental problem: technology develops faster than legislation. Moreover, AI is an umbrella term covering several different technologies; there is no single thing called artificial intelligence. This misunderstanding underlies the call to establish an AI control authority. That is a bad idea. There is, after all, no computer authority that sets rules for computers. AI is a tool; regulation has to start where it can cause damage.

AI will come whether we like it or not

Autonomous cars, for example, must not be allowed to decide on their own to exceed speed limits just because the drivers around them are speeding. There must be limits in financial markets and in medicine. Artificial intelligence must not break laws that apply to people: it may not, for example, record and analyze living-room conversations without permission. Responsibility must remain with people.

The excuse "It wasn't me, it was my artificial intelligence" must not be accepted. In addition, AI must always make clear that it is not human. One might also consider having artificial intelligence oversee other artificial intelligence: a robot that brakes when autonomous cars drive too fast.

Artificial intelligence will come whether we like it or not. If we try to slow progress with laws, China will push ahead with it. So far, almost all politicians are at a desolate level of knowledge; in principle they know only that buzzwords like blockchain and AI are important. They have to face their responsibility and take the fear of job losses and killer weapons as seriously as the opportunities AI offers. Before they can write laws, they have to understand the technology - even if it has a dimension of the inexplicable.