The biggest AI of 2035 is already here


Image credit: theweek.in


Just in case 2020 couldn't get any worse, a team of academics, policy experts, and private-sector stakeholders has warned that trouble is on the horizon: they identified the top 18 artificial intelligence dangers we should be concerned about over the next 15 years.

Although science fiction and popular culture imagine sentient robot uprisings as our undoing, a forthcoming criminology study indicates that the top threat is actually how we use A.I. ourselves.


Ranking the threats by their potential harm, profitability, achievability, and difficulty of defeat, the group identified deepfakes (an existing and already pervasive technology) as the highest-level threat.


Unlike a robot uprising that destroys property, the damage done by deepfakes is an erosion of trust in people and institutions.


A threatening future for A.I. may seem far off (how could Alexa harm us when it can't even report the weather accurately?), but Shane Johnson, director of the Dawes Centre for Future Crime at UCL, which funded the study, believes these threats will grow alongside the sophistication and complexity of our daily lives.

"We live in an ever-changing world that creates new opportunities, good and bad," Johnson warned. "It is therefore imperative that we anticipate future crime threats so that policymakers and other stakeholders with the ability to take action can do so before new crimes occur."



Although the authors acknowledge that the judgments made in this study are inherently speculative and shaped by our current political and technological landscape, they argue that the future of these technologies cannot be considered apart from those environments.


How they did it 

To make these predictions, the researchers assembled a team of 14 academics, seven private-sector experts, and 10 public-sector experts.

These 31 professionals were divided into groups of four to six people and asked to rank potential A.I.-enabled threats, from digital crimes such as phishing schemes to physical ones (such as autonomous drone attacks). In making their judgments, the teams considered four main features of each attack:


  • Harm
  • Profit
  • Achievability
  • Defeatability

Harm, in this case, refers to physical, mental, or social damage. The study authors further defined these threats as crimes that either defeat an A.I. system (e.g., evading facial recognition) or use A.I. to commit a crime (such as blackmailing people with deepfake videos).


Because these factors are in practice intertwined (e.g., the harm of an attack may depend on its achievability), the experts were asked to weigh each criterion separately. The teams' scores were then sorted to rank the most damaging A.I.-enabled attacks expected over the next 15 years.
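The scoring approach described above can be sketched in a few lines of code. Note that this is purely illustrative: the threat names are taken from the article, but the individual scores, the 1-5 scale, and the simple sum-then-sort aggregation are assumptions, not the study's actual data or method.

```python
# Illustrative sketch of ranking threats by four independently scored criteria.
# Scores and aggregation are hypothetical, not taken from the study.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    harm: int           # physical, mental, or social damage (1-5)
    profit: int         # criminal payoff (1-5)
    achievability: int  # how feasible the attack is (1-5)
    defeatability: int  # how hard it is to stop (1-5)

    @property
    def total(self) -> int:
        # Each criterion is judged separately, then combined.
        return self.harm + self.profit + self.achievability + self.defeatability

threats = [
    Threat("Deepfake audio/video", 5, 4, 5, 5),
    Threat("Driverless vehicles as weapons", 5, 2, 3, 4),
    Threat("Burglar bots", 2, 2, 2, 1),
]

# Sort by aggregate score, highest threat first.
ranking = sorted(threats, key=lambda t: t.total, reverse=True)
for t in ranking:
    print(f"{t.name}: {t.total}")
```

With these made-up scores, deepfakes come out on top, mirroring the study's overall conclusion.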

What they found 

Comparing the 18 different types of A.I.-enabled crimes, the group concluded that video and audio manipulation in the form of deepfakes posed the biggest threat overall.


"Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence (and often legal force), despite the long history of photographic trickery," the authors explain. "But recent developments in deep learning [and deepfakes] have significantly increased the scope for generating fake content."


The authors note that the potential impact of these manipulations ranges from scams that impersonate family members to a broader erosion of trust in audio and video evidence. They add that these attacks are difficult for individuals (and in some cases even experts) to detect and stop.

Other top threats included the use of driverless cars as remote weapons, similar to the vehicle-ramming terrorist attacks of recent years, and the use of A.I. to write fake news. Interestingly, the group ranked burglar bots (small robots that can crawl through cat flaps, steal keys, and let human thieves in) among the least threatening.


So... are we doomed? 

No, but we have some work to do. The popular image of A.I. danger imagines a single red button that, when pressed, stops every dangerous robot and computer in its tracks. But the real danger is not the machines themselves; it is how we use them to manipulate and harm each other.


Understanding the potential for this harm, and doing our part to counter it through information literacy and community building, is a powerful defense against this would-be robot apocalypse.


