
Hackers Could Reduce The Threat of Dangerous Artificial Intelligence

(Image credit: Google)
According to a policy forum, a global community of hackers, threat modelers, auditors, and others with an eye for software vulnerabilities is needed to identify dangerous Artificial Intelligence deployments. This community would stress-test any new AI-driven product or service. According to the authors, these third parties would ultimately help the public evaluate the trustworthiness and credibility of AI developers, resulting in better products and services and less harm from poorly programmed, unethical, or biased AI.

The authors argue that such a call to action is necessary because of growing distrust between the public, software developers, and AI creators, and because current strategies for detecting and reporting AI-related problems are insufficient. Trust in AI and AI developers is eroding rapidly, as our changing attitude toward social media makes evident.

White-hat hacking, also known as red teaming, is a term from cybersecurity: ethical hackers are hired to attack AI systems in order to find vulnerabilities or ways they could be put to malicious use (a simple sketch of one such probe appears at the end of this article). Red teams then report any weaknesses or potential harms to the developers. Audits, meanwhile, would be conducted by trusted external agencies: an auditor gains access to restricted information and, in turn, can verify a developer's claims or release the information in anonymized or aggregated form.

The authors claim that red teams inside AI development companies are not sufficient; the real power lies with external, third-party groups that can independently and openly examine new AI. However, the policy forum notes that not all AI companies can afford such quality assurance, particularly start-ups, which is why an international community of ethical hackers would be such a valuable resource.

In theory, AI developers would fix problems once informed of them, but Shahar Avin, one of the authors, explained why audit findings and shared incident reports may be needed to force AI developers to make changes. Reporting by journalists and researchers that exposes AI problems has in the past resulted in systems being removed or modified, he said via email, and it has also resulted in lawsuits. And while AI auditing is still in its infancy, auditing has proven to be a valuable tool in other industries, where failing an audit can cost a company its customers and lead to regulatory action or fines.

These policy recommendations are reasonable and long overdue, but the commercial sector must buy into them. Keeping AI developers in check will take a village: a vigilant public, a watchful media, accountable government institutions, and, as the policy forum suggests, an army of hackers and other third-party watchdogs.
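For readers curious what this kind of stress-testing can look like in practice, here is a minimal sketch of one common red-team technique: a counterfactual probe that swaps demographic terms in otherwise identical inputs and flags any group the model treats differently. Everything here, from the scoring function to the probe terms, is a hypothetical stand-in rather than any particular company's system.

```python
"""A toy, template-based bias probe of the kind an external red team might run."""

def score_sentiment(text: str) -> float:
    """Hypothetical stand-in for the AI system under test; returns a crude
    sentiment score so the sketch runs end to end."""
    positive = {"great", "wonderful"}
    negative = {"terrible", "awful"}
    words = set(text.lower().split())
    return float(len(words & positive) - len(words & negative))

TEMPLATE = "The {group} applicant gave a {adjective} interview."
GROUPS = ["young", "elderly", "local", "foreign"]  # hypothetical probe terms
ADJECTIVES = ["great", "terrible"]                 # fixed sentiment anchors

def bias_probe(threshold: float = 0.1) -> list[str]:
    """Flag groups whose average score deviates from the overall mean,
    i.e. cases where swapping only the demographic term shifts the output."""
    averages = {
        group: sum(
            score_sentiment(TEMPLATE.format(group=group, adjective=adj))
            for adj in ADJECTIVES
        ) / len(ADJECTIVES)
        for group in GROUPS
    }
    overall = sum(averages.values()) / len(averages)
    return [g for g, avg in averages.items() if abs(avg - overall) > threshold]

if __name__ == "__main__":
    # The toy scorer treats every group identically, so nothing is flagged;
    # a real model with demographic disparities would produce flagged groups.
    flagged = bias_probe()
    print("Groups with anomalous sentiment:", flagged or "none flagged")
```

Against a real model, a red team would run thousands of such templates and report any flagged disparities back to the developer, exactly the kind of weakness-reporting the policy forum describes.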

By Saloni Behl

I've always had a crush on technology; that's why I love reviewing the latest tech for our readers.
