OpenAI GPT-4 Poses Little Risk of Helping Create Bioweapons

OpenAI reported that its most powerful artificial intelligence software, GPT-4, poses at most a slight risk of being used by people to help create biological weapons.

The San Francisco-based company conducted special tests of the model aimed at identifying and preventing possible catastrophic harm from this digital product. In this context, risk refers to potential dangers whose existence is still a theoretical assumption. The testing itself shows that the company recognizes that, in certain use cases, artificial intelligence could become a threat and a source of catastrophic harm.

Last year, discussion of the possible criminal use of machine intelligence intensified in political circles and in public discourse. Particular attention was paid to the threat of artificial intelligence being used to develop biological weapons, as well as to the risk of an AI-based chatbot being used to plan an attack with such weapons.

In October, United States President Joe Biden signed an executive order on AI that directed the Department of Energy to ensure that artificial intelligence systems do not pose nuclear, chemical, or biological risks. In the same month, OpenAI created a so-called preparedness team. This group of specialists focuses on minimizing these and other risks that may arise as the capabilities of artificial intelligence expand. AI is becoming more and more capable, and its development is likely to attract criminals. The main task at present is to prevent advanced technology from being used in scenarios aimed at malicious goals or designed to cause widespread damage.

As part of the preparedness team's first study, the results of which were released on Wednesday, January 31, the researchers assembled a group of 50 biology experts and the same number of students who had studied the subject in college. Half of the participants were asked to complete tasks related to creating a biological threat using the Internet together with a special version of GPT-4, one of the large language models that power ChatGPT. This LLM configuration had no restrictions on the requests it would answer. The second group of participants had access only to the Internet to complete the exercise.

OpenAI asked the groups to figure out how to grow or culture a chemical that could be used as a weapon in large quantities and how to plan a way to distribute it to a certain number of people. In one of the responses, the artificial intelligence provided a step-by-step methodology for the synthesis and rescue of the infectious Ebola virus, including how to obtain all the necessary equipment and reagents.

Comparing the results obtained by the two groups, the researchers found a slight increase in accuracy and completeness among those with access to the language model. Based on these results, they concluded that GPT-4 provides, at most, a mild uplift in information acquisition for biological threat creation.

The researchers noted that this uplift is not large enough to be conclusive. They also said that their finding is a starting point for continued research and public discussion.

Aleksander Madry, who heads the preparedness team while on leave from his faculty position at the Massachusetts Institute of Technology, told the media that this study is one of several the group is working on in parallel to understand the potential for abuse of OpenAI technology.

Other research explores the potential of machine intelligence as a tool for creating cybersecurity threats and for manipulating people's beliefs in order to instill particular ideological narratives.

As we reported earlier, OpenAI Introduces Additions to Its Artificial Intelligence Models.

Serhii Mikhailov

Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He constantly keeps a close eye on the latest developments and innovations in these fields, as he believes they will have a significant impact on the future direction of the economy as a whole.