Researchers discovered that bogus reports generated by Artificial Intelligence (AI) can fool cybersecurity experts who are familiar with a wide range of cyberattacks and vulnerabilities.
Researchers at the University of Baltimore created the bogus information with AI models and gave it to cybersecurity experts for testing.
The experts were unable to detect misinformation generated by Google’s BERT and OpenAI’s GPT, according to the researchers.
These language models are used for storytelling, for answering questions, to help Google and other tech companies improve their search engines, and to help people combat writer’s block.
The GPT-2 transformer model was fine-tuned by the researchers using open online sources that discussed cybersecurity vulnerabilities.
They fed a sentence from a real cyber threat intelligence sample into the model, which then generated the rest of the description.
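The study does not publish code, but the general workflow can be illustrated with a minimal sketch. Assuming the Hugging Face transformers library, a hypothetical corpus file `threat_reports.txt` of openly available vulnerability write-ups, and an invented seed sentence, the snippet below fine-tunes GPT-2 on that corpus and then generates a continuation of a threat description.

```python
# A rough sketch of the described workflow, not the researchers' actual code.
# Assumptions: the Hugging Face `transformers` library is installed, the
# hypothetical file `threat_reports.txt` holds open-source threat write-ups,
# and the seed sentence below is invented for illustration.
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Step 1: fine-tune GPT-2 on openly available cybersecurity text.
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="threat_reports.txt",  # hypothetical corpus of vulnerability discussions
    block_size=128,
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-cti", num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Step 2: seed the fine-tuned model with the opening sentence of a threat
# report and let it write the rest of the description.
seed = "The attackers gained initial access through a spear-phishing email."  # invented example
inputs = tokenizer(seed, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Sampling with top-k and top-p is one common decoding choice for producing fluent, varied continuations; the researchers' exact decoding settings are not stated here.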
The misinformation duped cyberthreat hunters, who review such threat descriptions to identify potential attacks and adjust their systems’ defences.
The researchers also applied the same technique to COVID-19-related papers, generating fake information about the effects of COVID-19 vaccinations and the trials that were undertaken; this misinformation likewise fooled the experts.
According to the researchers, if accepted as accurate, this type of misinformation could endanger lives by misleading scientists conducting research and the general public, who rely on the news for health information when making decisions.