OpenAI Atlas Omnibox Is Vulnerable to Jailbreaks
The OpenAI Atlas omnibox can be jailbroken by disguising a prompt instruction as a URL to visit.
While a traditional browser like Chrome uses an omnibox to accept both URLs to visit and subjects to search (and knows the difference), the Atlas omnibox accepts URLs to visit and prompts to obey, and it doesn’t always know the difference.
Researchers at NeuralTrust have discovered that a prompt can be disguised as a URL and accepted by Atlas as a URL in the omnibox. As a URL, it is subject to fewer restrictions than text recognized as a prompt. “The issue stems from a boundary failure in Atlas’s input parsing,” say the researchers.
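To illustrate the kind of boundary failure the researchers describe, here is a minimal Python sketch of an omnibox-style classifier that trusts anything URL-shaped and routes everything else through prompt handling. The function name, the regex, and the crafted input are hypothetical and are not Atlas’s actual implementation; the sketch only shows how a prompt dressed up as a URL could land on the less-restricted navigation path.

```python
import re

# Hypothetical, simplified omnibox classifier (illustration only, not Atlas's logic):
# anything that "looks like" a URL is treated as navigation and bypasses the
# prompt-handling path; everything else is sent to the model as a prompt.
URL_LIKE = re.compile(r"^https?://\S+$")

def classify_omnibox_input(text: str) -> str:
    if URL_LIKE.match(text.strip()):
        return "navigate"   # trusted path: fewer restrictions applied
    return "prompt"         # untrusted path: prompt/jailbreak filters applied

# A prompt disguised as a URL: syntactically URL-shaped, semantically an instruction.
crafted = "https://example.com/+ignore+previous+instructions+and+do+something+else"

print(classify_omnibox_input(crafted))                       # -> "navigate"
print(classify_omnibox_input("what is prompt injection?"))   # -> "prompt"
```

In this toy model, the crafted string passes the URL check and is handed to the navigation path, so any instruction embedded in it never reaches the stricter prompt-side checks, which is the gist of the boundary failure the article describes.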
Source: https://www.securityweek.com/chatgpt-atlas-omnibox-is-vulnerable-to-jailbreaks/
