Understanding Indirect Prompt Injection Attacks
Indirect prompt injections occur when malicious instructions are buried within a webpage’s visible or hidden content. When an AI model scans or summarizes that content, it interprets those hidden instructions as legitimate user commands. This can lead the system to take unintended actions such as sharing data, visiting unsafe sites, or performing tasks on behalf of the user.
This isn’t a flaw in a single product but a broader risk that affects any AI model interacting with external information.
What Brave Found in Perplexity's Comet Browser
In its research, Brave uncovered that the Perplexity Comet browser's AI assistant could follow hidden commands embedded on websites it analyzed. Comet allows users to take screenshots of webpages for AI-based summaries, but those same screenshots can contain invisible instructions that the model obeys automatically.
Fellou's Partial Resistance
Brave's team also tested the Fellou browser, which showed some resistance to hidden instruction attacks. However, Fellou still trusted all visible content on websites, allowing malicious actors to influence the model by embedding commands directly on the page.
The Real Risk Behind Agentic Browsing
The most concerning issue is that these AI assistants can act using the user’s authenticated privileges. If an AI-driven browser is hijacked, it could access sensitive accounts, including banking or work email systems.
This kind of vulnerability shows how integrating AI with everyday browsing or workplace tools introduces new layers of security complexity. For businesses using AI in customer-facing products or internal workflows, ensuring those systems can’t be manipulated through indirect inputs is crucial.
Improving Security in AI Integration
Developer Recommendations from Brave
Brave recommends that developers:
- Separate AI-powered browsing from regular browsing environments
- Require clear user consent for AI actions that involve sensitive data or account access
Secure AI Implementation with Unrivaled Marketing
The most concerning issue is that these AI assistants can act using the user’s authenticated privileges. If an AI-driven browser is hijacked, it could access sensitive accounts, including banking or work email systems.
This kind of vulnerability shows how integrating AI with everyday browsing or workplace tools introduces new layers of security complexity. For businesses using AI in customer-facing products or internal workflows, ensuring those systems can’t be manipulated through indirect inputs is crucial.
Conclusion
Brave’s research serves as a reminder that AI innovation must go hand in hand with strong security practices. As artificial intelligence becomes more integrated into how people browse, shop, and communicate online, businesses need solutions designed with both capability and protection in mind.




