Integrating Custom AI Models with BurpSuite
Application Security

Charlie Moorton
20 March 2025
Our technical team are always on the lookout for new technologies and approaches to enhance our security assessments and research. AI is rapidly transforming the industry, helping security professionals automate repetitive tasks and gain deeper insights during testing. However, many AI-driven security tools require sending large amounts of raw data to third-party servers for processing. In a security context this could include sensitive details like vulnerable endpoints, post-authentication responses, or even credentials.
For independent researchers or internal security teams, leveraging external AI platforms may be an acceptable trade-off. However, consultancies operating in commercial environments must adhere to strict data privacy requirements. At Wilbourne, we take a hardline stance on protecting client data. We operate within tightly controlled environments and limit third-party integrations to trusted platforms, such as Microsoft, under strict governance.
Third-Party AI Tools and Privacy
Security researchers and tool developers often prioritise functionality over privacy, which can lead to gaps when AI is introduced into commercial security workflows. By default, many AI solutions process data externally, meaning sensitive information could leave our trusted environment without adequate controls. For us, this is an unacceptable risk to the security of our clients' data.
At the same time, we recognise the immense potential AI has in augmenting security testing. To ensure our testing continues to deliver high quality results as efficiently as possible, we are committed to leveraging the best tools available — without compromising on privacy. As a result, we are exploring solutions that allow us to integrate AI into our workflows while keeping full control over our data and infrastructure.
Integrating Private AI Models with BurpSuite
BurpSuite is the industry standard for analysing web and mobile application traffic and is used heavily by our application security team. One of its key features is extensibility, which allows testers to enhance its capabilities through custom tooling developed by PortSwigger and the community.
Recently, PortSwigger introduced AI functionality that enables developers to build tools leveraging their managed OpenAI instance. This simplifies AI adoption for many users, but it also means that data is processed externally on their central platform. We understand this design decision, but it left us unable to use some of the exciting and powerful tooling being developed.
Rather than being dismayed, however, we got to work on an approach that would give our team access to the best tools available.
To solve this, we developed a set of Java client libraries designed to integrate BurpSuite with private AI models. These libraries support OpenAI and Ollama out of the box, providing security professionals with a way to build AI-enhanced BurpSuite extensions centred around self-controlled AI platforms.
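The post does not show the libraries' own API, but the underlying mechanics of talking to a private model are straightforward: Ollama, for example, exposes a local REST API (by default on port 11434), so all prompt data stays on the tester's machine. The sketch below — the class and method names are illustrative, not taken from the published libraries — builds a non-streaming generate request against a local Ollama instance using only the JDK's `java.net.http` types:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class OllamaSketch {
    // Ollama's default local REST endpoint; adjust if your instance
    // listens elsewhere.
    static final String OLLAMA_URL = "http://localhost:11434/api/generate";

    // Build a non-streaming generate request. The JSON escaping here is
    // deliberately minimal — a real extension should use a JSON library.
    static HttpRequest buildGenerateRequest(String model, String prompt) {
        String body = String.format(
            "{\"model\":\"%s\",\"prompt\":\"%s\",\"stream\":false}",
            model, prompt.replace("\"", "\\\""));
        return HttpRequest.newBuilder()
            .uri(URI.create(OLLAMA_URL))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildGenerateRequest("llama3", "Summarise this HTTP response");
        // No network call is made here; in an extension, HttpClient.send(...)
        // would submit the request and the JSON "response" field be parsed.
        System.out.println(req.uri() + " " + req.method());
        // prints "http://localhost:11434/api/generate POST"
    }
}
```

Because nothing here leaves localhost, the same pattern works inside a BurpSuite extension without any data ever reaching a third-party platform.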
We designed these libraries to be simple, modular, and closely aligned with existing AI extensions, allowing for easy adaptation. Now, Wilbourne’s consultants use AI-powered tools built on these libraries to streamline testing and enhance analysis, all while maintaining full control over where our data goes.


Open-Sourcing Our BurpSuite AI Libraries
We believe in contributing to the security community and pushing the industry forward. That’s why we’ve published these Java libraries on our public GitHub, making them freely available to the security community. These libraries should provide a flexible starting point to build your own AI extensions or modify an existing tool to work with a private AI model.
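In practice, adapting an existing OpenAI-based extension to a private model often comes down to making the API base URL configurable, since many self-hosted servers (including Ollama, via its OpenAI-compatible `/v1` endpoint) accept OpenAI-style chat completion requests. The sketch below illustrates that idea with a hypothetical `PrivateChatClient` class; it is not the published libraries' API, just the shape of the change:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PrivateChatClient {
    // Configurable base URL: "https://api.openai.com/v1" for the hosted
    // service, or e.g. "http://localhost:11434/v1" for a local Ollama
    // instance via its OpenAI-compatible endpoint.
    private final String baseUrl;

    public PrivateChatClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Build an OpenAI-style chat completion request against the configured
    // host, so the same extension code can target a public or private model.
    // Escaping is minimal for brevity — use a JSON library in real code.
    public HttpRequest chatRequest(String model, String userMessage) {
        String body = String.format(
            "{\"model\":\"%s\",\"messages\":[{\"role\":\"user\",\"content\":\"%s\"}]}",
            model, userMessage.replace("\"", "\\\""));
        return HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/chat/completions"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        PrivateChatClient client = new PrivateChatClient("http://localhost:11434/v1");
        System.out.println(client.chatRequest("llama3", "hi").uri());
        // prints "http://localhost:11434/v1/chat/completions"
    }
}
```

Swapping the base URL is the entire migration for request routing; authentication headers and response parsing remain per-deployment details.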

You can check out and contribute to this project at:
https://github.com/wilbourne-labs/BurpLLM
We’re excited to see how the security community builds upon these libraries to push AI-powered security testing forward while considering the requirements of commercial engagements.
Looking Ahead
As AI continues to evolve, so too will our approach to integrating it into security testing. Wilbourne are committed to pushing the boundaries of application security while ensuring our solutions align with the highest privacy and security standards.
If you’re interested in collaborating or have ideas on how to further improve privacy-first AI integrations, we would love to hear from you. Please also reach out if you would like our expertise in testing your applications, infrastructure, or AI models.