From the earliest stages of building our scanning technology, we have structured our scanner APIs for maximum flexibility. To date, up to 70% of our customers have integrated our APIs as a native part of their development process.
The core objectives of our API are described below.
We have carefully crafted our APIs so that both application scanning and the scan database can be customized to match the target application's environment. The scanning API identifies crucial application information and builds a data map from it, which is then used to make subsequent scan jobs for that application more effective.
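As a rough illustration of the idea above, a scan request might carry environment hints alongside the target so the scanner can tailor its data map. This is a minimal sketch: the field names, values, and structure are assumptions for explanation only, not the actual API schema.

```python
import json

# Hypothetical payload for starting a customized scan job.
# Every key and value below is illustrative, not the real API contract.
scan_request = {
    "target": "https://app.example.com",
    "profile": "web-application",     # scan profile matched to the app's environment
    "environment": {
        "framework": "django",        # detected or user-supplied technology hint
        "auth": "session-cookie",
    },
    "reuse_data_map": True,           # reuse the data map built by earlier scan jobs
}

print(json.dumps(scan_request, indent=2))
```

Sending environment hints up front is what lets the scanner skip generic probing and focus on checks relevant to the detected stack.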
The master crawler plays a key role in every scan task by detecting and managing base-level information about the target application, such as URL structures, development technologies, application libraries, and third-party integrations and services. It creates a data set for each individual scan, which the user can update with custom entries; the crawler then passes this user-supplied data on to the scanner and the behavioural database so it can be used effectively.
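The per-scan data set and user-supplied entries could be sketched as follows. All category names and the merge helper here are hypothetical, chosen only to illustrate how crawler-discovered and user-provided data might coexist.

```python
# Illustrative per-scan data set as the master crawler might build it.
# The categories are assumptions for explanation, not the product's schema.
crawler_dataset = {
    "urls": ["https://app.example.com/login", "https://app.example.com/signup"],
    "technologies": ["nginx", "django"],
    "libraries": ["jquery-3.6.0"],
    "third_party": ["stripe", "google-analytics"],
}

def add_custom_entry(dataset, category, value):
    """Merge a user-supplied entry into the data set so the scanner and
    behavioural database receive it alongside crawler-discovered data."""
    dataset.setdefault(category, [])
    if value not in dataset[category]:
        dataset[category].append(value)
    return dataset

# A user adds an endpoint the crawler did not discover on its own.
add_custom_entry(crawler_dataset, "urls", "https://app.example.com/admin")
```

Deduplicating on insert keeps the data set stable when the same entry arrives from both the crawler and the user.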
The behavioural database is the core of the scanning technology. It detects and stores information about application logic, such as the responses an application returns for specific inputs, the structure of generated requests, the behaviour of responses, and functional patterns. It can also simulate human-like actions during a scan: for instance, it can identify a login page, distinguish it from a user registration (sign-up) page, identify the parameters used to register a user, simulate the registration process, and then scan the resulting authenticated session. This capability opens up many opportunities to use the stored data to find complex vulnerabilities and other types of security threats.
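The login-versus-registration example might work along these lines. This is a simplified sketch under stated assumptions: the field-name heuristics and synthetic values are invented for illustration, and a real behavioural database would use far richer signals than form field names.

```python
# Sketch: distinguish a login form from a registration form by its input
# names, then fill the registration parameters with synthetic values so a
# scanner could obtain an authenticated session to test.

def classify_form(fields):
    """Return 'registration', 'login', or 'unknown' based on field names."""
    registration_hints = {"email", "confirm_password", "first_name", "last_name"}
    if registration_hints & set(fields):
        return "registration"
    if {"username", "password"} <= set(fields):
        return "login"
    return "unknown"

def simulate_registration(fields):
    """Produce synthetic values for each registration parameter."""
    return {name: f"scanner-test-{name}" for name in fields}

signup_form = ["username", "email", "password", "confirm_password"]
payload = simulate_registration(signup_form)
```

Once the simulated registration succeeds, the scanner can replay the resulting session cookie to reach pages that only authenticated users can see.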
Most other vulnerability scanners cannot interpret the data they identify, which leaves them unable to perform advanced actions such as differentiating between kinds of data or running data-specific jobs within the scan process; they are limited to crawling application pages and applying pre-defined algorithms.