Abstract: Since machine learning components are now being considered for integration in safety-critical systems, safety stakeholders should be able to provide convincing arguments that the systems are safe for use in realistic deployment settings. In the case of vision-based systems, the use of tree ensembles calls for formal stability verification against a host of composite geometric perturbations that the system may encounter. Such perturbations combine an affine transformation, such as rotation, scaling, or translation, with a pixel-wise transformation, such as a change in lighting. However, existing verification approaches mostly target small norm-based perturbations and do not account for composite geometric perturbations. In this work, we present a novel method to precisely define the desired stability regions for these types of perturbations. We propose a feature space modelling process that generates abstract intervals which can be passed to VoTE, an efficient formal verification engine specialised for tree ensembles. Our method is implemented as an extension to VoTE by defining a new property checker. The applicability of the method is demonstrated by verifying classifier stability and computing metrics associated with stability and correctness, i.e., robustness, fragility, vulnerability, and breakage, in two case studies. In both case studies, targeted data augmentation pre-processing steps were applied for robust model training. Our results show that even models trained with augmented data are unable to handle these types of perturbations, thereby emphasising the need for certified robust training for tree ensembles.
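To make the idea of abstract intervals for composite perturbations concrete, the sketch below shows one hypothetical way to derive per-pixel lower and upper bounds for a rotation range combined with a brightness shift. This is not the authors' feature space modelling process (which defines the stability regions precisely); it is a minimal grid-sampling approximation, and the function and parameter names are illustrative assumptions only. The resulting bounds form the kind of interval box that an interval-based verifier such as VoTE could take as input.

```python
# Hypothetical sketch: per-pixel intervals for a composite perturbation
# (rotation in [-5, 5] degrees plus a brightness shift in [-0.1, 0.1]).
# Grid-sampling the rotation range only approximates the true region;
# the paper's feature space modelling is more precise than this.
import numpy as np
from scipy.ndimage import rotate

def composite_intervals(image, angles=np.linspace(-5.0, 5.0, 21),
                        brightness=(-0.1, 0.1)):
    """Return per-pixel (lower, upper) bounds over the sampled perturbations."""
    # Generate one rotated variant per sampled angle, keeping the image shape.
    variants = np.stack([
        rotate(image, angle, reshape=False, mode="nearest") for angle in angles
    ])
    # Envelope over all rotated variants, widened by the brightness range.
    lower = variants.min(axis=0) + brightness[0]
    upper = variants.max(axis=0) + brightness[1]
    return np.clip(lower, 0.0, 1.0), np.clip(upper, 0.0, 1.0)

# Example usage on a synthetic 28x28 image with values in [0, 1].
img = np.random.default_rng(0).random((28, 28))
lo, hi = composite_intervals(img)
assert (lo <= hi).all()
```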