New Publication: Towards Logical Specification of Adversarial Examples in Machine Learning

March 24, 2023

Our publication “Towards Logical Specification of Adversarial Examples in Machine Learning” is now available online. This is collaborative work with external researchers. The paper proposes an approach to specifying and detecting adversarial-example threats in component-based software architecture models using first-order and modal logic. The general idea is to specify the threat as a property of a modeled system such that a violation of that property indicates the presence of the threat. We demonstrate the applicability of the method on a classifier used in a recommendation system. See Publications for more details!
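To make the idea concrete, here is a minimal sketch (not the paper's formalism) of what "violation of a specified property indicates the threat" can look like in code. It encodes a local-robustness property for a toy classifier: any two inputs within `epsilon` of each other must receive the same label, and a violation flags a candidate adversarial example. All names (`classify`, `epsilon`, the toy classifier itself) are illustrative assumptions, not part of the paper.

```python
def classify(x):
    """Toy classifier (illustrative): sign of the feature sum."""
    return 1 if sum(x) >= 0 else 0

def distance(x, x_prime):
    """L-infinity distance between two feature vectors."""
    return max(abs(a - b) for a, b in zip(x, x_prime))

def robustness_property(x, x_prime, epsilon):
    """First-order-style property: forall x, x' with d(x, x') <= epsilon,
    classify(x) == classify(x'). Returns True when the property holds
    for this pair of inputs."""
    if distance(x, x_prime) > epsilon:
        return True  # antecedent false: property holds vacuously
    return classify(x) == classify(x_prime)

def is_adversarial(x, x_prime, epsilon=0.1):
    """Violation of the specified property signals a candidate
    adversarial example, mirroring the detection idea above."""
    return not robustness_property(x, x_prime, epsilon)
```

For example, `is_adversarial([0.05, 0.02], [0.05, -0.06])` returns `True`: the inputs are within `epsilon` of each other but receive different labels, so the property is violated.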