Abstract
This paper proposes a practical list of safety concerns and mitigation methods for visual deep learning algorithms. The growing success of deep learning algorithms in solving non-linear and complex problems has recently attracted attention from safety-critical application domains. While state-of-the-art methods achieve high performance in both synthetic and real-world scenarios, their reliability cannot be verified or validated against currently available safety standards. Recent works attempt to address this issue by providing lists of safety concerns and mitigation methods for generic machine learning algorithms from the standards' perspective. However, these solutions are either too vague and impractical to apply to deep learning methods in real-world scenarios, or too shallow to cover all potential safety concerns. This paper takes an in-depth look at the underlying causes of faults in a visual deep learning algorithm to derive a practical and complete list of safety concerns, together with potential state-of-the-art mitigation strategies.
Original language | English |
---|---|
Title of host publication | SafeAI 2022: Proceedings of the Workshop on Artificial Intelligence Safety 2022 (SafeAI 2022) |
Editors | Gabriel Pedroza, José Hernández-Orallo, Xin Cynthia Chen, Xiaowei Huang, Huáscar Espinoza, Mauricio Castillo-Effen, John McDermid, Richard Mallah, Seán Ó hÉigeartaigh |
Publication status | Published - 17 Feb 2022 |
Publication type | A4 Article in conference proceedings |
Event | SafeAI: The AAAI's Workshop on Artificial Intelligence Safety - Virtual |
Duration | 28 Feb 2022 → 1 Mar 2022 |
Internet address | https://safeai.webs.upv.es/ |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Publisher | CEUR-WS |
Volume | 3087 |
ISSN (Electronic) | 1613-0073 |
Workshop
Workshop | SafeAI |
---|---|
Period | 28/02/22 → 1/03/22 |
Internet address | https://safeai.webs.upv.es/ |
Publication forum classification
- Publication forum level 1