Towards Explainable Security for ECA Rules
Breve B.;Cimino G.;Deufemia V.
2022-01-01
Abstract
With the rise in popularity of smart objects and online services, the use of Trigger-Action Platforms for the definition of custom behaviors is growing significantly. These platforms enable end-users to create Event-Condition-Action (ECA) rules that trigger actions upon event occurrences on physical devices or online services in different domains. ECA rules can easily expose end-users to security risks, mainly due to end-users' limited security knowledge and awareness. To alleviate this problem, classification models can be used to identify the security issues that ECA rules could cause when triggered. However, the results produced by these classifiers may not be understood by end-users. This position paper provides first insights into the application of AI models for generating natural language explanations of the risks identified in ECA rules.
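To make the setting concrete, the following is a minimal sketch of an ECA rule and a risk classifier with a natural-language explanation step, as described in the abstract. The rule fields, the keyword-based classifier, and all names (`ECARule`, `classify_risk`, `explain`, `RISKY_ACTIONS`) are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of an ECA rule, a toy risk classifier, and a
# natural-language explanation generator. A real system would replace the
# keyword lookup with a trained classification model and the template with
# a learned language model.
from dataclasses import dataclass


@dataclass
class ECARule:
    trigger: str  # event, e.g. "motion_detected"
    action: str   # action, e.g. "unlock_door"


# Assumed stand-in for a trained classifier: actions considered risky.
RISKY_ACTIONS = {"unlock_door", "post_to_social", "share_location"}


def classify_risk(rule: ECARule) -> str:
    """Label a rule as 'risky' or 'safe' (keyword heuristic, for illustration)."""
    return "risky" if rule.action in RISKY_ACTIONS else "safe"


def explain(rule: ECARule) -> str:
    """Produce a template-based natural language explanation of the label."""
    if classify_risk(rule) == "risky":
        return (f"When '{rule.trigger}' occurs, the action '{rule.action}' "
                f"may grant physical access or leak private information.")
    return (f"The rule '{rule.trigger} -> {rule.action}' raises no known "
            f"security concern.")


rule = ECARule(trigger="motion_detected", action="unlock_door")
print(classify_risk(rule))  # -> risky
print(explain(rule))
```

The point of the sketch is the pipeline shape (rule representation, risk classification, explanation generation), not the heuristic itself, which is far simpler than the classifiers the abstract refers to.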