Use this identifier to reference this record: http://hdl.handle.net/10071/29696
Full record
DC Field | Value | Language
dc.contributor.author | Duarte, R. | -
dc.contributor.author | Correia, F. | -
dc.contributor.author | Arriaga, P. | -
dc.contributor.author | Paiva, A. | -
dc.date.accessioned | 2023-11-21T15:16:16Z | -
dc.date.available | 2023-11-21T15:16:16Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Duarte, R., Correia, F., Arriaga, P., & Paiva, A. (2023). AI trust: Can explainable AI enhance warranted trust? Human Behavior and Emerging Technologies, 2023, 4637678. https://dx.doi.org/10.1155/2023/4637678 | -
dc.identifier.issn | 2578-1863 | -
dc.identifier.uri | http://hdl.handle.net/10071/29696 | -
dc.description.abstract | Explainable artificial intelligence (XAI), known to produce explanations so that predictions from AI models can be understood, is commonly used to mitigate possible AI mistrust. The underlying premise is that the explanations of the XAI models enhance AI trust. However, such an increase may depend on many factors. This article examined how trust in an AI recommendation system is affected by the presence of explanations, the performance of the system, and the level of risk. Our experimental study, conducted with 215 participants, showed that the presence of explanations increases AI trust, but only in certain conditions. AI trust was higher when explanations with feature importance were provided than with counterfactual explanations. Moreover, when the system performance is not guaranteed, the use of explanations seems to lead to an overreliance on the system. Lastly, system performance had a stronger impact on trust than the other factors (explanation and risk). | eng
dc.language.iso | eng | -
dc.publisher | Wiley | -
dc.relation | info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/LA%2FP%2F0083%2F2020/PT | -
dc.relation | UIDB/50021/2020 | -
dc.relation | info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDP%2F50009%2F2020/PT | -
dc.relation | H2020-ICT-48-2020/952026 | -
dc.relation | TAILOR H2020-ICT-48-2020/952215 | -
dc.rights | openAccess | -
dc.subject | Artificial intelligence (AI) | eng
dc.subject | Trust | eng
dc.subject | Explainable AI | eng
dc.subject | Risk | eng
dc.title | AI trust: Can explainable AI enhance warranted trust? | eng
dc.type | article | -
dc.peerreviewed | yes | -
dc.volume | 2023 | -
dc.date.updated | 2023-11-21T15:15:24Z | -
dc.description.version | info:eu-repo/semantics/publishedVersion | -
dc.identifier.doi | 10.1155/2023/4637678 | -
dc.subject.fos | Domínio/Área Científica::Ciências Naturais::Ciências da Computação e da Informação | por
dc.subject.fos | Domínio/Área Científica::Ciências Sociais::Psicologia | por
iscte.subject.ods | Trabalho digno e crescimento económico | por
iscte.subject.ods | Indústria, inovação e infraestruturas | por
iscte.identifier.ciencia | https://ciencia.iscte-iul.pt/id/ci-pub-98455 | -
iscte.journal | Human Behavior and Emerging Technologies | -
Appears in collections: CIS-RI - Articles in international scientific journals with peer review

Files in this record:
File | Size | Format
article_98455.pdf | 1.14 MB | Adobe PDF



All records in the repository are protected by copyright, with all rights reserved.