Use this identifier to cite this record: http://hdl.handle.net/10071/35772
Authors: Páez Velázquez, M.
Bobrowicz-Campos, E.
Arriaga, P.
Date: 2025
Title: Prompt assessment for human-AI interaction: Intent, complexity and lay perceptions
Journal: Journal of Machine Intelligence and Data Science
Volume: 6
Pages: 83-96
Citation: Páez Velázquez, M., Bobrowicz-Campos, E., & Arriaga, P. (2025). Prompt assessment for human-AI interaction: Intent, complexity and lay perceptions. Journal of Machine Intelligence and Data Science, 6, 83-96. https://doi.org/10.11159/jmids.2025.008
ISSN: 2564-3282
DOI (Digital Object Identifier): 10.11159/jmids.2025.008
Keywords: Large Language Models (LLMs)
Human–AI interaction
Prompt-based interactions
ChatGPT
Mixed methods
Abstract: Large Language Models (LLMs) are democratising access to AI for users with diverse levels of expertise, raising questions about the nature, dynamics, and effects of such interactions, particularly among lay users. Understanding how non-expert users engage with these systems is essential to inform AI literacy frameworks and responsible-use guidelines, helping to reduce misinformation and address broader societal implications. To investigate these dynamics, it is first necessary to identify interaction types based on user intent, as well as prompt characteristics such as complexity, appeal, and domain familiarity, given the unprecedented flexibility of LLM use across diverse contexts. However, no categorisation of prompts with comparable complexity levels and demonstrated suitability for lay populations has yet been developed. Such a categorisation is essential to avoid confounds in the study of human–AI interaction. To address this gap, we applied a three-stage methodological approach. First, we used ChatGPT to generate the prompts, all written by the model itself, and iteratively refined their categories and complexity levels. Second, we conducted a thematic qualitative analysis and curated a pre-set of 34 prompts with comparable complexity, classifying them into two main categories, a) task-oriented and b) reflexive, and two additional control categories, c) both and d) none. Third, we tested these prompts with 28 lay users from different countries through an online survey. For each prompt, participants assessed the category, perceived complexity, how interesting it was, and whether non-experts could easily understand it. Task-oriented prompts achieved a mean category confirmation rate of 62% (Max = 82%), while reflexive prompts reached 52% (Max = 71%). Complexity ratings averaged near the scale midpoint (M = 4.10), similar to interestingness (M = 4.67) and general-domain suitability (M = 4.20), indicating that the prompts were neither simplistic nor overly demanding, but suitably engaging and accessible for a broad lay population. A final set of 12 prompts with at least 60% category agreement was obtained. This work contributes to the study of prompt categories among lay users of LLM-powered conversational agents, considering intent, complexity, and users' perceptions of appeal and suitability for a general audience. The final set of prompts provides a resource for advancing research in human–AI interaction, supporting future investigations into trust, emotional responses, and other key constructs in Human–Computer Interaction (HCI).
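The abstract's selection criterion (a per-prompt category confirmation rate of at least 60%) can be made concrete with a short sketch. The following Python snippet is illustrative only and is not taken from the article; all prompt IDs, categories, and responses are hypothetical.

```python
# Illustrative sketch (not from the article): one way the per-prompt
# category confirmation rate and the ">= 60% agreement" selection rule
# described in the abstract could be computed from survey responses.
from statistics import mean

# Each row: (prompt_id, intended_category, category_chosen_by_participant)
responses = [
    ("p01", "task-oriented", "task-oriented"),
    ("p01", "task-oriented", "reflexive"),
    ("p01", "task-oriented", "task-oriented"),
    ("p02", "reflexive", "reflexive"),
    ("p02", "reflexive", "both"),
]

def confirmation_rates(rows):
    """Per prompt: share of participants whose choice matched the intended category."""
    hits = {}
    for pid, intended, chosen in rows:
        hits.setdefault(pid, []).append(chosen == intended)
    return {pid: mean(map(int, flags)) for pid, flags in hits.items()}

rates = confirmation_rates(responses)
final_set = sorted(pid for pid, r in rates.items() if r >= 0.60)  # abstract's selection rule
print(rates)      # {'p01': 0.666..., 'p02': 0.5}
print(final_set)  # ['p01'] -- only prompts meeting the 60% threshold are retained
```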
Peer-reviewed: Yes
Access: Open Access
Appears in collections: CIS-RN - Articles in national peer-reviewed scientific journals

Files in this record:
File: article_114261.pdf (621.49 kB, Adobe PDF)


All records in the repository are protected by copyright law, with all rights reserved.