Please use this identifier to cite or link to this item: http://hdl.handle.net/10071/29453
Author(s): Reis, J. P. dos.
Brito e Abreu, F.
Carneiro, G. de F.
Almeida, D.
Editor: Fernandes, J. M., Travassos, G. H., Lenarduzzi, V., and Li, X.
Date: 2023
Title: Scientific workflow management for software quality assessment replication: An open source architecture
Volume: 1871
Book title/volume: Quality of Information and Communications Technology. Communications in Computer and Information Science
Pages: 1 - 14
Event title: 16th International Conference on the Quality of Information and Communications Technology, QUATIC 2023
Reference: Reis, J. P. dos., Brito e Abreu, F., Carneiro, G. de F., & Almeida, D. (2023). Scientific workflow management for software quality assessment replication: An open source architecture. In J. M. Fernandes, G. H. Travassos, V. Lenarduzzi, & X. Li (Eds.), Quality of Information and Communications Technology. Communications in Computer and Information Science (vol.1871, pp. 1-14). Springer. https://doi.org/10.1007/978-3-031-43703-8_1
ISSN: 1865-0929
ISBN: 978-3-031-43703-8
DOI (Digital Object Identifier): 10.1007/978-3-031-43703-8_1
Keywords: Scientific workflow
Software quality
Quality assessment
Replication
Code smells
Open source
Abstract: Replication of research experiments is important for establishing the validity and generalizability of findings, building a cumulative body of knowledge, and addressing issues of publication bias. The quest for replication led to the concept of a scientific workflow, a structured and systematic process for carrying out research that defines a series of steps, methods, and tools needed to collect and analyze data, and generate results. In this study, we propose a cloud-based framework built upon open source software, which facilitates the construction and execution of workflows for the replication/reproduction of software quality studies. To demonstrate its feasibility, we describe the replication of a software quality experiment on automatically detecting code smells with machine learning techniques. The proposed framework can mitigate two types of validity threats in software quality experiments: (i) internal validity threats due to instrumentation, since the same measurement instruments can be used in replications, thus not affecting the validity of the results, and (ii) external validity threats due to reduced generalizability, since different researchers can more easily replicate experiments with different settings, populations, and contexts while reusing the same scientific workflow.
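Illustrative note: the paper itself does not publish code here, but the kind of workflow step it describes (replicating machine-learning-based code smell detection) could look roughly like the minimal Python sketch below. The file name, the "is_smelly" label column, and the choice of a random forest classifier are assumptions made for illustration only, not the authors' actual setup.

# Minimal sketch (assumed, not from the paper): a reusable workflow step that
# trains and evaluates a code smell classifier on a table of code metrics.
# Assumes a CSV where each row is a code element, columns are metrics, and
# "is_smelly" is a boolean label (all names are hypothetical).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def run_smell_detection_step(metrics_csv: str, label_column: str = "is_smelly") -> float:
    """Return the mean F1 score of a code smell detector over 10-fold cross-validation."""
    data = pd.read_csv(metrics_csv)
    features = data.drop(columns=[label_column])
    labels = data[label_column]
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    scores = cross_val_score(model, features, labels, cv=10, scoring="f1")
    return scores.mean()

if __name__ == "__main__":
    # Hypothetical input file; a workflow engine would pass this path as a parameter.
    print(run_smell_detection_step("code_metrics.csv"))

Packaging each such step as a parameterized script is what lets a scientific workflow re-run the same instrument on a different dataset, which is the replication benefit the abstract highlights.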
Peer reviewed: yes
Access type: Embargoed Access
Appears in Collections:ISTAR-CRI - Comunicações a conferências internacionais

Files in This Item:
File: conferenceobject_97795.pdf (Restricted Access)
Size: 391,1 kB
Format: Adobe PDF

