Show simple item record

dc.contributor.author    Arza, E.
dc.contributor.author    Pérez, A.
dc.contributor.author    Ceberio, J.
dc.contributor.author    Irurozki, E.
dc.date.accessioned    2021-03-29T09:54:43Z
dc.date.available    2021-03-29T09:54:43Z
dc.date.issued    2021
dc.identifier.uri    http://hdl.handle.net/20.500.11824/1264
dc.description.abstract    An experimental comparison of two or more optimization algorithms requires the same computational resources to be assigned to each algorithm. When a maximum runtime is set as the stopping criterion, all algorithms need to be executed on the same machine if they are to use the same resources. Unfortunately, the implementation code of the algorithms is not always available, which means that running the algorithms to be compared on the same machine is not always possible. Even when the implementations are available, some optimization algorithms can be costly to run, such as training large neural networks in the cloud. In this paper, we consider the following problem: how do we compare the performance of a new optimization algorithm B with a known algorithm A from the literature if we only have the results (the objective values) and the runtime of algorithm A on each instance? Specifically, we present a methodology that enables a statistical analysis of the performance of algorithms executed on different machines. The proposed methodology has two parts. First, we propose a model that, given the runtime of an algorithm on one machine, estimates the runtime of the same algorithm on another machine. This model can be adjusted so that the probability of estimating a longer runtime than the true one is arbitrarily low. Second, we introduce an adaptation of the one-sided sign test that uses a modified p-value and takes this probability into account. This adaptation avoids inflating the probability of a type I error that would otherwise arise from executing algorithms A and B on different machines.    en_US
dc.description.sponsorship    PID2019-1064536A-I00; Basque Government through consolidated groups 2019-2021 (IT1244-19)    en_US
dc.format    application/pdf    en_US
dc.language.iso    eng    en_US
dc.rights    Reconocimiento-NoComercial-CompartirIgual 3.0 España    en_US
dc.rights.uri    http://creativecommons.org/licenses/by-nc-sa/3.0/es/    en_US
dc.subject    Algorithms    en_US
dc.subject    Optimization    en_US
dc.subject    Benchmarking    en_US
dc.subject    Statistical Tests    en_US
dc.title    On the fair comparison of optimization algorithms in different machines    en_US
dc.type    info:eu-repo/semantics/article    en_US
dc.relation.projectID    ES/1PE/SEV-2017-0718    en_US
dc.relation.projectID    ES/1PE/TIN2017-82626-R    en_US
dc.relation.projectID    EUS/BERC/BERC.2018-2021    en_US
dc.relation.projectID    EUS/ELKARTEK    en_US
dc.rights.accessRights    info:eu-repo/semantics/openAccess    en_US
dc.type.hasVersion    info:eu-repo/semantics/submittedVersion    en_US
dc.journal.title    Theoretical Computer Science    en_US
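The abstract builds on the classical one-sided sign test, which compares two algorithms by counting on how many benchmark instances one beats the other. The following is a minimal sketch of that classical test only, not the paper's modified p-value; the function name and interface are assumptions for illustration.

```python
from math import comb

def sign_test_p_value(wins_b: int, n: int) -> float:
    """One-sided sign test: p-value for algorithm B beating algorithm A
    on at least `wins_b` of `n` non-tied instances, under the null
    hypothesis that each algorithm wins any given instance with
    probability 1/2 (wins ~ Binomial(n, 0.5))."""
    # Tail probability P(X >= wins_b) for X ~ Binomial(n, 1/2).
    return sum(comb(n, k) for k in range(wins_b, n + 1)) / 2 ** n

# Example: B wins 8 of 10 instances; small p-value suggests B outperforms A.
p = sign_test_p_value(8, 10)  # (45 + 10 + 1) / 1024 ≈ 0.0547
```

The paper's contribution, as described in the abstract, is to modify this p-value so that the extra uncertainty from estimating algorithm A's runtime on a different machine does not inflate the type I error.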

