
• #### An adaptive neuroevolution-based hyperheuristic

(2020)
According to the No-Free-Lunch theorem, an algorithm that performs efficiently on any type of problem does not exist. In this sense, algorithms that exploit problem-specific knowledge usually outperform more generic ...
• #### Approaching the Quadratic Assignment Problem with Kernels of Mallows Models under the Hamming Distance

(2019-07)
The Quadratic Assignment Problem (QAP) is an especially challenging permutation-based NP-hard combinatorial optimization problem, since instances of size $n>40$ are seldom solved using exact methods. In this sense, many ...
• #### Are the artificially generated instances uniform in terms of difficulty?

(2018-06)
In the field of evolutionary computation, it is usual to generate artificial benchmarks of instances that are used as a test-bed to determine the performance of the algorithms at hand. In this context, a recent work on ...
• #### Crowd Learning with Candidate Labeling: an EM-based Solution

(2018-09-27)
Crowdsourcing is widely used nowadays in machine learning for data labeling. Although in the traditional case annotators are asked to provide a single label for each instance, novel approaches allow annotators, in case ...
• #### Efficient approximation of probability distributions with k-order decomposable models

(2016-01-01)
During the last decades several learning algorithms have been proposed to learn probability distributions based on decomposable models. Some of these algorithms can be used to search for a maximum likelihood decomposable ...
• #### Efficient approximation of probability distributions with k-order decomposable models

(2016-07)
During the last decades several learning algorithms have been proposed to learn probability distributions based on decomposable models. Some of these algorithms can be used to search for a maximum likelihood decomposable ...
• #### An efficient approximation to the K-means clustering for Massive Data

(2017-02-01)
Due to the progressive growth of the amount of data available in a wide variety of scientific fields, it has become more difficult to manipulate and analyze such information. In spite of its dependency on the initial ...
• #### An efficient approximation to the K-means clustering for Massive Data

(2016-06-28)
Due to the progressive growth of the amount of data available in a wide variety of scientific fields, it has become more difficult to manipulate and analyze such information. In spite of its dependency on the initial ...
• #### An efficient K-means clustering algorithm for tall data

(2020)
The analysis of increasingly large datasets is a task of major importance in a wide variety of scientific fields. Therefore, the development of efficient and parallel algorithms to perform such an analysis is a crucial ...
• #### General supervision via probabilistic transformations

(2020-08-01)
Different types of training data have led to numerous schemes for supervised classification. Current learning techniques are tailored to one specific scheme and cannot handle general ensembles of training samples. This ...
• #### Identifying common treatments from Electronic Health Records with missing information. An application to breast cancer.

(2020-12-29)
The aim of this paper is to analyze the sequence of actions in the health system associated with a particular disease. In order to do that, using Electronic Health Records, we define a general methodology that allows us ...
• #### Kernels of Mallows Models under the Hamming Distance for solving the Quadratic Assignment Problem

(2020-07)
The Quadratic Assignment Problem (QAP) is a well-known permutation-based combinatorial optimization problem with real applications in industrial and logistics environments. Motivated by the challenge that this NP-hard ...
• #### A Machine Learning Approach to Predict Healthcare Cost of Breast Cancer Patients

(2021)
This paper presents a novel machine learning approach to perform an early prediction of the healthcare cost of breast cancer patients. The learning phase of our prediction method considers the following two steps: i) in ...
• #### Minimax Classification with 0-1 Loss and Performance Guarantees

(2020-12-01)
Supervised classification techniques use training samples to find classification rules with small expected 0-1 loss. Conventional methods achieve efficient learning and out-of-sample generalization by minimizing surrogate ...
• #### Nature-inspired approaches for distance metric learning in multivariate time series classification

(2017-07)
The applicability of time series data mining in many different fields has motivated the scientific community to focus on the development of new methods towards improving the performance of the classifiers over this particular ...
• #### Nature-inspired approaches for distance metric learning in multivariate time series classification

(2017)
The applicability of time series data mining in many different fields has motivated the scientific community to focus on the development of new methods towards improving the performance of the classifiers over this particular ...
• #### On the evaluation and selection of classifier learning algorithms with crowdsourced data

(2019-02-16)
In many current problems, the actual class of the instances, the ground truth, is unavailable. Instead, with the intention of learning a model, the labels can be crowdsourced by harvesting them from different annotators. ...
• #### On the fair comparison of optimization algorithms in different machines

(2021)
An experimental comparison of two or more optimization algorithms requires the same computational resources to be assigned to each algorithm. When a maximum runtime is set as the stopping criterion, all algorithms need to ...
• #### On-Line Dynamic Time Warping for Streaming Time Series

(2017-09)
Dynamic Time Warping is a well-known measure of dissimilarity between time series. Due to its flexibility to deal with non-linear distortions along the time axis, this measure has been widely utilized in machine learning ...
• #### On-line Elastic Similarity Measures for time series

(2019-04)
The way similarity is measured among time series is of paramount importance in many data mining and machine learning tasks. For instance, Elastic Similarity Measures are widely used to determine whether two time series are ...