

Friday, March 9, 2018


By Humberto Bustince

In recent years, precision medicine has been gaining more and more relevance. The idea behind precision medicine is to develop techniques that exploit the specific features of each individual, as reflected in their genome, in order to obtain better medical diagnoses and treatments. The huge amount of data involved in this kind of study makes it necessary to develop appropriate data mining and big data techniques to process it in a reasonable time. Furthermore, the goal is not only to obtain a snapshot of the current data, but to extract intelligent information from it so that accurate predictions about the future can be made.


Along these lines, we discuss here two research projects carried out in collaboration with the Health Service of the Government of Navarra. The first has already been successfully completed, whereas the second has already provided excellent results but is still in progress. Both illustrate the kind of difficulties that arise in this type of research and justify the need for intelligent techniques to cope with them.


1- Prediction of adverse events in polymedicated patients. The Navarra primary health care system was interested in predicting the risk of an adverse event when a new medication is prescribed to a patient who is already regularly taking seven or more medications. Although there are around 35,000 patients in this situation in Navarra, fewer than 700 will develop an adverse event, which means that the instances available for teaching the system to predict whether an adverse event will occur are highly imbalanced. For this reason, it is necessary to develop appropriate machine learning algorithms that can intelligently detect the features, common to the patients who display an adverse effect and to the corresponding medications, which do not appear in the other patients. This can be done by considering fusion functions that are data-dependent and take into account the links between data, a theoretical topic in which our group is a world leader.
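The idea of a data-dependent fusion function can be illustrated with a toy example: the discrete Choquet integral, a well-known aggregation operator in this field, equipped with a simple power fuzzy measure mu(A) = (|A|/n)^q. The function name and the choice of measure here are illustrative assumptions, not the algorithm actually used in the project.

```python
# Minimal sketch of a data-dependent fusion function: the discrete
# Choquet integral with a power fuzzy measure mu(A) = (|A|/n)**q.
# The measure and the parameter q are illustrative assumptions.

def choquet_integral(scores, q=2.0):
    """Fuse a list of scores in [0, 1] with a discrete Choquet integral."""
    n = len(scores)
    xs = sorted(scores)              # ascending: x_(1) <= ... <= x_(n)
    mu = lambda k: (k / n) ** q      # measure of any subset of k inputs
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        # weight the increment x_(i) - x_(i-1) by the measure of the
        # n - i inputs that are still >= x_(i)
        total += (x - prev) * mu(n - i)
        prev = x
    return total
```

With q = 1 the integral reduces to the arithmetic mean; values of q above 1 shift the weight towards the smallest inputs (a cautious fusion), while values below 1 emphasize the largest ones. The aggregation thus depends on how the input values are ordered, which is the sense in which such functions "take into account the links between data".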


2- Prediction of the evolution of a stroke patient. In medicine, the future evolution of a patient who has suffered a stroke is measured on a scale from 0 to 6, where 0 means total recovery and 6 means that the patient dies. In this case, the main difficulty of the classification problem lies in the fact that there are very few instances to work with, around 800 patients, and the prediction must be made on an individual basis, which means that a great deal of uncertainty is involved in the procedure. The use of intelligent techniques that can handle this uncertainty individually and incorporate it into the classification procedure is therefore crucial, and requires both an appropriate representation of the data and intelligent algorithms adapted to that representation.
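One way of making the per-patient uncertainty explicit, in the spirit of the interval-valued representations studied in this field, is to attach an interval of membership degrees to each grade of the 0-6 scale and rank the grades through a K-alpha operator. Everything in this sketch (the function names, the operator choice, the example values) is an assumption for illustration, not the project's actual method.

```python
# Illustrative sketch: interval-valued scores for each grade of the
# 0-6 outcome scale, ranked with a K_alpha operator. The interval
# width encodes how uncertain the evidence for each grade is.

def k_alpha(interval, alpha=0.5):
    """Map an interval [lo, hi] to the point lo + alpha * (hi - lo)."""
    lo, hi = interval
    return lo + alpha * (hi - lo)

def predict_grade(interval_scores, alpha=0.5):
    """Pick the grade whose interval score is largest under K_alpha.

    interval_scores: dict mapping each grade (0..6) to an interval
    (lo, hi) of membership degrees in [0, 1].
    """
    return max(interval_scores,
               key=lambda g: k_alpha(interval_scores[g], alpha))
```

The parameter alpha controls how the ranking treats the uncertainty: alpha = 0 trusts only the lower bounds (pessimistic), alpha = 1 only the upper bounds (optimistic).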


These two examples do not properly fall within the scope of big data. However, the group is currently developing modifications to deep learning algorithms, suitable for problems such as modelling the computational brain or predicting gene expression in specific cells, so that these intelligent techniques can be extended to Big Data. These modifications are mainly based on building ways of establishing relationships between data that can be represented mathematically, and hence on improving both the pooling and convolution steps in deep neural networks.
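As a deliberately simplified illustration of what modifying the pooling step can mean, the sketch below replaces a fixed pooling rule with an OWA (ordered weighted averaging) operator over a pooling window; max pooling and average pooling become two extreme choices of the weight vector. The weights shown are assumptions, not the ones used by the group.

```python
# Sketch of OWA-based pooling: sort the window in descending order and
# take a weighted sum. The weight vector decides where the operator
# sits between max pooling and average pooling.

def owa_pool(window, weights):
    """Pool a window of activations with ordered weighted averaging."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    xs = sorted(window, reverse=True)   # largest activation first
    return sum(w * x for w, x in zip(weights, xs))

window = [0.1, 0.9, 0.4, 0.6]
max_like = owa_pool(window, [1.0, 0.0, 0.0, 0.0])      # behaves as max pooling
avg_like = owa_pool(window, [0.25, 0.25, 0.25, 0.25])  # behaves as average pooling
```

Intermediate weight vectors interpolate between the two classical pooling rules, and because the weights can themselves be learned or made data-dependent, the pooling step becomes another place where relationships between data enter the network.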
