Publications

2015

  • Gachet Páez, D., Buenaga, M., Puertas, E., Villalba, M. T., & Muñoz Gil, R. (2015). Big data processing using wearable devices for wellbeing and healthy activities promotion. In Cleland, I., Guerrero, L., & Bravo, J. (Eds.), Ambient assisted living. ICT-based solutions in real life situations (pp. 196-205). Springer International Publishing.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    The aging population and the economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle

    @INCOLLECTION{Gachet2015a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa and Muñoz Gil, Rafael},
    title = {Big Data Processing Using Wearable Devices for Wellbeing and Healthy Activities Promotion},
    booktitle = {Ambient Assisted Living. ICT-based Solutions in Real Life Situations},
    publisher = {Springer International Publishing},
    year = {2015},
    editor = {Cleland, Ian and Guerrero, Luis and Bravo, Jos{\'e}},
    pages = {196--205},
    month = {December},
    abstract = {The aging population and the economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle},
    copyright = {Springer},
    doi = {10.1007/978-3-319-26410-3_19},
    isbn = {978-3-319-26410-3},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=0ynMYdoAAAAJ&sortby=pubdate&citation_for_view=0ynMYdoAAAAJ:vRqMK49ujn8C},
    urldate = {2015-02-02}
    }

  • Gachet Páez, D., Buenaga, M., Puertas, E., & Villalba, M. T. (2015). Big data processing of bio-signal sensors information for self-management of health and diseases. In IMIS 2015 proceedings (pp. 330-335). IEEE.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services

    @INCOLLECTION{Gachet2015b,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa},
    title = {Big Data Processing of Bio-signal Sensors Information for Self-management of Health and Diseases},
    booktitle = {IMIS 2015 Proceedings},
    publisher = {IEEE},
    year = {2015},
    pages = {330--335},
    month = {July},
    abstract = {European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems by reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2015.51},
    isbn = {978-1-4799-8872-3},
    url = {https://scholar.google.es/citations?view_op=view_citation&continue=/scholar%3Fq%3DBig%2BData%2BProcessing%2Bof%2BBio-signal%2BSensors%2BInformation%2Bfor%2BSelf-management%2Bof%2BHealth%2Band%2BDiseases%26hl%3Des%26as_sdt%3D0,5%26as_ylo%3D2015%26scilib%3D2%26scioq%3DIPHealth:%2BPlataforma%2Binteligente%2Bbasada%2Ben%2Bopen,%2Blinked%2By%2Bbig%2Bdata%2Bpara%2Bla%2Btoma%2Bde%2Bdecisiones%2By%2Baprendizaje%2Ben%2B&citilm=1&citation_for_view=0ynMYdoAAAAJ:K3LRdlH-MEoC&hl=es&oi=p},
    urldate = {2015-02-08}
    }

  • de Buenaga, M., Gachet Páez, D., Maña, M., Mata, J., Borrajo, L., & Lorenzo, E. (2015). IPHealth: plataforma inteligente basada en open, linked y big data para la toma de decisiones y aprendizaje en el ámbito de la salud. In Procesamiento de lenguaje natural (Vol. 55, pp. 161-164). SEPLN.
    [BibTeX] [Abstract] [Google Scholar]
    The IPHealth project’s main objective is to design and implement a platform with services that enable integrated and intelligent access to related information in the biomedical domain. We propose three usage scenarios: (i) assistance to healthcare professionals during the decision making process at clinical settings, (ii) access for dependent and chronic patients to relevant information about their health status and (iii) support for evidence-based training of new medical students. Most effective techniques are proposed for several NLP tasks and for extraction of information from large data sets from sets of sensors and using open data. A Web application framework and an architecture that would enable integration of processes and techniques of text and data mining will be designed. Also, this architecture has to allow an integration of information in a fast, consistent and reusable (via plugins) way.

    @INCOLLECTION{Buenaga2015b,
    author = {Buenaga, Manuel de and Gachet Páez, Diego and Maña, Manuel and Mata, Jacinto and Borrajo, Lourdes and Lorenzo, Eva},
    title = {IPHealth: Plataforma inteligente basada en open, linked y big data para la toma de decisiones y aprendizaje en el ámbito de la salud},
    booktitle = {Procesamiento de Lenguaje Natural},
    publisher = {SEPLN},
    year = {2015},
    volume = {55},
    pages = {161--164},
    month = {September},
    abstract = {The IPHealth project's main objective is to design and implement a platform with services that enable integrated and intelligent access to related information in the biomedical domain. We propose three usage scenarios: (i) assistance to healthcare professionals during the decision making process at clinical settings, (ii) access for dependent and chronic patients to relevant information about their health status and (iii) support for evidence-based training of new medical students. Most effective techniques are proposed for several NLP tasks and for extraction of information from large data sets from sets of sensors and using open data. A Web application framework and an architecture that would enable integration of processes and techniques of text and data mining will be designed. Also, this architecture has to allow an integration of information in a fast, consistent and reusable (via plugins) way.},
    copyright = {SEPLN},
    issn = {1989-7553},
    url = {https://scholar.google.es/citations?view_op=view_citation&continue=/scholar%3Fhl%3Des%26as_sdt%3D0,5%26as_ylo%3D2015%26scilib%3D2%26scioq%3DIPHealth:%2BPlataforma%2Binteligente%2Bbasada%2Ben%2Bopen,%2Blinked%2By%2Bbig%2Bdata%2Bpara%2Bla%2Btoma%2Bde%2Bdecisiones%2By%2Baprendizaje%2Ben%2B&citilm=1&citation_for_view=0ynMYdoAAAAJ:Tiz5es2fbqcC&hl=es&oi=p},
    urldate = {2015-02-02}
    }

2014

  • Sasián, F., Therón, R., & Gachet Páez, D. (2014). Protocolo para comunicación inalámbrica de alta eficiencia en instalaciones de energías renovables. In Novática (pp. 33-38). ATI.
    [BibTeX] [Abstract] [Google Scholar]
    Durante estos últimos cuatro años, la industria fotovoltaica (FV) ha tenido que enfrentarse a su primer proceso de consolidación, debido, entre otros factores, a la crisis económica. En esas circunstancias, la FV tiene la necesidad vital de reducir los costes. Una nueva línea de trabajo, la electrónica de potencia empotrada a nivel de módulo (MLPE, Module Level Power Electronics), está en plena expansión y promete aumentar no solo la eficiencia sino también la flexibilidad y la seguridad de los sistemas fotovoltaicos.

    @INCOLLECTION{Gachet2014d,
    author = {Sasián, Felix and Therón, Ricardo and Gachet Páez, Diego},
    title = {Protocolo para comunicación inalámbrica de alta eficiencia en instalaciones de energías renovables},
    booktitle = {Novática},
    publisher = {ATI},
    year = {2014},
    pages = {33-38},
    month = {December},
    abstract = {Durante estos últimos cuatro años, la industria fotovoltaica (FV) ha tenido que enfrentarse a su primer proceso de consolidación, debido, entre otros factores, a la crisis económica. En esas circunstancias, la FV tiene la necesidad vital de reducir los costes. Una nueva línea de trabajo, la electrónica de potencia empotrada a nivel de módulo (MLPE, Module Level Power Electronics), está en plena expansión y promete aumentar no solo la eficiencia sino también la flexibilidad y la seguridad de los sistemas fotovoltaicos.},
    copyright = {Open Access},
    issn = {0211-2124},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=0ynMYdoAAAAJ&sortby=pubdate&citation_for_view=0ynMYdoAAAAJ:OU6Ihb5iCvQC},
    urldate = {2014-12-12}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R. (2014). Chronic patients monitoring using wireless sensors and big data processing. In Ubiquitous computing & ambient intelligence (pp. 33-38). IEEE.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependents, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014c,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Chronic patients monitoring using wireless sensors and Big Data Processing},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {IEEE},
    year = {2014},
    series = {IMIS 2014 Proceedings},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependents, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2014.54},
    isbn = {9781479943319},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=en&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:mB3voiENLucC},
    urldate = {2014-12-12}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R. (2014). Big data and IoT for chronic patients monitoring. In Nugent, C., Coronato, A., & Bravo, J. (Eds.), Ubiquitous computing & ambient intelligence (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependents, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Big data and IoT for chronic patients monitoring},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2014},
    editor = {Nugent, Christopher and Coronato, Antonio and Bravo, José},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependents, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-13102-3_68},
    issn = {0302-9743},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:HDshCWvjkbEC},
    urldate = {2014-12-12}
    }

  • Duenas Fuentes, A., Mochon, A., Escribano, A., Pina Fernandez, J., & Gachet Paez, D. (2014). Mathematical probability model for obstructive sleep apnea syndrome. In Chest (Vol. 145, p. 597). Chest.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    Establish an econometric probability model to serve as a complementary tool, supporting the diagnostic test of respiratory polygraphy, to predict the probability that a patient suffers from OSAS.

    @INCOLLECTION{Gachet2014a,
    author = {Duenas Fuentes, Antonio and Mochon, Ana and Escribano, Ana and Pina Fernandez, Juan and Gachet Paez, Diego},
    title = {Mathematical probability model for Obstructive Sleep Apnea Syndrome},
    booktitle = {Chest},
    publisher = {Chest},
    year = {2014},
    editor = {Chest},
    volume = {145},
    pages = {597},
    month = {June},
    abstract = {Establish an econometric probability model to serve as a complementary tool, supporting the diagnostic test of respiratory polygraphy, to predict the probability that a patient suffers from OSAS.},
    doi = {10.1378/chest.1785482},
    issn = {1931-3543},
    url = {http://abacus.universidadeuropea.es/handle/11268/3382},
    urldate = {2014-12-12}
    }

2013

  • Gachet Páez, D., Padrón, V., Buenaga, M., & Aparicio, F. (2013). Improving health services using cloud computing, big data and wireless sensors. In Nugent, C., Coronato, A., & Bravo, J. (Eds.), Ambient assisted living and active aging (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline the health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependents, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.

    @INCOLLECTION{Gachet2013a,
    author = {Gachet Páez, Diego and Padrón, Víctor and Buenaga, Manuel and Aparicio, Fernando},
    title = {Improving Health Services Using Cloud Computing, Big Data and Wireless Sensors},
    booktitle = {Ambient Assisted Living and Active Aging},
    publisher = {Springer Berlin Heidelberg},
    year = {2013},
    editor = {Nugent, Christopher and Coronato, Antonio and Bravo, José},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline the health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependents, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-03092-0_5},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?q=allintitle%3AImproving+Health+Services+Using+Cloud+Computing%2C+Big+Data+and+Wireless+Sensors&btnG=&hl=es&as_sdt=0%2C5},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Rubio, M. (2013). Highly personalized health services using cloud and sensors. In Proceedings of the 2013 seventh international conference on innovative mobile and internet services in ubiquitous computing (pp. 451-455). IEEE Computer Society.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline the health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependents, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services based on cloud computing and vital signs sensors.

    @INCOLLECTION{Gachet2013b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita},
    title = {Highly Personalized Health Services Using Cloud and Sensors},
    booktitle = {Proceedings of the 2013 Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing},
    publisher = {IEEE Computer Society},
    year = {2013},
    pages = {451-455},
    month = {July},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline the health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependents, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services based on cloud computing and vital signs sensors.},
    copyright = {©2013 IEEE},
    doi = {10.1109/IMIS.2013.81},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AHighly+Personalized+Health+Services+Using+Cloud+and+Sensors&btnG=&lr=},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Ascanio, J. R., & Sánchez de Pedro, I. (2013). Computación en la nube, big data y sensores inalámbricos para la provisión de nuevos servicios de salud. Novática. Revista de la Asociación de Técnicos en Informática, (224), 66-71.
    [BibTeX] [Abstract] [Google Scholar]
    Vivimos en una sociedad caracterizada por el envejecimiento de la población y actualmente inmersa en una profunda crisis económica que implica la reducción de costes de los servicios públicos y entre ellos el de salud. Es asimismo ineludible la necesidad de optimizar los recursos de los sistemas sanitarios promoviendo el desarrollo de nuevos servicios médicos basados en telemedicina, monitorización de enfermos crónicos, servicios de salud personalizados, etc. Es de esperar que estas nuevas aplicaciones incrementen de forma significativa el volumen de la información sanitaria a gestionar, incluyendo datos de sensores biológicos, historiales clínicos, información de contexto, etc. que a su vez necesitan de la disponibilidad de las aplicaciones de salud en cualquier lugar y momento y que sean accesibles desde cualquier dispositivo. En este artículo se propone una solución para la creación de estos nuevos servicios, especialmente en entornos exteriores, en base al uso de computación en la nube y monitorización de signos vitales.

    @OTHER{GachetNovatica2013a,
    author = {Gachet Páez, Diego and Ascanio, Juan Ramón and Sánchez de Pedro, Israel},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {224},
    pages = {66-71},
    month = {August},
    title = {Computación en la nube, Big Data y Sensores Inalámbricos para la provisión de nuevos servicios de salud},
    abstract = {Vivimos en una sociedad caracterizada por el envejecimiento de la población y actualmente
    inmersa en una profunda crisis económica que implica la reducción de costes de los servicios públicos y
    entre ellos el de salud. Es asimismo ineludible la necesidad de optimizar los recursos de los sistemas
    sanitarios promoviendo el desarrollo de nuevos servicios médicos basados en telemedicina, monitorización
    de enfermos crónicos, servicios de salud personalizados, etc. Es de esperar que estas nuevas aplicaciones
    incrementen de forma significativa el volumen de la información sanitaria a gestionar, incluyendo datos de
    sensores biológicos, historiales clínicos, información de contexto, etc. que a su vez necesitan de la
    disponibilidad de las aplicaciones de salud en cualquier lugar y momento y que sean accesibles desde
    cualquier dispositivo. En este artículo se propone una solución para la creación de estos nuevos servicios,
    especialmente en entornos exteriores, en base al uso de computación en la nube y monitorización de signos
    vitales.},
    doi = {},
    url = {http://scholar.google.es/scholar?q=novatica+computaci%C3%B3n+en+la+nube%2C+big+data+sensores+inal%C3%A1mbricos+servicios+de+salud&btnG=&hl=es&as_sdt=0%2C5},
    year = {2013},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Busto, M. J. (2013). Virtual Cloud Carer: new e-health services for chronic patients. Proceedings AAL Forum 2013.
    [BibTeX] [Abstract]
    Current estimates claim there are 1.300.000 dependent persons in Spain, and public spending in 2010 was 5.500 million euros for the care of 650.000 dependents. Chronic diseases are increasing: diabetes cost €68.300 million in 2007 and will grow to €80.900 million by 2025. Cardiovascular diseases cost Europe €109,000 million in 2006 (10% of the total healthcare cost; 7% in Spain). EPOC, asthma, lung cancer, pneumonia and tuberculosis are responsible for 20% of all deaths and generate a cost of €84,000 million in Europe. EPOC affects 44 million people in Europe, with a prevalence of 5-10% among the population older than 40 years.

    @OTHER{GachetAAL2013a,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Busto, María José},
    journal = {Proceedings AAL Forum 2013},
    month = {September},
    title = {Virtual Cloud Carer: New e-health Services for Chronic Patients},
    abstract = {Current estimates claim there are 1.300.000 dependent persons in Spain, and public spending in 2010 was 5.500 million euros for the care of 650.000 dependents. Chronic diseases are increasing: diabetes cost €68.300 million in 2007 and will grow to €80.900 million by 2025. Cardiovascular diseases cost Europe €109,000 million in 2006 (10% of the total healthcare cost; 7% in Spain). EPOC, asthma, lung cancer, pneumonia and tuberculosis are responsible for 20% of all deaths and generate a cost of €84,000 million in Europe. EPOC affects 44 million people in Europe, with a prevalence of 5-10% among the population older than 40 years.},
    year = {2013},
    urldate = {2014-01-01}
    }

  • Puertas, E., Prieto, M. L., & de Buenaga, M. (2013). Mobile application for accessing biomedical information using linked open data. Paper presented at the MobileMed 2013 Conference.
    [BibTeX] [Abstract] [Google Scholar]
    This paper aims to introduce a mobile application for accessing biomedical information extracted from public open resources like Freebase, DBpedia or PubMed. Our app exploits the interlinked nature of those sources to ease access to heterogeneous resources. The app was developed using HTML5 and JavaScript and then compiled to different platforms like Android or iOS.

    @inproceedings{MobMedEPuertas,
    author = {Puertas, Enrique and Prieto, Maria Lorena and Buenaga, Manuel de},
    abstract = {This paper aims to introduce a mobile application for accessing biomedical information extracted from public open resources like Freebase, DBpedia or PubMed. Our app exploits the interlinked nature of those sources to ease access to heterogeneous resources. The app was developed using HTML5 and JavaScript and then compiled to different platforms like Android or iOS.},
    title = {Mobile Application for Accessing Biomedical Information Using Linked Open Data},
    booktitle = {Proceedings from the MobileMed 2013 Conference},
    year = {2013},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Application+for+Accessing+Biomedical+Information+Using+Linked+Open+Data&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Prieto, M. L., Aparicio, F., Buenaga, M., Gachet Páez, D., & Gaya, M. C. (2013). Cross-lingual intelligent information access system from clinical cases using mobile devices. Procesamiento del lenguaje natural, 50, 85-92.
    [BibTeX] [Abstract] [Google Scholar]
    Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which performs the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities. On recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to originally extracted and search for scientific publications from MEDLINE/PubMed.

    @article{PLN4663,
    author = {Prieto, Maria Lorena and Aparicio, Fernando and Buenaga, Manuel and Gachet Páez, Diego and Gaya, Maria Cruz},
    title = {Cross-lingual intelligent information access system from clinical cases using mobile devices},
    journal = {Procesamiento del Lenguaje Natural},
    volume = {50},
    number = {0},
    pages = {85-92},
    year = {2013},
    abstract = {Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which performs the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities.
    On recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to originally extracted and search for scientific publications from MEDLINE/PubMed.},
    issn = {1989-7553},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACross-lingual+intelligent+information+access+system+from+clinical+cases+using++mobile+devices&btnG=&hl=es&as_sdt=0%2C5}}

  • Gaya López, M. C., Aparicio Galisteo, F., Villalba Benito, M. T., Gomez Fernandez, E., Ferrari Golinelli, G., Redondo Duarte, S., & Iniesta Casanova, J. (2013). Improving accessibility in discussion forums. Paper presented at the INTED2013 Proceedings.
    [BibTeX] [Google Scholar]
    @InProceedings{GAYALOPEZ2013IMP,
    author = {Gaya L{\'{o}}pez, Maria Cruz and Aparicio Galisteo, Fernando and Villalba Benito, M.T. and Gomez Fernandez, Estrella and Ferrari Golinelli, G. and Redondo Duarte, S. and Iniesta Casanova, Jesus},
    title = {Improving Accessibility In Discussion Forums},
    series = {7th International Technology, Education and Development Conference},
    booktitle = {INTED2013 Proceedings},
    isbn = {978-84-616-2661-8},
    issn = {2340-1079},
    publisher = {IATED},
    location = {Valencia, Spain},
    month = {4-5 March, 2013},
    year = {2013},
    pages = {6658-6665},
    url={http://scholar.google.es/scholar?hl=es&q=allintitle%3A+IMPROVING+ACCESSIBILITY+IN+DISCUSSION+FORUMS&btnG=&lr=}
    }

  • López-Fernández, H., Reboiro-Jato, M., Glez-Peña, D., Aparicio, F., Gachet Páez, D., Buenaga, M., & Fdez-Riverola, F. (2013). BioAnnote: a software platform for annotating biomedical documents with application in medical learning environments. Computer methods and programs in biomedicine, 111(1), 139-147.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.

    @article{LópezFernández2013139,
    title = {BioAnnote: A software platform for annotating biomedical documents with application in medical learning environments},
    journal = {Computer Methods and Programs in Biomedicine},
    volume = {111},
    number = {1},
    pages = {139-147},
    year = {2013},
    issn = {0169-2607},
    doi = {10.1016/j.cmpb.2013.03.007},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABioAnnote%3A+A+software+platform+for+annotating+biomedical+documents+with+application+in+medical+learning+environments&btnG=&hl=es&as_sdt=0%2C5},
    author = {López-Fernández, H. and Reboiro-Jato, M. and Glez-Peña, D. and Aparicio, Fernando and Gachet Páez, Diego and Buenaga, Manuel and Fdez-Riverola, F.},
    abstract = {Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.}
    }

2012

  • Molina, M., & Flores, V. (2012). Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach. Expert syst. appl., 39(3), 2759-2770.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.

    @article{DBLP:journals/eswa/MolinaF12,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.},
    title = {Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach},
    journal = {Expert Syst. Appl.},
    volume = {39},
    number = {3},
    year = {2012},
    pages = {2759-2770},
    doi = {10.1016/j.eswa.2011.08.135},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AGenerating+multimedia+presentations+that+summarize+the+behavior+of+dynamic+systems+using+a+model-based+approach&btnG=&lr=}
    }

  • Aparicio, F., Buenaga, M., Rubio, M., & Hernando, A. (2012). An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students. Computers and education, 58(4), 1282-1295.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process of the case in which is taken into account the students’ knowledge level. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students’ perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system identifying these concepts and providing direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool’s support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.

    @ARTICLE{Aparicio2012,
    author = {Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita and Hernando, Asunción},
    title = {An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students},
    journal = {Computers And Education},
    year = {2012},
    volume = {58},
    pages = {1282-1295},
    number = {4},
    month = {May},
    abstract = {In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process of the case in which is taken into account the students’ knowledge level. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students’ perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system identifying these concepts and providing direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool’s support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.},
    doi = {10.1016/j.compedu.2011.12.021},
    issn = {0360-1315},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Information+Access+system+assisting+a+Case+Based+Learning+methodology+evaluated+in+higher+education+with+medical+students&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Aparicio Galisteo, F., & Buenaga Rodríguez, M. (2012). Métodos cuantitativos y cualitativos de evaluación de sistemas multilingüe y multimedia de acceso inteligente a la información biomédica en contextos de educación superior. Seminario MAVIR.
    [BibTeX] [Abstract] [Google Scholar]
    Los sistemas de acceso inteligente a la información están relacionados, habitualmente, con aquellos capaces de aglutinar el conocimiento a partir de recursos de terceros. Existe una tendencia creciente en biomedicina que consiste en ofrecer los recursos desarrollados a través de servicios Web, poniéndolos a disposición de otros investigadores biomédicos y haciendo que este campo de estudio sea muy apropiado para el desarrollo de sistemas que aprovechen y exploten diferentes fuentes de información. Por otro lado, estas fuentes de información poco a poco van afrontando la problemática del lenguaje, disponiéndose en algunos casos de recursos en diferentes idiomas. Muchas de las tareas relacionadas con el procesamiento del lenguaje natural, la minería de textos, la recuperación de información o la extracción de información, son evaluadas con medidas cuantitativas basadas en la precisión y la cobertura de los algoritmos. Sin embargo, muchos de estos sistemas tienen ámbitos de aplicación aptos para una gran variedad de usuarios finales, siendo imprescindible, en este caso, obtener medidas en las que los usuarios valoren la utilidad de los mismos para llevar a cabo tareas concretas. En este seminario proponemos el análisis de estos sistemas a partir de un conjunto de métodos cuantitativos y cualitativos, que permiten la evaluación de la percepción de los usuarios finales sobre los sistemas para llevar a cabo diferentes tipos de actividades de aprendizaje en el contexto de la educación superior, estando estos grupos de usuarios, por tanto, formados por profesores o alumnos en ciencias de la salud.

    @OTHER{AparicioGalisteo2012,
    abstract = {Los sistemas de acceso inteligente a la información están relacionados, habitualmente, con aquellos capaces de aglutinar el conocimiento a partir de recursos de terceros. Existe una tendencia creciente en biomedicina que consiste en ofrecer los recursos desarrollados a través de servicios Web, poniéndolos a disposición de otros investigadores biomédicos y haciendo que este campo de estudio sea muy apropiado para el desarrollo de sistemas que aprovechen y exploten diferentes fuentes de información. Por otro lado, estas fuentes de información poco a poco van afrontando la problemática del lenguaje, disponiéndose en algunos casos de recursos en diferentes idiomas. Muchas de las tareas relacionadas con el procesamiento del lenguaje natural, la minería de textos, la recuperación de información o la extracción de información, son evaluadas con medidas cuantitativas basadas en la precisión y la cobertura de los algoritmos. Sin embargo, muchos de estos sistemas tienen ámbitos de aplicación aptos para una gran variedad de usuarios finales, siendo imprescindible, en este caso, obtener medidas en las que los usuarios valoren la utilidad de los mismos para llevar a cabo tareas concretas. En este seminario proponemos el análisis de estos sistemas a partir de un conjunto de métodos cuantitativos y cualitativos, que permiten la evaluación de la percepción de los usuarios finales sobre los sistemas para llevar a cabo diferentes tipos de actividades de aprendizaje en el contexto de la educación superior, estando estos grupos de usuarios, por tanto, formados por profesores o alumnos en ciencias de la salud.},
    address = {Madrid},
    author = {Aparicio Galisteo, Fernando and Buenaga Rodríguez, Manuel},
    journal = {Seminario MAVIR},
    month = {Jun},
    title = {Métodos cuantitativos y cualitativos de evaluación de sistemas multilingüe y multimedia de acceso inteligente a la información biomédica en contextos de educación superior},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+M%C3%A9todos+cuantitativos+y+cualitativos+de+evaluaci%C3%B3n+de+sistemas+multiling%C3%BCe+y+multimedia+de+acceso+inteligente+a+la+informaci%C3%B3n+biom%C3%A9dica+en+contextos+de+educaci%C3%B3n+superior&btnG=&hl=es&as_sdt=0},
    year = {2012}
    }

  • Cortizo, J. C., Carrero, F., Cantador, I., Troyano, J. A., & Rosso, P. (2012). Introduction to the special section on search and mining user-generated content. ACM transactions on intelligent systems and technology, 3(4), 1-3.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    The primary goal of this special section of ACM Transactions on Intelligent Systems and Technology is to foster research in the interplay between Social Media, Data/Opinion Mining and Search, aiming to reflect the actual developments in technologies that exploit user-generated content.

    @ARTICLE{Cortizo2012,
    author = {Cortizo, José Carlos and Carrero, Francisco and Cantador, Iván and Troyano, José Antonio and Rosso, Paolo},
    title = {Introduction to the Special Section on Search and Mining User-Generated Content},
    journal = {ACM Transactions on Intelligent Systems and Technology},
    year = {2012},
    volume = {3},
    pages = {1-3},
    number = {4},
    month = {September},
    abstract = {The primary goal of this special section of ACM Transactions on Intelligent Systems and Technology is to foster research in the interplay between Social Media, Data/Opinion Mining and Search, aiming to reflect the actual developments in technologies that exploit user-generated content.},
    chapter = {65},
    doi = {10.1145/2337542.2337550},
    issn = {2157-6904},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+the+Special+Section+on+Search+and+Mining+User-Generated+Content&btnG=&hl=es&as_sdt=0%2C5},
    urldate = {2013-01-10}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Padron, V. (2012). Personalized health care system with virtual reality rehabilitation and appropriate information for seniors. Sensors, 12(5), 5502-5516.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last years. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elderly and Persons with Disability to Access the Information Society) is a European effort, whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the internet and the information society. Naviga also allows the creation of services targeted to social networks, mind training and personalized health care. In this paper we focus on the health care and information services designed on the project, the technological platform developed and details of two representative elements, the virtual reality hand rehabilitation and the health information intelligent system.

    @ARTICLE{Gachet2012a,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Padron, Victor},
    title = {Personalized Health Care System with Virtual Reality Rehabilitation and Appropriate Information for Seniors},
    journal = {Sensors},
    year = {2012},
    volume = {12},
    pages = {5502-5516},
    number = {5},
    month = {April},
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last years. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elderly and Persons with Disability to Access the Information Society) is a European effort, whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the internet and the information society. Naviga also allows the creation of services targeted to social networks, mind training and personalized health care. In this paper we focus on the health care and information services designed on the project, the technological platform developed and details of two representative elements, the virtual reality hand rehabilitation and the health information intelligent system.},
    doi = {10.3390/s120505502},
    issn = {1424-8220},
    url = {http://scholar.google.es/scholar?q=allintitle%3APersonalized+Health+Care+System+with+Virtual+Reality+Rehabilitation+and+Appropriate+Information+for+Seniors&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-19}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Aparicio Galisteo, F., & Padrón, V. (2012). Integrating internet of things and cloud computing for health services provisioning: the Virtual Cloud Carer project. Sixth international conference on innovative mobile and internet services in ubiquitous computing, 918-921.
    [BibTeX] [Abstract] [View publication] [Google Scholar]
    The demographic and social changes are causing a gradual increase of the population in situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence; health is also the primary cause of suffering and self-rated ill health. Since elderly people have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population aging. This paper describes the preliminary advances of Virtual Cloud Carer (VCC), a Spanish national R&D project, whose primary purpose is the creation of new health services for dependent and chronic elderly people, using technologies associated with the internet of things and cloud computing.

    @OTHER{Gachet2012,
    abstract = {The demographic and social changes are causing a gradual increase of the population in situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence; health is also the primary cause of suffering and self-rated ill health. Since elderly people have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population aging. This paper describes the preliminary advances of Virtual Cloud Carer (VCC), a Spanish national R&D project, whose primary purpose is the creation of new health services for dependent and chronic elderly people, using technologies associated with the internet of things and cloud computing.},
    address = {Palermo},
    author = {Gachet Páez, Diego and Buenaga Rodríguez, Manuel and Aparicio Galisteo, Fernando and Padrón, Victor},
    doi = {10.1109/IMIS.2012.25},
    journal = {Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing},
    month = {July},
    pages = {918-921},
    title = {Integrating Internet of Things and Cloud Computing for Health Services Provisioning: The Virtual Cloud Carer Project},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+Internet+of+Things+and+Cloud+Computing+for+Health+Services+Provisioning%3A+The+Virtual+Cloud+Carer+Project&btnG=&hl=es&as_sdt=0},
    year = {2012}
    }

  • Muñoz Gil, R., Aparicio Galisteo, F., & Buenaga Rodríguez, M.. (2012). Sistema de acceso a la información basado en conceptos utilizando freebase en español-inglés sobre el dominio médico y turístico. Procesamiento de lenguaje natural, 49.
    [BibTeX] [Abstract] [Google Scholar]
    In this paper we present a concept-based information access tool aimed at both medical and tourism texts. Using techniques for marking recognized entities, the system extracts relevant concepts and provides further information about them using collaborative knowledge bases and ontologies. Components of particular interest for the development of the system are Freebase, a large collaborative knowledge base, together with formal resources such as MedlinePlus and PubMed. The system architecture has been built with scalability in mind, so as to constitute a large information integration platform, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information is extracted, and to ease the integration of new user interfaces.

    @ARTICLE{MunozGil2012,
    author = {Muñoz Gil, Rafael and Aparicio Galisteo, Fernando and Buenaga Rodríguez, Manuel},
    title = {Sistema de Acceso a la Información basado en conceptos utilizando Freebase en Español-Inglés sobre el dominio Médico y Turístico},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2012},
    volume = {49},
    abstract = {In this paper we present a concept-based information access tool aimed at both medical and tourism texts. Using techniques for marking recognized entities, the system extracts relevant concepts and provides further information about them using collaborative knowledge bases and ontologies. Components of particular interest for the development of the system are Freebase, a large collaborative knowledge base, together with formal resources such as MedlinePlus and PubMed. The system architecture has been built with scalability in mind, so as to constitute a large information integration platform, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information is extracted, and to ease the integration of new user interfaces.},
    issn = {1135-5948},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Sistema+de+Acceso+a+la+Informaci%C3%B3n+basado+en+conceptos+utilizando+Freebase+en+Espa%C3%B1ol-Ingl%C3%A9s+sobre+el+dominio+M%C3%A9dico+y+Tur%C3%ADstico&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Aparicio, F., Ascanio, J. R., & Beaterio, A.. (2012). Innovative health services using cloud computing and internet of things. In Ubiquitous computing and ambient intelligence (pp. 415-421). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependent and chronically ill people, using technologies associated with the internet of things and cloud computing.

    @INCOLLECTION{Paez2012,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Ascanio, Juan R. and Beaterio, Alberto},
    title = {Innovative Health Services Using Cloud Computing and Internet of Things},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2012},
    series = {Lecture Notes in Computer Science},
    pages = {415-421},
    month = {January},
    abstract = {Demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependent and chronically ill people, using technologies associated with the internet of things and cloud computing.},
    copyright = {©2012 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/978-3-642-35377-2_58},
    isbn = {978-3-642-35376-5, 978-3-642-35377-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+%22innovative+health+services+using+cloud+computing+and+internet+of+things%22&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-21}
    }

  • de la Villa, M., Aparicio, F., Maña, M. J., & Buenaga, M.. (2012). A learning support tool with clinical cases based on concept maps and medical entity recognition. Paper presented at the Proceedings of the 2012 acm international conference on intelligent user interfaces, New York, NY, USA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The search for truthful health information through the Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.

    @INPROCEEDINGS{Villa2012,
    author = {de la Villa, Manuel and Aparicio, Fernando and Maña, Manuel J. and Buenaga, Manuel},
    title = {A learning support tool with clinical cases based on concept maps and medical entity recognition},
    booktitle = {Proceedings of the 2012 ACM international conference on Intelligent User Interfaces},
    year = {2012},
    series = {IUI '12},
    pages = {61-70},
    address = {New York, NY, USA},
    publisher = {ACM},
    abstract = {The search for truthful health information through the Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.},
    doi = {10.1145/2166966.2166978},
    isbn = {978-1-4503-1048-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+A+learning+support+tool+with+clinical+cases+based+on+concept+maps+and+medical+entity+recognition&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

2011

  • Aparicio, F., Buenaga, M., Gachet Páez, D., Puertas, E., & Giráldez, I.. (2011). Tmt: a scalable platform to enrich translational medicine environments. Proceedings of the iadis international conference, e-society, 401-405.
    [BibTeX] [Abstract] [Google Scholar]
    In this paper we present TMT (Translational Medicine Tool), a scalable platform to integrate applicable techniques within the paradigm of translational medicine. Particularly relevant components for the development are Freebase, a large collaborative knowledge base, General Architecture for Text Engineering (GATE), a system for text processing, and PubMed, a scientific literature repository. The platform architecture has been built with scalability in mind, in several ways: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction is performed, and to ease the integration of new user interfaces.

    @OTHER{Aparicio2011a,
    abstract = {In this paper we present TMT (Translational Medicine Tool), a scalable platform to integrate applicable techniques within the paradigm of translational medicine. Particularly relevant components for the development are Freebase, a large collaborative knowledge base, General Architecture for Text Engineering (GATE), a system for text processing, and PubMed, a scientific literature repository. The platform architecture has been built with scalability in mind, in several ways: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction is performed, and to ease the integration of new user interfaces.},
    author = {Aparicio, Fernando and Buenaga, Manuel and Gachet Páez, Diego and Puertas, Enrique and Giráldez, Ignacio},
    journal = {Proceedings of the IADIS International conference, e-Society},
    month = {March},
    pages = {401-405},
    title = {TMT: A scalable platform to enrich translational medicine environments},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TMT%3A+A+scalable+platform+to+enrich+translational+medicine+environments&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Aparicio, F., Buenaga Rodríguez, M., Rubio, M., Hernando, M. A., Gachet Páez, D., Puertas Sanz, E., & Giráldez, I.. (2011). Tmt: a tool to guide users in finding information on clinical texts. Procesamiento de lenguaje natural, 46, 27-34.
    [BibTeX] [Abstract] [Google Scholar]
    The large amount of medical information available through the Internet, in both structured and text formats, means that different types of users encounter different problems when they have to carry out an effective search. On the one hand, medical students, health staff and researchers in the field of biomedicine have a variety of sources and tools of different characteristics which require a learning period that is sometimes insurmountable. On the other hand, patients, family members and people outside of the medical profession face the added problem of not being sufficiently familiarized with medical terminology. In this paper we present a tool that can extract relevant medical concepts present in a clinical text, using techniques for named entity recognition, applied on lists of concepts, and annotation techniques from ontologies. To propose these concepts, our tool makes use of a non-formal knowledge source, such as Freebase, and formal resources such as MedlinePlus and PubMed. We argue that the combination of these resources, with less formal information in plainer language (like Freebase), formal information in plain language (like MedlinePlus) or formal information in more technical language (such as the PubMed scientific literature), optimizes the process of discovering medical information on a complex clinical case for users with different profiles and needs, such as patients, doctors or researchers. Our ultimate goal is to build a platform to accommodate different techniques facilitating the practice of translational medicine.

    @MISC{Aparicio2011b,
    author = {Aparicio, Fernando and Buenaga Rodríguez, Manuel and Rubio, Margarita and Hernando, María Asunción and Gachet Páez, Diego and Puertas Sanz, Enrique and Giráldez, Ignacio},
    title = {TMT: A tool to guide users in finding information on clinical texts},
    howpublished = {http://www.sepln.org/ojs/ojs-2.2/index.php/pln/article/viewArticle/836},
    year = {2011},
    abstract = {The large amount of medical information available through the Internet, in both structured and text formats, means that different types of users encounter different problems when they have to carry out an effective search. On the one hand, medical students, health staff and researchers in the field of biomedicine have a variety of sources and tools of different characteristics which require a learning period that is sometimes insurmountable. On the other hand, patients, family members and people outside of the medical profession face the added problem of not being sufficiently familiarized with medical terminology. In this paper we present a tool that can extract relevant medical concepts present in a clinical text, using techniques for named entity recognition, applied on lists of concepts, and annotation techniques from ontologies. To propose these concepts, our tool makes use of a non-formal knowledge source, such as Freebase, and formal resources such as MedlinePlus and PubMed. We argue that the combination of these resources, with less formal information in plainer language (like Freebase), formal information in plain language (like MedlinePlus) or formal information in more technical language (such as the PubMed scientific literature), optimizes the process of discovering medical information on a complex clinical case for users with different profiles and needs, such as patients, doctors or researchers. Our ultimate goal is to build a platform to accommodate different techniques facilitating the practice of translational medicine.},
    journal = {Procesamiento de Lenguaje Natural},
    pages = {27-34},
    shorttitle = {TMT},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TMT%3A+A+tool+to+guide+users+in+finding+information+on+clinical+texts&btnG=&hl=es&as_sdt=0},
    volume = {46}
    }

  • Aparicio, F., Muñoz, R., Buenaga, M., & Puertas, E.. (2011). Mdfaces: an intelligent system to recognize significant terms in texts from different domains using freebase. Procesamiento de lenguaje natural, 47, 317-318.
    [BibTeX] [Abstract] [Google Scholar]
    MDFaces (Multi-Domain Faces) is an intelligent system that allows recognition of relevant concepts in texts from different domains, and shows detailed semantic information related to these concepts. For its development, a methodology that uses a general knowledge ontology called Freebase has been employed. In particular, we have implemented this methodology for the medical and tourism domains.

    @ARTICLE{Aparicio2011,
    author = {Aparicio, Fernando and Muñoz, Rafael and Buenaga, Manuel and Puertas, Enrique},
    title = {MDFaces: An intelligent system to recognize significant terms in texts from different domains using Freebase},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2011},
    volume = {47},
    pages = {317--318},
    month = {September},
    abstract = {MDFaces (Multi-Domain Faces) is an intelligent system that allows recognition of relevant concepts in texts from different domains, and shows detailed semantic information related to these concepts. For its development, a methodology that uses a general knowledge ontology called Freebase has been employed. In particular, we have implemented this methodology for the medical and tourism domains.},
    copyright = {The intellectual property of the articles belongs to the authors, and the editing and publication rights to the journal. Articles published in the journal may be used freely for educational and scientific purposes, provided the article is properly cited. Any commercial use is expressly forbidden by law.},
    issn = {1989-7553},
    language = {es\_ES},
    shorttitle = {MDFaces},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMDFaces%3A+An+intelligent+system+to+recognize+significant+terms+in+texts+from+different+domains+using+Freebase&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-19}
    }

  • Buenaga Rodriguez, M., Rubio, M., Aparicio Galisteo, F., & Hernando, A.. (2011). Conceptcase: una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en internet. Viii jornadas internacionales de innovación universitaria.
    [BibTeX] [Abstract] [Google Scholar]
    In this work we present ConceptCase, a methodology aimed at integrating concept-based learning and case-based learning. The methodology is based on allowing students to easily explore in depth the concepts that appear in a case (we have focused on clinical cases and medical students), thanks to the use of an intelligent system for accessing information on the Internet that identifies the concepts and provides direct access to information about them. To define and evaluate our methodology, we carried out an initial experience on a clinical case within a second-year course of the Degree in Medicine. The specific case was that of a patient with a cardiac pathology, in which concepts related to the description of the disease, its evolution and treatment arise, and we selected MedlinePlus and Freebase as ontologies or concept bases. We conducted an evaluation with a group of 60 students, obtaining positive results both in terms of objective learning outcomes and user satisfaction.

    @INPROCEEDINGS{BuenagaRodriguez2011,
    author = {Buenaga Rodriguez, Manuel and Rubio, Margarita and Aparicio Galisteo, Fernando and Hernando, Asunción},
    title = {ConceptCase: Una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en Internet},
    year = {2011},
    abstract = {In this work we present ConceptCase, a methodology aimed at integrating concept-based learning and case-based learning. The methodology is based on allowing students to easily explore in depth the concepts that appear in a case (we have focused on clinical cases and medical students), thanks to the use of an intelligent system for accessing information on the Internet that identifies the concepts and provides direct access to information about them. To define and evaluate our methodology, we carried out an initial experience on a clinical case within a second-year course of the Degree in Medicine. The specific case was that of a patient with a cardiac pathology, in which concepts related to the description of the disease, its evolution and treatment arise, and we selected MedlinePlus and Freebase as ontologies or concept bases. We conducted an evaluation with a group of 60 students, obtaining positive results both in terms of objective learning outcomes and user satisfaction.},
    journal = {VIII Jornadas Internacionales de Innovación Universitaria},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+UNA+METODOLOG%C3%8DA+PARA+LA++INTEGRACI%C3%93N+DE+APRENDIZAJE+BASADO+EN++CONCEPTOS+SOBRE+CASOS+CL%C3%8DNICOS+MEDIANTE++SISTEMAS+INTELIGENTES+DE+ACCESO+A++INFORMACI%C3%93N+EN+INTERNET&btnG=&hl=es&as_sdt=0}
    }

  • Cantador, I., Cortizo, J. C., Carrero, F., Troyano, J. A., Rosso, P., & Schedl, M.. (2011). Overview of the third international workshop on search and mining user-generated contents. Paper presented at the Proceedings of the 20th acm international conference on information and knowledge management, New York, NY, USA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper, we provide an overview of the 3rd International Workshop on Search and Mining User-generated Contents, held in conjunction with the 20th ACM International Conference on Information and Knowledge Management. We present the motivation and goals of the workshop, and some statistics and details about accepted papers and keynotes.

    @INPROCEEDINGS{Cantador2011,
    author = {Cantador, Ivan and Cortizo, José Carlos and Carrero, Francisco and Troyano, Jose A. and Rosso, Paolo and Schedl, Markus},
    title = {Overview of the third international workshop on search and mining user-generated contents},
    booktitle = {Proceedings of the 20th ACM international conference on Information and knowledge management},
    year = {2011},
    pages = {2625-2626},
    address = {New York, NY, USA},
    publisher = {ACM},
    abstract = {In this paper, we provide an overview of the 3rd International Workshop on Search and Mining User-generated Contents, held in conjunction with the 20th ACM International Conference on Information and Knowledge Management. We present the motivation and goals of the workshop, and some statistics and details about accepted papers and keynotes.},
    doi = {10.1145/2063576.2064045},
    isbn = {978-1-4503-0717-8},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Overview+of+the+third+international+workshop+on+search+and+mining+user-generated+contents&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • Cortizo, J. C., Carrero, F. M., & Gómez, J. M.. (2011). Introduction to the special issue: mining social media. International journal of electronic commerce, 15(3), 5-8.
    [BibTeX] [Ver publicacion] [Google Scholar]
    @ARTICLE{Cortizo2011,
    author = {Cortizo, José Carlos and Carrero, Francisco M. and Gómez, José María},
    title = {Introduction to the Special Issue: Mining Social Media},
    journal = {International Journal of Electronic Commerce},
    year = {2011},
    volume = {15},
    pages = {5-8},
    number = {3},
    month = {April},
    doi = {10.2753/JEC1086-4415150301},
    issn = {1086-4415},
    shorttitle = {Introduction to the Special Issue},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+the+Special+Issue%3A+Mining+Social+Media&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • Cortizo Pérez, J. C., Díaz, L. I., Carrero, F., Yanes, A., & Monsalve, B.. (2011). On the future of mobile phones as the heart of community-built databases. In Community-built databases: research and development (pp. 261-288). Springer.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In retrospect, 10 years ago, we would not have imagined ourselves uploading or consuming high-quality videos via the Web, contributing to an online encyclopedia written by millions of users around the world or instantly sharing information with our friends and colleagues using an online platform that allows us to manage our contacts. And the Web is still evolving, and what seemed to be science fiction then would become reality within 5-10 years. Nowadays, the Mobile Web concept is still an immature prototype of what it will be in a few years' time, but it represents a giant industry (it is expected that some five billion people will be using mobile/cellular phones in 2010) with even greater possibilities in the future. In this paper, we examine the possible future of mobile devices as the heart of community-built databases. The characteristics of mobile devices, both current and future, will allow them to have a very relevant role not only as interfaces to community-driven databases, but also as platforms where applications using data from community-driven databases will be running, or even as distributed databases where users can have better control of the relevant data they are contributing to those databases.

    @BOOK{CortizoPerez2011,
    title = {On the Future of Mobile Phones as the Heart of Community-Built Databases},
    publisher = {Springer},
    year = {2011},
    author = {Cortizo Pérez, José Carlos and Díaz, Luis Ignacio and Carrero, Francisco and Yanes, Adrián and Monsalve, Borja},
    pages = {261-288},
    month = {January},
    abstract = {In retrospect, 10 years ago, we would not have imagined ourselves uploading or consuming high-quality videos via the Web, contributing to an online encyclopedia written by millions of users around the world or instantly sharing information with our friends and colleagues using an online platform that allows us to manage our contacts. And the Web is still evolving, and what seemed to be science fiction then would become reality within 5-10 years. Nowadays, the Mobile Web concept is still an immature prototype of what it will be in a few years' time, but it represents a giant industry (it is expected that some five billion people will be using mobile/cellular phones in 2010) with even greater possibilities in the future. In this paper, we examine the possible future of mobile devices as the heart of community-built databases. The characteristics of mobile devices, both current and future, will allow them to have a very relevant role not only as interfaces to community-driven databases, but also as platforms where applications using data from community-driven databases will be running, or even as distributed databases where users can have better control of the relevant data they are contributing to those databases.},
    booktitle = {Community-Built Databases: Research and Development},
    doi = {10.1007/978-3-642-19047-6_11},
    isbn = {9783642190476},
    language = {en},
    shorttitle = {Community-Built Databases},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+On+the+Future+of+Mobile+Phones+as+the+Heart+of+Community-Built+Databases&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Aparicio Galisteo, F., Buenaga Rodríguez, M., Padrón, V., & Alanbari, M.. (2011). Personalized health care and information services for elders. Proceedings wishwell’2011.
    [BibTeX] [Google Scholar]
    @OTHER{GachetPaez2011a,
    address = {Nottingham},
    author = {Gachet Páez, Diego and Aparicio Galisteo, Fernando and Buenaga Rodríguez, Manuel and Padrón, Victor and Alanbari, Mohammad},
    journal = {Proceedings WISHWell’2011},
    month = {July},
    title = {Personalized Health Care and Information Services for Elders},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Personalized+Health+Care+and+Information+Services+for+Elders&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Gachet Páez, D., Ascanio, J. R., Giráldez, I., & Rubio, M.. (2011). Integrating personalized health care and information access for elder people. In Novais, P., Preuveneers, D., & Corchado, J. M. (Ed.), In Ambient intelligence – software and applications (Vol. 92, pp. 33-40). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elder people and Persons with Disability to Access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @INCOLLECTION{GachetPaez2011,
    author = {Gachet Páez, Diego and Ascanio, Juan R. and Giráldez, Ignacio and Rubio, Margarita},
    title = {Integrating Personalized Health Care and Information Access for Elder People},
    booktitle = {Ambient Intelligence - Software and Applications},
    publisher = {Springer Berlin Heidelberg},
    year = {2011},
    editor = {Novais, Paulo and Preuveneers, Davy and Corchado, Juan M.},
    volume = {92},
    series = {Advances in Intelligent and Soft Computing},
    pages = {33-40},
    month = {January},
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elder people and Persons with Disability to Access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    copyright = {©2011 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-642-19937-0_5},
    isbn = {978-3-642-19936-3, 978-3-642-19937-0},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+Personalized+Health+Care+and+Information+Access+for+Elder+People&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • López-Fernández, H., Aparicio Galisteo, F., Glez-Peña, D., Buenaga Rodríguez, M., & Fdez-Riverola, F.. (2011). Herramienta biomédica de anotación y acceso inteligente a información. Iii jornada gallega de bioinformática.
    [BibTeX] [Google Scholar]
    @OTHER{Lopez-Fernandez2011,
    address = {Vigo},
    author = {López-Fernández, H. and Aparicio Galisteo, Fernando and Glez-Peña, D. and Buenaga Rodríguez, Manuel and Fdez-Riverola, F.},
    journal = {III Jornada Gallega de Bioinformática},
    month = {September},
    title = {Herramienta biomédica de anotación y acceso inteligente a información},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Herramienta+biom%C3%A9dica+de+anotaci%C3%B3n+y+acceso+inteligente+a+informaci%C3%B3n&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Muñoz Gil, R., Aparicio, F., Buenaga, M., Gachet Páez, D., Puertas, E., Giráldez, I., & Gaya, M. C.. (2011). Tourist face: a contents system based on concepts of freebase for access to the cultural-tourist information. In Muñoz, R., Montoyo, A., & Métais, E. (Ed.), In Natural language processing and information systems (Vol. 6716, pp. 300-304). Berlin, Heidelberg: Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In more and more application areas large collections of digitized multimedia information are gathered and have to be maintained (e.g. in tourism, medicine, etc.). Therefore, there is an increasing demand for tools and techniques supporting the management and usage of digital multimedia data. Furthermore, new large collections of data are available through it every day. In this paper we are presenting Tourist Face, a system aimed at integrating text analysis techniques into the paradigm of multimedia information, specifically tourist multimedia information. Particularly relevant components to its development are Freebase, a large collaborative knowledge base, and General Architecture for Text Engineering (GATE), a system for text processing. The platform architecture has been built thinking in terms of scalability, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction can be performed and to ease integration of new user interfaces.

    @INCOLLECTION{MunozGil2011,
    author = {Muñoz Gil, Rafael and Aparicio, Fernando and Buenaga, Manuel and Gachet Páez, Diego and Puertas, Enrique and Giráldez, Ignacio and Gaya, Maria Cruz},
    title = {Tourist Face: A Contents System Based on Concepts of Freebase for Access to the Cultural-Tourist Information},
    booktitle = {Natural Language Processing and Information Systems},
    publisher = {Springer Berlin Heidelberg},
    year = {2011},
    editor = {Muñoz, Rafael and Montoyo, Andrés and Métais, Elisabeth},
    volume = {6716},
    series = {Lecture Notes in Computer Science},
    pages = {300-304},
    address = {Berlin, Heidelberg},
    abstract = {In more and more application areas large collections of digitized multimedia information are gathered and have to be maintained (e.g. in tourism, medicine, etc.). Therefore, there is an increasing demand for tools and techniques supporting the management and usage of digital multimedia data. Furthermore, new large collections of data are available through it every day. In this paper we are presenting Tourist Face, a system aimed at integrating text analysis techniques into the paradigm of multimedia information, specifically tourist multimedia information. Particularly relevant components to its development are Freebase, a large collaborative knowledge base, and General Architecture for Text Engineering (GATE), a system for text processing. The platform architecture has been built thinking in terms of scalability, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction can be performed and to ease integration of new user interfaces.},
    doi = {10.1007/978-3-642-22327-3_43},
    isbn = {978-3-642-22326-6},
    shorttitle = {Tourist Face},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Tourist+Face%3A+A+Contents+System+Based+on+Concepts+of+Freebase+for+Access+to+the+Cultural-Tourist+Information&btnG=&hl=es&as_sdt=0}
    }

2010

  • Gachet Páez, D., Exposito, D., Ascanio, J. R., & Garcia Leiva, R.. (2010). Integracion de servicios inteligentes de e-salud y acceso a la informacion para personas mayores. Novática. revista de la asociación de técnicos en informática(208).
    [BibTeX] [Google Scholar]
    @OTHER{GachetNovatica2010a,
    author = {Gachet Páez, Diego and Exposito, Diego and Ascanio, Juan Ramon and Garcia Leiva, Rafael},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {208},
    title = {Integracion de servicios inteligentes de e-salud y acceso a la informacion para personas mayores},
    url = {http://scholar.google.es/scholar?q=Novatica+Integracion+de+servicios+inteligentes+de+e-salud+y+acceso+a+la+informacion+para+personas+mayores&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Buenaga, M., Fdez-Riverola, F., Maña, M., Puertas, E., Glez-Peña, D., & Mata, J.. (2010). Medical-miner: integración de conocimiento textual explícito en técnicas de minería de datos para la creación de herramientas traslacionales en medicina. Xxvi congreso de la sepln (sociedad española para el procesamiento del lenguaje natural), 45, 319-320.
    [BibTeX] [Abstract] [Google Scholar]
    The project proposes to analyse, experiment and develop new text and data mining techniques in an interrelated way, in intelligent medical information systems. An intelligent information access system based on them will be developed, offering advanced functionalities able to interrelate medical information, mainly information (text and data) from clinical records and scientific documentation, making use of standard resources of the domain (e.g. UMLS, SNOMED, Gene Ontology). An open source platform will be developed integrating all the elements.

    @OTHER{Buenaga2010,
    abstract = {The project proposes to analyse, experiment and develop new text and data mining techniques in an interrelated way, in intelligent medical information systems. An intelligent information access system based on them will be developed, offering advanced functionalities able to interrelate medical information, mainly information (text and data) from clinical records and scientific documentation, making use of standard resources of the domain (e.g. UMLS, SNOMED, Gene Ontology). An open source platform will be developed integrating all the elements.},
    author = {Buenaga, Manuel and Fdez-Riverola, Florentino and Maña, Manuel and Puertas, Enrique and Glez-Peña, Daniel and Mata, Jacinto},
    journal = {XXVI Congreso de la SEPLN (Sociedad Española para el Procesamiento del Lenguaje Natural)},
    month = {September},
    pages = {319-320},
    title = {Medical-Miner: Integración de conocimiento textual explícito en técnicas de minería de datos para la creación de herramientas traslacionales en medicina},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMedical-Miner%3A+Integraci%C3%B3n+de+conocimiento+textual+expl%C3%ADcito+en+t%C3%A9cnicas+%09de+miner%C3%ADa+de+datos+para+la+creaci%C3%B3n+de+herramientas+traslacionales+%09en+medicina&btnG=&hl=es&as_sdt=0},
    volume = {45},
    year = {2010}
    }

  • Cortizo Pérez, J. C., Carrero, F. M., & Monsalve, B.. (2010). An architecture for a general purpose multi-algorithm recommender system. Proceedings of the workshop on the practical use of recommender systems, algorithms and technologies (prsat 2010), 51-54.
    [BibTeX] [Abstract] [Google Scholar]
    Although the current state of the art in Recommender Systems is good enough to allow recommendations and personalization across many application fields, developing a general-purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system architecture that allows us to face these challenges.

    @OTHER{CortizoPerez2010,
    abstract = {Although the current state of the art in Recommender Systems is good enough to allow recommendations and personalization across many application fields, developing a general-purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system architecture that allows us to face these challenges.},
    author = {Cortizo Pérez, José Carlos and Carrero, Francisco M. and Monsalve, Borja},
    journal = {Proceedings of the Workshop on the Practical Use of Recommender Systems, Algorithms and Technologies (PRSAT 2010)},
    pages = {51-54},
    title = {An Architecture for a General Purpose Multi-Algorithm Recommender System},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+An+Architecture+for+a+General+Purpose+Multi-Algorithm+Recommender+System&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., Padrón, V., & Alanbari, M.. (2010). Helping elderly people and persons with disability to access the information society. Ambient intelligence and future trends – international symposium on ambient intelligence, 72, 189-192.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010a,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Padrón, Víctor and Alanbari, Mohammad},
    booktitle = {Ambient Intelligence and Future Trends-International Symposium on Ambient Intelligence},
    doi = {10.1007/978-3-642-13268-1_23},
    pages = {189-192},
    publisher = {Springer Berlin / Heidelberg},
    series = {Advances in Soft Computing},
    title = {Helping Elderly People and Persons with Disability to Access the Information Society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+Elderly+People+and+Persons+with+Disability+to+Access+the+Information+Society&btnG=&hl=es&as_sdt=0},
    volume = {72},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., Padrón, V., & Aparicio, F.. (2010). Integrating intelligent e-services and information access for elder people. Confidence international conference.
    [BibTeX] [Abstract] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the elderly people and persons with disability to access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010b,
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the elderly people and persons with disability to access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Padrón, Víctor and Aparicio, Fernando},
    journal = {CONFIDENCE International Conference},
    title = {Integrating Intelligent e-Services and Information Access for Elder People},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+Intelligent+e-Services+and+Information+Access+for+Elder+%09People&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., Villalba, M., & Lara, P.. (2010). An open and adaptable platform for elderly people and persons with disability to access the information society. Pervasive health.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Villalba, Maite and Lara, Pedro},
    booktitle = {Pervasive Health},
    doi = {10.4108/ICST.PERVASIVEHEALTH2010.8882},
    month = {March},
    title = {An Open and adaptable platform for elderly people and persons with disability to access the information society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Open+and+adaptable+platform+for+elderly+people+and+persons+with+%09disability+to+access+the+information+society&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Padrón, V., & Alanbari, M.. (2010). Mobile and pervasive computing to helps parents of low birth weight babies. Ubiquitous computing and ambient intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2010c,
    author = {Gachet Páez, Diego and Padrón, Víctor and Alanbari, Mohammad},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    title = {Mobile and Pervasive Computing to Helps Parents of Low Birth Weight Babies},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+and+Pervasive+Computing+to+Helps+Parents+of+Low+Birth+Weight+Babies&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Escribano Otero, J. J., & Rubio, M.. (2010). Helping elderly people and persons with disability to access the information society: the naviga project. The european ambient assisted living innovation alliance (aaliance) conference 2010.
    [BibTeX] [Google Scholar]
    @OTHER{GachetPaez2010,
    address = {Málaga},
    author = {Gachet Páez, Diego and Buenaga Rodríguez, Manuel and Escribano Otero, Juan José and Rubio, Margarita},
    journal = {The European Ambient Assisted Living Innovation Alliance (AALIANCE) Conference 2010},
    month = {March},
    title = {Helping elderly people and persons with disability to access the Information Society: the Naviga Project},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+elderly+people+and+persons+with+disability+to+access+the+Information+Society%3A+the+Naviga+Project&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Gaya, M. C., & Giráldez, I. J.. (2010). Merging local patterns using an evolutionary approach. Knowledge and information systems, 29, 1-24.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes a Decentralized Agent-based model for Theory Synthesis (DATS) implemented by MASETS, a Multi-Agent System for Evolutionary Theory Synthesis. The main contributions are the following: first, a method for the synthesis of a global theory from distributed local theories. Second, a conflict resolution mechanism, based on genetic algorithms, that deals with collisions/contradictions in the knowledge discovered by different agents at their corresponding locations. Third, a system-level classification procedure that improves the results obtained from both the monolithic classifier and the best local classifier. And fourth, a method for mining very large datasets that allows for divide-and-conquer mining followed by merging of discoveries. The model is validated with an experimental application run on 15 datasets. Results show that the global theory outperforms all the local theories and the monolithic theory (obtained from mining the concatenation of all the available distributed data) in a statistically significant way.

    @ARTICLE{Gaya2010,
    author = {Gaya, María Cruz and Giráldez, J. Ignacio},
    title = {Merging local patterns using an evolutionary approach},
    journal = {Knowledge and Information Systems},
    year = {2010},
    volume = {29},
    pages = {1-24},
    abstract = {This paper describes a Decentralized Agent-based model for Theory Synthesis (DATS) implemented by MASETS, a Multi-Agent System for Evolutionary Theory Synthesis. The main contributions are the following: first, a method for the synthesis of a global theory from distributed local theories. Second, a conflict resolution mechanism, based on genetic algorithms, that deals with collisions/contradictions in the knowledge discovered by different agents at their corresponding locations. Third, a system-level classification procedure that improves the results obtained from both the monolithic classifier and the best local classifier. And fourth, a method for mining very large datasets that allows for divide-and-conquer mining followed by merging of discoveries. The model is validated with an experimental application run on 15 datasets. Results show that the global theory outperforms all the local theories and the monolithic theory (obtained from mining the concatenation of all the available distributed data) in a statistically significant way.},
    doi = {10.1007/s10115-010-0332-x},
    issn = {0219-1377},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMerging+local+patterns+using+an+evolutionary+approach&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M.. (2010). Experiencias de investigación en la universidad y en la empresa. Novática. revista de la asociación de técnicos en informática(206).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo2010a,
    author = {Gómez Hidalgo, José María},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {206},
    title = {Experiencias de investigación en la universidad y en la empresa},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Experiencias+de+investigaci%C3%B3n+en+la+universidad+y+en+la+empresa&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gómez Hidalgo, J. M., Martín Abreu, J. M., García Bringas, P., & Santos Grueiro, I.. (2010). Content security and privacy preservation in social networks through text mining. Workshop on interoperable social multimedia applications (wisma 2010).
    [BibTeX] [Abstract] [Google Scholar]
    Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.

    @OTHER{GomezHidalgo2010,
    abstract = {Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.},
    address = {Barcelona},
    author = {Gómez Hidalgo, José María and Martín Abreu, José Miguel and García Bringas, Pablo and Santos Grueiro, Igor},
    journal = {Workshop on Interoperable Social Multimedia Applications (WISMA 2010)},
    title = {Content Security and Privacy Preservation in Social Networks through Text Mining},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Content+Security+and+Privacy+Preservation+in+Social+Networks+through+Text+Mining&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

2009

  • Cortizo Pérez, J. C., Carrero García, F. M., Gómez Hidalgo, J. M., Monsalve Piqueras, B., & Puertas Sanz, E.. (2009). Introduction to mining social media. 13th conference of the spanish association for artificial intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{CortizoPerez2009,
    author = {Cortizo Pérez, José Carlos and Carrero García, Francisco M. and Gómez Hidalgo, José María and Monsalve Piqueras, Borja and Puertas Sanz, Enrique},
    booktitle = {Proceedings of the 1st International Workshop on Mining Social Media},
    journal = {13th Conference of the Spanish Association for Artificial Intelligence},
    month = {November},
    title = {Introduction to Mining Social Media},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+Mining+Social+Media&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Gachet Páez, D., Buenaga, M., Giraldez, J. I., & Padrón, V.. (2009). Agent based risk patient management. Ambient intelligence perspectives.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in the homes of patients with a follow-up from a primary care centre. The evolution of this model is following a clear trend: trying to reduce the time and the number of visits by patients to health centres and derive tasks, so far as possible, toward outpatient care. Also the number of Early Discharge Patients (EDP) is growing, thus permitting a saving in the resources of the care center. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are being executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about his/her illness to the hospital or care centre.

    @OTHER{Gachet2009,
    abstract = {This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in the homes of patients with a follow-up from a primary care centre. The evolution of this model is following a clear trend: trying to reduce the time and the number of visits by patients to health centres and derive tasks, so far as possible, toward outpatient care. Also the number of Early Discharge Patients (EDP) is growing, thus permitting a saving in the resources of the care center. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are being executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about his/her illness to the hospital or care centre.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Giraldez, José Ignacio and Padrón, Víctor},
    booktitle = {Ambient Intelligence Perspectives},
    doi = {10.3233/978-1-58603-946-2-90},
    publisher = {Ambient Intelligence Forum},
    title = {Agent Based Risk Patient Management},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAgent+Based+Risk+Patient+Management&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Giráldez, I., & Gachet Páez, D.. (2009). Informatización de procesos de negocio mediante la ejecución de su modelo gráfico. Novática, 201, 61-64.
    [BibTeX] [Google Scholar]
    @OTHER{Giraldez2009,
    author = {Giráldez , Ignacio and Gachet Páez, Diego},
    booktitle = {Novática},
    pages = {61-64},
    title = {Informatización de procesos de negocio mediante la ejecución de su modelo gráfico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformatizaci%C3%B3n+de+procesos+de+negocio+mediante+la+ejecuci%C3%B3n+de+su+%09modelo+gr%C3%A1fico&btnG=&hl=es&as_sdt=0},
    volume = {201},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., Puertas, E., Carrero, F., & Buenaga, M.. (2009). Web content filtering. Advances in computers – elsevier academic press, 76, 257-306.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Over the years, the Internet has evolved from an academic network into a true communication medium, reaching impressive levels of audience and becoming a billion-dollar business. Many of our working, studying, and entertainment activities are nowadays overwhelmingly limited if we get disconnected from the net of networks. And of course, with the use comes abuse. The World Wide Web features a wide variety of content that is harmful for children or just inappropriate in the workplace. Web filtering and monitoring systems have emerged as valuable tools for the enforcement of suitable usage policies. These systems are routinely deployed in corporate, library, and school networks, and contribute to detecting and limiting Internet abuse. Their techniques are increasingly sophisticated and effective, and their development is contributing to the advance of the state of the art in a number of research fields, like text analysis and image processing. In this chapter, we review the main issues regarding Web content filtering, including its motivation, the main operational concerns and techniques used in filtering tools’ development, their evaluation and security, and a number of singular projects in this field.

    @OTHER{GomezHidalgo2009a,
    abstract = {Over the years, the Internet has evolved from an academic network into a true communication medium, reaching impressive levels of audience and becoming a billion-dollar business. Many of our working, studying, and entertainment activities are nowadays overwhelmingly limited if we get disconnected from the net of networks. And of course, with the use comes abuse. The World Wide Web features a wide variety of content that is harmful for children or just inappropriate in the workplace. Web filtering and monitoring systems have emerged as valuable tools for the enforcement of suitable usage policies. These systems are routinely deployed in corporate, library, and school networks, and contribute to detecting and limiting Internet abuse. Their techniques are increasingly sophisticated and effective, and their development is contributing to the advance of the state of the art in a number of research fields, like text analysis and image processing. In this chapter, we review the main issues regarding Web content filtering, including its motivation, the main operational concerns and techniques used in filtering tools’ development, their evaluation and security, and a number of singular projects in this field.},
    author = {Gómez Hidalgo , José María and Puertas , Enrique and Carrero , Francisco and Buenaga , Manuel},
    doi = {10.1016/S0065-2458(09)01007-9},
    journal = {Advances in Computers – Elsevier Academic Press},
    pages = {257-306},
    series = {Social Networking and The Web},
    title = {Web Content Filtering},
    url = {http://scholar.google.es/scholar?as_q=Web+Content+Filtering&as_epq=Web+Content+Filtering&as_oq=&as_eq=&as_occt=title&as_sauthors=Hidalgo+G%C3%B3mez+Garc%C3%ADa+Sanz&as_publication=&as_ylo=2009&as_yhi=&btnG=&hl=es&as_sdt=0},
    volume = {76},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2009). Filtrado de pornografía usando análisis de imagen. Linux+ magazine(51), 62-67.
    [BibTeX] [Abstract] [Google Scholar]
    La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.

    @OTHER{GomezHidalgo2009,
    abstract = {La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.},
    author = {Gómez Hidalgo , José María and Puertas Sanz , Enrique},
    journal = { Linux+ Magazine},
    month = {Febrero},
    number = {51},
    pages = {62-67},
    title = {Filtrado de pornografía usando análisis de imagen},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Filtrado+de+pornograf%C3%ADa+usando+an%C3%A1lisis+de+imagen&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Gómez-Pérez, J. M., Kohler, S., Melero, R., Serrano-Balazote, P., Lezcano, L., Sicilia, M. Á., Iglesias, A., Castro, E., Rubio, M., & Buenaga, M.. (2009). Towards interoperability in e-health systems: a three-dimensional approach based on standards and semantics. Healthinf, international conference on health informatics(58), 205-210.
    [BibTeX] [Abstract] [Google Scholar]
    The interoperability problem in eHealth can only be addressed by means of combining standards and technology. However, these alone do not suffice. An appropriate framework that articulates such combination is required. In this paper, we adopt a three-dimensional (information, concept, and inference) approach for such framework, based on OWL as formal language for terminological and ontological health resources, SNOMED CT as lexical backbone for all such resources, and the standard CEN 13606 for representing EHRs. Based on such framework, we propose a novel form for creating and supporting networks of clinical terminologies. Additionally, we propose a number of software modules to semantically process and exploit EHRs, including NLP-based search and inference, which can support medical applications in heterogeneous and distributed eHealth systems.

    @OTHER{Gomez-Perez2009,
    abstract = {The interoperability problem in eHealth can only be addressed by means of combining standards and technology. However, these alone do not suffice. An appropriate framework that articulates such combination is required. In this paper, we adopt a three-dimensional (information, concept, and inference) approach for such framework, based on OWL as formal language for terminological and ontological health resources, SNOMED CT as lexical backbone for all such resources, and the standard CEN 13606 for representing EHRs. Based on such framework, we propose a novel form for creating and supporting networks of clinical terminologies. Additionally, we propose a number of software modules to semantically process and exploit EHRs, including NLP-based search and inference, which can support medical applications in heterogeneous and distributed eHealth systems.},
    address = {Oporto, Portugal},
    author = {Gómez-Pérez , Jose Manuel and Kohler , Sandra and Melero , Ricardo and Serrano-Balazote , Pablo and Lezcano , Leonardo and Sicilia , Miguel Ángel and Iglesias , Ana and Castro , Elena and Rubio , Margarita and Buenaga , Manuel},
    journal = {Healthinf, International Conference on Health Informatics},
    month = {Enero},
    number = {58},
    pages = {205-210},
    title = {TOWARDS INTEROPERABILITY IN E-HEALTH SYSTEMS: A Three-Dimensional Approach Based on Standards and Semantics},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TOWARDS+INTEROPERABILITY+IN+E-HEALTH+SYSTEMS%3A+A+Three-Dimensional+Approach+Based+on+Standards+and+Semantics&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

2008

  • Molina, M., & Flores, V.. (2008). A presentation model for multimedia summaries of behavior. Paper presented at the IUI.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.

    @inproceedings{DBLP:conf/iui/MolinaF08,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.},
    title = {A presentation model for multimedia summaries of behavior},
    booktitle = {IUI},
    year = {2008},
    pages = {369-372},
    doi = {10.1145/1378773.1378832},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+presentation+model+for+multimedia+summaries+of+behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Buenaga, M., Gachet Páez, D., Maña, M. J., de la Villa, M., & Mata, J.. (2008). Clustering and summarizing medical documents to improve mobile retrieval. Acm-sigir workshop on mobile information retrieval, 54-57.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a postretrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster.

    @OTHER{Buenaga2008,
    abstract = {Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a postretrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster.},
    author = {Buenaga , Manuel and Gachet Páez, Diego and Maña , Manuel J. and de la Villa , Manuel and Mata , Jacinto},
    journal = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    month = {July},
    pages = {54-57},
    publisher = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    title = {Clustering and Summarizing Medical Documents to Improve Mobile Retrieval},
    url = {http://scholar.google.es/scholar?q=allintitle%3AClustering+and+Summarizing+Medical+Documents+to+Improve+Mobile+Retrieval&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
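
    A minimal sketch of the cluster-then-summarize idea described in the entry above, assuming scikit-learn and NumPy; the function name and the nearest-to-centroid heuristic are illustrative stand-ins, not the authors' actual pipeline.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans
        import numpy as np

        def cluster_summaries(docs, k=3):
            # Group the retrieved documents into k clusters over TF-IDF vectors.
            X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
            summaries = []
            for c in range(k):
                idx = np.where(km.labels_ == c)[0]
                # Crude per-cluster "similarity" summary: the document
                # closest to the cluster centroid.
                dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
                summaries.append(docs[idx[np.argmin(dists)]])
            return summaries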

  • Carrero, F., Cortizo, J. C., & Gómez, J. M.. (2008). Building a spanish mmtx by using automatic translation and biomedical ontologies. 9th international conference on intelligent data engineering and automated learning.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version, MMTx) is a tool that allows extracting medical concepts from free text, but currently no Spanish version exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to the original texts in terms of extracted concepts.

    @OTHER{Carrero2008,
    abstract = {The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version, MMTx) is a tool that allows extracting medical concepts from free text, but currently no Spanish version exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to the original texts in terms of extracted concepts.},
    address = {LNCS Springer Verlag},
    author = {Carrero , Francisco and Cortizo , José Carlos and Gómez , José María},
    doi = {10.1007/978-3-540-88906-9_44},
    journal = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    publisher = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    title = {Building a Spanish MMTx by using Automatic Translation and BiomedicalOntologies},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABuilding+a+Spanish+MMTx+by+using+Automatic+Translation+and+Biomedical+Ontologies&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
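
    A sketch of the translate-then-map pipeline the abstract above describes. Both helpers are hypothetical stand-ins: MMTx is a Java tool with no standard Python API assumed here, and any machine-translation service could fill the translator role.

        def translate_to_english(spanish_text):
            # Hypothetical: call a machine-translation service (the paper
            # evaluates tools such as Google Translate).
            raise NotImplementedError

        def extract_umls_concepts(english_text):
            # Hypothetical: run MetaMap/MMTx on the English text and parse
            # the UMLS Metathesaurus concepts it reports.
            raise NotImplementedError

        def spanish_concept_index(spanish_text):
            # Spanish input -> English translation -> UMLS concepts, so texts
            # can be indexed by concepts rather than language-bound words.
            return extract_umls_concepts(translate_to_english(spanish_text))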

  • Carrero, F., Cortizo, J. C., & Gómez, J. M.. (2008). Testing concept indexing in crosslingual medical text classification. 3rd international conference on digital information management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers in a controlled, configurable environment. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Developing a Spanish version of MetaMap would be a huge task, since there has been a lot of work supporting the English version for the last sixteen years. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language. In this paper we show our experiments on combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx in order to extract concepts for text classification.

    @OTHER{Carrero2008b,
    abstract = {MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers in a controlled, configurable environment. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Developing a Spanish version of MetaMap would be a huge task, since there has been a lot of work supporting the English version for the last sixteen years. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language. In this paper we show our experiments on combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx in order to extract concepts for text classification.},
    author = {Carrero , Francisco and Cortizo , José Carlos and Gómez , José María},
    doi = {10.1109/ICDIM.2008.4746715},
    journal = {3rd International Conference on Digital Information Management},
    publisher = {3rd International Conference on Digital Information Management},
    title = {Testing Concept Indexing in Crosslingual Medical Text Classification},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Testing+concept+indexing+in+crosslingual+medical+text+classification&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Carrero, F., Cortizo, J. C., Gómez, J. M., & Buenaga, M.. (2008). In the development of a spanish metamap. Proceedings of the acm 17th conference on information and knowledge management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx.

    @OTHER{Carrero2008a,
    abstract = {MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx.},
    author = {Carrero , Francisco and Cortizo , José Carlos and Gómez , José María and Buenaga , Manuel},
    doi = {10.1145/1458082.1458335},
    journal = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    publisher = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    title = {In the development of a Spanish Metamap},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIn+the+development+of+a+Spanish+Metamap&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Cortizo, J. C., Gachet Páez, D., Buenaga, M., Maña, M., Puertas, E., & de la Villa, M.. (2008). Extending pubmed on tap by means of multidocument summarization. User-centric technologies and applications workshop.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from pocket and hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard application for PDAs to retrieve information from Medline, the most important and most consulted bibliographical database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.

    @OTHER{Cortizo2008,
    abstract = {Access to biomedical databases from pocket and hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard application for PDAs to retrieve information from Medline, the most important and most consulted bibliographical database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.},
    author = {Cortizo , José Carlos and Gachet Páez, Diego and Buenaga , Manuel and Maña , Manuel and Puertas , Enrique and de la Villa , Manuel},
    journal = {User-centric Technologies and Applications Workshop },
    publisher = {User-centric Technologies and Applications Workshop – Madrinet},
    title = {Extending PubMed on Tap by means of MultiDocument Summarization},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExtending+on+Tap+by+means+of+Summarization&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Cortizo, J. C., Gachet Páez, D., Buenaga, M., Maña, M., & de la Villa, M.. (2008). Mobile medical information access by means of multidocument summarization based on similarities and differences. Acl workshop on mobile language processing, 46th annual meeting of the association for computational linguistics: human language technologies.
    [BibTeX] [Abstract] [Google Scholar]
    Access to Electronic Health Records (EHR) and biomedical databases from pocket and hand-held or tablet computers would be a useful tool for health care professionals. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to present a large set of results in a restricted-size environment.

    @OTHER{Cortizo2008b,
    abstract = {Access to Electronic Health Records (EHR) and biomedical databases from pocket and hand-held or tablet computers would be a useful tool for health care professionals. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to present a large set of results in a restricted-size environment.},
    author = {Cortizo , José Carlos and Gachet Páez, Diego and Buenaga , Manuel and Maña , Manuel and de la Villa , Manuel},
    journal = {ACL Workshop on Mobile Language Processing, 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
    publisher = {ACL Workshop on Mobile Language Processing, 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
    title = {Mobile Medical Information Access by means of Multidocument Summarizationbased on Similarities and Differences},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Medical+Information+Access+by+means+of+Multidocument+Summarization+%09based+on+Similarities+and+Differences&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Cortizo, J. C., Gómez, J. M., Temprado, Y., Martín, D., & Rodríguez, F.. (2008). Mining postal addresses. Proceedings of the iadis european conference on data mining.
    [BibTeX] [Abstract] [Google Scholar]
    This paper presents FuMaS (Fuzzy Matching System), a system capable of efficient retrieval of postal addresses from noisy queries. Fuzzy postal address retrieval has many possible applications, ranging from data warehouse de-duping, to the correction of input forms, or the integration within online street directories, etc. This paper presents the system architecture along with a series of experiments performed using FuMaS. The experimental results show that FuMaS is a very useful system when retrieving noisy postal addresses, being able to retrieve almost 85% of the total ones. This represents an improvement of 15% when compared with the other systems tested in this set of experiments.

    @OTHER{Cortizo2008a,
    abstract = {This paper presents FuMaS (Fuzzy Matching System), a system capable of efficient retrieval of postal addresses from noisy queries. Fuzzy postal address retrieval has many possible applications, ranging from data warehouse de-duping, to the correction of input forms, or the integration within online street directories, etc. This paper presents the system architecture along with a series of experiments performed using FuMaS. The experimental results show that FuMaS is a very useful system when retrieving noisy postal addresses, being able to retrieve almost 85% of the total ones. This represents an improvement of 15% when compared with the other systems tested in this set of experiments.},
    author = {Cortizo , José Carlos and Gómez , José María and Temprado , Yaiza and Martín , Diego and Rodríguez , Federico},
    journal = {Proceedings of the IADIS European Conference on Data Mining},
    publisher = {Proceedings of the IADIS European Conference on Data Mining},
    title = {Mining Postal Addresses},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMining+Postal+Addresses&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
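
    An illustrative sketch of fuzzy retrieval of postal addresses from noisy queries; the abstract does not detail FuMaS's internal matching strategy, so difflib's sequence ratio stands in here as the similarity measure.

        from difflib import SequenceMatcher

        def best_matches(query, directory, n=3):
            # Rank every known address by string similarity to the noisy
            # query and return the n closest candidates.
            def sim(addr):
                return SequenceMatcher(None, query.lower(), addr.lower()).ratio()
            return sorted(directory, key=sim, reverse=True)[:n]

        # e.g. best_matches("Cale Mayor 5, Madird", known_addresses)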

  • Gachet Páez, D., Buenaga, M., Cortizo, J. C., & Padrón, V.. (2008). Risk patient help and location system using mobile technologies. 3rd symposium of ubiquitous computing and ambient intelligence.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper explores the feasibility of the inclusion of information and communications technologies for helping and localizing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient’s location and some information about their illness to the hospital or care centre.

    @OTHER{Gachet2008a,
    abstract = {This paper explores the feasibility of the inclusion of information and communications technologies for helping and localizing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient’s location and some information about their illness to the hospital or care centre.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Cortizo , José Carlos and Padrón , Victor},
    doi = {10.1007/978-3-540-85867-6_21},
    journal = {3rd Symposium of Ubiquitous Computing and Ambient Intelligence},
    publisher = {3rd Symposium of Ubiquitous Computing and Ambient Intelligence },
    title = {Risk Patient Help and Location System using Mobile Technologies},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Risk+Patient+Help+and+Location+System+using+Mobile+Technologies&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Gachet Páez, D., Buenaga, M., & Silió, T.. (2008). Recuperación de información médica mediante dispositivos móviles. Novática. revista de la asociación de técnicos en informática, 194, 63-66.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2008,
    author = {Gachet Páez, Diego and Buenaga , Manuel and Silió , Teresa},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    month = {Julio},
    pages = {63-66},
    title = {Recuperación de información médica mediante dispositivos móviles},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Recuperaci%C3%B3n+de+informaci%C3%B3n+m%C3%A9dica+mediante+dispositivos+m%C3%B3viles&btnG=&hl=es&as_sdt=0},
    volume = {194},
    year = {2008}
    }

  • Gaya, M. C., & Giraldez, J. I.. (2008). Techniques for distributed theory synthesis in multiagent systems. International symposium on distributed computing and artificial intelligence.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as the accuracy of the monolithic solution.

    @OTHER{Gaya2008a,
    abstract = {Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as the accuracy of the monolithic solution.},
    author = {Gaya , Maria Cruz and Giraldez , José Ignacio},
    doi = {10.1007/978-3-540-85863-8_46},
    journal = {International Symposium on Distributed Computing and Artificial Intelligence},
    publisher = {Springer Verlag},
    title = {Techniques for Distributed Theory Synthesis in Multiagent Systems},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Techniques+for+Distributed+Theory+Synthesis+in+Multiagent+Systems&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Gaya, M. C., & Giráldez, J. I.. (2008). Experiments in multi agent learning. 3rd international workshop on hybrid artificial intelligence systems, 5271, 78-85.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as the accuracy of the monolithic solution.

    @OTHER{Gaya2008,
    abstract = {Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as the accuracy of the monolithic solution.},
    address = { LNCS },
    author = {Gaya , Maria Cruz and Giráldez , José Ignacio},
    doi = {10.1007/978-3-540-87656-4_11},
    journal = {3rd International Workshop on Hybrid Artificial Intelligence Systems},
    pages = {78-85},
    publisher = {Springer Verlag},
    series = {Lecture Notes in Artificial Intelligence},
    title = {Experiments in Multi Agent Learning},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperiments+in+Multi+Agent+Learning&btnG=&hl=es&as_sdt=0},
    volume = {5271},
    year = {2008}
    }
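
    A rough analogue of the setting described in the two entries above: each "agent" induces a local model on its own data source and the models are combined into a global one. The papers fuse symbolic theories; the majority vote below is only a simplified stand-in for that knowledge-fusion step, and non-negative integer class labels are assumed.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def learn_local_theories(partitions):
            # One model per distributed data source; partitions is a list
            # of (X, y) pairs, one per agent.
            return [DecisionTreeClassifier(random_state=0).fit(X, y)
                    for X, y in partitions]

        def global_predict(models, X):
            # Combine the agents' local theories by majority vote.
            votes = np.stack([m.predict(X) for m in models])
            return np.apply_along_axis(
                lambda col: np.bincount(col).argmax(), 0, votes)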

  • Puertas Sanz, E., Gómez Hidalgo, J. M., & Cortizo Pérez, J. C.. (2008). Email spam filtering. In Zelkowitz, M. V. (Ed.), In Advances in computers (Vol. 74, pp. 45-114). Elsevier.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, which best suits classification problems in which target conditions are not known, as is the case here. We also describe how actual filters are used in practice, and we present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers.

    @INCOLLECTION{PuertasSanz2008,
    author = {Puertas Sanz , Enrique and Gómez Hidalgo , José María and Cortizo Pérez , José Carlos},
    title = {Email Spam Filtering},
    booktitle = {Advances in Computers},
    publisher = {Elsevier},
    year = {2008},
    editor = {Marvin V. Zelkowitz},
    volume = {74},
    chapter = {3},
    pages = {45-114},
    abstract = {In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, which best suits classification problems in which target conditions are not known, as is the case here. We also describe how actual filters are used in practice, and we present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers.},
    doi = {10.1016/S0065-2458(08)00603-7},
    isbn = {0065-2458},
    shorttitle = {Software Development},
    url = {http://scholar.google.es/scholar?as_q=Email+Spam+Filtering&as_epq=&as_oq=&as_eq=&as_occt=title&as_sauthors=Puertas&as_publication=&as_ylo=&as_yhi=&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }
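
    A minimal sketch of the threshold-optimization idea the chapter above discusses: sweep candidate decision thresholds for a probabilistic spam classifier and keep the one with the lowest cost-weighted error. The 9:1 cost ratio is illustrative, not taken from the chapter.

        import numpy as np

        def best_threshold(p_spam, y_true, fp_cost=9.0, fn_cost=1.0):
            # p_spam: predicted spam probabilities; y_true: 1 = spam, 0 = ham.
            def cost(t):
                pred = p_spam >= t
                fp = np.sum(pred & (y_true == 0))   # legitimate mail lost
                fn = np.sum(~pred & (y_true == 1))  # spam that slips through
                return fp_cost * fp + fn_cost * fn
            return min(np.linspace(0.05, 0.95, 19), key=cost)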

2007

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., & Buenaga Rodríguez, M.. (2007). Bringing forward semantic relations. 7th international conference on intelligent systems design and applications (isda 2007), 511-519.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.

    @OTHER{AlvarezMontero2007,
    abstract = {Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.},
    address = {Río de Janeiro},
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel},
    doi = {10.1109/ISDA.2007.82},
    journal = {7th International Conference on Intelligent Systems Design and Applications (ISDA 2007)},
    month = {Octubre},
    pages = {511-519},
    title = {Bringing Forward Semantic Relations},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABringing+Forward+Semantic+Relations&btnG=&hl=es&as_sdt=0%2C5},
    year = {2007}
    }

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., & Buenaga Rodríguez, M.. (2007). Neglecting semantic relations: consequences and proposals. International conference on intelligent systems and agents, Lisboa, Portugal.
    [BibTeX] [Abstract] [Google Scholar]
    Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper, we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.

    @INPROCEEDINGS{AlvarezMontero2007a,
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel},
    title = {Neglecting Semantic Relations: Consequences and proposals},
    year = {2007},
    address = {Lisboa, Portugal},
    month = {July},
    abstract = {Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper, we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies. },
    journal = {International Conference on Intelligent Systems and Agents},
    url = {http://scholar.google.es/scholar?q=allintitle%3ANeglecting+Semantic+Relations%3A+Consequences+and+proposals&btnG=&hl=es&as_sdt=0}
    }

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (2007). Semantic relations: modelling issues, proposals and possible applications. American Association of Artificial Intelligence - AAAI Press, Key West, Florida, USA.
    [BibTeX] [Abstract] [Google Scholar]
    Semantic relations are an important element in the construction of ontology-based linguistic resources and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations, abstractions that are not enough to represent the relation richness of problem domains, and even poorly structured taxonomies. However, if provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them that can be an aid in the ontology construction process. In this paper we present some insightful issues about the representation of relations. Moreover, the initiatives aiming to provide relations with clear semantics are explained and the inclusion of their core ideas as part of a methodology for the development of ontology-based linguistic resources is proposed.

    @INPROCEEDINGS{AlvarezMontero2007b,
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    title = {Semantic Relations: Modelling Issues, Proposals and Possible Applications},
    year = {2007},
    address = {Key West, Florida USA},
    month = {may},
    abstract = {Semantic relations are an important element in the construction of ontology-based linguistic resources and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations, abstractions that are not enough to represent the relation richness of problem domains, and even poorly structured taxonomies. However, if provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them that can be an aid in the ontology construction process. In this paper we present some insightful issues about the representation of relations. Moreover, the initiatives aiming to provide relations with clear semantics are explained and the inclusion of their core ideas as part of a methodology for the development of ontology-based linguistic resources is proposed.},
    journal = {American Association of Artificial Intelligence - AAAI Press},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Semantic+Relations%3A+Modelling+Issues%2C+Proposals+and+Possible+Applications&btnG=&hl=es&as_sdt=0}
    }

  • Arruego, J., Llorente, E., Medina, J. L., Cortizo Pérez, J. C., & Expósito, D.. (2007). Minería de direcciones postales. Paper presented at the Actas del V taller de minería de datos y aprendizaje.
    [BibTeX] [Abstract] [Google Scholar]
    En este artículo se presenta FuMaS (Fuzzy Matching System), un sistema que permite la recuperación eficiente de direcciones postales a partir de consultas con ruido. La recuperación difusa de esta información tiene innumerables aplicaciones, desde encontrar/limpiar duplicados en bases de datos (registros electorales, encontrar nidos de fraude postal, etc.) hasta corregir las entradas de los usuarios en sistemas tales como callejeros o cualquier tipo de formulario dónde haya que introducir una dirección postal. En este artículo se presenta la arquitectura del sistema, así como los experimentos que, hasta el momento, se han realizado sobre el mismo. Los resultados de estos experimentos muestran que FuMaS es una herramienta muy útil para recuperar direcciones postales a partir de consultas con ruido, siendo capaz de resolver cerca del 85% de las direcciones con errores introducidas al sistema, una eficacia un 15% mayor que cualquier otro sistema similar probado.

    @INPROCEEDINGS{Arruego2007,
    author = {Arruego , Javier and Llorente , Ester and Medina , José Luis and Cortizo Pérez , José Carlos and Expósito , Diego},
    title = {Minería de Direcciones Postales},
    booktitle = {Actas del V Taller de Minería de Datos y Aprendizaje},
    year = {2007},
    editor = {F. J. Ferrer-Troyano and A. Troncoso and J. C. Riquelme},
    pages = {49-56},
    abstract = {En este artículo se presenta FuMaS (Fuzzy Matching System), un sistema que permite la recuperación eficiente de direcciones postales a partir de consultas con ruido. La recuperación difusa de esta información tiene innumerables aplicaciones, desde encontrar/limpiar duplicados en bases de datos (registros electorales, encontrar nidos de fraude postal, etc.) hasta corregir las entradas de los usuarios en sistemas tales como callejeros o cualquier tipo de formulario dónde haya que introducir una dirección postal. En este artículo se presenta la arquitectura del sistema, así como los experimentos que, hasta el momento, se han realizado sobre el mismo. Los resultados de estos experimentos muestran que FuMaS es una herramienta muy útil para recuperar direcciones postales a partir de consultas con ruido, siendo capaz de resolver cerca del 85% de las direcciones con errores introducidas al sistema, una eficacia un 15% mayor que cualquier otro sistema similar probado.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMiner%C3%ADa+de+Direcciones+Postales&btnG=&hl=es&as_sdt=0}
    }

  • Buenaga Rodríguez, M., Maña, M., Carrero, F., & Mata, J.. (2007). Diseño e integración de técnicas de categorización automática de textos para el acceso a la información bilingüe en un ámbito biomédico. VII jornada de seguimiento de proyectos en tecnologías informáticas.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez2007,
    address = {Zaragoza},
    author = {Buenaga Rodriguez , Manuel and Maña , Manuel and Carrero , Francisco and Mata , Jacinto},
    journal = {VII Jornada de Seguimiento de Proyectos en Tecnologías Informáticas},
    month = {September},
    title = {Diseño e Integración de Técnicas de Categorización Automática de Textos para el Acceso a la Información Bilingüe en un Ámbito Biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+e+Integraci%C3%B3n+de+T%C3%A9cnicas+de+Categorizaci%C3%B3n+Autom%C3%A1tica+de+Textos+para+el+Acceso+a+la+Informaci%C3%B3n+Bilingue+en+un+%C3%81mbito+Biom%C3%A9dico&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Carrero García, F., Gómez Hidalgo, J. M., Buenaga Rodríguez, M., Mata, J., & Maña López, M.. (2007). Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico. Revista de la sociedad española para el procesamiento del lenguaje natural, 38, 107-118.
    [BibTeX] [Abstract] [Google Scholar]
    One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.

    @OTHER{CarreroGarcia2007,
    abstract = {One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.},
    author = {Carrero García , Francisco and Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel and Mata , Jacinto and Maña López , Manuel},
    journal = {Revista de la Sociedad Española para el Procesamiento del Lenguaje Natural},
    month = {Abril},
    pages = {107-118},
    title = {Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAcceso+a+la+informaci%C3%B3n+biling%C3%BCe+utilizando++ontolog%C3%ADas+espec%C3%ADficas+del+dominio+biom%C3%A9dico&btnG=&hl=es&as_sdt=0%2C5},
    volume = {38},
    year = {2007}
    }

  • Cormack, G., Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2007). Feature engineering for mobile (sms) spam filtering. Paper presented at the Proceedings of the 30th annual international acm sigir conference.
    [BibTeX] [Abstract] [Google Scholar]
    Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top-performing email spam filters on mobile spam messages with a suitable feature representation, with results supporting our hypothesis.

    @INPROCEEDINGS{Cormack2007,
    author = {Cormack , Gordon and Gómez Hidalgo , José María and Puertas Sanz , Enrique},
    title = {Feature Engineering for Mobile (SMS) Spam Filtering},
    booktitle = {Proceedings of the 30th Annual International ACM SIGIR Conference},
    year = {2007},
    abstract = {Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top-performing email spam filters on mobile spam messages with a suitable feature representation, with results supporting our hypothesis.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFeature+Engineering+for+Mobile+%28SMS%29+Spam+Filtering&btnG=&hl=es&as_sdt=0}
    }
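
    The paper's point is that message representation matters for short, noisy SMS text. Character n-grams are one common adaptation; the pipeline below is illustrative, not the exact feature set used in the paper.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Character n-grams (within word boundaries) tolerate the spelling
        # tricks and abbreviations common in SMS spam.
        sms_filter = make_pipeline(
            CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
            MultinomialNB(),
        )
        # sms_filter.fit(train_messages, train_labels)
        # sms_filter.predict(["FREE entry! Text WIN to 80086 now"])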

  • Cortizo Pérez, J. C., Giráldez, I., & Gaya, M. C.. (2007). Transformando la representación de los datos para mejorar el clasificador bayesiano simple. Paper presented at the Actas de la XII conferencia de la asociación española para la inteligencia artificial – caepia/ttia 2007.
    [BibTeX] [Abstract] [Google Scholar]
    El clasificador bayesiano simple se basa en la asunción de independencia entre los valores de los atributos dado el valor de la clase. Así pues, su efectividad puede decrecer en presencia de atributos interdependientes. En este artículo se presenta DGW (Dependency Guided Wrapper), un wrapper que utiliza la información acerca de las dependencias entre atributos para transformar la representación de los datos para mejorar la precisión del clasificador bayesiano simple. Este artículo presenta una serie de experimentos donde se compara las representaciones de datos obtenidas por el DGW contra las representaciones de datos obtenidas por 12 acercamientos previos, como son la construcción inductiva de productos cartesianos de atributos, y wrappers que realizan búsquedas de subconjuntos óptimos de atributos. Los resultados de los experimentos muestran que DGW genera representaciones nuevas de los datos que ayudan a mejorar significativamente la precisión del clasificador bayesiano simple más frecuentemente que cualquier otro acercamiento previo. Además, DGW es mucho más rápido que cualquier otro sistema en el proceso de transformación de la representación de los datos.

    @INPROCEEDINGS{CortizoPerez2007,
    author = {Cortizo Pérez , José Carlos and Giráldez , Ignacio and Gaya , Maria Cruz},
    title = {Transformando la Representación de los Datos para Mejorar el Clasificador Bayesiano Simple},
    booktitle = {Actas de la XII Conferencia de la Asociación Española para la Inteligencia Artificial - CAEPIA/TTIA 2007},
    year = {2007},
    editor = {D. Borrajo and L. Castillo and J. M. Corchado},
    volume = {1},
    pages = {317-326},
    abstract = {El clasificador bayesiano simple se basa en la asunción de independencia entre los valores de los atributos dado el valor de la clase. Así pues, su efectividad puede decrecer en presencia de atributos interdependientes. En este artículo se presenta DGW (Dependency Guided Wrapper), un wrapper que utiliza la información acerca de las dependencias entre atributos para transformar la representación de los datos para mejorar la precisión del clasificador bayesiano simple. Este artículo presenta una serie de experimentos donde se compara las representaciones de datos obtenidas por el DGW contra las representaciones de datos obtenidas por 12 acercamientos previos, como son la construcción inductiva de productos cartesianos de atributos, y wrappers que realizan búsquedas de subconjuntos óptimos de atributos. Los resultados de los experimentos muestran que DGW genera representaciones nuevas de los datos que ayudan a mejorar significativamente la precisión del clasificador bayesiano simple más frecuentemente que cualquier otro acercamiento previo. Además, DGW es mucho más rápido que cualquier otro sistema en el proceso de transformación de la representación de los datos.},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Transformando+la+Representaci%C3%B3n+de+los+Datos+para+Mejorar+el+Clasificador+Bayesiano+Simple&btnG=&hl=es&as_sdt=0}
    }
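
    To make the independence issue concrete: the sketch below illustrates the general idea behind DGW-style representation changes (a toy reconstruction of ours, not the published algorithm). The class depends on the interaction of two attributes, which Naive Bayes cannot capture from the marginals until the interdependent pair is replaced by its Cartesian product.

    # Toy illustration (not the published DGW algorithm): joining two
    # interdependent attributes restores Naive Bayes' independence premise.
    import numpy as np
    from sklearn.naive_bayes import CategoricalNB

    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, 1000)
    b = rng.integers(0, 2, 1000)
    y = a ^ b                                # class is the XOR interaction

    X_orig = np.column_stack([a, b])         # original representation
    X_joined = (2 * a + b).reshape(-1, 1)    # Cartesian product of a and b

    print(CategoricalNB().fit(X_orig, y).score(X_orig, y))      # ~0.5, chance
    print(CategoricalNB().fit(X_joined, y).score(X_joined, y))  # ~1.0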

  • Gachet Páez, D., Buenaga, M., Hernando, A., & Alonso, M.. (2007). Mobile information retrieval for the patient safety improvement in hospitals. Ubiquitous computing and ambient intelligence, 81-87.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2007a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Hernando, Asunción and Alonso, Margarita},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    pages = {81-87},
    title = {Mobile Information Retrieval for the Patient Safety Improvement in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Information+Retrieval+for+the+Patient+Safety+Improvement+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga, M., Rubio, M., & Silio, T.. (2007). Ubiquitous information retrieval to improve patient safety in hospitals. IADIS international conference on ubiquitous computing.
    [BibTeX] [Abstract] [Google Scholar]
    Heterogeneous information management within the biomedical domain requires a set of text content analysis and data mining techniques. Both the intelligent information retrieval applied to the Electronic Health Record (EHR) and to biomedical databases, and the access to this information using pocket and hand-held devices or tablet computers, will be a useful tool for health care professionals and a valuable complement to other medical applications. In this paper we present both a description of the SINAMED research project and a discussion of some partial results obtained. Our aim is to design new text categorization and summarization algorithms applied to patient clinical records and to the associated medical information, and to design advanced, efficient user interfaces for mobile devices and for on-line access to these results. The proposed system would contribute to improving medical attention and patient safety.

    @OTHER{Gachet2007b,
    abstract = {Heterogeneous information management within the biomedical domain requires a set of text content analysis and data mining techniques. Both the intelligent information retrieval applied to the Electronic Health Record (EHR) and to biomedical databases, and the access to this information using pocket and hand-held devices or tablet computers, will be a useful tool for health care professionals and a valuable complement to other medical applications. In this paper we present both a description of the SINAMED research project and a discussion of some partial results obtained. Our aim is to design new text categorization and summarization algorithms applied to patient clinical records and to the associated medical information, and to design advanced, efficient user interfaces for mobile devices and for on-line access to these results. The proposed system would contribute to improving medical attention and patient safety.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Rubio, Margarita and Silio, Teresa},
    booktitle = {IADIS International Conference on Ubiquitous Computing},
    title = {Ubiquitous Information Retrieval to improve Patient safety in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Ubiquitous+Information+Retrieval+to+improve+Patient+safety+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Rubio, M., & Silió, T.. (2007). Intelligent information retrieval and mobile computing to improve patient safety in hospitals. 2nd symposium on ubiquitous computing & ambient intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2007,
    author = {Gachet Páez, Diego and Buenaga Rodríguez, Manuel and Rubio, Margarita and Silió, Teresa},
    journal = {2nd Symposium on Ubiquitous Computing \& Ambient Intelligence},
    month = {September},
    title = {Intelligent Information Retrieval and Mobile Computing to Improve Patient Safety in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Intelligent+Information+Retrieval+and+Mobile+Computing+to+Improve+Patient+Safety+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gaya, M. C., Giráldez, I., & Cortizo Pérez, J. C.. (2007). Uso de algoritmos evolutivos para la fusión de teorías en minería de datos distribuida. Paper presented at the Actas de la xii conferencia de la asociación española para la inteligencia artificial.
    [BibTeX] [Google Scholar]
    @INPROCEEDINGS{Gaya2007,
    author = {Gaya, Maria Cruz and Giráldez, Ignacio and Cortizo Pérez, José Carlos},
    title = {Uso de Algoritmos Evolutivos para la Fusión de Teorías en Minería de Datos Distribuida},
    booktitle = {Actas de la XII Conferencia de la Asociación Española para la Inteligencia Artificial},
    year = {2007},
    editor = {D. Borrajo and L. Castillo and J. M. Corchado},
    volume = {2},
    pages = {121-130},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Uso+de+algoritmos+evolutivos+para+la+fusion+de+teor%C3%ADas+en+miner%C3%ADa+de+datos+distribuida+&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Carrero, F., & Monsalve, B.. (2007). Las tecnologías de los motores de búsqueda del futuro. Dyna, ingeniería e industria, 82(9), 401-410.
    [BibTeX] [Abstract] [Google Scholar]
    Es indudable que Internet en general, y la Web en particular, tienen una influencia creciente en nuestras vidas y se han convertido en un medio de comunicación y un recurso informativo de primer orden. La gran cantidad de información disponible en la Web se hace accesible primordialmente a través de los motores de búsqueda como Google, Yahoo! o Altavista. Las empresas que operan estos motores son ahora multinacionales con enormes ingresos financieros obtenidos a través de la publicidad que logran por el tráfico de usuarios que acumulan. Su supervivencia depende de seguir siendo útiles, y cada vez más, para los usuarios, algo que sólo pueden lograr a través de la innovación e implantación de tecnologías y funcionalidades cada vez más avanzadas. En este artículo presentamos una revisión de algunas de las tecnologías que creemos clave para los motores de búsqueda del presente y del futuro, centrándonos en la personalización y la localización, la búsqueda social, la búsqueda en la Web semántica, la búsqueda translingüe, y el control de fraude en buscadores.

    @ARTICLE{GomezHidalgo2007,
    author = {Gómez Hidalgo, José María and Cortizo Pérez, José Carlos and Carrero, Francisco and Monsalve, Borja},
    title = {Las Tecnologías de los Motores de Búsqueda del futuro},
    journal = {DYNA, Ingeniería e industria},
    year = {2007},
    volume = {82},
    pages = {401-410},
    number = {9},
    month = {November},
    abstract = {Es indudable que Internet en general, y la Web en particular, tienen una influencia creciente en nuestras vidas y se han convertido en un medio de comunicación y un recurso informativo de primer orden. La gran cantidad de información disponible en la Web se hace accesible primordialmente a través de los motores de búsqueda como Google, Yahoo! o Altavista. Las empresas que operan estos motores son ahora multinacionales con enormes ingresos financieros obtenidos a través de la publicidad que logran por el tráfico de usuarios que acumulan. Su supervivencia depende de seguir siendo útiles, y cada vez más, para los usuarios, algo que sólo pueden lograr a través de la innovación e implantación de tecnologías y funcionalidades cada vez más avanzadas. En este artículo presentamos una revisión de algunas de las tecnologías que creemos clave para los motores de búsqueda del presente y del futuro, centrándonos en la personalización y la localización, la búsqueda social, la búsqueda en la Web semántica, la búsqueda translingüe, y el control de fraude en buscadores.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ALas+tecnolog%C3%ADas+de+los+motores+de+b%C3%BAsqueda+del+futuro&btnG=&hl=es&as_sdt=0}
    }

  • Valverde, R., & Gachet Páez, D.. (2007). Identificación de sistemas dinámicos utilizando redes neuronales rbf. Revista iberoamericana de automática e informática industrial, 32-42.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.

    @OTHER{Valverde2007,
    abstract = {La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.},
    author = {Valverde, Ricardo and Gachet Páez, Diego},
    doi = {10.4995/riai.v4i2.8023},
    journal = {Revista Iberoamericana de Automática e Informática industrial},
    pages = {32-42},
    publisher = {IFAC},
    title = {Identificación de Sistemas Dinámicos Utilizando Redes Neuronales RBF},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIdentificaci%C3%B3n+de+Sistemas+Din%C3%A1micos+Utilizando+Redes+Neuronales+RBF&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }
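
    The on-line, structure-growing identification scheme summarized above can be sketched in a few lines. The allocation rule, thresholds and learning rate below are our assumptions (in the spirit of resource-allocating RBF networks), not the paper's exact method: a hidden node is added when the prediction error is large, otherwise the output weights are adapted by a gradient (LMS) step.

    # Minimal sketch of an online, growing RBF identifier (assumptions ours).
    import numpy as np

    class GrowingRBF:
        def __init__(self, sigma=0.5, err_thresh=0.3, lr=0.2):
            self.centers, self.weights = [], []
            self.sigma, self.err_thresh, self.lr = sigma, err_thresh, lr

        def _phi(self, x):
            return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))
                             for c in self.centers])

        def predict(self, x):
            return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

        def update(self, x, y):
            err = y - self.predict(x)
            if abs(err) > self.err_thresh:       # grow: allocate a node at x
                self.centers.append(np.asarray(x, dtype=float))
                self.weights.append(err)
            else:                                # adapt: LMS step on output weights
                phi = self._phi(x)
                self.weights = list(np.array(self.weights) + self.lr * err * phi)

    # identify y = sin(u) from a stream of samples
    model = GrowingRBF()
    for u in np.linspace(-3, 3, 400):
        model.update(np.array([u]), np.sin(u))
    print(len(model.centers), model.predict(np.array([1.0])))  # roughly sin(1) ≈ 0.84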

2006

  • Molina, M., & Flores, V.. (2006). A knowledge-based approach for automatic generation of summaries of behavior. Paper presented at the Aimsa.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.

    @inproceedings{DBLP:conf/aimsa/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.},
    title = {A Knowledge-Based Approach for Automatic Generation of Summaries of Behavior},
    booktitle = {AIMSA},
    year = {2006},
    pages = {265-274},
    doi = {10.1007/11861461_28},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+Knowledge-Based+Approach+for+Automatic+Generation+of+Summaries+of+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2006). Generating adaptive presentations of hydrologic behavior. Paper presented at the Ideal.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.

    @inproceedings{DBLP:conf/ideal/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.},
    title = {Generating Adaptive Presentations of Hydrologic Behavior},
    booktitle = {IDEAL},
    year = {2006},
    pages = {896-903},
    doi = {10.1007/11875581_107},
    url = {http://scholar.google.es/scholar?q=allintitle%3AGenerating+Adaptive+Presentations+of+Hydrologic+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Buenaga, M., Maña, M., Gachet Páez, D., & Mata, J.. (2006). The sinamed and isis projects: applying text mining techniques to improve access to a medical digital library. In Gonzalo, J., Thanos, C., Verdejo, F. M., & Carrasco, R. C. (Ed.), In Research and advanced technology for digital libraries (Vol. 4172, pp. 548-551). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Intelligent information access systems increasingly integrate text mining and content analysis capabilities as a relevant element. In this paper we present our work focused on the integration of text categorization and summarization to improve information access on a specific medical domain, patient clinical records and related scientific documentation, in the framework of two different research projects: SINAMED and ISIS, developed by a consortium of two research groups from two universities, one hospital and one software development firm. SINAMED has a basic research orientation and its goal is to design new text categorization and summarization algorithms based on the utilization of lexical resources in the biomedical domain. ISIS is an R&D project with a more applied and technology-transfer orientation, focused on more direct practical aspects of the utilization in a concrete public health institution.

    @INCOLLECTION{Buenaga2006,
    author = {Buenaga, Manuel and Maña, Manuel and Gachet Páez, Diego and Mata, Jacinto},
    title = {The SINAMED and ISIS Projects: Applying Text Mining Techniques to Improve Access to a Medical Digital Library},
    booktitle = {Research and Advanced Technology for Digital Libraries},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Gonzalo, Julio and Thanos, Costantino and Verdejo, M. Felisa and Carrasco, Rafael C.},
    volume = {4172},
    series = {Lecture Notes in Computer Science},
    pages = {548-551},
    month = {January},
    abstract = {Intelligent information access systems increasingly integrate text mining and content analysis capabilities as a relevant element. In this paper we present our work focused on the integration of text categorization and summarization to improve information access on a specific medical domain, patient clinical records and related scientific documentation, in the framework of two different research projects: SINAMED and ISIS, developed by a consortium of two research groups from two universities, one hospital and one software development firm. SINAMED has a basic research orientation and its goal is to design new text categorization and summarization algorithms based on the utilization of lexical resources in the biomedical domain. ISIS is an R&D project with a more applied and technology-transfer orientation, focused on more direct practical aspects of the utilization in a concrete public health institution.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11863878_65},
    isbn = {978-3-540-44636-1, 978-3-540-44638-5},
    shorttitle = {The {SINAMED} and {ISIS} Projects},
    url = {http://scholar.google.es/scholar?q=allintitle%3A%3A+The+SINAMED+and+ISIS+Projects%3A+Applying+Text+Mining+Techniques+to+Improve+Access+to+a+Medical+Digital+Library&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Cortizo Pérez, J. C., & Giráldez, I.. (2006). Multicriteria wrapper improvements to naive bayes learning. Paper presented at the Intelligent data engineering and automated learning.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Feature subset selection using a wrapper means to perform a search for an optimal set of attributes using the Machine Learning Algorithm as a black box. The Naive Bayes Classifier is based on the assumption of independence among the values of the attributes given the class value. Consequently, its effectiveness may decrease when the attributes are interdependent. We present FBL, a wrapper that uses the information about dependencies to guide the search for the optimal subset of features and the Naive Bayes Classifier as the black-box Machine Learning algorithm. Experimental results show that FBL allows the Naive Bayes Classifier to achieve greater accuracies and that FBL performs better than other classical filters and wrappers.

    @INPROCEEDINGS{CortizoPerez2006,
    author = {Cortizo Pérez, José Carlos and Giráldez, Ignacio},
    title = {MultiCriteria Wrapper Improvements to Naive Bayes Learning},
    booktitle = {Intelligent Data Engineering and Automated Learning},
    year = {2006},
    editor = {E. Corchado and H. Yin and V. Botti},
    volume = {4224},
    series = {Lecture Notes in Computer Science},
    pages = {419-427},
    publisher = {Springer Verlag},
    abstract = {Feature subset selection using a wrapper means to perform a search for an optimal set of attributes using the Machine Learning Algorithm as a black box. The Naive Bayes Classifier is based on the assumption of independence among the values of the attributes given the class value. Consequently, its effectiveness may decrease when the attributes are interdependent. We present FBL, a wrapper that uses the information about dependencies to guide the search for the optimal subset of features and the Naive Bayes Classifier as the black-box Machine Learning algorithm. Experimental results show that FBL allows the Naive Bayes Classifier to achieve greater accuracies and that FBL performs better than other classical filters and wrappers.},
    doi = {10.1007/11875581_51},
    institution = {Universidad de Burgos},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMulti+Criteria+Wrapper+Improvements+to+Naive+Bayes+Learning&btnG=&hl=es&as_sdt=0}
    }
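
    The generic wrapper setting described above, with Naive Bayes as the black box, can be illustrated with a stock forward search (FBL's dependency-guided search is not reproduced here; scikit-learn's selector stands in for it):

    # Sketch of wrapper-style feature subset selection around Naive Bayes.
    from sklearn.datasets import load_wine
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.naive_bayes import GaussianNB

    X, y = load_wine(return_X_y=True)
    selector = SequentialFeatureSelector(
        GaussianNB(), n_features_to_select=5, direction="forward", cv=5)
    selector.fit(X, y)
    print(selector.get_support(indices=True))  # indices of the kept features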

  • Gachet Páez, D., Buenaga, M., & Maña, M.. (2006). Using mobile devices for intelligent access to medical information in hospitals. Ubiquitous computing and ambient intelligence.
    [BibTeX] [Google Scholar]
    @ARTICLE{Gachet2006a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Maña, Manuel},
    title = {Using Mobile Devices for Intelligent Access to Medical Information in Hospitals},
    year = {2006},
    month = {November},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+Mobile+Devices+for+Intelligent+Access+to+Medical+Information+in+Hospitals&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Buenaga, M., & Puertas, E.. (2006). Mobile access to patient clinical records and related medical documentation. International conference on ubiquitous computing.
    [BibTeX] [Abstract] [Google Scholar]
    On-line access to patient clinical records from pocket and hand-held or tablet computers will be a useful tool for health care professionals and a valuable complement to other medical applications if information delivery and access systems are designed with handheld computers in mind. In this paper we present and discuss some partial results of two different research projects, SINAMED and ISIS, both of which have as their main goals the design of new text categorization and summarization algorithms applied to patient clinical records and associated medical information, and advanced, efficient user interfaces for mobile and on-line access to these results. Continued and new research is expected to yield additional handheld-based user interface design principles as well as guidelines for results organization, system performance and acceptance in a concrete public health institution.

    @OTHER{Gachet2006,
    abstract = {On-line access to patient clinical records from pocket and hand-held or tablet computers will be a useful tool for health care professionals and a valuable complement to other medical applications if information delivery and access systems are designed with handheld computers in mind. In this paper we present and discuss some partial results of two different research projects, SINAMED and ISIS, both of which have as their main goals the design of new text categorization and summarization algorithms applied to patient clinical records and associated medical information, and advanced, efficient user interfaces for mobile and on-line access to these results. Continued and new research is expected to yield additional handheld-based user interface design principles as well as guidelines for results organization, system performance and acceptance in a concrete public health institution.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique},
    booktitle = {International Conference on Ubiquitous Computing},
    title = {Mobile Access to Patient Clinical Records and Related Medical Documentation},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Access+to+Patient+Clinical+Records+and+Related+Medical+Documentation&btnG=&hl=es&as_sdt=0},
    year = {2006}
    }

  • Maña, M., Mata, J., Dominguez, J. L., Vaquero, A., Alvarez, F., Gómez Hidalgo, J. M., Gachet Páez, D., & Buenaga, M.. (2006). Los proyectos sinamed e isis: mejoras en el acceso a la información biomédica mediante la integración de generación de resúmenes, categorización automática de textos y ontologías. Procesamiento de lenguaje natural, 37.
    [BibTeX] [Abstract] [Google Scholar]
    Los sistemas inteligentes de acceso a la información están integrando de manera creciente técnicas de minería de texto y de análisis del contenido, y recursos semánticos como las ontologías. En los proyectos ISIS y SINAMED juegan un papel central la utilización de categorización de texto, la extracción automática de resúmenes y las ontologías, para la mejora del acceso a la información en un dominio biomédico específico: los historiales clínicos de pacientes y la información científica biomédica asociada. En el desarrollo de los dos proyectos participa un consorcio formado por grupos de investigación de tres universidades (Universidad Europea de Madrid, Universidad de Huelva, Universidad Complutense de Madrid), un hospital (Hospital de Fuenlabrada, Madrid), y una compañía de desarrollo de software (Bitext).

    @INPROCEEDINGS{Mana2006,
    author = {Maña, Manuel and Mata, Jacinto and Dominguez, Juan L. and Vaquero, Antonio and Alvarez, Francisco and Gómez Hidalgo, José María and Gachet Páez, Diego and Buenaga, Manuel},
    title = {Los proyectos SINAMED e ISIS: Mejoras en el Acceso a la Información Biomédica mediante la integración de Generación de Resúmenes, Categorización Automática de Textos y Ontologías},
    year = {2006},
    volume = {37},
    abstract = {Los sistemas inteligentes de acceso a la información están integrando de manera creciente técnicas de minería de texto y de análisis del contenido, y recursos semánticos como las ontologías. En los proyectos ISIS y SINAMED juegan un papel central la utilización de categorización de texto, la extracción automática de resúmenes y las ontologías, para la mejora del acceso a la información en un dominio biomédico específico: los historiales clínicos de pacientes y la información científica biomédica asociada. En el desarrollo de los dos proyectos participa un consorcio formado por grupos de investigación de tres universidades (Universidad Europea de Madrid, Universidad de Huelva, Universidad Complutense de Madrid), un hospital (Hospital de Fuenlabrada, Madrid), y una compañía de desarrollo de software (Bitext).},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Los+proyectos+SINAMED+e+ISIS%3A+Mejoras+en+el+Acceso+a+la+Informaci%C3%B3n+Biom%C3%A9dica+mediante+la+integraci%C3%B3n+de+Generaci%C3%B3n+de+Res%C3%BAmenes%2C+Categorizaci%C3%B3n+Autom%C3%A1tica+de+Textos+y+Ontolog%C3%ADas&btnG=&hl=es&as_sdt=0}
    }

  • Padrón Nápoles, V. M., Ugarte Suárez, M., Hussain Alanbari, M., & Gachet Páez, D.. (2006). Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales. Grafema.
    [BibTeX] [Google Scholar]
    @BOOK{PadronNapoles2006,
    title = {Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales},
    publisher = {Grafema},
    year = {2006},
    author = {Padrón Nápoles, Víctor Manuel and Ugarte Suárez, Marta and Hussain Alanbari, Mohammad and Gachet Páez, Diego},
    isbn = {9788493422561},
    language = {es},
    url = {http://www.google.es/search?tbm=bks&hl=es&q=Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales&btnG=#hl=es&tbm=bks&sclient=psy-ab&q=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&oq=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&gs_l=serp.3...5065.6500.0.6805.2.2.0.0.0.0.0.0..0.0...0.2...1c.1.6.psy-ab.FXP1zEchBms&pbx=1&bav=on.2,or.r_qf.&bvm=bv.43828540,d.ZGU&fp=b9ef6759e3a8d17e&biw=1366&bih=653}
    }

  • Vaquero, A., Saenz, F., Alvarez, F., & Buenaga, M.. (2006). Methodologically designing a hierarchically organized concept-based terminology database to improve access to biomedical documentation. In Meersman, R., Tari, Z., & Herrero, P. (Ed.), In On the move to meaningful internet systems 2006: otm 2006 workshops (Vol. 4277, pp. 658-668). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.

    @INCOLLECTION{Vaquero2006a,
    author = {Vaquero, Antonio and Saenz, Fernando and Alvarez, Francisco and Buenaga, Manuel},
    title = {Methodologically Designing a Hierarchically Organized Concept-Based Terminology Database to Improve Access to Biomedical Documentation},
    booktitle = {On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Meersman, Robert and Tari, Zahir and Herrero, Pilar},
    volume = {4277},
    series = {Lecture Notes in Computer Science},
    pages = {658-668},
    month = {January},
    abstract = {Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11915034_90},
    isbn = {978-3-540-48269-7, 978-3-540-48272-7},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMethodologically+Designing+a+Hierarchically+Organized+Concept-Based+Terminology+Database+to+Improve+Access+to+Biomedical+Documentation&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Vaquero, A., Saenz, F., Alvarez, F., & Buenaga, M.. (2006). Conceptual design for domain and task specific ontology-based linguistic resources. Paper presented at the On the move to meaningful internet systems 2006: coopis, doa, gada, and odbase.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.

    @INPROCEEDINGS{Vaquero2006,
    author = {Vaquero, Antonio and Saenz, Fernando and Alvarez, Francisco and Buenaga, Manuel},
    title = {Conceptual Design for Domain and Task Specific Ontology-Based Linguistic Resources},
    booktitle = {On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE},
    year = {2006},
    volume = {4275},
    series = {Lecture Notes in Computer Science},
    pages = {855-862},
    month = {November},
    publisher = {Springer Berlin Heidelberg},
    abstract = {Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.},
    doi = {10.1007/11914853_52},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConceptual+Design+for+Domain+and+Task+Specific+Ontology-Based+Linguistic+Resources&btnG=&hl=es&as_sdt=0}
    }

  • Vaquero, A., Saenz, F., Álvarez, F., & Buenaga, M.. (2006). Thinking precedes action: using software engineering for the development of a terminology database to improve access to biomedical documentation. In Maglaveras, N., Chouvarda, I., Koutkias, V., & Brause, R. (Ed.), In Biological and medical data analysis (Vol. 4345, pp. 207-218). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of software and database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.

    @INCOLLECTION{Vaquero2006b,
    author = {Vaquero, Antonio and Saenz, Fernando and Álvarez, Francisco and Buenaga, Manuel},
    title = {Thinking Precedes Action: Using Software Engineering for the Development of a Terminology Database to Improve Access to Biomedical Documentation},
    booktitle = {Biological and Medical Data Analysis},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Maglaveras, Nicos and Chouvarda, Ioanna and Koutkias, Vassilis and Brause, Rüdiger},
    volume = {4345},
    series = {Lecture Notes in Computer Science},
    pages = {207-218},
    month = {January},
    abstract = {Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of software and database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11946465_19},
    isbn = {978-3-540-68063-5, 978-3-540-68065-9},
    shorttitle = {Thinking Precedes Action},
    url = {http://scholar.google.es/scholar?q=allintitle%3AThinking+Precedes+Action%3A+Using+Software+Engineering+for+the+Development+of+a+Terminology+Database+to+Improve+Access+to+Biomedical+Documentation&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

2005

  • Gómez Hidalgo, J. M., Buenaga Rodríguez, M., & Cortizo Pérez, J. C.. (2005). The role of word sense disambiguation in automated text categorization. Paper presented at the 10th international conference on applications of natural language to information systems.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Automated Text Categorization has reached the levels of accuracy of human experts. Provided that enough training data is available, it is possible to learn accurate automatic classifiers by using Information Retrieval and Machine Learning Techniques. However, performance of this approach is damaged by the problems derived from language variation (especially polysemy and synonymy). We investigate how Word Sense Disambiguation can be used to alleviate these problems, by using two traditional methods for thesaurus usage in Information Retrieval, namely Query Expansion and Concept Indexing. These methods are evaluated on the problem of using the Lexical Database WordNet for text categorization, focusing on the Word Sense Disambiguation step involved. Our experiments demonstrate that rather simple dictionary methods, and baseline statistical approaches, can be used to disambiguate words and improve text representation and learning in both Query Expansion and Concept Indexing approaches.

    @INPROCEEDINGS{GomezHidalgo2005,
    author = {Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel and Cortizo Pérez, José Carlos},
    title = {The Role of Word Sense Disambiguation in Automated Text Categorization},
    booktitle = {10th International Conference on Applications of Natural Language to Information Systems},
    year = {2005},
    pages = {298-309},
    publisher = {Springer Verlag},
    abstract = {Automated Text Categorization has reached the levels of accuracy of human experts. Provided that enough training data is available, it is possible to learn accurate automatic classifiers by using Information Retrieval and Machine Learning Techniques. However, performance of this approach is damaged by the problems derived from language variation (especially polysemy and synonymy). We investigate how Word Sense Disambiguation can be used to alleviate these problems, by using two traditional methods for thesaurus usage in Information Retrieval, namely Query Expansion and Concept Indexing. These methods are evaluated on the problem of using the Lexical Database WordNet for text categorization, focusing on the Word Sense Disambiguation step involved. Our experiments demonstrate that rather simple dictionary methods, and baseline statistical approaches, can be used to disambiguate words and improve text representation and learning in both Query Expansion and Concept Indexing approaches.},
    doi = {10.1007/11428817_27},
    institution = {Universidad de Alicante},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+The+Role+of+Word+Sense+Disambiguation+in+Automated+Text+Categorization&btnG=&hl=es&as_sdt=0}
    }
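
    The two thesaurus methods named above, Query Expansion and Concept Indexing, can be sketched with WordNet and the first-sense heuristic, one of the "rather simple dictionary methods" the abstract refers to (our reconstruction; the paper's exact resources and heuristics may differ):

    # Sketch of WordNet-based Query Expansion and Concept Indexing.
    from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

    def expand_query(terms):
        """Query Expansion: add synonyms from each term's most frequent sense."""
        expanded = set(terms)
        for t in terms:
            senses = wn.synsets(t)
            if senses:                      # first synset ~ most frequent sense
                expanded.update(lemma.name() for lemma in senses[0].lemmas())
        return sorted(expanded)

    def concept_index(terms):
        """Concept Indexing: replace each term by its first-sense synset id."""
        return [wn.synsets(t)[0].name() for t in terms if wn.synsets(t)]

    print(expand_query(["car", "bank"]))
    print(concept_index(["car", "bank"]))   # e.g. ['car.n.01', 'bank.n.01']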

2004

  • Cortizo Pérez, J. C., & Giráldez, I.. (2004). Discovering data dependencies in web content mining. Paper presented at the Actas de la iadis international conference www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on bayesian learning. The Naïve Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the html), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects perfectly those synthetic attributes and also some “natural” dependent attributes, obtaining a more accurate classifier.

    @INPROCEEDINGS{CortizoPerez2004,
    author = {Cortizo Pérez, José Carlos and Giráldez, Ignacio},
    title = {Discovering Data Dependencies in Web Content Mining},
    booktitle = {Actas de la IADIS International Conference WWW/Internet},
    year = {2004},
    pages = {6-9},
    abstract = {Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on bayesian learning. The Naïve Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the html), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects perfectly those synthetic attributes and also some “natural” dependent attributes, obtaining a more accurate classifier.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADiscovering+Data+Dependencies+in+Web+Content+Mining&btnG=&hl=es&as_sdt=0}
    }
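
    The regression-based dependency test described above admits a compact sketch (tolerance and data are invented; in practice one attribute per detected relation would be dropped before learning):

    # Flag attributes that are (near-)exact linear combinations of the others.
    import numpy as np

    def linearly_dependent_columns(X, tol=1e-8):
        redundant = []
        for j in range(X.shape[1]):
            others = np.delete(X, j, axis=1)
            coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            if np.linalg.norm(others @ coef - X[:, j]) < tol:
                redundant.append(j)
        return redundant

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    X = np.column_stack([X, 2 * X[:, 0] + 0.5 * X[:, 2]])  # synthetic dependent column
    print(linearly_dependent_columns(X))  # [0, 2, 3]: all columns in the relation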

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Buenaga Rodríguez, M.. (2004). Experimentos en indexación conceptual para la categorización de texto. Paper presented at the Actas de la conferencia ibero-americana www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuales evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.

    @INPROCEEDINGS{GomezHidalgo2004a,
    author = {Gómez Hidalgo, José María and Cortizo Pérez, José Carlos and Puertas Sanz, Enrique and Buenaga Rodríguez, Manuel},
    title = {Experimentos en Indexación Conceptual para la Categorización de Texto},
    booktitle = {Actas de la Conferencia Ibero-Americana WWW/Internet},
    year = {2004},
    editor = {J. M. Gutiérrez and J. J. Martínez and P. Isaias},
    pages = {251-258},
    abstract = {En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuales evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperimentos+en+Indexaci%C3%B3n+Conceptual+para+la+Categorizaci%C3%B3n+de+Texto&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Ruiz Leyva, M. J.. (2004). Concept indexing for automated text categorization. Paper presented at the Natural language processing and information systems: 9th international conference on applications of natural language to information systems.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.

    @INPROCEEDINGS{GomezHidalgo2004,
    author = {Gómez Hidalgo, José María and Cortizo Pérez, José Carlos and Puertas Sanz, Enrique and Ruiz Leyva, Miguel Jaime},
    title = {Concept Indexing for Automated Text Categorization},
    booktitle = {Natural Language Processing and Information Systems: 9th International Conference on Applications of Natural Language to Information Systems},
    year = {2004},
    volume = {3136},
    series = {Lecture Notes in Computer Science},
    pages = {195-206},
    publisher = {Springer Verlag},
    abstract = {In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.},
    doi = {10.1007/978-3-540-27779-8_17},
    institution = {University of Salford},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConcept+Indexing+for+Automated+Text+Categorization&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Gómez Hidalgo, J. M., Giráldez, I., & Buenaga, M.. (2004). Text categorization for internet content filtering. Inteligencia artificial - revista iberoamericana de inteligencia artificial, 8(22), 147-160.
    [BibTeX] [Abstract] [Google Scholar]
    Text Filtering is one of the most challenging and useful tasks in the Multilingual Information Access field. In a number of filtering applications, Automated Text Categorization of documents plays a key role. In this paper, we present two such applications (Hermes and POESIA), focused on personalized news delivery and Internet inappropriate content blocking, respectively. We are specifically concerned with the role of Automated Text Categorization in these applications, and how the task is approached in a multilingual environment. Apart from the details of the methods employed in our work, we envisage new solutions for a more complex task we have called Cross-Lingual Text Categorization.

    @INPROCEEDINGS{GomezHidalgo2004b,
    author = {Gómez Hidalgo, José María and Giráldez, Ignacio and Buenaga, Manuel},
    title = {Text Categorization for Internet Content Filtering},
    year = {2004},
    volume = {8},
    number = {22},
    pages = {147-160},
    abstract = {Text Filtering is one of the most challenging and useful tasks in the Multilingual Information Access field. In a number of filtering applications, Automated Text Categorization of documents plays a key role. In this paper, we present two such applications (Hermes and POESIA), focused on personalized news delivery and Internet inappropriate content blocking, respectively. We are specifically concerned with the role of Automated Text Categorization in these applications, and how the task is approached in a multilingual environment. Apart from the details of the methods employed in our work, we envisage new solutions for a more complex task we have called Cross-Lingual Text Categorization.},
    journal = {Inteligencia Artificial - Revista Iberoamericana de Inteligencia Artificial},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Text+Categorization+for+Internet+Content+Filtering+&btnG=&hl=es&as_sdt=0}
    }

  • Maña López, M. J., Buenaga, M., & Gómez Hidalgo, J. M.. (2004). Multidocument summarization: an added value to clustering in interactive retrieval. Acm trans. inf. syst., 22(2), 215-241.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    A more and more generalized problem in effective information access is the presence in the same corpus of multiple documents that contain similar information. Generally, users may be interested in locating, for a topic addressed by a group of similar documents, one or several particular aspects. This kind of task, called instance or aspectual retrieval, has been explored in several TREC Interactive Tracks. In this article, we propose, in addition to the classification capacity of clustering techniques, the possibility of offering an indicative extract about the contents of several sources by means of multidocument summarization techniques. Two kinds of summaries are provided. The first one covers the similarities of each cluster of documents retrieved. The second one shows the particularities of each document with respect to the common topic in the cluster. The document multitopic structure has been used in order to determine similarities and differences of topics in the cluster of documents. The system is independent of document domain and genre. An evaluation of the proposed system with users proves significant improvements in effectiveness. The results of previous experiments that have compared clustering algorithms are also reported.

    @ARTICLE{ManaLopez2004,
    author = {Maña López, Manuel J. and Buenaga, Manuel and Gómez Hidalgo, José María},
    title = {Multidocument summarization: An added value to clustering in interactive retrieval},
    journal = {ACM Trans. Inf. Syst.},
    year = {2004},
    volume = {22},
    pages = {215-241},
    number = {2},
    month = {April},
    abstract = {A more and more generalized problem in effective information access is the presence in the same corpus of multiple documents that contain similar information. Generally, users may be interested in locating, for a topic addressed by a group of similar documents, one or several particular aspects. This kind of task, called instance or aspectual retrieval, has been explored in several TREC Interactive Tracks. In this article, we propose, in addition to the classification capacity of clustering techniques, the possibility of offering an indicative extract about the contents of several sources by means of multidocument summarization techniques. Two kinds of summaries are provided. The first one covers the similarities of each cluster of documents retrieved. The second one shows the particularities of each document with respect to the common topic in the cluster. The document multitopic structure has been used in order to determine similarities and differences of topics in the cluster of documents. The system is independent of document domain and genre. An evaluation of the proposed system with users proves significant improvements in effectiveness. The results of previous experiments that have compared clustering algorithms are also reported.},
    doi = {10.1145/984321.984323},
    issn = {1046-8188},
    shorttitle = {Multidocument summarization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Multidocument+summarization%3A+An+added+value+to+clustering+in+interactive+retrieval&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
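
    The two summary kinds proposed above, cluster similarities and per-document particularities, admit a rough term-level sketch (toy documents; the paper's method works on multitopic document structure and produces full textual summaries):

    # Sketch: cluster retrieved documents, then surface common terms per
    # cluster (similarities) and distinctive terms per document (differences).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["floods hit the river basin, emergency services deployed",
            "river basin flooding forces evacuation, emergency declared",
            "stock markets rally as tech shares climb",
            "tech stocks surge while markets extend gains"]

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs).toarray()
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    terms = np.array(vec.get_feature_names_out())

    for c in sorted(set(labels)):
        centroid = X[labels == c].mean(axis=0)
        print("cluster", c, "common:", terms[np.argsort(centroid)[-3:]])
        for i in np.where(labels == c)[0]:
            diff = X[i] - centroid           # what this document adds
            print("  doc", i, "particular:", terms[np.argsort(diff)[-2:]])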

2003

  • Gómez Hidalgo, J. M., Puertas Sanz, E., Carrero García, F., & Buenaga Rodríguez, M.. (2003). Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en internet. Procesamiento de lenguaje natural, 31, 13-20.
    [BibTeX] [Abstract] [Google Scholar]
    El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.

    @INCOLLECTION{GomezHidalgo2003,
    author = {Gómez Hidalgo, José María and Puertas Sanz, Enrique and Carrero García, Francisco and Buenaga Rodríguez, Manuel},
    title = {Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en Internet},
    year = {2003},
    volume = {31},
    pages = {13-20},
    abstract = {El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Categorizaci%C3%B3n+de+texto+sensible+al+coste+para+el+filtrado+de+contenidos+inapropiados+en+Internet&btnG=&hl=es&as_sdt=0}
    }
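
    Cost-sensitive learning of the kind evaluated above can be approximated by weighting classes during training, so that letting an inappropriate page through costs more than blocking a harmless one. The sketch below uses invented data and costs, not the paper's corpora or algorithms:

    # Sketch: asymmetric misclassification costs via class_weight.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pages = ["explicit adult content and gambling offers",
             "school homework help and study resources",
             "free adult videos click here",
             "museum opening hours and exhibitions"]
    labels = [1, 0, 1, 0]                    # 1 = inappropriate

    model = make_pipeline(
        TfidfVectorizer(),
        LogisticRegression(class_weight={0: 1, 1: 5}),  # missing class 1 is 5x worse
    )
    model.fit(pages, labels)
    print(model.predict(["free adult offers"]))          # e.g. [1]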

2002

  • Gómez Hidalgo, J. M., Buenaga Rodríguez, M., Ureña López, L. A., Martín Valdivia, M. T., & García Vega, M.. (2002). Integrating lexical knowledge in learning-based text categorization. 6th international conference on the statistical analysis of textual data, St-Malo, Francia.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (ATC) is an important task in the field of Information Access. The prevailing approach to ATC is making use of a collection of prelabeled texts for the induction of a document classifier through learning methods. With the increasing availability of lexical resources in electronic form (including Lexical Databases (LDBs), Machine Readable Dictionaries, etc.), there is an interesting opportunity for the integration of them in learning-based ATC. In this paper, we present an approach to the integration of lexical knowledge extracted from the LDB WordNet in learning-based ATC, based on Stacked Generalization (SG). The method we suggest is based on combining the lexical knowledge extracted from the LDB, interpreted as a classifier, with a learning-based classifier, through SG.

    @INPROCEEDINGS{GomezHidalgo2002,
    author = {Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel and Ureña López, Luis Alfonso and Martín Valdivia, María Teresa and García Vega, Manuel},
    title = {Integrating Lexical Knowledge in Learning-Based Text Categorization},
    year = {2002},
    pages = {313-322},
    address = {St-Malo, France},
    month = {March},
    abstract = {Automatic Text Categorization (ATC) is an important task in the field of Information Access. The prevailing approach to ATC is making use of a collection of prelabeled texts for the induction of a document classifier through learning methods. With the increasing availability of lexical resources in electronic form (including Lexical Databases (LDBs), Machine Readable Dictionaries, etc.), there is an interesting opportunity for the integration of them in learning-based ATC. In this paper, we present an approach to the integration of lexical knowledge extracted from the LDB WordNet in learning-based ATC, based on Stacked Generalization (SG). The method we suggest is based on combining the lexical knowledge extracted from the LDB, interpreted as a classifier, with a learning-based classifier, through SG.},
    journal = {6th International Conference on the Statistical Analysis of Textual Data},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+integrating+lexical+knowledge+in+learning-based+text+categorization&btnG=&hl=es&as_sdt=0}
    }
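
    A minimal sketch of Stacked Generalization as described above, assuming two hypothetical level-0 scorers, lexical_score (standing in for the WordNet-derived classifier) and learned_score (a classifier trained on labelled documents), feeding a simple level-1 meta-learner:

    def stack_features(doc, lexical_score, learned_score):
        # Level-1 input: the level-0 predictions (plus a bias term).
        return [1.0, lexical_score(doc), learned_score(doc)]

    def train_meta(docs, labels, lexical_score, learned_score,
                   epochs=50, lr=0.1):
        # Perceptron-style meta-learner over level-0 outputs. In a proper
        # setup the level-0 scores come from held-out folds to avoid leakage.
        w = [0.0, 0.0, 0.0]
        for _ in range(epochs):
            for doc, y in zip(docs, labels):          # y in {0, 1}
                x = stack_features(doc, lexical_score, learned_score)
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                if pred != y:
                    w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
        return w

    def classify(doc, w, lexical_score, learned_score):
        x = stack_features(doc, lexical_score, learned_score)
        return sum(wi * xi for wi, xi in zip(w, x)) > 0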

  • Gómez Hidalgo, J. M., Puertas Sanz, E., Buenaga Rodríguez, M., & Carrero García, F.. (2002). Text filtering at poesia: a new internet content filtering tool for educational environments. Procesamiento de lenguaje natural, 29, 291-292.
    [BibTeX] [Abstract] [Google Scholar]
    Internet provides children with easy access to pornography and other harmful materials. In order to improve the effectiveness of existing filters, we present POESIA, a project whose objective is to develop and evaluate extensible open-source Internet filtering software in educational environments.

    @ARTICLE{GomezHidalgo2002a,
    author = {Gómez Hidalgo, José María and Puertas Sanz, Enrique and Buenaga Rodríguez, Manuel and Carrero García, Francisco},
    title = {Text filtering at POESIA: a new Internet content filtering tool for educational environments},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2002},
    volume = {29},
    pages = {291-292},
    month = {September},
    abstract = {Internet provides children with easy access to pornography and other harmful materials. In order to improve the effectiveness of existing filters, we present POESIA, a project whose objective is to develop and evaluate extensible open-source Internet filtering software in educational environments.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AText+filtering+at+POESIA%3A+a+new+Internet+content+filtering+tool+for+educational+environments&btnG=&hl=es&as_sdt=0}
    }

2001

  • Fernandez, J., Benchetrit, D., & Gachet Páez, D.. (2001). Automated visual inspection to assembly of frontal airbag sensors of automobiles. Paper presented at the 2001 8th IEEE international conference on emerging technologies and factory automation, 2001. proceedings.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes an automatic quality control system that supervises, through three CCD cameras, the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of the epoxy resin used to fix the accelerator sensor. The system can inspect 12000 pieces/hour and is now in full production in a multinational automobile component factory in Madrid.

    @inproceedings{fernandez_automated_2001,
    title = {Automated visual inspection to assembly of frontal airbag sensors of automobiles},
    volume = {2},
    doi = {10.1109/ETFA.2001.997745},
    abstract = {This paper describes an automatic quality control system that supervises, through three {CCD} cameras, the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of the epoxy resin used to fix the accelerator sensor. The system can inspect 12000 pieces/hour and is now in full production in a multinational automobile component factory in Madrid.},
    booktitle = {2001 8th {IEEE} International Conference on Emerging Technologies and Factory Automation, 2001. Proceedings},
    author = {Fernandez, J. and Benchetrit, D. and Gachet Páez, Diego},
    year = {2001},
    keywords = {Assembly systems, automatic optical inspection, automobile airbag sensors, automobile component factory, automobile industry, Automobiles, {CCD} cameras, Charge coupled devices, Charge-coupled image sensors, Epoxy resins, Inspection, Production systems, quality control, quality control system, Sensor phenomena and characterization, Sensor systems, visual inspection},
    pages = {631--634 vol.2},
    url = {http://scholar.google.es/scholar?q=Automated+visual+inspection+to+assembly+of+frontal+airbag+sensors+of+automobiles&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Buenaga Rodriguez, M., Maña López, M. J., Diaz Esteban, A., & Gervás Gómez-Navarro, P.. (2001). A user model based on content analysis for the intelligent personalization of a news service. In Bauer, M., Gmytrasiewicz, P. J., & Vassileva, J. (Ed.), In User modeling (, Vol. 2109pp. 216-218). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of user’s interests includes his preferences about content, using a wide coverage and non-specific-domain classification of topics, and structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization.

    @INCOLLECTION{BuenagaRodriguez2001,
    author = {Buenaga Rodriguez, Manuel and Maña López, Manuel J. and Diaz Esteban, Alberto and Gervás Gómez-Navarro, Pablo},
    title = {A User Model Based on Content Analysis for the Intelligent Personalization of a News Service},
    booktitle = {User Modeling},
    publisher = {Springer Berlin Heidelberg},
    year = {2001},
    editor = {Bauer, Mathias and Gmytrasiewicz, Piotr J. and Vassileva, Julita},
    volume = {2109},
    series = {Lecture Notes in Computer Science},
    pages = {216-218},
    month = {January},
    abstract = {In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of user's interests includes his preferences about content, using a wide coverage and non-specific-domain classification of topics, and structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization.},
    copyright = {©2001 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/3-540-44566-8_25},
    isbn = {978-3-540-42325-6, 978-3-540-44566-1},
    language = {en},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+User+Model+Based+on+Content+Analysis+for+the+Intelligent+Personalization+of+a+News+Service&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
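
    A schematic sketch of a user model separating short-term needs from long-term multi-topic interests, as in the abstract above; the field names and the decay-based implicit feedback rule are assumptions for illustration, not the paper's design.

    from dataclasses import dataclass, field

    @dataclass
    class UserModel:
        long_term: dict = field(default_factory=dict)   # topic -> interest weight
        short_term: set = field(default_factory=set)    # terms from recent queries
        sections: set = field(default_factory=set)      # preferred newspaper sections

        def read_article(self, topics, weight=1.0, decay=0.95):
            # Implicit feedback: reading an article reinforces its topics
            # while all other long-term interests slowly decay.
            for t in list(self.long_term):
                self.long_term[t] *= decay
            for t in topics:
                self.long_term[t] = self.long_term.get(t, 0.0) + weight

        def score(self, article_topics):
            lt = sum(self.long_term.get(t, 0.0) for t in article_topics)
            st = sum(1.0 for t in article_topics if t in self.short_term)
            return lt + st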

  • Díaz Esteban, A., Buenaga Rodríguez, M., Giráldez, I., Gómez Hidalgo, J. M., García, A., Chacón, I., San Miguel, B., Puertas Sanz, E., Murciano, R., Alcojor, M., Acero, I., & Gervás, P.. (2001). Proyecto hermes: servicios de personalización inteligente de noticias mediante la integración de técnicas de análisis automático del contenido textual y modelado de usuario con capacidades bilingües. Procesamiento de lenguaje natural, 27, 299-300.
    [BibTeX] [Abstract] [Google Scholar]
    The Hermes project aims to develop an intelligent personalized information access system in a bilingual environment, Spanish and English. The system provides high effectiveness and information specially adapted to the client, based on the use of advanced textual content analysis and user modeling techniques. A main goal of the Hermes project lies in extending current technologies for monolingual environments to the bilingual field. The news server is implemented as a Java application that receives client subscriptions through a web page. During the subscription process the client specifies his preferences for receiving news, and from these a user model is generated that will be used to send him the news that may interest him.

    @ARTICLE{DiazEsteban2001,
    author = {Díaz Esteban, Alberto and Buenaga Rodríguez, Manuel and Giráldez, Ignacio and Gómez Hidalgo, José María and García, Antonio and Chacón, Inmaculada and San Miguel, Beatriz and Puertas Sanz, Enrique and Murciano, Raúl and Alcojor, Matías and Acero, Ignacio and Gervás, Pablo},
    title = {Proyecto Hermes: Servicios de Personalización Inteligente de Noticias mediante la Integración de Técnicas de Análisis Automático del Contenido Textual y Modelado de Usuario con Capacidades Bilingües},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2001},
    volume = {27},
    pages = {299-300},
    month = {September},
    abstract = {The Hermes project aims to develop an intelligent personalized information access system in a bilingual environment, Spanish and English. The system provides high effectiveness and information specially adapted to the client, based on the use of advanced textual content analysis and user modeling techniques. A main goal of the Hermes project lies in extending current technologies for monolingual environments to the bilingual field. The news server is implemented as a Java application that receives client subscriptions through a web page. During the subscription process the client specifies his preferences for receiving news, and from these a user model is generated that will be used to send him the news that may interest him.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AProyecto+Hermes%3A+Servicios+de+Personalizaci%C3%B3n+Inteligente+de+Noticias+mediante+la+Integraci%C3%B3n+de+T%C3%A9cnicas+de+An%C3%A1lisis+Autom%C3%A1tico+del+Contenido+Textual+y+Modelado+de+Usuario+con+Capacidades+Biling%C3%BCes&btnG=&hl=es&as_sdt=0}
    }

  • Díaz Esteban, A., Maña López, M. J., Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Gervás Gómez-Navarro, P.. (2001). Using linear classifiers in the integration of user modeling and text content analysis in the personalization of a web-based spanish news service. Workshop on user modeling, machine learning and information retrieval.
    [BibTeX] [Abstract] [Google Scholar]
    Nowadays many newspapers and news agencies offer personalized information access services and, moreover, there is a growing interest in the improvement of these services. In this paper we present a methodology useful to improve the intelligent personalization of news services and the way it has been applied to a relevant Spanish newspaper: ABC. Our methodology integrates textual content analysis tasks and machine learning techniques to achieve an elaborated user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of a user's interests includes his preferences about structure (newspaper sections), content and information delivery. A wide coverage and non-specific-domain classification of topics and a personal set of keywords allow the user to define his preferences about content. Machine learning techniques are used to obtain an initial representation of each category of the topic classification. Finally, we introduce some details about the Mercurio system, which is being used to implement this methodology for ABC. We describe our experience and an evaluation of the system in comparison with other commercial systems.

    @OTHER{DiazEsteban2001a,
    abstract = {Nowadays many newspapers and news agencies offer personalized information access services and, moreover, there is a growing interest in the improvement of these services. In this paper we present a methodology useful to improve the intelligent personalization of news services and the way it has been applied to a relevant Spanish newspaper: ABC. Our methodology integrates textual content analysis tasks and machine learning techniques to achieve an elaborated user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of a user's interests includes his preferences about structure (newspaper sections), content and information delivery. A wide coverage and non-specific-domain classification of topics and a personal set of keywords allow the user to define his preferences about content. Machine learning techniques are used to obtain an initial representation of each category of the topic classification. Finally, we introduce some details about the Mercurio system, which is being used to implement this methodology for ABC. We describe our experience and an evaluation of the system in comparison with other commercial systems.},
    author = {Díaz Esteban, Alberto and Maña López, Manuel J. and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María and Gervás Gómez-Navarro, Pablo},
    booktitle = {Workshop on User Modeling, Machine Learning and Information Retrieval},
    title = {Using linear classifiers in the integration of user modeling and text content analysis in the personalization of a Web-based Spanish News Service},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUsing+linear+classifiers+in+the+integration+of+user+modeling+and+text+content+analysis+in+the+personalization+of+a+Webbased+Spanish+News+&btnG=&hl=es&as_sdt=0},
    year = {2001}
    }

  • Gómez Hidalgo, J. M., Murciano Quejido, R., Díaz Esteban, A., Buenaga Rodríguez, M., & Puertas Sanz, E.. (2001). Categorizing photographs for user-adapted searching in a news agency e-commerce application. First international workshop on new developments in digital libraries, 55-66.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the CODI system, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving the search possibilities of the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected for their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.

    @PROCEEDINGS{GomezHidalgo2001,
    title = {Categorizing photographs for user-adapted searching in a news agency e-commerce application},
    year = {2001},
    abstract = {In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the CODI system, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving the search possibilities of the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected for their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.},
    author = {Gómez Hidalgo, José María and Murciano Quejido, Raúl and Díaz Esteban, Alberto and Buenaga Rodríguez, Manuel and Puertas Sanz, Enrique},
    journal = {First International Workshop on New Developments in Digital Libraries },
    pages = {55-66},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACategorizing+photographs+for+user-adapted+searching+in+a+news+agency+e-commerce&btnG=&hl=es&as_sdt=0}
    }

  • Ureña López, A. L., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (2001). Integrating linguistic resources in tc through wsd. Computers and the humanities, 35(2), 215-230.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like Text Categorization, help users to access the great amount of text they find in the Internet and their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other linguistic resources that are emerging, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make an effective use of it in our model of TC. This filtering process is a Word Sense Disambiguation task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments that show that: TC and WSD based on the integration of linguistic resources are very effective; and, WSD is necessary to effectively integrate linguistic resources in TC.

    @ARTICLE{UrenaLopez2001,
    author = {Ureña López, L. Alfonso and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    title = {Integrating Linguistic Resources in TC through WSD},
    journal = {Computers and the Humanities},
    year = {2001},
    volume = {35},
    pages = {215-230},
    number = {2},
    month = {May},
    abstract = {Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like Text Categorization, help users to access the great amount of text they find in the Internet and their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other linguistic resources that are emerging, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make an effective use of it in our model of TC. This filtering process is a Word Sense Disambiguation task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments that show that: TC and WSD based on the integration of linguistic resources are very effective; and, WSD is necessary to effectively integrate linguistic resources in TC.},
    doi = {10.1023/A:1002632712378},
    issn = {0010-4817, 1572-8412},
    language = {en},
    publisher = {Kluwer Academic Publishers},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+linguistic+resources+in+TC+through+WSD&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

2000

  • Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Díaz Agudo, B.. (2000). Using wordnet to complement training information in text categorization. In Recent advances in natural language processing II (Vol. 185, pp. 353-364). John Benjamins.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.

    @OTHER{BuenagaRodriguez2000,
    abstract = {Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.},
    address = {Amsterdam/Philadelphia},
    author = {Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María and Díaz Agudo, Belén},
    booktitle = {Recent Advances in Natural Language Processing II},
    edition = {Selected Papers from RANLP},
    pages = {353-364},
    publisher = {John Benjamins},
    series = {97, Current Issues in Linguistic Theory (CILT)},
    title = {Using Wordnet to Complement Training Information in Text Categorization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+WordNet+to+Complement+Training+Information+in+Text+Categorization&btnG=&hl=es&as_sdt=0},
    volume = {185},
    year = {2000}
    }
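
    The two training rules named in the abstract above can be sketched over sparse term-weight vectors as follows; the parameter defaults are illustrative, not values taken from the paper.

    def rocchio(positives, negatives, beta=16.0, gamma=4.0):
        # Category prototype: weighted mean of relevant documents minus
        # weighted mean of non-relevant ones (negative weights clipped to 0).
        proto = {}
        for d in positives:
            for t, v in d.items():
                proto[t] = proto.get(t, 0.0) + beta * v / len(positives)
        for d in negatives:
            for t, v in d.items():
                proto[t] = proto.get(t, 0.0) - gamma * v / len(negatives)
        return {t: v for t, v in proto.items() if v > 0.0}

    def widrow_hoff(docs, labels, eta=0.5):
        # LMS online update: w <- w - 2*eta*(w.x - y)*x for each (x, y).
        w = {}
        for x, y in zip(docs, labels):                  # y in {0, 1}
            err = sum(w.get(t, 0.0) * v for t, v in x.items()) - y
            for t, v in x.items():
                w[t] = w.get(t, 0.0) - 2.0 * eta * err * v
        return w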

  • Díaz, A., Gervás, P., Gómez, J. M., García, A., Buenaga, M., Chacón, I., San Miguel, B., Murciano, R., Puertas, E., Alcojor, M., & Acero, I.. (2000). Proyecto mercurio: un servicio personalizado de noticias basado en técnicas de clasificación de texto y modelado de usuario. Xvi congreso de la sepln (sociedad española para el procesamiento del lenguaje natural), vigo, españa.
    [BibTeX] [Abstract] [Google Scholar]
    The Mercurio system is a personalized news server that works with a representation of the client based on the latest advances in user modeling. The news server is implemented as a Java application that receives client subscriptions through a web page. During the subscription process the client specifies his preferences for receiving news, and from these a user model is generated that will be used to send him the news that may interest him at the frequency he has specified. The news server also cooperates with a search engine that allows clients to run specific searches over the day's news.

    @OTHER{Diaz2000,
    abstract = {The Mercurio system is a personalized news server that works with a representation of the client based on the latest advances in user modeling. The news server is implemented as a Java application that receives client subscriptions through a web page. During the subscription process the client specifies his preferences for receiving news, and from these a user model is generated that will be used to send him the news that may interest him at the frequency he has specified. The news server also cooperates with a search engine that allows clients to run specific searches over the day's news.},
    author = {Díaz, Alberto and Gervás, Pablo and Gómez, José María and García, Antonio and Buenaga, Manuel and Chacón, Inmaculada and San Miguel, Beatriz and Murciano, Raúl and Puertas, Enrique and Alcojor, Matías and Acero, Ignacio},
    journal = {XVI Congreso de la SEPLN (Sociedad Española para el Procesamiento del Lenguaje Natural), Vigo, España},
    month = {September},
    title = {Proyecto Mercurio: un servicio personalizado de noticias basado en técnicas de clasificación de texto y modelado de usuario},
    url = {http://scholar.google.es/scholar?hl=es&as_sdt=0,5&q=allintitle%3A+Proyecto+Mercurio%3A+un+servicio+personalizado+de+noticias+basado+en+t%C3%A9cnicas+de+clasificaci%C3%B3n+de+texto+y+modelado+de+usuario},
    year = {2000}
    }

  • Gómez Hidalgo, J. M., Maña López, M., & Puertas Sanz, E.. (2000). Combining text and heuristics for cost-sensitive spam filtering. Fourth computational natural language learning workshop.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Spam filtering is a text categorization task that shows special features that make it interesting and difficult. First, the task has been performed traditionally using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.

    @OTHER{GomezHidalgo2000,
    abstract = {Spam filtering is a text categorization task that shows special features that make it interesting and difficult. First, the task has been performed traditionally using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.},
    address = {Lisbon},
    author = {Gómez Hidalgo, José María and Maña López, Manuel and Puertas Sanz, Enrique},
    doi = {10.3115/1117601.1117623},
    journal = { Fourth Computational Natural Language Learning Workshop},
    month = {September},
    title = {Combining Text and Heuristics for Cost-Sensitive Spam Filtering},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACombining+Text+and+Heuristics+for+Cost-Sensitive+Spam+Filtering&btnG=&hl=es&as_sdt=0},
    year = {2000}
    }
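
    A minimal sketch of the combination described above: message text and hand-crafted heuristic indicators merged into one feature vector for a linear classifier. The heuristics shown are invented examples, not the paper's feature set; blocking should then use a cost-sensitive threshold so that legitimate mail is rarely misclassified.

    import re

    def features(message):
        # Text features: word tokens (plus characters spammers favour).
        f = {"tok:" + t: 1.0 for t in re.findall(r"[a-z$!]+", message.lower())}
        # Heuristic features alongside the text.
        letters = [c for c in message if c.isalpha()]
        caps = sum(c.isupper() for c in letters)
        f["h:caps_ratio"] = caps / len(letters) if letters else 0.0
        f["h:exclamations"] = float(message.count("!"))
        f["h:has_url"] = 1.0 if "http" in message.lower() else 0.0
        return f

    def spam_score(message, weights):
        # Any linear classifier trained over these features will do.
        f = features(message)
        return sum(weights.get(k, 0.0) * v for k, v in f.items())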

  • Maña López, M. J., Ureña López, L. A., & Buenaga Rodríguez, M.. (2000). Tareas de análisis del contenido textual para la recuperación de información con realimentación. Procesamiento de lenguaje natural, 26, 215-222.
    [BibTeX] [Abstract] [Google Scholar]
    The use of relevance feedback is one of the techniques that provides the most significant improvements in the effectiveness of the information retrieval process. At the same time, ever more advanced textual content analysis techniques are being used in the information retrieval process with a view to improving effectiveness. In our work we study the benefits provided by integrating content analysis mechanisms when relevance feedback is used in the information retrieval process. We focus on two analysis tasks, word sense disambiguation and summary generation, presenting a systematic procedure for their use and associated experiments to evaluate the improvements achieved.

    @INCOLLECTION{ManaLopez2000,
    author = {Maña López, Manuel J. and Ureña López, Luis Alfonso and Buenaga Rodríguez, Manuel},
    title = {Tareas de análisis del contenido textual para la recuperación de información con realimentación},
    year = {2000},
    volume = {26},
    pages = {215-222},
    month = {September},
    abstract = {The use of relevance feedback is one of the techniques that provides the most significant improvements in the effectiveness of the information retrieval process. At the same time, ever more advanced textual content analysis techniques are being used in the information retrieval process with a view to improving effectiveness. In our work we study the benefits provided by integrating content analysis mechanisms when relevance feedback is used in the information retrieval process. We focus on two analysis tasks, word sense disambiguation and summary generation, presenting a systematic procedure for their use and associated experiments to evaluate the improvements achieved.},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Tareas+de+an%C3%A1lisis+del+contenido+textual+para+la+recuperaci%C3%B3n+de+informaci%C3%B3n+con+realimentaci%C3%B3n&btnG=&hl=es&as_sdt=0}
    }

  • Ureña López, L. A., Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (2000). Information retrieval by means of word sense disambiguation. Third international workshop on text, speech and dialogue, brno, czech republic, 1902, 93-98.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the WordNet thesaurus. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.

    @OTHER{UrenaLopez2000,
    abstract = {The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the WordNet thesaurus. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.},
    author = {Ureña López, Luis Alfonso and Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel},
    booktitle = {Text, Speech and Dialogue},
    doi = {10.1007/3-540-45323-7_16},
    journal = {Third International Workshop on TEXT, SPEECH and DIALOGUE, Brno, Czech Republic},
    month = {September 13-16},
    pages = {93-98},
    title = {Information Retrieval by means of Word Sense Disambiguation},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformation+Retrieval+by+means+of+Word+Sense+Disambiguation&btnG=&hl=es&as_sdt=0},
    volume = {1902},
    year = {2000}
    }
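
    A sketch of the thesaurus-backed feedback expansion described above, with a simple gloss-overlap sense choice; it uses NLTK's WordNet interface (requires the nltk package and a prior nltk.download('wordnet')), and the overlap heuristic is illustrative, not the paper's disambiguation method.

    from nltk.corpus import wordnet as wn

    def disambiguate(word, context_terms):
        # Pick the sense whose gloss shares the most terms with the context.
        best, best_overlap = None, -1
        for sense in wn.synsets(word):
            gloss = set(sense.definition().lower().split())
            overlap = len(gloss & set(context_terms))
            if overlap > best_overlap:
                best, best_overlap = sense, overlap
        return best

    def expand_query(query_terms):
        # Add synonyms only from the disambiguated sense of each query term,
        # so the expansion does not drag in unrelated senses.
        expanded = set(query_terms)
        for term in query_terms:
            sense = disambiguate(term, query_terms)
            if sense is not None:
                expanded.update(l.lower() for l in sense.lemma_names())
        return expanded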

1999

  • Gachet Páez, D., & Campos Lorrio, T.. (1999). Design of real time software for industrial process control. Paper presented at the 1999 7th IEEE international conference on emerging technologies and factory automation, 1999. proceedings. ETFA ’99.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel’s State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok S.A., a large manufacturing company in Vigo, Spain

    @inproceedings{paez_design_1999,
    title = {Design of real time software for industrial process control},
    volume = {2},
    doi = {10.1109/ETFA.1999.813133},
    abstract = {The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel's State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok {S.A.}, a large manufacturing company in Vigo, Spain},
    booktitle = {1999 7th {IEEE} International Conference on Emerging Technologies and Factory Automation, 1999. Proceedings. {ETFA} '99},
    author = {Gachet Páez, Diego and Campos Lorrio, Tomas},
    year = {1999},
    keywords = {case study, chemical technology, complex control system, Computer industry, Computer languages, Control systems, dosage industrial process, Electrical equipment industry, high level languages, high level programming language, Industrial control, industrial process control, industrial real time control systems, Intel Pentium based system, Lignotok, manufacturing company, manufacturing industries, manufacturing industry, operating system, Operating systems, operating systems (computers), process control, real time software design, Real time systems, real-time systems, Software Engineering, Software systems, Spain, State Charts},
    pages = {1259--1263 vol.2},
    url={http://scholar.google.es/scholar?q=allintitle%3A++Design+of+real+time+software+for+industrial+process+control&btnG=&hl=es&as_sdt=0%2C5}
    }
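
    In the statechart spirit of the paper above, the core pattern is a table-driven state machine; the states and events below are invented for illustration and are not taken from the Lignotok system.

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        FILLING = auto()
        DOSING = auto()
        ALARM = auto()

    # (state, event) -> next state; unlisted pairs are ignored (self-loops).
    TRANSITIONS = {
        (State.IDLE, "start"): State.FILLING,
        (State.FILLING, "level_ok"): State.DOSING,
        (State.DOSING, "dose_done"): State.IDLE,
        (State.FILLING, "fault"): State.ALARM,
        (State.DOSING, "fault"): State.ALARM,
        (State.ALARM, "reset"): State.IDLE,
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)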

  • Gómez Hidalgo, J. M., Díaz Esteban, A., Ureña López, L. A., & García Vega, M.. (1999). Utilización y evaluación de la desambiguación en tareas de clasificación de texto. Xv congreso de la sepln, lérida, españa(25), 99-107.
    [BibTeX] [Abstract] [Google Scholar]
    Word sense disambiguation can be evaluated both directly and indirectly, that is, within the framework of another natural language processing task that makes use of it. Direct evaluation of disambiguation is close to standardization within competitions such as SENSEVAL. Indirect evaluation, in contrast, has been little used, but it is very important because disambiguation is mainly used as an aid to other tasks. In this work we present two disambiguation methods based on the integration of resources, applied to a document categorization task that builds on the same idea of integration. We perform a direct and an indirect evaluation of the disambiguation techniques used, achieving very positive results for both techniques. The results are comparable to those a human disambiguator would obtain, and they indicate that disambiguation is necessary for the proposed categorization method.

    @OTHER{GomezHidalgo1999,
    abstract = {Word sense disambiguation can be evaluated both directly and indirectly, that is, within the framework of another natural language processing task that makes use of it. Direct evaluation of disambiguation is close to standardization within competitions such as SENSEVAL. Indirect evaluation, in contrast, has been little used, but it is very important because disambiguation is mainly used as an aid to other tasks. In this work we present two disambiguation methods based on the integration of resources, applied to a document categorization task that builds on the same idea of integration. We perform a direct and an indirect evaluation of the disambiguation techniques used, achieving very positive results for both techniques. The results are comparable to those a human disambiguator would obtain, and they indicate that disambiguation is necessary for the proposed categorization method.},
    author = {Gómez Hidalgo, José María and Díaz Esteban, Alberto and Ureña López, Luis Alfonso and García Vega, Manuel},
    institution = {Sociedad Española para el Procesamiento del Lenguaje Natural},
    journal = {XV Congreso de la SEPLN, Lérida, España},
    month = {September},
    number = {25},
    pages = {99-107},
    title = {Utilización y evaluación de la desambiguación en tareas de clasificación de texto},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Utilizaci%C3%B3n+y+evaluaci%C3%B3n+de+la+desambiguaci%C3%B3n+en+tareas+de+clasificaci%C3%B3n+de+texto&btnG=&hl=es&as_sdt=0},
    year = {1999}
    }

  • Maña López, M. J., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1999). Using and evaluating user directed summaries to improve information access. Third european conference on research and advanced technology for digital libraries, 1696, 198-214.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Textual information available has grown so much as to make necessary to study new techniques that assist users in information access (IA). In this paper, we propose utilizing a user directed summarization system in an IA setting for helping users to decide about document relevance. The summaries are generated using a sentence extraction method that scores the sentences performing some heuristics employed successfully in previous works (keywords, title and location). User modeling is carried out exploiting user’s query to an IA system and expanding query terms using WordNet. We present an objective and systematic evaluation method oriented to measure the summary effectiveness in two IA significant tasks: ad hoc retrieval and relevance feedback. Results obtained prove our initial hypothesis, i.e., user adapted summaries are a useful tool assisting users in an IA context.

    @OTHER{ManaLopez1999,
    abstract = {Textual information available has grown so much as to make necessary to study new techniques that assist users in information access (IA). In this paper, we propose utilizing a user directed summarization system in an IA setting for helping users to decide about document relevance. The summaries are generated using a sentence extraction method that scores the sentences performing some heuristics employed successfully in previous works (keywords, title and location). User modeling is carried out exploiting user’s query to an IA system and expanding query terms using WordNet. We present an objective and systematic evaluation method oriented to measure the summary effectiveness in two IA significant tasks: ad hoc retrieval and relevance feedback. Results obtained prove our initial hypothesis, i.e., user adapted summaries are a useful tool assisting users in an IA context.},
    author = {Maña López, Manuel J. and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    booktitle = {Research and Advanced Technology for Digital Libraries},
    doi = {10.1007/3-540-48155-9_14},
    journal = {Third European Conference on Research and Advanced Technology for Digital Libraries},
    pages = {198-214},
    publisher = {Springer Berlin Heidelberg},
    series = {Lecture Notes in Computer Science},
    title = {Using and Evaluating User Directed Summaries to Improve Information Access},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+and+Evaluating+User+Directed+Summaries+to+Improve+Information+Access&btnG=&hl=es&as_sdt=0},
    volume = {1696},
    year = {1999}
    }
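
    A minimal sketch of the sentence-extraction scoring described in the abstract above, combining keyword frequency, title overlap and location with the user's (optionally WordNet-expanded) query terms; the weights are illustrative, not the paper's.

    import re
    from collections import Counter

    def summarize(text, title, query_terms=(), n=3):
        sents = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
        tf = Counter(re.findall(r"\w+", text.lower()))
        title_words = set(re.findall(r"\w+", title.lower()))
        scored = []
        for i, s in enumerate(sents):
            words = re.findall(r"\w+", s.lower())
            keyword = sum(tf[w] for w in words) / (len(words) or 1)
            title_ov = len(set(words) & title_words)
            location = 1.0 / (i + 1)          # earlier sentences score higher
            query_ov = len(set(words) & set(query_terms))
            scored.append((keyword + 2 * title_ov + location + 3 * query_ov, i, s))
        top = sorted(scored, reverse=True)[:n]
        return [s for _, _, s in sorted(top, key=lambda t: t[1])]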

  • Ureña López, L. A., & Buenaga Rodríguez, M.. (1999). Utilizando wordnet para complementar la información de entrenamiento en la identificación del significado de las palabras. Revista iberoamericana de inteligencia artificial, 3(7), 20.
    [BibTeX] [Abstract] [Google Scholar]
    Word sense disambiguation has developed as a subarea of Natural Language Processing (NLP) whose goal is to determine the correct sense of words that have more than one meaning. It is not a final task in itself, but an intermediate task needed in a variety of natural language processing applications. Word sense disambiguation (WSD) consists of identifying the correct sense among those listed in a dictionary, a lexical database or a similar resource. It is a complex task, but very useful in a variety of natural language processing applications, such as Text Categorization (TC), machine translation, accent restoration, text routing and filtering, text clustering and segmentation, spelling and grammar correction, speech recognition and, in general, information retrieval. Our approach integrates information from a lexical database (WordNet) with two training approaches through the Vector Space Model, increasing the effectiveness of disambiguation. We test the training approaches with the Rocchio and Widrow-Hoff algorithms on a large set of documents with a fine granularity of senses, namely those of WordNet, achieving high precision in the resolution of lexical ambiguity as well as great effectiveness in its execution.

    @INCOLLECTION{UrenaLopez1999,
    author = {Ureña López, Luis Alfonso and Buenaga Rodríguez, Manuel},
    title = {Utilizando Wordnet para complementar la información de entrenamiento en la identificación del significado de las palabras},
    year = {1999},
    volume = {3},
    number = {7},
    pages = {20},
    abstract = {Word sense disambiguation has developed as a subarea of Natural Language Processing (NLP) whose goal is to determine the correct sense of words that have more than one meaning. It is not a final task in itself, but an intermediate task needed in a variety of natural language processing applications. Word sense disambiguation (WSD) consists of identifying the correct sense among those listed in a dictionary, a lexical database or a similar resource. It is a complex task, but very useful in a variety of natural language processing applications, such as Text Categorization (TC), machine translation, accent restoration, text routing and filtering, text clustering and segmentation, spelling and grammar correction, speech recognition and, in general, information retrieval. Our approach integrates information from a lexical database (WordNet) with two training approaches through the Vector Space Model, increasing the effectiveness of disambiguation. We test the training approaches with the Rocchio and Widrow-Hoff algorithms on a large set of documents with a fine granularity of senses, namely those of WordNet, achieving high precision in the resolution of lexical ambiguity as well as great effectiveness in its execution.},
    journal = {Revista Iberoamericana de Inteligencia Artificial},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUtilizando+Wordnet+para+complementar+la+informaci%C3%B3n+de+entrenamiento+en+la+identificaci%C3%B3n+del+significado+de+las+palabras&btnG=&hl=es&as_sdt=0}
    }

1998

  • Diaz, A., Buenaga, M., Ureña, L. A., & Garcia-Vega, M.. (1998). Integrating linguistic resources in an uniform way for text classification tasks. First international conference on language resources and evaluation, 1197-1204.
    [BibTeX] [Google Scholar]
    @PROCEEDINGS{Diaz1998,
    title = {Integrating Linguistic Resources in an Uniform Way for Text Classification Tasks},
    year = {1998},
    author = {Diaz, Alberto and Buenaga, Manuel and Ureña, Luis Alfonso and Garcia-Vega, Manuel},
    journal = {First International Conference on Language Resources and Evaluation },
    pages = {1197-1204},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+Linguistic+Resources+in+an+Uniform+Way+for+Text+Classification+Tasks&btnG=&hl=es&as_sdt=0}
    }

  • Maña López, M. J., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1998). Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información.. Xiv congreso de la sociedad española de procesamiento de lenguaje natural(23), 32-39.
    [BibTeX] [Abstract] [Google Scholar]
    In this work we present a summary generator that incorporates modeling of the user's information needs in order to create summaries adapted to them. Summaries are generated by extracting the sentences that score best under three criteria: keywords, location and title. User modeling is achieved from the queries to an Information Retrieval system and their expansion using WordNet. We also present a systematic and objective evaluation method that allows us to compare the effectiveness of the different types of generated summaries. The results demonstrate the greater effectiveness of summaries adapted to the queries and of those that use WordNet.

    @OTHER{ManaLopez1998,
    abstract = {In this work we present a summary generator that incorporates modeling of the user's information needs in order to create summaries adapted to them. Summaries are generated by extracting the sentences that score best under three criteria: keywords, location and title. User modeling is achieved from the queries to an Information Retrieval system and their expansion using WordNet. We also present a systematic and objective evaluation method that allows us to compare the effectiveness of the different types of generated summaries. The results demonstrate the greater effectiveness of summaries adapted to the queries and of those that use WordNet.},
    author = {Maña López, Manuel J. and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    editor = {Procesamiento del Lenguaje Natural},
    journal = {XIV Congreso de la Sociedad Española de Procesamiento de Lenguaje Natural},
    number = {23},
    pages = {32-39},
    title = {Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+y+evaluaci%C3%B3n+de+un+generador+de+texto+con+modelado+de+usuario+en+un+entorno+de+recuperaci%C3%B3n+de+informaci%C3%B3n.&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

  • Ureña López, L. A., Buenaga Rodríguez, M., García Vega, M., & Gómez Hidalgo, J. M.. (1998). Integrating and evaluating wsd in the adaptation of a lexical database in text categorization task. First workshop on text.
    [BibTeX] [Abstract] [Google Scholar]
    Improvement in the accuracy of identifying the correct word sense (WSD) will give better results for many natural language processing tasks. In this paper, we present a new approach using WSD as an aid for Text Categorization (TC). This approach integrates a set of linguistic resources as knowledge sources. Our approach to TC, using the Vector Space Model, integrates two different resources in text content analysis tasks: a lexical database (WordNet) and training collections (Reuters-21578). We present the application of the WSD task to TC. Specifically, we apply WSD to the process of resolving the ambiguity of WordNet categories, thereby complementing the training phases. We have developed experiments to evaluate the improvements obtained by the integration of the resources in the TC task and by the application of WSD in this task, obtaining high accuracy in disambiguating the category senses of WordNet.

    @OTHER{UrenaLopez1998a,
    abstract = {Improvement in the accuracy of identifying the correct word sense (WSD) will give better results for many natural language processing tasks. In this paper, we present a new approach using WSD as an aid for Text Categorization (TC). This approach integrates a set of linguistic resources as knowledge sources. Our approach to TC, using the Vector Space Model, integrates two different resources in text content analysis tasks: a lexical database (WordNet) and training collections (Reuters-21578). We present the application of the WSD task to TC. Specifically, we apply WSD to the process of resolving the ambiguity of WordNet categories, thereby complementing the training phases. We have developed experiments to evaluate the improvements obtained by the integration of the resources in the TC task and by the application of WSD in this task, obtaining high accuracy in disambiguating the category senses of WordNet.},
    author = {Ureña López, Luis Alfonso and Buenaga Rodríguez, Manuel and García Vega, Manuel and Gómez Hidalgo, José María},
    howpublished = {First Workshop on Text},
    title = {Integrating and evaluating WSD in the adaptation of a lexical database in text categorization task},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+and+evaluating+WSD+in+the+adaptation+of+a+lexical+database%09in+text+categorization+task&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

  • Ureña López, L. A., García Vega, M., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1998). Resolución automática de la ambigüedad léxica fundamentada en el modelo del espacio vectorial usando ventana contextual variable. Asociación española de lingüística aplicada.
    [BibTeX] [Abstract] [Google Scholar]
    The resolution of the lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach to word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a contextual window (a variable set of terms as local context). We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.

    @INPROCEEDINGS{UrenaLopez1998b,
    author = {Ureña López, Luis Alfonso and García Vega, Manuel and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    title = {Resolución Automática de la Ambigüedad Léxica Fundamentada en el Modelo del Espacio Vectorial Usando Ventana Contextual Variable},
    year = {1998},
    abstract = {The resolution of the lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach to word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a contextual window (a variable set of terms as local context). We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.},
    journal = {Asociación Española de Lingüística Aplicada},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Resoluci%C3%B3n+Autom%C3%A1tica+de+la+Ambig%C3%BCedad+L%C3%A9xica+Fundamentada+en+el+Modelo+del+Espacio+Vectorial+Usando+Ventana+Contextual+Variable&btnG=&hl=es&as_sdt=0}
    }
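
    A sketch of VSM-based disambiguation with a variable context window, as described in the abstract above: each sense is represented by the centroid of its training contexts, and a test occurrence is assigned the closest sense by cosine similarity. The window width is the tunable parameter the paper's approach varies.

    import math
    from collections import Counter

    def window(tokens, position, k):
        # Variable-width local context: k words on each side of the target.
        return tokens[max(0, position - k):position] + tokens[position + 1:position + 1 + k]

    def sense_vectors(training, k=10):
        # training: list of (tokens, position, sense) disambiguated examples.
        vecs = {}
        for tokens, pos, sense in training:
            vecs.setdefault(sense, Counter()).update(window(tokens, pos, k))
        return vecs

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def disambiguate(tokens, position, vecs, k=10):
        ctx = Counter(window(tokens, position, k))
        return max(vecs, key=lambda s: cosine(ctx, vecs[s]))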

  • Ureña López, L. A., Gómez Hidalgo, J. M., García Vega, M., & Díaz Esteban, A.. (1998). Integrando una base de datos léxica y una colección de entrenamiento para la desambiguación del sentido de las palabras. Xiv congreso de la sociedad española de procesamiento de lenguaje natural, 23.
    [BibTeX] [Abstract] [Google Scholar]
    Resolving ambiguity is a complex and useful task for many natural language processing applications. In particular, ambiguity causes problems in applications such as Information Retrieval (IR), where the problems can be substantial and can be overcome if large queries are used, and machine translation, where it is a major inherent problem. Several approaches and algorithms have recently been proposed to perform this task. We present a new approach based on the integration of several public-domain linguistic resources, namely a lexical database and a training collection. Our approach integrates the synonymy information of WordNet and the SemCor training collection to increase the effectiveness of disambiguation through the Vector Space Model. We have tested our approach on a large set of documents with a fine granularity of senses, namely those of WordNet, achieving high precision in the resolution of lexical ambiguity.

    @OTHER{UrenaLopez1998,
    abstract = {Resolving ambiguity is a complex and useful task for many natural language processing applications. In particular, ambiguity causes problems in applications such as Information Retrieval (IR), where the problems can be substantial and can be overcome if large queries are used, and machine translation, where it is a major inherent problem. Several approaches and algorithms have recently been proposed to perform this task. We present a new approach based on the integration of several public-domain linguistic resources, namely a lexical database and a training collection. Our approach integrates the synonymy information of WordNet and the SemCor training collection to increase the effectiveness of disambiguation through the Vector Space Model. We have tested our approach on a large set of documents with a fine granularity of senses, namely those of WordNet, achieving high precision in the resolution of lexical ambiguity.},
    author = {Ureña López, Luis Alfonso and Gómez Hidalgo, José María and García Vega, Manuel and Díaz Esteban, Alberto},
    editor = {Procesamiento del Lenguaje Natural},
    journal = {XIV Congreso de la Sociedad Española de Procesamiento de Lenguaje Natural},
    pages = {23},
    title = {Integrando una Base de Datos Léxica y una Colección de Entrenamiento para la Desambiguación del Sentido de las Palabras},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrando+una+Base+de+Datos+L%C3%A9xica+y+una+Colecci%C3%B3n+de+Entrenamiento+para+la+Desambiguaci%C3%B3n+del+Sentido+de+las+Palabras&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

1997

  • Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Díaz-Agudo, B.. (1997). Using wordnet to complement training information in text categorization. 2nd international conference on recent advances in natural language processing (ranlp), tzigov chark (bulgaria).
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.

    @OTHER{BuenagaRodriguez1997,
    abstract = {Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.},
    author = {Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María and Díaz-Agudo , Belén},
    journal = {2nd International Conference on Recent Advances in Natural Language Processing (RANLP), Tzigov Chark (Bulgaria)},
    title = {Using WordNet to Complement Training Information in Text Categorization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+WordNet+to+Complement+Training+Information+in+Text+Categorization&btnG=&hl=es&as_sdt=0},
    year = {1997}
    }
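
    Both training algorithms named in the abstract have compact textbook forms over sparse term-weight vectors. A minimal sketch, assuming dictionary-based document vectors (a generic reconstruction, not the paper's code; parameter values are illustrative):

    from collections import Counter

    def rocchio_prototype(pos_docs, neg_docs, beta=16.0, gamma=4.0):
        # Rocchio relevance feedback: the category prototype is a weighted
        # mean of positive documents minus a weighted mean of negative ones.
        proto = Counter()
        for doc in pos_docs:
            for term, w in doc.items():
                proto[term] += beta * w / len(pos_docs)
        for doc in neg_docs:
            for term, w in doc.items():
                proto[term] -= gamma * w / len(neg_docs)
        return proto

    def widrow_hoff_update(weights, doc, label, eta=0.25):
        # Widrow-Hoff (LMS) online update: move the category weight vector
        # against the gradient of the squared prediction error.
        pred = sum(weights.get(t, 0.0) * v for t, v in doc.items())
        err = pred - label  # label: 1.0 if the document is in the category
        for t, v in doc.items():
            weights[t] = weights.get(t, 0.0) - 2.0 * eta * err * v
        return weights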

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1997). Integrating a lexical database and a training collection for text categorization. Paper presented at the Acl/eacl workshop on automatic information extraction and building of lexical semantic resources for nlp.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better performance than any of the others. Incidentally, WordNet based approach performance is comparable with the training approach one.

    @INPROCEEDINGS{GomezHidalgo1997,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    title = {Integrating a Lexical Database and a Training Collection for Text Categorization},
    booktitle = {ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP},
    year = {1997},
    month = {September},
    abstract = {Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better performance than any of the others. Incidentally, WordNet based approach performance is comparable with the training approach one.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+a+Lexical+Database+and+a+Training+Collection+for+Text+%09Categorization&btnG=&hl=es&as_sdt=0}
    }
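
    The integration step the abstract outlines, using WordNet synonymy as extra evidence for badly trained categories, can be pictured as blending a synonym-derived vector into each trained prototype before matching. A hypothetical sketch (the function and parameter names are assumptions, not the paper's interfaces):

    def integrate_wordnet(trained_prototype, category_synonyms, mix=0.5):
        # Add WordNet synonyms of the category's defining terms, so that
        # categories with few training documents still carry lexical signal.
        combined = dict(trained_prototype)
        for term in category_synonyms:
            combined[term] = combined.get(term, 0.0) + mix
        return combined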

  • Ureña López, L. A., García Vega, M., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1997). Resolución de la ambigüedad léxica mediante información contextual y el modelo del espacio vectorial. Actas de la vii conferencia de la asociación española para inteligencia artificial, 787-796.
    [BibTeX] [Abstract] [Google Scholar]
    The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach to word sense disambiguation based on the vector space model and a widely available training collection as the linguistic resource. This approach uses a variable set of terms as local context. We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.

    @OTHER{UrenaLopez1997,
    abstract = {The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach to word sense disambiguation based on the vector space model and a widely available training collection as the linguistic resource. This approach uses a variable set of terms as local context. We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.},
    author = {Ureña López , Luis Alfonso and García Vega , Manuel and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    journal = {Actas de la VII Conferencia de la Asociación Española para Inteligencia Artificial},
    pages = {787-796},
    title = {Resolución de la Ambigüedad Léxica Mediante Información Contextual y el Modelo del Espacio Vectorial},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Resoluci%C3%B3n+de+la+Ambig%C3%BCedad+L%C3%A9xica+mediante+informaci%C3%B3n+contextual+y+el+modelo+del+espacio+vectorial+&btnG=&hl=es&as_sdt=0},
    year = {1997}
    }
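
    The "variable set of terms as local context" can be read as a tunable window around each occurrence of the ambiguous word, which then serves as the query vector against per-sense training vectors. A small illustrative helper, under that reading (names are assumptions):

    def local_context(tokens, i, window=10):
        # Up to `window` terms on each side of the ambiguous word at
        # position i, excluding the word itself.
        lo = max(0, i - window)
        return tokens[lo:i] + tokens[i + 1:i + 1 + window]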

1996

  • Moreno, L., Salichs, M. A., Gachet Páez, D., Pimentel, J., Arroyo, F., & Gonzalo, A.. (1996). Neural networks for mobile robot piloting control. In Zalzala, A. M. S., & Morris, A. S. (Ed.), Neural network for robotic control (pp. 137-161). Upper Saddle River, NJ, USA: Ellis Horwood.
    [BibTeX] [Google Scholar]
    @incollection{Moreno:1996:NNM:222047.222061,
    author = {Moreno, L. and Salichs, M. A. and Gachet Páez, Diego and Pimentel, J. and Arroyo, F. and Gonzalo, A.},
    chapter = {Neural networks for mobile robot piloting control},
    title = {Neural network for robotic control},
    editor = {Zalzala, Ali M. S. and Morris, A. S.},
    year = {1996},
    isbn = {0-13-119892-0},
    pages = {137--161},
    numpages = {25},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3A++Neural+networks+for+mobile+robot+piloting+control&btnG=&lr=},
    acmid = {222061},
    publisher = {Ellis Horwood},
    address = {Upper Saddle River, NJ, USA},
    }

  • Gómez Hidalgo, J. M.. (1996). Una interfaz world wide web a la base de datos léxica wordnet. I jornadas de informática de la aeia (asociación española de informática y automática), almuñecar, granada (españa).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996d,
    author = {Gómez Hidalgo , José María},
    journal = {I Jornadas de Informática de la AEIA (Asociación Española de Informática y Automática), Almuñecar, Granada (España)},
    title = {Una interfaz World Wide Web a la base de datos léxica WordNet.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUna+interfaz+World+Wide+Web+a+la+base+de+datos+l%C3%A9xica+WordNet&btnG=&hl=es&as_sdt=0#},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAplicaciones+de+las+bases+de+datos+l%C3%A9xicas+en+la+clasificaci%C3%B3n+autom%C3%A1tica+de+documentos&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996a,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+de+experimentos+de+categorizaci%C3%B3n+autom%C3%A1tica+de+textos+basada+en+una+colecci%C3%B3n+de+entrenamiento+y+una+base+de+datos+l%C3%A9xica&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Formalismos lógicos para el procesamiento del lenguaje natural. Xii congreso de lenguajes naturales y lenguajes formales, seo de urgel, lérida (españa).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996b,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {XII Congreso de Lenguajes Naturales y Lenguajes Formales, Seo de Urgel, Lérida (España)},
    title = {Formalismos Lógicos para el Procesamiento del Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFormalismos+L%C3%B3gicos+para+el+Procesamiento+del+Lenguaje+Natural&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., Gómez Albarrán, M. M., & Fernández-Pampillón Cesteros, A. M.. (1996). Smallhelp: un sistema de ayuda para el entorno smalltalk. Adie (asociación para el desarrollo de la informática educativa), 6, 5-13.
    [BibTeX] [Abstract] [Google Scholar]
    Object-oriented programming (OOP) environments offer several advantages, most notably the possibility of reusing previous work. However, developing programs in OOP is not a simple task, and it is important to give the programmer tools that make it easier. SmallHelp is a help system based on artificial intelligence techniques that helps the user locate Smalltalk methods that perform particular functions. We have followed the traditional line of intelligent help systems, simplifying their goals to reduce the development effort of our system. SmallHelp is also easily adaptable to other application areas.

    @OTHER{GomezHidalgo1996c,
    abstract = {Los entornos de programación orientada a objetos (POO) ofrecen varias ventajas, entre las que cabe destacar la posibilidad de reutilizar trabajo previo. Sin embargo, la tarea de desarrollar programas en la POO no es sencilla, y es importante proporcionar al programador herramientas que faciliten dicha tarea. SmallHelp es un sistema de ayuda basado en técnicas de inteligencia artificial, que facilita al usuario la localización de métodos del lenguaje Smalltalk que realicen funciones determinadas. Hemos seguido la línea tradicional de los sistemas de ayuda inteligentes, simplificando sus objetivos para disminuir el esfuerzo de desarrollo de nuestro sistema. Asimismo, SmallHelp es fácilmente adaptable a otras áreas de aplicación.},
    author = {Gómez Hidalgo , José María and Gómez Albarrán , M. de las Mercedes and Fernández-Pampillón Cesteros , Ana María},
    journal = {ADIE (Asociación para el Desarrollo de la Informática Educativa)},
    pages = {5-13},
    title = {SmallHelp: un sistema de ayuda para el entorno SmallTalk},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+SmallHelp%3A+un+sistema+de+ayuda+para+el+entorno+SmallTalk&btnG=&hl=es&as_sdt=0},
    volume = {6},
    year = {1996}
    }

1995

  • Buenaga Rodriguez, M., Fernández Manjón, B., & Fernández Valmayor, A.. (1995). Information overload at the information age. Adults in innovative learning situations, 17-30.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez1995,
    author = {Buenaga Rodriguez , Manuel and Fernández Manjón , Baltasar and Fernández Valmayor , A},
    journal = {Adults in Innovative Learning Situations},
    pages = {17-30},
    title = {Information Overload at the Information Age},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Information+Overload+at+the+Information+Age&btnG=&hl=es&as_sdt=0},
    year = {1995}
    }

  • Fernández Manjón, B., & Buenaga Rodríguez, M.. (1995). Internet como herramienta de trabajo en el campo educativo. Adie: asociación para el desarrollo de la informática educativa, 1(4), 14-20.
    [BibTeX] [Google Scholar]
    @ARTICLE{FernandezManjon1995,
    author = {Fernández Manjón , Baltasar and Buenaga Rodríguez , Manuel},
    title = {Internet como herramienta de trabajo en el campo educativo},
    journal = {ADIE: Asociación para el Desarrollo de la Informática Educativa},
    year = {1995},
    volume = {1},
    pages = {14-20},
    number = {4},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Internet+como+herramienta+de+trabajo+en+el+campo+educativo&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M.. (1995). Un sistema de traducción del lenguaje natural a sql. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1995,
    author = {Gómez Hidalgo , José María},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Un sistema de traducción del lenguaje natural a SQL},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUn+sistema+de+traducci%C3%B3n+del+lenguaje+natural+a+SQL&btnG=&hl=es&as_sdt=0#},
    year = {1995}
    }

1994

  • Gachet Páez, D., Salichs, M. A., Moreno, L., & Pimentel, J. R.. (1994). Learning emergent tasks for an autonomous mobile robot. Paper presented at the Proceedings of the IEEE/RSJ/GI international conference on intelligent robots and systems ’94. ‘Advanced robotic systems and the real world’, IROS ’94.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the AHC (adaptive heuristic critic). The AHC is used as a fusion supervisor of primitive behaviors in order to execute more complex robot behaviors, for example go to goal, surveillance or follow a path. The fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviors as well as the overall task. The implementation of this autonomous learning approach has been tested within OPMOR, a simulation environment for mobile robots, and with our mobile platform, the UPM Robuter. Both simulated and actual results are presented. The performance of the AHC neural network is adequate. Portions of this work have been implemented within the EEC ESPRIT 2483 PANORAMA Project

    @inproceedings{gachet_learning_1994,
    title = {Learning emergent tasks for an autonomous mobile robot},
    volume = {1},
    doi = {10.1109/IROS.1994.407378},
    abstract = {We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the {AHC} (adaptive heuristic critic). The {AHC} is used as a fusion supervisor of primitive behaviors in order to execute more complex robot behaviors, for example go to goal, surveillance or follow a path. The fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviors as well as the overall task. The implementation of this autonomous learning approach has been tested within {OPMOR}, a simulation environment for mobile robots and with our mobile platform, the {UPM} Robuter. Both simulated and actual results are presented. The performance of the {AHC} neural network is adequate. Portions of this work have been implemented within the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {Proceedings of the {IEEE/RSJ/GI} International Conference on Intelligent Robots and Systems '94. {'Advanced} Robotic Systems and the Real World', {IROS} '94},
    author = {Gachet Páez, Diego and Salichs, M.A. and Moreno, L. and Pimentel, J.R.},
    year = {1994},
    keywords = {adaptive heuristic critic, {AHC}, autonomous mobile robot, Discrete event simulation, {EEC} {ESPRIT} 2483 {PANORAMA} Project, emergent task learning, Event detection, fusion supervisor, heuristic programming, learning (artificial intelligence), mobile platform, mobile robots, neural nets, neural network topology, {OPMOR}, reinforcement learning algorithm, Robot kinematics, Robot sensing systems, simulation environment, surveillance, {UPM} Robuter, Vectors},
    pages = {290--297 vol.1},
    url = {http://scholar.google.es/scholar?q=Learning+emergent+tasks+for+an+autonomous+mobile+robot&btnG=&hl=es&as_sdt=0%2C5}
    }
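
    The AHC named in the abstract is an actor-critic arrangement: a critic learns a state value from temporal-difference errors, and the same error signal adjusts the weights with which primitive behaviors are fused. The sketch below is a generic reconstruction under that reading, not the PANORAMA implementation; every name and constant is illustrative.

    class AHCFusion:
        def __init__(self, n_features, n_behaviors,
                     alpha=0.1, beta=0.05, gamma=0.95):
            self.v = [0.0] * n_features  # critic weights (state value)
            self.w = [[0.0] * n_features for _ in range(n_behaviors)]
            self.alpha, self.beta, self.gamma = alpha, beta, gamma

        def value(self, s):
            return sum(vi * si for vi, si in zip(self.v, s))

        def blend(self, s):
            # Activation level of each primitive behavior; the robot
            # command is their weighted mixture.
            return [sum(wi * si for wi, si in zip(row, s)) for row in self.w]

        def update(self, s, noise, r, s_next):
            # TD error from the critic; `noise` holds the exploratory
            # perturbation applied to each behavior's activation.
            td = r + self.gamma * self.value(s_next) - self.value(s)
            for i, si in enumerate(s):
                self.v[i] += self.alpha * td * si
                for k in range(len(self.w)):
                    # Reinforce perturbations that preceded a positive TD
                    # error (AHC-style credit assignment).
                    self.w[k][i] += self.beta * td * noise[k] * si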

  • Pimentel, J. R., Salichs, M. A., Gachet Páez, D., & Moreno, L.. (1994). A software development environment for autonomous mobile robots. Paper presented at the 20th international conference on industrial electronics, control and instrumentation, 1994. IECON ’94.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the UPM are described. Portions of this research have been performed under the EEC ESPRIT 2483 Panorama Project

    @inproceedings{pimentel_software_1994,
    title = {A software development environment for autonomous mobile robots},
    volume = {2},
    doi = {10.1109/IECON.1994.397944},
    abstract = {Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the {UPM} are described. Portions of this research have been performed under the {EEC} {ESPRIT} 2483 Panorama Project},
    booktitle = {20th International Conference on Industrial Electronics, Control and Instrumentation, 1994. {IECON} '94},
    author = {Pimentel, J.R. and Salichs, M.A. and Gachet Páez, Diego and Moreno, L.},
    year = {1994},
    keywords = {Application software, Art, autonomous mobile robots, Control systems, {EEC} {ESPRIT} 2483 {PANORAMA} Project, Hardware, hardware independence, mobile robots, modularity, path planning, Programming, project support environments, reactive control, Real time systems, research initiatives, robot programming, sensor-based mobile robots, software development environment, Software Engineering, system evaluation, Testing, {USA} Councils, Workstations},
    pages = {1094--1099 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+development+environment+for+autonomous+mobile+robots&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Buenaga Rodriguez, M., Fernández Manjón, B., & Vaquero Sánchez, A.. (1994). Un asistente inteligente para unix basado en la documentación. Revista de la asociación para el desarrollo de la informática educativa, 1(2).
    [BibTeX] [Abstract] [Google Scholar]
    Este artículo describe un sistema, ARGOS, que proporciona ayuda a los usuarios del sistema operativo UNIX. Proponemos los asistentes inteligentes en línea como una alternativa a los tutores inteligentes. Las características clave de ARGOS son: una interfaz de usuario amigable, el usuario puede especificar sus necesidades en lenguaje natural, es fácil reutilizar información previamente existente y proporciona un entorno cooperativo supervisado. Estas características se basan en la integración de técnicas de recuperación de información, modelado de usuario e hipertexto. ARGOS hace uso de información existente y facilita el acceso del usuario a la información correcta en el momento oportuno. ARGOS proporciona, además, un marco adecuado para el ensayo de la integración de técnicas de modelado del usuario y procesamiento de lenguaje natural en la recuperación de información.

    @OTHER{BuenagaRodriguez1994,
    abstract = {Este artículo describe un sistema, ARGOS, que proporciona ayuda a los usuarios del sistema operativo UNIX. Proponemos los asistentes inteligentes en línea como una alternativa a los tutores inteligentes. Las características clave de ARGOS son: una interfaz de usuario amigable, el usuario puede especificar sus necesidades en lenguaje natural, es fácil reutilizar información previamente existente y proporciona un entorno cooperativo supervisado. Estas características se basan en la integración de técnicas de recuperación de información, modelado de usuario e hipertexto. ARGOS hace uso de información existente y facilita el acceso del usuario a la información correcta en el momento oportuno. ARGOS proporciona, además, un marco adecuado para el ensayo de la integración de técnicas de modelado del usuario y procesamiento de lenguaje natural en la recuperación de información.},
    author = {Buenaga Rodriguez , Manuel and Fernández Manjón , Baltasar and Vaquero Sánchez , Antonio},
    journal = {Revista de la Asociación para el Desarrollo de la Informática Educativa},
    number = {2},
    title = {Un asistente inteligente para UNIX basado en la documentación},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Un+asistente+inteligente+para+UNIX+basado+en+la+documentaci%C3%B3n&btnG=&hl=es&as_sdt=0},
    volume = {1},
    year = {1994}
    }

1993

  • Pimentel, J. R., Gachet Páez, D., Moreno, L., & Salichs, M. A.. (1993). Learning to coordinate behaviors for real-time path planning of autonomous systems. Paper presented at the international conference on systems, man and cybernetics, 1993. ‘Systems engineering in the service of humans’, conference proceedings.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present a neural network (NN) system which learns the appropriate simultaneous activation of primitive behaviors in order to execute more complex robot behaviors. The NN implementation is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors in a simultaneous or concurrent fashion. We use a supervised learning technique, with a human trainer generating appropriate training for the simultaneous activation of behaviors in a simulated environment. The NN implementation has been tested within OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate. Portions of this work have been implemented in the EEC ESPRIT 2483 PANORAMA Project

    @inproceedings{pimentel_learning_1993,
    title = {Learning to coordinate behaviors for real-time path planning of autonomous systems},
    doi = {10.1109/ICSMC.1993.390770},
    abstract = {We present a neural network ({NN)} system which learns the appropriate simultaneous activation of primitive behaviors in order to execute more complex robot behaviors. The {NN} implementation is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors in a simultaneous or concurrent fashion. We use a supervised learning technique with a human trainer generating appropriate training for the simultaneous activation of behavior in a simulated environment. The {NN} implementation has been tested within {OPMOR}, a simulation environment for mobile robots and several results are presented. The performance of the neural network is adequate. Portions of this work have been implemented in the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {International Conference on Systems, Man and Cybernetics, 1993. {'Systems} Engineering in the Service of Humans', Conference Proceedings},
    author = {Pimentel, J.R. and Gachet Páez, Diego and Moreno, L. and Salichs, M.A.},
    year = {1993},
    keywords = {autonomous systems, Electronic mail, {ESPRIT} 2483 {PANORAMA} Project, Humans, learning (artificial intelligence), mobile robot, mobile robots, neural nets, neural network, Neural networks, {OPMOR}, Orbital robotics, path planning, primitive behaviors, Real time systems, real-time path planning, real-time systems, robot behavior coordination, Robot kinematics, Robot sensing systems, simulation, simulation environment, supervised learning, Testing},
    pages = {541--546 vol.4},
    url = {http://scholar.google.es/scholar?q=Learning+to+coordinate+behaviors+for+real-time+path+planning+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Salichs, M. A., Puente, E. A., Gachet Páez, D., & Pimentel, J. R.. (1993). Learning behavioral control by reinforcement for an autonomous mobile robot. Paper presented at the international conference on industrial electronics, control, and instrumentation, 1993. proceedings of the IECON ’93.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the AHC (adaptive heuristic critic). The AHC constitutes a fusion supervisor of primitive behaviours in order to execute more complex robot behaviours, as for example go to goal. This fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviours which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviours as well as the overall task. The implementation of the autonomous learning approach has been tested within OPMOR, a simulation environment for mobile robots, and with our mobile platform UPM Robuter. Both simulated and real results are presented. The performance of the AHC neural network is adequate. Portions of this work have been implemented in the EEC ESPRIT 2483 PANORAMA Project

    @inproceedings{salichs_learning_1993,
    title = {Learning behavioral control by reinforcement for an autonomous mobile robot},
    doi = {10.1109/IECON.1993.339280},
    abstract = {We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the {AHC} (adaptive heuristic critic). The {AHC} constitutes a fusion supervisor of primitive behaviours in order to execute more complex robot behaviours as for example go to goal. This fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviours which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level, it incorporates the experience gained in executing primitive behaviours as well as the overall task. The implementation of the autonomous learning approach has been tested within {OPMOR}, a simulation environment for mobile robots and with our mobile platform {UPM} Robuter. Both simulated and real results are presented. The performance of the {AHC} neural network is adequate. Portions of this work have been implemented in the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {International Conference on Industrial Electronics, Control, and Instrumentation, 1993. Proceedings of the {IECON} '93},
    author = {Salichs, M.A. and Puente, E. A. and Gachet Páez, Diego and Pimentel, J.R.},
    year = {1993},
    keywords = {adaptive heuristic critic, autonomous mobile robot, behavioral control, {EEC} {ESPRIT} 2483 {PANORAMA} Project, Electronic mail, fusion supervisor, heuristic programming, Intelligent robots, Intelligent sensors, Intelligent systems, learning (artificial intelligence), Learning systems, Machine intelligence, mobile robots, Network topology, neural nets, neural network topology, {OPMOR}, reinforcement learning algorithm, Robot sensing systems, Robot vision systems, simulation environment, {UPM} Robuter},
    pages = {1436--1441 vol.3},
    url = {http://scholar.google.es/scholar?q=Learning+behavioral+control+by+reinforcement+for+an+autonomous+mobile+robot&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Fernandez-Valmayor, A., Villarrubia, C., & Buenaga, M.. (1993). An intelligent interface to a database system. Case-based reasoning and information retrieval: exploring the opportunities for technology sharing, AAAI Press, Ca, USA.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.

    @INPROCEEDINGS{Fernandez-Valmayor1993,
    author = {Fernandez-Valmayor , A. and Villarrubia , C. and Buenaga , Manuel},
    title = {An Intelligent Interface to a Database System},
    year = {1993},
    address = {Ca, USA},
    month = {March},
    abstract = {In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.},
    journal = {Case-Based Reasoning and Information Retrieval. Exploring the Opportunities for Technology Sharing, AAAI Press},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Interface+to+a+Database+System&btnG=&hl=es&as_sdt=0%2C5}
    }

1992

  • Gachet Páez, D., Salichs, M. A., Pimentel, J. R., Moreno, L., & De la Escalera, A.. (1992). A software architecture for behavioral control strategies of autonomous systems. Paper presented at the proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent

    @inproceedings{gachet_software_1992,
    title = {A software architecture for behavioral control strategies of autonomous systems},
    doi = {10.1109/IECON.1992.254475},
    abstract = {The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Gachet Páez, Diego and Salichs, M.A. and Pimentel, J.R. and Moreno, L. and De la Escalera, A.},
    year = {1992},
    keywords = {autonomous systems, Computer architecture, Control systems, Degradation, digital control, Electronic mail, empirical rules, execution level, Humans, learning, mobile robots, Navigation, {OPMOR}, performance, position control, robot programming, simulation environment, software architecture, Software Engineering, Velocity control},
    pages = {1002--1007 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+architecture+for+behavioral+control+strategies+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }
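
    The simultaneous execution of primitive behaviors described above can be pictured as a weighted combination of the commands each behavior proposes. A toy sketch, assuming each behavior returns a (velocity, curvature) pair (the command pair and all names are assumptions, not the paper's interfaces):

    def fuse_behaviors(behaviors, weights, state):
        # Task-level command = normalized weighted sum of the commands
        # proposed by each primitive behavior for the current state.
        v_sum = c_sum = 0.0
        total = sum(weights)
        for behavior, w in zip(behaviors, weights):
            v, c = behavior(state)
            v_sum += w * v
            c_sum += w * c
        return v_sum / total, c_sum / total

    # e.g. fuse_behaviors([go_to_goal, avoid_obstacle], [0.7, 0.3], state)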

  • Puente, E. A., Gachet Páez, D., Pimentel, J. R., Moreno, L., & Salichs, M. A.. (1992). A neural network supervisor for behavioral primitives of autonomous systems. Paper presented at the proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate

    @inproceedings{puente_neural_1992,
    title = {A neural network supervisor for behavioral primitives of autonomous systems},
    doi = {10.1109/IECON.1992.254457},
    abstract = {The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Puente, E. A. and Gachet Páez, Diego and Pimentel, J.R. and Moreno, L. and Salichs, M.A.},
    year = {1992},
    keywords = {Actuators, Automatic control, autonomous systems, behavioral primitives, Control systems, Electronic mail, Engineering management, fusion supervisor, learning (artificial intelligence), mobile robot tasks, mobile robots, Navigation, neural nets, neural network supervisor, Neural networks, {OPMOR}, Robot kinematics, simulation environment, Testing, training},
    pages = {1105--1109 vol.2},
    url = {http://scholar.google.es/scholar?q=A+neural+network+supervisor+for+behavioral+primitives+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }

1991

  • Puente, E. A., Moreno, L., Salichs, M. A., & Gachet Páez, D.. (1991). Analysis of data fusion methods in certainty grids application to collision danger monitoring. Paper presented at the 1991 international conference on industrial electronics, control and instrumentation, 1991. proceedings. IECON ’91.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared

    @inproceedings{puente_analysis_1991,
    title = {Analysis of data fusion methods in certainty grids application to collision danger monitoring},
    doi = {10.1109/IECON.1991.239281},
    abstract = {The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared},
    booktitle = {1991 International Conference on Industrial Electronics, Control and Instrumentation, 1991. Proceedings. {IECON} '91},
    author = {Puente, E. A. and Moreno, L. and Salichs, M.A. and Gachet Páez, Diego},
    year = {1991},
    keywords = {artificial intelligence, autonomous mobile robots, Buildings, certainty grids, collision danger monitoring, Data analysis, data fusion, Dempster-Shafer theory of evidence, Fuses, Geometry, mobile robots, monitoring, multidimensional tessellation, Navigation, probabilistic estimation, probability, Recursive estimation, Remotely operated vehicles, Sensor fusion, signal processing, State estimation},
    pages = {1133--1137 vol.2},
    url = {http://scholar.google.es/scholar?q=Analysis+of+data+fusion+methods+in+certainty+grids+application+to+collision+danger+monitoring+&btnG=&hl=es&as_sdt=0%2C5}
    }
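
    The two per-cell combination schemes the abstract compares have standard closed forms. A sketch over the two-element frame {occupied, free}, with 'unk' standing for mass assigned to the whole frame (illustrative, not the paper's exact formulation; probabilities are assumed strictly between 0 and 1):

    def bayes_update(p_prior, p_sensor):
        # Probabilistic occupancy update by multiplying odds: fuse the
        # cell's prior occupancy probability with the sensor reading.
        odds = (p_prior / (1.0 - p_prior)) * (p_sensor / (1.0 - p_sensor))
        return odds / (1.0 + odds)

    def dempster_combine(m1, m2):
        # Dempster's rule of combination; masses are dicts with keys
        # 'occ', 'free' and 'unk'. Assumes the conflict k stays below 1.
        k = m1['occ'] * m2['free'] + m1['free'] * m2['occ']
        norm = 1.0 - k
        occ = (m1['occ'] * m2['occ'] + m1['occ'] * m2['unk']
               + m1['unk'] * m2['occ']) / norm
        free = (m1['free'] * m2['free'] + m1['free'] * m2['unk']
                + m1['unk'] * m2['free']) / norm
        return {'occ': occ, 'free': free, 'unk': m1['unk'] * m2['unk'] / norm}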

  • Salichs, M. A., Puente, E. A., Gachet Páez, D., & Moreno, L.. (1991). Trajectory tracking for a mobile robot-an application to contour following. Paper presented at the 1991 international conference on industrial electronics, control and instrumentation, 1991. proceedings. IECON ’91.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Some control algorithms for the contour following guidance module of a mobile robot are described, and their performance is analyzed. Different approaches such as classical, fuzzy and neural control techniques have been considered in order to optimize and smooth the trajectory of the mobile robot. The module controls a virtual vehicle, by means of two parameters: velocity and curvature. The algorithms have been first simulated and then tested on the UPM mobile platform. The best results have been obtained with classical control and fuzzy control

    @inproceedings{salichs_trajectory_1991,
    title = {Trajectory tracking for a mobile robot-An application to contour following},
    doi = {10.1109/IECON.1991.239143},
    abstract = {Some control algorithms for the contour following guidance module of a mobile robot are described, and their performance is analyzed. Different approaches such as classical, fuzzy and neural control techniques have been considered in order to optimize and smooth the trajectory of the mobile robot. The module controls a virtual vehicle, by means of two parameters: velocity and curvature. The algorithms have been first simulated and then tested on the {UPM} mobile platform. The best results have been obtained with classical control and fuzzy control},
    booktitle = {1991 International Conference on Industrial Electronics, Control and Instrumentation, 1991. Proceedings. {IECON} '91},
    author = {Salichs, M.A. and Puente, E. A. and Gachet Páez, Diego and Moreno, L.},
    year = {1991},
    keywords = {Algorithm design and analysis, classical control, contour following guidance module, fuzzy control, fuzzy set theory, mobile robot, mobile robots, Navigation, neural control, Performance analysis, position control, Robot control, Testing, tracking, Trajectory, trajectory tracking, Vehicles, Velocity control},
    pages = {1067--1070 vol.2},
    url = {http://scholar.google.es/scholar?q=Trajectory+tracking+for+a+mobile+robot-An+application+to+contour+following&btnG=&hl=es&as_sdt=0%2C5}
    }
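
    A classical controller of the kind compared in the abstract can be as simple as a proportional law from wall-distance and heading errors to the two command parameters, curvature and velocity. A toy sketch with illustrative gains (an assumption, not the tested controllers):

    def contour_controller(d_err, heading_err, v_max=0.5, k_d=2.0, k_h=1.5):
        # Curvature from distance-to-contour and heading errors; velocity
        # is reduced as the commanded curvature grows, smoothing the path.
        curvature = k_d * d_err + k_h * heading_err
        velocity = v_max / (1.0 + abs(curvature))
        return velocity, curvature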


AUTOR


  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., & Buenaga Rodríguez, M.. (2007). Bringing forward semantic relations. 7th international conference on intelligent design and applications (isda 2007), 511-519.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.

    @OTHER{AlvarezMontero2007,
    abstract = {Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.},
    address = {Río de Janeiro},
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel},
    doi = {10.1109/ISDA.2007.82},
    journal = {7th International Conference on Intelligent Design and Applications (ISDA 2007)},
    month = {Octubre},
    pages = {511-519},
    title = {Bringing Forward Semantic Relations},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABringing+Forward+Semantic+Relations&btnG=&hl=es&as_sdt=0%2C5},
    year = {2007}
    }

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., & Buenaga Rodríguez, M.. (2007). Neglecting semantic relations: consequences and proposals. , Lisboa, Portugal.
    [BibTeX] [Abstract] [Google Scholar]
    Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper, we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.

    @INPROCEEDINGS{AlvarezMontero2007a,
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel},
    title = {Neglecting Semantic Relations: Consequences and proposals},
    year = {2007},
    address = {Lisboa, Portugal},
    month = {July},
    abstract = {Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper, we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies. },
    journal = {International Conference on Intelligent Systems and Agents},
    url = {http://scholar.google.es/scholar?q=allintitle%3ANeglecting+Semantic+Relations%3A+Consequences+and+proposals&btnG=&hl=es&as_sdt=0}
    }

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (2007). Semantic relations: modelling issues, proposals and possible applications. , Key West, Florida USA.
    [BibTeX] [Abstract] [Google Scholar]
    Semantic relations are an important element in the construction of ontology-based linguistic resources and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations, abstractions that are not enough to represent the relation richness of problem domains, and even poorly structured taxonomies. However, if provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them that can be an aid in the ontology construction process. In this paper we present some insightful issues about the representation of relations. Moreover, the initiatives aiming to provide relations with clear semantics are explained and the inclusion of their core ideas as part of a methodology for the development of ontology-based linguistic resources is proposed.

    @INPROCEEDINGS{AlvarezMontero2007b,
    author = {Alvarez Montero , Francisco and Vaquero Sánchez , Antonio and Sáenz Pérez , Fernando and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    title = {Semantic Relations: Modelling Issues, Proposals and Possible Applications},
    year = {2007},
    address = {Key West, Florida USA},
    month = {may},
    abstract = {Semantic relations are an important element in the construction of ontology-based linguistic resources and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations,
    abstractions that are not enough to represent the relation richness of problem domains, and even poorly structured taxonomies. However, if provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them that can be an aid in the ontology construction process. In this paper we present some insightful issues about the representation of relations. Moreover, the initiatives aiming to provide relations with clear semantics are explained and the inclusion of their core ideas as part of a methodology for the development of ontology-based linguistic resources is proposed.},
    journal = {American Association of Artificial Intelligence - AAAI Press},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Semantic+Relations%3A+Modelling+Issues%2C+Proposals+and+Possible+Applications&btnG=&hl=es&as_sdt=0}
    }

  • Aparicio, F., Buenaga Rodríguez, M., Rubio, M., Hernando, M. A., Gachet Páez, D., Puertas Sanz, E., & Giráldez, I.. (2011). Tmt: a tool to guide users in finding information on clinical texts. .
    [BibTeX] [Abstract] [Google Scholar]
    The large amount of medical information available through the Internet, in both structure and text formats, makes that different types of users will encounter different problems when they have to carry out an effective search. On the one hand, medical students, health staff and researchers in the field of biomedicine have a variety of sources and tools of different characteristics which require a learning period sometimes insurmountable. On the other hand, patients, family members and people outside of the medical profession, face the added problem of not being sufficiently familiarized with medical terminology. In this paper we present a tool that can extract relevant medical concepts present in a clinical text, using techniques for named entity recognition, applied on lists of concepts, and annotation techniques from ontologies. To propose these concepts, our tool makes use of a non formal knowledge source, such as Freebase, and formal resources such as MedlinePlus and PubMed. We argue that the combination of these resources, with information less formal and more plain language (like Freebase), with formal information and more plain language (like Medlineplus) or with formal information and more technical language (such as the Pubmed scientific literature), optimize the process of discover medical information on a complex clinical case to users with different profiles and needs, such as are patients, doctors or researchers. Our ultimate goal is to build a platform to accommodate different techniques facilitating the practice of translational medicine.

    @MISC{Aparicio2011b,
    author = {Aparicio , Fernando and Buenaga Rodríguez , Manuel and Rubio , Margarita and Hernando , María Asunción and Gachet Páez, Diego and Puertas Sanz , Enrique and Giráldez , Ignacio},
    title = {TMT: A tool to guide users in finding information on clinical texts},
    howpublished = {http://www.sepln.org/ojs/ojs-2.2/index.php/pln/article/viewArticle/836},
    year = {2011},
    abstract = {The large amount of medical information available through the Internet, in both structure and text formats, makes that different types of users will encounter different problems when they have to carry out an effective search. On the one hand, medical students, health staff and researchers in the field of biomedicine have a variety of sources and tools of different characteristics which require a learning period sometimes insurmountable. On the other hand, patients, family members and people outside of the medical profession, face the added problem of not being sufficiently familiarized with medical terminology. In this paper we present a tool that can extract relevant medical concepts present in a clinical text, using techniques for named entity recognition, applied on lists of concepts, and annotation techniques from ontologies. To propose these concepts, our tool makes use of a non formal knowledge source, such as Freebase, and formal resources such as MedlinePlus and PubMed. We argue that the combination of these resources, with information less formal and more plain language (like Freebase), with formal information and more plain language (like Medlineplus) or with formal information and more technical language (such as the Pubmed scientific literature), optimize the process of discover medical information on a complex clinical case to users with different profiles and needs, such as are patients, doctors or researchers. Our ultimate goal is to build a platform to accommodate different techniques facilitating the practice of translational medicine.},
    journal = {Procesamiento de Lenguaje Natural},
    pages = {27-34},
    shorttitle = {TMT},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TMT%3A+A+tool+to+guide+users+in+finding+information+on+clinical+texts&btnG=&hl=es&as_sdt=0},
    volume = {46}
    }

  • Aparicio, F., Muñoz, R., Buenaga, M., & Puertas, E.. (2011). Mdfaces: an intelligent system to recognize significant terms in texts from different domains using freebase. Procesamiento de lenguaje natural, 47, 317-318.
    [BibTeX] [Abstract] [Google Scholar]
    MDFaces (Multi-Domain Faces) is an intelligent system that allows recognition of relevant concepts in texts, from different domains, and shows detailed and semantics information related to these concepts. For its development, it is have been employed a methodology that uses a general knowledge ontology called Freebase. In particular, we have implemented this methodology for medical and tourism domains.

    @ARTICLE{Aparicio2011,
    author = {Aparicio , Fernando and Muñoz , Rafael and Buenaga , Manuel and Puertas , Enrique},
    title = {MDFaces: An intelligent system to recognize significant terms in texts from different domains using Freebase},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2011},
    volume = {47},
    pages = {317--318},
    month = {September},
    abstract = {MDFaces (Multi-Domain Faces) is an intelligent system that recognizes relevant concepts in texts from different domains and shows detailed semantic information related to these concepts. For its development, a methodology based on a general knowledge ontology called Freebase has been employed. In particular, we have implemented this methodology for the medical and tourism domains.},
    copyright = {La propiedad intelectual de los artículos pertenece a los autores y los derechos de edición y publicación a la revista. Los artículos publicados en la revista podrán ser usados libremente para propósitos educativos y científicos, siempre y cuando se realice una correcta citación del mismo. Cualquier uso comercial queda expresamente penado por la ley.},
    issn = {1989-7553},
    language = {es\_ES},
    shorttitle = {MDFaces},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMDFaces%3A+An+intelligent+system+to+recognize+significant+terms+in+texts+from+different+domains+using+Freebase&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-19}
    }

  • Aparicio, F., Buenaga, M., Rubio, M., & Hernando, A.. (2012). An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students. Computers and education, 58(4), 1282-1295.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process for the case in which the students' knowledge level is taken into account. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students' perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system, which identifies these concepts and provides direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool's support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.

    @ARTICLE{Aparicio2012,
    author = {Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita and Hernando, Asunción},
    title = {An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students},
    journal = {Computers And Education},
    year = {2012},
    volume = {58},
    pages = {1282-1295},
    number = {4},
    month = {May},
    abstract = {In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process for the case in which the students' knowledge level is taken into account. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students' perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system, which identifies these concepts and provides direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool's support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.},
    doi = {10.1016/j.compedu.2011.12.021},
    issn = {0360-1315},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Information+Access+system+assisting+a+Case+Based+Learning+methodology+evaluated+in+higher+education+with+medical+students&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Aparicio, F., Buenaga, M., Gachet Páez, D., Puertas, E., & Giráldez, I.. (2011). Tmt: a scalable platform to enrich translational medicine environments. Proceedings of the iadis international conference, e-society, 401-405.
    [BibTeX] [Abstract] [Google Scholar]
    In this paper we present TMT (Translational Medicine Tool), a scalable platform to integrate applicable techniques within the paradigm of translational medicine. Particularly relevant components of the development are Freebase, a large collaborative knowledge base, General Architecture for Text Engineering (GATE), a system for text processing, and PubMed, a scientific literature repository. The platform architecture has been built with scalability in mind, in several ways: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction is performed, and to ease the integration of new user interfaces.

    @OTHER{Aparicio2011a,
    abstract = {In this paper we present TMT (Translational Medicine Tool), a scalable platform to integrate applicable techniques within the paradigm of translational medicine. Particularly relevant components of the development are Freebase, a large collaborative knowledge base, General Architecture for Text Engineering (GATE), a system for text processing, and PubMed, a scientific literature repository. The platform architecture has been built with scalability in mind, in several ways: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction is performed, and to ease the integration of new user interfaces.},
    author = {Aparicio, Fernando and Buenaga, Manuel and Gachet Páez, Diego and Puertas, Enrique and Giráldez, Ignacio},
    journal = {Proceedings of the IADIS International conference, e-Society},
    month = {March},
    pages = {401-405},
    title = {TMT: A scalable platform to enrich translational medicine environments},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TMT%3A+A+scalable+platform+to+enrich+translational+medicine+environments&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Aparicio Galisteo, F., & Buenaga Rodríguez, M.. (2012). Métodos cuantitativos y cualitativos de evaluación de sistemas multilingüe y multimedia de acceso inteligente a la información biomédica en contextos de educación superior. Seminario mavir.
    [BibTeX] [Abstract] [Google Scholar]
    Intelligent information access systems are usually associated with systems capable of aggregating knowledge from third-party resources. There is a growing trend in biomedicine to offer the resources developed through Web services, making them available to other biomedical researchers and making this field of study very well suited to the development of systems that take advantage of and exploit different information sources. These information sources are also gradually addressing the language problem, with resources available in several languages in some cases. Many of the tasks related to natural language processing, text mining, information retrieval or information extraction are evaluated with quantitative measures based on the precision and recall of the algorithms. However, many of these systems have application domains suited to a wide variety of end users, making it essential in that case to obtain measures in which users assess how useful the systems are for carrying out specific tasks. In this seminar we propose the analysis of these systems through a set of quantitative and qualitative methods that allow the evaluation of end users' perception of the systems for carrying out different types of learning activities in the context of higher education, these user groups therefore being made up of teachers or students in the health sciences.

    @OTHER{AparicioGalisteo2012,
    abstract = {Los sistemas de acceso inteligente a la información están relacionados, habitualmente, con aquellos capaces de aglutinar el conocimiento a partir de recursos de terceros. Existe una tendencia creciente en biomedicina que consiste en ofrecer los recursos desarrollados a través de servicios Web, poniéndolos a disposición de otros investigadores biomédicos y haciendo que este campo de estudio sea muy apropiado para el desarrollo de sistemas que aprovechen y exploten diferentes fuentes de información. Por otro lado, estas fuentes de información poco a poco van afrontando la problemática del lenguaje, disponiéndose en algunos casos de recursos en diferentes idiomas. Muchas de las tareas relacionadas con el procesamiento del lenguaje natural, la minería de textos, la recuperación de información o la extracción de información, son evaluadas con medidas cuantitativas basadas en la precisión y la cobertura de los algoritmos. Sin embargo, muchos de estos sistemas tienen ámbitos aplicación aptas para una gran variedad de usuarios finales, siendo imprescindible, en este caso, obtener medidas en las que los usuarios valoren la utilidad de los mismos para llevar a cabo tareas concretas. En este seminario proponemos el análisis de estos sistemas a partir de un conjunto de métodos cuantitativos y cualitativos, que permiten la evaluación de la percepción de los usuarios finales sobre los sistemas para llevar a cabo diferentes tipos de actividades de aprendizaje en el contexto de la educación superior, estando estos grupos de usuarios, por tanto, formados por profesores o alumnos en ciencias de la salud.},
    address = {Madrid},
    author = {Aparicio Galisteo, Fernando and Buenaga Rodríguez, Manuel},
    journal = {Seminario MAVIR},
    month = {June},
    title = {Métodos cuantitativos y cualitativos de evaluación de sistemas multilingüe y multimedia de acceso inteligente a la información biomédica en contextos de educación superior},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+M%C3%A9todos+cuantitativos+y+cualitativos+de+evaluaci%C3%B3n+de+sistemas+multiling%C3%BCe+y+multimedia+de+acceso+inteligente+a+la+informaci%C3%B3n+biom%C3%A9dica+en+contextos+de+educaci%C3%B3n+superior&btnG=&hl=es&as_sdt=0},
    year = {2012}
    }

  • Arruego, J., Llorente, E., Medina, J. L., Cortizo Pérez, J. C., & Expósito, D.. (2007). Minería de direcciones postales. Paper presented at the Actas del v taller de minería de datos y aprendizaje.
    [BibTeX] [Abstract] [Google Scholar]
    This paper presents FuMaS (Fuzzy Matching System), a system that enables efficient retrieval of postal addresses from noisy queries. Fuzzy retrieval of this information has countless applications, from finding and cleaning duplicates in databases (electoral rolls, detecting postal fraud rings, etc.) to correcting user input in systems such as street directories or any form where a postal address must be entered. This paper presents the system architecture, along with the experiments performed on it so far. The results of these experiments show that FuMaS is a very useful tool for retrieving postal addresses from noisy queries, resolving close to 85% of the erroneous addresses entered into the system, an effectiveness 15% higher than any other similar system tested. (A toy fuzzy-matching sketch follows the BibTeX entry below.)

    @INPROCEEDINGS{Arruego2007,
    author = {Arruego, Javier and Llorente, Ester and Medina, José Luis and Cortizo Pérez, José Carlos and Expósito, Diego},
    title = {Minería de Direcciones Postales},
    booktitle = {Actas del V Taller de Minería de Datos y Aprendizaje},
    year = {2007},
    editor = {F. J. Ferrer-Troyano and A. Troncoso and J. C. Riquelme},
    pages = {49-56},
    abstract = {En este artículo se presenta FuMaS (Fuzzy Matching System), un sistema que permite la recuperación eficiente de direcciones postales a partir de consultas con ruido. La recuperación difusa de esta información tiene innumerables aplicaciones, desde encontrar/limpiar duplicados en bases de datos (registros electorales, encontrar nidos de fraude postal, etc.) hasta corregir las entradas de los usuarios en sistemas tales como callejeros o cualquier tipo de formulario dónde haya que introducir una dirección postal. En este artículo se presenta la arquitectura del sistema, así como los experimentos que, hasta el momento, se han realizado sobre el mismo. Los resultados de estos experimentos muestran que FuMaS es una herramienta muy útil para recuperar direcciones postales a partir de consultas con ruido, siendo capaz de resolver cerca del 85% de las direcciones con errores introducidas al sistema, una eficacia un 15% mayor que cualquier otro sistema similar probado.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMiner%C3%ADa+de+Direcciones+Postales&btnG=&hl=es&as_sdt=0}
    }
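
    A toy illustration of fuzzy postal-address retrieval in the spirit of FuMaS (this is not the system's implementation): rank a small list of canonical addresses against a noisy query using the standard library's sequence similarity. The address list is invented.

    import difflib

    CANONICAL = [
        "Calle Mayor 10, Madrid",
        "Calle Alcala 23, Madrid",
        "Avenida de America 5, Madrid",
    ]

    def match_address(query, n=1, cutoff=0.5):
        """Return the canonical addresses closest to a noisy query."""
        lowered = {a.lower(): a for a in CANONICAL}
        best = difflib.get_close_matches(query.lower(), lowered.keys(),
                                         n=n, cutoff=cutoff)
        return [lowered[b] for b in best]

    print(match_address("calle maior 10 madrid"))  # -> ['Calle Mayor 10, Madrid']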

  • de Buenaga, M., Gachet Páez, D., Maña, M., Mata, J., Borrajo, L., & Lorenzo, E.. (2015). Iphealth: plataforma inteligente basada en open, linked y big data para la toma de decisiones y aprendizaje en el ámbito de la salud. In Procesamiento de lenguaje natural (Vol. 55, pp. 161-164). SEPLN.
    [BibTeX] [Abstract] [Google Scholar]
    The IPHealth project's main objective is to design and implement a platform with services that enable integrated and intelligent access to related information in the biomedical domain. We propose three usage scenarios: (i) assistance to healthcare professionals during the decision-making process in clinical settings, (ii) access by chronic and dependent patients to relevant information about their health status, and (iii) support for the evidence-based training of new medical students. The most effective techniques are proposed for several NLP tasks and for the extraction of information from large data sets coming from sensor networks and open data. A Web application framework and an architecture enabling the integration of text and data mining processes and techniques will be designed. This architecture also has to allow information to be integrated in a fast, consistent and reusable (via plugins) way.

    @INCOLLECTION{Buenaga2015b,
    author = {Buenaga, Manuel de and Gachet Páez, Diego and Maña, Manuel and Mata, Jacinto and Borrajo, Lourdes and Lorenzo, Eva},
    title = {IPHealth: Plataforma inteligente basada en open, linked y big data para la toma de decisiones y aprendizaje en el ámbito de la salud},
    booktitle = {Procesamiento de Lenguaje Natural},
    publisher = {SEPLN},
    year = {2015},
    editor = {},
    volume = {55},
    series = {},
    pages = {161--164},
    month = {September},
    abstract = {The IPHealth project's main objective is to design and implement a platform with services that enable integrated and intelligent access to related information in the biomedical domain. We propose three usage scenarios: (i) assistance to healthcare professionals during the decision-making process in clinical settings, (ii) access by chronic and dependent patients to relevant information about their health status, and (iii) support for the evidence-based training of new medical students. The most effective techniques are proposed for several NLP tasks and for the extraction of information from large data sets coming from sensor networks and open data. A Web application framework and an architecture enabling the integration of text and data mining processes and techniques will be designed. This architecture also has to allow information to be integrated in a fast, consistent and reusable (via plugins) way.},
    copyright = {SEPLN},
    doi = {},
    issn = {1989-7553},
    url = {https://scholar.google.es/citations?view_op=view_citation&continue=/scholar%3Fhl%3Des%26as_sdt%3D0,5%26as_ylo%3D2015%26scilib%3D2%26scioq%3DIPHealth:%2BPlataforma%2Binteligente%2Bbasada%2Ben%2Bopen,%2Blinked%2By%2Bbig%2Bdata%2Bpara%2Bla%2Btoma%2Bde%2Bdecisiones%2By%2Baprendizaje%2Ben%2B&citilm=1&citation_for_view=0ynMYdoAAAAJ:Tiz5es2fbqcC&hl=es&oi=p},
    urldate = {2015-02-02}
    }

  • Buenaga, M., Fdez-Riverola, F., Maña, M., Puertas, E., Glez-Peña, D., & Mata, J.. (2010). Medical-miner: integración de conocimiento textual explícito en técnicas de minería de datos para la creación de herramientas traslacionales en medicina. Xxvi congreso de la sepln (sociedad española para el procesamiento del lenguaje natural), 45, 319-320.
    [BibTeX] [Abstract] [Google Scholar]
    The project proposes to analyse, experiment and develop new text and data mining techniques in an interrelated way, in intelligent medical information systems. An intelligent information access system based on them will be developed, offering advanced functionalities able to interrelate medical information, mainly information (text and data) from clinical records and scientific documentation, making use of standard resources of the domain (e.g. UMLS, SNOMED, Gene Ontology). An open source platform will be developed integrating all the elements.

    @OTHER{Buenaga2010,
    abstract = {The project proposes to analyse, experiment and develop new text and data mining techniques in an interrelated way, in intelligent medical information systems. An intelligent information access system based on them will be developed, offering advanced functionalities able to interrelate medical information, mainly information (text and data) from clinical records and scientific documentation, making use of standard resources of the domain (e.g. UMLS, SNOMED, Gene Ontology). An open source platform will be developed integrating all the elements.},
    author = {Buenaga, Manuel and Fdez-Riverola, Florentino and Maña, Manuel and Puertas, Enrique and Glez-Peña, Daniel and Mata, Jacinto},
    journal = {XXVI Congreso de la SEPLN (Sociedad Española para el Procesamiento del Lenguaje Natural)},
    month = {September},
    pages = {319-320},
    title = {Medical-Miner: Integración de conocimiento textual explícito en técnicas de minería de datos para la creación de herramientas traslacionales en medicina},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMedical-Miner%3A+Integraci%C3%B3n+de+conocimiento+textual+expl%C3%ADcito+en+t%C3%A9cnicas+%09de+miner%C3%ADa+de+datos+para+la+creaci%C3%B3n+de+herramientas+traslacionales+%09en+medicina&btnG=&hl=es&as_sdt=0},
    volume = {45},
    year = {2010}
    }

  • Buenaga, M., Maña, M., Gachet Páez, D., & Mata, J.. (2006). The sinamed and isis projects: applying text mining techniques to improve access to a medical digital library. In Gonzalo, J., Thanos, C., Verdejo, F. M., & Carrasco, R. C. (Ed.), In Research and advanced technology for digital libraries (Vol. 4172, pp. 548-551). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Intelligent information access systems increasingly integrate text mining and content analysis capabilities as a relevant element. In this paper we present our work focused on the integration of text categorization and summarization to improve information access in a specific medical domain, patient clinical records and related scientific documentation, in the framework of two different research projects: SINAMED and ISIS, developed by a consortium of two research groups from two universities, one hospital and one software development firm. SINAMED has a basic research orientation and its goal is to design new text categorization and summarization algorithms based on the utilization of lexical resources in the biomedical domain. ISIS is an R&D project with a more applied and technology-transfer orientation, focused on more direct practical aspects of its utilization in a concrete public health institution.

    @INCOLLECTION{Buenaga2006,
    author = {Buenaga, Manuel and Maña, Manuel and Gachet Páez, Diego and Mata, Jacinto},
    title = {The SINAMED and ISIS Projects: Applying Text Mining Techniques to Improve Access to a Medical Digital Library},
    booktitle = {Research and Advanced Technology for Digital Libraries},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Gonzalo, Julio and Thanos, Costantino and Verdejo, M. Felisa and Carrasco, Rafael C.},
    volume = {4172},
    series = {Lecture Notes in Computer Science},
    pages = {548-551},
    month = {January},
    abstract = {Intelligent information access systems increasingly integrate text mining and content analysis capabilities as a relevant element. In this paper we present our work focused on the integration of text categorization and summarization to improve information access in a specific medical domain, patient clinical records and related scientific documentation, in the framework of two different research projects: SINAMED and ISIS, developed by a consortium of two research groups from two universities, one hospital and one software development firm. SINAMED has a basic research orientation and its goal is to design new text categorization and summarization algorithms based on the utilization of lexical resources in the biomedical domain. ISIS is an R&D project with a more applied and technology-transfer orientation, focused on more direct practical aspects of its utilization in a concrete public health institution.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11863878_65},
    isbn = {978-3-540-44636-1, 978-3-540-44638-5},
    shorttitle = {The {SINAMED} and {ISIS} Projects},
    url = {http://scholar.google.es/scholar?q=allintitle%3A%3A+The+SINAMED+and+ISIS+Projects%3A+Applying+Text+Mining+Techniques+to+Improve+Access+to+a+Medical+Digital+Library&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Buenaga, M., Gachet Páez, D., Maña, M. J., de la Villa, M., & Mata, J.. (2008). Clustering and summarizing medical documents to improve mobile retrieval. Acm-sigir workshop on mobile information retrieval, 54-57.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and to display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a post-retrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster. (A small clustering-plus-summary sketch follows the BibTeX entry below.)

    @OTHER{Buenaga2008,
    abstract = {Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and to display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a post-retrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster.},
    author = {Buenaga, Manuel and Gachet Páez, Diego and Maña, Manuel J. and de la Villa, Manuel and Mata, Jacinto},
    journal = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    month = {July},
    pages = {54-57},
    publisher = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    title = {Clustering and Summarizing Medical Documents to Improve Mobile Retrieval},
    url = {http://scholar.google.es/scholar?q=allintitle%3AClustering+and+Summarizing+Medical+Documents+to+Improve+Mobile+Retrieval&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
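
    A sketch under stated assumptions (scikit-learn instead of the paper's components, and top TF-IDF terms as a crude stand-in for its multidocument summaries): cluster a result set and emit a few representative terms per cluster, which is the shape of output a small screen needs.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [
        "insulin therapy in type 2 diabetes",
        "metformin dosing for diabetes patients",
        "beta blockers after myocardial infarction",
        "aspirin for myocardial infarction prevention",
    ]
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for c in range(km.n_clusters):
        # Highest-weight centroid terms act as a telegraphic cluster summary.
        top = [terms[i] for i in km.cluster_centers_[c].argsort()[::-1][:3]]
        print("cluster", c, "->", ", ".join(top))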

  • Buenaga Rodriguez, M., Maña López, M. J., Diaz Esteban, A., & Gervás Gómez-Navarro, P.. (2001). A user model based on content analysis for the intelligent personalization of a news service. In Bauer, M., Gmytrasiewicz, P. J., & Vassileva, J. (Ed.), In User modeling (Vol. 2109, pp. 216-218). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents short-term needs and long-term multi-topic interests separately. The characterization of the user's interests includes his preferences about content, using a wide-coverage, non-domain-specific classification of topics, and about structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization. (A minimal sketch of such a two-level interest model follows the BibTeX entry below.)

    @INCOLLECTION{BuenagaRodriguez2001,
    author = {Buenaga Rodriguez, Manuel and Maña López, Manuel J. and Diaz Esteban, Alberto and Gervás Gómez-Navarro, Pablo},
    title = {A User Model Based on Content Analysis for the Intelligent Personalization of a News Service},
    booktitle = {User Modeling},
    publisher = {Springer Berlin Heidelberg},
    year = {2001},
    editor = {Bauer, Mathias and Gmytrasiewicz, Piotr J. and Vassileva, Julita},
    volume = {2109},
    series = {Lecture Notes in Computer Science},
    pages = {216-218},
    month = {January},
    abstract = {In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents short-term needs and long-term multi-topic interests separately. The characterization of the user's interests includes his preferences about content, using a wide-coverage, non-domain-specific classification of topics, and about structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization.},
    copyright = {©2001 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/3-540-44566-8_25},
    isbn = {978-3-540-42325-6, 978-3-540-44566-1},
    language = {en},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+User+Model+Based+on+Content+Analysis+for+the+Intelligent+Personalization+of+a+News+Service&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
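
    A minimal sketch, assuming only what the abstract states: a user model that keeps a slowly decaying long-term multi-topic profile and a per-session short-term profile, both updated by implicit feedback (reading an item counts as a positive signal). Topic names, weights and the 0.7 mixing factor are illustrative, not the paper's parameters.

    class NewsUserModel:
        def __init__(self, decay=0.9):
            self.decay = decay
            self.long_term = {}    # topic -> interest weight
            self.short_term = {}   # rebuilt every session

        def read(self, topics):
            """Implicit feedback: the user opened an item with these topics."""
            for t in self.long_term:
                self.long_term[t] *= self.decay   # older interests fade
            for t in topics:
                self.long_term[t] = self.long_term.get(t, 0.0) + 1.0
                self.short_term[t] = self.short_term.get(t, 0.0) + 1.0

        def new_session(self):
            self.short_term.clear()

        def score(self, topics, w_short=0.7):
            """Rank a candidate item against both interest profiles."""
            return sum(w_short * self.short_term.get(t, 0.0)
                       + (1 - w_short) * self.long_term.get(t, 0.0)
                       for t in topics)

    m = NewsUserModel()
    m.read(["economy", "sports"])
    print(m.score(["economy"]) > m.score(["culture"]))  # True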

  • Buenaga Rodriguez, M., Fernández Manjón, B., & Fernández Valmayor, A.. (1995). Information overload at the information age. Adults in innovative learning situations, 17-30.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez1995,
    author = {Buenaga Rodriguez, Manuel and Fernández Manjón, Baltasar and Fernández Valmayor, A},
    journal = {Adults in Innovative Learning Situations},
    pages = {17-30},
    title = {Information Overload at the Information Age},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Information+Overload+at+the+Information+Age&btnG=&hl=es&as_sdt=0},
    year = {1995}
    }

  • Buenaga Rodriguez, M., Rubio, M., Aparicio Galisteo, F., & Hernando, A.. (2011). Conceptcase: una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en internet. Viii jornadas internacionales de innovación universitaria.
    [BibTeX] [Abstract] [Google Scholar]
    In this work we present ConceptCase, a methodology aimed at integrating concept-based learning and case-based learning. The methodology rests on the student being able to easily explore the concepts that appear in a case (we have focused on clinical cases and medical students), thanks to an intelligent Internet information access system that identifies the concepts and gives direct access to information about them. To define and evaluate our methodology, we carried out an initial experience on a clinical case within a 2nd-year course of the Degree in Medicine. The specific case was a patient with a cardiac pathology, in which concepts related to the description of the disease, its evolution and its treatment arise, and we selected MedlinePlus and Freebase as the ontologies or concept bases. We conducted an evaluation with a group of 60 students, obtaining positive results both in terms of objective learning outcomes and in terms of user satisfaction.

    @INPROCEEDINGS{BuenagaRodriguez2011,
    author = {Buenaga Rodriguez, Manuel and Rubio, Margarita and Aparicio Galisteo, Fernando and Hernando, Asunción},
    title = {ConceptCase: Una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en Internet},
    year = {2011},
    abstract = {En este trabajo presentamos ConceptCase, una metodología orientada a la integración de aprendizaje basado en conceptos y aprendizaje basado en casos. La metodología se basa en que el estudiante pueda profundizar fácilmente en los conceptos que aparecen en un caso (nos hemos focalizado en casos clínicos y estudiantes de medicina), gracias a la utilización de un sistema inteligente de acceso a la información en Internet, que permite identificar los conceptos y acceder de forma directa a información sobre ellos. Para la definición y evaluación de nuestra metodología, hemos desarrollado una experiencia inicial sobre un caso clínico en el marco de una asignatura de 2º curso de Grado en Medicina. El caso en concreto era de una paciente con una patología cardíaca, en el que surgen conceptos relacionados con la descripción de la enfermedad, su evolución y tratamiento, y seleccionamos como ontologías o bases de conceptos MedlinePlus y FreeBase. Conducimos una experiencia de evaluación sobre un conjunto de 60 alumnos, obteniendo resultados positivos, tanto desde el punto de vista de los resultados objetivos del aprendizaje, como de satisfacción de los usuarios.},
    journal = {VIII Jornadas Internacionales de Innovación Universitaria},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+UNA+METODOLOG%C3%8DA+PARA+LA++INTEGRACI%C3%93N+DE+APRENDIZAJE+BASADO+EN++CONCEPTOS+SOBRE+CASOS+CL%C3%8DNICOS+MEDIANTE++SISTEMAS+INTELIGENTES+DE+ACCESO+A++INFORMACI%C3%93N+EN+INTERNET&btnG=&hl=es&as_sdt=0}
    }

  • Buenaga Rodriguez, M., Fernández Manjón, B., & Vaquero Sánchez, A.. (1994). Un asistente inteligente para unix basado en la documentación. Revista de la asociación para el desarrollo de la informática educativa, 1(2).
    [BibTeX] [Abstract] [Google Scholar]
    This article describes ARGOS, a system that provides help to users of the UNIX operating system. We propose intelligent online assistants as an alternative to intelligent tutors. The key features of ARGOS are: a friendly user interface, the ability for users to state their needs in natural language, easy reuse of pre-existing information, and a supervised cooperative environment. These features rest on the integration of information retrieval, user modeling and hypertext techniques. ARGOS makes use of existing information and helps the user reach the right information at the right time. It also provides a suitable framework for testing the integration of user modeling and natural language processing techniques into information retrieval.

    @OTHER{BuenagaRodriguez1994,
    abstract = {Este artículo describe un sistema, ARGOS, que proporciona ayuda a los usuarios del sistema operativo UNIX. Proponemos los asistentes inteligentes en línea como una alternativa a los tutores inteligentes. Las características clave de ARGOS son: una interfaz de usuario amigable, el usuario puede especificar sus necesidades en lenguaje natural, es fácil reutilizar información previamente existente y proporciona un entorno cooperativo supervisado. Estas características se basan en la integración de técnicas de recuperación de información, modelado de usuario e hipertexto. ARGOS hace uso de información existente y facilita el acceso del usuario a la información correcta en el momento oportuno. ARGOS proporciona, además, un marco adecuado para el ensayo de la integración de técnicas de modelado del usuario y procesamiento de lenguaje natural en la recuperación de información.},
    author = {Buenaga Rodriguez, Manuel and Fernández Manjón, Baltasar and Vaquero Sánchez, Antonio},
    journal = {Revista de la Asociación para el Desarrollo de la Informática Educativa},
    number = {2},
    title = {Un asistente inteligente para UNIX basado en la documentación},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Un+asistente+inteligente+para+UNIX+basado+en+la+documentaci%C3%B3n&btnG=&hl=es&as_sdt=0},
    volume = {1},
    year = {1994}
    }

  • Buenaga Rodriguez, M., Maña, M., Carrero, F., & Mata, J.. (2007). Diseño e integración de técnicas de categorización automática de textos para el acceso a la información bilingüe en un ámbito biomédico. Vii jornada de seguimiento de proyectos en tecnologías informáticas.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez2007,
    address = {Zaragoza},
    author = {Buenaga Rodriguez, Manuel and Maña, Manuel and Carrero, Francisco and Mata, Jacinto},
    journal = {VII Jornada de Seguimiento de Proyectos en Tecnologías Informáticas},
    month = {September},
    title = {Diseño e Integración de Técnicas de Categorización Automática de Textos para el Acceso a la Información Bilingüe en un Ámbito Biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+e+Integraci%C3%B3n+de+T%C3%A9cnicas+de+Categorizaci%C3%B3n+Autom%C3%A1tica+de+Textos+para+el+Acceso+a+la+Informaci%C3%B3n+Bilingue+en+un+%C3%81mbito+Biom%C3%A9dico&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Díaz Agudo, B.. (2000). Using wordnet to complement training information in text categorization. In Recent advances in natural language processing II (Vol. 185, pp. 353-364). John Benjamins.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from the evaluation show that the integration of WordNet clearly outperforms the purely training-based approaches, and that an integrated technique can effectively address the classification of low-frequency categories. (A schematic Rocchio-plus-WordNet sketch follows the BibTeX entry below.)

    @OTHER{BuenagaRodriguez2000,
    abstract = {Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from the evaluation show that the integration of WordNet clearly outperforms the purely training-based approaches, and that an integrated technique can effectively address the classification of low-frequency categories.},
    address = {Amsterdam/Philadelphia},
    author = {Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María and Díaz Agudo, Belén},
    booktitle = {Recent Advances in Natural Language Processing II},
    edition = {Selected Papers from RANLP},
    pages = {353-364},
    publisher = {John Benjamins},
    series = {97, Current Issues in Linguistic Theory (CILT)},
    title = {Using Wordnet to Complement Training Information in Text Categorization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+WordNet+to+Complement+Training+Information+in+Text+Categorization&btnG=&hl=es&as_sdt=0},
    volume = {185},
    year = {2000}
    }
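
    A schematic sketch of the paper's idea, not its code: a Rocchio-style centroid classifier over bag-of-words vectors, where training text can optionally be complemented with WordNet synonyms. The WordNet step assumes NLTK with its WordNet corpus installed (nltk.download("wordnet")); without it the sketch degrades to plain centroids.

    from collections import Counter

    def expand(tokens):
        """Optionally append first-sense WordNet lemmas for each token."""
        try:
            from nltk.corpus import wordnet
            extra = []
            for tok in tokens:
                for syn in wordnet.synsets(tok)[:1]:
                    extra.extend(l.name() for l in syn.lemmas())
            return tokens + extra
        except Exception:            # NLTK or its corpus not available
            return tokens

    def centroid(training_docs):
        counts = Counter()
        for doc in training_docs:
            counts.update(expand(doc.lower().split()))
        total = sum(counts.values()) or 1
        return {w: n / total for w, n in counts.items()}

    def classify(text, centroids):
        words = text.lower().split()
        return max(centroids, key=lambda cat:
                   sum(centroids[cat].get(w, 0.0) for w in words))

    cats = {"sport": centroid(["football match goal referee"]),
            "finance": centroid(["stock market shares crash"])}
    print(classify("a late goal settled the match", cats))  # sport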

  • Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Díaz-Agudo, B.. (1997). Using wordnet to complement training information in text categorization. 2nd international conference on recent advances in natural language processing (ranlp), tzigov chark (bulgaria).
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from the evaluation show that the integration of WordNet clearly outperforms the purely training-based approaches, and that an integrated technique can effectively address the classification of low-frequency categories.

    @OTHER{BuenagaRodriguez1997,
    abstract = {Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from the evaluation show that the integration of WordNet clearly outperforms the purely training-based approaches, and that an integrated technique can effectively address the classification of low-frequency categories.},
    author = {Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María and Díaz-Agudo, Belén},
    journal = {2nd International Conference on Recent Advances in Natural Language Processing (RANLP), Tzigov Chark (Bulgaria)},
    title = {Using WordNet to Complement Training Information in Text Categorization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+WordNet+to+Complement+Training+Information+in+Text+Categorization&btnG=&hl=es&as_sdt=0},
    year = {1997}
    }

  • Cantador, I., Cortizo, J. C., Carrero, F., Troyano, J. A., Rosso, P., & Schedl, M.. (2011). Overview of the third international workshop on search and mining user-generated contents. Paper presented at the Proceedings of the 20th acm international conference on information and knowledge management, New York, NY, USA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper, we provide an overview of the 3rd International Workshop on Search and Mining User-generated Contents, held in conjunction with the 20th ACM International Conference on Information and Knowledge Management. We present the motivation and goals of the workshop, and some statistics and details about accepted papers and keynotes.

    @INPROCEEDINGS{Cantador2011,
    author = {Cantador, Ivan and Cortizo, José Carlos and Carrero, Francisco and Troyano, Jose A. and Rosso, Paolo and Schedl, Markus},
    title = {Overview of the third international workshop on search and mining user-generated contents},
    booktitle = {Proceedings of the 20th ACM international conference on Information and knowledge management},
    year = {2011},
    pages = {2625-2626},
    address = {New York, NY, USA},
    publisher = {ACM},
    abstract = {In this paper, we provide an overview of the 3rd International Workshop on Search and Mining User-generated Contents, held in conjunction with the 20th ACM International Conference on Information and Knowledge Management. We present the motivation and goals of the workshop, and some statistics and details about accepted papers and keynotes.},
    doi = {10.1145/2063576.2064045},
    isbn = {978-1-4503-0717-8},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Overview+of+the+third+international+workshop+on+search+and+mining+user-generated+contents&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • Carrero, F., Cortizo, J. C., & Gómez, J. M.. (2008). Testing concept indexing in crosslingual medical text classification. 3rd international conference on digital information management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers in a controlled, configurable environment. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Developing a Spanish version of MetaMap would be a huge task, since there has been a lot of work supporting the English version for the last sixteen years. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting away from the language. In this paper we show our experiments on combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx in order to extract concepts for text classification.

    @OTHER{Carrero2008b,
    abstract = {MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers in a controlled, configurable environment. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Developing a Spanish version of MetaMap would be a huge task, since there has been a lot of work supporting the English version for the last sixteen years. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting away from the language. In this paper we show our experiments on combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx in order to extract concepts for text classification.},
    author = {Carrero, Francisco and Cortizo, José Carlos and Gómez, José María},
    doi = {10.1109/ICDIM.2008.4746715},
    journal = {3rd International Conference on Digital Information Management},
    publisher = {3rd International Conference on Digital Information Management},
    title = {Testing Concept Indexing in Crosslingual Medical Text Classification},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Testing+concept+indexing+in+crosslingual+medical+text+classification&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Carrero, F., Cortizo, J. C., & Gómez, J. M.. (2008). Building a spanish mmtx by using automatic translation and biomedical ontologies. 9th international conference on intelligent data engineering and automated learning.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version MMTx) is a tool that allows extracting medical concepts from free text, but no Spanish version currently exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to the original texts in terms of extracted concepts. (A schematic translate-then-map pipeline sketch follows the BibTeX entry below.)

    @OTHER{Carrero2008,
    abstract = {The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version MMTx) is a tool that allows extracting medical concepts from free text, but no Spanish version currently exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to the original texts in terms of extracted concepts.},
    address = {LNCS Springer Verlag},
    author = {Carrero, Francisco and Cortizo, José Carlos and Gómez, José María},
    doi = {10.1007/978-3-540-88906-9_44},
    journal = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    publisher = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    title = {Building a Spanish MMTx by using Automatic Translation and Biomedical Ontologies},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABuilding+a+Spanish+MMTx+by+using+Automatic+Translation+and+Biomedical+Ontologies&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
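
    A schematic translate-then-map pipeline in the shape the abstract describes. Everything here is a placeholder: translate_es_en() stands in for an external machine-translation service and the two-entry table stands in for MetaMap/MMTx over the UMLS Metathesaurus.

    # Toy concept table: English term -> hypothetical UMLS-style identifier.
    CONCEPTS = {"heart failure": "C0018801", "diabetes": "C0011849"}

    def translate_es_en(text):
        """Placeholder for a real MT step (the paper uses automatic translation)."""
        toy = {"insuficiencia cardiaca": "heart failure"}
        out = text.lower()
        for es, en in toy.items():
            out = out.replace(es, en)
        return out

    def map_concepts(english_text):
        """Placeholder for MMTx: dictionary lookup over the translated text."""
        return {t: c for t, c in CONCEPTS.items() if t in english_text}

    print(map_concepts(translate_es_en("Paciente con insuficiencia cardiaca")))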

  • Carrero, F., Cortizo, J. C., Gómez, J. M., & Buenaga, M.. (2008). In the development of a spanish metamap. Proceedings of the acm 17th conference on information and knowledge management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting away from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx.

    @OTHER{Carrero2008a,
    abstract = {MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting away from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx.},
    author = {Carrero, Francisco and Cortizo, José Carlos and Gómez, José María and Buenaga, Manuel},
    doi = {10.1145/1458082.1458335},
    journal = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    publisher = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    title = {In the development of a Spanish Metamap},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIn+the+development+of+a+Spanish+Metamap&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Carrero García, F., Gómez Hidalgo, J. M., Buenaga Rodríguez, M., Mata, J., & Maña López, M.. (2007). Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico. Revista de la sociedad española para el procesamiento del lenguaje natural, 38, 107-118.
    [BibTeX] [Abstract] [Google Scholar]
    One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.

    @OTHER{CarreroGarcia2007,
    abstract = {One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.},
    author = {Carrero García, Francisco and Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel and Mata, Jacinto and Maña López, Manuel},
    journal = {Revista de la Sociedad Española para el Procesamiento del Lenguaje Natural},
    month = {April},
    pages = {107-118},
    title = {Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAcceso+a+la+informaci%C3%B3n+biling%C3%BCe+utilizando++ontolog%C3%ADas+espec%C3%ADficas+del+dominio+biom%C3%A9dico&btnG=&hl=es&as_sdt=0%2C5},
    volume = {38},
    year = {2007}
    }

  • Cormack, G., Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2007). Feature engineering for mobile (sms) spam filtering. Paper presented at the Proceedings of the 30th annual international acm sigir conference.
    [BibTeX] [Abstract] [Google Scholar]
    Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top-performing email spam filters on mobile spam messages using a suitable feature representation, with results supporting our hypothesis. (A tiny character-n-gram featurization sketch follows the BibTeX entry below.)

    @INPROCEEDINGS{Cormack2007,
    author = {Cormack, Gordon and Gómez Hidalgo, José María and Puertas Sanz, Enrique},
    title = {Feature Engineering for Mobile (SMS) Spam Filtering},
    booktitle = {Proceedings of the 30th Annual International ACM SIGIR Conference},
    year = {2007},
    abstract = {Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top-performing email spam filters on mobile spam messages using a suitable feature representation, with results supporting our hypothesis.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFeature+Engineering+for+Mobile+%28SMS%29+Spam+Filtering&btnG=&hl=es&as_sdt=0}
    }
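
    As an illustration of the message-representation issue discussed above: a minimal sketch, assuming scikit-learn's CountVectorizer and MultinomialNB as stand-ins for the email spam filters evaluated in the paper, and character n-grams as one plausible SMS-oriented representation (the toy corpus and parameters are illustrative only):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy SMS corpus (illustrative; the paper uses a real SMS spam collection).
    texts = [
        "WIN a free prize now!!!",
        "Call 09061701461 to claim your award",
        "are we still meeting at 5?",
        "ok see you at home",
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

    # Character n-grams within word boundaries cope better with the short,
    # noisy tokens typical of SMS ("fr33", "w1n", ...) than plain words do.
    model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        MultinomialNB(),
    )
    model.fit(texts, labels)
    print(model.predict(["claim your free prize"]))  # expected: [1]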

  • Cortizo, J. C., Gómez, J. M., Temprado, Y., Martín, D., & Rodríguez, F.. (2008). Mining postal addresses. Proceedings of the iadis european conference on data mining.
    [BibTeX] [Abstract] [Google Scholar]
    This paper presents FuMaS (Fuzzy Matching System), a system capable of efficient retrieval of postal addresses from noisy queries. Fuzzy retrieval of postal addresses has many possible applications, ranging from data warehouse de-duplication to the correction of input forms or integration within online street directories. This paper presents the system architecture along with a series of experiments performed using FuMaS. The experimental results show that FuMaS is a very useful system when retrieving noisy postal addresses, being able to retrieve almost 85% of the total. This represents an improvement of 15% when compared with the other systems tested in this set of experiments.

    @OTHER{Cortizo2008a,
    abstract = {This paper presents FuMaS (Fuzzy Matching System), a system capable of efficient retrieval of postal addresses from noisy queries. Fuzzy retrieval of postal addresses has many possible applications, ranging from data warehouse de-duplication to the correction of input forms or integration within online street directories. This paper presents the system architecture along with a series of experiments performed using FuMaS. The experimental results show that FuMaS is a very useful system when retrieving noisy postal addresses, being able to retrieve almost 85% of the total. This represents an improvement of 15% when compared with the other systems tested in this set of experiments.},
    author = {Cortizo , José Carlos and Gómez , José María and Temprado , Yaiza and Martín , Diego and Rodríguez , Federico},
    journal = {Proceedings of the IADIS European Conference on Data Mining},
    publisher = {Proceedings of the IADIS European Conference on Data Mining},
    title = {Mining Postal Addresses},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMining+Postal+Addresses&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
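
    As a rough illustration of retrieving postal addresses from noisy queries: a minimal sketch built on Python's standard difflib. The address list, normalization, and cutoff are assumptions for illustration, and this is not FuMaS itself:

    import difflib

    # Hypothetical canonical address list; a real system would query a street directory.
    ADDRESSES = [
        "Calle de Alcala 20, Madrid",
        "Calle de Alcala 200, Madrid",
        "Avenida de America 15, Madrid",
        "Gran Via 45, Madrid",
    ]

    def retrieve(noisy_query, n=3, cutoff=0.5):
        """Return canonical addresses ranked by string similarity to a noisy query."""
        normalized = {a.lower(): a for a in ADDRESSES}
        hits = difflib.get_close_matches(noisy_query.lower(), list(normalized), n=n, cutoff=cutoff)
        return [normalized[h] for h in hits]

    print(retrieve("cale alcala 20 madrid"))  # tolerates typos and dropped tokens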

  • Cortizo, J. C., Gachet Páez, D., Buenaga, M., Maña, M., & de la Villa, M.. (2008). Mobile medical information access by means of multidocument summarization based on similarities and differences. Acl workshop on mobile language processing, 46th annual meeting of the association of computational linguistics: human language technologies.
    [BibTeX] [Abstract] [Google Scholar]
    Access to the Electronic Health Record (EHR) and biomedical databases from pocket and hand-held or tablet computers would be a useful tool for health care professionals. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to present a large set of results in a restricted-size environment.

    @OTHER{Cortizo2008b,
    abstract = {Access to the Electronic Health Record (EHR) and biomedical databases from pocket and hand-held or tablet computers would be a useful tool for health care professionals. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to present a large set of results in a restricted-size environment.},
    author = {Cortizo , José Carlos and Gachet Páez, Diego and Buenaga , Manuel and Maña , Manuel and de la Villa , Manuel},
    journal = {ACL Workshop on Mobile Language Processing, 46th Annual Meeting of the Association of Computational Linguistics: Human Language Technologies},
    publisher = {ACL Workshop on Mobile Language Processing, 46th Annual Meeting of the Association of Computational Linguistics: Human Language Technologies},
    title = {Mobile Medical Information Access by means of Multidocument Summarization based on Similarities and Differences},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Medical+Information+Access+by+means+of+Multidocument+Summarization+%09based+on+Similarities+and+Differences&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Cortizo, J. C., Gachet Páez, D., Buenaga, M., Maña, M., Puertas, E., & de la Villa, M.. (2008). Extending pubmed on tap by means of multidocument summarization. User-centric technologies and applications workshop.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from pocket and hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard application for PDAs to retrieve information from Medline, the most important and most consulted bibliographic database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.

    @OTHER{Cortizo2008,
    abstract = {Access to biomedical databases from pocket and hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard application for PDAs to retrieve information from Medline, the most important and most consulted bibliographic database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.},
    author = {Cortizo , José Carlos and Gachet Páez, Diego and Buenaga , Manuel and Maña , Manuel and Puertas , Enrique and de la Villa , Manuel},
    journal = {User-centric Technologies and Applications Workshop },
    publisher = {User-centric Technologies and Applications Workshop – Madrinet},
    title = {Extending PubMed on Tap by means of MultiDocument Summarization},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExtending+on+Tap+by+means+of+Summarization&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
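
    The two entries above pair clustering with multidocument summarization to fit results on small screens. A generic sketch of that idea (not the projects' code), assuming TF-IDF plus k-means over retrieved abstracts and a closest-to-centroid representative per cluster:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Toy retrieved abstracts (illustrative; the systems above query Medline).
    docs = [
        "hypertension treatment with beta blockers",
        "beta blockers reduce blood pressure",
        "insulin therapy for type 2 diabetes",
        "diabetes management and insulin dosing",
    ]

    X = TfidfVectorizer().fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # One representative per cluster (closest to its centroid): a crude
    # stand-in for a multidocument summary on a restricted-size display.
    for c in range(2):
        members = [i for i, label in enumerate(km.labels_) if label == c]
        best = min(members, key=lambda i: ((X[i].toarray() - km.cluster_centers_[c]) ** 2).sum())
        print(f"cluster {c}: {docs[best]}")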

  • Cortizo, J. C., Carrero, F., Cantador, I., Troyano, J. A., & Rosso, P.. (2012). Introduction to the special section on search and mining user-generated content. Acm transactions on intelligent systems and technology, 3(4), 1-3.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The primary goal of this special section of ACM Transactions on Intelligent Systems and Technology is to foster research in the interplay between Social Media, Data/Opinion Mining and Search, aiming to reflect the actual developments in technologies that exploit user-generated content.

    @ARTICLE{Cortizo2012,
    author = {Cortizo , José Carlos and Carrero , Francisco and Cantador , Iván and Troyano , José Antonio and Rosso , Paolo},
    title = {Introduction to the Special Section on Search and Mining User-Generated Content},
    journal = {ACM Transactions on Intelligent Systems and Technology},
    year = {2012},
    volume = {3},
    pages = {1-3},
    number = {4},
    month = {September},
    abstract = {The primary goal of this special section of ACM Transactions on Intelligent Systems and Technology is to foster research in the interplay between Social Media, Data/Opinion Mining and Search, aiming to reflect the actual developments in technologies that exploit user-generated content.},
    chapter = {65},
    doi = {10.1145/2337542.2337550},
    issn = {2157-6904},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+the+Special+Section+on+Search+and+Mining+User-Generated+Content&btnG=&hl=es&as_sdt=0%2C5},
    urldate = {2013-01-10}
    }

  • Cortizo, J. C., Carrero, F. M., & Gómez, J. M.. (2011). Introduction to the special issue: mining social media. International journal of electronic commerce, 15(3), 5-8.
    [BibTeX] [Ver publicacion] [Google Scholar]
    @ARTICLE{Cortizo2011,
    author = {Cortizo , José Carlos and Carrero , Francisco M. and Gómez , José María},
    title = {Introduction to the Special Issue: Mining Social Media},
    journal = {International Journal of Electronic Commerce},
    year = {2011},
    volume = {15},
    pages = {5-8},
    number = {3},
    month = {April},
    doi = {10.2753/JEC1086-4415150301},
    issn = {1086-4415},
    shorttitle = {Introduction to the Special Issue},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+the+Special+Issue%3A+Mining+Social+Media&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • Cortizo Pérez, J. C., Giráldez, I., & Gaya, M. C.. (2007). Transformando la representación de los datos para mejorar el clasificador bayesiano simple. Paper presented at the Actas de la xii conferencia de la asociación española para la inteligencia artificial – caepia/ttia 2007.
    [BibTeX] [Abstract] [Google Scholar]
    El clasificador bayesiano simple se basa en la asunción de independencia entre los valores de los atributos dado el valor de la clase. Así pues, su efectividad puede decrecer en presencia de atributos interdependientes. En este artículo se presenta DGW (Dependency Guided Wrapper), un wrapper que utiliza la información acerca de las dependencias entre atributos para transformar la representación de los datos para mejorar la precisión del clasificador bayesiano simple. Este artículo presenta una serie de experimentos donde se compara las representaciones de datos obtenidas por el DGW contra las representaciones de datos obtenidas por 12 acercamientos previos, como son la construcción inductiva de productos cartesianos de atributos, y wrappers que realizan búsquedas de subconjuntos óptimos de atributos. Los resultados de los experimentos muestran que DGW genera representaciones nuevas de los datos que ayudan a mejorar significativamente la precisión del clasificador bayesiano simple más frecuentemente que cualquier otro acercamiento previo. Además, DGW es mucho más rápido que cualquier otro sistema en el proceso de transformación de la representación de los datos.

    @INPROCEEDINGS{CortizoPerez2007,
    author = {Cortizo Pérez , José Carlos and Giráldez , Ignacio and Gaya , Maria Cruz},
    title = {Transformando la Representación de los Datos para Mejorar el Clasificador Bayesiano Simple},
    booktitle = {Actas de la XII Conferencia de la Asociación Española para la Inteligencia Artificial - CAEPIA/TTIA 2007},
    year = {2007},
    editor = {D. Borrajo and L. Castillo and J. M. Corchado},
    volume = {1},
    pages = {317-326},
    abstract = {El clasificador bayesiano simple se basa en la asunción de independencia entre los valores de los atributos dado el valor de la clase. Así pues, su efectividad puede decrecer en presencia de atributos interdependientes. En este artículo se presenta DGW (Dependency Guided Wrapper), un wrapper que utiliza la información acerca de las dependencias entre atributos para transformar la representación de los datos para mejorar la precisión del clasificador bayesiano simple. Este artículo presenta una serie de experimentos donde se compara las representaciones de datos obtenidas por el DGW contra las representaciones de datos obtenidas por 12 acercamientos previos, como son la construcción inductiva de productos cartesianos de atributos, y wrappers que realizan búsquedas de subconjuntos óptimos de atributos. Los resultados de los experimentos muestran que DGW genera representaciones nuevas de los datos que ayudan a mejorar significativamente la precisión del clasificador bayesiano simple más frecuentemente que cualquier otro acercamiento previo. Además, DGW es mucho más rápido que cualquier otro sistema en el proceso de transformación de la representación de los datos.},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Transformando+la+Representaci%C3%B3n+de+los+Datos+para+Mejorar+el+Clasificador+Bayesiano+Simple&btnG=&hl=es&as_sdt=0}
    }

  • Cortizo Pérez, J. C., & Giráldez, I.. (2006). Multicriteria wrapper improvements to naive bayes learning. Paper presented at the Intelligent data engineering and automated learning.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Feature subset selection using a wrapper means to perform a search for an optimal set of attributes using the Machine Learning Algorithm as a black box. The Naive Bayes Classifier is based on the assumption of independence among the values of the attributes given the class value. Consequently, its effectiveness may decrease when the attributes are interdependent. We present FBL, a wrapper that uses the information about dependencies to guide the search for the optimal subset of features and the Naive Bayes Classifier as the black-box Machine Learning algorithm. Experimental results show that FBL allows the Naive Bayes Classifier to achieve greater accuracies and that FBL performs better than other classical filters and wrappers.

    @INPROCEEDINGS{CortizoPerez2006,
    author = {Cortizo Pérez , José Carlos and Giráldez , Ignacio},
    title = {MultiCriteria Wrapper Improvements to Naive Bayes Learning},
    booktitle = {Intelligent Data Engineering and Automated Learning },
    year = {2006},
    editor = {E. Corchado and H. Yin and V. Botti},
    volume = {4224},
    series = {Lecture Notes in Computer Science},
    pages = {419-427},
    publisher = {Springer Verlag},
    abstract = {Feature subset selection using a wrapper means to perform a search for an optimal set of attributes using the Machine Learning Algorithm as a black box. The Naive Bayes Classifier is based on the assumption of independence among the values of the attributes given the class value. Consequently, its effectiveness may decrease when the attributes are interdependent. We present FBL, a wrapper that uses the information about dependencies to guide the search for the optimal subset of features and the Naive Bayes Classifier as the black-box Machine Learning algorithm. Experimental results show that FBL allows the Naive Bayes Classifier to achieve greater accuracies and that FBL performs better than other classical filters and wrappers.},
    doi = {10.1007/11875581_51},
    institution = {Universidad de Burgos},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMulti+Criteria+Wrapper+Improvements+to+Naive+Bayes+Learning&btnG=&hl=es&as_sdt=0}
    }
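
    For reference, the independence assumption that FBL (and DGW in the entry above) works around is the standard Naive Bayes formulation; the following is a textbook statement in LaTeX, not notation taken from these papers:

    P(a_1, \dots, a_n \mid c) = \prod_{i=1}^{n} P(a_i \mid c),
    \qquad
    \hat{c} = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(a_i \mid c)

    When attributes are interdependent, the product double-counts shared evidence, which is why both wrappers search for feature subsets or representations under which the factorization is closer to holding.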

  • Cortizo Pérez, J. C., Díaz, L. I., Carrero, F., Yanes, A., & Monsalve, B.. (2011). On the future of mobile phones as the heart of community-built databases. Springer.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In retrospect, 10 years ago, we would not have imagined ourselves uploading or consuming high-quality videos via the Web, contributing to an online encyclopedia written by millions of users around the world, or instantly sharing information with our friends and colleagues using an online platform that allows us to manage our contacts. And the Web is still evolving, and what seemed to be science fiction then would become reality within 5-10 years. Nowadays, the Mobile Web concept is still an immature prototype of what it will be in a few years' time, but it represents a giant industry (it is expected that some five billion people will be using mobile/cellular phones in 2010) with even greater possibilities in the future. In this paper, we examine the possible future of mobile devices as the heart of community-built databases. The characteristics of mobile devices, both current and future, will allow them to have a very relevant role not only as interfaces to community-driven databases, but also as platforms where applications using data from community-driven databases will run, or even as distributed databases where users can have better control of the relevant data they contribute to those databases.

    @BOOK{CortizoPerez2011,
    title = {On the Future of Mobile Phones as the Heart of Community-Built Databases},
    publisher = {Springer},
    year = {2011},
    author = {Cortizo Pérez , José Carlos and Díaz , Luis Ignacio and Carrero , Francisco and Yanes , Adrián and Monsalve , Borja},
    pages = {261-288},
    month = {jan},
    abstract = {In retrospect, 10 years ago, we would not have imagined ourselves uploading or consuming high-quality videos via the Web, contributing to an online encyclopedia written by millions of users around the world, or instantly sharing information with our friends and colleagues using an online platform that allows us to manage our contacts. And the Web is still evolving, and what seemed to be science fiction then would become reality within 5-10 years. Nowadays, the Mobile Web concept is still an immature prototype of what it will be in a few years' time, but it represents a giant industry (it is expected that some five billion people will be using mobile/cellular phones in 2010) with even greater possibilities in the future. In this paper, we examine the possible future of mobile devices as the heart of community-built databases. The characteristics of mobile devices, both current and future, will allow them to have a very relevant role not only as interfaces to community-driven databases, but also as platforms where applications using data from community-driven databases will run, or even as distributed databases where users can have better control of the relevant data they contribute to those databases.},
    booktitle = {Community-Built Databases: Research and Development},
    doi = {10.1007/978-3-642-19047-6_11},
    isbn = {9783642190476},
    language = {en},
    shorttitle = {Community-Built Databases},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+On+the+Future+of+Mobile+Phones+as+the+Heart+of+Community-Built+Databases&btnG=&hl=es&as_sdt=0}
    }

  • Cortizo Pérez, J. C., Carrero García, F. M., Gómez Hidalgo, J. M., Monsalve Piqueras, B., & Puertas Sanz, E.. (2009). Introduction to mining social media. 13th conference of the spanish association for artificial intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{CortizoPerez2009,
    author = {Cortizo Pérez , José Carlos and Carrero García , Francisco M. and Gómez Hidalgo , José María and Monsalve Piqueras , Borja and Puertas Sanz , Enrique},
    booktitle = {Proceedings of the 1st International Workshop on Mining Social Media},
    journal = {13th Conference of the Spanish Association for Artificial Intelligence},
    month = {November},
    title = {Introduction to Mining Social Media},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Introduction+to+Mining+Social+Media&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Cortizo Pérez, J. C., Carrero, F. M., & Monsalve, B.. (2010). An architecture for a general purpose multi-algorithm recommender system. Proceedings of the workshop on the practical use of recommender systems, algorithms and technologies (prsat 2010), 51-54.
    [BibTeX] [Abstract] [Google Scholar]
    Although the current state-of-the-art on Recommender Systems is good enough to allow recommendations and personalization along many application fields, developing a general purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system's architecture that allows us to face these challenges.

    @OTHER{CortizoPerez2010,
    abstract = {Although the current state-of-the-art on Recommender Systems is good enough to allow recommendations and personalization along many application fields, developing a general purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system's architecture that allows us to face these challenges.},
    author = {Cortizo Pérez , José Carlos and Carrero , Francisco M. and Monsalve , Borja},
    journal = {Proceedings of the Workshop on the Practical Use of Recommender Systems, Algorithms and Technologies (PRSAT 2010)},
    pages = {51-54},
    title = {An Architecture for a General Purpose Multi-Algorithm Recommender System},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+An+Architecture+for+a+General+Purpose+Multi-Algorithm+Recommender+System&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Cortizo Pérez, J. C., & Giráldez, I.. (2004). Discovering data dependencies in web content mining. Paper presented at the Actas de la iadis international conference www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two-stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on Bayesian learning. The Naive Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the HTML), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects those synthetic attributes perfectly and also some “natural” dependent attributes, obtaining a more accurate classifier.

    @INPROCEEDINGS{CortizoPerez2004,
    author = {Cortizo Pérez , José Carlos and Giráldez , Ignacio},
    title = {Discovering Data Dependencies in Web Content Mining},
    booktitle = {Actas de la IADIS International Conference WWW/Internet },
    year = {2004},
    pages = {6-9},
    abstract = {Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two-stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on Bayesian learning. The Naive Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the HTML), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects those synthetic attributes perfectly and also some “natural” dependent attributes, obtaining a more accurate classifier.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADiscovering+Data+Dependencies+in+Web+Content+Mining&btnG=&hl=es&as_sdt=0}
    }
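
    The abstract describes flagging attributes that are linear combinations of one or two others via linear regression. A generic sketch of that detection step (not the authors' FBL code; the function name and R^2 threshold are hypothetical):

    import numpy as np

    def linearly_dependent_columns(X, r2_threshold=0.999):
        """Flag columns that are (almost) a linear combination of the others.

        Regresses each column on the remaining ones by least squares and
        reports those with a near-perfect fit (R^2 above the threshold)."""
        flagged = []
        for j in range(X.shape[1]):
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((X.shape[0], 1))])  # add intercept
            coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            residuals = X[:, j] - A @ coef
            ss_res = float(residuals @ residuals)
            ss_tot = float(((X[:, j] - X[:, j].mean()) ** 2).sum())
            r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0
            if r2 >= r2_threshold:
                flagged.append(j)
        return flagged

    # Synthetic check mirroring the abstract's experiment: the third column
    # is 2*col0 - col1, so each column is predictable from the other two.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    X = np.hstack([X, 2 * X[:, [0]] - X[:, [1]]])
    print(linearly_dependent_columns(X))  # -> [0, 1, 2]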

  • Diaz, A., Buenaga, M., Ureña, L. A., & Garcia-Vega, M.. (1998). Integrating linguistic resources in an uniform way for text classification tasks. First international conference on language resources and evaluation, 1197-1204.
    [BibTeX] [Google Scholar]
    @PROCEEDINGS{Diaz1998,
    title = {Integrating Linguistic Resources in an Uniform Way for Text Classification Tasks },
    year = {1998},
    author = {Diaz , Alberto and Buenaga , Manuel and Ureña , Luis Alfonso and Garcia-Vega , Manuel},
    journal = {First International Conference on Language Resources and Evaluation },
    pages = {1197-1204},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+Linguistic+Resources+in+an+Uniform+Way+for+Text+Classification+Tasks&btnG=&hl=es&as_sdt=0}
    }

  • Duenas Fuentes, A., Mochon, A., Escribano, A., Pina Fernandez, J., & Gachet Paez, D.. (2014). Mathematical probability model for obstructive sleep apnea syndrome. In Chest (Ed.), In Chest (Vol. 145, pp. 597). Chest.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Establish an econometric probability model to serve as a complementary tool supporting the diagnostic test of respiratory polygraphy by predicting the probability that a patient has of suffering OSAS.

    @INCOLLECTION{Gachet2014a,
    author = {Duenas Fuentes, Antonio and Mochon, Ana and Escribano, Ana and Pina Fernandez, Juan and Gachet Paez, Diego},
    title = {Mathematical probability model for Obstructive Sleep Apnea Syndrome},
    booktitle = {Chest},
    publisher = {Chest},
    year = {2014},
    editor = {Chest},
    volume = {145},
    series = {},
    pages = {597},
    month = {June},
    abstract = {Establish an econometric probability model to serve as a complementary tool supporting the diagnostic test of respiratory polygraphy by predicting the probability that a patient has of suffering OSAS.},
    copyright = {},
    doi = {10.1378/chest.1785482},
    isbn = {19313543},
    url = {http://abacus.universidadeuropea.es/handle/11268/3382},
    urldate = {2014-12-12}
    }

  • Díaz, A., Gervás, P., Gómez, J. M., García, A., Buenaga, M., Chacón, I., San Miguel, B., Murciano, R., Puertas, E., Alcojor, M., & Acero, I.. (2000). Proyecto mercurio: un servicio personalizado de noticias basado en técnicas de clasificación de texto y modelado de usuario. Xvi congreso de la sepln (sociedad española para el procesamiento del lenguaje natural), vigo, españa.
    [BibTeX] [Abstract] [Google Scholar]
    El sistema Mercurio es un servidor personalizado de noticias que trabaja con una representación del cliente basada en los últimos avances sobre modelado de usuario. El servidor de noticias está desarrollado como una aplicación Java que recibe suscripciones de los clientes a través de una página web. Durante el proceso de suscripción el cliente especifica sus preferencias a la hora de recibir noticias, y con ellas se genera un modelo de usuario que se utilizará para enviarle las noticias que puedan interesarle con la frecuencia que haya especificado. El servidor de noticias coopera también con un buscador que permite a los clientes realizar búsquedas puntuales en las noticias del día.

    @OTHER{Diaz2000,
    abstract = {El sistema Mercurio es un servidor personalizado de noticias que trabaja con una representación del cliente basada en los últimos avances sobre modelado de usuario. El servidor de noticias está desarrollado como una aplicación Java que recibe suscripciones de los clientes a través de una página web. Durante el proceso de suscripción el cliente especifica sus preferencias a la hora de recibir noticias, y con ellas se genera un modelo de usuario que se utilizará para enviarle las noticias que puedan interesarle con la frecuencia que haya especificado. El servidor de noticias coopera también con un buscador que permite a los clientes realizar búsquedas puntuales en las noticias del día.},
    author = {Díaz , Alberto and Gervás , Pablo and Gómez , José María and García , Antonio and Buenaga , Manuel and Chacón , Inmaculada and San Miguel , Beatriz and Murciano , Raúl and Puertas , Enrique and Alcojor , Matías and Acero , Ignacio},
    journal = {XVI Congreso de la SEPLN (Sociedad Española para el Procesamiento del Lenguaje Natural), Vigo, España},
    month = {Septiembre},
    title = {Proyecto Mercurio: un servicio personalizado de noticias basado en técnicas de clasificación de texto y modelado de usuario},
    url = {http://scholar.google.es/scholar?hl=es&as_sdt=0,5&q=allintitle%3A+Proyecto+Mercurio%3A+un+servicio+personalizado+de+noticias+basado+en+t%C3%A9cnicas+de+clasificaci%C3%B3n+de+texto+y+modelado+de+usuario},
    year = {2000}
    }

  • Díaz Esteban, A., Buenaga Rodríguez, M., Giráldez, I., Gómez Hidalgo, J. M., García, A., Chacón, I., San Miguel, B., Puertas Sanz, E., Murciano, R., Alcojor, M., Acero, I., & Gervás, P.. (2001). Proyecto hermes: servicios de personalización inteligente de noticias mediante la integración de técnicas de análisis automático del contenido textual y modelado de usuario con capacidades bilingües. Procesamiento de lenguaje natural, 27, 299-300.
    [BibTeX] [Abstract] [Google Scholar]
    El proyecto Hermes tiene como objetivo el desarrollo de un sistema personalizado inteligente de acceso a la información en un entorno bilingüe, español e inglés. El sistema proporciona una alta efectividad e información especialmente adaptada al cliente, basándose en la utilización de técnicas avanzadas del contenido textual y modelado de usuario. Un objetivo principal del proyecto Hermes radica en la extensión de las tecnologías vigentes para entornos monolingües al campo bilingüe. El servidor de noticias está desarrollado como una aplicación Java que recibe suscripciones de los clientes a través de una página web. Durante el proceso de suscripción el cliente especifica sus preferencias a la hora de recibir noticias, y con ellas se genera un modelo de usuario que se utilizará para enviarle las noticias que puedan interesarle.

    @ARTICLE{DiazEsteban2001,
    author = {Díaz Esteban , Alberto and Buenaga Rodríguez , Manuel and Giráldez , Ignacio and Gómez Hidalgo , José María and García , Antonio and Chacón , Inmaculada and San Miguel , Beatriz and Puertas Sanz , Enrique and Murciano , Raúl and Alcojor , Matías and Acero , Ignacio and Gervás , Pablo},
    title = {Proyecto Hermes: Servicios de Personalización Inteligente de Noticias mediante la Integración de Técnicas de Análisis Automático del Contenido Textual y Modelado de Usuario con Capacidades Bilingües},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2001},
    volume = {27},
    pages = {299-300},
    month = {September},
    abstract = {El proyecto Hermes tiene como objetivo el desarrollo de un sistema personalizado inteligente de acceso a la información en un entorno bilingüe, español e inglés. El sistema proporciona una alta efectividad e información especialmente adaptada al cliente, basándose en la utilización de técnicas avanzadas del contenido textual y modelado de usuario. Un objetivo principal del proyecto Hermes radica en la extensión de las tecnologías vigentes para entornos monolingües al campo bilingüe. El servidor de noticias está desarrollado como una aplicación Java que recibe suscripciones de los clientes a través de una página web. Durante el proceso de suscripción el cliente especifica sus preferencias a la hora de recibir noticias, y con ellas se genera un modelo de usuario que se utilizará para enviarle las noticias que puedan interesarle.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AProyecto+Hermes%3A+Servicios+de+Personalizaci%C3%B3n+Inteligente+de+Noticias+mediante+la+Integraci%C3%B3n+de+T%C3%A9cnicas+de+An%C3%A1lisis+Autom%C3%A1tico+del+Contenido+Textual+y+Modelado+de+Usuario+con+Capacidades+Biling%C3%BCes&btnG=&hl=es&as_sdt=0}
    }

  • Díaz Esteban, A., Maña López, M. J., Buenaga Rodríguez, M., Gómez Hidalgo, J. M., & Gervás Gómez-Navarro, P.. (2001). Using linear classifiers in the integration of user modeling and text content analysis in the personalization of a web-based spanish news service. Workshop on user modeling, machine learning and information retrieval.
    [BibTeX] [Abstract] [Google Scholar]
    Nowadays many newspapers and news agencies offer personalized information access services and, moreover, there is a growing interest in the improvement of these services. In this paper we present a methodology useful to improve the intelligent personalization of news services and the way it has been applied to a relevant Spanish newspaper: ABC. Our methodology integrates textual content analysis tasks and machine learning techniques to achieve an elaborated user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of a user's interests includes his preferences about structure (newspaper sections), content and information delivery. A wide-coverage, non-domain-specific classification of topics and a personal set of keywords allow the user to define his preferences about content. Machine learning techniques are used to obtain an initial representation of each category of the topic classification. Finally, we introduce some details about the Mercurio system, which is being used to implement this methodology for ABC. We describe our experience and an evaluation of the system in comparison with other commercial systems.

    @OTHER{DiazEsteban2001a,
    abstract = {Nowadays many newspapers and news agencies offer personalized information access services and, moreover, there is a growing interest in the improvement of these services. In this paper we present a methodology useful to improve the intelligent personalization of news services and the way it has been applied to a relevant Spanish newspaper: ABC. Our methodology integrates textual content analysis tasks and machine learning techniques to achieve an elaborated user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of a user's interests includes his preferences about structure (newspaper sections), content and information delivery. A wide-coverage, non-domain-specific classification of topics and a personal set of keywords allow the user to define his preferences about content. Machine learning techniques are used to obtain an initial representation of each category of the topic classification. Finally, we introduce some details about the Mercurio system, which is being used to implement this methodology for ABC. We describe our experience and an evaluation of the system in comparison with other commercial systems.},
    author = {Díaz Esteban , Alberto and Maña López , Manuel J. and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María and Gervás Gómez-Navarro , Pablo},
    booktitle = {Workshop on User Modeling, Machine Learning and Information Retrieval },
    title = {Using linear classifiers in the integration of user modeling and text content analysis in the personalization of a Web-based Spanish News Service},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUsing+linear+classifiers+in+the+integration+of+user+modeling+and+text+content+analysis+in+the+personalization+of+a+Webbased+Spanish+News+&btnG=&hl=es&as_sdt=0},
    year = {2001}
    }

  • Fernandez, J., Benchetrit, D., & Gachet Páez, D.. (2001). Automated visual inspection to assembly of frontal airbag sensors of automobiles. Paper presented at the 2001 8th IEEE international conference on emerging technologies and factory automation, 2001. proceedings.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes an automatic quality control system that supervises through three CCD cameras the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of epoxy resin to fix the accelerator sensor. The system can inspect 12000 pieces/hour and now it is at full production in a multinational automobile component factory at Madrid.

    @inproceedings{fernandez_automated_2001,
    title = {Automated visual inspection to assembly of frontal airbag sensors of automobiles},
    volume = {2},
    doi = {10.1109/ETFA.2001.997745},
    abstract = {This paper describes an automatic quality control system that supervises through three {CCD} cameras the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of epoxy resin to fix the accelerator sensor. The system can inspect 12000 pieces/hour and now it is at full production in a multinational automobile component factory at Madrid.},
    booktitle = {2001 8th {IEEE} International Conference on Emerging Technologies and Factory Automation, 2001. Proceedings},
    author = {Fernandez, J. and Benchetrit, D. and Gachet Páez, Diego},
    year = {2001},
    keywords = {Assembly systems, automatic optical inspection, automobile airbag sensors, automobile component factory, automobile industry, Automobiles, {CCD} cameras, Charge coupled devices, Charge-coupled image sensors, Epoxy resins, Inspection, Production systems, quality control, quality control system, Sensor phenomena and characterization, Sensor systems, visual inspection},
    pages = {631--634 vol.2},
    url = {http://scholar.google.es/scholar?q=Automated+visual+inspection+to+assembly+of+frontal+airbag+sensors+of+automobiles&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Fernandez-Valmayor, A., Villarrubia, C., & Buenaga, M.. (1993). An intelligent interface to a database system. Case-based reasoning and information retrieval: exploring the opportunities for technology sharing. AAAI Press, Ca, USA.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.

    @INPROCEEDINGS{Fernandez-Valmayor1993,
    author = {Fernandez-Valmayor , A. and Villarrubia , C. and Buenaga , Manuel},
    title = {An Intelligent Interface to a Database System},
    year = {1993},
    address = {Ca, USA},
    month = {March},
    abstract = {In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.},
    journal = {Case-Based Reasoning and Information Retrieval. Exploring the Opportunities for Technology Sharing, AAAI Press},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Interface+to+a+Database+System&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Fernández Manjón, B., & Buenaga Rodríguez, M.. (1995). Internet como herramienta de trabajo en el campo educativo. Adie: asociación para el desarrollo de la informática educativa, 1(4), 14-20.
    [BibTeX] [Google Scholar]
    @ARTICLE{FernandezManjon1995,
    author = {Fernández Manjón , Baltasar and Buenaga Rodríguez , Manuel},
    title = {Internet como herramienta de trabajo en el campo educativo},
    journal = {ADIE: Asociación para el Desarrollo de la Informática Educativa},
    year = {1995},
    volume = {1},
    pages = {14-20},
    number = {4},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Internet+como+herramienta+de+trabajo+en+el+campo+educativo&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Buenaga, M., Puertas, E., Villalba, M. T., & Muñoz Gil, R.. (2015). Big data processing using wearable devices for wellbeing and healthy activities promotion. In Cleland, I., Guerrero, L., & Bravo, J. (Ed.), In Ambient assisted living. ict-based solutions in real life situations (pp. 196-205). Springer International Publishing.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The aging population and economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle

    @INCOLLECTION{Gachet2015a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa and Muñoz Gil, Rafael},
    title = {Big Data Processing Using Wearable Devices for Wellbeing and Healthy Activities Promotion},
    booktitle = {Ambient Assisted Living. ICT-based Solutions in Real Life Situations},
    publisher = {Springer International Publishing},
    year = {2015},
    editor = {Cleland, Ian and Guerrero, Luis and Bravo, Jos{\'e}},
    volume = {},
    series = {},
    pages = {196--205},
    month = {December},
    abstract = {The aging population and economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle},
    copyright = {Springer},
    doi = {10.1007/978-3-319-26410-3_19},
    isbn = {978-3-319-26410-3},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=0ynMYdoAAAAJ&sortby=pubdate&citation_for_view=0ynMYdoAAAAJ:vRqMK49ujn8C},
    urldate = {2015-02-02}
    }

  • Gachet Páez, D., Buenaga, M., Puertas, E., & Villalba, M. T.. (2015). Big data processing of bio-signal sensors information for self-management of health and diseases. In Imis 2015 proceedings (pp. 330-335). IEEE.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services

    @INCOLLECTION{Gachet2015b,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa},
    title = {Big Data Processing of Bio-signal Sensors Information for Self-management of Health and Diseases},
    booktitle = {IMIS 2015 Proceedings},
    publisher = {IEEE},
    year = {2015},
    editor = {},
    volume = {},
    series = {},
    pages = {330--335},
    month = {July},
    abstract = {European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2015.51},
    isbn = {978-1-4799-8872-3 },
    url = {https://scholar.google.es/citations?view_op=view_citation&continue=/scholar%3Fq%3DBig%2BData%2BProcessing%2Bof%2BBio-signal%2BSensors%2BInformation%2Bfor%2BSelf-management%2Bof%2BHealth%2Band%2BDiseases%26hl%3Des%26as_sdt%3D0,5%26as_ylo%3D2015%26scilib%3D2%26scioq%3DIPHealth:%2BPlataforma%2Binteligente%2Bbasada%2Ben%2Bopen,%2Blinked%2By%2Bbig%2Bdata%2Bpara%2Bla%2Btoma%2Bde%2Bdecisiones%2By%2Baprendizaje%2Ben%2B&citilm=1&citation_for_view=0ynMYdoAAAAJ:K3LRdlH-MEoC&hl=es&oi=p},
    urldate = {2015-02-08}
    }

  • Gachet Páez, D., Aparicio Galisteo, F., Buenaga Rodríguez, M., Padrón, V., & Alanbari, M.. (2011). Personalized health care and information services for elders. Proceedings wishwell’2011.
    [BibTeX] [Google Scholar]
    @OTHER{GachetPaez2011a,
    address = {Nottingham},
    author = {Gachet Páez , Diego and Aparicio Galisteo , Fernando and Buenaga Rodríguez , Manuel and Padrón , Victor and Alanbari , Mohammad},
    journal = {Proceedings WISHWell’2011},
    month = {July},
    title = {Personalized Health Care and Information Services for Elders},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Personalized+Health+Care+and+Information+Services+for+Elders&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Gachet Páez, D., Buenaga, M., Rubio, M., & Silio, T.. (2007). Ubiquitous information retrieval to improve patient safety in hospitals. Iadis international conference on ubiquitous computing.
    [BibTeX] [Abstract] [Google Scholar]
    Heterogeneous information management within the biomedical domain requires a set of text content analysis and data mining techniques. Both the intelligent information retrieval applied to the Electronic Health Record (EHR) and to biomedical databases, and the access to this information using pocket and hand-held devices or tablet computers, will be a useful tool for health care professionals and a valuable complement to other medical applications. In this paper we present both a description of the SINAMED research project and a discussion of some partial results obtained. Our aim is to design new text categorization and summarization algorithms applied to patient clinical records and to the associated medical information, and to design advanced, efficient user interfaces for mobile devices and for on-line access to these results. The proposed system would contribute to improving medical attention and patient safety.

    @OTHER{Gachet2007b,
    abstract = {Heterogeneous information management within the biomedical domain requires a set of text content analysis and data mining techniques. Both the intelligent information retrieval applied to the Electronic Health Record (EHR) and to biomedical databases, and the access to this information using pocket and hand-held devices or tablet computers, will be a useful tool for health care professionals and a valuable complement to other medical applications. In this paper we present both a description of the SINAMED research project and a discussion of some partial results obtained. Our aim is to design new text categorization and summarization algorithms applied to patient clinical records and to the associated medical information, and to design advanced, efficient user interfaces for mobile devices and for on-line access to these results. The proposed system would contribute to improving medical attention and patient safety.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Rubio , Margarita and Silio , Teresa},
    booktitle = {IADIS International Conference on Ubiquitous Computing},
    title = {Ubiquitous Information Retrieval to improve Patient safety in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Ubiquitous+Information+Retrieval+to+improve+Patient+safety+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga, M., & Puertas, E.. (2006). Mobile access to patient clinical records and related medical documentation. International conference on ubiquitous computing.
    [BibTeX] [Abstract] [Google Scholar]
    On-line access to patient clinical records from pocket and hand-held or tablet computers will be a useful tool for health care professionals and a valuable complement to other medical applications if information delivery and access systems are designed with handheld computers in mind. In this paper we present and discuss some partial results of two different research projects, SINAMED and ISIS, both of which have as main goals the design of new text categorization and summarization algorithms applied to patient clinical records and associated medical information, and advanced, efficient user interfaces for mobile and on-line access to these results. Continued and new research is expected to improve additional handheld-based user interface design principles as well as guidelines for results organization and system performance and acceptance in a concrete public health institution.

    @OTHER{Gachet2006,
    abstract = {On-line access to patient clinical records from pocket and hand-held or tablet computers will be a useful tool for health care professionals and a valuable complement to other medical applications if information delivery and access systems are designed with handheld computers in mind. In this paper we present and discuss some partial results of two different research projects, SINAMED and ISIS, both of which have as main goals the design of new text categorization and summarization algorithms applied to patient clinical records and associated medical information, and advanced, efficient user interfaces for mobile and on-line access to these results. Continued and new research is expected to improve additional handheld-based user interface design principles as well as guidelines for results organization and system performance and acceptance in a concrete public health institution.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Puertas , Enrique},
    booktitle = {International Conference on Ubiquitous Computing},
    title = {Mobile Access to Patient Clinical Records and Related Medical Documentation},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Access+to+Patient+Clinical+Records+and+Related+Medical+Documentation&btnG=&hl=es&as_sdt=0},
    year = {2006}
    }

  • Gachet Páez, D., Buenaga, M., Padrón, V., & Aparicio, F.. (2010). Integrating intelligent e-services and information access for elder people. Confidence international conference.
    [BibTeX] [Abstract] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the elderly people and persons with disability to access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010b,
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elder people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the elderly people and persons with disability to access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elder people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Padrón , Víctor and Aparicio , Fernando},
    journal = { CONFIDENCE International Conference},
    title = {Integrating Intelligent e-Services and Information Access for Elder People},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+Intelligent+e-Services+and+Information+Access+for+Elder+%09People&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., & Silió, T.. (2008). Recuperación de información médica mediante dispositivos móviles. Novática. revista de la asociación de técnicos en informática, 194, 63-66.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2008,
    author = {Gachet Páez, Diego and Buenaga , Manuel and Silió , Teresa},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    month = {Julio},
    pages = {63-66},
    title = {Recuperación de información médica mediante dispositivos móviles},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Recuperaci%C3%B3n+de+informaci%C3%B3n+m%C3%A9dica+mediante+dispositivos+m%C3%B3viles&btnG=&hl=es&as_sdt=0},
    volume = {194},
    year = {2008}
    }

  • Gachet Páez, D., Buenaga, M., Villalba, M., & Lara, P.. (2010). An open and adaptable platform for elderly people and persons with disability to access the information society. Pervasive health.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Villalba , Maite and Lara , Pedro},
    booktitle = {Pervasive Health},
    doi = {10.4108/ICST.PERVASIVEHEALTH2010.8882},
    month = {March},
    title = {An Open and adaptable platform for elderly people and persons with disability to access the information society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Open+and+adaptable+platform+for+elderly+people+and+persons+with+%09disability+to+access+the+information+society&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Rubio, M., & Silió, T.. (2007). Intelligent information retrieval and mobile computing to improve patient safety in hospitals. 2nd symposium on ubiquitous computing & ambient intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2007,
    author = {Gachet Páez, Diego and Buenaga Rodríguez , Manuel and Rubio , Margarita and Silió , Teresa},
    journal = {2nd Symposium on Ubiquitous Computing \& Ambient Intelligence },
    month = {September},
    title = {Intelligent Information Retrieval and Mobile Computing to Improve Patient Safety in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Intelligent+Information+Retrieval+and+Mobile+Computing+to+Improve+Patient+Safety+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Aparicio Galisteo, F., & Padrón, V.. (2012). Integrating internet of things and cloud computing for health services provisioning: the virtual cloud carer project. Sixth international conference on innovative mobile and internet services in ubiquitous computing, 918-921.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elder people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since elder people have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population aging. This paper describes the preliminary advances of Virtual Cloud Carer (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependent and chronic elderly people, using technologies associated with the internet of things and cloud computing.

    @OTHER{Gachet2012,
    abstract = {The demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elder people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since elder people have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population aging. This paper describes the preliminary advances of Virtual Cloud Carer (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependent and chronic elderly people, using technologies associated with the internet of things and cloud computing.},
    address = {Palermo},
    author = {Gachet Páez, Diego and Buenaga Rodríguez , Manuel and Aparicio Galisteo , Fernando and Padrón , Victor},
    doi = {10.1109/IMIS.2012.25},
    journal = {Sixth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing},
    month = {July},
    pages = {918-921 },
    title = {Integrating Internet of Things and Cloud Computing for Health Services Provisioning: The Virtual Cloud Carer Project},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+Internet+of+Things+and+Cloud+Computing+for+Health+Services+Provisioning%3A+The+Virtual+Cloud+Carer+Project&btnG=&hl=es&as_sdt=0},
    year = {2012}
    }

  • Gachet Páez, D., Buenaga, M., Padrón, V., & Alanbari, M.. (2010). Helping elderly people and persons with disability to access the information society. In Ambient intelligence and future trends-International symposium on ambient intelligence (Vol. 72, pp. 189-192). Springer Berlin / Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010a,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Padrón, Víctor and Alanbari, Mohammad},
    booktitle = {Ambient Intelligence and Future Trends-International Symposium on Ambient Intelligence},
    doi = {10.1007/978-3-642-13268-1_23},
    pages = {189-192},
    publisher = {Springer Berlin / Heidelberg},
    series = {Advances in Soft Computing},
    title = {Helping Elderly People and Persons with Disability to Access the Information Society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+Elderly+People+and+Persons+with+Disability+to+Access+the+Information+Society&btnG=&hl=es&as_sdt=0},
    volume = {72},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., & Maña, M.. (2006). Using mobile devices for intelligent access to medical information in hospitals. In Ubiquitous computing and ambient intelligence.
    [BibTeX] [Google Scholar]
    @ARTICLE{Gachet2006a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Maña, Manuel},
    title = {Using Mobile Devices for Intelligent Access to Medical Information in Hospitals},
    year = {2006},
    month = {November},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+Mobile+Devices+for+Intelligent+Access+to+Medical+Information+in+Hospitals&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Escribano Otero, J. J., & Rubio, M.. (2010). Helping elderly people and persons with disability to access the information society: the naviga project. The european ambient assisted living innovation alliance (aaliance) conference 2010.
    [BibTeX] [Google Scholar]
    @OTHER{GachetPaez2010,
    address = {Málaga},
    author = {Gachet Páez, Diego and Buenaga Rodríguez, Manuel and Escribano Otero, Juan José and Rubio, Margarita},
    journal = {The European Ambient Assisted Living Innovation Alliance (AALIANCE) Conference 2010},
    month = {March},
    title = {Helping elderly people and persons with disability to access the Information Society: the Naviga Project},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+elderly+people+and+persons+with+disability+to+access+the+Information+Society%3A+the+Naviga+Project&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Gachet Páez, D., Aparicio, F., Ascanio, J. R., & Beaterio, A.. (2012). Innovative health services using cloud computing and internet of things. In Ubiquitous computing and ambient intelligence (pp. 415-421). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The demographic and social changes are causing a gradual increase of the population in situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project, whose primary purpose is the creation of new health services for dependents and chronic patients, using technologies associated with the internet of things and cloud computing.

    @INCOLLECTION{Paez2012,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Ascanio, Juan R. and Beaterio, Alberto},
    title = {Innovative Health Services Using Cloud Computing and Internet of Things},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2012},
    series = {Lecture Notes in Computer Science},
    pages = {415-421},
    month = {January},
    abstract = {The demographic and social changes are causing a gradual increase of the population in situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence, which is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project, whose primary purpose is the creation of new health services for dependents and chronic patients, using technologies associated with the internet of things and cloud computing.},
    copyright = {©2012 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/978-3-642-35377-2_58},
    isbn = {978-3-642-35376-5, 978-3-642-35377-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+%22innovative+health+services+using+cloud+computing+and+internet+of+things%22&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-21}
    }

  • Gachet Páez, D., Ascanio, J. R., Giráldez, I., & Rubio, M.. (2011). Integrating personalized health care and information access for elder people. In Novais, P., Preuveneers, D., & Corchado, J. M. (Ed.), In Ambient intelligence – software and applications (Vol. 92, pp. 33-40). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elder people and Persons with Disability to Access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @INCOLLECTION{GachetPaez2011,
    author = {Gachet Páez, Diego and Ascanio, Juan R. and Giráldez, Ignacio and Rubio, Margarita},
    title = {Integrating Personalized Health Care and Information Access for Elder People},
    booktitle = {Ambient Intelligence - Software and Applications},
    publisher = {Springer Berlin Heidelberg},
    year = {2011},
    editor = {Novais, Paulo and Preuveneers, Davy and Corchado, Juan M.},
    volume = {92},
    series = {Advances in Intelligent and Soft Computing},
    pages = {33-40},
    month = {January},
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy during the last century. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elder people and Persons with Disability to Access the Information Society) is a European effort whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    copyright = {©2011 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-642-19937-0_5},
    isbn = {978-3-642-19936-3, 978-3-642-19937-0},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+Personalized+Health+Care+and+Information+Access+for+Elder+People&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }

  • Gachet Páez, D., Buenaga, M., Cortizo, J. C., & Padrón, V.. (2008). Risk patient help and location system using mobile technologies. 3rd symposium of ubiquitous computing and ambient intelligence.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper explores the feasibility of using information and communications technologies for helping and localizing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about their illness to the hospital or care centre.

    @OTHER{Gachet2008a,
    abstract = {This paper explores the feasibility of using information and communications technologies for helping and localizing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about their illness to the hospital or care centre.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Cortizo, José Carlos and Padrón, Victor},
    doi = {10.1007/978-3-540-85867-6_21},
    journal = {3rd Symposium of Ubiquitous Computing and Ambient Intelligence},
    publisher = {3rd Symposium of Ubiquitous Computing and Ambient Intelligence},
    title = {Risk Patient Help and Location System using Mobile Technologies},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Risk+Patient+Help+and+Location+System+using+Mobile+Technologies&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Gachet Páez, D., Buenaga, M., Hernando, A., & Alonso, M.. (2007). Mobile information retrieval for the patient safety improvement in hospitals. In Ubiquitous computing and ambient intelligence (pp. 81-87).
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2007a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Hernando, Asunción and Alonso, Margarita},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    pages = {81-87},
    title = {Mobile Information Retrieval for the Patient Safety Improvement in Hospitals},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Information+Retrieval+for+the+Patient+Safety+Improvement+in+Hospitals&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga, M., Giraldez, J. I., & Padrón, V.. (2009). Agent based risk patient management. In Ambient intelligence perspectives.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in the homes of patients with a follow-up from the primary care centre. The evolution of this model follows a clear trend: trying to reduce the time and the number of visits by patients to health centres and to shift tasks, so far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, thus permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about his/her illness to the hospital or care centre.

    @OTHER{Gachet2009,
    abstract = {This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in the homes of patients with a follow-up from the primary care centre. The evolution of this model follows a clear trend: trying to reduce the time and the number of visits by patients to health centres and to shift tasks, so far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, thus permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow transmitting the patient's location and some information about his/her illness to the hospital or care centre.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Giraldez, José Ignacio and Padrón, Víctor},
    booktitle = {Ambient Intelligence Perspectives},
    doi = {10.3233/978-1-58603-946-2-90},
    publisher = {Ambient Intelligence Forum},
    title = {Agent Based Risk Patient Management},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAgent+Based+Risk+Patient+Management&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Gachet Páez, D., Padrón, V., & Alanbari, M.. (2010). Mobile and pervasive computing to help parents of low birth weight babies. In Ubiquitous computing and ambient intelligence.
    [BibTeX] [Google Scholar]
    @OTHER{Gachet2010c,
    author = {Gachet Páez, Diego and Padrón, Víctor and Alanbari, Mohammad},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    title = {Mobile and Pervasive Computing to Help Parents of Low Birth Weight Babies},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+and+Pervasive+Computing+to+Helps+Parents+of+Low+Birth+Weight+Babies&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Padron, V.. (2012). Personalized health care system with virtual reality rehabilitation and appropriate information for seniors. Sensors, 12(5), 5502-5516.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy in recent years. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elderly and Persons with Disability to Access the Information Society) is a European effort, whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the internet and the information society. Naviga also allows the creation of services targeted to social networks, mind training and personalized health care. In this paper we focus on the health care and information services designed in the project, the technological platform developed, and details of two representative elements: the virtual reality hand rehabilitation and the health information intelligent system.

    @ARTICLE{Gachet2012a,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Padron, Victor},
    title = {Personalized Health Care System with Virtual Reality Rehabilitation and Appropriate Information for Seniors},
    journal = {Sensors},
    year = {2012},
    volume = {12},
    pages = {5502-5516},
    number = {5},
    month = {April},
    abstract = {The concept of the information society is now a common one, as opposed to the industrial society that dominated the economy in recent years. It is assumed that all sectors should have access to information and reap its benefits. Elderly people are, in this respect, a major challenge, due to their lack of interest in technological progress and their lack of knowledge regarding the potential benefits that information society technologies might have on their lives. The Naviga Project (An Open and Adaptable Platform for the Elderly and Persons with Disability to Access the Information Society) is a European effort, whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the internet and the information society. Naviga also allows the creation of services targeted to social networks, mind training and personalized health care. In this paper we focus on the health care and information services designed in the project, the technological platform developed, and details of two representative elements: the virtual reality hand rehabilitation and the health information intelligent system.},
    doi = {10.3390/s120505502},
    issn = {1424-8220},
    url = {http://scholar.google.es/scholar?q=allintitle%3APersonalized+Health+Care+System+with+Virtual+Reality+Rehabilitation+and+Appropriate+Information+for+Seniors&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-19}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R.. (2014). Chronic patients monitoring using wireless sensors and big data processing. In Ubiquitous computing & ambient intelligence (pp. 33-38). IEEE.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014c,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Chronic patients monitoring using wireless sensors and Big Data Processing},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {IEEE},
    year = {2014},
    editor = {},
    volume = {},
    series = {IMIS 2014 Proceeding},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2014.54},
    isbn = {9781479943319},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=en&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:mB3voiENLucC},
    urldate = {2014-12-12}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R.. (2014). Big data and iot for chronic patients monitoring. In Nugent, C., Coronato, A., & Bravo, J. (Ed.), In Ubiquitous computing & ambient intelligence (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Big data and IoT for chronic patients monitoring},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2014},
    editor = {Nugent, Christopher and Coronato, Antonio and Bravo, José},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by an aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using Big Data techniques. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-13102-3_68},
    issn = {0302-9743},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:HDshCWvjkbEC},
    urldate = {2014-12-12}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Busto, M. J.. (2013). Virtual cloud carer: new e-health services for chronic patients. Proceedings aal forum 2013.
    [BibTeX] [Abstract]
    Current estimates claim there are 1,300,000 dependent persons in Spain, and public spending in 2010 was 5,500 million euros for the care of 650,000 dependents. Chronic diseases are increasing: diabetes cost €68,300 million in 2007 and will grow to €80,900 million by 2025, while cardiovascular diseases cost €109,000 million in Europe in 2006 (10% of total healthcare costs; 7% in Spain). Respiratory disease (COPD, asthma, lung cancer, pneumonia and tuberculosis) is responsible for 20% of all deaths and generates a cost of €84,000 million in Europe. COPD affects 44 million people in Europe, with a prevalence of 5-10% among the population older than 40 years.

    @OTHER{GachetAAL2013a,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Busto, María José},
    journal = {Proceedings AAL Forum 2013},
    month = {September},
    title = {Virtual Cloud Carer: New e-health Services for Chronic Patients},
    abstract = {Current estimates claim there are 1,300,000 dependent persons in Spain, and public spending in 2010 was 5,500 million euros for the care of 650,000 dependents. Chronic diseases are increasing: diabetes cost €68,300 million in 2007 and will grow to €80,900 million by 2025, while cardiovascular diseases cost €109,000 million in Europe in 2006 (10% of total healthcare costs; 7% in Spain). Respiratory disease (COPD, asthma, lung cancer, pneumonia and tuberculosis) is responsible for 20% of all deaths and generates a cost of €84,000 million in Europe. COPD affects 44 million people in Europe, with a prevalence of 5-10% among the population older than 40 years.},
    year = {2013},
    doi = {},
    url = {},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Ascanio, J. R., & Sánchez de Pedro, I.. (2013). Computación en la nube, big data y sensores inalámbricos para la provisión de nuevos servicios de salud. Novática. revista de la asociación de técnicos en informática, (224), 66-71.
    [BibTeX] [Abstract] [Google Scholar]
    We live in a society characterized by an aging population, currently immersed in a deep economic crisis that entails reducing the costs of public services, among them health care. It is likewise unavoidable to optimize the resources of healthcare systems by promoting the development of new medical services based on telemedicine, monitoring of chronic patients, personalized health services, etc. These new applications are expected to significantly increase the volume of health information to be managed, including data from biological sensors, clinical records, context information, etc., which in turn requires health applications to be available anywhere, at any time, and accessible from any device. This article proposes a solution for the creation of these new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.

    @OTHER{GachetNovatica2013a,
    author = {Gachet Páez, Diego and Ascanio, Juan Ramón and Sánchez de Pedro, Israel},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {224},
    pages = {66-71},
    month = {August},
    title = {Computación en la nube, Big Data y Sensores Inalámbricos para la provisión de nuevos servicios de salud},
    abstract = {We live in a society characterized by an aging population, currently immersed in a deep economic crisis that entails reducing the costs of public services, among them health care. It is likewise unavoidable to optimize the resources of healthcare systems by promoting the development of new medical services based on telemedicine, monitoring of chronic patients, personalized health services, etc. These new applications are expected to significantly increase the volume of health information to be managed, including data from biological sensors, clinical records, context information, etc., which in turn requires health applications to be available anywhere, at any time, and accessible from any device. This article proposes a solution for the creation of these new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.},
    doi = {},
    url = {http://scholar.google.es/scholar?q=novatica+computaci%C3%B3n+en+la+nube%2C+big+data+sensores+inal%C3%A1mbricos+servicios+de+salud&btnG=&hl=es&as_sdt=0%2C5},
    year = {2013},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Padrón, V., Buenaga, M., & Aparicio, F.. (2013). Improving health services using cloud computing, big data and wireless sensors. In Nugent, C., Coronato, A., & Bravo, J. (Ed.), In Ambient assisted living and active aging (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.

    @INCOLLECTION{Gachet2013a,
    author = {Gachet Páez, Diego and Padrón, Víctor and Buenaga, Manuel and Aparicio, Fernando},
    title = {Improving Health Services Using Cloud Computing, Big Data and Wireless Sensors},
    booktitle = {Ambient Assisted Living and Active Aging},
    publisher = {Springer Berlin Heidelberg},
    year = {2013},
    editor = {Nugent, Christopher and Coronato, Antonio and Bravo, José},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-03092-0_5},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?q=allintitle%3AImproving+Health+Services+Using+Cloud+Computing%2C+Big+Data+and+Wireless+Sensors&btnG=&hl=es&as_sdt=0%2C5},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Rubio, M.. (2013). Highly personalized health services using cloud and sensors. In Proceedings of the 2013 seventh international conference on innovative mobile and internet services in ubiquitous computing (pp. 451-455). IEEE Computer Society.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services based on cloud computing and vital signs sensors.

    @INCOLLECTION{Gachet2013b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita},
    title = {Highly Personalized Health Services Using Cloud and Sensors},
    booktitle = {Proceedings of the 2013 Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing},
    publisher = {IEEE Computer Society},
    year = {2013},
    pages = {451-455},
    month = {July},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, creating new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services based on cloud computing and vital signs sensors.},
    copyright = {©2013 IEEE},
    doi = {10.1109/IMIS.2013.81},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AHighly+Personalized+Health+Services+Using+Cloud+and+Sensors&btnG=&lr=},
    urldate = {2014-01-01}
    }

  • Gachet Páez, D., & Campos Lorrio, T.. (1999). Design of real time software for industrial process control. Paper presented at the 1999 7th IEEE international conference on emerging technologies and factory automation, 1999. proceedings. ETFA ’99.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel's State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok S.A., a large manufacturing company in Vigo, Spain. (An illustrative sketch follows this entry.)

    @inproceedings{paez_design_1999,
    title = {Design of real time software for industrial process control},
    volume = {2},
    doi = {10.1109/ETFA.1999.813133},
    abstract = {The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel's State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok {S.A.}, a large manufacturing company in Vigo, Spain},
    booktitle = {1999 7th {IEEE} International Conference on Emerging Technologies and Factory Automation, 1999. Proceedings. {ETFA} '99},
    author = {Gachet Páez, Diego and Campos Lorrio, Tomas},
    year = {1999},
    keywords = {case study, chemical technology, complex control system, Computer industry, Computer languages, Control systems, dosage industrial process, Electrical equipment industry, high level languages, high level programming language, Industrial control, industrial process control, industrial real time control systems, Intel Pentium based system, Lignotok, manufacturing company, manufacturing industries, manufacturing industry, operating system, Operating systems, operating systems (computers), process control, real time software design, Real time systems, real-time systems, Software Engineering, Software systems, Spain, State Charts},
    pages = {1259--1263 vol.2},
    url={http://scholar.google.es/scholar?q=allintitle%3A++Design+of+real+time+software+for+industrial+process+control&btnG=&hl=es&as_sdt=0%2C5}
    }
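
    To make the statechart-driven design concrete, here is a minimal Python sketch of an event-driven dosing controller in the spirit of Harel's State Charts. The states, events, and the 90% coarse/fine switching threshold are illustrative assumptions for this page, not the controller reported in the paper.

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        FILLING = auto()
        DOSING = auto()
        FAULT = auto()

    class DosageController:
        """Tiny state machine in the spirit of a statechart design:
        events drive guarded transitions between process states."""

        def __init__(self, target_weight):
            self.state = State.IDLE
            self.target = target_weight   # hypothetical batch target (kg)
            self.weight = 0.0

        def on_event(self, event, value=None):
            if self.state is State.IDLE and event == "start":
                self.state = State.FILLING        # open the coarse valve
            elif self.state is State.FILLING and event == "weight":
                self.weight = value
                if self.weight >= 0.9 * self.target:
                    self.state = State.DOSING     # switch to fine dosing
            elif self.state is State.DOSING and event == "weight":
                self.weight = value
                if self.weight >= self.target:
                    self.state = State.IDLE       # batch complete
            elif event == "alarm":
                self.state = State.FAULT          # any-state fault transition
            return self.state

    # Hypothetical usage: drive the controller with scale readings.
    ctrl = DosageController(target_weight=10.0)
    ctrl.on_event("start")            # IDLE -> FILLING
    ctrl.on_event("weight", 9.5)      # FILLING -> DOSING (>= 90% of target)
    ctrl.on_event("weight", 10.0)     # DOSING -> IDLE, batch done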

  • Gachet Páez, D., Salichs, M. A., Pimentel, J. R., Moreno, L., & De la Escalera, A.. (1992). A software architecture for behavioral control strategies of autonomous systems. Paper presented at the Proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent.

    @inproceedings{gachet_software_1992,
    title = {A software architecture for behavioral control strategies of autonomous systems},
    doi = {10.1109/IECON.1992.254475},
    abstract = {The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Gachet Páez, Diego and Salichs, M.A. and Pimentel, J.R. and Moreno, L. and De la Escalera, A.},
    year = {1992},
    keywords = {autonomous systems, Computer architecture, Control systems, Degradation, digital control, Electronic mail, empirical rules, execution level, Humans, learning, mobile robots, Navigation, {OPMOR}, performance, position control, robot programming, simulation environment, software architecture, Software Engineering, Velocity control},
    pages = {1002--1007 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+architecture+for+behavioral+control+strategies+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Gachet Páez, D., Salichs, M. A., Moreno, L., & Pimentel, J. R.. (1994). Learning emergent tasks for an autonomous mobile robot. Paper presented at the Proceedings of the IEEE/RSJ/GI international conference on intelligent robots and systems ’94. ‘Advanced robotic systems and the real world’, IROS ’94.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the AHC (adaptive heuristic critic). The AHC is used as a fusion supervisor of primitive behaviors in order to execute more complex robot behaviors, for example go to goal, surveillance or follow a path. The fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors that act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviors as well as the overall task. The implementation of this autonomous learning approach has been tested within OPMOR, a simulation environment for mobile robots, and with our mobile platform, the UPM Robuter. Both simulated and actual results are presented. The performance of the AHC neural network is adequate. Portions of this work have been implemented within the EEC ESPRIT 2483 PANORAMA Project. (An illustrative sketch follows this entry.)

    @inproceedings{gachet_learning_1994,
    title = {Learning emergent tasks for an autonomous mobile robot},
    volume = {1},
    doi = {10.1109/IROS.1994.407378},
    abstract = {We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the {AHC} (adaptive heuristic critic). The {AHC} is used as a fusion supervisor of primitive behaviors in order to execute more complex robot behaviors, for example go to goal, surveillance or follow a path. The fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors that act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviors as well as the overall task. The implementation of this autonomous learning approach has been tested within {OPMOR}, a simulation environment for mobile robots and with our mobile platform, the {UPM} Robuter. Both simulated and actual results are presented. The performance of the {AHC} neural network is adequate. Portions of this work have been implemented within the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {Proceedings of the {IEEE/RSJ/GI} International Conference on Intelligent Robots and Systems '94. {'Advanced} Robotic Systems and the Real World', {IROS} '94},
    author = {Gachet Páez, Diego and Salichs, M.A. and Moreno, L. and Pimentel, J.R.},
    year = {1994},
    keywords = {adaptive heuristic critic, {AHC}, autonomous mobile robot, Discrete event simulation, {EEC} {ESPRIT} 2483 {PANORAMA} Project, emergent task learning, Event detection, fusion supervisor, heuristic programming, learning (artificial intelligence), mobile platform, mobile robots, neural nets, neural network topology, {OPMOR}, reinforcement learning algorithm, Robot kinematics, Robot sensing systems, simulation environment, surveillance, {UPM} Robuter, Vectors},
    pages = {290--297 vol.1},
    url = {http://scholar.google.es/scholar?q=Learning+emergent+tasks+for+an+autonomous+mobile+robot&btnG=&hl=es&as_sdt=0%2C5}
    }
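
    As a rough illustration of the fusion-supervisor idea above, the following Python sketch uses an actor-critic (AHC-style) learner whose actor outputs state-dependent mixing weights over two hypothetical primitive behaviors. The behaviors, the discretized state, and the simple TD update are assumptions for illustration, not the paper's implementation.

    import math

    # Hypothetical primitive behaviors: map a continuous state
    # (distance_to_goal, obstacle_proximity) to (linear, angular) velocity.
    def go_to_goal(state):
        dist, _ = state
        return (min(1.0, dist), 0.0)

    def avoid_obstacle(state):
        _, prox = state
        return (0.2, 1.5 * prox)

    BEHAVIORS = [go_to_goal, avoid_obstacle]

    def softmax(prefs):
        m = max(prefs)
        exps = [math.exp(p - m) for p in prefs]
        total = sum(exps)
        return [e / total for e in exps]

    class AHCFusionSupervisor:
        """Actor-critic supervisor: the critic learns state values and the
        actor keeps per-state preferences used as behavior mixing weights."""

        def __init__(self, n_states, n_behaviors, alpha=0.1, beta=0.05, gamma=0.95):
            self.values = [0.0] * n_states                               # critic
            self.prefs = [[0.0] * n_behaviors for _ in range(n_states)]  # actor
            self.alpha, self.beta, self.gamma = alpha, beta, gamma

        def command(self, s, state):
            """Blend primitive commands with the learned mixing weights."""
            weights = softmax(self.prefs[s])
            cmds = [b(state) for b in BEHAVIORS]
            v = sum(w * c[0] for w, c in zip(weights, cmds))
            omega = sum(w * c[1] for w, c in zip(weights, cmds))
            return (v, omega), weights

        def update(self, s, s_next, reward, weights):
            """One temporal-difference step for critic and actor."""
            td = reward + self.gamma * self.values[s_next] - self.values[s]
            self.values[s] += self.alpha * td
            for i, w in enumerate(weights):
                self.prefs[s][i] += self.beta * td * w

    # Hypothetical usage: one control/learning cycle.
    sup = AHCFusionSupervisor(n_states=4, n_behaviors=len(BEHAVIORS))
    (v, omega), weights = sup.command(0, (2.0, 0.1))
    sup.update(0, 1, reward=-0.1, weights=weights)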

  • Gachet Páez, D., Exposito, D., Ascanio, J. R., & Garcia Leiva, R.. (2010). Integración de servicios inteligentes de e-salud y acceso a la información para personas mayores. Novática. revista de la asociación de técnicos en informática, (208).
    [BibTeX] [Google Scholar]
    @OTHER{GachetNovatica2010a,
    author = {Gachet Páez, Diego and Exposito, Diego and Ascanio, Juan Ramón and García Leiva, Rafael},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {208},
    title = {Integración de servicios inteligentes de e-salud y acceso a la información para personas mayores},
    url = {http://scholar.google.es/scholar?q=Novatica+Integracion+de+servicios+inteligentes+de+e-salud+y+acceso+a+la+informacion+para+personas+mayores&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Gaya, M. C., & Giráldez, I. J.. (2010). Merging local patterns using an evolutionary approach. Knowledge and information systems, 29, 1-24.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes a Decentralized Agent-based model for Theory Synthesis (DATS) implemented by MASETS, a Multi-Agent System for Evolutionary Theory Synthesis. The main contributions are the following: first, a method for the synthesis of a global theory from distributed local theories. Second, a conflict resolution mechanism, based on genetic algorithms, that deals with collisions/contradictions in the knowledge discovered by different agents at their corresponding locations. Third, a system-level classification procedure that improves the results obtained from both the monolithic classifier and the best local classifier. And fourth, a method for mining very large datasets that allows for divide-and-conquer mining followed by merging of discoveries. The model is validated with an experimental application run on 15 datasets. Results show that the global theory outperforms all the local theories, and the monolithic theory (obtained from mining the concatenation of all the available distributed data), in a statistically significant way. (An illustrative sketch follows this entry.)

    @ARTICLE{Gaya2010,
    author = {Gaya, María Cruz and Giráldez, J. Ignacio},
    title = {Merging local patterns using an evolutionary approach},
    journal = {Knowledge and Information Systems},
    year = {2010},
    volume = {29},
    pages = {1-24},
    abstract = {This paper describes a Decentralized Agent-based model for Theory Synthesis (DATS) implemented by MASETS, a Multi-Agent System for Evolutionary Theory Synthesis. The main contributions are the following: first, a method for the synthesis of a global theory from distributed local theories. Second, a conflict resolution mechanism, based on genetic algorithms, that deals with collision/contradictions in the knowledge discovered by different agents at their corresponding locations. Third, a system-level classification procedure that improves the results obtained from both: the monolithic classifier and the best local classifier. And fourth, a method for mining very large datasets that allows for divide-and-conquer mining followed by merging of discoveries. The model is validated with an experimental application run on 15 datasets. Results show that the global theory outperforms all the local theories, and the monolithic theory (obtained from mining the concatenation of all the available distributed data), in a statistically significant way.},
    doi = {10.1007/s10115-010-0332-x},
    issn = {0219-1377},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMerging+local+patterns+using+an+evolutionary+approach&btnG=&hl=es&as_sdt=0}
    }
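
    As a rough illustration of the evolutionary merging described above, the sketch below evolves a merged rule list from local theories using a genetic algorithm. The rule encoding, the fitness on a shared validation set, and the crossover/mutation operators are simplified assumptions for this page, not the DATS/MASETS implementation (which, per the abstract, also involves agents and a system-level classification procedure).

    import random

    # A "theory" is a list of rules; a rule (feature_idx, threshold, label)
    # reads: if x[feature_idx] > threshold then predict label.
    def predict(theory, x, default=0):
        for feat, thr, label in theory:
            if x[feat] > thr:
                return label
        return default

    def fitness(theory, validation):
        """Accuracy of the theory on a shared validation set."""
        return sum(predict(theory, x) == y for x, y in validation) / len(validation)

    def crossover(a, b):
        """One-point crossover over two rule lists."""
        return a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]

    def mutate(theory, p=0.1):
        """Drop rules with probability p (keep the original if all drop)."""
        kept = [r for r in theory if random.random() > p]
        return kept or theory

    def merge_theories(local_theories, validation, generations=50, pop_size=30):
        """Evolve a global theory from two or more local theories."""
        population = list(local_theories)
        while len(population) < pop_size:
            population.append(crossover(*random.sample(local_theories, 2)))
        for _ in range(generations):
            population.sort(key=lambda t: fitness(t, validation), reverse=True)
            survivors = population[: pop_size // 2]
            children = [mutate(crossover(*random.sample(survivors, 2)))
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return max(population, key=lambda t: fitness(t, validation))

    # Hypothetical usage: two local theories over 2-feature examples.
    t1 = [(0, 0.5, 1)]
    t2 = [(1, 0.3, 1)]
    val = [((0.9, 0.1), 1), ((0.1, 0.1), 0), ((0.2, 0.8), 1)]
    best = merge_theories([t1, t2], val)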

  • Gaya, M. C., & Giraldez, J. I.. (2008). Techniques for distributed theory synthesis in multiagent systems. International symposium on distributed computing and artificial intelligence.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as the accuracy of the monolithic solution.

    @OTHER{Gaya2008a,
    abstract = {Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as the accuracy of the monolithic solution.},
    author = {Gaya, Maria Cruz and Giraldez, José Ignacio},
    doi = {10.1007/978-3-540-85863-8_46},
    journal = {International Symposium on Distributed Computing and Artificial Intelligence},
    publisher = {Springer Verlag},
    title = {Techniques for Distributed Theory Synthesis in Multiagent Systems},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Techniques+for+Distributed+Theory+Synthesis+in+Multiagent+Systems&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Gaya, M. C., Giráldez, I., & Cortizo Pérez, J. C.. (2007). Uso de algoritmos evolutivos para la fusión de teorías en minería de datos distribuida. Paper presented at the Actas de la xii conferencia de la asociación española para la inteligencia artificial.
    [BibTeX] [Google Scholar]
    @INPROCEEDINGS{Gaya2007,
    author = {Gaya, Maria Cruz and Giráldez, Ignacio and Cortizo Pérez, José Carlos},
    title = {Uso de Algoritmos Evolutivos para la Fusión de Teorías en Minería de Datos Distribuida},
    booktitle = {Actas de la XII Conferencia de la Asociación Española para la Inteligencia Artificial},
    year = {2007},
    editor = {D. Borrajo and L. Castillo and J. M. Corchado},
    volume = {2},
    pages = {121-130},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Uso+de+algoritmos+evolutivos+para+la+fusion+de+teor%C3%ADas+en+miner%C3%ADa+de+datos+distribuida+&btnG=&hl=es&as_sdt=0}
    }

  • Gaya, M. C., & Giráldez, J. I.. (2008). Experiments in multi agent learning. 3rd international workshop on hybrid artificial intelligence systems, 5271, 78-85.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as the accuracy of the monolithic solution.

    @OTHER{Gaya2008,
    abstract = {Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as the accuracy of the monolithic solution.},
    author = {Gaya, Maria Cruz and Giráldez, José Ignacio},
    doi = {10.1007/978-3-540-87656-4_11},
    journal = {3rd International Workshop on Hybrid Artificial Intelligence Systems},
    pages = {78-85},
    publisher = {Springer Verlag},
    series = {Lecture Notes in Artificial Intelligence},
    title = {Experiments in Multi Agent Learning},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperiments+in+Multi+Agent+Learning&btnG=&hl=es&as_sdt=0},
    volume = {5271},
    year = {2008}
    }

  • Gaya López, M. C., Aparicio Galisteo, F., Villalba Benito, M. T., Gomez Fernandez, E., Ferrari Golinelli, G., Redondo Duarte, S., & Iniesta Casanova, J.. (2013). Improving accessibility in discussion forums. Paper presented at the Inted2013 proceedings.
    [BibTeX] [Google Scholar]
    @InProceedings{GAYALOPEZ2013IMP,
    author = {Gaya L{\'{o}}pez, Maria Cruz and Aparicio Galisteo, Fernando and Villalba Benito, M.T. and Gomez Fernandez, Estrella and Ferrari Golinelli, G. and Redondo Duarte, S. and Iniesta Casanova, Jesus},
    title = {Improving Accessibility In Discussion Forums},
    series = {7th International Technology, Education and Development Conference},
    booktitle = {INTED2013 Proceedings},
    isbn = {978-84-616-2661-8},
    issn = {2340-1079},
    publisher = {IATED},
    location = {Valencia, Spain},
    month = {4-5 March, 2013},
    year = {2013},
    pages = {6658-6665},
    url={http://scholar.google.es/scholar?hl=es&q=allintitle%3A+IMPROVING+ACCESSIBILITY+IN+DISCUSSION+FORUMS&btnG=&lr=}
    }

  • Giráldez, I., & Gachet Páez, D.. (2009). Informatización de procesos de negocio mediante la ejecución de su modelo gráfico. Novática, 201, 61-64.
    [BibTeX] [Google Scholar]
    @OTHER{Giraldez2009,
    author = {Giráldez, Ignacio and Gachet Páez, Diego},
    booktitle = {Novática},
    pages = {61-64},
    title = {Informatización de procesos de negocio mediante la ejecución de su modelo gráfico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformatizaci%C3%B3n+de+procesos+de+negocio+mediante+la+ejecuci%C3%B3n+de+su+%09modelo+gr%C3%A1fico&btnG=&hl=es&as_sdt=0},
    volume = {201},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Carrero, F., & Monsalve, B.. (2007). Las tecnologías de los motores de búsqueda del futuro. Dyna, ingeniería e industria, 82(9), 401-410.
    [BibTeX] [Abstract] [Google Scholar]
    There is no doubt that the Internet in general, and the Web in particular, have a growing influence on our lives and have become a first-rate communication medium and information resource. The vast amount of information available on the Web is made accessible primarily through search engines such as Google, Yahoo! or Altavista. The companies operating these engines are now multinationals with enormous financial revenues obtained through the advertising they attract thanks to the user traffic they accumulate. Their survival depends on remaining useful, and increasingly so, to their users, something they can only achieve through the innovation and deployment of ever more advanced technologies and functionalities. In this article we review some of the technologies we believe are key for present and future search engines, focusing on personalization and localization, social search, semantic Web search, cross-language search, and search engine fraud control.

    @ARTICLE{GomezHidalgo2007,
    author = {Gómez Hidalgo , José María and Cortizo Pérez , José Carlos and Carrero , Francisco and Monsalve , Borja},
    title = {Las Tecnologías de los Motores de Búsqueda del futuro},
    journal = {DYNA, Ingeniería e industria},
    year = {2007},
    volume = {82},
    pages = {401-410},
    number = {9},
    month = {November},
    abstract = {Es indudable que Internet en general, y la Web en particular, tienen una influencia creciente en nuestras vidas y se han convertido en un medio de comunicación y un recurso informativo de primer orden. La gran cantidad de información disponible en la Web se hace accesible primordialmente a través de los motores de búsqueda como Google, Yahoo! o Altavista. Las empresas que operan estos motores son ahora multinacionales con enormes ingresos financieros obtenidos a través de la publicidad que logran por el tráfico de usuarios que acumulan. Su supervivencia depende de seguir siendo útiles, y cada vez más, para los usuarios, algo que sólo pueden lograr a través de la innovación e implantación de tecnologías y funcionalidades cada vez más avanzadas. En este artículo presentamos una revisión de algunas de las tecnologías que creemos clave para los motores de búsqueda del presente y del futuro, centrándonos en la personalización y la localización, la búsqueda social, la búsqueda en la Web semántica, la búsqueda translingüe, y el control de fraude en buscadores.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ALas+tecnolog%C3%ADas+de+los+motores+de+b%C3%BAsqueda+del+futuro&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Buenaga Rodríguez, M., & Cortizo Pérez, J. C.. (2005). The role of word sense disambiguation in automated text categorization. Paper presented at the 10th international conference on applications of natural language to information systems.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Automated Text Categorization has reached the levels of accuracy of human experts. Provided that enough training data is available, it is possible to learn accurate automatic classifiers by using Information Retrieval and Machine Learning Techniques. However, performance of this approach is damaged by the problems derived from language variation (specially polysemy and synonymy). We investigate how Word Sense Disambiguation can be used to alleviate these problems, by using two traditional methods for thesaurus usage in Information Retrieval, namely Query Expansion and Concept Indexing. These methods are evaluated on the problem of using the Lexical Database WordNet for text categorization, focusing on the Word Sense Disambiguation step involved. Our experiments demonstrate that rather simple dictionary methods, and baseline statistical approaches, can be used to disambiguate words and improve text representation and learning in both Query Expansion and Concept Indexing approaches.

    @INPROCEEDINGS{GomezHidalgo2005,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel and Cortizo Pérez , José Carlos},
    title = {The Role of Word Sense Disambiguation in Automated Text Categorization},
    booktitle = {10th International Conference on Applications of Natural Language to Information Systems},
    year = {2005},
    pages = {298-309},
    publisher = {Springer Verlag},
    abstract = {Automated Text Categorization has reached the levels of accuracy of human experts. Provided that enough training data is available, it is possible to learn accurate automatic classifiers by using Information Retrieval and Machine Learning Techniques. However, performance of this approach is damaged by the problems derived from language variation (specially polysemy and synonymy). We investigate how Word Sense Disambiguation can be used to alleviate these problems, by using two traditional methods for thesaurus usage in Information Retrieval, namely Query Expansion and Concept Indexing. These methods are evaluated on the problem of using the Lexical Database WordNet for text categorization, focusing on the Word Sense Disambiguation step involved. Our experiments demonstrate that rather simple dictionary methods, and baseline statistical approaches, can be used to disambiguate words and improve text representation and learning in both Query Expansion and Concept Indexing approaches.},
    doi = {10.1007/11428817_27},
    institution = {Universidad de Alicante},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+The+Role+of+Word+Sense+Disambiguation+in+Automated+Text+Categorization&btnG=&hl=es&as_sdt=0}
    }
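
    The query-expansion side of the approach is easy to illustrate. A minimal sketch, assuming NLTK's WordNet interface and reducing the paper's dictionary-based disambiguation to a most-frequent-sense baseline (the function name is hypothetical):

    # Requires: pip install nltk, then nltk.download('wordnet').
    from nltk.corpus import wordnet as wn

    def expand_term(term):
        # Hypothetical helper: expand a query term with synonyms of its
        # most frequent sense, a baseline stand-in for full disambiguation.
        senses = wn.synsets(term)
        if not senses:
            return {term}
        return {term} | {l.replace("_", " ") for l in senses[0].lemma_names()}

    print(expand_term("categorization"))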

  • Gómez Hidalgo, J. M., Buenaga Rodríguez, M., Ureña López, L. A., Martín Valdivia, M. T., & García Vega, M.. (2002). Integrating lexical knowledge in learning-based text categorization. Paper presented at the 6th international conference on the statistical analysis of textual data, St-Malo, Francia.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic Text Categorization (ATC) is an important task in the field of Information Access. The prevailing approach to ATC is making use of a collection of prelabeled texts for the induction of a document classifier through learning methods. With the increasing availability of lexical resources in electronic form (including Lexical Databases (LDBs), Machine Readable Dictionaries, etc.), there is an interesting opportunity for the integration of them in learning-based ATC. In this paper, we present an approach to the integration of lexical knowledge extracted from the LDB WordNet in learning-based ATC, based on Stacked Generalization (SG). The method we suggest is based on combining the lexical knowledge extracted from the LDB, interpreted as a classifier, with a learning-based classifier, through SG.

    @INPROCEEDINGS{GomezHidalgo2002,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel and Ureña López , Luis Alfonso and Martín Valdivia , María Teresa and García Vega , Manuel},
    title = {Integrating Lexical Knowledge in Learning-Based Text Categorization},
    year = {2002},
    pages = {313-322},
    address = {St-Malo, Francia},
    month = {March},
    abstract = {Automatic Text Categorization (ATC) is an important task in the field of Information Access. The prevailing approach to ATC is making use of a collection of prelabeled texts for the induction of a document classifier through learning methods. With the increasing availability of lexical resources in electronic form (including Lexical Databases (LDBs), Machine Readable Dictionaries, etc.), there is an interesting opportunity for the integration of them in learning-based ATC. In this paper, we present an approach to the integration of lexical knowledge extracted from the LDB WordNet in learning-based ATC, based on Stacked Generalization (SG). The method we suggest is based on combining the lexical knowledge extracted from the LDB, interpreted as a classifier, with a learning-based classifier, through SG.},
    journal = {6th International Conference on the Statistical Analysis of Textual Data},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+integrating+lexical+knowledge+in+learning-based+text+categorization&btnG=&hl=es&as_sdt=0}
    }
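
    The combination scheme named in the abstract, Stacked Generalization, is directly available in scikit-learn. A minimal sketch assuming two generic base classifiers in place of the paper's lexical-knowledge and learning-based classifiers; the toy corpus is invented:

    # Sketch of Stacked Generalization: base classifiers are combined by a
    # logistic meta-learner trained on their cross-validated predictions.
    import numpy as np
    from sklearn.ensemble import StackingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB

    texts = ["heart attack symptoms", "stock market crash",
             "blood pressure drug", "interest rates rise",
             "cancer treatment trial", "bank profits fall",
             "new vaccine approved", "currency exchange dips"]
    labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = medical, 0 = finance
    X = TfidfVectorizer().fit_transform(texts)

    base = [("nb", MultinomialNB()), ("lr", LogisticRegression())]
    stack = StackingClassifier(estimators=base,
                               final_estimator=LogisticRegression(), cv=2)
    stack.fit(X, labels)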

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Formalismos lógicos para el procesamiento del lenguaje natural. Xii congreso de lenguajes naturales y lenguajes formales, seo de urgel, lérida (españa).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996b,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {XII Congreso de Lenguajes Naturales y Lenguajes Formales, Seo de Urgel, Lérida (España)},
    title = {Formalismos Lógicos para el Procesamiento del Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFormalismos+L%C3%B3gicos+para+el+Procesamiento+del+Lenguaje+Natural&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Ruiz Leyva, M. J.. (2004). Concept indexing for automated text categorization. Paper presented at the Natural language processing and information systems: 9th international conference on applications of natural language to information systems.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.

    @INPROCEEDINGS{GomezHidalgo2004,
    author = {Gómez Hidalgo , José María and Cortizo Pérez , José Carlos and Puertas Sanz , Enrique and Ruiz Leyva , Miguel Jaime},
    title = {Concept Indexing for Automated Text Categorization},
    booktitle = {Natural Language Processing and Information Systems: 9th International Conference on Applications of Natural Language to Information Systems},
    year = {2004},
    volume = {3136},
    series = {Lecture Notes in Computer Science},
    pages = {195-206},
    publisher = {Springer Verlag},
    abstract = {In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.},
    doi = {10.1007/978-3-540-27779-8_17},
    institution = {University of Salford},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConcept+Indexing+for+Automated+Text+Categorization&btnG=&hl=es&as_sdt=0%2C5}
    }
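
    Concept indexing itself is the replacement of word features by WordNet synset identifiers. A minimal sketch assuming NLTK, with the most-frequent-sense heuristic standing in for the simple disambiguation methods the abstract mentions:

    # Index tokens by the name of their most frequent WordNet synset,
    # falling back to the raw token when WordNet has no entry.
    # Requires: pip install nltk, then nltk.download('wordnet').
    from nltk.corpus import wordnet as wn

    def concept_index(tokens):
        senses_per_token = [(tok, wn.synsets(tok)) for tok in tokens]
        return [senses[0].name() if senses else tok
                for tok, senses in senses_per_token]

    print(concept_index(["bank", "loan", "categorization"]))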

  • Gómez Hidalgo, J. M., Puertas, E., Carrero, F., & Buenaga, M.. (2009). Web content filtering. Advances in computers – elsevier academic press, 76, 257-306.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Across the years, Internet has evolved from an academic network to a true communication medium, reaching impressive levels of audience and becoming a billionaire business. Many of our working, studying, and entertainment activities are nowadays overwhelmingly limited if we get disconnected from the net of networks. And of course, with the use comes abuse. The World Wide Web features a wide variety of content that is harmful for children or just inappropriate in the workplace. Web filtering and monitoring systems have emerged as valuable tools for the enforcement of suitable usage policies. These systems are routinely deployed in corporate, library, and school networks, and contribute to detect and limit Internet abuse. Their techniques are increasingly sophisticated and effective, and their development is contributing to the advance of the state of the art in a number of research fields, like text analysis and image processing. In this chapter, we review the main issues regarding Web content filtering, including its motivation, the main operational concerns and techniques used in filtering tools’ development, their evaluation and security, and a number of singular projects in this field.

    @OTHER{GomezHidalgo2009a,
    abstract = {Across the years, Internet has evolved from an academic network to a true communication medium, reaching impressive levels of audience and becoming a billionaire business. Many of our working, studying, and entertainment activities are nowadays overwhelmingly limited if we get disconnected from the net of networks. And of course, with the use comes abuse. The World Wide Web features a wide variety of content that is harmful for children or just inappropriate in the workplace. Web filtering and monitoring systems have emerged as valuable tools for the enforcement of suitable usage policies. These systems are routinely deployed in corporate, library, and school networks, and contribute to detect and limit Internet abuse. Their techniques are increasingly sophisticated and effective, and their development is contributing to the advance of the state of the art in a number of research fields, like text analysis and image processing. In this chapter, we review the main issues regarding Web content filtering, including its motivation, the main operational concerns and techniques used in filtering tools’ development, their evaluation and security, and a number of singular projects in this field.},
    author = {Gómez Hidalgo , José María and Puertas , Enrique and Carrero , Francisco and Buenaga , Manuel},
    doi = {10.1016/S0065-2458(09)01007-9},
    journal = {Advances in Computers – Elsevier Academic Press},
    pages = {257-306},
    series = {Social Networking and The Web},
    title = {Web Content Filtering},
    url = {http://scholar.google.es/scholar?as_q=Web+Content+Filtering&as_epq=Web+Content+Filtering&as_oq=&as_eq=&as_occt=title&as_sauthors=Hidalgo+G%C3%B3mez+Garc%C3%ADa+Sanz&as_publication=&as_ylo=2009&as_yhi=&btnG=&hl=es&as_sdt=0},
    volume = {76},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2009). Filtrado de pornografía usando análisis de imagen. Linux+ magazine(51), 62-67.
    [BibTeX] [Abstract] [Google Scholar]
    La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.

    @OTHER{GomezHidalgo2009,
    abstract = {La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.},
    author = {Gómez Hidalgo , José María and Puertas Sanz , Enrique},
    journal = {Linux+ Magazine},
    month = {February},
    number = {51},
    pages = {62-67},
    title = {Filtrado de pornografía usando análisis de imagen},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Filtrado+de+pornograf%C3%ADa+usando+an%C3%A1lisis+de+imagen&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., Puertas Sanz, E., Buenaga Rodríguez, M., & Carrero García, F.. (2002). Text filtering at poesia: a new internet content filtering tool for educational environments. Procesamiento de lenguaje natural, 29, 291-292.
    [BibTeX] [Abstract] [Google Scholar]
    Internet provides children with easy access to pornography and other harmful materials. In order to improve the effectiveness of existing filters, we present POESIA, a project whose objective is to develop and evaluate extensible open-source Internet filtering software in educational environments.

    @ARTICLE{GomezHidalgo2002a,
    author = {Gómez Hidalgo , José María and Puertas Sanz , Enrique and Buenaga Rodríguez , Manuel and Carrero García , Francisco},
    title = {Text filtering at POESIA: a new Internet content filtering tool for educational environments},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2002},
    volume = {29},
    pages = {291-292},
    month = {September},
    abstract = {Internet provides children with easy access to pornography and other harmful materials. In order to improve the effectiveness of existing filters, we present POESIA, a project whose objective is to develop and evaluate extensible open-source Internet filtering software in educational environments.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AText+filtering+at+POESIA%3A+a+new+Internet+content+filtering+tool+for+educational+environments&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Puertas Sanz, E., Carrero García, F., & Buenaga Rodríguez, M.. (2003). Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en internet. Procesamiento de lenguaje natural, 31, 13-20.
    [BibTeX] [Abstract] [Google Scholar]
    El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.

    @INCOLLECTION{GomezHidalgo2003,
    author = {Gómez Hidalgo , José María and Puertas Sanz , Enrique and Carrero García , Francisco and Buenaga Rodríguez , Manuel},
    title = {Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en Internet},
    year = {2003},
    volume = {31},
    pages = {13-20},
    abstract = {El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Categorizaci%C3%B3n+de+texto+sensible+al+coste+para+el+filtrado+de+contenidos+inapropiados+en+Internet&btnG=&hl=es&as_sdt=0}
    }
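
    One standard way to make a text classifier cost-sensitive, in the spirit of the evaluation described above, is to weight the classes unequally during training. A minimal sketch assuming scikit-learn; the pages, labels, and the 5:1 cost ratio are invented for illustration:

    # Cost-sensitive categorization via class weights: letting inappropriate
    # content (label 1) through is treated as 5x as costly as over-blocking.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pages = ["free adult content here", "course schedule and syllabus",
             "explicit photos gallery", "library opening hours"]
    labels = [1, 0, 1, 0]  # 1 = inappropriate, 0 = suitable

    clf = make_pipeline(TfidfVectorizer(),
                        LogisticRegression(class_weight={0: 1, 1: 5}))
    clf.fit(pages, labels)
    print(clf.predict(["adult gallery"]))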

  • Gómez Hidalgo, J. M., Murciano Quejido, R., Díaz Esteban, A., Buenaga Rodríguez, M., & Puertas Sanz, E.. (2001). Categorizing photographs for user-adapted searching in a news agency e-commerce application. First international workshop on new developments in digital libraries, 55-66.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the system CODI, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving search possibilities in the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected due to their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.

    @PROCEEDINGS{GomezHidalgo2001,
    title = {Categorizing photographs for user-adapted searching in a news agency e-commerce application},
    year = {2001},
    abstract = {In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the system CODI, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving search possibilities in the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected due to their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.},
    author = {Gómez Hidalgo , José María and Murciano Quejido , Raúl and Díaz Esteban , Alberto and Buenaga Rodríguez , Manuel and Puertas Sanz , Enrique},
    journal = {First International Workshop on New Developments in Digital Libraries },
    pages = {55-66},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACategorizing+photographs+for+user-adapted+searching+in+a+news+agency+e-commerce&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Martín Abreu, J. M., García Bringas, P., & Santos Grueiro, I.. (2010). Content security and privacy preservation in social networks through text mining. Workshop on interoperable social multimedia applications (wisma 2010).
    [BibTeX] [Abstract] [Google Scholar]
    Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.

    @OTHER{GomezHidalgo2010,
    abstract = {Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.},
    address = {Barcelona},
    author = {Gómez Hidalgo , José María and Martín Abreu , José Miguel and García Bringas , Pablo and Santos Grueiro , Igor},
    journal = {Workshop on Interoperable Social Multimedia Applications (WISMA 2010)},
    title = {Content Security and Privacy Preservation in Social Networks through Text Mining},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Content+Security+and+Privacy+Preservation+in+Social+Networks+through+Text+Mining&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }
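
    Of the prototypes listed, the compression-based spam filter is the easiest to sketch: a message is assigned to the class whose training corpus compresses it best. A minimal stand-in using zlib (not Optenet's implementation; the corpora are invented):

    # Compression-based classification: append the message to each class
    # corpus and pick the class whose compressed size grows the least.
    import zlib

    def compressed_size(text):
        return len(zlib.compress(text.encode("utf-8")))

    def classify(message, corpora):
        return min(corpora,
                   key=lambda c: compressed_size(corpora[c] + " " + message)
                                 - compressed_size(corpora[c]))

    corpora = {"spam": "buy cheap pills win prize free offer click now",
               "ham": "meeting agenda attached see you tomorrow regards"}
    print(classify("win a free prize now", corpora))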

  • Gómez Hidalgo, J. M., Díaz Esteban, A., Ureña López, L. A., & García Vega, M.. (1999). Utilización y evaluación de la desambiguación en tareas de clasificación de texto. Xv congreso de la sepln, lérida, españa(25), 99-107.
    [BibTeX] [Abstract] [Google Scholar]
    La evaluación de la desambiguación puede realizarse tanto de manera directa como indirecta, es decir, en el marco de otra tarea de procesamiento de lenguaje natural que hace uso de ella. La evaluación directa de la desambiguación está próxima a su estandarización en el marco de competiciones como SENSEVAL. En cambio, la evaluación indirecta ha sido poco utilizada, pero es muy importante porque la desambiguación se utiliza fundamentalmente como ayuda a otras tareas. En este trabajo presentamos dos métodos de desambiguación basados en la integración de recursos, aplicados a una tarea de categorización de documentos, que se basa en la misma idea de integración. Realizamos una evaluación directa e indirecta de las técnicas de desambiguación utilizadas, logrando resultados muy positivos para ambas técnicas. Los resultados son comparables con los que obtendría un desambiguador manual, e indican que es preciso hacer uso de la desambiguación para el método de categorización propuesto.

    @OTHER{GomezHidalgo1999,
    abstract = {La evaluación de la desambiguación puede realizarse tanto de manera directa como indirecta, es decir, en el marco de otra tarea de procesamiento de lenguaje natural que hace uso de ella. La evaluación directa de la desambiguación está próxima a su estandarización en el marco de competiciones como SENSEVAL. En cambio, la evaluación indirecta ha sido poco utilizada, pero es muy importante porque la desambiguación se utiliza fundamentalmente como ayuda a otras tareas. En este trabajo presentamos dos métodos de desambiguación basados en la integración de recursos, aplicados a una tarea de categorización de documentos, que se basa en la misma idea de integración. Realizamos una evaluación directa e indirecta de las técnicas de desambiguación utilizadas, logrando resultados muy positivos para ambas técnicas. Los resultados son comparables con los que obtendría un desambiguador manual, e indican que es preciso hacer uso de la desambiguación para el método de categorización propuesto.},
    author = {Gómez Hidalgo , José María and Díaz Esteban , Alberto and Ureña López , Luis Alfonso and García Vega , Manuel},
    institution = {Sociedad Española para el Procesamiento del Lenguaje Natural},
    journal = {XV Congreso de la SEPLN, Lérida, España},
    month = {September},
    number = {25},
    pages = {99-107},
    title = {Utilización y evaluación de la desambiguación en tareas de clasificación de texto},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Utilizaci%C3%B3n+y+evaluaci%C3%B3n+de+la+desambiguaci%C3%B3n+en+tareas+de+clasificaci%C3%B3n+de+texto&btnG=&hl=es&as_sdt=0},
    year = {1999}
    }

  • Gómez Hidalgo, J. M., Giráldez, I., & Buenaga, M.. (2004). Text categorization for internet content filtering. Inteligencia artificial - revista iberoamericana de inteligencia artificial, 8(22), 147-160.
    [BibTeX] [Abstract] [Google Scholar]
    Text Filtering is one of the most challenging and useful tasks in the Multilingual Information Access field. In a number of filtering applications, Automated Text Categorization of documents plays a key role. In this paper, we present two of these applications (Hermes and POESIA), focused on personalized news delivery and Internet inappropriate content blocking, respectively. We are specifically concerned with the role of Automated Text Categorization in these applications, and how the task is approached in a multilingual environment. Apart from the details of the methods employed in our work, we envisage new solutions for a more complex task we have called Cross-Lingual Text Categorization.

    @INPROCEEDINGS{GomezHidalgo2004b,
    author = {Gómez Hidalgo , José María and Giráldez , Ignacio and Buenaga , Manuel},
    title = {Text Categorization for Internet Content Filtering},
    year = {2004},
    volume = {8},
    number = {22},
    pages = {147-160},
    abstract = {Text Filtering is one of the most challenging and useful tasks in the Multilingual Information Access field. In a number of filtering applications, Automated Text Categorization of documents plays a key role. In this paper, we present two of these applications (Hermes and POESIA), focused on personalized news delivery and Internet inappropriate content blocking, respectively. We are specifically concerned with the role of Automated Text Categorization in these applications, and how the task is approached in a multilingual environment. Apart from the details of the methods employed in our work, we envisage new solutions for a more complex task we have called Cross-Lingual Text Categorization.},
    journal = {Inteligencia Artificial - Revista Iberoamericana de Inteligencia Artificial},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Text+Categorization+for+Internet+Content+Filtering+&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Gómez Albarrán, M. de las M., & Fernández-Pampillón Cesteros, A. M.. (1996). Smallhelp: un sistema de ayuda para el entorno smalltalk. (asociación para el desarrollo de la informática educativa), 6, 5-13.
    [BibTeX] [Abstract] [Google Scholar]
    Los entornos de programación orientada a objetos (POO) ofrecen varias ventajas, entre las que cabe destacar la posibilidad de reutilizar trabajo previo. Sin embargo, la tarea de desarrollar programas en la POO no es sencilla, y es importante proporcionar al programador herramientas que faciliten dicha tarea. SmallHelp es un sistema de ayuda basado en técnicas de inteligencia artificial, que facilita al usuario la localización de métodos del lenguaje Smalltalk que realicen funciones determinadas. Hemos seguido la línea tradicional de los sistemas de ayuda inteligentes, simplificando sus objetivos para disminuir el esfuerzo de desarrollo de nuestro sistema. Asimismo, SmallHelp es fácilmente adaptable a otras áreas de aplicación.

    @OTHER{GomezHidalgo1996c,
    abstract = {Los entornos de programación orientada a objetos (POO) ofrecen varias ventajas, entre las que cabe destacar la posibilidad de reutilizar trabajo previo. Sin embargo, la tarea de desarrollar programas en la POO no es sencilla, y es importante proporcionar al programador herramientas que faciliten dicha tarea. SmallHelp es un sistema de ayuda basado en técnicas de inteligencia artificial, que facilita al usuario la localización de métodos del lenguaje Smalltalk que realicen funciones determinadas. Hemos seguido la línea tradicional de los sistemas de ayuda inteligentes, simplificando sus objetivos para disminuir el esfuerzo de desarrollo de nuestro sistema. Asimismo, SmallHelp es fácilmente adaptable a otras áreas de aplicación.},
    author = {Gómez Hidalgo , José María and Gómez Albarrán , M. de las Mercedes and Fernández-Pampillón Cesteros , Ana María},
    journal = {(Asociación para el Desarrollo de la Informática Educativa)},
    pages = {5-13},
    title = {SmallHelp: un sistema de ayuda para el entorno SmallTalk},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+SmallHelp%3A+un+sistema+de+ayuda+para+el+entorno+SmallTalk&btnG=&hl=es&as_sdt=0},
    volume = {6},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., Maña López, M., & Puertas Sanz, E.. (2000). Combining text and heuristics for cost-sensitive spam filtering. Fourth computational natural language learning workshop.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Spam filtering is a text categorization task that shows special features that make it interesting and difficult. First, the task has been performed traditionally using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.

    @OTHER{GomezHidalgo2000,
    abstract = {Spam filtering is a text categorization task that shows special features that make it interesting and difficult. First, the task has been performed traditionally using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.},
    address = {Lisboa},
    author = {Gómez Hidalgo , José María and Maña López , Manuel and Puertas Sanz , Enrique},
    doi = {10.3115/1117601.1117623},
    journal = { Fourth Computational Natural Language Learning Workshop},
    month = {September},
    title = {Combining Text and Heuristics for Cost-Sensitive Spam Filtering},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACombining+Text+and+Heuristics+for+Cost-Sensitive+Spam+Filtering&btnG=&hl=es&as_sdt=0},
    year = {2000}
    }
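
    The paper's setup, text features plus domain heuristics under a cost model, maps naturally onto a feature-union pipeline. A minimal sketch assuming scikit-learn; the two heuristic features and the 9:1 weighting of legitimate mail are illustrative choices, not the paper's exact configuration:

    # Combine tf-idf text features with hand-made heuristic features, and
    # bias the learner against losing legitimate messages (label 0).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import FeatureUnion, make_pipeline
    from sklearn.preprocessing import FunctionTransformer

    def heuristics(msgs):
        # Two toy heuristics: exclamation marks and all-caps words.
        return np.array([[m.count("!"), sum(w.isupper() for w in m.split())]
                         for m in msgs])

    features = FeatureUnion([("text", TfidfVectorizer()),
                             ("heur", FunctionTransformer(heuristics))])
    model = make_pipeline(features,
                          LogisticRegression(class_weight={0: 9, 1: 1}))

    msgs = ["WIN CASH NOW!!!", "lunch at noon?",
            "FREE OFFER!!! CLICK", "project notes attached"]
    model.fit(msgs, [1, 0, 1, 0])
    print(model.predict(["FREE PRIZE!!!"]))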

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996a,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+de+experimentos+de+categorizaci%C3%B3n+autom%C3%A1tica+de+textos+basada+en+una+colecci%C3%B3n+de+entrenamiento+y+una+base+de+datos+l%C3%A9xica&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Buenaga Rodríguez, M.. (2004). Experimentos en indexación conceptual para la categorización de texto. Paper presented at the Actas de la conferencia ibero-americana www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuáles evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.

    @INPROCEEDINGS{GomezHidalgo2004a,
    author = {Gómez Hidalgo , José María and Cortizo Pérez , José Carlos and Puertas Sanz , Enrique and Buenaga Rodríguez , Manuel},
    title = {Experimentos en Indexación Conceptual para la Categorización de Texto},
    booktitle = {Actas de la Conferencia Ibero-Americana WWW/Internet },
    year = {2004},
    editor = {J. M. Gutiérrez and J. J. Martínez and P. Isaias},
    pages = {251-258},
    abstract = {En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuáles evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperimentos+en+Indexaci%C3%B3n+Conceptual+para+la+Categorizaci%C3%B3n+de+Texto&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M.. (2010). Experiencias de investigación en la universidad y en la empresa. Novática. revista de la asociación de técnicos en informática(206).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo2010a,
    author = {Gómez Hidalgo , José María},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {206},
    title = {Experiencias de investigación en la universidad y en la empresa},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Experiencias+de+investigaci%C3%B3n+en+la+universidad+y+en+la+empresa&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAplicaciones+de+las+bases+de+datos+l%C3%A9xicas+en+la+clasificaci%C3%B3n+autom%C3%A1tica+de+documentos&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Gómez Hidalgo, J. M.. (1996). Una interfaz world wide web a la base de datos léxica wordnet.. I jornadas de informática de la aeia (asociación española de informática y automática), almuñecar, granada (españa).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996d,
    author = {Gómez Hidalgo , José María},
    journal = {I Jornadas de Informática de la AEIA (Asociación Española de Informática y Automática), Almuñecar, Granada (España)},
    title = {Una interfaz World Wide Web a la base de datos léxica WordNet.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUna+interfaz+World+Wide+Web+a+la+base+de+datos+l%C3%A9xica+WordNet&btnG=&hl=es&as_sdt=0#},
    year = {1996}
    }

  • Gómez Hidalgo, J. M.. (1995). Un sistema de traducción del lenguaje natural a sql. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1995,
    author = {Gómez Hidalgo , José María},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Un sistema de traducción del lenguaje natural a SQL},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUn+sistema+de+traducci%C3%B3n+del+lenguaje+natural+a+SQL&btnG=&hl=es&as_sdt=0#},
    year = {1995}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1997). Integrating a lexical database and a training collection for text categorization. Paper presented at the Acl/eacl workshop on automatic information extraction and building of lexical semantic resources for nlp.
    [BibTeX] [Abstract] [Google Scholar]
    Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources, such as lexical databases and training collections, to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better performance than any of the others. Incidentally, WordNet based approach performance is comparable with the training approach one.

    @INPROCEEDINGS{GomezHidalgo1997,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    title = {Integrating a Lexical Database and a Training Collection for Text Categorization},
    booktitle = {ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP},
    year = {1997},
    month = {September},
    abstract = {Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources, such as lexical databases and training collections, to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for badly trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits a better performance than any of the others. Incidentally, WordNet based approach performance is comparable with the training approach one.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+a+Lexical+Database+and+a+Training+Collection+for+Text+%09Categorization&btnG=&hl=es&as_sdt=0}
    }

  • Gómez-Pérez, J. M., Kohler, S., Melero, R., Serrano-Balazote, P., Lezcano, L., Sicilia, M. Á., Iglesias, A., Castro, E., Rubio, M., & Buenaga, M.. (2009). Towards interoperability in e-health systems: a three-dimensional approach based on standards and semantics. Healthinf, international conference on health informatics(58), 205-210.
    [BibTeX] [Abstract] [Google Scholar]
    The interoperability problem in eHealth can only be addressed by means of combining standards and technology. However, these alone do not suffice. An appropriate framework that articulates such combination is required. In this paper, we adopt a three-dimensional (information, concept, and inference) approach for such framework, based on OWL as formal language for terminological and ontological health resources, SNOMED CT as lexical backbone for all such resources, and the standard CEN 13606 for representing EHRs. Based on such framework, we propose a novel form for creating and supporting networks of clinical terminologies. Additionally, we propose a number of software modules to semantically process and exploit EHRs, including NLP-based search and inference, which can support medical applications in heterogeneous and distributed eHealth systems.

    @OTHER{Gomez-Perez2009,
    abstract = {The interoperability problem in eHealth can only be addressed by means of combining standards and technology. However, these alone do not suffice. An appropriate framework that articulates such combination is required. In this paper, we adopt a three-dimensional (information, concept, and inference) approach for such framework, based on OWL as formal language for terminological and ontological health resources, SNOMED CT as lexical backbone for all such resources, and the standard CEN 13606 for representing EHRs. Based on such framework, we propose a novel form for creating and supporting networks of clinical terminologies. Additionally, we propose a number of software modules to semantically process and exploit EHRs, including NLP-based search and inference, which can support medical applications in heterogeneous and distributed eHealth systems.},
    address = {Oporto,Portugal},
    author = {Gómez-Pérez , Jose Manuel and Kohler , Sandra and Melero , Ricardo and Serrano-Balazote , Pablo and Lezcano , Leonardo and Sicilia , Miguel Ángel and Iglesias , Ana and Castro , Elena and Rubio , Margarita and Buenaga , Manuel},
    journal = {Healthinf, International Conference on Health Informatics},
    month = {January},
    number = {58},
    pages = {205-210},
    title = {TOWARDS INTEROPERABILITY IN E-HEALTH SYSTEMS: A Three-Dimensional Approach Based on Standards and Semantics},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+TOWARDS+INTEROPERABILITY+IN+E-HEALTH+SYSTEMS%3A+A+Three-Dimensional+Approach+Based+on+Standards+and+Semantics&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • López-Fernández, H., Aparicio Galisteo, F., Glez-Peña, D., Buenaga Rodríguez, M., & Fdez-Riverola, F.. (2011). Herramienta biomédica de anotación y acceso inteligente a información. Iii jornada gallega de bioinformática.
    [BibTeX] [Google Scholar]
    @OTHER{Lopez-Fernandez2011,
    address = {Vigo},
    author = {López-Fernández , H and Aparicio Galisteo , Fernando and Glez-Peña , D and Buenaga Rodríguez , Manuel and Fdez-Riverola , F},
    journal = {III Jornada Gallega de Bioinformática},
    month = {September},
    title = {Herramienta biomédica de anotación y acceso inteligente a información},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Herramienta+biom%C3%A9dica+de+anotaci%C3%B3n+y+acceso+inteligente+a+informaci%C3%B3n&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • López-Fernández, H., Reboiro-Jato, M., Glez-Peña, D., Aparicio, F., Gachet Páez, D., Buenaga, M., & Fdez-Riverola, F.. (2013). Bioannote: a software platform for annotating biomedical documents with application in medical learning environments. Computer methods and programs in biomedicine, 111(1), 139-147.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.

    @article{LópezFernández2013139,
    title = {BioAnnote: A software platform for annotating biomedical documents with application in medical learning environments },
    journal = {Computer Methods and Programs in Biomedicine },
    volume = {111},
    number = {1},
    pages = {139 - 147},
    year = {2013},
    issn = {0169-2607},
    doi = {10.1016/j.cmpb.2013.03.007},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABioAnnote%3A+A+software+platform+for+annotating+biomedical+documents+with+application+in+medical+learning+environments&btnG=&hl=es&as_sdt=0%2C5},
    author = {López-Fernández, H. and Reboiro-Jato, M. and Glez-Peña, D. and Aparicio, Fernando and Gachet Páez, Diego and Buenaga, Manuel and Fdez-Riverola, F.},
    abstract = {Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.}
    }
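
    The core annotation step, matching controlled-vocabulary terms in a document, can be illustrated independently of the platform. A minimal stand-in (this is not BioAnnote's API; the vocabulary and codes are invented):

    # Hypothetical vocabulary-based annotator: return each match of a
    # controlled-vocabulary term with its code and character offsets.
    import re

    vocabulary = {"hypertension": "C0020538", "diabetes": "C0011849"}

    def annotate(text, vocab):
        annotations = []
        for term, code in vocab.items():
            for m in re.finditer(r"\b%s\b" % re.escape(term), text,
                                 re.IGNORECASE):
                annotations.append({"term": term, "code": code,
                                    "start": m.start(), "end": m.end()})
        return annotations

    print(annotate("Patient history of hypertension and diabetes.", vocabulary))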

  • Maña, M., Mata, J., Dominguez, J. L., Vaquero, A., Alvarez, F., Gómez Hidalgo, J. M., Gachet Páez, D., & Buenaga, M.. (2006). Los proyectos sinamed e isis: mejoras en el acceso a la información biomédica mediante la integración de generación de resúmenes, categorización automática de textos y ontologías. .
    [BibTeX] [Abstract] [Google Scholar]
    Los sistemas inteligentes de acceso a la información están integrando de manera creciente técnicas de minería de texto y de análisis del contenido, y recursos semánticos como las ontologías. En los proyectos ISIS y SINAMED juegan un papel central la utilización de categorización de texto, la extracción automática de resúmenes y las ontologías, para la mejora del acceso a la información en un dominio biomédico específico: los historiales clínicos de pacientes y la información científica biomédica asociada. En el desarrollo de los dos proyectos participa un consorcio formado por grupos de investigación de tres universidades (Universidad Europea de Madrid, Universidad de Huelva, Universidad Complutense de Madrid), un hospital (Hospital de Fuenlabrada, Madrid), y una compañía de desarrollo de software (Bitext).

    @INPROCEEDINGS{Mana2006,
    author = {Maña , Manuel and Mata , Jacinto and Dominguez , Juan L. and Vaquero , Antonio and Alvarez , Francisco and Gómez Hidalgo , José María and Gachet Páez, Diego and Buenaga , Manuel},
    title = {Los proyectos SINAMED e ISIS: Mejoras en el Acceso a la Información Biomédica mediante la integración de Generación de Resúmenes, Categorización Automática de Textos y Ontologías},
    year = {2006},
    volume = {37},
    abstract = {Los sistemas inteligentes de acceso a la información están integrando de manera creciente técnicas de minería de texto y de análisis del contenido, y recursos semánticos como las ontologías. En los proyectos ISIS y SINAMED juegan un papel central la utilización de categorización de texto, la extracción automática de resúmenes y las ontologías, para la mejora del acceso a la información en un dominio biomédico específico: los historiales clínicos de pacientes y la información científica biomédica asociada. En el desarrollo de los dos proyectos participa un consorcio formado por grupos de investigación de tres universidades (Universidad Europea de Madrid, Universidad de Huelva, Universidad Complutense de Madrid), un hospital (Hospital de Fuenlabrada, Madrid), y una compañía de desarrollo de software (Bitext).},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Los+proyectos+SINAMED+e+ISIS%3A+Mejoras+en+el+Acceso+a+la+Informaci%C3%B3n+Biom%C3%A9dica+mediante+la+integraci%C3%B3n+de+Generaci%C3%B3n+de+Res%C3%BAmenes%2C+Categorizaci%C3%B3n+Autom%C3%A1tica+de+Textos+y+Ontolog%C3%ADas&btnG=&hl=es&as_sdt=0}
    }

  • Maña López, M. J., Ureña López, L. A., & Buenaga Rodríguez, M.. (2000). Tareas de análisis del contenido textual para la recuperación de información con realimentación. Procesamiento de lenguaje natural, 26, 215-222.
    [BibTeX] [Abstract] [Google Scholar]
    La utilización de realimentación es una de las técnicas que proporciona mejoras más significativas en la efectividad del proceso de recuperación de información. Por otra parte, cada vez se utilizan en el proceso de recuperación de información, técnicas más avanzadas de análisis del contenido textual con vistas a mejorar la efectividad. En nuestro trabajo estudiamos los beneficios que proporciona la integración de mecanismos de análisis del contenido al utilizar la realimentación en el proceso de recuperación de información. Nos centramos en dos tareas de análisis: desambiguación de palabras y generación de resúmenes, presentando una sistemática para su utilización y experimentos asociados para la evaluación de las mejoras conseguidas.

    @INCOLLECTION{ManaLopez2000,
    author = {Maña López , Manuel J. and Ureña López , Luis Alfonso and Buenaga Rodríguez , Manuel},
    title = {Tareas de análisis del contenido textual para la recuperación de información con realimentación},
    year = {2000},
    volume = {26},
    pages = {215-222},
    month = {September},
    abstract = {La utilización de realimentación es una de las técnicas que proporciona mejoras más significativas en la efectividad del proceso de recuperación de información. Por otra parte, cada vez se utilizan en el proceso de recuperación de información, técnicas más avanzadas de análisis del contenido textual con vistas a mejorar la efectividad. En nuestro trabajo estudiamos los beneficios que proporciona la integración de mecanismos de análisis del contenido al utilizar la realimentación en el proceso de recuperación de información. Nos centramos en dos tareas de análisis: desambiguación de palabras y generación de resúmenes, presentando una sistemática para su utilización y experimentos asociados para la evaluación de las mejoras conseguidas.},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Tareas+de+an%C3%A1lisis+del+contenido+textual+para+la+recuperaci%C3%B3n+de+informaci%C3%B3n+con+realimentaci%C3%B3n&btnG=&hl=es&as_sdt=0}
    }
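
    The feedback mechanism underlying this line of work is classically formulated as Rocchio query reformulation. A minimal sketch of that standard formula (not this paper's exact configuration; the vectors and weights are illustrative):

    # Rocchio feedback: q' = a*q + b*mean(relevant) - g*mean(non-relevant).
    import numpy as np

    def rocchio(query, relevant, nonrelevant, a=1.0, b=0.75, g=0.15):
        q = a * query
        if len(relevant):
            q = q + b * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q = q - g * np.mean(nonrelevant, axis=0)
        return np.clip(q, 0, None)  # keep term weights non-negative

    query = np.array([1.0, 0.0, 0.0])
    rel = np.array([[0.8, 0.6, 0.0]])
    nonrel = np.array([[0.0, 0.0, 0.9]])
    print(rocchio(query, rel, nonrel))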

  • Maña López, M. J., Buenaga, M., & Gómez Hidalgo, J. M.. (2004). Multidocument summarization: an added value to clustering in interactive retrieval. Acm trans. inf. syst., 22(2), 215-241.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    A more and more generalized problem in effective information access is the presence in the same corpus of multiple documents that contain similar information. Generally, users may be interested in locating, for a topic addressed by a group of similar documents, one or several particular aspects. This kind of task, called instance or aspectual retrieval, has been explored in several TREC Interactive Tracks. In this article, we propose, in addition to the classification capacity of clustering techniques, the possibility of offering an indicative extract about the contents of several sources by means of multidocument summarization techniques. Two kinds of summaries are provided. The first one covers the similarities of each cluster of documents retrieved. The second one shows the particularities of each document with respect to the common topic in the cluster. The document multitopic structure has been used in order to determine similarities and differences of topics in the cluster of documents. The system is independent of document domain and genre. An evaluation of the proposed system with users proves significant improvements in effectiveness. The results of previous experiments that have compared clustering algorithms are also reported.

    @ARTICLE{ManaLopez2004,
    author = {Maña López , Manuel J. and Buenaga , Manuel and Gómez Hidalgo , José María},
    title = {Multidocument summarization: An added value to clustering in interactive retrieval},
    journal = {ACM Trans. Inf. Syst.},
    year = {2004},
    volume = {22},
    pages = {215-241},
    number = {2},
    month = {April},
    abstract = {A more and more generalized problem in effective information access is the presence in the same corpus of multiple documents that contain similar information. Generally, users may be interested in locating, for a topic addressed by a group of similar documents, one or several particular aspects. This kind of task, called instance or aspectual retrieval, has been explored in several TREC Interactive Tracks. In this article, we propose, in addition to the classification capacity of clustering techniques, the possibility of offering an indicative extract about the contents of several sources by means of multidocument summarization techniques. Two kinds of summaries are provided. The first one covers the similarities of each cluster of documents retrieved. The second one shows the particularities of each document with respect to the common topic in the cluster. The document multitopic structure has been used in order to determine similarities and differences of topics in the cluster of documents. The system is independent of document domain and genre. An evaluation of the proposed system with users proves significant improvements in effectiveness. The results of previous experiments that have compared clustering algorithms are also reported.},
    doi = {10.1145/984321.984323},
    issn = {1046-8188},
    shorttitle = {Multidocument summarization},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Multidocument+summarization%3A+An+added+value+to+clustering+in+interactive+retrieval&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
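
    The two summary types described above can be approximated with sentence-to-centroid similarity: sentences close to the cluster centroid cover the shared topic, and each document's least central sentence hints at its particularities. A rough sketch assuming scikit-learn, with invented toy documents:

    # Common content = sentence closest to the cluster centroid;
    # particularities = each document's sentence farthest from it.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [["The volcano erupted on Monday.", "Flights were cancelled."],
            ["The volcano erupted this week.", "Farmers lost their crops."]]
    sentences = [s for doc in docs for s in doc]

    vec = TfidfVectorizer().fit(sentences)
    S = vec.transform(sentences)
    centroid = np.asarray(S.mean(axis=0))
    sims = cosine_similarity(S, centroid).ravel()

    print("common:", sentences[int(sims.argmax())])
    for doc in docs:
        local = cosine_similarity(vec.transform(doc), centroid).ravel()
        print("particular:", doc[int(local.argmin())])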

  • Maña López, M. J., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1998). Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información.. Xiv congreso de la sociedad española de procesamiento de lenguaje natural(23), 32-39.
    [BibTeX] [Abstract] [Google Scholar]
    En este trabajo presentamos un generador de resúmenes que incorpora el modelado de las necesidades de información del usuario con el fin de crear resúmenes adaptados a las mismas. Los resúmenes se generan mediante la extracción de las frases que resultan mejor puntuadas bajo tres criterios: palabras clave, localización y título. El modelado del usuario se consigue a partir de las consultas a un sistema de Recuperación de Información y de la expansión de las mismas utilizando WordNet. Se presenta también un método de evaluación sistemático y objetivo que nos permite comparar la eficacia de los distintos tipos de resúmenes generados. Los resultados demuestran la mayor eficacia de los resúmenes adaptados a las consultas y los de aquellos que emplean WordNet.

    @OTHER{ManaLopez1998,
    abstract = {En este trabajo presentamos un generador de resúmenes que incorpora el modelado de las necesidades de información del usuario con el fin de crear resúmenes adaptados a las mismas. Los resúmenes se generan mediante la extracción de las frases que resultan mejor puntuadas bajo tres criterios: palabras clave, localización y título. El modelado del usuario se consigue a partir de las consultas a un sistema de Recuperación de Información y de la expansión de las mismas utilizando WordNet. Se presenta también un método de evaluación sistemático y objetivo que nos permite comparar la eficacia de los distintos tipos de resúmenes generados. Los resultados demuestran la mayor eficacia de los resúmenes adaptados a las consultas y los de aquellos que emplean WordNet.},
    author = {Maña López , Manuel J. and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    editor = {Procesamiento del Lenguaje Natural},
    journal = {XIV Congreso de la Sociedad Española de Procesamiento de Lenguaje Natural},
    number = {23},
    pages = {32-39},
    title = {Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+y+evaluaci%C3%B3n+de+un+generador+de+texto+con+modelado+de+usuario+en+un+entorno+de+recuperaci%C3%B3n+de+informaci%C3%B3n.&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

  • Maña López, M. J., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1999). Using and evaluating user directed summaries to improve information access. Third european conference on research and advanced technology for digital libraries, 1696, 198-214.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Textual information available has grown so much as to make it necessary to study new techniques that assist users in information access (IA). In this paper, we propose utilizing a user directed summarization system in an IA setting for helping users to decide about document relevance. The summaries are generated using a sentence extraction method that scores the sentences performing some heuristics employed successfully in previous works (keywords, title and location). User modeling is carried out exploiting user’s query to an IA system and expanding query terms using WordNet. We present an objective and systematic evaluation method oriented to measure the summary effectiveness in two IA significant tasks: ad hoc retrieval and relevance feedback. Results obtained prove our initial hypothesis, i.e., user adapted summaries are a useful tool assisting users in an IA context.

    @OTHER{ManaLopez1999,
    abstract = {Textual information available has grown so much as to make it necessary to study new techniques that assist users in information access (IA). In this paper, we propose utilizing a user directed summarization system in an IA setting for helping users to decide about document relevance. The summaries are generated using a sentence extraction method that scores the sentences performing some heuristics employed successfully in previous works (keywords, title and location). User modeling is carried out exploiting user’s query to an IA system and expanding query terms using WordNet. We present an objective and systematic evaluation method oriented to measure the summary effectiveness in two IA significant tasks: ad hoc retrieval and relevance feedback. Results obtained prove our initial hypothesis, i.e., user adapted summaries are a useful tool assisting users in an IA context.},
    author = {Maña López , Manuel J. and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    booktitle = {Research and Advanced Technology for Digital Libraries},
    doi = {10.1007/3-540-48155-9_14},
    journal = {Third European Conference on Research and Advanced Technology for Digital Libraries},
    pages = {198-214},
    publisher = {Springer Berlin Heidelberg},
    series = {Lecture Notes in Computer Science},
    title = {Using and Evaluating User Directed Summaries to Improve Information Access},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Using+and+Evaluating+User+Directed+Summaries+to+Improve+Information+Access&btnG=&hl=es&as_sdt=0},
    volume = {1696},
    year = {1999}
    }
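
    The extraction heuristics named in both abstracts (keywords, title, location) reduce to a small scoring function. The sketch below is a minimal illustration under invented weights, with the user's query standing in for the user model; the WordNet expansion step is represented only by a precomputed set of extra query terms.

      # Minimal sketch of query-adapted sentence extraction (invented weights).
      def score_sentence(sentence, index, n_sentences, title_words, query_words):
          words = set(sentence.lower().split())
          keyword = len(words & query_words)          # query/keyword overlap
          title = len(words & title_words)            # overlap with the title
          location = 1.0 if index in (0, n_sentences - 1) else 0.0
          return 2.0 * keyword + 1.0 * title + 0.5 * location

      def summarize(text, title, query_words, k=2):
          sentences = [s.strip() for s in text.split(".") if s.strip()]
          title_words = set(title.lower().split())
          ranked = sorted(
              enumerate(sentences),
              key=lambda p: score_sentence(p[1], p[0], len(sentences),
                                           title_words, query_words),
              reverse=True,
          )
          # Keep the k best sentences, restored to document order.
          return ". ".join(s for _, s in sorted(ranked[:k])) + "."

      # Query terms expanded (here, by hand) the way WordNet synonyms would be.
      query = {"summary", "summaries", "abstract", "extract"}
      print(summarize("Users face long texts. Summaries help. Weather was fine.",
                      "User directed summaries", query))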

  • Molina, M., & Flores, V.. (2006). Generating adaptive presentations of hydrologic behavior. Paper presented at the Ideal.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.

    @inproceedings{DBLP:conf/ideal/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.},
    title = {Generating Adaptive Presentations of Hydrologic Behavior},
    booktitle = {IDEAL},
    year = {2006},
    pages = {896-903},
    doi = {10.1007/11875581_107},
    url = {http://scholar.google.es/scholar?q=allintitle%3AGenerating+Adaptive+Presentations+of+Hydrologic+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2006). A knowledge-based approach for automatic generation of summaries of behavior. Paper presented at the Aimsa.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.

    @inproceedings{DBLP:conf/aimsa/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.},
    title = {A Knowledge-Based Approach for Automatic Generation of Summaries of Behavior},
    booktitle = {AIMSA},
    year = {2006},
    pages = {265-274},
    doi = {10.1007/11861461_28},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+Knowledge-Based+Approach+for+Automatic+Generation+of+Summaries+of+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2008). A presentation model for multimedia summaries of behavior. Paper presented at the Iui.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.

    @inproceedings{DBLP:conf/iui/MolinaF08,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.},
    title = {A presentation model for multimedia summaries of behavior},
    booktitle = {IUI},
    year = {2008},
    pages = {369-372},
    doi = {10.1145/1378773.1378832},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+presentation+model+for+multimedia+summaries+of+behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2012). Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach. Expert syst. appl., 39(3), 2759-2770.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general enough to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.

    @article{DBLP:journals/eswa/MolinaF12,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general enough to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.},
    title = {Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach},
    journal = {Expert Syst. Appl.},
    volume = {39},
    number = {3},
    year = {2012},
    pages = {2759-2770},
    doi = {10.1016/j.eswa.2011.08.135},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AGenerating+multimedia+presentations+that+summarize+the+behavior+of+dynamic+systems+using+a+model-based+approach&btnG=&lr=}
    }
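
    The hierarchical planner with discourse patterns that recurs across these Molina & Flores papers can be pictured as HTN-style decomposition: abstract patterns expand, conditioned on the user and device, until only primitive presentation fragments (text, 2D graphic, 3D animation) remain. The sketch below is a loose illustration with invented pattern names and a single device test, not the MSB model itself.

      # HTN-style decomposition of a presentation goal into fragments.
      PATTERNS = {
          "summarize_behavior": lambda ctx: ["describe_event", "show_evolution"],
          "describe_event": lambda ctx: ["text:event_summary"],
          "show_evolution": lambda ctx: (
              ["graphic2d:levels"] if ctx["device"] == "phone"
              else ["animation3d:terrain", "graphic2d:levels"]
          ),
      }

      def plan(goal, ctx):
          if goal in PATTERNS:                  # abstract pattern: decompose
              steps = []
              for subgoal in PATTERNS[goal](ctx):
                  steps.extend(plan(subgoal, ctx))
              return steps
          return [goal]                         # primitive fragment: emit

      print(plan("summarize_behavior", {"device": "phone"}))
      # -> ['text:event_summary', 'graphic2d:levels']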

  • Moreno, L., Salichs, M. A., Gachet Páez, D., Pimentel, J., Arroyo, F., & Gonzalo, A.. (1996). Neural network for robotic control. In Zalzala, A. M. S., & Morris, A. S. (Ed.), (pp. 137-161). Upper Saddle River, NJ, USA: Ellis Horwood.
    [BibTeX] [Google Scholar]
    @incollection{Moreno:1996:NNM:222047.222061,
    author = {Moreno, L. and Salichs, M. A. and Gachet Páez, Diego and Pimentel, J. and Arroyo, F. and Gonzalo, A.},
    chapter = {Neural networks for mobile robot piloting control},
    title = {Neural network for robotic control},
    editor = {Zalzala, Ali M. S. and Morris, A. S.},
    year = {1996},
    isbn = {0-13-119892-0},
    pages = {137--161},
    numpages = {25},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3A++Neural+networks+for+mobile+robot+piloting+control&btnG=&lr=},
    acmid = {222061},
    publisher = {Ellis Horwood},
    address = {Upper Saddle River, NJ, USA},
    }

  • Muñoz Gil, R., Aparicio Galisteo, F., & Buenaga Rodríguez, M.. (2012). Sistema de acceso a la información basado en conceptos utilizando freebase en español-inglés sobre el dominio médico y turístico. Procesamiento de lenguaje natural, 49.
    [BibTeX] [Abstract] [Google Scholar]
    En este artículo presentamos una herramienta de acceso a la información, basado en los conceptos, enfocada tanto a textos médicos como turísticos. Usando técnicas para el marcado de entidades reconocidas, el sistema permite extraer conceptos relevantes para aportar más información sobre ellos utilizando bases de conocimiento colaborativas y ontologías. Componentes especialmente interesantes para el desarrollo del sistema son Freebase, una gran base de conocimiento colaborativa, además de recursos formales como MedlinePlus y PubMed. La arquitectura del sistema ha sido construida pensando en términos de escalabilidad, para constituir una gran plataforma de integración de información, con los siguientes objetivos: permitir la integración de diferentes técnicas de procesamiento de lenguaje natural y ampliar las fuentes desde las que se extrae información, así como facilitar la integración de nuevas interfaces de usuario.

    @ARTICLE{MunozGil2012,
    author = {Muñoz Gil , Rafael and Aparicio Galisteo , Fernando and Buenaga Rodríguez , Manuel},
    title = {Sistema de Acceso a la Información basado en conceptos utilizando Freebase en Español-Inglés sobre el dominio Médico y Turístico},
    journal = {Procesamiento de Lenguaje Natural},
    year = {2012},
    volume = {49},
    abstract = {En este artículo presentamos una herramienta de acceso a la información, basado en los conceptos, enfocada tanto a textos médicos como turísticos. Usando técnicas para el marcado de entidades reconocidas, el sistema permite extraer conceptos relevantes para aportar más información sobre ellos utilizando bases de conocimiento colaborativas y ontologías. Componentes especialmente interesantes para el desarrollo del sistema son Freebase, una gran base de conocimiento colaborativa, además de recursos formales como MedlinePlus y PubMed. La arquitectura del sistema ha sido construida pensando en términos de escalabilidad, para constituir una gran plataforma de integración de información, con los siguientes objetivos: permitir la integración de diferentes técnicas de procesamiento de lenguaje natural y ampliar las fuentes desde las que se extrae información, así como facilitar la integración de nuevas interfaces de usuario.},
    issn = {1135-5948},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Sistema+de+Acceso+a+la+Informaci%C3%B3n+basado+en+conceptos+utilizando+Freebase+en+Espa%C3%B1ol-Ingl%C3%A9s+sobre+el+dominio+M%C3%A9dico+y+Tur%C3%ADstico&btnG=&hl=es&as_sdt=0}
    }
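
    The concept-marking step the abstract describes, recognizing entities in a text and enriching them from knowledge sources, can be reduced to a gazetteer lookup at its simplest. The sketch below is a toy illustration; the two-entry table stands in for Freebase, MedlinePlus and the other sources named above.

      # Toy gazetteer: recognized concepts mapped to invented knowledge entries.
      KNOWLEDGE = {
          "hipertensión": {"tipo": "enfermedad", "fuente": "MedlinePlus"},
          "madrid": {"tipo": "destino", "fuente": "Freebase"},
      }

      def mark_concepts(text):
          """Return (token, knowledge entry) pairs found in the text."""
          found = []
          for token in text.lower().split():
              token = token.strip(".,;:¿?¡!")
              if token in KNOWLEDGE:
                  found.append((token, KNOWLEDGE[token]))
          return found

      print(mark_concepts("El paciente con hipertensión viaja a Madrid."))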

  • Muñoz Gil, R., Aparicio, F., Buenaga, M., Gachet Páez, D., Puertas, E., Giráldez, I., & Gaya, M. C.. (2011). Tourist face: a contents system based on concepts of freebase for access to the cultural-tourist information. In Muñoz, R., Montoyo, A., & Métais, E. (Ed.), In Natural language processing and information systems (Vol. 6716, pp. 300-304). Berlin, Heidelberg: Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In more and more application areas large collections of digitized multimedia information are gathered and have to be maintained (e.g. in tourism, medicine, etc). Therefore, there is an increasing demand for tools and techniques supporting the management and usage of digital multimedia data. Furthermore, new large collections of data are available through it every day. In this paper we are presenting Tourist Face, a system aimed at integrating text analyzing techniques into the paradigm of multimedia information, specifically tourist multimedia information. Particularly relevant components to its development are Freebase, a large collaborative base of knowledge, and General Architecture for Text Engineering (GATE), a system for text processing. The platform architecture has been built thinking in terms of scalability, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction can be performed and to ease integration of new user interfaces.

    @INCOLLECTION{MunozGil2011,
    author = {Muñoz Gil , Rafael and Aparicio , Fernando and Buenaga , Manuel and Gachet Páez, Diego and Puertas , Enrique and Giráldez , Ignacio and Gaya , Maria Cruz},
    title = {Tourist Face: A Contents System Based on Concepts of Freebase for Access to the Cultural-Tourist Information},
    booktitle = {Natural Language Processing and Information Systems},
    publisher = {Springer Berlin Heidelberg},
    year = {2011},
    editor = {Muñoz, Rafael and Montoyo, Andrés and Métais, Elisabeth},
    volume = {6716},
    series = {Lecture Notes in Computer Science},
    pages = {300-304},
    address = {Berlin, Heidelberg},
    abstract = {In more and more application areas large collections of digitized multimedia information are gathered and have to be maintained (e.g. in tourism, medicine, etc). Therefore, there is an increasing demand for tools and techniques supporting the management and usage of digital multimedia data. Furthermore, new large collections of data are available through it every day. In this paper we are presenting Tourist Face, a system aimed at integrating text analyzing techniques into the paradigm of multimedia information, specifically tourist multimedia information. Particularly relevant components to its development are Freebase, a large collaborative base of knowledge, and General Architecture for Text Engineering (GATE), a system for text processing. The platform architecture has been built thinking in terms of scalability, with the following objectives: to allow the integration of different natural language processing techniques, to expand the sources from which information extraction can be performed and to ease integration of new user interfaces.},
    doi = {10.1007/978-3-642-22327-3_43},
    isbn = {978-3-642-22326-6},
    shorttitle = {Tourist Face},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Tourist+Face%3A+A+Contents+System+Based+on+Concepts+of+Freebase+for+Access+to+the+Cultural-Tourist+Information&btnG=&hl=es&as_sdt=0}
    }

  • Padrón Nápoles, V. M., Ugarte Suárez, M., Hussain Alanbari, M., & Gachet Páez, D.. (2006). Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales. Grafema.
    [BibTeX] [Google Scholar]
    @BOOK{PadronNapoles2006,
    title = {Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales},
    publisher = {Grafema},
    year = {2006},
    author = {Padrón Nápoles , Víctor Manuel and Ugarte Suárez , Marta and Hussain Alanbari , Mohammad and Gachet Páez , Diego},
    isbn = {9788493422561},
    language = {es},
    url = {http://www.google.es/search?tbm=bks&hl=es&q=Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales&btnG=#hl=es&tbm=bks&sclient=psy-ab&q=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&oq=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&gs_l=serp.3...5065.6500.0.6805.2.2.0.0.0.0.0.0..0.0...0.2...1c.1.6.psy-ab.FXP1zEchBms&pbx=1&bav=on.2,or.r_qf.&bvm=bv.43828540,d.ZGU&fp=b9ef6759e3a8d17e&biw=1366&bih=653}
    }

  • Pimentel, J. R., Salichs, M. A., Gachet Páez, D., & Moreno, L.. (1994). A software development environment for autonomous mobile robots. Paper presented at the 20th international conference on industrial electronics, control and instrumentation, 1994. IECON ’94.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the UPM are described. Portions of this research have been performed under the EEC ESPRIT 2483 Panorama Project.

    @inproceedings{pimentel_software_1994,
    title = {A software development environment for autonomous mobile robots},
    volume = {2},
    doi = {10.1109/IECON.1994.397944},
    abstract = {Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the {UPM} are described. Portions of this research have been performed under the {EEC} {ESPRIT} 2483 Panorama Project},
    booktitle = {20th International Conference on Industrial Electronics, Control and Instrumentation, 1994. {IECON} '94},
    author = {Pimentel, J.R. and Salichs, M.A. and Gachet Páez, Diego and Moreno, L.},
    year = {1994},
    keywords = {Application software, Art, autonomous mobile robots, Control systems, {EEC} {ESPRIT} 2483 {PANORAMA} Project, Hardware, hardware independence, mobile robots, modularity, path planning, Programming, project support environments, reactive control, Real time systems, research initiatives, robot programming, sensor-based mobile robots, software development environment, Software Engineering, system evaluation, Testing, {USA} Councils, Workstations},
    pages = {1094--1099 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+development+environment+for+autonomous+mobile+robots&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Pimentel, J. R., Gachet Páez, D., Moreno, L., & Salichs, M. A.. (1993). Learning to coordinate behaviors for real-time path planning of autonomous systems. Paper presented at the international conference on systems, man and cybernetics, 1993. ‘Systems engineering in the service of humans’, conference proceedings.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present a neural network (NN) system which learns the appropriate simultaneous activation of primitive behaviors in order to execute more complex robot behaviors. The NN implementation is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors in a simultaneous or concurrent fashion. We use a supervised learning technique with a human trainer generating appropriate training for the simultaneous activation of behavior in a simulated environment. The NN implementation has been tested within OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate. Portions of this work have been implemented in the EEC ESPRIT 2483 PANORAMA Project.

    @inproceedings{pimentel_learning_1993,
    title = {Learning to coordinate behaviors for real-time path planning of autonomous systems},
    doi = {10.1109/ICSMC.1993.390770},
    abstract = {We present a neural network ({NN}) system which learns the appropriate simultaneous activation of primitive behaviors in order to execute more complex robot behaviors. The {NN} implementation is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviors in a simultaneous or concurrent fashion. We use a supervised learning technique with a human trainer generating appropriate training for the simultaneous activation of behavior in a simulated environment. The {NN} implementation has been tested within {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate. Portions of this work have been implemented in the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {International Conference on Systems, Man and Cybernetics, 1993. 'Systems Engineering in the Service of Humans', Conference Proceedings},
    author = {Pimentel, J.R. and Gachet Páez, Diego and Moreno, L. and Salichs, M.A.},
    year = {1993},
    keywords = {autonomous systems, Electronic mail, {ESPRIT} 2483 {PANORAMA} Project, Humans, learning (artificial intelligence), mobile robot, mobile robots, neural nets, neural network, Neural networks, {OPMOR}, Orbital robotics, path planning, primitive behaviors, Real time systems, real-time path planning, real-time systems, robot behavior coordination, Robot kinematics, Robot sensing systems, simulation, simulation environment, supervised learning, Testing},
    pages = {541--546 vol.4},
    url = {http://scholar.google.es/scholar?q=Learning+to+coordinate+behaviors+for+real-time+path+planning+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }
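
    The supervised fusion idea in this and the 1992 paper, learning from trainer examples how strongly to activate each primitive behavior and blending their commands, fits in a few lines of numpy. Everything below (the features, behaviors, training data, and least-squares in place of the original neural network) is invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      # Sensor features: [front_distance, angle_to_goal, current_speed].
      X = rng.random((100, 3))
      # Trainer-labelled activations for [avoid_obstacle, go_to_goal].
      Y = np.c_[1.0 - X[:, 0], np.abs(X[:, 1])]

      # Least-squares stands in for the paper's neural network training.
      W = np.linalg.lstsq(X, Y, rcond=None)[0]

      def fuse(features, primitives):
          """Blend primitive (turn, speed) commands by learned activations."""
          activations = np.clip(features @ W, 0.0, 1.0)
          commands = np.array([p(features) for p in primitives])
          return activations @ commands / max(activations.sum(), 1e-9)

      avoid = lambda f: np.array([-0.5, 0.2])   # turn away, slow down
      seek = lambda f: np.array([0.3, 1.0])     # turn to goal, full speed
      print(fuse(np.array([0.2, 0.8, 0.5]), [avoid, seek]))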

  • Prieto, M. L., Aparicio, F., Buenaga, M., Gachet Páez, D., & Gaya, M. C.. (2013). Cross-lingual intelligent information access system from clinical cases using mobile devices. Procesamiento del lenguaje natural, 50, 85-92.
    [BibTeX] [Abstract] [Google Scholar]
    Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which performs the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities. On recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to originally extracted and search for scientific publications from MEDLINE/PubMed.

    @article{PLN4663,
    author = {Prieto , Maria Lorena and Aparicio , Fernando and Buenaga , Manuel and Gachet Páez, Diego and Gaya, Maria Cruz},
    title = {Cross-lingual intelligent information access system from clinical cases using mobile devices},
    journal = {Procesamiento del Lenguaje Natural},
    volume = {50},
    number = {0},
    pages = {85-92},
    year = {2013},
    keywords = {},
    abstract = {Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which performs the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities.
    On recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to originally extracted and search for scientific publications from MEDLINE/PubMed.},
    issn = {1989-7553},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACross-lingual+intelligent+information+access+system+from+clinical+cases+using++mobile+devices&btnG=&hl=es&as_sdt=0%2C5}}

  • Puente, E. A., Gachet Páez, D., Pimentel, J. R., Moreno, L., & Salichs, M. A.. (1992). A neural network supervisor for behavioral primitives of autonomous systems. Paper presented at the proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate.

    @inproceedings{puente_neural_1992,
    title = {A neural network supervisor for behavioral primitives of autonomous systems},
    doi = {10.1109/IECON.1992.254457},
    abstract = {The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Puente, E. A. and Gachet Páez, Diego and Pimentel, J.R. and Moreno, L. and Salichs, M.A.},
    year = {1992},
    keywords = {Actuators, Automatic control, autonomous systems, behavioral primitives, Control systems, Electronic mail, Engineering management, fusion supervisor, learning (artificial intelligence), mobile robot tasks, mobile robots, Navigation, neural nets, neural network supervisor, Neural networks, {OPMOR}, Robot kinematics, simulation environment, Testing, training},
    pages = {1105--1109 vol.2},
    url = {http://scholar.google.es/scholar?q=A+neural+network+supervisor+for+behavioral+primitives+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Puente, E. A., Moreno, L., Salichs, M. A., & Gachet Páez, D.. (1991). Analysis of data fusion methods in certainty grids application to collision danger monitoring. Paper presented at the 1991 international conference on industrial electronics, control and instrumentation, 1991. proceedings. IECON ’91.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared.

    @inproceedings{puente_analysis_1991,
    title = {Analysis of data fusion methods in certainty grids application to collision danger monitoring},
    doi = {10.1109/IECON.1991.239281},
    abstract = {The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared},
    booktitle = {1991 International Conference on Industrial Electronics, Control and Instrumentation, 1991. Proceedings. {IECON} '91},
    author = {Puente, E. A. and Moreno, L. and Salichs, M.A. and Gachet Páez, Diego},
    year = {1991},
    keywords = {artificial intelligence, autonomous mobile robots, Buildings, certainty grids, collision danger monitoring, Data analysis, data fusion, Dempster-Shafer theory of evidence, Fuses, Geometry, mobile robots, monitoring, multidimensional tessellation, Navigation, probabilistic estimation, probability, Recursive estimation, Remotely operated vehicles, Sensor fusion, signal processing, State estimation},
    pages = {1133--1137 vol.2},
    url = {http://scholar.google.es/scholar?q=Analysis+of+data+fusion+methods+in+certainty+grids+application+to+collision+danger+monitoring+&btnG=&hl=es&as_sdt=0%2C5}
    }
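
    Of the two cell-occupancy models the abstract compares, the probabilistic one has a particularly compact form when each cell stores log-odds. The sketch below is a generic textbook-style update with an invented sensor model, not the paper's analysis; the Dempster-Shafer alternative would instead combine belief masses per cell.

      import math

      def log_odds(p):
          return math.log(p / (1.0 - p))

      def update_cell(cell, hit, p_hit=0.7, p_miss=0.4):
          """Bayesian log-odds update of one grid cell from a range reading."""
          return cell + log_odds(p_hit if hit else p_miss)

      def occupancy(cell):
          """Recover p(occupied) from the accumulated log-odds."""
          return 1.0 - 1.0 / (1.0 + math.exp(cell))

      cell = 0.0                          # prior: p(occupied) = 0.5
      for reading in [True, True, False, True]:
          cell = update_cell(cell, reading)
      print(f"p(occupied) = {occupancy(cell):.2f}")   # about 0.89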

  • Puertas, E., Prieto, M. L., & de Buenaga, M.. (2013). Mobile application for accessing biomedical information using linked open data. Paper presented at the MobileMed 2013 conference.
    [BibTeX] [Abstract] [Google Scholar]
    This paper aims to introduce a mobile application for accessing biomedical information extracted from public open resources like Freebase, DBPedia or PubMed. Our app exploits the interlinked feature of those sources for easing the access to heterogeneous resources. App was developed using HTML5 and Javascript and then it was compiled to different platforms like Android or iOS.

    @inproceedings{MobMedEPuertas,
    author = {Puertas, Enrique and Prieto, Maria Lorena and Buenaga, Manuel de},
    abstract = {This paper aims to introduce a mobile application for accessing biomedical information extracted from public open resources like Freebase, DBPedia or PubMed. Our app exploits the interlinked feature of those sources for easing the access to heterogeneous resources. App was developed using HTML5 and Javascript and then it was compiled to different platforms like Android or iOS.},
    title = {Mobile Application for Accessing Biomedical Information Using Linked Open Data},
    booktitle = {Proceedings from the mobilemed 2013 conference},
    year = {2013},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMobile+Application+for+Accessing+Biomedical+Information+Using+Linked+Open+Data&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Puertas Sanz, E., Gómez Hidalgo, J. M., & Cortizo Pérez, J. C.. (2008). Email spam filtering. In Zelkowitz, M. V. (Ed.), In Advances in computers (Vol. 74, pp. 45-114). Elsevier.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them to be cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, that best suits classification problems in which target conditions are not known, as it is the case. We also describe how actual filters are used in practice. We also present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers.

    @INCOLLECTION{PuertasSanz2008,
    author = {Puertas Sanz , Enrique and Gómez Hidalgo , José María and Cortizo Pérez , José Carlos},
    title = {Email Spam Filtering},
    booktitle = {Advances in Computers},
    publisher = {Elsevier},
    year = {2008},
    editor = {Marvin V. Zelkowitz},
    volume = {74},
    chapter = {3},
    pages = {45-114},
    abstract = {In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them to be cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, that best suits classification problems in which target conditions are not known, as it is the case. We also describe how actual filters are used in practice. We also present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers.},
    doi = {10.1016/S0065-2458(08)00603-7},
    isbn = {0065-2458},
    shorttitle = {Software Development},
    url = {http://scholar.google.es/scholar?as_q=Email+Spam+Filtering&as_epq=&as_oq=&as_eq=&as_occt=title&as_sauthors=Puertas&as_publication=&as_ylo=&as_yhi=&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }
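
    Of the cost-sensitivity methods listed in the abstract, threshold optimization is the simplest to show. The sketch below sweeps candidate thresholds over held-out classifier scores and keeps the one with the lowest cost-weighted error; the scores, labels and 9:1 cost ratio are invented.

      def pick_threshold(scores, labels, fp_cost=9.0, fn_cost=1.0):
          """Return the threshold minimizing cost-weighted errors (1 = spam)."""
          best_t, best_cost = 0.5, float("inf")
          for t in sorted(set(scores)):
              fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
              fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
              cost = fp_cost * fp + fn_cost * fn
              if cost < best_cost:
                  best_t, best_cost = t, cost
          return best_t

      scores = [0.10, 0.35, 0.40, 0.70, 0.80, 0.95]   # predicted p(spam)
      labels = [0, 0, 0, 1, 1, 1]
      print(pick_threshold(scores, labels))            # 0.7 on this toy data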

  • Salichs, M. A., Puente, E. A., Gachet Páez, D., & Moreno, L.. (1991). Trajectory tracking for a mobile robot-an application to contour following. Paper presented at the 1991 international conference on industrial electronics, control and instrumentation, 1991. proceedings. IECON ’91.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Some control algorithms for the contour following guidance module of a mobile robot are described, and their performance is analyzed. Different approaches such as classical, fuzzy and neural control techniques have been considered in order to optimize and smooth the trajectory of the mobile robot. The module controls a virtual vehicle, by means of two parameters: velocity and curvature. The algorithms have been first simulated and then tested on the UPM mobile platform. The best results have been obtained with classical control and fuzzy control.

    @inproceedings{salichs_trajectory_1991,
    title = {Trajectory tracking for a mobile robot-An application to contour following},
    doi = {10.1109/IECON.1991.239143},
    abstract = {Some control algorithms for the contour following guidance module of a mobile robot are described, and their performance is analyzed. Different approaches such as classical, fuzzy and neural control techniques have been considered in order to optimize and smooth the trajectory of the mobile robot. The module controls a virtual vehicle, by means of two parameters: velocity and curvature. The algorithms have been first simulated and then tested on the {UPM} mobile platform. The best results have been obtained with classical control and fuzzy control},
    booktitle = {1991 International Conference on Industrial Electronics, Control and Instrumentation, 1991. Proceedings. {IECON} '91},
    author = {Salichs, M.A. and Puente, E. A. and Gachet Páez, Diego and Moreno, L.},
    year = {1991},
    keywords = {Algorithm design and analysis, classical control, contour following guidance module, fuzzy control, fuzzy set theory, mobile robot, mobile robots, Navigation, neural control, Performance analysis, position control, Robot control, Testing, tracking, Trajectory, trajectory tracking, Vehicles, Velocity control},
    pages = {1067--1070 vol.2},
    url = {http://scholar.google.es/scholar?q=Trajectory+tracking+for+a+mobile+robot-An+application+to+contour+following&btnG=&hl=es&as_sdt=0%2C5}
    }
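
    The control interface the abstract describes, a guidance module commanding a virtual vehicle through just velocity and curvature, can be illustrated with the simplest of the approaches compared there, a classical proportional law. The gains and the slow-down rule below are invented.

      def contour_controller(wall_distance, d_ref=0.5, k=2.0, v_max=0.8):
          """Proportional law: curvature from wall-distance error (classical)."""
          error = d_ref - wall_distance
          curvature = k * error                      # 1/m, signed toward contour
          velocity = v_max / (1.0 + abs(curvature))  # slow down in tight turns
          return velocity, curvature

      # Too close to the contour: curve away and reduce speed.
      print(contour_controller(0.35))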

  • Salichs, M. A., Puente, E. A., Gachet Páez, D., & Pimentel, J. R.. (1993). Learning behavioral control by reinforcement for an autonomous mobile robot. Paper presented at the international conference on industrial electronics, control, and instrumentation, 1993. proceedings of the IECON ’93.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the AHC (adaptive heuristic critic). The AHC constitutes a fusion supervisor of primitive behaviours in order to execute more complex robot behaviours, as for example go to goal. This fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviours which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level; it incorporates the experience gained in executing primitive behaviours as well as the overall task. The implementation of the autonomous learning approach has been tested within OPMOR, a simulation environment for mobile robots, and with our mobile platform UPM Robuter. Both simulated and real results are presented. The performance of the AHC neural network is adequate. Portions of this work have been implemented in the EEC ESPRIT 2483 PANORAMA Project.

    @inproceedings{salichs_learning_1993,
    title = {Learning behavioral control by reinforcement for an autonomous mobile robot},
    doi = {10.1109/IECON.1993.339280},
    abstract = {We present an implementation of a reinforcement learning algorithm through the use of a special neural network topology, the {AHC} (adaptive heuristic critic). The {AHC} constitutes a fusion supervisor of primitive behaviours in order to execute more complex robot behaviours as for example go to goal. This fusion supervisor is part of an architecture for the execution of mobile robot tasks which are composed of several primitive behaviours which act in a simultaneous or concurrent fashion. The architecture allows for learning to take place at the execution level, it incorporates the experience gained in executing primitive behaviours as well as the overall task. The implementation of the autonomous learning approach has been tested within {OPMOR}, a simulation environment for mobile robots and with our mobile platform {UPM} Robuter. Both simulated and real results are presented. The performance of the {AHC} neural network is adequate. Portions of this work have been implemented in the {EEC} {ESPRIT} 2483 {PANORAMA} Project},
    booktitle = {International Conference on Industrial Electronics, Control, and Instrumentation, 1993. Proceedings of the {IECON} '93},
    author = {Salichs, M.A. and Puente, E. A. and Gachet Páez, Diego and Pimentel, J.R.},
    year = {1993},
    keywords = {adaptive heuristic critic, autonomous mobile robot, behavioral control, {EEC} {ESPRIT} 2483 {PANORAMA} Project, Electronic mail, fusion supervisor, heuristic programming, Intelligent robots, Intelligent sensors, Intelligent systems, learning (artificial intelligence), Learning systems, Machine intelligence, mobile robots, Network topology, neural nets, neural network topology, {OPMOR}, reinforcement learning algorithm, Robot sensing systems, Robot vision systems, simulation environment, {UPM} Robuter},
    pages = {1436--1441 vol.3},
    url = {http://scholar.google.es/scholar?q=Learning+behavioral+control+by+reinforcement+for+an+autonomous+mobile+robot&btnG=&hl=es&as_sdt=0%2C5}
    }
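
    The AHC (adaptive heuristic critic) named in the abstract is an early actor-critic scheme: a critic learns state values by temporal differences, and its TD error reinforces the actor's action preferences. The sketch below is a generic toy version on an invented 5-state chain, not the paper's behavior-fusion network.

      import random

      N, GOAL = 5, 4
      V = [0.0] * N                        # critic: state values
      H = [[0.0, 0.0] for _ in range(N)]   # actor: preferences (left, right)

      def act(state, epsilon=0.1):
          if random.random() < epsilon:    # occasional exploration
              return random.randrange(2)
          return 0 if H[state][0] > H[state][1] else 1

      for _ in range(2000):
          s = 0
          while s != GOAL:
              a = act(s)
              s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
              reward = 1.0 if s2 == GOAL else 0.0
              delta = reward + 0.9 * V[s2] - V[s]   # critic's TD error
              V[s] += 0.1 * delta                   # critic update
              H[s][a] += 0.1 * delta                # actor reinforced by delta
              s = s2

      print([round(v, 2) for v in V])      # values rise toward the goal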

  • Sasián, F., Therón, R., & Gachet Páez, D.. (2014). Protocolo para comunicación inalámbrica de alta eficiencia en instalaciones de energías renovables. In Novática (pp. 33-38). ATI.
    [BibTeX] [Abstract] [Google Scholar]
    Durante estos últimos cuatro años, la industria fotovoltaica (FV) ha tenido que enfrentarse a su primer proceso de consolidación, debido, entre otros factores, a la crisis económica. En esas circunstancias, la FV tiene la necesidad vital de reducir los costes. Una nueva línea de trabajo, la electrónica de potencia empotrada a nivel de módulo (MLPE, Module Level Power Electronics), está en plena expansión y promete aumentar no solo la eficiencia sino también la flexibilidad y la seguridad de los sistemas fotovoltaicos.

    @INCOLLECTION{Gachet2014d,
    author = {Sasián, Felix and Therón, Ricardo and Gachet Páez, Diego},
    title = {Protocolo para comunicación inalámbrica de alta eficiencia en instalaciones de energías renovables},
    booktitle = {Novática},
    publisher = {ATI},
    year = {2014},
    editor = {},
    volume = {},
    series = {},
    pages = {33-38},
    month = {December},
    abstract = {Durante estos últimos cuatro años, la industria fotovoltaica (FV) ha tenido que enfrentarse a su primer proceso de consolidación, debido, entre otros factores, a la crisis económica. En esas circunstancias, la FV tiene la necesidad vital de reducir los costes. Una nueva línea de trabajo, la electrónica de potencia empotrada a nivel de módulo (MLPE, Module Level Power Electronics), está en plena expansión y promete aumentar no solo la eficiencia sino también la flexibilidad y la seguridad de los sistemas fotovoltaicos.},
    copyright = {Open Access},
    doi = {},
    isbn = {02112124},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=0ynMYdoAAAAJ&sortby=pubdate&citation_for_view=0ynMYdoAAAAJ:OU6Ihb5iCvQC},
    urldate = {2014-12-12}
    }

  • Ureña López, L. A., & Buenaga Rodríguez, M.. (1999). Utilizando wordnet para complementar la información de entrenamiento en la identificación del significado de las palabras. Revista iberoamericana de inteligencia artificial, 3(7), 20.
    [BibTeX] [Abstract] [Google Scholar]
    La desambiguación del significado de las palabras se ha desarrollado como una subárea del Procesamiento del Lenguaje Natural (PLN), donde el objetivo es determinar el sentido correcto de aquellas palabras que tienen más de un significado; no es una tarea final en sí misma, sino una tarea intermedia necesaria en variadas aplicaciones del procesamiento del lenguaje natural. La resolución de la ambigüedad de las palabras (WSD) consiste en identificar el sentido correcto de entre los relacionados en un diccionario, una base de datos léxica o similar. Es una tarea compleja, pero muy útil en variadas aplicaciones del procesamiento en lenguaje natural, como Categorización de Texto (TC); traducción automática; restauración de acentos; encaminamiento y filtrado de textos; agrupamiento y segmentación de textos, corrección ortográfica y gramatical, reconocimiento de voz y, en general, en la recuperación de información. Nuestro enfoque integra información de una base de datos léxica (WordNet) con dos enfoques de entrenamiento a través del Modelo del Espacio Vectorial, incrementando la efectividad de la desambiguación. Probamos los enfoques de entrenamiento con los algoritmos de Rocchio y Widrow-Hoff sobre un gran conjunto de documentos con una fina granularidad de sentidos, como son los de WordNet, consiguiendo una alta precisión en la resolución de la ambigüedad léxica, así como una gran efectividad en su ejecución.

    @INCOLLECTION{UrenaLopez1999,
    author = {Ureña López , Luis Alfonso and Buenaga Rodríguez , Manuel},
    title = {Utilizando Wordnet para complementar la información de entrenamiento en la identificación del significado de las palabras},
    year = {1999},
    volume = {3},
    number = {7},
    pages = {20},
    abstract = {La desambiguación del significado de las palabras se ha desarrollado como una subárea del Procesamiento del Lenguaje Natural (PLN), donde el objetivo es determinar el sentido correcto de aquellas palabras que tienen más de un significado; no es una tarea final en sí misma, sino una tarea intermedia necesaria en variadas aplicaciones del procesamiento del lenguaje natural. La resolución de la ambigüedad de las palabras (WSD) consiste en identificar el sentido correcto de entre los relacionados en un diccionario, una base de datos léxica o similar. Es una tarea compleja, pero muy útil en variadas aplicaciones del procesamiento en lenguaje natural, como Categorización de Texto (TC); traducción automática; restauración de acentos; encaminamiento y filtrado de textos; agrupamiento y segmentación de textos, corrección ortográfica y gramatical, reconocimiento de voz y, en general, en la recuperación de información. Nuestro enfoque integra información de una base de datos léxica (WordNet) con dos enfoques de entrenamiento a través del Modelo del Espacio Vectorial, incrementando la efectividad de la desambiguación. Probamos los enfoques de entrenamiento con los algoritmos de Rocchio y Widrow-Hoff sobre un gran conjunto de documentos con una fina granularidad de sentidos, como son los de WordNet, consiguiendo una alta precisión en la resolución de la ambigüedad léxica, así como una gran efectividad en su ejecución.},
    journal = {Revista Iberoamericana de Inteligencia Artificial},
    url = {http://scholar.google.es/scholar?q=allintitle%3AUtilizando+Wordnet+para+complementar+la+informaci%C3%B3n+de+entrenamiento+en+la+identificaci%C3%B3n+del+significado+de+las+palabras&btnG=&hl=es&as_sdt=0}
    }
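
    The training scheme the abstract combines with WordNet can be shown at toy scale: build one Rocchio-style prototype vector per sense from labelled contexts, optionally enriched with WordNet synonyms, and assign a new context to the nearest prototype. Everything below, including the two senses of "bank" and the tiny contexts, is an invented illustration.

      from collections import Counter

      def vector(text):
          return Counter(text.lower().split())

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          na = sum(v * v for v in a.values()) ** 0.5
          nb = sum(v * v for v in b.values()) ** 0.5
          return dot / (na * nb) if na and nb else 0.0

      # Sense-labelled contexts, as if drawn from a tagged corpus and
      # expanded with WordNet synonyms.
      train = {
          "bank#1": ["deposit money account branch", "loan interest account"],
          "bank#2": ["river shore water slope", "muddy river edge"],
      }

      # Rocchio-style prototypes: sum the vectors of each sense's contexts.
      prototypes = {}
      for sense, contexts in train.items():
          proto = Counter()
          for context in contexts:
              proto.update(vector(context))
          prototypes[sense] = proto

      def disambiguate(context):
          v = vector(context)
          return max(prototypes, key=lambda s: cosine(v, prototypes[s]))

      print(disambiguate("she opened an account at the branch"))   # bank#1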

  • Ureña López, L. A., Gómez Hidalgo, J. M., García Vega, M., & Díaz Esteban, A.. (1998). Integrando una base de datos léxica y una colección de entrenamiento para la desambiguación del sentido de las palabras. Xiv congreso de la sociedad española de procesamiento de lenguaje natural, 23.
    [BibTeX] [Abstract] [Google Scholar]
    La resolución de la ambigüedad es una tarea compleja y útil para muchas aplicaciones del procesamiento del lenguaje natural. En concreto, la ambigüedad causa problemas en aplicaciones como: la Recuperación de Información (IR), donde los problemas pueden ser substanciales y ser superados si se utilizan grandes consultas, y la traducción automática, donde es un gran problema inherente. Recientemente han sido varios los enfoques y algoritmos propuestos para realizar esta tarea. Presentamos un nuevo enfoque basado en la integración de varios recursos lingüísticos de dominio público, como una base de datos léxica y una colección de entrenamiento. Nuestro enfoque integra la información de sinonimia de WordNet y la colección de entrenamiento SemCor para incrementar la efectividad de la desambiguación, a través del Modelo del Espacio Vectorial. Hemos probado nuestro enfoque sobre un gran conjunto de documentos con una fina granularidad de sentidos, como son los de WordNet, consiguiendo una alta precisión en la resolución de la ambigüedad léxica.

    @OTHER{UrenaLopez1998,
    abstract = {La resolución de la ambigüedad es una tarea compleja y útil para muchas aplicaciones del procesamiento del lenguaje natural. En concreto, la ambigüedad causa problemas en aplicaciones como: la Recuperación de Información (IR), donde los problemas pueden ser substanciales y ser superados si se utilizan grandes consultas, y la traducción automática, donde es un gran problema inherente. Recientemente han sido varios los enfoques y algoritmos propuestos para realizar esta tarea. Presentamos un nuevo enfoque basado en la integración de varios recursos lingüísticos de dominio público, como una base de datos léxica y una colección de entrenamiento. Nuestro enfoque integra la información de sinonimia de WordNet y la colección de entrenamiento SemCor para incrementar la efectividad de la desambiguación, a través del Modelo del Espacio Vectorial. Hemos probado nuestro enfoque sobre un gran conjunto de documentos con una fina granularidad de sentidos, como son los de WordNet, consiguiendo una alta precisión en la resolución de la ambigüedad léxica.},
    author = {Ureña López, Luis Alfonso and Gómez Hidalgo, José María and García Vega, Manuel and Díaz Esteban, Alberto},
    editor = {Procesamiento del Lenguaje Natural},
    journal = {XIV Congreso de la Sociedad Española de Procesamiento de Lenguaje Natural},
    pages = {23},
    title = {Integrando una Base de Datos Léxica y una Colección de Entrenamiento para la Desambiguación del Sentido de las Palabras},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrando+una+Base+de+Datos+L%C3%A9xica+y+una+Colecci%C3%B3n+de+Entrenamiento+para+la+Desambiguaci%C3%B3n+del+Sentido+de+las+Palabras&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

  • Ureña López, L. A., García Vega, M., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1997). Resolución de la ambigüedad léxica mediante información contextual y el modelo del espacio vectorial. Actas de la VII conferencia de la asociación española para inteligencia artificial, 787-796.
    [BibTeX] [Abstract] [Google Scholar]
    The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach for word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a variable set of terms as local context. We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.

    @OTHER{UrenaLopez1997,
    abstract = {The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach for word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a variable set of terms as local context. We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.},
    author = {Ureña López, Luis Alfonso and García Vega, Manuel and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    journal = {Actas de la VII Conferencia de la Asociación Española para Inteligencia Artificial},
    pages = {787-796},
    title = {Resolución de la Ambigüedad Léxica Mediante Información Contextual y el Modelo del Espacio Vectorial},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Resoluci%C3%B3n+de+la+Ambig%C3%BCedad+L%C3%A9xica+mediante+informaci%C3%B3n+contextual+y+el+modelo+del+espacio+vectorial+&btnG=&hl=es&as_sdt=0},
    year = {1997}
    }

  • Ureña López, L. A., Buenaga Rodríguez, M., García Vega, M., & Gómez Hidalgo, J. M.. (1998). Integrating and evaluating WSD in the adaptation of a lexical database in text categorization task.
    [BibTeX] [Abstract] [Google Scholar]
    Improvement in the accuracy of identifying the correct word sense (WSD) will give better results for many natural language processing tasks. In this paper, we present a new approach using WSD as an aid for Text Categorization (TC). This approach integrates a set of linguistic resources as knowledge sources. Our approach to TC, using the Vector Space Model, integrates two different resources in text content analysis tasks: a lexical database (WordNet) and training collections (Reuters-21578). We present the application of WSD to TC. Specifically, we apply WSD to the process of resolving ambiguity in WordNet categories, complementing the training phases. We have developed experiments to evaluate the improvements obtained by the integration of the resources in the TC task and by the application of WSD in this task, obtaining high accuracy in disambiguating the category senses of WordNet.

    @OTHER{UrenaLopez1998a,
    abstract = {Improvement in the accuracy of identifying the correct word sense (WSD) will give better results for many natural language processing tasks. In this paper, we present a new approach using WSD as an aid for Text Categorization (TC). This approach integrates a set of linguistic resources as knowledge sources. Our approach to TC, using the Vector Space Model, integrates two different resources in text content analysis tasks: a lexical database (WordNet) and training collections (Reuters-21578). We present the application of WSD to TC. Specifically, we apply WSD to the process of resolving ambiguity in WordNet categories, complementing the training phases. We have developed experiments to evaluate the improvements obtained by the integration of the resources in the TC task and by the application of WSD in this task, obtaining high accuracy in disambiguating the category senses of WordNet.},
    author = {Ureña López, Luis Alfonso and Buenaga Rodríguez, Manuel and García Vega, Manuel and Gómez Hidalgo, José María},
    howpublished = {First Workshop on Text},
    title = {Integrating and evaluating WSD in the adaptation of a lexical database in text categorization task},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIntegrating+and+evaluating+WSD+in+the+adaptation+of+a+lexical+database%09in+text+categorization+task&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }

  • Ureña López, A. L., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (2001). Integrating linguistic resources in TC through WSD. Computers and the humanities, 35(2), 215-230.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like Text Categorization, help the users to access the great amount of text they find in the Internet and their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other linguistic resources that are emerging, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make an effective use of it in our model of TC. This filtering process is a Word Sense Disambiguation task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments that show that: TC and WSD based on the integration of linguistic resources are very effective; and, WSD is necessary to effectively integrate linguistic resources in TC.

    @ARTICLE{UrenaLopez2001,
    author = {Ureña López, L. Alfonso and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    title = {Integrating Linguistic Resources in TC through WSD},
    journal = {Computers and the Humanities},
    year = {2001},
    volume = {35},
    pages = {215-230},
    number = {2},
    month = {may},
    abstract = {Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like Text Categorization, help the users to access the great amount of text they find in the Internet and their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other linguistic resources that are emerging, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make an effective use of it in our model of TC. This filtering process is a Word Sense Disambiguation task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments that show that: TC and WSD based on the integration of linguistic resources are very effective; and, WSD is necessary to effectively integrate linguistic resources in TC.},
    doi = {10.1023/A:1002632712378},
    issn = {0010-4817, 1572-8412},
    language = {en},
    publisher = {Kluwer Academic Publishers},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Integrating+linguistic+resources+in+TC+through+WSD&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
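
    A minimal sketch of the integration idea described in this entry, assuming a bag-of-words Vector Space Model: each category prototype blends weights learned from the training collection with weights derived from the lexical database (e.g. WordNet synonyms of the category label), and documents are assigned to the closest prototype by cosine. The blending parameter lam and all names are our own illustration, not the paper's formulation.

    import numpy as np

    def category_vector(train_vec, wordnet_vec, lam=0.5):
        # Blend evidence from the training collection with term weights
        # derived from the lexical database (e.g. WordNet synonyms).
        return lam * train_vec + (1.0 - lam) * wordnet_vec

    def categorize(doc_vec, cat_vectors):
        # Assign the document to the category with highest cosine similarity.
        def cos(a, b):
            n = np.linalg.norm(a) * np.linalg.norm(b)
            return 0.0 if n == 0.0 else float(a @ b) / n
        return max(cat_vectors, key=lambda c: cos(doc_vec, cat_vectors[c]))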

  • Ureña López, L. A., García Vega, M., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1998). Resolución automática de la ambigüedad léxica fundamentada en el modelo del espacio vectorial usando ventana contextual variable.
    [BibTeX] [Abstract] [Google Scholar]
    The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach for word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a contextual window (a variable set of terms as local context). We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.

    @INPROCEEDINGS{UrenaLopez1998b,
    author = {Ureña López, Luis Alfonso and García Vega, Manuel and Buenaga Rodríguez, Manuel and Gómez Hidalgo, José María},
    title = {Resolución Automática de la Ambigüedad Léxica Fundamentada en el Modelo del Espacio Vectorial Usando Ventana Contextual Variable},
    year = {1998},
    abstract = {The resolution of lexical ambiguity of polysemous words is a complex and useful task for many natural language processing applications. We present a new approach for word sense disambiguation based on the vector space model and a widely available training collection as linguistic resource. This approach uses a contextual window (a variable set of terms as local context). We have tested our disambiguation algorithm on a large document collection, achieving high precision in the resolution of lexical ambiguity.},
    journal = {Asociación Española de Lingüística Aplicada},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Resoluci%C3%B3n+Autom%C3%A1tica+de+la+Ambig%C3%BCedad+L%C3%A9xica+Fundamentada+en+el+Modelo+del+Espacio+Vectorial+Usando+Ventana+Contextual+Variable&btnG=&hl=es&as_sdt=0}
    }
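
    The variable contextual window named in this title reduces, in implementation terms, to selecting k terms on each side of the ambiguous occurrence. A one-function sketch (a hypothetical helper of ours, not the paper's code):

    def context_window(tokens, i, k):
        # k terms to the left and right of the ambiguous token at position i;
        # varying k gives the "variable contextual window" of the title.
        return tokens[max(0, i - k):i] + tokens[i + 1:i + 1 + k]

    # e.g. context_window(["el", "banco", "del", "parque"], 1, 1) -> ["el", "del"]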

  • Ureña López, L. A., Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (2000). Information retrieval by means of word sense disambiguation. Third international workshop on text, speech and dialogue, Brno, Czech Republic, 1902, 93-98.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the thesaurus WordNET. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.

    @OTHER{UrenaLopez2000,
    abstract = {The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the thesaurus WordNET. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.},
    author = {Ureña López, Luis Alfonso and Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel},
    booktitle = {Text, Speech and Dialogue},
    doi = {10.1007/3-540-45323-7_16},
    journal = {Third International Workshop on TEXT, SPEECH and DIALOGUE, Brno, Czech Republic},
    month = {September 13-16},
    pages = {93-98},
    title = {Information Retrieval by means of Word Sense Disambiguation},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformation+Retrieval+by+means+of+Word+Sense+Disambiguation&btnG=&hl=es&as_sdt=0},
    volume = {1902},
    year = {2000}
    }
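
    For reference, the Relevance Feedback step that this abstract proposes to enrich with WordNet information is classically the Rocchio update (standard formulation, stated by us, not quoted from the paper), where D_r and D_nr are the documents the user judged relevant and non-relevant:

    \vec{q}\,' = \alpha\,\vec{q}
               + \frac{\beta}{|D_r|}\sum_{\vec{d}\in D_r}\vec{d}
               - \frac{\gamma}{|D_{nr}|}\sum_{\vec{d}\in D_{nr}}\vec{d}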

  • Valverde, R., & Gachet Páez, D.. (2007). Identificación de sistemas dinámicos utilizando redes neuronales RBF. Revista iberoamericana de automática e informática industrial, 32-42.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.

    @OTHER{Valverde2007,
    abstract = {La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.},
    author = {Valverde, Ricardo and Gachet Páez, Diego},
    doi = {10.4995/riai.v4i2.8023},
    journal = {Revista Iberoamericana de Automática e Informática industrial},
    pages = {32-42},
    publisher = {IFAC},
    title = {Identificación de Sistemas Dinámicos Utilizando Redes Neuronales RBF},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIdentificaci%C3%B3n+de+Sistemas+Din%C3%A1micos+Utilizando+Redes+Neuronales+RBF&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }
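
    The on-line, structure-growing scheme this abstract describes is in the spirit of resource-allocating RBF networks. A minimal sketch under our own assumptions (the thresholds, width and learning rate are invented, and this is not the paper's algorithm):

    import numpy as np

    class GrowingRBF:
        # One-output RBF model that grows its hidden layer on-line when the
        # prediction error is large and no existing centre is near the input.
        def __init__(self, width=1.0, err_tol=0.1, dist_tol=0.5, eta=0.05):
            self.centres, self.weights = [], []
            self.width, self.err_tol = width, err_tol
            self.dist_tol, self.eta = dist_tol, eta

        def _phi(self, x):
            return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / self.width ** 2)
                             for c in self.centres])

        def predict(self, x):
            return float(self._phi(x) @ np.array(self.weights)) if self.centres else 0.0

        def update(self, x, y):
            x = np.asarray(x, dtype=float)
            err = y - self.predict(x)
            nearest = min((np.linalg.norm(x - c) for c in self.centres), default=np.inf)
            if abs(err) > self.err_tol and nearest > self.dist_tol:
                self.centres.append(x)      # allocate a new hidden unit
                self.weights.append(err)    # so the output matches y locally
            elif self.centres:
                phi = self._phi(x)
                self.weights = list(np.array(self.weights) + self.eta * err * phi)  # LMS step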

  • Vaquero, A., Saenz, F., Alvarez, F., & Buenaga, M.. (2006). Methodologically designing a hierarchically organized concept-based terminology database to improve access to biomedical documentation. In Meersman, R., Tari, Z., & Herrero, P. (Ed.), In On the move to meaningful internet systems 2006: OTM 2006 workshops (Vol. 4277, pp. 658-668). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.

    @INCOLLECTION{Vaquero2006a,
    author = {Vaquero, Antonio and Saenz, Fernando and Alvarez, Francisco and Buenaga, Manuel},
    title = {Methodologically Designing a Hierarchically Organized Concept-Based Terminology Database to Improve Access to Biomedical Documentation},
    booktitle = {On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Meersman, Robert and Tari, Zahir and Herrero, Pilar},
    volume = {4277},
    series = {Lecture Notes in Computer Science},
    pages = {658-668},
    month = {jan},
    abstract = {Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11915034_90},
    isbn = {978-3-540-48269-7, 978-3-540-48272-7},
    url = {http://scholar.google.es/scholar?q=allintitle%3AMethodologically+Designing+a+Hierarchically+Organized+Concept-Based+Terminology+Database+to+Improve+Access+to+Biomedical+Documentation&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }

  • Vaquero, A., Saenz, F., Alvarez, F., & Buenaga, M.. (2006). Conceptual design for domain and task specific ontology-based linguistic resources. Paper presented at the On the move to meaningful internet systems 2006: CoopIS, DOA, GADA, and ODBASE.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.

    @INPROCEEDINGS{Vaquero2006,
    author = {Vaquero, Antonio and Saenz, Fernando and Alvarez, Francisco and Buenaga, Manuel},
    title = {Conceptual Design for Domain and Task Specific Ontology-Based Linguistic Resources},
    booktitle = {On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE},
    year = {2006},
    volume = {4275},
    series = {Lecture Notes in Computer Science},
    pages = {855-862},
    month = {November},
    publisher = {Springer Berlin Heidelberg},
    abstract = {Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.},
    doi = {10.1007/11914853_52},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConceptual+Design+for+Domain+and+Task+Specific+Ontology-Based+Linguistic+Resources&btnG=&hl=es&as_sdt=0}
    }

  • Vaquero, A., Saenz, F., Álvarez, F., & Buenaga, M.. (2006). Thinking precedes action: using software engineering for the development of a terminology database to improve access to biomedical documentation. In Maglaveras, N., Chouvarda, I., Koutkias, V., & Brause, R. (Ed.), In Biological and medical data analysis (Vol. 4345, pp. 207-218). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of software and database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.

    @INCOLLECTION{Vaquero2006b,
    author = {Vaquero, Antonio and Saenz, Fernando and Álvarez, Francisco and Buenaga, Manuel},
    title = {Thinking Precedes Action: Using Software Engineering for the Development of a Terminology Database to Improve Access to Biomedical Documentation},
    booktitle = {Biological and Medical Data Analysis},
    publisher = {Springer Berlin Heidelberg},
    year = {2006},
    editor = {Maglaveras, Nicos and Chouvarda, Ioanna and Koutkias, Vassilis and Brause, Rüdiger},
    volume = {4345},
    series = {Lecture Notes in Computer Science},
    pages = {207-218},
    month = {jan},
    abstract = {Relational databases have been used to represent lexical knowledge since the days of machine-readable dictionaries. However, although software engineering provides a methodological framework for the construction of databases, most developing efforts focus on content, implementation and time-saving issues, and forget about the software engineering aspects of software and database construction. We have defined a methodology for the development of lexical resources that covers this and other aspects, by following a sound software engineering approach to formally represent knowledge. Nonetheless, the conceptual model from which it departs has some major limitations that need to be overcome. Based on a short analysis of common problems in existing lexical resources, we present an upgraded conceptual model as a first step towards the methodological development of a hierarchically organized concept-based terminology database, to improve the access to medical information as part of the SINAMED and ISIS projects.},
    copyright = {©2006 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/11946465_19},
    isbn = {978-3-540-68063-5, 978-3-540-68065-9},
    shorttitle = {Thinking Precedes Action},
    url = {http://scholar.google.es/scholar?q=allintitle%3AThinking+Precedes+Action%3A+Using+Software+Engineering+for+the+Development+of+a+Terminology+Database+to+Improve+Access+to+Biomedical+Documentation&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
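
    A hierarchically organized, concept-based terminology database of the kind discussed in these papers can be pictured with a small relational schema: meanings live in a concept table, terms in several languages point at concepts, and typed relations hold between concepts rather than between terms. This is our guess at the shape such a database could take; table and column names are hypothetical, not SINAMED's or ISIS's.

    import sqlite3

    ddl = """
    CREATE TABLE concept  (id INTEGER PRIMARY KEY,
                           definition TEXT NOT NULL);      -- one row per meaning
    CREATE TABLE term     (id INTEGER PRIMARY KEY,
                           lemma TEXT NOT NULL,
                           lang  TEXT NOT NULL,
                           concept_id INTEGER NOT NULL REFERENCES concept(id));
    CREATE TABLE relation (parent_id INTEGER NOT NULL REFERENCES concept(id),
                           child_id  INTEGER NOT NULL REFERENCES concept(id),
                           kind TEXT NOT NULL CHECK (kind IN ('is-a', 'part-of')));
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)   # relations link concepts, never raw terms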

  • de la Villa, M., Aparicio, F., Maña, M. J., & Buenaga, M.. (2012). A learning support tool with clinical cases based on concept maps and medical entity recognition. Paper presented at the Proceedings of the 2012 ACM international conference on intelligent user interfaces, New York, NY, USA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The search for truthful health information through Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.

    @INPROCEEDINGS{Villa2012,
    author = {de la Villa, Manuel and Aparicio, Fernando and Maña, Manuel J. and Buenaga, Manuel},
    title = {A learning support tool with clinical cases based on concept maps and medical entity recognition},
    booktitle = {Proceedings of the 2012 ACM international conference on Intelligent User Interfaces},
    year = {2012},
    series = {IUI '12},
    pages = {61-70},
    address = {New York, NY, USA},
    publisher = {ACM},
    abstract = {The search for truthful health information through Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.},
    doi = {10.1145/2166966.2166978},
    isbn = {978-1-4503-1048-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+A+learning+support+tool+with+clinical+cases+based+on+concept+maps+and+medical+entity+recognition&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}}


TÍTULO


  • Molina, M., & Flores, V.. (2006). A knowledge-based approach for automatic generation of summaries of behavior. Paper presented at the AIMSA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.

    @inproceedings{DBLP:conf/aimsa/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Effective automatic summarization usually requires simulating human reasoning such as abstraction or relevance reasoning. In this paper we describe a solution for this type of reasoning in the particular case of surveillance of the behavior of a dynamic system using sensor data. The paper first presents the approach describing the required type of knowledge with a possible representation. This includes knowledge about the system structure, behavior, interpretation and saliency. Then, the paper shows the inference algorithm to produce a summarization tree based on the exploitation of the physical characteristics of the system. The paper illustrates how the method is used in the context of automatic generation of summaries of behavior in an application for basin surveillance in the presence of river floods.},
    title = {A Knowledge-Based Approach for Automatic Generation of Summaries of Behavior},
    booktitle = {AIMSA},
    year = {2006},
    pages = {265-274},
    doi = {10.1007/11861461_28},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+Knowledge-Based+Approach+for+Automatic+Generation+of+Summaries+of+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • de la Villa, M., Aparicio, F., Maña, M. J., & Buenaga, M.. (2012). A learning support tool with clinical cases based on concept maps and medical entity recognition. Paper presented at the Proceedings of the 2012 ACM international conference on intelligent user interfaces, New York, NY, USA.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The search for truthful health information through Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.

    @INPROCEEDINGS{Villa2012,
    author = {de la Villa, Manuel and Aparicio, Fernando and Maña, Manuel J. and Buenaga, Manuel},
    title = {A learning support tool with clinical cases based on concept maps and medical entity recognition},
    booktitle = {Proceedings of the 2012 ACM international conference on Intelligent User Interfaces},
    year = {2012},
    series = {IUI '12},
    pages = {61-70},
    address = {New York, NY, USA},
    publisher = {ACM},
    abstract = {The search for truthful health information through Internet is an increasingly complex process due to the growing amount of resources. Access to information can be difficult to control even in environments where the goal pursued is well-defined, as in the case of learning activities with medical students. In this paper, we present a computer tool devised to ease the process of understanding medical concepts from information in clinical case histories. To this end, it automatically constructs concept maps and presents reliable information from different ontologies and knowledge bases. The two main components of the system are an Intelligent Information Access interface and a Concept Map Graph that retrieves medical concepts from a text input, and provides rich information and semantically related concepts. The paper includes a user evaluation of the first component and a systematic assessment for the second component. Results show that our proposal can be efficient and useful for students in a medical learning environment.},
    doi = {10.1145/2166966.2166978},
    isbn = {978-1-4503-1048-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+A+learning+support+tool+with+clinical+cases+based+on+concept+maps+and+medical+entity+recognition&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}}

  • Puente, E. A., Gachet Páez, D., Pimentel, J. R., Moreno, L., & Salichs, M. A.. (1992). A neural network supervisor for behavioral primitives of autonomous systems. Paper presented at the proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate.

    @inproceedings{puente_neural_1992,
    title = {A neural network supervisor for behavioral primitives of autonomous systems},
    doi = {10.1109/IECON.1992.254457},
    abstract = {The authors present a neural network implementation of a fusion supervisor of primitive behavior to execute more complex robot behavior. The neural network implementation is part of an architecture for the execution of mobile robot tasks, which is composed of several primitive behaviors, in a simultaneous or concurrent fashion. The architecture allows for learning to take place. At the execution level, it incorporates the experience gained in executing primitive behavior as well as the overall task. The neural network has been trained to supervise the relative contributions of the various primitive robot behaviors to execute a given task. The neural network implementation has been tested within {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the neural network is adequate},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Puente, E. A. and Gachet Páez, Diego and Pimentel, J.R. and Moreno, L. and Salichs, M.A.},
    year = {1992},
    keywords = {Actuators, Automatic control, autonomous systems, behavioral primitives, Control systems, Electronic mail, Engineering management, fusion supervisor, learning (artificial intelligence), mobile robot tasks, mobile robots, Navigation, neural nets, neural network supervisor, Neural networks, {OPMOR}, Robot kinematics, simulation environment, Testing, training},
    pages = {1105--1109 vol.2},
    url = {http://scholar.google.es/scholar?q=A+neural+network+supervisor+for+behavioral+primitives+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2008). A presentation model for multimedia summaries of behavior. Paper presented at the IUI.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.

    @inproceedings{DBLP:conf/iui/MolinaF08,
    author = {Molina, Martin and Flores, Victor},
    abstract = {Presentation models are used by intelligent user interfaces to automatically construct adapted presentations according to particular communication goals. This paper describes the characteristics of a presentation model that was designed to automatically produce multimedia presentations about the summarized behavior of dynamic systems. The presentation model is part of the MSB application (Multimedia Summarizer of Behavior). MSB was developed for the problem of management of dynamic systems where different types of users (operators, decision-makers, other institutions, etc.) need to be informed about the evolution of the system, especially during critical situations. The paper describes the details of the presentation model based on a hierarchical planner together with graphical resources. The paper also describes an application in the field of hydrology for which the model was developed.},
    title = {A presentation model for multimedia summaries of behavior},
    booktitle = {IUI},
    year = {2008},
    pages = {369-372},
    doi = {10.1145/1378773.1378832},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+presentation+model+for+multimedia+summaries+of+behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Gachet Páez, D., Salichs, M. A., Pimentel, J. R., Moreno, L., & De la Escalera, A.. (1992). A software architecture for behavioral control strategies of autonomous systems. Paper presented at the proceedings of the 1992 international conference on industrial electronics, control, instrumentation, and automation, 1992. power electronics and motion control.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in OPMOR, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent.

    @inproceedings{gachet_software_1992,
    title = {A software architecture for behavioral control strategies of autonomous systems},
    doi = {10.1109/IECON.1992.254475},
    abstract = {The authors deal with the execution of several tasks for mobile robots while exhibiting various primitive behaviors in a simultaneous or concurrent fashion. The architecture allows for learning to take place, and at the execution level it incorporates the experience gained in executing primitive behaviors as well as the overall task. Some empirical rules are provided for the appropriate mixture of primitive behaviors to produce tasks. The architecture has been implemented in {OPMOR}, a simulation environment for mobile robots, and several results are presented. The performance of the architecture is excellent},
    booktitle = {Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, 1992. Power Electronics and Motion Control},
    author = {Gachet Páez, Diego and Salichs, M.A. and Pimentel, J.R. and Moreno, L. and De la Escalera, A.},
    year = {1992},
    keywords = {autonomous systems, Computer architecture, Control systems, Degradation, digital control, Electronic mail, empirical rules, execution level, Humans, learning, mobile robots, Navigation, {OPMOR}, performance, position control, robot programming, simulation environment, software architecture, Software Engineering, Velocity control},
    pages = {1002--1007 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+architecture+for+behavioral+control+strategies+of+autonomous+systems&btnG=&hl=es&as_sdt=0%2C5}
    }
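
    The "mixture of primitive behaviors" idea running through these robotics entries can be illustrated with a toy weighted fusion of motion commands. This is a sketch of ours, not the papers' controller; behavior names and weights are invented:

    def fuse(commands, weights):
        # commands: {behavior: (v, w)} linear/angular velocity proposals;
        # weights: task-dependent contribution of each primitive behavior.
        total = sum(weights.values()) or 1.0
        v = sum(weights[b] * commands[b][0] for b in commands) / total
        w = sum(weights[b] * commands[b][1] for b in commands) / total
        return v, w

    # e.g. blending obstacle avoidance with goal seeking:
    v, w = fuse({"avoid": (0.1, 0.8), "goto": (0.6, -0.2)},
                {"avoid": 0.7, "goto": 0.3})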

  • Pimentel, J. R., Salichs, M. A., Gachet Páez, D., & Moreno, L.. (1994). A software development environment for autonomous mobile robots. Paper presented at the 20th international conference on industrial electronics, control and instrumentation, 1994. IECON ’94.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the UPM are described. Portions of this research have been performed under the EEC ESPRIT 2483 Panorama Project.

    @inproceedings{pimentel_software_1994,
    title = {A software development environment for autonomous mobile robots},
    volume = {2},
    doi = {10.1109/IECON.1994.397944},
    abstract = {Developing software for actual sensor-based mobile robots is not a trivial task because of a number of practical difficulties. The task of software development can be simplified by the use of an appropriate environment. To be effective, the software development environment must have the following requirements: modularity, hardware independence, capability to work with an actual or simulated system and independence of control modules from system evaluation. In this paper, the authors propose a software development environment which meets the aforementioned requirements. The environment has been used to develop software in the area of reactive control within the Panorama project. Applications of this software environment in a number of projects at the {UPM} are described. Portions of this research have been performed under the {EEC} {ESPRIT} 2483 Panorama Project},
    booktitle = {20th International Conference on Industrial Electronics, Control and Instrumentation, 1994. {IECON} '94},
    author = {Pimentel, J.R. and Salichs, M.A. and Gachet Páez, Diego and Moreno, L.},
    year = {1994},
    keywords = {Application software, Art, autonomous mobile robots, Control systems, {EEC} {ESPRIT} 2483 {PANORAMA} Project, Hardware, hardware independence, mobile robots, modularity, path planning, Programming, project support environments, reactive control, Real time systems, research initiatives, robot programming, sensor-based mobile robots, software development environment, Software Engineering, system evaluation, Testing, {USA} Councils, Workstations},
    pages = {1094--1099 vol.2},
    url = {http://scholar.google.es/scholar?q=A+software+development+environment+for+autonomous+mobile+robots&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Buenaga Rodriguez, M., Maña López, M. J., Diaz Esteban, A., & Gervás Gómez-Navarro, P.. (2001). A user model based on content analysis for the intelligent personalization of a news service. In Bauer, M., Gmytrasiewicz, P. J., & Vassileva, J. (Ed.), In User modeling (Vol. 2109, pp. 216-218). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of user’s interests includes his preferences about content, using a wide coverage and non-specific-domain classification of topics, and structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization.

    @INCOLLECTION{BuenagaRodriguez2001,
    author = {Buenaga Rodriguez, Manuel and Maña López, Manuel J. and Diaz Esteban, Alberto and Gervás Gómez-Navarro, Pablo},
    title = {A User Model Based on Content Analysis for the Intelligent Personalization of a News Service},
    booktitle = {User Modeling},
    publisher = {Springer Berlin Heidelberg},
    year = {2001},
    editor = {Bauer, Mathias and Gmytrasiewicz, Piotr J. and Vassileva, Julita},
    volume = {2109},
    series = {Lecture Notes in Computer Science},
    pages = {216-218},
    month = {jan},
    abstract = {In this paper we present a methodology designed to improve the intelligent personalization of news services. Our methodology integrates textual content analysis tasks to achieve an elaborate user model, which represents separately short-term needs and long-term multi-topic interests. The characterization of user's interests includes his preferences about content, using a wide coverage and non-specific-domain classification of topics, and structure (newspaper sections). The application of implicit feedback allows a proper and dynamic personalization.},
    copyright = {©2001 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/3-540-44566-8_25},
    isbn = {978-3-540-42325-6, 978-3-540-44566-1},
    language = {en},
    url = {http://scholar.google.es/scholar?q=allintitle%3AA+User+Model+Based+on+Content+Analysis+for+the+Intelligent+Personalization+of+a+News+Service&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
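
    The separation of short-term needs from long-term multi-topic interests described in this entry can be sketched as two exponentially decayed profile vectors updated from implicit feedback (reading a news item). The decay values and dimensions below are invented for illustration; this is our reconstruction, not the paper's model.

    import numpy as np

    def update_profile(profile, item_vec, decay):
        # Exponential decay: recent readings dominate when decay is small.
        return decay * profile + (1.0 - decay) * item_vec

    dim = 8
    long_term, short_term = np.zeros(dim), np.zeros(dim)
    item = np.random.rand(dim)            # term weights of the item just read
    long_term = update_profile(long_term, item, decay=0.99)   # slow horizon
    short_term = update_profile(short_term, item, decay=0.6)  # fast horizon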

  • Carrero García, F., Gómez Hidalgo, J. M., Buenaga Rodríguez, M., Mata, J., & Maña López, M.. (2007). Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico. Revista de la sociedad española para el procesamiento del lenguaje natural, 38, 107-118.
    [BibTeX] [Abstract] [Google Scholar]
    One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.

    @OTHER{CarreroGarcia2007,
    abstract = {One of the most promising approaches to Cross-Language Information Retrieval is the utilization of lexical-semantic resources for concept-indexing documents and queries. We have followed this approach in a proposal of an Information Access system designed for medicine professionals, aiming at easing the preparation of clinical cases, and the development of studies and research. In our proposal, the clinical record information, in Spanish, is connected to related scientific information (research papers), in English and Spanish, by using high quality and coverage resources like the SNOMED ontology. We also describe how we have addressed information privacy.},
    author = {Carrero García, Francisco and Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel and Mata, Jacinto and Maña López, Manuel},
    journal = {Revista de la Sociedad Española para el Procesamiento del Lenguaje Natural},
    month = {April},
    pages = {107-118},
    title = {Acceso a la información bilingüe utilizando ontologías específicas del dominio biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAcceso+a+la+informaci%C3%B3n+biling%C3%BCe+utilizando++ontolog%C3%ADas+espec%C3%ADficas+del+dominio+biom%C3%A9dico&btnG=&hl=es&as_sdt=0%2C5},
    volume = {38},
    year = {2007}
    }

  • Gachet Páez, D., Buenaga, M., Giraldez, J. I., & Padrón, V.. (2009). Agent based risk patient management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals, and in some cases, in the homes of patients with a follow-up from the primary care centre. The evolution of this model is following a clear trend: trying to reduce the time and the number of visits by patients to health centres and derive tasks, so far as possible, toward outpatient care. Also the number of Early Discharge Patients (EDP) is growing, thus permitting a saving in the resources of the care center. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are being executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow the transmission of the patient's location and some information about his/her illness to the hospital or care centre.

    @OTHER{Gachet2009,
    abstract = {This paper explores the role of information and communication technologies in managing risk and early discharge patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals, and in some cases, in the homes of patients with a follow-up from the primary care centre. The evolution of this model is following a clear trend: trying to reduce the time and the number of visits by patients to health centres and derive tasks, so far as possible, toward outpatient care. Also the number of Early Discharge Patients (EDP) is growing, thus permitting a saving in the resources of the care center. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents are being executed and interact from their remote locations. Second, the use of distributed decision making in multiagent systems, as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular or Chronic Obstructive Pulmonary Diseases (COPD) as well as to ambulatory surgery patients. The proposed system will allow the transmission of the patient's location and some information about his/her illness to the hospital or care centre.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Giraldez, José Ignacio and Padrón, Víctor},
    booktitle = {Ambient Intelligence Perspectives},
    doi = {10.3233/978-1-58603-946-2-90},
    publisher = {Ambient Intelligence Forum },
    title = {Agent Based Risk Patient Management},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAgent+Based+Risk+Patient+Management&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Cortizo Pérez, J. C., Carrero, F. M., & Monsalve, B.. (2010). An architecture for a general purpose multi-algorithm recommender system. Proceedings of the workshop on the practical use of recommender systems, algorithms and technologies (PRSAT 2010), 51-54.
    [BibTeX] [Abstract] [Google Scholar]
    Although the current state-of-the-art on Recommender Systems is good enough to allow recommendations and personalization across many application fields, developing a general purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system's architecture that allows us to face these challenges.

    @OTHER{CortizoPerez2010,
    abstract = {Although the current state-of-the-art on Recommender Systems is good enough to allow recommendations and personalization across many application fields, developing a general purpose multi-algorithm recommender system is a tough task. In this paper we present the main challenges involved in developing such a system and a system's architecture that allows us to face these challenges.},
    author = {Cortizo Pérez, José Carlos and Carrero, Francisco M. and Monsalve, Borja},
    journal = {Proceedings of the Workshop on the Practical Use of Recommender Systems, Algorithms and Technologies (PRSAT 2010)},
    pages = {51-54},
    title = {An Architecture for a General Purpose Multi-Algorithm Recommender System},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+An+Architecture+for+a+General+Purpose+Multi-Algorithm+Recommender+System&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }
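
    The architecture itself is not reproduced in this listing, so the following is only a minimal Python sketch of the multi-algorithm idea the abstract describes: a common scoring interface plus a dispatcher that blends several interchangeable algorithms. All class, function and parameter names here are ours, not the paper's.

    from abc import ABC, abstractmethod

    class Recommender(ABC):
        """Common scoring interface every algorithm implements (our naming)."""
        @abstractmethod
        def score(self, user_id: str, item_id: str) -> float: ...

    class Popularity(Recommender):
        """Scores items by a precomputed popularity count."""
        def __init__(self, counts):
            self.counts = counts
        def score(self, user_id, item_id):
            return float(self.counts.get(item_id, 0))

    class MultiAlgorithmEngine:
        """Blends several interchangeable recommenders into one ranking."""
        def __init__(self, algorithms, weights=None):
            self.algorithms = algorithms
            self.weights = weights or [1.0] * len(algorithms)
        def recommend(self, user_id, candidates, k=3):
            def blended(item):
                return sum(w * a.score(user_id, item)
                           for w, a in zip(self.weights, self.algorithms))
            return sorted(candidates, key=blended, reverse=True)[:k]

    engine = MultiAlgorithmEngine([Popularity({"a": 10, "b": 3})])
    print(engine.recommend("u1", ["a", "b", "c"]))  # ['a', 'b', 'c']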

  • Aparicio, F., Buenaga, M., Rubio, M., & Hernando, A.. (2012). An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students. Computers and education, 58(4), 1282-1295.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process of the case, in which the students’ knowledge level is taken into account. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students’ perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system identifying these concepts and providing direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool’s support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.

    @ARTICLE{Aparicio2012,
    author = {Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita and Hernando, Asunción},
    title = {An intelligent information access system assisting a case based learning methodology evaluated in higher education with medical students},
    journal = {Computers And Education},
    year = {2012},
    volume = {58},
    pages = {1282-1295},
    number = {4},
    month = {may},
    abstract = {In recent years there has been a shift in educational methodologies toward a student-centered approach, one which increasingly emphasizes the integration of computer tools and intelligent systems adopting different roles. In this paper we describe in detail the development of an Intelligent Information Access system used as the basis for producing and assessing a constructivist learning methodology with undergraduate students. The system automatically detects significant concepts available within a given clinical case and facilitates an objective examination, following a proper selection process of the case, in which the students’ knowledge level is taken into account. The learning methodology implemented is intimately related to concept-based, case-based and internet-based learning. In spite of growing theoretical research on the use of information technology in higher education, it is rare to find applications that measure learning and students’ perceptions and compare objective results with a free Internet search. Our work enables students to gain understanding of the concepts in a case through Web browser interaction with our computer system identifying these concepts and providing direct access to enriched related information from Medlineplus, Freebase and PubMed. In order to evaluate the learning activity outcomes, we have done a trial run with volunteer students from a 2nd year undergraduate Medicine course, dividing the volunteers into two groups. During the activity all students were provided with a clinical case history and a multiple choice test with medical questions relevant to the case. This test could be done in two different ways: learners in one group were allowed to freely seek information on the Internet, while the other group could only search for information using the newly developed computer tool. In the latter group, we measured how students perceived the tool’s support for solving the activity and the Web interface usability, supplying them with a Likert questionnaire for anonymous completion. The particular case selected was a female with a medical history of heart pathology, from which the system derived medical terms closely associated with her condition description, her clinical evolution and treatment.},
    doi = {10.1016/j.compedu.2011.12.021},
    issn = {0360-1315},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Information+Access+system+assisting+a+Case+Based+Learning+methodology+evaluated+in+higher+education+with+medical+students&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-20}
    }
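
    The paper's tool links the detected concepts to MedlinePlus, Freebase and PubMed. As a small, self-contained illustration of the last of those links (our example, not the system's actual code), the sketch below queries the public NCBI E-utilities endpoint for PubMed IDs matching a concept string; it needs network access to run.

    import json
    import urllib.parse
    import urllib.request

    def pubmed_ids(term, retmax=5):
        """Return PubMed IDs matching a concept string via NCBI E-utilities."""
        base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        query = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"})
        with urllib.request.urlopen(base + "?" + query) as resp:
            data = json.load(resp)
        return data["esearchresult"]["idlist"]

    print(pubmed_ids("heart failure"))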

  • Fernandez-Valmayor, A., Villarrubia, C., & Buenaga, M.. (1993). An intelligent interface to a database system. Case-based reasoning and information retrieval: exploring the opportunities for technology sharing, AAAI Press, CA, USA.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.

    @INPROCEEDINGS{Fernandez-Valmayor1993,
    author = {Fernandez-Valmayor, A. and Villarrubia, C. and Buenaga, Manuel},
    title = {An Intelligent Interface to a Database System},
    year = {1993},
    address = {Ca, USA},
    month = {March},
    abstract = {In this work, we describe the architecture of an intelligent interface that improves the effectiveness of full text retrieval methods through the semantic interpretation of user’s queries in natural language (NL). This interface comprises a user-expert module that integrates a dynamic model of human memory with a NL parser. This paper concentrates on the problem of the elaboration of index patterns out of specific cases or instances. The structure of the dynamic memory of cases and parsing techniques are also discussed.},
    journal = {Case-Based Reasoning and Information Retrieval. Exploring the Opportunities for Technology Sharing, AAAI Press},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Intelligent+Interface+to+a+Database+System&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Gachet Páez, D., Buenaga, M., Villalba, M., & Lara, P.. (2010). An open and adaptable platform for elderly people and persons with disability to access the information society. Pervasive health 2010.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.

    @OTHER{Gachet2010,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted to social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga, Manuel and Villalba, Maite and Lara, Pedro},
    booktitle = {Pervasive Health},
    doi = {10.4108/ICST.PERVASIVEHEALTH2010.8882},
    month = {March},
    title = {An Open and adaptable platform for elderly people and persons with disability to access the information society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAn+Open+and+adaptable+platform+for+elderly+people+and+persons+with+%09disability+to+access+the+information+society&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Puente, E. A., Moreno, L., Salichs, M. A., & Gachet Páez, D.. (1991). Analysis of data fusion methods in certainty grids application to collision danger monitoring. Paper presented at the 1991 international conference on industrial electronics, control and instrumentation, proceedings. IECON ’91.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared

    @inproceedings{puente_analysis_1991,
    title = {Analysis of data fusion methods in certainty grids application to collision danger monitoring},
    doi = {10.1109/IECON.1991.239281},
    abstract = {The authors focus on the use of the occupancy grid representation to maintain and combine the information acquired from sensors about the environment. This information is subsequently used to monitor the robot collision danger risk and take into account that risk in starting the appropriate maneuver. The occupancy grid representation uses a multidimensional tessellation of space into cells, where each cell stores some information about its state. A general model associates a random vector that encodes multiple properties in a cell state. If the cell property is limited to occupancy, it is usually called occupancy grid. Two main approaches have been used to model the occupancy of a cell: probabilistic estimation and the Dempster-Shafer theory of evidence. Probabilistic estimation and some combination rules based on the Dempster-Shafer theory of evidence are analyzed and their possibilities compared},
    booktitle = {1991 International Conference on Industrial Electronics, Control and Instrumentation, 1991. Proceedings. {IECON} '91},
    author = {Puente, E. A. and Moreno, L. and Salichs, M.A. and Gachet Páez, Diego},
    year = {1991},
    keywords = {artificial intelligence, autonomous mobile robots, Buildings, certainty grids, collision danger monitoring, Data analysis, data fusion, Dempster-Shafer theory of evidence, Fuses, Geometry, mobile robots, monitoring, multidimensional tessellation, Navigation, probabilistic estimation, probability, Recursive estimation, Remotely operated vehicles, Sensor fusion, signal processing, State estimation},
    pages = {1133--1137 vol.2},
    url = {http://scholar.google.es/scholar?q=Analysis+of+data+fusion+methods+in+certainty+grids+application+to+collision+danger+monitoring+&btnG=&hl=es&as_sdt=0%2C5}
    }
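
    As a worked example of the probabilistic-estimation approach the abstract contrasts with Dempster-Shafer combination, the sketch below performs the standard recursive log-odds update of a single occupancy-grid cell; the sensor probabilities are invented for illustration.

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def update_cell(l_prev, p_hit):
        """Fuse one reading into a cell's log-odds occupancy (prior = 0.5)."""
        return l_prev + logit(p_hit)

    l = 0.0                          # unknown cell: p(occupied) = 0.5
    for p in (0.7, 0.7, 0.4):        # two "hit" readings, one "miss"
        l = update_cell(l, p)
    p_occ = 1.0 / (1.0 + math.exp(-l))
    print(round(p_occ, 3))           # 0.784: the hits dominate the miss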

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996,
    author = {Gómez Hidalgo, José María and Buenaga Rodríguez, Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Aplicaciones de las bases de datos léxicas en la clasificación automática de documentos},
    url = {http://scholar.google.es/scholar?q=allintitle%3AAplicaciones+de+las+bases+de+datos+l%C3%A9xicas+en+la+clasificaci%C3%B3n+autom%C3%A1tica+de+documentos&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Fernandez, J., Benchetrit, D., & Gachet Páez, D.. (2001). Automated visual inspection to assembly of frontal airbag sensors of automobiles. Paper presented at the 2001 8th IEEE international conference on emerging technologies and factory automation, 2001. proceedings.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes an automatic quality control system that supervises, through three CCD cameras, the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of the epoxy resin that fixes the accelerator sensor. The system can inspect 12000 pieces/hour and it is now in full production in a multinational automobile component factory in Madrid.

    @inproceedings{fernandez_automated_2001,
    title = {Automated visual inspection to assembly of frontal airbag sensors of automobiles},
    volume = {2},
    doi = {10.1109/ETFA.2001.997745},
    abstract = {This paper describes an automatic quality control system that supervises, through three {CCD} cameras, the assembly of automobile airbag sensors. The main characteristics that can be detected are position, angle and geometric parameters of the epoxy resin that fixes the accelerator sensor. The system can inspect 12000 pieces/hour and it is now in full production in a multinational automobile component factory in Madrid.},
    booktitle = {2001 8th {IEEE} International Conference on Emerging Technologies and Factory Automation, 2001. Proceedings},
    author = {Fernandez, J. and Benchetrit, D. and Gachet Páez, Diego},
    year = {2001},
    keywords = {Assembly systems, automatic optical inspection, automobile airbag sensors, automobile component factory, automobile industry, Automobiles, {CCD} cameras, Charge coupled devices, Charge-coupled image sensors, Epoxy resins, Inspection, Production systems, quality control, quality control system, Sensor phenomena and characterization, Sensor systems, visual inspection},
    pages = {631--634 vol.2},
    url = {http://scholar.google.es/scholar?q=Automated+visual+inspection+to+assembly+of+frontal+airbag+sensors+of+automobiles&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R.. (2014). Big data and iot for chronic patients monitoring. In Nugent, C., Coronato Antonio, D., & Bravo, J. (Ed.), In Ubiquitous computing & ambient intelligence (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developed countries are characterized by aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using techniques from Big Data. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Big data and IoT for chronic patients monitoring},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2014},
    editor = {Nugent, Christofer and Coronato Antonio, Davy and Bravo, José.},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using techniques from Big Data. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-13102-3_68},
    isbn = {03029743},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:HDshCWvjkbEC},
    urldate = {2014-12-12}
    }
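
    None of the monitoring entries in this list publish code, so purely as an illustration of the kind of stream processing they describe, here is a toy sliding-window check over heart-rate readings; the window size, threshold and sample values are invented.

    from collections import deque
    from statistics import mean

    class VitalSignWindow:
        """Sliding-window check over a stream of sensor readings (toy)."""
        def __init__(self, size, alarm_above):
            self.samples = deque(maxlen=size)
            self.alarm_above = alarm_above
        def push(self, value):
            self.samples.append(value)
            return (len(self.samples) == self.samples.maxlen
                    and mean(self.samples) > self.alarm_above)

    hr = VitalSignWindow(size=5, alarm_above=110.0)   # heart rate, bpm
    for reading in (98, 102, 115, 121, 125, 127):
        if hr.push(reading):
            print("alert: window mean =", round(mean(hr.samples), 1))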

  • Gachet Páez, D., Buenaga, M., Puertas, E., & Villalba, M. T.. (2015). Big data processing of bio-signal sensors information for self-management of health and diseases. In Imis 2015 proceedings (pp. 330-335). IEEE.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services.

    @INCOLLECTION{Gachet2015b,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa},
    title = {Big Data Processing of Bio-signal Sensors Information for Self-management of Health and Diseases},
    booktitle = {IMIS 2015 Proceedings},
    publisher = {IEEE},
    year = {2015},
    editor = {},
    volume = {},
    series = {},
    pages = {330--335},
    month = {July},
    abstract = {European countries are characterized by an aging population and economic crisis; as a consequence, the funds dedicated to social services have been diminished, especially those dedicated to healthcare. It is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers and enabling the management of chronic diseases outside institutions. It is necessary to streamline the health system resources, leading to the development of new medical services.},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2015.51},
    isbn = {978-1-4799-8872-3 },
    url = {https://scholar.google.es/citations?view_op=view_citation&continue=/scholar%3Fq%3DBig%2BData%2BProcessing%2Bof%2BBio-signal%2BSensors%2BInformation%2Bfor%2BSelf-management%2Bof%2BHealth%2Band%2BDiseases%26hl%3Des%26as_sdt%3D0,5%26as_ylo%3D2015%26scilib%3D2%26scioq%3DIPHealth:%2BPlataforma%2Binteligente%2Bbasada%2Ben%2Bopen,%2Blinked%2By%2Bbig%2Bdata%2Bpara%2Bla%2Btoma%2Bde%2Bdecisiones%2By%2Baprendizaje%2Ben%2B&citilm=1&citation_for_view=0ynMYdoAAAAJ:K3LRdlH-MEoC&hl=es&oi=p},
    urldate = {2015-02-08}
    }

  • Gachet Páez, D., Buenaga, M., Puertas, E., Villalba, M. T., & Muñoz Gil, R.. (2015). Big data processing using wearable devices for wellbeing and healthy activities promotion. In Cleland, I., Guerrero, L., & Bravo, J. (Ed.), In Ambient assisted living. ict-based solutions in real life situations (pp. 196-205). Springer International Publishing.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The aging population and the economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle.

    @INCOLLECTION{Gachet2015a,
    author = {Gachet Páez, Diego and Buenaga, Manuel and Puertas, Enrique and Villalba, María Teresa and Muñoz Gil, Rafael},
    title = {Big Data Processing Using Wearable Devices for Wellbeing and Healthy Activities Promotion},
    booktitle = {Ambient Assisted Living. ICT-based Solutions in Real Life Situations},
    publisher = {Springer International Publishing},
    year = {2015},
    editor = {Cleland, Ian and Guerrero, Luis and Bravo, Jos{\'e}},
    volume = {},
    series = {},
    pages = {196--205},
    month = {December},
    abstract = {The aging population and the economic crisis, especially in developed countries, have as a consequence a reduction in the funds dedicated to healthcare; it is then desirable to optimize the costs of public and private healthcare systems, reducing the affluence of chronic and dependent people to care centers. Promoting healthy lifestyles and activities can allow people to avoid chronic diseases such as hypertension. In this paper we describe a system for promoting an active and healthy lifestyle.},
    copyright = {Springer},
    doi = {10.1007/978-3-319-26410-3_19},
    isbn = {978-3-319-26410-3},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=es&user=0ynMYdoAAAAJ&sortby=pubdate&citation_for_view=0ynMYdoAAAAJ:vRqMK49ujn8C},
    urldate = {2015-02-02}
    }

  • López-Fernández, H., Reboiro-Jato, M., Glez-Peña, D., Aparicio, F., Gachet Páez, D., Buenaga, M., & Fdez-Riverola, F.. (2013). Bioannote: a software platform for annotating biomedical documents with application in medical learning environments. Computer methods and programs in biomedicine, 111(1), 139-147.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.

    @article{LópezFernández2013139,
    title = {BioAnnote: A software platform for annotating biomedical documents with application in medical learning environments },
    journal = {Computer Methods and Programs in Biomedicine },
    volume = {111},
    number = {1},
    pages = {139 - 147},
    year = {2013},
    issn = {0169-2607},
    doi = {10.1016/j.cmpb.2013.03.007},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABioAnnote%3A+A+software+platform+for+annotating+biomedical+documents+with+application+in+medical+learning+environments&btnG=&hl=es&as_sdt=0%2C5},
    author = {López-Fernández, H. and Reboiro-Jato, M. and Glez-Peña, D. and Aparicio, Fernando and Gachet Páez, Diego and Buenaga, Manuel and Fdez-Riverola, F.},
    abstract = {Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations.}
    }
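
    The abstract mentions "a simple client/server protocol" without specifying it, so the exchange below is a hypothetical reconstruction in Python of what such a request/response could look like: the client sends text plus a vocabulary name as JSON, and the meta-server answers with typed annotations. The message fields and the toy one-entry vocabulary are our assumptions, not BioAnnote's actual wire format.

    import json

    def annotate(request_json):
        """Stand-in for the meta-server side of the exchange (hypothetical)."""
        req = json.loads(request_json)
        vocab = {"hypertension": "C0020538"}   # toy one-entry vocabulary
        spans = [{"term": t, "code": c, "source": req["vocabulary"]}
                 for t, c in vocab.items() if t in req["text"].lower()]
        return json.dumps({"annotations": spans})

    request = json.dumps({"vocabulary": "UMLS", "text": "History of hypertension."})
    print(json.loads(annotate(request))["annotations"])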

  • Alvarez Montero, F., Vaquero Sánchez, A., Sáenz Pérez, F., & Buenaga Rodríguez, M.. (2007). Bringing forward semantic relations. 7th international conference on intelligent systems design and applications (isda 2007), 511-519.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.

    @OTHER{AlvarezMontero2007,
    abstract = {Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain fuzzy or under-specified. This is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that can have multiple interpretations in wide-coverage ontologies, semantic data models with abstractions that are not enough to capture the relation richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them. In this paper we present some insightful issues about the modeling, representation and usage of relations including the available taxonomy structuring methodologies as well as the initiatives aiming to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.},
    address = {Río de Janeiro},
    author = {Alvarez Montero, Francisco and Vaquero Sánchez, Antonio and Sáenz Pérez, Fernando and Buenaga Rodríguez, Manuel},
    doi = {10.1109/ISDA.2007.82},
    journal = {7th International Conference on Intelligent Systems Design and Applications (ISDA 2007)},
    month = {October},
    pages = {511-519},
    title = {Bringing Forward Semantic Relations},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABringing+Forward+Semantic+Relations&btnG=&hl=es&as_sdt=0%2C5},
    year = {2007}
    }

  • Carrero, F., Cortizo, J. C., & Gómez, J. M.. (2008). Building a spanish mmtx by using automatic translation and biomedical ontologies. 9th international conference on intelligent data engineering and automated learning.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version MMTx) is a tool that allows extracting medical concepts from free text, but currently no Spanish version exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to original texts in terms of extracted concepts.

    @OTHER{Carrero2008,
    abstract = {The use of domain ontologies is becoming increasingly popular in Medical Natural Language Processing Systems. A wide variety of knowledge bases in multiple languages has been integrated into the Unified Medical Language System (UMLS) to create a huge knowledge source that can be accessed with diverse lexical tools. MetaMap (and its Java version MMTx) is a tool that allows extracting medical concepts from free text, but currently no Spanish version exists. Our ongoing research is centered on the application of biomedical concepts to cross-lingual text classification, which makes it necessary to have a Spanish MMTx available. We have combined automatic translation techniques with biomedical ontologies and the existing English MMTx to produce a Spanish version of MMTx. We have evaluated different approaches and applied several types of evaluation according to different concept representations for text classification. Our results prove that the use of existing translation tools such as Google Translate produces translations with a high similarity to original texts in terms of extracted concepts.},
    address = {LNCS Springer Verlag},
    author = {Carrero, Francisco and Cortizo, José Carlos and Gómez, José María},
    doi = {10.1007/978-3-540-88906-9_44},
    journal = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    publisher = {9th International Conference on Intelligent Data Engineering and Automated Learning},
    title = {Building a Spanish MMTx by using Automatic Translation and Biomedical Ontologies},
    url = {http://scholar.google.es/scholar?q=allintitle%3ABuilding+a+Spanish+MMTx+by+using+Automatic+Translation+and+Biomedical+Ontologies&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Gómez Hidalgo, J. M., Puertas Sanz, E., Carrero García, F., & Buenaga Rodríguez, M.. (2003). Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en internet. Procesamiento de lenguaje natural (Vol. 31, pp. 13-20).
    [BibTeX] [Abstract] [Google Scholar]
    El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.

    @INCOLLECTION{GomezHidalgo2003,
    author = {Gómez Hidalgo, José María and Puertas Sanz, Enrique and Carrero García, Francisco and Buenaga Rodríguez, Manuel},
    title = {Categorización de texto sensible al coste para el filtrado de contenidos inapropiados en Internet},
    year = {2003},
    volume = {31},
    pages = {13-20},
    abstract = {El creciente problema del acceso a contenidos inapropiados de Internet se puede abordar como un problema de categorización automática de texto sensible al coste. En este artículo presentamos la evaluación comparativa de un rango representativo de algoritmos de aprendizaje y métodos de sensibilización al coste, sobre dos colecciones de páginas Web en español e inglés. Los resultados de nuestros experimentos son prometedores.},
    journal = {Procesamiento de Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Categorizaci%C3%B3n+de+texto+sensible+al+coste+para+el+filtrado+de+contenidos+inapropiados+en+Internet&btnG=&hl=es&as_sdt=0}
    }

  • Gómez Hidalgo, J. M., Murciano Quejido, R., Díaz Esteban, A., Buenaga Rodríguez, M., & Puertas Sanz, E.. (2001). Categorizing photographs for user-adapted searching in a news agency e-commerce application. First international workshop on new developments in digital libraries, 55-66.
    [BibTeX] [Abstract] [Google Scholar]
    In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the CODI system, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving search possibilities in the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected due to their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.

    @PROCEEDINGS{GomezHidalgo2001,
    title = {Categorizing photographs for user-adapted searching in a news agency e-commerce application},
    year = {2001},
    abstract = {In this work, we present a system for categorizing photographs based on the text of their captions. The system has been developed as a part of the CODI system, an e-commerce application for a Spanish news agency. The categorization system enables users to personalize their information interests, improving search possibilities in the CODI application. Our approach to photograph categorization is based on linear text classifiers and Web mining programs, specially selected due to their suitability for industrial applications. The evaluation of our categorization system has shown that it meets the efficiency and effectiveness requirements of the e-commerce application.},
    author = {Gómez Hidalgo, José María and Murciano Quejido, Raúl and Díaz Esteban, Alberto and Buenaga Rodríguez, Manuel and Puertas Sanz, Enrique},
    journal = {First International Workshop on New Developments in Digital Libraries },
    pages = {55-66},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACategorizing+photographs+for+user-adapted+searching+in+a+news+agency+e-commerce&btnG=&hl=es&as_sdt=0}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Ascanio, J. R.. (2014). Chronic patients monitoring using wireless sensors and big data processing. In Ubiquitous computing & ambient intelligence (pp. 33-38). IEEE.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Developed countries are characterized by aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using techniques from Big Data. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.

    @INCOLLECTION{Gachet2014c,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Ascanio, J. R.},
    title = {Chronic patients monitoring using wireless sensors and Big Data Processing},
    booktitle = {Ubiquitous Computing & Ambient Intelligence},
    publisher = {IEEE},
    year = {2014},
    editor = {},
    volume = {},
    series = {IMIS 2014 Proceeding},
    pages = {33-38},
    month = {December},
    abstract = {Developed countries are characterized by aging population and economic crisis, so it is desirable to reduce the costs of public and private healthcare systems. It is necessary to streamline the health system resources, leading to the development of new medical services based on telemedicine, remote monitoring of chronic patients, personalized health services, new services for dependants, etc. New medical applications based on remote monitoring will significantly increase the volume of health information to manage, including data from medical and biological sensors; it is then necessary to process this huge volume of data using techniques from Big Data. In this paper we propose one potential solution for creating those new services, based on Big Data processing and vital signs monitoring.},
    copyright = {IEEE},
    doi = {10.1109/IMIS.2014.54},
    isbn = {9781479943319},
    url = {https://scholar.google.es/citations?view_op=view_citation&hl=en&user=Mwr8bDQAAAAJ&citation_for_view=Mwr8bDQAAAAJ:mB3voiENLucC},
    urldate = {2014-12-12}
    }

  • Buenaga, M., Gachet Páez, D., Maña, M. J., de la Villa, M., & Mata, J.. (2008). Clustering and summarizing medical documents to improve mobile retrieval. Acm-sigir workshop on mobile information retrieval, 54-57.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a post-retrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster.

    @OTHER{Buenaga2008,
    abstract = {Access to biomedical databases from PDAs (Personal Digital Assistant) is a useful tool for health care professionals. Mobile devices, even with their limited screen size, offer clear advantages in different scenarios, but the capability to select the crucial information, and display it in a synthetic way, plays a key role. We propose to integrate multidocument summarization (MDS) techniques with a post-retrieval clustering interface in a mobile device accessing medical documents. The final result is a system that offers a summary for each cluster reporting document similarities and a summary for each document highlighting the singular aspects that it provides with respect to the common information in the cluster.},
    author = {Buenaga, Manuel and Gachet Páez, Diego and Maña, Manuel J. and de la Villa, Manuel and Mata, Jacinto},
    journal = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    month = {July},
    pages = {54-57},
    publisher = {ACM-SIGIR Workshop on Mobile Information Retrieval},
    title = {Clustering and Summarizing Medical Documents to Improve Mobile Retrieval},
    url = {http://scholar.google.es/scholar?q=allintitle%3AClustering+and+Summarizing+Medical+Documents+to+Improve+Mobile+Retrieval&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
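
    As a compressed sketch of the post-retrieval pipeline this abstract outlines, the code below clusters a few toy "documents" with TF-IDF and k-means and prints each cluster's most central document as a crude stand-in for the paper's multidocument summaries. It assumes scikit-learn is installed; the documents are invented.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["aspirin reduces fever", "aspirin treats pain and fever",
            "insulin controls glucose", "insulin lowers blood glucose"]
    X = TfidfVectorizer().fit_transform(docs)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for c in range(2):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        centrality = cosine_similarity(X[idx]).mean(axis=0)  # within-cluster
        print("cluster", c, "->", docs[idx[int(centrality.argmax())]])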

  • Gómez Hidalgo, J. M., Maña López, M., & Puertas Sanz, E.. (2000). Combining text and heuristics for cost-sensitive spam filtering. Fourth computational natural language learning workshop.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Spam filtering is a text categorization task that shows special features which make it interesting and difficult. First, the task has traditionally been performed using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.

    @OTHER{GomezHidalgo2000,
    abstract = {Spam filtering is a text categorization task that shows special features which make it interesting and difficult. First, the task has traditionally been performed using heuristics from the domain. Second, a cost model is required to avoid misclassification of legitimate messages. We present a comparative evaluation of several machine learning algorithms applied to spam filtering, considering the text of the messages and a set of heuristics for the task. Cost-oriented biasing and evaluation is performed.},
    address = {Lisboa},
    author = {Gómez Hidalgo, José María and Maña López, Manuel and Puertas Sanz, Enrique},
    doi = {10.3115/1117601.1117623},
    journal = {Fourth Computational Natural Language Learning Workshop},
    month = {September},
    title = {Combining Text and Heuristics for Cost-Sensitive Spam Filtering},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACombining+Text+and+Heuristics+for+Cost-Sensitive+Spam+Filtering&btnG=&hl=es&as_sdt=0},
    year = {2000}
    }
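
    The cost model referred to above can be made concrete with a standard decision-theoretic threshold: if misclassifying a legitimate message is λ times as costly as letting a spam message through, minimizing expected cost means filtering a message only when p(spam|message) > λ/(λ+1). A tiny worked example follows; λ = 9 is our illustrative choice, not a value taken from the paper.

    def spam_decision(p_spam, lam=9.0):
        """Filter a message only when expected cost favours it."""
        threshold = lam / (lam + 1.0)      # lam = 9 gives threshold 0.9
        return p_spam > threshold

    for p in (0.85, 0.95):
        print(p, "->", "spam" if spam_decision(p) else "legitimate")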

  • Gachet Páez, D., Ascanio, J. R., & Sánchez de Pedro, I.. (2013). Computación en la nube, big data y sensores inalámbricos para la provisión de nuevos servicios de salud. Novática. Revista de la asociación de técnicos en informática, (224), 66-71.
    [BibTeX] [Abstract] [Google Scholar]
    Vivimos en una sociedad caracterizada por el envejecimiento de la población y actualmente inmersa en una profunda crisis económica que implica la reducción de costes de los servicios públicos y entre ellos el de salud. Es asimismo ineludible la necesidad de optimizar los recursos de los sistemas sanitarios promoviendo el desarrollo de nuevos servicios médicos basados en telemedicina, monitorización de enfermos crónicos, servicios de salud personalizados, etc. Es de esperar que estas nuevas aplicaciones incrementen de forma significativa el volumen de la información sanitaria a gestionar, incluyendo datos de sensores biológicos, historiales clínicos, información de contexto, etc. que a su vez necesitan de la disponibilidad de las aplicaciones de salud en cualquier lugar y momento y que sean accesibles desde cualquier dispositivo. En este artículo se propone una solución para la creación de estos nuevos servicios, especialmente en entornos exteriores, en base al uso de computación en la nube y monitorización de signos vitales.

    @OTHER{GachetNovatica2013a,
    author = {Gachet Páez, Diego and Ascanio, Juan Ramón and Sánchez de Pedro, Israel},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {224},
    pages = {66-71},
    month = {August},
    title = {Computación en la nube, Big Data y Sensores Inalámbricos para la provisión de nuevos servicios de salud},
    abstract = {Vivimos en una sociedad caracterizada por el envejecimiento de la población y actualmente inmersa en una profunda crisis económica que implica la reducción de costes de los servicios públicos y entre ellos el de salud. Es asimismo ineludible la necesidad de optimizar los recursos de los sistemas sanitarios promoviendo el desarrollo de nuevos servicios médicos basados en telemedicina, monitorización de enfermos crónicos, servicios de salud personalizados, etc. Es de esperar que estas nuevas aplicaciones incrementen de forma significativa el volumen de la información sanitaria a gestionar, incluyendo datos de sensores biológicos, historiales clínicos, información de contexto, etc. que a su vez necesitan de la disponibilidad de las aplicaciones de salud en cualquier lugar y momento y que sean accesibles desde cualquier dispositivo. En este artículo se propone una solución para la creación de estos nuevos servicios, especialmente en entornos exteriores, en base al uso de computación en la nube y monitorización de signos vitales.},
    url = {http://scholar.google.es/scholar?q=novatica+computaci%C3%B3n+en+la+nube%2C+big+data+sensores+inal%C3%A1mbricos+servicios+de+salud&btnG=&hl=es&as_sdt=0%2C5},
    year = {2013},
    urldate = {2014-01-01}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Ruiz Leyva, M. J.. (2004). Concept indexing for automated text categorization. Paper presented at the Natural language processing and information systems: 9th international conference on applications of natural language to information systems.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.

    @INPROCEEDINGS{GomezHidalgo2004,
    author = {Gómez Hidalgo, José María and Cortizo Pérez, José Carlos and Puertas Sanz, Enrique and Ruiz Leyva, Miguel Jaime},
    title = {Concept Indexing for Automated Text Categorization},
    booktitle = {Natural Language Processing and Information Systems: 9th International Conference on Applications of Natural Language to Information Systems},
    year = {2004},
    volume = {3136},
    series = {Lecture Notes in Computer Science},
    pages = {195-206},
    publisher = {Springer Verlag},
    abstract = {In this paper we explore the potential of concept indexing with WordNet synsets for Text categorization, in comparison with the traditional bag of words text representation model. We have performed a series of experiments in which we also test the possibility of using simple yet robust disambiguation methods for concept indexing, and the effectiveness of stoplist-filtering and stemming on the SemCor semantic concordance. Results are not conclusive yet promising.},
    doi = {10.1007/978-3-540-27779-8_17},
    institution = {University of Salford},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConcept+Indexing+for+Automated+Text+Categorization&btnG=&hl=es&as_sdt=0%2C5}
    }
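
    As a minimal sketch of concept indexing with WordNet synsets, the function below maps tokens to first-sense synset identifiers, in the spirit of the "simple yet robust disambiguation" baseline the abstract mentions, and falls back to the raw token when no synset exists. It assumes NLTK and its WordNet corpus are installed; the token list is invented.

    from nltk.corpus import wordnet as wn   # needs the 'wordnet' corpus

    def synset_features(tokens):
        """Map each token to its first WordNet synset, else keep the token."""
        feats = []
        for tok in tokens:
            senses = wn.synsets(tok)
            feats.append(senses[0].name() if senses else tok)
        return feats

    print(synset_features(["bank", "loan", "xyzzy"]))
    # ['bank.n.01', 'loan.n.01', 'xyzzy']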

  • Buenaga Rodriguez, M., Rubio, M., Aparicio Galisteo, F., & Hernando, A.. (2011). Conceptcase: una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en internet. VIII jornadas internacionales de innovación universitaria.
    [BibTeX] [Abstract] [Google Scholar]
    En este trabajo presentamos ConceptCase, una metodología orientada a la integración de aprendizaje basado en conceptos y aprendizaje basado en casos. La metodología se basa en que el estudiante pueda profundizar fácilmente en los conceptos que aparecen en un caso (nos hemos focalizado en casos clínicos y estudiantes de medicina), gracias a la utilización de un sistema inteligente de acceso a la información en Internet, que permite identificar los conceptos y acceder de forma directa a información sobre ellos. Para la definición y evaluación de nuestra metodología, hemos desarrollado una experiencia inicial sobre un caso clínico en el marco de una asignatura de 2º curso de Grado en Medicina. El caso en concreto era de una paciente con una patología cardíaca, en el que surgen conceptos relacionados con la descripción de la enfermedad, su evolución y tratamiento, y seleccionamos como ontologías o bases de conceptos MedlinePlus y FreeBase. Conducimos una experiencia de evaluación sobre un conjunto de 60 alumnos, obteniendo resultados positivos, tanto desde el punto de vista de los resultados objetivos del aprendizaje, como de satisfacción de los usuarios.

    @INPROCEEDINGS{BuenagaRodriguez2011,
    author = {Buenaga Rodriguez, Manuel and Rubio, Margarita and Aparicio Galisteo, Fernando and Hernando, Asunción},
    title = {ConceptCase: Una metodología para la integración de aprendizaje basado en conceptos sobre casos clínicos mediante sistemas inteligentes de acceso a información en Internet},
    year = {2011},
    abstract = {En este trabajo presentamos ConceptCase, una metodología orientada a la integración de aprendizaje basado en conceptos y aprendizaje basado en casos. La metodología se basa en que el estudiante pueda profundizar fácilmente en los conceptos que aparecen en un caso (nos hemos focalizado en casos clínicos y estudiantes de medicina), gracias a la utilización de un sistema inteligente de acceso a la información en Internet, que permite identificar los conceptos y acceder de forma directa a información sobre ellos. Para la definición y evaluación de nuestra metodología, hemos desarrollado una experiencia inicial sobre un caso clínico en el marco de una asignatura de 2º curso de Grado en Medicina. El caso en concreto era de una paciente con una patología cardíaca, en el que surgen conceptos relacionados con la descripción de la enfermedad, su evolución y tratamiento, y seleccionamos como ontologías o bases de conceptos MedlinePlus y FreeBase. Conducimos una experiencia de evaluación sobre un conjunto de 60 alumnos, obteniendo resultados positivos, tanto desde el punto de vista de los resultados objetivos del aprendizaje, como de satisfacción de los usuarios.},
    journal = {VIII Jornadas Internacionales de Innovación Universitaria},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+UNA+METODOLOG%C3%8DA+PARA+LA++INTEGRACI%C3%93N+DE+APRENDIZAJE+BASADO+EN++CONCEPTOS+SOBRE+CASOS+CL%C3%8DNICOS+MEDIANTE++SISTEMAS+INTELIGENTES+DE+ACCESO+A++INFORMACI%C3%93N+EN+INTERNET&btnG=&hl=es&as_sdt=0}
    }

  • Vaquero, A., Saenz, F., Alvarez, F., & Buenaga, M.. (2006). Conceptual design for domain and task specific ontology-based linguistic resources. Paper presented at On the move to meaningful internet systems 2006: coopis, doa, gada, and odbase.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.

    @INPROCEEDINGS{Vaquero2006,
    author = {Vaquero, Antonio and Saenz, Fernando and Alvarez, Francisco and Buenaga, Manuel},
    title = {Conceptual Design for Domain and Task Specific Ontology-Based Linguistic Resources},
    booktitle = {On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE},
    year = {2006},
    volume = {4275},
    series = {Lecture Notes in Computer Science},
    pages = {855-862},
    month = {November},
    publisher = {Springer Berlin Heidelberg},
    abstract = {Regardless of the knowledge representation schema chosen to implement a linguistic resource, conceptual design is an important step in its development. However, it is normally put aside by developing efforts as they focus on content, implementation and time-saving issues rather than on the software engineering aspects of the construction of linguistic resources. Based on an analysis of common problems found in linguistic resources, we present a reusable conceptual model which incorporates elements that give ontology developers the possibility to establish formal semantic descriptions for concepts and relations, and thus avoiding the aforementioned common problems. The model represents a step forward in our efforts to define a complete methodology for the design and implementation of ontology-based linguistic resources using relational databases and a sound software engineering approach for knowledge representation.},
    doi = {10.1007/11914853_52},
    url = {http://scholar.google.es/scholar?q=allintitle%3AConceptual+Design+for+Domain+and+Task+Specific+Ontology-Based+Linguistic+Resources&btnG=&hl=es&as_sdt=0}
    }
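
    One way to read the paper's call for "formal semantic descriptions for concepts and relations" is to give each relation an explicit domain and range type and reject ill-typed links at insert time. The relational sketch below (SQLite via Python's standard library) is our illustration only; the schema and sample rows are invented, not the paper's model.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE concept  (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE relation (name TEXT PRIMARY KEY, subject_type TEXT, object_type TEXT);
    """)
    db.execute("INSERT INTO concept VALUES (1,'aspirin','drug'),(2,'fever','symptom')")
    db.execute("INSERT INTO relation VALUES ('treats','drug','symptom')")

    def link(rel, subj, obj):
        """Accept a subject-relation-object link only if it is well typed."""
        dom, rng = db.execute(
            "SELECT subject_type, object_type FROM relation WHERE name=?",
            (rel,)).fetchone()
        for cid, expected in ((subj, dom), (obj, rng)):
            cat = db.execute(
                "SELECT category FROM concept WHERE id=?", (cid,)).fetchone()[0]
            if cat != expected:
                raise ValueError("ill-typed link: concept %s is %s, needs %s"
                                 % (cid, cat, expected))
        print("ok:", subj, "-%s->" % rel, obj)

    link("treats", 1, 2)   # ok: 1 -treats-> 2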

  • Gómez Hidalgo, J. M., Martín Abreu, J. M., García Bringas, P., & Santos Grueiro, I.. (2010). Content security and privacy preservation in social networks through text mining. Workshop on interoperable social multimedia applications (wisma 2010).
    [BibTeX] [Abstract] [Google Scholar]
    Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.

    @OTHER{GomezHidalgo2010,
    abstract = {Due to their huge popularity, Social Networks are increasingly being used as malware, spam and phishing propagation applications. Moreover, Social Networks are being widely recognized as a source of private (either corporate or personal) information leaks. Within the project Segur@, Optenet has developed a number of prototypes that deal with these problems, based on several techniques that share text mining as the underlying approach. These prototypes include a malware detection system based on Information Retrieval techniques, a compression-based spam filter, and a Data Leak Prevention system that makes use of Named Entity Recognition techniques.},
    address = {Barcelona},
    author = {Gómez Hidalgo, José María and Martín Abreu, José Miguel and García Bringas, Pablo and Santos Grueiro, Igor},
    journal = {Workshop on Interoperable Social Multimedia Applications (WISMA 2010)},
    title = {Content Security and Privacy Preservation in Social Networks through Text Mining},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Content+Security+and+Privacy+Preservation+in+Social+Networks+through+Text+Mining&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Prieto, M. L., Aparicio, F., Buenaga, M., Gachet Páez, D., & Gaya, M. C.. (2013). Cross-lingual intelligent information access system from clinical cases using mobile devices. Procesamiento del lenguaje natural, 50, 85-92.
    [BibTeX] [Abstract] [Google Scholar]
    Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which perform the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities. For the recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to those originally extracted, and search for scientific publications from MEDLINE/PubMed.

    @article{PLN4663,
    author = {Prieto , Maria Lorena and Aparicio , Fernando and Buenaga , Manuel and Gachet Páez, Diego and Gaya, Maria Cruz},
    title = {Cross-lingual intelligent information access system from clinical cases using mobile devices},
    journal = {Procesamiento del Lenguaje Natural},
    volume = {50},
    number = {0},
    pages = {85-92},
    year = {2013},
    keywords = {},
    abstract = {Over the last decade there has been a rapid growth of both the development of new smart mobile devices (Smartphone and Tablet) and their use (through many applications). Furthermore, in the biomedical field there are a greater number of resources in different formats, which can be exploited by using Intelligent Information Access Systems and techniques for information retrieval and extraction. This paper presents the development of a mobile interface access that, using different local knowledge sources (dictionaries and ontologies previously preprocessed), techniques of natural language processing and remote knowledge sources (which performs the annotation of entities in text inputted into the system via Web services), allows the cross-lingual extraction of medical concepts in English and Spanish, from a medical text in English or Spanish (e.g. a clinical case). The mobile application user can enter a medical text or a picture of it, resulting in a set of relevant medical entities.
    On recognized medical entities, extracted and displayed through the interface, the user can get more information on them, get more information from other concepts related to originally extracted and search for scientific publications from MEDLINE/PubMed.},
    issn = {1989-7553},
    url = {http://scholar.google.es/scholar?q=allintitle%3ACross-lingual+intelligent+information+access+system+from+clinical+cases+using++mobile+devices&btnG=&hl=es&as_sdt=0%2C5}}

  • Gachet Páez, D., & Campos Lorrio, T.. (1999). Design of real time software for industrial process control. Paper presented at the 1999 7th IEEE international conference on emerging technologies and factory automation, 1999. proceedings. ETFA ’99.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel’s State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok S.A., a large manufacturing company in Vigo, Spain.

    @inproceedings{paez_design_1999,
    title = {Design of real time software for industrial process control},
    volume = {2},
    doi = {10.1109/ETFA.1999.813133},
    abstract = {The paper describes the details of, and the experiences gained from, a case study undertaken by the authors on the design and implementation of a complex control system for a dosage industrial process used in a manufacturing industry. The goal was to demonstrate that industrial real time control systems could be implemented using a high level programming language and a suitable operating system. The software was designed using Harel's State Charts as the main tool and implemented on an Intel Pentium based system. Our results indicated that the system works correctly and is very flexible. The system has been successfully tested and is now in full production at Lignotok {S.A.}, a large manufacturing company in Vigo, Spain.},
    booktitle = {1999 7th {IEEE} International Conference on Emerging Technologies and Factory Automation, 1999. Proceedings. {ETFA} '99},
    author = {Gachet Páez, Diego and Campos Lorrio, Tomas},
    year = {1999},
    keywords = {case study, chemical technology, complex control system, Computer industry, Computer languages, Control systems, dosage industrial process, Electrical equipment industry, high level languages, high level programming language, Industrial control, industrial process control, industrial real time control systems, Intel Pentium based system, Lignotok, manufacturing company, manufacturing industries, manufacturing industry, operating system, Operating systems, operating systems (computers), process control, real time software design, Real time systems, real-time systems, Software Engineering, Software systems, Spain, State Charts},
    pages = {1259--1263 vol.2},
    url={http://scholar.google.es/scholar?q=allintitle%3A++Design+of+real+time+software+for+industrial+process+control&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Cortizo Pérez, J. C., & Giráldez, I.. (2004). Discovering data dependencies in web content mining. Paper presented at the Actas de la iadis international conference www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on bayesian learning. The Naïve Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the html), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects perfectly those synthetic attributes and also some “natural” dependent attributes, obtaining a more accurate classifier. (A sketch of this two-stage scheme follows this entry.)

    @INPROCEEDINGS{CortizoPerez2004,
    author = {Cortizo Pérez , José Carlos and Giráldez , Ignacio},
    title = {Discovering Data Dependencies in Web Content Mining},
    booktitle = {Actas de la IADIS International Conference WWW/Internet },
    year = {2004},
    pages = {6-9},
    abstract = {Web content mining opens up the possibility to use data presented in web pages for the discovery of interesting and useful patterns. Our web mining tool, FBL (Filtered Bayesian Learning), performs a two stage process: first it analyzes data present in a web page, and then, using information about the data dependencies encountered, it performs the mining phase based on bayesian learning. The Naïve Bayes classifier is based on the assumption that the attribute values are conditionally independent given the class. This makes it perform very well in some data domains, but poorly when attributes are dependent. In this paper, we try to identify those dependencies using linear regression on the attribute values, and then eliminate the attributes which are a linear combination of one or two others. We have tested this system on six web domains (extracting the data by parsing the html), where we have added a synthetic attribute which is a linear combination of two of the original ones. The system detects perfectly those synthetic attributes and also some “natural” dependent attributes, obtaining a more accurate classifier.},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADiscovering+Data+Dependencies+in+Web+Content+Mining&btnG=&hl=es&as_sdt=0}
    }
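
    The two-stage idea described above (detect attributes that are linear combinations of others, drop them, then train a Bayesian classifier) can be sketched as follows. The R²-based test, its 0.999 threshold and the Gaussian Naïve Bayes stage are assumptions made for illustration; the paper's exact criteria are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.naive_bayes import GaussianNB

    def drop_linear_combinations(X: np.ndarray, r2_threshold: float = 0.999) -> list:
        """Remove attributes that are (almost) exact linear combinations of
        the remaining ones, found by regressing each column on the others."""
        keep = list(range(X.shape[1]))
        for j in range(X.shape[1]):
            others = [k for k in keep if k != j]
            if not others:
                break
            reg = LinearRegression().fit(X[:, others], X[:, j])
            if reg.score(X[:, others], X[:, j]) >= r2_threshold:
                keep.remove(j)  # j carries no independent information
        return keep

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    X = np.hstack([X, 2 * X[:, [0]] + 3 * X[:, [1]]])  # synthetic dependent column
    y = (X[:, 0] + X[:, 2] > 0).astype(int)

    cols = drop_linear_combinations(X)     # one column of the dependent
    print(cols)                            # group {0, 1, 4} is dropped
    clf = GaussianNB().fit(X[:, cols], y)  # Bayesian stage on the filtered data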

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica. Informe técnico – departamento de informática y automática.
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996a,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {Informe técnico - Departamento de Informática y Automática},
    organization = {Universidad Complutense de Madrid},
    title = {Diseño de experimentos de categorización automática de textos basada en una colección de entrenamiento y una base de datos léxica},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+de+experimentos+de+categorizaci%C3%B3n+autom%C3%A1tica+de+textos+basada+en+una+colecci%C3%B3n+de+entrenamiento+y+una+base+de+datos+l%C3%A9xica&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Buenaga Rodriguez, M., Maña, M., Carrero, F., & Mata, J.. (2007). Diseño e integración de técnicas de categorización automática de textos para el acceso a la información bilingue en un ámbito biomédico. Vii jornada de seguimiento de proyectos en tecnologías informáticas.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez2007,
    address = {Zaragoza},
    author = {Buenaga Rodriguez , Manuel and Maña , Manuel and Carrero , Francisco and Mata , Jacinto},
    journal = {VII Jornada de Seguimiento de Proyectos en Tecnologías Informáticas},
    month = {September},
    title = {Diseño e Integración de Técnicas de Categorización Automática de Textos para el Acceso a la Información Bilingue en un Ámbito Biomédico},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+e+Integraci%C3%B3n+de+T%C3%A9cnicas+de+Categorizaci%C3%B3n+Autom%C3%A1tica+de+Textos+para+el+Acceso+a+la+Informaci%C3%B3n+Bilingue+en+un+%C3%81mbito+Biom%C3%A9dico&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }

  • Maña López, M. J., Buenaga Rodríguez, M., & Gómez Hidalgo, J. M.. (1998). Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información. Xiv congreso de la sociedad española de procesamiento de lenguaje natural(23), 32-39.
    [BibTeX] [Abstract] [Google Scholar]
    En este trabajo presentamos un generador de resúmenes que incorpora el modelado de las necesidades de información del usuario con el fin de crear resúmenes adaptados a las mismas. Los resúmenes se generan mediante la extracción de las frases que resultan mejor puntuadas bajo tres criterios: palabras clave, localización y título. El modelado del usuario se consigue a partir de las consultas a un sistema de Recuperación de Información y de la expansión de las mismas utilizando WordNet. Se presenta también un método de evaluación sistemático y objetivo que nos permite comparar la eficacia de los distintos tipos de resúmenes generados. Los resultados demuestran la mayor eficacia de los resúmenes adaptados a las consultas y los de aquellos que emplean WordNet.

    @OTHER{ManaLopez1998,
    abstract = {En este trabajo presentamos un generador de resúmenes que incorpora el modelado de las necesidades de información del usuario con el fin de crear resúmenes adaptados a las mismas. Los resúmenes se generan mediante la extracción de las frases que resultan mejor puntuadas bajo tres criterios: palabras clave, localización y título. El modelado del usuario se consigue a partir de las consultas a un sistema de Recuperación de Información y de la expansión de las mismas utilizando WordNet. Se presenta también un método de evaluación sistemático y objetivo que nos permite comparar la eficacia de los distintos tipos de resúmenes generados. Los resultados demuestran la mayor eficacia de los resúmenes adaptados a las consultas y los de aquellos que emplean WordNet.},
    author = {Maña López , Manuel J. and Buenaga Rodríguez , Manuel and Gómez Hidalgo , José María},
    editor = {Procesamiento del Lenguaje Natural},
    journal = {XIV Congreso de la Sociedad Española de Procesamiento de Lenguaje Natural},
    number = {23},
    pages = {32-39},
    title = {Diseño y evaluación de un generador de texto con modelado de usuario en un entorno de recuperación de información},
    url = {http://scholar.google.es/scholar?q=allintitle%3ADise%C3%B1o+y+evaluaci%C3%B3n+de+un+generador+de+texto+con+modelado+de+usuario+en+un+entorno+de+recuperaci%C3%B3n+de+informaci%C3%B3n.&btnG=&hl=es&as_sdt=0},
    year = {1998}
    }
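
    The extraction scheme this abstract describes (sentences ranked by keyword, location and title criteria) can be illustrated with a short sketch. The equal weighting of the three scores and the position heuristic are assumptions; the paper's user-model component (query expansion with WordNet) is omitted.

    import re
    from collections import Counter

    def summarize(text: str, title: str, k: int = 2) -> list:
        """Extract the k sentences that score best on three criteria:
        keyword frequency, position in the document and title overlap."""
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        freq = Counter(re.findall(r"\w+", text.lower()))
        title_words = set(re.findall(r"\w+", title.lower()))
        scored = []
        for i, s in enumerate(sentences):
            toks = re.findall(r"\w+", s.lower())
            kw = sum(freq[t] for t in toks) / max(len(toks), 1)            # keywords
            loc = 1.0 if i in (0, len(sentences) - 1) else 0.5             # location
            ttl = len(title_words & set(toks)) / max(len(title_words), 1)  # title
            scored.append((kw + loc + ttl, i, s))
        top = sorted(scored, reverse=True)[:k]
        return [s for _, i, s in sorted(top, key=lambda t: t[1])]  # original order

    doc = ("Automatic summarization extracts key sentences. "
           "Sentences are scored by keywords, position and title overlap. "
           "Other approaches generate abstracts instead.")
    print(summarize(doc, "Summarization by sentence extraction", k=2))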

  • Puertas Sanz, E., Gómez Hidalgo, J. M., & Cortizo Pérez, J. C.. (2008). Email spam filtering. In Zelkowitz, M. V. (Ed.), In Advances in computers (Vol. 74, pp. 45-114). Elsevier.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, which best suits classification problems in which target conditions are not known, as is the case here. We also describe how actual filters are used in practice. We also present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers. (A threshold-optimization sketch follows this entry.)

    @INCOLLECTION{PuertasSanz2008,
    author = {Puertas Sanz , Enrique and Gómez Hidalgo , José María and Cortizo Pérez , José Carlos},
    title = {Email Spam Filtering},
    booktitle = {Advances in Computers},
    publisher = {Elsevier},
    year = {2008},
    editor = {Marvin V. Zelkowitz},
    volume = {74},
    chapter = {3},
    pages = {45-114},
    abstract = {In recent years, email spam has become an increasingly important problem, with a big economic impact in society. In this work, we present the problem of spam, how it affects us, and how we can fight against it. We discuss legal, economic, and technical measures used to stop these unsolicited emails. Among all the technical measures, those based on content analysis have been particularly effective in filtering spam, so we focus on them, explaining how they work in detail. In summary, we explain the structure and the process of different Machine Learning methods used for this task, and how we can make them cost sensitive through several methods like threshold optimization, instance weighting, or MetaCost. We also discuss how to evaluate spam filters using basic metrics, TREC metrics, and the receiver operating characteristic convex hull method, which best suits classification problems in which target conditions are not known, as is the case here. We also describe how actual filters are used in practice. We also present different methods used by spammers to attack spam filters and what we can expect to find in the coming years in the battle of spam filters against spammers.},
    doi = {10.1016/S0065-2458(08)00603-7},
    isbn = {0065-2458},
    shorttitle = {Software Development},
    url = {http://scholar.google.es/scholar?as_q=Email+Spam+Filtering&as_epq=&as_oq=&as_eq=&as_occt=title&as_sauthors=Puertas&as_publication=&as_ylo=&as_yhi=&btnG=&hl=es&as_sdt=0},
    urldate = {2013-01-10}
    }
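
    Of the cost-sensitivity mechanisms the chapter surveys, threshold optimization is the simplest to illustrate: sweep candidate thresholds over held-out classifier scores and keep the one that minimizes expected cost, with false positives (legitimate mail lost) weighted more heavily than false negatives. The 9:1 cost ratio below is one convention used in spam-filter evaluation, assumed here purely for illustration.

    import numpy as np

    def optimal_threshold(scores, y_true, fp_cost=9.0, fn_cost=1.0):
        """Return the decision threshold on spam scores that minimizes
        total misclassification cost on a validation set."""
        scores, y_true = np.asarray(scores), np.asarray(y_true)
        best_t, best_cost = np.inf, np.inf  # np.inf threshold = block nothing
        for t in np.append(np.unique(scores), np.inf):
            pred = scores >= t
            cost = (fp_cost * np.sum(pred & (y_true == 0)) +   # ham blocked
                    fn_cost * np.sum(~pred & (y_true == 1)))   # spam let through
            if cost < best_cost:
                best_t, best_cost = t, cost
        return best_t

    # Toy validation scores; labels: 1 = spam, 0 = legitimate.
    scores = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
    labels = [0, 0, 1, 1, 1, 0]
    print(optimal_threshold(scores, labels))  # 0.8: misses one spam, blocks no ham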

  • Padrón Nápoles, V. M., Ugarte Suárez, M., Hussain Alanbari, M., & Gachet Páez, D.. (2006). Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales. Grafema.
    [BibTeX] [Google Scholar]
    @BOOK{PadronNapoles2006,
    title = {Estudio de las metodologías activas y experiencias de su introducción en las asignaturas de sistemas digitales},
    publisher = {Grafema},
    year = {2006},
    author = {Padrón Nápoles , Víctor Manuel and Ugarte Suárez , Marta and Hussain Alanbari , Mohammad and Gachet Páez , Diego},
    isbn = {9788493422561},
    language = {es},
    url = {http://www.google.es/search?tbm=bks&hl=es&q=Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales&btnG=#hl=es&tbm=bks&sclient=psy-ab&q=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&oq=%22Estudio+de+las+metodolog%C3%ADas+activas+y+experiencias+de+su+introducci%C3%B3n+en+las+asignaturas+de+sistemas+digitales%22&gs_l=serp.3...5065.6500.0.6805.2.2.0.0.0.0.0.0..0.0...0.2...1c.1.6.psy-ab.FXP1zEchBms&pbx=1&bav=on.2,or.r_qf.&bvm=bv.43828540,d.ZGU&fp=b9ef6759e3a8d17e&biw=1366&bih=653}
    }

  • Gómez Hidalgo, J. M.. (2010). Experiencias de investigación en la universidad y en la empresa. Novática. revista de la asociación de técnicos en informática(206).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo2010a,
    author = {Gómez Hidalgo , José María},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {206},
    title = {Experiencias de investigación en la universidad y en la empresa},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Experiencias+de+investigaci%C3%B3n+en+la+universidad+y+en+la+empresa&btnG=&hl=es&as_sdt=0},
    year = {2010}
    }

  • Gómez Hidalgo, J. M., Cortizo Pérez, J. C., Puertas Sanz, E., & Buenaga Rodríguez, M.. (2004). Experimentos en indexación conceptual para la categorización de texto. Paper presented at the Actas de la conferencia ibero-americana www/internet.
    [BibTeX] [Abstract] [Google Scholar]
    En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuales evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.

    @INPROCEEDINGS{GomezHidalgo2004a,
    author = {Gómez Hidalgo , José María and Cortizo Pérez , José Carlos and Puertas Sanz , Enrique and Buenaga Rodríguez , Manuel},
    title = {Experimentos en Indexación Conceptual para la Categorización de Texto},
    booktitle = {Actas de la Conferencia Ibero-Americana WWW/Internet },
    year = {2004},
    editor = {J. M. Gutiérrez and J. J. Martínez and P. Isaias},
    pages = {251-258},
    abstract = {En la Categorización de Texto (CT), una tarea de gran importancia para el acceso a la información en Internet y la World Wide Web, juega un papel fundamental el método de representación de documentos o indexación. La representación de los documentos en CT se basa generalmente en la utilización de raíces de palabras, excluyendo aquellas que aparecen en una lista de palabras frecuentes (modelo de lista de palabras). Este enfoque padece del problema habitual en Recuperación de Información (RI), la ambigüedad del lenguaje natural. En este artículo exploramos el potencial de la indexación mediante conceptos, utilizando synsets de WordNet, frente al modelo tradicional basado en lista de palabras, en el marco de la CT. Hemos realizado una serie de experimentos en los cuales evaluamos ambos modelos de indexación para la CT sobre la concordancia semántica Semcor. Los resultados permiten afirmar que la indexación mixta, usando lista de palabras y conceptos de WordNet, es significativamente más efectiva que ambos modelos por separado.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperimentos+en+Indexaci%C3%B3n+Conceptual+para+la+Categorizaci%C3%B3n+de+Texto&btnG=&hl=es&as_sdt=0}
    }
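
    The mixed indexing model this abstract reports as most effective (word terms plus WordNet concepts, combined) can be sketched with NLTK. The experiments used the sense annotations of SemCor; lacking those, the sketch falls back on WordNet's first sense as a crude disambiguation heuristic, which is an assumption, as is the use of the English WordNet.

    # Requires: pip install nltk, then nltk.download("wordnet") once.
    from nltk.corpus import wordnet as wn

    def mixed_index_terms(tokens: list) -> list:
        """Index a document by its words (word-list model) plus one WordNet
        synset identifier per word (concept model), combined."""
        terms = []
        for tok in tokens:
            terms.append(tok.lower())            # word-list term
            synsets = wn.synsets(tok)
            if synsets:                          # first sense as crude WSD
                terms.append(synsets[0].name())  # e.g. 'internet.n.01'
        return terms

    print(mixed_index_terms(["Internet", "document", "categorization"]))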

  • Gaya, M. C., & Giráldez, J. I.. (2008). Experiments in multi agent learning. 3rd international workshop on hybrid artificial intelligence systems, 5271, 78-85.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    Data sources are often dispersed geographically in real life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as that of the monolithic solution.

    @OTHER{Gaya2008,
    abstract = {Data sources are often dispersed geographically in real life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time in joining data sources. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is improved, as well as that of the monolithic solution.},
    address = { LNCS },
    author = {Gaya , Maria Cruz and Giráldez , José Ignacio},
    doi = {10.1007/978-3-540-87656-4_11},
    journal = {3rd International Workshop on Hybrid Artificial Intelligence Systems},
    pages = {78-85},
    publisher = {Springer Verlag},
    series = {Lecture Notes in Artificial Intelligence},
    title = {Experiments in Multi Agent Learning},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExperiments+in+Multi+Agent+Learning&btnG=&hl=es&as_sdt=0},
    volume = {5271},
    year = {2008}
    }

  • Cortizo, J. C., Gachet Páez, D., Buenaga, M., Maña, M., Puertas, E., & de la Villa, M.. (2008). Extending pubmed on tap by means of multidocument summarization. User-centric technologies and applications workshop.
    [BibTeX] [Abstract] [Google Scholar]
    Access to biomedical databases from pocket, hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard PDA application for retrieving information from Medline, the most important and most consulted bibliographic database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.

    @OTHER{Cortizo2008,
    abstract = {Access to biomedical databases from pocket, hand-held or tablet computers is a useful tool for health care professionals. PubMed on Tap is the standard PDA application for retrieving information from Medline, the most important and most consulted bibliographic database in the biomedical domain. In this paper we present a description of an intelligent information retrieval system that uses clustering and multidocument summarization techniques to improve aspects of PubMed on Tap.},
    author = {Cortizo , José Carlos and Gachet Páez, Diego and Buenaga , Manuel and Maña , Manuel and Puertas , Enrique and de la Villa , Manuel},
    journal = {User-centric Technologies and Applications Workshop },
    publisher = {User-centric Technologies and Applications Workshop – Madrinet},
    title = {Extending PubMed on Tap by means of MultiDocument Summarization},
    url = {http://scholar.google.es/scholar?q=allintitle%3AExtending+on+Tap+by+means+of+Summarization&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }

  • Cormack, G., Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2007). Feature engineering for mobile (sms) spam filtering. Paper presented at the Proceedings of the 30th annual international acm sigir conference.
    [BibTeX] [Abstract] [Google Scholar]
    Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top performing email spam filters on mobile spam messages with a suitable feature representation, with results supporting our hypothesis. (An illustrative feature-representation sketch follows this entry.)

    @INPROCEEDINGS{Cormack2007,
    author = {Cormack , Gordon and Gómez Hidalgo , José María and Puertas Sanz , Enrique},
    title = {Feature Engineering for Mobile (SMS) Spam Filtering},
    booktitle = {Proceedings of the 30th Annual International ACM SIGIR Conference},
    year = {2007},
    abstract = {Mobile spam is an increasing threat that may be addressed using filtering systems like those employed against email spam. We believe that email filtering techniques require some adaptation to reach good levels of performance on SMS spam, especially regarding message representation. In order to test this assumption, we have performed experiments on SMS filtering using top performing email spam filters on mobile spam messages with a suitable feature representation, with results supporting our hypothesis.},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFeature+Engineering+for+Mobile+%28SMS%29+Spam+Filtering&btnG=&hl=es&as_sdt=0}
    }
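
    The adapted message representation the paper argues for can be approximated with character n-gram features, which tolerate the abbreviations and obfuscations typical of SMS text better than plain word tokens. The particular feature set and the logistic-regression classifier below are illustrative assumptions, not the representation or the filters evaluated in the paper.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Character n-grams within word boundaries as SMS-oriented features.
    clf = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=True),
        LogisticRegression(max_iter=1000),
    )

    sms = ["FREE entry!! Txt WIN to 80086 now", "ok, see u at lunch then"]
    labels = [1, 0]  # 1 = spam, 0 = legitimate (a two-message toy corpus)
    clf.fit(sms, labels)
    print(clf.predict(["Txt CLAIM to 80086 for ur FREE prize"]))  # likely [1]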

  • Gómez Hidalgo, J. M., & Puertas Sanz, E.. (2009). Filtrado de pornografía usando análisis de imagen. Linux+ magazine(51), 62-67.
    [BibTeX] [Abstract] [Google Scholar]
    La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.

    @OTHER{GomezHidalgo2009,
    abstract = {La pornografía constituye, ya desde los comienzos de Internet, un tipo de contenidos muy extendido y fácilmente localizable. Tal es así, que la propia industria pornográfica ha cambiado para adaptarse a esta nueva realidad.},
    author = {Gómez Hidalgo , José María and Puertas Sanz , Enrique},
    journal = { Linux+ Magazine},
    month = {February},
    number = {51},
    pages = {62-67},
    title = {Filtrado de pornografía usando análisis de imagen},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Filtrado+de+pornograf%C3%ADa+usando+an%C3%A1lisis+de+imagen&btnG=&hl=es&as_sdt=0},
    year = {2009}
    }

  • Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (1996). Formalismos lógicos para el procesamiento del lenguaje natural. Xii congreso de lenguajes naturales y lenguajes formales, Seo de Urgel, Lérida (España).
    [BibTeX] [Google Scholar]
    @OTHER{GomezHidalgo1996b,
    author = {Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    journal = {XII Congreso de Lenguajes Naturales y Lenguajes Formales, Seo de Urgel, Lérida (España)},
    title = {Formalismos Lógicos para el Procesamiento del Lenguaje Natural},
    url = {http://scholar.google.es/scholar?q=allintitle%3AFormalismos+L%C3%B3gicos+para+el+Procesamiento+del+Lenguaje+Natural&btnG=&hl=es&as_sdt=0},
    year = {1996}
    }

  • Molina, M., & Flores, V.. (2006). Generating adaptive presentations of hydrologic behavior. Paper presented at the Ideal.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.

    @inproceedings{DBLP:conf/ideal/MolinaF06,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This paper describes a knowledge-based approach for summarizing and presenting the behavior of hydrologic networks. This approach has been designed for visualizing data from sensors and simulations in the context of emergencies caused by floods. It follows a solution for event summarization that exploits physical properties of the dynamic system to automatically generate summaries of relevant data. The summarized information is presented using different modes such as text, 2D graphics and 3D animations on virtual terrains. The presentation is automatically generated using a hierarchical planner with abstract presentation fragments corresponding to discourse patterns, taking into account the characteristics of the user who receives the information and constraints imposed by the communication devices (mobile phone, computer, fax, etc.). An application following this approach has been developed for a national hydrologic information infrastructure of Spain.},
    title = {Generating Adaptive Presentations of Hydrologic Behavior},
    booktitle = {IDEAL},
    year = {2006},
    pages = {896-903},
    doi = {10.1007/11875581_107},
    url = {http://scholar.google.es/scholar?q=allintitle%3AGenerating+Adaptive+Presentations+of+Hydrologic+Behavior&btnG=&hl=es&as_sdt=0%2C5}
    }

  • Molina, M., & Flores, V.. (2012). Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach. Expert syst. appl., 39(3), 2759-2770.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general enough to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.

    @article{DBLP:journals/eswa/MolinaF12,
    author = {Molina, Martin and Flores, Victor},
    abstract = {This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner to generate a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general enough to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that worked in continuous real time operation for more than 1 year, summarizing sensor data of a national hydrologic information system in Spain.},
    title = {Generating multimedia presentations that summarize the behavior of dynamic systems using a model-based approach},
    journal = {Expert Syst. Appl.},
    volume = {39},
    number = {3},
    year = {2012},
    pages = {2759-2770},
    doi = {10.1016/j.eswa.2011.08.135},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AGenerating+multimedia+presentations+that+summarize+the+behavior+of+dynamic+systems+using+a+model-based+approach&btnG=&lr=}
    }

  • Gachet Páez, D., Buenaga Rodríguez, M., Escribano Otero, J. J., & Rubio, M.. (2010). Helping elderly people and persons with disability to access the information society: the naviga project. The european ambient assisted living innovation alliance (aaliance) conference 2010.
    [BibTeX] [Google Scholar]
    @OTHER{GachetPaez2010,
    address = {Málaga},
    author = {Gachet Páez , Diego and Buenaga Rodríguez , Manuel and Escribano Otero , Juan José and Rubio , Margarita},
    journal = {The European Ambient Assisted Living Innovation Alliance (AALIANCE) Conference 2010},
    month = {March},
    title = {Helping elderly people and persons with disability to access the Information Society: the Naviga Project},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+elderly+people+and+persons+with+disability+to+access+the+Information+Society%3A+the+Naviga+Project&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Gachet Páez, D., Buenaga, M., Padrón, V., & Alanbari, M.. (2010). Helping elderly people and persons with disability to access the information society. In Ambient intelligence and future trends-international symposium on ambient intelligence (Vol. 72, pp. 189-192). Springer Berlin / Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted at social networks, mind training and personalized health care.

    @OTHER{Gachet2010a,
    abstract = {NAVIGA is a European project whose main goal is to design and develop a technological platform allowing elderly people and persons with disability to access the Internet and the Information Society through an innovative and adaptable navigator. NAVIGA also allows the creation of services targeted at social networks, mind training and personalized health care.},
    author = {Gachet Páez, Diego and Buenaga , Manuel and Padrón , Víctor and Alanbari , Mohammad},
    booktitle = {Ambient Intelligence and Future Trends-International Symposium on Ambient Intelligence},
    doi = {10.1007/978-3-642-13268-1_23},
    pages = {189-192},
    publisher = {Springer Berlin / Heidelberg},
    series = {Advances in Soft Computing},
    title = {Helping Elderly People and Persons with Disability to Access the Information Society},
    url = {http://scholar.google.es/scholar?q=allintitle%3AHelping+Elderly+People+and+Persons+with+Disability+to+Access+the+Information+Society&btnG=&hl=es&as_sdt=0},
    volume = {72},
    year = {2010}
    }

  • López-Fernández, H., Aparicio Galisteo, F., Glez-Peña, D., Buenaga Rodríguez, M., & Fdez-Riverola, F.. (2011). Herramienta biomédica de anotación y acceso inteligente a información. Iii jornada gallega de bioinformática.
    [BibTeX] [Google Scholar]
    @OTHER{Lopez-Fernandez2011,
    address = {Vigo},
    author = {López-Fernández , H and Aparicio Galisteo , Fernando and Glez-Peña , D and Buenaga Rodríguez , Manuel and Fdez-Riverola , F},
    journal = {III Jornada Gallega de Bioinformática},
    month = {September},
    title = {Herramienta biomédica de anotación y acceso inteligente a información},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Herramienta+biom%C3%A9dica+de+anotaci%C3%B3n+y+acceso+inteligente+a+informaci%C3%B3n&btnG=&hl=es&as_sdt=0},
    year = {2011}
    }

  • Gachet Páez, D., Aparicio, F., Buenaga, M., & Rubio, M.. (2013). Highly personalized health services using cloud and sensors. In Proceedings of the 2013 seventh international conference on innovative mobile and internet services in ubiquitous computing (pp. 451-455). IEEE Computer Society.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, based on cloud computing and vital signs sensors.

    @INCOLLECTION{Gachet2013b,
    author = {Gachet Páez, Diego and Aparicio, Fernando and Buenaga, Manuel and Rubio, Margarita},
    title = {Highly Personalized Health Services Using Cloud and Sensors},
    booktitle = { Proceedings of the 2013 Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing},
    publisher = {IEEE Computer Society},
    year = {2013},
    pages = {451-455},
    month = {July},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, based on cloud computing and vital signs sensors.},
    copyright = {©2013 IEEE},
    doi = {10.1109/IMIS.2013.81},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?hl=es&q=allintitle%3AHighly+Personalized+Health+Services+Using+Cloud+and+Sensors&btnG=&lr=},
    urldate = {2014-01-01}
    }

  • Valverde, R., & Gachet Páez, D.. (2007). Identificación de sistemas dinámicos utilizando redes neuronales rbf. Revista iberoamericana de automática e informática industrial, 32-42.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.

    @OTHER{Valverde2007,
    abstract = {La identificación de sistemas complejos y no-lineales ocupa un lugar importante en las arquitecturas de neurocontrol, como por ejemplo el control inverso, control adaptativo directo e indirecto, etc. Es habitual en esos enfoques utilizar redes neuronales “feedforward” con memoria en la entrada (Tapped Delay) o bien redes recurrentes (modelos de Elman o Jordan) entrenadas off-line para capturar la dinámica del sistema (directa o inversa) y utilizarla en el lazo de control. En este artículo presentamos un esquema de identificación basado en redes del tipo RBF (Radial Basis Function) que se entrena on-line y que dinámicamente modifica su estructura (número de nodos o elementos en la capa oculta) permitiendo una implementación en tiempo real del identificador en el lazo de control.},
    author = {Valverde , Ricardo and Gachet Páez, Diego},
    doi = {10.4995/riai.v4i2.8023},
    journal = {Revista Iberoamericana de Automática e Informática industrial},
    pages = {32-42},
    publisher = {IFAC},
    title = {Identificación de Sistemas Dinámicos Utilizando Redes Neuronales RBF},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIdentificaci%C3%B3n+de+Sistemas+Din%C3%A1micos+Utilizando+Redes+Neuronales+RBF&btnG=&hl=es&as_sdt=0},
    year = {2007}
    }
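
    The kind of structurally adaptive RBF identifier this abstract describes can be illustrated with a resource-allocating network in the style of Platt (1991): a hidden node is added when a sample is both poorly predicted and far from existing centers, and the output weights are otherwise adapted online. The growth thresholds and learning rate below are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    class OnlineRBF:
        """Minimal resource-allocating RBF network for online identification."""

        def __init__(self, width=0.5, err_tol=0.1, dist_tol=0.3, lr=0.1):
            self.centers, self.weights = [], []
            self.width, self.err_tol, self.dist_tol, self.lr = width, err_tol, dist_tol, lr

        def _phi(self, x):
            """Gaussian activations of all hidden nodes for input x."""
            return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / self.width ** 2)
                             for c in self.centers])

        def predict(self, x):
            return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

        def update(self, x, y):
            """One online step: grow a hidden node or adapt the output weights."""
            err = y - self.predict(x)
            dist = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
            if abs(err) > self.err_tol and dist > self.dist_tol:
                self.centers.append(np.asarray(x, dtype=float))  # allocate new node
                self.weights.append(err)
            elif self.centers:
                self.weights = list(np.array(self.weights) + self.lr * err * self._phi(x))

    # Identify y = sin(x) from a stream of samples.
    net = OnlineRBF()
    for x in np.linspace(0, 2 * np.pi, 200):
        net.update(np.array([x]), np.sin(x))
    print(len(net.centers), round(net.predict(np.array([1.0])), 3))  # nodes grown, approx. sin(1)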

  • Gaya López, M. C., Aparicio Galisteo, F., Villalba Benito, M. T., Gomez Fernandez, E., Ferrari Golinelli, G., Redondo Duarte, S., & Iniesta Casanova, J.. (2013). Improving accessibility in discussion forums. Paper presented at the Inted2013 proceedings.
    [BibTeX] [Google Scholar]
    @InProceedings{GAYALOPEZ2013IMP,
    author = {Gaya L{\'{o}}pez, Maria Cruz and Aparicio Galisteo, Fernando and Villalba Benito, M.T. and Gomez Fernandez, Estrella and Ferrari Golinelli, G. and Redondo Duarte, S. and Iniesta Casanova, Jesus},
    title = {Improving Accessibility In Discussion Forums},
    series = {7th International Technology, Education and Development Conference},
    booktitle = {INTED2013 Proceedings},
    isbn = {978-84-616-2661-8},
    issn = {2340-1079},
    publisher = {IATED},
    location = {Valencia, Spain},
    month = {4-5 March, 2013},
    year = {2013},
    pages = {6658-6665},
    url={http://scholar.google.es/scholar?hl=es&q=allintitle%3A+IMPROVING+ACCESSIBILITY+IN+DISCUSSION+FORUMS&btnG=&lr=}
    }

  • Gachet Páez, D., Padrón, V., Buenaga, M., & Aparicio, F.. (2013). Improving health services using cloud computing, big data and wireless sensors. In Nugent, C., Coronato, A., & Bravo, J. (Ed.), In Ambient assisted living and active aging (Vol. 8277, pp. 33-38). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.

    @INCOLLECTION{Gachet2013a,
    author = {Gachet Páez, Diego and Padrón, Víctor and Buenaga, Manuel and Aparicio, Fernando},
    title = {Improving Health Services Using Cloud Computing, Big Data and Wireless Sensors},
    booktitle = {Ambient Assisted Living and Active Aging},
    publisher = {Springer Berlin Heidelberg},
    year = {2013},
    editor = {Nugent, Christopher and Coronato, Antonio and Bravo, José},
    volume = {8277},
    series = {Lecture Notes in Computer Science},
    pages = {33-38},
    month = {December},
    abstract = {In a society characterized by an aging population and economic crisis, it is desirable to reduce the costs of public healthcare systems. It is increasingly necessary to streamline health system resources, leading to the development of new medical services such as telemedicine, monitoring of chronic patients, personalized health services, new services for dependants, etc. Those new applications and services will significantly increase the volume of health information to manage, including data from medical and biological sensors, contextual information, health records, reference information, etc., which in turn requires the availability of health applications anywhere and at any time; access to medical information must also be pervasive and mobile. In this paper we propose one potential solution for creating those new services, especially in outdoor environments, based on cloud computing and vital signs monitoring.},
    copyright = {©2013 Springer Berlin Heidelberg},
    doi = {10.1007/978-3-319-03092-0_5},
    isbn = {978-3-319-03091-3},
    url = {http://scholar.google.es/scholar?q=allintitle%3AImproving+Health+Services+Using+Cloud+Computing%2C+Big+Data+and+Wireless+Sensors&btnG=&hl=es&as_sdt=0%2C5},
    urldate = {2014-01-01}
    }

  • Carrero, F., Cortizo, J. C., Gómez, J. M., & Buenaga, M.. (2008). In the development of a spanish metamap. Proceedings of the acm 17th conference on information and knowledge management.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context, the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx. (A toy pipeline sketch follows this entry.)

    @OTHER{Carrero2008a,
    abstract = {MetaMap is an online application that allows mapping text to UMLS Metathesaurus concepts, which is very useful for interoperability among different languages and systems within the biomedical domain. MetaMap Transfer (MMTx) is a Java program that makes MetaMap available to biomedical researchers. Currently there is no Spanish version of MetaMap, which makes it difficult to use the UMLS Metathesaurus to extract concepts from Spanish biomedical texts. Our ongoing research is mainly focused on using biomedical concepts for cross-lingual text classification and retrieval [3]. In this context, the use of concepts instead of a bag-of-words representation allows us to face text classification tasks abstracting from the language [4]. In this paper we evaluate the possibility of combining automatic translation techniques with the use of biomedical ontologies to produce an English text that can be processed by MMTx.},
    author = {Carrero , Francisco and Cortizo , José Carlos and Gómez , José María and Buenaga , Manuel},
    doi = {10.1145/1458082.1458335},
    journal = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    publisher = {Proceedings of the ACM 17th Conference on Information and Knowledge Management},
    title = {In the development of a Spanish Metamap},
    url = {http://scholar.google.es/scholar?q=allintitle%3AIn+the+development+of+a+Spanish+Metamap&btnG=&hl=es&as_sdt=0},
    year = {2008}
    }
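
    The pipeline this abstract proposes (translate Spanish text to English, then map the English terms to UMLS Metathesaurus concepts, which is MetaMap/MMTx's role) can be mocked with toy lookup tables. Both dictionaries below, and the concept identifiers in them, are illustrative stand-ins: a real system would call a machine-translation engine and MMTx against the actual Metathesaurus.

    # Toy stand-ins for the MT step and the UMLS Metathesaurus lookup.
    ES_EN = {"corazón": "heart", "pulmón": "lung", "fiebre": "fever"}
    UMLS = {"heart": "C0018787", "lung": "C0024109", "fever": "C0015967"}  # illustrative CUIs

    def map_spanish_text(text: str) -> list:
        """Translate Spanish tokens to English, then map each English term
        to a Metathesaurus concept identifier (MetaMap's role)."""
        concepts = []
        for tok in text.lower().split():
            en = ES_EN.get(tok.strip(".,;"))
            if en in UMLS:
                concepts.append((en, UMLS[en]))
        return concepts

    print(map_spanish_text("Paciente con fiebre y dolor en el corazón."))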

  • Buenaga Rodriguez, M., Fernández Manjón, B., & Fernández Valmayor, A.. (1995). Information overload at the information age. Adults in innovative learning situations, 17-30.
    [BibTeX] [Google Scholar]
    @OTHER{BuenagaRodriguez1995,
    author = {Buenaga Rodriguez , Manuel and Fernández Manjón , Baltasar and Fernández Valmayor , A},
    journal = {Adults in Innovative Learning Situations},
    pages = {17-30},
    title = {Information Overload at the Information Age},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+Information+Overload+at+the+Information+Age&btnG=&hl=es&as_sdt=0},
    year = {1995}
    }

  • Ureña López, L. A., Gómez Hidalgo, J. M., & Buenaga Rodríguez, M.. (2000). Information retrieval by means of word sense disambiguation. Third international workshop on text, speech and dialogue, Brno, Czech Republic, 1902, 93-98.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the thesaurus WordNET. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.

    @OTHER{UrenaLopez2000,
    abstract = {The increasing problem of information overload can be reduced by the improvement of information access tasks like Information Retrieval. Relevance Feedback plays a key role in this task, and is typically based only on the information extracted from documents judged by the user for a given query. We propose to make use of a thesaurus to complement this information to improve RF. This must be done by means of a Word Sense Disambiguation process that correctly identifies the suitable information from the thesaurus WordNET. The results of our experiments show that the utilisation of a thesaurus requires Word Sense Disambiguation, and that with this process, Relevance Feedback is substantially improved.},
    author = {Ureña López , Luis Alfonso and Gómez Hidalgo , José María and Buenaga Rodríguez , Manuel},
    booktitle = {Text, Speech and Dialogue},
    doi = {10.1007/3-540-45323-7_16},
    journal = {Third International Workshop on TEXT, SPEECH and DIALOGUE, Brno, Czech Republic},
    month = {September 13-16},
    pages = {93-98},
    title = {Information Retrieval by means of Word Sense Disambiguation},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformation+Retrieval+by+means+of+Word+Sense+Disambiguation&btnG=&hl=es&as_sdt=0},
    volume = {1902},
    year = {2000}
    }
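
    The WordNet-assisted relevance feedback the paper evaluates can be hinted at with a small query-expansion sketch: each query term is expanded with synonyms of the sense whose gloss best overlaps the user-judged relevant text. The gloss-overlap (Lesk-style) selection and the stopword list are simplifying assumptions; the paper's disambiguation method is more elaborate.

    # Requires: pip install nltk, then nltk.download("wordnet") once.
    from nltk.corpus import wordnet as wn

    STOP = {"the", "a", "of", "was", "and", "to", "in"}

    def expand_query(query_terms: list, feedback_text: str) -> list:
        """Expand query terms with synonyms of the WordNet sense whose gloss
        overlaps most with relevant (user-judged) text: crude, Lesk-style WSD."""
        context = set(feedback_text.lower().split()) - STOP
        expanded = list(query_terms)
        for term in query_terms:
            senses = wn.synsets(term)
            if not senses:
                continue
            best = max(senses,
                       key=lambda s: len(context & set(s.definition().lower().split())))
            expanded += [l.name().replace("_", " ")
                         for l in best.lemmas() if l.name() != term]
        return expanded

    print(expand_query(["bank"], "the river bank was flooded after heavy rain"))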

  • Giráldez, I., & Gachet Páez, D.. (2009). Informatización de procesos de negocio mediante la ejecución de su modelo gráfico. Novática, 201, 61-64.
    [BibTeX] [Google Scholar]
    @OTHER{Giraldez2009,
    author = {Giráldez , Ignacio and Gachet Páez, Diego},
    booktitle = {Novática},
    pages = {61-64},
    title = {Informatización de procesos de negocio mediante la ejecución de su modelo gráfico},
    url = {http://scholar.google.es/scholar?q=allintitle%3AInformatizaci%C3%B3n+de+procesos+de+negocio+mediante+la+ejecuci%C3%B3n+de+su+%09modelo+gr%C3%A1fico&btnG=&hl=es&as_sdt=0},
    volume = {201},
    year = {2009}
    }

  • Gachet Páez, D., Aparicio, F., Ascanio, J. R., & Beaterio, A.. (2012). Innovative health services using cloud computing and internet of things. In Ubiquitous computing and ambient intelligence (pp. 415-421). Springer Berlin Heidelberg.
    [BibTeX] [Abstract] [Ver publicacion] [Google Scholar]
    The demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence; it is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependents and chronic patients, using technologies associated with the internet of things and cloud computing.

    @INCOLLECTION{Paez2012,
    author = {Gachet Páez, Diego and Aparicio , Fernando and Ascanio , Juan R. and Beaterio , Alberto},
    title = {Innovative Health Services Using Cloud Computing and Internet of Things},
    booktitle = {Ubiquitous Computing and Ambient Intelligence},
    publisher = {Springer Berlin Heidelberg},
    year = {2012},
    series = {Lecture Notes in Computer Science},
    pages = {415-421},
    month = {jan},
    abstract = {The demographic and social changes are causing a gradual increase of the population in a situation of dependency. The main concern of elderly people is their health and its consequences in terms of dependence; it is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, we need a deep change in national health policy to adapt to population ageing. This paper describes the preliminary advances of 'Virtual Cloud Carer' (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependents and chronic patients, using technologies associated with the internet of things and cloud computing.},
    copyright = {©2012 Springer-Verlag Berlin Heidelberg},
    doi = {10.1007/978-3-642-35377-2_58},
    isbn = {978-3-642-35376-5, 978-3-642-35377-2},
    url = {http://scholar.google.es/scholar?q=allintitle%3A+%22innovative+health+services+using+cloud+computing+and+internet+of+things%22&btnG=&hl=es&as_sdt=0},
    urldate = {2012-12-21}
    }

  • Gachet Páez, D., Exposito, D., Ascanio, J. R., & Garcia Leiva, R.. (2010). Integración de servicios inteligentes de e-salud y acceso a la información para personas mayores. Novática. revista de la asociación de técnicos en informática(208).
    [BibTeX] [Google Scholar]
    @OTHER{GachetNovatica2010a,
    author = {Gachet Páez, Diego and Exposito, Diego and Ascanio, Juan Ramon and Garcia Leiva, Rafael},
    journal = {Novática. Revista de la Asociación de Técnicos en Informática},
    number = {208},
    title = {Integración de servicios inteligentes de e-salud y acceso a la información para personas mayores},
    url = {http://scholar.google.es/scholar?q=Novatica+Integracion+de+servicios+inteligentes+de+e-salud+y+acceso+a+la+informacion+para+personas+mayores&btnG=&hl=es&as_sdt=0%2C5},
    year = {2010}
    }

  • Ureña López, L. A., Gómez Hidalgo, J. M., García Vega, M., & Díaz Esteban, A.. (1998). Integrando una base de datos léxica y una colección de entrenamiento para la desambiguación del sentido de las palabras. Xiv congreso de la sociedad española de procesamiento de lenguaje natural, 23.
    [BibTeX] [Abstract] [Google Scholar]
    La resolución de la ambigüedad es una tarea compleja y útil para muchas aplicaciones del procesamiento del lenguaje natural. En concreto, la ambigüedad causa probl