Creating personalised energy plans: from groups to individuals using Fuzzy C Means Clustering
Changes in the UK electricity market mean that domestic users will be required to modify their usage behaviour in accordance with energy efficiency targets. Clustering allows usage data, collected at the household level, to be grouped and assigned a stereotypical profile which may be used to provide individually tailored energy plans. Fuzzy C-Means extends previous work based around crisp K-means clustering by allowing a household to be a member of multiple customer profile groups to different degrees, thus providing the opportunity to make personalised offers to the household dependent on its degree of membership of each group. In addition, feedback can be provided on how a household's changing behaviour is moving it towards more "green" or cost-effective stereotypical usage.
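The membership update at the heart of the approach can be illustrated with a minimal Fuzzy C-Means sketch in NumPy. This is not the authors' implementation; the toy usage profiles, the number of clusters, and the fuzzifier m=2 are assumptions for illustration only.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster the rows of X into c fuzzy groups with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per household
    for _ in range(max_iter):
        Um = U ** m
        # Membership-weighted centroids
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ij ∝ d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical daily usage profiles (4 periods per day, arbitrary units)
X = np.array([[0.2, 0.1, 0.9, 0.8],
              [0.3, 0.2, 1.0, 0.7],
              [1.1, 0.9, 0.2, 0.1],
              [1.0, 1.0, 0.3, 0.2]])
centers, U = fuzzy_c_means(X, c=2)  # U[i, j]: degree to which household i fits profile j
```

Unlike crisp K-means, each row of U gives graded memberships across all profile groups, which is what enables offers weighted by degree of membership.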
Big data analytics: Computational intelligence techniques and application areas
Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.
A Comprehensive Review on Machine Learning Based Models for Healthcare Applications
At present, there has been significant progress in AI and machine learning, particularly in the medical sector. Artificial intelligence refers to computer programs that replicate and simulate human intelligence, such as an individual's problem-solving capabilities or capacity for learning. Machine learning can be considered a subfield within the broader domain of artificial intelligence; it automatically identifies and analyses patterns within raw data. The objective of this work is to help researchers acquire extensive knowledge of machine learning and its utilisation within the healthcare domain. The review begins by providing a categorization of machine-learning-based methodologies for healthcare. According to the taxonomy we put forth, machine learning approaches in the healthcare domain are classified along several dimensions: the methods employed to prepare data for analysis, including data cleansing and data compression techniques; the learning strategies utilised, such as reinforcement learning, semi-supervised learning, supervised learning, and unsupervised learning; the evaluation approaches employed, encompassing simulation-based evaluation as well as evaluation of actual use in everyday situations; and, lastly, the applications of these ML-based methods in medicine, which pertain to diagnosis and treatment. Based on the classification we have put forward, we examine a selection of studies presented in the framework of machine learning applications within the healthcare domain. This review paper serves as a valuable resource for researchers seeking to gain familiarity with the latest research on ML applications in medicine.
It aids in recognizing the obstacles and limitations associated with ML in this domain, while also facilitating the identification of potential future research directions.
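As a minimal illustration of the supervised-learning strategy named in the taxonomy above, the sketch below classifies patient feature vectors with a from-scratch k-nearest-neighbours vote. The features, labels, and choice of k are invented for illustration and do not come from any study in the review.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical, already-normalized patient features: [age, biomarker level]
train_X = [[0.10, 0.20], [0.20, 0.10], [0.90, 0.80], [0.80, 0.90]]
train_y = ["low risk", "low risk", "high risk", "high risk"]

pred = knn_predict(train_X, train_y, [0.15, 0.15])
```

In the taxonomy's terms, the data-preparation step here is the (assumed) prior normalization of features, and evaluation would compare such predictions against held-out labels.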
A Colour Wheel to Rule them All: Analysing Colour & Geometry in Medical Microscopy
Personalized medicine is a rapidly growing field in healthcare that aims to customize
medical treatments and preventive measures based on each patient’s unique characteristics,
such as their genes, environment, and lifestyle factors. This approach
acknowledges that people with the same medical condition may respond differently
to therapies and seeks to optimize patient outcomes while minimizing the risk
of adverse effects.
To achieve these goals, personalized medicine relies on advanced technologies,
such as genomics, proteomics, metabolomics, and medical imaging. Digital
histopathology, a crucial aspect of medical imaging, provides clinicians with valuable
insights into tissue structure and function at the cellular and molecular levels. By
analyzing small tissue samples obtained through minimally invasive techniques, such
as biopsy or aspirate, doctors can gather extensive data to evaluate potential diagnoses
and clinical decisions. However, digital analysis of histology images presents
unique challenges, including the loss of 3D information and stain variability, which
is further complicated by sample variability. Limited access to data exacerbates
these challenges, making it difficult to develop accurate computational models for
research and clinical use in digital histology.
Deep learning (DL) algorithms have shown significant potential for improving the
accuracy of Computer-Aided Diagnosis (CAD) and personalized treatment models,
particularly in medical microscopy. However, factors such as limited generalizability,
lack of interpretability, and bias sometimes hinder their clinical impact. Furthermore,
the inherent variability of histology images complicates the development of robust DL
methods. Thus, this thesis focuses on developing new tools to address these issues.
Our essential objective is to create transparent, accessible, and efficient methods
based on classical principles from various disciplines, including histology, medical
imaging, mathematics, and art, to tackle microscopy image registration and colour
analysis successfully. These methods can contribute significantly to the advancement
of personalized medicine, particularly in studying the tumour microenvironment
for diagnosis and therapy research.
First, we introduce a novel automatic method for colour analysis and non-rigid
histology registration, enabling the study of heterogeneity morphology in tumour
biopsies. This method achieves accurate tissue-section registration, drastically reducing
landmark distance and achieving excellent border overlap.
Second, we introduce ABANICCO, a novel colour analysis method that combines
geometric analysis, colour theory, fuzzy colour spaces, and multi-label systems
for automatically classifying pixels into a set of conventional colour categories.
ABANICCO outperforms benchmark methods in accuracy and simplicity. It is
computationally straightforward, making it useful in scenarios involving changing
objects, limited data, unclear boundaries, or when users lack prior knowledge of
the image or colour theory. Moreover, results can be modified to match each
particular task.
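ABANICCO's polar partition of the colour wheel is its own geometric contribution; purely to illustrate the general idea of mapping pixels to conventional colour categories, here is a naive hue-sector sketch. The sector boundaries, and the darkness/desaturation thresholds, are assumptions for illustration, not ABANICCO's.

```python
import colorsys

# Hypothetical hue sectors (upper bound in degrees -> colour name); ABANICCO
# derives its partition geometrically rather than from fixed cut-offs like these.
SECTORS = [(15, "red"), (45, "orange"), (70, "yellow"), (170, "green"),
           (260, "blue"), (330, "purple"), (360, "red")]

def colour_category(r, g, b):
    """Map an RGB pixel (0-255 channels) to a coarse colour-wheel category."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.15:
        return "black"                            # too dark for a reliable hue
    if s < 0.15:
        return "white" if v > 0.85 else "grey"    # too desaturated for a hue
    degrees = h * 360                             # hue angle on the colour wheel
    return next(name for upper, name in SECTORS if degrees < upper)
```

A fuzzy variant would return graded memberships near sector boundaries instead of a single hard label, which is closer in spirit to the fuzzy colour spaces mentioned above.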
Third, we apply the acquired knowledge to create a novel pipeline of rigid
histology registration and ABANICCO colour analysis for the in-depth study of
triple-negative breast cancer biopsies. The resulting heterogeneity map and tumour
score provide valuable insights into the composition and behaviour of the tumour,
informing clinical decision-making and guiding treatment strategies.
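Rigid registration of the kind used in this pipeline can be sketched, for paired landmarks, with the classical Kabsch/Procrustes least-squares solution. This is a generic textbook illustration under assumed landmark correspondences, not the thesis method.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src landmarks onto dst."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)       # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))              # guard against reflections
    R = (U @ np.diag([1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate landmarks by 30 degrees and translate, then recover
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

Unlike non-rigid schemes, this transform cannot deform tissue locally, which is exactly the trade-off between registration accuracy and preservation of local morphology discussed above.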
Finally, we consolidate the developed ideas into an efficient pipeline for tissue
reconstruction and multi-modality data integration on Tuberculosis infection data.
This enables accurate element distribution analysis to understand better interactions
between bacteria, host cells, and the immune system during the course of infection.
The methods proposed in this thesis represent a transparent approach to computational
pathology, addressing the needs of medical microscopy registration and
colour analysis while bridging the gap between clinical practice and computational
research. Moreover, our contributions can help develop and train better, more
robust DL methods.
In an era in which personalized medicine is revolutionizing healthcare, it is increasingly important to tailor treatments and preventive measures to each patient's genetic makeup, environment, and lifestyle. By employing advanced technologies such as genomics, proteomics, metabolomics, and medical imaging, personalized medicine strives to streamline treatment in order to improve outcomes and reduce side effects.
Medical microscopy, a crucial aspect of personalized medicine, allows clinicians to collect and analyse large amounts of data from small tissue samples. This is especially relevant in oncology, where cancer therapies can be optimized according to the specific tissue appearance of each tumour. Computational pathology, a subfield of computer vision, seeks to create algorithms for the digital analysis of biopsies. However, before a computer can analyse medical microscopy images, several steps must be followed to obtain images of the samples.
The first stage consists of collecting and preparing a tissue sample from the patient. So that it can be easily observed under the microscope, it is cut into ultra-thin sections. However, this delicate procedure is not without difficulties. The fragile tissues can become distorted, torn, or perforated, compromising the overall integrity of the sample.
Once the tissue is properly prepared, it is usually treated with characteristic coloured stains. These stains accentuate different cell and tissue types with specific colours, making it easier for medical professionals to identify particular features. However, this improved visualization comes at a high cost. The stains can sometimes hinder computational analysis of the images by mixing improperly, bleeding into the background, or altering the contrast between different elements.
The final step of the process is digitizing the sample. High-resolution images of the tissue are taken at different magnifications, enabling computer analysis. This stage also has its obstacles. Factors such as incorrect camera calibration or inadequate lighting conditions can distort or blur the images. In addition, the resulting whole-slide images are considerably large, which further complicates the analysis.
Overall, while the preparation, staining, and digitization of medical microscopy samples are fundamental to digital analysis, each of these steps can introduce additional challenges that must be addressed to ensure accurate analysis. Moreover, converting a complete tissue volume into a few stained sections drastically reduces the available 3D information and introduces great uncertainty.
Deep learning (DL) solutions hold great promise in the field of personalized medicine, but their clinical impact is sometimes hindered by factors such as limited generalizability, overfitting, opacity, and lack of interpretability, as well as ethical concerns and, in some cases, private incentives. Furthermore, the variability of histology images complicates the development of robust DL methods. To overcome these challenges, this thesis presents a series of highly robust and interpretable methods, based on classical principles from histology, medical imaging, mathematics, and art, for aligning microscopy sections and analysing their colours.
Our first contribution is ABANICCO, an innovative colour analysis method that provides objective, unsupervised colour segmentation and allows its subsequent refinement through user-friendly tools. ABANICCO's accuracy and efficiency have been shown to be superior to those of existing colour classification and segmentation methods, and it even excels at detecting and segmenting whole objects. ABANICCO can be applied to microscopy images to detect stained areas for biopsy quantification, a crucial aspect of cancer research.
The second contribution is an automatic, unsupervised tissue segmentation method that identifies and removes the background and artefacts from microscopy images, thereby improving the performance of more sophisticated image analysis techniques. This method is robust across diverse images, stains, and acquisition protocols, and requires no training.
The third contribution consists of developing novel methods to register histopathology images efficiently, striking the right balance between accurate registration and preservation of local morphology, depending on the intended application.
As a fourth contribution, the three methods above are combined into efficient pipelines for the complete integration of volumetric data, creating highly interpretable visualizations of all the information present in consecutive tissue biopsy sections. This data integration can have a major impact on the diagnosis and treatment of various diseases, particularly breast cancer, by enabling early detection, accurate clinical testing, effective treatment selection, and improved communication and engagement with patients.
Finally, we apply our findings to multi-modal data integration and tissue reconstruction for the precise analysis of the distribution of chemical elements in tuberculosis, shedding light on the complex interactions between bacteria, host cells, and the immune system during tuberculous infection. This method also addresses problems such as acquisition damage, typical of many imaging modalities.
In summary, this thesis demonstrates the application of classical computer vision methods to medical microscopy registration and colour analysis in order to address the unique challenges of this field, with an emphasis on effective, accessible visualization of complex data. We aspire to continue refining our work through extensive technical validation and improved data analysis. The methods presented in this thesis are characterized by their clarity, accessibility, effective data visualization, objectivity, and transparency. These qualities make them ideal for building robust bridges between artificial intelligence researchers and clinicians, thereby advancing computational pathology in medical practice and research.
Programa de Doctorado en Ciencia y Tecnología Biomédica, Universidad Carlos III de Madrid.
Committee: President: María Jesús Ledesma Carbayo; Secretary: Gonzalo Ricardo Ríos Muñoz; Member: Estíbaliz Gómez de Marisca
PROFILING - CONCEPTS AND APPLICATIONS
Profiling is an approach to putting a label or a set of labels on a subject, considering the characteristics of this subject. The New Oxford American Dictionary defines profiling as: "recording and analysis of a person's psychological and behavioral characteristics, so as to assess or predict his/her capabilities in a certain sphere or to assist in identifying a particular subgroup of people". This research extends this definition towards things, demonstrating that many methods used for profiling people may be applied to a different type of subject, namely things.
The goal of this research concerns proposing methods for the discovery of profiles of users and things with the application of Data Science methods. The profiles are utilized in vertical and horizontal scenarios and concern such domains as smart grid and telecommunication (vertical scenarios), and support provided both for the needs of authorization and personalization (horizontal usage). The thesis consists of eight chapters, including an introduction and a summary.
The first chapter describes the motivation for the work carried out over the last 8 years, together with a discussion of its importance both for research and business practice. The motivation for this work is much broader and also emerges from the business importance of profiling and personalization. The introduction summarizes major research directions, and provides the research questions, goals and supplementary objectives addressed in the thesis. The research methodology is also described, showing the impact of methodological aspects on the work undertaken.
Chapter 2 provides an introduction to the notion of profiling, and the definition of profiling is introduced. The relation of a user profile to an identity is also discussed. The papers included in this chapter show not only how broadly a profile may be understood, but also how a profile may be constructed from different data sources.
Profiling methods are introduced in Chapter 3. This chapter refers to the notion of a profile developed using the BFI-44 personality test and outcomes of a survey related to color preferences of people with a specific personality. Moreover, insights into profiling of relations between people are provided, with a focus on quality of a relation emerging from contacts between two entities.
Chapters 4 to 7 present different scenarios that benefit from the application of profiling methods.
Chapter 4 starts by introducing the notion of a public utility company, which in the thesis is discussed using examples from smart grid and telecommunication. It then describes research results regarding profiling for the smart grid, focusing on the profile of a prosumer and on forecasting demand and production of electric energy in the smart grid, which can be influenced, e.g., by weather or by profiles of appliances.
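Weather-dependent demand forecasting of the kind mentioned for the smart grid can be sketched, in its simplest form, as an ordinary-least-squares fit of demand against temperature. The figures below are invented for illustration; the thesis's actual forecasting models are not reproduced here.

```python
import numpy as np

# Hypothetical daily data: outdoor temperature (degrees C) vs. demand (kWh)
temp   = np.array([ 2.0,  5.0,  8.0, 12.0, 16.0, 20.0])
demand = np.array([31.0, 28.5, 26.0, 23.0, 19.5, 17.0])

# Fit demand ~ a*temp + b by ordinary least squares
A = np.column_stack([temp, np.ones_like(temp)])
(a, b), *_ = np.linalg.lstsq(A, demand, rcond=None)

def forecast(t):
    """Predicted demand at temperature t under the linear sketch model."""
    return a * t + b
```

Real prosumer forecasting would add further regressors (appliance profiles, calendar effects, local production), but the least-squares skeleton stays the same.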
Chapter 5 presents the application of profiling techniques in the field of telecommunication. Besides profiling methods based on telecommunication data, in particular on Call Detail Records, scenarios and issues related to privacy and trust are also addressed.
Chapter 6 and Chapter 7 target horizontal applications of profiling that may benefit multiple domains.
Chapter 6 concerns profiling for authentication using atypical data sources such as Call Detail Records or data from a mobile phone describing user behavior. Besides the proposed methods, limitations are also discussed. In addition, as a side effect of this research, a methodology for the evaluation of authentication methods is proposed.
Chapter 7 concerns personalization and consists of two diverse parts. First, behavioral profiles used to change the interface and behavior of a system are proposed and applied, and the performance of solutions personalizing content either locally or on the server is studied. Then, profiles of customers of shopping centers are created based on paths identified using Call Detail Records. The analysis demonstrates that data collected for one purpose may significantly influence other business scenarios.
Chapter 8 summarizes the research results achieved by the author of this document. It presents the contributions over the state of the art as well as some insights into planned future work.
Advances in Technology Enhanced Learning
‘Advances in Technology Enhanced Learning’ presents a range of research projects which aim to explore how to make engagement in learning (and teaching) more passionate. This interactive and experimental resource discusses innovations which pave the way to open collaboration at scale. The book introduces methodological and technological breakthroughs via twelve chapters to learners, instructors, and decision-makers in schools, universities, and workplaces.
The Open University's Knowledge Media Institute and the EU TELMap project have brought together luminaries from the European research area to showcase their vision of the future of learning with technology via their recent research project work. The projects discussed range widely across the Technology Enhanced Learning area: environments for responsive open learning, work-based reflection, work-based social creativity, serious games, and many more.
A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects
Recommender systems have significantly developed in recent years in parallel
with the witnessed advancements in both internet of things (IoT) and artificial
intelligence (AI) technologies. Accordingly, as a consequence of IoT and AI,
multiple forms of data are incorporated in these systems, e.g. social,
implicit, local and personal information, which can help in improving
recommender systems' performance and widen their applicability to traverse
different disciplines. On the other hand, energy efficiency in the building
sector is becoming a hot research topic, in which recommender systems play a
major role by promoting energy-saving behavior and reducing carbon emissions.
However, the deployment of recommendation frameworks in buildings still
needs further investigation to identify the current challenges and issues,
whose solutions are key to enabling the pervasiveness of research findings
and, therefore, ensuring large-scale adoption of this technology. Accordingly,
this paper presents, to the best of the authors' knowledge, the first timely
and comprehensive reference for energy-efficiency recommendation systems
through (i) surveying existing recommender systems for energy saving in
buildings; (ii) discussing their evolution; (iii) providing an original
taxonomy of these systems based on specified criteria, including the nature of
the recommender engine, its objective, computing platforms, evaluation metrics
and incentive measures; and (iv) conducting an in-depth, critical analysis to
identify their limitations and unsolved issues. The derived challenges and
areas of future implementation could effectively guide the energy research
community to improve the energy-efficiency in buildings and reduce the cost of
developed recommender-systems-based solutions.
Comment: 35 pages, 11 figures, 1 table
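The recommender engines surveyed above vary widely in nature and objective; as a minimal, generic illustration of one classical engine type, the sketch below performs user-based collaborative filtering over a hypothetical household-by-action rating matrix. It is not a method from the survey, and the ratings are invented.

```python
import numpy as np

# Hypothetical ratings: rows = households, columns = energy-saving actions
# (e.g. thermostat scheduling, LED retrofit, off-peak shifting); 0 = unrated
R = np.array([[5, 4, 0, 1],
              [4, 5, 4, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def recommend(R, user, k=2):
    """Suggest the unrated action scored highest by the k most similar households."""
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R[user]) / (norms * norms[user] + 1e-12)   # cosine similarity
    sims[user] = -np.inf                                   # exclude the user itself
    neighbours = np.argsort(sims)[-k:]                     # k nearest households
    scores = R[neighbours].mean(axis=0)
    scores[R[user] > 0] = -np.inf                          # only unrated actions
    return int(np.argmax(scores))
```

In the survey's taxonomy terms, this fixes the recommender engine (neighbourhood-based collaborative filtering); the incentive measures, computing platform, and evaluation metrics discussed in the paper sit on top of such an engine.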