Enhancement of Epidemiological Models for Dengue Fever Based on Twitter Data
Epidemiological early warning systems for dengue fever rely on up-to-date
epidemiological data to forecast future incidence. However, epidemiological
data typically takes time to become available, owing to time-consuming
laboratory tests. This implies that epidemiological models need to issue
predictions further in advance, making their task even more
difficult. On the other hand, online platforms, such as Twitter or Google,
allow us to obtain samples of users' interaction in near real-time and can be
used as sensors to monitor current incidence. In this work, we propose a
framework to exploit online data sources to mitigate the lack of up-to-date
epidemiological data by obtaining estimates of current incidence, which are
then used by traditional epidemiological models. We show that the proposed
framework obtains more accurate predictions than alternative approaches, with
statistically better results for delays greater than or equal to 4 weeks.
Comment: ACM Digital Health 201
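The framework's core idea, regressing a real-time online signal onto confirmed incidence to "nowcast" the current week and then seeding a forecasting model with that estimate, can be sketched as follows. All numbers, the linear fit, and the AR(1)-style forecaster are illustrative assumptions, not the paper's actual models or data:

```python
# Hypothetical sketch: a real-time proxy (weekly tweet counts) is
# regressed onto lab-confirmed incidence to estimate the current week,
# and the nowcast then seeds a simple autoregressive forecast in place
# of stale surveillance data. Everything below is made up.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Historical weeks where both tweet counts and confirmed cases are known.
tweets_hist = [120, 150, 200, 260, 310]
cases_hist = [40, 52, 68, 90, 104]

# Nowcast: confirmed cases for the current week are not yet available,
# but tweets are, so estimate current incidence from the fitted line.
a, b = fit_linear(tweets_hist, cases_hist)
tweets_now = 350
nowcast = a * tweets_now + b

# Forecast: an AR(1)-style model fitted on past incidence, seeded with
# the nowcast instead of weeks-old surveillance data.
g, c = fit_linear(cases_hist[:-1], cases_hist[1:])
forecast_next_week = g * nowcast + c

print(round(nowcast), round(forecast_next_week))
```

The point of the sketch is the data flow, not the models: any nowcaster and any forecaster could be substituted at the two fitting steps.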
A Comprehensive Review on Machine Learning Based Models for Healthcare Applications
At present, there has been significant progress in AI and machine learning, particularly in the medical sector. Artificial intelligence refers to computer programs that replicate and simulate human intelligence, such as an individual's problem-solving capability or capacity for learning; machine learning can be considered a subfield within the broader domain of artificial intelligence, one that automatically identifies and analyses patterns within raw data. The objective of this work is to help researchers acquire an extensive knowledge of machine learning and its utilisation within the healthcare domain. The research begins by providing a categorisation of machine learning-based methodologies for healthcare. According to the taxonomy we put forth, machine learning approaches in the healthcare domain are classified along several factors: the methods employed for preparing data for analysis, including data cleansing and data compression techniques; the learning strategies utilised, such as reinforcement learning, semi-supervised learning, supervised learning, and unsupervised learning; and the evaluation approaches employed, encompassing both simulation-based evaluation and evaluation of actual use in everyday situations. Lastly, the applications of these ML-based methods in medicine pertain to diagnosis and treatment. Based on the classification we have put forward, we examine a selection of studies presented within the framework of machine learning applications in the healthcare domain. This review paper serves as a valuable resource for researchers seeking to gain familiarity with the latest research on ML applications in medicine. It aids in recognising the obstacles and limitations associated with ML in this domain, while also facilitating the identification of potential future research directions.
From Social Data Mining to Forecasting Socio-Economic Crisis
Socio-economic data mining has a great potential in terms of gaining a better
understanding of problems that our economy and society are facing, such as
financial instability, shortages of resources, or conflicts. Without
large-scale data mining, progress in these areas seems hard or impossible.
Therefore, a suitable, distributed data mining infrastructure and research
centers should be built in Europe. It also appears appropriate to build a
network of Crisis Observatories. They can be imagined as laboratories devoted
to the gathering and processing of enormous volumes of data on both natural
systems such as the Earth and its ecosystem, as well as on human
techno-socio-economic systems, so as to gain early warnings of impending
events. Reality mining provides the chance to adapt more quickly and more
accurately to changing situations. Further opportunities arise by individually
customized services, which however should be provided in a privacy-respecting
way. This requires the development of novel ICT (such as a self-organizing
Web), but most likely new legal regulations and suitable institutions as well.
As long as such regulations are lacking on a world-wide scale, it is in the
public interest that scientists explore what can be done with the huge data
available. Big data do have the potential to change or even threaten democratic
societies. The same applies to sudden and large-scale failures of ICT systems.
Therefore, dealing with data must be done with a large degree of responsibility
and care. Self-interests of individuals, companies or institutions have limits,
where the public interest is affected, and public interest is not a sufficient
justification to violate human rights of individuals. Privacy is a high good,
as confidentiality is, and damaging it would have serious side effects for
society.
Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c
Detection and Localization of Leaks in Water Networks
Today, 844 million humans around the world have no access to safe drinking water. Furthermore, every 90 seconds, one child dies from water-related illnesses. Major cities lose 15% - 50% of their water and, in some cases, losses may reach up to 70%, mostly due to leaks. Therefore, it is paramount to preserve water as an invaluable resource through water networks, particularly in large cities in which leak repair may cause disruption. Municipalities usually tackle leak problems using various detection systems and technologies, often long after leaks occur; however, such efforts are not enough to detect leaks at early stages. Therefore, the main objectives of the present research are to develop and validate a leak detection system and to optimize leak repair prioritization.
The development of the leak detection models goes through several phases: (1) technology and device selection, (2) experimental work, (3) signal analysis, (4) selection of parameters, (5) machine learning model development and (6) validation of the developed models. To detect leaks, vibration signals are collected through a variety of controlled experiments on PVC and ductile iron pipelines using wireless accelerometers, i.e., micro-electro-mechanical systems (MEMS) sensors. The signals are analyzed to pinpoint leaks in water pipelines. Similarly, acoustic signals are collected from a pilot project in the city of Montreal, using noise loggers as another detection technology. The collected signals are also analyzed to detect and pinpoint leaks. The leak detection system has presented promising results using both technologies. The developed MEMS model is capable of accurately pinpointing leaks within 12 centimeters of the exact location. Comparatively, for noise loggers, the developed model can detect the exact leak location within a 25-cm radius for an actual leak.
The leak repair prioritization model uses two optimization techniques: (1) a well-known genetic algorithm and (2) a novel Lazy Serpent Algorithm developed in the present research. The Lazy Serpent Algorithm has proved capable of surpassing the genetic algorithm, determining a more optimal schedule in much less computation time. The research proves that automated real-time leak detection is possible and can help governments save water resources and funds. It also demonstrates the viability of accelerometers as a standalone leak detection technology and opens the door for further research and experimentation. The leak detection system model helps municipalities and water resource agencies rapidly detect leaks when they occur in real time. The developed pinpointing models facilitate the leak repair process by precisely determining the leak location where repair works should be conducted. The Lazy Serpent Algorithm helps municipalities better distribute their resources to maximize their desired benefits.
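The abstract does not specify how the pinpointing models localize a leak between sensors; a classical signal-analysis baseline for two sensors bracketing a leak, shown here purely as an assumed illustration and not as the thesis's method, is cross-correlation time-delay estimation:

```python
# Assumed illustration (not the thesis's method): localizing a leak
# between two sensors by cross-correlating their signals. Leak noise
# reaches each sensor with a delay proportional to its distance; the lag
# of the correlation peak gives the arrival-time difference, and hence
# the position. Signal, wave speed, and geometry are all made up.

import random

random.seed(0)

FS = 1000           # sampling rate, Hz
WAVE_SPEED = 500.0  # assumed propagation speed along the pipe, m/s
PIPE_LEN = 100.0    # spacing between sensors A and B, m

noise = [random.gauss(0, 1) for _ in range(1000)]  # synthetic leak noise

def delayed(sig, lag):
    """The signal as seen after a propagation delay of `lag` samples."""
    return [0.0] * lag + sig[: len(sig) - lag]

# Leak 30 m from sensor A (so 70 m from sensor B).
sig_a = delayed(noise, int(30.0 / WAVE_SPEED * FS))  # 60-sample delay
sig_b = delayed(noise, int(70.0 / WAVE_SPEED * FS))  # 140-sample delay

def best_lag(x, y, max_lag):
    """Lag of y relative to x that maximizes the cross-correlation."""
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        scores[lag] = sum(x[i] * y[i + lag]
                          for i in range(len(x))
                          if 0 <= i + lag < len(y))
    return max(scores, key=scores.get)

dt = best_lag(sig_a, sig_b, 200) / FS  # arrival-time difference, s
# Distance from A satisfies (PIPE_LEN - d_a) - d_a = dt * WAVE_SPEED.
d_a = (PIPE_LEN - dt * WAVE_SPEED) / 2.0
print(d_a)  # → 30.0
```

In practice, dispersion, attenuation, and pipe-material effects make the correlation peak far less clean than on this synthetic signal, which is presumably where the learned models come in.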
A Comprehensive Survey on Rare Event Prediction
Rare event prediction involves identifying and forecasting events with a low
probability using machine learning and data analysis. Because of imbalanced
data distributions, where the frequency of common events vastly outweighs that
of rare events, it requires specialized methods within each step of the
machine learning pipeline, i.e., from data processing to algorithms to
evaluation protocols. Predicting the occurrences of rare events is important
for real-world applications, such as Industry 4.0, and is an active research
area in statistics and machine learning. This paper comprehensively reviews
the current approaches for rare event prediction along four dimensions: rare
event data, data processing, algorithmic approaches, and evaluation approaches.
Specifically, we consider 73 datasets from different modalities (i.e.,
numerical, image, text, and audio), four major categories of data processing,
five major algorithmic groupings, and two broader evaluation approaches. This
paper aims to identify gaps in the current literature and highlight the
challenges of predicting rare events. It also suggests potential research
directions, which can help guide practitioners and researchers.
Comment: 44 page
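One of the data-processing strategies such a review covers can be sketched concretely. The example below, on synthetic data and not drawn from the paper, balances a rare-event dataset by randomly oversampling the minority class before training:

```python
# Minimal sketch of one data-processing strategy for rare events:
# random oversampling, i.e., duplicating minority-class samples until
# the classes are balanced. The dataset is synthetic; real pipelines
# would also adapt the evaluation protocol (e.g., precision/recall
# rather than raw accuracy).

import random
from collections import Counter

random.seed(42)

# 95 common events (label 0) vs 5 rare events (label 1).
data = [(random.random(), 0) for _ in range(95)] + \
       [(random.random(), 1) for _ in range(5)]

def oversample(samples):
    """Duplicate minority-class samples until all classes match the
    size of the largest class."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append((x, y))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

balanced = oversample(data)
print(Counter(y for _, y in balanced))  # both classes now have 95 samples
```

Oversampling is only one of the four processing categories the survey enumerates; undersampling the majority class or reweighting the loss are common alternatives with different bias/variance trade-offs.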
Process-Oriented Stream Classification Pipeline: A Literature Review
Featured Application: Nowadays, many applications and disciplines work on the basis of stream data. Common examples are the IoT sector (e.g., sensor data analysis), or video, image, and text analysis applications (e.g., in social media analytics or astronomy). With our work, we gather different approaches and terminology, and give a broad overview of the topic. Our main target groups are practitioners and newcomers to the field of data stream classification.

Due to the rise of continuous data-generating applications, analyzing data streams has gained increasing attention over the past decades. A core research area in stream data is stream classification, which categorizes or detects data points within an evolving stream of observations. Areas of stream classification are diverse—ranging, e.g., from monitoring sensor data to analyzing a wide range of (social) media applications. Research in stream classification is related to developing methods that adapt to the changing and potentially volatile data stream. It focuses on individual aspects of the stream classification pipeline, e.g., designing suitable algorithm architectures, efficient train and test procedures, or detecting so-called concept drifts. As a result of the many different research questions and strands, the field is challenging to grasp, especially for beginners. This survey explores, summarizes, and categorizes work within the domain of stream classification and identifies core research threads over the past few years. It is structured along the stream classification process to facilitate orientation within this complex topic, including common application scenarios and benchmarking data sets. Thus, both newcomers to the field and experts who want to widen their scope can gain (additional) insight into this research area and find starting points and pointers to more in-depth literature on specific issues and research directions in the field.
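The concept-drift detection mentioned above can be illustrated with a toy detector that flags drift when a classifier's recent error rate diverges from its long-run rate. The window size and threshold are arbitrary assumptions, loosely in the spirit of DDM-style detectors rather than any specific published algorithm:

```python
# Toy concept-drift detector (illustrative assumption, not a published
# algorithm): compare the error rate over a sliding window of recent
# predictions against the lifetime error rate, and signal drift when
# the gap exceeds a fixed threshold.

from collections import deque

class DriftDetector:
    def __init__(self, window=30, threshold=0.2):
        self.window = deque(maxlen=window)  # recent outcomes (1 = error)
        self.errors = 0                     # lifetime error count
        self.seen = 0
        self.threshold = threshold

    def update(self, correct):
        """Feed one prediction outcome; return True if drift is signalled."""
        self.window.append(0 if correct else 1)
        self.errors += 0 if correct else 1
        self.seen += 1
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent history yet
        recent = sum(self.window) / len(self.window)
        overall = self.errors / self.seen
        return recent - overall > self.threshold

detector = DriftDetector()
# Stable phase (mostly correct), then an abrupt drift to ~80% errors.
stream = [True] * 95 + [False] * 5 + [False, False, False, True, False] * 20
alarms = [i for i, ok in enumerate(stream) if detector.update(ok)]
print(alarms[0] if alarms else None)  # flagged shortly after index 100
```

A real pipeline would react to the alarm, e.g., by retraining the classifier on post-drift data or resetting the model, which is exactly the kind of design decision the surveyed literature elaborates on.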
Collaborative environment to support a professional community
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Mestre em Engenharia Electrotécnica e de Computadores.

Recent manufacturing roadmaps stress the limitations of current production systems, emphasizing the social, economic and ecological consequences for Europe of failing to evolve towards sustainable production systems. Hence, both academic institutions and enterprises are committed to developing solutions that would enable enterprises to survive in today's extremely competitive business environment.
A research effort is being carried out by the Evolvable Production Systems consortium towards attaining production systems that can cope with current technological, economic, ecological and social demands, fulfilling recent roadmaps. Nevertheless, research success depends on attaining consensus in the scientific community, and therefore adequate critical-mass support is required throughout the whole process.
The main goal of this thesis is the development of a collaborative environment tool to assist the Evolvable Production Systems consortium in these research efforts and to enhance dissemination of the Evolvable Assembly Systems paradigm. This work resulted in EASET (Evolvable Assembly Systems Environment Tool), a collaborative environment tool which promotes EAS dissemination and drives improvements by raising critical mass and fostering collaboration between entities.
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been
confronted with a bewildering range of media, services and applications and of technological innovations
concerning media formats, wireless networks, terminal types and capabilities. And there is little evidence
that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular
basis, more than 100 million users have downloaded at least one (multi)media file and over 47 million of
them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are
expected to rise exponentially. It is expected that Internet content will increase by at least a factor of 6,
rising to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content-Aware media delivery platforms.