WHICH TECHNIQUE TO USE WHEN ELICITING KNOWLEDGE FROM AN EXPERT
Selecting a knowledge elicitation technique for eliciting knowledge from an expert remains a problem in knowledge-based system (KBS) development. This paper presents an original computer program for technique selection, together with a modified program aimed at improving the selection. Both programs take certain factor values as a starting point, but the first program is based on grading the techniques and the second on decision trees. Comparison and testing results for both programs are presented.
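The decision-tree approach described in the abstract can be illustrated with a minimal sketch. The factor names, thresholds, and technique names below are hypothetical, not those of the original programs:

```python
# Hypothetical decision tree for choosing a knowledge elicitation technique.
# Factor names and branch conditions are illustrative only.

def select_technique(expert_availability, knowledge_type, expert_verbal_skill):
    """Walk a small decision tree and return a suggested technique."""
    if knowledge_type == "procedural":
        # Procedural knowledge is easiest to capture by observing the expert in action.
        return "protocol analysis"
    if expert_availability == "low":
        # Little expert time available: use a structured, time-efficient method.
        return "structured interview"
    if expert_verbal_skill == "high":
        return "unstructured interview"
    # Fall back to an indirect technique when the expert articulates poorly.
    return "card sorting"

print(select_technique("high", "declarative", "low"))  # card sorting
```

A grading-based selector would instead score every technique against the same factors and pick the maximum; the tree trades that exhaustive scoring for a fixed sequence of questions.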
INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS
Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market advantages. Despite the increasing intelligence competencies offered by IIS, they remain limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would strengthen organizations' strategic positions.
With the advent of Deep Learning (DL), IoT, and Edge Computing, IISs have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL to computationally limited devices, such as IoT devices. Applying deep learning to IoT devices will turn everyday devices into intelligent interactive assistants.
IISs suffer from many challenges that affect their service quality, process quality, and information quality. These challenges, in turn, affect user acceptance in terms of satisfaction, use, and trust. Moreover, the Information Systems (IS) field has conducted very little research on IIS development and on the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research aims to investigate how employing new AI paradigms would enhance the overall quality, and consequently the user acceptance, of IIS.
This research employs different AI paradigms to develop two different IISs. The first system uses deep learning, edge computing, and IoT to provide scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real estate search process by formulating the search problem as a multi-criteria decision. The system also allows users to filter properties by their degree of damage, where a deep learning network locates damage in each real estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency.
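The multi-criteria formulation of the property search can be sketched as a simple weighted-sum ranking. The criteria, weights, and normalized scores below are hypothetical and not taken from the second system:

```python
# Hypothetical weighted-sum ranking for a multi-criteria property search.
# Criteria weights and normalized scores (0..1, higher is better) are illustrative only.

properties = {
    "A": {"price": 0.8, "location": 0.6, "condition": 0.4},  # condition = 1 - damage degree
    "B": {"price": 0.5, "location": 0.9, "condition": 0.9},
}
weights = {"price": 0.4, "location": 0.3, "condition": 0.3}

def score(criteria):
    """Aggregate the per-criterion scores into one weighted sum."""
    return sum(weights[c] * v for c, v in criteria.items())

ranked = sorted(properties, key=lambda p: score(properties[p]), reverse=True)
print(ranked)  # ['B', 'A']
```

A damage filter would simply drop properties whose `condition` score falls below a user-chosen threshold before ranking.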
The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfil. Finally, the research highlights that most IISs overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
A comparison of statistical machine learning methods in heartbeat detection and classification
In health care, patients with heart problems require quick responsiveness in a clinical setting or in the operating theatre. Towards that end, automated classification of heartbeats is vital, as some heartbeat irregularities are time-consuming to detect. Analysis of electrocardiogram (ECG) signals is therefore an active area of research. The methods proposed in the literature depend on the structure of a heartbeat cycle. In this paper, we use interval- and amplitude-based features together with a few samples from the ECG signal as a feature vector. We study a variety of classification algorithms, focusing especially on a type of arrhythmia known as ventricular ectopic beats (VEB). We compare the performance of the classifiers against algorithms proposed in the literature and make recommendations regarding features, sampling rate, and the choice of classifier to apply in a real-time clinical setting. The extensive study is based on the MIT-BIH arrhythmia database. Our main contributions are the evaluation of existing classifiers over a range of sampling rates, the recommendation of a detection methodology to employ in a practical setting, and the extension of the notion of a mixture of experts to a larger class of algorithms.
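As a minimal illustration of classifying beats from interval- and amplitude-based features, the sketch below trains a nearest-centroid classifier on synthetic feature vectors. The feature values and labels are invented; the paper's actual classifiers and the MIT-BIH data are not reproduced here:

```python
# Toy nearest-centroid classifier on synthetic heartbeat features.
# Each feature vector: (RR interval in ms, R-peak amplitude in mV) -- invented values.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train(labelled):
    """labelled: {class_name: [feature_vector, ...]} -> {class_name: centroid}."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def classify(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

model = train({
    "normal": [(800, 1.0), (820, 1.1), (790, 0.9)],
    "VEB":    [(550, 1.6), (600, 1.5), (580, 1.7)],  # premature, high-amplitude beats
})
print(classify(model, (810, 1.0)))  # normal
```

A real system would normalize the features first, since the raw RR-interval scale otherwise dominates the distance; the paper's classifiers operate on such engineered feature vectors rather than raw signal alone.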
Context, latency and the value of preventing a statistical cancer fatality
PhD Thesis
This thesis contributes to the state of understanding about the value of latent health and
fatality risk reductions, focussing on the effects of context and latency on the Value of
Preventing a Statistical Cancer Fatality (VSLCAN) relative to road accident fatalities. The
conceptual, methodological and empirical contributions are derived from two stated
preference studies. The studies are designed to explore how the VSLCAN is driven by the
context effect, which includes dread of the cause ‘cancer’ and the effects of illness prior to
fatality; and the latency (delay) effect which depends upon time preferences and risk
preferences.
Study 1 develops a Risk-Risk survey protocol, and the resulting central tendency and
regression analysis verify that the context of cancer increases the VSL and that latency
decreases it. The relativity between VSLCAN and the road accident VSL is then summarised
into a simple relationship where the offsetting influences of context and latency are
parameterised. This novel tool has the potential to enhance the comparability and evaluation
of a wide range of existing and future VSL studies involving context and latency effects
through the elicitation of key underlying parameters such as the context premium and
effective discount rate. As such it represents a significant methodological contribution.
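One illustrative way such offsetting influences might be parameterised (an invented form for exposition, not necessarily the exact relationship derived in Study 1) is as a context premium discounted over the latency period:

```latex
\frac{\mathrm{VSL}_{\mathrm{CAN}}}{\mathrm{VSL}_{\mathrm{ROAD}}} = (1 + \pi)\, e^{-rL}
```

where \(\pi\) is the context premium attached to cancer, \(r\) the effective discount rate, and \(L\) the latency in years; the context term raises the ratio while the latency term lowers it.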
Study 2 focusses directly on two aspects of the latency effect. These relate to risk and time
preferences, explored in Studies 2a and 2b respectively. Delayed outcomes are inherently
risky, so the exploration of latent outcomes requires controlling for risk preferences. Study
2a develops a theoretical and empirical framework for eliciting risk aversion proxies in the
domain of health, which have not previously been fully developed in the literature. The
method extends the classic Holt-Laury risk preference elicitation framework into a new
domain (health risks), and the method is implemented successfully in Study 2. This chapter
therefore makes both conceptual and methodological contributions through clarifying the
utility theoretic basis of a health risk aversion measure and then developing a way to elicit
such a measure in surveys. Study 2b uses the novel VSLCAN:VSL relationship developed in
Study 1 to elicit exponential discount rates from Risk-Risk data comparing latent cancer and
road accident risks. Regression analysis performed on these rates, at both the sample and
individual level, provides strong evidence that a non-standard (sub-additive) discounting
model is the most descriptively accurate discounting assumption for this sample. It provides
the first evidence regarding sub-additive discounting in the domain of health and fatality risk.
Funding providers: the ESRC, the HSE and the ONR.
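The contrast between standard exponential discounting and the sub-additive alternative can be stated compactly (an illustrative formulation; the thesis's exact specification may differ):

```latex
% Exponential discounting is additive: a delay is valued the same
% whether taken whole or split into sub-intervals t_1 + t_2:
\delta(t) = e^{-rt}, \qquad \delta(t_1 + t_2) = \delta(t_1)\,\delta(t_2)

% Sub-additive discounting: splitting a delay into sub-intervals
% yields more total discounting than treating the interval whole:
\delta(t_1 + t_2) > \delta(t_1)\,\delta(t_2)
```

Here \(\delta(t)\) is the discount factor applied to an outcome delayed by \(t\); under sub-additivity the whole-interval factor exceeds the product of the sub-interval factors, so elicited rates depend on how the delay is partitioned.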
Estimating lifetime effects of child development for economic evaluation: An exploration of methods and their application to a population screen for postnatal depression
Background: Early health interventions affecting child development can subsequently influence lifetime health and economic outcomes. These lifetime effects may be excluded from economic evaluation as empirical evidence covering the required time horizon is rarely available. One example is screening for postnatal depression where current guidelines do not account for lifetime effects despite evidence of a detrimental association between maternal depression and child development.
Aims: To develop a methodological approach to estimate lifetime effects for economic evaluation and determine their influence on an evaluation assessing the cost-effectiveness of postnatal depression screening.
Methods: Lifetime effects are estimated by linking results from two empirical studies. Firstly, growth curve models establish the effects of postnatal depression on development measures for children aged 3-11 using data from the Millennium Cohort Study. Secondly, child development measures are entered as explanatory variables in linear regression models predicting effects on lifetime health and economic outcomes using data from the 1970 British Cohort Study. An economic evaluation is conducted for scenarios which exclude/include lifetime effects to determine their influence on cost-effectiveness results.
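The two-stage linkage can be sketched as a chain of regression coefficients; all numbers below are hypothetical and are not estimates from the Millennium Cohort Study or the 1970 British Cohort Study:

```python
# Hypothetical two-stage linkage of effects (all coefficients invented).

# Stage 1 (growth curve model): effect of postnatal depression exposure
# on a standardized child development score at age 11.
beta_pnd_on_development = -0.20   # SD units

# Stage 2 (linear regression): effect of a 1 SD change in development
# on discounted lifetime QALYs.
beta_development_on_qalys = 0.35  # QALYs per SD of development

# Linked lifetime QALY effect of postnatal depression per exposed child,
# obtained by chaining the two coefficients.
lifetime_qaly_effect = beta_pnd_on_development * beta_development_on_qalys
print(round(lifetime_qaly_effect, 3))  # -0.07
```

The same chaining applies to the cost outcomes (healthcare, crime, education, employment): each lifetime outcome gets its own stage-2 coefficient, multiplied by the stage-1 development effect before entering the economic model.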
Findings: Postnatal depression was detrimentally associated with children’s cognitive and socioemotional development up to age 11. Detrimental changes in cognitive and socioemotional development were negatively associated with lifetime outcomes. Postnatal depression exposure was predicted to reduce children’s lifetime Quality Adjusted Life Years, increase healthcare and crime costs, and generate fewer monetary returns in education and employment. Cost-effectiveness results changed when including lifetime effects, leading to the recommendation of a screening strategy which treats a greater proportion of depressed mothers.
Conclusions: Lifetime effects can influence cost-effectiveness results and their exclusion risks providing a partial analysis. This research demonstrates methods to estimate and include lifetime effects in economic evaluation. Similar approaches could be applied elsewhere to provide additional evidence for economic evaluation of other childhood interventions
Adaptive Cognitive Interaction Systems
Adaptive cognitive interaction systems observe and model the state of their user and adapt the system's behaviour accordingly. Such a system consists of three components: the empirical cognitive model, the computational cognitive model, and the adaptive interaction manager. This thesis makes numerous contributions to the development of these components and to their combination. The results are validated in numerous user studies.
CLASSIFYING AND RESPONDING TO NETWORK INTRUSIONS
Intrusion detection systems (IDS) have been widely adopted within the IT community, as
passive monitoring tools that report security related problems to system administrators.
However, the increasing number and evolving complexity of attacks, along with the
growth and complexity of networking infrastructures, have led to overwhelming numbers of
IDS alerts, which leave a significantly smaller timeframe for a human to respond. The need
for automated response is therefore very much evident. However, the adoption of such
approaches has been constrained by practical limitations and administrators' consequent
mistrust of systems' abilities to issue appropriate responses.
The thesis presents a thorough analysis of the problem of intrusions, and identifies false
alarms as the main obstacle to the adoption of automated response. A critical examination
of existing automated response systems is provided, along with a discussion of why a new
solution is needed. The thesis determines that, while the detection capabilities remain
imperfect, the problem of false alarms cannot be eliminated. Automated response
technology must take this into account, and instead focus upon avoiding the disruption of
legitimate users and services in such scenarios. The overall aim of the research has
therefore been to enhance the automated response process by considering the context of an
attack, and to investigate and evaluate a means of making intelligent response decisions.
The realisation of this objective has included the formulation of a response-oriented
taxonomy of intrusions, which is used as a basis to systematically study intrusions and
understand the threats detected by an IDS. From this foundation, a novel Flexible
Automated and Intelligent Responder (FAIR) architecture has been designed, as the basis
from which flexible and escalating levels of response are offered, according to the context
of an attack. The thesis describes the design and operation of the architecture, focusing
upon the contextual factors influencing the response process, and the way they are
measured and assessed to formulate response decisions. The architecture is underpinned by
the use of response policies which provide a means to reflect the changing needs and
characteristics of organisations.
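The idea of escalating, policy-driven responses can be sketched as a lookup over contextual factors. The policy table, factor names, and response levels below are hypothetical and do not describe FAIR's actual implementation:

```python
# Hypothetical escalating response policy for an automated IDS responder.
# Thresholds and actions are illustrative only; an organisation would
# customise this table to reflect its own needs and risk tolerance.

POLICY = [
    # (minimum detection confidence, minimum target criticality, response)
    (0.9, 0.8, "block source address at the firewall"),
    (0.7, 0.5, "rate-limit source and alert administrator"),
    (0.0, 0.0, "log alert for later review"),
]

def respond(confidence, criticality):
    """Return the first (most severe) response whose thresholds are met."""
    for min_conf, min_crit, action in POLICY:
        if confidence >= min_conf and criticality >= min_crit:
            return action
    return "no action"

print(respond(0.95, 0.9))  # block source address at the firewall
print(respond(0.30, 0.9))  # log alert for later review
```

Ordering the rules from most to least severe means a low-confidence alert, however critical the target, only triggers a passive response, which is exactly the disruption-avoidance property the thesis argues automated response must have.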
The main concepts of the new architecture were validated via a proof-of-concept prototype
system. A series of test scenarios were used to demonstrate how the context of an attack
can influence the response decisions, and how the response policies can be customised and
used to enable intelligent decisions. This helped to prove that the concept of flexible
automated response is indeed viable, and that the research has provided a suitable
contribution to knowledge in this important domain