Supporting the information systems requirements of distributed healthcare teams
The adoption of a patient-centric approach to healthcare delivery in the National Health Service
(NHS) in the UK has led to changing requirements for information systems supporting the
work of health and care practitioners. In particular, the patient-centric approach emphasises
teamwork and cross-boundary coordination and collaboration. Although a great deal of both
time and money has been invested in modernising healthcare information systems, they do not
yet meet the requirements of patient-centric work. Current proposals for meeting these needs
focus on providing cross-boundary information access in the form of an integrated Electronic
Patient Record (EPR). This research considers the requirements that are likely to remain unmet
after an integrated EPR is in place and how to meet these. Because the patient-centric
approach emphasises teamwork, a conceptual model which uses care team meta-data to track
and manage team members and professional roles is proposed as a means to meet this broader
range of requirements. The model is supported by a proof of concept prototype which leverages
team information to provide tailored information access, targeted notifications and alerts, and
patient and team management functionality. Although some concerns were raised regarding implementation,
the proposal was met with enthusiasm by both clinicians and developers during
evaluation. However, the area of need is broad, and considerable further work is required
if this proposal is to be taken forward.
Safety-Assured Model-Based Development of Real-Time Embedded Software for the GPCA Infusion Pump
Many safety-critical embedded systems must meet safety requirements associated with timing constraints. A system must not only read and write correct input and output values; it must also perform those operations at the right time. Failing to meet those timing constraints leads to serious safety issues (e.g., medical device malfunctions may harm patients). It is difficult to develop complex embedded software correctly without rigorous, systematic handling of the various factors that affect the timed behavior of a system.
We propose a model-based development framework that enables the timing aspects of a system to be formally modeled, verified, and then implemented in a systematic way.
The fundamental idea is to separate the platform-independent and platform-dependent timing concerns of a system. In the platform-independent development phase, the timed input and output interactions between a system and its environment are modeled and verified using a state-transition formalism (e.g., UPPAAL), hiding platform-dependent timing details. In the platform-dependent development phase, those platform-dependent timing details, which are necessary to execute the platform-independent code on a particular platform, are modeled using architectural modeling languages (e.g., AADL); they include internal interactions among software components (e.g., threads) and hardware components (e.g., sensors and actuators). The platform-independent code and the platform-dependent code are developed independently, at different levels of timing abstraction, and composed in the integration phase. In this phase, we propose a way to systematically extend the platform-independent model into different platform-specific models, which formally characterize the implementation-level timed behavior and can be verified for conformance to the timing requirements. If this verification step fails, we propose a way to adjust the timing parameters of the platform-independent code by compensating for the platform-dependent processing delays, so that the resulting implementation meets the timing requirements verified in the platform-independent model.
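The compensation step described above can be sketched as subtracting the measured platform-dependent delay from each verified timing budget; the parameter names and values below are hypothetical, not taken from the framework itself.

```python
def compensate(logical_params, platform_delays):
    """Adjust platform-independent timing parameters (in ms) by subtracting
    measured platform-dependent processing delays, so the integrated
    implementation still meets the originally verified deadlines."""
    adjusted = {}
    for name, deadline in logical_params.items():
        delay = platform_delays.get(name, 0)
        if delay >= deadline:
            # The platform consumes the whole budget: no valid adjustment exists.
            raise ValueError(f"platform delay for {name} exceeds its timing budget")
        adjusted[name] = deadline - delay
    return adjusted

# A verified output deadline of 50 ms with 8 ms of sensing/actuation delay
# leaves a 42 ms budget for the platform-independent code.
print(compensate({"bolus_output": 50}, {"bolus_output": 8}))
```

If the adjusted budget turns negative, no compensation can make the implementation conform, which corresponds to the verification step failing outright.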
The applicability of this development approach was demonstrated by developing software running on several Patient-Controlled Analgesia (PCA) infusion pump systems. We hope that this approach is also applicable to other safety-critical domains where generic software needs to be developed independently of a particular platform and integrated with many different platforms in a way that conforms to timing requirements.
Simplifying the use of event-based systems with context mediation and declarative descriptions
Current trends like the proliferation of sensors or the Internet of Things lead to Cyber-physical Systems
(CPSs). In these systems many different components communicate by exchanging events. While
events provide a convenient abstraction for handling the high load these systems generate, CPSs are so
complex that only expert computer scientists can handle them correctly.
We realized that one of the primary reasons for this inherent complexity is that events do not carry
context. We analyzed the context of events and realized that there are two dimensions: context about
the data of an event and context about the event itself. Context about the data includes assumptions like
systems of measurement units or the structure of the encoded information that are required to correctly
understand the event. Context about the event itself is data that provides additional information to
the information carried by the event. For example, an event might carry positional data; the additional
information could then be the identifier of the room containing that position.
Context about the data helps bridge the heterogeneity that CPSs possess. Event producers and consumers
may have different assumptions about the data and thus interpret events in different ways. To overcome
this gap, we developed the ACTrESS middleware. ACTrESS provides a model to encode interpretation
assumptions in an interpretation context. Clients can thus make their assumptions explicit and send them
to the middleware, which is then able to mediate between different contexts by transforming events.
By analyzing the provided contexts, ACTrESS generates transformers that are dynamically
loaded into the system, so it does not need to rely on costly operations like reflection. To demonstrate
this, we conducted a performance study which shows that in a content-based publish/subscribe system, the
overhead introduced by ACTrESS’ transformations is too small to be measurable.
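A minimal sketch of this style of context mediation, with invented names rather than the actual ACTrESS API: each client declares its interpretation context, and the middleware derives a transformer that converts events from the producer's context into the consumer's.

```python
# Unit-conversion table the middleware draws on when contexts disagree.
CONVERSIONS = {
    ("celsius", "fahrenheit"): lambda v: v * 9 / 5 + 32,
    ("fahrenheit", "celsius"): lambda v: (v - 32) * 5 / 9,
}

def make_transformer(producer_ctx, consumer_ctx):
    """Generate a transformer from two explicit interpretation contexts."""
    if producer_ctx == consumer_ctx:
        return lambda event: event  # identical assumptions: no mediation needed
    convert = CONVERSIONS[(producer_ctx["unit"], consumer_ctx["unit"])]
    return lambda event: {**event, "value": convert(event["value"])}

# A producer publishing in °C, a subscriber expecting °F.
to_consumer = make_transformer({"unit": "celsius"}, {"unit": "fahrenheit"})
print(to_consumer({"sensor": "t1", "value": 20}))  # value becomes 68.0
```

Because the transformer is generated once from the declared contexts, per-event mediation is just a plain function call, which is consistent with the negligible overhead reported above.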
Because events do not carry contextual information, expert computer scientists are required to describe
situations that are made up of multiple events. Since CPSs promise to transform our everyday life
(e.g., smart homes), this problem is even more severe: most of the intended users cannot use CPSs at all.
In this thesis, we developed a declarative language to easily describe situations and a desired
reaction. Furthermore, we provide a mechanism to translate this high-level description into executable
code. The key idea is that events are contextualized, i.e. our middleware enriches the event with the
missing contextual information based on the situation description. The enriched events are then correlated
and combined automatically, to ultimately be able to decide if the described situation is fulfilled or
not. By generating small computational units, we achieve good parallelization and are able to elegantly
scale up and down, which makes our approach particularly suitable for modern cloud architectures. We
conducted a usability analysis and a performance study. The usability analysis shows that our approach
significantly simplifies the definition of reactive behavior in CPSs. The performance study shows that the
automatic distribution and parallelization incur only a small performance cost compared to highly
optimized systems like Esper.
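The contextualization idea can be sketched as follows; the rule format, room map, and attribute names are invented for illustration and are not the thesis's actual language.

```python
# Hypothetical sketch: a declarative rule names a room, raw events only carry
# a position, and the middleware enriches each event with the room identifier
# before deciding whether the situation is fulfilled.
ROOMS = {"kitchen": ((0, 0), (5, 5))}  # room -> bounding box of its floor area

def enrich(event):
    """Add the missing contextual information (room id) to a raw event."""
    x, y = event["position"]
    for room, ((x0, y0), (x1, y1)) in ROOMS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return {**event, "room": room}
    return {**event, "room": None}

# Declarative description: "when the temperature in the kitchen exceeds 30, react".
rule = {"room": "kitchen", "attr": "temperature", "above": 30}

def situation_fulfilled(event, rule):
    e = enrich(event)
    return e["room"] == rule["room"] and e[rule["attr"]] > rule["above"]

print(situation_fulfilled({"position": (2, 3), "temperature": 31}, rule))  # True
```

Each rule compiles to a small, independent check like `situation_fulfilled`, which is what makes the generated computational units easy to distribute and parallelize.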
MC2: MPEG-7 content modelling communities
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
The use of multimedia content on the web has grown significantly in recent years. Websites such as Facebook, YouTube and Flickr cater for enormous amounts of multimedia content uploaded by users. This vast amount of multimedia content requires comprehensive content modelling; otherwise,
retrieving relevant content will be challenging. Modelling multimedia content can be an extremely time-consuming task that may seem impossible, particularly when undertaken by individual users. However, the advent of Web 2.0 and associated communities, such as YouTube and Flickr, has
shown that users appear to be more willing to collaborate in order to take on enormous tasks such as multimedia content modelling. Harnessing the power of communities to achieve comprehensive content modelling is the primary focus of this research.
The aim of this thesis is to explore collaborative multimedia content modelling and in particular the effectiveness of existing multimedia content modelling tools, taking into account the key development challenges of existing collaborative content modelling research and the associated
modelling tools. Four research objectives are pursued in order to achieve this: first, design a user experiment to study users’ tagging behaviour with existing multimedia tagging tools and identify any relationships within such user behaviour; second, design and develop a framework for MPEG-7 content modelling communities based on the results of the experiment; third, implement an online
service as a proof of concept of the framework; fourth, validate the framework through the online service during a repeat of the initial user experiment.
This research contributes first, a conceptual model of user behaviour visualised as a fuzzy cognitive
map and, second, an MPEG-7 framework for multimedia content modelling communities (MC2) and its proof of concept as an online service. The fuzzy cognitive model embodies relationships between user tagging behaviour and context and provides an understanding of user priorities in the description of content features and the relationships that exist between them. The MC2 framework,
developed based on the fuzzy cognitive model, is deep-rooted in user content modelling behaviour and content preferences. A proof of concept of the MC2 framework is implemented as an online service in which all metadata is modelled using MPEG-7. The online service is validated, first, empirically with the same group of users and through the same experiment that led to the development of the fuzzy cognitive model and, second, functionally against the folksonomy and MPEG-7 content modelling tools used in the initial experiment. The validation demonstrates that MC2 has the advantages without the shortcomings of existing multimedia tagging tools by harnessing the ease of use of folksonomy tools while producing comprehensive structured metadata. Supported by the UK Engineering and Physical Sciences Research Council (EPSRC).
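A fuzzy cognitive map propagates activation over weighted causal edges. The minimal step below illustrates the mechanism only; the concepts and weights are invented for this example and are not the thesis's actual model of tagging behaviour.

```python
import math

def fcm_step(activations, weights):
    """One propagation step of a fuzzy cognitive map: each concept sums its
    own activation plus weighted causal influence from its neighbours, then
    squashes the result with a sigmoid into (0, 1)."""
    new = {}
    for concept in activations:
        total = activations[concept] + sum(
            activations[src] * w
            for (src, dst), w in weights.items() if dst == concept
        )
        new[concept] = 1 / (1 + math.exp(-total))
    return new

# Hypothetical map: greater tagging effort causally increases description depth.
acts = {"tagging_effort": 0.8, "description_depth": 0.2}
wts = {("tagging_effort", "description_depth"): 0.6}
print(fcm_step(acts, wts))
```

Iterating `fcm_step` until the activations stabilise is how such a map is typically read off to understand which behaviours dominate.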
Creation and Testing of a Semi-Automated Digital Triage Process Model
Digital forensics examiners have a growing problem caused by their own success. The need for digital forensics is increasing, and so is the number of devices that need examining. Not only is the number of devices growing, but so is the amount of information those devices can hold. One result is a growing backlog that could soon overwhelm digital forensics labs across the country. One way to combat this problem is to use digital triage to find the most pertinent information first. Unfortunately, although several digital forensics models have been created, very few digital triage models have been developed. As a result, most organizations, if they perform digital triage at all, do so in an untested, ad hoc fashion that varies from office to office. This dissertation contributes to digital forensics science by creating and testing a digital triage model. The model is semi-automated to allow use by untrained users; it is as operating-system independent as possible; and it allows the user to customize it based on a specific crime class or classes. Use of this model will decrease the time it takes a digital triage examiner to make a successful assessment concerning evidence.
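Customizing triage per crime class can be sketched as weighting artifact types by class and ranking files accordingly, so the most pertinent items surface first. The crime classes, artifact kinds, and weights below are illustrative assumptions, not the dissertation's model.

```python
# Hypothetical per-crime-class indicator weights (higher = more pertinent).
CRIME_CLASS_WEIGHTS = {
    "fraud": {"spreadsheet": 3, "email": 2, "image": 1},
    "harassment": {"email": 3, "chat_log": 3, "image": 1},
}

def triage_rank(files, crime_class):
    """Order files so an examiner sees the most pertinent artifacts first."""
    weights = CRIME_CLASS_WEIGHTS[crime_class]
    return sorted(files, key=lambda f: weights.get(f["kind"], 0), reverse=True)

files = [{"name": "a.jpg", "kind": "image"},
         {"name": "b.xlsx", "kind": "spreadsheet"},
         {"name": "c.eml", "kind": "email"}]
print([f["name"] for f in triage_rank(files, "fraud")])
```

Swapping in a different weight table is all it takes to retarget the same ranking machinery at another crime class, which is the customization the model calls for.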
A patient agent controlled customized blockchain based framework for internet of things
Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things. New Blockchain-for-the-Internet-of-Things models promise secure solutions for eHealth, smart cities, and other applications. These models pave the way for continuous monitoring of patients’ physiological signs with wearable sensors, augmenting traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volume, security, and privacy requirements of health data. In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage and access of streams of sensor-generated health data in a customized Blockchain and other, less secure repositories. Because IoT devices cannot host Blockchain technology due to their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent offers solutions for continuous monitoring of patients’ health and for storage, data privacy, and network security issues, the architecture is vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address these issues, we advance a second contribution: a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer, and the FAR Processing Layer. The functionalities of the Patient Centric Agent are customized to manage the tasks of the three levels. Simulations confirm protection of the architecture against DoS attacks.
Few patients require all their health data to be stored in Blockchain repositories; instead, they need to select an appropriate storage medium for each chunk of data by matching their personal needs and preferences with the features of candidate storage mediums. Motivated by this context, we advance a third contribution: a recommendation model for health data storage that can accommodate patient preferences and make storage decisions rapidly, in real time, even with streamed data. The mapping between health data features and the characteristics of each repository is learned using machine learning. The Blockchain’s capacity to make transactions and store records without central oversight enables its application to IoT networks outside health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. However, underwater IoT differs from ground IoT in that acoustic signals are the communication medium, leading to high propagation delays and high error rates exacerbated by turbulent water currents. Our fourth contribution is a customized Blockchain-leveraged framework, with the Patient Centric Agent renamed the Smart Agent, for securely monitoring underwater IoT. Finally, the Smart Agent has been investigated in developing an IoT smart home and city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators.
Doctor of Philosophy
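The storage-recommendation idea above can be sketched as scoring each candidate repository against the patient's stated preference weights. The repositories, feature scores, and weights below are invented for illustration; the thesis learns this mapping with machine learning rather than hard-coding it.

```python
# Hypothetical feature scores per candidate storage medium (0..1, higher = better).
REPOSITORIES = {
    "blockchain": {"security": 1.0, "cost": 0.2, "throughput": 0.3},
    "cloud":      {"security": 0.5, "cost": 0.7, "throughput": 0.9},
    "local":      {"security": 0.6, "cost": 1.0, "throughput": 0.8},
}

def recommend(preferences):
    """Pick the repository whose features best match the patient's
    preference weights, e.g. {'security': 0.8, 'cost': 0.1, 'throughput': 0.1}."""
    def score(repo):
        return sum(preferences.get(f, 0) * v for f, v in REPOSITORIES[repo].items())
    return max(REPOSITORIES, key=score)

# A highly sensitive vital-signs chunk favours the most secure medium.
print(recommend({"security": 0.8, "cost": 0.1, "throughput": 0.1}))
```

Because each decision is a single weighted-sum comparison, it can keep up with streamed data, matching the real-time requirement stated above.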
Process Mining Handbook
This is an open access book. This book comprises all the single courses given as part of the First Summer School on Process Mining, PMSS 2022, which was held in Aachen, Germany, during July 4-8, 2022. This volume contains 17 chapters organized into the following topical sections: introduction; process discovery; conformance checking; data preprocessing; process enhancement and monitoring; assorted process mining topics; industrial perspective and applications; and closing.