Location-aware deep learning-based framework for optimizing cloud consumer quality of service-based service composition
The growing propensity of organizational users to adopt cloud services urges providers to deliver services in a service pool with a variety of functional and non-functional attributes. Cloud service brokers face intense rivalry, competing with one another to provide quality of service (QoS) enhancements. Such rivalry makes providing composite services on the cloud with a simple service selection and composition approach troublesome and complicated. Cloud composition is therefore considered a non-deterministic polynomial-time hard (NP-hard) and economically motivated problem, so developing a reliable economic model for composition is of tremendous interest and importance for the cloud consumer. This paper provides “A location-aware deep learning framework for improving the QoS-based service composition for cloud consumers”. The proposed framework first reduces the dimensionality of the data. Second, it applies a combination of a deep learning long short-term memory (LSTM) network and the particle swarm optimization (PSO) algorithm, additionally considering the location parameter, to accurately forecast the provisioned QoS values. Finally, it composes the ideal services needed to minimize the customer's cost function. The framework's performance has been demonstrated on a real dataset, showing that it outperforms current models in terms of prediction and composition accuracy
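As an illustration of the composition step only (the paper's framework additionally forecasts the QoS values with a location-aware LSTM combined with particle swarm optimization), the following self-contained sketch assumes predicted QoS values are already available and exhaustively picks one candidate service per task so that total cost is minimized while an aggregate QoS target is met. All service names, values, and thresholds are hypothetical.

```python
from itertools import product

# Hypothetical candidate services per abstract task, each with a
# predicted QoS value (e.g. availability in [0, 1]) and a price.
# In the paper these QoS values would come from the location-aware
# LSTM/PSO predictor; here they are hard-coded for illustration.
candidates = {
    "storage": [{"name": "s1", "qos": 0.99, "cost": 5.0},
                {"name": "s2", "qos": 0.95, "cost": 3.0}],
    "compute": [{"name": "c1", "qos": 0.97, "cost": 8.0},
                {"name": "c2", "qos": 0.92, "cost": 4.5}],
    "network": [{"name": "n1", "qos": 0.98, "cost": 2.0},
                {"name": "n2", "qos": 0.90, "cost": 1.0}],
}

MIN_COMPOSITE_QOS = 0.85  # hypothetical end-to-end availability target


def compose(candidates, min_qos):
    """Pick one service per task, minimizing total cost subject to an
    aggregate (multiplicative) QoS constraint, by exhaustive search."""
    best = None
    tasks = list(candidates)
    for combo in product(*(candidates[t] for t in tasks)):
        qos = 1.0
        for svc in combo:
            qos *= svc["qos"]
        cost = sum(svc["cost"] for svc in combo)
        if qos >= min_qos and (best is None or cost < best[0]):
            best = (cost, qos, {t: s["name"] for t, s in zip(tasks, combo)})
    return best


print(compose(candidates, MIN_COMPOSITE_QOS))
# -> (9.5, 0.85652..., {'storage': 's2', 'compute': 'c2', 'network': 'n1'})
```

The cheapest combination overall (s2, c2, n2) is rejected because its composite availability falls below the target, which is exactly the trade-off the cost-versus-QoS composition problem captures.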
Measuring the impact of COVID-19 on hospital care pathways
Hospitals around the world reported significant disruption to care pathways during the COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean length of stay or conformance across the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted
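As an illustration of the kind of measurement described above, a minimal sketch using the open-source pm4py and pandas libraries is shown below; the file name, the use of standard XES column names, and the choice of token-based replay are assumptions, and this is not the study's actual pipeline.

```python
import pandas as pd
import pm4py

# Placeholder file name: an event log with one row per event and the
# standard XES attribute names for case id, activity and timestamp.
log = pm4py.read_xes("ae_pathways.xes")
df = pm4py.convert_to_dataframe(log)

# Mean length of stay per pathway: last event time minus first event time.
spans = df.groupby("case:concept:name")["time:timestamp"].agg(["min", "max"])
mean_los = (spans["max"] - spans["min"]).mean()
print("mean length of stay:", mean_los)

# Conformance against a normative model. For brevity the model is simply
# discovered from the same log; in a study like the one above it would be
# mined from a baseline period and validated with domain experts.
net, im, fm = pm4py.discover_petri_net_inductive(df)
fitness = pm4py.fitness_token_based_replay(df, net, im, fm)
print("conformance summary:", fitness)
```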
Uncertainty in runtime verification: a survey
Runtime verification can be defined as a collection of formal methods for studying the dynamic evaluation of execution traces against formal specifications. Aside from creating a monitor from specifications and building algorithms for the evaluation of the trace, the process of gathering events and making them available to the monitor, and the communication between the system under analysis and the monitor, are critical steps in the runtime verification process. In many situations and for a variety of reasons, the event trace can be incomplete or can contain imprecise events. When a missing or ambiguous event is detected, the monitor may be unable to deliver a sound verdict. In this survey, we review the literature dealing with the problem of monitoring with incomplete traces. We list the different causes of uncertainty that have been identified and analyze their effect on the monitoring process. We identify and compare the different methods that have been proposed to perform monitoring on such traces, highlighting the advantages and drawbacks of each method
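To make the effect of uncertain traces concrete, here is a small, self-contained sketch (not taken from the survey) of a three-valued monitor for the property "every request is eventually followed by a response". Gaps in the trace are marked explicitly, and whenever a gap could hide the event the verdict depends on, the monitor answers inconclusively rather than risking an unsound verdict.

```python
from enum import Enum


class Verdict(Enum):
    TRUE = "satisfied"
    FALSE = "violated"
    INCONCLUSIVE = "cannot decide (missing events)"


GAP = None  # marker for a position where the observed event was lost


def monitor_response(trace):
    """Three-valued check of 'every "request" is eventually followed by a
    "response"' over a finite trace that may contain GAP markers."""
    last_request = last_response = last_gap = -1
    for i, event in enumerate(trace):
        if event is GAP:
            last_gap = i
        elif event == "request":
            last_request = i
        elif event == "response":
            last_response = i
    if last_request > last_response:
        # An observed request with no observed response after it: a definite
        # violation unless a later gap could hide the missing response.
        return Verdict.INCONCLUSIVE if last_gap > last_request else Verdict.FALSE
    # All observed requests are answered, but a gap after the last response
    # could hide a further request that never gets answered.
    return Verdict.INCONCLUSIVE if last_gap > last_response else Verdict.TRUE


print(monitor_response(["request", "work", "response"]))   # Verdict.TRUE
print(monitor_response(["request", "work"]))               # Verdict.FALSE
print(monitor_response(["request", GAP, "work"]))          # Verdict.INCONCLUSIVE
print(monitor_response(["request", "response", GAP]))      # Verdict.INCONCLUSIVE
```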
Natural type inference
Recently, dynamic language users have started to recognize the value of types in their code. To fulfil this need, many popular dynamic languages have adopted extensions that support type annotations. A prominent example is that of TypeScript which offers a module system, classes, interfaces, and an optional type system on top of JavaScript.
However, providing usable (not too verbose, or complex) types via traditional type inference is more challenging in optional type systems. Motivated by this, we redefine the goal of type inference for optionally typed languages as: infer the maximally natural and sound type, instead of the most general one. By the maximally natural and sound, we refer to a type that (1) is derivable in the type system, and (2) maximally reflects the intention of the programmer with respect to a learnt model.
We formally devise a type inference problem that aids the inference of the maximally natural type. Towards this goal, our problem asks to combine information derived from two sources: (1) from algorithmic type systems using deductive logic-based techniques; and (2) from the source code text using inductive machine learning techniques.
To tackle our formulated problem, we develop two frameworks that combine the two sources of information using mathematical optimization. In the first framework, we formulate the inference problem as a problem in numerical optimization. In the second framework, we map the inference problem into popular problems in discrete optimization: maximum satisfiability (MaxSAT) and Integer Linear Programming (ILP).
Both frameworks are built to be consistent with information derived from the different sources. Moreover, through formal proofs, we validate the soundness and completeness of the developed framework for a core lambda-calculus with named types.
To assess the efficacy of the developed frameworks, we implement them in a tool named Optyper that realizes natural type inference for TypeScript. We evaluate Optyper on TypeScript programs obtained from real-world projects. By evaluating our theoretical frameworks we show that, in practice, the combination of logical and natural constraints yields a large improvement in performance over either kind of information individually. Further, we demonstrate that our frameworks outperform state-of-the-art techniques in type inference, producing natural and sound types
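As an illustration only (this is not Optyper, and the program, candidate types, and weights are made up), the sketch below shows the shape of the optimization problem: hard constraints stand in for what an algorithmic type system permits, soft weights stand in for a learned naturalness model, and a brute-force search plays the role that MaxSAT or ILP solvers play at scale.

```python
from itertools import product

# Candidate annotations for each variable of a toy program.
variables = {
    "count": ["number", "any"],
    "name":  ["string", "any"],
    "flag":  ["boolean", "string", "any"],
}

# Soft weights: stand-ins for scores a learned model might assign to each
# (variable, type) pair based on identifier names and surrounding text.
naturalness = {
    ("count", "number"): 0.9, ("count", "any"): 0.1,
    ("name", "string"): 0.8,  ("name", "any"): 0.2,
    ("flag", "boolean"): 0.7, ("flag", "string"): 0.4, ("flag", "any"): 0.1,
}

# Hard constraints: predicates every sound assignment must satisfy, standing
# in for constraints derived from the type system. Example: suppose the
# program contains `flag = name`, so the type of `flag` must be compatible
# with (here: equal to, or `any` on either side) the type of `name`.
def assignment_ok(a):
    return a["flag"] == a["name"] or a["flag"] == "any" or a["name"] == "any"

hard_constraints = [assignment_ok]


def infer(variables, naturalness, hard_constraints):
    """Return the sound assignment with maximal total naturalness
    (brute force; a MaxSAT or ILP solver would do this at scale)."""
    best, best_score = None, float("-inf")
    names = list(variables)
    for combo in product(*(variables[v] for v in names)):
        a = dict(zip(names, combo))
        if not all(c(a) for c in hard_constraints):
            continue
        score = sum(naturalness[(v, t)] for v, t in a.items())
        if score > best_score:
            best, best_score = a, score
    return best, best_score


print(infer(variables, naturalness, hard_constraints))
# -> ({'count': 'number', 'name': 'string', 'flag': 'string'}, ~2.1)
```

The most "natural" annotation for `flag` (`boolean`) is rejected because it is unsound for the assumed assignment, so the search falls back to the most natural type that the hard constraints still admit.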
Specialized IoT systems: Models, Structures, Algorithms, Hardware, Software Tools
The monograph includes an analysis of the problems, models, algorithms, and hardware and software tools of specialized Internet of Things networks. It presents the results of designing and modelling an Internet of Things network, of monitoring product quality, of analysing environmental audio information, as well as a technology for detecting lung diseases based on neural networks. The monograph is intended for specialists in the field of infocommunications and may also be useful to students of related specialities, participants in continuing-education programmes, and master's and postgraduate students
Blockchain Software Verification and Optimization
In the last decade, blockchain technology has undergone a strong evolution. The maturity reached and the consolidation obtained have aroused the interest of companies and businesses, transforming it into a possible response to various industrial needs. However, the lack of standards and tools for the development and maintenance of blockchain software leaves open challenges and various possibilities for improvement. The goal of this thesis is to tackle some of the challenges posed by blockchain technology and to design and implement analyses, processes, and architectures that may be applied in the real world. In particular, two topics are addressed: the verification of blockchain software and the code optimization of smart contracts. As regards verification, the thesis focuses on the original development of tools and analyses able to detect statically, i.e. without code execution, issues related to non-determinism, untrusted cross-contract invocation, and numerical overflow/underflow. Moreover, an approach based on on-chain verification is investigated, to proactively involve the blockchain in verifying the code before and after its deployment. On the optimization side, the thesis describes an optimization process for code translation from the Solidity language to Takamaka, also proposing an efficient algorithm to compute snapshots for fungible and non-fungible tokens. The results of this thesis are an important first step towards improving blockchain software development, empirically demonstrating the applicability of the proposed approaches and their relevance also in the industrial field
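As a flavour of the kind of static check mentioned above, and not the thesis's actual analyses, the following sketch uses simple interval reasoning to flag additions of unsigned 256-bit values that may overflow, given conservative bounds on the operands; the variable names and bounds are invented.

```python
# Toy static overflow check for unsigned 256-bit arithmetic, in the
# spirit of (but much simpler than) the analyses described above.
UINT256_MAX = 2**256 - 1

# Conservative [lo, hi] bounds the analysis has derived for each variable;
# the names and bounds are hypothetical.
bounds = {
    "balance": (0, UINT256_MAX),   # completely unknown
    "deposit": (0, 10**18),        # bounded by a check elsewhere
    "fee":     (0, 1000),
}


def add_may_overflow(x, y, bounds):
    """Report whether `x + y` can exceed the uint256 range under the given
    interval bounds (a may-analysis: it never misses a real overflow)."""
    _, x_hi = bounds[x]
    _, y_hi = bounds[y]
    return x_hi + y_hi > UINT256_MAX


for lhs, rhs in [("balance", "deposit"), ("deposit", "fee")]:
    status = "may overflow" if add_may_overflow(lhs, rhs, bounds) else "safe"
    print(f"{lhs} + {rhs}: {status}")
# balance + deposit: may overflow
# deposit + fee: safe
```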
An Extensible Theorem Proving Frontend
Interactive theorem provers are software tools for computer-assisted proving, i.e. they can both verify suitably encoded proofs of logical statements and assist in constructing them. In recent years, far-reaching formalization projects in mathematics as well as program verification have been carried out with such theorem provers. The theorem prover Lean in particular has not only been used successfully to verify long-known mathematical theorems, but also to support current mathematical research. The goal of the Lean project is nothing less than to fundamentally change the way mathematicians work, by making computer-formalized proofs a practical alternative to pen-and-paper proofs. Laborious manual reviews of the correctness of proofs would thereby become obsolete, while at the same time it would be guaranteed that all necessary proof steps are captured exactly, instead of being left to the interpretation and background knowledge of the reader. To reach this goal, however, further advances in the efficiency and usability of theorem provers are needed.
As a step towards this goal, this dissertation describes a new, fully extensible theorem prover user interface ("frontend") in the context of Lean 4, the next version of Lean. The task of this frontend is the textual description and acceptance of proof input in a syntax that has to optimize several partially conflicting goals: compactness, readability for human users, and unambiguous interpretation by the theorem prover. Since written mathematics features an extensive set of different notations, which grows from year to year and can at the same time differ between fields, authors, or even individual papers, such an interface must allow users to extend it at any time with new, expressive notations and to assign meaning to them with flexible rules. This desire for flexibility of the input language is also found at the level of individual proof steps ("tactics") as well as at higher levels of proof and program organization.
I have realized the core of this desired extensibility with an expressive macro system for Lean, in which both simple syntax transformations ("syntactic sugar") and complex, type-driven translations into the prover's core language can be expressed. The macro system is based on a novel algorithm for macro hygiene, derived from that of the Lisp language Racket and adapted by me to the specific requirements of theorem provers, whose task is to ensure that the lexical scoping of identifiers works as intuitively expected even for complex macros. In designing the macro system I paid particular attention to making it easily accessible, by providing several levels of abstraction that differ in expressive power but rest on the same fundamental principles, such as the aforementioned macro hygiene. As an application example of the macro system, I describe an extension of the "do" notation known from Haskell with further imperative language features. The extended syntax has been incorporated into Lean 4 and has fundamentally changed the way both developers and users write monadic, but also pure, code.
The macro system constitutes the "heart" of the extensible frontend, but at the same time it is closely linked to, or dependent on, other software components within the user interface. I present the entire frontend and the surrounding Lean system, with a focus on the parts to which I contributed substantially. Finally, I describe an efficient reference-counting scheme for functional programming, which made a reimplementation of Lean in Lean itself, and thus the extensible frontend, possible in the first place. Specific optimizations therein for reusing allocations combine, similarly to the extended do notation, the advantages of imperative and purely functional programming in a new paradigm that I call "pure imperative programming"
Compliance checking of cloud providers: design and implementation
The recognition of the capabilities supplied by cloud systems is presently growing. Collecting and sharing healthcare data and sensitive information, especially during the Covid-19 pandemic, has motivated organizations and enterprises to leverage the benefits of cloud-based applications. However, the privacy of electronic data in such applications remains a significant challenge for cloud vendors, who must align their solutions with existing privacy legislation such as the General Data Protection Regulation (GDPR). This paper first proposes a formal model and verification of the data usage requests of providers in a cloud composite service using a model-checking tool. A cloud pharmacy scenario is presented to illustrate the connectivity of providers in the composite service and the stream of their requests for both collection and movement of patient data. A set of verifications is then undertaken over the pharmacy service in accordance with three significant GDPR obligations, namely user consent, data access and data transfer. Following that, the paper designs and implements a cloud container virtualization based on the verified formal model realising the GDPR requirements. The container makes use of enforcement smart contracts to only process providers' requests that are compliant with the GDPR. Finally, several experiments are provided to investigate the performance of our approach in terms of time, memory and cost
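To illustrate the enforcement idea, the sketch below checks a provider's request against a patient's recorded consent, the provider's access rights, and a list of lawful transfer destinations before letting it proceed, mirroring the three GDPR obligations considered in the paper. The data structures and rules are hypothetical and are not the paper's formal model, container, or smart contracts.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """What the data subject has agreed to (hypothetical structure)."""
    purposes: set = field(default_factory=set)            # e.g. {"dispensing"}
    allowed_recipients: set = field(default_factory=set)  # providers that may access
    allowed_regions: set = field(default_factory=set)     # lawful transfer targets


@dataclass
class Request:
    provider: str          # who asks for the data
    action: str            # "collect", "access" or "transfer"
    purpose: str           # declared processing purpose
    destination: str = ""  # target region for transfers


def compliant(req: Request, consent: ConsentRecord) -> bool:
    """Admit a request only if it matches the recorded consent
    (user consent, data access and data transfer obligations)."""
    if req.purpose not in consent.purposes:
        return False                       # no consent for this purpose
    if req.provider not in consent.allowed_recipients:
        return False                       # provider may not access the data
    if req.action == "transfer" and req.destination not in consent.allowed_regions:
        return False                       # transfer target not permitted
    return True


consent = ConsentRecord(purposes={"dispensing"},
                        allowed_recipients={"pharmacy", "insurer"},
                        allowed_regions={"EU"})

print(compliant(Request("pharmacy", "access", "dispensing"), consent))          # True
print(compliant(Request("pharmacy", "transfer", "dispensing", "US"), consent))  # False
print(compliant(Request("lab", "access", "research"), consent))                 # False
```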
Research Paper: Process Mining and Synthetic Health Data: Reflections and Lessons Learnt
Analysing the treatment pathways in real-world health data can provide valuable insight for clinicians and decision-makers. However, the procedures for acquiring real-world data for research can be restrictive and time-consuming, and risk disclosing identifiable information. Synthetic data might enable representative analysis without direct access to sensitive data. In the first part of our paper, we propose an approach for grading synthetic data for process analysis based on its fidelity to relationships found in real-world data. In the second part, we apply our grading approach by assessing cancer patient pathways in a synthetic healthcare dataset (The Simulacrum, provided by the English National Cancer Registration and Analysis Service) using process mining. Visualisations of the patient pathways within the synthetic data appear plausible, showing relationships between events confirmed in the underlying non-synthetic data. Data quality issues are also present within the synthetic data, which reflect real-world problems and artefacts from the synthetic dataset's creation. Process mining of synthetic data in healthcare is an emerging field with novel challenges. We conclude that researchers should be aware of the risks when extrapolating results produced from research on synthetic data to real-world scenarios, and should assess findings with analysts who are able to view the underlying data
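One simple way to operationalize "fidelity to relationships found in real-world data" is to compare the directly-follows relations of the two logs. The sketch below is a minimal illustration with placeholder logs and column names, not the paper's grading approach: it counts which activity pairs directly follow each other within a case in each log and reports their overlap.

```python
from collections import Counter

import pandas as pd


def directly_follows(df, case_col="case_id", act_col="activity", ts_col="timestamp"):
    """Count (activity, next activity) pairs per case in an event log."""
    df = df.sort_values([case_col, ts_col])
    pairs = Counter()
    for _, events in df.groupby(case_col):
        acts = list(events[act_col])
        pairs.update(zip(acts, acts[1:]))
    return pairs


# Placeholder logs; in practice these would be the real and synthetic
# event logs (e.g. registry pathways and their Simulacrum counterparts).
real = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2],
    "activity": ["diagnosis", "surgery", "follow-up", "diagnosis", "chemo"],
    "timestamp": pd.to_datetime(
        ["2021-01-01", "2021-01-10", "2021-02-01", "2021-01-05", "2021-01-20"]),
})
synthetic = real.copy()  # stand-in; normally a separately generated log

real_pairs = directly_follows(real)
synth_pairs = directly_follows(synthetic)

shared = set(real_pairs) & set(synth_pairs)
coverage = len(shared) / len(real_pairs) if real_pairs else 0.0
print(f"{coverage:.0%} of real directly-follows pairs also appear in the synthetic log")
```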
Automatic generation of highly concurrent, hierarchical and heterogeneous cache coherence protocols from atomic specifications
Cache coherence protocols are often specified using only stable states and atomic transactions for a single cache hierarchy level. Designing highly concurrent, hierarchical and heterogeneous directory cache coherence protocols from these atomic specifications for modern multicore architectures is a complicated task. To overcome these design challenges we have developed the novel *Gen algorithms (ProtoGen, HieraGen and HeteroGen). Using the *Gen algorithms, highly concurrent, hierarchical and heterogeneous cache coherence protocols can be automatically generated for a wide range of atomic input stable state protocol (SSP) specifications, including the MOESI variants, as well as for protocols that are targeted towards Total Store Order and Release Consistency. In addition, for each *Gen algorithm we have developed and published an eponymous tool.
The ProtoGen tool takes as input a single SSP (i.e., with no concurrency) and generates the corresponding protocol for a multicore architecture with non-atomic transactions. The ProtoGen algorithm automatically enforces the correct interleaving of conflicting coherence transactions for a given atomic coherence protocol specification.
HieraGen is a tool for automatically generating hierarchical cache coherence protocols. Its inputs are SSPs for each level of the hierarchy and its output is a highly concurrent hierarchical protocol. HieraGen thus reduces the complexity that architects face by offloading the challenging task of composing protocols and managing concurrency.
HeteroGen is a tool for automatically generating heterogeneous protocols that adhere to precise consistency models. As input, HeteroGen takes the SSPs of the per-cluster coherence protocols, each of which satisfies its own per-cluster consistency model. The output is a concurrent (i.e., with transient states) heterogeneous protocol that satisfies a precisely defined consistency model that we refer to as a compound consistency model.
To validate the correctness of the *Gen algorithms, the generated output protocols were verified for safety and deadlock freedom using a model checker. To verify the correctness of the protocols generated by HeteroGen, which need to adhere to a specific compound consistency model, novel litmus tests for multiple compound consistency models were developed.
The protocols automatically generated using the *Gen tools have comparable or better performance than manually generated cache coherence protocols, often discovering opportunities to reduce stalls. Thus, the *Gen tools reduce the complexity that architects face by offloading the challenging tasks of composing protocols and managing concurrency
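For readers unfamiliar with the input format, the sketch below shows what an atomic stable-state protocol (SSP) specification might look like for a textbook MSI protocol, encoded as a plain transition table. The encoding, state names, and the final loop are illustrative only; the *Gen tools consume their own specification language and derive the transient states and interleavings automatically.

```python
# A toy atomic stable-state protocol (SSP) for a single cache level:
# textbook MSI, with each transaction treated as if it completed atomically.
# Keys are (stable state, event); values are (next stable state, actions).
MSI_SSP = {
    # Requests from the local processor
    ("I", "Load"):      ("S", ["issue GetS", "fill from data response"]),
    ("I", "Store"):     ("M", ["issue GetM", "fill from data response"]),
    ("S", "Load"):      ("S", ["read hit"]),
    ("S", "Store"):     ("M", ["issue GetM (upgrade)"]),
    ("M", "Load"):      ("M", ["read hit"]),
    ("M", "Store"):     ("M", ["write hit"]),
    ("M", "Eviction"):  ("I", ["issue PutM with data"]),
    ("S", "Eviction"):  ("I", ["issue PutS (variant-dependent)"]),
    # Requests observed from the directory or other caches
    ("S", "OtherGetM"): ("I", ["invalidate"]),
    ("M", "OtherGetS"): ("S", ["send data to requestor and directory"]),
    ("M", "OtherGetM"): ("I", ["send data to requestor"]),
}

# A generator in the spirit of ProtoGen would expand each transaction that
# issues a coherence request into transient states (e.g. I -> "waiting for
# data" -> S) and add the interleavings with conflicting remote transactions;
# here we only enumerate which transitions would need such expansion.
for (state, event), (next_state, actions) in MSI_SSP.items():
    needs_split = any(a.startswith("issue") for a in actions)
    marker = "transaction: transient states inserted" if needs_split else "handled in place"
    print(f"{state:>2} --{event:<9}--> {next_state}: {marker}")
```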