Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts
Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain, often with unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches to providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks, as well as an extension toward the object-oriented modeling paradigm.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Methodological approaches and techniques for designing ontologies in information systems requirements engineering
Doctoral Programme in Information Systems and Technology
The way we interact with the world around us is changing as new challenges arise: embracing innovative business models, rethinking organizations and processes to maximize results, and evolving change management. Currently, and considering the projects executed, the methodologies in use do not fully respond to companies' needs. On the one hand, organizations are not familiar with the languages used in Information Systems; on the other hand, they are often unable to validate requirements or business models. These are some of the difficulties encountered that lead us to formulate a new approach. Thus, the state of the art presented in this document includes a study of the models involved in the software development process, covering traditional methods and the rivalry of agile methods. In addition, a survey is made of ontologies and of the methods that exist to conceive, transform, and represent them.
Thus, after analyzing some of the various possibilities currently available, we began the process of evolving a method and developing an approach that would allow us to design ontologies. The method we evolved and adapted will allow us to derive terminologies from a specific domain, aggregating them in order to facilitate the construction of a catalog of terminologies. Next, the definition of an approach to designing ontologies will allow the construction of a domain-specific ontology. This approach allows in the first instance to integrate and store the data from different information systems of a given organization. In a second instance, the rules for mapping and building the ontology database are defined. Finally, a technological architecture is also proposed that will allow the mapping of an ontology through the construction of complex networks, allowing mapping and relating terminologies.
This doctoral work encompasses numerous Research & Development (R&D) projects belonging to different domains, such as the Software Industry, Textile Industry, Robotics Industry, and Smart Cities. Finally, a critical and descriptive analysis of the work done is performed, and we also point out perspectives for possible future work.
Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group
This is the Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group (PPIG). This was the first PPIG to be held physically since 2019, following the two online-only PPIGs in 2020 and 2021, both during the COVID-19 pandemic. It was also the first PPIG conference to be designed specifically for hybrid attendance. Reflecting the theme, it was hosted by the Music Computing Lab at the Open University in Milton Keynes.
Routines and Applications of Symbolic Algebra Software
Computing has become an essential resource in modern research and has found application across a wide range of scientific disciplines. Developments in symbolic algebra tools have been particularly valuable in physics, where calculations in fields such as general relativity, quantum field theory, and physics beyond the standard model are becoming increasingly complex and impractical to work with by hand. The computer algebra system Cadabra is a tensor-first approach to symbolic algebra based on the programming language Python; it has been used extensively in research in these fields while also having a shallow learning curve, making it an excellent way to introduce students to methods in computer algebra.
The work in this thesis has been concentrated on developing Cadabra, which has involved looking at two different elements that make up a computer algebra program. Firstly, the implementation of algebraic routines is discussed. This has primarily been focused on the introduction of an algorithm for detecting the equivalence of tensorial expressions related by index permutation symmetries. The method employed differs considerably from the traditional canonicalisation routines commonly used for this purpose: it uses Young projection operators to make such symmetries manifest.
The other element of writing a computer algebra program which is covered is the infrastructure and environment. The importance of this aspect of software design is often overlooked by funding committees and academic software users, resulting in an anti-pattern in which code is not shared and contributed to in the way research itself is published and promulgated. The focus in this area has been on implementing a packaging system for Cadabra, which allows the writing of generic libraries that can be shared by the community, and on interfacing with other scientific computing packages to increase the capabilities of Cadabra.
Understanding the Significance of Patient Empowerment in Health Care Services and Delivery
To address emerging challenges in empowering patients through telehealth, this dissertation has the following objectives: (a) identify the key characteristics that enable patient empowerment (PE), (b) determine when PE will work as a solution, (c) find the optimal telehealth care method that enables PE, and (d) evaluate the impact of telehealth on health care outcomes (such as patient satisfaction and patient trust in primary care providers) that ultimately enhance PE. These objectives are addressed in three studies presented here as three essays. Collectively, these essays contribute to the knowledge on PE, patient trust, and telehealth by providing insights on leveraging PE toward better health care services and delivery systems. Essay 1 systemically maps the concept of PE using principles of systems thinking with the Boardman soft systems methodology, which enables a graphical visualization (i.e., systemigrams). Essay 2 investigates the practical and theoretical implications of connecting patients to empowerment care plans and minimizing wait times in health care service delivery using electronic prescriptions (e-scripts), phone calls, and video calls. In Essay 3, the mediating role of telehealth services between patient empowerment and patient satisfaction was analyzed, and patient trust was assessed as a moderator between telehealth usability and patient satisfaction. Two hundred sixty-two responses from patients with chronic illnesses in North America, collected through an online survey questionnaire, were analyzed using partial least squares structural equation modeling (PLS-SEM). The findings show that patients with chronic illnesses in North America feel empowered by using telehealth, as they can obtain a diagnosis even in remote areas and face no obstacles.
Applications of Boolean modelling to study and stratify dynamics of a complex disease
Interpretation of omics data is needed to form meaningful hypotheses about disease mechanisms. Pathway databases give an overview of disease-related processes, while mathematical models give qualitative and quantitative insights into their complexity. Similarly to pathway databases, mathematical models are stored and shared on dedicated platforms. Moreover, community-driven initiatives such as disease maps encode disease-specific mechanisms in both computable and diagrammatic form, using dedicated tools for diagram biocuration and visualisation. To investigate the dynamic properties of complex disease mechanisms, computationally readable content can be used as a scaffold for building dynamic models in an automated fashion. The dynamic properties of a disease are extremely complex; therefore, more research is required to better understand the complexity of molecular mechanisms, which may advance personalised medicine in the future.
In this study, Parkinson’s disease (PD) is analysed as an example of a complex disorder. PD is associated with complex genetic and environmental causes and with comorbidities that need to be analysed in a systematic way to better understand the progression of its different subtypes. Studying PD as a multifactorial disease requires deconvoluting the multiple, overlapping changes to identify the driving neurodegenerative mechanisms. Integrated systems analysis and modelling can enable us to study different aspects of a disease, such as progression, diagnosis, and response to therapeutics. Modelling such complex processes depends on the scope, which may vary with the nature of the process (e.g. signalling vs. metabolic). Experimental design and the resulting data also influence model structure and analysis. Boolean modelling is proposed to analyse the complexity of PD mechanisms. Boolean models (BMs) are qualitative rather than quantitative and do not require the detailed kinetic information needed by formalisms such as Petri nets or ordinary differential equations (ODEs). Boolean modelling is a logical formalism in which each variable takes a binary value of one (ON) or zero (OFF), making it a plausible approach in cases where quantitative details and kinetic parameters are not available. Boolean modelling is well validated in clinical and translational medicine research.
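The ON/OFF formalism described above can be sketched in a few lines of Python. The three-node network and its update rules below are hypothetical illustrations, not taken from the PD map: a synchronous update is applied repeatedly until the trajectory revisits a state, revealing a steady state or limit cycle.

```python
# Minimal sketch of a synchronous Boolean network (hypothetical rules).
def step(state):
    """Apply all update rules synchronously to the current state."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": a,            # A is an input node: it keeps its value
        "B": a and not c,  # B is activated by A and inhibited by C
        "C": b,            # C is activated by B
    }

def simulate(state, max_steps=20):
    """Iterate until a previously seen state recurs (fixed point or cycle)."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            break
        seen.append(state)
        state = step(state)
    return seen

trajectory = simulate({"A": True, "B": False, "C": False})
# This toy network settles into a limit cycle of four states.
```

Attractors found this way (fixed points and cycles) are the Boolean analogue of the steady states and oscillations that kinetic models describe quantitatively.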
In this project, the PD map was translated into BMs in an automated fashion using different methods, so that the complexity of disease pathways can be analysed by simulating the effect of genomic burden on omics data. To ensure that the BMs accurately represent the biological system, validation was performed by simulating models at different scales of complexity and comparing their behaviour with the expected behaviour based on validated biological knowledge. The TCA cycle was used as an example of a well-studied simple network, and complex signalling networks at different scales were used, including the Wnt-PI3K/AKT pathway and T-cell differentiation models. As a result, matched and mismatched behaviours were identified, allowing the models to be modified to better represent disease mechanisms. The BMs were then stratified by integrating omics data from multiple disease cohorts. The miRNA datasets from the Parkinson’s Progression Markers Initiative (PPMI) study were analysed; PPMI provides an important resource for the investigation of potential biomarkers and therapeutic targets for PD. Such stratification allowed studying disease heterogeneity and specific responses to molecular perturbations. The results can support research hypotheses, diagnose a condition, and maximise the benefit of a treatment. Furthermore, the challenges and limitations associated with Boolean modelling in general were discussed, as well as those specific to the current study.
Based on the results, there are different ways to improve Boolean modelling applications. Modellers can perform exploratory investigations, gathering the associated information about the model from literature and data resources. Missing details can be inferred by integrating omics data, which identifies missing components and optimises model accuracy. Accurate and computable models improve the efficiency of simulations and the resulting analysis of their controllability. In parallel, the maintenance of model repositories and the sharing of models in easily interoperable formats are also important.
Developments in multiscale ONIOM and fragment methods for complex chemical systems
Multiscale problems are becoming ever more ubiquitous in computational chemistry, and certain classes of such problems elude efficient description with the available computational approaches. In this work, efficient extensions of the multilayer ONIOM method and of fragment methods were developed as approaches to such problems. In particular, the combination of ONIOM and fragment methods was developed within the framework of the Multi-Centre Generalised ONIOM, along with a multilayer variant of the Fragment Combination Ranges. In addition, schemes for the electronic embedding of such multilayer systems were developed. The second part of the thesis describes the implementation in the Haskell program "Spicy" and demonstrates applications of such multiscale methods.
A productive response to legacy system petrification
Requirements change. The requirements of a legacy information system change, often in unanticipated ways, and at a more rapid pace than the rate at which the information system itself can be evolved to support them. The capabilities of a legacy system progressively fall further and further behind their evolving requirements, in a degrading process termed petrification. As systems petrify, they deliver diminishing business value, hamper business effectiveness, and drain organisational resources. To address legacy systems, the first challenge is to understand how to shed their resistance to tracking requirements change. The second challenge is to ensure that a newly adaptable system never again petrifies into a change-resistant legacy system. This thesis addresses both challenges. The approach outlined herein is underpinned by an agile migration process - termed Productive Migration - that homes in upon the specific causes of petrification within each particular legacy system and provides guidance upon how to address them. That guidance comes in part from a personalised catalogue of petrifying patterns, which capture recurring themes underlying petrification. These steer us to the problems actually present in a given legacy system, and lead us to suitable antidote productive patterns via which we can deal with those problems one by one. To prevent newly adaptable systems from again degrading into legacy systems, we appeal to a follow-on process, termed Productive Evolution, which embraces and keeps pace with change rather than resisting and falling behind it. Productive Evolution teaches us to be vigilant against signs of system petrification and helps us to nip them in the bud. The aim is to nurture systems that remain supportive of the business, that are adaptable in step with ongoing requirements change, and that continue to retain their value as significant business assets.
Provenance, Incremental Evaluation, and Debugging in Datalog
The Datalog programming language has recently found increasing traction in research and industry. Driven by its clean declarative semantics, along with its conciseness and ease of use, Datalog has been adopted for a wide range of important applications, such as program analysis, graph problems, and networking. To enable this adoption, modern Datalog engines have implemented advanced language features and high-performance evaluation of Datalog programs. Unfortunately, critical infrastructure and tooling to support Datalog users and developers are still missing. For example, there are only limited tools addressing the crucial debugging problem, where developers can spend up to 30% of their time finding and fixing bugs.
This thesis addresses Datalog’s tooling gaps, with the ultimate goal of improving the productivity of Datalog programmers. The first contribution is centered around the critical problem of debugging: we develop a new debugging approach that explains the execution steps taken to produce a faulty output. Crucially, our debugging method can be applied to large-scale applications without substantially sacrificing performance. The second contribution addresses the problem of incremental evaluation, which is necessary when program inputs change slightly and results need to be recomputed. Incremental evaluation allows this recomputation to happen more efficiently, without discarding the previous results and recomputing from scratch. Finally, the last contribution provides a new incremental debugging approach that identifies the root causes of faulty outputs that occur after an incremental evaluation. Incremental debugging focuses on the relationship between input and output and can provide debugging suggestions to amend the inputs so that faults no longer occur. These techniques, in combination, form a corpus of critical infrastructure and tooling developments for Datalog, allowing developers and users to use Datalog more productively.
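As a rough illustration of the evaluation style such engines build on (not the thesis's own implementation), semi-naive evaluation, the foundation underlying incremental techniques, re-joins only the tuples derived in the previous round rather than the whole relation. The sketch below computes the classic transitive-closure program path(x,y) :- edge(x,y). path(x,z) :- path(x,y), edge(y,z). over an invented edge relation:

```python
# Minimal sketch of semi-naive Datalog evaluation for transitive closure.
def transitive_closure(edges):
    """Fixpoint loop: each round joins only the newly derived tuples
    (the 'delta') against edge, instead of re-deriving everything."""
    path = set(edges)   # first rule: every edge is a path
    delta = set(edges)  # tuples derived in the previous round
    while delta:
        # second rule, restricted to new facts:
        # path(x, z) :- delta(x, y), edge(y, z)
        new = {(x, z)
               for (x, y) in delta
               for (y2, z) in edges
               if y == y2} - path
        path |= new
        delta = new
    return path

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
# derives (1, 3), (2, 4), and (1, 4) in addition to the base edges
```

Incremental evaluation generalizes this idea: when an input tuple is added or removed, only the affected deltas are propagated, avoiding a full recomputation.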