Survey-scale discovery-based research processes: Evaluating a bespoke visualisation environment for astronomical survey data
Next generation astronomical surveys naturally pose challenges for
human-centred visualisation and analysis workflows that currently rely on the
use of standard desktop display environments. While a significant fraction of
the data preparation and analysis will be taken care of by automated pipelines,
crucial steps of knowledge discovery can still only be achieved through various
levels of human interpretation. As the number of sources in a survey grows,
there is a need to both modify and simplify repetitive visualisation processes
that must be completed for each source. As tasks such as per-source quality
control, candidate rejection, and morphological classification all share a
single instruction, multiple data (SIMD) work pattern, they are amenable to a
parallel solution. Selecting extragalactic neutral hydrogen (HI) surveys as a
representative example, we use system performance benchmarking and the visual
data analysis and reasoning (VDAR) methodology from the field of information
visualisation to evaluate a bespoke comparative visualisation environment: the
encube visual analytics framework deployed on the 83 Megapixel Swinburne
Discovery Wall. Through benchmarking using spectral cube data from existing HI
surveys, we are able to perform interactive comparative visualisation via
texture-based volume rendering of 180 three-dimensional (3D) data cubes at a
time. The time to load a configuration of spectral cubes scale linearly with
the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or
34 Gigabytes) each loading in under 5 minutes. We show that parallel
comparative inspection is a productive and time-saving technique which can
reduce the time taken to complete SIMD-style visual tasks currently performed
at the desktop by at least two orders of magnitude, potentially rendering some
labour-intensive desktop-based workflows obsolete.Comment: 21 pages, 10 figures, Accepted for publication in the Publications of
the Astronomical Society of Australi
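The linear scaling claim above admits a simple back-of-the-envelope model. The sketch below is purely illustrative; the rate constant is back-derived from the quoted figures (8.4 Gigavoxels loading in under 5 minutes), not taken from the paper's benchmark tables:

```python
def estimated_load_seconds(n_voxels, voxels_per_second=8.4e9 / 300.0):
    """Linear load-time model: time grows in proportion to voxel count.

    The default rate assumes 8.4 Gigavoxels (one 180-cube configuration)
    loading in 300 seconds, the upper bound quoted in the abstract.
    """
    return n_voxels / voxels_per_second

# A full 180-cube configuration takes ~300 s under this model;
# a single cube (~47 Megavoxels) takes roughly 1/180 of that.
full_config = estimated_load_seconds(8.4e9)
one_cube = estimated_load_seconds(8.4e9 / 180)
```

Under this model, doubling the voxel count of a configuration simply doubles the load time, which is what the reported linear fit implies.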
Sequential Manipulation Planning on Scene Graph
We devise a 3D scene graph representation, contact graph+ (cg+), for
efficient sequential task planning. Augmented with predicate-like attributes,
this contact graph-based representation abstracts scene layouts with succinct
geometric information and valid robot-scene interactions. Goal configurations,
naturally specified on contact graphs, can be produced by a genetic algorithm
with a stochastic optimization method. A task plan is then initialized by
computing the Graph Editing Distance (GED) between the initial contact graphs
and the goal configurations, which generates graph edit operations
corresponding to possible robot actions. We finalize the task plan by imposing
constraints to regulate the temporal feasibility of graph edit operations,
ensuring valid task and motion correspondences. In a series of simulations and
experiments, robots successfully complete complex sequential object
rearrangement tasks that are difficult to specify using conventional planning
languages such as the Planning Domain Definition Language (PDDL), demonstrating the
high feasibility and potential of robot sequential task planning on contact
graphs.
Comment: 8 pages, 6 figures. Accepted by IROS 202
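The planning step described above, deriving robot actions from graph edit operations, can be pictured with a toy example. This is not the paper's cg+ machinery: contact graphs are reduced to plain sets of (child, parent) support edges, and GED collapses to set differences:

```python
def edit_operations(initial, goal):
    """Derive edit operations turning one contact graph into another.

    Graphs are sets of (child, parent) support edges; each edge
    change maps to a candidate robot action (break or make a contact).
    """
    removed = initial - goal   # contacts the robot must break
    added = goal - initial     # contacts the robot must make
    ops = [("detach", c, p) for c, p in sorted(removed)]
    ops += [("place", c, p) for c, p in sorted(added)]
    return ops

# Goal: move the cup from the table onto the shelf.
initial = {("cup", "table"), ("book", "shelf")}
goal = {("cup", "shelf"), ("book", "shelf")}
print(edit_operations(initial, goal))
# [('detach', 'cup', 'table'), ('place', 'cup', 'shelf')]
```

A real planner must additionally order these operations for temporal feasibility, which is exactly the constraint step the abstract describes.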
Semantic Driven Approach for Rapid Application Development in Industrial Internet of Things
The evolution of the IoT has revolutionized industrial automation. Industrial devices at every level, such as field devices, control devices, and enterprise-level devices, are connected to the Internet, where they can be accessed easily. This has significantly changed the way applications are developed for industrial automation systems, and has led to a paradigm shift in which novel IoT application development tools such as Node-RED can be used to develop complex industrial applications as IoT orchestrations. However, in their current state, these applications are bound strictly to devices from specific vendors and ecosystems. They cannot be re-used with devices from other vendors and platforms, since the applications are not semantically interoperable. It is therefore desirable to use platform-independent, vendor-neutral application templates for common automation tasks. In its current state, however, Node-RED does not support the development of such reusable and interoperable application templates. The interoperability problem at the data level can be addressed in the IoT using Semantic Web (SW) technologies. However, for an industrial engineer or an IoT application developer, SW technologies are not easy to use. Enabling efficient use of SW technologies to create interoperable IoT applications therefore requires novel IoT tools. To this end, we propose a novel semantic extension to the widely used Node-RED tool that introduces semantic definitions, such as iot.schema.org semantic models, into Node-RED. The tool guides non-experts in semantic technologies, such as device vendors and machine builders, in configuring the semantics of a device consistently. Moreover, it enables engineers and IoT application developers to design and develop semantically interoperable IoT applications with minimal effort. Our approach accelerates the application development process by introducing novel semantic application templates, called Recipes, in Node-RED.
Using Recipes, complex application development tasks, such as skill matching between Recipes and existing things, can be automated. We present an approach to perform automated skill matching on the Cloud or on the Edge of an automation system. We performed a quantitative and qualitative evaluation of our approach to test its feasibility and scalability in real-world scenarios. The results of the evaluation are presented and discussed in the paper.
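The skill matching automated by Recipes can be pictured as checking which available things cover the interactions a Recipe requires. The sketch below is deliberately simplified: capabilities are plain strings rather than the iot.schema.org semantic models the tool actually matches against, and all device names are invented:

```python
def match_things(recipe_skills, things):
    """For each required skill, list the things whose declared
    capabilities cover it (strings stand in for semantic models)."""
    return {
        skill: [name for name, caps in things.items() if skill in caps]
        for skill in recipe_skills
    }

# Hypothetical devices and a two-skill recipe.
things = {
    "conveyor-1": {"StartStop", "SpeedControl"},
    "camera-2": {"ImageCapture"},
}
print(match_things(["StartStop", "ImageCapture"], things))
# {'StartStop': ['conveyor-1'], 'ImageCapture': ['camera-2']}
```

Semantic matching adds subsumption reasoning on top of this (a specific capability can satisfy a more general requirement), which is what makes the iot.schema.org models necessary.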
Understanding, Analysis, and Handling of Software Architecture Erosion
Architecture erosion occurs when a software system's implemented architecture diverges from the intended architecture over time. Studies show that erosion impacts development, maintenance, and evolution, since it accumulates imperceptibly. Identifying early symptoms, such as architectural smells, enables erosion to be managed through refactoring. However, research lacks a comprehensive understanding of erosion: it is unclear which symptoms are most common, and detection methods are lacking. This thesis establishes an erosion landscape, investigates symptoms, and proposes identification approaches.

A mapping study covers erosion definitions, symptoms, causes, and consequences. Key findings: 1) "Architecture erosion" is the most used term, with four perspectives on definitions and respective symptom types. 2) Technical and non-technical reasons contribute to erosion, negatively impacting quality attributes; practitioners can advocate addressing erosion to prevent failures. 3) Detection and correction approaches are categorized, with consistency- and evolution-based approaches most commonly mentioned.

An empirical study explores practitioner perspectives through communities, surveys, and interviews. Findings reveal that associated practices, such as code review, and tools identify symptoms, while the collected measures address erosion during implementation. Studying code review comments analyzes erosion in practice. One study reveals that architectural violations, duplicate functionality, and cyclic dependencies are most frequent. Symptoms decreased over time, indicating increased stability, and most were addressed after review. A second study explores violation symptoms in four projects, identifying 10 categories. Refactoring and removing code address most violations, while some are disregarded.

Machine learning classifiers using pre-trained word embeddings identify violation symptoms from code reviews. Key findings: 1) SVM with word2vec achieved the highest performance. 2) fastText embeddings worked well. 3) 200-dimensional embeddings outperformed 100- and 300-dimensional ones. 4) An ensemble classifier improved performance. 5) Practitioners found the results valuable, confirming their potential.

An automated recommendation system identifies qualified reviewers for violations using similarity detection on file paths and comments. Experiments show that common methods perform well, outperforming a baseline approach. Sampling techniques impact recommendation performance.
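The embedding-based classification pipeline above can be sketched in miniature. Everything here is a stand-in: the two-dimensional vectors replace real pre-trained word2vec embeddings, and a nearest-centroid rule replaces the SVM that the thesis found to perform best:

```python
# Tiny stand-in vocabulary; a real system loads pre-trained word2vec
# or fastText vectors (the thesis found 200-dimensional ones best).
EMB = {
    "cyclic": (1.0, 0.0), "dependency": (0.9, 0.1),
    "duplicate": (0.0, 1.0), "functionality": (0.1, 0.9),
}

def comment_vector(comment):
    """Average the embeddings of the known words in a review comment."""
    vecs = [EMB[w] for w in comment.lower().split() if w in EMB]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def classify(comment, centroids):
    """Nearest-centroid stand-in for the SVM classifier."""
    v = comment_vector(comment)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(v, centroids[c])))

centroids = {"cyclic_dependency": (0.95, 0.05),
             "duplicate_functionality": (0.05, 0.95)}
print(classify("this introduces a cyclic dependency", centroids))
# cyclic_dependency
```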
MatNexus: A Comprehensive Text Mining and Analysis Suite for Materials Discovery
MatNexus is a specialized software for the automated collection, processing,
and analysis of text from scientific articles. Through an integrated suite of
modules, MatNexus facilitates the retrieval of scientific articles,
processes textual data for insights, generates vector representations suitable
for machine learning, and offers visualization capabilities for word
embeddings. With the vast volume of scientific publications, MatNexus stands
out as an end-to-end tool for researchers aiming to gain insights from
scientific literature in materials science, making the exploration of materials,
such as the electrocatalyst examples we show here, efficient and insightful.
Comment: 15 pages, 6 figures, submission to Software
By how much can closed-loop frameworks accelerate computational materials discovery?
The implementation of automation and machine learning surrogatization within
closed-loop computational workflows is an increasingly popular approach to
accelerate materials discovery. However, the scale of the speedup associated
with this paradigm shift from traditional manual approaches remains an open
question. In this work, we rigorously quantify the acceleration from each of
the components within a closed-loop framework for material hypothesis
evaluation by identifying four distinct sources of speedup: (1) task
automation, (2) calculation runtime improvements, (3) sequential
learning-driven design space search, and (4) surrogatization of expensive
simulations with machine learning models. This is done using a time-keeping
ledger to record runs of automated software and corresponding manual
computational experiments within the context of electrocatalysis. From a
combination of the first three sources of acceleration, we estimate that
overall hypothesis evaluation time can be reduced by over 90%, i.e., achieving
a speedup of more than 10x. Further, by introducing surrogatization into the
loop, we estimate that the design time can be reduced by over 95%, i.e.,
achieving a speedup of more than 20x. Our findings present a clear
value proposition for utilizing closed-loop approaches for accelerating
materials discovery.
Comment: added Supplementary Information
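The relationship between a runtime reduction and the corresponding speedup is simple arithmetic, worth making explicit since the two framings appear side by side above:

```python
def speedup_from_reduction(reduction):
    """A task whose runtime shrinks by fraction r runs in (1 - r)
    of the original time, i.e. a speedup of 1 / (1 - r)."""
    return 1.0 / (1.0 - reduction)

# A reduction of "over 90%" implies a speedup above 10x,
# and "over 95%" implies a speedup above 20x.
print(round(speedup_from_reduction(0.90), 3))  # 10.0
print(round(speedup_from_reduction(0.95), 3))  # 20.0
```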
An integrated approach to high integrity software verification.
Computer software is developed through software engineering. At its most precise, software
engineering involves mathematical rigour as formal methods. High integrity software
is associated with safety critical and security critical applications, where failure
would bring significant costs. The development of high integrity software is subject to
stringent standards, prescribing best practices to increase quality. Typically, these standards
will strongly encourage or enforce the application of formal methods.
The application of formal methods can entail a significant amount of mathematical
reasoning. Thus, the development of automated techniques is an active area of research.
The trend is to deliver increased automation through two complementary approaches.
Firstly, lightweight formal methods are adopted, sacrificing expressive power, breadth of
coverage, or both in favour of tractability. Secondly, integrated solutions are sought,
exploiting the strengths of different technologies to increase automation.
The objective of this thesis is to support the production of high integrity software by
automating an aspect of formal methods. To develop tractable techniques we focus on
the niche activity of verifying exception freedom. To increase effectiveness, we integrate
the complementary technologies of proof planning and program analysis. Our approach
is investigated by enhancing the SPARK Approach, as developed by Altran Praxis Limited.
Our approach is implemented and evaluated as the SPADEase system. The key
contributions of the thesis are summarised below:
• Configurable and Sound - Present a configurable and justifiably sound approach
to software verification.
• Cooperative Integration - Demonstrate that more targeted and effective automation
can be achieved through the cooperative integration of distinct technologies.
• Proof Discovery - Present proof plans that support the verification of exception
freedom.
• Invariant Discovery - Present invariant discovery heuristics that support the
verification of exception freedom.
• Implementation as SPADEase - Implement our approach as SPADEase.
• Industrial Evaluation - Evaluate SPADEase against both textbook and industrial
subprograms.
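Verifying exception freedom means showing that run-time checks, such as range and overflow checks, can never fail. A minimal flavour of that reasoning, using interval arithmetic in Python rather than SPARK (the subtype bounds and variable ranges are invented for illustration):

```python
def interval_add(x, y):
    """Range of a + b when a lies in x = (lo, hi) and b lies in y."""
    return (x[0] + y[0], x[1] + y[1])

def within(rng, subtype):
    """True if every value in rng satisfies the subtype's bounds."""
    return subtype[0] <= rng[0] and rng[1] <= subtype[1]

# Hypothetical SPARK-style subtype: Counter is range 0 .. 2**16 - 1.
COUNTER = (0, 2**16 - 1)

# If A in 0 .. 1000 and B in 0 .. 1000, then A + B in 0 .. 2000,
# so the addition is provably free of range-check failures.
print(within(interval_add((0, 1000), (0, 1000)), COUNTER))  # True
```

Real invariant discovery has to infer such ranges automatically (for example, from loop bounds), which is where the proof plans and heuristics listed above come in.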
Regulation of human pancreas hormone secretion by autonomic innervation
Diabetes mellitus is a silent killer, claiming one life every 10 seconds. We speak of
diabetes when the organism cannot maintain the right level of glucose in the blood. The hormones
insulin and glucagon secreted by the islets of Langerhans are the major players maintaining glucose
homeostasis. In the living organism, the function of the islets is orchestrated by their interaction with
other organs through the vasculature and with the nervous system. Most of our current knowledge of
islet biology has been obtained by using mouse models, but caution is needed, as mice are not simply
small humans. Indeed, recent studies have revealed that the cell composition and architecture of the
human islet are different from that of mouse islets. Thus, other important features such as nervous
regulation of islet function may also be different.
The work in this thesis aimed to identify the role of innervation for islet function. Our
hypothesis is that autonomic and paracrine signals are involved in islet function and that the relative
role of these components varies among species. To identify the sympathetic and parasympathetic
components of innervation as well as their cellular targets we used immunohistochemical staining of
human and mouse pancreatic sections. In contrast to mouse, human islets are devoid of
parasympathetic innervation. Instead, human alpha cells possess the machinery for exocytosis of
acetylcholine, the major parasympathetic neurotransmitter. Our findings suggest that human islets
depend less on neural cholinergic input than mouse islets. Alpha cells secrete acetylcholine as a
paracrine signal priming the human beta cell to respond optimally to subsequent increases in glucose
concentration. In addition, noradrenergic fibers contact few endocrine cells in the human islet and
preferentially innervate smooth muscle cells of the islet vasculature. This suggests that sympathetic
innervation regulates hormone secretion by controlling the blood flow rather than modulating
endocrine cell function directly.
By taking advantage of our recently developed noninvasive anterior chamber of the eye
imaging platform we were able to study the role of innervation in the maintenance of glucose
homeostasis in vivo. We studied the process of reinnervation and revascularization of intraocular islet
grafts and showed that islets orchestrate the process of engraftment to restore their original
microenvironment. Islet grafts from two different mouse strains and human xenografts showed
innervation patterns similar to those in pancreatic sections in situ. Islet grafts displayed the
characteristic fenestrae of the pancreatic vascular endothelium independently of the origin of the new
vessels. In addition, the model allowed controlling the fraction of the graft vasculature that is
contributed by the donor islet endothelial cells to the point that the original donor vasculature of the
islet is restored. Recording graft function while manipulating the eye's neural input through the
pupillary light reflex revealed functional differences in parasympathetic innervation between the two
mouse strains. The eye platform also allowed us to follow cell dynamics during immune responses,
which will enable investigations aimed at clarifying the role of innervation in the pathogenesis of
autoimmune diabetes.
To study human islet biology in vivo, we further adapted the eye model by transplanting
human islets into the eye of diabetic immune compromised mice. Human xenografts reversed
diabetes and tightly controlled plasma glucose concentrations. Moreover, our results provided the
first real time monitoring of revascularization and blood flow inside human islets and graft function
could be modulated by local drug administration. Our findings establish a "humanized" mouse model
to investigate human islet biology in vivo that will allow addressing how nervous input affects
endocrine function or blood flow in human islets. The physiological relevance of the anterior
chamber of the eye model is further underscored by the therapeutic potential as a novel
transplantation site to treat type 1 diabetic patients.