Sensitivity in Service Design for the Development of SOA Based Systems
Service Oriented Architecture (SOA) has proven itself to be a beneficial approach to software development. One of the key challenges of the SOA model is performance evaluation and service selection. To ensure the success of SOA, service requestors require a technique to evaluate existing services and to identify and select the best service available for their needs. Furthermore, service providers require a method to evaluate their services to ensure their consistency and performance. The technique of sensitivity analysis addresses these concerns by quantitatively evaluating the effects of factor variation on system performance. An algorithm is produced to identify the factors of a software service to which its performance is sensitive. An experiment is performed to demonstrate the effects of sensitivity analysis as it applies to SOA systems. The experiment shows that sensitivity analysis is a successful approach to evaluating a service's performance and resolving issues surrounding service selection.
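The abstract does not reproduce the algorithm itself; as a rough illustration of the idea, the sketch below perturbs one factor of a hypothetical service performance model at a time and reports a normalized sensitivity score per factor. All names and the toy model are assumptions, not taken from the paper.

```python
# Illustrative one-at-a-time sensitivity analysis for a software service.
# `perf_model` and the factor names are hypothetical stand-ins.

def sensitivity(perf_model, baseline, delta=0.1):
    """Estimate how strongly each input factor affects the performance metric.

    perf_model: callable mapping a dict of factor values to a scalar
                performance score (e.g., response time).
    baseline:   dict of nominal factor values.
    delta:      relative perturbation applied to one factor at a time.
    """
    base_score = perf_model(baseline)
    scores = {}
    for factor, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[factor] = value * (1 + delta)
        # Normalized sensitivity: relative output change / relative input change.
        scores[factor] = ((perf_model(perturbed) - base_score) / base_score) / delta
    return scores

# Toy performance model in which concurrency dominates response time.
model = lambda f: 10 + 0.5 * f["payload_kb"] + 3.0 * f["concurrent_users"]
print(sensitivity(model, {"payload_kb": 100.0, "concurrent_users": 20.0}))
```

Factors with the largest scores are the ones a provider would monitor most closely and a requestor would weigh most heavily during service selection.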
Reliability prediction in model driven development
Evaluating the implications of an architecture design early in the software development lifecycle is important in order to reduce development costs. Reliability is an important concern with regard to the correct delivery of software system services. Recently, the UML Profile for Modeling Quality of Service has defined a set of UML extensions to represent dependability concerns (including reliability) and other non-functional requirements in early stages of the software development lifecycle. Our research has shown that these extensions are not comprehensive enough to support reliability analysis for model-driven software engineering, because the description of reliability characteristics in this profile lacks support for certain dynamic aspects that are essential in modeling reliability. In this work, we define a profile for reliability analysis by extending the UML 2.0 specification to support reliability prediction based on scenario specifications. A UML model specified using the profile is translated to a labelled transition system (LTS), which is used for automated reliability prediction and identification of implied scenarios; the results of this analysis are then fed back to the UML model. The result is a comprehensive framework for addressing software reliability modeling, including analysis and evolution of reliability predictions. We exemplify our approach using the Boiler System from previous work and demonstrate how reliability analysis results can be integrated into UML models.
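As a hedged illustration of the analysis step only, the sketch below treats an LTS with probabilistic transitions as an absorbing chain and computes the probability of eventually reaching a success state. The states loosely echo a boiler scenario, but the structure and numbers are invented, and the UML-to-LTS translation is not shown.

```python
# Reliability over a small probabilistic LTS: probability of reaching the
# absorbing "ok" state before the absorbing "fail" state. All states and
# transition probabilities are illustrative.

# transitions[s] = list of (probability, next_state); "ok"/"fail" are absorbing.
transitions = {
    "init":    [(1.0, "heating")],
    "heating": [(0.97, "steady"), (0.03, "fail")],
    "steady":  [(0.99, "ok"), (0.01, "fail")],
}

def reliability(trans, start="init", success="ok", failure="fail", iters=100):
    """Probability of eventually reaching `success` from `start`."""
    p = {s: 0.0 for s in trans}
    p[success], p[failure] = 1.0, 0.0
    for _ in range(iters):  # fixed-point iteration over reachability probabilities
        for s, outs in trans.items():
            p[s] = sum(prob * p[t] for prob, t in outs)
    return p[start]

print(reliability(transitions))  # 0.97 * 0.99 = 0.9603
```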
Highly accurate model for prediction of lung nodule malignancy with CT scans
Computed tomography (CT) examinations are commonly used to predict lung nodule malignancy in patients and have been shown to improve noninvasive early diagnosis of lung cancer. It remains challenging for computational approaches to achieve performance comparable to that of experienced radiologists. Here we present NoduleX, a systematic approach to predicting lung nodule malignancy from CT data, based on deep convolutional neural networks (CNNs). For training and validation, we analyze more than 1,000 lung nodules in images from the LIDC/IDRI cohort. All nodules were identified and classified by four experienced thoracic radiologists who participated in the LIDC project. NoduleX achieves high accuracy for nodule malignancy classification, with an AUC of ~0.99, commensurate with the analysis of the dataset by experienced radiologists. Our approach provides an effective framework for highly accurate nodule malignancy prediction with a model trained on a large patient population. Our results are replicable with software available at http://bioinformatics.astate.edu/NoduleX.
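For readers unfamiliar with the approach, here is a minimal PyTorch sketch of a 3D CNN classifier over CT nodule patches. The patch size, layer counts, and widths are illustrative assumptions, not the published NoduleX architecture.

```python
# Minimal 3D CNN for binary nodule-malignancy classification (illustrative).
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 16^3 -> 8^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                     # benign vs. malignant logits
        )

    def forward(self, x):                         # x: (batch, 1, 32, 32, 32)
        return self.classifier(self.features(x))

logits = NoduleCNN()(torch.randn(4, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```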
AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks
Segmentation of axon and myelin from microscopy images of the nervous system provides useful quantitative information about the tissue microstructure, such as axon density and myelin thickness. This can be used, for instance, to document cell morphometry across species or to validate novel non-invasive quantitative magnetic resonance imaging techniques. Most currently available segmentation algorithms are based on standard image processing and usually require multiple processing steps and/or parameter tuning by the user to adapt to different modalities. Moreover, only a few methods are publicly available. We introduce AxonDeepSeg, an open-source software package that performs axon and myelin segmentation of microscopy images using deep learning. AxonDeepSeg features: (i) a convolutional neural network architecture; (ii) an easy training procedure to generate new models based on manually labelled data; and (iii) two ready-to-use models trained on scanning electron microscopy (SEM) and transmission electron microscopy (TEM) data. Results show high pixel-wise accuracy across various species: 85% on rat SEM, 81% on human SEM, 95% on mice TEM and 84% on macaque TEM. Segmentation of a full rat spinal cord slice is computed, and morphological metrics are extracted and compared against the literature. AxonDeepSeg is freely available at https://github.com/neuropoly/axondeepseg.
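The following sketch shows the shape of such a pixel-wise segmentation setup in PyTorch, with three classes (background, myelin, axon) and a pixel-wise accuracy check like the one reported above. The tiny network is a stand-in for illustration, not the actual AxonDeepSeg model.

```python
# Toy fully convolutional network for 3-class pixel-wise segmentation.
import torch
import torch.nn as nn

seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),                       # per-pixel class logits
)

image = torch.randn(1, 1, 256, 256)            # grayscale micrograph patch
pred = seg_net(image).argmax(dim=1)            # (1, 256, 256) label map

# Pixel-wise accuracy against a ground-truth mask, the metric the abstract reports.
truth = torch.randint(0, 3, (1, 256, 256))
accuracy = (pred == truth).float().mean().item()
print(f"pixel-wise accuracy: {accuracy:.2%}")
```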
A framework for the definition of metrics for actor-dependency models
Actor-dependency models are a formalism aimed at providing intentional descriptions of processes as a network of dependency relationships among actors. This kind of model is currently widely used in the early phase of requirements engineering, as well as in other contexts such as organizational analysis and business process reengineering. In this paper, we are interested in the definition of a framework for the formulation of metrics over these models. These metrics are used to analyse the models with respect to properties that are of interest for the system being modelled, such as security, efficiency or accuracy. The metrics are defined in terms of the actors and dependencies of the model. We distinguish three different kinds of metrics, which are formally defined, and then apply the framework at two different layers of a meeting scheduler system.
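As an illustration of a metric "defined in terms of the actors and dependencies of the model", the sketch below computes a hypothetical dependee-load metric over a toy meeting-scheduler dependency model. Both the triples and the metric itself are invented examples of the kind of metric the framework covers.

```python
# Actor-dependency model as (depender, dependum, dependee) triples; all
# names and the metric are hypothetical.
from collections import Counter

dependencies = [
    ("Initiator", "MeetingScheduled", "Scheduler"),
    ("Scheduler", "Availability",     "Participant"),
    ("Scheduler", "RoomBooked",       "RoomService"),
    ("Initiator", "Attendance",       "Participant"),
]

def dependee_load(deps):
    """Fraction of all dependencies each actor is depended upon for.

    A heavily loaded dependee is a candidate efficiency/vulnerability concern.
    """
    counts = Counter(dependee for _, _, dependee in deps)
    total = len(deps)
    return {actor: n / total for actor, n in counts.items()}

print(dependee_load(dependencies))
# {'Scheduler': 0.25, 'Participant': 0.5, 'RoomService': 0.25}
```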
Improving the Performance and Energy Efficiency of GPGPU Computing through Adaptive Cache and Memory Management Techniques
As the performance and energy-efficiency requirements of GPGPUs have risen, GPGPU memory management techniques have evolved to meet them by employing hardware caches and utilizing heterogeneous memory. These techniques can improve GPGPUs by providing lower latency and higher bandwidth to memory. However, these methods do not always guarantee improved performance and energy efficiency, due to the small cache size and the heterogeneity of the memory nodes. While prior works have proposed various techniques to address this issue, relatively little work has been done to investigate holistic support for memory management techniques.
In this dissertation, we analyze performance pathologies and propose various techniques to improve memory management. First, we investigate the effectiveness of advanced cache indexing (ACI) for high-performance and energy-efficient GPGPU computing. Specifically, we discuss the designs of various static and adaptive cache indexing schemes and present an implementation for GPGPUs. We then quantify and analyze the effectiveness of the ACI schemes based on a cycle-accurate GPGPU simulator. Our quantitative evaluation shows that ACI schemes achieve significant performance and energy-efficiency gains over the baseline conventional indexing scheme. We also analyze the performance sensitivity of ACI to key architectural parameters (i.e., capacity, associativity, and ICN bandwidth) and to the cache indexing latency, and demonstrate that ACI continues to achieve high performance in various settings.
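To make the indexing idea concrete, the sketch below contrasts conventional modulo set indexing with a simple XOR-based scheme of the kind studied as ACI. The field widths and cache geometry are illustrative assumptions, not the simulator's configuration or the exact schemes evaluated.

```python
# Conventional vs. XOR-based cache set indexing (illustrative geometry).
NUM_SETS    = 64               # sets in the cache (power of two)
SET_BITS    = 6
OFFSET_BITS = 7                # 128-byte lines

def conventional_index(addr):
    # Low-order bits above the line offset select the set.
    return (addr >> OFFSET_BITS) & (NUM_SETS - 1)

def xor_index(addr):
    # XOR two disjoint bit fields of the block address to disperse
    # power-of-two strides that would otherwise map to a single set.
    block = addr >> OFFSET_BITS
    return (block ^ (block >> SET_BITS)) & (NUM_SETS - 1)

# A power-of-two stride pathologically hits one set under conventional
# indexing but is spread across sets by the XOR scheme.
addrs = [i * NUM_SETS * (1 << OFFSET_BITS) for i in range(8)]
print({conventional_index(a) for a in addrs})  # {0}
print({xor_index(a) for a in addrs})           # 8 distinct sets
```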
Second, we propose IACM, integrated adaptive cache management for high-performance and energy-efficient GPGPU computing. Based on the performance pathology analysis of GPGPUs, we integrate state-of-the-art adaptive cache management techniques (i.e., cache indexing, bypassing, and warp limiting) into a unified architectural framework to eliminate performance pathologies. Our quantitative evaluation demonstrates that IACM significantly improves the performance and energy efficiency of various GPGPU workloads over the baseline architecture (by 98.1% and 61.9% on average, respectively) and achieves considerably higher performance than the state-of-the-art technique (361.4% at maximum and 7.7% on average). Furthermore, IACM delivers significant performance and energy-efficiency gains over the baseline GPGPU architecture even when it is enhanced with advanced architectural technologies (e.g., higher capacity and associativity).
Third, we propose bandwidth- and latency-aware page placement (BLPP) for GPGPUs with heterogeneous memory. BLPP analyzes the characteristics of an application and determines the optimal page allocation ratio between the GPU and CPU memory. Based on this ratio, BLPP dynamically allocates pages across the heterogeneous memory nodes. Our experimental results show that BLPP considerably outperforms the baseline and the state-of-the-art technique (by 13.4% and 16.7%, respectively) and performs similarly to the static-best version (1.2% difference), which requires extensive offline profiling.
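A minimal sketch of the page-placement idea follows, under the simplifying assumption that the allocation ratio is driven by bandwidth alone; the actual BLPP policy also accounts for latency and per-application characteristics, and all numbers here are illustrative.

```python
# Bandwidth-proportional page placement across heterogeneous memory:
# split pages between fast (GPU) and slow (CPU) memory in proportion to
# bandwidth so both nodes can serve requests concurrently.

def placement_ratio(gpu_bw_gbps, cpu_bw_gbps):
    """Fraction of pages to place in GPU memory under a bandwidth-only model."""
    return gpu_bw_gbps / (gpu_bw_gbps + cpu_bw_gbps)

def place_pages(num_pages, ratio):
    n_gpu = round(num_pages * ratio)
    return {"gpu": n_gpu, "cpu": num_pages - n_gpu}

ratio = placement_ratio(gpu_bw_gbps=320.0, cpu_bw_gbps=80.0)  # -> 0.8
print(place_pages(10000, ratio))  # {'gpu': 8000, 'cpu': 2000}
```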