MAPPING BPEL PROCESSES TO DIAGNOSTIC MODELS
Web services are loosely coupled, self-contained, and self-describing software modules that perform a predetermined task. These services can be linked together to develop an application that spans multiple organizations. This linking is referred to as a composition of web services. Such compositions can potentially help businesses respond more quickly and more cost-effectively to changing market conditions. Compositions can be specified using a high-level workflow process language.
A fault, or problem, is a defect in software or a software component. A system is said to have a failure if the service it delivers to the user deviates from compliance with the system specification for a specified period of time. A problem causes a failure, and failures are often referred to as symptoms of a problem. A problem can occur on one component while its failure is detected on another, which suggests a need to determine the problem based on observed failures. This is referred to as fault diagnosis.
This thesis focuses on the design, implementation, and evaluation of a diagnostic module that performs automated mapping of a high-level specification of a web services composition to a diagnostic model. A diagnostic model expresses the relationship between problems and potential symptoms. This mapping can be done by a third-party service that is not part of the application resulting from the composition of the web services. Automation will allow a third party to perform diagnosis for a large number of compositions and should be less error-prone.
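A diagnostic model of this kind can be thought of as a dependency matrix relating each problem to the symptoms it may produce, with diagnosis ranking candidate problems against the observed symptoms. A minimal sketch in Python; the service and symptom names here are hypothetical illustrations, not taken from the thesis:

```python
# Hypothetical dependency model: problem -> set of symptoms it can cause.
# In the thesis, such a model would be generated automatically from the
# BPEL specification of the composition; here it is written by hand.
DEPENDENCY = {
    "payment_service_down": {"order_timeout", "checkout_error"},
    "inventory_db_slow":    {"order_timeout", "stock_page_slow"},
    "shipping_api_fault":   {"tracking_error"},
}

def diagnose(observed_symptoms):
    """Rank candidate problems by the fraction of their expected
    symptoms that were actually observed."""
    observed = set(observed_symptoms)
    scores = {}
    for problem, symptoms in DEPENDENCY.items():
        hits = len(symptoms & observed)
        if hits:
            scores[problem] = hits / len(symptoms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With symptoms `{"order_timeout", "checkout_error"}` observed, the ranking places `payment_service_down` first, since all of its expected symptoms match.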
Augmented Reality Simulation Modules for EVD Placement Training and Planning Aids
When a novice neurosurgeon performs a psychomotor surgical task (e.g., tool navigation into brain structures), there is an unavoidable risk of damaging healthy tissues and eloquent brain structures. When novices make multiple attempts, a set of undesirable trajectories is created, with the potential for surgical complications. It is therefore important that novices not only aim for a high level of surgical mastery but also receive deliberate training in common neurosurgical procedures and their underlying tasks. Surgical simulators have emerged as an effective method for teaching novices in safe, error-free training environments. The design of neurosurgical simulators requires a comprehensive approach to development. With that in mind, we present a detailed case study in which two Augmented Reality (AR) training simulation modules were designed and implemented through the adoption of Model-Driven Engineering. User performance evaluation is a key aspect of surgical simulation validity. Many AR surgical simulators become obsolete: either they do not support enough surgical scenarios, or they were validated according to subjective assessments that did not meet every need. Accordingly, we demonstrate the feasibility of the AR simulation modules through two user studies, objectively measuring novices' performance based on quantitative metrics. Neurosurgical simulators are also prone to perceptual distance underestimation, and few investigations have been conducted into improving user depth perception in head-mounted display-based AR systems with perceptual motion cues. Consequently, we report the results of our investigation into whether head motion and perceptual motion cues influence users' performance.
In Need of a Domain-Specific Language Modeling Notation for Smartphone Applications with Portable Capability
The rapid growth of the smartphone market and its increasing revenue have motivated developers to target multiple platforms. Market leaders such as Apple, Google, and Microsoft develop their smartphone applications in compliance with their own platform specifications. Each platform's specification makes a platform-dedicated application incompatible with other platforms due to the diversity of operating systems, programming languages, and design patterns. Conventional development methodologies are applied to smartphone applications, yet they perform less well, because smartphone applications have unique hardware and software requirements. All of these factors push smartphone developers to build less sophisticated, lower-quality products when targeting multiple smartphone platforms. Model-driven development has been considered as a way to generate smartphone applications from abstract models and alleviate smartphone platform fragmentation, but reusing these abstract models for other platforms was not considered because they do not fit new platforms' requirements. Defining smartphone applications using a portability-driven modeling notation could help smartphone developers better understand how their applications can be ported to other platforms. We therefore call for a portability-driven modeling notation to be used within a smartphone development process. Our in-progress research will be manifested through the application of a domain-specific language complying with the three software portability principles and three design factors. This paper aims to highlight our research work, methodology, and current status.
Semantic Segmentation and Edge Detection—Approach to Road Detection in Very High Resolution Satellite Images
Road detection technology plays an essential role in a variety of applications, such as urban planning, map updating, traffic monitoring and automatic vehicle navigation. Recently, there has been much development in detecting roads in high-resolution (HR) satellite images based on semantic segmentation. However, the objects being segmented in such images are of small size, and not all the information in the images is equally important when making a decision. This paper proposes a novel approach to road detection based on semantic segmentation and edge detection. Our approach combines these two techniques to improve road detection: it produces sharp-pixel segmentation maps and uses the segmented masks to generate road edges. In addition, some well-known architectures, such as SegNet, use multi-scale features without refinement; in our approach, attention blocks in the encoder predict fine segmentation masks, which results in finer edges. A combination of weighted cross-entropy loss and focal Tversky loss is used as the loss function to deal with the highly imbalanced dataset. We conducted various experiments on two real-world datasets covering the three largest regions in Saudi Arabia and Massachusetts. The results demonstrated that the proposed method of encoding HR feature maps effectively predicts sharp segmentation masks to facilitate accurate edge detection, even against a harsh and complicated background.
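The combined loss for imbalanced segmentation can be sketched as follows. This is an illustrative NumPy version for the binary (road vs. background) case; the weighting coefficients and the Tversky parameters alpha, beta, gamma are assumed defaults, not values taken from the paper:

```python
import numpy as np

def weighted_cross_entropy(y_true, y_prob, pos_weight=5.0, eps=1e-7):
    """Binary cross-entropy with a higher weight on the rare positive
    (road) class, to counter class imbalance."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(pos_weight * y_true * np.log(y_prob)
                    + (1 - y_true) * np.log(1 - y_prob))

def focal_tversky_loss(y_true, y_prob, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: alpha > beta penalizes false negatives more,
    and gamma < 1 focuses training on hard examples."""
    tp = np.sum(y_true * y_prob)
    fn = np.sum(y_true * (1 - y_prob))
    fp = np.sum((1 - y_true) * y_prob)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

def combined_loss(y_true, y_prob, wce_weight=0.5):
    """Equal-weight combination of the two losses (the mixing weight
    is an assumption for this sketch)."""
    return (wce_weight * weighted_cross_entropy(y_true, y_prob)
            + (1 - wce_weight) * focal_tversky_loss(y_true, y_prob))
```

A near-perfect prediction drives the combined loss toward zero, while a confidently wrong one is penalized heavily by both terms.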
A transformer-based approach empowered by a self-attention technique for semantic segmentation in remote sensing
Semantic segmentation of Remote Sensing (RS) images involves the classification of each pixel in a satellite image into distinct and non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, there is limited research that specifically addresses the challenge of processing fine details in RS images while also considering the high computational demands. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a low receptive field to generate fine-grained feature maps for small objects in very high-resolution images. Transformer blocks, in turn, are utilized to capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the requirement for vast datasets, which is often necessary for purely transformer-based networks. Our experimental results demonstrate the effectiveness of our method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean Dice score of 80.41%, outperforming other well-known techniques such as UNet, the Fully Convolutional Network (FCN), the Pyramid Scene Parsing Network (PSPNet), and the recent Convolutional vision Transformer (CvT) model, which achieved mean Dice scores of 78.57%, 74.57%, 73.45%, and 62.97%, respectively, under the same training conditions and using the same training dataset.
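The division of labor described above (convolutions for local detail, self-attention for global context) can be illustrated with a toy NumPy sketch. The dimensions, the single-head attention, and the random projections are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv3x3_same(img, kernel):
    """Naive 3x3 'same' convolution: a small receptive field that
    preserves fine-grained, full-resolution detail."""
    H, W = img.shape
    pad = np.pad(img, 1)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    return out

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention: every token attends to all others,
    providing global context that a 3x3 convolution cannot."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return scores @ V

rng = np.random.default_rng(0)
img = rng.random((8, 8))                       # toy full-resolution feature map
feat = conv3x3_same(img, rng.random((3, 3)))   # local, fine-grained features
tokens = feat.reshape(-1, 1) @ rng.random((1, 16))  # one token per pixel, embed to dim 16
Wq, Wk, Wv = (rng.random((16, 16)) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)       # contextual features, shape (64, 16)
```

Because the convolution never downsamples, the attention operates over one token per original pixel, which is the property the approach relies on for small objects.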
Noninvasive Detection of Respiratory Disorder Due to COVID-19 at the Early Stages in Saudi Arabia
The Kingdom of Saudi Arabia has suffered from COVID-19 disease as part of the global pandemic due to severe acute respiratory syndrome coronavirus 2. The economy of Saudi Arabia also suffered a heavy impact, and several measures were taken to help mitigate it and stimulate the economy. In this context, we present a safe and secure WiFi-sensing-based COVID-19 monitoring system exploiting commercially available low-cost wireless devices that can be deployed in different indoor settings within Saudi Arabia. We extracted different activities of daily living and respiratory rates from ubiquitous WiFi signals in terms of channel state information (CSI) and secured them from unauthorized access through permutation and diffusion with multiple substitution boxes using chaos theory. The experiments were performed on healthy participants. We used the variances of the amplitude information of the CSI data and evaluated their security using several security parameters, such as the correlation coefficient, mean-squared error (MSE), peak signal-to-noise ratio (PSNR), entropy, number of pixels change rate (NPCR), and unified average change intensity (UACI). These security metrics, for example lower correlation and higher entropy, indicate stronger security of the proposed encryption method. Moreover, the NPCR and UACI values were higher than 99% and 30, respectively, which also confirmed the security strength of the encrypted information.
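NPCR and UACI have standard definitions for 8-bit data: NPCR is the percentage of positions whose values differ between two ciphertexts, and UACI is the average absolute intensity difference normalized by 255. A minimal sketch of these two metrics:

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate between two ciphertext arrays,
    as a percentage of positions that differ."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Average Change Intensity for 8-bit data, as a percentage
    of the maximum possible intensity difference (255)."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)
```

For a strong cipher, flipping one bit of the plaintext should yield NPCR near 99.6% and UACI near 33.5%, which is why values above 99% and around 30% are read as evidence of security.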
An Investigation of Head Motion and Perceptual Motion Cues' Influence on User Depth Perception of Augmented Reality Neurosurgical Simulators
Training and planning for neurosurgeries place many demands on junior neurosurgeons, including perceptual capacities. An effective method of deliberate training is to replicate the required procedures using neurosurgical simulation tools and visualize a three-dimensional (3D) workspace. However, Augmented Reality (AR) neurosurgical simulators become obsolete for a variety of reasons, including users' distance underestimation. Few investigations have been conducted into improving users' depth perception in AR systems with perceptual motion cues through neurosurgical simulation tools for planning-aid purposes. In this poster, we report a user study on whether head motion and perceptual motion cues have an influence on users' depth perception.
Development of augmented reality training simulator systems for neurosurgery using model-driven software engineering
Neurosurgical procedures are complicated processes, posing challenges and demands ranging from medical knowledge and judgment to the neurosurgeon's dexterity and perceptual capacities. Deliberate training of common neurosurgical procedures and their underlying tasks is extremely important, and one effective method is to practice the required surgical training tasks using neurosurgical simulators. Development of neurosurgical simulators is challenging for many reasons. In this work, we propose to facilitate the development of new augmented reality neurosurgical simulator systems through the adoption of model-driven engineering (MDE). Our developed systems involve the interactive visualization of three-dimensional brain meshes in order to train users and simulate a targeting task towards a variety of predetermined virtual targets. We present our results in a way which highlights two new design artifacts produced through our MDE approach.
Information Fusion in Autonomous Vehicle Using Artificial Neural Group Key Synchronization
Information fusion in autonomous vehicles, covering various data types emanating from many sources, is the foundation for decision making in intelligent transportation and autonomous cars. To facilitate data sharing, a variety of communication methods have been integrated to build a diverse V2X infrastructure. However, current information fusion security frameworks are intended for specific application instances and are insufficient to fulfill the overall requirements of Mutual Intelligent Transportation Systems (MITS). In this work, a data fusion security infrastructure with varying degrees of trust has been developed. Furthermore, for V2X heterogeneous networks, this paper offers an efficient and effective information fusion security mechanism for multi-source, multi-type data sharing. An area-based PKI architecture, accelerated by a Graphics Processing Unit (GPU), is given specifically for artificial-neural-synchronization-based fast group key exchange. A parametric test is performed to ensure that the proposed data fusion trust solution meets the stringent delay requirements of V2X systems. The efficiency of the suggested method is tested, and the results show that it surpasses similar strategies already in use.
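Artificial neural synchronization for key exchange is commonly realized with tree parity machines: two parties train mirrored networks on shared random inputs until their weights converge, and the converged weights serve as the shared key. A minimal sketch under that assumption (the parameters K, N, L are illustrative, and this omits the paper's GPU acceleration and area-based PKI):

```python
import numpy as np

rng = np.random.default_rng(0)

class TreeParityMachine:
    """Minimal tree parity machine: K hidden units, N inputs each,
    integer weights bounded in [-L, L]."""
    def __init__(self, K=3, N=4, L=3):
        self.K, self.N, self.L = K, N, L
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # Sign of each hidden unit's local field (0 mapped to -1),
        # combined by multiplication into a single +/-1 output.
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def hebbian_update(self, x, tau):
        # Only hidden units that agree with the common output learn.
        for k in range(self.K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + self.sigma[k] * x[k],
                                    -self.L, self.L)

def synchronize(a, b, max_steps=20000):
    """Both parties see the same public random inputs and update only
    when their outputs agree; identical weights become the shared key."""
    for step in range(max_steps):
        x = rng.choice([-1, 1], size=(a.K, a.N))
        ta, tb = a.output(x), b.output(x)
        if ta == tb:
            a.hebbian_update(x, ta)
            b.hebbian_update(x, tb)
        if np.array_equal(a.w, b.w):
            return step
    return -1
```

Only the inputs and the single-bit outputs are exchanged in the clear; a passive attacker who cannot force agreement synchronizes much more slowly than the two legitimate parties.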
A novel CNN-LSTM-based approach to predict urban expansion
Time-series remote sensing data offer a rich source of information that can be used in a wide range of applications, from monitoring changes in land cover to surveillance of crops, coastal changes, flood risk assessment, and urban sprawl. In this paper, time-series satellite images are used to predict urban expansion. As ground truth is not available for time-series satellite images, an unsupervised image segmentation method based on deep learning is used to generate it for training and validation. The automatically annotated images are then manually validated using Google Maps, and the remaining data were manually annotated. Prediction of urban expansion is achieved using a ConvLSTM network, which can learn global spatio-temporal information without shrinking the size of the spatial feature maps. The ConvLSTM-based model is applied to the time-series satellite images, and the prediction results are compared with the Pix2pix and Dual GAN networks. Experiments are conducted using several multi-date satellite images representing the three largest cities in Saudi Arabia, namely Riyadh, Jeddah, and Dammam. The evaluation results show that the proposed ConvLSTM-based model produced better predictions in terms of Mean Square Error, Root Mean Square Error, Peak Signal-to-Noise Ratio, Structural Similarity Index, and overall classification accuracy compared to Pix2pix and Dual GAN. Moreover, the training time of the proposed architecture is less than that of the Dual GAN architecture.
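A ConvLSTM cell replaces the dense matrix products of a standard LSTM with convolutions, so the hidden state keeps its spatial layout and the feature maps are never flattened or shrunk. A single-channel toy sketch of one time step; the gating follows the standard ConvLSTM formulation, and the kernel sizes are assumptions rather than the paper's exact architecture:

```python
import numpy as np

def conv_same(x, k):
    """Naive single-channel 2-D 'same' convolution with an odd kernel."""
    H, W = x.shape
    kh, kw = k.shape
    pad = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, K):
    """One ConvLSTM time step. Every gate (input i, forget f, output o,
    candidate g) is computed with convolutions over the current frame x
    and the previous hidden state h, so spatial structure is preserved."""
    i = sigmoid(conv_same(x, K['xi']) + conv_same(h, K['hi']))
    f = sigmoid(conv_same(x, K['xf']) + conv_same(h, K['hf']))
    o = sigmoid(conv_same(x, K['xo']) + conv_same(h, K['ho']))
    g = np.tanh(conv_same(x, K['xg']) + conv_same(h, K['hg']))
    c_next = f * c + i * g        # cell state: same H x W as the input frame
    h_next = o * np.tanh(c_next)  # hidden state: also full resolution
    return h_next, c_next
```

Feeding a sequence of satellite frames through this step one at a time accumulates spatio-temporal state while the feature maps stay at full resolution, which is the property the paper's prediction model relies on.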