
    Quality Assessment and Prediction in Software Product Lines

    At the heart of product line development is the assumption that, through structured reuse, later products will be of higher quality and require less time and effort to develop and test. This thesis presents empirical results from two case studies aimed at assessing the quality aspect of this claim and exploring fault prediction in the context of software product lines. The first case study examines pre-release faults and change proneness of four products in PolyFlow, a medium-sized, industrial software product line; the second case study analyzes post-release faults using pre-release data over seven releases of four products in Eclipse, a very large, open-source software product line. The goals of our research are (1) to determine the association between various software metrics, as well as their correlation with the number of faults at the component/package level; (2) to characterize the fault and change proneness of components/packages at various levels of reuse; (3) to explore the benefits of the structured reuse found in software product lines; and (4) to evaluate the effectiveness of predictive models, built on a variety of products in a software product line, at making accurate predictions of pre-release software faults (in the case of PolyFlow) and post-release software faults (in the case of Eclipse). The results of both studies confirm, in a software product line setting, the findings of others that faults (both pre- and post-release) are more highly correlated with change metrics than with static code metrics, and are mostly contained in a small set of components/packages. The longitudinal aspect of our research indicates that new products do benefit from the development and testing of previous products. The results also indicate that pre-existing components/packages, including the common components/packages, undergo continuous change but tend to sustain low fault densities. However, this is not always true for newly developed components/packages. Finally, the results also show that predictions of pre-release faults in the case of PolyFlow and post-release faults in the case of Eclipse can be made accurately from pre-release data, and furthermore, that these predictions benefit from information about additional products in the software product lines.
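
    As a hedged illustration of the kind of component-level analysis described above, the sketch below correlates change and static code metrics with fault counts and sketches a cross-product fault predictor. The column names, the synthetic demo records, and the use of pandas/scipy/scikit-learn are assumptions for illustration, not the thesis's actual data or models.

        # Minimal sketch, assuming per-component records with metric columns and a
        # fault count; the specific metrics and the RandomForest model are illustrative.
        import pandas as pd
        from scipy.stats import spearmanr
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error

        STATIC_METRICS = ["loc", "cyclomatic_complexity"]               # static code metrics
        CHANGE_METRICS = ["revisions", "lines_added", "lines_deleted"]  # change metrics

        def metric_fault_correlations(df):
            """Spearman correlation of each metric with the component fault count."""
            return {m: spearmanr(df[m], df["faults"]).correlation
                    for m in STATIC_METRICS + CHANGE_METRICS}

        def cross_product_fault_prediction(train_df, test_df, features):
            """Train on one product's components, predict faults for another product's."""
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            model.fit(train_df[features], train_df["faults"])
            return mean_absolute_error(test_df["faults"], model.predict(test_df[features]))

        # Hypothetical demo records for a handful of components in one product.
        demo = pd.DataFrame({
            "loc": [1200, 400, 3100, 800], "cyclomatic_complexity": [45, 12, 160, 30],
            "revisions": [30, 4, 75, 11], "lines_added": [900, 120, 2500, 300],
            "lines_deleted": [400, 60, 1800, 90], "faults": [7, 0, 21, 2],
        })
        print(metric_fault_correlations(demo))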

    Searching for Needles in the Cosmic Haystack

    Searching for pulsar signals in radio astronomy data sets is a difficult task. The data sets are extremely large, approaching the petabyte scale, and are growing larger as instruments become more advanced. Big Data brings with it big challenges. Processing the data to identify candidate pulsar signals is computationally expensive and must utilize parallelism to be scalable. Labeling benchmarks for supervised classification is costly. To compound the problem, pulsar signals are very rare; for example, only 0.05% of the instances in one data set represent pulsars. Furthermore, there are many different approaches to candidate classification with no consensus on a best practice. This dissertation is focused on identifying and classifying radio pulsar candidates from single pulse searches. First, to identify and classify Dispersed Pulse Groups (DPGs), we developed a supervised machine learning approach that consists of RAPID (a novel peak identification algorithm), feature extraction, and supervised machine learning classification. We tested six algorithms for classification with four imbalance treatments. Results showed that classifiers with imbalance treatments had higher recall values. Overall, classifiers using multiclass RandomForests combined with the Synthetic Minority Oversampling Technique (SMOTE) were the most efficient; they identified additional known pulsars not in the benchmark, with fewer false positives than other classifiers. Second, we developed a parallel single pulse identification method, D-RAPID, and introduced a novel automated multiclass labeling (ALM) technique that we combined with feature selection to improve execution performance. D-RAPID improved execution performance over RAPID by a factor of 5. We also showed that the combination of ALM and feature selection sped up the execution of RandomForest by 54% on average, with less than a 2% average reduction in classification performance. Finally, we proposed CoDRIFt, a novel classification algorithm that is distributed for scalability and employs semi-supervised learning to leverage unlabeled data to inform classification. We evaluated and compared CoDRIFt to eleven other classifiers. The results showed that CoDRIFt excelled at classifying candidates in imbalanced benchmarks with a majority of non-pulsar signals (>95%). Furthermore, CoDRIFt models created with very limited sets of labeled data (as few as 22 labeled minority class instances) were able to achieve high recall (mean = 0.98). In comparison to the other algorithms trained on similar sets, CoDRIFt outperformed them all, with recall 2.9% higher than the next best classifier and a 35% average improvement over all eleven classifiers. CoDRIFt is customizable for other problem domains with very large, imbalanced data sets, such as fraud detection and cyber attack detection.
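
    The sketch below illustrates an imbalance-aware pipeline in the spirit of the RandomForest-plus-SMOTE combination described above. The synthetic data, the class ratio, and the use of scikit-learn and imbalanced-learn are assumptions; the dissertation's actual features, benchmarks, and implementations may differ.

        # Minimal sketch: oversample the rare (pulsar) class with SMOTE inside a
        # pipeline so oversampling happens only on training folds, then score a
        # RandomForest by recall, the key metric when positives are scarce.
        import numpy as np
        from imblearn.over_sampling import SMOTE
        from imblearn.pipeline import Pipeline
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, n_pos = 20_000, 60            # real single-pulse data is far more imbalanced (~0.05%)
        X = rng.normal(size=(n, 8))      # stand-in for extracted DPG features
        y = np.zeros(n, dtype=int)
        y[:n_pos] = 1
        X[y == 1] += 2.0                 # give the minority class a separable signal

        pipe = Pipeline([
            ("smote", SMOTE(k_neighbors=5, random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ])
        print("mean recall:", cross_val_score(pipe, X, y, scoring="recall", cv=3).mean())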

    Sustainable Safari Practices: Proximity to Wildlife, Educational Intervention and the Quality of Experience

    This research examines the perceived quality of experience for safari tourists in relation to wildlife viewing proximities and the potential of educational interventions as a management strategy to mitigate the adverse impacts of safari participant crowding. Crowding emanates from safari tourists' preference for close proximity to animals, particularly large mammals. Recognizing these preferences and the associated impacts on animal behavior documented in previous research, we develop and deliver a survey instrument designed to measure the perceived quality of experience of the safari tourist while controlling for the viewing proximity variable. The survey instrument involves responding to stock photos selected to represent the safari-tour experience, using a Likert-type rating scale. Using a pre-treatment and post-treatment protocol, we deliver an educational management intervention and assess its impact on safari participants' perceptions of the quality of the safari experience in relation to proximity to animals.

    Fire department fitness and wellness program implementation

    Health promotion programs are becoming increasingly popular throughout the private sector. The return on investment averages between $3 and $6 for every dollar spent on program costs. Despite this, implementation has lagged among career fire departments. This study examines the barriers that exist in implementing fire department fitness and wellness programs. This quantitative research project utilizes a survey conducted in a major career department located in the Northeastern US. The goal of the survey is to better understand and predict firefighter motivations and willingness in order to promote fitness and wellness programs. The survey focuses on barriers to cooperation and on how demographics (age group, time on the job, and current health perception) affect responses. Finally, the survey concludes with a list of incentives that increase in value/cost in order to determine willingness to accept a program if it were voluntary and non-punitive, with age-based fitness goals. Variables include three main groups: older and younger firefighters, those with more or fewer years on the job, and those with higher and lower fitness levels. The survey examines fitness motivation in fourteen distinct categories and an exercise causality index (determining how individuals are oriented toward exercise). The survey concludes with incentives and program offerings to determine preferences by department, by group, or by individual. This research determined that younger firefighters and those with fewer years on the job are more willing to accept health promotion programs than older members. Motivation levels decrease in nearly every category for older members. Those with lower fitness scores are also less willing to accept a comprehensive program. Despite this, nearly all ages and demographics understand the importance of firefighters maintaining a high level of fitness and wellness. This survey has proved helpful in understanding the demographics of a particular department and its likes/dislikes, strengths, and weaknesses in order to design and implement a program that has the best chance of being accepted by the majority and of lasting long term.

    A comprehensive examination of the evidence for whole of diet patterns in Parkinson's disease: A scoping review

    Both motor and non-motor symptoms of Parkinson's disease (PD), a progressive neurological condition, have broad-ranging impacts on nutritional intake and dietary behaviour. Historically, studies focused on individual dietary components, but evidence demonstrating ameliorative outcomes with whole-of-diet patterns such as the Mediterranean diet and the Mediterranean-DASH Intervention for Neurodegenerative Delay (MIND) is emerging. These diets provide plenty of antioxidant-rich fruits, vegetables, nuts, wholegrains and healthy fats. Paradoxically, the ketogenic diet (KD), which is high in fat and very low in carbohydrate, is also proving to be beneficial. Within the PD community, it is well advertised that nutritional intake is associated with disease progression and symptom severity, but understandably the messaging is inconsistent. With projected prevalence estimated to rise to 1.6 million by 2037, more data regarding the impact of whole-of-diet patterns is needed to develop diet-behaviour change programmes and provide clear advice for PD management. Objectives and Methods: The objectives of this scoping review of both peer-reviewed academic and grey literature are to determine the current evidence-based consensus for best dietary practice in PD and to ascertain whether the grey literature aligns. Results and Discussion: The consensus from the academic literature was that a MeDi/MIND whole-of-diet pattern (fresh fruit, vegetables, wholegrains, omega-3 fish and olive oil) is best practice for improving PD outcomes. Support for the KD is emerging, but further research is needed to determine its long-term effects. Encouragingly, the grey literature mostly aligned, but nutrition advice was rarely at the forefront. The importance of nutrition needs greater emphasis in the grey literature, with positive messaging on dietary approaches for the management of day-to-day symptoms.

    Detection of dispersed radio pulses: a machine learning approach to candidate identification and classification

    Searching for extraterrestrial, transient signals in astronomical data sets is an active area of current research. However, machine learning techniques for single-pulse detection are lacking in the literature. This paper presents a new, two-stage approach for identifying and classifying dispersed pulse groups (DPGs) in single-pulse search output. The first stage identified DPGs and extracted features to characterize them using a new peak identification algorithm that tracks sloping tendencies around local maxima in plots of signal-to-noise ratio versus dispersion measure. The second stage used supervised machine learning to classify DPGs. We created four benchmark data sets: one unbalanced and three balanced versions using three different imbalance treatments. We empirically evaluated 48 classifiers by training and testing binary and multiclass versions of six machine learning algorithms on each of the four benchmark versions. While each classifier had advantages and disadvantages, all classifiers with imbalance treatments had higher recall values than those with unbalanced data, regardless of the machine learning algorithm used. Based on the benchmarking results, we selected a subset of classifiers to classify the full, unlabelled data set of over 1.5 million DPGs identified in 42,405 observations made by the Green Bank Telescope. Overall, the classifiers using a multiclass ensemble tree learner in combination with two oversampling imbalance treatments were the most efficient; they identified additional known pulsars not in the benchmark data set and provided six potential discoveries, with significantly fewer false positives than the other classifiers.
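
    The following is a generic, hedged sketch of the first stage's idea of tracking sloping tendencies around local maxima in a signal-to-noise versus dispersion measure curve; it is not the paper's actual peak identification algorithm, and the thresholds, window sizes, and toy data are assumptions.

        # Minimal slope-based peak finder over an S/N vs DM curve: a point is kept
        # if the slope rises into it, falls away from it, and its S/N clears a cut.
        import numpy as np

        def find_snr_peaks(dm, snr, min_snr=6.0, run=2):
            """Return DM values of local S/N maxima flanked by rising/falling runs."""
            slope = np.sign(np.diff(snr))            # +1 rising, -1 falling, 0 flat
            peaks = []
            for i in range(run, len(snr) - run):
                rising = slope[i - 1] > 0 and np.all(slope[i - run:i] >= 0)
                falling = slope[i] < 0 and np.all(slope[i:i + run] <= 0)
                if rising and falling and snr[i] >= min_snr:
                    peaks.append(dm[i])
            return peaks

        # Toy curve: one dispersed pulse centred near DM = 50 plus mild noise.
        dm = np.arange(0.0, 101.0)
        snr = 8.0 * np.exp(-0.5 * ((dm - 50.0) / 2.0) ** 2)
        snr += np.random.default_rng(1).normal(0.0, 0.2, dm.size)
        print(find_snr_peaks(dm, snr))               # expected: a single peak near 50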

    Use of Geospatial Technique to Estimate Traffic Volume on South Carolina Roads

    Data from the Federal Highway Administration (FHWA) indicates that there is an increase in the number of vehicles on already congested roadways. For a department of transportation (DOT) to keep up with this increased demand, it is necessary to continuously collect and monitor traffic volumes on the roads it maintains. One of the most important parameters that DOTs collect and use for traffic engineering and transportation planning studies is annual average daily traffic (AADT). DOTs are also required to collect and report AADTs to the FHWA annually as part of the Highway Performance Monitoring System (HPMS) program. AADTs are typically obtained by using pneumatic tubes to count traffic for 24 hours; these "short-term" counts are then converted to AADTs based on expansion factors. This method requires an enormous amount of time and money. For these reasons, the South Carolina DOT (SCDOT) can only afford to perform short-term counts at a limited number of locations throughout the state every two or three years. The counts from these locations are known as "coverage counts". However, the SCDOT is required to determine and report the AADTs on all roads it maintains, including non-coverage locations, where short-term counts have never been collected or were collected more than 10 years ago. In the absence of a methodology, the SCDOT simply assumes the AADT to be 100 vehicles/day (vpd) for a rural local road and 200 vpd for an urban local road. This thesis investigates the applicability and effectiveness of the kriging method to estimate AADT at non-coverage locations. Other studies have investigated the use of kriging to estimate AADTs, but only at a local or regional level. This study was the first to evaluate the kriging method statewide. The effectiveness of the kriging method was evaluated against other interpolation methods, including nearest neighbor, average of k nearest neighbors, inverse distance weighting, and the SCDOT's current default values.
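
    As a hedged illustration of the interpolation idea, the sketch below implements a bare-bones ordinary kriging estimator and applies it to synthetic coverage-count stations; the exponential variogram, its parameters, and the synthetic AADT values are assumptions and not SCDOT's actual model.

        # Minimal ordinary kriging: solve for weights that honour a semivariogram
        # model and sum to one, then estimate AADT at an uncounted location.
        import numpy as np

        def semivariogram(h, sill=500.0**2, rng_=20.0, nugget=0.0):
            """Exponential semivariogram (parameters are illustrative assumptions)."""
            return nugget + sill * (1.0 - np.exp(-h / rng_))

        def ordinary_kriging(xy_known, z_known, xy_target):
            n = len(z_known)
            d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
            A = np.empty((n + 1, n + 1))
            A[:n, :n] = semivariogram(d)
            A[:n, n] = A[n, :n] = 1.0            # Lagrange row/column: weights sum to 1
            A[n, n] = 0.0
            b = np.append(semivariogram(np.linalg.norm(xy_known - xy_target, axis=1)), 1.0)
            w = np.linalg.solve(A, b)[:n]        # kriging weights
            return float(w @ z_known)

        # Synthetic coverage-count stations (coordinates in miles) and AADT values.
        rng = np.random.default_rng(2)
        stations = rng.uniform(0.0, 50.0, size=(30, 2))
        aadt = 2000.0 + 150.0 * stations[:, 0] + rng.normal(0.0, 300.0, 30)
        print(ordinary_kriging(stations, aadt, np.array([25.0, 25.0])))   # non-coverage point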

    Polymer Matrix Composites Fabrication and Testing

    This project involves two separate processes for fabricating carbon fiber composite parts, using Hexcel's RTM6 resin system and Kaneka's IR-6070 toughened resin system to impregnate carbon fiber tow and weave. These two resins were chosen to model microcracking in parts made with RTM6 compared to parts made with IR-6070. Plies of the composites were made by painting resin onto 8-harness satin weave or by impregnating IM7 12k tow in a prepregging machine. Plies were consolidated using an out-of-autoclave oven or a heat press. Fabrication of the composite parts was conducted with the end goal of sending the composites to be tested and modeled for microcracking. The data will be used for computer modeling in the future.

    Pharmacists as educators – Engaging with the community through outreach workshops in schools in Cork city

    Inspired by the UCC Campus Engage initiative and in a quest to help final-year pharmacy students develop higher-order thinking skills, students were tasked with designing and delivering outreach workshops on the "Role of the Pharmacist in Educating Patients on Microbes, Antimicrobial Usage, and Infection Prevention". The assignment formed part of the continuous assessment requirements for the PF4015 Novel Drug Delivery module delivered to final-year Pharmacy students on the B.Pharm course. These 1-hour interactive workshops were delivered to students across diverse age groups (primary and secondary) and socioeconomic backgrounds in schools during Science Week in November 2016 and November 2017.

    Comparison of models with and without roadway features to estimate annual average daily traffic at non-coverage locations

    This study develops and evaluates models to estimate Annual Average Daily Traffic (AADT) at non-coverage, or out-of-network, locations. Non-coverage locations are those where counts are performed very infrequently but where an up-to-date and accurate estimate is needed by state departments of transportation. Two types of models are developed: one that simply uses nearby known AADTs to provide an estimate, and one that requires roadway features (e.g., type of median, presence of a left-turn lane). The advantage of the former type is that no additional data collection is needed, thereby saving time and money for state highway agencies. A natural question, and one that this study seeks to answer, is: can this type of model provide estimates as good as or better than the latter type? The models of the first type include a hybrid-kriging model and a Gaussian process regression model without roadway features (GPR-no-feature); the models of the second type include a point-based model, an ordinary regression model, a quantile regression model, and a Gaussian process regression model with roadway features (GPR-with-features). The performance of these models is compared using South Carolina data from 2019 to 2021. The results indicate that the GPR-with-features model yields the lowest Root Mean Squared Error (RMSE) and the lowest Mean Absolute Percentage Error (MAPE). It outperforms the hybrid-kriging model by 6.45% in RMSE, the GPR-no-feature model by 4.25%, the point-based model by 4.69%, the ordinary regression model by 11.35%, and the quantile regression model by 4.25%. Similarly, in MAPE, the GPR-with-features model outperforms the hybrid-kriging model by 25.21%, the GPR-no-feature model by 17.81%, the point-based model by 22.26%, the ordinary regression model by 26.36%, and the quantile regression model by 21.07%.
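
    The sketch below illustrates, under stated assumptions, how the with-features versus without-features comparison above could be computed: two scikit-learn Gaussian process regressors are fit on synthetic data, and "outperforms by X%" is read as a relative reduction in error. The synthetic sites, the lane-count feature, and the kernel choice are all assumptions, not the study's data or models.

        # Minimal sketch: GPR on coordinates only vs coordinates plus a roadway
        # feature, compared by RMSE and MAPE on a held-out set.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel
        from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

        rng = np.random.default_rng(3)
        n = 300
        coords = rng.uniform(0.0, 50.0, size=(n, 2))          # site coordinates
        lanes = rng.integers(2, 7, size=n).astype(float)      # assumed roadway feature
        aadt = 1500.0 * lanes + 40.0 * coords[:, 0] + rng.normal(0.0, 800.0, n)
        train, test = np.arange(200), np.arange(200, n)

        def fit_eval(X):
            gpr = GaussianProcessRegressor(kernel=1.0 * RBF(10.0) + WhiteKernel(),
                                           normalize_y=True)
            gpr.fit(X[train], aadt[train])
            pred = gpr.predict(X[test])
            rmse = np.sqrt(mean_squared_error(aadt[test], pred))
            mape = mean_absolute_percentage_error(aadt[test], pred)
            return rmse, mape

        rmse_nf, mape_nf = fit_eval(coords)                            # no roadway features
        rmse_wf, mape_wf = fit_eval(np.column_stack([coords, lanes]))  # with roadway features
        print("relative RMSE improvement: %.1f%%" % (100 * (rmse_nf - rmse_wf) / rmse_nf))
        print("relative MAPE improvement: %.1f%%" % (100 * (mape_nf - mape_wf) / mape_nf))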