An inventory of collaborative medication reviews for older adults-evolution of practices
Background Collaborative medication review (CMR) practices for older adults are evolving in many countries. Development has been under way in Finland for over a decade, but no inventory of evolved practices has been conducted. The aim of this study was to identify and describe CMR practices in Finland after 10 years of development. Methods An inventory of CMR practices was conducted using a snowballing approach and an open call on the Finnish Medicines Agency's website in 2015. Data were analysed quantitatively using descriptive statistics and qualitatively by inductive thematic content analysis. Clyne et al.'s medication review typology was applied to evaluate the comprehensiveness of the practices. Results In total, 43 practices were identified, of which 22 (51%) were designed for older adults in primary care. The majority (n = 30, 70%) of the practices were clinical CMRs, with 18 (42%) of them in routine use. A checklist with criteria was used in 19 (44%) of the practices to identify patients for CMR, with polypharmacy (n = 6), falls (n = 5), and renal dysfunction (n = 5) as the most common criteria. Patients were involved in 32 (74%) of the practices, mostly as a source of information via interview (n = 27, 63%). A medication care plan was discussed with the patient in 17 practices (40%), and it was established systematically as usual care for all or selected patient groups in 11 (26%) of the practices. All or selected patients' medication lists were reconciled in 15 practices (35%). Nearly half of the practices (n = 19, 44%) lacked explicit methods for following up the effects of medication changes. When reported, the effects were followed up as a routine control (n = 9, 21%) or at a follow-up appointment (n = 6, 14%). Conclusions Different medication reviews in varying settings were available and in routine use, the majority being comprehensive CMRs designed for primary outpatient care and for older adults.
Even though practices might benefit from national standardization, flexibility in their customization according to context, medical and patient needs, and available resources is important. Peer reviewed.
Daily questionnaire to assess self-reported well-being during a software development project
Internet as a source of medicines information (MI) among frequent internet users
Background: The internet is widely and increasingly used to search for health information. Previous studies have focused mainly on health information on the internet and not specifically on medicines information (MI). Objectives: The aim of this study was to explore the internet as a source of MI compared to other MI sources; to identify those who use the internet as a source of MI; and to describe patterns of use of the internet as a source of MI. Methods: A cross-sectional design employed a web-based questionnaire posted on the websites of patient organizations, other organizations, and pharmacies for six weeks at the beginning of 2014. Logistic regression analysis was used to assess associations of background variables with the use of different MI sources. Results: The most frequently used MI sources among respondents (n = 2489) were package leaflets (90%), pharmacists (83%), physicians (72%), and the internet (68%). According to a multivariate analysis, internet use for MI was associated with female gender, age <65 years, higher education, daily use of the internet, and continuous use of vitamins or herbals. MI was most commonly searched for on a Finnish health portal (56%) and on the websites of pharmacies (41%). Nearly half of the respondents (43%) used search engines to find information on the internet. The name of the medicinal product, a symptom, or a disease were the most commonly used search terms. Conclusions: Well-educated young women tend to search for MI on the internet. Health care professionals should discuss reliable MI websites and tools that can help patients evaluate the reliability of information.
A benchmark study on the effectiveness of search-based data selection and feature selection for cross project defect prediction
Abstract
Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP). On the other hand, feature selection and data quality are issues to consider in CPDP.
Objective: We aim at utilizing the Nearest Neighbor (NN)-Filter, embedded in a genetic algorithm, to produce validation sets for generating evolving training datasets to tackle CPDP while accounting for potential noise in defect labels. We also investigate the impact of using different feature sets.
Method: We extend our proposed approach, Genetic Instance Selection (GIS), by incorporating feature selection in its setting. We use 41 releases of 11 multi-version projects to assess the performance of GIS in comparison with benchmark CPDP approaches (NN-filter and Naive-CPDP) and within project approaches (Cross-Validation (CV) and Previous Releases (PR)). To assess the impact of feature sets, we use two sets of features, SCM+OO+LOC (all) and CK+LOC (ckloc), as well as iterative info-gain subsetting (IG) for feature selection.
Results: The GIS variant with info-gain feature selection is significantly better than NN-Filter (all, ckloc, IG) in terms of F1 (p-values ≪ 0.001, Cohen's d = {0.621, 0.845, 0.762}) and G (p-values ≪ 0.001, Cohen's d = {0.899, 1.114, 1.056}), and Naive CPDP (all, ckloc, IG) in terms of F1 (p-values ≪ 0.001, Cohen's d = {0.743, 0.865, 0.789}) and G (p-values ≪ 0.001, Cohen's d = {1.027, 1.119, 1.050}). Overall, the performance of GIS is comparable to that of within project defect prediction (WPDP) benchmarks, i.e. CV and PR. In terms of the multiple comparisons test, all variants of GIS belong to the top-ranking group of approaches.
Conclusions: We conclude that datasets obtained from search-based approaches combined with feature selection techniques are a promising way to tackle CPDP. Especially, the performance comparison with the within project scenario encourages further investigation of our approach. However, the performance of GIS is based on high recall at the expense of a loss in precision. Using different optimization goals, utilizing other validation datasets, and using other feature sets are possible directions for future work.
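The iterative info-gain subsetting (IG) step can be illustrated with a minimal sketch. The helper names and the single-threshold split below are illustrative assumptions, not the authors' implementation; the idea is simply to score each feature by how much it reduces label entropy and keep the top-scoring subset.

```python
# Sketch of info-gain scoring for defect-prediction features (illustrative only).
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(values, labels, threshold):
    """Information gain from splitting a numeric feature at `threshold`."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    # Weighted entropy of the two partitions after the split.
    cond = sum(len(part) / n * entropy(part) for part in (left, right) if part)
    return entropy(labels) - cond
```

Ranking features by such a gain score and retaining only the best-scoring ones, repeated over iterations, is the general shape of info-gain subsetting.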
Industry-academia collaborations in software engineering: An empirical analysis of challenges, patterns and anti-patterns in research projects
Research collaboration between industry and academia supports improvement and innovation in industry and helps to ensure industrial relevance in academic research. However, many researchers and practitioners believe that the level of joint industry-academia collaboration (IAC) in software engineering (SE) research is still relatively low, compared to the amount of activity in each of the two communities. The goal of the empirical study reported in this paper is to characterize, in an exploratory manner, the state of IAC with respect to a set of challenges, patterns and anti-patterns identified by a recent Systematic Literature Review study. To address the above goal, we gathered the opinions of researchers and practitioners w.r.t. their experiences in IAC projects. Our dataset includes 47 opinion data points related to a large set of projects conducted in 10 different countries. We aim to contribute to the body of evidence in the area of IAC, for the benefit of researchers and practitioners in conducting future successful IAC projects in SE. As an output, the study presents a set of empirical findings and evidence-based recommendations to increase the success of IAC projects. Supported by the National Research Fund, Luxembourg FNR/P10/03. Supported by FCT (Fundação para a Ciência e Tecnologia) within the Project Scope UID/CEC/00319/2013.
Search based training data selection for cross project defect prediction
Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP). On the other hand, data quality is an issue to consider in CPDP. Aim: We aim at utilising the Nearest Neighbor (NN)-Filter, embedded in a genetic algorithm, for generating evolving training datasets to tackle CPDP, while accounting for potential noise in defect labels. Method: We propose a new search based training data (i.e., instance) selection approach for CPDP called GIS (Genetic Instance Selection) that looks for solutions optimizing a combined measure of F-Measure and GMean on a validation set generated by the (NN)-filter. The genetic operations consider the similarities in features and address possible noise in assigned defect labels. We use 13 datasets from the PROMISE repository to compare the performance of GIS with benchmark CPDP methods, namely the (NN)-filter and naive CPDP, as well as with within project defect prediction (WPDP). Results: Our results show that GIS is significantly better than the (NN)-Filter in terms of F-Measure (p-value ≪ 0.001, Cohen's d = 0.697) and GMean (p-value ≪ 0.001, Cohen's d = 0.946). It also outperforms the naive CPDP approach in terms of F-Measure (p-value ≪ 0.001, Cohen's d = 0.753) and GMean (p-value ≪ 0.001, Cohen's d = 0.994). In addition, the performance of our approach is better than that of WPDP, again considering F-Measure (p-value ≪ 0.001, Cohen's d = 0.227) and GMean (p-value ≪ 0.001, Cohen's d = 0.595) values. Conclusions: We conclude that search based instance selection is a promising way to tackle CPDP. Especially, the performance comparison with the within project scenario encourages further investigation of our approach. However, the performance of GIS is based on high recall at the expense of low precision. Using different optimization goals, e.g. targeting high precision, would be a future direction to investigate.
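The combined F-Measure/GMean objective that GIS evaluates on the validation set can be sketched from confusion-matrix counts. The equal weighting of the two measures below is an assumption for illustration; the abstract only says the measures are combined, not how.

```python
# Sketch of a combined F-Measure + GMean fitness (equal weighting assumed).

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def g_mean(tp, fp, fn, tn):
    """Geometric mean of recall (defective class) and specificity (clean class)."""
    rec = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return (rec * spec) ** 0.5

def gis_fitness(tp, fp, fn, tn):
    # Hypothetical combination: simple average of the two measures.
    return 0.5 * (f_measure(tp, fp, fn) + g_mean(tp, fp, fn, tn))
```

A genetic search would then evolve candidate training sets, scoring each one by this fitness on the NN-filter-generated validation set.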
Towards a catalog of spreadsheet smells
Spreadsheets are considered to be the most widely used programming language in the world, and reports have shown that 90% of real-world spreadsheets contain errors. In this work, we try to identify spreadsheet smells, a concept adapted from software: a surface indication that usually corresponds to a deeper problem. Our smells have been integrated into a tool and were computed for a large spreadsheet repository. Finally, the analysis of the results we obtained led to the refinement of our initial catalog.
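The notion of a smell as a surface indication can be made concrete with a toy detector. The "many references" smell, the threshold, and the dict-based sheet model below are all hypothetical illustrations, not the catalog from the paper:

```python
# Toy spreadsheet-smell detector: flag formulas referencing many cells
# (hypothetical smell; real catalogs and sheet models are richer).
import re

def count_references(formula):
    """Count A1-style cell references in a formula string."""
    return len(re.findall(r"\b[A-Z]+[0-9]+\b", formula))

def find_smelly_cells(sheet, max_refs=3):
    """Return addresses of formula cells referencing more than max_refs cells.

    `sheet` maps cell addresses to their raw content; formulas start with '='.
    """
    return [addr for addr, content in sheet.items()
            if content.startswith("=") and count_references(content) > max_refs]
```

Running such detectors over a repository and inspecting the flagged cells is one way a smell catalog can be validated and refined.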