
    A Survey of Systems Engineering Effectiveness - Initial Results

    This survey quantifies the relationship between the application of Systems Engineering (SE) best practices to projects and programs and the performance of those projects and programs. The survey population consisted of projects and programs executed by defense contractors who are members of the Systems Engineering Division (SED) of the National Defense Industrial Association (NDIA). The deployment of SE practices on a project or program was measured through the availability and characteristics of specific SE-related work products. Project Performance was measured through typically available project measures of cost performance, schedule performance, and scope performance. Additional project and program information, such as project size and project domain, was also collected to aid in characterizing the respondent's project. Analysis of the survey responses revealed moderately strong statistical relationships between Project Performance and several categorizations of specific SE best practices. Notably stronger relationships appear when the effects of more than one of the best practice categories are combined. Of course, Systems Engineering Capability alone does not ensure outstanding Project Performance. The survey results show notable differences in the relationship between SE best practices and performance for more challenging as compared to less challenging projects. The statistical relationship between Project Performance and the combination of SE Capability and Project Challenge is quite strong for survey data of this type.
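    The abstract does not state which statistic was used to quantify these relationships, so the following is only an illustrative sketch: it computes an ordinal measure of association (Goodman-Kruskal Gamma) between a hypothetical SE Capability rating and a hypothetical Project Performance rating. The rating scale and the data are invented for the example.

```python
# Illustrative only: Goodman-Kruskal Gamma between two ordinal ratings.
# The 1-3 rating scale and the data below are invented for this example.

def goodman_kruskal_gamma(x, y):
    """Gamma = (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            product = (x[i] - x[j]) * (y[i] - y[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical per-project ratings: 1 = low, 2 = moderate, 3 = high
se_capability = [1, 1, 2, 2, 2, 3, 3, 3, 1, 3]
performance   = [1, 2, 2, 1, 3, 2, 3, 3, 1, 3]

print(f"Gamma = {goodman_kruskal_gamma(se_capability, performance):.2f}")
```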

    The re-identification risk of Canadians from longitudinal demographics

    Background: The public is less willing to allow their personal health information to be disclosed for research purposes if they do not trust researchers and how researchers manage their data. However, the public is more comfortable with their data being used for research if the risk of re-identification is low. There are few studies on the risk of re-identification of Canadians from their basic demographics, and no studies on their risk from their longitudinal data. Our objective was to estimate the risk of re-identification from the basic cross-sectional and longitudinal demographics of Canadians. Methods: Uniqueness is a common measure of re-identification risk. Demographic data on a 25% random sample of the population of Montreal were analyzed to estimate population uniqueness on postal code, date of birth, and gender, as well as their generalizations, for periods ranging from 1 year to 11 years. Results: Almost 98% of the population was unique on full postal code, date of birth, and gender: these three variables are effectively a unique identifier for Montrealers. Uniqueness increased for longitudinal data. Considerable generalization was required to reach acceptably low uniqueness levels, especially for longitudinal data. Detailed guidelines and disclosure policies on how to ensure that the re-identification risk is low are provided. Conclusions: A large percentage of Montreal residents are unique on basic demographics. For non-longitudinal data sets, the three-character postal code, gender, and month/year of birth represent sufficiently low re-identification risk. Data custodians need to generalize their demographic information further for longitudinal data sets.
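    As a concrete illustration of the uniqueness measure used in this study, the sketch below computes the proportion of records that are unique on a set of quasi-identifiers. The field names and the toy records are assumptions for the example, not the study's data.

```python
# A minimal sketch of population uniqueness: the fraction of records that are
# the only member of their equivalence class on the quasi-identifiers.
# Field names and records are invented for illustration.
from collections import Counter

def population_uniqueness(records, quasi_identifiers):
    keys = [tuple(record[q] for q in quasi_identifiers) for record in records]
    class_sizes = Counter(keys)
    unique = sum(1 for key in keys if class_sizes[key] == 1)
    return unique / len(records)

records = [
    {"postal_code": "H2X 1Y4", "dob": "1970-03-12", "sex": "F"},
    {"postal_code": "H2X 1Y4", "dob": "1970-03-12", "sex": "F"},  # shared class
    {"postal_code": "H3A 0G4", "dob": "1985-11-02", "sex": "M"},
    {"postal_code": "H4B 2L9", "dob": "1992-07-23", "sex": "F"},
]

# 2 of 4 records are unique on (postal code, date of birth, sex) -> 0.5
print(population_uniqueness(records, ["postal_code", "dob", "sex"]))
```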

    A Survey of Quality Assurance Practices in Biomedical Open Source Software Projects

    Background: Open source (OS) software is continuously gaining recognition and use in the biomedical domain, for example, in health informatics and bioinformatics. Objectives: Given the mission-critical nature of applications in this domain and their potential impact on patient safety, it is important to understand to what degree and how effectively biomedical OS developers perform standard quality assurance (QA) activities such as peer reviews and testing. This would allow the users of biomedical OS software to better understand the quality risks, if any, and the developers to identify process improvement opportunities to produce higher quality software. Methods: A survey of developers working on biomedical OS projects was conducted to examine the QA activities that are performed. We took a descriptive approach to summarize the implementation of QA activities and then examined some of the factors that may be related to the implementation of such practices. Results: Our descriptive results show that 63% (95% CI, 54-72) of projects did not include peer reviews in their development process, while 82% (95% CI, 75-89) did include testing. Approximately 74% (95% CI, 67-81) of developers did not have a background in computing, 80% (95% CI, 74-87) were paid for their contributions to the project, and 52% (95% CI, 43-60) had PhDs. A multivariate logistic regression model to predict the implementation of peer reviews was not significant (likelihood ratio test = 16.86, 9 df, P = .051), and neither was a model to predict the implementation of testing (likelihood ratio test = 3.34, 9 df, P = .95). Conclusions: Less attention is paid to peer review than testing. However, the former is a complementary, and necessary, QA practice rather than an alternative. Therefore, one can argue that there are quality risks, at least at this point in time, in transitioning biomedical OS software into any critical settings that may have operational, financial, or safety implications. Developers of biomedical OS applications should invest more effort in implementing systematic peer review practices throughout the development and maintenance processes.
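    The sketch below illustrates the general form of the analysis reported in the Results: a multivariate logistic regression predicting whether a project performs peer reviews, assessed with a likelihood ratio test. The predictor names and the synthetic data are assumptions; the survey's actual variables and fitted model are not reproduced here.

```python
# Illustrative sketch: logistic regression with a likelihood ratio test against
# the intercept-only model, as in the Results above. Predictors and data are
# synthetic assumptions, not the survey's variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
has_computing_background = rng.integers(0, 2, n)
is_paid = rng.integers(0, 2, n)
does_peer_review = rng.integers(0, 2, n)  # placeholder outcome

X = sm.add_constant(np.column_stack([has_computing_background, is_paid]))
model = sm.Logit(does_peer_review, X).fit(disp=False)

# Likelihood ratio chi-square, degrees of freedom, and p-value of the full model
print(f"LR = {model.llr:.2f}, df = {int(model.df_model)}, p = {model.llr_pvalue:.3f}")
```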

    A method for managing re-identification risk from small geographic areas in Canada

    Background: A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of individuals who are unique on the relevant variables (the quasi-identifiers) approaches zero. However, a uniqueness value of zero is quite a stringent threshold and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods: We estimated uniqueness for urban Forward Sortation Areas (FSAs) using the 2001 long form Canadian census data, representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results: All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models, respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds, considerably fewer records would be considered to be in small areas, compared to the 0% threshold, and would therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the potential invasion of privacy if the data is disclosed, and the motives and capacity of the data recipient to re-identify the data. Conclusion: The models we developed can be used to manage the re-identification risk from small geographic areas. Being able to choose among three possible thresholds, a data custodian can adjust the definition of "small geographic area" to the nature of the data and recipient.
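    A minimal sketch of the modelling approach described in the Methods is shown below, using synthetic data: a logistic regression that predicts whether an area's uniqueness exceeds a threshold from its population size and number of equivalence classes, validated with 10-fold cross-validation. The data, coefficients, and resulting accuracy are invented for illustration and do not reproduce the study's models.

```python
# Sketch of the modelling approach described above, on synthetic data:
# logistic regression predicting whether an area's uniqueness exceeds a
# threshold from its population size and number of equivalence classes,
# validated with 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
population = rng.integers(1_000, 60_000, n)
equivalence_classes = rng.integers(100, 20_000, n)

# Synthetic label: smaller areas with more possible quasi-identifier
# combinations are more likely to exceed the uniqueness threshold.
logit = 2.0 - population / 20_000 + equivalence_classes / 10_000
exceeds_threshold = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([population, equivalence_classes])
model = make_pipeline(StandardScaler(), LogisticRegression())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
predicted = cross_val_predict(model, X, exceeds_threshold, cv=cv)

tn, fp, fn, tp = confusion_matrix(exceeds_threshold, predicted).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```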

    Personal Care Product Use in Pregnancy and the Postpartum Period: Implications for Exposure Assessment

    Concern regarding the potential for developmental health risks associated with certain chemicals (e.g., phthalates, antibacterials) used in personal care products is well documented; however, current exposure data for pregnant women are limited. The objective of this study was to describe the pattern of personal care product use in pregnancy and the postpartum period. Usage patterns of personal care products were collected at six different time points during pregnancy and once in the postpartum period for a cohort of 80 pregnant women in Ottawa, Canada. The pattern of use was then described, and groups of personal care products commonly used together were identified using hierarchical cluster analysis. The results showed that product use varied by income and country of birth. General hygiene products were the most commonly used products and were used consistently over time, while cosmetic product use declined with advancing pregnancy and post-delivery. Hand soaps and baby products were reported as used more frequently after birth. This study is the first to track personal care product use across pregnancy and into the postpartum period, and suggests that pregnant populations may be a unique group of personal care product users. This information will be useful for exposure assessments.
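    To illustrate the hierarchical cluster analysis mentioned above, the sketch below groups products whose usage profiles are similar across participants. The product names, the tiny usage matrix, and the choice of Jaccard distance with average linkage are assumptions for the example; the study's actual clustering settings are not given in the abstract.

```python
# Illustrative sketch: hierarchical clustering of products by how similarly
# they are used across participants. Product names, the usage matrix, and the
# Jaccard/average-linkage choices are assumptions for this example.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

products = ["hand soap", "shampoo", "body lotion", "lipstick", "mascara", "baby wash"]
# Rows = products, columns = participants; 1 = product reported as used.
usage = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],  # hand soap
    [1, 1, 1, 1, 1, 1, 0, 1],  # shampoo
    [1, 1, 0, 1, 1, 0, 1, 1],  # body lotion
    [0, 1, 0, 0, 1, 0, 0, 0],  # lipstick
    [0, 1, 0, 0, 1, 1, 0, 0],  # mascara
    [1, 0, 1, 1, 0, 0, 1, 1],  # baby wash
])

# Jaccard distance between usage profiles, then average-linkage clustering
# cut into two groups of products that tend to be used together.
tree = linkage(pdist(usage, metric="jaccard"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")

for product, group in zip(products, groups):
    print(group, product)
```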
