
    Evaluation and optimal design of spectral sensitivities for digital color imaging

    The quality of an image captured by a color imaging system depends primarily on three factors: sensor spectral sensitivity, illumination, and scene. While knowledge of the illumination is important, the sensitivity characteristics are critical to the success of imaging applications and must be optimally designed under practical constraints; the ultimate image quality is judged subjectively by the human visual system. This dissertation addresses the evaluation and optimal design of spectral sensitivity functions for digital color imaging devices. Color imaging fundamentals and device characterization are discussed first. For the evaluation of spectral sensitivity functions, the dissertation concentrates on imaging noise characteristics: signal-independent and signal-dependent noise together form an imaging noise model, and noise propagates as the signal is processed. A new colorimetric quality metric, the unified measure of goodness (UMG), which addresses color accuracy and noise performance simultaneously, is introduced and compared with other available quality metrics; based on this comparison, UMG is designated as the primary evaluation metric. For the optimal design of spectral sensitivity functions, three generic approaches are analyzed for the case where the filter fabrication process is unknown: optimization through enumerative evaluation, optimization of parameterized functions, and optimization of an additional channel. Otherwise, a hierarchical design approach is introduced, which emphasizes the use of the primary metric but refines the initial optimization results through the application of multiple secondary metrics. Finally, the validity of UMG as a primary metric and of the hierarchical approach is experimentally tested and verified.
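    As a concrete illustration of the noise propagation this abstract describes, the following sketch (my own, not from the dissertation) pushes a simple signal-dependent plus signal-independent noise model through a linear color-correction step; the variance amplification by an off-diagonal 3x3 matrix is exactly the accuracy-versus-noise trade-off a metric like UMG must weigh. All function names and parameter values are hypothetical.

```python
import numpy as np

def propagate_noise(raw_rgb, correction_matrix, read_noise_var=4.0, gain=0.5):
    """Propagate a simple imaging noise model through linear color correction.

    Noise model (hypothetical values):
      - signal-dependent (shot) noise: variance proportional to signal (gain * signal)
      - signal-independent (read) noise: constant variance
    For a linear transform y = M x with independent channel noise,
    var(y_i) = sum_j M[i, j]**2 * var(x_j).
    """
    raw_var = gain * raw_rgb + read_noise_var      # per-channel input variance
    out_var = (correction_matrix ** 2) @ raw_var   # variance after correction
    out_rgb = correction_matrix @ raw_rgb
    return out_rgb, out_var

# Example: a strongly off-diagonal matrix improves colorimetric accuracy
# but amplifies noise -- the trade-off a combined metric must capture.
M = np.array([[ 1.8, -0.6, -0.2],
              [-0.4,  1.6, -0.2],
              [-0.1, -0.5,  1.6]])
rgb = np.array([120.0, 80.0, 60.0])
corrected, variance = propagate_noise(rgb, M)
print(corrected, variance)
```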

    Using Software Quality Evaluation Standard Model for Managing Software Development Projects in Solar Sector

    This paper proposes a framework for managing the Project Quality Management (PQM) processes of software development projects related to photovoltaic (PV) system design. The International Organization for Standardization (ISO) quality evaluation model, the Software Product Quality Requirements and Evaluation (SQuaRE) standard, is used to determine the quality characteristics and quality metrics of the software. This work presents the following contributions: (i) defining the quality characteristics associated with PV design software using the SQuaRE standard model, (ii) adding the proposed framework as a tool and technique for practitioners following the global standard for project managers, A Guide to the Project Management Body of Knowledge (PMBOK), and (iii) identifying quality measures and sub-characteristics of PV design software. The presented model can be employed for simulation-based and/or model-based software products in various technical and engineering fields.
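    A minimal sketch of how such a characteristic-to-metric mapping might be encoded follows. The characteristic names come from ISO/IEC 25010 (part of the SQuaRE series); the concrete metrics, thresholds, and measured values are illustrative assumptions, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class QualityMetric:
    name: str               # what is measured
    target: float           # acceptance threshold (illustrative)
    measured: float         # observed value for the PV design software
    higher_is_better: bool  # direction of improvement

# Characteristic names follow ISO/IEC 25010; metrics below are assumptions.
quality_model = {
    "functional suitability": [
        QualityMetric("irradiance-model error (RMSE, W/m^2)", 25.0, 18.3, False),
    ],
    "performance efficiency": [
        QualityMetric("annual-yield simulation runtime (s)", 5.0, 3.2, False),
    ],
    "usability": [
        QualityMetric("task completion rate", 0.90, 0.94, True),
    ],
}

for characteristic, metrics in quality_model.items():
    for m in metrics:
        ok = m.measured >= m.target if m.higher_is_better else m.measured <= m.target
        print(f"{characteristic}: {m.name}: {'pass' if ok else 'fail'}")
```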

    PRODUCT DISASSEMBLABILITY AND REMANUFACTURABILITY ASSESSMENT: A QUANTITATIVE APPROACH

    The majority of products are discarded at end-of-life (EoL), causing environmental pollution and resulting in a complete loss of all materials and embodied energy. Adopting a closed-loop material flow approach can help prevent such losses and enable EoL value recovery from these products. Design and engineering decisions, as well as how products are used, affect the capability to implement EoL strategies such as disassembly and remanufacturing. Some underlying factors affecting the capability to implement these strategies have been discussed in previous studies; however, the relevant metrics and attributes are not well defined, and comprehensive methods to evaluate them quantitatively are lacking. This study first identifies key lifecycle-oriented metrics affecting disassemblability and remanufacturability. A methodology is then proposed for the quantitative evaluation of these strategies, considering the quality of returns, product-design characteristics, and process technology requirements. Finally, an industrial case study is presented to demonstrate the application of the proposed method.
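    One common way to make such an assessment quantitative is a weighted aggregation of normalized factor scores. The sketch below illustrates that pattern only; the factor names, scales, and weights are hypothetical and not taken from the study.

```python
# Weighted-score assessment sketch: factors are scored 0..1 (higher = easier
# to disassemble) and combined with weights reflecting their importance.

def weighted_score(factors, weights):
    """Aggregate normalized factor scores (0..1) into a single index."""
    assert set(factors) == set(weights)
    total_weight = sum(weights.values())
    return sum(factors[k] * weights[k] for k in factors) / total_weight

disassembly_factors = {
    "fastener accessibility": 0.7,   # product-design characteristic
    "joint reversibility":    0.9,   # screws vs. welds/adhesives
    "tool commonality":       0.6,   # process technology requirement
}
disassembly_weights = {
    "fastener accessibility": 2.0,
    "joint reversibility":    3.0,
    "tool commonality":       1.0,
}

index = weighted_score(disassembly_factors, disassembly_weights)
print(f"disassemblability index: {index:.2f}")
```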

    Examining the Validity of a State Policy-Directed Framework for Evaluating Teacher Instructional Quality: Informing Policy, Impacting Practice

    This study examines validity evidence for a state policy-directed teacher evaluation system implemented in Arizona during the 2012-2013 school year. The purpose was to evaluate the warrant for making high-stakes, consequential judgments of teacher competence based on value-added model (VAM) estimates of instructional impact and observations of professional practice (PP). The research also explores educator influence (voice) in evaluation design and the role information brokers play in local decision making. Findings are situated in an evidentiary and policy context at both the LEA and state policy levels. The study employs a single-phase, concurrent, mixed-methods research design triangulating multiple sources of qualitative and quantitative evidence onto a single (unified) validation construct: teacher instructional quality. It focuses on assessing the characteristics of the metrics used to construct quantitative ratings of instructional competence and the alignment of stakeholder perspectives to facets implicit in the evaluation framework. Validity examinations include the assembly of criterion, content, reliability, consequential, and construct articulation evidence. Perceptual perspectives were obtained from teachers, principals, district leadership, and state policy decision makers. Data for this study came from a large suburban public school district in metropolitan Phoenix, Arizona. Study findings suggest that the evaluation framework is insufficient for supporting high-stakes, consequential inferences of teacher instructional quality. This is based, in part, on the following: (1) weak associations between VAM and PP metrics; (2) unstable VAM measures across time and between tested content areas; (3) less than adequate scale reliabilities; (4) lack of coherence between theorized and empirical PP factor structures; (5) omission or underrepresentation of important instructional attributes/effects; (6) stakeholder concerns over rater consistency, bias, and the inability of test scores to adequately represent instructional competence; (7) negative sentiments regarding the system's ability to improve instructional competence and/or student learning; (8) concerns regarding unintended consequences, including increased stress, lower morale, harm to professional identity, and restricted learning opportunities; and (9) a general lack of empowerment and educator exclusion from the decision-making process. The findings also highlight the value of information brokers in policy decision making and the importance of access to unbiased empirical information during the design and implementation phases of important change initiatives.
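    Findings (1) and (3) rest on two standard statistics: a correlation between VAM estimates and PP ratings, and a scale reliability coefficient. The sketch below shows both computations on fabricated placeholder data; it is an illustration of the statistics, not the study's analysis or data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for items: (n_teachers, n_rubric_items) ratings.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
vam = rng.normal(size=50)                  # fabricated value-added estimates
pp = 0.2 * vam + rng.normal(size=50)       # weakly related PP composite
rubric = rng.integers(1, 5, size=(50, 8))  # 8 rubric items, 50 teachers

r = np.corrcoef(vam, pp)[0, 1]
print(f"VAM-PP correlation: {r:.2f}")                     # weak r, cf. finding (1)
print(f"Cronbach's alpha: {cronbach_alpha(rubric):.2f}")  # low alpha, cf. finding (3)
```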

    J Surv Stat Methodol

    Researchers strive to design and implement high-quality surveys to maximize the utility of the data collected. The definitions of quality and usefulness, however, vary from survey to survey and depend on the analytic needs. Survey teams must evaluate the trade-offs of various decisions, such as when results are needed and their required level of precision, in addition to practical constraints such as budget, before finalizing the design. Characteristics within the concept of fit for purpose (FfP) can provide the framework for considering these trade-offs; the same tool can also support an evaluation of the quality of the resulting estimates. Implementing a FfP framework in this context, however, is not straightforward. In this article, we provide the reader with a glimpse of a FfP framework in action for obtaining estimates of early-season influenza vaccination coverage and of knowledge, attitudes, behaviors, and barriers related to influenza and influenza prevention among civilian noninstitutionalized adults aged 18 years and older in the United States. The result is the National Internet Flu Survey (NIFS), an annual, two-week internet survey sponsored by the US Centers for Disease Control and Prevention. In addition to critical design decisions, we use the established NIFS FfP framework to discuss the quality of the NIFS in meeting its intended objectives. We highlight aspects that work well and other survey traits requiring further evaluation. Differences found in comparing the NIFS to the National Flu Survey, the National Health Interview Survey, and the Behavioral Risk Factor Surveillance System are discussed via their respective FfP characteristics. The findings presented here highlight the importance of the FfP framework for designing surveys, defining data quality, and providing a set of metrics used to advertise the intended use of the survey data and results.
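    The precision-versus-budget trade-off named above can be made concrete with a standard sample-size calculation for a proportion. This sketch is generic survey arithmetic, not the NIFS design; the expected coverage, design effect, and cost per complete are illustrative assumptions.

```python
import math

def required_sample_size(p, margin, deff=1.0, z=1.96):
    """Completes needed so a proportion p has the given margin of error.

    n = z^2 * p * (1 - p) / margin^2, inflated by the design effect (deff)
    to account for weighting/clustering in a non-simple-random sample.
    """
    return math.ceil(deff * (z ** 2) * p * (1 - p) / margin ** 2)

p_expected = 0.40  # anticipated early-season coverage (assumption)
for margin in (0.05, 0.03, 0.01):
    n = required_sample_size(p_expected, margin, deff=1.5)
    # Assume an illustrative $10 per complete to show the budget trade-off.
    print(f"margin ±{margin:.0%}: {n:,} completes, approx. cost ${n * 10:,}")
```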

    Integrating automated support for a software management cycle into the TAME system

    Software managers are interested in the quantitative management of software quality, cost, and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm, and helps generate a software engineering process and measurement environment based on the project characteristics. SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes; it is based on the feed-forward control principle, with quality target setting carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control, and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
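    The GQM derivation mentioned above refines each measurement goal into questions and each question into measurable metrics. The sketch below encodes that structure; the example goal, questions, and metrics are illustrative assumptions, not TAME's or SQMAR's actual templates.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str       # e.g., "improve"
    issue: str         # e.g., "defect density"
    obj: str           # object of study, e.g., "the release process"
    viewpoint: str     # e.g., "project manager"
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="improve", issue="defect density",
    obj="the release process", viewpoint="project manager",
    questions=[
        Question("What is the current defect density?",
                 ["defects / KLOC per release"]),
        Question("Where are defects introduced?",
                 ["defects per phase", "defect escape rate"]),
    ],
)

# Each goal expands top-down into questions, then into concrete metrics.
for q in goal.questions:
    print(f"Q: {q.text} -> metrics: {', '.join(q.metrics)}")
```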

    Study and Evaluation for Quality Improvement of Object Oriented System at Various Layers of Object Oriented Matrices

    Abstract: Measurement is fundamental to any engineering discipline, and there is considerable evidence that object-oriented design metrics can support quality management decisions, leading to substantial savings in the cost of resources allocated to testing or in estimating the maintenance effort of a project. Many object-oriented metrics have been designed for C++, the language most preferred for many object-oriented systems. This paper focuses on an evaluation of object-oriented metrics in C++. Two projects are considered as inputs for the study: the first is a library management system for a college, and the second is a graphical editor that can be used to describe and create a scene. The metric values have been calculated using a semi-automated tool, and the resulting values have been analyzed to provide significant insight into the object-oriented characteristics of the projects; this is a very time-consuming process. The approach is also applicable to well-compiled Java programs, which must contain valid comments for cohesion to be measured. The objective of this work is to create a common platform for all design quality metrics, so as to make application software more scalable and maintainable. The paper includes the system design of the work and the solution steps according to the problem definition statement, and concludes by testing the C++ and Java projects.
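    The cohesion measurement mentioned above can be illustrated with LCOM (lack of cohesion of methods), a classic object-oriented design metric; the abstract does not say which metrics its tool computes, so this is a representative example only, and the class data below are made up.

```python
from itertools import combinations

def lcom(method_attrs):
    """Chidamber-Kemerer LCOM = max(P - Q, 0), where
    P = method pairs sharing no instance attributes,
    Q = method pairs sharing at least one attribute."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical 'Library' class: which attributes does each method touch?
library_class = {
    "add_book":    {"catalog"},
    "remove_book": {"catalog"},
    "register":    {"members"},
    "send_notice": {"members", "outbox"},
}
print(f"LCOM = {lcom(library_class)}")  # 4 disjoint pairs - 2 sharing = 2
```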

    Student academic performance stochastic simulator based on the Monte Carlo method

    In this paper, a computer-based tool is developed to analyze student performance along a given curriculum. The proposed software uses historical data to compute passing/failing probabilities and simulates future student academic performance with stochastic programming methods (Monte Carlo) according to the specific university regulations. This makes it possible to compute the academic performance rates for the specific subjects of the curriculum for each semester, as well as the overall rates for the set of subjects in the semester, namely the efficiency rate and the success rate. Additionally, we compute the rates for the Bachelor's degree: the graduation rate, measured as the percentage of students who finish as scheduled or taking an extra year, and the efficiency rate, measured as the percentage of credits of the curriculum with respect to the credits actually taken. In Spain, these metrics have been defined by the National Quality Evaluation and Accreditation Agency (ANECA). Moreover, the sensitivity of the performance metrics to some of the parameters of the simulator is analyzed using statistical tools (design of experiments). The simulator has been adapted to the curriculum characteristics of the Bachelor in Engineering Technologies at the Technical University of Madrid (UPM).
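    A minimal sketch of the Monte Carlo idea described above: given historical per-subject passing probabilities, repeatedly simulate students and estimate the graduation rate for finishing on schedule or with one extra year. The toy curriculum, probabilities, and retake rule are illustrative assumptions, not the actual UPM regulations.

```python
import random

# Hypothetical per-subject passing probabilities estimated from history.
PASS_PROB = {"Calculus": 0.70, "Physics": 0.65, "Programming": 0.80}
SEMESTERS_SCHEDULED = 2   # toy curriculum length
MAX_EXTRA_SEMESTERS = 2   # allow up to one extra year

def simulate_student(rng):
    """Semesters needed to pass all subjects, retaking failures each term."""
    pending = dict(PASS_PROB)
    semesters = 0
    while pending and semesters < SEMESTERS_SCHEDULED + MAX_EXTRA_SEMESTERS:
        semesters += 1
        # A subject stays pending when the pass draw fails (prob. 1 - p).
        pending = {s: p for s, p in pending.items() if rng.random() > p}
    return semesters if not pending else None   # None = did not graduate

def graduation_rate(n_students=100_000, seed=42):
    rng = random.Random(seed)
    results = [simulate_student(rng) for _ in range(n_students)]
    return sum(r is not None for r in results) / n_students

print(f"graduation rate (on time or +1 year): {graduation_rate():.1%}")
```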