
    Automatic Estimation of Modulation Transfer Functions

    The modulation transfer function (MTF) is widely used to characterise the performance of optical systems. Measuring it is costly, and it is thus rarely available for a given lens specimen. Instead, MTFs based on simulations or, at best, MTFs measured on other specimens of the same lens are used. Fortunately, images recorded through an optical system contain ample information about its MTF, although this information is confounded with the statistics of the images. This work presents a method to estimate the MTF of camera lens systems directly from photographs, without the need for expensive equipment. We use a custom grid display to accurately measure the point response of lenses and acquire ground-truth training data. We then use the same lenses to record natural images and employ a data-driven supervised learning approach, using a convolutional neural network to estimate the MTF on small image patches and aggregating the information into MTF charts over the entire field of view. The method generalises to unseen lenses and can be applied to single photographs, with performance improving if multiple photographs are available.
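    The abstract describes patch-wise MTF regression with a convolutional neural network. Below is a minimal, hedged sketch of that idea rather than the authors' actual model; the patch size, layer widths and number of spatial-frequency bins are assumptions made for illustration.

```python
# Minimal sketch, not the authors' architecture: a small CNN that regresses MTF
# values at a handful of spatial-frequency bins from a single grayscale patch.
import torch
import torch.nn as nn

class PatchMTFRegressor(nn.Module):
    def __init__(self, n_freq_bins: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),       # pool to one feature vector per patch
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, n_freq_bins),
            nn.Sigmoid(),                  # MTF values lie in [0, 1]
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(patch))

model = PatchMTFRegressor()
patches = torch.rand(4, 1, 64, 64)   # a batch of 64x64 image patches
mtf = model(patches)                 # shape (4, 8): per-patch MTF estimates
# Averaging per-patch estimates by field position would yield an MTF chart.
```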

    The Development of Measures to Assess the Performance of the Information Systems Function: A Multiple-Constituency Approach

    While the importance of measuring the performance of the information systems function (ISF) is well recognized, no comprehensive instrument exists to assess the ISF's performance. This research develops a theoretical model of the ISF's performance based on existing research. The proposed model consists of three dimensions: systems performance, information effectiveness, and service performance. Based on the model, an instrument is developed to capture the perceptions of various IS user constituency groups. Therefore, in addition to validating the ISF performance instrument, this research also attempts to empirically differentiate IS users into parsimonious groups and to compare the differences in ISF performance assessments made by the various constituencies.

    Evaluating the Impact on Market Performance of Investments in Market Information Systems: Methodological Challenges

    Evaluating the impact on market performance of investments in agricultural market information systems (MIS) faces several methodological challenges. These fall into two broad categories: (a) defining the dimensions of market performance to measure (which is a function of whom the MIS is designed to serve) and identifying reliable indicators of those performance dimensions, and (b) identifying the causal effects of the MIS. The determination of causal effects in turn requires establishing a credible baseline, measuring “treatment effects” (i.e., the effects on economic behavior of receiving improved information from an MIS), dealing with problems of endogenous placement of treatment, and interpreting the validity of stakeholders’ statements and governments’ revealed preferences regarding the utility of MIS. Many of these challenges arise because improved market information can affect the welfare of market actors through improved market policies and increased competition even if those actors do not have direct access to that information. The paper discusses these challenges and identifies approaches that may be useful in developing a “convergence of evidence” concerning whether investment in a given MIS is socially worthwhile.
    Keywords: market information services, impact assessment, market transparency, food policies, Agricultural and Food Policy, Food Security and Poverty, International Development, Marketing, Research and Development/Tech Change/Emerging Technologies, Research Methods/Statistical Methods; JEL codes: C81, D80, H43, N57, O13, Q13
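    As a concrete illustration of measuring “treatment effects” against a baseline, the sketch below estimates a difference-in-differences regression on synthetic data. The estimator, variable names and numbers are assumptions for illustration; the paper does not prescribe this particular design.

```python
# Hedged illustration on synthetic data: a difference-in-differences estimate of
# the effect of receiving MIS price information. Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),   # 1 = has access to MIS price broadcasts
    "post": rng.integers(0, 2, size=n),      # 1 = observed after the MIS rollout
})
# Synthetic outcome with a true treatment effect of 5 for treated units post-rollout
df["sale_price"] = (100 + 3 * df["treated"] + 2 * df["post"]
                    + 5 * df["treated"] * df["post"] + rng.normal(0, 4, size=n))

did = smf.ols("sale_price ~ treated * post", data=df).fit()
print(did.params["treated:post"])   # difference-in-differences estimate (close to 5)
```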

    Scalable Deep Traffic Flow Neural Networks for Urban Traffic Congestion Prediction

    Tracking congestion throughout the road network is a critical component of intelligent transportation management systems. Understanding how traffic flows, and short-term prediction of congestion caused by rush hour or incidents, can help such systems manage and direct traffic to the most appropriate detours. Many current traffic flow prediction systems rely on a central processing component, where prediction is carried out by aggregating information gathered from all measuring stations. However, centralized systems are not scalable and fail to provide real-time feedback, whereas in a decentralized scheme each node is responsible for predicting its own short-term congestion based on current local measurements at neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in real time based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. To achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples, mitigating the impact of the imbalanced training dataset. A novel dataset for this purpose is constructed from traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark show successful congestion prediction.
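    A minimal sketch of the kind of congestion-weighted Euclidean loss described above, assuming targets normalised to [0, 1]; the exact regularisation and weighting scheme used by the authors is not reproduced here.

```python
# Squared-error loss that up-weights high-congestion samples (weighting is assumed).
import torch

def weighted_euclidean_loss(pred: torch.Tensor,
                            target: torch.Tensor,
                            alpha: float = 4.0) -> torch.Tensor:
    """Mean squared error with sample weights that grow with the congestion level."""
    weights = 1.0 + alpha * target          # heavier penalty where congestion is high
    return torch.mean(weights * (pred - target) ** 2)

loss = weighted_euclidean_loss(torch.rand(32, 1), torch.rand(32, 1))
print(loss.item())
```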

    Neighborhood Integration and Connectivity Predict Cognitive Performance and Decline

    Objective: Neighborhood characteristics may be important for promoting walking, but little research has focused on older adults, especially those with cognitive impairment. We evaluated the role of neighborhood characteristics in cognitive function and decline over a 2-year period, adjusting for measures of walking. Method: In a study of 64 older adults with and without mild Alzheimer’s disease (AD), we evaluated neighborhood integration and connectivity using geographical information systems data and space syntax analysis. In multiple regression analyses, we used these characteristics to predict 2-year declines in factor-analytically derived cognitive scores (attention, verbal memory, mental status), adjusting for age, sex, education, and self-reported walking. Results: Neighborhood integration and connectivity predicted cognitive performance at baseline and changes in cognitive performance over 2 years. The relationships between neighborhood characteristics and cognitive performance were not fully explained by self-reported walking. Discussion: Clearer definitions of the specific neighborhood characteristics associated with walkability are needed to better understand the mechanisms by which neighborhoods may affect cognitive outcomes. These results have implications for measuring neighborhood characteristics, for the design and maintenance of living spaces, and for interventions to increase walking among older adults. We offer suggestions for future research measuring neighborhood characteristics and cognitive function.
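    Purely as an illustration of the regression setup described in this abstract (synthetic data, assumed column names): an OLS model predicting 2-year cognitive change from neighborhood integration and connectivity while adjusting for age, sex, education and self-reported walking.

```python
# Illustrative only: synthetic data with assumed column names, mirroring the kind of
# adjusted multiple regression described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 64
df = pd.DataFrame({
    "cognitive_change": rng.normal(size=n),            # 2-year change in a cognitive factor score
    "integration": rng.normal(size=n),                 # space-syntax integration of the neighborhood
    "connectivity": rng.normal(size=n),                # street-network connectivity
    "age": rng.integers(65, 90, size=n),
    "sex": rng.integers(0, 2, size=n),
    "education_years": rng.integers(8, 20, size=n),
    "walking_minutes": rng.integers(0, 300, size=n),   # self-reported walking per week
})

model = smf.ols(
    "cognitive_change ~ integration + connectivity + age + C(sex) "
    "+ education_years + walking_minutes",
    data=df,
).fit()
print(model.summary())
```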

    Challenges in Evaluating Development Effectiveness

    Evaluation quality is a function of methodological and data inputs. This paper argues that there has been inadequate investment in methodology, often resulting in low-quality evaluation outputs. With an increased focus on results, evaluation needs to deliver credible information on the role of development-supported interventions in improving the lives of poor people, so attention to sound methodology matters. This paper explores three areas in which evaluation can be improved. First, reporting agency-wide performance through monitoring systems that satisfy the Triple-A criteria of aggregation, attribution and alignment; this includes procedures for the systematic summary of qualitative data. Second, more attention needs to be paid to measuring impact, either through the use of randomisation where possible and appropriate, or through quasi-experimental methods. However, the analysis of impact needs to be firmly embedded in a theory-based approach which maps the causal chain from inputs to impacts. Finally, the analysis of sustainability needs to move beyond its current crude and cursory treatment to embrace the tools readily available to the discipline.
    Keywords: Evaluation, development effectiveness, World Bank

    Performance evaluation of CEBus power line communication in the presence of X-10 module signaling

    The power line variant of CEBus (PLBus) has great potential for inexpensive home automation. Both PLBus and X-10 use bursts of 120 kHz signals to transmit bits of information over the power line. However, these two systems are completely incompatible and can conflict with each other. This thesis presents the first performance evaluation of power line CEBus communication in the presence of X-10 module signaling. The evaluation included simulation experiments measuring packet delays, message delays, message throughput, channel throughput, and the percentage of messages received in error versus different loads. Network performance was confirmed to hold up well in terms of delays and throughputs over the practical range of normalized offered load. Also, the percentage of CEBus messages received in error due to a collision with X-10 signals did not exceed 2% in any of the cases considered.
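    A toy Monte Carlo sketch of the collision question studied in this thesis: how often a CEBus message overlaps an X-10 burst on the shared power line. The timing constants are illustrative assumptions, not values from the CEBus or X-10 specifications, and retransmission and error correction are ignored.

```python
# Toy estimate of the probability that a CEBus message overlaps an X-10 120 kHz burst.
# All timing constants below are assumptions for illustration.
import random

MSG_DURATION = 0.025   # assumed CEBus message airtime, seconds
X10_PERIOD   = 1.0     # assumed interval between X-10 bursts, seconds
X10_BURST    = 0.001   # assumed X-10 burst length, seconds

def overlaps_x10(start: float) -> bool:
    """True if a message starting at `start` within one X-10 cycle overlaps a burst."""
    end = start + MSG_DURATION
    # A burst occupies [0, X10_BURST) at the beginning of every cycle.
    return start < X10_BURST or end > X10_PERIOD

n = 200_000
hits = sum(overlaps_x10(random.uniform(0.0, X10_PERIOD)) for _ in range(n))
print(f"estimated collision probability: {hits / n:.3%}")
```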

    Classical EIS and square pattern signals comparison based on a well-known reference impedance

    Electrochemical impedance spectroscopy (EIS), or AC impedance methods, are popularly used for the diagnosis of electrochemical generators (batteries or fuel cells). It is now possible to acquire and quantitatively interpret the experimental electrical impedances of such systems, whose evolution indirectly reflects modifications of the internal electrochemical processes. The aim of these measurement methods is to identify the frequency response function of the system under test by applying a small-signal perturbation to the system input and measuring the corresponding response. Once identified, and according to the application, frequency response functions can provide useful information about the characteristics of the system. Classical EIS consists of applying a set of frequency-controlled sine waves to the input of the system. However, the most difficult problem is the integration of this type of measuring device in embedded systems. To overcome this problem, we propose to apply square-pattern excitation signals to perform such impedance measurements. In this paper, we quantify and compare the performance of classical EIS and the proposed broadband identification method applied to a well-known impedance circuit.
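    A minimal sketch of the broadband identification idea, using a simulated measurement rather than real hardware: a square-wave excitation is applied to an assumed reference impedance (a resistor in series with a parallel RC branch) and the impedance is recovered at the excited harmonics from the ratio of the voltage and current spectra.

```python
# Simulated broadband impedance identification with a square-wave excitation.
# The reference impedance (R0 + R1 || C1) and all numerical values are assumptions.
import numpy as np

fs = 10_000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)                 # 2 s record (integer number of periods)
i_exc = np.sign(np.sin(2 * np.pi * 5.0 * t))    # 5 Hz square-wave excitation current

R0, R1, C1 = 0.05, 0.2, 1.0                     # ohm, ohm, farad
f = np.fft.rfftfreq(t.size, 1.0 / fs)
Z_true = R0 + R1 / (1.0 + 1j * 2 * np.pi * f * R1 * C1)

I_f = np.fft.rfft(i_exc)
v_resp = np.fft.irfft(Z_true * I_f, n=t.size)   # simulated voltage response of the circuit
V_f = np.fft.rfft(v_resp)

# Identify Z(f) = V(f) / I(f) at the odd harmonics actually excited by the square wave.
for fh in (5.0, 15.0, 25.0, 35.0):
    k = int(np.argmin(np.abs(f - fh)))
    z = V_f[k] / I_f[k]
    print(f"{fh:5.1f} Hz  |Z| = {abs(z):.4f} ohm  phase = {np.angle(z, deg=True):+.1f} deg")
```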

    Quality as the criterion for delivered information systems effectiveness

    One of the major challenges of MIS activities is the difficulty of measuring the effectiveness of delivered systems. The principal purpose of my research is to explore this field in order to develop an instrument by which to measure such effectiveness. Conceptualisation of Information System (IS) Effectiveness has been substantially framed by DeLone and McLean's (1992) Success Model. But with the innovation in Information Technology (IT) over the past decade, and the constant pressure in IT to improve performance, there is merit in undertaking a fresh appraisal of the issue. This study built on the model of IS Success developed by DeLone and McLean, but was broadened to include related research from the domains of IS, Management and Marketing. This analysis found that an effective IS function is built on three pillars: the systems implemented; the information held and delivered by these systems; and the service provided in support of the IS function. A common foundation for these pillars is the concept of stakeholder needs. In seeking to appreciate the effectiveness of delivered IS applications in relation to the job performance of stakeholders, this research developed an understanding of what quality means in an IT context. I argue that quality is a more useful criterion for effectiveness than the more customary measures of use and user satisfaction. A respecification of the IS Success Model was then proposed. The second phase of the research was to test this model empirically through judgment panels, focus groups and interviews. Results consistently supported the structure and components of the respecified model. Quality was determined to be a multi-dimensional construct, with the key dimensions for the quality of delivered IS differing from those used in research from other disciplines. Empirical work indicated that end-user stakeholders derived their evaluations of quality by internally evaluating the perceived performance of delivered IS in relation to their expectations for such performance. A short trial explored whether, when overt measurement of expectations was concurrent with the measurement of perceptions, a more revealing appraisal of delivered IS quality was provided than when perceptions alone were measured. Results revealed a difference between the two measures. Using the New IS Success Model as the foundation, and drawing upon the related theoretical and empirical research, an instrument was developed to measure the quality/effectiveness of delivered IS applications. Four trials of this instrument, QUALIT, are documented. Analysis of results from preliminary trials indicates promise in terms of business value: the instrument is simple to administer and has the capacity to pinpoint areas of weakness. The research related to the respecification of the New IS Success Model and the associated empirical studies, including the development of QUALIT, have both contributed to the development of theory about IS Effectiveness. More precisely, my research has reviewed the components of an information system, the dimensions comprising these components and the indicators of each, and, based upon these findings, formulated an instrument by which to measure the effectiveness of a delivered IS.

