
    Location Gathering: An Evaluation of Smartphone-Based Geographic Mobile Field Data Collection Hardware and Applications

    Mobile field spatial data collection is the act of gathering attribute data, including spatial position, about features in a study area. A common method of field data collection is to use a handheld computing device connected to a global navigation satellite system (GNSS) receiver, with attribute data entered directly into a database table. The market for mobile data collection systems was formerly dominated by bulky positioning systems and highly specialized software. However, recent years have seen the emergence and widespread adoption of highly customizable and user-friendly smartphones and tablets. In this research, smartphone devices and smartphone data collection applications were tested against a conventional survey-grade field data collection system to assess the capabilities and possible use cases of each. The test consisted of an evaluation of the accuracy and precision of several mobile devices, followed by a usability analysis of several contemporary data collection applications for the Android operating system. The results of the experiment showed that mobile devices and applications are still less capable than dedicated conventional data collection systems, although the performance gap is shrinking over time. The use cases for mobile devices as data collection systems are currently limited to general use and small to mid-size projects, but future development promises expanding capability.
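
    The accuracy-and-precision portion of such a test reduces to comparing logged position fixes against a surveyed reference point: horizontal root-mean-square error against the truth measures accuracy, while the spread of the fixes about their own mean measures precision. A minimal sketch of that computation, with hypothetical fixes in projected (easting, northing) coordinates:

        import math

        def accuracy_precision(fixes, reference):
            """Summarize logged GNSS fixes against a surveyed control point.

            fixes     -- list of (easting, northing) tuples in meters
            reference -- (easting, northing) of the surveyed control point
            Returns horizontal RMSE (accuracy) and the standard deviation
            of the fixes about their mean (precision), both in meters.
            """
            n = len(fixes)
            # Accuracy: root-mean-square horizontal error relative to truth.
            rmse = math.sqrt(sum((e - reference[0]) ** 2 + (h - reference[1]) ** 2
                                 for e, h in fixes) / n)
            # Precision: spread of the fixes about their own mean position.
            mean_e = sum(e for e, _ in fixes) / n
            mean_h = sum(h for _, h in fixes) / n
            spread = math.sqrt(sum((e - mean_e) ** 2 + (h - mean_h) ** 2
                                   for e, h in fixes) / n)
            return rmse, spread

        # Hypothetical fixes from one device occupying a known control point.
        fixes = [(500001.2, 4649997.8), (500000.6, 4649998.9), (500002.1, 4649999.4)]
        print(accuracy_precision(fixes, (500000.0, 4650000.0)))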

    Identifying the role of labor markets for monetary policy in an estimated DSGE model

    We focus on a quantitative assessment of rigid labor markets in an environment of stable monetary policy. We ask how wages and labor market shocks feed into the inflation process and derive monetary policy implications. To that end, we structurally model matching frictions and rigid wages in line with an optimizing rationale in a New Keynesian closed-economy DSGE model. We estimate the model using Bayesian techniques on German data from the late 1970s to the present. Given the pre-euro heterogeneity in wage bargaining, we take this as the best available approximation for modelling monetary policy in the presence of labor market frictions in the current European regime. In our framework, we find that labor market structure is of prime importance for the evolution of the business cycle, and for monetary policy in particular. Yet shocks originating in the labor market itself may contain only limited information for the conduct of stabilization policy. JEL Classification: E32, E52, J64, C11. Keywords: bargaining, Bayesian estimation, labor market, wage rigidity.
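
    Concretely, the matching frictions and rigid wages such models build in are usually written as a Cobb-Douglas matching function plus a partial-adjustment wage rule. A stylized sketch in the standard notation of this literature (illustrative, not the paper's exact specification):

        % Matching frictions: hires m_t from unemployment u_t and vacancies v_t
        m_t = \sigma_m \, u_t^{\xi} \, v_t^{1-\xi}, \qquad 0 < \xi < 1,
        % with implied job-finding and vacancy-filling rates
        p_t = m_t / u_t, \qquad q_t = m_t / v_t,
        % and wage rigidity as partial adjustment toward the Nash-bargained wage w_t^{*}
        w_t = w_{t-1}^{\gamma} \, \left( w_t^{*} \right)^{1-\gamma}, \qquad 0 \le \gamma < 1.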

    SQL pattern design, development & evaluation of its efficacy

    Databases provide the foundation of most software systems. This means that system developers will inevitably need to write code to query these databases. The de facto query language is SQL and this, consequently, is the language primarily taught by higher education institutions. There is some evidence that learners find it hard to master SQL. These issues and concerns were confirmed by reviewing the literature and establishing the scope and context. The literature review allowed extraction of the common issues impacting SQL acquisition. The identified issues were confirmed and justified by the empirical evidence reported here. A model of SQL learning was derived. This framework comprises an SQL learning taxonomy and a model of SQL problem solving, and incorporates cross-cutting factors. The framework is used as a map for the proposed instructional design. The design employed pattern concepts and the related research to structure SQL knowledge as SQL patterns. Also presented are details on how SQL patterns could be organized and presented. A strong theoretical background (checklist, component-level design) was employed to organize, present, and facilitate the SQL pattern collection. The evaluation of the SQL patterns yielded new insights, such as novice problem-solving strategies and the types of errors students made in attempting to solve SQL problems. SQL patterns, as proposed as a result of this research, yielded a statistically significant improvement in novice performance in writing SQL queries. A longitudinal field study with a large number of learners in a flexible environment should be conducted to confirm the findings of this research.
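
    The abstract does not reproduce the patterns themselves, but a named pattern of the kind such a collection might contain can be sketched. The example below is hypothetical (all table and column names invented): it shows the "aggregate per group, then filter on the aggregate" pattern, a known novice stumbling block because WHERE cannot reference aggregates, as a runnable sqlite3 script:

        import sqlite3

        # In-memory database with a hypothetical orders table.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders (customer TEXT, amount REAL);
            INSERT INTO orders VALUES
                ('alice', 120.0), ('alice', 80.0),
                ('bob', 30.0), ('carol', 200.0);
        """)

        # Pattern: aggregate per group, then filter on the aggregate.
        # Novices often write WHERE SUM(amount) > 100, which is invalid SQL;
        # the pattern teaches that filters on aggregates belong in HAVING.
        query = """
            SELECT customer, SUM(amount) AS total
            FROM orders
            GROUP BY customer
            HAVING SUM(amount) > 100
        """
        for row in conn.execute(query):
            print(row)  # ('alice', 200.0) and ('carol', 200.0)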

    Impacts of Data Synthesis: A Metric for Quantifiable Data Standards and Performances

    Clinical data analysis could lead to breakthroughs. However, clinical data contain sensitive information about participants that could be exploited for unethical activities, such as blackmailing, identity theft, mass surveillance, or social engineering. Data anonymization is a standard step during data collection, before sharing, to reduce the risk of disclosure. However, conventional data anonymization techniques are not foolproof and also hinder the opportunity for personalized evaluations. Much research has been done on synthetic data generation using generative adversarial networks and many other machine learning methods; however, these methods are either not free to use or are limited in capacity. This study evaluates the performance of an emerging tool named synthpop, an R package that produces synthetic data as an alternative approach to data anonymization. This paper establishes data standards derived from the original data set, based on the utility and quality of information, and measures variations in the synthetic data set to evaluate the performance of the data synthesis process. The methods to assess the utility of the synthetic data set can be broadly divided into two approaches: general utility and specific utility. General utility assesses whether the synthetic data have overall similarities in statistical properties and multivariate relationships with the original data set. Specific utility, in turn, assesses the similarity of a fitted model's performance on the synthetic data to its performance on the original data. The quality of information is assessed by comparing variations in entropy bits and in mutual information with response variables within the original and synthetic data sets. The study reveals that the synthetic data passed all utility tests with statistically non-significant differences, and not only preserved the utilities but also preserved the complexity of the original data set according to the data standard established in this study. Therefore, synthpop fulfills all the necessities and unfolds a wide range of opportunities for the research community, including easy data sharing and information protection.
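
    The entropy and mutual-information comparison described above can be sketched briefly. The example below is a hypothetical stand-in (synthpop itself is an R package, so the "synthetic" column here is simply simulated), showing how entropy in bits and predictor-response mutual information could be compared between an original and a synthetic data set:

        import numpy as np
        from scipy.stats import entropy
        from sklearn.metrics import mutual_info_score

        rng = np.random.default_rng(0)

        # Hypothetical categorical predictor x and binary response y from an
        # "original" data set, plus a synthetic stand-in with similar marginals.
        orig_x = rng.choice(["a", "b", "c"], size=1000, p=[0.5, 0.3, 0.2])
        orig_y = np.where(orig_x == "a",
                          rng.choice([0, 1], size=1000, p=[0.3, 0.7]),
                          rng.choice([0, 1], size=1000))
        synth_x = rng.choice(["a", "b", "c"], size=1000, p=[0.48, 0.32, 0.2])
        synth_y = rng.choice([0, 1], size=1000)

        def entropy_bits(values):
            # Shannon entropy of the empirical distribution, in bits.
            _, counts = np.unique(values, return_counts=True)
            return entropy(counts / counts.sum(), base=2)

        # Variations in entropy bits and in mutual information with the
        # response, original versus synthetic.
        print("entropy(x):", entropy_bits(orig_x), "vs", entropy_bits(synth_x))
        print("MI(x, y):  ", mutual_info_score(orig_x, orig_y),
              "vs", mutual_info_score(synth_x, synth_y))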