
    ITR/IM: Enabling the Creation and Use of GeoGrids for Next Generation Geospatial Information

    The objective of this project is to advance science in information management, focusing in particular on geospatial information. It addresses the development of concepts, algorithms, and system architectures to enable users on a grid to query, analyze, and contribute to multivariate, quality-aware geospatial information. The approach consists of three complementary research areas: (1) establishing a statistical framework for assessing geospatial data quality; (2) developing uncertainty-based query processing capabilities; and (3) supporting the development of space- and accuracy-aware adaptive systems for geospatial datasets. The results of this project will support the extension of the concept of the computational grid to facilitate ubiquitous access, interaction, and contributions of quality-aware next-generation geospatial information. By developing novel query processes as well as quality and similarity metrics, the project aims to enable the integration and use of large collections of dispersed information of varying quality and accuracy. This supports the evolution of a novel geocomputational paradigm, moving away from current standards-driven approaches toward an inclusive, adaptive system, with potential applications in mobile computing, bioinformatics, and geographic information systems. This experimental research is linked to educational activities in three different academic programs across the three participating sites. The outreach activities of this project include collaboration with U.S. federal agencies involved in geospatial data collection, an international partner (Brazil's National Institute for Space Research), and the organization of a 2-day workshop with the participation of U.S. and international experts.
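    The uncertainty-based query processing described above can be illustrated with a minimal sketch. The `Observation` record, its `accuracy_m` attribute, and the inverse-error weighting scheme are hypothetical illustrations, not the project's actual data model:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    lat: float
    lon: float
    value: float
    accuracy_m: float  # estimated positional error in metres (hypothetical attribute)

def quality_aware_query(observations, max_error_m):
    # Return only observations that satisfy the caller's accuracy requirement.
    return [o for o in observations if o.accuracy_m <= max_error_m]

def weighted_estimate(observations):
    # Combine values of varying quality, weighting each by its inverse error.
    weights = [1.0 / o.accuracy_m for o in observations]
    return sum(w * o.value for w, o in zip(weights, observations)) / sum(weights)

obs = [
    Observation(41.0, -73.9, 10.0, accuracy_m=1.0),    # high-quality survey point
    Observation(41.0, -73.9, 20.0, accuracy_m=4.0),    # consumer GPS reading
    Observation(41.0, -73.9, 30.0, accuracy_m=100.0),  # coarse, low-quality source
]
good = quality_aware_query(obs, max_error_m=5.0)  # drops the 100 m observation
estimate = weighted_estimate(good)                # 12.0: dominated by the 1 m point
```

    A query thus carries an accuracy requirement alongside its spatial predicate, so consumers of varying stringency can share one heterogeneous collection.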

    AwarNS: A framework for developing context-aware reactive mobile applications for health and mental health

    In recent years, interest and investment in health and mental health smartphone apps have grown significantly. However, this growth has not been matched by an increase in quality or by the incorporation of more advanced features in such applications. This can be explained by the expanding fragmentation of existing mobile platforms along with more restrictive privacy and battery consumption policies, with a consequently higher complexity of developing such smartphone applications. To help overcome these barriers, there is a need for robust, well-designed software development frameworks that are reliable, power-efficient and ethical with respect to data collection practices, and that support the sense-analyse-act paradigm typically employed in reactive mHealth applications. In this article, we present the AwarNS Framework, a context-aware modular software development framework for Android smartphones, which facilitates transparent, reliable, passive and active data sampling running in the background (sense), on-device and server-side data analysis (analyse), and context-aware just-in-time offline and online intervention capabilities (act). It is based on the principles of versatility, reliability, privacy, reusability, and testability. It offers built-in modules for capturing smartphone and associated wearable sensor data (e.g. IMU sensors, geolocation, Wi-Fi and Bluetooth scans, physical activity, battery level, heart rate), analysis modules for data transformation, selection and filtering, geofencing analysis, and machine learning regression and classification, and act modules for persistence and various notification deliveries. We describe the framework's design principles and architecture, explain its capabilities and implementation, and demonstrate its use through real-life case studies that implement various mobile interventions for different mental disorders in clinical practice.
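    The sense-analyse-act cycle described above can be sketched in a few lines. AwarNS itself targets Android; the Python below, with its toy geofence and fabricated sample values, is only a hypothetical illustration of the paradigm, not the framework's API:

```python
def sense():
    # sense: a passive background sample (fixed, fabricated reading)
    return {"lat": 40.4168, "lon": -3.7038, "heart_rate": 72}

def analyse(sample, centre=(40.4168, -3.7038), radius_deg=0.01):
    # analyse: a toy geofencing check against a clinically relevant place
    inside = (abs(sample["lat"] - centre[0]) <= radius_deg
              and abs(sample["lon"] - centre[1]) <= radius_deg)
    return {"inside_geofence": inside}

def act(analysis):
    # act: deliver a just-in-time intervention when the context calls for one
    return "deliver_notification" if analysis["inside_geofence"] else "no_action"

action = act(analyse(sense()))  # "deliver_notification"
```

    In a real deployment each stage would be a pluggable module (sensor capture, on-device or server-side analysis, notification delivery) chained by the framework rather than called directly.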

    Addressing and Presenting Quality of Satellite Data via Web-Based Services

    With the recent attention to climate change and the proliferation of remote-sensing data utilization, climate models and various environmental monitoring and protection applications have begun to rely increasingly on satellite measurements. Research users seek good-quality satellite data, with uncertainties and biases provided for each data point. However, different communities address remote sensing quality issues rather inconsistently and differently. We describe our attempt to systematically characterize, capture, and provision quality and uncertainty information as it applies to the NASA MODIS Aerosol Optical Depth data product. In particular, we note the semantic differences in quality/bias/uncertainty at the pixel, granule, product, and record levels, and outline various factors contributing to the uncertainty or error budget. Web-based science analysis and processing tools allow users to access, analyze, and generate visualizations of data while relieving them of directly managing complex data processing operations. These tools provide value by streamlining the data analysis process, but usually shield users from details of the data processing steps, algorithm assumptions, caveats, etc. Correct interpretation of the final analysis requires user understanding of how the data have been generated and processed and what potential biases, anomalies, or errors may have been introduced. By providing services that leverage data lineage provenance and domain expertise, expert systems can be built to aid the user in understanding data sources, processing, and the suitability for use of products generated by the tools. We describe our experiences developing a semantic, provenance-aware, expert-knowledge advisory system applied to the NASA Giovanni web-based Earth science data analysis tool as part of the ESTO AIST-funded Multi-sensor Data Synergy Advisor project.
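    Pixel-level quality screening of the kind discussed above can be sketched simply. MODIS aerosol retrievals do carry a per-pixel QA confidence flag (0 = no confidence up to 3 = very good), though the record layout below is a hypothetical simplification of the product format:

```python
def filter_by_quality(pixels, min_confidence=3):
    # Keep only retrievals whose QA confidence flag meets the user's threshold.
    return [p for p in pixels if p["qa_confidence"] >= min_confidence]

granule = [
    {"aod": 0.12, "qa_confidence": 3},  # very good retrieval
    {"aod": 0.45, "qa_confidence": 1},  # marginal retrieval: excluded
    {"aod": 0.30, "qa_confidence": 3},
]
trusted = filter_by_quality(granule)
mean_aod = sum(p["aod"] for p in trusted) / len(trusted)  # 0.21
```

    An advisory system's job is exactly to make such screening choices (and their effect on aggregate statistics) visible to the user instead of burying them inside the processing chain.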

    Lifelong-MonoDepth: Lifelong Learning for Multi-Domain Monocular Metric Depth Estimation

    With the rapid advancements in autonomous driving and robot navigation, there is a growing demand for lifelong learning models capable of estimating metric (absolute) depth. Lifelong learning approaches potentially offer significant cost savings in terms of model training, data storage, and collection. However, the quality of RGB images and depth maps is sensor-dependent, and depth maps in the real world exhibit domain-specific characteristics, leading to variations in depth ranges. These challenges limit existing methods to lifelong learning scenarios with small domain gaps and relative depth map estimation. To facilitate lifelong metric depth learning, we identify three crucial technical challenges that require attention: i) developing a model capable of addressing the depth scale variation through scale-aware depth learning, ii) devising an effective learning strategy to handle significant domain gaps, and iii) creating an automated solution for domain-aware depth inference in practical applications. Based on these considerations, in this paper we present i) a lightweight multi-head framework that effectively tackles the depth scale imbalance, ii) an uncertainty-aware lifelong learning solution that adeptly handles significant domain gaps, and iii) an online domain-specific predictor selection method for real-time inference. Through extensive numerical studies, we show that the proposed method achieves good efficiency, stability, and plasticity, leading the benchmarks by 8% to 15%.
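    The online domain-specific predictor selection can be sketched as picking, per input, the head that reports the lowest predictive uncertainty. The stand-in heads and scene labels below are fabricated for illustration, not the paper's actual network:

```python
def select_depth(x, heads):
    # Each head returns (metric_depth_values, uncertainty); at inference time
    # we trust the domain head that is least uncertain about this input.
    depth, _ = min((h(x) for h in heads), key=lambda pair: pair[1])
    return depth

# Two fabricated domain heads: indoor (metres 0-10) and outdoor (metres 0-80).
indoor = lambda x: ([2.0, 3.5], 0.1 if x == "indoor_scene" else 0.9)
outdoor = lambda x: ([25.0, 60.0], 0.1 if x == "outdoor_scene" else 0.9)

chosen = select_depth("indoor_scene", [indoor, outdoor])  # [2.0, 3.5]
```

    Because each head keeps its own depth range, scale-aware prediction and domain routing are decoupled: no single regressor has to cover both a 3 m room and an 80 m street.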

    Analysis of the Effectiveness of Tax Digitalization on the Awareness of MSME Taxpayers in Tegal Regency with Trust in the Government as a Moderating Variable

    Tax is a state revenue that plays an important role in development and is one of the elements of the APBN (the Indonesian state budget). The Tegal City Pratama Tax Service Office supports general administration by improving employee quality, hiring employees every year, and developing the use of e-filing applications. MSMEs in the region have experienced development and growth, with a total of 140,042 people in 2019 engaged in 17 types of business fields (Dinkop UMKM Central Java, 2019). This study aims to provide empirical evidence on the effect of the effectiveness of tax digitalization on taxpayer awareness, with trust in the government as a moderating variable. Data were collected by distributing questionnaires to obtain a sample of 74 respondents and were processed using SPSS 22. The quantitative data were analysed using simple linear regression. The results show a positive influence of the effectiveness of tax digitalization on taxpayer awareness, and a positive influence of trust in the government on the awareness of Tegal Regency MSME taxpayers.
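    The simple linear regression the study runs in SPSS can be sketched from first principles; the scores below are fabricated for illustration, not the study's survey data:

```python
def simple_linear_regression(x, y):
    # Ordinary least squares for y = b0 + b1 * x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Fabricated scores: perceived digitalization effectiveness vs. tax awareness.
digitalization = [1, 2, 3, 4, 5]
awareness = [2, 4, 6, 8, 10]
b0, b1 = simple_linear_regression(digitalization, awareness)  # (0.0, 2.0)
```

    A moderation analysis would extend this with an interaction term (digitalization x trust) and test whether its coefficient is significant.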

    Compression Ratio Learning and Semantic Communications for Video Imaging

    Camera sensors have been widely used in intelligent robotic systems. Developing camera sensors with high sensing efficiency has always been important for reducing power, memory, and other related resources. Inspired by recent success in programmable sensors and deep optics methods, we design a novel video compressed sensing system with spatially-variant compression ratios, which achieves higher imaging quality than existing snapshot compressed imaging methods at the same sensing cost. In this article, we also investigate data transmission methods for programmable sensors, where the performance of communication systems is evaluated by the reconstructed images or videos rather than by the transmission of the sensor data itself. Usually, different reconstruction algorithms are designed for applications in high dynamic range imaging, video compressive sensing, or motion deblurring. This task-aware property inspires a semantic communication framework for programmable sensors. In this work, a policy-gradient-based reinforcement learning method is introduced to achieve an explicit trade-off between the compression (or transmission) rate and the image distortion. Numerical results show the superiority of the proposed methods over existing baselines.
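    The policy-gradient trade-off between rate and distortion can be illustrated with a tiny REINFORCE-style bandit over candidate compression ratios. The toy distortion/rate models, the penalty weight `lam`, and the learning rates are all fabricated, not the paper's formulation:

```python
import math
import random

random.seed(0)
ratios = [4, 8, 16, 32]      # candidate compression ratios
prefs = [0.0] * len(ratios)  # policy logits, one per ratio
lam = 5.0                    # fabricated rate-penalty weight

def reward(r):
    # toy models: distortion grows with compression, rate shrinks with it
    return -(0.1 * r + lam / r)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

baseline = 0.0
for _ in range(3000):
    probs = softmax(prefs)
    a = random.choices(range(len(ratios)), probs)[0]  # sample an action
    adv = reward(ratios[a]) - baseline                # advantage vs. baseline
    baseline += 0.01 * (reward(ratios[a]) - baseline)
    for i in range(len(prefs)):                       # REINFORCE update
        prefs[i] += 0.1 * adv * ((1.0 if i == a else 0.0) - probs[i])

best = ratios[max(range(len(prefs)), key=prefs.__getitem__)]
```

    Under these toy models the policy learns to avoid extreme compression (ratio 32), whose distortion penalty dwarfs its rate savings; the same mechanism, with real distortion feedback, yields the explicit rate-distortion trade-off.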

    Interoperating Context Discovery Mechanisms

    Context-aware applications adapt their behaviour to the current situation of the user. This information, for instance user location and user availability, is called context information. Context is delivered by distributed context sources that need to be discovered before they can be used to retrieve context. Currently, multiple context discovery mechanisms exist, exhibiting heterogeneous capabilities (e.g. communication mechanisms and data formats), which can become available to context-aware applications at arbitrary moments during the application's lifespan. In this paper, we discuss a middleware mechanism that enables a (mobile) context-aware application to interoperate transparently with different context discovery mechanisms available at run-time. The goal of the proposed mechanism is to hide the heterogeneity and availability of context discovery mechanisms from context-aware applications, thereby facilitating their development.
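    A transparent interoperation layer of this kind can be sketched with the adapter pattern; the mechanism names and method signatures below are hypothetical illustrations, not the paper's middleware API:

```python
class ContextDiscoveryAdapter:
    # Common interface wrapped around each heterogeneous discovery mechanism.
    def discover(self, context_type):
        raise NotImplementedError

class SlpAdapter(ContextDiscoveryAdapter):
    def discover(self, context_type):
        return [f"slp://{context_type}-source"]

class UpnpAdapter(ContextDiscoveryAdapter):
    def discover(self, context_type):
        return [f"upnp://{context_type}-source"]

class InteroperationLayer:
    # Hides which mechanisms exist and when they become available.
    def __init__(self):
        self._adapters = []

    def mechanism_appeared(self, adapter):
        # Mechanisms may appear at any moment of the application's lifespan.
        self._adapters.append(adapter)

    def discover(self, context_type):
        found = []
        for adapter in self._adapters:
            found.extend(adapter.discover(context_type))
        return found

layer = InteroperationLayer()
layer.mechanism_appeared(SlpAdapter())
layer.mechanism_appeared(UpnpAdapter())
sources = layer.discover("location")  # results from both mechanisms
```

    The application only ever calls `discover`; which protocols answered, and when they came online, stays hidden behind the layer.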

    Model Based Development of Quality-Aware Software Services

    Modelling languages and development frameworks give support for the functional and structural description of software architectures. But quality-aware applications require languages that allow expressing QoS as a first-class concept during architecture design and service composition, and extending existing tools and infrastructures with support for modelling, evaluating, managing and monitoring QoS aspects. In addition to its functional behaviour and internal structure, the developer of each service must consider the fulfilment of its quality requirements. If the service is flexible, the output quality depends both on input quality and on available resources (e.g., amounts of CPU execution time and memory). From the software engineering point of view, the modelling of quality-aware requirements and architectures requires modelling support for the description of quality concepts, support for the analysis of quality properties (e.g. model checking and consistency of quality constraints, assembly of quality), and tool support for the transition from quality requirements to quality-aware architectures, and from quality-aware architectures to service run-time infrastructures. Quality management in run-time service infrastructures must support handling quality concepts dynamically. QoS-aware modelling frameworks and QoS-aware runtime management infrastructures must evolve together to achieve their integration.
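    Treating QoS as a first-class concept during composition can be sketched as a feasibility check: select the cheapest service variant whose declared QoS meets every requirement within the available resources. The attribute names and figures below are hypothetical, not a real QoS modelling language:

```python
def select_variant(variants, required_qos, cpu_budget_ms):
    # Keep variants that satisfy every QoS requirement within the resource
    # budget, then pick the cheapest; None means the composition is infeasible.
    feasible = [
        v for v in variants
        if v["cpu_ms"] <= cpu_budget_ms
        and all(v["qos"].get(k, 0) >= need for k, need in required_qos.items())
    ]
    return min(feasible, key=lambda v: v["cost"]) if feasible else None

variants = [
    {"name": "hi-res", "qos": {"accuracy": 0.95}, "cpu_ms": 80, "cost": 10},
    {"name": "mid",    "qos": {"accuracy": 0.85}, "cpu_ms": 40, "cost": 4},
    {"name": "low",    "qos": {"accuracy": 0.60}, "cpu_ms": 10, "cost": 1},
]
choice = select_variant(variants, {"accuracy": 0.80}, cpu_budget_ms=50)  # "mid"
```

    A run-time infrastructure would re-run such a check whenever input quality or resource availability changes, which is exactly where the modelling framework and the management infrastructure have to agree on one QoS vocabulary.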