120,805 research outputs found

    Dynamic Geospatial Spectrum Modelling: Taxonomy, Options and Consequences

    Much of the research in Dynamic Spectrum Access (DSA) has focused on opportunistic access in the temporal domain. While this has been quite useful in establishing the technical feasibility of DSA systems, it has missed large sections of the overall DSA problem space. In this paper, we argue that the spatio-temporal operating context of specific environments matters to the selection of the appropriate technology for learning context information. We identify twelve potential operating environments and compare four context-awareness approaches (on-board sensing, databases, sensor networks, and cooperative sharing) across these environments. Since our point of view is overall system cost and efficiency, this analysis is useful for regulators whose objectives are reducing system costs and enhancing system efficiency. We conclude that regulators should pay attention to the operating environment of DSA systems when determining which approaches to context learning to encourage.
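    As a rough illustration of the kind of comparison the abstract argues regulators should make, the sketch below scores the four context-awareness approaches against two operating environments and picks the lowest-cost option for each. The environment names and cost figures are hypothetical placeholders for illustration only, not the paper's taxonomy or results.

```python
# Illustrative only: the environments and cost scores below are hypothetical
# placeholders, not values from the paper. The four approaches match the
# context-awareness options compared in the abstract.

# Hypothetical relative system-cost scores (lower is cheaper) per environment.
COST_BY_ENVIRONMENT = {
    "dense urban, fast-varying occupancy": {
        "on-board sensing": 3, "database": 5, "sensor network": 2, "cooperative sharing": 4,
    },
    "rural, slow-varying occupancy": {
        "on-board sensing": 4, "database": 1, "sensor network": 5, "cooperative sharing": 3,
    },
}

def cheapest_approach(environment: str) -> str:
    """Return the lowest-cost context-awareness approach for a given environment."""
    costs = COST_BY_ENVIRONMENT[environment]
    return min(costs, key=costs.get)

for env in COST_BY_ENVIRONMENT:
    print(f"{env}: {cheapest_approach(env)}")
```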

    Cross-Domain Image Retrieval with Attention Modeling

    With the proliferation of e-commerce websites and the ubiquity of smartphones, cross-domain image retrieval, using images taken by smartphones as queries to search for products on e-commerce websites, is emerging as a popular application. One challenge of this task is locating the attention of both the query and database images. In particular, database images, e.g. of fashion products, on e-commerce websites are typically displayed with other accessories, and the images taken by users contain noisy backgrounds and large variations in orientation and lighting. Consequently, their attention is difficult to locate. In this paper, we exploit the rich tag information available on e-commerce websites to locate the attention of database images. For query images, we use each candidate image in the database as the context to locate the query attention. Novel deep convolutional neural network architectures, namely TagYNet and CtxYNet, are proposed to learn the attention weights and then extract effective representations of the images. Experimental results on public datasets confirm that our approaches significantly improve on existing methods in terms of retrieval accuracy and efficiency. Comment: 8 pages with an extra reference page.
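    The abstract does not specify the TagYNet/CtxYNet architectures, so the sketch below shows only the general idea of tag-guided spatial attention: project a tag embedding into the image feature space, weight the spatial locations of a convolutional feature map by their similarity to it, and pool. All layer sizes and the wiring are assumptions for illustration.

```python
# A minimal sketch of tag-guided spatial attention, in the spirit of using tag
# information to weight image regions. Dimensions and wiring are assumptions;
# this is not the TagYNet/CtxYNet architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TagGuidedAttention(nn.Module):
    def __init__(self, feat_dim: int, tag_dim: int):
        super().__init__()
        self.tag_proj = nn.Linear(tag_dim, feat_dim)  # project tags into feature space

    def forward(self, feat_map: torch.Tensor, tag_emb: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, C, H, W) convolutional features; tag_emb: (B, tag_dim)
        B, C, H, W = feat_map.shape
        query = self.tag_proj(tag_emb)                       # (B, C)
        feats = feat_map.view(B, C, H * W)                   # (B, C, HW)
        scores = torch.einsum("bc,bcn->bn", query, feats)    # similarity per location
        weights = F.softmax(scores, dim=-1)                  # attention over HW locations
        pooled = torch.einsum("bn,bcn->bc", weights, feats)  # attention-weighted pooling
        return pooled                                        # (B, C) image representation

# Example usage with random tensors:
attn = TagGuidedAttention(feat_dim=512, tag_dim=300)
rep = attn(torch.randn(2, 512, 7, 7), torch.randn(2, 300))
```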

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    A framework for utility data integration in the UK

    In this paper we investigate various factors which prevent utility knowledge from being fully exploited, and we suggest that integration techniques can be applied to improve the quality of utility records. The paper proposes a framework which supports knowledge and data integration. The framework supports utility integration at two levels: the schema level and the data level. Schema-level integration ensures that a single, integrated geospatial data set is available for utility enquiries. Data-level integration improves utility data quality by reducing inconsistency, duplication and conflicts. Moreover, the framework is designed to preserve the autonomy and distribution of utility data. The ultimate aim of the research is to produce an integrated representation of underground utility infrastructure in order to gain more accurate knowledge of the buried services. It is hoped that this approach will enable us to understand the various problems associated with utility data and to suggest some potential techniques for resolving them.
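    To make the two integration levels concrete, the sketch below maps two providers' records onto a shared schema (schema level) and then merges duplicates while flagging conflicting attribute values (data level). The provider names, field names, and matching rule are hypothetical; the abstract does not specify the framework's actual schemas or rules.

```python
# Illustrative sketch of schema-level and data-level integration with
# hypothetical providers, field names, and records.

# Schema-level integration: map each provider's field names onto a shared schema.
SCHEMA_MAPPINGS = {
    "water_co":   {"pipe_id": "asset_id", "x": "easting", "y": "northing", "mat": "material"},
    "telecom_co": {"duct_ref": "asset_id", "east": "easting", "north": "northing", "type": "material"},
}

def to_common_schema(provider, record):
    """Rename a provider's fields into the shared geospatial schema."""
    mapping = SCHEMA_MAPPINGS[provider]
    return {common: record[src] for src, common in mapping.items()}

def merge_records(records):
    """Data-level integration: collapse duplicates per asset and flag conflicts."""
    merged = {}
    for rec in records:
        key = rec["asset_id"]
        if key not in merged:
            merged[key] = dict(rec)
        else:
            for field, value in rec.items():
                if merged[key].get(field) not in (value, None):
                    merged[key][field] = f"CONFLICT({merged[key][field]} | {value})"
    return merged

a = to_common_schema("water_co", {"pipe_id": "W-17", "x": 530012, "y": 180455, "mat": "PVC"})
b = to_common_schema("telecom_co", {"duct_ref": "W-17", "east": 530012, "north": 180455, "type": "PE"})
print(merge_records([a, b]))  # one record for W-17, material flagged as a conflict
```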

    Mapping for the Masses: Accessing Web 2.0 through Crowdsourcing

    Get PDF
    The authors describe how they are harnessing the power of Web 2.0 technologies to create new approaches to collecting, mapping, and sharing geocoded data. They begin with GMapCreator, which lets users fashion new maps using Google Maps as a base. They then describe MapTube, which enables users to archive maps, and demonstrate how it can be used in a variety of contexts to share map information, to put existing maps into a form that can be shared, and to create new maps from the bottom up using a combination of crowdcasting, crowdsourcing, and traditional broadcasting. The authors conclude by arguing that such tools are helping to define a neogeography that is essentially "mapping for the masses," while noting that there are many issues of quality, accuracy, copyright, and trust that will influence the impact of these tools on map-based communication.
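    As a small illustration of "collecting, mapping, and sharing geocoded data", the sketch below pools crowdsourced point submissions into a single shareable GeoJSON layer. It is illustrative only and does not reflect how GMapCreator or MapTube are actually implemented; the field names are made up.

```python
# Illustrative sketch: turn crowdsourced, geocoded submissions into one
# shareable GeoJSON FeatureCollection. Not the GMapCreator/MapTube design.
import json

def to_geojson_layer(submissions):
    """Convert a list of {"lon", "lat", ...} submissions into a GeoJSON layer."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
            "properties": {"label": s.get("label", ""), "contributor": s.get("user", "anonymous")},
        }
        for s in submissions
    ]
    return json.dumps({"type": "FeatureCollection", "features": features}, indent=2)

print(to_geojson_layer([
    {"lon": -0.1276, "lat": 51.5072, "label": "sample point", "user": "volunteer_1"},
]))
```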