Low-Cost Air Quality Monitoring Tools: From Research to Practice (A Workshop Summary).
In May 2017, a two-day workshop was held in Los Angeles (California, U.S.A.) to gather practitioners who work with low-cost sensors used to make air quality measurements. The community of practice included individuals from academia, industry, non-profit groups, community-based organizations, and regulatory agencies. The group gathered to share knowledge developed from a variety of pilot projects in hopes of advancing the collective knowledge about how best to use low-cost air quality sensors. Panel discussion topics included: (1) best practices for deployment and calibration of low-cost sensor systems, (2) data standardization efforts and database design, (3) advances in sensor calibration, data management, and data analysis and visualization, and (4) lessons learned from research/community partnerships to encourage purposeful use of sensors and create change/action. Panel discussions summarized knowledge advances and project successes while also highlighting the questions, unresolved issues, and technological limitations that still remain within the low-cost air quality sensor arena.
AIS Research Database 1986 - 2004
The purpose of this research note is to advise accounting researchers about the availability of a downloadable database that contains accounting information systems (AIS) articles from 1986 to 2004. The author created this Microsoft Access database in order to disseminate knowledge about these AIS articles to fellow researchers. The database is fully searchable and contains 536 AIS articles and 243 other items from 1986 (or each journal's initial year of publication) through the most recent publication in 2004 for the following three journals: International Journal of Accounting Information Systems (IJAIS), formerly Advances in Accounting Information Systems (AiAIS) (1992 – May 2004); Journal of Information Systems (JIS) (Fall 1986 – Spring 2004); and Review of Business Information Systems (RBIS), formerly Review of Accounting Information Systems (RAIS) (Winter 1997 – Summer 2004).
Content Based Image Retrieval System Using NOHIS-tree
Content-based image retrieval (CBIR) has been one of the most important
research areas in computer vision. It is a widely used method for searching
images in huge databases. In this paper we present a CBIR system called
NOHIS-Search. The system is based on the indexing technique NOHIS-tree. The two
phases of the system are described and the performance of the system is
illustrated with the image database ImagEval. The NOHIS-Search system was
compared to two other CBIR systems: the first using the PDDP indexing
algorithm and the second using sequential search. Results show that
NOHIS-Search outperforms both other systems.
Comment: 6 pages, 10th International Conference on Advances in Mobile
Computing & Multimedia (MoMM2012)
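The sequential-search baseline mentioned in the abstract can be sketched as an exhaustive nearest-neighbour scan over image feature vectors. This is an illustrative sketch only, with invented image ids and toy 3-dimensional features; it is not the NOHIS-tree index itself, whose structure the paper describes.

```python
import math

def euclidean(a, b):
    # Distance between two feature vectors of equal length.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sequential_search(query, database, k=3):
    """Exhaustive scan: compare the query feature vector against every
    indexed image and return the k closest image ids.
    `database` maps image ids to feature vectors."""
    ranked = sorted(database.items(), key=lambda item: euclidean(query, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

# Toy 3-dimensional colour-histogram-style features (hypothetical data).
db = {
    "img_a": [0.9, 0.1, 0.0],
    "img_b": [0.1, 0.8, 0.1],
    "img_c": [0.85, 0.1, 0.05],
}
print(sequential_search([0.88, 0.1, 0.02], db, k=2))  # closest first: img_a, img_c
```

Tree-based indexes such as NOHIS-tree exist precisely to avoid this O(n) scan on huge databases.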
Image mining: trends and developments
[Abstract]: Advances in image acquisition and storage technology have led to tremendous growth in very large and detailed image databases. These images, if analyzed, can reveal useful information to the human users. Image mining deals with the extraction of implicit knowledge, image data relationships, or other patterns not explicitly stored in the images. Image mining is more than just an extension of data mining to the image domain. It is an interdisciplinary endeavor that draws upon expertise in computer vision, image processing, image retrieval, data mining, machine learning, databases, and artificial intelligence. In this paper, we will examine the research issues in image mining and current developments in image mining, particularly image mining frameworks and state-of-the-art techniques and systems. We will also identify some future research directions for image mining.
Software tools for stochastic programming: A Stochastic Programming Integrated Environment (SPInE)
SP models combine the paradigm of dynamic linear programming with
modelling of random parameters, providing optimal decisions which hedge
against future uncertainties. Advances in hardware as well as software
techniques and solution methods have made SP a viable optimisation tool.
We identify a growing need for modelling systems which support the creation
and investigation of SP problems. Our SPInE system integrates a number of
components which include a flexible modelling tool (based on stochastic
extensions of the algebraic modelling languages AMPL and MPL), stochastic
solvers, as well as special purpose scenario generators and database tools.
We introduce an asset/liability management model and illustrate how SPInE
can be used to create and process this model as a multistage SP application.
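The decision-under-uncertainty setting the abstract describes can be illustrated with a minimal scenario-based two-stage model: a here-and-now decision is evaluated against its expected recourse cost over discrete scenarios. All quantities and cost parameters below are invented for illustration; SPInE itself works with AMPL/MPL formulations and dedicated stochastic solvers, not this brute-force Python sketch.

```python
def expected_cost(order, scenarios, unit_cost=1.0, shortage_penalty=3.0, salvage=0.4):
    """First-stage decision `order`; each scenario is (probability, demand).
    Recourse: a penalty per unit short, minus salvage per unit left over."""
    cost = unit_cost * order
    for prob, demand in scenarios:
        short = max(demand - order, 0)
        over = max(order - demand, 0)
        cost += prob * (shortage_penalty * short - salvage * over)
    return cost

# Hypothetical demand outcomes with their probabilities.
scenarios = [(0.3, 50), (0.5, 80), (0.2, 120)]

# Enumerate first-stage decisions and pick the one hedging best
# against all scenarios (a real SP solver would optimise this directly).
best = min(range(151), key=lambda q: expected_cost(q, scenarios))
print(best, round(expected_cost(best, scenarios), 2))
```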
Big Data Model Simulation on a Graph Database for Surveillance in Wireless Multimedia Sensor Networks
Sensors are present in various forms all around the world such as mobile
phones, surveillance cameras, smart televisions, intelligent refrigerators and
blood pressure monitors. Usually, most of the sensors are a part of some other
system with similar sensors that compose a network. One such network,
composed of millions of sensors connected to the Internet, is called the
Internet of Things (IoT). With advances in wireless communication
technologies, multimedia sensors and their networks are expected to be major
components in IoT. Many studies have already been done on wireless multimedia
sensor networks in diverse domains like fire detection, city surveillance,
early warning systems, etc. All those applications position sensor nodes and
collect their data for a long time period with real-time data flow, which is
considered as big data. Big data may be structured or unstructured and needs to
be stored for further processing and analyzing. Analyzing multimedia big data
is a challenging task requiring a high-level modeling to efficiently extract
valuable information/knowledge from data. In this study, we propose a big
database model based on a graph database model for handling data generated by
wireless multimedia sensor networks. We introduce a simulator to generate
synthetic data and to store and query big data using the graph model as a big
database. For this purpose, we evaluate the well-known graph-based NoSQL
databases, Neo4j and OrientDB, and a relational database, MySQL. We have run
a number of query experiments on our implemented simulator to show which
database system(s) are efficient and scalable for surveillance in wireless
multimedia sensor networks.
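The graph modelling idea in the abstract, storing sensors and observations as nodes and relationships and answering surveillance questions by traversal, can be sketched with a toy in-memory property graph. Node and edge labels below are invented for illustration; the study itself uses Neo4j, OrientDB, and MySQL, not this sketch.

```python
class Graph:
    """A minimal property graph: labelled nodes and labelled edges."""

    def __init__(self):
        self.nodes = {}   # node id -> property dict
        self.edges = []   # (source, label, target, property dict)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def neighbours(self, src, label):
        # Traversal step: follow edges with the given label from `src`.
        return [dst for s, l, dst, _ in self.edges if s == src and l == label]

g = Graph()
g.add_node("cam1", kind="camera", area="gate")
g.add_node("cam2", kind="camera", area="yard")
g.add_node("obj7", kind="vehicle")
g.add_edge("cam1", "OBSERVED", "obj7", frame=120)
g.add_edge("cam2", "OBSERVED", "obj7", frame=145)

# Surveillance query: which cameras observed object obj7?
observers = [n for n in g.nodes
             if g.nodes[n].get("kind") == "camera"
             and "obj7" in g.neighbours(n, "OBSERVED")]
print(observers)  # ['cam1', 'cam2']
```

In a graph database this query is a single pattern match over the OBSERVED relationship, whereas a relational schema would typically need a join between sensor and observation tables.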
BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments
Advances in sequencing techniques have led to exponential growth in
biological data, demanding the development of large-scale bioinformatics
experiments. Because these experiments are computation- and data-intensive,
they require high-performance computing (HPC) techniques and can benefit from
specialized technologies such as Scientific Workflow Management Systems (SWfMS)
and databases. In this work, we present BioWorkbench, a framework for managing
and analyzing bioinformatics experiments. This framework automatically collects
provenance data, including both performance data from workflow execution and
data from the scientific domain of the workflow application. Provenance data
can be analyzed through a web application that abstracts a set of queries to
the provenance database, simplifying access to provenance information. We
evaluate BioWorkbench using three case studies: SwiftPhylo, a phylogenetic tree
assembly workflow; SwiftGECKO, a comparative genomics workflow; and RASflow, a
RASopathy analysis workflow. We analyze each workflow from both computational
and scientific domain perspectives, by using queries to a provenance and
annotation database. Some of these queries are available as a pre-built feature
of the BioWorkbench web application. Through the provenance data, we show that
the framework is scalable and achieves high performance, reducing the case
studies' execution time by up to 98%. We also show how the application of
machine learning techniques can enrich the analysis process.
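The provenance queries the abstract describes, aggregating workflow performance records stored in a database, can be sketched with an in-memory SQLite store. The table and column names below are invented; BioWorkbench's actual provenance schema may differ.

```python
import sqlite3

# A minimal provenance store: one row per task execution.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE task_run (
    workflow TEXT, task TEXT, duration_s REAL)""")
conn.executemany(
    "INSERT INTO task_run VALUES (?, ?, ?)",
    [("SwiftPhylo", "align", 120.0),
     ("SwiftPhylo", "build_tree", 300.0),
     ("SwiftGECKO", "compare", 90.0)])

# Provenance query: total execution time per workflow.
rows = conn.execute(
    "SELECT workflow, SUM(duration_s) FROM task_run "
    "GROUP BY workflow ORDER BY workflow").fetchall()
print(rows)  # [('SwiftGECKO', 90.0), ('SwiftPhylo', 420.0)]
```

A web front end like the one described can expose such aggregate queries as pre-built reports, so users never write SQL against the provenance database directly.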
Image mining: issues, frameworks and techniques
[Abstract]: Advances in image acquisition and storage technology have led to tremendous growth in significantly large and detailed image databases. These images, if analyzed, can reveal useful information to the human users. Image mining deals with the extraction of implicit knowledge, image data relationship, or other patterns not explicitly stored in the images. Image mining is more than just an extension of data mining to image domain. It is an
interdisciplinary endeavor that draws upon expertise in
computer vision, image processing, image retrieval, data
mining, machine learning, database, and artificial
intelligence. Despite the development of many
applications and algorithms in the individual research
fields cited above, research in image mining is still in its infancy. In this paper, we will examine the research issues in image mining and current developments in image mining, particularly image mining frameworks and state-of-the-art techniques and systems. We will also identify some future research directions for image mining at the end of this paper.
KIIT Digital Library: An open hypermedia Application
The massive use of Web technologies has spurred a new revolution in information storage and retrieval. It has always been an issue whether to incorporate hyperlinks embedded in a document or to store them separately in a link base. Research effort has been concentrated on the development of link services that enable hypermedia functionality to be integrated into the general computing environment and allow linking from all tools on the browser or desktop. The KIIT digital library is such an application: it focuses mainly on the architecture and protocols of Open Hypermedia Systems (OHS), providing on-line document authoring, browsing, cataloguing, searching and updating features. The WWW needs fundamentally new frameworks and concepts to support new search and indexing functionality. This is driven by the frequent use of digital archives and the need to maintain huge amounts of data and documents. These digital materials range from electronic versions of books and journals offered by traditional publishers to manuscripts, photographs, maps, sound recordings and similar materials digitized from libraries' own special collections to new electronic scholarly and scientific databases developed through the collaboration of researchers, computer and information scientists, and librarians. Metadata in catalogue systems are an indispensable tool to find information and services in networks. Technological advances provide new opportunities to facilitate the process of collecting and maintaining metadata and to facilitate using catalogue systems. The overall objective is to make the best use of catalogue systems. Information systems such as the World Wide Web, Digital Libraries, inventories of satellite images and other repositories contain more data than ever before, are globally distributed, easy to use and, therefore, become accessible to huge, heterogeneous user groups.
For the KIIT Digital Library, we have used the Resource Description Framework (RDF) and Dublin Core (DC) standards to incorporate metadata. Overall, the KIIT digital library provides electronic access to information in many different forms. Recent technological advances make the storage and transmission of digital information possible. This project designs and implements a cataloguing system for the digital library suitable for storing, indexing, and retrieving information and providing that information across the Internet. The goal is to allow users to quickly search indices to locate segments of interest and to view and manipulate these segments on their remote computers.
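As a sketch of the RDF/Dublin Core cataloguing approach the project mentions, a small metadata record can be emitted using the standard RDF and DC element namespaces. The resource URI and field values below are placeholders, not records from the KIIT catalogue.

```python
import xml.etree.ElementTree as ET

# Standard namespace URIs for RDF and the Dublin Core element set.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

# An RDF description of one catalogued item, carrying DC elements.
root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "http://example.org/item/1"})
ET.SubElement(desc, f"{{{DC}}}title").text = "Sample digitized manuscript"
ET.SubElement(desc, f"{{{DC}}}creator").text = "Unknown"
ET.SubElement(desc, f"{{{DC}}}format").text = "image/tiff"

record = ET.tostring(root, encoding="unicode")
print(record)
```

Records in this form can be harvested and indexed by the cataloguing system, since every field is identified by a well-known namespace rather than an application-specific tag.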