19,678 research outputs found
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
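The abstract above surveys decompositions such as Tucker and HOSVD. As a minimal illustration of the idea (not the authors' algorithm), the sketch below computes a higher-order SVD of a 3-way array with plain numpy: each factor matrix comes from the SVD of a mode-n unfolding, and the core tensor is the projection of the data onto those factor subspaces. The unfolding convention and tensor sizes here are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor: contract T with each factor's transpose along mode n.
    G = T
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, G, axes=(1, n)), 0, n)
    return G, U

def reconstruct(G, U):
    # Multiply the core back by each factor matrix along its mode.
    T = G
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, T, axes=(1, n)), 0, n)
    return T

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
G, U = hosvd(T, (4, 5, 6))  # full multilinear ranks: lossless
err = np.linalg.norm(reconstruct(G, U) - T)
```

With full ranks the reconstruction is exact up to floating point; truncating `ranks` gives the compressed Tucker representation the abstract refers to.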
Guidelines For Pursuing and Revealing Data Abstractions
Many data abstraction types, such as networks or set relationships, remain
unfamiliar to data workers beyond the visualization research community. We
conduct a survey and series of interviews about how people describe their data,
either directly or indirectly. We refer to the latter as latent data
abstractions. We conduct a Grounded Theory analysis that (1) interprets the
extent to which latent data abstractions exist, (2) reveals the far-reaching
effects that the interventionist pursuit of such abstractions can have on data
workers, (3) describes why and when data workers may resist such explorations,
and (4) suggests how to take advantage of opportunities and mitigate risks
through transparency about visualization research perspectives and agendas. We
then use the themes and codes discovered in the Grounded Theory analysis to
develop guidelines for data abstraction in visualization projects. To continue
the discussion, we make our dataset open along with a visual interface for
further exploration.
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.
Grids and the Virtual Observatory
We consider several projects from astronomy that benefit from the Grid paradigm and
associated technology, many of which involve either massive datasets or the federation
of multiple datasets. We cover image computation (mosaicking, multi-wavelength
images, and synoptic surveys); database computation (representation through XML,
data mining, and visualization); and semantic interoperability (publishing, ontologies,
directories, and service descriptions).
Enhancement and evaluation of Skylab photography for potential land use inventories, part 1
The author has identified the following significant results. Three sites were evaluated for land use inventory: Finger Lakes - Tompkins County, Lower Hudson Valley - Newburgh, and Suffolk County - Long Island. Special photo enhancement processes were developed to standardize the density range and contrast among S190A negatives. Enhanced black and white enlargements were converted to color by contact printing onto diazo film. A color prediction model related the density values on each spectral band for each category of land use to the spectral properties of the various diazo dyes. The S190A multispectral system proved to be almost as effective as the S190B high resolution camera for inventorying land use. Aggregate error for Level 1 averaged about 12%, while Level 2 aggregate error averaged about 25%. The S190A system proved far superior to LANDSAT in inventorying land use, primarily because of its increased resolution.
Modeling views in the layered view model for XML using UML
In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Conversely, since the introduction of Extensible Markup Language (XML), it is fast emerging as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, thus it proves to be a difficult task to support views in the case of semi-structured data models. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Here our work is three-fold; first we propose an approach to separate the implementation and conceptual aspects of the views that provides a clear separation of concerns, thus, allowing analysis and design of views to be separated from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML, at varying levels of abstraction
SELECTION OF FOOD WASTE MANAGEMENT OPTION BY PROMETHEE METHOD
Food waste management performed following the EU Circular Economy Strategy principles poses a problem in small islands. There are several standard food waste management methods on islands; however, there are two specific methods that must be considered along with their positive and negative impacts. These two specific methods are discharging food waste into the city's sewer system and transporting waste to the mainland, i.e., to regional waste processing facilities. This paper presents a multi-criteria decision analysis to evaluate different waste management options and their applicability in small islands such as Vis. The results of this study indicate that the best food waste treatment option for small islands is discharging food waste into the city's sewer system to be processed with wastewater through wastewater treatment. The PROMETHEE method used in this study has proved to be a useful tool for solving the food waste management problem.
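For orientation, the general shape of a PROMETHEE II ranking can be sketched in a few lines of numpy. The options, criterion scores, and weights below are hypothetical placeholders, not values from the study; the sketch uses the simplest "usual" preference function (1 if one option strictly beats another on a criterion, else 0), then ranks options by net outranking flow.

```python
import numpy as np

# Hypothetical decision matrix: rows = options, columns = criteria
# (scores normalized so higher is better). Illustrative values only.
options = ["sewer discharge", "mainland transport", "on-island composting"]
scores = np.array([
    [0.8, 0.7, 0.9],   # sewer discharge
    [0.4, 0.9, 0.3],   # mainland transport
    [0.6, 0.5, 0.6],   # on-island composting
])
weights = np.array([0.5, 0.3, 0.2])  # criterion weights, summing to 1

n = len(options)
# Usual preference function: P_j(a, b) = 1 if a strictly beats b on criterion j.
pref = (scores[:, None, :] > scores[None, :, :]).astype(float)
# Aggregate preference index pi(a, b) = sum_j w_j * P_j(a, b).
pi = pref @ weights
# Positive, negative, and net outranking flows.
phi_plus = pi.sum(axis=1) / (n - 1)
phi_minus = pi.sum(axis=0) / (n - 1)
phi = phi_plus - phi_minus
ranking = [options[i] for i in np.argsort(-phi)]
```

Full PROMETHEE applications typically use richer preference functions (linear, V-shape, Gaussian) with indifference and preference thresholds per criterion; the flow computation stays the same.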