
    Discretionary Policy versus Non-Discretionary Policy in the Economic Adjustment Process

    The study aims to examine the concept of automatic fiscal stabilization in the context of macroeconomic adjustment policies. To this end, a conceptual distinction is first drawn between discretionary and non-discretionary public adjustment policies. Second, the necessary and sufficient attributes of an automatic fiscal stabilizer are identified and examined in order to arrive at a definition of this instrument. The research approach as a whole relies on logical and abstract reasoning, so as to provide a general, non-contextual result. Finally, a general mechanism of action for automatic fiscal stabilizers is proposed by introducing the basic concepts of the action base and the action rate of such an instrument.

    Keywords: sustainability, fiscal policy, automatic fiscal stabilizers, discretionary versus non-discretionary, principle of the minimal action
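
    As a hedged illustration only (the abstract gives no formulas, so the symbols below are assumptions), the action base/action rate mechanism can be condensed into one relation: in period t the stabilizer's budgetary response is

        \Delta S_t = \rho \, \Delta B_t

    where B_t is the action base (e.g., taxable income) and rho is the action rate (e.g., a marginal tax rate), so the instrument responds to cyclical movements in its base without any discretionary decision.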

    Micro-Macro Analysis of Complex Networks

    Complex systems have attracted considerable interest because of their wide range of applications, and are often studied via a "classic" approach: study a specific system, find a complex network behind it, and analyze the corresponding properties. This simple methodology has produced a great deal of interesting results, but relies on an often implicit underlying assumption: the level of detail on which the system is observed. However, in many situations, physical or abstract, the level of detail can be one out of many, and might also depend on intrinsic limitations in viewing the data with a different level of abstraction or precision. So, a fundamental question arises: do properties of a network depend on its level of observability, or are they invariant? If there is a dependence, then an apparently correct network model could in fact just be a bad approximation of the true behavior of a complex system. In order to answer this question, we propose a novel micro-macro analysis of complex systems that quantitatively describes how the structure of complex networks varies as a function of the detail level. To this end, we have developed a new telescopic algorithm that abstracts from the local properties of a system and reconstructs the original structure according to a fuzziness level. This way we can study what happens when passing from a fine level of detail ("micro") to a different scale level ("macro"), and analyze the corresponding behavior in this transition, obtaining a deeper spectrum analysis. The obtained results show that many important properties are not universally invariant with respect to the level of detail, but instead strongly depend on the specific level on which a network is observed. Therefore, caution should be taken in every situation where a complex network is considered, if its context allows for different levels of observability.
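
    A minimal sketch of how such a telescopic pass could look, assuming networkx and a Jaccard neighborhood-similarity merge rule (the merge criterion, the thresholds, and the generated test graph are illustrative assumptions, not the paper's algorithm):

        # Hedged sketch of a micro-to-macro "telescopic" coarse-graining pass.
        # Nodes whose neighborhoods overlap strongly are merged; lowering the
        # fuzziness threshold produces coarser ("macro") views of the graph.
        import networkx as nx

        def coarsen(g: nx.Graph, fuzziness: float) -> nx.Graph:
            """Collapse adjacent node pairs whose neighborhood Jaccard
            similarity is at least `fuzziness`."""
            g = g.copy()
            merged = True
            while merged:
                merged = False
                for u, v in list(g.edges()):
                    if u not in g or v not in g:
                        continue  # endpoint already merged away
                    nu, nv = set(g[u]) - {v}, set(g[v]) - {u}
                    union = nu | nv
                    sim = len(nu & nv) / len(union) if union else 1.0
                    if sim >= fuzziness:
                        g = nx.contracted_nodes(g, u, v, self_loops=False)
                        merged = True
            return g

        micro = nx.barabasi_albert_graph(200, 2)
        for level, f in enumerate([0.9, 0.6, 0.3]):
            macro = coarsen(micro, f)
            print(level, macro.number_of_nodes(), nx.density(macro))

    Tracking node count and density across fuzziness levels is the kind of micro-to-macro comparison the abstract argues for: a property that drifts between levels is not invariant to observability.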

    Formal Identification of Right-Grained Services for Service-Oriented Modeling

    Identifying right-grained services is important to the success of service orientation because it has a direct impact on two major goals: the composability of loosely coupled services and the reusability of individual services in different contexts. Although the concept of service orientation has been intensively debated in recent years, a unified methodical approach for identifying services has not yet been reached. In this paper, we suggest a formal approach to identifying services at the right level of granularity from the business process model. Our approach uses the concept of graph clustering and provides a systematic procedure by defining a cost metric as a measure of interaction costs. To effectively extract service information from the business model, we take activities as the smallest units of service identification and cluster activities with high interaction cost into a task through a hierarchical clustering algorithm, so as to reduce the coupling of remote tasks and to increase local task cohesion.
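
    A minimal sketch of the clustering step under stated assumptions: the activity names, the interaction-cost matrix, and the cut threshold are invented for illustration, and SciPy's average-linkage clustering stands in for whatever hierarchical variant the paper uses. Converting high interaction cost into low distance makes chatty activities land in the same task:

        # Hedged sketch: group business-process activities into tasks by
        # interaction cost via agglomerative (hierarchical) clustering.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        activities = ["receive_order", "check_stock", "bill_customer", "ship_goods"]
        # Symmetric pairwise interaction costs (illustrative values).
        cost = np.array([
            [0.0, 0.9, 0.2, 0.1],
            [0.9, 0.0, 0.3, 0.2],
            [0.2, 0.3, 0.0, 0.8],
            [0.1, 0.2, 0.8, 0.0],
        ])
        # High cost -> low distance, so costly interactions stay inside a task.
        distance = 1.0 - cost
        np.fill_diagonal(distance, 0.0)

        z = linkage(squareform(distance), method="average")
        labels = fcluster(z, t=0.5, criterion="distance")
        for task in sorted(set(labels)):
            print(f"task {task}:", [a for a, l in zip(activities, labels) if l == task])

    Each printed task is a candidate service boundary; remote interaction then crosses task borders only where costs were already low.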

    Exploring Data Hierarchies to Discover Knowledge in Different Domains

    The abstract is provided in the attachment.

    Granular Partition and Concept Lattice Division Based on Quotient Space

    In this paper, we investigate the relationship between the concept lattice and the quotient space by granularity. A new framework of knowledge representation, the granular quotient space, is constructed, and it demonstrates that concept lattice classification is linked to the quotient space. The covering of the formal context is first given based on this granule; then the granular concept lattice model and its construction are discussed on the sub-context formed by the granular classification set. We analyze knowledge reduction and describe granular entropy techniques, including some novel formulas. Lastly, a concept lattice construction algorithm is proposed based on multi-granular feature selection in quotient space. Examples and experiments show that the algorithm can obtain a minimal reduct and is much more efficient than classical incremental concept formation methods.
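
    As a hedged aside, the formal-context machinery the paper builds on can be shown in a few lines. The toy context below is an assumption for illustration; a naive closure search enumerates every (extent, intent) concept pair, while the paper's granular construction works on sub-contexts instead of the full context:

        # Hedged sketch: enumerate the formal concepts of a tiny binary context.
        from itertools import combinations

        objects = ["g1", "g2", "g3"]
        attributes = ["a", "b", "c"]
        incidence = {("g1", "a"), ("g1", "b"), ("g2", "b"), ("g2", "c"), ("g3", "b")}

        def intent(objs):
            """Attributes shared by every object in `objs`."""
            return frozenset(m for m in attributes
                             if all((g, m) in incidence for g in objs))

        def extent(attrs):
            """Objects having every attribute in `attrs`."""
            return frozenset(g for g in objects
                             if all((g, m) in incidence for m in attrs))

        concepts = set()
        for r in range(len(objects) + 1):
            for objs in combinations(objects, r):
                b = intent(frozenset(objs))
                concepts.add((extent(b), b))  # closed (extent, intent) pair

        for ext, itt in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
            print(set(ext) or "{}", set(itt) or "{}")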

    Processing count queries over event streams at multiple time granularities

    Management and analysis of streaming data have become crucial with applications in the web, sensor data, network traffic data, and the stock market. Data streams consist mostly of numeric data, but what is more interesting is the events derived from the numerical data that need to be monitored. The events obtained from streaming data form event streams. Event streams have properties similar to data streams, i.e., they are seen only once, in a fixed order, as a continuous stream. Events appearing in the event stream have time stamps associated with them at a certain time granularity, such as second, minute, or hour. One type of frequently asked query over event streams is the count query, i.e., the frequency of an event occurrence over time. Count queries can be answered over event streams easily; however, users may ask queries over different time granularities as well. For example, a broker may ask how many times a stock increased in the same time frame, where the time frames specified could be hour, day, or both. This is crucial especially in the case of event streams, where only a window of an event stream is available at a certain time instead of the whole stream. In this paper, we propose a technique for predicting the frequencies of event occurrences in event streams at multiple time granularities. The proposed approximation method efficiently estimates the count of events with high accuracy in an event stream at any time granularity by examining the distance distributions of event occurrences. The proposed method has been implemented and tested on different real data sets, and the results obtained are presented to show its effectiveness.
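
    The rollup such count queries ask for is easy to state in code. A minimal sketch, assuming integer timestamps in seconds; note that the paper's contribution is estimating these counts from inter-arrival (distance) distributions when only a window of the stream is available, whereas this baseline assumes the raw events are at hand:

        # Hedged sketch: roll event counts up from a fine granularity (seconds)
        # to coarser ones (minute, hour) by integer division on timestamps.
        from collections import Counter

        event_times = [3, 61, 65, 3599, 3601, 7322]  # seconds since stream start

        def counts_at(granularity_s: int, times) -> Counter:
            """Frequency of the event per window of `granularity_s` seconds."""
            return Counter(t // granularity_s for t in times)

        per_minute = counts_at(60, event_times)
        per_hour = counts_at(3600, event_times)
        print("minute windows:", dict(per_minute))  # {0: 1, 1: 2, 59: 1, 60: 1, 122: 1}
        print("hour windows:", dict(per_hour))      # {0: 4, 1: 1, 2: 1}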

    Formal Concept Analysis Applications in Bioinformatics

    Bioinformatics is an important field that seeks to solve biological problems with the help of computation. One specific field within bioinformatics is genomics, the study of genes and their functions. Genomics can provide valuable analysis of how genes interact with their environment. One way to measure this interaction is through gene expression data, which indicates whether (and how much) a certain gene is activated in a given situation. Analyzing this data can be critical for predicting diseases or other biological reactions. One method used for such analysis is Formal Concept Analysis (FCA), a computing technique based on partial orders that allows the user to examine the structural properties of binary data according to which subsets of the data set depend on each other. This thesis surveys, in breadth and depth, the current literature related to the use of FCA for bioinformatics, with particular focus on gene expression data. This includes descriptions of current data management techniques specific to FCA, such as lattice reduction, discretization, and variations of FCA that account for different data types. Advantages and shortcomings of using FCA for genomic investigations, as well as the feasibility of using FCA for this application, are addressed. Finally, several areas for future doctoral research are proposed. Adviser: Jitender S. Deogun
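
    A minimal sketch of the discretization step under stated assumptions: the expression matrix is invented, and the "above the gene's mean" threshold is just one of the several discretization schemes the surveyed literature uses:

        # Hedged sketch: binarize a gene-expression matrix into the
        # (object x attribute) formal context that FCA requires.
        import numpy as np

        genes = ["geneA", "geneB", "geneC"]
        samples = ["s1", "s2", "s3", "s4"]
        expression = np.array([
            [5.1, 0.2, 4.8, 0.1],
            [1.0, 1.2, 0.9, 1.1],
            [0.0, 3.3, 0.1, 3.0],
        ])

        # A sample "has" a gene attribute when expression exceeds that gene's mean.
        context = expression > expression.mean(axis=1, keepdims=True)

        for j, s in enumerate(samples):
            attrs = [g for i, g in enumerate(genes) if context[i, j]]
            print(s, "->", attrs)

    The resulting binary table is exactly the formal context on which concept lattices, and the lattice-reduction techniques the thesis discusses, are built.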

    Reasoning & Querying – State of the Art

    Various query languages for Web and Semantic Web data, both for practical use and as an area of research in the scientific community, have emerged in recent years. At the same time, the broad adoption of the internet, where keyword search is used in many applications such as search engines, has familiarized casual users with keyword queries for retrieving information. Unlike this easy-to-use style of querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming to enable simple querying of semi-structured data, which is relevant, e.g., in the context of the emerging Semantic Web. This article presents an overview of the field of keyword querying for XML and RDF.

    The Effects of Disorganization on Goals and Problem Solving

    This chapter presents an agent-based simulation of the ability of employees to solve problems. The primary aim of the chapter is to discern the difference in problem solving under two structural conditions: one imposes rigid structural constraints on the agents, while the other imposes very few (called "disorganization" in this work). The simulation further uses organizational goals as a basis for motivation and studies the effects of disorganization on goals and motivation. Results from the simulation show that, under a more disorganized environment, the number of problems solved is relatively higher than under a less disorganized, more structured environment.
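
    One way such a simulation could be set up is sketched below; the rigidity parameter, the success probability, and the agent loop are illustrative assumptions, not the chapter's actual model:

        # Hedged sketch: agents attempt problems under rigid vs. loose
        # ("disorganized") structural constraints. Rigidity is modeled as the
        # fraction of approaches an agent is forbidden to try.
        import random

        def run(n_agents: int, n_problems: int, rigidity: float, seed: int = 1) -> int:
            rng = random.Random(seed)
            solved = 0
            for _ in range(n_problems):
                for _ in range(n_agents):
                    if rng.random() > rigidity:    # approach is permitted
                        if rng.random() < 0.5:     # permitted approach may succeed
                            solved += 1
                            break                  # problem solved, move on
            return solved

        print("rigid       :", run(10, 100, rigidity=0.8))
        print("disorganized:", run(10, 100, rigidity=0.1))

    Lower rigidity leaves agents more permitted approaches per problem, which reproduces the qualitative result reported above.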

    Towards trajectory anonymization: a generalization-based approach

    Trajectory datasets are becoming popular due to the massive usage of GPS and location-based services. In this paper, we address privacy issues regarding the identification of individuals in static trajectory datasets. We first adapt the notion of k-anonymity to trajectories and propose a novel generalization-based approach for the anonymization of trajectories. We further show that releasing anonymized trajectories may still have some privacy leaks. Therefore, we propose a randomization-based reconstruction algorithm for releasing anonymized trajectory data and also present how the underlying techniques can be adapted to other anonymity standards. The experimental results on real and synthetic trajectory datasets show the effectiveness of the proposed techniques.
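
    A minimal sketch of the generalization idea, assuming a uniform grid and simple suppression (the paper's generalization hierarchy and its randomization-based reconstruction step are richer than this):

        # Hedged sketch of generalization-based trajectory k-anonymity:
        # snap each GPS point to a coarse grid cell and keep only trajectories
        # whose generalized form is shared by at least k records.
        from collections import Counter

        def generalize(traj, cell=0.01):
            """Map (lat, lon) points to grid-cell ids (~1 km resolution)."""
            return tuple((round(lat / cell), round(lon / cell)) for lat, lon in traj)

        def k_anonymize(trajectories, k=2, cell=0.01):
            gen = [generalize(t, cell) for t in trajectories]
            freq = Counter(gen)
            # Suppress any generalized trajectory appearing fewer than k times.
            return [g for g in gen if freq[g] >= k]

        trajs = [
            [(40.7128, -74.0060), (40.7306, -73.9866)],
            [(40.7131, -74.0055), (40.7310, -73.9870)],
            [(41.8781, -87.6298), (41.8827, -87.6233)],
        ]
        print(k_anonymize(trajs, k=2))  # the lone Chicago trajectory is suppressed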