5,429 research outputs found

    Memory effects in microscopic traffic models and wide scattering in flow-density data

    Full text link
    By means of microscopic simulations we show that non-instantaneous adaptation of the driving behaviour to the traffic situation, together with the conventional measurement method for flow-density data, can explain the observed inverse-λ shape and the wide scattering of flow-density data in "synchronized" congested traffic. We model a memory effect in the response of drivers to the traffic situation for a wide class of car-following models by introducing a new dynamical variable describing the adaptation of drivers to the surrounding traffic situation during the past few minutes (the "subjective level of service") and coupling this internal state to parameters of the underlying model that are related to the driving style. For illustration, we use the intelligent-driver model (IDM) as the underlying model, characterize the level of service solely by the velocity, and couple the internal variable to the IDM parameter "netto time gap", modelling an increase of the time gap in congested traffic (the "frustration effect") that is supported by single-vehicle data. We simulate open systems with a bottleneck and obtain flow-density data by implementing "virtual detectors". The shape, relative size, and apparent "stochasticity" of the region of scattered data points agree nearly quantitatively with empirical data. Wide scattering is observed even for identical vehicles, although the proposed model is a time-continuous, deterministic, single-lane car-following model with a unique fundamental diagram. Comment: 8 pages, submitted to Physical Review
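
    As a concrete illustration of the mechanism described above, the following minimal sketch couples an IDM follower to a slowly relaxing memory variable (an exponentially smoothed level of service based on the vehicle's own speed) that interpolates the time gap between a free-flow and a congested value. All parameter values, the choice of the level-of-service proxy, and the linear coupling are illustrative assumptions, not the authors' exact formulation.

    import math

    # IDM parameters (SI units); values are typical but assumed here
    V0, A, B, S0 = 33.3, 1.0, 1.5, 2.0      # desired speed, max accel, comfortable decel, minimum gap
    T_FREE, T_JAM = 1.2, 2.0                # time gap in free vs. congested traffic (assumed)
    TAU_ADAPT = 300.0                       # adaptation time of the memory variable, a few minutes [s]

    def idm_accel(v, dv, s, T):
        """Standard IDM acceleration for speed v, approach rate dv, gap s and time gap T."""
        s_star = S0 + v * T + v * dv / (2.0 * math.sqrt(A * B))
        return A * (1.0 - (v / V0) ** 4 - (s_star / max(s, 0.1)) ** 2)

    def step(state, v_leader, gap, dt=0.5):
        """Advance one follower by dt; state["los"] is the remembered level of service in [0, 1]."""
        v, los = state["v"], state["los"]
        los += dt / TAU_ADAPT * (min(v / V0, 1.0) - los)   # slow adaptation to the current situation
        T = T_JAM + los * (T_FREE - T_JAM)                 # frustration: larger gaps after congestion
        v = max(0.0, v + dt * idm_accel(v, v - v_leader, gap, T))
        return {"v": v, "los": los}

    follower = {"v": 25.0, "los": 1.0}
    follower = step(follower, v_leader=20.0, gap=30.0)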

    Sistemas granulares evolutivos (Evolving granular systems)

    Get PDF
    Orientador: Fernando Antonio Campos Gomide. Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In recent years there has been increasing interest in computational modeling approaches to deal with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets with, in principle, no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique to dispense with unnecessary details and emphasize transparency, interpretability, and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable.
Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is being used to represent. Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neurofuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build the model structure from scratch, on a per-sample basis, and adapt model parameters whenever necessary. This learning paradigm is meaningful because it avoids redesigning and retraining models whenever the system changes. Application examples in classification, function approximation, time-series prediction, and control using real and synthetic data illustrate the usefulness of the granular approaches and framework proposed. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed in the realm of evolving granular computing. We shed light on the role of interval, fuzzy, and neurofuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams. Doutorado. Automação. Doutor em Engenharia Elétrica.
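
    The per-sample learning loop described in the abstract can be pictured with a minimal sketch in which interval granules are created from scratch and expanded as a stream is processed. The coverage threshold and the expansion rule are illustrative assumptions and stand in for the richer interval, fuzzy, and neurofuzzy frameworks developed in the thesis.

    class IntervalGranule:
        def __init__(self, x):
            self.lo = list(x)              # lower bound per feature
            self.hi = list(x)              # upper bound per feature

        def covers(self, x, rho):
            """True if x falls inside the granule, allowing a tolerance rho per feature."""
            return all(l - rho <= xi <= h + rho for xi, l, h in zip(x, self.lo, self.hi))

        def expand(self, x):
            self.lo = [min(l, xi) for l, xi in zip(self.lo, x)]
            self.hi = [max(h, xi) for h, xi in zip(self.hi, x)]

    def learn_stream(stream, rho=0.3):
        """One pass over the data stream; the granule set evolves without prior structure."""
        granules = []
        for x in stream:
            hit = next((g for g in granules if g.covers(x, rho)), None)
            if hit is None:
                granules.append(IntervalGranule(x))   # evolve: create a new granule
            else:
                hit.expand(x)                         # adapt: enlarge an existing one
        return granules

    granules = learn_stream([(0.10, 0.20), (0.15, 0.22), (0.90, 0.80)])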

    An Application-Tailored MAC Protocol for Wireless Sensor Networks

    Get PDF
    We describe a data management framework, suitable for wireless sensor networks, that can be used to adapt the performance of a medium access control (MAC) protocol depending on the query injected into the network. The framework has a completely distributed architecture and only makes use of locally available information to capture network traffic patterns. It allows nodes not servicing a query to enter a dormant mode that minimizes transmissions while still maintaining an updated view of the network. We then introduce an Adaptive, Information-centric and Lightweight MAC (AI-LMAC) protocol that adapts its operation depending on the information presented by the framework. Our results demonstrate how transmissions are greatly reduced during the dormant mode. During the active mode, the MAC protocol adjusts fairness to match the expected requirements of the query, thus reducing latency. Such a data management framework therefore allows the MAC to operate more efficiently by tailoring its behaviour to the requirements of the application.
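
    The adaptation idea can be sketched as follows: a node compares the attributes named in an injected query against the data it can serve and picks a dormant or active radio schedule accordingly. This is only a toy illustration of the framework-to-MAC coupling; the attribute matching, the duty-cycle values, and the class layout are assumptions rather than the published AI-LMAC specification.

    DORMANT_DUTY, ACTIVE_DUTY = 0.01, 0.25     # fraction of time the radio is on (assumed values)

    class Node:
        def __init__(self, node_id, attributes):
            self.node_id = node_id
            self.attributes = set(attributes)   # data this node can serve, e.g. {"temperature"}
            self.duty_cycle = DORMANT_DUTY      # start dormant: minimal transmissions

        def on_query(self, query):
            """Adapt the MAC behaviour to the query: wake up only if the node is relevant."""
            relevant = bool(self.attributes & set(query["attributes"]))
            self.duty_cycle = ACTIVE_DUTY if relevant else DORMANT_DUTY
            return relevant

    node = Node(7, ["temperature"])
    node.on_query({"attributes": ["temperature"], "region": (0, 0, 50, 50)})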

    Competition and cooperation: aspects of dynamics in sandpiles

    Full text link
    In this article, we review some of our approaches to granular dynamics, now well known to consist of both fast and slow relaxational processes. In the first case, grains typically compete with each other, while in the second, they cooperate. A typical result of cooperation is the formation of stable bridges, signatures of spatiotemporal inhomogeneities; we review their geometrical characteristics and compare theoretical results with those of independent simulations. Cooperative excitations due to local density fluctuations are also responsible for relaxation at the angle of repose; the competition between these fluctuations and external driving forces can, on the other hand, result in a (rare) collapse of the sandpile to the horizontal. Both these features are present in a theory reviewed here. An arena where the effects of cooperation versus competition are felt most keenly is granular compaction; we review a random graph model in which three-spin interactions are used to model compaction under tapping. The compaction curve shows distinct regions where 'fast' and 'slow' dynamics apply, separated by what we have called the single-particle relaxation threshold. In the final section of this paper, we explore the effect of shape -- jagged vs. regular -- on the compaction of packings near their jamming limit. One of our major results is an entropic landscape that, while microscopically rough, manifests Edwards' flatness at a macroscopic level. Another major result is that of surface intermittency under low-intensity shaking. Comment: 36 pages, 23 figures, minor corrections
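
    A schematic version of the compaction model mentioned above, tapping dynamics on a random graph with three-spin interactions, could look as follows; a quench that accepts only energy-lowering flips stands for the fast dynamics, and a random tap for the slow, externally driven one. The graph construction, the tap rule, and the parameter values are assumptions for illustration only, not the reviewed model's exact definition.

    import random

    N, M = 200, 200                                        # spins and random triples (assumed sizes)
    spins = [random.choice([-1, 1]) for _ in range(N)]
    triples = [tuple(random.sample(range(N), 3)) for _ in range(M)]
    triples_of = [[] for _ in range(N)]
    for tr in triples:
        for i in tr:
            triples_of[i].append(tr)

    def delta_e(s, i):
        """Energy change of flipping spin i for E = -sum over triples of s_a * s_b * s_c."""
        return 2 * sum(s[a] * s[b] * s[c] for a, b, c in triples_of[i])

    def quench(s, sweeps=5):
        """Fast dynamics: accept only single-spin flips that do not raise the energy."""
        for _ in range(sweeps * N):
            i = random.randrange(N)
            if delta_e(s, i) <= 0:
                s[i] *= -1

    def tap(s, p=0.05):
        """Slow dynamics: a tap flips each spin independently with a small probability."""
        for i in range(N):
            if random.random() < p:
                s[i] *= -1

    for _ in range(20):                                    # alternate taps and quenches
        tap(spins)
        quench(spins)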

    Analysis Framework for Reduced Data Warehouse

    Get PDF
    Our aim is to define a framework supporting analysis in multidimensional data warehouses (MDW) with reductions. First, we describe a modeling solution for a reduced MDW. The schema of a reduced MDW is composed of states; each state is defined as a star schema composed of one fact and its related dimensions, valid for a certain period of time. Second, we present a multi-state analysis framework: extensions of the classical drill-down and roll-up operators are defined to support multi-state analyses. Finally, we present a prototype of our framework that aims to prove the feasibility of the concept. By implementing our extended operators, the prototype automatically generates appropriate SQL queries over metadata and reduced data.
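
    A minimal sketch of the query-generation step could look as follows: the extended roll-up consults metadata describing each state (a star schema valid over a period) and emits one SQL statement per state, combined with UNION ALL. Table names, columns, and the metadata layout are illustrative assumptions, not the prototype's actual schema.

    # per-state metadata: fact table and validity period of each star schema (assumed layout)
    states = [
        {"fact": "sales_2019_detail", "valid": ("2019-01-01", "2019-12-31")},
        {"fact": "sales_2020_reduced", "valid": ("2020-01-01", "2020-12-31")},
    ]

    def extended_rollup(measure, level):
        """Build one aggregation query per state and merge the partial results."""
        parts = [
            f"SELECT {level}, SUM({measure}) AS {measure} "
            f"FROM {state['fact']} GROUP BY {level}"
            for state in states
        ]
        return "\nUNION ALL\n".join(parts)

    print(extended_rollup("amount", "month"))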

    Decision-making in semi-democratic contexts

    Get PDF
    A general problem, which may concern practical contexts of different natures, is to aggregate multi-expert rankings over a set of alternatives into a single fused ranking. Aggregation should also take into account the experts' importance, which may not be the same for all of them. We synthetically define this context as semi-democratic. The main aim of the paper is the analysis of the possible semi-democratic paradigms that can be conceived when the experts' importance is not the same: (i) the importance is described by means of a weighting vector; (ii) the importance is expressed by a weak order on the set of experts; (iii) the importance is described by a weak order on the set of experts with additional information on the ordinal proximities among them. The three paradigms can be applied in different decision-making situations, where some experts perform multiple assignments. In this paper various situations are discussed and analyzed in detail. A series of examples, in the field of the interior design of a new car, will complement the description. Ministerio de Economía, Industria y Competitividad (Project ECO2016-77900-P); European Regional Development Fund (ERDF)
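
    As one concrete realization of paradigm (i), where importance is given by a weighting vector, a weighted Borda count can fuse the experts' rankings; the paper's own aggregation procedures may differ, so the sketch below is only an illustration of the setting.

    def weighted_borda(rankings, weights):
        """rankings: one ordered list of alternatives per expert, best first."""
        scores = {}
        for ranking, w in zip(rankings, weights):
            n = len(ranking)
            for pos, alt in enumerate(ranking):
                scores[alt] = scores.get(alt, 0.0) + w * (n - 1 - pos)
        return sorted(scores, key=scores.get, reverse=True)

    # three experts; the first is twice as important as the other two (paradigm (i))
    fused = weighted_borda(
        [["A", "B", "C"], ["B", "A", "C"], ["C", "A", "B"]],
        weights=[2.0, 1.0, 1.0],
    )
    print(fused)                                   # expected: ['A', 'B', 'C']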

    Data granulation by the principles of uncertainty

    Full text link
    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, which are all suitable to characterize so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as those of probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is possible to apply such principles regardless of the input data type and of the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork to compare and quantitatively judge different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models. Comment: 16 pages, 9 figures, 52 references
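
    The monotonicity criterion can be illustrated with a deliberately simple granulation: build an interval granule covering a central fraction of the samples and measure its uncertainty by its width (a Hartley-like nonspecificity). Both the construction and the measure are stand-ins chosen for clarity; the paper's technique produces type-2 fuzzy sets via the minimum sum of distances.

    import statistics

    def granulate(samples, coverage=0.9):
        """Interval granule covering the central `coverage` fraction of the samples (assumed rule)."""
        xs = sorted(samples)
        k = int(len(xs) * (1.0 - coverage) / 2.0)
        return xs[k], xs[-k - 1]

    def granule_uncertainty(interval):
        lo, hi = interval
        return hi - lo                              # interval width as a simple nonspecificity measure

    tight = [0.5 + 0.01 * i for i in range(100)]    # low input uncertainty
    loose = [0.5 + 0.05 * i for i in range(100)]    # high input uncertainty
    for data in (tight, loose):
        print(statistics.stdev(data), granule_uncertainty(granulate(data)))
    # an effective procedure should show both values increasing together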

    A finder and representation system for knowledge carriers based on granular computing

    Get PDF
    In one of his works Aristotle states, "All human beings by their nature desire to know" [Kraut 1991]. This desire is initiated the day we are born and accompanies us for the rest of our lives. While at a young age our parents serve as one of the principal sources of knowledge, this changes over the course of time. Technological advances, and particularly the introduction of the Internet, have given us new possibilities to share and access knowledge from almost anywhere at any given time. Being able to access and share large collections of written-down knowledge is only one part of the equation. Just as important is its internalization, which in many cases can prove difficult to accomplish. Hence, being able to request assistance from someone who holds the necessary knowledge is of great importance, as it can positively stimulate the internalization process. However, digitalization does not only provide a larger pool of knowledge sources to choose from, but also more people who can potentially be activated in a bid to receive personalized assistance with a given problem statement or question. While this is beneficial, it raises the issue that it is hard to keep track of who knows what. For this task, so-called Expert Finder Systems have been introduced, which are designed to identify and suggest the most suited candidates to provide assistance. Throughout this Ph.D. thesis a novel type of Expert Finder System is introduced that is capable of capturing the knowledge users within a community hold, from explicit and implicit data sources. This is accomplished with the use of granular computing, natural language processing, and a set of metrics introduced to measure and compare the suitability of candidates. Furthermore, the knowledge requirements of a problem statement or question are assessed in order to ensure that only the most suited candidates are recommended to provide assistance.
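
    A toy sketch of the matching step behind such a system is shown below: each community member is scored by the overlap between the terms of an incoming question and the terms of the documents (explicit and implicit traces) attributed to them. The thesis builds on granular computing and richer natural language processing, so this bag-of-words score, the profile layout, and the example data are only illustrative assumptions.

    import re
    from collections import Counter

    def terms(text):
        """Crude bag-of-words: lowercase tokens of three or more letters."""
        return Counter(re.findall(r"[a-z]{3,}", text.lower()))

    def suitability(question, user_docs):
        """Shared term mass between the question and a user's document collection."""
        q = terms(question)
        d = terms(" ".join(user_docs))
        return sum(min(q[t], d[t]) for t in q)

    profiles = {
        "alice": ["tuning gradient descent learning rates", "regularization notes"],
        "bob": ["soldering guide for sensor boards"],
    }
    question = "How do I choose a learning rate for gradient descent?"
    ranked = sorted(profiles, key=lambda u: suitability(question, profiles[u]), reverse=True)
    print(ranked)                                   # expected: ['alice', 'bob']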