
    Energy Efficient Ant Colony Algorithms for Data Aggregation in Wireless Sensor Networks

    In this paper, a family of ant colony algorithms for data aggregation, called DAACA, is presented. DAACA consists of three phases: initialization, packet transmission, and operations on pheromones. After initialization, each node estimates its remaining energy and the amount of pheromone on its links to compute the probabilities used for dynamically selecting the next hop. After a certain number of transmission rounds, a pheromone adjustment is performed periodically, combining the advantages of both global and local pheromone adjustment for evaporating or depositing pheromones. Four pheromone adjustment strategies, namely Basic-DAACA, ES-DAACA, MM-DAACA and ACS-DAACA, are designed to achieve a globally optimal network lifetime. Compared with other data aggregation algorithms, DAACA performs better in terms of average node degree, energy efficiency, network lifetime, computational complexity, and success ratio of one-hop transmission. Finally, we analyze DAACA with respect to robustness, fault tolerance and scalability. Comment: To appear in Journal of Computer and System Sciences
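    The next-hop selection described above is the standard ant-colony transition rule, here weighted by residual energy. A minimal sketch (the weighting exponents and the energy heuristic are illustrative assumptions, not DAACA's exact formula):

    ```python
    import random

    def next_hop_probabilities(pheromone, energy, alpha=1.0, beta=2.0):
        """Probability of choosing each neighbor as the next hop.

        pheromone[j]: pheromone on the link to neighbor j
        energy[j]:    neighbor j's estimated remaining energy (heuristic term)
        alpha, beta:  assumed ACO weighting exponents (illustrative values)
        """
        weights = [(tau ** alpha) * (e ** beta) for tau, e in zip(pheromone, energy)]
        total = sum(weights)
        return [w / total for w in weights]

    def select_next_hop(pheromone, energy, rng=random.random):
        """Roulette-wheel selection over the transition probabilities."""
        probs = next_hop_probabilities(pheromone, energy)
        r, acc = rng(), 0.0
        for j, p in enumerate(probs):
            acc += p
            if r <= acc:
                return j
        return len(probs) - 1
    ```

    With beta > alpha, a neighbor with more remaining energy is favored even at equal pheromone levels, which is the energy-aware bias the abstract describes.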

    A Knowledge-Based Optimization Method for Aerodynamic Design

    A new aerodynamic design method, CODISC, has been developed that combines a legacy knowledge-based design method, CDISC, with a simple optimization module known as SOUP. The primary goal of this new design system is to improve the performance gains obtained using CDISC without adding significant computational time. An additional objective is to reduce the need for a priori knowledge of good initial input variable values, as well as for subsequent manual revisions of those values as the design progresses. Several test cases illustrate the development of the process to date and some of the options available at transonic and supersonic speeds for turbulent flow designs. The test cases generally start from good baseline configurations and, in all cases, the performance was improved. Several new guidelines for good initial values of the design variables, as well as new design rules within CDISC itself, were developed from these cases.

    Development of a Knowledge-Based Optimization Method for Aerodynamic Design

    A new aerodynamic design method, CODISC, has been developed that combines an existing knowledge-based design method, CDISC, with a simple optimization module known as SOUP. The primary goal of this new design system is to improve the performance gains obtained using CDISC without adding significant computational time. An additional benefit of this approach is a reduction in the need for a priori knowledge of good initial input variable values, as well as for subsequent manual revisions of those values as the design progresses. A series of 2D and 3D test cases is used to illustrate the development of the process and some of the options available at transonic and supersonic speeds for both laminar and turbulent flow. The test cases start from good baseline configurations and, in all cases, the performance was improved. Several new guidelines for good initial values of the design variables, as well as new design rules within CDISC itself, were developed from these cases.

    Forecasting with time series imaging

    Feature-based time series representations have attracted substantial attention in a wide range of time series analysis methods. Recently, the use of time series features for forecast model averaging has been an emerging research focus in the forecasting community. Nonetheless, most of the existing approaches depend on the manual choice of an appropriate set of features. Exploiting machine learning methods to extract features from time series automatically has therefore become crucial in state-of-the-art time series analysis. In this paper, we introduce an automated approach to extract time series features based on time series imaging. We first transform time series into recurrence plots, from which local features can be extracted using computer vision algorithms. The extracted features are then used for forecast model averaging. Our experiments show that forecasting based on automatically extracted features, with less human intervention and a more comprehensive view of the raw time series data, yields performance highly comparable to the best methods on the largest forecasting competition dataset (M4) and outperforms the top methods on the Tourism forecasting competition dataset.
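    The recurrence-plot transform at the heart of this pipeline is simple to state: threshold the pairwise distances between observations. A minimal sketch (the 10%-of-range default threshold is an illustrative choice, not the paper's setting):

    ```python
    import numpy as np

    def recurrence_plot(x, eps=None):
        """Binary recurrence matrix R[i, j] = 1 where |x_i - x_j| <= eps.

        eps defaults to 10% of the series range; in practice the threshold
        is a tuning parameter.
        """
        x = np.asarray(x, dtype=float)
        if eps is None:
            eps = 0.1 * (x.max() - x.min())
        dist = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
        return (dist <= eps).astype(np.uint8)

    def recurrence_rate(R):
        """Fraction of recurrent points -- one simple scalar feature."""
        return float(R.mean())
    ```

    The resulting binary image can then be fed to any image-feature extractor; the recurrence rate above is just the simplest scalar one can read off it.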

    Practopoiesis: Or how life fosters a mind

    The mind is a biological phenomenon. Thus, biological principles of organization should also be the principles underlying mental operations. Practopoiesis states that the key to achieving intelligence through adaptation is an arrangement in which mechanisms lying at a lower level of organization, by their operations and interaction with the environment, enable the creation of mechanisms lying at a higher level of organization. When such an organizational advance of a system occurs, it is called a traverse. One case of a traverse is when plasticity mechanisms (at a lower level of organization), by their operations, create a neural network anatomy (at a higher level of organization). Another case is the actual production of behavior by that network, whereby the mechanisms of neuronal activity operate to create motor actions. Practopoietic theory explains why the adaptability of a system increases with each increase in the number of traverses. With a larger number of traverses, a system can be relatively small and yet produce a higher degree of adaptive/intelligent behavior than a system with a lower number of traverses. The present analyses indicate that the two well-known traverses, neural plasticity and neural activity, are not sufficient to explain human mental capabilities. At least one additional traverse is needed, which is named anapoiesis for its contribution to reconstructing knowledge, e.g., from long-term memory into working memory. The conclusions bear implications for brain theory, the mind-body explanatory gap, and the development of artificial intelligence technologies. Comment: Revised version in response to reviewer comments

    Testing numerical relativity with the shifted gauge wave

    Computational methods are essential to provide waveforms from coalescing black holes, which are expected to produce strong signals for the gravitational wave observatories being developed. Although partial simulations of the coalescence have been reported, scientifically useful waveforms have so far not been delivered. The goal of the Apples with Apples (AwA) Alliance is to design, coordinate and document standardized code tests for comparing numerical relativity codes. The first round of AwA tests has now been completed and the results are being analyzed. These initial tests are based upon periodic boundary conditions designed to isolate the performance of the main evolution code. Here we describe and carry out an additional test with periodic boundary conditions which deals with an essential feature of the black hole excision problem, namely a non-vanishing shift. The test is a shifted version of the existing AwA gauge wave test. We show how a shift introduces an exponentially growing instability which violates the constraints of a standard harmonic formulation of Einstein's equations. We analyze the Cauchy problem in a harmonic gauge and discuss particular options for suppressing instabilities in the gauge wave tests. We implement these techniques in a finite difference evolution algorithm and present test results. Although our application here is limited to a model problem, the techniques should benefit the simulation of black holes using harmonic evolution codes. Comment: Submitted to special numerical relativity issue of Classical and Quantum Gravity
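    For orientation, a shifted gauge wave of this kind can be written in Kerr-Schild form; the form below is a sketch of the general construction (amplitude A and wavelength d are the test parameters), not a quotation of the exact metric used in the paper:

    ```latex
    ds^2 = \eta_{ab}\,dx^a dx^b + H\,k_a k_b\,dx^a dx^b,
    \qquad k_a\,dx^a = dt - dx,
    \qquad H = A \sin\!\left(\frac{2\pi (x - t)}{d}\right)
    ```

    Expanding the components gives g_tt = -1 + H, g_tx = -H, g_xx = 1 + H, so the slicing acquires a nonzero shift vector (beta^x = -H/(1+H)), which is precisely the feature of black hole excision that the unshifted AwA gauge wave test does not exercise.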

    Semantics-driven event clustering in Twitter feeds

    Detecting events using social media such as Twitter has many useful applications in real-life situations. Many algorithms, all using different information sources - textual, temporal, geographic or community features - have been developed to achieve this task. Semantic information is often added at the end of the event detection to classify events into semantic topics, but semantic information can also be used to drive the actual event detection, an approach less covered by academic research. We therefore supplemented an existing baseline event clustering algorithm with semantic information about the tweets in order to improve its performance. This paper lays out the details of the semantics-driven event clustering algorithms developed, discusses a novel method to aid in the creation of a ground truth for event detection purposes, and analyses how well the algorithms improve over the baseline. We find that assigning semantic information to every individual tweet actually results in a worse F1 score than the baseline. If, however, semantics are assigned at a coarser, hashtag level, the improvement over the baseline is substantial and significant in both precision and recall.
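    The coarse-grained idea - attach semantics to hashtags rather than to individual tweets - can be sketched as a majority vote. This toy version is an assumption-laden illustration, not the paper's algorithm; the per-tweet labels stand in for the output of a real semantic classifier:

    ```python
    from collections import Counter, defaultdict

    def hashtag_topics(tweets):
        """Assign each hashtag the majority semantic label of tweets using it.

        tweets: list of (semantic_label, hashtags) pairs, where semantic_label
        is an illustrative stand-in for a per-tweet semantic classifier output.
        """
        votes = defaultdict(Counter)
        for label, tags in tweets:
            for tag in tags:
                votes[tag][label] += 1
        return {tag: counts.most_common(1)[0][0] for tag, counts in votes.items()}

    def cluster_by_hashtag_topic(tweets):
        """Group tweet indices by the majority topic of their first hashtag
        (taking the first hashtag is a simplification for this sketch)."""
        topics = hashtag_topics(tweets)
        clusters = defaultdict(list)
        for i, (_, tags) in enumerate(tweets):
            key = topics[tags[0]] if tags else None
            clusters[key].append(i)
        return dict(clusters)
    ```

    The vote smooths out per-tweet classification noise, which is one plausible reason hashtag-level semantics beat per-tweet semantics in the experiments.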

    An efficient iterative method to reduce eccentricity in numerical-relativity simulations of compact binary inspiral

    We present a new iterative method to reduce eccentricity in black-hole-binary simulations. Given a good first estimate of low-eccentricity starting momenta, we evolve puncture initial data for ~4 orbits and construct improved initial parameters by comparing the inspiral with post-Newtonian calculations. Our method is the first to be applied directly to the gravitational-wave (GW) signal, rather than the orbital motion. The GW signal is in general less contaminated by gauge effects, which, in moving-puncture simulations, limit orbital-motion-based measurements of the eccentricity to an uncertainty of Δe ∼ 0.002, making it difficult to reduce the eccentricity below this value. Our new method can reach eccentricities below 10⁻³ in one or two iteration steps; we find that this is well below the requirements for GW astronomy in the advanced detector era. Our method can be readily adapted to any compact-binary simulation with GW emission, including black-hole-binary simulations that use alternative approaches, and neutron-star-binary simulations. We also comment on the differences in eccentricity estimates based on the strain h and the Newman-Penrose scalar Ψ₄. Comment: 24 pages, 25 figures, pdflatex; v2: minor changes
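    A common way to measure such residual eccentricity is to fit a smooth "circular" trend to a frequency series and read the eccentricity off the oscillatory residual, e(t) = (ω - ω_circ)/(2ω_circ). The sketch below uses an ad hoc polynomial fit as the smooth trend; the fit degree and this estimator choice are illustrative assumptions, not the paper's specific procedure:

    ```python
    import numpy as np

    def eccentricity_estimate(t, omega, fit_degree=4):
        """Estimate eccentricity from oscillations in a frequency series.

        Fits a smooth polynomial trend omega_circ(t) to omega(t) and applies
        the common estimator e(t) = (omega - omega_circ) / (2 * omega_circ).
        Returns the residual amplitude and the full e(t) series.
        """
        coeffs = np.polyfit(t, omega, fit_degree)
        omega_circ = np.polyval(coeffs, t)
        e_t = (omega - omega_circ) / (2.0 * omega_circ)
        return float(np.max(np.abs(e_t))), e_t
    ```

    Applied to the GW frequency rather than the orbital frequency, the same estimator is less affected by gauge oscillations, which is the key point of the method described above.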

    Statistical algorithm for nonuniformity correction in focal-plane arrays

    A statistical algorithm has been developed to compensate for the fixed-pattern noise associated with spatial nonuniformity and temporal drift in the response of focal-plane-array infrared imaging systems. The algorithm uses initial scene data to generate initial estimates of the gain, the offset, and the variance of the additive electronic noise of each detector element. It then updates these parameters using subsequent frames and applies the updated parameters to restore the true image with a least-mean-square-error finite-impulse-response filter. The algorithm is applied to infrared data, and the restored images compare favorably with those obtained using a multiple-point calibration technique.
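    The core idea - estimating per-detector gain and offset statistically from scene data instead of from calibration targets - can be illustrated with the simpler constant-statistics variant of nonuniformity correction. This is a sketch of that related technique, not the paper's LMS filter:

    ```python
    import numpy as np

    def constant_statistics_nuc(frames):
        """Constant-statistics nonuniformity correction (illustrative sketch).

        Assumes every detector element sees the same temporal scene statistics,
        so each pixel's temporal mean and standard deviation serve as its
        offset and gain estimates, which are then used to normalize the data.
        frames: array of shape (num_frames, H, W).
        """
        frames = np.asarray(frames, dtype=float)
        mu = frames.mean(axis=0)            # per-pixel temporal mean -> offset estimate
        sigma = frames.std(axis=0) + 1e-12  # per-pixel temporal std  -> gain estimate
        return (frames - mu) / sigma        # remove fixed-pattern gain/offset
    ```

    After correction, a spatially flat scene maps to a spatially flat image regardless of each element's individual gain and offset; the paper's algorithm refines such estimates recursively frame by frame rather than in one batch.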