
    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    The radio transceiver of an IoT device is often where most of its energy is consumed. For this reason, most research so far has focused on low-power circuits and energy-efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable in themselves, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary cause of performance degradation in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) against communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through suitable learning architectures and flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
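The first technical thrust, balancing signal-dependent processing against communication, can be illustrated with a toy model: compressing harder costs CPU energy but puts fewer bits on the air. Everything below (energy constants, the linear distortion model, function names) is illustrative, not taken from the paper.

```python
# Hypothetical energy model for trading compression effort against radio
# transmission cost. All constants are illustrative, not from the paper.

def total_energy(n_bits, ratio, e_tx_per_bit=50e-9, e_cpu_per_bit=5e-9):
    """Energy (J) to compress n_bits at a given compression ratio and
    transmit the compressed payload."""
    cpu = e_cpu_per_bit * n_bits * ratio      # processing cost grows with effort
    tx = e_tx_per_bit * n_bits / ratio        # fewer bits on air at higher ratio
    return cpu + tx

def best_ratio(n_bits, ratios, max_distortion, distortion):
    """Pick the ratio minimising energy subject to a distortion budget.
    `distortion` maps ratio -> expected reconstruction distortion."""
    feasible = [r for r in ratios if distortion(r) <= max_distortion]
    return min(feasible, key=lambda r: total_energy(n_bits, r))

ratios = [1, 2, 4, 8, 16]
distortion = lambda r: 0.01 * (r - 1)         # toy linear distortion model
r_star = best_ratio(10_000, ratios, max_distortion=0.1, distortion=distortion)
print(r_star)
```

Under this toy model the optimum sits strictly between "send raw" and "compress maximally", which is the qualitative point of the lifetime-versus-distortion trade-off.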

    Fitting Analysis using Differential Evolution Optimization (FADO): Spectral population synthesis through genetic optimization under self-consistency boundary conditions

    The goal of population spectral synthesis (PSS) is to decipher from the spectrum of a galaxy the mass, age and metallicity of its constituent stellar populations. This technique has been established as a fundamental tool in extragalactic research. It has been extensively applied to large spectroscopic data sets, notably the SDSS, leading to important insights into galaxy assembly history. However, despite significant improvements over the past decade, all current PSS codes suffer from two major deficiencies that inhibit us from gaining sharp insights into the star-formation history (SFH) of galaxies and potentially introduce substantial biases in studies of their physical properties (e.g., stellar mass, mass-weighted stellar age and specific star formation rate). These are i) the neglect of nebular emission in spectral fits and, consequently, ii) the lack of a mechanism that ensures consistency between the best-fitting SFH and the observed nebular emission characteristics of a star-forming (SF) galaxy. In this article, we present FADO (Fitting Analysis using Differential evolution Optimization): a conceptually novel, publicly available PSS tool with the distinctive capability of identifying the SFH that reproduces the observed nebular characteristics of a SF galaxy. This so-far unique self-consistency concept allows us to significantly alleviate degeneracies in current spectral synthesis. The innovative character of FADO is further augmented by its mathematical foundation: FADO is the first PSS code to employ genetic differential evolution optimization. This, in conjunction with other unique elements of its mathematical concept (e.g., optimization of the spectral library using artificial intelligence, convergence tests, quasi-parallelization), results in key improvements in computational efficiency and in the uniqueness of the best-fitting SFHs. Comment: 25 pages, 12 figures; accepted to A&A.
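A minimal sketch of the differential evolution scheme FADO builds on (the classic DE/rand/1/bin variant), demonstrated on a toy least-squares problem rather than spectral fitting. Population size, F, CR and the test function are illustrative choices, not FADO's actual configuration.

```python
import random

# Minimal differential evolution (DE/rand/1/bin) on a toy quadratic problem.
# All hyperparameters are illustrative; FADO's genetic DE is far richer.

def differential_evolution(cost, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip mutant to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            c_trial = cost(trial)
            if c_trial <= costs[i]:           # greedy one-to-one selection
                pop[i], costs[i] = trial, c_trial
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Toy problem: recover (3, -2) by minimising a quadratic "fit residual".
x, c = differential_evolution(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                              bounds=[(-10, 10), (-10, 10)])
print(round(x[0], 2), round(x[1], 2))
```

The appeal of DE for spectral fitting is visible even here: it needs no gradients, handles box constraints trivially, and the whole population can be evaluated in parallel.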

    Topology Control Multi-Objective Optimisation in Wireless Sensor Networks: Connectivity-Based Range Assignment and Node Deployment

    The distinguishing characteristic that sets topology control apart from other methods aimed at energy minimisation and increased network capacity is its network-wide perspective: local choices made at the node level are always directed toward a global, network-wide property, while still allowing for more localised factors. Accordingly, our approach is a centralised computation over the available location-based data that reduces it to a set of non-homogeneous transmitting-range assignments which together yield a network-wide property, namely strong connectedness and/or biconnectedness. To this end, we propose a multi-morphic variety of genetic algorithm (GA): depending on model parameters that can be set dynamically by the user, it optimises either a single objective function or multiple objective functions, in either case leveraging the unique faculty of GAs for finding multiple optimal solutions in a single pass, leaving the designer to select the solution that best meets requirements. By means of simulation, we evaluate its performance against an optimisation typifying a standard topology control technique in the literature, measured as the proportion of time the network exhibits strong connectedness. An analysis of the results indicates that this measure is highly sensitive to the effective maximum transmitting range, node density, and the mobility scenario under observation. We derive an estimate of the optimal parameter settings for the specific conditions of a WSN application domain, and conclude that only the GA optimising for biconnected components achieves the stated objective of sustained connectivity throughout the observation period.
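The biconnectedness property the GA optimises for can be tested with a standard depth-first-search check: a connected graph is biconnected iff it has no articulation point. A plain low-link sketch (node IDs and example topologies are illustrative, not from the thesis):

```python
# Biconnectivity check via articulation points (Hopcroft-Tarjan low-link DFS).

def is_biconnected(nodes, edges):
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low = {}, {}
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for w in adj[u]:
            if w == parent:
                continue
            if w in disc:
                low[u] = min(low[u], disc[w])   # back edge
            else:
                children += 1
                if dfs(w, u):
                    return True                 # cut vertex found deeper down
                low[u] = min(low[u], low[w])
                if parent is not None and low[w] >= disc[u]:
                    return True                 # u separates w's subtree
        return parent is None and children > 1  # root with 2+ DFS children

    has_cut = dfs(nodes[0], None)
    return len(disc) == len(nodes) and not has_cut  # connected and cut-free

ring = [(0, 1), (1, 2), (2, 3), (3, 0)]         # cycle: biconnected
chain = [(0, 1), (1, 2), (2, 3)]                # path: node 1 is a cut vertex
print(is_biconnected([0, 1, 2, 3], ring), is_biconnected([0, 1, 2, 3], chain))
```

This is also why biconnectivity is the stronger target for sustained connectivity: a biconnected topology survives any single node failure, while a merely connected one can be split by losing one cut vertex.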

    A Quantitative Graph-Based Approach to Monitoring Ice-Wedge Trough Dynamics in Polygonal Permafrost Landscapes

    In response to increasing Arctic temperatures, ice-rich permafrost landscapes are undergoing rapid changes. In permafrost lowlands, polygonal ice wedges are especially prone to degradation. Melting of ice wedges results in deepening troughs and the transition from low-centered to high-centered ice-wedge polygons. This process has important implications for surface hydrology, as the connectivity of such troughs determines the rate of drainage for these lowland landscapes. In this study, we present a comprehensive, modular, and highly automated workflow to extract, to represent, and to analyze remotely sensed ice-wedge polygonal trough networks as a graph (i.e., network structure). With computer vision methods, we efficiently extract the trough locations as well as their geomorphometric information on trough depth and width from high-resolution digital elevation models and link these data within the graph. Further, we present and discuss the benefits of graph analysis algorithms for characterizing the erosional development of such thaw-affected landscapes. Based on our graph analysis, we show how thaw subsidence has progressed between 2009 and 2019 following burning at the Anaktuvuk River fire scar in northern Alaska, USA. We observed a considerable increase in the number of discernible troughs within the study area, while simultaneously the number of disconnected networks decreased from 54 small networks in 2009 to only six considerably larger disconnected networks in 2019. On average, the width of the troughs has increased by 13.86%, while the average depth has decreased by 10.31%. Overall, our new automated approach allows for monitoring ice-wedge dynamics in unprecedented spatial detail, while simultaneously reducing the data to quantifiable geometric measures and spatial relationships. Funding: BMBF PermaRisk; National Science Foundation. Peer reviewed.
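The graph representation described above can be sketched in a few lines: troughs become edges carrying geomorphometric attributes (width, depth), and the drainage-connectivity question reduces to counting disconnected components. The junction labels and measurements below are toy values, not data from the study.

```python
from collections import defaultdict

# Trough network as a graph: edges = troughs between junction nodes, each
# carrying width/depth attributes. Counting components answers how fragmented
# the drainage network is. All values here are illustrative.

def components(edges):
    adj = defaultdict(set)
    for u, v, _attrs in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), 0
    for start in adj:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:                          # iterative DFS flood fill
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node] - seen)
    return comps

# Trough edges: (junction_a, junction_b, {width_m, depth_m}) -- toy survey.
troughs = [
    ("A", "B", {"width_m": 1.4, "depth_m": 0.3}),
    ("B", "C", {"width_m": 1.1, "depth_m": 0.2}),
    ("D", "E", {"width_m": 0.9, "depth_m": 0.4}),   # separate network
]
mean_width = sum(e[2]["width_m"] for e in troughs) / len(troughs)
print(components(troughs), round(mean_width, 2))
```

The 2009-to-2019 change reported in the abstract (54 small components merging into six larger ones) is exactly the kind of statistic this reduction makes quantifiable.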

    Asteroid lightcurves from the Palomar Transient Factory survey: Rotation periods and phase functions from sparse photometry

    We fit 54,296 sparsely-sampled asteroid lightcurves in the Palomar Transient Factory to a combined rotation plus phase-function model. Each lightcurve consists of 20+ observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude and other attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ~53,000 fitted periods. By this method we find 9,033 of our lightcurves (of 8,300 unique asteroids) have reliable periods. Subsequent consideration of asteroids with multiple lightcurve fits indicates 4% contamination in these reliable periods. For 3,902 lightcurves with sufficient phase-angle coverage and either a reliably-fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the Bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than 2 g/cm^3, while C types have a lower limit of between 1 and 2 g/cm^3, in agreement with previous density estimates. For 5-20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types' differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes and apparent-magnitude residuals to those of the Minor Planet Center's nominal G=0.15, rotation-neglecting model; our phase-function plus Fourier-series fitting reduces asteroid photometric RMS scatter by a factor of 3. Comment: 35 pages, 29 figures. Accepted 15-Apr-2015 to The Astronomical Journal (AJ). Supplementary material including ASCII data tables will be available through the publishing journal's website.
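Period fitting from sparse photometry can be sketched with a simple phase-dispersion scan: fold the lightcurve at trial periods and prefer the fold where magnitudes cluster tightly within phase bins. This stands in for the paper's Fourier-series fit; the simulated data, noise level, and grid are illustrative assumptions.

```python
import math, random

# Toy period search: score each trial period by the within-bin variance of
# the phase-folded magnitudes (small when the fold is coherent).

def phase_dispersion(times, mags, period, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[int(phase * n_bins) % n_bins].append(m)
    total = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            total += sum((m - mu) ** 2 for m in b)
    return total

rng = random.Random(0)
true_period = 0.31                            # days, simulated rotator
times = [rng.uniform(0, 30) for _ in range(80)]   # sparse, one "opposition"
mags = [0.2 * math.sin(2 * math.pi * t / true_period) + rng.gauss(0, 0.02)
        for t in times]

trials = [0.2 + 0.001 * k for k in range(201)]    # 0.200-0.400 d grid
best = min(trials, key=lambda p: phase_dispersion(times, mags, p))
print(round(best, 3))
```

Even this crude scan illustrates the paper's reliability caveat: with sparser sampling, lower amplitude, or more noise, spurious aliases start to score as well as the true period.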

    Link Prediction by De-anonymization: How We Won the Kaggle Social Network Challenge

    This paper describes the winning entry to the IJCNN 2011 Social Network Challenge run by Kaggle.com. The goal of the contest was to promote research on real-world link prediction, and the dataset was a graph obtained by crawling the popular Flickr social photo sharing website, with user identities scrubbed. By de-anonymizing much of the competition test set using our own Flickr crawl, we were able to effectively game the competition. Our attack represents a new application of de-anonymization to gaming machine learning contests, suggesting changes in how future competitions should be run. We introduce a new simulated annealing-based weighted graph matching algorithm for the seeding step of de-anonymization. We also show how to combine de-anonymization with link prediction (the latter is required to achieve good performance on the portion of the test set not de-anonymized), for example by training the predictor on the de-anonymized portion of the test set and combining probabilistic predictions from de-anonymization and link prediction. Comment: 11 pages, 13 figures; submitted to IJCNN 2011.
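The seeding idea can be sketched as simulated annealing over node correspondences between two graphs, scoring a mapping by how many edges it preserves. The toy graphs, swap move, and linear cooling schedule below are illustrative choices, not the authors' weighted algorithm.

```python
import math, random

# Simulated-annealing graph matching sketch: search over bijections between
# node sets, maximising preserved edges. Toy graphs, not the paper's method.

def matched_edges(mapping, edges_a, edges_b):
    b = {frozenset(e) for e in edges_b}
    return sum(frozenset((mapping[u], mapping[v])) in b for u, v in edges_a)

def anneal_match(nodes_a, edges_a, nodes_b, edges_b, steps=5_000, t0=2.0, seed=0):
    rng = random.Random(seed)
    perm = list(nodes_b)
    rng.shuffle(perm)
    mapping = dict(zip(nodes_a, perm))
    score = matched_edges(mapping, edges_a, edges_b)
    best_map, best_score = dict(mapping), score
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3        # linear cooling
        u, v = rng.sample(nodes_a, 2)
        mapping[u], mapping[v] = mapping[v], mapping[u]   # propose a swap
        new = matched_edges(mapping, edges_a, edges_b)
        if new >= score or rng.random() < math.exp((new - score) / t):
            score = new                           # accept the move
            if score > best_score:
                best_map, best_score = dict(mapping), score
        else:
            mapping[u], mapping[v] = mapping[v], mapping[u]  # undo
    return best_map, best_score

# Graph B is graph A with relabelled nodes, so a perfect match keeps all 5 edges.
edges_a = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
relabel = {0: "w", 1: "x", 2: "y", 3: "z"}
edges_b = [(relabel[u], relabel[v]) for u, v in edges_a]
mapping, score = anneal_match([0, 1, 2, 3], edges_a, ["w", "x", "y", "z"], edges_b)
print(score)
```

At de-anonymization scale the same objective is run over weighted subgraphs around candidate seeds; the annealing lets the matcher escape locally plausible but globally wrong correspondences.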

    They are Small Worlds After All: Revised Properties of Kepler M Dwarf Stars and their Planets

    We classified the reddest (r-J > 2.2) stars observed by the NASA Kepler mission into main-sequence dwarf or evolved giant stars and determined the properties of 4216 M dwarfs based on a comparison of available photometry with that of nearby calibrator stars, as well as available proper motions and spectra. We revised the properties of candidate transiting planets using the stellar parameters, high-resolution imaging to identify companion stars, and, in the case of binaries, fitting light curves to identify the likely planet host. In 49 of 54 systems we validated the primary as the host star. We inferred the intrinsic distribution of M dwarf planets using the method of iterative Monte Carlo simulation. We compared several models of planet orbital geometry and clustering and found that one where planets are exponentially distributed and almost precisely coplanar best describes the distribution of multi-planet systems. We determined that Kepler M dwarfs host an average of 2.2 ± 0.3 planets with radii of 1-4 R⊕ and orbital periods of 1.5-180 d. The radius distribution peaks at ~1.2 R⊕ and is essentially zero at 4 R⊕, although we identify three giant planet candidates other than the previously confirmed Kepler-45b. There is suggestive but not significant evidence that the radius distribution varies with orbital period. The distribution with logarithmic orbital period is flat except for a decline for orbits shorter than a few days. Twelve candidate planets, including two Jupiter-size objects, experience an irradiance below the threshold level for a runaway greenhouse on an Earth-like planet and are thus in a "habitable zone". Comment: MNRAS, in press. Tables 1, 3, and 4 are available in electronic form in the "anc" directory.
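The Monte Carlo step behind occurrence-rate inference can be sketched by drawing planets around simulated stars and keeping only the geometrically transiting ones (transit probability ≈ R*/a). The stellar mass and radius, the log-flat period draw, and the planet count per star below are illustrative assumptions, not the paper's fitted values.

```python
import math, random

# Toy forward model: simulate systems, apply the geometric transit filter,
# and tally "detected" planets. All parameter values are illustrative.

def transit_probability(period_days, r_star_au=0.0023, m_star_msun=0.5):
    # Kepler's third law in solar units: a^3 (AU) = M (M_sun) * P (yr)^2.
    a_au = (m_star_msun * (period_days / 365.25) ** 2) ** (1.0 / 3.0)
    return min(1.0, r_star_au / a_au)     # chance the orbit crosses the disk

def simulate_detected(n_stars, planets_per_star, seed=42):
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_stars):
        for _ in range(planets_per_star):
            # Log-flat period draw over 1.5-180 d, echoing the paper's range.
            period = math.exp(rng.uniform(math.log(1.5), math.log(180.0)))
            if rng.random() < transit_probability(period):
                detected += 1
    return detected

detected = simulate_detected(n_stars=10_000, planets_per_star=2)
print(detected)
```

Because the transit probability falls steeply with period, only a few percent of the simulated planets transit; iterating such forward models against the observed counts is what lets the intrinsic average (2.2 ± 0.3 planets per star in the paper) be inferred.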

    NASA patent abstracts bibliography: A continuing bibliography. Section 1: Abstracts (supplement 38)

    Abstracts are provided for 132 patents and patent applications entered into the NASA scientific and technical information system during the period July 1990 through December 1990. Each entry consists of a citation, an abstract, and in most cases, a key illustration selected from the patent or patent application.