    Rocket experiments for spectral estimation of electron density fine structure in the auroral and equatorial ionosphere and preliminary results

    Sounding rockets equipped to monitor electron density and its fine structure were launched into the auroral and equatorial ionosphere in 1980 and 1983, respectively. The measurement electronics are based on the Langmuir probe and are described in detail. An approach to the spectral analysis of the density irregularities is addressed, and a software algorithm implementing the approach is given. Preliminary results of the analysis are presented.
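    The abstract does not reproduce the spectral-analysis algorithm itself. As a hedged illustration of the kind of computation involved, the Python sketch below estimates the power spectral density of a synthetic electron-density fluctuation record with Welch's method and fits a spectral index over an assumed frequency band; the sampling rate, test signal and band are assumptions, not values from the paper.

    # Illustrative only: Welch PSD estimate of a density-fluctuation time series.
    # Assumed sampling rate and synthetic signal; not the paper's algorithm or data.
    import numpy as np
    from scipy.signal import welch

    fs = 5000.0                      # assumed sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)     # 10 s record
    rng = np.random.default_rng(0)

    # Synthetic relative density fluctuation delta_n / n: white noise shaped
    # toward a red, power-law-like spectrum by cumulative summation (illustrative).
    delta_n = np.cumsum(rng.standard_normal(t.size))
    delta_n -= delta_n.mean()

    # Welch's method: average periodograms of overlapping, windowed segments.
    freqs, psd = welch(delta_n, fs=fs, nperseg=4096, noverlap=2048, detrend="linear")

    # Fit a spectral index over an assumed band (10-500 Hz).
    band = (freqs > 10) & (freqs < 500)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    print(f"estimated spectral index: {slope:.2f}")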

    Stability

    Reproducibility is imperative for any scientific discovery. More often than not, modern scientific findings rely on statistical analysis of high-dimensional data. At a minimum, reproducibility manifests itself in the stability of statistical results relative to "reasonable" perturbations to the data and to the model used. The jackknife, the bootstrap, and cross-validation are based on perturbations to data, while robust statistics methods deal with perturbations to models. In this article, a case is made for the importance of stability in statistics. Firstly, we motivate the necessity of stability for interpretable and reliable encoding models from brain fMRI signals. Secondly, we find strong evidence in the literature to demonstrate the central role of stability in statistical inference, such as sensitivity analysis and effect detection. Thirdly, a smoothing parameter selector based on estimation stability (ES), ES-CV, is proposed for the Lasso, in order to bring stability to bear on cross-validation (CV). ES-CV is then utilized in the encoding models to reduce the number of predictors by 60% with almost no loss (1.3%) of prediction performance across over 2,000 voxels. Last, a novel "stability" argument is seen to drive new results that shed light on the intriguing interactions between sample-to-sample variability and heavier-tailed error distributions (e.g., double-exponential) in high-dimensional regression models with p predictors and n independent samples. In particular, when p/n → κ ∈ (0.3, 1) and the error distribution is double-exponential, the Ordinary Least Squares (OLS) estimator is better than the Least Absolute Deviation (LAD) estimator. Comment: Published at http://dx.doi.org/10.3150/13-BEJSP14 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
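    The abstract defines ES-CV only at a high level; the exact criterion is given in the cited paper. The sketch below illustrates an estimation-stability selector for the Lasso in that spirit: for each penalty value, the model is refit on several data splits, and the penalty whose fitted prediction vectors disagree least (relative to their size) is preferred. The number of splits, the stability statistic and the toy data are assumptions for illustration, not the paper's definitions.

    # Hedged sketch of an estimation-stability criterion in the spirit of ES-CV.
    # Details (number of splits, stability statistic, tie-break with plain CV)
    # are assumptions, not the published ES-CV procedure.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import KFold

    def es_curve(X, y, alphas, n_splits=5, seed=0):
        """For each Lasso penalty, refit on several data splits and measure how
        much the fitted prediction vectors disagree, relative to their size."""
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        es = []
        for alpha in alphas:
            preds = []
            for train_idx, _ in kf.split(X):
                model = Lasso(alpha=alpha, max_iter=10000)
                model.fit(X[train_idx], y[train_idx])
                preds.append(model.predict(X))        # predictions on all of X
            preds = np.array(preds)
            mean_pred = preds.mean(axis=0)
            spread = np.mean(np.sum((preds - mean_pred) ** 2, axis=1))
            es.append(spread / max(np.sum(mean_pred ** 2), 1e-12))
        return np.array(es)

    # Toy usage with simulated data (assumed sizes, not from the paper).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    beta = np.zeros(50); beta[:5] = 1.0
    y = X @ beta + rng.standard_normal(200)
    alphas = np.logspace(-3, 0, 20)
    es = es_curve(X, y, alphas)
    print("alpha minimizing the stability criterion:", alphas[np.argmin(es)])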

    First Results from the CHARA Array. II. A Description of the Instrument

    The CHARA Array is an optical/IR interferometric array of six 1-m telescopes located on Mount Wilson, California, designed and built by the Center for High Angular Resolution Astronomy of Georgia State University. In this paper we describe the main elements of the Array hardware and software control systems, as well as the data reduction methods currently being used. Our plans for upgrades in the near future are also described.

    Multipath/RFI/modulation study for DRSS-RFI problem: Voice coding and intelligibility testing for a satellite-based air traffic control system

    Analog and digital voice coding techniques for application to an L-band satellite-based air traffic control (ATC) system for over-ocean deployment are examined. In addition to performance, the techniques are compared on the basis of cost, size, weight, power consumption, availability, reliability, and multiplexing features. Candidate systems are chosen on the basis of minimum required RF bandwidth and received carrier-to-noise density ratios. A detailed survey of automated and nonautomated intelligibility testing methods and devices is presented, and comparisons are given. Subjective evaluation of speech systems by preference tests is considered. Conclusions and recommendations are developed regarding the selection of the voice system. Likewise, conclusions and recommendations are developed for the appropriate use of intelligibility tests, speech quality measurements, and preference tests within the framework of the proposed ATC system.
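    The abstract compares candidate systems on required RF bandwidth and received carrier-to-noise density ratio. As a generic illustration of the arithmetic behind such a comparison (not the study's actual link budget), the required C/N0 in dB-Hz is the required Eb/N0 plus 10·log10 of the bit rate; the candidate bit rates and Eb/N0 figures below are assumed values.

    # Generic link-budget arithmetic for comparing digital voice candidates.
    # The bit rates and Eb/N0 requirements are illustrative assumptions,
    # not figures from the study.
    import math

    def required_cn0_dbhz(bit_rate_bps, ebn0_db):
        """Required carrier-to-noise density ratio: C/N0 = Eb/N0 + 10*log10(Rb)."""
        return ebn0_db + 10 * math.log10(bit_rate_bps)

    candidates = {
        "2.4 kbit/s vocoder": (2400, 7.0),        # (bit rate, assumed Eb/N0 in dB)
        "9.6 kbit/s delta modulation": (9600, 9.0),
        "16 kbit/s CVSD": (16000, 10.0),
    }
    for name, (rb, ebn0) in candidates.items():
        print(f"{name}: C/N0 ≈ {required_cn0_dbhz(rb, ebn0):.1f} dB-Hz")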

    Resource-efficient strategies for mobile ad-hoc networking

    The ubiquity and widespread availability of wireless mobile devices with ever-increasing inter-connectivity (e.g. by means of Bluetooth, WiFi or UWB) have led to new and emerging next-generation mobile communication paradigms, such as Mobile Ad-hoc NETworks (MANETs). MANETs are differentiated from traditional mobile systems by their unique properties, e.g. unpredictable nodal location, unstable topology and multi-hop packet relay. The success of on-going research in communications involving MANETs has encouraged their application in areas with stringent performance requirements such as e-healthcare, e.g. to connect them with existing systems to deliver e-healthcare services anytime, anywhere. However, given that the capacity of mobile devices is restricted by their resource constraints (e.g. computing power, energy supply and bandwidth), a fundamental challenge in MANETs is how to realize the crucial performance/Quality of Service (QoS) expectations of communications in a network of high dynamism without overusing the limited resources. A variety of networking technologies (e.g. routing, mobility estimation and connectivity prediction) have been developed to overcome the topological instability and unpredictability and to enable communications in MANETs with satisfactory performance or QoS. However, these technologies often feature a high consumption of power and/or bandwidth, which makes them unsuitable for resource-constrained handheld or embedded mobile devices. In particular, existing strategies for routing and mobility characterization are shown to achieve fairly good performance, but at the expense of excessive traffic overhead or energy consumption. For instance, existing hybrid routing protocols in dense MANETs are based on two-dimensional organizations that produce heavy proactive traffic. In sparse MANETs, existing packet delivery strategies often replicate too many copies of a packet for a given QoS target. In addition, existing tools for measuring nodal mobility are based on either GPS or GPS-free positioning systems, which incur intensive communications/computations that are costly for battery-powered terminals. There is a need to develop economical networking strategies (in terms of resource utilization) that deliver the desired performance/soft QoS targets. The main goal of this project is to develop new networking strategies (in particular, for routing and mobility characterization) that are efficient in terms of resource consumption while being effective in realizing performance expectations for communication services (e.g. in the scenario of an e-healthcare emergency) with critical QoS requirements in resource-constrained MANETs. The main contributions of the thesis are threefold: (1) In order to tackle the inefficient bandwidth utilization of hybrid service/routing discovery in dense MANETs, a novel "track-based" scheme is developed. The scheme deploys a one-dimensional track-like structure for hybrid routing and service discovery. In comparison with existing hybrid routing/service discovery protocols that are based on two-dimensional structures, the track-based scheme is more efficient in terms of traffic overhead (e.g. about 60% less in low-mobility scenarios, as shown in Fig. 3.4). Due to the way "provocative tracks" are established, the scheme also has the capability to adapt to the network traffic and mobility for better performance.
    (2) To minimize the resource utilization of packet delivery in sparse MANETs, where wireless links are intermittently connected, a store-and-forward-based scheme, "adaptive multicopy routing", was developed. Instead of relying on the source to control the delivery overhead as in conventional multi-copy protocols, the scheme allows each intermediate node to decide independently whether to forward a packet according to the soft QoS target and local network conditions. Therefore, the scheme can adapt to varying networking situations that cannot be anticipated in conventional source-defined strategies, and can deliver packets to a specific QoS target using minimum traffic overhead. (3) The important issue of mobility measurement, which imposes heavy communication/computation burdens on a mobile node, is addressed with a set of resource-efficient "GPS-free" solutions, which provide mobility characterization with minimal resource utilization for ranging and signalling by making use of the information in the time-varying ranges between neighbouring mobile nodes (or groups of mobile nodes). The range-based solutions for mobility characterization consist of a new mobility metric for network-wide performance measurement, two velocity estimators for approximating inter-node relative speeds, and a new scheme for characterizing nodal mobility. The new metric and its variants are capable of capturing the mobility of a network as well as predicting its performance. The velocity estimators are used to measure the speed and orientation of a mobile node relative to its neighbours, given the presence of a departing node. Based on the velocity estimators, the new scheme for mobility characterization is capable of characterizing those aspects of a node's mobility that are associated with topological stability, i.e. the node's speeds and orientations relative to its neighbouring nodes and its past epoch time. BIOPATTERN EU Network of Excellence (EU Contract 508803).
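    The abstract describes the GPS-free mobility solutions only in outline. As a hedged sketch of the underlying idea (not the thesis's estimators), the code below infers a neighbour's relative radial speed from a short series of range measurements by least-squares fitting range against time; the sampling interval and noise model are assumptions for the example.

    # Illustrative GPS-free relative-speed estimate from range measurements.
    # Not the thesis's estimator: it simply fits range vs. time with least squares,
    # so the slope approximates the radial component of the relative velocity.
    import numpy as np

    def radial_speed(ranges_m, timestamps_s):
        """Least-squares slope of range over time (m/s); positive = moving apart."""
        slope, _intercept = np.polyfit(timestamps_s, ranges_m, 1)
        return slope

    # Toy usage: a neighbour receding at ~2 m/s, ranges sampled once per second
    # with additive measurement noise (all values assumed for the example).
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 10.0, 1.0)
    true_range = 50.0 + 2.0 * t
    measured = true_range + rng.normal(0.0, 0.5, size=t.size)
    print(f"estimated relative radial speed: {radial_speed(measured, t):.2f} m/s")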

    Doctor of Philosophy

    In wireless sensor networks, knowing the location of the wireless sensors is critical in many remote sensing and location-based applications, from asset tracking and structural monitoring to geographical routing. For a majority of these applications, received signal strength (RSS)-based localization algorithms are a cost-effective and viable solution. However, RSS measurements vary unpredictably because of fading, the shadowing caused by the presence of walls and obstacles in the path, and non-isotropic antenna gain patterns, which affect the performance of RSS-based localization algorithms. This dissertation aims to provide efficient models for the measured RSS and use the lessons learned from these models to develop and evaluate efficient localization algorithms. The first contribution of this dissertation is to model the correlation in shadowing across link pairs. We propose a non-site-specific statistical joint path loss model between a set of static nodes. Radio links that are geographically proximate often experience similar environmental shadowing effects and thus have correlated shadowing. Using a large number of multi-hop network measurements in an ensemble of indoor and outdoor environments, we show statistically significant correlations among the shadowing experienced on different links in the network. Finally, we analyze multihop paths in three- and four-node networks using both correlated and independent shadowing models and show that independent shadowing models can underestimate the probability of route failure by a factor of two or greater. Second, we study a special class of algorithms, called kernel-based localization algorithms, that use kernel methods as a tool for learning the correlation between RSS measurements. Kernel methods simplify RSS-based localization algorithms by providing a means to learn the complicated relationship between RSS measurements and position. We present a common mathematical framework for kernel-based localization algorithms to study and compare the performance of four different kernel-based localization algorithms from the literature. We show via simulations and an extensive measurement data set that kernel-based localization algorithms can perform better than model-based algorithms. Results show that kernel methods can achieve an RMSE up to 55% lower than a model-based algorithm. Finally, we propose a novel distance estimator for estimating the distance between two nodes a and b using indirect link measurements, which are the measurements made between a and k, for k ≠ b, and between b and k, for k ≠ a. Traditionally, distance estimators use only the direct link measurement, which is the pairwise measurement between nodes a and b. The results show that the estimator that uses indirect link measurements enables better distance estimation than the estimator that uses direct link measurements.
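    The dissertation's kernel-based framework is not spelled out in the abstract. As a hedged sketch of the general class of methods it studies, the code below estimates a position as a Gaussian-kernel weighted average of fingerprint positions, with weights computed from the distance between RSS vectors; the kernel, its bandwidth and the synthetic fingerprint data are assumptions for illustration, not the dissertation's specific algorithms.

    # Hedged sketch of a generic kernel-based RSS localization step: estimate a
    # position as a Gaussian-kernel weighted average of training (fingerprint)
    # positions, with weights from the distance between RSS vectors.
    import numpy as np

    def kernel_localize(rss_query, rss_train, pos_train, bandwidth_db=4.0):
        """rss_query: (n_aps,) dBm; rss_train: (n_points, n_aps); pos_train: (n_points, 2)."""
        d2 = np.sum((rss_train - rss_query) ** 2, axis=1)   # squared RSS distance
        w = np.exp(-d2 / (2.0 * bandwidth_db ** 2))          # Gaussian kernel weights
        w /= w.sum()
        return w @ pos_train                                  # weighted position estimate

    # Toy usage with a synthetic log-distance path-loss fingerprint grid
    # (path-loss exponent, reference power, and grid are assumed values).
    rng = np.random.default_rng(2)
    aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
    grid = np.array([[x, y] for x in range(0, 21, 2) for y in range(0, 21, 2)], float)

    def rss_from(pos):
        d = np.linalg.norm(aps - pos, axis=1) + 0.1
        return -40.0 - 10 * 3.0 * np.log10(d) + rng.normal(0, 2.0, size=len(aps))

    rss_train = np.array([rss_from(p) for p in grid])
    true_pos = np.array([7.0, 12.0])
    print("estimate:", kernel_localize(rss_from(true_pos), rss_train, grid))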