
    Case studies on the use of LiveLink for MATLAB for evaluation and optimization of the heat sources in an experimental borehole

    In the Czech part of the Upper Silesian Coal Basin (Moravian-Silesian region, Czech Republic), there are many sites of endogenous combustion (e.g., localized burning soil bodies, landfills containing industrial waste, or slag rocks produced by mining processes). The Hedwig mining dump is one such site where, besides the temperature and the concentrations of toxic gases, electric and non-electric quantities are also monitored within the framework of an experimentally proposed and patented technology for heat collection (the so-called "Pershing" system). Based on these quantities, this paper deals with the determination and evaluation of negative heat sources and the optimization of the positive heat source as a function of the temperatures measured at evaluation points or along a thermal profile. The optimization problem is defined from a balance of the heat sources in the steady state, searching for a local minimum of the objective function for the heat source. In terms of implementation, the numerical model of the heat collector in COMSOL is coupled with a user-defined optimization algorithm in MATLAB via LiveLink for MATLAB. The results are elaborated in five case studies based on sensitivity testing of the numerical model against input data from the evaluation points. The tests focused on the model's behavior during preprocessing of the measurement data from each chamber of the heat collector and for estimated temperature differences at 90% and 110% of the nominal value. It turned out that the numerical model is more sensitive to these estimates than to the measured chamber data, and this finding does not depend on the type of optimization algorithm. Validating the model with the mean-square error yielded an optimal value that also holds with respect to the other evaluations.
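
    The optimization loop described above reduces to minimizing a scalar misfit between modeled and measured temperatures. The Python sketch below illustrates that structure only; simulate_temperatures is a hypothetical stand-in for the COMSOL heat-collector model invoked through LiveLink for MATLAB, and all numbers are invented.

        # Minimal sketch of the heat-source optimization, assuming a hypothetical
        # black-box model simulate_temperatures(q) in place of the COMSOL model
        # called via LiveLink for MATLAB. All data here are illustrative.
        import numpy as np
        from scipy.optimize import minimize_scalar

        # Temperatures measured at the evaluation points (hypothetical values).
        t_measured = np.array([41.2, 39.8, 43.5, 40.1])

        def simulate_temperatures(q_positive):
            """Stand-in for the numerical model: steady-state temperatures at the
            evaluation points for a given positive heat source q [W/m^3]."""
            base = np.array([30.0, 29.0, 31.5, 29.5])
            gain = np.array([0.011, 0.010, 0.012, 0.010])
            return base + gain * q_positive

        def objective(q_positive):
            # Mean-square error between modeled and measured temperatures,
            # the validation criterion mentioned in the abstract.
            residual = simulate_temperatures(q_positive) - t_measured
            return np.mean(residual ** 2)

        result = minimize_scalar(objective, bounds=(0.0, 2000.0), method="bounded")
        print(f"optimal heat source: {result.x:.1f} W/m^3, MSE: {result.fun:.4f}")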

    Increased sky coverage with optimal correction of tilt and tilt-anisoplanatism modes in laser-guide-star multiconjugate adaptive optics

    Laser-guide-star multiconjugate adaptive optics (MCAO) systems require natural guide stars (NGS) to measure tilt and tilt-anisoplanatism modes. Making optimal use of the limited number of photons coming from such generally dim sources is mandatory to obtain reasonable sky coverage, i.e., the probability of finding asterisms amenable to NGS wavefront (WF) sensing for a predefined WF error budget. This paper presents a Strehl-optimal (minimum residual variance) spatiotemporal reconstructor merging principles of modal atmospheric tomography and optimal stochastic control theory. Simulations of NFIRAOS, the first-light MCAO system for the Thirty Meter Telescope, using ∼500 typical NGS asterisms show that the minimum-variance (MV) controller delivers outstanding results, in particular for cases with relatively dim stars (down to magnitude 22 in the H band), for which low temporal frame rates (as low as 16 Hz) are required to integrate enough flux. Over all the cases tested, a median improvement of ∼21 nm rms in WF error is achieved with the MV controller compared to the current baseline, a type-II controller based on a double integrator. This means that for a given level of tolerable residual WF error, the sky coverage is increased by roughly 10%, a quite significant figure. The improvement rises to more than 20% when compared with a traditional single-integrator controller.
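
    The core of a minimum-variance reconstructor of this kind is the MMSE estimator for modal coefficients given noisy slope measurements and Gaussian priors. A minimal Python sketch follows, with illustrative placeholder matrices rather than the NFIRAOS ones; the spatiotemporal and control aspects of the paper are omitted.

        # Minimum-variance (MMSE) modal reconstruction from noisy NGS slopes
        # s = S a + n. S, C_aa, and C_nn below are illustrative placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        n_modes, n_slopes = 5, 12
        S = rng.standard_normal((n_slopes, n_modes))   # modal interaction matrix
        C_aa = np.diag([1.0, 1.0, 0.3, 0.3, 0.1])      # prior modal covariance
        C_nn = 0.05 * np.eye(n_slopes)                 # measurement-noise covariance

        # MV reconstructor: R = C_aa S^T (S C_aa S^T + C_nn)^(-1)
        R = C_aa @ S.T @ np.linalg.inv(S @ C_aa @ S.T + C_nn)

        a_true = rng.multivariate_normal(np.zeros(n_modes), C_aa)
        s = S @ a_true + rng.multivariate_normal(np.zeros(n_slopes), C_nn)
        a_hat = R @ s
        print("residual rms:", np.sqrt(np.mean((a_hat - a_true) ** 2)))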

    A Fast Chi-squared Technique For Period Search of Irregularly Sampled Data

    A new, computationally and statistically efficient algorithm, the Fast χ² algorithm, can find a periodic signal with harmonic content in irregularly sampled data with non-uniform errors. The algorithm calculates the minimized χ² as a function of frequency at the desired number of harmonics, using Fast Fourier Transforms to provide O(N log N) performance. The code for a reference implementation is provided. (Source code for the reference implementation is available at http://public.lanl.gov/palmer/fastchi.html. Accepted by ApJ; 24 pages, 4 figures.)
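
    For clarity, here is a brute-force Python sketch of the underlying chi-squared period search: at each trial frequency, a harmonic model is fit by weighted least squares and the minimized χ² is recorded. The paper's contribution is performing this in O(N log N) via FFTs; the direct version below, with invented toy data, is for illustration only.

        # Direct (non-FFT) chi-squared periodogram: weighted least-squares fit
        # of a harmonic model at each trial frequency. Illustrative only.
        import numpy as np

        def chi2_periodogram(t, y, sigma, freqs, n_harmonics=2):
            w = 1.0 / sigma**2
            chi2 = np.empty(len(freqs))
            for i, f in enumerate(freqs):
                # Design matrix: constant term plus cos/sin of each harmonic.
                cols = [np.ones_like(t)]
                for k in range(1, n_harmonics + 1):
                    phase = 2 * np.pi * k * f * t
                    cols += [np.cos(phase), np.sin(phase)]
                A = np.column_stack(cols)
                # Weighted least squares: minimize sum_i w_i (y_i - (A c)_i)^2.
                Aw = A * np.sqrt(w)[:, None]
                c, *_ = np.linalg.lstsq(Aw, y * np.sqrt(w), rcond=None)
                chi2[i] = np.sum(w * (y - A @ c) ** 2)
            return chi2

        # Irregularly sampled toy data containing a 0.3-day period signal.
        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 20, 200))
        sigma = rng.uniform(0.05, 0.15, t.size)
        y = np.sin(2 * np.pi * t / 0.3) + sigma * rng.standard_normal(t.size)
        freqs = np.linspace(0.5, 5.0, 2000)
        best = freqs[np.argmin(chi2_periodogram(t, y, sigma, freqs))]
        print(f"best frequency: {best:.3f} cycles/day (true: {1 / 0.3:.3f})")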

    Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation

    Sensor networks potentially feature large numbers of nodes that can sense their environment over time, communicate with each other over a wireless network, and process information. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks, addressing four fundamental issues: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad-hoc network is modeled as a random graph, and the critical range for asymptotic connectivity is determined, as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address the issue of how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the task that sensor networks are designed for: gathering relevant information. One needs to study optimal strategies for in-network aggregation of data in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address how such computation can be performed efficiently in a sensor network, and present algorithms for doing so for some classes of functions. (10 pages, 3 figures; submitted to the Proceedings of the IEEE.)
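
    The critical-range result for connectivity can be probed numerically: for n nodes placed uniformly in the unit square and a radius r with πr²n = log n + c, the network becomes connected with probability approaching one as c grows. A small Monte Carlo sketch in Python, with illustrative parameters:

        # Monte Carlo estimate of P(connected) for a random geometric graph at
        # radius r = sqrt((log n + c) / (pi n)). Parameters are illustrative.
        import numpy as np

        def is_connected(points, r):
            n = len(points)
            d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
            adj = d2 <= r * r
            seen = np.zeros(n, dtype=bool)
            seen[0] = True
            stack = [0]
            while stack:                      # depth-first search from node 0
                u = stack.pop()
                for v in np.flatnonzero(adj[u] & ~seen):
                    seen[v] = True
                    stack.append(v)
            return seen.all()

        rng = np.random.default_rng(2)
        n, trials = 500, 100
        for c in (-2.0, 0.0, 2.0, 4.0):
            r = np.sqrt((np.log(n) + c) / (np.pi * n))
            p = np.mean([is_connected(rng.uniform(size=(n, 2)), r)
                         for _ in range(trials)])
            print(f"c = {c:+.1f}: P(connected) ~ {p:.2f}")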

    Variable Point Sources in Sloan Digital Sky Survey Stripe 82. I. Project Description and Initial Catalog (0 h < R.A. < 4 h)

    We report the first results of a study of variable point sources identified using multi-color time-series photometry from Sloan Digital Sky Survey (SDSS) Stripe 82 over a span of nearly 10 years (1998-2007). We construct a light-curve catalog of 221,842 point sources in the R.A. 0-4 h half of Stripe 82, limited to r = 22.0, that have at least 10 detections in the ugriz bands and color errors of < 0.2 mag. These objects are then classified by color and by cross-matching them to existing SDSS catalogs of interesting objects. We use inhomogeneous ensemble differential photometry techniques to greatly improve our sensitivity to variability. Robust variable-identification methods are used to extract 6520 variable candidates in this dataset, resulting in an overall variable fraction of ~2.9% at the level of 0.05 mag variability. A search for periodic variables results in the identification of 30 eclipsing/ellipsoidal binary candidates, 55 RR Lyrae, and 16 Delta Scuti variables. We also identify 2704 variable quasars matched to the SDSS Quasar catalog (Schneider et al. 2007), as well as an additional 2403 quasar candidates identified by their non-stellar colors and variability properties. Finally, a sample of 11,328 point sources that appear to be nonvariable at the limits of our sensitivity is also discussed. (Abridged.) (67 pages, 27 figures; accepted for publication in ApJS. Catalog available at http://shrike.pha.jhu.edu/stripe82-variable)
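
    A common first cut for variable identification in such catalogs is a reduced χ² test about the weighted mean magnitude. The Python sketch below shows that statistic only; the threshold and light curves are invented, not the paper's actual selection criteria.

        # Flag a light curve as variable when its reduced chi^2 about the
        # weighted mean exceeds a threshold. Data and cut are illustrative.
        import numpy as np

        def reduced_chi2(mag, err):
            w = 1.0 / err**2
            mean = np.sum(w * mag) / np.sum(w)   # weighted mean magnitude
            return np.sum(w * (mag - mean) ** 2) / (len(mag) - 1)

        rng = np.random.default_rng(3)
        err = np.full(20, 0.02)
        steady = 18.0 + err * rng.standard_normal(20)
        variable = (18.0 + 0.05 * np.sin(np.linspace(0, 4 * np.pi, 20))
                    + err * rng.standard_normal(20))
        for name, mag in [("steady", steady), ("variable", variable)]:
            chi2 = reduced_chi2(mag, err)
            print(f"{name}: chi2_red = {chi2:.2f} ->",
                  "variable" if chi2 > 3.0 else "not variable")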

    Archiving multi-epoch data and the discovery of variables in the near infrared

    We present a description of the design and usage of a new synoptic pipeline and database model for time-series photometry in the VISTA Data Flow System (VDFS). All UKIRT-WFCAM data and most of the VISTA main-survey data will be processed and archived by the VDFS. Many of these data are multi-epoch, useful for finding moving and variable objects. Our new database design allows users to easily find rare objects of these types amongst the huge volume of data being produced by modern survey telescopes. Its effectiveness is demonstrated through examples using Data Release 5 of the UKIDSS Deep Extragalactic Survey (DXS) and the WFCAM standard-star data. The synoptic pipeline provides additional quality control and calibration to these data in the process of generating accurate light curves. We find that 0.6±0.1% of stars and 2.3±0.6% of galaxies in the UKIDSS-DXS with K < 15 mag are variable with amplitudes ΔK > 0.015 mag. (30 pages, 31 figures; MNRAS, in press. Minor changes from the previous version due to refereeing and proof-reading.)
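
    At the heart of any such synoptic archive is the step that associates detections from repeated epochs into per-source light curves by sky position. A toy Python sketch of that matching step, using a k-d tree and a fixed match radius; the coordinates, scatter, and 1-arcsec radius are illustrative, not the VDFS algorithm.

        # Associate multi-epoch detections to master sources by position.
        # All positions, scatter, and the match radius are invented.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        sources = rng.uniform(0.0, 0.1, size=(100, 2))   # master positions (deg)
        match_radius = 1.0 / 3600.0                      # 1 arcsec in degrees
        tree = cKDTree(sources)

        light_curves = {i: [] for i in range(len(sources))}
        for epoch in range(5):
            # Each epoch re-detects the sources with small astrometric scatter.
            detections = sources + rng.normal(0.0, 0.1 / 3600.0, sources.shape)
            mags = 16.0 + rng.normal(0.0, 0.02, len(sources))
            dist, idx = tree.query(detections, distance_upper_bound=match_radius)
            for d, i, m in zip(dist, idx, mags):
                if np.isfinite(d):                       # matched within radius
                    light_curves[i].append(m)

        print("epochs matched to source 0:", len(light_curves[0]))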