
    Enhancing statistical wind speed forecasting models : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Manawatū Campus, New Zealand

    In recent years, wind speed forecasting models have seen significant development, with hybrid models in particular emerging over the last decade. Hybrid models combine two or more techniques from several categories, each contributing its distinct strengths. Data-driven models, comprising statistical and Artificial Intelligence/Machine Learning (AI/ML) models, are mainly deployed in hybrid models for short forecasting horizons (< 6 hours). The literature shows that machine learning models have gained enormous traction owing to their accuracy and robustness. By contrast, only a handful of studies address the performance enhancement of statistical models, even though hybrid models are incomplete without them. To address this knowledge gap, this thesis identifies the shortcomings of traditional statistical models and enhances their prediction accuracy. Three statistical models are considered for analysis: the Grey Model [GM(1,1)], the Markov Chain, and Holt's Double Exponential Smoothing. First, the problems that limit the forecasting models' applicability are highlighted, including negative wind speed predictions, failure to meet predetermined accuracy levels, non-optimal estimates, and additional computational cost with limited performance gain. To address these concerns, improved forecasting models are proposed using wind speed data from Palmerston North, New Zealand. Several methodologies are developed to improve model performance and satisfy the necessary and sufficient conditions: a dynamic moving-window adjustment, a self-adaptive state categorization algorithm, an approach similar to the leave-one-out method, and a mixed initialization method. With hybrid applications in view, novel MODWT-ARIMA-Markov and AGO-HDES models are further proposed as secondary objectives.
Also, a comprehensive analysis compares sixteen models from three categories across four case studies, three rolling windows, and three forecasting horizons. Overall, the improved models show higher accuracy than their traditional counterparts. Finally, future directions requiring further research to improve forecasting performance are highlighted.
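The classical GM(1,1) grey model named in this abstract can be sketched as follows; this is a minimal illustration of the accumulated generating operation (AGO) and the grey differential fit, not the thesis's enhanced variant, and the `gm11_forecast` helper name and example data are assumptions for illustration:

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a positive 1-D series and forecast ahead.

    x1 is the accumulated generating operation (AGO) of x0; the grey
    differential equation x0[k] + a*z[k] = b is fitted by least squares
    to the background values z[k] = (x1[k] + x1[k-1]) / 2.
    """
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]           # AGO series
    z = [(x1[i] + x1[i - 1]) / 2 for i in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # Ordinary least squares for y = c1*z + c0, where c1 = -a and c0 = b.
    sx, sy = sum(z), sum(y)
    sxx = sum(v * v for v in z)
    sxy = sum(v * w for v, w in zip(z, y))
    c1 = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    c0 = (sy - c1 * sx) / m
    a, b = -c1, c0
    # Time-response function in the AGO domain, then inverse AGO to forecast.
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a
              for k in range(n + steps)]
    return [x1_hat[k] - x1_hat[k - 1] for k in range(n, n + steps)]
```

Because the fitted response is a single exponential, plain GM(1,1) can produce the negative or non-optimal estimates the thesis criticises when the series is not smooth and positive.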

    Evolution of the hot flow of MAXI J1543-564

    We present a spectral and timing analysis of the black hole candidate MAXI J1543-564 during its 2011 outburst. As shown in previous work, the source follows the standard evolution of a black hole outburst. During the rising phase of the outburst we detect an abrupt change in timing behavior associated with the occurrence of a type-B quasi-periodic oscillation (QPO). This QPO and the simultaneously detected radio emission mark the transition between the hard and soft intermediate states. We fit power spectra from the rising phase of the outburst using the recently proposed model propfluc, which assumes a truncated disc / hot inner flow geometry, with mass accretion rate fluctuations propagating through a precessing inner flow. We link the propfluc physical parameters to the phenomenological multi-Lorentzian fit parameters. The physical parameter dominating the QPO frequency is the truncation radius, while the broad-band noise characteristics are also influenced by the radial surface density and emissivity profiles of the flow. In the outburst rise we found that the truncation radius decreases from r_o ~ 24 R_g to 10 R_g, and the surface density increases faster than the mass accretion rate, as previously reported for XTE J1550-564. Two soft intermediate state observations could not be fitted with propfluc, and we suggest that they coincide with the ejection of material from the inner regions of the flow in a jet, or with accretion of these regions through the black hole horizon, explaining the drop in QPO frequency and the suppression of broad-band variability, preferentially at high energy bands, coincident with a radio flare. Comment: 13 pages, 11 figures, 2 tables
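For background on the multi-Lorentzian fits mentioned above, a single component of such a power-spectrum model is a Lorentzian in frequency; a minimal sketch, where the function name and the fractional-rms normalisation convention are assumptions and not the propfluc code:

```python
import math

def lorentzian_psd(nu, nu0, hwhm, frac_rms):
    """One component of a multi-Lorentzian power spectrum: a Lorentzian
    centred at frequency nu0 with half width at half maximum `hwhm`,
    normalised so that its integral over all frequencies is frac_rms**2."""
    return frac_rms ** 2 * hwhm / (math.pi * ((nu - nu0) ** 2 + hwhm ** 2))
```

A narrow component (small `hwhm`) models a QPO peak such as the type-B oscillation, while broad components (large `hwhm`, often with nu0 = 0) model the band-limited noise whose shape propfluc ties to the flow's surface density and emissivity profiles.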

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively large number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly detection algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UEs in the data cell, exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power (RSRP) statistics between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes.
Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for them in a reliable manner.
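The k-nearest-neighbor anomaly detection used for control-cell outage detection can be sketched as a distance-based scorer over UE measurement-report vectors; the function name, the toy feature vectors, and the use of the mean kNN distance as the anomaly score are illustrative assumptions, not the paper's exact detector:

```python
import math

def knn_anomaly_scores(reports, k=3):
    """Score each measurement-report vector by its mean Euclidean distance
    to its k nearest neighbours: reports from an outage zone sit far from
    the normal profile and therefore receive large scores."""
    scores = []
    for i, p in enumerate(reports):
        # Distances from report i to every other report, nearest first.
        dists = sorted(math.dist(p, q) for j, q in enumerate(reports) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores
```

A threshold on these scores (for instance, a high percentile of scores observed under normal operation) would then flag candidate outages; the local-outlier-factor variant compared in the paper additionally normalises each score by the local density of the point's neighbours, making it less sensitive to clusters of differing density.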

    Test-retest reliability of structural brain networks from diffusion MRI

    Structural brain networks constructed from diffusion MRI (dMRI) and tractography have been demonstrated in healthy volunteers and more recently in various disorders affecting brain connectivity. However, few studies have addressed the reproducibility of the resulting networks. We measured the test–retest properties of such networks by varying several factors affecting network construction using ten healthy volunteers who underwent a dMRI protocol at 1.5 T on two separate occasions. Each T1-weighted brain was parcellated into 84 regions-of-interest and network connections were identified using dMRI and two alternative tractography algorithms, two alternative seeding strategies, a white matter waypoint constraint and three alternative network weightings. In each case, four common graph-theoretic measures were obtained. Network properties were assessed both node-wise and per network in terms of the intraclass correlation coefficient (ICC) and by comparing within- and between-subject differences. Our findings suggest that test–retest performance was improved when: 1) seeding from white matter, rather than grey; and 2) using probabilistic tractography with a two-fibre model and sufficient streamlines, rather than deterministic tensor tractography. In terms of network weighting, a measure of streamline density produced better test–retest performance than tract-averaged diffusion anisotropy, although it remains unclear which is a more accurate representation of the underlying connectivity. For the best performing configuration, the global within-subject differences were between 3.2% and 11.9% with ICCs between 0.62 and 0.76. The mean nodal within-subject differences were between 5.2% and 24.2% with mean ICCs between 0.46 and 0.62. For 83.3% (70/84) of nodes, the within-subject differences were smaller than between-subject differences. 
Overall, these findings suggest that whilst current techniques produce networks capable of characterising the genuine between-subject differences in connectivity, future work must be undertaken to improve network reliability
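The intraclass correlation coefficients reported above are computed from a subjects-by-sessions table of network metrics. A minimal sketch of the two-way random-effects, absolute-agreement, single-measurement form ICC(2,1) follows; the helper name is an assumption, and the abstract does not state which ICC variant the study used:

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` is a list of rows, one row of k repeated measurements per
    subject (e.g. a graph metric from the two scan sessions)."""
    n = len(scores)        # subjects
    k = len(scores[0])     # sessions per subject
    grand = sum(sum(r) for r in scores) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(r[j] for r in scores) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_tot = sum((y - grand) ** 2 for r in scores for y in r)
    ms_r = ss_rows / (n - 1)                                   # between-subject
    ms_c = ss_cols / (k - 1)                                   # between-session
    ms_e = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Values near 1 indicate that between-subject differences dominate session-to-session noise, which is the property the study requires of a reliable network metric (its best configurations reached ICCs of 0.62 to 0.76).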

    Phenomenological model of diffuse global and regional atrophy using finite-element methods

    The main goal of this work is the generation of ground-truth data for the validation of atrophy measurement techniques, commonly used in the study of neurodegenerative diseases such as dementia. Several techniques have been used to measure atrophy in cross-sectional and longitudinal studies, but it is extremely difficult to compare their performance since they have been applied to different patient populations. Furthermore, assessment of performance based on phantom measurements or simple scaled images overestimates these techniques' ability to capture the complexity of neurodegeneration of the human brain. We propose a method for atrophy simulation in structural magnetic resonance (MR) images based on finite-element methods. The method produces cohorts of brain images with known change that is physically and clinically plausible, providing data for objective evaluation of atrophy measurement techniques. Atrophy is simulated in different tissue compartments or in different neuroanatomical structures with a phenomenological model. This model of diffuse global and regional atrophy is based on volumetric measurements, such as those of the whole brain or the hippocampus, from patients with known disease, and is guided by clinical knowledge of the relative pathological involvement of regions and tissues. The consequent biomechanical readjustment of structures is modelled using conventional physics-based techniques, based on biomechanical tissue properties and simulating plausible tissue deformations with finite-element methods. A thermoelastic model of tissue deformation is employed, controlling the rate of progression of atrophy by means of a set of thermal coefficients, each corresponding to a different type of tissue. Tissue characterization is performed by meshing a labelled brain atlas, creating a reference volumetric mesh that is then passed to a finite-element solver to create the simulated deformations.
Preliminary work on the simulation of acquisition artefacts is also presented. Cross-sectional and

    Towards in vivo g-ratio mapping using MRI: unifying myelin and diffusion imaging

    The g-ratio, quantifying the comparative thickness of the myelin sheath encasing an axon, is a geometrical invariant that has high functional relevance because of its importance in determining neuronal conduction velocity. Advances in MRI data acquisition and signal modelling have put in vivo mapping of the g-ratio, across the entire white matter, within our reach. This capacity would greatly increase our knowledge of the nervous system: how it functions, and how it is impacted by disease. This is the second review on the topic of g-ratio mapping using MRI. As such, it summarizes the most recent developments in the field, while also providing methodological background pertinent to aggregate g-ratio weighted mapping, and discussing pitfalls associated with these approaches. Using simulations based on recently published data, this review demonstrates the relevance of the calibration step for three myelin markers (macromolecular tissue volume, myelin water fraction, and bound pool fraction). It highlights the need to estimate both the slope and offset of the relationship between these MRI-based markers and the true myelin volume fraction if we are to achieve the goal of precise, high-sensitivity g-ratio mapping in vivo. Other challenges discussed in this review further evidence the need for gold standard measurements of human brain tissue from ex vivo histology. We conclude that the quest to find the most appropriate MRI biomarkers to enable in vivo g-ratio mapping is ongoing, with the potential of many novel techniques yet to be investigated. Comment: Will be published as a review article in Journal of Neuroscience Methods as part of the Special Issue with Hu Cheng and Vince Calhoun as Guest Editors
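The aggregate g-ratio weighting discussed above combines a myelin volume fraction (MVF) and an axon volume fraction (AVF) per voxel, with the MVF obtained from a linearly calibrated myelin marker as the review advocates. A minimal sketch, where the function names and any example slope/offset values are illustrative assumptions:

```python
import math

def calibrated_mvf(marker, slope, offset):
    """Map an MRI myelin marker (e.g. macromolecular tissue volume, myelin
    water fraction, or bound pool fraction) to a myelin volume fraction via
    a linear calibration whose slope and offset must both be estimated
    against histology, as the review emphasises."""
    return slope * marker + offset

def aggregate_g_ratio(mvf, avf):
    """Aggregate g-ratio of a voxel: the ratio of inner (axonal) to outer
    (fibre) diameter implied by the myelin and axon volume fractions."""
    return math.sqrt(avf / (avf + mvf))
```

For instance, with mvf = 0.3 and avf = 0.5 the voxel-wise value is sqrt(0.5 / 0.8) ≈ 0.79; getting the slope or offset of the marker calibration wrong shifts mvf and biases every g-ratio map derived from it, which is why the review treats calibration as a central pitfall.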

    Clementine Observations of the Zodiacal Light and the Dust Content of the Inner Solar System

    Using the Moon to occult the Sun, the Clementine spacecraft used its navigation cameras to map the inner zodiacal light at optical wavelengths over elongations of 3-30 degrees from the Sun. This surface brightness map is then used to infer the spatial distribution of interplanetary dust over heliocentric distances of about 10 solar radii to the orbit of Venus. We also apply a simple model that attributes the zodiacal light to three dust populations having distinct inclination distributions, namely, dust from asteroids and Jupiter-family comets (JFCs), dust from Halley-type comets, and an isotropic cloud of dust from Oort Cloud comets. The best-fitting scenario indicates that asteroids + JFCs are the source of about 45% of the optical dust cross-section seen in the ecliptic at 1 AU, but that at least 89% of the dust cross-section enclosed by a 1 AU radius sphere is of cometary origin. When these results are extrapolated out to the asteroid belt, we find an upper limit on the mass of the light-reflecting asteroidal dust that is equivalent to a 12 km asteroid, and a similar extrapolation of the isotropic dust cloud out to Oort Cloud distances yields a mass equivalent to a 30 km comet, although the latter mass is uncertain by orders of magnitude. Comment: To be published in Icarus