3,113 research outputs found

    Model Selection of Ensemble Forecasting Using Weighted Similarity of Time Series

    Get PDF
    Several methods have been proposed to combine forecasting results into a single forecast, namely simple averaging, averaging weighted by validation performance, or non-parametric combination schemes. These methods use a fixed combination of individual forecasts to obtain the final forecast. In this paper, a different approach is employed to select the forecasting methods: each point to be forecast is computed using the methods that performed best on similar training data, so the selected methods may differ from one forecast point to the next. The similarity measures used to compare the time series for testing and validation are the Euclidean distance and Dynamic Time Warping (DTW), where each compared point is weighted according to its recency. The datasets used in the experiments are the time series designated for the NN3 Competition and time series generated from the frequency of USPTO patents and PubMed scientific publications in the field of health, namely on apnea, arrhythmia, and sleep stages. The experimental results show that the weighted combination of methods selected by the similarity between training and testing data can perform better than either the unweighted combination of methods selected by the same similarity measure or the fixed combination of the best individual forecasts.
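    As a concrete illustration of the per-point selection scheme described above, the sketch below computes recency-weighted Euclidean (or DTW) distances between a test window and candidate training windows and returns a similarity-weighted set of methods. The helper names (recency_weights, select_methods) and the exponential decay weighting are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def recency_weights(n, decay=0.9):
    """Weights that emphasise the most recent points of a window (assumed decay scheme)."""
    return decay ** np.arange(n - 1, -1, -1)

def weighted_euclidean(a, b, w):
    """Euclidean distance with per-point recency weights."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

def weighted_dtw(a, b, w):
    """Simple DTW where each matched pair is weighted by the recency of the test point."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = w[i - 1] * abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def select_methods(test_window, train_windows, best_method_per_window, k=3, use_dtw=False):
    """Pick the forecasting methods used by the k most similar training windows,
    weighting each selected method by the similarity of its window."""
    w = recency_weights(len(test_window))
    dist = weighted_dtw if use_dtw else weighted_euclidean
    dists = np.array([dist(np.asarray(test_window), np.asarray(tw), w) for tw in train_windows])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-12)   # closer windows get more weight
    # The final forecast would be the weighted average of these methods' forecasts.
    return [(best_method_per_window[i], wt / weights.sum()) for i, wt in zip(nearest, weights)]
```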

    Lossless Compression of Medical Image Sequences Using a Resolution Independent Predictor and Block Adaptive Encoding

    Get PDF
    The block-based lossless coding technique presented in this paper targets the compression of volumetric medical images of 8-bit and 16-bit depth. The novelty of the proposed technique lies in its selection of a threshold for prediction and of an optimal block size for encoding. A resolution-independent gradient edge detector is used together with a block-adaptive arithmetic encoding algorithm, and extensive experimental tests are carried out to find a universal threshold value and an optimal block size independent of image resolution and modality. The performance of the proposed technique is demonstrated and compared with benchmark lossless compression algorithms. Bits-per-pixel (BPP) values obtained from the proposed algorithm show that it effectively reduces inter-pixel and coding redundancy. In terms of coding efficiency, the proposed technique outperforms CALIC and JPEG-LS on volumetric medical images by 0.70% and 4.62%, respectively.
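    The abstract does not give the predictor itself, so the sketch below only illustrates the general idea of a gradient-based, edge-adaptive predictor whose residuals would then be fed to a block-adaptive entropy coder; the threshold T and the fallback rules are hypothetical stand-ins for the universal threshold and predictor the authors derive.

```python
import numpy as np

def gradient_predictor(img, T=8):
    """Predict each pixel from its causal neighbours (W, N, NW) using local gradients.

    A large difference between horizontal and vertical gradients suggests an edge,
    so the predictor falls back to the neighbour across the weaker gradient;
    otherwise it uses a planar estimate. T is an illustrative threshold.
    """
    img = img.astype(np.int64)        # works for both 8-bit and 16-bit inputs
    pred = np.zeros_like(img)
    H, W = img.shape
    for r in range(H):
        for c in range(W):
            w  = img[r, c - 1] if c > 0 else 0
            n  = img[r - 1, c] if r > 0 else 0
            nw = img[r - 1, c - 1] if r > 0 and c > 0 else 0
            dh = abs(w - nw)   # horizontal gradient estimate
            dv = abs(n - nw)   # vertical gradient estimate
            if dv - dh > T:    # strong vertical edge -> predict from the west
                pred[r, c] = w
            elif dh - dv > T:  # strong horizontal edge -> predict from the north
                pred[r, c] = n
            else:              # smooth region -> planar estimate
                pred[r, c] = w + n - nw
    return pred

def residuals(img, T=8):
    """Prediction residuals that a block-adaptive entropy coder would compress."""
    return img.astype(np.int64) - gradient_predictor(img, T)
```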

    Stroke lesion size: Still a useful biomarker for stroke severity and outcome in times of high-dimensional models

    Get PDF
    BACKGROUND The volumetric size of a brain lesion is a frequently used stroke biomarker. It stands out among most imaging biomarkers for being a one-dimensional variable that can be used in simple statistical models. In times of machine learning algorithms, the question arises of whether such a simple variable is still useful, or whether high-dimensional models built on spatial lesion information are superior. METHODS We included 753 first-ever anterior circulation ischemic stroke patients (age 68.4±15.2 years; NIHSS at 24 h 4.4±5.1; modified Rankin Scale (mRS) at 3 months, median [IQR] 1 [0.75; 3]) and traced lesions on diffusion-weighted MRI. In an out-of-sample model validation scheme, we predicted stroke severity, measured by the NIHSS at 24 h, and functional stroke outcome, measured by the mRS at 3 months, from either spatial lesion features or lesion size. RESULTS For stroke severity, the best regression model based on lesion size performed significantly above chance (p < 0.0001) with R2 = 0.322, but models with spatial lesion features performed significantly better with R2 = 0.363 (t(752) = 2.889; p = 0.004). For stroke outcome, the best classification model based on lesion size again performed significantly above chance (p < 0.0001) with an accuracy of 62.8%, which was not different from the best model with spatial lesion features (62.6%, p = 0.80). With smaller training data sets of only 150 or 50 patients, the performance of the high-dimensional models with spatial lesion features decreased to the point of being equivalent or even inferior to models trained on lesion size. Combining lesion size and spatial lesion features in one model did not improve predictions. CONCLUSIONS Lesion size is a decent biomarker for stroke outcome and severity that is only slightly inferior to spatial lesion features and is particularly suited to studies with small samples. When low-dimensional models are desired, lesion size provides a viable proxy for spatial lesion features, whereas high-precision prediction models in personalised prognostic medicine should operate with high-dimensional spatial imaging features in large samples.
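    A minimal sketch of the low- versus high-dimensional comparison, using synthetic data in place of the study's lesion maps and NIHSS scores; ridge regression is assumed here as the high-dimensional model, and the out-of-sample scheme is plain k-fold cross-validation, which may differ from the study's validation setup.

```python
import numpy as np
from sklearn.linear_model import RidgeCV, LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: lesion_maps is (n_patients, n_voxels) of binary lesion masks,
# nihss_24h is the stroke-severity score per patient.
rng = np.random.default_rng(0)
n_patients, n_voxels = 200, 5000
lesion_maps = rng.random((n_patients, n_voxels)) < 0.02
nihss_24h = lesion_maps.sum(axis=1) * 0.01 + rng.normal(0, 1, n_patients)

# Low-dimensional biomarker: lesion size (number of lesioned voxels).
lesion_size = lesion_maps.sum(axis=1, keepdims=True).astype(float)
r2_size = cross_val_score(LinearRegression(), lesion_size, nihss_24h,
                          cv=5, scoring="r2").mean()

# High-dimensional model: regularised regression on the full spatial maps.
r2_spatial = cross_val_score(RidgeCV(alphas=np.logspace(-2, 4, 13)),
                             lesion_maps.astype(float), nihss_24h,
                             cv=5, scoring="r2").mean()

print(f"out-of-sample R^2, lesion size only : {r2_size:.3f}")
print(f"out-of-sample R^2, spatial features : {r2_spatial:.3f}")
```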

    A multi-objective performance optimisation framework for video coding

    Get PDF
    Digital video technologies have become an essential part of the way visual information is created, consumed and communicated. However, due to the unprecedented growth of digital video technologies, competition for bandwidth resources has become fierce, highlighting a critical need to optimise the performance of video encoders. This is a dual optimisation problem, wherein the objective is to reduce buffer and memory requirements while maintaining the quality of the encoded video. Additionally, analysis of existing video compression techniques shows that operating a video encoder requires the optimisation of numerous decision parameters to achieve the best trade-offs between the factors that affect visual quality, given the resource limitations arising from operational constraints such as memory and complexity. The research in this thesis focuses on optimising the performance of the H.264/AVC video encoder, a process that involves finding solutions for multiple conflicting objectives. As part of this research, an automated tool was developed for optimising video compression to achieve an optimal trade-off between bit rate and visual quality, given maximum allowed memory and computational complexity constraints, across a diverse range of scene environments. The evaluation of this optimisation framework has highlighted the effectiveness of the developed solution.
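    A schematic sketch of the underlying trade-off selection: given hypothetical measurements of bit rate, PSNR, memory and complexity for candidate encoder configurations, keep only the configurations that satisfy the resource constraints and are Pareto-optimal in rate and quality. The configuration names and numbers are invented for illustration and do not represent the thesis's framework.

```python
from dataclasses import dataclass

@dataclass
class EncoderConfig:
    name: str
    bitrate_kbps: float   # lower is better
    psnr_db: float        # higher is better
    memory_mb: float      # must respect the memory constraint
    complexity: float     # relative encoding time, must respect the complexity constraint

def pareto_front(configs, max_memory_mb, max_complexity):
    """Keep configurations that satisfy the constraints and are not dominated
    (no other feasible config is at least as good in both rate and quality and
    strictly better in one)."""
    feasible = [c for c in configs
                if c.memory_mb <= max_memory_mb and c.complexity <= max_complexity]
    front = []
    for c in feasible:
        dominated = any(o.bitrate_kbps <= c.bitrate_kbps and o.psnr_db >= c.psnr_db
                        and (o.bitrate_kbps < c.bitrate_kbps or o.psnr_db > c.psnr_db)
                        for o in feasible)
        if not dominated:
            front.append(c)
    return sorted(front, key=lambda c: c.bitrate_kbps)

# Hypothetical measurements from sweeping encoder decision parameters.
candidates = [
    EncoderConfig("fast-search", 480, 35.1, 24, 1.0),
    EncoderConfig("full-search", 455, 35.6, 24, 3.2),
    EncoderConfig("large-GOP",   430, 34.8, 48, 1.4),
    EncoderConfig("hi-quality",  520, 36.0, 32, 2.1),
]
for cfg in pareto_front(candidates, max_memory_mb=40, max_complexity=2.5):
    print(cfg.name, cfg.bitrate_kbps, cfg.psnr_db)
```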

    Towards Precision Psychiatry: Gray Matter Development and Cognition in Adolescence

    Get PDF
    Precision Psychiatry promises a new era of optimized psychiatric diagnosis and treatment through comprehensive, data-driven patient stratification. Among the core requirements towards that goal are: 1) neurobiology-guided preprocessing and analysis of brain imaging data for noninvasive characterization of brain structure and function, and 2) integration of imaging, genomic, cognitive, and clinical data in accurate and interpretable predictive models for diagnosis, and treatment choice and monitoring. In this thesis, we shall touch on specific aspects that fit under these two broad points. First, we investigate normal gray matter development around adolescence, a critical period for the development of psychopathology. For years, the common narrative in human developmental neuroimaging has been that gray matter declines in adolescence. We demonstrate that different MRI-derived gray matter measures exhibit distinct age and sex effects and should not be considered equivalent, as has often been done in the past, but complementary. We show for the first time that gray matter density increases from childhood to young adulthood, in contrast with gray matter volume and cortical thickness, and that females, who are known to have lower gray matter volume than males, have higher density throughout the brain. A custom preprocessing pipeline and a novel high-resolution gray matter parcellation were created to analyze brain scans of 1189 youths collected as part of the Philadelphia Neurodevelopmental Cohort. This work emphasizes the need for future studies combining quantitative histology and neuroimaging to fully understand the biological basis of MRI contrasts and their derived measures. Second, we use the same gray matter measures to assess how well they can predict cognitive performance. We train mass-univariate and multivariate models to show that gray matter volume and density are complementary in their ability to predict performance. We suggest that parcellation resolution plays a big role in prediction accuracy and that it should be tuned separately for each modality for a fair comparison among modalities and for an optimal prediction when combining all modalities. Lastly, we introduce rtemis, an R package for machine learning and visualization, aimed at making advanced data analytics more accessible. Adoption of accurate and interpretable machine learning methods in basic research and medical practice will help advance biomedical science and make precision medicine a reality

    The sweet spot: How people trade off size and definition on mobile devices

    Get PDF
    Mobile TV can deliver up-to-date content to users on the move, but it is currently unclear how best to adapt higher-resolution TV content. In this paper, we describe a laboratory study with 35 participants who watched short clips of different content and shot types on a 200 ppi PDA display at a resolution of either 120x90 or 168x128. Participants selected their preferred size and rated the acceptability of the visual experience. The preferred viewing ratio depended on the resolution and had to be at least 9.8H. The minimal angular resolution that people required, which limited the up-scaling factor, was 14 pixels per degree. Extreme long shots were best when the depicted actors were at least 0.7° high. A second study examined the ecological validity of the laboratory results by comparing them to results from the field. Image size yielded more value for users in the field than was apparent from the lab results. In conclusion, current prediction models based on preferred viewing distances for TV and large displays do not predict viewing preferences on mobile devices. Our results will help further the understanding of multimedia perception and help service designers deliver experiences that are both economically viable and enjoyable.
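    The two quantities the study reports, viewing ratio (viewing distance in picture heights, the "H" unit) and angular resolution (pixels per degree), follow directly from the displayed picture height and the viewing distance; the sketch below makes the relationship explicit with an assumed hand-held viewing distance and displayed size.

```python
import math

def video_metrics(video_height_px, displayed_height_mm, viewing_distance_mm):
    """Viewing ratio (distance in picture heights) and angular resolution
    (video pixels per degree of visual angle) for a scaled mobile video."""
    viewing_ratio = viewing_distance_mm / displayed_height_mm
    angular_height_deg = 2 * math.degrees(math.atan(displayed_height_mm / (2 * viewing_distance_mm)))
    pixels_per_degree = video_height_px / angular_height_deg
    return viewing_ratio, pixels_per_degree

# Assumed values for illustration: a 120x90 clip shown 20 mm high on the display,
# viewed at 300 mm (a typical hand-held distance).
vr, ppd = video_metrics(video_height_px=90, displayed_height_mm=20, viewing_distance_mm=300)
print(f"viewing ratio {vr:.1f}H, angular resolution {ppd:.1f} px/deg")
```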

    Learning Linear Dynamical Systems via Spectral Filtering

    Full text link
    We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: we carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee, with an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix. Comment: Published as a conference paper at NIPS 2017
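    A compact sketch of the filtering idea: eigenvectors of a fixed Hankel matrix serve as convolutional filters over the input history, and the resulting features feed a convex (linear) predictor. The Hilbert matrix used below is only a placeholder Hankel matrix; the paper specifies its own matrix and provides the regret analysis.

```python
import numpy as np
from scipy.linalg import hankel, eigh

def spectral_filters(T, k):
    """Top-k eigenvectors of a fixed T x T Hankel matrix, used as filters.

    The Hilbert matrix below (entries 1/(i+j-1)) is a placeholder Hankel matrix;
    the paper defines a specific one whose spectrum decays quickly.
    """
    first_col = 1.0 / np.arange(1, T + 1)
    last_row = 1.0 / np.arange(T, 2 * T)
    Z = hankel(first_col, last_row)
    vals, vecs = eigh(Z)                           # eigenvalues in ascending order
    return vals[::-1][:k], vecs[:, ::-1][:, :k]

def filtered_features(inputs, filters):
    """Convolve the input history with each eigenvector filter at every time step."""
    T, k = filters.shape
    feats = np.zeros((len(inputs), k))
    for t in range(len(inputs)):
        window = inputs[max(0, t - T + 1): t + 1][::-1]   # most recent input first
        feats[t] = filters[: len(window)].T @ window
    return feats

# The learner would then fit a linear predictor of the next output from these
# features (plus recent raw inputs/outputs), which is a convex problem.
T, k = 64, 8
inputs = np.sin(0.3 * np.arange(200)) + 0.1 * np.random.default_rng(0).normal(size=200)
_, filters = spectral_filters(T, k)
X = filtered_features(inputs, filters)
print(X.shape)   # (200, 8) feature matrix for a convex regression
```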

    The Big Picture on Small Screens: Delivering Acceptable Video Quality in Mobile TV

    Get PDF
    Mobile TV viewers can change the viewing distance and (on some devices) scale the picture to their preferred viewing ratio, trading off size for angular resolution. We investigated optimal trade-offs between size and resolution through a series of studies. Participants selected their preferred size and rated the acceptability of the visual experience on a 200 ppi device at a 4:3 aspect ratio. They preferred viewing ratios similar to living-room TV setups regardless of the much lower resolution, down to a minimum of 14 pixels per degree. While traveling on trains, people required videos with a height larger than 35 mm.

    Long-Term Memory Motion-Compensated Prediction

    Get PDF
    Long-term memory motion-compensated prediction extends the spatial displacement vector utilized in block-based hybrid video coding by a variable time delay, permitting the use of more frames than just the previously decoded one for motion-compensated prediction. The long-term memory covers several seconds of decoded frames at the encoder and decoder. The use of multiple frames for motion compensation in most cases provides significantly improved prediction gain. The variable time delay has to be transmitted as side information, requiring an additional bit rate that may become prohibitive when the long-term memory grows too large. Therefore, we control the bit rate of the motion information by employing rate-constrained motion estimation. Simulation results are obtained by integrating long-term memory prediction into an H.263 codec. Reconstruction PSNR improvements of up to 2 dB for the Foreman sequence and 1.5 dB for the Mother–Daughter sequence are demonstrated in comparison to the TMN-2.0 H.263 coder. These PSNR improvements correspond to bit-rate savings of up to 34% and 30%, respectively. Mathematical inequalities are used to speed up motion estimation while achieving the full prediction gain.
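    A simplified sketch of rate-constrained motion estimation over a long-term frame memory: the search minimises a Lagrangian cost combining block distortion (SAD) with a model of the side-information bits for the displacement and the variable time delay. The bit-cost model and the lambda value are illustrative, not the H.263/TMN implementation.

```python
import numpy as np

def mv_bits(dx, dy, frame_delay):
    """Rough model of the side-information cost: larger displacements and
    older reference frames cost more bits (illustrative, not the H.263 VLC)."""
    return abs(dx) + abs(dy) + 2 * frame_delay + 1

def long_term_motion_search(block, ref_frames, y, x, search=8, lam=4.0):
    """Find (frame_delay, dy, dx) minimising SAD + lambda * rate over all
    reference frames kept in the long-term memory."""
    bh, bw = block.shape
    best_cost, best_mv = np.inf, None
    for delay, ref in enumerate(ref_frames):          # delay 0 = most recently decoded frame
        H, W = ref.shape
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                ry, rx = y + dy, x + dx
                if ry < 0 or rx < 0 or ry + bh > H or rx + bw > W:
                    continue                          # candidate block outside the frame
                cand = ref[ry:ry + bh, rx:rx + bw]
                sad = np.abs(block.astype(int) - cand.astype(int)).sum()
                cost = sad + lam * mv_bits(dx, dy, delay)
                if cost < best_cost:
                    best_cost, best_mv = cost, (delay, dy, dx)
    return best_mv
```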