
    Wavelet Methods and Inverse Problems

    Archaeological investigations are designed to acquire information without damaging the archaeological site. Magnetometry is one of the principal techniques for producing a surface grid of readings, which can be used to infer underground features. The inversion of these data, to give a fitted model, is an inverse problem. This type of problem can be ill-posed or ill-conditioned, making the estimation of model parameters unstable or even impossible. More precisely, the relationship between the archaeological data and the parameters is expressed by a likelihood, but because no maximum likelihood estimate exists, the standard regression estimate obtained through the likelihood cannot be used. Instead, various constraints can be added through a prior distribution, with an estimate produced using the posterior distribution. Current approaches incorporate prior information describing smoothness, which is not always appropriate. The biggest challenge is that reconstructing an archaeological site as a single layer requires various physical features, such as depth and extent, to be assumed, and when a smoothing prior is applied to stratigraphy data these features are not easily estimated. Wavelet analysis has proved highly efficient at eliciting information from noisy data, and complicated signals can be described by interpreting only a small number of wavelet coefficients. A modelling approach that describes the underlying function in terms of a multi-level wavelet representation may therefore improve on standard techniques. Further, a new method is proposed that uses an elastic-net based distribution as the prior. Two methods are used to solve the problem: one is based on one-stage estimation and the other on two-stage estimation. The one-stage method considers two approaches: a single prior for all wavelet resolution levels, and a level-dependent prior with separate priors at each resolution level. In a simulation study and a real data analysis, all these techniques are compared to several existing methods. It is shown that the methodology using a single prior provides good reconstruction, comparable even to several established wavelet methods that use mixture priors.
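    As a rough illustration of the wavelet ideas described above (not the thesis's actual one-stage or two-stage estimator), the sketch below shrinks the Haar wavelet coefficients of a noisy piecewise-constant signal with an elastic-net style rule, once with a single penalty shared across resolution levels and once with level-dependent penalties; the penalty values and the synthetic signal are invented for illustration.

        # Wavelet shrinkage with an elastic-net style rule (illustrative values only).
        import numpy as np
        import pywt

        def elastic_net_shrink(coeffs, lam1, lam2):
            """Elastic-net proximal step: soft-threshold by lam1, then scale by 1/(1+lam2)."""
            soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam1, 0.0)
            return soft / (1.0 + lam2)

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 256)
        truth = np.where(x < 0.4, 1.0, np.where(x < 0.7, 3.0, 0.5))   # piecewise-constant "layers"
        y = truth + rng.normal(scale=0.3, size=x.size)                # noisy observations

        coeffs = pywt.wavedec(y, "haar", level=5)
        # Single prior: one (lam1, lam2) pair shared by every resolution level.
        single = [coeffs[0]] + [elastic_net_shrink(c, lam1=0.4, lam2=0.1) for c in coeffs[1:]]
        # Level-dependent prior: heavier shrinkage at finer (noisier) levels.
        leveled = [coeffs[0]] + [elastic_net_shrink(c, lam1=0.2 * (j + 1), lam2=0.1)
                                 for j, c in enumerate(coeffs[1:])]

        fit_single = pywt.waverec(single, "haar")
        fit_level = pywt.waverec(leveled, "haar")
        print("RMSE, single prior:", np.sqrt(np.mean((fit_single - truth) ** 2)))
        print("RMSE, level-dependent:", np.sqrt(np.mean((fit_level - truth) ** 2)))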

    Measuring Social Media Activity of Scientific Literature: An Exhaustive Comparison of Scopus and Novel Altmetrics Big Data

    This paper measures the social media activity of 15 broad scientific disciplines indexed in the Scopus database using Altmetric.com data. First, the presence of Altmetric.com data in the Scopus database is investigated, overall and across disciplines. Second, the correlation between bibliometric and altmetric indices is examined using Spearman correlation. Third, a zero-truncated negative binomial model is used to determine the association of various factors with increasing or decreasing citations. Lastly, the effectiveness of altmetric indices in identifying publications with high citation impact is comprehensively evaluated using the Area Under the Curve (AUC), an application of the receiver operating characteristic. Results indicate a rapid increase in the presence of Altmetric.com data in the Scopus database, from 10.19% in 2011 to 20.46% in 2015. The zero-truncated negative binomial model measures the extent to which different bibliometric and altmetric factors contribute to citation counts. Blog count appears to be the most important factor, increasing the number of citations by 38.6% in the field of Health Professions and Nursing, followed by Twitter count, which increases the number of citations by 8% in the field of Physics and Astronomy. Interestingly, both blog count and Twitter count consistently show a positive association with the number of citations across all fields. While there was a weak positive correlation between bibliometric and altmetric indices, the results show that altmetric indices can be a good indicator for discriminating highly cited publications, with an encouraging AUC of 0.725 between highly cited publications and total altmetric count. Overall, the findings suggest that altmetrics can help distinguish highly cited publications. Comment: 34 pages, 3 figures, 15 tables.
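    A minimal sketch of two of the analyses named above, Spearman correlation and AUC-based discrimination of highly cited papers, is given below on synthetic counts; the data-generating numbers and the top-10% definition of "highly cited" are invented for illustration, and the zero-truncated negative binomial step is omitted.

        # Spearman correlation and AUC on synthetic altmetric/citation counts.
        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 5000
        tweets = rng.negative_binomial(n=1, p=0.3, size=n)              # skewed altmetric counts
        blogs = rng.negative_binomial(n=1, p=0.7, size=n)
        citations = rng.poisson(lam=1 + 0.08 * tweets + 0.4 * blogs)    # citations loosely driven by altmetrics

        altmetric_total = tweets + blogs
        rho, pval = spearmanr(citations, altmetric_total)
        print(f"Spearman rho = {rho:.3f} (p = {pval:.3g})")

        # "Highly cited" = top 10% by citation count; can total altmetric count pick them out?
        highly_cited = (citations >= np.quantile(citations, 0.9)).astype(int)
        print("AUC:", roc_auc_score(highly_cited, altmetric_total))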

    Gaussian mixture model based probabilistic modeling of images for medical image segmentation

    In this paper, we propose a novel image segmentation algorithm based on the probability distributions of the object and background. It uses the variational level-set formulation with a novel region-based term in addition to the edge-based term, giving a complementary functional that can potentially result in robust segmentation of images. The main theme of the method is that in most medical imaging scenarios the objects are characterized by typical features such as color and texture. Consequently, an image can be modeled as a Gaussian mixture of distributions corresponding to the object and background. During curve evolution, a novel term based on maximizing the distance between the GMMs corresponding to the object and background is incorporated into the segmentation framework. Maximizing this distance using differential calculus leads to the desired segmentation results. The proposed method has been used to segment images from three distinct imaging modalities, i.e. magnetic resonance imaging (MRI), dermoscopy and chromoendoscopy. Experiments show the effectiveness of the proposed method, giving better qualitative and quantitative results when compared with the current state of the art. Index terms: Gaussian Mixture Model, Level Sets, Active Contours, Biomedical Engineering.
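    The distribution-modelling step can be illustrated with a short sketch: fit one Gaussian mixture to object pixel intensities and one to background intensities, then label pixels by likelihood ratio. The synthetic intensities and two-component mixtures below are assumptions made for illustration; the paper's variational level-set evolution and its inter-GMM distance term are not reproduced here.

        # Object/background pixel modelling with two Gaussian mixtures.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Synthetic grayscale intensities: bright, textured object vs darker background.
        object_pixels = np.concatenate([rng.normal(0.75, 0.05, 3000),
                                        rng.normal(0.60, 0.03, 2000)]).reshape(-1, 1)
        background_pixels = rng.normal(0.30, 0.08, 5000).reshape(-1, 1)

        gmm_obj = GaussianMixture(n_components=2, random_state=0).fit(object_pixels)
        gmm_bg = GaussianMixture(n_components=2, random_state=0).fit(background_pixels)

        # Label unseen pixels by comparing log-likelihoods under the two mixtures.
        new_pixels = rng.uniform(0.0, 1.0, 10).reshape(-1, 1)
        is_object = gmm_obj.score_samples(new_pixels) > gmm_bg.score_samples(new_pixels)
        print(np.c_[new_pixels.ravel().round(2), is_object])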

    Mining network-level properties of Twitter altmetrics data

    © 2019, Akadémiai Kiadó, Budapest, Hungary. Social networking sites play a significant role in altmetrics. While 90% of all altmetric mentions come from Twitter, little is known about the microscopic and macroscopic properties of Twitter altmetrics data. In this study, we present a large-scale analysis of Twitter altmetrics data using social network analysis techniques on the ‘mention’ network of Twitter users. Exploiting the network-level properties of over 1.4 million tweets, corresponding to 77,757 scholarly articles, this study focuses on the following aspects of Twitter altmetrics data: (a) the influence of organizational accounts; (b) the formation of disciplinary communities; (c) cross-disciplinary interaction among Twitter users; (d) the network motifs of influential Twitter users; and (e) testing the small-world property. The results show that Twitter-based social media communities have unique characteristics, which may affect social media usage counts either directly or indirectly. Therefore, instead of treating altmetrics data as a black box, the underlying social media networks, which may either inflate or deflate social media usage counts, need further scrutiny.
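    A minimal sketch of the kind of network-level measures involved, community structure plus the clustering and path-length quantities used in small-world tests, is shown below on a toy mention network built with networkx; the user names and edges are invented, and the degree-matched random baseline needed for a full small-world test is omitted.

        # Community detection and small-world indicators on a toy 'mention' network.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Undirected toy mention network: an edge means one user mentioned the other in a tweet.
        edges = [("journal_bot", "alice"), ("journal_bot", "bob"), ("alice", "bob"),
                 ("alice", "carol"), ("bob", "carol"), ("carol", "dave"),
                 ("dave", "erin"), ("erin", "frank"), ("dave", "frank")]
        G = nx.Graph(edges)

        communities = greedy_modularity_communities(G)
        print("communities:", [sorted(c) for c in communities])
        print("avg clustering:", round(nx.average_clustering(G), 3))
        print("avg shortest path:", round(nx.average_shortest_path_length(G), 3))
        # A small-world network combines high clustering with short paths relative to a
        # degree-matched random graph; that comparison baseline is not computed here.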

    Quantum Long Short-Term Memory (QLSTM) vs Classical LSTM in Time Series Forecasting: A Comparative Study in Solar Power Forecasting

    Accurately forecasting solar power generation is crucial in the global progression towards sustainable energy systems. In this study, we conduct a meticulous comparison between Quantum Long Short-Term Memory (QLSTM) and classical Long Short-Term Memory (LSTM) models for solar power production forecasting. Our controlled experiments reveal promising advantages of QLSTMs, including accelerated training convergence and substantially reduced test loss within the initial epoch compared to classical LSTMs. These empirical findings demonstrate QLSTM's potential to swiftly assimilate complex time series relationships, enabled by quantum phenomena like superposition. However, realizing QLSTM's full capabilities necessitates further research into model validation across diverse conditions, systematic hyperparameter optimization, hardware noise resilience, and applications to correlated renewable forecasting problems. With continued progress, quantum machine learning can offer a paradigm shift in renewable energy time series prediction. This pioneering work provides initial evidence substantiating quantum advantages over classical LSTMs, while acknowledging present limitations. Through rigorous benchmarking grounded in real-world data, our study elucidates a promising trajectory for quantum learning in renewable forecasting. Additional research and development can further actualize this potential to achieve unprecedented accuracy and reliability in predicting solar power generation worldwide. Comment: 17 pages, 8 figures.
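    For orientation, a minimal classical baseline of the kind compared above is sketched below: a univariate PyTorch LSTM trained on a synthetic daily solar-generation curve. The series, window length and hyperparameters are invented for illustration, and the quantum variant (which typically replaces the gate transformations with variational quantum circuits) is not shown.

        # Classical LSTM forecaster on a synthetic solar-generation series.
        import numpy as np
        import torch
        import torch.nn as nn

        t = np.arange(0, 24 * 30, 1.0)   # 30 days of hourly data
        power = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.05 * np.random.randn(t.size)

        def make_windows(series, lookback=24):
            X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
            y = series[lookback:]
            return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
                    torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

        X, y = make_windows(power)

        class LSTMForecaster(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):
                out, _ = self.lstm(x)
                return self.head(out[:, -1, :])   # predict the next hour from the last state

        model = LSTMForecaster()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for epoch in range(5):
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()
            print(f"epoch {epoch}: MSE = {loss.item():.4f}")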

    Modeling for the Relationship between Monetary Policy and GDP in the USA Using Statistical Methods

    Through its monetary policy tools, the Federal Reserve has played an arguably important role in financial crises in the United States since its creation in 1913. This paper therefore aims to analyze the impact of monetary policy on United States economic growth, measured by Gross Domestic Product (GDP), in the short and long run. The Vector Autoregressive (VAR) method explores the relationship among the variables, and the Granger causality test assesses their predictability. Moreover, the Impulse Response Function (IRF) examines the behavior of one variable after a change in another, using a time-series dataset from the first quarter of 1959 to the second quarter of 2022. This work demonstrates that expansionary monetary policy does have a positive impact on economic growth in the short term, though the effect does not last long. In the long term, however, inflation, measured by the Consumer Price Index (CPI), is affected by expansionary monetary policy. Therefore, if the Federal Reserve wants to end expansionary monetary policy in the short run, this should be done appropriately, alongside a fiscal surplus, to preserve its credibility and trust in the US dollar as a global store-of-value asset. The paper's findings also suggest that continuous expansion of the money supply will lead to a long-term inflation problem. The purpose of this research is to draw attention to the side effects of expansionary monetary policy on the US economy, and also to allow other researchers to test this model in economies with different dynamics.
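    A minimal sketch of the same workflow, estimating a VAR, running a Granger causality test and computing impulse responses with statsmodels, is given below on synthetic quarterly series; the variable names, lag choices and data-generating equations are invented stand-ins for the GDP, CPI and money supply series used in the paper.

        # VAR estimation, Granger causality and IRF on synthetic quarterly data.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        n = 254  # roughly the number of quarters from 1959Q1 to 2022Q2
        m2 = np.cumsum(rng.normal(0.5, 1.0, n))                    # money supply growth proxy
        cpi = 0.3 * np.roll(m2, 4) + rng.normal(0, 1.0, n)         # inflation lags money growth
        gdp = 0.2 * np.roll(m2, 1) - 0.1 * np.roll(cpi, 1) + rng.normal(0, 1.0, n)
        data = pd.DataFrame({"GDP": gdp, "CPI": cpi, "M2": m2}).iloc[8:]

        results = VAR(data).fit(maxlags=4, ic="aic")
        granger = results.test_causality(caused="GDP", causing="M2", kind="f")
        print(granger.summary())
        irf = results.irf(12)   # impulse responses over 12 quarters
        print(irf.irfs.shape)   # responses indexed by period, response variable and impulse variable
        # irf.plot(impulse="M2", response="GDP") would plot GDP's response to an M2 shock.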

    Demonstrating and negotiating the adoption of web design technologies : Cascading Style Sheets and the CSS Zen Garden

    Cascading Style Sheets (CSS) express the visual design of a website through code and remain an understudied area of web history. Although CSS was proposed early in the development of the web as a method of adding a design layer to HTML documents, it only crossed from a marginal position to mainstream usage after a long period of proselytising by web designers working towards “web standards”. The CSS Zen Garden grassroots initiative aimed at negotiating, mainstreaming and archiving possible methods of CSS web design, while dealing with varying levels of browser support for the technology. Using the source code of the CSS Zen Garden and the accompanying book, this paper demonstrates that while the visual designs were complex and sophisticated, the CSS lived within an ecosystem of related platforms, i.e., web browsers, screen sizes and design software, which constrained its use and required enormous sensitivity to what browser ecosystems could reliably provide. As the CSS Zen Garden was maintained for over ten years, it also acts as a unique site for tracing the continuing development of web design, and the imaginaries expressed in the Zen Garden can be related to ethical dimensions that influence the process of web design. Compared to Flash-based web design, work implemented using CSS required a greater willingness to negotiate source code configurations between browser platforms. Following the history of the individuals responsible for creating and contributing to the CSS Zen Garden shows the continuing influence of layer-based metaphors of design separated from content within web source code.

    Predicting Academic Performance of Students from VLE Big Data using Deep Learning Models

    The abundance of accessible educational data, supported by technology-enhanced learning platforms, provides opportunities to mine the learning behavior of students, address their issues, optimize the educational environment, and enable data-driven decision making. Virtual learning environments complement the learning analytics paradigm by providing datasets for analysing and reporting the learning process of students and how it is reflected in their performance. This study deploys a deep artificial neural network on a set of unique handcrafted features, extracted from virtual learning environment clickstream data, to predict at-risk students and provide measures for early intervention in such cases. The results show the proposed model achieves a classification accuracy of 84%-93%. We show that the deep artificial neural network outperforms the baseline logistic regression and support vector machine models: logistic regression achieves an accuracy of 79.82%-85.60%, while the support vector machine achieves 79.95%-89.14%. In line with existing studies, our findings show that the inclusion of legacy data and assessment-related data significantly impacts the model. Students interested in accessing the content of previous lectures are observed to perform better. The study intends to assist institutes in formulating the necessary framework for pedagogical support, facilitating higher-education decision-making towards sustainable education.
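    A minimal sketch of the three-way model comparison described above is shown below using scikit-learn on synthetic stand-ins for handcrafted clickstream features; the feature dimensions, class balance and network width are invented, so the scores it prints bear no relation to those reported in the study.

        # Deep ANN (MLP) vs logistic regression vs SVM on synthetic "clickstream" features.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                                   weights=[0.7, 0.3], random_state=0)  # minority class = at-risk
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        models = {
            "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
            "SVM (RBF)": make_pipeline(StandardScaler(), SVC()),
            "deep ANN (MLP)": make_pipeline(StandardScaler(),
                                            MLPClassifier(hidden_layer_sizes=(64, 32, 16),
                                                          max_iter=500, random_state=0)),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")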

    HTSS: A novel hybrid text summarisation and simplification architecture

    Text simplification and text summarisation are related but different sub-tasks in Natural Language Generation. Whereas summarisation attempts to reduce the length of a document while keeping the original meaning, simplification attempts to reduce its complexity. In this work, we combine the two tasks using a novel hybrid architecture of abstractive and extractive summarisation called HTSS. We extend the well-known pointer-generator model for the combined task of summarisation and simplification. We collected our parallel corpus from simplified summaries written by domain experts and published on the science news website EurekAlert! (www.eurekalert.org). Our results show that our proposed HTSS model outperforms neural text simplification (NTS) on the SARI score and abstractive text summarisation (ATS) on the ROUGE score. We further introduce a new metric (CSS1), which combines SARI and ROUGE, and demonstrate that our proposed HTSS model outperforms NTS and ATS on the joint task of simplification and summarisation by 38.94% and 53.40%, respectively. We provide all code, models and corpora to the scientific community for future research at the following URL: https://github.com/slab-itu/HTSS/
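    Since the abstract does not spell out how CSS1 combines the two scores, the sketch below only illustrates a joint evaluation of the kind involved: it computes SARI and ROUGE-L with the Hugging Face evaluate package (an assumed tooling choice) and folds them into an equal-weight harmonic mean as a hypothetical stand-in for CSS1; the example sentences are invented.

        # Joint simplification + summarisation scoring: SARI, ROUGE-L, and a combined value.
        import evaluate

        sari = evaluate.load("sari")
        rouge = evaluate.load("rouge")

        sources = ["The mitochondrion is the organelle responsible for cellular respiration."]
        predictions = ["Mitochondria make energy for cells."]
        references = [["Mitochondria are the parts of cells that produce energy."]]

        sari_score = sari.compute(sources=sources, predictions=predictions,
                                  references=references)["sari"]                 # scale 0-100
        rouge_l = rouge.compute(predictions=predictions,
                                references=[r[0] for r in references])["rougeL"]  # scale 0-1

        # Hypothetical combined score: harmonic mean of SARI and ROUGE-L on a common 0-100 scale.
        combined = 2 * sari_score * (100 * rouge_l) / (sari_score + 100 * rouge_l + 1e-9)
        print(f"SARI = {sari_score:.1f}, ROUGE-L = {100 * rouge_l:.1f}, combined = {combined:.1f}")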

    A Bayesian approach to wavelet-based modelling of discontinuous functions applied to inverse problems

    Inverse problems are examples of regression with more unknowns than there is information in the data, and hence constraints are imposed through prior information. The proposed method defines the underlying function as a wavelet approximation that is related to the data through a convolution. The wavelets provide a sparse, multi-resolution solution that can capture local behaviour in an adaptive way. Varied prior models are considered, along with level-specific prior parameter estimation. Archaeological stratigraphy data are considered, in which vertical earth cores are analysed, producing clear piecewise-constant function estimates.
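    A minimal sketch of the forward model just described, an unknown function written as a Haar wavelet expansion and observed through a convolution plus noise, is given below; for brevity the coefficients are recovered with a simple Gaussian (ridge) prior rather than the varied, level-specific priors considered in the paper, and the blur kernel, noise level and piecewise-constant truth are invented.

        # Wavelet expansion + convolution forward model, with a ridge-prior MAP estimate.
        import numpy as np
        import pywt

        n = 128
        x = np.linspace(0, 1, n)
        truth = np.where(x < 0.3, 0.0, np.where(x < 0.6, 2.0, 1.0))   # piecewise-constant layers

        # Build the Haar synthesis matrix W column by column, so that f = W @ c.
        template = pywt.wavedec(np.zeros(n), "haar")
        slices = pywt.coeffs_to_array(template)[1]
        def synthesize(cvec):
            return pywt.waverec(pywt.array_to_coeffs(cvec, slices, output_format="wavedec"), "haar")
        W = np.column_stack([synthesize(e) for e in np.eye(n)])

        # Circular convolution (blurring) operator K and noisy observations y = K f + e.
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2); kernel /= kernel.sum()
        K = np.array([np.roll(np.pad(kernel, (0, n - kernel.size)), i - 5) for i in range(n)])
        rng = np.random.default_rng(3)
        y = K @ truth + rng.normal(scale=0.05, size=n)

        # MAP estimate of the wavelet coefficients under a Gaussian prior (ridge solution).
        A = K @ W
        lam = 0.5
        c_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
        f_hat = W @ c_hat
        print("reconstruction RMSE:", np.sqrt(np.mean((f_hat - truth) ** 2)))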