
    Application of Developers' and Users' Dependent Factors in App Store Optimization

    This paper presents an application of developers' and users' dependent factors in app store optimization. The application builds on two groups of factors. Developer-dependent factors are identified as: developer name, app name, subtitle, genre, short description, long description, content rating, system requirements, page URL, last update, what's new, and price. User-dependent factors are identified as: download volume, average rating, rating volume, and reviews. The proposed application in its final form is modelled after mining sample data from two leading app stores: Google Play and the Apple App Store. Analysis of the collected data shows that developer-dependent elements could be better optimized: the names and descriptions of mobile apps are not fully utilized. In Google Play there is one significant correlation, between download volume and number of reviews, whereas in the App Store there is no significant correlation between the factors.
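    The headline statistic, a single significant correlation in Google Play between download volume and number of reviews, is simple to reproduce in outline. A minimal sketch follows, assuming a hypothetical CSV file and column names rather than the paper's actual dataset:

```python
# Hypothetical sketch: testing whether download volume correlates with
# review count for a sample of apps. File and column names are assumed.
import pandas as pd
from scipy.stats import pearsonr

apps = pd.read_csv("google_play_sample.csv")  # assumed columns: downloads, reviews
r, p = pearsonr(apps["downloads"], apps["reviews"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # significant if p < 0.05
```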

    Exploring the Impact of Google Discover on Users and Publishers: A Data-Driven Study

    The paper is dedicated to the analysis of the Google Discover recommendation algorithms. The study is conducted on two online stores operating in Poland, for which 16 months of data from Google Search Console are available. Google Discover is explored from the perspective of web users and web content publishers based on the total numbers of impressions and clicks and the click-through rates of the two websites. The results show that although users' activity increases a website's performance in Google Discover, its algorithms consider many more factors (such as the popularity of other websites with similar content, users' location, and the frequency of content updates) and may remove a website from Discover despite a relatively high number of clicks. Additionally, the literature review conducted in the paper revealed a gap in scientific research dedicated to this content recommendation system.
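    Click-through rate, one of the study's key measures, is simply the ratio of clicks to impressions. A minimal sketch of computing it from a Search Console performance export is shown below; the file and column names are assumptions, not the study's dataset:

```python
# Illustrative CTR computation from a Google Search Console export
# (assumed columns: date, clicks, impressions).
import pandas as pd

discover = pd.read_csv("search_console_discover.csv", parse_dates=["date"])
monthly = discover.groupby(discover["date"].dt.to_period("M"))[["clicks", "impressions"]].sum()
monthly["ctr"] = monthly["clicks"] / monthly["impressions"]  # CTR = clicks / impressions
print(monthly)
```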

    Knowledge Graph Development for App Store Data Modeling

    Usage of mobile applications has become a part of our daily lives: every day we use our smartphones for communication, entertainment, business, and education. High demand for apps has led to significant growth in supply, yet the large offering has made it harder for users to find the one suitable application. The authors attempt to address the problem of facilitating search in app stores. With the help of website-crawling software, a sample of data was retrieved from one of the well-known mobile app stores and divided into 11 groups by type. These groups of data were used to construct a Knowledge Schema: a graphic model of the interconnections of the data that characterize any mobile app in the selected store. Schema creation is the first step in developing a Knowledge Graph that will cluster applications to facilitate users' search in app stores.
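    To illustrate the idea of a schema linking an app to the data that characterize it, here is a toy sketch using networkx; the node and relation labels are invented for illustration and do not reproduce the paper's 11 data groups:

```python
# Toy knowledge-schema sketch: a directed graph linking a mobile app
# to the kinds of data that describe it. Labels are illustrative.
import networkx as nx

schema = nx.DiGraph()
for prop in ["name", "genre", "description", "rating", "reviews", "downloads", "price"]:
    schema.add_edge("MobileApp", prop, relation="has")
schema.add_edge("Developer", "MobileApp", relation="publishes")

for head, tail, attrs in schema.edges(data=True):
    print(head, attrs["relation"], tail)
```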

    Correlation between the spread of COVID-19 and the Interest in personal protective measures in Poland and Portugal

    This research was initiated during Artur Strzelecki's research stay at the Polytechnic Institute of Porto, Portugal. The coronavirus disease 2019 (COVID-19) pandemic has gained extensive coverage in public media and global news, generated international and national communication campaigns to educate communities worldwide, and attracted everyone's attention. The coronavirus has caused viral pneumonia in tens of thousands of people around the world, and the COVID-19 outbreak changed most countries' routines and concerns and transformed social behaviour. This study explores the potential use of Google Trends (GT) in monitoring interest in the COVID-19 outbreak and, specifically, in personal protective equipment and hand hygiene, since these have been promoted by official health care bodies as two of the most protective measures. GT was chosen as a source of reverse engineering data, given the interest in the topic and the novelty of the research. Current data on COVID-19 are retrieved from GT using keywords in two languages, Portuguese and Polish. The geographical settings for GT are two countries: Poland and Portugal. The period under analysis runs from 20 January 2020, when the first cases outside China were known, to 15 June 2020. The results show that there is a correlation between the spread of COVID-19 and the search for personal protective equipment and hand hygiene, and that GT can help, to a certain extent, understand people's concerns, behaviour, and reactions to sanitary problems and protection recommendations.
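    The retrieval step can be sketched with the community pytrends library, an unofficial wrapper around the Google Trends interface. The keywords below are illustrative Polish examples, not the paper's actual keyword list:

```python
# Sketch of pulling Google Trends interest scores for the study period.
# pytrends is an unofficial community wrapper; keywords are examples only.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="pl-PL")
pytrends.build_payload(["maseczki", "dezynfekcja rąk"],    # example terms (masks, hand disinfection)
                       timeframe="2020-01-20 2020-06-15",  # the period under analysis
                       geo="PL")                           # Poland; "PT" for Portugal
interest = pytrends.interest_over_time()  # relative interest (0-100) over time
print(interest.head())
```

The resulting interest series can then be correlated with daily case counts, e.g. with Spearman's rank correlation.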

    Using a General Extended Technology Acceptance Model for E-Learning (GETAMEL). A Literature Review of Empirical Studies

    This paper examines peer-reviewed empirical studies using the General Extended Technology Acceptance Model for E-learning (GETAMEL). We have created a framework for examining the effects of a set of external variables on e-learning acceptance. The study reviews the independent variables (Experience, Subjective Norms, Enjoyment, Computer Anxiety, and Self-efficacy), the dependent variables (Perceived Usefulness, Perceived Ease of Use, Attitudes Towards Using, Intention to Use, and Actual Use), path coefficients, theoretical backgrounds, and the types of studies performed on e-learning systems in the reviewed literature. The paper examines the state of current research on the topic and points out gaps in the existing literature. The objective of the paper is both to provide an overview of the literature and to investigate the reasons for e-learning acceptance. As a result of the study, we present the mean values of the relations between variables in the GETAMEL model across all the reviewed works. The findings of the review provide insight for further studies and for the use of the GETAMEL model.
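    Aggregating path coefficients across studies, as the review does for each relation, amounts to a simple mean. A sketch with placeholder numbers (not the review's actual values):

```python
# Illustrative only: averaging the path coefficient reported by several
# studies for one relation, e.g. Self-efficacy -> Perceived Ease of Use.
import statistics

coefficients = {"Study A": 0.31, "Study B": 0.24, "Study C": 0.40}  # placeholders
print(f"mean path coefficient = {statistics.mean(coefficients.values()):.2f}")
```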

    Multidecadal (1960–2011) shoreline changes in Isbjørnhamna (Hornsund, Svalbard)

    A section of a gravel-dominated coast in Isbjørnhamna (Hornsund, Svalbard) was analysed to calculate the rate of shoreline changes and to explain the processes controlling coastal zone development over the last 50 years. Between 1960 and 2011, the coastal landscape of Isbjørnhamna experienced a significant shift from one dominated by the influence of a tidewater glacier and protected by prolonged sea-ice conditions to a storm-affected and rapidly changing coast. Information derived from analyses of aerial images and geomorphological mapping shows that the Isbjørnhamna coastal zone is dominated by coastal erosion, resulting in a shore area reduction of more than 31,600 m². With ~3,500 m² of local aggradation, the overall balance of changes in the studied section of shore is negative and amounts to a loss of more than 28,000 m². The mean shoreline change is −13.1 m (−0.26 m a⁻¹). Erosional processes threaten the Polish Polar Station infrastructure and may damage one of the storage buildings in the near future.
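    The quoted figures are internally consistent, as a quick check shows:

```python
# Back-of-envelope check of the area and rate figures quoted above.
erosion_m2 = 31_600            # shore area lost to erosion
aggradation_m2 = 3_500         # local aggradation
print(erosion_m2 - aggradation_m2)    # 28,100, i.e. "more than 28,000 m2" net loss

mean_change_m = -13.1          # mean shoreline change, 1960-2011
print(mean_change_m / (2011 - 1960))  # about -0.26 m per year
```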

    Featured Snippets Results in Google Web Search: An Exploratory Study

    In this paper the authors analyze 163,412 keywords and the results with featured snippets collected from the localized Polish Google search engine. A methodology for retrieving data from the Google search engine is proposed to obtain the data necessary to study featured snippets. It was observed that almost half of featured snippets (48%) are taken from the result in the first ranking position. Furthermore, some correlations between prepositions and the most frequently appearing content words in keywords were discovered. The results show that featured snippets are often taken from trustworthy websites, e.g., Wikipedia, and are mainly presented in the form of a paragraph. A paragraph can be read aloud by Google Assistant or a Home Assistant device with voice search. We conclude our findings with a discussion and research limitations.
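    The headline figure, the share of snippets sourced from the top-ranked result, reduces to a single proportion. A minimal sketch, with an assumed dataset and column name:

```python
# Hypothetical sketch: fraction of featured snippets whose source URL is
# the organic result at position 1. File and column names are assumed.
import pandas as pd

serps = pd.read_csv("featured_snippets.csv")  # assumed column: snippet_source_rank
share = (serps["snippet_source_rank"] == 1).mean()
print(f"{share:.0%} of featured snippets come from position 1")  # paper reports 48%
```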

    Human Mobility Restrictions and COVID-19 Infection Rates: Analysis of Mobility Data and Coronavirus Spread in Poland and Portugal

    This study examines the possible correlation between data on human mobility restrictions and COVID-19 infection rates in two European countries: Poland and Portugal. The aim of this study is to verify the correlation and causation between mobility changes and the spread of infection, as well as to investigate the impact of the introduced restrictions on changes in human mobility. The data were obtained from Google Community Mobility Reports, Apple Mobility Trends Reports, and The Humanitarian Data Exchange, along with other reports published online. All the data were organized in one dataset, and three groups of variables were distinguished: restrictions, mobility, and intensity of the disease. A causal-comparative research design is used for this study. The results show that in both countries the state restrictions reduced human mobility, with the strongest impact in places related to retail and recreation, grocery and pharmacy, and transit stations. At the same time, the data show that the increase in restrictions had a strong positive correlation with stays in residential places in both Poland and Portugal.
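    The correlation step can be sketched by joining mobility and case data on date and computing rank correlations per mobility category; the file and column names below are assumptions, not the study's dataset:

```python
# Illustrative sketch: Spearman correlation between mobility categories
# and new case counts. File and column names are assumed.
import pandas as pd

mobility = pd.read_csv("google_mobility_pl.csv", parse_dates=["date"])
cases = pd.read_csv("covid_cases_pl.csv", parse_dates=["date"])
merged = mobility.merge(cases, on="date")

for col in ["retail_and_recreation", "grocery_and_pharmacy", "transit_stations", "residential"]:
    rho = merged[col].corr(merged["new_cases"], method="spearman")
    print(f"{col}: rho = {rho:.2f}")
```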

    3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    Positron emission tomographs (PET) do not measure an image directly. Instead, at the boundary of the tomograph's field of view (FOV), they measure a sinogram that consists of the sums of all the counts along the lines connecting pairs of detectors. As a typical PET tomograph contains a multitude of detectors, there are many possible detector pairs contributing to the measurement. The problem is how to turn this measurement into an image (this is called image reconstruction). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached twenty years ago, with the advent of powerful computing processors. However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines of response (LORs) and millions of detected photons to produce an image showing the distribution of the labelled molecules in space.
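    The MLEM algorithm named in the title is a multiplicative fixed-point iteration: each pass forward-projects the current image estimate, compares it with the measured sinogram, and back-projects the ratio to correct the image. A minimal NumPy sketch follows; the dense toy system matrix and sizes are assumptions for illustration, since a real 3D PET system matrix is far too large to store explicitly and is applied through forward- and back-projectors:

```python
# Minimal MLEM sketch: x <- x / (A^T 1) * A^T( y / (A x) ),
# with A the system matrix (LORs x voxels) and y the measured sinogram.
import numpy as np

def mlem(A, y, n_iters=20, eps=1e-12):
    x = np.ones(A.shape[1])              # start from a uniform image
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                     # forward projection of the estimate
        ratio = y / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return x

# Toy demo with a random system matrix and Poisson count data.
rng = np.random.default_rng(0)
A = rng.random((200, 50))                # 200 LORs, 50 voxels (toy sizes)
x_true = rng.random(50)
y = rng.poisson(A @ x_true).astype(float)
x_hat = mlem(A, y)
```

The update preserves non-negativity of the image estimate, one reason MLEM suits Poisson-distributed PET count data.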