
    Content marketing model for leading web content management

    Get PDF
    This paper provides Ukrainian businesses with suggestions for a content marketing model for the effective management of website content, with the aim of securing a leading position on the European and world markets. Our study employed qualitative data collection through semi-structured interviews, surveys and observation, quantitative and qualitative content analysis of regional B2B companies, and comparative analysis. The following essential stages of the content marketing process were identified and classified in detail: preliminary search and analysis, website content creation, promotion and distribution, and content marketing progress assessment. The strategic decisions and activities at each stage showed how a company’s on-site and off-site content can be used as a tool to establish a relationship between the brand and its target audience and to increase brand visibility online. The study offers several useful insights into how website content, social media and various optimization techniques work together to engage the target audience and drive website traffic and sales leads. We constructed and described a content marketing model for effective web content management that can be useful for companies that are starting to consider a content marketing strategy for achieving business goals and strengthening their leadership position

    A framework for smart traffic management using heterogeneous data sources

    Get PDF
    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Traffic congestion constitutes a social, economic and environmental issue for modern cities, as it can negatively impact travel times, fuel consumption and carbon emissions. Traffic forecasting and incident detection systems are fundamental areas of Intelligent Transportation Systems (ITS) that have been widely researched in the last decade. These systems provide real-time information about traffic congestion and other unexpected incidents, which can help traffic management agencies activate strategies and notify users accordingly. However, existing techniques suffer from high false alarm rates and incorrect traffic measurements. In recent years, there has been increasing interest in integrating different types of data sources to achieve higher precision in traffic forecasting and incident detection. In fact, a considerable body of literature has grown around the influence of integrating data from heterogeneous data sources into existing traffic management systems. This thesis presents a Smart Traffic Management framework for future cities. The proposed framework fuses different data sources and technologies to improve traffic prediction and incident detection, and is composed of two components: a social media component and a simulator component. The social media component consists of a text classification algorithm to identify traffic-related tweets. These traffic messages are then geolocated using Natural Language Processing (NLP) techniques. Finally, to further analyse user emotions within the tweets, stress and relaxation strength detection is performed. The proposed text classification algorithm outperformed similar studies in the literature and proved more accurate than other machine learning algorithms on the same dataset. The stress and relaxation analysis detected a significant amount of stress in 40% of the tweets, while the remainder showed no associated emotions. This information can potentially be used for policy making in transportation, to understand users' perception of the transportation network. The simulator component proposes an optimisation procedure for determining missing roundabout and urban road flow distributions using constrained optimisation. Existing imputation methodologies have been developed on straight sections of highways, and their applicability to more complex networks has not been validated. This work provides a solution for the unavailability of roadway sensors in specific parts of the network and was able to predict the missing values with very low percentage error. The proposed imputation methodology can serve as an aid to existing traffic forecasting and incident detection methodologies, as well as for the development of more realistic simulation networks
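
    To make the simulator component concrete, the following is a minimal sketch of imputing a missing link flow at a junction via constrained optimisation. The toy network, the measured flows, the historical prior, and the squared-deviation objective are all illustrative assumptions, not the thesis's actual formulation.

```python
# Toy sketch: impute one missing exit flow at a roundabout with
# constrained optimisation (flow conservation: total in = total out).
# All numbers and the objective are hypothetical assumptions.
import numpy as np
from scipy.optimize import minimize

inflows = np.array([420.0, 310.0])   # measured entry flows (veh/h)
outflows_known = np.array([380.0])   # one measured exit flow (veh/h)
prior = 300.0                        # historical guess for the missing exit

def objective(x):
    # Stay close to the historical prior for the unknown flow.
    return (x[0] - prior) ** 2

# Equality constraint: inflows must equal known + imputed outflows.
cons = {"type": "eq",
        "fun": lambda x: inflows.sum() - (outflows_known.sum() + x[0])}
bounds = [(0.0, None)]               # flows cannot be negative

res = minimize(objective, x0=[prior], bounds=bounds, constraints=cons)
print(f"Imputed missing flow: {res.x[0]:.1f} veh/h")
```

    In this trivial single-unknown case the conservation constraint fully determines the answer; with several unmeasured links the same setup balances the conservation equations against deviation from historical priors.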

    Traffic event detection framework using social media

    Get PDF
    This is an accepted manuscript of an article published by IEEE in the 2017 IEEE International Conference on Smart Grid and Smart Cities (ICSGSC) on 18/09/2017, available online: https://ieeexplore.ieee.org/document/8038595. The accepted version of the publication may differ from the final published version. © 2017 IEEE. Traffic incidents are one of the leading causes of non-recurrent traffic congestion. By detecting these incidents in time, traffic management agencies can activate strategies to ease congestion, and travelers can take these factors into consideration when planning their trips. In recent years, there has been increasing interest in Twitter because of the real-time nature of its data. Twitter has been used as a way of predicting revenues, accidents, natural disasters, and traffic. This paper proposes a framework for the real-time detection of traffic events using Twitter data. The methodology consists of a text classification algorithm to identify traffic-related tweets. These traffic messages are then geolocated and further classified as positive, negative, or neutral using sentiment analysis. In addition, stress and relaxation strength detection is performed, with the purpose of further analyzing user emotions within the tweet. Future work will implement the proposed framework in the West Midlands area, United Kingdom. Published version
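
    As a rough illustration of the sentiment step described above, the sketch below maps tweets to positive/negative/neutral classes. It uses NLTK's VADER analyzer as a stand-in; the paper does not specify this tool, and the compound-score thresholds are conventional assumptions rather than the authors' settings.

```python
# Minimal sentiment classification sketch using NLTK's VADER analyzer
# (a stand-in; not necessarily the tool used in the paper).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def classify_sentiment(tweet: str) -> str:
    """Map VADER's compound score to positive/negative/neutral."""
    score = analyzer.polarity_scores(tweet)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

print(classify_sentiment("Stuck for an hour on the M25, this is awful"))
print(classify_sentiment("Road reopened, traffic flowing nicely again"))
```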

    Leveraging Personal Navigation Assistant Systems Using Automated Social Media Traffic Reporting

    Full text link
    Modern urbanization demands smarter technologies to improve a variety of applications in intelligent transportation systems and to relieve the increasing amount of vehicular traffic congestion and incidents. Existing incident detection techniques are limited to the use of sensors in the transportation network and rely on human input. Despite its data abundance, social media is not well exploited in this context. In this paper, we develop an automated traffic alert system based on Natural Language Processing (NLP) that filters this flood of information and extracts the important traffic-related items. To this end, we employ a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) language embedding model to filter traffic-related information from social media. We then apply a question-answering model to extract the information characterizing the reported event, such as its exact location, occurrence time, and the nature of the event. We demonstrate that the adopted NLP approaches outperform other existing approaches and, after training them effectively, we focus on a real-world scenario and show how the developed approach can, in real time, extract traffic-related information and automatically convert it into alerts for navigation assistance applications such as navigation apps. Comment: This paper is accepted for publication in the IEEE Technology Engineering Management Society International Conference (TEMSCON'20), Metro Detroit, Michigan (USA)
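
    A sketch of the two-stage pipeline this abstract describes, using Hugging Face pipelines. The paper's fine-tuned BERT classifier and QA model are not published here, so a generic zero-shot classifier and a public SQuAD-trained QA checkpoint stand in; the model names, labels, and example post are assumptions.

```python
# Two-stage sketch: (1) filter traffic-related posts, (2) extract
# event details with extractive question answering. Public checkpoints
# stand in for the paper's fine-tuned models.
from transformers import pipeline

filter_model = pipeline("zero-shot-classification",
                        model="facebook/bart-large-mnli")
qa_model = pipeline("question-answering",
                    model="deepset/roberta-base-squad2")

post = "Crash on I-94 near Ann Arbor around 8am, two lanes blocked."

result = filter_model(post, candidate_labels=["traffic", "not traffic"])
if result["labels"][0] == "traffic":
    for question in ("Where did the event happen?",
                     "When did the event happen?",
                     "What happened?"):
        answer = qa_model(question=question, context=post)
        print(question, "->", answer["answer"])
```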

    Predicting Risk for Deer-Vehicle Collisions Using a Social Media Based Geographic Information System

    Get PDF
    As an experiment investigating social media as a data source for making management decisions, photo-sharing websites were searched for data on deer sightings. Data about deer density and location are important factors in decisions related to herd management and transportation safety, but such data are often limited or unavailable. Results indicate that, when combined with simple rules, data from photo-sharing websites reliably predicted the location of road segments with high risk for deer-vehicle collisions, as reported by volunteers to an internet site tracking roadkill. Use of Google Maps as the GIS platform was helpful in plotting and sharing data, measuring road segments and other distances, and overlaying geographical data. The ability to view satellite images and panoramic street views proved particularly useful. As a general conclusion, the two independently collected sets of data from social media provided consistent information, suggesting investigative value in this data source. Overlaying two independently collected data sets can be a useful step in evaluating or mitigating reporting bias and human error in data taken from social media
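
    One plausible reading of the "simple rules" idea is flagging road segments near clusters of geotagged deer photos. The sketch below illustrates that pattern; the coordinates, buffer distance, and sighting threshold are hypothetical assumptions, not values from the study.

```python
# Illustrative rule: flag a road segment as high-risk when at least
# THRESHOLD geotagged deer photos fall within BUFFER_KM of it.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

deer_photos = [(42.31, -83.72), (42.32, -83.71), (42.10, -83.90)]
segment_midpoints = {"RD-001": (42.315, -83.715), "RD-002": (42.50, -84.00)}

BUFFER_KM, THRESHOLD = 1.5, 2
for seg, (lat, lon) in segment_midpoints.items():
    nearby = sum(haversine_km(lat, lon, p_lat, p_lon) <= BUFFER_KM
                 for p_lat, p_lon in deer_photos)
    if nearby >= THRESHOLD:
        print(f"{seg}: high deer-vehicle collision risk ({nearby} sightings)")
```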

    Incident detection using data from social media

    Get PDF
    This is an accepted manuscript of an article published by IEEE in the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC) on 15/03/2018, available online: https://ieeexplore.ieee.org/document/8317967/citations#citations. The accepted version of the publication may differ from the final published version. © 2017 IEEE. Due to the rapid growth of population in the last 20 years, an increased number of instances of heavy recurrent traffic congestion has been observed in cities around the world. This rise in traffic has led to greater numbers of traffic incidents and subsequent growth of non-recurrent congestion. Existing incident detection techniques are limited to the use of sensors in the transportation network. In this paper, we analyze the potential of Twitter for supporting real-time incident detection in the United Kingdom (UK). We present a methodology for retrieving, processing, and classifying public tweets by combining Natural Language Processing (NLP) techniques with a Support Vector Machine (SVM) algorithm for text classification. Our approach can detect traffic-related tweets with an accuracy of 88.27%. Published version
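
    A minimal sketch of the kind of NLP + SVM text classifier this abstract describes, in scikit-learn. The tiny training set, labels, and feature settings are illustrative assumptions, not the authors' dataset or exact configuration.

```python
# Sketch of a traffic-tweet classifier: TF-IDF features + linear SVM.
# Training examples and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "M6 closed northbound after a collision near junction 10",
    "Heavy congestion on the A38 into Birmingham this morning",
    "Loving the sunshine in the park today",
    "New cafe opened on the high street, great coffee",
]
labels = [1, 1, 0, 0]  # 1 = traffic-related, 0 = not

clf = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2), stop_words="english"),
    LinearSVC(),
)
clf.fit(tweets, labels)

print(clf.predict(["Accident blocking two lanes on the M5 southbound"]))
```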

    Search Engine Optimisation in UK news production

    Get PDF
    This is an Author's Accepted Manuscript of an article published in Journalism Practice, 5(4), 462-477, 2011, copyright Taylor & Francis, available online at: http://www.tandfonline.com/10.1080/17512786.2010.551020. This paper is an exploratory study of an emerging culture in UK online newsrooms, the practice of Search Engine Optimisation (SEO), and assesses its impact on news production. Comprising a short-term participant observational case study at a national online news publisher and a series of semi-structured, in-depth interviews with SEO professionals at three further UK media organisations, the author sets out to establish how SEO is operationalised in the newsroom and what consequences these practices have for online news production. SEO practice is found to be varied, and application is not universal: not all UK news organisations are making the most of SEO, even though some publishers take a highly sophisticated approach. Efforts are constrained by time, resources and management support, as well as off-page technical issues. SEO policy is found, in some cases, to inform editorial policy, but there is resistance to the principle of SEO driving decision-making. Several themes are established which call for further research

    Data analytics 2016: proceedings of the fifth international conference on data analytics

    Get PDF

    Network Neutrality: A Research Guide

    Get PDF
    The conclusion in a research handbook should emphasise the complexity of the problem rather than trying to claim a one-size-fits-all solution. I have categorised net neutrality into positive and negative (content discrimination) net neutrality, indicating that the latter is potentially harmful. Blocking content without informing customers appropriately is wrong: if it says ‘Internet service’, it should offer an open Internet (alongside walled gardens, if those are expressly advertised as such). The issue of uncontrolled Internet flows versus engineered solutions is central to the question of a ‘free’ versus regulated Internet. A consumer- and citizen-orientated intervention depends on passing regulations to prevent unregulated, non-transparent controls exerted over traffic via DPI equipment, whether imposed by ISPs for financial advantage or by governments eager to use this new technology to filter, censor and enforce copyright against their citizens. Unravelling the previous ISP limited-liability regime risks removing the efficiency of that approach in permitting the free flow of information for economic and social advantage. These conclusions support a light-touch regulatory regime involving reporting requirements and co-regulation with, as far as is possible, market-based solutions. Solutions may be international as well as local, and international coordination of best practice and knowledge will enable national regulators to keep up with the technology ‘arms race’

    Quantifying Biases in Online Information Exposure

    Full text link
    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles." Comment: 25 pages, 10 figures, to appear in the Journal of the Association for Information Science and Technology (JASIST)
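
    One simple way to operationalise homogeneity bias is the Shannon entropy of a user's visit distribution across information sources: lower entropy means more concentrated consumption. The sketch below illustrates that idea; the sample traffic data and the choice of entropy as the measure are assumptions, not necessarily the metric used in the paper.

```python
# Sketch: homogeneity of information consumption as Shannon entropy
# (in bits) over the domains a user visits. Sample data is made up.
from collections import Counter
from math import log2

def source_entropy(visited_domains):
    """Shannon entropy of the visit distribution over domains."""
    counts = Counter(visited_domains)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

diverse_user = ["bbc.co.uk", "nytimes.com", "reuters.com", "theguardian.com"]
narrow_user = ["onesite.com"] * 4

print(f"Diverse user entropy: {source_entropy(diverse_user):.2f} bits")
print(f"Narrow user entropy:  {source_entropy(narrow_user):.2f} bits")
```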