15,362 research outputs found

    Predicting User-Interactions on Reddit

    In order to keep up with the demand of curating the deluge of crowd-sourced content, social media platforms leverage user interaction feedback to make decisions about which content to display, highlight, and hide. User interactions such as likes, votes, clicks, and views are assumed to be a proxy for a content's quality, popularity, or newsworthiness. In this paper we ask: how predictable are the interactions of a user on social media? To answer this question we recorded the clicking, browsing, and voting behavior of 186 Reddit users over a year. We present descriptive statistics about their combined 339,270 interactions, and we find that relatively simple models are able to predict users' individual browse- or vote-interactions with reasonable accuracy. Comment: Presented at ASONAM 201
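
    As a purely illustrative companion to this abstract (not the authors' actual model, features, or data), the sketch below shows what a "relatively simple model" for vote-interaction prediction could look like, assuming a hypothetical table of per-interaction features such as subreddit, hour of day, and post score.

        # Hypothetical sketch: predict whether a user up-votes a post from simple
        # contextual features; the file name, columns, and model choice are all
        # illustrative, not taken from the paper.
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import OneHotEncoder

        df = pd.read_csv("reddit_interactions.csv")  # one row per browse event
        X = df[["subreddit", "hour_of_day", "post_score"]]
        y = df["voted"]                              # binary label

        model = Pipeline([
            ("encode", ColumnTransformer(
                [("subreddit", OneHotEncoder(handle_unknown="ignore"), ["subreddit"])],
                remainder="passthrough")),
            ("clf", LogisticRegression(max_iter=1000)),
        ])

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
        model.fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))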

    Ensuring Cyber-Security in Smart Railway Surveillance with SHIELD

    Modern railways feature increasingly complex embedded computing systems for surveillance, which are moving towards fully wireless smart sensors. These systems are aimed at monitoring system status from a physical-security viewpoint, in order to detect intrusions and other environmental anomalies. However, the same systems used for physical-security surveillance are vulnerable to cyber-security threats, since they feature distributed hardware and software architectures often interconnected by ‘open networks’, like wireless channels and the Internet. In this paper, we show how the integrated approach to Security, Privacy and Dependability (SPD) in embedded systems provided by the SHIELD framework (developed within the EU-funded pSHIELD and nSHIELD research projects) can be applied to railway surveillance systems in order to measure and improve their SPD level. SHIELD implements a layered architecture (node, network, middleware and overlay) and orchestrates SPD mechanisms based on ontology models, appropriate metrics and composability. The results of a prototypical application to a real-world demonstrator show the effectiveness of SHIELD and justify its practical applicability in industrial settings.
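
    SHIELD composes SPD levels through ontology models and formal metrics; the toy fragment below only illustrates the general idea of aggregating per-layer scores against a target level. The layer names follow the abstract, but the scores, weights, aggregation rule, and threshold are invented for illustration.

        # Toy illustration of composing an SPD level across SHIELD-style layers.
        SPD_TARGET = 0.8  # invented target level

        layer_scores = {"node": 0.9, "network": 0.7, "middleware": 0.85, "overlay": 0.8}
        layer_weights = {"node": 1.0, "network": 1.2, "middleware": 1.0, "overlay": 0.8}

        def composed_spd(scores, weights):
            # Pessimistic composition: the system is only as strong as its
            # weakest (weight-adjusted) layer; each term is capped at 1.0.
            return min(min(1.0, scores[k] * weights[k]) for k in scores)

        level = composed_spd(layer_scores, layer_weights)
        print(f"composed SPD level: {level:.2f}, target met: {level >= SPD_TARGET}")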

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, an overload of data will cause a lack of annotation capacity, while on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    Video summarisation: A conceptual framework and survey of the state of the art

    This is the post-print (final draft post-refereeing) version of the article. Copyright © 2007 Elsevier Inc. Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying it. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (the outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user-based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
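
    To make the framework's distinctions concrete, the hypothetical data model below encodes the surveyed categories (internal, external and hybrid techniques; object-, event-, perception- or feature-based summaries; interactive or static and personalised or generic consumption) as plain Python types. The class and field names are invented, not the authors'.

        # Illustrative encoding of the surveyed taxonomy; all names are invented.
        from dataclasses import dataclass
        from enum import Enum

        class TechniqueKind(Enum):
            INTERNAL = "analyses information sourced directly from the video stream"
            EXTERNAL = "analyses information not sourced from the video stream"
            HYBRID = "analyses a combination of internal and external information"

        class ContentBasis(Enum):
            OBJECT = "object"
            EVENT = "event"
            PERCEPTION = "perception"
            FEATURE = "feature"

        @dataclass
        class VideoSummary:
            basis: ContentBasis   # what the summary is derived from
            interactive: bool     # interactive vs. static consumption
            personalised: bool    # personalised vs. generic

        @dataclass
        class SummarisationTechnique:
            kind: TechniqueKind
            produces: VideoSummary

        example = SummarisationTechnique(
            kind=TechniqueKind.HYBRID,
            produces=VideoSummary(basis=ContentBasis.EVENT,
                                  interactive=True, personalised=True),
        )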

    Using webcrawling of publicly available websites to assess E-commerce relationships

    We investigate e-commerce success factors concerning their impact on the success of e-commerce transactions between companies. In the scientific literature, many e-commerce success factors have been introduced. Most of them focus on companies' website quality. They are evaluated with respect to companies' success in the business-to-consumer (B2C) environment, where consumers choose their preferred e-commerce websites based on these success factors, e.g. website content quality, website interaction, and website customization. In contrast to previous work, this research focuses on the usage of existing e-commerce success factors for predicting the success of business-to-business (B2B) e-commerce. The introduced methodology is based on the identification of semantic textual patterns representing success factors on the websites of B2B companies. The predictive power of the identified success factors for B2B e-commerce is evaluated by regression modeling. As a result, it is shown that some B2C e-commerce success factors also enable the prediction of B2B e-commerce success, while others do not. This contributes to the existing literature on e-commerce success factors. Further, these findings are valuable for the creation of B2B e-commerce websites.
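
    The paper's own pipeline identifies semantic textual patterns on B2B websites and then evaluates them with regression modeling; the sketch below is only a schematic of that last step, assuming a hypothetical matrix of per-company success-factor scores and an invented numeric success measure.

        # Schematic only: regress a hypothetical B2B success measure on
        # success-factor scores extracted from company websites.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Rows: companies. Columns: content quality, interaction, customization
        # (illustrative scores in [0, 1], e.g. from text-pattern matching).
        factor_scores = np.array([
            [0.8, 0.6, 0.4],
            [0.5, 0.9, 0.7],
            [0.3, 0.2, 0.9],
            [0.9, 0.7, 0.6],
        ])
        success = np.array([1.2, 1.5, 0.6, 1.8])  # invented success measure

        model = LinearRegression().fit(factor_scores, success)
        for name, coef in zip(["content quality", "interaction", "customization"],
                              model.coef_):
            print(f"{name}: coefficient {coef:+.2f}")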

    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, a huge effort has been devoted to coping with behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in “SMART” environments, where the huge number of devices, their significant computational capabilities as well as their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to dynamically evolve through the modification of both topology and subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the “composability of security functionalities”, recently introduced by European-funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently of the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work starts from the proof-of-concept work performed in the early phase of the pSHIELD research and enriches it through an innovative metric with a sound foundation, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
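
    The paper's contribution is a technology-independent security metric fed into a closed-loop control scheme; the fragment below sketches only the shape of such a loop, with invented element scores, a naive mean as the aggregation, and a placeholder control action.

        # Shape of a closed-loop "composable security" controller; the scores,
        # the aggregation rule, and the control action are placeholders.
        def measure(elements):
            # Aggregate element security levels into a system-level score
            # (simple mean here; the paper derives a sound metric instead).
            return sum(elements.values()) / len(elements)

        def control_action(elements):
            # Pick the weakest element as the one to harden next.
            return min(elements, key=elements.get)

        elements = {"sensor": 0.6, "gateway": 0.75, "backend": 0.9}
        target = 0.8

        while measure(elements) < target:
            weakest = control_action(elements)
            print(f"hardening {weakest} (system level {measure(elements):.2f})")
            elements[weakest] = min(1.0, elements[weakest] + 0.1)  # simulated effect

        print(f"target {target} reached: system level {measure(elements):.2f}")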

    Progress in information technology and tourism management: 20 years on and 10 years after the Internet—The state of eTourism research

    This paper reviews the published articles on eTourism in the past 20 years. Using a wide variety of sources, mainly in the tourism literature, it comprehensively reviews and analyzes prior studies in the context of Internet applications to tourism. The paper also projects future developments in eTourism and demonstrates critical changes that will influence the structure of the tourism industry. A major contribution of this paper is its overview of the research and development efforts that have been undertaken in the field, and of the challenges that tourism researchers are, and will be, facing.

    Thinking Materially: Cognition as Extended and Enacted

    Human cognition is extended and enacted. Drawing the boundaries of cognition to include the resources and attributes of the body and materiality allows an examination of how these components interact with the brain as a system, especially over cultural and evolutionary spans of time. Literacy and numeracy provide examples of multigenerational, incremental change in both psychological functioning and material forms. Though we think materiality, its central role in human cognition is often unappreciated, for reasons that include conceptual distribution over multiple material forms, the unconscious transparency of cognitive activity in general, and the different temporalities of metaplastic change in neurons and cultural forms.

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus must have a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that are still to be addressed, and showing how the current solutions can be applied to address these challenges. Peer reviewed. Postprint (author's final draft).
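
    To illustrate the distinction the survey draws between traditionally batched ETL and more operational, runtime-integrated flows, the toy sketch below contrasts a batch load into a simulated warehouse table with a lazy, on-demand flow over the same source; the data, transform, and names are invented.

        # Toy contrast between a batched ETL load and a runtime-integrated flow;
        # the source rows, the transform, and the "warehouse" are all simulated.
        source_rows = [{"sale": 100, "ccy": "eur"}, {"sale": 250, "ccy": "usd"}]

        def transform(row):
            # Stand-in for cleansing/conforming logic.
            return {"amount": row["sale"], "currency": row["ccy"].upper()}

        # Batch ETL: extract everything, transform, load into the warehouse table.
        warehouse_table = [transform(r) for r in source_rows]

        # Runtime integration: transform rows lazily, only when a query pulls them.
        def live_flow(rows):
            for row in rows:
                yield transform(row)

        print("warehouse snapshot:", warehouse_table)
        print("first live row:   ", next(live_flow(source_rows)))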