
    The Digital Architectures of Social Media: Comparing Political Campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 U.S. Election

    The present study argues that political communication on social media is mediated by a platform's digital architecture, defined as the technical protocols that enable, constrain, and shape user behavior in a virtual space. A framework for understanding digital architectures is introduced, and four platforms (Facebook, Twitter, Instagram, and Snapchat) are compared along this typology. Using the 2016 US election as a case, interviews with three Republican digital strategists are combined with social media data to qualify the study's theoretical claim that a platform's network structure, functionality, algorithmic filtering, and datafication model affect political campaign strategy on social media.

    Feeds as Query Result Serializations

    Many Web-based data sources and services are available as feeds, a model that provides consumers with a loosely coupled way of interacting with providers. The current feed model is limited in its capabilities, however. Though it is simple to implement and scales well, it cannot be transferred to a wider range of application scenarios. This paper conceptualizes feeds as a way to serialize query results, describes the current hardcoded query semantics of such a perspective, and surveys the ways in which extensions of this hardcoded model have been proposed or implemented. Our generalized view of feeds as query result serializations has implications for the applicability of feeds as a generic Web service for any collection that provides access to individual information items. As one interesting and compelling class of applications, we describe a simple way in which a query-based approach to feeds can be used to support location-based services.
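
The "feed as query result serialization" idea can be illustrated with a minimal sketch: a feed document is just the serialized result of a parameterized query over a collection, here a location query. All names, the item collection, and the bounding-box proximity test are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    updated: str          # ISO 8601 timestamp
    lat: float
    lon: float

# Hypothetical collection of geotagged items exposed by a provider.
COLLECTION = [
    Item("Cafe opening", "2024-01-03T10:00:00Z", 52.52, 13.40),
    Item("Road closure", "2024-01-05T08:30:00Z", 48.14, 11.58),
    Item("Street fair",  "2024-01-04T12:00:00Z", 52.51, 13.38),
]

def near(item, lat, lon, radius=0.1):
    # Crude bounding-box proximity test, purely for illustration.
    return abs(item.lat - lat) <= radius and abs(item.lon - lon) <= radius

def feed(collection, lat, lon, limit=10):
    """Serialize a location query's result set as a minimal Atom-like feed."""
    hits = sorted((i for i in collection if near(i, lat, lon)),
                  key=lambda i: i.updated, reverse=True)[:limit]
    entries = "\n".join(
        f"  <entry><title>{i.title}</title><updated>{i.updated}</updated></entry>"
        for i in hits)
    return f"<feed>\n{entries}\n</feed>"

print(feed(COLLECTION, 52.52, 13.40))
```

The feed document itself stays as simple as the hardcoded model; only the query behind it is generalized, which is what makes location-based services a natural fit.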

    Impact of service-oriented architectures (SOA) on business process standardization - Proposing a research model

    Originally, Data Warehouses (DWH) were conceived to be components for the data support of controlling and management. From early on, this brought along the need to cope with extensive data preparation, integration, and distribution requirements. In the growing infrastructures for managerial support ("Business Intelligence"), the DWH turned into a central data hub for decision support. As the business environment and the underlying technical infrastructures are fostering an ever-increasing degree of systems integration, the DWH has been recognized to be a pivotal component for all sorts of data transformation and data integration operations. Nowadays, the DWH is supposed to process both managerial and operational data – it becomes a transformation hub (TH). This article delineates the relevant motives that drive the trend towards THs and the resulting requirements. The logical composition of a TH is developed based on data transformation steps. Two case studies exemplify the application of the resulting architecture.
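
The abstract's notion of a transformation hub composed from data transformation steps can be sketched as a chain of record-level transformations. The step names, record fields, and the VAT factor are illustrative assumptions, not taken from the article.

```python
# A transformation hub modeled as a chain of transformation steps, each
# taking and returning a list of records (dicts). All names are hypothetical.

def cleanse(records):
    # Drop records missing a customer id (a data-quality step).
    return [r for r in records if r.get("customer_id") is not None]

def harmonize(records):
    # Normalize currency codes to uppercase ISO form (an integration step).
    return [{**r, "currency": r["currency"].upper()} for r in records]

def enrich(records):
    # Derive a gross amount (net * assumed 1.19 VAT) for managerial reporting.
    return [{**r, "gross": round(r["net"] * 1.19, 2)} for r in records]

def transformation_hub(records, steps=(cleanse, harmonize, enrich)):
    """Route records through the configured transformation steps in order."""
    for step in steps:
        records = step(records)
    return records

sample = [
    {"customer_id": 1,    "currency": "eur", "net": 100.0},
    {"customer_id": None, "currency": "usd", "net": 50.0},
]
print(transformation_hub(sample))
```

Because the hub is just an ordered composition of steps, the same pipeline can serve both managerial (enriched, aggregated) and operational (cleansed, harmonized) consumers by configuring different step chains.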

    Using Classification Techniques for Assigning Work Descriptions to Task Groups on the Basis of Construction Vocabulary

    Construction project management produces a huge amount of documents in a variety of formats. The efficient use of the data contained in these documents is crucial to enhance control and to improve performance. A central pillar throughout the project life cycle is the Bill of Quantities (BoQ) document. It provides economic information and details a collection of work descriptions describing the nature of the different works that need to be done to achieve the project goal. In this work, we focus on the problem of automatically classifying such work descriptions into a predefined task organization hierarchy, so that it can be possible to store them in a common data repository. We describe a methodology for preprocessing the text associated with work descriptions to build training and test data sets and carry out a complete experimentation with several well-known machine learning algorithms.
    Programa Juan de la Cierva, Grant Number: FJCI-2015-24093. Ministry of Economy, Industry and Competitiveness, European Regional Development Fund (ERDF), Grant Number: TIN2014-58227-
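
The classification task the abstract describes — assigning free-text work descriptions to task groups via construction vocabulary — can be sketched with a toy vocabulary-overlap classifier. The training descriptions, task-group labels, and scoring rule are illustrative assumptions; the paper evaluates several standard machine learning algorithms rather than this particular scheme.

```python
from collections import Counter

# Hypothetical training data: work descriptions labeled with task groups.
TRAIN = [
    ("excavation of trench for foundation", "earthworks"),
    ("backfill and compaction of trench",   "earthworks"),
    ("pouring reinforced concrete slab",    "concrete"),
    ("formwork and concrete for columns",   "concrete"),
    ("installation of copper water piping", "plumbing"),
    ("pvc drainage piping installation",    "plumbing"),
]

def tokens(text):
    # Minimal preprocessing; the paper's methodology is more elaborate.
    return text.lower().split()

# Per-task-group term-frequency profiles built from the training descriptions.
PROFILES = {}
for text, label in TRAIN:
    PROFILES.setdefault(label, Counter()).update(tokens(text))

def classify(description):
    """Assign the task group whose vocabulary overlaps most with the text."""
    words = tokens(description)
    return max(PROFILES, key=lambda label: sum(PROFILES[label][w] for w in words))

print(classify("concrete slab on grade"))   # → concrete
```

A real system would replace the overlap score with TF-IDF features and a trained classifier, but the structure — preprocess, build per-class vocabulary statistics, score, assign — is the same.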

    Visual and computational analysis of structure-activity relationships in high-throughput screening data

    Novel analytic methods are required to assimilate the large volumes of structural and bioassay data generated by combinatorial chemistry and high-throughput screening programmes in the pharmaceutical and agrochemical industries. This paper reviews recent work in visualisation and data mining that can be used to develop structure-activity relationships from such chemical/biological datasets.

    Building Discerning Knowledge Bases from Multiple Source Documents, with Novel Fact Filtering

    Information extraction systems that remember only novel information (facts that differ semantically from those previously extracted) can be used to build lean knowledge bases fed from multiple, possibly overlapping sources. In previous research by the authors, natural language processing techniques were used to build a system to extract financial facts from international corporate reports in the Wall Street Journal. We will enhance that system to extract the same types of financial facts from a second source of corporate financial reports: Reuters. The improved system will provide more generality through its ability to extract from multiple sources rather than just one. In addition, it will provide novelty filtering of extracted information, admitting only novel facts into the database, while remembering every source that a redundant fact came from.
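
The novelty-filtering behavior — admit only novel facts, but keep track of every source a redundant fact appeared in — can be sketched as follows. The whitespace-normalizing canonical key is a deliberate simplification: the system described uses NLP techniques to compare facts semantically, which this toy stands in for.

```python
class NoveltyFilteringKB:
    """Toy knowledge base that stores each fact once and accumulates the
    sources of redundant re-extractions. All names are illustrative."""

    def __init__(self):
        self.facts = {}   # canonical fact -> list of sources it came from

    def _key(self, fact):
        # Crude canonical form; the real system compares facts semantically.
        return " ".join(fact.lower().split())

    def add(self, fact, source):
        """Return True if the fact was novel and admitted, False if redundant."""
        key = self._key(fact)
        if key in self.facts:
            # Redundant fact: remember the extra source, store nothing new.
            self.facts[key].append(source)
            return False
        self.facts[key] = [source]
        return True

kb = NoveltyFilteringKB()
kb.add("ACME net income rose 12% in Q3", "WSJ")
novel = kb.add("ACME net income  rose 12% in Q3", "Reuters")  # redundant
```

The payoff is exactly what the abstract claims: the knowledge base stays lean (one copy per fact) while provenance across overlapping sources such as the WSJ and Reuters is fully preserved.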

    Cloud BI: Future of business intelligence in the Cloud

    In self-hosted environments it was feared that business intelligence (BI) would eventually face a resource crunch due to the never-ending expansion of data warehouses and the online analytical processing (OLAP) demands on the underlying networking. Cloud computing has instigated new hope for the future prospects of BI. However, how will BI be implemented on the Cloud, and what will the traffic and demand profiles look like? This research attempts to answer these key questions with regard to taking BI to the Cloud. The Cloud hosting of BI has been demonstrated with the help of a simulation in OPNET comprising a Cloud model with multiple OLAP application servers applying parallel query loads to an array of servers hosting relational databases. The simulation results showed that extensible parallel processing by database servers on the Cloud can efficiently serve OLAP application demands.
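
The workload pattern being simulated — OLAP application servers fanning parallel query loads out across an array of database servers — can be sketched in ordinary code. The server function, round-robin assignment, and delay are illustrative assumptions; the paper's evaluation is a network simulation in OPNET, not application code.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical database server: answers an OLAP query after a simulated delay.
def db_server(server_id, query):
    time.sleep(0.01)   # stand-in for query processing and network latency
    return f"server-{server_id}: rows for {query!r}"

def parallel_olap(queries, n_servers=4):
    """Fan a batch of OLAP queries out across the server array in parallel,
    assigning queries round-robin and collecting results in submission order."""
    with ThreadPoolExecutor(max_workers=n_servers) as pool:
        futures = [pool.submit(db_server, i % n_servers, q)
                   for i, q in enumerate(queries)]
        return [f.result() for f in futures]

results = parallel_olap(["sales by region", "rollup by quarter", "drilldown SKU"])
```

The elasticity argument follows from this shape: because each query is independent, adding Cloud-hosted database servers raises `n_servers` and throughput scales with it, which is the behavior the OPNET simulation measures at the network level.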