
    Mobile Live Video Streaming Optimization via Crowdsourcing Brokerage

    Nowadays, people can enjoy rich, real-time media coverage of whatever interests them, anytime and anywhere, by leveraging powerful mobile devices such as smartphones. As a key support for the propagation of such rich live media content, cellular-based access technologies play a vital role in providing reliable and ubiquitous Internet access to mobile devices. However, wireless channel conditions are limited and fluctuate with weather, building obstructions, congestion, and other factors, which dramatically degrades the quality of live video streaming. To address this challenge, we propose using crowdsourcing brokerage in future networks, which can improve each mobile user's bandwidth and reduce fluctuations in network conditions. Further, to serve mobile users better in this crowdsourcing style, we study the brokerage scheduling problem, which aims to maximize users' quality-of-experience (QoE) satisfaction cost-effectively. Both offline and online algorithms are proposed to solve this problem. The results of extensive evaluations demonstrate that, by leveraging the crowdsourcing technique, our solution can cost-effectively guarantee a higher-quality viewing experience.
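    The core trade-off the abstract describes (QoE gain versus brokerage cost) can be illustrated with a minimal greedy sketch. This is a hypothetical illustration, not the paper's actual algorithm: the function names, the concave QoE model, and the broker tuples are all assumptions made here for clarity.

    ```python
    import math

    def select_broker(user_bandwidth, brokers):
        """Pick the broker maximizing marginal QoE gain per unit cost.

        brokers: list of (extra_bandwidth_mbps, cost) tuples.
        QoE is modeled (a simplifying assumption) as a concave function of
        total bandwidth, so gains diminish as bandwidth grows.
        """
        def qoe(bw):
            return math.log1p(bw)  # concave: diminishing returns

        base = qoe(user_bandwidth)
        best, best_ratio = None, 0.0
        for i, (extra_bw, cost) in enumerate(brokers):
            gain = qoe(user_bandwidth + extra_bw) - base
            ratio = gain / cost
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        return best

    # A user with 2 Mbps chooses among three candidate brokers:
    # the mid-priced broker wins on QoE gain per unit cost.
    print(select_broker(2.0, [(1.0, 0.5), (4.0, 1.0), (8.0, 4.0)]))  # → 1
    ```

    An online variant would apply the same per-user rule as users arrive, which is roughly the shape an online scheduling heuristic for this problem could take.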

    Managing Quality of Crowdsourced Data

    The Web is the central medium for discovering knowledge via various sources such as blogs, social media, and wikis. It facilitates access to content provided by a large number of users, regardless of their geographical locations or cultural backgrounds. Such user-generated content is often referred to as 'crowdsourced data', which provides informational benefit in terms of variety and scale. Yet, the quality of crowdsourced data is hard to manage, due to the inherent uncertainty and heterogeneity of the Web. In this proposal, we summarize prior work on crowdsourced data that studies quality dimensions and techniques for assessing data quality. However, such work often lacks mechanisms to collect data with quality guarantees and to improve data quality. To overcome these limitations, we propose a research direction that emphasises (1) guaranteeing data quality at collection time, and (2) using expert knowledge to improve data quality in cases where the data has already been collected.
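    One common way to pursue quality guarantees at collection time is redundant labeling with an agreement threshold; items that fail the threshold are routed to experts, which matches the proposal's second direction. The sketch below is an illustrative assumption, not the proposal's own mechanism, and the threshold value is arbitrary.

    ```python
    from collections import Counter

    def aggregate_labels(labels, min_agreement=0.6):
        """Accept the majority label only if agreement passes a threshold;
        otherwise flag the item for expert review."""
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        if votes / len(labels) >= min_agreement:
            return label, "accepted"
        return label, "needs_expert_review"

    # High agreement among five crowd workers: accepted at collection time.
    print(aggregate_labels(["cat", "cat", "dog", "cat", "cat"]))
    # Low agreement: escalated to an expert instead of silently stored.
    print(aggregate_labels(["cat", "dog", "bird", "cat", "dog"]))
    ```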

    Rethinking the digital transformation using technology space analysis

    The world is in the midst of a digital transformation. An intensified prevalence and use of digital technologies is fundamentally changing organizations and economies. However, the notion of 'digital transformation' is both theoretically and empirically underspecified. This paper rethinks the digital transformation narrative theoretically by embedding the concept in concurrent debates about technological revolutions and neo-Schumpeterian innovation theory. Empirically, the paper specifies the digital transformation by analysing the technological composition of key start-up and scale-up companies in the knowledge-intensive services sector. Undertaking a technology space analysis of 40,754 start-up and scale-up companies derived from the near real-time Dealroom.co database, we analyse which technologies and application domains are currently converging, distilling the key elements of the digital transformation. The paper concludes that the transmission of digital technologies is often indirect, through 'key enabling technology clusters' that connect the technological vanguard to application domains.

    Pressures, centralization, economics, technology, and ethics: factors that impact public information officer - journalist relationships

    A study of public information officers (PIOs) in three states and the journalists who cover state government finds five primary factors that shape the working relationships between the two groups. Institutional pressures on both PIOs and journalists affect the ability of each party to meet the needs of the other on a daily basis. High levels of centralization in state government communication limit the ability of PIOs to meet the needs of journalists, fostering journalists' antagonism and a more combative working relationship. The economic decline of journalism is creating a dichotomous situation in which PIOs can help journalists manage increasing demands under shrinking deadlines, or they can take advantage of journalists' growing limitations and abuse the relationship. Growing use of social and digital media is providing opportunities to help journalists perform daily tasks more efficiently, but some journalists perceive PIOs' use of these tools as competition for public attention. Straightforward, ethical practices by both parties, grounded in candor, help build trust over time and strengthen working relationships. These findings provide the basis for a new model for state government media relations that helps PIOs and journalists negotiate these factors to meet their shared responsibilities in co-creating an enlightened citizenry.

    Polycentric Information Commons: A Theory Development and Empirical Investigation

    Decentralized systems online—such as open source software (OSS) development, online communities, wikis, and social media—often experience declines in participation that threaten their long-term sustainability. Building on a rich body of research on the sustainability of physical resource systems, this dissertation presents a novel theoretical framing that addresses the sustainability issues arising in decentralized systems online, which are amplified by their open nature. The first essay develops the theory of polycentric information commons (PIC), which conceptualizes decentralized systems online as "information commons". The theory defines information commons, the stakeholders that participate in them, the sustainability indicators of information commons, and the collective-action threats putting pressure on their long-term sustainability. Drawing on Ostrom's factors associated with stable common pool resource systems, PIC theory specifies four polycentric governance practices that can help information commons reduce the magnitude and impact of collective-action threats while improving the commons' sustainability. The second essay further develops PIC theory by applying it in the empirical context of "digital activism". Specifically, it examines the role of polycentric governance in reducing threats to the legitimacy of digital activism—a type of information commons with the overarching objective of instigating societal change—and thereby illustrates the applicability of PIC theory to the study of digital activism. The third essay focuses on the threat of "information pollution" and its impact on open collaboration, a type of information commons dedicated to creating value through open participation online. It uncovers how polycentric governance mechanisms help reduce the duration of pollution events. This essay contributes to PIC theory by expanding it to the realm of operational governance in open collaboration.

    Big Data and Its Applications in Smart Real Estate and the Disaster Management Life Cycle: A Systematic Analysis

    Big data is the concept of enormous amounts of data being generated daily in different fields due to the increased use of technology and internet sources. Despite various advancements and hopes of better understanding, big data management and analysis remain a challenge, calling for more rigorous and detailed research, as well as the identification of methods and ways in which big data can be tackled and put to good use. Existing research falls short in discussing and evaluating the pertinent tools and technologies for analyzing big data efficiently, which calls for a comprehensive and holistic analysis of the published articles to summarize the concept of big data and survey field-specific applications. To address this gap and keep a recent focus, research articles published in the last decade in top-tier, high-impact journals were retrieved using the search engines Google Scholar, Scopus, and Web of Science, and narrowed down to a set of 139 relevant research articles. Different analyses were conducted on the retrieved papers, including bibliometric analysis, keyword analysis, big data search trends, and the authors' names, countries, and affiliated institutes contributing the most to the field of big data. The comparative analyses show that, conceptually, big data lies at the intersection of the storage, statistics, technology, and research fields, and emerged as an amalgam of these four fields with interlinked aspects such as data hosting and computing, data management, data refining, data patterns, and machine learning. The results further show that the major characteristics of big data can be summarized using the seven Vs: variety, volume, variability, value, visualization, veracity, and velocity.
Furthermore, the existing methods for big data analysis, their shortcomings, and possible directions for harnessing technology to make data analysis tools faster and more efficient were also explored. The major challenges in handling big data include efficient storage, retrieval, analysis, and visualization of large heterogeneous data, which can be tackled through authentication mechanisms such as Kerberos and encrypted files, logging of attacks, secure communication via Secure Sockets Layer (SSL) and Transport Layer Security (TLS), data imputation, building learning models, dividing computations into sub-tasks, checkpointing applications for recursive tasks, and using Solid State Drives (SSD) and Phase Change Memory (PCM) for storage. In terms of frameworks for big data management, two major frameworks exist, Hadoop and Apache Spark, which must be used simultaneously to capture the holistic essence of the data and make the analyses meaningful, swift, and speedy. Further field-specific applications of big data in two promising and integrated fields, i.e., smart real estate and disaster management, were investigated, and a framework for field-specific applications, as well as a merger of the two areas through big data, was highlighted. The proposed frameworks show that big data can tackle the ever-present issue of customer regret caused by poor or missing information in smart real estate, increasing customer satisfaction through an intermediary organization that processes and checks the data provided to customers by sellers and real estate managers. Similarly, for disaster risk management, data from social media, drones, multimedia, and search engines can be used to tackle natural disasters such as floods, bushfires, and earthquakes, as well as to plan emergency responses.
In addition, a merged framework for smart real estate and disaster risk management shows that big data generated from smart real estate, in the form of occupant data, facilities management, and building integration and maintenance, can be shared with disaster risk management and emergency response teams to help prevent, prepare for, respond to, and recover from disasters.
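    The "dividing computations into sub-tasks" technique the abstract attributes to frameworks like Hadoop and Apache Spark can be sketched in miniature: split the data into chunks, process each chunk independently (map), then merge the partial results (reduce). This toy word count is a pure-Python illustration of the idea, not either framework's actual API.

    ```python
    from collections import Counter
    from functools import reduce

    def map_chunk(chunk):
        """Count words in one chunk; each chunk could run on a separate node."""
        return Counter(word for line in chunk for word in line.split())

    def word_count(lines, n_chunks=4):
        size = max(1, len(lines) // n_chunks)
        chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
        partials = [map_chunk(c) for c in chunks]      # map phase
        return reduce(lambda a, b: a + b, partials)    # reduce phase

    logs = ["error disk full", "ok", "error timeout", "ok ok"]
    print(word_count(logs)["ok"])  # → 3
    ```

    In a real deployment, the map phase would be distributed across machines by the framework, and the reduce phase would merge results shuffled by key; the sketch only preserves the logical structure.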