
    Using the Twitter Platform as a Research Method in the Information and Social Media Age

    Social media provides rich and massive sources of qualitative data. The sheer volume of data produced on these platforms is a promising resource for studying socially constructed language, interactions and behaviours. Research utilising social media data offers a useful alternative to traditional research methods, which are often restricted by theoretical and methodological boundaries. Social media data are qualitative and unstructured in nature, consisting of words, written or spoken, which naturally lends itself to a qualitative approach. Researchers immerse themselves in captured social media data as ‘text’. Such involvement poses challenges, in particular identifying and locating appropriate data to collect and developing a research design that analyses the data to its full potential and yields valid findings. This study contributes methodologically to qualitative research by extending traditional qualitative methods to include Twitter data gathered through a designated hashtag. Two cases of Organic and Semi-organic Twitter Data are discussed along with research implications and limitations.
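The hashtag-based collection described above can be illustrated with a minimal sketch. The field names and the rule for splitting organic (spontaneous) from semi-organic (researcher-prompted) posts are assumptions for illustration, not the study's actual implementation:

```python
# Hypothetical sketch: keep posts carrying a designated hashtag and split
# them into organic (spontaneous) vs semi-organic (replies to a
# researcher-prompted account). All field names are illustrative.
def collect_by_hashtag(posts, hashtag, prompt_account=None):
    organic, semi_organic = [], []
    for post in posts:
        # Only keep posts that carry the designated hashtag.
        if hashtag.lower() not in post["text"].lower():
            continue
        # Replies to the prompting account are treated as semi-organic.
        if prompt_account and post.get("in_reply_to") == prompt_account:
            semi_organic.append(post)
        else:
            organic.append(post)
    return organic, semi_organic
```

For example, a captured post replying to the prompting account lands in the semi-organic set, while a spontaneous post with the hashtag lands in the organic set.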

    Creation of unstructured big data from customer service: The case of parcel shipping companies on Twitter

    Purpose – Customer service provision is a growing phenomenon on social media, and parcel shipping companies have been among its most prominent adopters. This has coincided with greater interest in techniques for analysing unstructured big data from social media platforms such as the micro-blogging platform Twitter. Given the growing use of dedicated customer service accounts on Twitter, this paper investigates how effectively parcel shipping companies use the platform. Design/methodology/approach – This paper demonstrates the use of a combination of tools for retrieving, processing and analysing large volumes of customer-service-related conversations generated between parcel shipping companies and their customers in Australia, the United Kingdom and the United States. Extant studies using data from Twitter tend to focus on the contributions of individual entities and are unable to capture the insights provided by a holistic examination of the interactions. Findings – This study identifies the key issues that trigger customer contact with parcel shipping companies on Twitter. It identifies similarities and differences in these companies' approaches to customer engagement and identifies opportunities for using the medium more effectively. Originality/value – The development of consumer-centric supply chains and related theories requires researchers and practitioners to be able to draw insights from growing quantities of unstructured data gathered through consumer engagement. This study makes a methodological contribution by demonstrating the use of a set of tools to gather insight from a large volume of conversations on a social media platform.
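The processing step that identifies issues triggering customer contact can be sketched as a simple keyword tally over captured tweet texts. The issue categories and cue words below are illustrative assumptions, not the paper's actual classification scheme:

```python
# Hypothetical sketch: tally the issues that trigger customer contact in a
# batch of captured tweets. Categories and cue words are assumptions.
from collections import Counter

ISSUE_KEYWORDS = {
    "delay": ["late", "delayed", "still waiting"],
    "damage": ["damaged", "broken", "crushed"],
    "tracking": ["tracking", "no update", "where is"],
}

def classify_issues(tweets):
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        # A tweet may raise more than one issue; count each matched category once.
        for issue, cues in ISSUE_KEYWORDS.items():
            if any(cue in lowered for cue in cues):
                counts[issue] += 1
    return counts
```

Aggregating such counts per company and per country is one simple way to compare engagement approaches across the three markets studied.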

    Capability Challenges in Transforming Government through Open and Big Data: Tales of Two Cities

    Hyper-connected, digitized governments are increasingly advancing a vision of data-driven government, acting as both producers and consumers of big data within the big data ecosystem. Despite growing interest in the potential power of big data, we found a paucity of empirical research on big data use in government. This paper explores the organizational capability challenges involved in transforming government through big data use. Using a systematic literature review approach, we developed an initial framework for examining the impacts of socio-political, strategic change, analytical, and technical capability challenges on enhancing public policy and services through big data. We then applied the framework in case study research on big data use in two large city governments. The findings indicate the framework's usefulness, shedding new light on the unique government context. Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain the impacts of organizational capability challenges in transforming government.

    Filtering News from Document Streams: Evaluation Aspects and Modeled Stream Utility

    Events like hurricanes, earthquakes, or accidents can impact a large number of people. Not only are people in the immediate vicinity of the event affected, but concerns about their well-being are shared by the local government and well-wishers across the world. The latest information about news events can help governments and aid agencies make informed decisions on providing necessary support, security and relief. The general public avails itself of news updates via dedicated news feeds or broadcasts, and lately via social media services like Facebook or Twitter. Retrieving the latest information about newsworthy events from the world-wide web is thus of importance to a large section of society. As new content on a multitude of topics is continuously being published on the web, specific event-related information needs to be filtered from the resulting stream of documents. In this thesis, we present a user-centric evaluation measure for systems that filter news-related information from document streams. Our proposed measure, Modeled Stream Utility (MSU), models users accessing information from a stream of sentences produced by a news update filtering system. The user model allows a large number of users with different characteristic stream browsing behaviours to be simulated. Through simulation, MSU estimates the utility of a system for an average user browsing a stream of sentences. Our results show that system performance is sensitive to a user population's stream browsing behaviour and that existing evaluation metrics correspond to very specific types of user behaviour.
    To evaluate systems that filter sentences from a document stream, we need a set of judged sentences. This judged set is a subset of all the sentences returned by all systems, and is typically constructed by pooling together the highest-quality sentences, as determined by each system's scores for its sentences. Sentences in the pool are manually assessed, and the resulting set of judged sentences is then used to compute system performance metrics. In this thesis, we investigate the effect on system performance evaluation of including duplicates of judged sentences in the judged set. We also develop an alternative pooling methodology that, given the MSU user model, selects sentences for pooling based on the probability of a sentence being read by modeled users. Our research lays the foundation for interesting future work on utilizing user models in other aspects of evaluating stream filtering systems. The MSU measure enables the incorporation of different user models, and its applicability could be extended through calibration against observed user behaviour.
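The idea of estimating utility by simulating users who browse a sentence stream can be sketched as follows. The geometric stopping model (each simulated user abandons the stream with a fixed probability after reading each sentence) is an illustrative assumption, not the thesis's exact MSU user model:

```python
# Illustrative sketch of user-model-based utility estimation: many simulated
# users read down a stream of sentences, each stopping with probability
# (1 - persistence) after every sentence read; utility is the average
# relevance gain accumulated per user. The stopping model is an assumption.
import random

def simulated_utility(relevance, persistence=0.8, n_users=10000, seed=7):
    """relevance: per-sentence relevance gains, in stream order."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_users):
        for gain in relevance:
            total += gain  # the user reads this sentence and gains its relevance
            if rng.random() > persistence:  # user abandons the stream
                break
    return total / n_users
```

Under this model the expected utility has a closed form, the sum of each gain discounted by the probability of reaching it; varying the persistence parameter is one way to represent populations with different browsing behaviours.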

    Distributed Contextual Anomaly Detection from Big Event Streams

    The age of big digital data has arrived, and data are now generated at millisecond scale by Internet of Things (IoT) and Internet of Everything (IoE) objects. Most of today's data are generated as streams by applications including sensor networks, bioinformatics, smart airports, smart highway traffic, smart home applications, e-commerce online shopping, and social media. In this context, processing and mining such high-volume data streams is a priority research concern and a challenging task. On the one hand, processing high volumes of streaming data with low-latency response is critical in most real-time applications, before important information is missed or disregarded. On the other hand, detecting events in data streams is a new research challenge, since existing traditional anomaly detection methods mainly focus on (a) limited data sizes, (b) centralised detection with limited computing resources, and (c) point or collective anomalies rather than the contextual behaviour of the data. Detecting contextual events in high-volume data streams is therefore the research concern addressed in this thesis. As IoT data streams scale up to high volumes, existing data processing structures and anomaly detection methods become impractical, owing to the space and time complexity of existing data processing models and learning algorithms.
    In this thesis, a novel distributed anomaly detection method and algorithm is proposed to detect contextual behaviours in bounded sequences of streams. The proposed solution firstly captures event streams and partitions them over several windows to control the high rate of event streams; secondly, it applies a parallel and distributed algorithm to detect contextual anomalous events. The experimental results are evaluated in terms of the algorithm's performance: low-latency processing and the accuracy of detecting contextual anomalous behaviour in the event streams. Finally, to address the scalability concerns of contextual event detection, appropriate computational metrics are proposed to measure and evaluate the processing latency of the distributed method. The results show that distributed detection is effective at learning from high volumes of streams in real time.
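The window-then-detect structure described above can be sketched on a single node. Events are partitioned into fixed-size windows, and within each window a value is flagged when it deviates strongly from other values sharing its context (e.g. the same sensor). The threshold, window size, and event fields are illustrative assumptions; the thesis's distributed algorithm is not reproduced here:

```python
# Minimal single-node sketch of windowed contextual anomaly detection.
# Events arrive as (context, value) pairs; each fixed-size window is
# examined independently, and a value is flagged if it lies more than
# `threshold` standard deviations from the mean of its context group.
from statistics import mean, pstdev

def contextual_anomalies(events, window_size=4, threshold=1.5):
    flagged = []
    for start in range(0, len(events), window_size):
        window = events[start:start + window_size]
        # Group the window's values by context (e.g. by sensor id).
        by_context = {}
        for ctx, val in window:
            by_context.setdefault(ctx, []).append(val)
        # Flag values that deviate strongly within their own context.
        for ctx, val in window:
            vals = by_context[ctx]
            if len(vals) < 2:
                continue  # not enough context to judge this value
            sd = pstdev(vals)
            if sd and abs(val - mean(vals)) > threshold * sd:
                flagged.append((ctx, val))
    return flagged
```

In a distributed setting, each window (or each context partition) would be handled by a separate worker, which is what keeps the per-event latency low as the stream volume grows.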