2,433 research outputs found

    Big Data as a Technology of Power

    The growing importance of big data in contemporary society raises significant and urgent ethical questions. In the academic literature and in the media, the dominant response to many of these questions is to re-examine the role and importance of privacy protections, but I argue that it is far more fruitful to investigate the relationship between power and big data. As algorithmic processes are increasingly used in decision making, it is crucial that we understand the ways in which big data can be used as a technology of power. Only then can we properly understand the ways in which the use of big data impacts on and reorganises society, and go on to develop effective, tailored protections for individuals against harm from the use of big data. First, I show that the rise of big data highlights the limits of privacy protections, as big data-based analytics allow personal information to be inferred in ways that circumvent privacy protections and problematise the category of personal information. In order to properly protect people from the potential harms that can arise from the use of big data in decision making, I argue that we must also examine the relationship between big data and power. In this thesis, I present an argument for a pluralistic understanding of power, and a lens through which we can identify the kinds of power being exercised in the contexts we are investigating. Power is best understood as an umbrella term that refers to a diverse range of phenomena across an equally diverse range of domains or contexts. We can use this approach to examine the central features of an exercise of power and so identify the relevant theoretical accounts of power to draw on in understanding the modes of power present in a context. In Chapter 4, I demonstrate the value of this approach by using it to analyse four contexts where big data is used as a technology of power, showing that we cannot use a single theoretical understanding of power across all exercises of power. Following this, I examine the impacts of big data on the operation of power. While many in the literature see big data as necessitating the development of new theoretical understandings of power, I argue that there are important historical continuities in power. Big data can be picked up and used as part of existing kinds of power just as any new technology can, and while this may change the efficiency, range, and effectiveness of exercises of power, it does not change their fundamental nature. However, there are impacts on the operation of power that are unique to big data, and one such impact considered here is that the inferential capabilities of big data shift power from acting on human subjects towards acting on data doubles (fragmentary digital representations of people). This leads to significant ethical problems in ensuring that power is exercised accountably. Finally, I demonstrate these problems in Chapter 7 by examining four more contexts in which big data is used as a technology of power, showing how the shift to the data double as the subject of power undermines the effectiveness of accountability as a check on the abuse of power.

    The Application of Big Data Analytics to Patent Litigation

    This research defines the current gap between big data analytics and patent litigation. It explores how big data analytics can be applied in the patent industry to create more effective risk analysis, an early warning system, and preventative strategies for inside and outside the courtroom. Big data has the potential to modify current practices in the patent industry, particularly in aiding patent examiners, attorneys, inventors, jurors, and judges. It also offers a solution to the threat that patent monetizers pose to smaller companies and inventors, who often lose rights to their patents and other assets in these sometimes unavoidable lawsuits. This research examines the application of big data in the healthcare industry for real-time results and preventative measures; these applications set a useful precedent for further diffusion into other industries, specifically the patent legal field. Features for future implementation and project development are presented as a roadmap to create a universal big data analytics system for the patent industry. Finally, this research touches upon a case study of Apple v. Samsung and identifies how the case might have yielded different results had big data analytics been applied to the legal proceedings.

    Digital Marketing and the Culture Industry: The Ethics of Big Data

    Instead of the steady march of one percent annual growth in ecommerce's share of total retail revenues over the last decade (comprising about nine percent of the industry at the close of 2019), we have witnessed leaps to over twenty percent in just the last year. Scott Galloway marks the pandemic as an accelerant not just of digital marketing, which posted a year's growth for each month of quarantine, but of each major GAFA (Google, Amazon, Facebook, and Apple) firm's move from market dominance to total dominance (Galloway 2020). Viewing these trends from the standpoint of critical marketing requires revisiting first-generation critical theorists' reflections on the American dominance of the global culture industry. Insofar as GAFA digital marketing practices highlight their transition from mere neutral platforms to shapers, creators, and drivers of cultural content, we need to complement marketing's praiseworthy achievements in statistical modeling (like SEM) with a sufficiently critical and theoretical contextualization. In this sense, while my investigation of big data will certainly countenance and explore its statistical (as algorithmic) innovations, what I capitalize as Big Data connotes the manners in which these large reserves of behavioral exhaust shape culture: domestic and global, home and workplace, private and public. The focus on ethics in each of these three articles follows not just moral norms, social practices, and associated virtues (or vices), but also the important ethical domains of compliance, basic rights, and juridical precedent. In the first article, I focus on the manners in which GAFA algorithmic personalization tends to employ the alluring promise of individually tailored service convenience at the social costs of echo chambers, filter bubbles, and endemic political polarization. In the second article, I seek to devise a data theory of value as the wider context for my proposal to advance a new marketing mix. My tentative argument is that the classical subject, as constructed by these platform domains, has reconfigured the consumer-firm relationship: the true value creators of the digital marketplace's workforce are its users as prosumers, an odd mixture of consumer, producer, and product. While the production era took nature as collateral damage in its claims upon mining limited raw materials, the onset of a consumption-driven economy harvests psychic and behavioral data as its new unlimited raw material, with its own trails of collateral damage that constitute the birth of surveillance capitalism (Zuboff 2019). In the third article, I turn to systemic racism in American sport, focusing on the performative rituals sanctioned, censored, and sold by the NFL as its foremost culture industry. In this last article, I also seek to develop a revamped epistemology for critical marketing that places a new primacy on the voices and experiences of those most systemically marginalized as the best lens from which to advance theories and practices that can disclose forms of latent domination often hidden behind an otherwise uncritical acceptance of the NFL culture industry as fundamentally apolitical leisurely entertainment.

    Combining Big Data And Traditional Business Intelligence – A Framework For A Hybrid Data-Driven Decision Support System

    Since the emergence of big data, traditional business intelligence systems have been unable to meet most of the information demands of many data-driven organisations. Nowadays, big data analytics is perceived to be the solution to the challenges related to information processing of big data and the decision-making needs of most data-driven organisations. Irrespective of the promised benefits of big data, organisations find it difficult to prove and realise the value of the investment required to develop and maintain big data analytics. The reality of big data is more complex than many organisations' perceptions of it. Most organisations have failed to implement big data analytics successfully, and some organisations that have implemented these systems are struggling to attain the promised value of big data. Organisations have realised that it is impractical to migrate the entire traditional business intelligence (BI) system into big data analytics, and that there is a need to integrate these two types of systems. Therefore, the purpose of this study was to investigate a framework for creating a hybrid data-driven decision support system that combines components from traditional business intelligence and big data analytics systems. The study employed an interpretive qualitative research methodology to investigate research participants' understanding of the concepts related to big data, a data-driven organisation, business intelligence, and other data analytics perceptions. Semi-structured interviews were held to collect research data, and thematic data analysis was used to understand the research participants' feedback based on their background knowledge and experiences. The application of the organisational information processing theory (OIPT) and the fit viability model (FVM) guided the interpretation of the study outcomes and the development of the proposed framework. The findings of the study suggested that data-driven organisations collect data from different data sources and process these data to transform them into information, with the goal of using the information as the basis of all their business decisions. Executive and senior management roles in the adoption of a data-driven decision-making culture are key to the success of the organisation. BI and big data analytics are tools and software systems that are used to assist a data-driven organisation in transforming data into information and knowledge. The challenges that organisations experience when trying to integrate BI and big data analytics were used to guide the development of the framework for creating a hybrid data-driven decision support system. The framework is divided into these elements: business motivation, information requirements, supporting mechanisms, data attributes, supporting processes, and hybrid data-driven decision support system architecture, as sketched below. The proposed framework is created to assist data-driven organisations in assessing the components of both business intelligence and big data analytics systems and making a case-by-case decision on which components can be used to satisfy the specific data requirements of an organisation. Therefore, the study contributes to enhancing the existing literature on the integration of business intelligence and big data analytics systems. Dissertation (MIT (Information Systems)), University of Pretoria, 2021.
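    To make the framework's six elements concrete, the sketch below represents them as a structured checklist that an organisation could fill in per information need. All field names follow the abstract's element list, but every example value is a hypothetical illustration, not drawn from the thesis.

```python
# A minimal sketch of the hybrid-DSS framework elements as a checklist.
# All example values are hypothetical illustrations, not from the thesis.
from dataclasses import dataclass


@dataclass
class HybridDSSFramework:
    business_motivation: str
    information_requirements: list[str]
    supporting_mechanisms: list[str]   # e.g. executive sponsorship, governance
    data_attributes: list[str]         # e.g. volume, variety, velocity
    supporting_processes: list[str]    # e.g. batch ETL, stream ingestion
    architecture: dict[str, str]       # information need -> serving component


example = HybridDSSFramework(
    business_motivation="base all business decisions on data",
    information_requirements=["monthly sales reporting", "clickstream analysis"],
    supporting_mechanisms=["executive sponsorship", "data governance"],
    data_attributes=["structured, low volume", "semi-structured, high velocity"],
    supporting_processes=["batch ETL", "stream ingestion"],
    architecture={
        "monthly sales reporting": "traditional BI warehouse",
        "clickstream analysis": "big data analytics platform",
    },
)
print(example.architecture)
```

    The intent of such a checklist is the case-by-case assessment the study proposes: each information requirement is matched to whichever BI or big data component satisfies it, rather than migrating everything to one stack.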

    Data mining Twitter for cancer, diabetes, and asthma insights

    Twitter may serve as a data resource to support healthcare research, but the literature on the potential of Twitter data in healthcare is still limited. The purpose of this study was to contrast the processes by which a large collection of unstructured disease-related tweets could be converted into structured data for further analysis, with the objective of gaining insights into the content and behavioral patterns associated with disease-specific communications on Twitter. Twelve months of Twitter data related to cancer, diabetes, and asthma were collected to form a baseline dataset containing over 34 million tweets. As Twitter data in its raw form would have been difficult to manage, three separate data reduction methods were contrasted to identify a method for generating analysis files that maximized classification precision and data retention. Each of the disease files was then run through a CHAID (Chi-square Automatic Interaction Detector) analysis to demonstrate how user behavior insights vary by disease. CHAID, a technique created by Gordon V. Kass in 1980, is a tool for discovering relationships between variables. The study followed the standard CRISP-DM data mining approach and demonstrates how the practice of mining Twitter data fits into this six-stage iterative framework. The study produced insights that provide a new lens into the potential of Twitter data as a valuable healthcare data source, as well as the nuances involved in working with the data.
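    To illustrate the chi-square idea at the heart of CHAID, here is a minimal sketch of its split-selection step (full CHAID also merges categories and applies Bonferroni corrections, omitted here). The column names and toy records are hypothetical, not the study's actual variables.

```python
# Minimal sketch of CHAID-style split selection on structured tweet records.
# Column names (disease, has_url, retweeted) are hypothetical examples.
import pandas as pd
from scipy.stats import chi2_contingency


def best_chaid_split(df: pd.DataFrame, predictors: list[str], target: str):
    """Pick the predictor whose categories are most significantly
    associated with the target (lowest chi-square p-value)."""
    best, best_p = None, 1.0
    for col in predictors:
        table = pd.crosstab(df[col], df[target])   # observed contingency table
        _, p, _, _ = chi2_contingency(table)
        if p < best_p:
            best, best_p = col, p
    return best, best_p


# Hypothetical structured records derived from raw tweets.
tweets = pd.DataFrame({
    "disease":   ["cancer", "diabetes", "asthma", "cancer", "diabetes", "asthma"] * 10,
    "has_url":   [1, 0, 1, 0, 1, 0] * 10,
    "retweeted": [1, 0, 0, 1, 1, 0] * 10,
})
print(best_chaid_split(tweets, ["disease", "has_url"], "retweeted"))
```

    Repeating this selection recursively on each resulting subset yields the CHAID tree, which is how per-disease behavioral segments can be surfaced from the structured tweet files.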

    The Management of Direct Material Cost During New Product Development: A Case Study on the Application of Big Data, Machine Learning, and Target Costing

    This dissertation investigates the application of big data, machine learning, and the target costing approach for managing costs during new product development in the context of high product complexity and uncertainty. A longitudinal case study at a German car manufacturer is conducted to examine the topic. First, we conduct a systematic literature review, which analyzes use cases, issues, and benefits of big data and machine learning technology for application in management accounting. Our review contributes to the literature by providing an overview of the specific aspects of both technologies that can be applied in managerial accounting; further, we identify the specific issues and benefits of both technologies in the context of management accounting. Second, we present a case study on the applicability of machine learning and big data technology for product cost estimation, focusing on the material costs of passenger cars. Our case study contributes to the literature by providing a novel approach to increasing the predictive accuracy of cost estimates for subsequent product generations; we show that predictive accuracy is significantly higher when using big data sets, and we find that machine learning can outperform cost estimates from cost experts, or at least produce comparable results, even when dealing with highly complex products. Third, we conduct an experimental study to investigate the trade-off between accuracy (predictive performance) and explainability (transparency and interpretability) of machine learning models in the context of product cost estimation. We empirically confirm the often-implied inverse relationship between the two attributes from the perspective of cost experts. Further, we show that the relative importance of explainability to accuracy as perceived by cost experts matters when selecting between alternative machine learning models, and we present four factors that significantly determine this perceived relative importance. Fourth, we present a proprietary archival study investigating the target costing approach in a complex product development context, characterized by product design interdependence and uncertainty about target cost difficulty. Based on archival company data, we find that target cost difficulty is associated with greater cost reduction performance during product development, thereby complementing results from earlier experimental studies. Further, we demonstrate that in a complex product development context, product design interdependence and uncertainty about target cost difficulty may both limit the effectiveness of target costing.
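    As a small illustration of the kind of supervised cost estimation the case study describes, the sketch below fits a regression model to synthetic part-level data and reports a holdout error. The features, synthetic cost function, model choice, and metric are all assumptions for illustration, not the dissertation's actual setup.

```python
# Minimal sketch: machine-learning material-cost estimation on synthetic data.
# Features, cost function, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized features: weight, part count, engine power, trim level
X = rng.normal(size=(n, 4))
true_w = np.array([8.0, 5.0, 3.0, 1.5])
cost = 50 + X @ true_w + rng.normal(scale=2.0, size=n)  # synthetic material cost

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("holdout MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```

    A boosted tree ensemble is one plausible choice for the accuracy side of the accuracy-explainability trade-off the third study examines; a sparse linear model would sit at the opposite, more interpretable end.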

    Biosupremacy: Big Data, Antitrust, and Monopolistic Power Over Human Behavior

    Since 2001, five leading technology companies have acquired more than 600 other firms while avoiding antitrust enforcement. By accumulating technologies in adjacent or unrelated industries, these companies have grown so powerful that their influence over human affairs equals that of many governments. Their power stems from data collected by devices that people welcome into their homes, workplaces, schools, and public spaces. When paired with artificial intelligence, these devices form a vast surveillance network that sorts people into increasingly specific categories related to health, sexuality, religion, and other traits. However, this surveillance network was not created solely to observe human behavior; it was also designed to exert control. Accordingly, it is paired with a second network that leverages intelligence gained through surveillance to manipulate people's behavior, nudging them through personalized newsfeeds, targeted advertisements, dark patterns, and other forms of coercive choice architecture. Together, these dual networks of surveillance and control form a global digital panopticon, a modern analog of Bentham's eighteenth-century building designed for total surveillance. Moreover, they enable a pernicious type of influence that Foucault defined as biopower: the ability to measure and modify the behavior of populations to shift social norms. This Article is the first to introduce biopower into antitrust doctrine. It contends that a handful of companies are vying for a dominant share of biopower to achieve biosupremacy: monopolistic power over human behavior. The Article analyzes how companies concentrate biopower through unregulated conglomerate and concentric mergers that add software and devices to their surveillance and control networks. Acquiring technologies in new markets establishes cross-market data flows that send information to acquiring firms across market boundaries. Conglomerate and concentric mergers also expand the control network, establishing beachheads from which platforms exert biopower to shift social norms. Antitrust regulators should expand their conception of consumer welfare to account for the costs that surveillance and coercive choice architecture impose on product quality. They should revive conglomerate merger control, abandoned in the 1970s, and update it for the Digital Age. Specifically, regulators should halt mergers that concentrate biopower, prohibit the use of dark patterns, and mandate data silos, which contain data within specific markets, to block cross-market data flows.

    The New Hampshire, Vol. 105, No. 28 (Feb. 11, 2016)

    An independent, student-produced newspaper from the University of New Hampshire.