220 research outputs found

    Software Under Test in Software Testing Research: A Review

    The Software Under Test (SUT) is an essential element of software testing research. Preparing an SUT is not simple: it requires accuracy and completeness, and it affects the quality of the research conducted. Currently, there are several ways to obtain an SUT for software testing research: building one's own SUT, building an SUT on top of open-source software, and taking a ready-made SUT from a repository. This article discusses the results of identifying the SUTs used in a large number of software testing studies. The research was conducted as a systematic literature review (SLR) following the Kitchenham protocol. The review covers 86 articles published in 2017-2020, selected in two stages: application of the inclusion and exclusion criteria, followed by a quality assessment. The results show that the use of open source is clearly dominant: some researchers use open-source software as the basis for developing their SUT, while others take an SUT from a repository that provides ready-to-use subjects. In this context, the Software-artifact Infrastructure Repository (SIR) and Defects4J are the researchers' most common choices.
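
    As a hedged illustration of the "ready-to-use SUT from a repository" option mentioned above, the sketch below shows how a single buggy subject is typically checked out from Defects4J through its command-line tools. It assumes Defects4J is installed and on the PATH; the project, bug id, and working directory are arbitrary example values, not ones drawn from the reviewed studies.

```python
# Illustrative sketch only: obtaining one ready-to-use SUT from the
# Defects4J repository via its command-line tools. Assumes Defects4J is
# installed and on the PATH; project "Lang", bug 1, and the working
# directory are arbitrary example values.
import subprocess

PROJECT = "Lang"                 # Apache Commons Lang, one of the Defects4J projects
BUG_ID = "1"                     # bug number within the project
WORKDIR = "/tmp/lang_1_buggy"    # where the checked-out SUT will live

# Check out the buggy version of the subject ("1b" = bug 1, buggy variant).
subprocess.run(
    ["defects4j", "checkout", "-p", PROJECT, "-v", f"{BUG_ID}b", "-w", WORKDIR],
    check=True,
)

# Compile the SUT and run its developer-written test suite.
subprocess.run(["defects4j", "compile"], cwd=WORKDIR, check=True)
subprocess.run(["defects4j", "test"], cwd=WORKDIR, check=True)
```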

    Design of currency, markets, and economy for knowledge

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 131-133). Information markets benefit the communities they serve by facilitating electronic, distributed exchange of information. Further benefits include enhanced knowledge sharing, innovation, and productivity. This research explores innovative market mechanisms that build long-term sustainable incentives, which many existing platforms fail to provide, while encouraging pro-social behavior. A key advantage of this research is the direct application of established information-economic and macroeconomic theories to the design of social software and knowledge platforms. The research contribution is the design of a complete framework for an information economy, consisting of several distinct components: 1) a market engine for exchanging information products that are non-rivalrous and non-excludable; 2) a serialized currency system that enables monetary acceleration; 3) "monetary policies" that ensure healthy growth of the currency supply; 4) "fiscal policies" that reward information reuse and good behavior such as tagging, voting, tipping, and fraud reporting. We built a web-based software platform called Barter and deployed it at several universities. Analysis of user data helps test the effectiveness of the information market and illustrates the effects of various market interventions. We present key findings from the process of system deployment, such as the impact of social connections on market interactions and fraud, the effect of bounties on information quality, and market fraud and the intervention of fraud-prevention mechanisms. By Dawei Shen, Ph.D.
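
    The serialized currency component lends itself to a small illustration. The toy ledger below is not the Barter implementation; it is a minimal sketch, assuming each currency unit carries a serial number, of how serialization lets a platform observe how often units change hands and so estimate circulation velocity, the property the abstract links to monetary acceleration. All names and quantities are invented.

```python
# Minimal toy sketch (not the Barter implementation): a "serialized"
# currency in which every unit carries an identifier, so the platform can
# observe how often each unit changes hands and estimate circulation
# velocity. All names and quantities are invented examples.
from collections import defaultdict

class SerializedLedger:
    def __init__(self, n_units, issuer="bank"):
        self.holder = {serial: issuer for serial in range(n_units)}  # serial -> current holder
        self.transfers = defaultdict(int)                            # serial -> times transferred

    def pay(self, sender, receiver, amount):
        """Move `amount` units from sender to receiver, tracking each serial."""
        owned = [s for s, h in self.holder.items() if h == sender]
        if len(owned) < amount:
            raise ValueError("insufficient balance")
        for serial in owned[:amount]:
            self.holder[serial] = receiver
            self.transfers[serial] += 1

    def velocity(self):
        """Average number of transfers per unit: a crude circulation measure."""
        return sum(self.transfers.values()) / len(self.holder)

ledger = SerializedLedger(n_units=100)
ledger.pay("bank", "alice", 10)   # e.g. a reward for sharing information
ledger.pay("alice", "bob", 4)     # e.g. tipping a useful answer
print(ledger.velocity())          # 0.14 transfers per unit so far
```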

    Scrub: Online TroubleShooting for Large Mission-Critical Applications

    Scrub is a troubleshooting tool for distributed applications that operate under the strict SLOs common in production environments. It allows users to formulate queries over events occurring during execution in order to assess the correctness of the application's operation. Scrub has been in use for two years at Turn, where developers and users have relied on it to resolve numerous issues in its online advertisement bidding platform. This platform spans thousands of machines across the globe, serves several million bid requests per second, and dispenses many millions of dollars in advertising budgets. Troubleshooting distributed applications is notoriously hard, and the difficulty is exacerbated by strict SLOs, which require the troubleshooting tool to have only minimal impact on the hosts running the application. Furthermore, with large amounts of money at stake, users expect to run frequent diagnostics and demand quick evaluation and remediation of any problems. These constraints have led to a number of design and implementation decisions that run counter to conventional wisdom. In particular, Scrub supports only a restricted form of joins, and its query execution strategy avoids imposing any overhead on the application hosts: joins, group-by operations, and aggregations are sent to a dedicated centralized facility. In terms of implementation, Scrub avoids the overhead and security concerns of dynamic instrumentation. Finally, at all levels of the system, accuracy is traded for minimal impact on the hosts. We present the design and implementation of Scrub and contrast its choices with those made in earlier systems. We illustrate its power by describing a number of use cases and demonstrate its negligible overhead on the underlying application: a CPU overhead of at most 2.5% on application hosts and a 1% increase in request latency. These overheads allow the advertisement bidding platform to operate well within its SLOs.
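
    The central design choice described above, keeping per-host work down to cheap local filtering while joins, group-bys, and aggregations run at a dedicated central facility, can be illustrated with the hedged sketch below. It is not Scrub's actual API; the event fields, threshold, and function names are invented for illustration.

```python
# Illustrative sketch only (not Scrub's API): the division of labour the
# abstract describes. Application hosts do nothing more than cheap local
# filtering of their own events; the expensive group-by and aggregation
# happen at a dedicated, centralized facility. Event fields, the latency
# threshold, and all values are invented examples.
from collections import defaultdict

def host_side_filter(events, max_latency_ms=100):
    """Runs on each application host: selects only matching events, so the
    impact on the latency-sensitive bidding path stays minimal."""
    return [e for e in events if e["latency_ms"] > max_latency_ms]

def central_aggregate(filtered_streams):
    """Runs at the central facility: groups events shipped from all hosts
    by campaign and aggregates their latencies."""
    per_campaign = defaultdict(list)
    for host_events in filtered_streams:
        for e in host_events:
            per_campaign[e["campaign"]].append(e["latency_ms"])
    return {c: sum(v) / len(v) for c, v in per_campaign.items()}

# Example: two hosts each ship only their slow bid-request events.
host_a = [{"campaign": "c1", "latency_ms": 250}, {"campaign": "c2", "latency_ms": 40}]
host_b = [{"campaign": "c1", "latency_ms": 180}]
shipped = [host_side_filter(h) for h in (host_a, host_b)]
print(central_aggregate(shipped))   # {'c1': 215.0}
```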

    Extraction of ontology and semantic web information from online business reports

    CAINES, the Content Analysis and INformation Extraction System, employs an information extraction (IE) methodology to extract unstructured text from the Web, and it can create an ontology and a Semantic Web. This research differs from traditional IE systems in that CAINES examines the syntactic and semantic relationships within the unstructured text of online business reports. Using CAINES provides more relevant results than manual searching or standard keyword searching. Unlike most extraction systems, CAINES makes extensive use of information extraction from natural language, Key Words in Context (KWIC), and semantic analysis. A total of 21 online business reports, averaging about 100 pages in length, were used in this study. Based on the opinions of financial experts, extraction rules were created to extract information, an ontology, and a Semantic Web of data from financial reports. Using CAINES, one can extract information about global and domestic market conditions, market-condition impacts, and the business outlook. A Semantic Web comprising 107,533 rows of data was created from Merrill Lynch reports and displays information regarding mergers, acquisitions, and business-segment news between 2007 and 2009. User testing of CAINES resulted in a recall of 85.91%, a precision of 87.16%, and an F-measure of 86.46%. Extraction with CAINES was also faster than extracting the information manually. Users agreed that CAINES quickly and easily extracts unstructured information from financial reports in the EDGAR database.
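
    The reported evaluation figures can be related through the standard F-measure, the harmonic mean of precision and recall. The short check below uses only the numbers quoted in the abstract; the small gap to the reported 86.46% would be expected if per-query F-scores were averaged rather than computed from the averaged precision and recall.

```python
# Quick check of the reported evaluation figures using the standard
# F-measure (harmonic mean of precision and recall); only the numbers
# quoted in the abstract are used.
precision = 0.8716
recall = 0.8591

f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.4f}")   # ~0.8654, close to the reported 86.46%
```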

    The genesis and emergence of Web 3.0: a study in the integration of artificial intelligence and the semantic web in knowledge creation

    The web as we know it has evolved rapidly over the last decade. We have gone from a phase of rapid growth, as seen in the dot-com boom where business was king, to the current Web 2.0 phase, where social networking, wikis, blogs, and other related tools flood the bandwidth of the World Wide Web. The empowerment of the web user with Web 2.0 technologies has led to exponential growth of data, information, and knowledge on the web. With this rapid change, there is a need to logically categorise this information and knowledge so it can be fully utilised by all. It can be argued that the power of the knowledge held on the web is not fully exposed under its current structure, and to improve this we need to explore the foundations of the web. This dissertation explores the evolution of the web from its early days to the present. It examines the way web content is stored and discusses the new semantic technologies now available to represent that content. The research aims to demonstrate the possibilities of efficient knowledge extraction from a knowledge portal, such as a wiki or SharePoint portal, using these semantic technologies. This generation of dynamic knowledge content within a limited domain attempts to demonstrate the benefits of the Semantic Web to the knowledge age.
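
    As a concrete but hedged example of the kind of semantic technology the dissertation refers to (not its actual system), the sketch below represents two wiki pages as RDF triples and answers a simple question with a SPARQL query using the rdflib library; the namespace, page names, and topics are invented.

```python
# Hedged illustration (not the dissertation's system): representing wiki
# content as RDF triples and querying it with SPARQL via the rdflib
# library. The namespace, page names, and topics are invented examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/wiki/")
g = Graph()

# Two wiki pages described with machine-readable statements.
g.add((EX.Page_SemanticWeb, RDF.type, EX.WikiPage))
g.add((EX.Page_SemanticWeb, EX.topic, Literal("Semantic Web")))
g.add((EX.Page_Web20, RDF.type, EX.WikiPage))
g.add((EX.Page_Web20, EX.topic, Literal("Social Networking")))

# A SPARQL query can now retrieve every page on a given topic, something
# plain keyword search over page text cannot guarantee.
query = """
SELECT ?page WHERE {
    ?page a ex:WikiPage ;
          ex:topic "Semantic Web" .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.page)   # http://example.org/wiki/Page_SemanticWeb
```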

    Hierarchical categorisation of web tags for Delicious

    In the social bookmarking scenario, a user browsing the Web bookmarks web pages and assigns free-text labels (i.e., tags) to them according to their personal preferences. The benefits of social tagging are clear: tags enhance Web content browsing and search. However, since these tags may be publicly available to any Internet user, a privacy attacker may collect this information and extract an accurate snapshot of a user's interests, or user profile, containing sensitive information such as health-related information, political preferences, salary, or religion. In order to hinder attackers in their efforts to profile users, this report focuses on the practical aspects of capturing user interests from their tagging activity. More specifically, we study how to categorise a collection of tags posted by users on one of the most popular bookmarking services, Delicious (http://delicious.com).
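
    A minimal toy, not the report's categorisation method, makes the privacy concern concrete: even a hand-made mapping from tags to coarse categories turns a user's public tags into an interest profile. The tags, categories, and mapping below are invented examples.

```python
# Toy sketch only (not the report's categorisation method): folding a
# user's free-text Delicious tags into a coarse interest profile with a
# hand-made tag-to-category mapping. The tags and categories are invented.
from collections import Counter

TAG_TO_CATEGORY = {
    "diabetes": "health",
    "running": "health",
    "elections": "politics",
    "mortgage": "finance",
    "recipes": "lifestyle",
}

def interest_profile(user_tags):
    """Counts how often each high-level category appears in a user's tags."""
    return Counter(TAG_TO_CATEGORY.get(t, "other") for t in user_tags)

tags_posted = ["diabetes", "running", "recipes", "diabetes", "mortgage"]
print(interest_profile(tags_posted))
# Counter({'health': 3, 'lifestyle': 1, 'finance': 1})
```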

    Processing spam: Conducting processed listening and rhythmedia to (re)produce people and territories

    This thesis provides a transdisciplinary investigation of ‘deviant’ media categories, specifically spam and noise, and the way they are constructed and used to (re)produce territories and people. Spam, I argue, is a media phenomenon that has always existed and has received different names at different times. The changing definitions of spam, and the reasons and actors behind these changes, are thus the focus of this research. It brings to the forefront a longer history of the politics of knowledge production with and in media, and its consequences. The thesis contributes to the media and communication field by looking at neglected media phenomena through fields such as sound studies, software studies, law, and history, reaching a richer understanding than disciplinary boundaries allow. The thesis examines three case studies: the conceptualisation of noise in the early 20th century through the Bell Telephone Company, web-metric standardisation in European Union legislation of the 2000s, and unwanted behaviours on Facebook. What these cases show is that media practitioners have constructed ‘deviant’ categories in different media and periods by using seven sonic epistemological strategies: training of the (digital) body, restructuring of territories, new experts, standardising measurements (tools and units), filtering, de-politicising, and licensing. Informed by my empirical work, I developed two concepts, processed listening and rhythmedia, offering a new theoretical framework for analysing how media practitioners construct power relations by knowing people in mediated territories and then spatially and temporally (re)ordering them. Shifting attention away from theories of vision allows media researchers to better understand practitioners who work in multi-layered digital/datafied spaces, tuning in and out to continuously measure and record people’s behaviours. Such knowledge is fed back in a recursive feedback loop conducted by a particular rhythmedia that constantly processes, orders, shapes, and regulates people, objects, and spaces. Such actions (re)configure the boundaries of what it means to be human, a worker, and a medium.

    Media Distortions

    Media Distortions is about the power behind the production of deviant media categories. It shows the politics behind categories we take for granted, such as spam and noise, and what they mean for our broader understanding of, and engagement with, media. The book synthesizes media theory, sound studies, science and technology studies (STS), feminist technoscience, and software studies into a new composition for exploring media power. Media Distortions argues that sound is a more useful conceptual framework because of its ability to cross boundaries and move strategically between multiple spaces, which is essential for multi-layered mediated spaces. Drawing on repositories of legal, technical, and archival sources, the book amplifies three stories about the construction and negotiation of the ‘deviant’ in media. The book starts in the early 20th century with Bell Telephone’s production of noise, tuning into the training of its telephone operators and its involvement with the Noise Abatement Commission in New York City. The next story jumps several decades to the early 2000s, focusing on web-metric standardization in the European Union, and shows how the digital advertising industry constructed web cookies as legitimate communication while making spam illegal. The final story focuses on the past decade and the way Facebook filters out antisocial behaviors to engineer a sociality that produces more value. These stories show how deviant categories redraw the boundaries between human and non-human, public and private spaces, and, importantly, the social and the antisocial.
    • …