7 research outputs found

    Towards Automated Software Evolution of Data-Intensive Applications

    Recent years have witnessed an explosion of work on Big Data. Data-intensive applications analyze and produce large volumes of data, typically terabytes to petabytes in size, and many techniques for facilitating data processing are integrated into such applications. An API is a software interface that allows two applications to communicate with each other. Streaming APIs, which can support parallel processing, are widely used in today's object-oriented development. This dissertation proposes an approach that automatically suggests whether stream code should run in parallel or sequentially. However, using streams efficiently and correctly requires many subtle considerations, so the dissertation also catalogs use and misuse patterns for stream code. Modern software, especially highly transactional software systems, generates vast amounts of logging information every day, and this volume prevents developers from finding useful information effectively. Log levels can be used to filter run-time information; this dissertation proposes an automated evolution approach that alleviates logging-information overload by rejuvenating log levels according to developers' interests. Machine Learning (ML) systems are pervasive in today's software industry. They are typically complex and process large volumes of data. Due to this complexity, ML systems are prone to classic technical-debt issues, yet how ML systems evolve remains a puzzling problem. This dissertation introduces ML-specific refactorings and technical-debt categories to address this problem.
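    The parallel-or-sequential decision above hinges on whether an operation is stateless and costly enough to offset coordination overhead. The dissertation targets Java's Stream API; purely as an illustration of the tradeoff, here is a minimal Python sketch (all names are hypothetical, not from the dissertation):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # Stateless and side-effect-free, so the two pipelines below
    # are guaranteed to produce identical, ordered results.
    return x * x

def run_sequential(data):
    return [transform(x) for x in data]

def run_parallel(data, workers=4):
    # Worthwhile only when per-element cost outweighs scheduling
    # overhead (and, in CPython, when the work releases the GIL,
    # e.g. I/O or native extensions).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transform, data))
```

    A tool like the one proposed would inspect properties such as statelessness and expected workload to choose between these two forms automatically.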

    Mining Multiple Web Sources Using Non-Deterministic Finite State Automata

    Existing web content extraction systems use unsupervised, supervised, and semi-supervised approaches. The WebOMiner system is an automatic web content data extraction system that models a specific Business-to-Customer (B2C) web site, such as bestbuy.com, using an object-oriented database schema. WebOMiner extracts different web page content types, such as product, list, and text, using a manually generated non-deterministic finite automaton (NFA). This thesis extends the automatic web content data extraction techniques proposed in the WebOMiner system to handle multiple web sites and to generate an integrated data warehouse automatically. We develop WebOMiner-2, which generates the NFA for specific domain classes from regular expressions extracted from frequent patterns in web page DOM trees. Our algorithm can also handle NFA ε-transitions, converting the NFA to a deterministic finite automaton (DFA) to identify distinct content tuples from a list of tuples. Experimental results show that our system is highly effective, performing the content extraction task with 100% precision and a 98.35% recall value.
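    The standard way to eliminate ε-transitions and determinize an NFA is subset construction. A minimal sketch, assuming a dictionary-based transition representation (this is illustrative only, not the WebOMiner-2 implementation):

```python
def epsilon_closure(states, eps):
    """All states reachable from `states` via ε-transitions alone."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def nfa_to_dfa(start, accept, delta, eps):
    """Subset construction. `delta` maps (state, symbol) -> set of states;
    each DFA state is a frozenset of NFA states."""
    start_set = epsilon_closure({start}, eps)
    dfa_delta, worklist, seen = {}, [start_set], {start_set}
    while worklist:
        S = worklist.pop()
        symbols = {a for (s, a) in delta if s in S}
        for a in symbols:
            target = set()
            for s in S:
                target |= delta.get((s, a), set())
            T = epsilon_closure(target, eps)
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return start_set, dfa_accept, dfa_delta
```

    A DFA state is accepting whenever it contains at least one accepting NFA state, which is what lets the DFA recognize the same content tuples as the original NFA.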

    Strategies for securing the unity of the self in Augustine and certain modern psychologists

    My thesis explores what is involved in attaining an integrated sense of self, a question which is both interesting in its own right and which can also provide one enlightening means of comparing the disciplines of theology and psychology. The first two chapters establish the theological method to be followed and provide an ideological context. I describe why the relationship between theology and psychology is a particularly problematic one and outline why I think some of the methods so far proposed for relating them are unsatisfactory. I suggest instead that in some respects the two disciplines may be seen as providing alternative strategies for securing the unity of the self. With the aid of Charles Taylor's philosophy of personhood, I set out what I mean by the self and what constitutes the unity of the self. I describe how the modern self has developed historically through the relation of individuals to sources of value, and I suggest that theology and some forms of psychology can be understood as offering expressions of complementary sources of such value and hence can be related to one another. I consider postmodern attacks on the unified self and conclude that our contemporary context is one which demands less strongly ordered forms of integrating the self than those which have come down to us in the Western intellectual tradition. The next four chapters focus on the work of key representatives of the theological and psychological traditions. From the side of theology, I describe Augustine's conviction that an individual might move from a state of fragmentation to a state of wholeness through being remade in the image of the one God (chapter 3). From the psychological side, I consider Freud's methods for enabling us to move from a state of neurosis to limited self-mastery (chapter 4), and Jung's suggestion that wholeness is attained through discovery and acceptance of the natural realm lying within the psyche (chapter 5).
    I then review the proposals for uniting the self behind the project of self-actualisation that have been developed by the humanistic psychologists, in particular Fromm, Maslow and Rogers (chapter 6). In conclusion (chapter 7), I suggest some ways in which Augustine's theology needs to be revised if it is to be relevant to our contemporary self-understanding, and show how the most promising strategy for unifying the self is likely to arise from a combination of an Augustinian theistic outlook with the insights of these modern psychologists.

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution.

    Fast Numerical and Machine Learning Algorithms for Spatial Audio Reproduction

    Audio reproduction technologies have undergone several revolutions, from a purely mechanical process, to an electromagnetic one, to a digital one. These changes have resulted in steady improvements in the objective quality of sound capture/playback on increasingly portable devices. However, most mobile playback devices remove important spatial-directional components of externalized sound which are natural to the subjective experience of human hearing. Fortunately, the missing spatial-directional parts can be integrated back into audio through a combination of computational methods and physical knowledge of how sound scatters off the listener's anthropometry in the sound-field. The former employs signal processing techniques for rendering the sound-field. The latter employs approximations of the sound-field through the measurement of so-called Head-Related Impulse Responses/Transfer Functions (HRIRs/HRTFs). This dissertation develops several numerical and machine learning algorithms for accelerating and personalizing spatial audio reproduction in light of available mobile computing power. First, spatial audio synthesis between a sound-source and the sound-field requires fast convolution algorithms between the audio stream and the HRIRs. We introduce a novel sparse decomposition algorithm for HRIRs, based on non-negative matrix factorization, that allows for faster time-domain convolution than frequency-domain fast-Fourier-transform variants. Second, the full sound-field over the spherical coordinate domain must be efficiently approximated from a finite collection of HRTFs. We develop a joint spatial-frequency covariance model for Gaussian process regression (GPR) and sparse-GPR methods that supports fast interpolation and data fusion of HRTFs across multiple data-sets. Third, the direct measurement of HRTFs requires specialized equipment that is unsuited for widespread acquisition.
    We "bootstrap" the human ability to localize sound in listening tests with Gaussian process active-learning techniques over graphical user interfaces that allow listeners to infer their own HRTFs. Experiments are conducted on publicly available HRTF datasets and with human listeners.
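    For context, binaural rendering filters the source signal with each ear's HRIR, which is why faster convolution is central to the work above. A minimal time-domain sketch in pure Python (illustrative only; real systems use optimized FFT-based or sparse methods like the NMF decomposition the dissertation proposes):

```python
def convolve(signal, hrir):
    """Direct time-domain convolution: O(len(signal) * len(hrir))."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(hrir):
            out[n + k] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    # Spatialize a mono source by filtering it with the HRIR of each ear;
    # a sparse HRIR (few significant taps) makes the inner loop cheaper,
    # which is the motivation for sparse decompositions of HRIRs.
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

    The quadratic cost of the inner loop is what sparse decomposition attacks: fewer significant taps per HRIR means proportionally less work per output sample.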

    Language and compiler support for stream programs

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 153-166). Stream programs represent an important class of high-performance computations. Defined by their regular processing of sequences of data, stream programs appear most commonly in the context of audio, video, and digital signal processing, though also in networking, encryption, and other areas. Stream programs can be naturally represented as a graph of independent actors that communicate explicitly over data channels. In this work we focus on programs where the input and output rates of actors are known at compile time, enabling aggressive transformations by the compiler; this model is known as synchronous dataflow. We develop a new programming language, StreamIt, that empowers both programmers and compiler writers to leverage the unique properties of the streaming domain. StreamIt offers several new abstractions, including hierarchical single-input single-output streams, composable primitives for data reordering, and a mechanism called teleport messaging that enables precise event handling in a distributed environment. We demonstrate the feasibility of developing applications in StreamIt via a detailed characterization of our 34,000-line benchmark suite, which spans from MPEG-2 encoding/decoding to GMTI radar processing. We also present a novel dynamic analysis for migrating legacy C programs into a streaming representation. The central premise of stream programming is that it enables the compiler to perform powerful optimizations. We support this premise by presenting a suite of new transformations.
    We describe the first translation of stream programs into the compressed domain, enabling programs written for uncompressed data formats to automatically operate directly on compressed data formats (based on LZ77). This technique offers a median speedup of 15x on common video editing operations. We also review other optimizations developed in the StreamIt group, including automatic parallelization (offering an 11x mean speedup on the 16-core Raw machine), optimization of linear computations (offering a 5.5x average speedup on a Pentium 4), and cache-aware scheduling (offering a 3.5x mean speedup on a StrongARM 1100). While these transformations are beyond the reach of compilers for traditional languages such as C, they become tractable given the abundant parallelism and regular communication patterns exposed by the stream programming model. By William Thies, Ph.D.
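    In synchronous dataflow, the compile-time-known production and consumption rates let the compiler solve balance equations for a static steady-state schedule. A toy sketch for a single producer-consumer edge (illustrative only; StreamIt's scheduler handles arbitrary stream graphs):

```python
from math import gcd

def repetitions(produce, consume):
    # Balance equation for edge A -> B: rA * produce == rB * consume.
    # The smallest positive integer solution gives the steady-state
    # firing counts (rA, rB).
    g = gcd(produce, consume)
    return consume // g, produce // g

def steady_state_buffer(produce, consume):
    # Fire all of A, then all of B (a naive but valid schedule);
    # in one steady-state iteration the channel buffer must return
    # to its initial occupancy, so this should be zero.
    rA, rB = repetitions(produce, consume)
    return rA * produce - rB * consume
```

    For example, if A produces 2 items per firing and B consumes 3, the steady state fires A three times and B twice, leaving the channel empty again; that invariant is what makes aggressive static scheduling and buffer sizing possible.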

    "Strategy in the skin : strategic practices of South Africa's official development assistance"

    This study set out to explore how Official Development Assistance was practised in South Africa. An exploratory narrative design was followed to uncover the ‘strategy in the skin’ of strategy practitioners in the unit of analysis and to respond, therefore, to the research questions. This study has contributed to the body of knowledge in that it has brought together an alternative confluence of three theoretical perspectives, strategy as practice, complex adaptive systems, and organisational hypocrisy, and has explored the impact of the practice lens on these standpoints. While there has been extensive research on each of the theoretical perspectives, there has not yet been a study that has drawn together the three perspectives in relation to an empirical unit of analysis such as Official Development Assistance practices and practitioners. The study responded to a knowledge gap in relation to how public sector organisations, such as government units and the strategy practitioners of such units, practise strategy beyond the reified, formalised conceptions of strategy and in relation to their inhabiting complex, political organisational systems. The study arrived at two central theoretical findings. Firstly, that strategising represents a calibration of strategic practices towards strategic outcomes through the activities of complex adaptive practitioners within the more politically inclined organisation. Secondly, that beyond the text of strategy, there is sub-text that is equally part of the micro strategy towards strategic outcomes. The skilful and sometimes delicate balancing act that strategists perform to legitimise the calibrated combinations of action and politics in organisational strategy equally needs nuanced, subtle and more complex forms of organisational communication.
    The study, therefore, makes the claim that complex adaptive systems and the characteristics of political organisations (as not being geared to action) are inherently broadened through the multiple dimensions of the practice turn and strategy as sub-text. The research confirmed that strategy as practice is a useful lens to understand strategy beyond the formally documented scripts and espoused pronouncements of strategy within organisational studies. Business Management. D.B.L.