
    On the automated compilation of UML notation to a VLIW chip multiprocessor

    With more and more cores available within architectures, the process of extracting implicit and explicit parallelism from applications to fully utilise these cores is becoming complex. Implicit parallelism extraction is performed through intelligent software and hardware sections of tool chains, although these reach their theoretical limit rather quickly. Because of this, a method of allowing explicit parallelism to be expressed as fast as possible has been investigated. This method enables application developers to create and synchronise parallel sections of an application at a finer-grained level than previously possible, so that smaller sections of code can be executed in parallel while still reducing overall execution time. Alongside explicit parallelism, a concept of high-level design of applications destined for multicore systems was also investigated. As systems grow larger it is becoming more difficult to design and track the full life-cycle of development. One method used to ease this process is a graphical design process that visualises the high-level designs of such systems. One drawback of graphical design is the explicit nature in which systems must be generated; this was investigated and, using concepts already in use in text-based programming languages, a means of generating platform-independent models that can be specialised to multiple hardware architectures was developed. The explicit parallelism was performed using hardware elements for thread management, which resulted in speed-ups of over 13 times compared to threading libraries executed in software on commercially available processors. This allowed applications with large data-dependent sections to be parallelised in small sections within the code, resulting in a decrease in overall execution time. The modelling concepts resulted in a saving of between 40% and 50% of the time and effort required to generate platform-specific models, while incurring an overhead of up to 15% in the execution cycles compared to models designed for specific architectures.
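    The thesis performs thread creation and synchronisation in hardware, which cannot be reproduced here; as a purely illustrative software sketch of the same fine-grained idea (the workload, chunk size, and function names are hypothetical), the snippet below splits a data-dependent computation into many small sections and synchronises on their results with Python's standard thread pool.

```python
# Illustrative sketch only: software threads standing in for the hardware
# thread manager described above; all names and the workload are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a small, data-dependent section of an application.
    return sum(x * x for x in chunk)

def run_parallel(data, workers=4, chunk_size=64):
    # Create many fine-grained parallel sections and synchronise on the results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(run_parallel(list(range(10_000))))
```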

    Modeling views in the layered view model for XML using UML

    In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so supporting views over semi-structured data models proves to be a difficult task. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is three-fold: first, we propose an approach that separates the conceptual and implementation aspects of views, providing a clear separation of concerns and allowing the analysis and design of views to be carried out independently of their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Finally, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
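    The LVM itself and its transformation methodology are not reproduced here, but the underlying notion of an XML view (a query that extracts and reshapes elements from a stored document into a simpler schema) can be sketched with Python's standard xml.etree.ElementTree; the document structure, element names, and attributes below are invented for illustration only.

```python
# Illustrative only: a "view" over an XML document, expressed as a query that
# extracts and reshapes elements. Element names are hypothetical, not the LVM.
import xml.etree.ElementTree as ET

SOURCE = """
<orders>
  <order id="1" status="shipped"><total>120.50</total></order>
  <order id="2" status="open"><total>75.00</total></order>
</orders>
"""

def shipped_orders_view(xml_text):
    # Materialise a simplified view schema: only shipped orders, two fields each.
    root = ET.fromstring(xml_text)
    view = ET.Element("shippedOrders")
    for order in root.findall("./order[@status='shipped']"):
        item = ET.SubElement(view, "order", id=order.get("id"))
        item.text = order.findtext("total")
    return ET.tostring(view, encoding="unicode")

print(shipped_orders_view(SOURCE))
```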

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained such that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
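    As a minimal sketch of driving ImageJ2 from a script, assuming the pyimagej Python bridge and a Java runtime are installed (the image filename is a placeholder and this is not a summary of the ImageJ2 API), one can start a gateway, open an image through the I/O service, and display it:

```python
# Minimal sketch, assuming the pyimagej bridge and a Java runtime are available.
# The image path is a placeholder; this does not cover the full ImageJ2 API.
import imagej

ij = imagej.init()                       # start an ImageJ2 gateway
img = ij.io().open("sample-image.tif")   # I/O service picks a format plugin
ij.py.show(img)                          # display through the Python bridge
```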

    Intelligent maintenance management in a reconfigurable manufacturing environment using multi-agent systems

    Thesis (M. Tech.) -- Central University of Technology, Free State, 2010
    Traditional corrective maintenance is both costly and ineffective. In some situations it is more cost effective to replace a device than to maintain it; however, it is far more likely that the cost of the device far outweighs the cost of performing routine maintenance. These device-related costs, coupled with the profit loss due to reduced production levels, make this reactive maintenance approach unacceptably inefficient in many situations. Blind predictive maintenance that does not consider the actual physical state of the hardware is an improvement, but is still far from ideal. Simply maintaining devices on a schedule without taking into account their operational hours and workload can be a costly mistake. The inefficiencies associated with these approaches have contributed to the development of proactive maintenance strategies, which take the device health state into account and are therefore inherently more efficient than the aforementioned traditional approaches. Predicting the health degradation of devices makes it easier to anticipate the required maintenance resources and costs, and maintenance can also be scheduled to accommodate production needs. This work presents the design and simulation of an intelligent maintenance management system that incorporates device health prognosis with maintenance schedule generation. The simulation scenario provided prognostic data used to schedule devices for maintenance. A production rule engine was provided with a feasible starting schedule, which was then improved according to a set of criteria. Benchmarks were conducted to show the benefit of optimising the starting schedule, and the results are presented as proof. Improving on existing maintenance approaches will yield several benefits for an organisation. Eliminating the need to address unexpected failures or perform maintenance prematurely will ensure that the relevant resources are available when they are required. This will in turn reduce the expenditure related to wasted maintenance resources without compromising the health of devices or systems in the organisation.
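    The thesis's multi-agent system and production rule engine are not reproduced here. As a purely illustrative sketch of the "improve a feasible starting schedule against a set of criteria" idea (the devices, prognostic hours, slot length, and cost function are all hypothetical), the snippet below keeps any swap of two maintenance slots that lowers a simple lateness cost derived from predicted time to failure.

```python
# Illustrative sketch only: improve a feasible maintenance schedule by keeping
# swaps that reduce a simple cost. Devices, hours, and the cost are hypothetical.
from itertools import combinations

# (device, predicted hours until failure) from some prognostic model
devices = [("pump", 40), ("conveyor", 10), ("press", 25), ("robot", 60)]

def cost(schedule, slot_hours=8):
    # Penalise maintaining a device after its predicted failure time.
    return sum(max(0, (i + 1) * slot_hours - hours)
               for i, (_, hours) in enumerate(schedule))

def improve(schedule):
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(schedule)), 2):
            candidate = schedule[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if cost(candidate) < cost(schedule):
                schedule, improved = candidate, True
    return schedule

print(improve(devices))
```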

    An Efficient Transport Protocol for delivery of Multimedia Content in Wireless Grids

    A grid computing system is designed to solve complicated scientific and commercial problems effectively, whereas mobile computing is a traditional distributed system with computing capability, mobility, and wireless communications. The media and entertainment fields can take advantage of both paradigms by applying them to gaming applications and multimedia data management. Multimedia data has to be stored and retrieved in an efficient and effective manner to be put to use. In this paper, we propose an application-layer protocol for delivery of multimedia data in wireless grids, the Multimedia Grid Protocol (MMGP). To make streaming efficient, a new video compression algorithm called dWave is designed and embedded in the proposed protocol. The protocol provides faster, more reliable access and renders imperceptible QoS degradation in delivering multimedia in a wireless grid environment, and it tackles challenging issues such as i) intermittent connectivity, ii) device heterogeneity, iii) weak security and iv) device mobility.
    Comment: 20 pages, 15 figures, Peer Reviewed Journal
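    The MMGP specification itself is not given in the abstract; as a hedged illustration of what application-layer framing for media chunks generally looks like (the field layout, sizes, and names below are invented, not taken from MMGP), the sketch packs and unpacks a small binary header around a payload.

```python
# Illustrative only: packing/unpacking an application-layer header for a media
# chunk. The field layout is hypothetical and is not the MMGP specification.
import struct

HEADER = struct.Struct("!IHHB")  # stream id, sequence no., payload len, flags

def pack_chunk(stream_id, seq, payload, flags=0):
    return HEADER.pack(stream_id, seq, len(payload), flags) + payload

def unpack_chunk(packet):
    stream_id, seq, length, flags = HEADER.unpack_from(packet)
    return stream_id, seq, flags, packet[HEADER.size:HEADER.size + length]

pkt = pack_chunk(7, 1, b"frame-bytes")
print(unpack_chunk(pkt))
```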

    Automatically Repairing Programs Using Both Tests and Bug Reports

    The success of automated program repair (APR) depends significantly on its ability to localize the defects it is repairing. For fault localization (FL), APR tools typically use either spectrum-based (SBFL) techniques that use test executions or information-retrieval-based (IRFL) techniques that use bug reports. These two approaches often complement each other, patching different defects. No existing repair tool uses both SBFL and IRFL. We develop RAFL (Rank-Aggregation-Based Fault Localization), a novel FL approach that combines multiple FL techniques. We also develop Blues, a new IRFL technique that uses bug reports and an unsupervised approach to localize defects. On a dataset of 818 real-world defects, SBIR (combined SBFL and Blues) consistently localizes more bugs and ranks buggy statements higher than the two underlying techniques. For example, SBIR correctly identifies a buggy statement as the most suspicious for 18.1% of the defects, while SBFL does so for 10.9% and Blues for 3.1%. We extend SimFix, a state-of-the-art APR tool, to use SBIR, SBFL, and Blues. SimFix using SBIR patches 112 out of the 818 defects, compared with 110 when using SBFL and 55 when using Blues. The 112 patched defects include 55 defects patched exclusively using SBFL, 7 patched exclusively using IRFL, 47 patched using both SBFL and IRFL, and 3 new defects. SimFix using Blues significantly outperforms iFixR, the state-of-the-art IRFL-based APR tool. Overall, SimFix using our FL techniques patches ten defects no prior tools could patch. By evaluating on a benchmark of 818 defects, 442 of which were previously unused in APR evaluations, we find that prior evaluations on the overused Defects4J benchmark have led to overly generous findings. Our paper is the first to (1) use combined FL for APR, (2) apply a more rigorous methodology for measuring patch correctness, and (3) evaluate on the new, substantially larger version of Defects4J.
    Comment: working paper
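    RAFL's exact aggregation method is not described in the abstract; as a hedged illustration of rank-aggregation-based fault localization in general, the sketch below merges a spectrum-based ranking and a bug-report-based ranking of statements with a simple Borda count (the statement names and rankings are made up).

```python
# Illustrative only: combine two fault-localization rankings with a Borda
# count. Statement names and rankings are made up; this is not RAFL itself.
def borda_merge(*rankings):
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, stmt in enumerate(ranking):
            scores[stmt] = scores.get(stmt, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

sbfl_rank = ["Foo.java:42", "Foo.java:57", "Bar.java:10"]   # from test executions
irfl_rank = ["Bar.java:10", "Foo.java:42", "Baz.java:3"]    # from bug reports

print(borda_merge(sbfl_rank, irfl_rank))
# ['Foo.java:42', 'Bar.java:10', 'Foo.java:57', 'Baz.java:3']
```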

    A Deep Search Architecture for Capturing Product Ontologies

    This thesis describes a method to populate very large product ontologies quickly. We discuss a deep search architecture to text-mine online e-commerce marketplaces and build a taxonomy of products together with their descriptions and parent categories. The goal is to automatically construct an open database of products aggregated from different online retailers. The database contains extensive metadata on each object, which can be queried and analyzed. Such a public database does not currently exist; instead, the information resides siloed within various organizations. In this thesis, we describe the tools, data structures, and software architectures that allowed us to aggregate, structure, store, and search through several gigabytes of product ontologies and their associated metadata. We also describe solutions to some of the computational puzzles encountered when mining data at this scale. We implemented the product capture architecture and, using this implementation, built product ontologies corresponding to two major retailers: Wal-Mart and Target. The ontology data is analyzed to explore structural complexity as well as similarities and differences between the retailers. A broad product ontology has several uses, from the comparison shopping applications that already exist to the situation-aware computing of tomorrow, where computers are aware of the objects in their surroundings and these objects interact to help humans in everyday tasks.
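    The thesis's crawling and storage architecture is not shown here; as a hedged sketch of the kind of structure such a crawl might populate (the category paths and product names are invented, not mined from any retailer), the snippet below inserts category paths into a simple taxonomy tree and lists the products under a node.

```python
# Illustrative only: a toy product taxonomy built from category paths.
# Paths and product names are invented, not mined from any retailer.
from collections import defaultdict

def make_node():
    return {"children": defaultdict(make_node), "products": []}

taxonomy = make_node()

def add_product(path, product):
    node = taxonomy
    for category in path:
        node = node["children"][category]
    node["products"].append(product)

add_product(["Electronics", "Televisions"], "55-inch LED TV")
add_product(["Electronics", "Televisions"], "40-inch LCD TV")
add_product(["Home", "Kitchen"], "Blender")

print(taxonomy["children"]["Electronics"]["children"]["Televisions"]["products"])
```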