16,633 research outputs found

    Embedding Spatial Software Visualization in the IDE: an Exploratory Study

    Software visualization can be of great use for understanding and exploring a software system in an intuitive manner. Spatial representation of software is a promising approach of increasing interest. However, little is known about how developers interact with spatial visualizations that are embedded in the IDE. In this paper, we present a pilot study that explores the use of Software Cartography for program comprehension of an unknown system. We investigated whether developers establish a spatial memory of the system, whether clustering by topic offers a sound base layout, and how developers interact with maps. We report our results in the form of observations, hypotheses, and implications. Key findings are a) that developers made good use of the map to inspect search results and call graphs, and b) that developers found the base layout surprising and often confusing. We conclude with concrete advice for the design of embedded software maps. Comment: To appear in the proceedings of the SOFTVIS 2010 conference.
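    As a rough illustration of the "clustering by topic" base layout the abstract refers to, the sketch below clusters files by the terms in their identifiers and projects them to 2D map positions. It is a minimal stand-in, not the Software Cartography algorithm; the file names, token dumps, and the choice of TF-IDF, k-means, and PCA are all assumptions made for the example.

```python
# Minimal sketch: derive a 2D "software map" base layout by clustering source
# files by topic. File paths and token dumps below are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical per-file token dumps (identifiers, comments) from a codebase.
files = {
    "parser/Lexer.java":   "token stream scan keyword literal parse",
    "parser/Parser.java":  "parse ast node expression statement grammar",
    "ui/MapView.java":     "paint canvas zoom pan layout widget",
    "ui/SearchPanel.java": "query result highlight widget list render",
}

names = list(files)
tfidf = TfidfVectorizer().fit_transform([files[n] for n in names])

# Cluster by topic; the cluster id becomes the "region" of the map.
regions = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

# Project the term vectors to 2D to obtain x/y positions for each file.
coords = PCA(n_components=2).fit_transform(tfidf.toarray())

for name, region, (x, y) in zip(names, regions, coords):
    print(f"{name:22s} region={region} pos=({x:+.2f}, {y:+.2f})")
```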

    Spatio-Temporal Patterns act as Computational Mechanisms governing Emergent behavior in Robotic Swarms

    Our goal is to control a robotic swarm without removing its swarm-like nature. In other words, we aim to intrinsically control a robotic swarm's emergent behavior. Past attempts at governing robotic swarms or their self-coordinating emergent behavior have proven ineffective, largely due to the swarm's inherent randomness (making it difficult to predict) and utter simplicity (swarms lack a leader, any kind of centralized control, long-range communication, global knowledge, and complex internal models, and operate on only a couple of basic, reactive rules). The main problem is that emergent phenomena themselves are not fully understood, despite being at the forefront of current research. Research into 1D and 2D Cellular Automata has uncovered a hidden computational layer which bridges the micro-macro gap (i.e., how individual behaviors at the micro-level influence the global behaviors on the macro-level). We hypothesize that computational mechanisms also lie embedded at the heart of a robotic swarm's emergent behavior. To test this theory, we simulated robotic swarms (represented as both particles and dynamic networks) and then designed local rules to induce various types of intelligent, emergent behaviors (as well as designing genetic algorithms to evolve robotic swarms with emergent behaviors). Finally, we analysed these robotic swarms and successfully confirmed our hypothesis; analysing their developments and interactions over time revealed various forms of embedded spatiotemporal patterns which store, propagate and parallel-process information across the swarm according to some internal, collision-based logic (solving the mystery of how simple robots are able to self-coordinate and allow global behaviors to emerge across the swarm).
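    To make the "local rules producing emergent behavior" idea concrete, here is a minimal particle-swarm sketch: each robot follows a single reactive rule (move toward the centroid of nearby neighbours), and a clustering behaviour emerges globally. This is an illustration of the general technique, not the paper's actual swarm model; all constants and the nearest-neighbour metric are assumptions.

```python
# Minimal sketch (not the paper's model): point robots in a square arena,
# positions wrapping at the edges, each following one reactive local rule -
# move toward the centroid of neighbours within a fixed radius.
import random
import math

SIZE, N, RADIUS, STEP = 100.0, 50, 15.0, 1.0
random.seed(1)
robots = [[random.uniform(0, SIZE), random.uniform(0, SIZE)] for _ in range(N)]

def neighbours(i):
    """Indices of robots within RADIUS of robot i (no wrap, for simplicity)."""
    xi, yi = robots[i]
    return [j for j, (x, y) in enumerate(robots)
            if j != i and math.hypot(x - xi, y - yi) < RADIUS]

for t in range(200):
    moves = []
    for i, (xi, yi) in enumerate(robots):
        nb = neighbours(i)
        if not nb:                                  # isolated robot: random walk
            ang = random.uniform(0, 2 * math.pi)
            moves.append((math.cos(ang), math.sin(ang)))
            continue
        cx = sum(robots[j][0] for j in nb) / len(nb)
        cy = sum(robots[j][1] for j in nb) / len(nb)
        d = math.hypot(cx - xi, cy - yi) or 1.0
        moves.append(((cx - xi) / d, (cy - yi) / d))  # unit step toward centroid
    for pos, (dx, dy) in zip(robots, moves):
        pos[0] = (pos[0] + STEP * dx) % SIZE
        pos[1] = (pos[1] + STEP * dy) % SIZE

# Rough indicator of emergent clustering: mean nearest-neighbour distance shrinks.
nearest = [min(math.hypot(x - a, y - b) for j, (a, b) in enumerate(robots) if j != i)
           for i, (x, y) in enumerate(robots)]
print("mean nearest-neighbour distance:", round(sum(nearest) / N, 2))
```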

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
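    For readers unfamiliar with heuristic smell detection, the sketch below shows the general shape of such a detector: per-class metrics are compared against thresholds to flag data classes and god classes. The metrics and thresholds are illustrative only; they are not the heuristics used by this tool or by Marinescu's detection strategies.

```python
# Minimal sketch of a heuristic detector for data classes and god classes.
# Metrics and thresholds are illustrative assumptions, not the tool's rules.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int                 # methods excluding accessors
    accessors: int               # getters/setters
    fields: int                  # attributes
    foreign_data_accesses: int   # accesses to other classes' data

def is_data_class(m: ClassMetrics) -> bool:
    # Mostly state, little behaviour: accessors dominate the public interface.
    interface = m.methods + m.accessors
    return (m.fields >= 3 and interface > 0
            and m.accessors / interface >= 0.7 and m.methods <= 2)

def is_god_class(m: ClassMetrics) -> bool:
    # Large, behaviour-heavy class that also reaches into other classes' data.
    return m.methods >= 20 and m.foreign_data_accesses >= 5

classes = [
    ClassMetrics("Customer", methods=1, accessors=8, fields=6, foreign_data_accesses=0),
    ClassMetrics("OrderManager", methods=34, accessors=4, fields=12, foreign_data_accesses=9),
]
for m in classes:
    smells = [s for s, hit in (("data class", is_data_class(m)),
                               ("god class", is_god_class(m))) if hit]
    print(m.name, "->", smells or ["no smell"])
```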

    Critical Management Issues for Implementing RFID in Supply Chain Management

    The benefits of radio frequency identification (RFID) technology in the supply chain are fairly compelling. It has the potential to revolutionise the efficiency, accuracy and security of the supply chain, with a significant impact on overall profitability. A number of companies are actively involved in testing and adopting this technology. It is estimated that the market for RFID products and services will increase significantly in the next few years. Despite this trend, there are major impediments to RFID adoption in the supply chain. While RFID systems have been around for several decades, the technology for supply chain management is still emerging. We describe many of the challenges, setbacks and barriers facing RFID implementations in supply chains, discuss the critical issues for management and offer some suggestions. In the process, we take an in-depth look at cost, technology, standards, privacy and security, and business process reengineering issues surrounding RFID technology in supply chains.

    Performance by Unified Model Analysis (PUMA)

    Evaluation of the non-functional properties of a design (such as performance, dependability, security, etc.) can be enabled by design annotations specific to the property to be evaluated. Performance properties, for instance, can be annotated on UML designs by using the UML Profile for Schedulability, Performance and Time (SPT). However, the communication between the design description in UML and the tools used for evaluating non-functional properties requires support, particularly for performance, where there are many alternative performance analysis tools that might be applied. This paper describes a tool architecture called PUMA, which provides a unified interface between different kinds of design information and different kinds of performance models, for example Markov models, stochastic Petri nets and process algebras, queues and layered queues. The paper concentrates on the creation of performance models. The unified interface of PUMA is centered on an intermediate model called the Core Scenario Model (CSM), which is extracted from the annotated design model. Experience shows that the CSM is also necessary for cleaning and auditing the design information, and for providing default interpretations in case it is incomplete, before creating a performance model.
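    The sketch below illustrates the pipeline shape the abstract describes: an intermediate scenario representation extracted from an annotated design is transformed into a performance model and solved. The toy dictionary stands in for the Core Scenario Model and is not the real CSM schema, and the target model here is a trivial open queueing network solved with M/M/1 formulas rather than any of PUMA's actual back ends.

```python
# Minimal sketch of the PUMA idea: annotated-design scenario -> intermediate
# model -> performance model. The scenario structure and numbers are invented.
scenario = {
    "arrival_rate": 2.0,                  # requests/sec, from an SPT-style annotation
    "steps": [                            # ordered steps with host demands (sec)
        {"name": "parseRequest", "resource": "WebServer", "demand": 0.05},
        {"name": "queryDB",      "resource": "Database",  "demand": 0.20},
        {"name": "render",       "resource": "WebServer", "demand": 0.10},
    ],
}

def to_queueing_model(scn):
    """Aggregate per-resource service demands - the 'performance model'."""
    demands = {}
    for step in scn["steps"]:
        demands[step["resource"]] = demands.get(step["resource"], 0.0) + step["demand"]
    return scn["arrival_rate"], demands

def solve(arrival_rate, demands):
    """M/M/1 per resource: utilisation U = lambda * D, response time R = D / (1 - U)."""
    total = 0.0
    for resource, demand in demands.items():
        util = arrival_rate * demand
        if util >= 1.0:
            raise ValueError(f"{resource} is saturated (U={util:.2f})")
        r = demand / (1.0 - util)
        print(f"{resource:10s} U={util:.2f} R={r:.3f}s")
        total += r
    print(f"end-to-end response time ~ {total:.3f}s")

solve(*to_queueing_model(scenario))
```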

    A Review on Image mosaicing for secure Transmission of University Exam Question Paper

    The rapid spread of the digital world, powered by ever faster systems, demands greater speed and security. Securing an image in real time is a challenging task due to the processing time and computational requirements for RGB images. To cope with these concerns, innovative image processing techniques for data hiding are required. In this paper, a new data hiding scheme known as image mosaicing is proposed. Image mosaicing is the process of merging split images to produce a single, complete image of the document. The technique requires two input images: a secret image and a target image; by merging these two, a new image, called a mosaic image, is created. The creation of the mosaic image and the lossless recovery of the secret input image for question paper security are presented in this paper.
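    The much-simplified sketch below shows the split/merge/recover skeleton behind such schemes: the secret image is split into tiles, the tiles are rearranged with a secret key to form the transmitted image, and the key allows bit-exact recovery. Real mosaicing schemes additionally colour-match the tiles to a target image; that step, and the tile size and key used here, are omitted or assumed for illustration.

```python
# Much-simplified sketch of tile-based mosaicing with lossless recovery.
# The "secret image" is a synthetic array; no target-image colour matching.
import random
import numpy as np

TILE = 4
key = 1234                                                     # shared secret
secret = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)    # stand-in image

def to_tiles(img):
    h, w = img.shape
    return [img[r:r + TILE, c:c + TILE].copy()
            for r in range(0, h, TILE) for c in range(0, w, TILE)]

def from_tiles(tiles, shape):
    out = np.zeros(shape, dtype=np.uint8)
    it = iter(tiles)
    for r in range(0, shape[0], TILE):
        for c in range(0, shape[1], TILE):
            out[r:r + TILE, c:c + TILE] = next(it)
    return out

# Sender: shuffle the tile order with the key to build the "mosaic" image.
tiles = to_tiles(secret)
order = list(range(len(tiles)))
random.Random(key).shuffle(order)
mosaic = from_tiles([tiles[i] for i in order], secret.shape)

# Receiver: invert the keyed permutation for lossless recovery of the secret.
inverse = [0] * len(order)
for new_pos, old_pos in enumerate(order):
    inverse[old_pos] = new_pos
recovered_tiles = to_tiles(mosaic)
recovered = from_tiles([recovered_tiles[inverse[i]] for i in range(len(order))],
                       secret.shape)

print("lossless recovery:", np.array_equal(secret, recovered))
```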

    Cost and Benefit of Embedded Feature Annotation: A Case Study

    In the software industry, organizations often need to develop a set of similar software-intensive systems in order to satisfy different customer requirements. The literature traditionally recommends that organizations adopt product line engineering, an approach that uses a set of shared assets to derive the variants. In reality, however, organizations usually develop multiple variants using the clone-and-own approach, in which a new product is developed by cloning and modifying the assets of an existing product. Although the clone-and-own approach has several advantages, it can easily lead to inconsistencies and to difficulty in managing product portfolios. In both the clone-and-own and the product line engineering context, the concept of a feature can be used to characterize different variants. A feature is a functional unit of a software product which provides a user-observable behavior and a unit of reuse. In the clone-and-own approach, there are two key challenges when cloning: reuse and consistency. For both of these activities, knowing the location of features is essential. In this thesis, we propose a lightweight approach for recording and maintaining feature models and mappings between features and software assets. We evaluated this approach in a case study, applying it retroactively to an existing set of cloned projects in a way that simulated the actual development as if the approach had been used from the start. Preliminary results show that the extra cost of creating and maintaining a feature model and feature mapping information is negligible compared to the software development cost, and that the benefit can justify the investment provided a certain amount of reuse and consistency management is required.
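    As a concrete picture of a feature-to-asset mapping, the sketch below extracts feature locations from embedded comment annotations. The &begin[...]/&end[...] syntax is one common style of embedded feature annotation and is used here only as an illustration; it is not necessarily the syntax or tooling from this case study, and the file content is invented.

```python
# Minimal sketch: extract a feature -> (file, start line, end line) mapping
# from embedded comment annotations in source code.
import re
from collections import defaultdict

source = """\
// &begin[Payment]
void charge(Card c) { /* ... */ }
// &end[Payment]
// &begin[Discount]
int rebate(Order o) { return 0; }
// &end[Discount]
"""

BEGIN = re.compile(r"&begin\[(\w+)\]")
END = re.compile(r"&end\[(\w+)\]")

def extract_mapping(text, filename="Checkout.java"):
    mapping = defaultdict(list)      # feature -> [(file, start, end)]
    open_blocks = {}                 # feature -> line where its block opened
    for lineno, line in enumerate(text.splitlines(), start=1):
        for feature in BEGIN.findall(line):
            open_blocks[feature] = lineno
        for feature in END.findall(line):
            start = open_blocks.pop(feature, lineno)
            mapping[feature].append((filename, start, lineno))
    return dict(mapping)

print(extract_mapping(source))
# {'Payment': [('Checkout.java', 1, 3)], 'Discount': [('Checkout.java', 4, 6)]}
```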

    Management of collaborative BIM data by the Federation of Distributed Models

    The architecture, engineering and construction sector is currently undergoing a significant period of change and modernization. In the United Kingdom in particular, this is driven by the government's objective of reducing the cost of construction projects, to be achieved by requiring all publicly funded projects to utilize fully collaborative building information modeling by 2016. A common goal in increasing building information model (BIM) adoption by the industry is the movement toward realizing a BIM as either a single data model or a series of tightly coupled federated models. However, there are key obstacles to be overcome, including uncertainty over data ownership, concerns relating to the security/privacy of data, and reluctance to "outsource" data storage. This paper proposes a framework that provides a solution for managing collaboration in the architecture, engineering and construction (AEC) sector. The solution presented in this paper provides an overlay that automatically federates and governs distributed BIM data. The use of this overlay provides an integrated BIM model that is physically distributed across the stakeholders in a construction project. The key research question addressed by this paper is whether such an overlay can, by providing dynamic federation and governance of BIM data, overcome some key obstacles to BIM adoption, including questions over data ownership, the security/privacy of data, and reluctance to share data. More specifically, this paper provides the following contributions: (1) presentation of a vision for the implementation and governance of a federated, distributed BIM data model; (2) description of the BIM process and governance model that underpins the approach; (3) provision of a validation case study using real construction data from a U.K. highways project, demonstrating that both the federated BIM overlay and the process and governance model are fit for purpose.
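    To give a feel for what a federation overlay does, the sketch below keeps each stakeholder's model fragment with its owner, records read rights in a small registry, and assembles an integrated view on demand. The stakeholder names, element data, and governance rules are all illustrative assumptions, not the paper's actual process or governance model.

```python
# Minimal sketch of a federation overlay: fragments stay with their owners,
# a registry records access rights, and queries assemble an integrated view
# without centralising the data. All names and rules here are invented.
fragments = {
    "architect":  {"Wall-01": {"height_m": 3.0}, "Door-07": {"width_m": 0.9}},
    "structural": {"Beam-12": {"span_m": 6.5}},
    "mep":        {"Duct-03": {"diameter_mm": 400}},
}

# Governance: which roles may read which owner's data.
read_rights = {
    "architect":  {"architect", "structural", "mep", "client"},
    "structural": {"architect", "structural", "client"},
    "mep":        {"mep", "client"},
}

def federated_view(requesting_role):
    """Assemble an integrated model view, honouring per-owner read rights."""
    view = {}
    for owner, elements in fragments.items():
        if requesting_role in read_rights[owner]:
            for element_id, data in elements.items():
                view[element_id] = {"owner": owner, **data}
    return view

print(federated_view("structural"))   # sees architect + structural data, not mep
print(federated_view("client"))       # sees everything the owners allow
```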