
    An expert system for integrated structural analysis and design optimization for aerospace structures

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was first developed in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems that assist with a variety of structural analysis and design optimization tasks, working alongside procedural modules for finite element structural analysis and design optimization. The main goal of the study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process, allowing engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient, and reliable structural designs rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts but also significantly reduce the time to complete a structural design. An extensive literature survey covering structural analysis, design optimization, artificial intelligence, and database management systems, and their application to the structural design process, was first performed. A feasibility study followed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were developed. The ORL and rule-based system were implemented in C++; this approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), supports building very large and practical expert systems, and provides an efficient way to store knowledge. Functional specifications for the expert systems were then developed, and the ORL/AI shell was used to build expert-system modules for modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems work in conjunction with procedural finite element structural analysis and design optimization modules developed in-house at SAT, Inc. The complete software, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies by engineers with different levels of background and expertise; conclusions based on their feedback are provided.
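
    As a rough illustration of the kind of object-plus-rule representation an ORL/AI shell provides, the following is a minimal C++ sketch of forward-chaining rules over a design object. The class names, the rule, and the stress update are hypothetical and are not taken from AutoDesign or the actual ORL.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // A design object holds named numeric attributes (e.g., stress, thickness).
        struct DesignObject {
            std::string name;
            std::map<std::string, double> attrs;
        };

        // A rule pairs a condition over an object with an action that modifies it.
        struct Rule {
            std::string description;
            std::function<bool(const DesignObject&)> condition;
            std::function<void(DesignObject&)> action;
        };

        // Fire every rule whose condition holds, repeating until no rule fires
        // (simple forward chaining).
        void run(std::vector<Rule>& rules, DesignObject& obj) {
            bool fired = true;
            while (fired) {
                fired = false;
                for (auto& r : rules) {
                    if (r.condition(obj)) {
                        std::cout << "Firing: " << r.description << "\n";
                        r.action(obj);
                        fired = true;
                    }
                }
            }
        }

        int main() {
            DesignObject panel{"wing_panel",
                               {{"stress", 480.0}, {"thickness", 2.0}, {"allowable", 400.0}}};
            std::vector<Rule> rules = {
                {"increase thickness while stress exceeds allowable",
                 [](const DesignObject& o) { return o.attrs.at("stress") > o.attrs.at("allowable"); },
                 [](DesignObject& o) {
                     double old_t = o.attrs["thickness"];
                     o.attrs["thickness"] = old_t + 0.5;
                     // Illustrative only: assume stress scales inversely with thickness.
                     o.attrs["stress"] *= old_t / o.attrs["thickness"];
                 }}};
            run(rules, panel);
            std::cout << "final thickness = " << panel.attrs["thickness"] << "\n";
        }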

    A framework for the design of business intelligence dashboard tools

    Vast amounts of data are collected daily, making it difficult for humans to derive valuable information from them and make effective decisions. In recent years the fields of Business Intelligence (BI) and Information Visualisation (IV) have become key drivers of an organisation’s success. BI tools supporting decision making need to be accessible to a larger audience at different levels of the organisation. The problem is that non-expert, or novice, users of BI tools do not have the technical knowledge to conduct data analysis and often rely on expert users to assist. For this reason, BI vendors are shifting their focus to self-service BI, a relatively new term for tools with which novice users can analyse data without the traditional human mediator. Despite the proliferation of self-service BI tools, limited research is available on their usability and on design considerations to assist novice users with decision making and BI analysis. The contribution of this study is a conceptual framework for designing, evaluating, or selecting BI tools that support non-expert users in creating dashboards (the BI Framework). A dashboard is a particular IV technique that enables users to view critical information at a glance. The main research problem addressed by this study is that non-expert users often have to utilise a number of software tools to conduct data analysis and to develop visualisations such as BI dashboards. The research problem was investigated using a two-step approach. The first step was to investigate existing problems through an in-depth literature review in the fields of BI and IV. The second step was to conduct a field study (Field Study 1) using a development environment consisting of a number of software components, of which SAP Xcelsius was the main BI tool used to create a dashboard. The aim of the field study was to compare the identified problems and requirements with those found in the literature. The problem analysis revealed a number of problems with BI software. A major one is that BI tools do not adequately guide users through a logical process of data analysis, and the process becomes increasingly difficult when several BI tools need to be integrated. The results showed positive aspects when data was mapped to a visualisation, which increased users’ understanding of the data they were analysing. The results were verified in a focus group discussion and used to establish an initial set of problems and requirements, which were then synthesised with those identified in the literature. Once the major problems were verified, a framework was established to guide the design of BI dashboard tools for novice users. The framework includes a set of design guidelines and usability evaluation criteria for BI tools. An extant systems analysis was conducted to compare the advantages and disadvantages of existing BI tools. The results revealed that a number of tools could be used by non-experts; however, their usability hinders these users. All participants in the field studies and evaluations were Computer Science (CS) and Information Systems (IS) students, sourced from a higher education institution, the Nelson Mandela Metropolitan University (NMMU). A second field study (Field Study 2) was conducted with participants using another traditional BI tool identified in the extant systems analysis, PowerPivot. The objective of this field study was to verify the design guidelines and related features, which served as a BI Scorecard that can be used to select BI tools. Another BI tool, Tableau, was used for the final evaluation, conducted with a large participant sample of second- and third-year IS students. The results revealed a significant difference in the usability ratings of Tableau across participants’ education levels, and a significant relationship between participants’ experience levels and their usability ratings. The usability ratings of Tableau were mostly positive: participants found the tool easy to use, flexible, and efficient. The proposed BI Framework can be used to assist organisations when evaluating BI tools for adoption. Furthermore, designers of BI tools can use the framework to improve the usability of these tools, reduce the workload for users when creating dashboards, and increase the effectiveness and efficiency of decision support.
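
    As a rough illustration of how a scorecard of the kind described might rate candidate BI tools against weighted usability criteria, the following is a minimal C++ sketch. The criteria names, weights, and rating scale are hypothetical and are not taken from the thesis.

        #include <iostream>
        #include <string>
        #include <vector>

        // One evaluation criterion with a relative weight (weights sum to 1.0).
        // Criteria names and weights here are illustrative only.
        struct Criterion {
            std::string name;
            double weight;
        };

        // Score a tool as the weighted sum of per-criterion ratings (1-5 scale).
        double score(const std::vector<Criterion>& criteria,
                     const std::vector<double>& ratings) {
            double total = 0.0;
            for (std::size_t i = 0; i < criteria.size(); ++i)
                total += criteria[i].weight * ratings[i];
            return total;
        }

        int main() {
            std::vector<Criterion> criteria = {
                {"guides the analysis process", 0.4},
                {"ease of mapping data to visualisations", 0.35},
                {"integration with other tools", 0.25},
            };
            // Hypothetical ratings for one tool: 0.4*4 + 0.35*5 + 0.25*3 = 4.1
            std::cout << score(criteria, {4.0, 5.0, 3.0}) << "\n";
        }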

    Reconfigurable Computing Systems for Robotics using a Component-Oriented Approach

    Robotic platforms are becoming more complex due to the wide range of modern applications, which include multiple heterogeneous sensors and actuators. To comply with real-time and power-consumption constraints, these systems must process large amounts of heterogeneous data from multiple sensors and take action via actuators, which is a problem because the systems are limited in memory storage, bandwidth, and computational power. Field Programmable Gate Arrays (FPGAs) are programmable logic devices that offer high-speed parallel processing and are particularly well suited to applications requiring real-time processing, high bandwidth, and low latency. A fundamental advantage of FPGAs is the flexibility to design hardware tailored to specific needs, making them adaptable to a wide range of applications. They can be programmed to pre-process data close to the sensors, which reduces the amount of data that must be transferred to other computing resources and improves overall system efficiency. Additionally, the reprogrammability of FPGAs allows them to be repurposed for different applications, providing a cost-effective solution for systems that must adapt quickly to changing demands. The performance per watt of FPGAs approaches that of Application-Specific Integrated Circuits (ASICs), with the added advantage of reprogrammability. Despite these advantages (e.g., energy efficiency, computing capability), the robotics community has so far not fully adopted FPGAs as part of its systems, for several reasons. First, designing FPGA-based solutions requires hardware knowledge and longer development times, as programming FPGAs is more challenging than programming Central Processing Units (CPUs) or Graphics Processing Units (GPUs). Second, porting a robotics application (or parts of it) from software to an accelerator requires adequate interfaces between software and FPGAs. Third, the robotics workflow is already complex on its own, combining fields such as mechanics, electronics, and software. There have been partial contributions in the state of the art on FPGAs as parts of robotics systems; however, a study of FPGAs for robotics systems as a whole is missing from the literature, and providing one is the primary goal of this dissertation. Three main objectives were established to accomplish this: (1) define all components required for an FPGA-based system for robotics applications as a whole; (2) establish how the defined components are related; and (3) with the help of Model-Driven Engineering (MDE) techniques, generate these components, deploy them, and integrate them into existing solutions. The component-oriented approach proposed in this dissertation provides a proper solution for designing and implementing FPGA-based designs for robotics applications. The modular architecture, the tool 'FPGA Interfaces for Robotics Middlewares' (FIRM), and the toolchain 'FPGA Architectures for Robotics' (FAR) provide a set of tools and a comprehensive design process that enable complex FPGA-based designs to be developed more straightforwardly and efficiently. The component-oriented approach contributes significantly to the state of the art in FPGA-based designs for robotics applications and helps promote their wider adoption and use by specialists with little FPGA knowledge.
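
    To illustrate the kind of software-side interface such components require, the following is a minimal C++ sketch of a proxy for a memory-mapped FPGA accelerator, of the sort a middleware bridge might own. The class name and the register layout are hypothetical and are not taken from FIRM or FAR.

        #include <cstdint>
        #include <vector>

        // Hypothetical software-side proxy for one FPGA component. A middleware
        // bridge (e.g., a ROS node) would own one proxy per hardware accelerator.
        class FpgaComponent {
        public:
            // Hypothetical byte offsets of a memory-mapped accelerator's registers.
            static constexpr std::uintptr_t kCtrl   = 0x00;  // write 1 to start
            static constexpr std::uintptr_t kStatus = 0x04;  // bit 0 = done
            static constexpr std::uintptr_t kData   = 0x08;  // input/output FIFO

            explicit FpgaComponent(volatile std::uint32_t* base) : base_(base) {}

            // Push raw sensor samples to the accelerator and block until done.
            std::uint32_t process(const std::vector<std::uint32_t>& samples) {
                for (auto s : samples) base_[kData / 4] = s;   // fill input FIFO
                base_[kCtrl / 4] = 1;                          // start processing
                while ((base_[kStatus / 4] & 1u) == 0) {}      // busy-wait on done bit
                return base_[kData / 4];                       // read result
            }

        private:
            volatile std::uint32_t* base_;  // mapped via /dev/mem or UIO in practice
        };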

    Measuring the efficiency of software development in a data processing environment

    The development of software for data processing systems has, during the last 25 years, grown into a large industry, so the efficiency of the software development process is of major importance. It is indicative of the level of understanding of this activity that no generally accepted measure of the efficiency of software development currently exists. The purpose of this study is to derive such a measure from a set of principles, to determine criteria for the acceptability of this measure, to test it according to the criteria set, and to describe inefficiencies observed in a number of software projects. The definition of data processing software is based on the concepts of Management Information Systems; flows, files, and processes are identified as the main structural elements of such systems. A model of the software development life cycle describes these elements in detail and identifies the main resources required. A review of the literature shows that lines of code per programmer man-month is commonly proposed as a measure of the efficiency of software development, but this measure is generally found to be inaccurate. Defining efficiency as the ratio of the prescribed results of a process to the total resources absorbed, a number of desirable properties of a practical measure of software development efficiency are put forward. Based on these properties, a specific model is proposed: the sum of flows, files, and processes, divided by total project cost. Various other models are also considered. Validity and reliability are identified as the most important criteria for the acceptability of the proposed measure. Its reliability is tested in a separate experiment and found to be adequate. A field survey, designed as a purposive sample of twenty software development projects, is set up to collect data to test its validity. The main result of the survey is that the proposed model of efficiency is valid; the other models investigated are less attractive. The efficiencies achieved in the twenty projects in the sample differ substantially from one another. Apart from achieving its specific objectives, the study also provides a perspective on some of the problems of software development, and several subjects for related research are identified.
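
    The proposed measure can be stated directly in code. The following is a minimal C++ sketch assuming flows, files, and processes are simple counts per project; the function name and the cost units are illustrative only.

        #include <iostream>

        // Proposed measure: efficiency = (flows + files + processes) / total cost.
        // Units are hypothetical (output elements per monetary unit); the study
        // validates the ratio itself, not any particular unit convention.
        double development_efficiency(int flows, int files, int processes,
                                      double total_project_cost) {
            return (flows + files + processes) / total_project_cost;
        }

        int main() {
            // A project delivering 40 flows, 25 files, and 60 processes at a cost
            // of 500 (e.g., thousands of currency units) scores 125/500 = 0.25.
            std::cout << development_efficiency(40, 25, 60, 500.0) << "\n";
        }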

    New models and patterns for traceability

    Traceability is a critical software engineering practice that manages activities across the product development lifecycle: the discipline of getting an entire organisation to work together to build better-quality products. Traceability also concerns the relationships between traceability items and the management of change, and it requires good communication between personnel on matters that affect the system in any way. At the start of the 21st century there was a proliferation of new traceability research promoting techniques from a number of emerging research communities. However, researchers still report many problems, in particular the lack of empirical data from small, medium, and large organisations. In this study we address this shortcoming by performing two empirical studies. First, we carry out a four-year case study investigating traceability at a large multinational that develops complex enterprise systems. Ericsson is a world leader in the development of large telecoms systems and is renowned for its mature development processes, tools, and highly skilled staff. We examine the state of the art at Ericsson and the factors that influence traceability, paying particular attention to how these factors change during the study and the impact these changes have on traceability practices. Second, we execute an industrial survey across nineteen corporations to further our understanding of traceability in small and medium-sized organisations. Using this empirical data as the major design input, we design and test a Traceability Framework consisting of three solution components: a TRAceability Model (TRAM), a TRAceability Process (TRAP), and Traceability Patterns. The TRAM consists of semantic models designed using a layered approach, with each layer presenting traceability semantics from a different user perspective. The TRAP consists of process models that also use a layered approach, in this case capturing process elements that can be used to create a traceability process in a variety of contexts; at the lowest layer the models represent the actual traceability situation in a project at Ericsson. While patterns are a widely accepted method for describing best practices and recurring problems in many aspects of software development, they have not previously been applied to the field of traceability. Structural patterns emerged from the semantic and process models, and we use a pre-defined pattern template to formalise the findings of the empirical data and communicate the outcomes to different users. Together, the three components promote better communication, reusability, and understandability of traceability concepts and practices.
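
    To illustrate the kind of structures such a framework traces, the following is a minimal C++ sketch of traceability items and typed links, with a simple forward impact query. The type names and relation vocabulary are hypothetical and are not taken from TRAM or TRAP.

        #include <string>
        #include <vector>

        // A traceability item: anything that can be traced, from a requirement
        // down to a test case, placed in a semantic layer (per-perspective).
        struct TraceItem {
            std::string id;     // e.g., "REQ-112", "TC-431"
            std::string kind;   // e.g., "requirement", "design", "test"
            int layer;          // semantic layer the item belongs to
        };

        // A directed trace link with a typed relationship, so change impact
        // can be followed from one item to the items that depend on it.
        struct TraceLink {
            std::string from_id;
            std::string to_id;
            std::string relation;  // e.g., "satisfies", "verifies", "refines"
        };

        // Follow links forward from one item: a minimal impact-analysis query.
        std::vector<std::string> impacted(const std::vector<TraceLink>& links,
                                          const std::string& start_id) {
            std::vector<std::string> out;
            for (const auto& l : links)
                if (l.from_id == start_id) out.push_back(l.to_id);
            return out;
        }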

    A secondary analysis of Bradac et al.'s prototype process-monitoring experiment

    We report on secondary analyses of some conjectures and empirical evidence presented in Bradac et al.'s prototype process-monitoring experiment, published previously in IEEE Transactions on Software Engineering. We identify 13 conjectures in the original paper and re-analyse six of them using the original evidence. Rather than rejecting any of the original conjectures, we identify assumptions underlying those conjectures, identify alternative interpretations of them, and propose a number of new conjectures. Bradac et al.'s study focused on reducing the project schedule interval; some of our re-analysis has considered improving software quality. We note that our analyses were only possible because of the quality and quantity of evidence presented in the original paper. Reflecting on our analyses leads us to speculate about the value of descriptive papers that present empirical material (together with an explicit statement of goals, assumptions, and constraints) separately from the analyses that proceed from that material. Such descriptive papers could improve the public scrutiny of software engineering research and may respond, in part, to some researchers' criticisms concerning the small amount of software engineering research that is actually evaluated. We also consider opportunities for further research, in particular for relating individual actions to project outcomes.

    A Review on Software Architectures for Heterogeneous Platforms

    Demands for computing performance keep increasing, even as devices are required to become smaller and more energy efficient. For years, the strategy adopted by industry was to increase the processing power of a single processor by raising its clock frequency and mounting more transistors so that more calculations could be executed. However, the physical limits of such processors are being reached, and one way to meet the increasing computing demands is to adopt heterogeneous computing, i.e., a platform containing more than one type of processor, so that different types of tasks can be executed by the processors specialised for them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper we conduct an empirical study that aims to discover the state of the art in software architecture for heterogeneous computing, with a focus on deployment. We conducted a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field, and we identified gaps and trends that both researchers and practitioners can use as guides for further investigation of the topic.

    Interactive Visual Analysis of Networked Systems: Workflows for Two Industrial Domains

    We report on a first study of interactive visual analysis of networked systems. Working with ABB Corporate Research and Ericsson Research, we have created workflows that demonstrate the potential of visualization in the domains of industrial automation and telecommunications. By a workflow in this context we mean a sequence of visualizations and the actions for generating them: a visualization can be any image that represents properties of the data sets analyzed, and an action typically either changes the selection of the data visualized or changes the visualization itself, through the choice of technique or a change of parameters.
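
    The definition of a workflow above maps naturally onto a simple data structure. The following is a minimal C++ sketch; the step and action names are illustrative only and are not taken from the paper.

        #include <string>
        #include <vector>

        // An action either changes the data selection or changes the
        // visualization (technique choice or parameter change).
        struct Action {
            std::string kind;    // e.g., "select-data", "set-technique", "set-parameter"
            std::string detail;  // e.g., "nodes with latency > 10 ms"
        };

        // A visualization paired with the actions that generated it.
        struct Visualization {
            std::string technique;        // e.g., "node-link diagram", "treemap"
            std::vector<Action> actions;  // actions that produced this image
        };

        // A workflow: a sequence of visualizations and their generating actions.
        using Workflow = std::vector<Visualization>;

        // Example: two steps of a hypothetical telecom analysis workflow.
        Workflow example() {
            return {
                {"node-link diagram", {{"select-data", "all base stations"}}},
                {"node-link diagram", {{"select-data", "stations with dropped calls"},
                                       {"set-parameter", "edge thickness = traffic"}}},
            };
        }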