
    New Product Development Portfolio Web Interface version 2.0

    This project involved the development of an interactive web interface for a new product portfolio assessment tool. The tool's purpose is to assess the various new products that a company is developing in order to assist the company in making selection, prioritization, and resource allocation decisions. Users input numerical data about each project (e.g., unit price, unit cost, life cycle) and score the project on 19 questions across four categories (competitive advantage, marketing resources, technical resources, and technical uncertainty) using a seven-point scale. The tool then performs a variety of calculations based on these inputs and allows the user to generate comparisons of the company's projects against several criteria (e.g., net present value, return on investment).
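    As a rough illustration of the kind of calculations such a tool performs, the sketch below scores a hypothetical project from its numerical inputs. The field names, discount rate, and category weighting are assumptions for illustration only, not the tool's actual model.

```python
# Minimal sketch of portfolio-style calculations (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    unit_price: float          # revenue per unit
    unit_cost: float           # cost per unit
    annual_volume: int         # units sold per year
    life_cycle_years: int      # expected market life
    initial_investment: float  # up-front development cost
    # Mean seven-point scores for the four question categories.
    competitive_advantage: float
    marketing_resources: float
    technical_resources: float
    technical_uncertainty: float

def npv(p: Project, discount_rate: float = 0.10) -> float:
    """Net present value of the project's annual margin over its life cycle."""
    annual_margin = (p.unit_price - p.unit_cost) * p.annual_volume
    discounted = sum(annual_margin / (1 + discount_rate) ** t
                     for t in range(1, p.life_cycle_years + 1))
    return discounted - p.initial_investment

def roi(p: Project) -> float:
    """Simple return on investment: total margin relative to the investment."""
    total_margin = (p.unit_price - p.unit_cost) * p.annual_volume * p.life_cycle_years
    return (total_margin - p.initial_investment) / p.initial_investment

def qualitative_score(p: Project) -> float:
    """Average of the four category scores; uncertainty counts against the project."""
    return (p.competitive_advantage + p.marketing_resources +
            p.technical_resources + (8 - p.technical_uncertainty)) / 4

widget = Project("Widget X", unit_price=40, unit_cost=25, annual_volume=10_000,
                 life_cycle_years=5, initial_investment=300_000,
                 competitive_advantage=6, marketing_resources=5,
                 technical_resources=4, technical_uncertainty=3)
print(f"NPV: {npv(widget):,.0f}  ROI: {roi(widget):.1%}  score: {qualitative_score(widget):.1f}/7")
```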

    Mechanizing WebAssembly Proposals

    WebAssembly is a modern low-level programming language designed to provide high performance and security. To enable these goals, the language specifies a relatively small number of low-level types, instructions, and language constructs. The language is proven to be sound with respect to its types and execution, and a separate mechanized formalization of the specification and type soundness proofs confirms this. As an emerging technology, the language is continuously being developed, with modifications proposed and discussed in the open on a frequent basis. To ensure that the soundness properties exhibited by the original core language are maintained as WebAssembly evolves, these proposals, too, should be mechanized and verified to be sound. This work extends the existing Isabelle mechanization to include three such proposals that add features to the language, and shows that the language maintains its soundness properties with their inclusion.
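    For context, type soundness is conventionally stated as progress plus preservation; the Isabelle mechanization phrases this over WebAssembly's typed configurations, but a generic formulation (notation assumed here for illustration, not the exact mechanized statement) is:

```latex
% Standard progress and preservation, in generic notation.
\begin{align*}
\text{(Progress)}\quad & \vdash e : \tau \;\Rightarrow\; e \text{ is a value} \;\lor\; \exists e'.\; e \longrightarrow e' \\
\text{(Preservation)}\quad & \vdash e : \tau \;\land\; e \longrightarrow e' \;\Rightarrow\; \vdash e' : \tau
\end{align*}
```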

    Web Based Candidate Assessment System

    Devplex Technologies Limited is a privately owned company based in Galway, Ireland. It has been operating for over two years and currently undertakes contract projects for the travel and financial industries. The projects are varied and require a wide range of skills. Devplex Technologies is currently undergoing expansion and intends to hire a number of new employees with varying levels of experience. It also employs a high number of contractors, with varying skills and contract periods ranging from one month to twenty-four months. The current technical leaders are all very busy with project work. The human resource (HR) manager actively advertises positions on both the internet and in local newspapers, which results in a large number of responses. It is difficult to sort through all the applicants because a high level of technical knowledge is required to vet them. When the HR manager selects a number of potential candidates from the vetted curricula vitae, phone interviews are conducted. The HR manager pools questions submitted by employees with experience in the relevant technologies and has to decide whether the candidate's answers are satisfactory. The most successful candidates are then invited to a formal interview. When a candidate presents for interview, they are asked to take a short ten-minute written exam of five questions relevant to the position they are applying for. Regardless of the outcome of the exam, the candidate then proceeds to a formal interview, where two or more Devplex employees interview the candidate and take note of their findings. Once the candidate has left, the HR manager and interviewers meet to discuss the exam and interview and decide whether the candidate should be brought in for a second interview. If the candidate's second interview is successful, the candidate is hired.

    Devplex Technologies interviews a high number of unsuccessful candidates, resulting in wasted time and effort, and employees who are not technically strong enough are sometimes erroneously hired. The company wishes to reduce this workload and hire more suitable people by implementing an enterprise candidate assessment system. The system should allow the remote assessment of potential candidates and let the HR manager easily retrieve questions and answers on a selected topic. It should test candidates only on subjects that apply to the role they are applying for, and the questions should get progressively harder as the candidate answers more questions correctly, so that a truly strong candidate can achieve the highest score. The overall aim of the system is to reduce workload and help find the best possible candidates for Devplex Technologies.
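    As a rough sketch of the adaptive questioning described above, the following selects each next question one difficulty level harder after a correct answer and one easier after an incorrect one. The question bank, difficulty scale, and scoring scheme are illustrative assumptions, not Devplex's specification.

```python
import random
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    topic: str        # e.g. the technology relevant to the role
    difficulty: int   # 1 (easiest) .. 5 (hardest)
    answer: str

def run_assessment(bank, role_topics, ask, num_questions=10):
    """Adaptive test: difficulty rises after correct answers, falls after wrong ones."""
    # Only test subjects that apply to the role being hired for.
    pool = [q for q in bank if q.topic in role_topics]
    difficulty, score = 1, 0
    for _ in range(num_questions):
        if not pool:
            break
        candidates = [q for q in pool if q.difficulty == difficulty] or pool
        q = random.choice(candidates)
        pool.remove(q)
        if ask(q).strip().lower() == q.answer.lower():
            score += difficulty                  # harder questions are worth more
            difficulty = min(5, difficulty + 1)  # step up on a correct answer
        else:
            difficulty = max(1, difficulty - 1)  # step down on a wrong answer
    return score
```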

    Quality-Aware Tooling

    Programming is a fascinating activity that can yield results capable of changing people's lives by automating daily tasks or even completely reimagining how we perform certain activities. Such great power comes with a handful of challenges, software maintainability being one of them. Maintainability cannot be validated by executing the program; it has to be assessed by analyzing the codebase. This tedious task can itself be automated by software: programs called static analyzers process source code and try to detect suspicious patterns. While these programs have proven useful, there is also evidence that they are not used in practice. In this dissertation we discuss the concept of quality-aware tooling, an approach that seeks to promote static analysis by seamlessly integrating it into development tools. We describe our experience of applying quality-aware tooling to a core distribution of a development environment. Our main focus is to provide live quality feedback in the code editor, but we also integrate static analysis into other tools based on our code quality model. We analyzed the attitude of developers towards the integrated static analysis and assessed the impact of the integration on the development ecosystem. As a result, 90% of software developers find the live feedback useful, the quality rules received an overhaul to better match contemporary development practices, and some developers even experimented with custom analysis implementations. We discovered that live feedback helped developers avoid dangerous mistakes, saved time, and taught valuable concepts. Most importantly, we changed developers' attitude towards static analysis from viewing it as just another tool to seeing it as an integral part of their toolset.
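    To make the idea of a static analysis rule concrete, here is a minimal sketch using Python's standard ast module. The rule (flagging a bare except: clause) and the reporting format are illustrative assumptions, not the dissertation's quality model.

```python
import ast

class BareExceptChecker(ast.NodeVisitor):
    """Flags `except:` clauses that silently swallow every exception."""
    def __init__(self):
        self.findings = []

    def visit_ExceptHandler(self, node):
        if node.type is None:  # bare `except:` with no exception class given
            self.findings.append((node.lineno, "bare except hides errors"))
        self.generic_visit(node)

source = """
try:
    risky()
except:
    pass
"""

checker = BareExceptChecker()
checker.visit(ast.parse(source))
for line, message in checker.findings:
    print(f"line {line}: {message}")
```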

    How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective

    Pre-trained code generation models (PCGMs) have been widely applied in neural code generation, which generates executable code from functional descriptions in natural language, possibly together with signatures. Despite the substantial performance improvement of PCGMs, the role of method names in neural code generation has not been thoroughly investigated. In this paper, we study and demonstrate the potential of benefiting from method names to enhance the performance of PCGMs, from a model robustness perspective. Specifically, we propose a novel approach named RADAR (neuRAl coDe generAtor Robustifier). RADAR consists of two components: RADAR-Attack and RADAR-Defense. The former attacks a PCGM by generating adversarial method names as part of the input, which are semantically and visually similar to the original input but may trick the PCGM into generating completely unrelated code snippets. As a countermeasure to such attacks, RADAR-Defense synthesizes a new method name from the functional description and supplies it to the PCGM. Evaluation results show that RADAR-Attack can reduce the CodeBLEU of generated code by 19.72% to 38.74% in three state-of-the-art PCGMs (i.e., CodeGPT, PLBART, and CodeT5) in the fine-tuning code generation task, and reduce the Pass@1 of generated code by 32.28% to 44.42% in three state-of-the-art PCGMs (i.e., Replit, CodeGen, and CodeT5+) in the zero-shot code generation task. Moreover, RADAR-Defense is able to restore the performance of PCGMs with the synthesized method names. These results highlight the importance of good method names in neural code generation and point to the benefits of studying model robustness in software engineering.
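    To illustrate the flavor of an adversarial method name (not RADAR's actual attack, which the paper defines), the toy sketch below perturbs a method name with visually similar characters and near-synonym tokens. The homoglyph map and synonym table are assumptions for illustration.

```python
import random

# Visually confusable character substitutions (illustrative subset).
HOMOGLYPHS = {"l": "1", "o": "0", "s": "5", "e": "3"}
# Near-synonym replacements for common identifier tokens (illustrative subset).
SYNONYMS = {"get": "fetch", "count": "tally", "list": "enumerate", "remove": "delete"}

def adversarial_name(method_name: str, seed: int = 0) -> str:
    """Perturb a snake_case method name while keeping it superficially similar."""
    rng = random.Random(seed)
    tokens = method_name.split("_")
    # Swap one token for a near synonym, if we know one.
    for i, tok in enumerate(tokens):
        if tok in SYNONYMS:
            tokens[i] = SYNONYMS[tok]
            break
    # Replace one character with a visually similar one.
    name = "_".join(tokens)
    positions = [i for i, c in enumerate(name) if c in HOMOGLYPHS]
    if positions:
        i = rng.choice(positions)
        name = name[:i] + HOMOGLYPHS[name[i]] + name[i + 1:]
    return name

print(adversarial_name("get_file_list"))  # a perturbed variant such as "fetch_fi1e_list"
```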

    A Data-driven, High-performance and Intelligent CyberInfrastructure to Advance Spatial Sciences

    In the field of Geographic Information Science (GIScience), we have witnessed an unprecedented data deluge brought about by the rapid advancement of high-resolution data observing technologies. For example, with the advancement of Earth Observation (EO) technologies, a massive amount of EO data, including remote sensing data and other sensor observations about earthquakes, climate, oceans, hydrology, volcanoes, glaciers, etc., is being collected on a daily basis by a wide range of organizations. In addition to the observation data, human-generated data including microblogs, photos, consumption records, evaluations, unstructured webpages, and other Volunteered Geographic Information (VGI) are incessantly generated and shared on the Internet. Meanwhile, the emerging cyberinfrastructure rapidly increases our capacity for handling such massive data with regard to data collection and management, data integration and interoperability, data transmission and visualization, high-performance computing, etc. Cyberinfrastructure (CI) consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to improve research productivity and enable breakthroughs that are not otherwise possible. The Geospatial CI (GCI, or CyberGIS), as the synthesis of CI and GIScience, has inherent advantages in enabling computationally intensive spatial analysis and modeling (SAM) and collaborative geospatial problem solving and decision making.

    This dissertation is dedicated to addressing several critical issues and improving the performance of existing methodologies and systems in the field of CyberGIS. It comprises three parts. The first part focuses on developing methodologies to help researchers find appropriate open geospatial datasets, efficiently and effectively, from millions of records provided by thousands of organizations scattered around the world; machine learning and semantic search methods are utilized in this research. The second part develops an interoperable and replicable geoprocessing service by synthesizing a high-performance computing (HPC) environment, the core spatial statistics and analysis algorithms from the widely adopted open-source Python Spatial Analysis Library (PySAL), and the rich datasets acquired in the first part. The third part is dedicated to studying optimization strategies for feature data transmission and visualization; this study is intended to solve the performance issues in transmitting large feature data over the Internet and visualizing them on the client (browser) side. Taken together, the three parts constitute an endeavor towards the methodological improvement and implementation practice of a data-driven, high-performance, and intelligent CI to advance the spatial sciences.
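    As a simple illustration of the kind of optimization the third part targets, the sketch below shrinks the payload of a GeoJSON-like coordinate ring by quantizing coordinate precision and delta-encoding successive vertices before transmission. The format, precision, and example coordinates are assumptions for illustration, not the dissertation's actual strategy.

```python
import json

def encode_ring(coords, precision=5):
    """Quantize lon/lat to `precision` decimals and delta-encode successive vertices."""
    scale = 10 ** precision
    quantized = [(round(x * scale), round(y * scale)) for x, y in coords]
    deltas, prev = [], (0, 0)
    for x, y in quantized:
        deltas.append((x - prev[0], y - prev[1]))  # small integers compress well
        prev = (x, y)
    return deltas

def decode_ring(deltas, precision=5):
    """Invert encode_ring back to floating-point coordinates."""
    scale = 10 ** precision
    coords, x, y = [], 0, 0
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        coords.append((x / scale, y / scale))
    return coords

ring = [(-111.93012345, 33.41798765), (-111.92987654, 33.41812345),
        (-111.92955555, 33.41777777), (-111.93012345, 33.41798765)]
encoded = encode_ring(ring)
print("raw payload bytes:    ", len(json.dumps(ring)))
print("encoded payload bytes:", len(json.dumps(encoded)))
# Round-trip error stays within the chosen quantization tolerance.
assert all(abs(a - b) < 1e-5 for p, q in zip(ring, decode_ring(encoded)) for a, b in zip(p, q))
```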