
    The Open Source Software Paradigm

    A lot of misinformation and mystery surrounds the topic of Open Source software. The conceptual misunderstanding of software is fostered, in some respects, by a lack of understanding within the professional ranks of the computer field. There are many different views of Open Source software today, depending upon one's perspective. After about 20 years of evolution, the position of Open Source in our economy still has not coalesced. Few seem to understand whether Open Source is free software for running a computer (i.e., Linux) or just a way to obtain some software products without giving Microsoft one's money. “There is one thing stronger than all the armies in the world, and that is an idea whose time has come” - Victor Hugo

    Fuzzy Cognitive Map based Prediction Tool for Schedule Overrun

    The main aim of any software development organization is to finish the project within an acceptable or customary schedule and budget. Software schedule overrun is an issue that needs more attention: it may affect overall project success in terms of cost and quality, increases risk, and can be the reason for project failure. In today's competitive world, controlling the schedule slippage of software project development is a challenging task, and effective handling of the schedule is an essential need for any software organization. The main tasks in software development estimation are determining the effort, cost, and schedule of the project under consideration. Underestimation of a project done knowingly, just to win a contract, results in losses and also a poor-quality product. Precise schedule prediction therefore leads to efficient control of time and budget during software development. In this paper we develop a new technique for the prediction of schedule overrun. The paper also presents a comparison between other schedule estimation algorithms and the tool we developed, and shows that the fuzzy cognitive map based prediction tool gives more accurate results than the other training algorithms.
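    The fuzzy cognitive map machinery behind such a tool can be sketched briefly: concepts (project factors) hold activation values in [0, 1], signed weights encode causal influence, and the map is iterated to a fixed point, with the activation of an output concept such as "schedule overrun" read off at convergence. The Python sketch below is illustrative only; the concepts, weights, and sigmoid update rule are generic FCM conventions, not the paper's actual model.

```python
import math

def fcm_infer(weights, state, steps=50, eps=1e-6):
    """Iterate a fuzzy cognitive map until activations stabilise.

    weights[i][j] is the causal influence of concept i on concept j
    (in [-1, 1]); state is the initial activation vector.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    n = len(state)
    for _ in range(steps):
        nxt = [sigmoid(state[j] + sum(state[i] * weights[i][j] for i in range(n)))
               for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, state)) < eps:
            break
        state = nxt
    return state

# Hypothetical concepts: 0 = requirements volatility, 1 = staff experience,
# 2 = schedule overrun (the output concept read by the prediction tool).
W = [[0.0, 0.0, 0.7],   # volatility pushes overrun up
     [0.0, 0.0, -0.5],  # experience pushes overrun down
     [0.0, 0.0, 0.0]]
result = fcm_infer(W, [0.8, 0.3, 0.0])
print(round(result[2], 3))  # stabilised overrun activation in (0, 1)
```

    A real tool would learn the weight matrix from historical project data (for instance with a Hebbian or evolutionary training algorithm, as the abstract's comparison against "other training algorithms" suggests) rather than fixing it by hand.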

    C-Sheep: Controlling Entities in a 3D Virtual World as a Tool for Computer Science Education

    One of the challenges in teaching computer science in general and computer programming in particular is to maintain the interest of students, who often perceive the subject as difficult and tedious. To this end, we introduce C-Sheep, a mini-language-like system for computer science education, using a state-of-the-art rendering engine of the kind usually found in entertainment systems. The intention is to motivate students to spend more time programming, which can be achieved by providing an enjoyable experience. Computer programming is an essential skill for software developers and as such is always an integral part of every computer science curriculum. However, even if students are pursuing a computer science related degree, it can be very difficult to interest them in the act of computer programming, the writing of software, itself. In the C-Sheep system this is addressed by using the visual gimmickry of modern computer games, which allows programs to provide instant visualisation of algorithms. This visual feedback is invaluable to the understanding of how the algorithm works, and - if there are unintended results - how errors in the program can be debugged. The C-Sheep programming language is a (100% compatible) subset of the ANSI C programming language. Apart from just being a tool for learning the basics of the C programming language, C-Sheep implements the C control structures that are required for teaching the basic computer science principles encountered in structured programming. Unlike other teaching languages which have minimal syntax and which are variable-free to provide an environment with minimal complexity, C-Sheep allows the declaration and use of variables. C-Sheep also supports the definition of sub-routines (functions) which can be called recursively. "The Meadow" virtual environment is the virtual world in which entities (in our case sheep) controlled by C-Sheep programs exist.
    This micro world provides a graphical representation of the algorithms used in the programs controlling the virtual entities. Their position and orientation within the virtual world visualise the current state of the program. "The Meadow" is based on our proprietary "Crossbow" game engine which incorporates a virtual machine for executing C-Sheep programs. The Crossbow Engine is a compact game engine which is flexible in design and offers a number of features common to more complex engines. The Crossbow Virtual Machine used with C-Sheep in "The Meadow" - an improvement on the ZBL/0 virtual machine - is a module of the Crossbow Engine. The C-Sheep system also provides a counterpart library for C, mirroring the C-Sheep library functions of the virtual machine. This allows C-Sheep programs to be compiled into an executable using a normal off-the-shelf C/C++ compiler. This executable can then be run from within the native working environment of the operating system. The purpose of this library is to simplify the migration from the educational mini-language to real-world systems by allowing novice programmers to make an easy transition from using the C-Sheep system to using the C programming language.
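    Since the abstract does not give the actual names of the C-Sheep library functions, the micro-world idea can only be sketched generically. The Python sketch below uses hypothetical move/turn commands (in the style of turtle-graphics teaching languages) to show how an entity's position and orientation mirror the state of the program driving it; it is not C-Sheep code, which is a C subset.

```python
import math

class Sheep:
    """A virtual entity whose pose visualises program state."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 points along the x axis

    def move(self, distance):
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))

    def turn(self, degrees):
        self.heading = (self.heading + degrees) % 360.0

# A "program" that walks the sheep around a square - the classic first
# exercise in turtle-style mini-languages, using a loop and a parameter.
def walk_square(sheep, side):
    for _ in range(4):
        sheep.move(side)
        sheep.turn(90)

s = Sheep()
walk_square(s, 10.0)
print(round(s.x, 6), round(s.y, 6))  # back near the origin
```

    Watching the entity trace the square makes the loop's effect visible step by step, which is the pedagogical point the abstract makes about instant visualisation of algorithms.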

    Excavator Design Validation

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices.
    Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
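    The shape of such a Monte Carlo study can be sketched in a few lines: sample uncertain environment parameters, evaluate a fast analytical model (whose coefficients would, per the abstract, be calibrated against discrete element simulations), and aggregate a performance metric. Everything below - the model, the parameter names, and the numeric ranges - is an illustrative assumption, not Energid's Actin API or NASA's actual models.

```python
import random
import statistics

def dig_cycle_time(soil_density, cutting_force, bucket_volume=0.5):
    """Toy analytical model: denser soil and weaker force slow the cycle."""
    resistance = soil_density * 9.81 * bucket_volume  # crude load proxy, N
    return 10.0 + 20.0 * resistance / cutting_force   # seconds per dig cycle

def monte_carlo(trials, seed=42):
    """Sample uncertain inputs and summarise the resulting cycle times."""
    rng = random.Random(seed)  # fixed seed so a study is reproducible
    times = []
    for _ in range(trials):
        density = rng.uniform(1200.0, 2000.0)  # kg/m^3, regolith-like spread
        force = rng.uniform(8000.0, 12000.0)   # N, actuator capability spread
        times.append(dig_cycle_time(density, force))
    return statistics.mean(times), statistics.stdev(times)

mean_t, sd_t = monte_carlo(10_000)
print(f"mean cycle {mean_t:.1f} s, sd {sd_t:.1f} s")
```

    Because the analytical model is cheap, tens of thousands of trials run in well under a second on a desktop PC, which is what makes the real-time and faster-than-real-time studies described above practical.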

    Interactive Planetarium Project

    The Interactive Planetarium Project will design and build the software framework for connectivity between the Digistar 6 planetarium projection software and the smartphones of all audience members in the Jim and Linda Lee Planetarium. The goal of this project is to make planetarium shows more participatory, add a feature to our planetarium shows that many other universities do not yet have, and create a framework for future students and faculty to build from. To demonstrate our technology, we will make a real-time competitive trivia game able to support 60 concurrent users (the number of expected audience members in the planetarium). The framework created by the Interactive Planetarium Project will give future students a unique opportunity to explore and create more complex interactive software within the planetarium with mass-scale audience participation. The project will also be an addition to the current STEM Outreach program, drawing the attention of outside communities to this new experience at the Jim and Linda Lee Planetarium, with the potential to be used not just for video games played by the audience but also for interactive planetarium shows, surveys, or group activities. This project is based on modern web programming paradigms as well as research in the Human-Computer Interaction space. Smartphones are ubiquitous, and their ability to interact with the world around us is a frontier that is still being explored. This project aims to explore how smartphones can make shows and performances more engaging and participatory.

    Offshoring: The Transition From Economic Drivers Toward Strategic Global Partnership and 24-Hour Knowledge Factory

    The concept of offshoring of professional services first gained attention slightly over 25 years ago. At that time, US companies began to realize the cost advantage of getting their computer software developed in India and other countries. The concept gained momentum with the advent of the Internet and the availability of inexpensive communication technologies. Unrelated events, such as the need to address the Y2K problem in a time-bound manner, further increased the use of computer personnel based in faraway places. Studies conducted by professional organizations such as the ACM, IEEE, and NSPE focus on the cost and labor aspects of offshoring and its direct impact on employment opportunities in the countries involved. This paper broadens this perspective by emphasizing that, over time, the key drivers for offshoring will be strategic, not economic. A formal mathematical model is presented to highlight the new trend. Further, instead of a binary model in which the work is performed either in the country of the sponsoring organization or in a different country, we will gradually see a new work paradigm in which the work is performed in sequence in factories located on multiple continents. Such 24-Hour Knowledge Factories can leverage factors beyond cost savings: one can employ professionals in multiple parts of the world, perform tasks at all times of the day, and bring new products and services to market more quickly. Just as the advent of multiple shifts allowed machines to be utilized round the clock, leading to the benefits of the Industrial Revolution, the creation of new globally distributed workforces and global partnerships can lead to major strategic advantages for companies and countries alike.

    Metaheuristics for the Minimum Time Cut Path Problem with Different Cutting and Sliding Speeds

    The problem of efficiently cutting smaller two-dimensional pieces from a larger surface is recurrent in several manufacturing settings. This problem belongs to the domain of cutting and packing (C&P) problems. This study approaches a category of C&P problems called the minimum time cut path (MTCP) problem, which aims to identify a sequence of cutting and sliding movements for the head device that minimizes manufacturing time. Both the cutting speed and the sliding speed (the speed at which the head just moves, without cutting) vary according to the equipment, yet this distinction is often overlooked despite its relevance in real-world scenarios. This study applies the MTCP problem in a practical scope and presents two metaheuristics for tackling larger instances that resemble real-world requirements. The experiments presented in this study use parameter values from typical laser-cutting machines to assess the feasibility of the proposed methods compared to existing commercial software. The results show that metaheuristic-based solutions are competitive when addressing practical problems, achieving better processing-time performance on 94% of the instances.
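    The MTCP objective under distinct speeds is easy to state concretely: the head alternately slides (fast, not cutting) to the start of a cut and then cuts (slowly) along it, so the total time depends on the visiting order. The Python sketch below shows this objective and a nearest-neighbour construction heuristic of the kind metaheuristics typically start from; the speed values are merely of the order of magnitude of typical laser cutters, and none of this is the paper's actual algorithm.

```python
import math

CUT_SPEED = 20.0     # mm/s while cutting (assumed, laser-cutter-like)
SLIDE_SPEED = 200.0  # mm/s while just moving the head (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def path_time(segments, order, start=(0.0, 0.0)):
    """Total time: slide to each segment's start, then cut to its end."""
    t, head = 0.0, start
    for i in order:
        seg_start, seg_end = segments[i]
        t += dist(head, seg_start) / SLIDE_SPEED  # sliding move
        t += dist(seg_start, seg_end) / CUT_SPEED # cutting move
        head = seg_end
    return t

def nearest_neighbour(segments, start=(0.0, 0.0)):
    """Greedy order: always slide to the closest unvisited cut start."""
    remaining, order, head = set(range(len(segments))), [], start
    while remaining:
        i = min(remaining, key=lambda k: dist(head, segments[k][0]))
        remaining.remove(i)
        order.append(i)
        head = segments[i][1]
    return order

cuts = [((0, 0), (100, 0)), ((100, 50), (0, 50)), ((0, 100), (100, 100))]
order = nearest_neighbour(cuts)
print(order, round(path_time(cuts, order), 2))  # → [0, 1, 2] 15.5
```

    A metaheuristic (simulated annealing, genetic search, etc.) would then perturb the visiting order - and, in the full MTCP, the cut directions - using `path_time` as the fitness function.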

    Automated Software Transplantation

    Automated program repair has excited researchers for more than a decade, yet it has yet to find full-scale deployment in industry. We report our experience with SAPFIX: the first deployment of automated end-to-end fault fixing, from test case design through to deployed repairs in production code. We have used SAPFIX at Facebook to repair 6 production systems, each consisting of tens of millions of lines of code, and which are collectively used by hundreds of millions of people worldwide. In its first three months of operation, SAPFIX produced 55 repair candidates for 57 crashes reported to it, of which 27 have been deemed correct by developers and 14 have been landed into production automatically by SAPFIX. SAPFIX has thus demonstrated the potential of the search-based repair research agenda by deploying, to hundreds of millions of users worldwide, software systems that have been automatically tested and repaired. Automated software transplantation (autotransplantation) is a form of automated software engineering in which we use search-based software engineering to automatically move a functionality of interest from a ‘donor’ program that implements it into a ‘host’ program that lacks it. Autotransplantation is a kind of automated program repair in which we repair the ‘host’ program by augmenting it with the missing functionality. Automated software transplantation would open many exciting avenues for software development: suppose we could autotransplant code from one system into another, entirely unrelated, system, potentially written in a different programming language. Being able to do so might greatly enhance software engineering practice while reducing costs. Automated software transplantation comes in two flavours: monolingual, when the languages of the host and donor programs are the same, and multilingual, when the languages differ.
    This thesis introduces a theory of automated software transplantation, and two algorithms implemented in two tools that achieve it: µSCALPEL for monolingual software transplantation and τSCALPEL for multilingual software transplantation. Leveraging lightweight annotation, program analysis identifies an organ (interesting behaviour to transplant); testing validates that the organ exhibits the desired behaviour during its extraction and after its implantation into a host. We report encouraging results: in 14 of 17 monolingual transplantation experiments involving 6 donors and 4 hosts (popular real-world systems), we successfully autotransplanted 6 new functionalities; and in 10 out of 10 multilingual transplantation experiments involving 10 donors and 10 hosts, popular real-world systems written in 4 different programming languages, we successfully autotransplanted 10 new functionalities. That is, the transplanted programs passed all the test suites that validate the new functionalities' behaviour and confirm that the initial program behaviour is preserved; additionally, we manually checked the behaviour exercised by the organ. Autotransplantation is also very useful: in just 26 hours of computation time we successfully autotransplanted the H.264 video encoding functionality from the x264 system to the VLC media player, a task the developers of VLC had been performing manually for the previous 12 years. We also autotransplanted call graph generation and indentation for C programs into Kate (a popular KDE-based text editor used as an IDE by many C developers), two features previously missing from Kate but requested by its users. Autotransplantation is also efficient: the total runtime across 15 monolingual transplants is five and a half hours; the total runtime across 10 multilingual transplants is 33 hours.

    Anomaly of Existing Intellectual Property Protection for Software

    The digital sphere, “cyberspace,” is growing by leaps and bounds. Computers and programs are making a profound impact on every aspect of human life: education, work, warfare, entertainment and social life, health, law enforcement, etc. For instance, software plays an enormous role in the health sector by assisting in monitoring patients, refilling prescriptions, billing, and keeping medical records. In finance, transactions involving calculations such as interest and account balances are operated by software. Air traffic control, flight schedules, booking and related tasks in the airline industry, and calculations of all sorts of incomes, benefits, expenses and interest in insurance and tax administration institutions, are undertaken with the use of software. This is just at the macro level. At the individual level, the more we use digital devices, the more we need software to access services and products. So, the fact that people now need access to digital technologies to sustain modern social, economic and political life is not in dispute. Most digital devices such as computers are useless without programs. Simply stated, access to digital technologies depends highly on software. More precisely, it is practically impossible these days to find a life without the involvement of software and software-based devices. Software used to be, in the 1970s and early 1980s, applied to huge mainframe computers that took up the space of, maybe, an entire room. These days, we have software applied everywhere, in many aspects of our lives. It is not just in laptops but also on our mobile devices and is increasingly integrated into all sorts of objects. We hear about the coming “internet of things,” a phrase summing up the radically increasing connectivity of all sorts of items around us that, expectedly, will be communicating with each other. They will be doing so on the basis of software-based algorithms. Our computers, smartphones, etc. 
    are dependent for their functions on these logical instructions. Before the 1960s, vendors distributed and sold software bundled with computer hardware. Professor Pamela Samuelson quoted the work of Justice Stephen Breyer, stating the following: “Systems software was, ‘and should continue to be,’ created by hardware manufacturers and sold along with their hardware at a single price.” During that time there was no clearly recognized protection for computer programs. As time went on, vendors began to unbundle software from hardware and started to provide programs to the public separately packaged. With a view to responding to the needs of industry, on the one hand, and to advancing innovation and encouraging the dissemination of the useful arts for the general public, on the other, different jurisdictions began to afford separate legal protections to computer software. Many jurisdictions opted for copyright protection as the best option. Recent international copyright treaties such as the World Intellectual Property Organization Copyright Treaty (WCT) and the World Trade Organization Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) have a clause on the copyrightability of computer programs. Obviously, it is reasonable to ask why this is not included in early copyright instruments such as the Berne Convention for the Protection of Literary and Artistic Works. There were early concerns about the inclusion of computer software in international copyright instruments, partly justified by its non-inclusion in the Berne Convention. At the regional level, too, certain jurisdictions have adopted separate copyright instruments for the protection of computer software. Nation states such as the U.S., Canada, Ethiopia, etc. also have recognized the copyrightability of computer programs. 
    A closer look at the history of the tendency to regard software as a copyrightable subject matter tells us that the choice was not the result of research and in-depth study. We also see widespread protection of software products by patent law. In spite of the absence of legislation directly allowing for the patentability of computer software, we witness frequent disputes and litigation as regards the scope and extent of software protection. In addition to intellectual property protections, computing companies are using technological means to exclude others from using their digital works. This approach is called self-regulation. They do so by using technology: encryption, coding, etc. It is also illegal to reverse engineer and decompile computer programs. A trade secret can be used to protect computer software, especially the inner workings of software. Software developers also use the law of industrial design as another form of protection for the ‘look and feel’ aspect of their software. On the other extreme, we see some movements which advocate for free and open-source software, based on a unique model of innovation. Such software comes in two formats, free software and open-source software, sometimes jointly called FLOSS (Free/Libre/Open Source Software). When we say software is free, we mean that users can use it as they wish, modify it or fix some of its bugs, redistribute it, and access its source code. The problem with existing software protection is that it overlooks software's special nature. Software is unique. It involves the writing of millions of lines of code in the form of source code. There is no dispute as to why software is protected: writing those millions of lines of code requires an investment of time, intellect and money, and hence protection is required. The issue is the choice of the form of protection. 
    So, this thesis argues that the blanket copyright and patent protections of software raise a fairness issue, particularly from the perspective of the consumer's interest. It also argues that the existing laws governing computer software lack clarity and certainty. Overall, the thesis discusses the existing legal framework for computer programs. It concludes that the system needs reform, as it mainly considers the interests of the software industry. In other words, the interests of consumers and new entrants have not been given much regard. More importantly, the thesis reflects on the general purpose of intellectual property rights and their applicability to computer programs. The most important reason for reform is the unique nature of software. Accordingly, the thesis suggests the adoption of a special law for computer programs.