
    Image stitching algorithm based on feature extraction

    This paper proposes a novel edge-based stitching method to detect moving objects and construct mosaics from images. The method is a coarse-to-fine scheme that first estimates a good initialization of the camera parameters with two complementary methods and then refines the solution through an optimization process. The two complementary methods are edge alignment and a correspondence-based approach. The edge alignment method estimates the desired image translations by checking the consistency of edge positions between images; it copes better with larger displacements and lighting variations between images. The correspondence-based approach estimates the desired parameters from a set of correspondences using a new feature extraction scheme and a new correspondence-building method, and it can handle more general camera motions than the edge alignment method. Since the two methods complement each other, the desired initial estimate can be obtained more robustly. A Monte-Carlo-style method is then proposed to integrate the two, in which a grid partition scheme increases the accuracy of each trial at finding the correct parameters. Finally, an optimization process refines the initial parameters. Unlike other optimization methods that minimize errors over whole images, the proposed scheme minimizes errors only at the positions of feature points. Since the found initialization is very close to the exact solution and only errors at feature positions are considered, the optimization converges very quickly. Experimental results verify the superiority of the proposed method.
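The edge alignment idea can be made concrete with a minimal sketch: estimate a translation between two binary edge maps by maximizing the number of coincident edge pixels over candidate shifts. This is an illustrative toy, not the paper's exact scoring function or search strategy.

```python
import numpy as np

def edge_alignment_translation(edges_a, edges_b, max_shift=20):
    """Estimate an integer (dy, dx) shift of edges_b that best aligns it
    with edges_a, by exhaustively scoring edge-position consistency.
    (Hypothetical illustration; the paper's method is more elaborate.)"""
    best_score, best_shift = -1, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(edges_b, dy, axis=0), dx, axis=1)
            score = int(np.sum(edges_a & shifted))  # coincident edge pixels
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Synthetic check: the edges of a square, shifted by (3, 5)
a = np.zeros((64, 64), dtype=bool)
a[20:40, 20] = a[20:40, 39] = a[20, 20:40] = a[39, 20:40] = True
b = np.roll(np.roll(a, 3, axis=0), 5, axis=1)
print(edge_alignment_translation(a, b))  # → (-3, -5)
```

The returned shift is the translation that maps the second edge map back onto the first; real edge maps would additionally need tolerance for small edge-position jitter.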

    A comparison of software engines for simulation of closed-loop control systems

    A wide array of control system design and simulation software engines is available on the market, including MATLAB-Simulink, LabVIEW, Maple-MapleSim, Scilab-Scicos, VisSim and Mathematica's Control System Professional Suite (CPS). Among them, MATLAB-Simulink is the dominant and most widely used engine. The main aim of this study is to implement different state-space control methods for the non-linear Furuta pendulum system in each engine and to compare its performance against MATLAB-Simulink. Parameters such as learning curve, interoperability, flexibility, control design tools, documentation, and technical support are considered in the comparison. It is shown that MapleSim's intuitive multi-body physical (acausal) modeling approach is faster than Simulink's and offers a unique control-animation feature. MapleSim can also generate differential equations from its acausal models, and the equations it generated were verified to match the original ones. Scilab-Scicos, being open source, is cost-efficient and offers control design and simulation capabilities similar to MATLAB-Simulink. LabVIEW has a better front end and back end for control design simulation, at the cost of a steep learning curve. VisSim has a fully symbolic modeling approach with great flexibility and ease of learning. Mathematica's Control System Professional does not have symbolic modeling capability, and CPS is observed to have a cumbersome approach to modeling non-linear systems.
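The kind of closed-loop state-space simulation these engines perform can be sketched in a few lines: integrate a linear plant under state feedback and watch the state decay. The matrices below are a generic linearized inverted pendulum with a hand-picked gain, purely illustrative, not the Furuta pendulum model from the study.

```python
import numpy as np

# Illustrative unstable plant (linearized pendulum angle / angular rate)
A = np.array([[0.0, 1.0],
              [10.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[30.0, 8.0]])  # state-feedback gain, chosen by hand

def simulate(x0, dt=0.001, steps=5000):
    """Forward-Euler simulation of the closed loop x' = (A - B K) x."""
    x = np.array(x0, dtype=float).reshape(2, 1)
    for _ in range(steps):
        u = -K @ x                      # control law u = -K x
        x = x + dt * (A @ x + B @ u)    # one Euler integration step
    return x.ravel()

x_final = simulate([0.2, 0.0])          # start from a small tilt
print(float(np.abs(x_final).max()))     # decays toward zero under feedback
```

With this gain, the closed-loop eigenvalues are at -4 ± 2i, so the small initial tilt is driven essentially to zero over the 5 s of simulated time; the commercial engines differ mainly in how this model is entered (causal blocks, acausal components, or symbolic equations), not in the underlying computation.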

    The Algorithmic Autoregulation software development methodology

    We present a new self-regulating methodology for coordinating distributed team work, called Algorithmic Autoregulation (AA), based on recent social networking concepts and individual merit. Team members take on an egalitarian role and stay voluntarily logged into so-called AA sessions for part of their time (e.g. 2 hours per day), during which they create periodic logs (short text sentences) they wish to share with the team about their activity. These logs are publicly aggregated on a website and are peer-validated after the end of a session, as in code review. Ideally, a short screencast is recorded at the end of each session to make the AA logs more understandable. This methodology has proved well suited to increasing the efficiency of distributed teams working on what is called Global Software Development (GSD), as observed in our experience in real-world situations. The efficiency boost is achieved mainly through 1) built-in asynchronous, on-demand communication in conjunction with documentation of work products and processes, and 2) a reduced need for central management, meetings, or time-consuming reports. The AA methodology thus legitimizes and facilitates the activities of a distributed software team, giving other entities a solid means to fund those activities and allowing new, concrete business models to emerge for highly distributed software development. At its core, AA has been proposed as a way of sustaining self-replicating hacker initiatives. These claims are discussed in a real case study of running a distributed free-software hacker team called Lab Macambira.
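The session-logging and peer-validation workflow can be made concrete with a small data model. The schema below is entirely hypothetical (the paper does not specify one); names like `LogEntry`, `Session`, and `peer_validate` are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:
    author: str
    text: str                     # short sentence describing the activity
    validated_by: List[str] = field(default_factory=list)

@dataclass
class Session:
    member: str
    entries: List[LogEntry] = field(default_factory=list)

    def log(self, text):
        """Record one short activity log during the session."""
        self.entries.append(LogEntry(self.member, text))

    def peer_validate(self, reviewer):
        """After the session ends, a peer validates the logs (as in code review)."""
        for e in self.entries:
            if reviewer != e.author and reviewer not in e.validated_by:
                e.validated_by.append(reviewer)

# Hypothetical usage: one member's session, validated by a teammate
s = Session("alice")
s.log("refactored the audio module")
s.log("wrote unit tests for the parser")
s.peer_validate("bob")
print(all(e.validated_by == ["bob"] for e in s.entries))  # → True
```

Aggregating such sessions on a public website is what gives the team (and potential funders) an auditable record of who did what, when.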

    Can my chip behave like my brain?

    Many decades ago, Carver Mead established the foundations of neuromorphic systems. Neuromorphic systems are analog circuits that emulate biology, using the subthreshold dynamics of CMOS transistors to mimic the behavior of neurons. The objective is not only to simulate the human brain, but also to build useful applications with these bio-inspired circuits for ultra-low-power speech processing, image processing, and robotics. This can be achieved using reconfigurable hardware, such as field programmable analog arrays (FPAAs), which enable configuring different applications on a cross-platform system. As digital systems saturate in terms of power efficiency, this alternative approach has the potential to improve computational efficiency by approximately eight orders of magnitude. These systems combine analog, digital, and neuromorphic elements into a very powerful reconfigurable processing machine.
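The neuron behavior such subthreshold circuits emulate is often abstracted as a leaky integrate-and-fire model: the membrane voltage leaks toward rest, integrates input current, and fires and resets at a threshold. The sketch below uses illustrative parameters, not values from any particular FPAA implementation.

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, v_rest=0.0, v_th=1.0, r=1.0):
    """Simulate a leaky integrate-and-fire neuron and return spike times.
    current: input current per time step; tau: membrane time constant."""
    v, spikes = v_rest, []
    for i, i_in in enumerate(current):
        # Leak toward rest while integrating the input current.
        v += dt / tau * (-(v - v_rest) + r * i_in)
        if v >= v_th:            # threshold crossing: spike and reset
            spikes.append(i * dt)
            v = v_rest
    return spikes

# A constant suprathreshold drive produces regular spiking.
times = lif_spikes(np.full(5000, 1.5))   # 0.5 s of constant input
print(len(times))                        # number of spikes emitted
```

In an analog implementation, this same leak-integrate-reset dynamic comes "for free" from transistor physics rather than from a stepped numerical loop, which is the source of the power-efficiency advantage the abstract describes.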

    Assessment of individual photovoltaic module performance after 26 years of field exposure at the Telonicher Marine Lab in Trinidad, California

    In 1990, 192 ARCO M75 photovoltaic (PV) modules were installed as part of the Schatz Solar Hydrogen Project at the Humboldt State University (HSU) Telonicher Marine Lab in Trinidad, California, within 150 m of the Pacific Ocean. This 9.2 kW-rated PV array was used to power the marine laboratory air compressor and an electrolyzer. Individual current-voltage (IV) curve tests were performed on each of the PV modules prior to the array's construction in 1990 and again in 2001, 2010, and, most recently, in 2016, following decommissioning of the array. After 25.5 years of use, 188 of the original 192 modules were operational, significantly outliving their 10-year warranties. Based on the previous testing results and the 2016 results, the modules' lifetime decline in maximum power output, at the normal operating cell temperature (NOCT) test conditions of 1000 W/m² solar irradiance and 47°C module temperature, averaged 21.6%, or 8.6 W, with 47% of the modules still producing at least 80% of their original (1990) measured maximum power. The average rate of power output degradation grew from 0.4%/year in the first decade to 1.4%/year in the second decade, and the average degradation rate over the 25.5 years of exposure came to 0.85%/year.
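The reported figures are internally consistent under a simple linear average, as this quick check shows (the abstract does not state the exact averaging method, so this is only a plausibility check):

```python
# Figures reported in the abstract
total_decline_pct = 21.6   # lifetime decline in NOCT maximum power, %
decline_w = 8.6            # same decline expressed in watts
years = 25.5               # field exposure

# Linear average degradation rate: 21.6% / 25.5 yr ≈ 0.85%/yr,
# matching the reported lifetime average.
avg_rate = total_decline_pct / years
print(round(avg_rate, 2))  # → 0.85

# Implied average original NOCT maximum power per module:
p_original = decline_w / (total_decline_pct / 100)
print(round(p_original, 1))  # watts, implied by the two decline figures
```

The acceleration from 0.4%/year to 1.4%/year across the two decades also brackets this lifetime average, as expected.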

    Multifractal analysis of memory usage patterns

    The discovery of fractal phenomena in computer-related areas such as network traffic flow leads to the hypothesis that many computer resources display fractal characteristics. The goal of this study is to apply fractal analysis to computer memory usage patterns. We devise methods for calculating the Hölder exponent of a time series and for calculating the fractal dimension of a plot of a time series. These methods are then applied to memory-related data collected from a Unix server. We find that our methods for calculating the Hölder exponent of a time series yield results that are independently confirmed through calculation of the fractal dimension of the time series, and that computer memory use does indeed display multifractal behavior. In addition, it is hypothesized that this multifractal behavior may be useful in making certain predictions about the future behavior of an operating system.
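A fractal dimension of a time-series graph can be estimated with a simple box-counting scheme: at each scale, count how many value boxes the graph occupies per time window, then fit the log-log slope. This is a generic sketch of the idea, not the study's specific method.

```python
import numpy as np

def boxcount_dimension(series, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a time-series graph.
    At each scale s, the time axis is split into s windows and the value
    axis into boxes of height 1/s; the slope of log(count) vs log(scale)
    estimates the dimension. (Generic sketch, not the paper's method.)"""
    x = np.asarray(series, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
    n = len(x)
    counts = []
    for s in scales:
        w = n // s                      # window length at this scale
        boxes = 0
        for i in range(s):
            seg = x[i * w:(i + 1) * w]
            # boxes of height 1/s spanned by the graph in this window
            boxes += int(np.floor(seg.max() * s) - np.floor(seg.min() * s)) + 1
        counts.append(boxes)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# A smooth line stays close to dimension 1; white noise drifts well above it.
t = np.linspace(0, 1, 4096)
print(round(boxcount_dimension(t), 2))
noise = np.random.default_rng(0).standard_normal(4096)
print(boxcount_dimension(noise) > 1.0)  # → True
```

Multifractal analysis goes further than this single exponent, examining how local Hölder regularity varies across the series, but the box-counting slope is the monofractal building block it generalizes.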