15 research outputs found

    Weighted ensemble analysis of extreme precipitation under climate change

    The frequency or intensity of heavy precipitation has likely increased in North America since the 1950s. To analyze climate change impacts on extreme precipitation events in the Chicago area, historical (1961-2000) and projected (2046-2065, 2081-2100) daily precipitation data are obtained from 13 statistically downscaled general circulation models under three CMIP3 emission scenarios (A1B, A2 and B1), as well as from 17 stations in the NCDC and CCPN rain gage networks. Precipitation events of different recurrence intervals are then estimated through regional frequency analysis, and, based on the average deviation of climate model estimates from observation-based estimates, a tricube weight function is used to assign weights to the members of the climate model ensemble. These weights are applied to the projected quantile estimates to derive weighted expected values and confidence intervals of future extreme precipitation events under the different emission scenarios, and the results are compared with currently available estimates from NOAA Atlas 14. Finally, the maximum entropy method (MEM) is applied to assign weights, and its results are compared with those of the weighted ensemble method (WEM). It is found that the intensity and the confidence intervals of heavy precipitation are likely to increase significantly, by about 20%, from now to the 2050s under all emission scenarios (A1B > A2 > B1); afterwards, this increasing trend slows down (B1 > A1B > A2). The MEM-based expected value projections also provide accurate estimates with high computational efficiency.
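
    The abstract does not give the exact weighting formula; the following is only a minimal Python sketch of tricube weighting, assuming each model's weight comes from its average absolute deviation from the observation-based quantile estimates, scaled by the largest deviation in the ensemble. All variable names and numbers are hypothetical.

        import numpy as np

        def tricube_weights(deviations):
            """Tricube weight w = (1 - u^3)^3, where u is the deviation scaled to [0, 1]."""
            d = np.abs(np.asarray(deviations, dtype=float))
            u = d / d.max()                  # the worst-performing model gets weight 0
            return (1.0 - u**3) ** 3

        # Hypothetical average absolute deviations (mm) of each model's quantile
        # estimates from the rain-gage-based estimates.
        model_deviations = [4.2, 7.9, 3.1, 12.5, 6.0]
        w = tricube_weights(model_deviations)
        w /= w.sum()                         # normalize so the weights sum to 1

        # Weighted expected value of a projected quantile, e.g. the 100-year daily event (mm).
        projected_quantiles = np.array([96.0, 104.0, 92.0, 110.0, 99.0])
        print(w, float(np.dot(w, projected_quantiles)))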

    Color science and technology of novel nanophosphors for high-efficiency high-quality LEDs

    Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Sciences of Bilkent University, 2011. Thesis (Master's) -- Bilkent University, 2011. Includes bibliographical references (leaves 118-129). Today almost one-fifth of the world's electrical energy is consumed for artificial lighting. To revolutionize general lighting and reduce its energy consumption, high-efficiency, high-quality light-emitting diodes (LEDs) are necessary. However, to achieve the targeted energy efficiency, present technologies have important drawbacks. For example, phosphor-based LEDs suffer from the emission tail of red phosphors towards longer wavelengths. This deep-red emission substantially decreases the luminous efficiency of optical radiation. Additionally, the emission spectrum of phosphor powders cannot be controlled properly for high-quality lighting, as this requires careful spectral tuning. At this point, new nanophosphors made of colloidal quantum dots and crosslinkable conjugated polymer nanoparticles have risen among the most promising alternative color convertors because they allow for an excellent capability of spectral tuning. In this thesis, we propose and present high-efficiency, high-quality white LEDs using quantum dot nanophosphors that exhibit a luminous efficacy of optical radiation ≄380 lm/Wopt, a color rendering index ≄90 and a correlated color temperature ≀4000 K. We find that the Stokes shift causes a fundamental loss >15%, which limits the maximum feasible luminous efficiency to 326.6 lm/Welect. Considering a state-of-the-art blue LED (with 81.3% photon conversion efficiency), this corresponds to 265.5 lm/Welect. To achieve 100 and 200 lm/Welect, the layered quantum dot films are required to have quantum efficiencies of 39% and 79%, respectively. In addition, we report our numerical modeling and experimental demonstrations of the quantum-dot-integrated LEDs for the different vision regimes of the human eye. Finally, we present LEDs based on the color tuning capability of conjugated polymer nanoparticles for the first time. Considering the outcomes of this thesis, we believe that our research efforts will help the development and industrialization of white light-emitting diodes using nanophosphor components. Erdem, Talha. M.S.
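
    As a quick consistency check on the figures above (a simplification, not the thesis's full efficiency model), the device-level limit follows from scaling the spectral limit by the blue LED's photon conversion efficiency:

        \eta_{\max} \approx 326.6\ \mathrm{lm/W_{elect}} \times 0.813 \approx 265.5\ \mathrm{lm/W_{elect}}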

    The pragmatics of clone detection and elimination

    The occurrence of similar code, or ‘code clones’, can make program code difficult to read, modify and maintain. This paper describes industrial case studies of clone detection and elimination, which were performed in collaboration with engineers from Ericsson AB using the refactoring and clone detection tool Wrangler for Erlang. We use the studies to illustrate the complex set of decisions that have to be taken when performing clone elimination in practice; we also discuss how the studies have informed the design of the tool. However, the conclusions we draw are largely language-independent, and set out the pragmatics of clone detection and elimination in real-world projects as well as design principles for clone detection decision-support tools. Context. The context of this work is the fact that a software tool is designed to be used; the success of such a tool therefore depends on its suitability and usability in practice. The work proceeds by observing the use of a tool in particular case studies in detail, through a “participant observer” approach, and drawing qualitative conclusions from these studies, rather than collecting and analysing quantitative data from a larger set of applications. Our conclusions help not only programmers but also the designers of software tools. Inquiry. Data collected in this way make two kinds of contribution. First, they provide the basis for deriving a set of questions that typically need to be answered by engineers in the process of removing clones from an application, and a set of heuristics that can be used to help answer these questions. Secondly, they provide feedback on existing features of software tools, as well as suggesting new features to be added to the tools. Approach. The work was undertaken by the tool designers and engineers from Ericsson AB, working together on clone elimination for code from the company. Knowledge. The work led to a number of conclusions, at different levels of generality. At the top level, there is overwhelming evidence that the process of clone elimination cannot be entirely automated, and needs to include the input of engineers familiar with the domain in question. Furthermore, there is strong evidence that the automated tools are sensitive to a set of parameters, which will differ for different applications and programming styles, and that individual clones can be over- and under-identified: again, involving those with knowledge of the code and the domain is key to successful application. Grounding. The work is grounded in “participant observation” by the tool builders, who made detailed logs of the processes undertaken by the group. Importance. The work gives guidelines that assist an engineer in using clone detection and elimination in practice, as well as helping a tool developer to shape their tool building. Although the work was in the context of a particular tool and programming language, the authors would argue that the high-level knowledge gained applies equally well to other notions of clone, as well as other tools and programming languages.
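
    Wrangler works on Erlang code, which the abstract does not show; purely as a language-neutral illustration of what eliminating a clone means in practice, here is a minimal Python sketch in which two near-identical fragments are replaced by a single parameterized helper. All names and data are hypothetical.

        # Before: two near-identical fragments (a "clone pair") differing only in a constant.
        def monthly_report(records):
            rows = [r for r in records if r["period"] == "month"]
            return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

        def yearly_report(records):
            rows = [r for r in records if r["period"] == "year"]
            return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

        # After: the clone is eliminated by extracting a common, parameterized function.
        def report(records, period):
            rows = [r for r in records if r["period"] == period]
            return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

        def monthly_report_v2(records):
            return report(records, "month")

        def yearly_report_v2(records):
            return report(records, "year")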

    Usage and refactoring studies of python regular expressions

    Though regular expressions provide a powerful search technique that is baked into every major language, is incorporated into a myriad of essential tools, and has been a fundamental aspect of Computer Science since the 1960s, no one has ever formally studied how they are used in practice, or how to apply refactoring principles to improve understandability and conformance to community standards. This thesis presents the original work of studying a sample of regexes taken from Python projects mined from GitHub, determining what features are used most often, defining some categories that illuminate common use cases, and identifying areas of significance for language and tool designers. Furthermore, this thesis defines an equivalence class model used to explore comprehension of regexes, identifying the most common and most understandable representations of semantically identical regexes, and suggesting several refactorings and preferred representations. Opportunities for future work include the novel and rich field of regex refactoring, semantic search of regexes, and further fundamental research into regex usage and understandability.
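
    The thesis's specific equivalence classes are not listed in the abstract; as an illustrative sketch only, the Python snippet below shows two representations of the same regex (alternation versus a character class, explicit repetition bounds versus the + and ? quantifiers) with a quick check that they accept the same hypothetical sample strings.

        import re

        # Two representations of the same pattern.
        verbose = re.compile(r"(?:a|b|c){1,}[0-9]{0,1}")
        concise = re.compile(r"[abc]+[0-9]?")

        samples = ["a", "abc7", "cab", "7", "", "abcd", "ab99"]
        for s in samples:
            assert bool(verbose.fullmatch(s)) == bool(concise.fullmatch(s))
        print("both representations accept exactly the same sample strings")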

    A Software Vulnerability Prediction Model Using Traceable Code Patterns And Software Metrics

    Software security is an important aspect of ensuring software quality. The goal of this study is to help developers evaluate software security at an early stage of development using traceable patterns and software metrics. The concept of traceable patterns is similar to that of design patterns, but traceable patterns can be automatically recognized and extracted from source code. If these patterns can predict vulnerable code better than traditional software metrics, they can be used to develop a vulnerability prediction model that classifies code as vulnerable or not. By analyzing and comparing the performance of traceable patterns with that of metrics, we propose a vulnerability prediction model. Objective: This study explores the performance of code patterns in vulnerability prediction and compares them with traditional software metrics. We have used the findings to build an effective vulnerability prediction model. Method: We designed and conducted experiments on the security vulnerabilities reported for Apache Tomcat (Releases 6, 7 and 8), Apache CXF and three stand-alone Java web applications of Stanford SecuriBench. We used machine learning and statistical techniques to predict vulnerabilities in these systems using traceable patterns and metrics as features. Result: We found that patterns have a lower false negative rate and higher recall in detecting vulnerable code than traditional software metrics. We also identified a set of patterns and metrics that shows higher recall in vulnerability prediction. Conclusion: Based on the results of the experiments, we proposed a prediction model using patterns and metrics to better predict vulnerable code with a higher recall rate. We evaluated the model on the systems under study and also evaluated its performance in cross-dataset validation.
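
    The abstract does not name the specific learners used; the sketch below only illustrates the kind of evaluation described, training a classifier on per-file feature vectors (pattern counts and metrics) and reporting recall and false negative rate. The data and all names are synthetic and hypothetical.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(0)

        # Hypothetical feature matrix: each row is a source file; columns are counts of
        # traceable patterns and traditional metrics (e.g., LOC, cyclomatic complexity).
        X = rng.normal(size=(400, 6))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 1).astype(int)  # 1 = vulnerable

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        recall = recall_score(y_test, clf.predict(X_test))
        fnr = 1.0 - recall  # false negative rate: vulnerable files the model misses
        print(f"recall={recall:.2f}  false negative rate={fnr:.2f}")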

    NANOSCALE CHARACTERIZATION OF FIBER/MATRIX INTERPHASE AND ITS IMPACT ON THE PERFORMANCE OF NATURAL FIBER REINFORCED POLYMER COMPOSITES

    Contact resonance force microscopy (CR-FM) is a valuable technique for evaluating the interphase of natural fiber-reinforced polymer composites (NFRPCs) and for characterizing the elastic properties of cell wall layers of natural fibers. The nanoscale spatial resolution of CR-FM, combined with its ability to provide quantitative modulus images, makes it possible to investigate the mechanical properties of interphases as narrow as 30 nm in NFRPCs and of thin cell wall layers in natural fibers. The nanoscale characterization of the interphase and its effects on the bulk mechanical properties in this study shows that an increased interphase thickness is essential for improved tensile strength in lyocell/polypropylene (PP)/maleic anhydride grafted polypropylene (MAPP) composites. An optimum amount of MAPP increased the interphase thickness to a maximum of 100 nm; further addition only decreased the interphase thickness and adversely affected the strength properties. The average impact strength was found to decrease with increasing concentration of MAPP, and our results showed that matrix properties were also a determining factor in the impact strength. After comparing the results obtained from CR-FM, tensile testing, and dynamic mechanical analysis (DMA), it was quite clear that the ÎČ transition was not a strong indicator of the filler–matrix interaction within these composites. For lyocell/PP/maleic anhydride grafted styrene-ethylene/butylene-styrene (MA-SEBS) composites, tensile strength was not a direct reflection of interfacial bonding, and the impact strength was found to increase with the addition of MA-SEBS. For both lyocell/PP/MAPP and lyocell/PP/MA-SEBS composites, the interphase region showed a gradient of modulus values ranging between those of the fiber and the matrix; to first order, this gradient could be described by a linear fit, with a gradual decrease in modulus from fiber to matrix. It was also quite evident that the interphase thickness accounts for the majority of property variations within the interphase for different treatments. This result contradicts the earlier perception of a flexible interphase, with a lower modulus than the matrix, formed by the elastomers in composites.
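
    The data behind the fit are not given in the abstract; purely as an illustration of the first-order description above, a minimal Python sketch fitting a line to a hypothetical modulus profile measured across a ~100 nm interphase region.

        import numpy as np

        # Hypothetical CR-FM line scan: position across the interphase (nm) and
        # indentation modulus (GPa), decreasing from fiber-like to matrix-like values.
        position_nm = np.linspace(0, 100, 11)
        modulus_gpa = np.array([14.8, 14.1, 13.0, 11.9, 10.6, 9.4, 8.1, 6.9, 5.8, 4.9, 4.2])

        # First-order (linear) description of the gradient from fiber to matrix.
        slope, intercept = np.polyfit(position_nm, modulus_gpa, 1)
        print(f"modulus ~ {intercept:.1f} GPa {slope:+.3f} GPa/nm * position")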

    Mining software repositories: measuring effectiveness and affectiveness in software systems.

    The Software Engineering field has many goals; among them are monitoring and controlling the development process in order to meet the business requirements of the released software artifact. Software engineers need empirical evidence that the development process and the overall quality of software artifacts are converging to the required features. Improving the development process's Effectiveness leads to higher productivity, meaning shorter time to market, but understanding or even measuring the software development process is a hard challenge. Modern software is the result of a complex process involving many stakeholders such as product owners, quality assurance teams, project managers and, above all, developers. All these stakeholders use complex software systems for managing the development process, issue tracking, code versioning, release scheduling and many other aspects of software development. Tools for project management and issue/bug tracking are becoming useful for governing the development process of Open Source software. Such tools simplify the communication process among developers and ensure the scalability of a project. The more information developers are able to exchange, the clearer are the goals, and the higher is the number of developers keen on joining and actively collaborating on a project. By analyzing data stored in such systems, researchers are able to study and address questions such as: Which factors are able to impact software productivity? Is it possible to improve software productivity by shortening the time to market? The present work addresses two major aspects of the software development process: Effectiveness and Affectiveness. By analyzing data stored in the project management and issue tracking systems of Open Source communities, we measured Effectiveness as the time required to resolve an issue and analyzed the factors able to impact it.
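
    As a minimal sketch of the Effectiveness measure described above (issue resolution time), assuming issue records with hypothetical created and resolved timestamps exported from an issue tracking system:

        from datetime import datetime
        from statistics import mean, median

        # Hypothetical export from an issue tracking system.
        issues = [
            {"id": 101, "created": "2013-01-04T09:12", "resolved": "2013-01-06T17:30"},
            {"id": 102, "created": "2013-01-05T10:00", "resolved": "2013-01-05T15:45"},
            {"id": 103, "created": "2013-01-07T08:20", "resolved": "2013-01-12T11:05"},
        ]

        def resolution_hours(issue):
            """Time from issue creation to resolution, in hours."""
            created = datetime.fromisoformat(issue["created"])
            resolved = datetime.fromisoformat(issue["resolved"])
            return (resolved - created).total_seconds() / 3600.0

        times = [resolution_hours(i) for i in issues]
        print(f"mean={mean(times):.1f} h  median={median(times):.1f} h")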