377 research outputs found

    Krylov subspace techniques for model reduction and the solution of linear matrix equations

    No full text
    This thesis focuses on the model reduction of linear systems and the solution of large-scale linear matrix equations using computationally efficient Krylov subspace techniques. Most approaches for model reduction involve the computation and factorization of large matrices. Krylov subspace techniques, however, have the advantage that they involve only matrix-vector multiplications in the large dimension, which makes them a better choice for model reduction of large-scale systems. The standard Arnoldi/Lanczos algorithms are widely used Krylov techniques that compute orthogonal bases of Krylov subspaces and, via a projection onto the Krylov subspace, produce a reduced order model that interpolates the actual system and its derivatives at infinity. An extension, the rational Arnoldi/Lanczos algorithm, computes orthogonal bases of a union of Krylov subspaces and results in a reduced order model that interpolates the actual system and its derivatives at a predefined set of interpolation points. This thesis concentrates on the rational Krylov method for model reduction.
In the rational Krylov method an important issue is the selection of interpolation points, for which various techniques with different selection criteria are available in the literature. One of these techniques selects the interpolation points such that the approximation satisfies the necessary conditions for H2 optimal approximation. However, more than one approximation may satisfy the necessary optimality conditions. In this thesis, conditions on the interpolation points are derived that enable us to compute all approximations satisfying the necessary optimality conditions and hence to identify the global minimizer of the H2 optimal model reduction problem. It is shown that for an H2 optimal approximation that interpolates at m interpolation points, the interpolation points are the simultaneous solution of m multivariate polynomial equations in m unknowns. For a first-order approximation, this condition reduces to computing the zeros of a linear system; for a second-order approximation, it requires the simultaneous solution of two bivariate polynomial equations. These two cases are analyzed in detail and it is shown that a global minimizer of the H2 optimal model reduction problem can be identified. Furthermore, a computationally efficient iterative algorithm is proposed for the H2 optimal model reduction problem that converges to a local minimizer.
Beyond the effect of interpolation points on the accuracy of the rational interpolating approximation, an ordinary choice of interpolation points may result in a reduced order model that loses useful properties of the actual system, such as stability, passivity, minimum-phase and bounded-real character, as well as its structure. It has recently been shown in the literature that rational interpolating approximations can be parameterized in terms of a free low-dimensional parameter in order to preserve the stability of the actual system in the reduced order approximation. This idea is extended in this thesis to preserve other properties and combinations of them. The concept of parameterization is also applied to the minimal residual method, the two-sided rational Arnoldi method and H2 optimal approximation in order to improve the accuracy of the interpolating approximation.
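As a concrete illustration of the kind of fixed-point iteration the abstract alludes to, the sketch below implements the well-known IRKA-style shift update for a SISO system: solve shifted linear systems to build rational Krylov bases, project, and replace the shifts by the mirrored reduced poles. This is a generic textbook iteration, not necessarily the algorithm proposed in the thesis, and all names are illustrative.

```python
# IRKA-style fixed-point iteration for H2 model reduction of a SISO
# system x' = A x + b u, y = c^T x. Illustrative sketch only.
import numpy as np

def irka(A, b, c, sigma, tol=1e-8, maxit=100):
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.sort_complex(np.asarray(sigma, dtype=complex))
    for _ in range(maxit):
        # Rational Krylov bases: columns (sigma_i I - A)^{-1} b and
        # (sigma_i I - A)^{-T} c for a two-sided (Petrov-Galerkin) projection.
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * I - A.T, c) for s in sigma])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
        # Update shifts to the mirror images of the reduced poles.
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ar))
        if np.max(np.abs(new_sigma - sigma)) < tol:
            break
        sigma = new_sigma
    br = np.linalg.solve(W.conj().T @ V, W.conj().T @ b)
    cr = V.conj().T @ c
    return Ar, br, cr, sigma
```

At a fixed point the shifts equal the mirrored poles of the reduced model, which is exactly the first-order H2 optimality (interpolation) condition the abstract discusses; the iteration converges to a local, not necessarily global, minimizer.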
The rational Krylov method has also been used in the literature to compute low-rank approximate solutions of the Sylvester and Lyapunov equations, which are useful for model reduction. The approach involves the computation of two sets of basis vectors in which each vector is orthogonalized against all previous vectors. This orthogonalization becomes computationally expensive and requires large storage capacity as the number of basis vectors increases. In this thesis, a restart scheme is proposed that does not require the new vectors to be orthogonal to the previous vectors; instead, a set of two new orthogonal basis vectors is computed at each restart. This reduces both the computational burden of orthogonalization and the storage requirement. It is shown that, in the case of Lyapunov equations, the approximate solution obtained through the restart scheme converges monotonically to the actual solution.
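For context, a minimal (non-restarted) Krylov projection solver for the Lyapunov equation A P + P Aᵀ + b bᵀ = 0 might look as follows; the thesis's restart scheme modifies the orthogonalization step, which is not reproduced here.

```python
# Low-rank Lyapunov solver by Galerkin projection onto a Krylov subspace.
# Every new basis vector is orthogonalized against ALL previous ones --
# the cost the thesis's restart scheme is designed to avoid.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def arnoldi(A, b, m):
    """Orthonormal basis V of span{b, Ab, ..., A^{m-1} b}."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # full Gram-Schmidt sweep
        V[:, j] = w / np.linalg.norm(w)
    return V

def lyap_lowrank(A, b, m):
    V = arnoldi(A, b, m)
    Am = V.T @ A @ V                       # small projected problem
    bm = V.T @ b
    # Solve Am Y + Y Am^T = -bm bm^T exactly in the reduced space.
    Y = solve_continuous_lyapunov(Am, -np.outer(bm, bm))
    return V, Y                            # P ~= V @ Y @ V.T, rank <= m
```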

    Energy-efficient routing and secure communication in wireless sensor networks

    Full text link
    University of Technology Sydney, Faculty of Engineering and Information Technology.
Wireless Sensor Networks (WSNs) consist of miniature sensor nodes deployed to gather vital information about an area of interest. The ability of these networks to monitor remote and hostile locations has attracted a significant amount of research over the past decade. As a result of this research, WSNs have found their way into a variety of applications such as industrial automation, habitat monitoring, healthcare, military surveillance and transportation. These networks can operate in human-inaccessible terrain and collect data on an unprecedented scale. However, they experience various technical challenges at the time of deployment as well as during operation. Most of these challenges stem from the resource limitations imposed on the sensor nodes, such as battery power, storage, computation, and transmission range. Energy conservation is one of the key issues requiring proper consideration, and energy-efficient routing protocols are needed to prolong the lifetime of these networks. Moreover, the operation of sensor nodes in a hostile environment and the presence of error-prone communication links expose these networks to various security breaches. Any routing protocol designed for such networks therefore needs to be robust and secure against one or more malicious attacks.
This thesis aims to provide an effective solution for minimizing the energy consumption of the nodes. Energy utilization is reduced by using efficient techniques for cluster head selection. To achieve this objective, two different cluster-based hierarchical routing protocols are proposed. The selection of an optimal percentage of cluster heads reduces the energy consumption, enhances the quality of delivered data and prolongs the lifetime of a network. Apart from optimal cluster head selection, energy consumption can also be reduced using efficient congestion detection and mitigation schemes. We propose an application-specific priority-based congestion control protocol for this purpose. The proposed protocol integrates mobility and heterogeneity of the nodes to detect congestion, and uses a novel queue scheduling mechanism to achieve coverage fidelity, which ensures that the extra resources consumed by distant nodes are utilized effectively.
Apart from the energy conservation issue, this thesis also aims to provide a robust solution for Sybil attack detection in WSNs. In a Sybil attack, one or more malicious nodes forge multiple identities at a given time to exhaust network resources. These nodes are detected prior to cluster formation to prevent their forged identities from participating in cluster head selection, so that only legitimate nodes are elected as cluster heads and resource utilization is enhanced. The proposed scheme requires the collaboration of any two high-energy nodes to analyse the received signal strengths of neighbouring nodes. The scheme is applied to a forest wildfire monitoring application, where detecting Sybil attacks is crucial because forged identities can transmit high false-negative alerts to an end user in order to divert the user's attention from geographical regions that are highly vulnerable to wildfire. Finally, we provide a lightweight and robust mutual authentication scheme for the real-world objects of the Internet of Things.
The presence of miniature sensor nodes at the core of each such object means that lightweight, energy-efficient and highly secure schemes need to be designed for these objects. The proposed scheme is a payload-based encryption approach that uses a simple four-way handshake to verify the identities of the participating objects. It is computationally efficient, incurs little connection overhead and safeguards against various types of replay attacks.
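For background, the classic LEACH-style probabilistic rotation of the cluster-head role, which cluster-based hierarchical protocols of this kind typically build on, can be sketched as below. The thesis's own selection criteria are more elaborate; the threshold formula T(n) = p / (1 - p (r mod 1/p)) and the parameters here are the standard textbook ones.

```python
# LEACH-style cluster-head election: each round, eligible nodes elect
# themselves with a threshold probability so the energy-expensive
# cluster-head role rotates through the network. Illustrative sketch.
import random

def elect_cluster_heads(nodes, p=0.05, round_no=0):
    """nodes: dict node_id -> last round served as cluster head (or None).
    Returns the set of node ids elected in this round."""
    epoch = int(1 / p)                     # every node leads once per epoch
    threshold = p / (1 - p * (round_no % epoch))
    heads = set()
    for node_id, last_round in nodes.items():
        # Nodes that already served within the current epoch abstain.
        if last_round is not None and round_no - last_round < epoch:
            continue
        if random.random() < threshold:
            heads.add(node_id)
            nodes[node_id] = round_no
    return heads

network = {i: None for i in range(100)}
for r in range(5):
    print(f"round {r}: heads = {sorted(elect_cluster_heads(network, 0.05, r))}")
```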

    Scientific workflow execution reproducibility using cloud-aware provenance

    Get PDF
    Scientific experiments and projects such as CMS and neuGRIDforYou (N4U) annually produce data of the order of petabytes. They adopt scientific workflows to analyse this large amount of data in order to extract meaningful information. These workflows are executed over distributed resources, both compute and storage in nature, provided by the Grid and, more recently, by the Cloud. The Cloud is becoming the playing field for scientists as it provides scalability and on-demand resource provisioning. Reproducing a workflow execution to verify results is vital for scientists, yet it has proven to be a challenge. According to one study (Belhajjame et al. 2012), around 80% of workflows cannot be reproduced, and 12% of these failures are due to a lack of information about the execution environment. The dynamic and on-demand provisioning capability of the Cloud makes this even more challenging. To overcome these challenges, this research investigates how to capture the execution provenance of a scientific workflow along with the resources used to execute it in a Cloud infrastructure. This information then enables a scientist to reproduce workflow-based scientific experiments on the Cloud infrastructure by re-provisioning similar resources on the Cloud.
Provenance has been recognised as information that helps in debugging, verifying and reproducing a scientific workflow execution. The recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches, or to propose new approaches, for collecting provenance information from the Cloud and utilizing it for workflow reproducibility on the Cloud. A literature analysis found that existing approaches for the Grid or the Cloud neither provide detailed resource information nor offer an automatic provenance capturing approach for the Cloud environment. To mitigate these challenges and fill the knowledge gap, a provenance-based approach, ReCAP, is proposed in this thesis. In ReCAP, workflow execution reproducibility is achieved by (a) capturing the Cloud-aware provenance (CAP), (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them, and (c) comparing the provenance graph structure, including the Cloud resource information, and the outputs of the workflows. ReCAP captures the Cloud resource information and links it with the workflow provenance to generate Cloud-aware provenance. The Cloud-aware provenance consists of configuration parameters relating to hardware and software that describe a resource on the Cloud. Once captured, this information aids in re-provisioning the same execution infrastructure on the Cloud for workflow re-execution. Since resources on the Cloud can be used in a static or dynamic manner (i.e. destroyed when a task is finished), this presents a challenge for the devised provenance capturing approach. To deal with these scenarios, different capturing and mapping approaches are presented in this thesis. These mapping approaches work outside the virtual machine and collect resource information from the Cloud middleware, so they do not affect job performance. The impact of the collected Cloud resource information on the job as well as on the workflow execution is evaluated through various experiments in this thesis. In ReCAP, workflow reproducibility is verified by comparing the provenance graph structure, the infrastructure details and the output produced by the workflows.
To compare the provenance graphs, the captured provenance information, including infrastructure details, is translated into a graph model. The graphs of the original execution and the reproduced execution are then compared in order to analyse their similarity. Two comparison approaches are presented that produce a qualitative as well as a quantitative analysis of the graph structure. The ReCAP framework and its constituent components are evaluated using different scientific workflows, such as ReconAll and Montage, from the domains of neuroscience (i.e. N4U) and astronomy respectively. The results show that ReCAP is able to capture the Cloud-aware provenance and to demonstrate workflow execution reproducibility by re-provisioning the same resources on the Cloud. The results also demonstrate that the provenance comparison approaches can determine the similarity between two given provenance graphs. The workflow output comparison results show that this approach is suitable for comparing the outputs of scientific workflows, especially deterministic ones.
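A minimal sketch of a quantitative structural comparison of two provenance graphs, in the spirit described above, could score node and edge overlap with Jaccard similarity; ReCAP's exact comparison algorithms are not given in the abstract, so the representation and the scoring below are assumptions.

```python
# Quantitative provenance-graph comparison via Jaccard similarity over
# node and edge sets. Each graph is a set of (source, label, target)
# provenance edges, e.g. ('job_1', 'ranOn', 'm1.large').
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def graph_similarity(edges_original, edges_reproduced):
    nodes_o = {n for e in edges_original for n in (e[0], e[2])}
    nodes_r = {n for e in edges_reproduced for n in (e[0], e[2])}
    return {"node_similarity": jaccard(nodes_o, nodes_r),
            "edge_similarity": jaccard(edges_original, edges_reproduced)}

run1 = {("job_1", "ranOn", "m1.large"), ("job_1", "produced", "out.fits")}
run2 = {("job_1", "ranOn", "m1.large"), ("job_1", "produced", "out2.fits")}
print(graph_similarity(run1, run2))   # identical infrastructure, one differing output
```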

    Politics of Social Reformation in NWFP (KPK) - An Estimate of Khan Abdul Ghaffar Khan (1890-1988)’s Educational Philosophy

    Get PDF
    In the history of the North-West Frontier Province (NWFP; renamed Khyber Pakhtunkhwa through the 18th Amendment on 15 April 2010), many renowned personalities played their part in the awakening of the masses. Among them, Khan Abdul Ghaffar Khan (alias Bacha Khan) occupies a prominent place. The history of KPK would remain incomplete without mention of the services rendered by Bacha Khan. He played a dignified role in the political, social and educational fields. Continuous invasions of the Pakhtun region greatly reduced the literacy rate among Pakhtuns, which resulted in the backwardness of the province. Consequently, the intermingling of evil customs into Pakhtun society in the name of Islam became the fate of the society, whose people had little knowledge of the religion. The mullahs, the religious leaders, also did nothing for the revival of Islam and Pakhtun society. Pakhtuns were living miserably and no one was ready to come forward for the revival of this society. At this point Bacha Khan came onto the scene and accepted the challenge of reviving Pakhtun society. This article highlights how Bacha Khan used education as a weapon for the revival of the Pakhtun society of KPK. He not only took steps to promote education among Pakhtuns but also tried his best to dissuade them from following evil customs and practices. The article also describes the institutions established by Bacha Khan to achieve his aims.

    The Impact of Financial Leverage on Firm Performance in Fuel and Energy Sector, Pakistan

    Get PDF
    This research examines the effect of financial leverage on firm performance in the fuel and energy sector of Pakistan. For this purpose, 10 of the 16 public limited firms from the fuel and energy sector listed on the Karachi Stock Exchange (KSE) were selected. The main objective of the study is to investigate whether financial leverage affects the financial performance of the firm, taking evidence from listed fuel and energy companies of Pakistan, and whether the relationship between financial leverage and financial performance is positive. A further aim of this study is to examine, using different statistical tools, the generalization that firms operating with high profits may choose high leverage.
Keywords: Financial leverage, Firm performance, Financial ratios, Fuel and energy sector in Pakistan
Paper type: Research paper
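A hedged sketch of the kind of analysis described, regressing a performance measure on a leverage measure with ordinary least squares, is shown below; the data are synthetic and the variable choices (return on assets, debt-to-equity) are assumptions, not the study's actual sample.

```python
# OLS regression of a firm performance measure on a leverage measure,
# on made-up data, to illustrate the statistical approach described.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
debt_to_equity = rng.uniform(0.2, 2.5, size=50)            # 50 hypothetical firms
roa = 0.12 - 0.03 * debt_to_equity + rng.normal(0, 0.02, 50)

X = sm.add_constant(debt_to_equity)                        # intercept + leverage
model = sm.OLS(roa, X).fit()
print(model.summary())   # sign and significance of the leverage coefficient
```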

    Implicit Higher-Order Moment Matching Technique for Model Reduction of Quadratic-bilinear Systems

    Full text link
    We propose a projection-based multi-moment matching method for model order reduction of quadratic-bilinear systems. The goal is to construct a reduced system that ensures higher-order moment matching for the multivariate transfer functions appearing in the input-output representation of the nonlinear system. An existing technique achieves this for the first two multivariate transfer functions, in what is called the symmetric form of the multivariate transfer functions. We extend this framework to an equivalent and simplified form, the regular form, which allows us to show moment matching for the first three multivariate transfer functions. Numerical results for three benchmark examples of quadratic-bilinear systems show that the proposed framework exhibits better performance at reduced computational cost in comparison to existing techniques.
Comment: 19 pages, 11 subfigures in 6 figures, Journal
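For orientation, the Galerkin projection step for a quadratic-bilinear system E x' = A x + H (x ⊗ x) + N x u + b u, y = cᵀ x can be sketched as follows, given an orthonormal basis V; constructing V so that higher-order moments of the multivariate transfer functions match is the paper's actual contribution and is not reproduced here.

```python
# Galerkin projection of a quadratic-bilinear system onto range(V).
# H is the n x n^2 matricized quadratic term; its reduced counterpart
# uses the Kronecker structure V kron V. Illustrative sketch only.
import numpy as np

def project_qb(E, A, H, N, b, c, V):
    """V: n x r orthonormal basis. Returns (E_r, A_r, H_r, N_r, b_r, c_r)."""
    Vkron = np.kron(V, V)              # (n^2) x (r^2)
    return (V.T @ E @ V,               # E_r: r x r
            V.T @ A @ V,               # A_r: r x r
            V.T @ H @ Vkron,           # H_r: r x r^2
            V.T @ N @ V,               # N_r: r x r
            V.T @ b,                   # b_r: r
            V.T @ c)                   # c_r: r
```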

    Safety of analytical X-ray appliances

    Get PDF
    Analytical X-ray appliances are widely used for materials characterization. The use of X-rays without proper protection can lead to harmful effects. In order to minimize these effects, it is essential to have a proper estimate of the dose rates around the equipment, and it is important to follow the safety regulations in order to avoid accidental exposure. This thesis outlines the potential hazards in the use of X-ray analytical instrumentation at the Division of Materials Physics, University of Helsinki. The theoretical part of the thesis introduces the generally accepted fundamental principles of radiation protection and the regulations based on these principles; the production and properties of X-rays, radiation dosimetry and the biological effects of X-rays are also discussed. The experimental part evaluates the actual radiation protection in the X-ray laboratories. Two open X-ray appliances, a powder diffractometer and a Laue setup, were selected for closer study. A copper-anode X-ray tube was used, with the tube current set to 20 mA and the voltage to 45 kV, which are typical experimental values. The dose rates around the equipment were mapped with an ionization chamber. Based on these measurements, the maximum expected dose rate around the powder diffractometer at the user position was estimated to be of the order of 1 µSv/h. For the Laue system, the dose rate around the equipment was estimated to be 10-100 times that around the powder diffractometer. Additionally, for the powder diffractometer the theoretical characteristic X-ray flux from the tube was calculated, and the characteristic X-ray flux after the monochromator at the sample position was measured using a scintillator detector and a copper filter. The dose rate calculated from this flux was as high as 10 mSv/s, although in typical powder diffraction experiments the expected dose rates are lower than these estimated maximum values. In general, the equipment was found to be safe to work with, provided that the users follow the department regulations and safety guidelines.
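A back-of-the-envelope version of the flux-to-dose-rate conversion mentioned above can be written in a few lines; the flux value and the mass energy-absorption coefficient below are assumed round numbers, not the thesis's measured data, but they reproduce the quoted order of magnitude.

```python
# Dose rate in air from a monochromatic photon beam:
#   dose rate ~= flux * photon energy * (mu_en / rho).
flux = 1e9                  # photons / (cm^2 s), assumed in-beam flux
e_kev = 8.05                # Cu K-alpha photon energy, keV
mu_en_rho = 9.0             # cm^2/g for air near 8 keV (approx. NIST value)

e_joule = e_kev * 1e3 * 1.602e-19              # keV -> J
dose_gy_per_s = flux * e_joule * mu_en_rho * 1e3   # cm^2/g -> cm^2/kg
print(f"{dose_gy_per_s * 1e3:.1f} mGy/s")      # ~12 mGy/s (= mSv/s for photons)
```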

    New Strategies For Automated Random Testing

    Get PDF
    The ever increasing reliance on software-intensive systems is driving research to discover software faults more effectively and more efficiently. Despite intensive research, very few approaches have studied and used knowledge about fault domains to improve testing or the feedback given to developers. The present thesis addresses this shortcoming: it leverages fault co-localization in a new random testing strategy called Dirt Spot Sweeping Random (DSSR), and it presents two further strategies, Automated Discovery of Failure Domain (ADFD) and Automated Discovery of Failure Domain+ (ADFD+), which improve the feedback given to developers by automatically deducing more information about the failure domain (i.e. point, block or strip). The DSSR strategy adds the value causing the failure, and its neighbouring values, to the list of interesting values used for exploring the underlying failure domain. A comparative evaluation showed significantly better performance of DSSR over the Random and Random+ strategies. The ADFD strategy finds failures and failure domains and presents the pass and fail domains in graphical form; the results obtained by evaluating error-seeded numerical programs indicated that it is highly effective. The ADFD+ strategy extends ADFD in both its algorithm and its graphical presentation of failure domains. In comparison with Randoop, the ADFD+ strategy successfully detected all failures and failure domains, while Randoop identified individual failures but could not detect failure domains. The ADFD and ADFD+ techniques were enhanced by integrating the automatic invariant detector Daikon, and the precision of identifying failure domains was determined through extensive experimental evaluation of real-world Java projects contained in the Qualitas Corpus database. The analyses of the results, cross-checked by manual testing, indicated that the ADFD and ADFD+ techniques are highly effective in providing assistance, but are not an alternative to manual testing.
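A minimal sketch of the DSSR idea as described above: plain random testing, except that a detected failure seeds a pool of interesting values (the failing value and its neighbours) that later tests draw from, sweeping the surrounding failure domain. The pool probability and neighbourhood size are illustrative choices, not taken from the thesis.

```python
# Dirt Spot Sweeping Random testing, sketched for integer inputs:
# on failure, the failing value and its neighbours join a pool of
# interesting values that future test inputs are drawn from.
import random

def dssr(test_oracle, lo=-1000, hi=1000, budget=10000, pool_prob=0.5):
    interesting, failures = [], []
    for _ in range(budget):
        if interesting and random.random() < pool_prob:
            x = random.choice(interesting)        # sweep near known dirt spots
        else:
            x = random.randint(lo, hi)            # plain random input
        if not test_oracle(x):                    # test failed
            failures.append(x)
            interesting.extend([x - 1, x, x + 1]) # seed the neighbourhood
    return failures

# Toy program under test that fails on a contiguous "block" failure domain.
print(len(dssr(lambda x: not (400 <= x <= 420))))
```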