
    Design and realisation of an integrated methodology for the analytical design of complex supply chains

    Supply chain systems are inherently complex, dynamically changing webs of relationships. Wider product variety, smaller production lot sizes, more tiers and the different actors involved in coordinated supply chains also add to supply chain complexity and present major challenges to production managers. This context has led modern organizations to implement new supply chain paradigms and to adopt new techniques that support the rapid design, analysis and implementation of these paradigms. The present research focuses on developing an integrated methodology that can support the analytical design of complex supply chains. [Continues.]

    An inclusive analysis of the leptonic decay modes of the Z⁰ boson

    This thesis describes an analysis of the process e⁺e⁻ → l⁺l⁻ (where l = e, μ, τ) at centre-of-mass energies between 88 GeV and 94 GeV, using the data collected by the DELPHI detector between the years 1991 and 1993. The leptonic decays of the Z⁰ boson are selected without attempting to separate the three lepton types, thus making it an inclusive lepton analysis. The theory behind lepton pair production is introduced and the extraction of various electroweak parameters from the experimental observables is discussed. The LEP collider and the DELPHI detector are described, with special emphasis being given to the sub-detectors used in the analysis. The criteria used to select a high-purity leptonic sample are described along with calculations of various backgrounds and efficiencies. The sample of selected leptonic events is then used to measure the cross-sections and forward-backward asymmetries. Finally, a fit to these cross-sections and asymmetries, together with the hadronic (e⁺e⁻ → qq̄) cross-sections, is carried out. Various Z⁰ parameters are obtained: the mass and total width, M_Z = 91.1876 ± 0.0052 GeV/c² and Γ_Z = 2.4971 ± 0.0061 GeV; the ratio of the hadronic to leptonic partial widths, R_l = 20.73 ± 0.09; and the pole leptonic asymmetry, (A⁰_FB)_l = 0.0195 ± 0.0042. Using these results and the value of the strong coupling constant (α_s) determined by the DELPHI collaboration, the number of light neutrino species is determined to be N_ν = 3.045 ± 0.035. The leptonic partial width is found to be Γ_l = 83.82 ± 0.29 MeV. Using the measured leptonic forward-backward asymmetries, the squared vector and axial-vector couplings of the Z⁰ to charged leptons are found to be (ĝ_v^l)²-3 and (ĝ_a^l)² = 0.2505 ± 0.0009. These values can be used to determine the effective rho parameter and the effective weak mixing angle: ρ̂ = 1.0020 ± 0.0036 and sin²θ_eff^lept = 0.2297 ± 0.0024. A full Standard Model fit to the data gives the values of the strong coupling constant, α_s, and the mass of the top quark, m_top, as: α_s = 0.123 ± 0.010 and m_top = 178 +22/−25 (expt) +18/−16 (Higgs) GeV/c², where 60 < m_Higgs < 1000 GeV/c² with a central value of 300 GeV/c². All the results obtained agree with the results from the lepton-identified analyses (analyses in which leptonic events are selected on the basis of their individual flavour) and with the predictions of the Standard Model.
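
    For orientation, the standard LEP-era relations that connect such asymmetry and partial-width measurements to the quoted couplings, effective mixing angle and neutrino count are sketched below. These are textbook expressions given here only for context, not formulas quoted from the thesis, whose fit conventions may differ in detail.

        % Textbook Z-pole relations, assumed for illustration only.
        \begin{align}
          A_{FB}^{0,\ell} &= \tfrac{3}{4}\,\mathcal{A}_e\,\mathcal{A}_\ell,
          \qquad
          \mathcal{A}_\ell = \frac{2\,g_V^{\ell}\,g_A^{\ell}}{(g_V^{\ell})^{2}+(g_A^{\ell})^{2}},
          \qquad
          \sin^{2}\theta_{\mathrm{eff}}^{\mathrm{lept}} = \tfrac{1}{4}\Bigl(1-\frac{g_V^{\ell}}{g_A^{\ell}}\Bigr),\\
          N_{\nu} &= \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\ell}}
                     \Bigl(\frac{\Gamma_{\ell}}{\Gamma_{\nu}}\Bigr)_{\mathrm{SM}},
          \qquad
          \Gamma_{\mathrm{inv}} = \Gamma_{Z}-\Gamma_{\mathrm{had}}-3\,\Gamma_{\ell}.
        \end{align}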

    History dependence in stock price evolution

    This thesis describes the modelling of stock price volatility as a function of the stock price and an exponentially weighted moving average. The data used are the Dow Jones Industrial Average Index values for the years 1901 to 1995. The random walk of asset prices and the Black-Scholes model are modified to include a moving average as a third variable, the other two being stock price and time. The ideas behind technical analysis are introduced, although no attempt is made to justify its use. Particular mention is made of the widely used technical indicator, the moving average. Numerous tests are performed on the Dow Jones data in order to determine whether or not the volatility, σ, can be a function of the ratio of the stock price to the corresponding exponentially weighted moving average, S/I. Explicit finite differencing is carried out on the p.d.e. associated with the modified Black-Scholes model and various checks are made to study the agreement between the numerical scheme and the exact results. A 2-D option, which has a payoff dependent on the performance of an exponentially weighted moving average, is introduced and solved numerically. Finally, the results detailed in the thesis are discussed along with possible future extensions. Also mentioned is an Asian-option-type payoff used by traders to trade the cross-overs of two moving averages with different time scales.
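
    A minimal sketch of the kind of test described above, assuming daily closing prices and an exponential smoothing weight lam; the parameter values, quantile binning scheme and function names are illustrative assumptions, not the thesis' own implementation.

        # Hypothetical sketch, not the thesis code: estimate local volatility as a
        # function of the ratio S/I, where I is an exponentially weighted moving
        # average of the price series S.
        import numpy as np

        def ewma(prices, lam=0.9):
            """I_t = lam * I_{t-1} + (1 - lam) * S_t."""
            out = np.empty(len(prices), dtype=float)
            out[0] = prices[0]
            for t in range(1, len(prices)):
                out[t] = lam * out[t - 1] + (1.0 - lam) * prices[t]
            return out

        def vol_vs_ratio(prices, lam=0.9, n_bins=20):
            """Bin daily log-returns by S/I and report the per-bin return standard deviation."""
            prices = np.asarray(prices, dtype=float)
            I = ewma(prices, lam)
            ratio = prices[:-1] / I[:-1]        # S/I just before each return
            rets = np.diff(np.log(prices))      # daily log-returns
            edges = np.quantile(ratio, np.linspace(0.0, 1.0, n_bins + 1))
            centres, sigmas = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (ratio >= lo) & (ratio <= hi)
                if mask.sum() > 1:
                    centres.append(ratio[mask].mean())
                    sigmas.append(rets[mask].std(ddof=1))
            return np.array(centres), np.array(sigmas)

        # Example (djia_closes is a hypothetical array of daily index closes):
        # ratios, sigmas = vol_vs_ratio(djia_closes)

    A flat sigmas curve would suggest no dependence of volatility on S/I, whereas systematic structure would be consistent with the history-dependent volatility studied in the thesis.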

    Seismic Coupling and Hydrological Responses

    In seismology, the capability of an earthquake to induce other seismic events has been widely accepted for decades. For example, the term aftershock itself implies a strong relation between such a seismic event and the occurrence of a main shock. Moreover, hydrological changes (water level in wells and streams, geyser eruption and remote seismicity) in response to remote earthquakes have been reported for many years. A matter of current debate concerns the spatiotemporal scale of interaction among seismic events. However, there appears to be no clear picture of the exact mechanism by which the triggering energy is transmitted for the phenomena listed above. It appears that the P-wave and the S-wave are inadequate in terms of ground strain magnitudes at teleseismic distances, while the amplitude of the surface waves generally decreases exponentially with depth in the Earth, so they could not be responsible for triggering deeper earthquakes or deep-seated fluid fluxes in 3-5 km deep reservoirs. This leaves some other wave as a possible triggering energy source. This thesis is based on a diffusion-dynamic theory that predicts a low-velocity displacement wave, called a soliton wave, propagating in liquid-saturated porous media with a velocity of ~100-300 m/s, analogous to a tsunami, which travels with little loss of energy. This is hypothesized to be the mechanism of energy transfer that could be sufficient to promote changes in local pore pressure and therefore to alter the ambient effective stresses. It is also hypothesized that a soliton wave packet is emitted by a primary seismic event and may trigger sympathetic secondary earthquakes at remote distances, fluid level fluctuations in wells, changes in geyser eruption behaviour, and changes in microseismic frequency, amplitude and patterns in appropriate places (e.g. under water reservoirs, in areas of active hydrothermalism, in tectonically active areas, and so on). This thesis undertakes a review of some of these phenomena and finds that the evidence as to the triggering mechanism is not clear. It also appears that the soliton hypothesis is not at all disproved by the data, and there may be some evidence for its existence. To reveal evidence of this kind of wave (soliton) in nature, real-sequence and K-Q case velocity databases of earthquake interactions in the year 2003 have been constructed using information from the Incorporated Research Institutions for Seismology (IRIS). The qualitative and quantitative analysis demonstrates that interactions between seismological and hydrological systems due to soliton waves are a definite possibility. However, the growth of fluid fluxes, geyser eruptions and remote seismicity is controlled by both the principal stresses and the pore pressure. Hence, this interaction depends on the hydromechanical properties of rock such as permeability, the compressibilities and viscosities of the fluids, saturations, and porosity. Perhaps the strongest argument in favour of a low-velocity soliton trigger is that the other seismic waves seem to be inadequate, and there is no evidence for their action as a trigger. The detection and analysis of a soliton is not undertaken in this work. Because current devices, being located at the surface and insensitive to liquid-solid coupling, are incapable of measuring such a wave, sensitive and precise sensors operating in the low frequency range must be installed within the liquid-saturated zone, preferably below the water table, to advance this work further.

    E-BANKING: A CASE STUDY OF ASKARI COMMERCIAL BANK PAKISTAN

    This paper covers the operational issues related to e-banking, as well as customers' perceptions of e-banking usage, in a case study of Askari Bank, Pakistan. Forty staff members and four customers were selected as the sample for this study. Both qualitative and quantitative methods are used to present the results. Descriptive statistics are applied to describe the demographic variables, while correlation is used for the operational problems. Finally, a cross-case analysis presents customers' perceptions of e-banking practices. The analysis shows that customers are not ready to adopt new technology, which is why their satisfaction level with e-banking is low. Internet speed and government policies are not supportive of e-banking in Pakistan. Due to a lack of trust in technology and a low computer literacy rate, customers hesitate to adopt new technology. In order to promote an IT culture in Pakistan, the government should reduce internet rates and promote the benefits of e-banking in the media so that more users can benefit from e-banking services. Keywords: E-banking, Internet, ATM, Online transaction, E-readiness, Technology Acceptance Models.

    Comparison of two surveys of head injured patients presenting during a calendar year to an urban medical centre 32 years apart

    Objective: To study the patients presenting with head injuries to a tertiary hospital in Karachi during the year 2003. Methods: During the calendar year 2003, a cross-sectional study was conducted of all patients presenting to the casualty department of Jinnah Postgraduate Medical Centre (JPMC) with head injury. Personal information was collected from the patient's attendants at presentation, or later if the patient had been brought in by the emergency services as an unknown person. The circumstances of the injury were similarly established and the clinical features documented. Results: During the year 2003, a total of 3008 patients reported to the emergency room of JPMC. Of these, 67% were males, and the majority of the reporting patients (48%) had suffered their head injury in falls from a height. However, among the seriously injured patients warranting admission to the neurosurgery unit, road traffic injuries predominated (54%) and the age distribution was weighted towards an older age group, with 70% being above the age of 20 years and mainly in the economically active 4th decade of life. One hundred and fifty-four patients died, giving a mortality rate of 5% in the entire series of 3008 patients and 25% of the 623 admitted patients. Conclusion: The experience of head injuries reporting to our centre in two calendar years, 33 years apart, suggests that attention to the crisis of death and disability occurring on roads is necessary.

    Bayesian Game Formulation of Power Allocation in Multiple Access Wiretap Channel with Incomplete CSI

    In this paper, we address the problem of distributed power allocation in a K-user fading multiple access wiretap channel, where global channel state information is limited, i.e., each user knows their own channel state with respect to Bob and Eve but only the distribution of the other users' channel states. We model this problem as a Bayesian game, in which each user is assumed to selfishly maximize his average secrecy capacity with partial channel state information. In this work, we first prove that there is a unique Bayesian equilibrium in the proposed game. Additionally, the price of anarchy is calculated to measure the efficiency of the equilibrium solution. We also propose a fast-converging iterative algorithm for power allocation. Finally, the results are validated through simulations. Comment: 7 pages, 2 figures, submitted for possible publication.
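
    One common way to write each user's average secrecy-rate utility in such a fading multiple-access wiretap setting is sketched below, purely to fix notation. The symbols (h_k and g_k for user k's fading gains towards Bob and Eve, p_k for transmit power, σ_B² and σ_E² for the noise powers) are assumptions made here; the paper's exact utility, channel model and constraint set may differ.

        % Illustrative utility only; notation assumed, not quoted from the paper.
        \begin{equation}
          u_k(p_k, p_{-k}) =
          \mathbb{E}_{h,g}\!\left[
            \log_2\!\Bigl(1+\frac{p_k h_k}{\sigma_B^{2}+\sum_{j\neq k} p_j h_j}\Bigr)
            -\log_2\!\Bigl(1+\frac{p_k g_k}{\sigma_E^{2}+\sum_{j\neq k} p_j g_j}\Bigr)
          \right]^{+},
          \qquad 0 \le p_k \le P_k^{\max}.
        \end{equation}

    Under incomplete CSI the expectation runs over the channel states user k does not observe, which is what makes the game Bayesian; the price of anarchy then compares the sum utility at the Bayesian equilibrium against a socially optimal allocation.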

    Trans-mastoid approach to otogenic brain abscess

    The treatment of otogenic brain abscess initially involves excision or aspiration of the abscess through a temporal or sub-occipital route, depending on its location. This is followed by a mastoidectomy by the ENT surgeon to eradicate the primary source of infection. During the last three years, we have approached such lesions through a mastoidectomy followed by excision of the abscess through the same approach. This trans-mastoid approach makes it technically feasible to follow the tract of suppuration and to clear both the cause and the effect of the pathology at the same sitting. This paper describes our initial experience with the trans-mastoid approach to otogenic brain abscesses. On the basis of our results, we believe that the trans-mastoid approach is an effective and logical option for the treatment of otogenic brain abscess, and merits further investigation in the form of a prospective study.

    New Method of Inference Control for Statistical Databases

    Computing and Information Science

    A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks

    Cyber threat attribution is the process of identifying the actor behind an attack incident in cyberspace. Accurate and timely threat attribution plays an important role in deterring future attacks by applying appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered by honeypot deployments, intrusion detection systems, firewalls, and via trace-back procedures is still the preferred method of security analysts for cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC). They represent Tactics, Techniques, and Procedures (TTPs) and software tools used by the adversaries in their campaigns. The adversaries rarely re-use them. They can also be manipulated, resulting in false and unfair attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, two problems need to be addressed. The first problem is that recent research works have discussed the ineffectiveness of low-level IOC for cyber threat attribution only intuitively; an empirical evaluation of the effectiveness of low-level IOC based on a real-world dataset is missing. The second problem is that the available dataset for high-level IOC has a single instance for each predictive class label, which cannot be used directly to train machine learning models. To address these problems, in this research work we empirically evaluate the effectiveness of low-level IOC based on a real-world dataset that is specifically built for comparative analysis with high-level IOC. The experimental results show that the high-level IOC trained models attribute cyberattacks with an accuracy of 95%, compared with 40% for the low-level IOC trained models. Comment: 20 pages.
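
    A minimal, hypothetical sketch of the general approach of attributing an incident to an actor from high-level technique identifiers with a standard classifier is given below; the toy incidents, actor labels, feature encoding and choice of model are invented for illustration and are not the paper's dataset or pipeline.

        # Hypothetical sketch: one-hot encode the technique IDs observed in each
        # incident and train a multi-class classifier to predict the actor.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # Each record: space-separated technique IDs seen in one incident, plus an actor label.
        incidents = [
            ("T1566 T1059 T1071", "ActorA"),
            ("T1190 T1505 T1071", "ActorB"),
            ("T1566 T1204 T1027", "ActorA"),
            ("T1190 T1059 T1105", "ActorB"),
        ]
        texts, labels = zip(*incidents)

        vec = CountVectorizer(binary=True, token_pattern=r"\S+")  # one binary column per technique
        X = vec.fit_transform(texts)

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.5, random_state=0, stratify=labels)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))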
