22 research outputs found

    Pak-US relations post 9/11: Impact of aid, irritants and regimes in Pakistan (2001-2013)

    Pak-US relations have followed a frictional course since Pakistan's independence and remain a foreign policy challenge for both states. Pakistan's dependence on the United States has been driven largely by its need for aid. After independence, Pakistan could play a considerable regional role as far as the United States' Cold War interests were concerned, yet the relationship remained very difficult, with a trust deficit on both sides shaping every future engagement. 9/11 entirely changed the trajectory of the relationship and the level of engagement, serving as the tipping point for a rejuvenated bilateral relationship under renewed circumstances. Since then, various factors have affected the bilateral commitment. Aid is the principal signal of engagement: its level and kind determine the propensity to engage. During the War on Terror, the military aid extended to Pakistan reflected US interests and made Pakistan a frontline ally, while irritants eroded the level of cooperation. The United States weighed its own foreign policy benefit when engaging with successive governments in Pakistan. This paper analyses the nature of the relationship after 9/11 and the impact of these factors, aiming to answer several questions. 9/11 brought a considerable change in relations, and US interests drew it back to the region. The highs and lows of this reinvigorated partnership were determined by aid to fight the War on Terror. Differences in the interpretation of events, rooted in divergent interests, became irritants to smooth diplomacy. The paper also finds that US engagement was more deeply rooted in the military regime of Musharraf than in the democratically elected government that took office in 2008.

    Comprehensive comparison of hetero-homogeneous catalysts for fatty acid methyl ester production from non-edible Jatropha curcas oil

    The synthesis of biodiesel from Jatropha curcas by transesterification is kinetically controlled. It depends on the molar ratio, reaction time, and temperature, as well as the catalyst nature and quantity. The aim of this study was to explore the transesterification of low-cost, inedible J. curcas seed oil utilizing both homogeneous (potassium hydroxide; KOH) and heterogeneous (calcium oxide; CaO) catalysis. In this effort, two steps were used. First, free fatty acids in J. curcas oil were reduced from 12.4 to less than 1 wt.% with sulfuric acid-catalyzed pretreatment. Transesterification subsequently converted the oil to biodiesel. The yield of fatty acid methyl esters was optimized by varying the reaction time, catalyst load, and methanol-to-oil molar ratio. A maximum yield of 96% was obtained from CaO nanoparticles at a reaction time of 5.5 h with 4 wt.% of the catalyst and an 18:1 methanol-to-oil molar ratio. The optimum conditions for KOH were a molar ratio of methanol to oil of 9:1, 5 wt.% of the catalyst, and a reaction time of 3.5 h, and this returned a yield of 92%. The fuel properties of the optimized biodiesel were within the limits specified in ASTM D6751, the American biodiesel standard. In addition, the 5% blends in petroleum diesel were within the ranges prescribed in ASTM D975, the American diesel fuel standard.
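The reported methanol-to-oil molar ratios translate directly into reagent quantities. The sketch below, a rough illustration only, shows that arithmetic for the two optimized conditions; the average triglyceride molar mass of jatropha oil (~870 g/mol) is an assumed typical value, not a figure from the study.

```python
# Reagent arithmetic for a given methanol-to-oil molar ratio.
# M_OIL is an assumed average triglyceride molar mass for jatropha oil.

M_OIL = 870.0      # g/mol, assumed average triglyceride molar mass
M_MEOH = 32.04     # g/mol, methanol
RHO_MEOH = 0.792   # g/mL, methanol density near room temperature

def methanol_needed(oil_mass_g, molar_ratio):
    """Return (grams, mL) of methanol for the given oil mass and ratio."""
    moles_oil = oil_mass_g / M_OIL
    grams = moles_oil * molar_ratio * M_MEOH
    return grams, grams / RHO_MEOH

# Compare the two optimized conditions for 100 g of oil:
cao_g, cao_ml = methanol_needed(100, 18)   # CaO: 18:1 ratio
koh_g, koh_ml = methanol_needed(100, 9)    # KOH: 9:1 ratio
print(f"CaO 18:1 -> {cao_g:.1f} g ({cao_ml:.1f} mL) methanol")
print(f"KOH  9:1 -> {koh_g:.1f} g ({koh_ml:.1f} mL) methanol")
```

The 18:1 ratio for CaO thus needs twice the methanol of the 9:1 KOH condition, a practical trade-off against the heterogeneous catalyst's easier recovery.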

    Cooperative Diversity and Power Consumption in Multi-hop WSN: Effects of node energy on Single Frequency Networks

    No full text
    At present, wireless sensor networks are becoming increasingly common, and energy consumption is a key factor in their deployment and maintenance. This thesis compares non-SFN multi-hop algorithms with single frequency network (SFN), or cooperative diversity, algorithms with respect to the energy consumed by the nodes. Since the nodes have limited power capacity, an efficient algorithm is extremely important. In addition, the behaviour of the network when SFN is employed must be studied, and advice offered on improvements needed to achieve preferable results. The effect of macro diversity on the network is positive, but battery energy consumption is still higher than for simple multi-hop and drains the network. The report includes background information on mobile ad-hoc networks and their relationship with cooperative diversity, and deals with how different algorithms affect energy consumption in multi-hop networks. Simulations are presented as Matlab plots for two single frequency network scenarios against simple multi-hop, tracking node energy during network discovery and decline. Results include comparative figures, followed by a discussion of the simulation results and their effects. Applications of wireless sensor networks include area monitoring, environmental monitoring, data logging, industrial monitoring and agriculture; the idea can additionally be used for wireless radio and TV distribution. The simulations compare cooperative diversity algorithms (SFN-A and SFN-D) in Matlab against an algorithm that does not use cooperative diversity. Node energy consumption is compared for both scenarios with regard to network reachability and decline: node power is analysed from the start of network discovery until 100% of the network is reached, and during network decline the behaviour of node energy is studied for the SFN-A, SFN-D and non-SFN algorithms. The number of node transmissions during node discovery is also analysed.
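The finding that SFN cooperation can drain node batteries faster than simple multi-hop can be illustrated with a toy per-hop energy model. This is a Python sketch using a generic first-order radio model, not the thesis's Matlab simulation; all constants and the SFN power-reduction factor are assumptions.

```python
# Toy per-hop node energy under a first-order radio model; constants and
# the SFN power-reduction factor are illustrative assumptions.

E_ELEC = 50e-9      # J/bit, transceiver electronics energy (assumed)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy (assumed)
BITS = 4000         # packet size in bits

def multihop_hop_energy(d):
    """One relay transmits over distance d; the next node receives."""
    tx = (E_ELEC + EPS_AMP * d ** 2) * BITS
    rx = E_ELEC * BITS
    return tx + rx

def sfn_hop_energy(relays, d, gain=2.0):
    """Several relays transmit the same packet simultaneously. Macro
    diversity lets each reduce transmit power by `gain`, but every relay
    spends transmit energy, so the hop can still cost more in total."""
    tx = (E_ELEC + EPS_AMP * d ** 2 / gain) * BITS
    rx = E_ELEC * BITS
    return relays * tx + rx

print(multihop_hop_energy(100))   # single transmitter per hop
print(sfn_hop_energy(3, 100))     # three cooperating transmitters
```

With three relays and a modest diversity gain, the cooperative hop consumes more total energy than the simple hop, mirroring the drainage effect the thesis reports.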

    Visual cues for view-invariant human action recognition

    No full text
    Human action is a visually complex phenomenon. Visual representation, analysis and recognition of human actions have become a key focus of research in computer vision, artificial intelligence, robotics and related scientific disciplines. Applications of automated action recognition include, but are not limited to, intelligent health-care monitoring, smart homes, content-based video search, animation and entertainment, human-computer interaction and intelligent video surveillance. All of these application areas revolve around a fundamental question: given a human subject doing something in the field of sensory input, what is the person doing? If a machine can correctly answer this question, it can greatly benefit computer vision system development and practical usage. However, machine recognition of human action is a daunting task due to complex motion dynamics, anthropometric variations, occlusion and high dependence on camera viewpoint. In this thesis, we exploit rich visual cues from human actions and utilize them to propose solutions to human action recognition, taking the important problem of view-invariance under viewpoint variations as a case study. We collect and explore these visual cues from geometrical relationships, spatio-temporal patterns and features, frequency domain signal analysis and contextual associations of actions, and derive action representations for machine recognition. Actions are spatio-temporal patterns, and temporal order plays an important role in their interpretation. We therefore explore the invariance of the temporal order of actions during action execution and utilize it to devise a new view-invariant action recognition approach, applying an order constraint and feature fusion to local spatio-temporal features. These features are a representation of choice for action recognition due to their computational simplicity and robustness to occlusion and minor viewpoint changes.
We introduce STOPs (spatio-temporal ordered packets), which combine the discriminative characteristics of multiple features for better recognition performance, together with a spatio-temporal ordering constraint that removes the orderless formation of the bag-of-features framework for action recognition. Furthermore, to address the limitations of feature-based approaches, we explore multiple-view geometry, which has alleviated various complex problems in computer vision. We thoroughly study applications of the static and multi-body flow fundamental matrices for relating information across views, introduce spatio-temporally consistent dense optical flow to avoid explicit manual detection of human body landmark points and explicit point correspondences, and employ a rank constraint to derive novel tracking- and training-free action similarity measures across viewpoint variations. Next, we observe that despite the considerable success of geometrical techniques, the computational complexity of dense optical flow calculation is a hindrance. We therefore study frequency domain analysis of action sequences, which leads to spatio-temporal correlation filters that use frequency domain filtering to give fast and efficient solutions to action recognition. These filters are originally view-dependent; to achieve view-invariance, view clustering is explored to extend the frequency domain techniques. Contextual information is another important cue for interpreting human actions, especially when actions exhibit interactive relationships with their context. These contextual clues become even more crucial when videos are captured in unfavourable conditions such as extreme low-light nighttime scenarios. We therefore take night vision as a case study and present contextual action recognition at nighttime.
We show that context enhancement is imperative in such a challenging multi-sensor environment to achieve reliable action recognition, which leads us to develop novel context enhancement techniques for night vision using multi-sensor image fusion. Extensive experimentation on well-known action datasets is performed, and the results are compared with existing action recognition approaches in the literature. The research findings in this thesis greatly encourage the exploitation of spatio-temporal visual cues for deriving novel action recognition approaches and increasing their performance.
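The core idea behind frequency-domain correlation filtering can be shown in a minimal sketch: correlation in the signal domain becomes elementwise multiplication with a conjugated spectrum in the frequency domain. A single 1-D template stands in here for a trained spatio-temporal filter; this is the general FFT correlation principle, not the thesis's specific filter design.

```python
# Minimal sketch of frequency domain correlation, the principle behind
# spatio-temporal correlation filters: a correlation response over all
# shifts is computed with one multiplication in the frequency domain.
import numpy as np

def correlate_fft(sequence, template):
    """Circular cross-correlation of two equal-length 1-D signals via FFT."""
    S = np.fft.fft(sequence)
    T = np.fft.fft(template)
    return np.real(np.fft.ifft(S * np.conj(T)))

rng = np.random.default_rng(0)
template = rng.standard_normal(64)
sequence = np.roll(template, 10)          # the pattern occurs at offset 10
response = correlate_fft(sequence, template)
print(int(np.argmax(response)))           # peak locates the offset: 10
```

Because the whole response is obtained with one FFT pair per signal, this avoids the per-shift cost that makes dense optical flow approaches expensive, which is the efficiency argument the thesis makes for correlation filters.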

    On temporal order invariance for view-invariant action recognition

    No full text
    View-invariant action recognition is one of the most challenging problems in computer vision. Various representations are being devised for matching actions across different viewpoints to achieve view invariance. In this paper, we explore the invariance of the temporal order of action instances during action execution and utilize it to devise a new view-invariant action recognition approach. To ensure temporal order during matching, we utilize spatiotemporal features, feature fusion and a temporal order consistency constraint. We start by extracting spatiotemporal cuboid features from video sequences and applying feature fusion to encapsulate within-class similarity for the same viewpoints. For each action class, we construct a feature fusion table to facilitate feature matching across different views. An action matching score is then calculated based on a global temporal order constraint and the number of matching features. Finally, the action label of the class with the maximum matching score is assigned to the query action. Experimentation is performed on the multiple-view INRIA Xmas Motion Acquisition Sequences (IXMAS) and West Virginia University action datasets, with encouraging results that are comparable to existing view-invariant action recognition techniques.
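A global temporal order constraint of this kind can be sketched with a longest-common-subsequence score: matched features must appear in the same order in both sequences. The label sequences and the classifier below are illustrative placeholders; the paper's actual score also uses fusion tables and feature counts.

```python
# Illustrative sketch of a matching score under a global temporal order
# constraint: only matches consistent with temporal order are counted.

def ordered_matching_score(query, candidate):
    """Length of the longest common subsequence of two label sequences,
    i.e. the largest match set consistent with temporal order."""
    m, n = len(query), len(candidate)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if query[i] == candidate[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def classify(query, class_examples):
    """Assign the label of the class with the maximum matching score."""
    return max(class_examples,
               key=lambda c: ordered_matching_score(query, class_examples[c]))

# Hypothetical per-frame feature labels for two action classes:
examples = {"wave": list("ABABAB"), "kick": list("CCDDCC")}
print(classify(list("ABAB"), examples))   # -> wave
```

Out-of-order matches contribute nothing to this score, which is what makes the ordering constraint discriminative even when individual features are ambiguous across views.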

    Automated multi-sensor color video fusion for nighttime video surveillance

    No full text
    In this paper, we present an automated color-transfer-based video fusion method to attain real-time color night vision capability for nighttime video surveillance. We utilize a simple RGB color transfer technique to fuse pseudo-colored video frames without conversion to any uncorrelated color space. We found that the final color fusion results depend greatly on the selection of the target color image. Therefore, rather than using an arbitrary target color image based on general visual anticipation, we automate target color image selection using structural similarity and color saturation. We further apply color enhancement to improve the final appearance of the color-fused images. Subjective and objective quality evaluations indicate the effectiveness of our color video fusion method for nighttime video surveillance applications.
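The selection step can be sketched as scoring each candidate reference image on saturation and structural agreement with the fused frame. In this sketch a global luminance correlation stands in for full SSIM, and the equal weights are assumptions, not the paper's parameters.

```python
# Hedged sketch of automated target color image selection: score each
# candidate by mean color saturation plus a crude structural term
# (luminance correlation as an SSIM stand-in); weights are assumptions.
import numpy as np

def mean_saturation(rgb):
    """Mean HSV saturation of an RGB image with values in [0, 1]."""
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    return float(np.mean(np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)))

def structure_score(gray_a, gray_b):
    """Correlation of normalized luminance as a structural proxy."""
    a = (gray_a - gray_a.mean()) / (gray_a.std() + 1e-8)
    b = (gray_b - gray_b.mean()) / (gray_b.std() + 1e-8)
    return float(np.mean(a * b))

def select_target(fused_gray, candidates, w_sat=0.5, w_struct=0.5):
    """Return the index of the candidate RGB image with the best score."""
    def score(rgb):
        gray = rgb.mean(axis=2)
        return w_sat * mean_saturation(rgb) + w_struct * structure_score(fused_gray, gray)
    return max(range(len(candidates)), key=lambda i: score(candidates[i]))
```

A colorful candidate whose structure matches the fused frame outranks a flat, desaturated one, which is the intuition behind replacing arbitrary target choice with this automated scoring.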

    VSAMS: Video stabilization approach for multiple sensors

    No full text
    Video stabilization is now considered a largely solved problem, but some connected problems still need research attention. One such issue arises from multiple unstable video streams coming from multiple sensors, which often contain complementary information. To enhance system performance, instability should be removed in a single pass rather than by stabilizing each sensor individually. This paper proposes VSAMS, a cooperative video stabilization framework for multi-sensor aerial data based on robust boosting curves, which encapsulate the stability of high spatial frequency information as used by flying parakeets (budgerigars). A multistage smoothing approach is devised to reduce shake and jitter while preserving the actual camera path. Experiments are performed on multi-sensor UAV data containing infrared and electro-optical video streams. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed cooperative stabilization framework.
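The multistage smoothing idea can be sketched on a 1-D camera path: smooth the estimated trajectory in successive passes and apply the smoothed-minus-raw difference as the per-frame correction. The moving-average stages and window sizes below are assumptions for illustration; the paper's framework operates on robust boosting curves shared across sensors.

```python
# Illustrative sketch of multistage camera path smoothing: the per-frame
# correction is the difference between the smoothed and raw paths.
# The moving-average stages and window sizes are assumptions.
import numpy as np

def moving_average(path, window):
    """Centered moving average with edge padding (length-preserving)."""
    pad = window // 2
    padded = np.pad(path, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

def multistage_smooth(path, windows=(5, 9)):
    """Apply progressively larger smoothing windows in stages."""
    smoothed = np.asarray(path, dtype=float)
    for w in windows:
        smoothed = moving_average(smoothed, w)
    return smoothed

# Synthetic jittery camera path: slow drift plus frame-to-frame noise.
rng = np.random.default_rng(1)
raw = np.cumsum(np.sin(np.linspace(0, 6, 60)) + 0.5 * rng.standard_normal(60))
correction = multistage_smooth(raw) - raw   # per-frame shift to apply
```

Because each stage only attenuates high-frequency jitter, the smoothed path stays close to the raw trajectory, preserving intentional camera motion as the abstract requires.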