
    Exploiting peer group concept for adaptive and highly available services

    This paper presents a prototype of a redundant, highly available and fault-tolerant peer-to-peer framework for data management. Peer-to-peer computing is gaining importance due to its flexible organization, lack of central authority, distribution of functionality to participating nodes, and ability to utilize unused computational resources. The emergence of Grid computing has provided the much-needed infrastructure and administrative domain for peer-to-peer computing. The components of this framework exploit the peer group concept to scope service and information search, arrange services and information in a coherent manner, provide selective redundancy, and ensure availability in the face of failures and high-load conditions. A prototype system has been implemented using JXTA peer-to-peer technology, and XML is used for service descriptions and interfaces, allowing peers to communicate with services implemented on various platforms, including web services and Jini services. It utilizes code mobility to achieve role interchange among services and to ensure dynamic group membership. Security is ensured by using a Public Key Infrastructure (PKI) to implement group-level security policies for membership and service access.
    Comment: The paper consists of 5 pages and 6 figures, submitted to Computing in High Energy and Nuclear Physics, 24-28 March 2003, La Jolla, California. CHEP0

    Modeling and Forecasting of Rainfall Time Series. A Case Study for Pakistan (Tayyab Raza Fraz)

    The change in weather conditions is considered a major problem, particularly for a developing country like Pakistan. In recent years, machine learning and artificial neural network models have become attractive forecasting techniques for rainfall compared to traditional statistical methods. The behavioral pattern of annual rainfall (mm) from 1901 to 2020 is studied, and forecasts from three models based on past observations are evaluated. Fundamentally different techniques are used for model development: a traditional linear time series ARMA model, an emerging nonlinear threshold technique, the SETAR model, and an influential machine learning technique, the NAR model. Evaluation of forecast performance is based on three forecast error criteria, namely MSE, RMSE, and MAPE. Results indicate that rainfall (mm) will slightly increase in the coming ten years, i.e. 2021 to 2030. Furthermore, the findings reveal that the NAR model is a suitable and appropriate model to forecast rainfall, outperforming both the ARMA and the SETAR model
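    As a rough illustration of the evaluation described above, the sketch below fits an ARMA-style model and scores its forecasts with MSE, RMSE, and MAPE. It uses statsmodels and a synthetic rainfall series as stand-ins; the order (2, 0, 1), the series itself, and the ten-year holdout are illustrative assumptions, not the paper's actual data or model selection.

```python
# Minimal sketch: fit an ARMA-style model to an annual rainfall series and
# score its forecasts with MSE, RMSE and MAPE. The series is a synthetic
# placeholder, not the Pakistan rainfall data used in the paper.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
rainfall = 250 + 30 * rng.standard_normal(120)   # 120 "years" of annual rainfall (mm)

train, test = rainfall[:-10], rainfall[-10:]     # hold out the last 10 years

# ARMA(p, q) is ARIMA(p, 0, q); the order (2, 0, 1) is an arbitrary illustrative choice
model = ARIMA(train, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=len(test))

def mse(y, yhat):  return float(np.mean((y - yhat) ** 2))
def rmse(y, yhat): return float(np.sqrt(mse(y, yhat)))
def mape(y, yhat): return float(np.mean(np.abs((y - yhat) / y)) * 100)

print(f"MSE={mse(test, forecast):.2f}  RMSE={rmse(test, forecast):.2f}  "
      f"MAPE={mape(test, forecast):.2f}%")
```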

    HistoSeg: Quick attention with multi-loss function for multi-structure segmentation in digital histology images

    Medical image segmentation assists in computer-aided diagnosis, surgery, and treatment. Digitized tissue slide images are used to analyze and segment glands, nuclei, and other biomarkers, which are further used in computer-aided medical applications. To this end, many researchers have developed different neural networks to perform segmentation on histological images; most of these networks are based on encoder-decoder architectures and also utilize complex attention modules or transformers. However, these networks are less accurate at capturing relevant local and global features with accurate boundary detection at multiple scales. We therefore propose an Encoder-Decoder Network with a Quick Attention Module and a Multi Loss Function (a combination of Binary Cross Entropy (BCE) Loss, Focal Loss, and Dice Loss). We evaluate the generalization capability of our proposed network on two publicly available datasets for medical image segmentation, MoNuSeg and GlaS, and outperform the state-of-the-art networks with 1.99% improvement on the MoNuSeg dataset and 7.15% improvement on the GlaS dataset. Implementation code is available at this link: https://bit.ly/HistoSeg
    Comment: Accepted by the 2022 12th International Conference on Pattern Recognition Systems (ICPRS). For implementation code see https://bit.ly/HistoSe
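    The abstract names the multi-loss as a combination of BCE, Focal, and Dice losses. The PyTorch sketch below shows one plausible way to combine them for binary segmentation masks; the equal weighting, the focal gamma, and the smoothing term are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a combined BCE + Focal + Dice loss for binary
# segmentation. Equal weighting and gamma=2.0 are illustrative assumptions,
# not the exact formulation used in HistoSeg.
import torch
import torch.nn.functional as F

def multi_loss(logits, target, gamma=2.0, smooth=1.0):
    """logits, target: tensors of shape (N, 1, H, W); target values in {0, 1}."""
    prob = torch.sigmoid(logits)

    # Binary cross-entropy over all pixels
    bce = F.binary_cross_entropy_with_logits(logits, target)

    # Focal loss: down-weight easy pixels via (1 - p_t)^gamma
    p_t = prob * target + (1 - prob) * (1 - target)
    focal = ((1 - p_t) ** gamma *
             F.binary_cross_entropy_with_logits(logits, target, reduction="none")).mean()

    # Dice loss: 1 - soft Dice coefficient per sample, averaged over the batch
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - ((2 * inter + smooth) / (union + smooth)).mean()

    return bce + focal + dice  # equal weights, as an assumption

# Example usage with random tensors
logits = torch.randn(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(multi_loss(logits, target))
```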

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and on private premises continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and identification of their type; 4. detection and recognition of text information in outdoor scenes.

    To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex backgrounds. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouettes of foreground objects.

    To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames. The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfections and artefacts due to high compression.

    In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images has emerged. The technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image, and the framework can be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain.

    The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions. Apart from detection, the colour information is also used to segment characters from words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images.

    Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms. The results show that the proposed moving object detection and recognition technique surpasses well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals. The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique in various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild
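    As a rough illustration of the generic background-modelling step described in the first part of the thesis, the sketch below uses OpenCV's MOG2 background subtractor and contour extraction to find moving-object candidates. The MOG2 model, thresholds, and the hypothetical input file stand in for the thesis's own background model; the edge-segment classification and texture-based object classifier are not reproduced.

```python
# Minimal sketch of the generic moving-object detection step: background
# modelling followed by foreground contour extraction. OpenCV's MOG2 model
# stands in for the thesis's background model.
import cv2

cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input video
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    fg_mask = bg_model.apply(frame)
    # Drop shadow pixels (marked 127 by MOG2) and clean small noise
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

    # Contours of foreground regions are the moving-object candidates
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:           # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```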

    Human object annotation for surveillance video forensics

    A system that can automatically annotate surveillance video in a manner useful for locating a person with a given description of clothing is presented. Each human is annotated based on two appearance features: primary colors of clothes and the presence of text/logos on clothes. The annotation occurs after a robust foreground extraction stage employing a modified Gaussian mixture model-based approach. The proposed pipeline consists of a preprocessing stage where color appearance of an image is improved using a color constancy algorithm. In order to annotate color information for human clothes, we use the color histogram feature in HSV space and find local maxima to extract dominant colors for different parts of a segmented human object. To detect text/logos on clothes, we begin with the extraction of connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or nontext on the basis of their local energy-based shape histogram features. Further, to detect humans, a novel technique has been proposed that uses contourlet transform-based local binary pattern (CLBP) features. In the proposed method, we extract the uniform direction invariant LBP feature descriptor for contourlet transformed high-pass subimages from vertical and diagonal directional bands. In the final stage, extracted CLBP descriptors are classified by a trained support vector machine. Experimental results illustrate the superiority of our method on large-scale surveillance video data
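    The dominant-colour annotation step can be illustrated with a short sketch: build a hue histogram in HSV space over a segmented clothing region and keep its local maxima. The bin count and peak-selection threshold below are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of dominant-colour extraction for a segmented human region:
# build a hue histogram in HSV space and keep its local maxima. Bin count
# and the peak criterion are illustrative assumptions.
import cv2
import numpy as np

def dominant_hues(bgr_patch, mask=None, bins=36, min_fraction=0.10):
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180]).ravel()
    hist /= hist.sum() + 1e-9

    peaks = []
    for i in range(bins):
        left, right = hist[(i - 1) % bins], hist[(i + 1) % bins]
        if hist[i] >= left and hist[i] >= right and hist[i] >= min_fraction:
            peaks.append((i * 180.0 / bins, float(hist[i])))   # (OpenCV hue 0-180, weight)
    return sorted(peaks, key=lambda p: -p[1])

# Example: a synthetic, mostly red clothing patch (BGR)
patch = np.full((64, 64, 3), (0, 0, 200), dtype=np.uint8)
print(dominant_hues(patch))
```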

    The Curve Of Cross Border Cartel Enforcement (Challenges and Remedies in Global Business Environment)

    This article notes that the global economic arena has taken on new dimensions across national borders. New economic challenges await anti-trust enforcers, who must ensure strict compliance with antitrust laws; in addition, this dissertational work highlights incipient violations across borders and suggests their possible legal outcomes in the near future, in order to make the economic market a level playing field for new business entrants. It particularly sheds light on cross-border cartels and their effects on the relevant market. Additionally, we take a global view of the legislative aspects, along with their de jure application and the improvements needed for proper economic growth under the auspices of a legal framework. The ramifications of cross-border cartel enforcement surfaced astoundingly between 1998 and 2015, underlining the need for earnest and prompt action to strengthen and revisit competition law enforcement tools and proficiency. Technological advancements and the liberalization of trade have raised significant challenges, including the enforcement against cross-border cartels and mergers. The globalization of corporate activities and the deregulation of business markets and numerous industrial sectors have endangered the theoretical foundation of domestic and international competition enforcement regimes. Transnational anticompetitive practices such as the monopolization of markets, collusive price fixing, vertical restraints of trade and international cartels currently challenge the jurisdiction and policies of the OECD, WTO, UNCTAD, and ICN. This alarming situation must necessarily be regularized by establishing a worldwide competition policy and a globally accepted enforcement standard. The weaknesses of unilateral, bilateral, and multilateral compacts must be re-examined in order to cope with cross-border competition challenges efficaciously. The extraterritorial, jurisdictional, and investigative mechanisms could be enclosed within binding legal structures to deter cross-border antitrust violations and support smooth economic growth. The EU and US are actively pursuing the establishment of a unanimous international antitrust regime, rather than maintaining discrepancies, to integrate the WTO and ICN as multilateral cooperation forums. Currently, the US, Canada, the EU, Japan and China have become more engaged in evidence gathering and investigations of international cartels. Developments in information sharing, private enforcement, follow-on civil litigation, dawn raids, and the extraterritorial reach of enforcement watchdogs are yet to be established

    Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm

    Automated retinal image analysis has been emerging as an important diagnostic tool for the early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we present a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases (DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor) and one local Shifa Hospital database. The method achieves an optic disc detection success rate of 100% for these databases, with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc
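    The localization step can be sketched roughly as follows: morphological closing to suppress vessels, edge detection, and a circular Hough transform to approximate the disc centre, here with scikit-image. The radius range, structuring-element size, and the hypothetical input file are illustrative assumptions, and the grow-cut boundary refinement is not shown.

```python
# Minimal sketch of optic-disc localization: morphological closing to
# suppress thin dark vessels, Canny edges, then a circular Hough transform
# to approximate the disc centre. Parameters are illustrative assumptions.
import numpy as np
from skimage import io, color, morphology, feature, transform

fundus = io.imread("retina.png")                 # hypothetical fundus image
gray = color.rgb2gray(fundus)

# Grayscale closing removes thin dark vessels crossing the bright disc
closed = morphology.closing(gray, morphology.disk(8))

edges = feature.canny(closed, sigma=2)

# Search for circles with radii plausible for an optic disc (in pixels)
radii = np.arange(30, 90, 5)
hough_res = transform.hough_circle(edges, radii)
_, cx, cy, r = transform.hough_circle_peaks(hough_res, radii, total_num_peaks=1)

print(f"optic disc centre ~ ({cx[0]}, {cy[0]}), radius ~ {r[0]} px")
```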