8,956 research outputs found

    Maintenance of rock shed slope-retaining structures: a landslide mitigation measure at Simpang Pulai - Blue Valley, Perak

    The construction industry is a highly challenging industry, not only in Malaysia but worldwide, encompassing the "3D" scope of dirty, difficult and dangerous work. It is also among the largest contributors to GDP, accounting for 7.4 per cent in 2016, even though it remains one of the largest contributors to safety incidents, namely accidents (CIDB, 2017). The responsible parties should therefore take the problems faced seriously so that the industry can compete at the international level.

    Study of CMS sensitivity to neutrinoless τ decay at LHC

    The Large Hadron Collider (LHC), scheduled to start operation in 2006, is foreseen to provide a total of ∌10¹² τ leptons in the first year of running. CMS (Compact Muon Solenoid) is a general-purpose experiment designed to study proton-proton and heavy-ion collisions at the LHC. Even though searches for SUSY particles and the Higgs boson, together with B physics, are its main goals, the large number of τ leptons could allow a systematic study of τ physics. We have performed a full simulation of CMS using the GEANT 3 package and the object-oriented reconstruction program ORCA to study the sensitivity to the neutrinoless τ decays τ → Ό⁺Ό⁻Ό⁻ and τ → Όγ. We present the analysis developed for these channels and the results obtained. Comment: Invited talk at the Seventh International Workshop on Tau Lepton Physics (TAU02), Santa Cruz, CA, USA, September 2002; 10 pages, 15 eps figures.
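    To make the quoted sensitivity concrete, the sketch below is a back-of-the-envelope illustration, not the collaboration's statistical treatment: it turns the ∌10¹² produced τ leptons into an approximate 90% CL upper limit on a branching ratio, assuming a hypothetical selection efficiency and zero observed events with negligible background.

```python
# Illustrative sensitivity estimate for a neutrinoless tau decay search.
# The efficiency below is a hypothetical placeholder, not a value from the CMS study;
# only N_TAU ~ 1e12 comes from the abstract.

N_TAU = 1e12          # taus produced in the first year of LHC running (from the abstract)
EFFICIENCY = 0.05     # assumed overall trigger + selection efficiency (hypothetical)
POISSON_90CL = 2.30   # 90% CL upper limit on the Poisson mean for zero observed events

def branching_ratio_limit(n_tau: float, efficiency: float) -> float:
    """Approximate 90% CL upper limit on the branching ratio for zero observed events."""
    return POISSON_90CL / (n_tau * efficiency)

if __name__ == "__main__":
    limit = branching_ratio_limit(N_TAU, EFFICIENCY)
    print(f"Illustrative 90% CL upper limit on the branching ratio: {limit:.1e}")
```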

    Information visualization for DNA microarray data analysis: A critical review

    Graphical representation may provide effective means of making sense of the complexity and sheer volume of data produced by DNA microarray experiments that monitor the expression patterns of thousands of genes simultaneously. The ability to use "abstract" graphical representations to draw attention to areas of interest, and more in-depth visualizations to answer focused questions, would enable biologists to move from a large amount of data to the particular records they are interested in, and therefore gain deeper insight into the microarray experiment results. This paper starts by providing some background knowledge of microarray experiments, then explains how graphical representation can be applied in general to this problem domain, followed by exploring the role of visualization in gene expression data analysis. Having set the problem scene, the paper then examines various multivariate data visualization techniques that have been applied to microarray data analysis. These techniques are critically reviewed so that the strengths and weaknesses of each technique can be tabulated. Finally, several key problem areas as well as possible solutions to them are discussed as a source for future work.
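    As a concrete illustration of the kind of "abstract" overview view discussed above, the following sketch (synthetic data, not taken from any study in the review) renders a gene-expression matrix as a heatmap, so that a block of co-regulated genes stands out before any focused analysis is attempted.

```python
# Minimal heatmap overview of a synthetic gene-expression matrix.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 12))   # 200 genes x 12 hypothetical samples
expression[:40, :6] += 2.0                # plant a block of up-regulated genes

fig, ax = plt.subplots(figsize=(4, 6))
im = ax.imshow(expression, aspect="auto", cmap="RdBu_r", vmin=-3, vmax=3)
ax.set_xlabel("sample")
ax.set_ylabel("gene")
fig.colorbar(im, ax=ax, label="log2 expression ratio")
plt.show()
```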

    A Graphical Environment Supporting the Algebraic Specification of Abstract Data Types

    Abstract Data Types (ADTs) are a powerful conceptual and practical device for building high-quality software because of the way they can describe objects whilst hiding the details of how they are represented within a computer. In order to implement ADTs correctly, it is first necessary to precisely describe their properties and behaviour, typically within a mathematical framework such as algebraic specification. These techniques are no longer merely research topics but are now tools used by software practitioners. Unfortunately, the high level of mathematical sophistication required to exploit these methods has made them unattractive to a large portion of their intended audience. This thesis investigates the use of computer graphics as a way of making the formal specification of ADTs more palatable. Computer graphics technology has recently been explored as a way of making computer programs more understandable by revealing aspects of their structure and run-time behaviour that are usually hidden in textual representations. These graphical techniques can also be used to create and edit programs. Although such visualisation techniques have been incorporated into tools supporting several phases of software development, a survey presented in this thesis of existing systems reveals that their application to supporting the formal specification of ADTs has so far been ignored. This thesis describes the development of a prototype tool (called VISAGE) for visualising and visually programming formally-specified ADTs. VISAGE uses a synchronised combination of textual and graphical views to illustrate the various facets of an ADT's structure and behaviour. The graphical views use both static and dynamic representations developed specifically for this domain. VISAGE's visual programming facility has powerful mechanisms for creating and manipulating entire structures (as well as their components) that make it at least comparable with textual methods. In recognition of the importance of examples as a way of illustrating abstract concepts, VISAGE provides a dedicated tool (called the PLAYPEN) that allows the creation of example data by the user. These data can then be transformed by the operations belonging to the ADT, with the result shown by means of a dynamic, graphical display. An evaluation of VISAGE was conducted in order to detect any improvement in subjects' performance, confidence and understanding of ADT specifications. The subjects were asked to perform a set of simple specification tasks, with some using VISAGE and the others using manual techniques to act as a control. An analysis of the results shows a distinct positive reaction from the VISAGE group that was completely absent in the control group, thereby supporting the thesis that the algebraic specification of ADTs can be made more accessible and palatable through the use of computer graphic techniques.
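    For readers unfamiliar with algebraic specification, the sketch below gives a flavour of what such a specification states for a simple Stack ADT. It is a generic illustration in Python rather than VISAGE's own notation: the signature is expressed as function definitions and the equational axioms as executable assertions over a toy implementation.

```python
# Signature and equational axioms of a Stack ADT, checked against a toy model.
from typing import Tuple

Stack = Tuple[int, ...]                    # carrier for the sort "Stack"

def empty() -> Stack:                      # empty : -> Stack
    return ()

def push(s: Stack, x: int) -> Stack:       # push : Stack x Int -> Stack
    return s + (x,)

def pop(s: Stack) -> Stack:                # pop : Stack -> Stack
    return s[:-1]

def top(s: Stack) -> int:                  # top : Stack -> Int
    return s[-1]

def is_empty(s: Stack) -> bool:            # isEmpty : Stack -> Bool
    return s == ()

# Axioms of the specification, checked on sample data.
s = push(push(empty(), 1), 2)
assert pop(push(s, 7)) == s                # pop(push(s, x)) = s
assert top(push(s, 7)) == 7                # top(push(s, x)) = x
assert is_empty(empty())                   # isEmpty(empty) = true
assert not is_empty(push(s, 7))            # isEmpty(push(s, x)) = false
```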

    Method for Effective PDF Files Manipulation Detection

    The aim of this thesis is to ease the process of detecting manipulations in PDF files by examining the file's source code before moving on to other methods such as image processing or text-line examination. It is intended as a preliminary step that can save examiners a great deal of time and provide additional proof of manipulation in digital file evidence. The result is a solid and effective method for investigating and analysing PDF files to determine their integrity. To achieve this goal, a study of PDF file anatomy is conducted first, in order to become familiar with the structure and composition of this file format. A series of manipulations performed directly against the file source code then probes its internals and vulnerabilities and helps set the foundations for the method. Finally, a study of the most common types of file manipulation leads to a set of layouts against which files under investigation can be compared in order to test their veracity, complemented by a survey of specialised open-source tools for this task. A set of validation experiments completes the work, evaluating the obtained results and stating future lines of work in this field.
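    As a hedged illustration of what a source-code-level first pass might look like (not the exact procedure developed in the thesis), the snippet below counts markers that a PDF acquires when it is revised through incremental updates: more than one %%EOF or startxref entry usually means the file was modified after its initial creation and the appended revisions deserve inspection.

```python
# First-pass scan of a PDF's raw bytes for signs of incremental updates.
import sys

def count_update_markers(path: str) -> dict:
    with open(path, "rb") as fh:
        raw = fh.read()
    return {
        "eof_markers": raw.count(b"%%EOF"),
        "startxref_entries": raw.count(b"startxref"),
        "producer_fields": raw.count(b"/Producer"),
    }

if __name__ == "__main__":
    stats = count_update_markers(sys.argv[1])
    print(stats)
    if stats["eof_markers"] > 1:
        print("File contains incremental updates; inspect the appended revisions.")
```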

    Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval

    This paper presents a new state-of-the-art for document image classification and retrieval, using features learned by deep convolutional neural networks (CNNs). In object and scene analysis, deep neural nets are capable of learning a hierarchical chain of abstraction from pixel inputs to concise and descriptive representations. The current work explores this capacity in the realm of document analysis, and confirms that this representation strategy is superior to a variety of popular hand-crafted alternatives. Experiments also show that (i) features extracted from CNNs are robust to compression, (ii) CNNs trained on non-document images transfer well to document analysis tasks, and (iii) enforcing region-specific feature-learning is unnecessary given sufficient training data. This work also makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis
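    A minimal sketch of the transfer strategy described above follows; it assumes torchvision's ImageNet-pretrained AlexNet as a stand-in for the paper's network and a hypothetical data/train folder of document images, and simply swaps in a new final layer for the document categories.

```python
# Sketch of CNN transfer to document images (assumptions: torchvision AlexNet backbone,
# hypothetical data/train/<class>/*.png layout; not the paper's exact training setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scanned documents are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data/train", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for param in model.features.parameters():          # keep convolutional features fixed
    param.requires_grad = False
model.classifier[-1] = nn.Linear(4096, len(dataset.classes))  # new document-class head

optimizer = torch.optim.SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                      # one pass over the data, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```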

    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image. Important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
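    The sketch below is a simplified illustration of the basic idea behind regression-based ruling-line detection, not the paper's multi-line regression model: it locates horizontal bands dense in dark pixels and fits a straight line to the dark pixels in each band, so that slightly skewed ruling lines are modelled rather than assumed to be exact pixel rows.

```python
# Toy ruling-line detector: band finding plus least-squares line fitting.
import numpy as np

def detect_ruling_lines(binary: np.ndarray, min_density: float = 0.5):
    """binary: 2-D array with 1 = ink, 0 = background. Returns (slope, intercept) per line."""
    row_density = binary.mean(axis=1)
    candidate_rows = np.where(row_density > min_density)[0]
    lines = []
    # group consecutive candidate rows into bands, one band per ruling line
    for band in np.split(candidate_rows, np.where(np.diff(candidate_rows) > 1)[0] + 1):
        if band.size == 0:
            continue
        ys, xs = np.nonzero(binary[band[0]:band[-1] + 1, :])
        slope, intercept = np.polyfit(xs, ys + band[0], deg=1)   # least-squares fit
        lines.append((slope, intercept))
    return lines

# toy page: two horizontal ruling lines at rows 10 and 30
page = np.zeros((40, 100), dtype=np.uint8)
page[10, :] = 1
page[30, :] = 1
print(detect_ruling_lines(page))   # slopes ~0, intercepts ~10 and ~30
```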
