91 research outputs found

    Design of Reconstruction of Railway Section Kunovice - Staré Město u Uherského Hradiště km 0,3 - 2,0

    The aim of this bachelor's thesis is to design the adjustment of the track geometry parameters and the reconstruction of the permanent way on the single-track railway line Kunovice – Staré Město u Uherského Hradiště, km 0,3 – 2,0. The section starts at the connection to the last switch in the Kunovice railway station and ends in the Uherské Hradiště railway station, so the thesis also covers the reconstruction of the Kunovice-side station head of Uherské Hradiště. The thesis further includes a design for renewing the track drainage and considers the possibility of increasing the line speed in the section. A work-technology plan for the reconstruction and a bill of quantities are prepared as well.

    Influence of the Valve Train Flexibility to the Single Valve Motion

    The aim of this master's thesis is to compare the motion of a single valve train (SVT) with the motions of the individual valves of a complete valve train with a flexible camshaft, with a focus on dynamic behaviour. The thesis also includes a kinematic design carried out in the VALKIN software; the dynamic analysis is performed in the MBS software Virtual Engine. The influences on the resulting valve motion are described in the conclusion.

    Design of Modernization of Slavkov u Brna Railway Station

    This master's thesis deals with the design of the modernization of the Slavkov u Brna railway station. The aim is to design platform adjustments that meet the requirements of people with reduced mobility and orientation, to decide whether a platform with a central crossing is feasible or an island platform must be built instead, and to rework the track layout and platforms with regard to the prospective plans of the railway transport customers. Given the age and wear of the permanent way and substructure, their reconstruction is designed, with an expected increase of the line speed.

    Piston and Connecting Rod Assemblies of a CI Engine

    This thesis deals with the design of the piston and connecting-rod assemblies of a selected CI engine. The design is preceded by a brief survey of pistons and connecting rods used in CI engines. An analysis of the crank mechanism is then carried out and used in the strength check of the piston and the connecting rod. The thesis also contains drawing documentation.

    Arabic-SOS: Segmentation, stemming, and orthography standardization for classical and pre-modern standard Arabic

    This is an accepted manuscript of an article published by ACM in DATeCH2019: Proceedings of the 3rd International Conference on Digital Access to Textual Cultural Heritage in May 2019, available online: https://doi.org/10.1145/3322905.3322927. The accepted version of the publication may differ from the final published version. While morphological segmentation has always been a hot topic in Arabic, owing to the morphological complexity of the language and its orthography, most effort has focused on Modern Standard Arabic (MSA). In this paper, we focus on pre-MSA texts. We use the Gradient Boosting algorithm to train a morphological segmenter on a corpus derived from Al-Manar, a late 19th/early 20th century magazine that focused on the Arabic and Islamic heritage. Since most of the available Arabic cultural-heritage text suffers from substandard orthography, we have also trained a machine learner to standardize the text. Our segmentation accuracy reaches 98.47%, and the orthography standardization achieves an F-macro of 0.98 and an F-micro of 0.99. We also produce stemming as a by-product of segmentation.
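The segmentation task above can be framed as binary classification over inter-character boundaries: each potential split point gets a feature window and a split/no-split label. A minimal sketch of that framing, with a trivial lookup standing in for the trained Gradient Boosting model and a made-up Latinized example (the actual system works on Arabic script):

```python
def boundary_features(word, i):
    """Character window around the boundary between word[i-1] and word[i]."""
    left = word[max(0, i - 2):i]
    right = word[i:i + 2]
    return (left, right)

# Hypothetical stand-in for a trained classifier: a set of feature windows
# that are known to mark a morpheme boundary (e.g. after the article "al").
KNOWN_SPLITS = {("al", "ki")}

def segment(word, predict):
    """Split `word` wherever the boundary classifier fires."""
    pieces, start = [], 0
    for i in range(1, len(word)):
        if predict(boundary_features(word, i)):
            pieces.append(word[start:i])
            start = i
    pieces.append(word[start:])
    return "+".join(pieces)

result = segment("alkitab", lambda f: f in KNOWN_SPLITS)  # "al+kitab"
```

In the real system, the lookup is replaced by a Gradient Boosting model trained on labeled boundaries from the Al-Manar corpus.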

    Distilling Information Reliability and Source Trustworthiness from Digital Traces

    Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy, often biased evaluations to distill a robust, unbiased and interpretable measure of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues on the reliability of the information and the trustworthiness of the sources. Then, we propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events. Comment: Accepted at the 26th World Wide Web Conference (WWW-17).
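The core modeling idea, treating evaluation events as a temporal point process whose rate carries information about reliability, can be illustrated in a much-simplified form with a constant-intensity Poisson process; the paper's actual model is a far richer point process fitted by convex optimization:

```python
def poisson_rate_mle(event_times, horizon):
    """Maximum-likelihood estimate of a constant event intensity:
    the number of observed events divided by the observation window."""
    return len(event_times) / horizon

# Hypothetical refutation timestamps for one (source, item) pair; under this
# toy reading, a higher refutation rate suggests lower reliability.
events = [0.5, 1.2, 3.8, 4.0]
rate = poisson_rate_mle(events, horizon=10.0)  # 0.4 events per unit time
```

The paper's framework generalizes this by letting the intensity depend on latent reliability and trustworthiness parameters, which are then learned jointly from the historical traces.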

    Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles

    In micro-blogging platforms, people connect and interact with others. However, due to cognitive biases, they tend to interact with like-minded people and read only agreeable information. Many efforts to make people connect with those who think differently have not worked well. In this paper, we hypothesize, first, that previous approaches have not worked because they have been direct -- they have tried to explicitly connect people with those holding opposing views on sensitive issues. Second, that neither recommendation nor presentation of information by itself is enough to encourage behavioral change. We propose a platform that mixes a recommender algorithm and a visualization-based user interface for exploring recommendations. It recommends politically diverse profiles in terms of distance between latent topics, and displays those recommendations in a visual representation of each user's personal content. We performed an "in the wild" evaluation of this platform, and found that people explored more recommendations when using a biased algorithm instead of ours. In line with our hypothesis, we also found that the mixture of our recommender algorithm and our user interface allowed politically interested users to exhibit an unbiased exploration of the recommended profiles. Finally, our results contribute insights in two respects: first, which individual differences are important when designing platforms aimed at behavioral change; and second, which algorithms and user interfaces should be mixed to help users avoid the cognitive mechanisms that lead to biased behavior. Comment: 12 pages, 7 figures. To be presented at ACM Intelligent User Interfaces 201
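The recommender's core notion, ranking profiles by distance between latent-topic vectors, can be sketched as follows. The topic vectors here are invented, and the real system pairs this ranking with an intermediary-topics criterion and a visual data-portrait interface:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity; assumes non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def diverse_recommendations(user_vec, candidates, k=2):
    """Rank candidate (name, topic_vector) profiles by topic distance from
    the user, most distant first -- a crude stand-in for the paper's
    politically-diverse recommendation criterion."""
    return sorted(candidates,
                  key=lambda c: -cosine_distance(user_vec, c[1]))[:k]

user = (1.0, 0.0)
cands = [("a", (1.0, 0.0)), ("b", (0.0, 1.0)), ("c", (1.0, 1.0))]
recs = diverse_recommendations(user, cands, k=2)  # "b" then "c"
```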

    Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus

    The ability to ask questions is important in both human and machine intelligence. Learning to ask questions helps knowledge acquisition, improves question-answering and machine reading comprehension tasks, and helps a chatbot keep the conversation flowing with a human. Existing question generation models are ineffective at generating a large amount of high-quality question-answer pairs from unstructured text, since, given an answer and an input passage, question generation is inherently a one-to-many mapping. In this paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpora at scale by imitating the way a human asks questions. Our system consists of: i) an information extractor, which samples from the text multiple types of assistive information to guide question generation; ii) neural question generators, which generate diverse and controllable questions by leveraging the extracted assistive information; and iii) a neural quality controller, which removes low-quality generated data based on text entailment. We compare our question generation models with existing approaches and resort to voluntary human evaluation to assess the quality of the generated question-answer pairs. The evaluation results suggest that our system dramatically outperforms state-of-the-art neural question generation models in terms of generation quality, while remaining scalable. With models trained on a relatively small amount of data, we can generate 2.8 million quality-assured question-answer pairs from a million sentences found in Wikipedia. Comment: Accepted by The Web Conference 2020 (WWW 2020) as full paper (oral presentation).
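The third component, the quality controller, keeps only generated pairs that are entailed by the source passage. A crude lexical stand-in (token overlap instead of a neural entailment model) illustrates the filtering step; the threshold and scoring are invented for the sketch:

```python
def overlap_score(passage, question, answer):
    """Crude lexical proxy for the neural entailment check: the fraction
    of question+answer tokens that also appear in the passage."""
    passage_tokens = set(passage.lower().split())
    qa_tokens = (question + " " + answer).lower().split()
    return sum(tok in passage_tokens for tok in qa_tokens) / len(qa_tokens)

def quality_filter(passage, qa_pairs, threshold=0.5):
    """Drop generated pairs that score below the threshold."""
    return [(q, a) for q, a in qa_pairs
            if overlap_score(passage, q, a) >= threshold]

passage = "the cat sat on the mat"
pairs = [("where did the cat sit", "on the mat"),
         ("who is the president", "abraham lincoln")]
kept = quality_filter(passage, pairs)  # only the first pair survives
```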

    SAR: Learning Cross-Language API Mappings with Little Knowledge

    To save effort, developers often translate programs from one programming language to another, instead of implementing them from scratch. Translating the application program interfaces (APIs) used in one language to functionally equivalent ones available in another language is an important aspect of program translation. Existing approaches facilitate the translation by automatically identifying the API mappings across programming languages. However, these approaches still require a large number of parallel corpora, ranging from pairs of APIs or code fragments that are functionally equivalent to similar code comments. To minimize the need for parallel corpora, this paper aims at an automated approach that can map APIs across languages with much less a priori knowledge than other approaches. Our approach is based on a realization of the notion of domain adaptation, combined with code embedding, to better align two vector spaces. Taking as input large sets of programs, our approach first generates numeric vector representations of the programs (including the APIs used in each language), and it adapts generative adversarial networks (GANs) to align the vectors in the different spaces of the two languages. For better alignment, we initialize the GAN with parameters derived from API mapping seeds that can be identified accurately with a simple automatic signature-based matching heuristic. The cross-language API mappings can then be identified via nearest-neighbors queries in the aligned vector spaces. We have implemented the approach (SAR, named after its three main technical components: Seeding, Adversarial training, and Refinement) in a prototype for mapping APIs across Java and C# programs. Our evaluation on about 2 million Java files and 1 million C# files shows that the approach can achieve 48% and 78% mapping accuracy in its top-1 and top-10 API mapping results respectively, with only 174 automatically identified seeds, which is more accurate than other approaches using the same or many more mapping seeds.
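Once the two embedding spaces are aligned (in SAR, by the GAN plus seed-based refinement), mapping an API reduces to a nearest-neighbour query. A toy sketch with invented, already-aligned 2-D vectors; real code embeddings are high-dimensional and learned from large program corpora:

```python
import math

# Invented, already-aligned 2-D embeddings; in the real system the GAN and
# seed refinement produce this alignment over learned code embeddings.
java_api = {
    "java.util.ArrayList.add": (0.9, 0.1),
    "java.util.HashMap.put": (0.1, 0.9),
}
csharp_api = {
    "System.Collections.Generic.List.Add": (0.88, 0.12),
    "System.Collections.Generic.Dictionary.Add": (0.12, 0.88),
}

def nearest(vec, space):
    """Return the API name in `space` whose vector is closest to `vec`."""
    return min(space, key=lambda name: math.dist(vec, space[name]))

# Map a Java API to its C# counterpart via a nearest-neighbour query:
match = nearest(java_api["java.util.ArrayList.add"], csharp_api)
```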