19 research outputs found

    Measurement of the Slope Parameter for the eta->3pi0 Decay in the pp->pp eta Reaction

    Get PDF
    The CELSIUS/WASA setup is used to measure the 3pi0 decay of eta mesons produced in pp interactions at beam kinetic energies of 1.36 and 1.45 GeV. The efficiency-corrected Dalitz plot and density distributions for this decay are shown, together with a fit of the quadratic slope parameter alpha, yielding alpha = -0.026 +/- 0.010(stat) +/- 0.010(syst). This value is compared to recent experimental results and theoretical predictions. Comment: 4 pages, 7 Postscript figures, uses revtex4.sty
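    For context, the slope parameter alpha is conventionally defined through the density of the symmetrized Dalitz plot; the standard parametrization for eta->3pi0 (the usual convention, not spelled out in the abstract itself) is

        |A(z)|^2 \propto 1 + 2\alpha z, \qquad z = \frac{2}{3}\sum_{i=1}^{3}\left(\frac{3E_i - m_\eta}{m_\eta - 3m_{\pi^0}}\right)^2

    where E_i are the pi0 energies in the eta rest frame. The variable z runs from 0 at the center of the Dalitz plot to 1 at its boundary, so the negative alpha reported above means the event density decreases toward the boundary.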

    Rencana Anggaran Biaya (RAB) Dan Time Schedule Pada Bangunan Gedung Puskesmas Dengan Menggunakan SNI 2021/2022 Dan AHSP No.1 PRT/M/2022 [Budget Plan (RAB) and Time Schedule for a Public Health Center Building Using SNI 2021/2022 and AHSP No. 1/PRT/M/2022]

    Get PDF
    The construction of the West Dumai Public Health Center (Puskesmas) has been arranged by the Ministry of Health to support health facilities in the West Dumai area. Planning documents are therefore needed to support fundraising and development proposals to local governments, one of which is the budget plan (Rencana Anggaran Biaya, RAB). The RAB was calculated using the Work Unit Price Analysis (AHSP) method in accordance with PUPR Ministerial Regulation No. 1/PRT/M/2022, covering the analysis of unit prices for wages and materials. Based on this calculation, the total cost of constructing the West Dumai Health Center building is Rp 6,217,119,000 (Six Billion Two Hundred Seventeen Million One Hundred Nineteen Thousand Rupiah), recapitulated from the following work items: foundation work Rp 580,241,381.80, structural work Rp 1,614,542,364.65, floor work Rp 420,862,119.11, wall work Rp 1,768,265,750.91, ceiling work Rp 246,598,770.82, roof work Rp 377,106,099.87, utility work Rp 810,590,931.10, and finishing work Rp 398,911,659.27, with a construction implementation time of 140 calendar days.
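    As a quick sanity check on the recapitulation, the item subtotals can be summed programmatically; a minimal sketch (figures taken from the abstract above, with the wall-work figure as reconstructed there):

        # Sum the per-item subtotals from the RAB recapitulation (values in rupiah,
        # as listed in the abstract; wall work as reconstructed from the garbled figure).
        items = {
            "foundation": 580_241_381.80,
            "structural": 1_614_542_364.65,
            "floor":      420_862_119.11,
            "wall":       1_768_265_750.91,
            "ceiling":    246_598_770.82,
            "roof":       377_106_099.87,
            "utility":    810_590_931.10,
            "finishing":  398_911_659.27,
        }

        total = sum(items.values())
        print(f"Total: Rp {total:,.2f}")  # Rp 6,217,119,077.53, matching the stated
                                          # total of ~Rp 6,217,119,000 after rounding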

    What is interesting in eta and eta' Meson Decays?

    Full text link
    An introduction to the physics of eta and eta' meson decays is given. A historical account of the discovery of the mesons is presented, followed by an overview and classification of the common decay modes and the relevance of the mesons for modern hadron and particle physics. The hadronic decay modes are discussed in more detail, and in particular some interesting features of the eta->3pi0 decay are presented. The last section briefly reviews and compares reactions used to produce the eta and eta' mesons for the studies of their decays. Comment: 15 pages, 5 figures; prepared for the Symposium on Meson Physics at COSY-11 and WASA-at-COSY, Cracow, 17-22 June 2007; added reference

    Fachsprache Deutsch: Die Darstellung von Begriffsbeziehungen in Fachtexten

    Get PDF
    German for Special Purposes: The Representation of Conceptual Relations in Specialized Texts. The work of terminologists is not only the elaboration of glossaries, but also includes the explanation and representation of specialized knowledge. This task is facilitated by the use of a model of lexicological analysis for term definition that is in consonance with psycholinguistic models of information processing. This also means that terminological definitions should not only reflect the meaning of specialized concepts, but also encode the cognitive-interpretative conceptual model of the entire knowledge domain. This article describes a method for the representation of terminological units of specialized knowledge that is the basis for a terminological database of the domain of coastal engineering, and consequently includes concepts from the specialized domains of hydrology, geology, oceanography, and meteorology. The objective of such a knowledge representation is a conceptualization of the coastline with all of its possible characteristics (sea, shore, harbour, estuary, etc.) in a dynamic representation that can account for natural and non-natural agents, modifying processes, instruments, and affected entities. Specialized dictionaries and a trilingual corpus of texts in German, Spanish, and English have been used to elaborate an inventory of concepts and their respective definitions. This article explains how concepts are interrelated within the same domain as well as how such relations are linguistically encoded in specialized texts. The results obtained are based on the analysis of concordances from the corpus of German texts, generated with the computer application WordSmith Tools
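    To illustrate the kind of representation described (not the article's actual database schema, which the abstract does not specify), a minimal sketch of typed conceptual relations for a few coastal-engineering concepts might look like this; all concept names and relation types here are illustrative assumptions:

        # Hypothetical mini-inventory of concepts and typed relations, showing how
        # conceptual relations (is_a, affects, protects, ...) can be made explicit.
        relations = [
            ("estuary",    "is_a",     "coastal_feature"),
            ("harbour",    "is_a",     "coastal_feature"),
            ("erosion",    "affects",  "shore"),
            ("breakwater", "protects", "harbour"),
            ("tide",       "modifies", "shoreline"),
        ]

        def related(concept):
            """Return all (relation, target) pairs in which the concept is the source."""
            return [(rel, dst) for src, rel, dst in relations if src == concept]

        print(related("erosion"))  # [('affects', 'shore')]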

    Using Sitcoms in ESL/EFL: A Handbook for Using Friends in the Classroom

    Get PDF
    English learning has become a significant objective throughout the world for learners of English as a second language (ESL) and as a foreign language (EFL). Unfortunately, most learners do not have a chance to learn the language in English-speaking countries. In most ESL and EFL classrooms, teachers have built their teaching method solely around textbooks, even though there are numerous teaching techniques to support learners' language development. Given that we live in an age of constant technological change in which students' attention spans have shortened, students may lack engagement and motivation when learning English through textbook-only teaching methods. In addition, through textbooks alone students not only miss the development of all language skills but also have no chance to build cultural competence. Therefore, teachers should adapt their teaching methods to the new technological resources in their classroom, such as television. The inclusion of television in the classroom presents the opportunity to learn the language and culture in an engaging and motivating way in which students' affective filter is lowered. This project presents a handbook with which ESL and EFL teachers can integrate sitcoms into their classroom to create an engaging and motivating atmosphere. The handbook consists of a collection of sample activities based on a single episode of the television sitcom Friends. A variety of activities in an integrated format will help learners boost their confidence in developing their language skills and gaining cultural competence. The project aims to help students learn English in a way that reduces the burden of studying and the level of anxiety, while providing them with the opportunity to learn the culture

    Implementation of control algorithm for mechanical image stabilization

    Get PDF
    Cameras mounted on boats and in other similar environments can be hard to use if waves and wind cause unwanted motions of the camera that disturb the desired image. This is a problem that can be addressed by mechanical image stabilization, which is the goal of this thesis. The mechanical image stabilization is achieved by controlling two stepper motors in a pan-tilt-zoom (PTZ) camera provided by Axis Communications; pan and tilt indicate that the camera can be rotated around two mutually perpendicular axes. The thesis begins with the problem of orientation estimation, i.e. finding out how the camera is oriented with respect to, e.g., a fixed coordinate system. Sensor fusion is used to fuse accelerometer and gyroscope data into a better estimate. Both the Kalman and complementary filters are investigated and compared for this purpose; the Kalman filter is used in the final implementation due to its better performance. To hold a desired camera orientation, a compensation generator is used, in this thesis called the reference generator. The name comes from the fact that it provides reference signals for the pan and tilt motors in order to compensate for external disturbances. The generator gets information from both the pan and tilt encoders and the Kalman filter; the encoders provide the camera position relative to the camera's own chassis. If the compensation signals, also seen as reference values to the inner pan-tilt control, are tracked by the pan and tilt motors, disturbances are suppressed. In the control design, a model obtained from system identification is used. The design and control simulations were carried out in the MATLAB extensions Control System Designer and Simulink, and the choice of controller fell on the PID. The final part of the thesis describes the results from experiments carried out on the real process, i.e. the camera mounted in different setups, including a robotic arm simulating sea conditions. The results show that the pan motor manages to track reference signals up to the required frequency of 1 Hz, whereas the tilt motor only manages 0.5 Hz and is thereby below the required frequency. The results nevertheless demonstrate that the concept of the thesis is feasible
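    As a rough illustration of the sensor-fusion step, here is a complementary filter of the kind the thesis compares against the Kalman filter; the gain, sample time, and signal names are assumptions for the sketch, not values from the thesis:

        import math

        # Complementary filter for tilt-angle estimation from gyro and accelerometer.
        # ALPHA close to 1 trusts the integrated gyro (good at high frequencies);
        # the accelerometer term corrects slow drift (good at low frequencies).
        ALPHA = 0.98   # filter gain (assumed value)
        DT = 0.01      # sample time in seconds (assumed 100 Hz)

        def fuse(angle_prev, gyro_rate, acc_x, acc_z):
            """One filter update.
            angle_prev: previous tilt estimate in radians
            gyro_rate:  angular rate around the tilt axis in rad/s
            acc_x, acc_z: accelerometer components in the tilt plane (m/s^2)
            """
            gyro_angle = angle_prev + gyro_rate * DT  # integrate gyro rate
            acc_angle = math.atan2(acc_x, acc_z)      # tilt from gravity direction
            return ALPHA * gyro_angle + (1.0 - ALPHA) * acc_angle

        # Example update: slowly rotating camera, gravity mostly along z.
        angle = 0.0
        angle = fuse(angle, gyro_rate=0.05, acc_x=0.5, acc_z=9.8)
        print(f"tilt estimate: {angle:.4f} rad")

    A Kalman filter, as chosen in the thesis, plays the same role but typically also estimates gyro bias and weighs the two sensors by their noise statistics rather than by a fixed gain.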

    Računalna obrada teksta [Computational Text Processing]

    Get PDF
    Because of the ever-increasing growth of digital information, advanced methods of searching large amounts of text are required. Traditionally, relational databases did the job of information retrieval, but they are suitable only for structured and discrete searching. Full-text search requires advanced text-processing techniques in order to answer the question of how well a given query matches a document, and it requires that the text be processed up front so that maximum query execution speed can be achieved.
    The text is first split into sentences, and each sentence is tokenized, which turns searching text into searching a list of tokens. All tokens are then lowercased and their diacritics normalized, which simplifies the character set needed for queries. Next, stopwords (words that do not carry additional meaning) are removed from the token list and stemming is applied to each token, which enables a keyword in the query to find all documents containing any variation of that word (e.g. singular or plural, or a different grammatical case). Finally, the processed list of tokens is saved in a structure called the index, which is then used for searching. After indexing, it is possible to query the created index. The text of the query is first processed in the same way as the index, except that each token is also expanded with its synonyms, to increase the number of relevant documents returned. The query text is then analyzed for phrases (sequences of words inside double quotes), boolean operators, wildcards, and regular expressions; it is also possible to include structured search in the query by specifying fields. While the user is typing the query, it is useful to offer a box that automatically suggests completions, and at query execution time it is good to tolerate possible typos. Finally, since this kind of searching usually yields many matching documents, it is necessary to rank them by relevance, which can be influenced in many ways. Besides automatic relevance based on the number of occurrences of the query keywords in the document, it is possible to specify that some fields are more important than others (e.g. the title), so that a match found in the title ranks higher; it is also good to rank documents by the order of, and distance between, the query keywords. For a long time, search within web pages was powered by commercial search engines such as Google. Over the past five years, however, a large number of open-source full-text search engines have evolved; the best known are Apache Solr, Sphinx, PostgreSQL (which gained support for full-text search), and Elasticsearch. Testing and analysis of their features showed that Elasticsearch prevailed among them. A great number of people use high-quality search engines such as Google every day, and it becomes the task of other web sites to bring the quality of their own search as close as possible to that standard
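    To make the indexing-and-querying pipeline concrete, here is a minimal sketch of an inverted index with lowercasing, stopword removal, a naive suffix strip standing in for real stemming, and term-frequency ranking; all names and the toy stemmer are illustrative, not taken from the thesis or from any particular engine:

        import re
        from collections import defaultdict

        STOPWORDS = {"the", "a", "an", "of", "and", "is", "in", "to"}

        def analyze(text):
            """Tokenize, lowercase, drop stopwords, and crudely stem (toy suffix strip)."""
            tokens = re.findall(r"\w+", text.lower())
            tokens = [t for t in tokens if t not in STOPWORDS]
            return [t[:-1] if t.endswith("s") else t for t in tokens]  # toy stemmer

        def build_index(docs):
            """Map each term to {doc_id: term_frequency} -- the inverted index."""
            index = defaultdict(lambda: defaultdict(int))
            for doc_id, text in docs.items():
                for term in analyze(text):
                    index[term][doc_id] += 1
            return index

        def search(index, query):
            """Score documents by summed term frequency of the query terms."""
            scores = defaultdict(int)
            for term in analyze(query):
                for doc_id, tf in index[term].items():
                    scores[doc_id] += tf
            return sorted(scores.items(), key=lambda kv: -kv[1])

        docs = {1: "Full-text search engines rank documents.",
                2: "Relational databases handle structured search."}
        index = build_index(docs)
        print(search(index, "document search"))  # doc 1 ranks first: [(1, 2), (2, 1)]

    Real engines such as Elasticsearch follow the same outline but add, among other things, proper stemmers per language, synonym expansion, typo tolerance, and field weighting in the ranking function.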

    Web 2.0: Tools and applications

    Get PDF
    The main aim of this paper is to describe, in a simple and understandable way, the most important tools and applications within the Web 2.0 concept, as well as their use in everyday life and in business. In order to explain what Web 2.0 really represents, the paper depicts the most significant technologies, their numerous features, and a comparison between Web 1.0 and Web 2.0. The evolution of Web 2.0 technologies has changed the one-way communication paradigm, enabling users to actively participate in creating and publishing content. These are tools and applications that enable users to edit, create, and publish content, combine data from numerous sources, communicate with other users, create virtual communities, and collaborate on joint projects. The main examples are blogs, wikis, podcasts, social networks, social bookmarking, mashups, and RSS. The use of Web 2.0 applications in business activities has resulted in a number of changes, especially in terms of more active participation and collaboration of employees in creating content and making decisions