
    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. These new and emerging challenges in scientific imaging place ImageJ at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained so that the new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to the new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
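    The extensibility described above is exposed through ImageJ2's plugin framework, which is built on the SciJava @Plugin annotation mechanism. The following is a minimal, hypothetical command plugin in Java; the class name, menu path, and behaviour are illustrative assumptions rather than code from the paper, but they show how a community extension can operate on the N-dimensional Dataset model:

        import net.imagej.Dataset;
        import org.scijava.command.Command;
        import org.scijava.log.LogService;
        import org.scijava.plugin.Parameter;
        import org.scijava.plugin.Plugin;

        // Hypothetical example plugin: reports the size of every axis of the active dataset,
        // which may have an arbitrary number of dimensions in the ImageJ2 data model.
        @Plugin(type = Command.class, menuPath = "Plugins>Report Dimensions")
        public class ReportDimensions implements Command {

            @Parameter
            private Dataset dataset;   // injected by the framework: the active image

            @Parameter
            private LogService log;    // injected logging service

            @Override
            public void run() {
                for (int d = 0; d < dataset.numDimensions(); d++) {
                    log.info("axis " + d + ": " + dataset.dimension(d) + " samples");
                }
            }
        }

    Because such commands are discovered at runtime via the @Plugin annotation, they require no changes to the core application, which is how image formats, scripting languages, and visualization tools can all be contributed by the community.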

    Cloud Computing

    In recent years, cloud computing has become a very popular and interesting subject in the field of science and technology. Research efforts in cloud computing have led to a number of applications that add convenience to daily life. Cloud computing not only provides solutions at the enterprise level but is also well suited to organizing a centralized database that is accessible from every corner of the world. It has been suggested that in 10 to 15 years, once enterprises have widely adopted cloud computing, the in-house data center will no longer be part of how a company is perceived. The aim of this Master’s thesis, “Cloud Computing: Server Configuration and Software Implementation for the Data Collection with Wireless Sensor Nodes”, was to integrate a wireless sensor network with cloud computing in such a way that the data received from the sensor nodes can be accessed from anywhere in the world. To accomplish this task, a wireless sensor network was deployed to measure environmental conditions such as temperature and light, along with each sensor's battery information, and the measured values were sent to a web server from which the data can be accessed. The project also includes the software implementation to collect the sensors' measurements and a graphical user interface (GUI) application that reads values from the sensor network and stores them in the database.
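    The abstract describes forwarding node measurements (temperature, light, battery state) to a server and storing them in a database. The sketch below shows one plausible way to do that storage step in Java over JDBC; the database URL, credentials, and the readings table are hypothetical placeholders, not details taken from the thesis:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Minimal sketch: insert one sensor reading into a relational database.
        // Assumes a MySQL server and JDBC driver are available; all names are placeholders.
        public class ReadingStore {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://localhost:3306/wsn";   // hypothetical database
                try (Connection con = DriverManager.getConnection(url, "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO readings (node_id, temperature, light, battery) VALUES (?, ?, ?, ?)")) {
                    ps.setInt(1, 7);        // sensor node id
                    ps.setDouble(2, 21.4);  // temperature in degrees Celsius
                    ps.setDouble(3, 312.0); // light level reported by the node
                    ps.setDouble(4, 2.9);   // battery voltage
                    ps.executeUpdate();
                }
            }
        }

    A GUI application such as the one described can then read the same table back to display current and historical measurements.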

    SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been used in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been shown that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component; it concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. With SOCR Distributions and Experiments already implemented, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category; examples in the non-parametric category are the Wilcoxon rank-sum test, the Kruskal-Wallis test, Friedman's test, the Kolmogorov-Smirnov test and the Fligner-Killeen test. Hypothesis-testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and utilization of SOCR Analyses.
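    As an illustration of the simplest model in the linear-models component, the snippet below fits an ordinary least-squares line y = a + b·x to a small data set. It is a plain-Java sketch written for this summary, not SOCR's own implementation, and the sample data are made up:

        // Illustrative sketch: ordinary least-squares simple linear regression.
        public class SimpleRegressionDemo {
            public static void main(String[] args) {
                double[] x = {1, 2, 3, 4, 5};            // made-up predictor values
                double[] y = {2.1, 3.9, 6.2, 8.0, 9.9};  // made-up responses
                double n = x.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
                for (int i = 0; i < x.length; i++) {
                    sx += x[i]; sy += y[i];
                    sxx += x[i] * x[i]; sxy += x[i] * y[i];
                }
                double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
                double intercept = (sy - slope * sx) / n;
                System.out.printf("fitted line: y = %.3f + %.3f x%n", intercept, slope);
            }
        }

    SOCR Analyses pairs fits like this one with graphical model diagnostics in its web-based interface.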

    Creating an android application for acquaintance with historical buildings and places

    https://www.ester.ee/record=b544778

    Large Scale Data Analysis Using Apache Pig

    This Master's thesis describes Apache Pig, a software framework designed for parallel data processing. A concrete data analysis problem is presented and solved using the framework; the objective of the work is to demonstrate the usefulness of Pig for large-scale data analysis. Pig is built to work with the parallel computing framework Hadoop, which implements the MapReduce programming model. Pig acts as a layer of abstraction on top of MapReduce, presenting data as relational tables and allowing data manipulation and queries in the Pig Latin query language. The data analysis problem used to test Pig involved collecting news stories from on-line RSS web feeds and identifying trends in the topics covered: detecting, day by day, the most common words in the collected news; tracking, for an arbitrary set of words, how the number of co-occurrences of those words in the news changed over time; and performing regular-expression text search over the collected stories. As the solution, a number of Pig Latin scripts were created to process and analyse the collected data, a Java application was implemented as a user interface wrapper that runs the appropriate Pig scripts according to user input, and a separate application was created to download the news feed files at regular intervals. The resulting tools were used to analyse the collected data, and some results are presented in the thesis, showing how the occurrence frequencies of particular words and word combinations rise and fall with the relevance of the events they describe.
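    As an illustration of the approach described above (a Java wrapper driving Pig Latin scripts), the sketch below embeds Pig through its PigServer API and runs a word-count style query over a text file of collected news. The file names and field names are placeholders, not the thesis's actual scripts:

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        // Minimal sketch of a Java wrapper that runs a Pig Latin word-count query.
        public class PigWrapper {
            public static void main(String[] args) throws Exception {
                // Local mode for testing; ExecType.MAPREDUCE would run on a Hadoop cluster.
                PigServer pig = new PigServer(ExecType.LOCAL);
                pig.registerQuery("lines = LOAD 'news.txt' AS (line:chararray);");
                pig.registerQuery("words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;");
                pig.registerQuery("grouped = GROUP words BY word;");
                pig.registerQuery("counts = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;");
                pig.store("counts", "word_counts");   // writes the results to an output directory
            }
        }

    Grouping the counts further by publication date and sorting them would give the kind of per-day trending-word lists the thesis computes.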

    Tools of the Trade: A Survey of Various Agent Based Modeling Platforms

    Agent Based Modeling (ABM) toolkits are as diverse as the community of people who use them. With so many toolkits available, the choice of which one is best suited for a project is left to word of mouth, past experience with particular toolkits, and toolkit publicity. This is especially troublesome for projects that require specialization. Rather than using toolkits that are the most publicized but are designed for general projects, using this paper, one will be able to choose a toolkit that already exists and that may be built especially for one's particular domain and specialized needs. In this paper, we examine the entire continuum of agent based toolkits. We characterize each based on five important characteristics users consider when choosing a toolkit, and then we categorize the characteristics into user-friendly taxonomies that aid in rapid indexing and easy reference.
    Keywords: Agent Based Modeling, Individual Based Model, Multi Agent Systems