    Implementing intelligent asset management systems (IAMS) within an Industry 4.0 manufacturing environment

    9th IFAC Conference on Manufacturing Modelling, Management and Control, MIM 2019; Berlin, Germany; 28–30 August 2019. Published in IFAC-PapersOnLine 52(13), pp. 2488–2493. This paper defines the considerations and results obtained from implementing an Intelligent Maintenance System in a laboratory designed around basic Industry 4.0 concepts. The Intelligent Maintenance System uses asset-monitoring techniques that allow on-line digital modelling and automatic decision-making. The three fundamental premises used for the development of the management system are the structuring of information, value identification, and risk management.
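
    The monitor-and-decide loop the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical: the sensor fields, the alarm limits, and the two-action decision rule are stand-ins for the paper's digital-model-driven system, not its actual design.

        from dataclasses import dataclass

        @dataclass
        class AssetReading:
            """One on-line monitoring sample for a single asset."""
            asset_id: str
            vibration_mm_s: float  # RMS vibration velocity
            temperature_c: float

        # Hypothetical limits; a real IAMS would derive these from the asset's
        # digital model and its risk analysis rather than hard-coding them.
        VIBRATION_ALARM_MM_S = 7.1
        TEMPERATURE_ALARM_C = 85.0

        def decide(r: AssetReading) -> str:
            """Automatic decision step: map a reading to a maintenance action."""
            if r.vibration_mm_s > VIBRATION_ALARM_MM_S or r.temperature_c > TEMPERATURE_ALARM_C:
                return "schedule_maintenance"
            return "continue_monitoring"

        print(decide(AssetReading("pump-01", vibration_mm_s=8.3, temperature_c=62.0)))
        # -> schedule_maintenance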

    Facets, Tiers and Gems: Ontology Patterns for Hypernormalisation

    There are many methodologies and techniques for easing the task of ontology building. Here we describe the intersection of two of these: ontology normalisation and fully programmatic ontology development. The first of these describes a standardised organisation for an ontology, with singly inherited self-standing entities and a number of small taxonomies of refining entities. The former are described and defined in terms of the latter and used to manage the polyhierarchy of the self-standing entities. Fully programmatic development is a technique where an ontology is developed using a domain-specific language within a programming language, meaning that, as well as defining ontological entities, it is possible to add arbitrary patterns or new syntax within the same environment. We describe how new patterns can be used to enable a new style of ontology development that we call hypernormalisation.
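
    Because fully programmatic development makes patterns ordinary code, normalisation itself can be expressed as a function. The Python sketch below is not the paper's API; it is a hypothetical illustration of the pattern: small refining taxonomies (facets), self-standing entities defined purely by facet values, and a polyhierarchy that is computed rather than hand-asserted.

        from itertools import product

        # Small refining taxonomies ("facets"): value -> parent value (None = root).
        SIZE = {"small": None, "large": None}
        CHARGE = {"charged": None, "positive": "charged", "negative": "charged"}
        FACETS = {"SIZE": SIZE, "CHARGE": CHARGE}

        def ancestors(value, taxonomy):
            """All values subsuming `value` within one facet, including itself."""
            out = set()
            while value is not None:
                out.add(value)
                value = taxonomy[value]
            return out

        # Self-standing entities defined *only* by facet restrictions (normalisation).
        DEFINITIONS = {
            "Ion":         {"CHARGE": "charged"},
            "Cation":      {"CHARGE": "positive"},
            "SmallCation": {"CHARGE": "positive", "SIZE": "small"},
        }

        def subsumes(a, b):
            """a subsumes b iff every facet restriction of a is implied by b's."""
            da, db = DEFINITIONS[a], DEFINITIONS[b]
            return all(f in db and da[f] in ancestors(db[f], FACETS[f]) for f in da)

        # The polyhierarchy is computed, never hand-asserted:
        for a, b in product(DEFINITIONS, repeat=2):
            if a != b and subsumes(a, b):
                print(f"{a} subsumes {b}")
        # -> Ion subsumes Cation; Ion subsumes SmallCation; Cation subsumes SmallCation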

    Small business: Digital growth

    Small businesses, and the economy generally, can realise significant benefits by embracing mobile and internet technologies to transform their operations. Powered by PwC's Geospatial Economic Model (GEM), the report shows that small businesses can unlock an additional $49.2 billion of private sector output over the next ten years by making better use of these technologies. In each State and Territory across Australia, small businesses have the potential to help grow the economy. The economy of each State and Territory is underpinned by different economic drivers; as a result, each has opportunities to contribute differently. Looking at this same figure in geographic terms, nationally every Federal electorate would contribute almost $327.7 million of economic output over the next ten years (or approximately $33 million per year). This would be roughly the same as a significant capital project like a major roadway or a hospital upgrade. And the benefits are not limited to large businesses, tech companies, or those based in capital cities. Small and medium businesses across a wide range of industries and locations stand to benefit. GEM provides unparalleled insight into where potential economic gains are located: for the first time, an economic analysis can trace them down to every State/Territory and Federal electorate. PwC's modelling shows that while all regions and industries have much to gain, some have more to gain than others. For example, 53% of the potential economic benefit can be made by small businesses located outside Australia's inner metropolitan centres. Furthermore, 17% of Federal electorates (or 25 of the 150) could account for 50% of the potential private sector boost to the economy.
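
    The report's headline figures are internally consistent, as a quick check shows (the dollar amounts and electorate count are the report's; the arithmetic is ours):

        TOTAL_AUD = 49.2e9   # ten-year national figure from the report
        ELECTORATES = 150

        per_electorate = TOTAL_AUD / ELECTORATES
        print(f"per electorate, ten years: ${per_electorate / 1e6:.1f}M")
        # -> $328.0M (the report quotes $327.7M; the gap is rounding in the $49.2B headline)
        print(f"per electorate, per year:  ${per_electorate / 10 / 1e6:.1f}M")
        # -> $32.8M, i.e. the report's "approximately $33 million per year"
        print(f"{25 / ELECTORATES:.0%} of electorates")
        # -> 17%, matching "17% of Federal electorates (or 25 of the 150)"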

    Gem State Roofing v. United Components Clerk's Record Dckt. 47484


    Document Filtering for Long-tail Entities

    Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on, and are trained on, the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information, such as Wikipedia page views and related entities, which is typically available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions, and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities (i.e., not just long-tail entities) improves upon the state-of-the-art without depending on any entity-specific training data. Comment: CIKM 2016, Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2016.
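
    The entity-independent design can be sketched as a feature extractor over (document, entity) pairs feeding one classifier shared by all entities. The features below are simplified stand-ins for the paper's informativeness, entity-saliency, and timeliness signals, and the two training documents are invented for illustration.

        import re
        from sklearn.linear_model import LogisticRegression

        def intrinsic_features(doc: str, entity: str) -> list:
            """Features derived from the document alone; nothing is entity-specific."""
            tokens = doc.lower().split()
            mentions = doc.lower().count(entity.lower())
            first = doc.lower().find(entity.lower())
            return [
                len(tokens),                                          # informativeness proxy: length
                mentions / max(len(tokens), 1),                       # entity-saliency: mention density
                1.0 if 0 <= first < len(doc) / 4 else 0.0,            # entity-saliency: early mention
                float(bool(re.search(r"\b(19|20)\d{2}\b", doc))),     # timeliness: dated text
            ]

        # Hypothetical labelled stream: (document, entity, vital?).
        train = [
            ("Acme Corp announced a merger on 3 May 2014. Acme Corp shares rose.", "Acme Corp", 1),
            ("A list of companies: Acme Corp, Foo Ltd, Bar Inc and many others.", "Acme Corp", 0),
        ]
        X = [intrinsic_features(d, e) for d, e, _ in train]
        y = [label for _, _, label in train]
        clf = LogisticRegression().fit(X, y)

        # Because no feature names a specific entity, the same model applies
        # unchanged to entities never seen in training (the long tail):
        doc = "Zeta GmbH filed for an IPO on 12 Jan 2016, its largest move yet."
        print(clf.predict([intrinsic_features(doc, "Zeta GmbH")]))  # e.g. [1]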

    A Healthy Start for the Los Angeles Healthy Kids Program: Findings From the First Evaluation Site Visit

    Analyzes the implementation and impact of the first two years of the Healthy Kids Program, and outlines key issues and challenges to achieving universal coverage and stable financing.

    The space physics environment data analysis system (SPEDAS)

    With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories for studying the space environment, the retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform (www.spedas.org), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular, and configurable to the needs of individual missions, it works in both command-line mode (ideal for experienced users) and Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have “crib-sheets,” user-command sequences in ASCII format that can facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer’s Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool, and potentially even as an under-the-hood replacement). We review the SPEDAS development history, goals, and current implementation. We explain its “modes of use” with examples geared for users and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans. Published version.
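
    For the Python expansion the authors mention, a crib-sheet-style session might look like the sketch below; it assumes the pySPEDAS package and its THEMIS fluxgate-magnetometer load routine, and the exact module paths and tplot variable names may differ between versions.

        # Assumes `pip install pyspedas` and network access to the mission servers.
        import pyspedas
        from pytplot import tplot

        # Load THEMIS-A fluxgate magnetometer data for one day; like an IDL
        # crib-sheet, the same short sequence can be rerun or shared verbatim.
        loaded = pyspedas.themis.fgm(probe='a', trange=['2007-03-23', '2007-03-24'])

        # Plot one of the returned tplot variables (name assumed; check `loaded`).
        tplot('tha_fgs_gse')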