40 research outputs found

    Design Requirements of Office Systems

    Automation of office work constitutes a new, growing application of information systems. The original characteristics of an Office Information System (OIS) in comparison with a conventional information system imply the need for developing new design methodologies and models, which are classified and discussed in this paper. OIS are not just document management systems (or word processing systems), i.e., they do not consider only, or mainly, static aspects of data: they are in fact more general information systems where documents are only one of the many elements of the system. In addition, while conventional IS are often applied to support operational activities, office work shows many different facets, and therefore it is not reducible to a set of operational activities. Correspondingly, while the main phases that are commonly recognized in the design of a conventional IS (such as requirements analysis, requirements specification, logical design, optimization and implementation, system evaluation and modification) can be transferred also to OIS design, the conceptual models for requirements specification, on which the early design phases are based, should instead be changed in order to allow the specification of particular aspects of an OIS. Such aspects include new functionalities, such as filtering of data, reminding of activities to be performed, scheduling of manual and automatic activities, and communication; some specific types of data are also needed in an OIS: groups of data (documents and dossiers), unstructured and incomplete data, sophisticated handling of time and of complex situations, distributed data, and office workers' roles.
Other particular aspects are related to the fact that an office system is intrinsically evolutionary, changing with the usage of the system: it is highly interactive, integrates different functions, and requires great flexibility, with possible interruptions of tasks and a high number of exceptions arising during the work.
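The data requirements listed above (grouped documents and dossiers, incomplete data, time handling, worker roles) can be illustrated with a minimal sketch. The class and field names below are assumptions made for illustration, not a model from the paper.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch only: names and fields are assumptions, not the
# paper's model. It shows OIS data traits: documents grouped into
# dossiers, incomplete (Optional) fields, explicit time handling, and
# worker roles attached to data rather than to individuals.
@dataclass
class Document:
    title: str
    author_role: str                  # an office worker role, e.g. "clerk"
    received: Optional[date] = None   # incomplete data is permitted
    body: Optional[str] = None

@dataclass
class Dossier:
    subject: str
    documents: List[Document] = field(default_factory=list)

    def needs_reminder(self, deadline: date, today: date) -> bool:
        """Reminder-style check: is any document still missing past the deadline?"""
        return today > deadline and any(d.received is None for d in self.documents)

claim = Dossier("Claim 42", [Document("Intake form", author_role="clerk")])
reminder_due = claim.needs_reminder(deadline=date(2009, 1, 1), today=date(2009, 2, 1))
```

The reminder check stands in for the "reminding of activities to be performed" functionality: it is a query over grouped, possibly incomplete data plus a time condition, which is exactly the combination the abstract says conventional IS models do not capture.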

    Tenth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools Aarhus, Denmark, October 19-21, 2009

    This booklet contains the proceedings of the Tenth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 19-21, 2009. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.cs.au.dk/CPnets/events/workshop0

    Machine intelligence and robotics: Report of the NASA study group

    Opportunities for the application of machine intelligence and robotics in NASA missions and systems were identified. The benefits of successful adoption of machine intelligence and robotics techniques were estimated, and forecasts were prepared to show their growth potential. Program options for research, advanced development, and implementation of machine intelligence and robot technology for use in program planning are presented.

    National Educators' Workshop: Update 1991. Standard Experiments in Engineering Materials Science and Technology

    Given here is a collection of experiments presented and demonstrated at the National Educators' Workshop: Update 91, held at the Oak Ridge National Laboratory on November 12-14, 1991. The experiments related to the nature and properties of engineering materials and provided information to assist in teaching about materials in the education community.

    Beyond Quantity: Research with Subsymbolic AI

    How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches of subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?

    Extracting Quantitative Information from Nonnumeric Marketing Data: An Augmented Latent Semantic Analysis Approach

    Despite the widespread availability and importance of nonnumeric data, marketers do not have the tools to extract information from large amounts of nonnumeric data. This dissertation attempts to fill this void: I developed a scalable methodology that is capable of extracting information from extremely large volumes of nonnumeric data. The proposed methodology integrates concepts from information retrieval and content analysis to analyze textual information. This approach avoids a pervasive difficulty of traditional content analysis, namely the classification of terms into predetermined categories, by creating a linear composite of all terms in the document and, then, weighting the terms according to their inferred meaning. In the proposed approach, meaning is inferred by the collocation of the term across all the texts in the corpus. It is assumed that there is a lower dimensional space of concepts that underlies word usage. The semantics of each word are inferred by identifying its various contexts in a document and across documents (i.e., in the corpus). After the semantic similarity space is inferred from the corpus, the words in each document are weighted to obtain their representation on the lower dimensional semantic similarity space, effectively mapping the terms to the concept space and ultimately creating a score that measures the concept of interest. I propose an empirical application of the outlined methodology. For this empirical illustration, I revisit an important marketing problem, the effect of movie critics on the performance of the movies. In the extant literature, researchers have used an overall numerical rating of the review to capture the content of the movie reviews. I contend that valuable information present in the textual materials remains uncovered. I use the proposed methodology to extract this information from the nonnumeric text contained in a movie review. 
The proposed setting is particularly attractive for validating the methodology because it allows a simple test of the text-derived metrics by comparing them to the numeric ratings provided by the reviewers. I empirically show the application of this methodology and of traditional computer-aided content-analytic methods to study an important marketing topic, the effect of movie critics on movie performance. In the empirical application of the proposed methodology, I use two datasets that combined contain more than 9,000 movie reviews nested in more than 250 movies. I restudy this marketing problem by obtaining information directly from the reviews instead of following the usual practice of using an overall rating or a classification of the review as either positive or negative. I find that the addition of the direct content and structure of the review adds a significant amount of explanatory power as a determinant of movie performance, even in the presence of actual reviewer overall ratings (stars) and other controls. This effect is robust across distinct operationalizations of both the review content and the movie performance metrics. In fact, my findings suggest that as we move from sales to profitability to financial return measures, the role of the content of the review, and therefore the critic's role, becomes increasingly important.
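The core intuition of the approach, that a term's meaning can be inferred from the contexts it shares with other terms across a corpus, can be sketched as follows. This toy uses raw co-occurrence vectors and cosine similarity rather than the dissertation's actual latent-semantic-analysis (SVD-based) space, and the three-review corpus is invented for illustration.

```python
import math
from collections import Counter

# Toy three-document corpus (invented for illustration). "great" and
# "excellent" never co-occur in the same document, but they appear in
# similar contexts, so a collocation-based measure should still rate
# them as semantically close.
corpus = [
    "great acting and a great plot",
    "excellent acting and an excellent plot",
    "terrible pacing and a dull plot",
]
docs = [doc.split() for doc in corpus]
vocab = sorted({w for doc in docs for w in doc})

def cooccurrence_vector(term):
    """Represent a term by the words it co-occurs with across the corpus."""
    counts = Counter()
    for doc in docs:
        if term in doc:
            for w in doc:
                if w != term:
                    counts[w] += 1
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

sim_pos = cosine(cooccurrence_vector("great"), cooccurrence_vector("excellent"))
sim_neg = cosine(cooccurrence_vector("great"), cooccurrence_vector("terrible"))
# "great" should come out closer to "excellent" than to "terrible".
```

In the dissertation's approach the term-document matrix would instead be factored (e.g. by singular value decomposition) so that each document can be projected onto a small number of latent concepts and scored; the raw co-occurrence vectors above stand in for that reduced space only to keep the sketch short.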

    A theoretical and practical investigation of tools and techniques for the structuring of data and for modelling its behaviour

    This thesis is about data and behaviour modelling for information system development. It has been sponsored at different times by two specialist consultancies: CACI Inc International and James Martin Associates. Initially I found problem areas in the field of system development by interviewing practitioners and by consultancy. These initial problem areas were whittled down to: action modelling, entity model clustering and a diagrammer. Action modelling is the modelling of detailed data behaviour using the same structuring concepts as data modelling. It was developed because of a lack of such analysis in systems development. Entity model clustering is about aggregating the entity types in a large entity model to abstract the essential meaning and to identify the most fundamental entity types. It was developed because of a need to summarise large entity relationship models for usability and comprehension. It has been used widely and has many benefits. A parallelism between data and activity modelling was developed as a result of the research into action modelling and entity model clustering. It needed the concepts derived from the other two areas to finally complete the theory, summarised as: every data modelling concept and structure has an exact equivalent in activity modelling and vice-versa. This theory gives a wholeness and completeness to modelling data and activity. A diagrammer was produced for the automatic production and manipulation of entity relationship diagrams from a base description. These diagrams are the basic tool of the data modeller; automating them saves time and potentially raises their accuracy. The main research problem was that few companies were willing to be guinea pigs, so most of the research was developed by thought 'games'. Most areas have been published in refereed publications as this was seen as the best way of establishing their academic credibility. 
All areas have been incorporated into or had an impact on James Martin Associates and their methodology, Information Engineering, which provides a framework for coordinating the research areas. This research can best be summarised as an attempt to find techniques for improving the systems analysis process.
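As a rough illustration of what entity model clustering does (not the thesis's actual algorithm, which is not reproduced in the abstract), one naive rule is to treat highly connected entity types as fundamental and attach each remaining type to its best-connected neighbour. The entity-relationship model below is invented for the example.

```python
from collections import defaultdict

# Toy entity-relationship model: each pair is a relationship between two
# entity types. (Illustrative data and clustering rule; the thesis's
# actual criteria for aggregating entity types are not reproduced here.)
relationships = [
    ("Customer", "Order"), ("Order", "OrderLine"),
    ("Product", "OrderLine"), ("Product", "Supplier"),
    ("Customer", "Address"),
]

# Degree = number of relationships an entity type participates in.
degree = defaultdict(int)
neighbours = defaultdict(set)
for a, b in relationships:
    degree[a] += 1
    degree[b] += 1
    neighbours[a].add(b)
    neighbours[b].add(a)

# One crude rule: entity types in two or more relationships count as
# "fundamental" cluster centres; every other entity type is attached to
# its most highly connected neighbour.
fundamental = {e for e, d in degree.items() if d >= 2}
cluster = {}
for e in degree:
    cluster[e] = e if e in fundamental else max(neighbours[e], key=lambda n: degree[n])
```

On this toy model the rule keeps Customer, Order, OrderLine and Product as centres and folds Supplier and Address into their neighbouring clusters, which is the kind of summarisation of a large entity model the abstract describes.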