6 research outputs found

    Semantic Modelling of e-Solutions Using a View Formalism with Conceptual and Logical Extensions

    In industrial informatics, there is a need to model and design views at a higher level of abstraction. Since classical view definitions are available only at the query or instance level, modelling and maintaining such views for complex enterprise information systems (EIS) is a challenging task. Further, the introduction of semi-structured data (namely XML) and its rapid adoption by commercial and industrial systems has increased the complexity of view design and specification. To address this issue, in this paper we present: (a) a layered view model for XML, (b) a design methodology for such views, and (c) some real-world industrial applications of the view model. The XML view formalism is defined at the conceptual level, and the design methodology is based on XML semantic (XSemantic) nets, a high-level object-oriented (OO) modelling language for XML domains.

    Modeling views in the layered view model for XML using UML

    In data engineering, view formalisms provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so it proves difficult to support views for semi-structured data models. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is three-fold: first, we propose an approach that separates the implementation and conceptual aspects of views, providing a clear separation of concerns and thus allowing the analysis and design of views to be separated from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Finally, to validate and apply the LVM concepts, methods, and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
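The abstract does not give concrete view syntax; as a rough illustration of the classical query-level view definitions that the LVM abstracts away from, consider a view expressed directly as a query over the stored XML document (the element names and threshold are invented for illustration):

```python
# A minimal query-level XML view: the view definition is just a query
# expression evaluated against the stored document, tightly coupled to the
# document's syntax. Element names here are hypothetical.
import xml.etree.ElementTree as ET

source = ET.fromstring("""
<orders>
  <order id="1"><customer>Acme</customer><total>120</total></order>
  <order id="2"><customer>Globex</customer><total>80</total></order>
</orders>
""")

def high_value_orders(doc, threshold=100):
    # Any schema change (renaming <total>, say) silently breaks this view,
    # which is the coupling problem a conceptual-level model addresses.
    return [o.get("id") for o in doc.findall("./order")
            if int(o.findtext("total")) > threshold]

print(high_value_orders(source))  # ['1']
```

The fragility of such a definition under schema evolution is exactly what motivates lifting view design to the conceptual level.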

    Successful Ethical Decision-Making Practices from the Professional Accountants' Perspective

    Unethical behavior includes all decisions and actions counterproductive to an organization's mission and can cause irrevocable damage to the organization's professional reputation. The Securities and Exchange Commission reported 807 ethical violations in 2015. This study was underpinned by ethical leadership theory, which emphasizes leadership decision making based on fair and just practices for all involved parties. The purpose of this qualitative multiple-case study was to explore the ethical decision-making best practices that not-for-profit accounting managers in the Washington, DC, metropolitan area needed to strengthen the ethical decision-making process in their organizations. Data were collected through semistructured interviews with 5 participants who were accounting leaders of not-for-profit organizations. The analysis of data involved coding techniques, while member checking ensured confirmability of participant responses. Three themes emerged from the analysis as the most effective in fostering an ethical climate within the organizations: the importance of leveraging internal controls, staff education on ethical decision making, and the role of leadership in fostering an ethical culture. The findings from this study may contribute to social change by providing leaders with strategies to reduce the occurrence of fraud within organizations. The beneficiaries of this research may include not-for-profit leaders, accounting professionals, and business practitioners, whose goals are to aid companies in furthering their missions, keep organizations operational, and apply ethical decision making.

    Writing, philology, and digital variants


    Informatisation d'une forme écrite de la langue des signes française (Computerization of a written form of French Sign Language)

    This thesis studies how to computerize a written form of sign language. The state of the art reviews writing systems, character encoding, the localization of operating systems through the example of Linux, signed languages, and existing computer support for signed languages. This is followed by an on-site observation of a class of children in a Brazilian school where existing software is in use. Building on these elements, a new approach is proposed, based on segmenting the problem into functional layers that can be integrated into the operating system. The operation of these layers is studied, which brings out problems of variability, and an algorithm to manage this variability is proposed. The question of encoding is then examined in more detail, comparing the methods that can be applied and proposing a middle-ground approach between the speed and size requirements, given the constraints imposed by Unicode support.
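The speed-versus-size trade-off weighed in the encoding comparison can be illustrated with a toy example. The symbol inventory and code assignments below are invented, not taken from the thesis; they only show why a fixed-width scheme decodes faster while a UTF-8-style variable-width scheme saves space for frequent symbols:

```python
# Fixed-width vs. variable-width encoding of small integer code points.
# Fixed width: constant cost per symbol, trivial (fast) indexing.
# Variable width: frequent symbols (< 0x80) take 1 byte, others 2 bytes
# with a high flag bit, trading decode speed for size.

def encode_fixed(symbols, width=2):
    # every symbol costs `width` bytes regardless of frequency
    return b"".join(s.to_bytes(width, "big") for s in symbols)

def encode_variable(symbols):
    out = bytearray()
    for s in symbols:
        if s < 0x80:
            out.append(s)                  # 1-byte form
        else:
            out.append(0x80 | (s >> 8))    # 2-byte form, flag bit set
            out.append(s & 0xFF)
    return bytes(out)

msg = [5, 12, 300, 7]                      # mostly small code points
print(len(encode_fixed(msg)))              # 8
print(len(encode_variable(msg)))           # 5
```

A middle-ground design, as the thesis suggests, would pick the boundary between the short and long forms to match the frequency distribution of the sign-writing symbols while remaining compatible with what Unicode imposes.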

    Analyse von IT-Anwendungen mittels Zeitvariation (Analysis of IT applications by means of time variation)

    Performance problems occur frequently in practice in IT applications, despite steadily increasing hardware performance and a variety of approaches to engineering performant software across the software life cycle. Model-based performance analyses make it possible to prevent performance problems early, on the basis of design artifacts. For existing or partially implemented IT applications, performance problems are typically addressed by hardware scaling or code optimization. Both approaches have disadvantages: model-based approaches are not generally used because of the high level of expertise they require, and after-the-fact optimization is an unsystematic and uncoordinated process. This dissertation proposes a new approach to performance analysis for a subsequent optimization. Performance interdependencies in the IT application are identified by means of an experiment. The basis of the experiment, the analysis instrument, is a targeted temporal variation of the start time, end time, or duration of processes in the IT application. This approach can be automated and applied in a structured way in the software development process without a steep learning curve. Using the Turing machine, it is proved that the temporal variation applied by the analysis instrument preserves the correctness of sequential computations; this result is extended to concurrent systems and discussed using the parallel register machine. With this practice-oriented machine model it is shown that the interdependencies discovered by the analysis instrument identify optimization candidates. A special experimental environment, in which the processes of a system consisting of software and hardware can be varied programmably, is realized by means of a virtualization solution, and techniques for applying the analysis instrument through instrumentation are given. A method for determining the minimum hardware requirements of IT applications is presented and exemplified in the experimental environment using two scenarios and the Android operating system. Several procedures for deriving the system's optimization candidates from the observations of the experiment are presented, classified, and evaluated. The identification of optimization candidates and optimization potential is demonstrated in practice on illustrative scenarios and several large IT applications using these methods. As a consistent extension, a test method based on the analysis instrument is given for validating a system against errors that are not deterministically reproducible, arising from insufficient synchronization mechanisms (e.g. races) or from temporal behaviour (e.g. Heisenbugs, aging-related faults).
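The core idea of the analysis instrument, making timing-dependent faults observable by deliberately varying when operations run, can be sketched as follows. The shared-counter example and the injected delay are illustrative assumptions, not the dissertation's actual instrumentation:

```python
# Sketch of time variation as a race detector: two threads perform a
# non-atomic read-modify-write on a shared counter. An injected delay
# between the read and the write widens the race window so the lost
# update becomes reproducible instead of a rare Heisenbug.
import threading
import time

counter = 0

def increment(delay):
    global counter
    local = counter          # read
    time.sleep(delay)        # injected time variation (the "experiment")
    counter = local + 1      # write back, non-atomic with the read

def run(delay):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(delay,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(0.0))   # usually 2: without variation the race rarely manifests
print(run(0.05))  # 1: both threads read 0 before either writes, exposing the race
```

Varying the delay systematically, rather than waiting for the fault to occur by chance, is what makes such non-deterministically reproducible errors testable.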