11 research outputs found

    Internet-based solutions to support distributed manufacturing

    With globalisation and constant change in the marketplace, enterprises are adapting to face new challenges. Strategic corporate alliances to share knowledge, expertise and resources therefore represent an advantage in an increasingly competitive world. This has led to the integration of companies, customers, suppliers and partners through networked environments. This thesis presents three novel solutions in the tooling area, developed for Seco Tools Ltd, UK. These approaches implement a proposed distributed computing architecture that uses Internet technologies to assist geographically dispersed tooling engineers in process planning tasks. The systems are summarised as follows. TTS is a Web-based system to support engineers and technical staff in the task of providing technical advice to clients. Seco sales engineers access the system from remote machining sites and submit, retrieve and update the required tooling data held in databases at the company headquarters. The communication platform used for this system provides an effective mechanism to share information nationwide. The system implements efficient methods, such as data relaxation techniques, confidence scores and importance levels of attributes, to help the user find the closest solutions when specific requirements are not fully matched in the database. Cluster-F has been developed to assist engineers and clients in the assessment of cutting parameters for the tooling process. In this approach the Internet acts as a vehicle to transport data between users and the database. Cluster-F is a knowledge discovery (KD) approach that makes use of clustering and fuzzy set techniques. The novel proposal in this system is the use of fuzzy set concepts to obtain the proximity matrix that guides the classification of the data. Hierarchical clustering methods are then applied to these data to link the closest objects. A general KD methodology applying rough set concepts is proposed in this research.
This covers aspects of data redundancy, identification of relevant attributes, detection of data inconsistency, and generation of knowledge rules. R-sets, the third proposed solution, has been developed using this KD methodology. The system evaluates the variables of the tooling database to analyse known and unknown relationships in the data generated after the execution of technical trials. The aim is to discover cause-effect patterns among selected attributes in the database. A fourth system, DBManager, was also developed to administer the system's user accounts, sales engineers' accounts and the monitoring of tool-trial data. It supports the implementation of the proposed distributed architecture and the maintenance of users' accounts for access restrictions to the systems running under this architecture.
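The Cluster-F pipeline described above (a fuzzy-set-derived proximity matrix followed by hierarchical clustering of the closest objects) can be sketched as follows. The record values and the membership function (1 minus the range-normalised attribute distance, averaged over attributes) are assumptions for illustration; the thesis does not specify the exact formula or data.

```python
# Hypothetical cutting-parameter records (feed, speed); values are invented.
records = [(0.10, 120.0), (0.12, 125.0), (0.30, 200.0), (0.32, 210.0)]

def fuzzy_similarity(a, b, ranges):
    """One plausible fuzzy membership function: 1 - normalised attribute
    distance, averaged over attributes (an assumption, not the thesis's)."""
    return sum(1.0 - abs(x - y) / r for x, y, r in zip(a, b, ranges)) / len(a)

# Per-attribute value ranges, used to normalise distances into [0, 1].
ranges = [max(c) - min(c) for c in zip(*records)]

# Proximity (distance) matrix derived from the fuzzy similarities.
n = len(records)
dist = {(i, j): 1.0 - fuzzy_similarity(records[i], records[j], ranges)
        for i in range(n) for j in range(i + 1, n)}

def single_linkage(dist, n, k):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest, until k clusters remain."""
    clusters = [{i} for i in range(n)]
    while len(clusters) > k:
        a, b = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: min(dist[(min(i, j), max(i, j))]
                               for i in clusters[ab[0]] for j in clusters[ab[1]]),
        )
        clusters[a] |= clusters.pop(b)
    return clusters

print(single_linkage(dist, n, 2))  # the two low-feed and two high-feed records pair up
```

Single linkage is used here because it matches the stated goal of linking the closest objects first; other linkage criteria would follow the same structure.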

    Investigation into standardising the graphical and operator input device modules for tactical command and control man-machine interfaces

    The operating environment of a tactical command and control system is a highly tense one in which the operator needs to perform complex tasks with minimum confusion and obtain an instant response from the system. Since many of the systems designed for these environments are similar in nature with regard to the user interface, a need has arisen to standardise certain elements of these systems. This report looks specifically at standardising certain graphical display elements and operator input device interfaces. It investigates the problem from a systems design level, identifying the elements required and their associated functions, discussing the results of work already undertaken in this field, and making recommendations on the use of the elements. The main objective of standardising the Man-Machine Interface (MMI) design elements is to make the code easily transferable between different hardware platforms. To transfer the code, one would ideally change only the interface code for the new platform, in particular the interface to a different set of operator input devices and a different type of graphics card. Various related topics are discussed, including a description of MMI design, definitions of tactical command and control terms, and a look at code reusability, rapid prototyping of systems, and object-oriented design.
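The porting goal described above, changing only the interface code for a new platform, is the classic device-abstraction pattern. A minimal sketch, with all class and method names invented for illustration:

```python
from abc import ABC, abstractmethod

class OperatorInputDevice(ABC):
    """Standardised interface (hypothetical): application code talks only to
    this class, so porting to new hardware means writing one new subclass."""
    @abstractmethod
    def read_position(self):
        """Return the current pointer position as an (x, y) tuple."""

class TrackballDevice(OperatorInputDevice):
    """Platform-specific half; real hardware access would live here."""
    def read_position(self):
        return (0.0, 0.0)  # stub value standing in for a driver read

def update_cursor(device: OperatorInputDevice):
    # Application logic is written once, against the abstract interface only.
    return device.read_position()

print(update_cursor(TrackballDevice()))  # (0.0, 0.0)
```

Swapping in a different input device then touches only the subclass, leaving the display and application code untouched, which is the transferability objective stated in the report.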

    Negotiating Software: Redistributing Control at Work and on the Web

    Since the 1970s, digital technologies have increasingly determined who gets what, when, and how; and the workings of informational capitalism have concentrated control over those technologies into the hands of just a few private corporations. The normative stance of this dissertation is that control over software should be distributed and subject to processes of negotiation: consensus-based decision making oriented towards achieving collective benefits. It explores the boundaries of negotiating software by trying to effect a change in two different kinds of software using two different approaches. The first approach targets application software – the paradigmatic model of commodified, turn-key computational media – in the context of knowledge work – labour that involves the creation and distribution of information through non-routine, creative, and abstract thinking. It tries to effect change by developing negotiable software as an alternative to the autocratic application model: software that embeds support for distributed control in and over its design. These systems successfully demonstrate the technological feasibility of this approach, but also the limitations of design as a solution to systemic power asymmetries. The second approach targets consent management platforms – pop-up interfaces on the web that capture visitors' consent for data processing – in the context of the European Union's data protection regulation. It tries to effect change by employing negotiation software: software that supports existing processes of negotiation in complex systems, i.e., regulatory oversight and the exercise of digital rights. This approach resulted in a considerable increase in data protection compliance on Danish websites, but showed that sustainable enforcement using digital tools also requires design changes to data processing technologies.
Both approaches to effecting software change – making software negotiable and using software in negotiations – revealed the drawbacks of individualistic strategies. Ultimately, the capacity of the liberal subject to stand up against corporate power is limited, and more collective approaches to software negotiation need to be developed, whether in making changes to designs or in leveraging regulation.

    Casting the runes and parsing them: Unpacking software mediation, interactions, and computational literacy in non-conventional programming configurations

    This dissertation is an investigation of computational literacy and how it is shaped by software use and mediation. Early visionaries such as Perlis and Naur recognized the need for everyone to learn computing, but these ideals are yet to be fully realized. Arguably, a narrow focus on computational thinking is the more popular approach in contemporary computing education research and policymaking. Another branch of researchers, in particular Kay and diSessa, has argued for the need to provide the right media for computing. In line with them, I argue that a more materially grounded literacy is a necessary step. By extension, this means providing a better understanding of how these material conditions (e.g., software) influence the development of computational literacy. Across eight studies, I have employed a mix of qualitative methods and constructive design research. The qualitative methods fall under ethnography, technography, and retrospective autoethnography. The empirically grounded research draws on interviews with five humanities students, interviews and observations of four biomolecular scientists, interviews with 12 experienced programmers, and a workshop and observations of 12 experienced knitters. These interviews focused on their experiences with programming, their ability to use and appropriate unfamiliar software, and their feelings of mastery and disempowerment. This is supplemented with technographic investigations of computational media, literate computing environments, and programming interfaces that focus on the mediating qualities of software for programming, such as interaction, semiotics, ethics, and transformation. My work has shown the importance of the material foundations of computational literacy in these contexts. More specifically, the material conditions affect this literacy in multiple ways, such as the dissonance between software visions, people's expectations, and the practical implementations.
People experience disempowerment and crises and resolve them through various means, such as enrolling a more capable peer or incorporating supporting artifacts. The dissertation further presents computational media as a promising yet fragile software paradigm and shows how this paradigm blends use and development, inscribes particular user roles, and balances between evoking trust and alienation in its users. Finally, by emphasizing a theoretical lens of self-concept in the context of computational literacy, the dissertation provides a view of literacy as a product of continuous experiences and confirmations from people's social and material lifeworlds. These findings should resonate with scholars of new media, human-computer interaction, and computing education, as the dissertation explores the complex mutual relationships between people's cultural, social, and material environments, as well as their ongoing and sometimes contradictory ways of seeing themselves. Computational literacy can be emancipatory for everyone, not just for computer scientists, yet the development of literacy demands adequate conditions. This dissertation is an argument for the importance of those conditions.

    Informationssysteme auf der Basis aktiver Hypertextdokumente

    This thesis deals with the implementation of information systems built with Web techniques such as the Hypertext Markup Language (HTML), the Hypertext Transfer Protocol (HTTP) or the Extensible Markup Language (XML). Web-based information systems are increasingly used to implement complete applications for handling business processes. The starting point for this work is the lack of formal models with which such systems can be realised, combined with the emergence of new application areas such as business-to-business coupling via Web-based systems. In the course of the thesis, existing systems are analysed in order to derive the requirements for a model for describing and realising Web-based applications. The resulting model places the information that is exchanged and processed in such applications in the foreground, and uses hypertext documents as its principal means of description; extended with active components, these become active hypertext documents (AHDs). The model for active hypertext documents (AHDM) comprises an information model describing the structure of active hypertext documents, a communication model governing the exchange of information, an object model defining the interplay of the active parts of an AHD, and a runtime model for the actual execution of the active parts. Active hypertext documents are realised as XML documents which, according to the information model, contain functions and variables in addition to the original payload data. Besides the model, a procedure is described that is intended to ease the use of active hypertext documents.
The practicality of the model is demonstrated with example applications ranging from simple, stand-alone applications to cooperative, networked applications with mobile documents. The tools required to work with active hypertext documents are also described.
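The information model above describes an AHD as an XML document that carries payload data together with embedded variables and functions. A minimal sketch of what such a document might look like; the element names (`ahd`, `data`, `variables`, `functions`) are invented for illustration and are not the thesis's actual schema, and the surrounding Python only parses the document.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal active hypertext document: payload plus an embedded
# variable and function, as the AHD information model describes.
doc = """<ahd>
  <data><title>Order 4711</title></data>
  <variables><var name="status">open</var></variables>
  <functions><function name="approve">status = 'approved'</function></functions>
</ahd>"""

root = ET.fromstring(doc)
# Read an embedded state variable alongside the ordinary payload.
status = root.find("./variables/var[@name='status']")
print(root.find("./data/title").text, "-", status.text)  # Order 4711 - open
```

In the model, a runtime component would execute the embedded function bodies and update the variables; here that active part is only represented as inert text.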

    Digitale Editionsformen. Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels. Teil 3: Textbegriffe und Recodierung. [Finale Print-Fassung]

    A scholarly edition aims at the reliable reproduction of the text. But what actually is this text? On closer inspection, only an extended concept of text and a new pluralistic model of text allow a description of all the textual phenomena that a scholarly edition has to take into account. Our technologies and methodologies of text encoding, above all markup languages in general and the encoding recommendations of the Text Encoding Initiative (TEI) in particular, can likewise be described more precisely under this template and characterised with regard to their limits. Finally, the pluralistic model of text also permits a more precise theoretical grounding of those processes which, as "transcription", form the basis and heart of every scholarly edition.
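One concrete way the TEI recommendations mentioned above surface a pluralistic view of text is by letting an edition record multiple readings of the same phenomenon side by side. A minimal sketch using the standard TEI `choice`/`abbr`/`expan` elements; the fragment is simplified (no TEI namespace or header), and the Python code merely parses it.

```python
import xml.etree.ElementTree as ET

# Simplified TEI-style encoding: the abbreviation as it appears in the
# document and its editorial expansion, kept together in one <choice>.
fragment = "<p>The <choice><abbr>Dr.</abbr><expan>Doctor</expan></choice> arrived.</p>"

p = ET.fromstring(fragment)
choice = p.find("choice")
print(choice.find("abbr").text, "/", choice.find("expan").text)  # Dr. / Doctor
```

A diplomatic rendering of the edition would pick the `abbr` reading, a reading version the `expan` reading; both "texts" coexist in one encoding.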

    Digitale Editionsformen. Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels. Teil 3: Textbegriffe und Recodierung. [Preprint-Fassung]

    A scholarly edition aims at the reliable reproduction of the text. But what actually is this text? On closer inspection, only an extended concept of text and a new pluralistic model of text allow a description of all the textual phenomena that a scholarly edition has to take into account. Our technologies and methodologies of text encoding, above all markup languages in general and the encoding recommendations of the Text Encoding Initiative (TEI) in particular, can likewise be described more precisely under this template and characterised with regard to their limits. Finally, the pluralistic model of text also permits a more precise theoretical grounding of those processes which, as "transcription", form the basis and heart of every scholarly edition.