11 research outputs found
Internet-based solutions to support distributed manufacturing
With globalisation and constant changes in the marketplace, enterprises are adapting themselves to face new challenges. Strategic corporate alliances to share knowledge, expertise and resources therefore represent an advantage in an increasingly competitive world. This has led to the integration of companies, customers, suppliers and partners using networked environments. This thesis presents three novel solutions in the tooling area, developed for Seco Tools Ltd, UK. These approaches implement a proposed distributed computing architecture using Internet technologies to assist geographically dispersed tooling engineers in process planning tasks. The systems are summarised as follows. TTS is a Web-based system to support engineers and technical staff in the task of providing technical advice to clients. Seco sales engineers access the system from remote machining sites and submit, retrieve and update the required tooling data located in databases at the company headquarters. The communication platform used for this system provides an effective mechanism to share information nationwide. The system implements efficient methods, such as data relaxation techniques, confidence scores and importance levels of attributes, to help the user find the closest solutions when specific requirements are not fully matched in the database. Cluster-F has been developed to assist engineers and clients in the assessment of cutting parameters for the tooling process. In this approach the Internet acts as a vehicle to transport data between users and the database. Cluster-F is a knowledge discovery (KD) approach that makes use of clustering and fuzzy set techniques. The novel proposal in this system is the use of fuzzy set concepts to obtain the proximity matrix that guides the classification of the data; hierarchical clustering methods are then applied to these data to link the closest objects. A general KD methodology applying rough set concepts is also proposed in this research.
This covers aspects of data redundancy, identification of relevant attributes, detection of data inconsistency, and generation of knowledge rules. R-sets, the third proposed solution, has been developed using this KD methodology. The system evaluates the variables of the tooling database to analyse known and unknown relationships in the data generated after the execution of technical trials. The aim is to discover cause-effect patterns among selected attributes contained in the database. A fourth system, DBManager, was also developed; it administers the system's user accounts, sales engineers' accounts and the monitoring of tool trial data. It supports the implementation of the proposed distributed architecture and the maintenance of user accounts for the access restrictions of the systems running under this architecture.
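The Cluster-F idea of a fuzzy proximity matrix followed by hierarchical clustering can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the attribute names, data values, similarity function and threshold are all invented for demonstration.

```python
# Sketch of the Cluster-F approach: fuzzy set membership yields a
# proximity (similarity) matrix, then hierarchical single-linkage
# clustering merges the closest objects. All values are illustrative.

def fuzzy_similarity(a, b, ranges):
    """Mean membership degree: 1 minus the normalised distance per attribute."""
    return sum(1 - abs(x - y) / r for x, y, r in zip(a, b, ranges)) / len(a)

def proximity_matrix(records, ranges):
    n = len(records)
    return [[fuzzy_similarity(records[i], records[j], ranges)
             for j in range(n)] for i in range(n)]

def single_linkage(records, ranges, threshold):
    """Merge clusters while the closest pair exceeds the similarity threshold."""
    sim = proximity_matrix(records, ranges)
    clusters = [[i] for i in range(len(records))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: similarity of the closest member pair.
                s = max(sim[a][b] for a in clusters[i] for b in clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

# Invented cutting-parameter records: (cutting speed, feed * 1000).
data = [(200, 150), (205, 155), (400, 300), (395, 310)]
ranges = (300, 200)  # assumed attribute ranges used for normalisation
print(single_linkage(data, ranges, threshold=0.8))  # → [[0, 1], [2, 3]]
```

Records 0/1 and 2/3 are linked because their pairwise fuzzy similarity exceeds the threshold, while the two groups stay apart.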
Investigation into standardising the graphical and operator input device modules for tactical command and control man-machine interfaces
Includes bibliographical references. The operating environment of a Tactical Command and Control system is a highly tense one in which the operator needs to perform complex tasks with minimum confusion and obtain an instant response from the system. Since many of the systems designed for these environments are similar in nature with regard to the user interface, a need has arisen to standardise certain elements of these systems. This report looks specifically at standardising certain graphical display elements and operator input device interfaces. It investigates the problem at the systems design level, identifying the elements required and their associated functions, discussing the results of work already undertaken in this field, and making recommendations on the use of the elements. The main objective of standardising the Man-Machine Interface (MMI) design elements is to make the code easily transferable between different hardware platforms. To transfer the code, one would ideally change only the interface code for the new platform, in particular the interface to a different set of operator input devices and a different type of graphics card. Various topics related to the standardisation process are discussed, including a description of MMI design, some definitions of tactical command and control terms, and a look at code reusability, rapid prototyping of systems, and object-oriented design.
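The portability goal described above, changing only the interface code when moving to new input devices or graphics hardware, is essentially an abstraction-layer design. A hedged sketch, with invented class and method names, might look like this:

```python
# Sketch of the standardisation idea: keep the operator input device and
# the graphics display behind abstract interfaces, so porting to new
# hardware means replacing only the concrete bindings. Names are invented.
from abc import ABC, abstractmethod

class InputDevice(ABC):
    @abstractmethod
    def poll(self):
        """Return (x, y, selected): cursor position and selection state."""

class GraphicsDisplay(ABC):
    @abstractmethod
    def draw_symbol(self, symbol, x, y):
        """Render a tactical display symbol at screen coordinates."""

class Trackball(InputDevice):          # one platform-specific binding
    def poll(self):
        return (0, 0, False)           # stub: a real binding reads the driver

class VectorDisplay(GraphicsDisplay):  # one platform-specific binding
    def draw_symbol(self, symbol, x, y):
        print(f"draw {symbol} at ({x}, {y})")

def update(display, device):
    # Application code depends only on the abstract interfaces, so it
    # survives a change of input device or graphics card unmodified.
    x, y, selected = device.poll()
    if not selected:
        display.draw_symbol("cursor", x, y)

update(VectorDisplay(), Trackball())   # prints: draw cursor at (0, 0)
```

Only `Trackball` and `VectorDisplay` would be rewritten for a new platform; `update` and the rest of the application stay untouched.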
Negotiating Software: Redistributing Control at Work and on the Web
Since the 1970s, digital technologies increasingly determine who gets what, when, and how; and the workings of informational capitalism have concentrated control over those technologies into the hands of just a few private corporations. The normative stance of this dissertation is that control over software should be distributed and subject to processes of negotiation: consensus-based decision making oriented towards achieving collective benefits. It explores the boundaries of negotiating software by trying to effect a change in two different kinds of software using two different approaches.
The first approach targets application software, the paradigmatic model of commodified, turn-key computational media, in the context of knowledge work: labour that involves the creation and distribution of information through non-routine, creative, and abstract thinking. It tries to effect change by developing negotiable software as an alternative to the autocratic application model: software that embeds support for distributed control in and over its design. These systems successfully demonstrate the technological feasibility of this approach, but also the limitations of design as a solution to systemic power asymmetries.
The second approach targets consent management platforms, the pop-up interfaces on the web that capture visitors' consent for data processing, in the context of the European Union's data protection regulation. It tries to effect change by employing negotiation software: software that supports existing processes of negotiation in complex systems, i.e., regulatory oversight and the exercise of digital rights. This approach resulted in a considerable increase in data protection compliance on Danish websites, but showed that sustainable enforcement using digital tools also requires design changes to data processing technologies.
Both approaches to effecting software change, making software negotiable and using software in negotiations, revealed the drawbacks of individualistic strategies. Ultimately, the capacity of the liberal subject to stand up against corporate power is limited, and more collective approaches to software negotiation need to be developed, whether when making changes to designs or leveraging regulation.
Casting the runes and parsing them: Unpacking software mediation, interactions, and computational literacy in non-conventional programming configurations
Abstract
This dissertation is an investigation of computational literacy and how it is shaped by software use and mediation. Early visionaries such as Perlis and Naur recognized the need for everyone to learn computing, but these ideals are yet to be fully realized. Arguably, a narrow focus on computational thinking is the more popular approach in contemporary computing education research and policymaking. Another branch of researchers, in particular Kay and diSessa, has argued for the need to provide the right media for computing. In line with them, I argue that a more materially grounded literacy is a necessary step. By extension, this means providing a better understanding of how these material conditions (e.g., software) influence the development of computational literacy.
Through eight studies, I have employed a mix of qualitative methods and constructive design research. The qualitative methods fall under ethnography, technography, and retrospective autoethnography. The empirically grounded research draws from interviews with five humanities students, interviews and observations of four biomolecular scientists, interviews with 12 experienced programmers, and a workshop and observations of 12 experienced knitters. These interviews focused on their experiences with programming, their ability to use and appropriate unfamiliar software, and their feelings of mastery and disempowerment. This is supplemented with technographic investigations of computational media, literate computing environments, and programming interfaces that focus on the mediating qualities of software for programming, such as interaction, semiotics, ethics, and transformation.
My work has shown the importance of the material foundations of computational literacy in these contexts. More specifically, the material conditions affect this literacy in multiple ways such as the dissonance between software visions, people's expectations, and the practical implementations. People experience disempowerment and crises and resolve those through various means such as enrolling a more capable peer or incorporating supporting artifacts. The dissertation further presents computational media as a promising, yet fragile software paradigm and shows how this paradigm blends use and development, inscribes particular user roles, and balances between evoking trust and alienation in its users. Finally, by emphasizing a theoretical lens of self-concept in the context of computational literacy, the dissertation provides a view of literacy as a product of continuous experiences and confirmations from people's social and material lifeworlds.
These findings should resonate with scholars of new media, human-computer interaction, and computing education, as the dissertation explores the complex mutual relationships between people's cultural, social, and material environments as well as their ongoing and sometimes contradictory ways of seeing themselves. Computational literacy can be emancipatory for everyone, not just for computer scientists, yet the development of literacy demands adequate conditions. This dissertation is an argument for the importance of those conditions.
Informationssysteme auf der Basis aktiver Hypertextdokumente
This work deals with the implementation of information systems built with Web techniques such as the Hypertext Markup Language (HTML), the Hypertext Transfer Protocol (HTTP), or the Extensible Markup Language (XML). Web-based information systems are increasingly used to implement complete applications for handling business processes. The starting point of the work is the lack of formal models with which such systems can be realised, combined with the emergence of new application areas such as business-to-business coupling via Web-based systems. In the course of the work, existing systems are analysed in order to derive the requirements for a model for describing and realising Web-based applications. The resulting model puts the information that is exchanged and processed in such applications in the foreground and uses hypertext documents as its principal means of description; extended with active components, these become active hypertext documents (AHDs). The model for active hypertext documents (AHDM) comprises an information model describing the structure of active hypertext documents, a communication model governing the exchange of information, an object model defining the interplay of the active parts of an AHD, and a runtime model for the actual execution of the active parts. Active hypertext documents are realised as XML documents that, in accordance with the information model, contain functions and variables in addition to the original payload data. Besides the model, a procedure is described that is intended to ease the use of active hypertext documents. The practicability of the model is demonstrated with example applications ranging from simple, stand-alone applications to cooperative, networked applications with mobile documents. The tools required to use active hypertext documents are also described.
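The core structural idea, an XML document that carries functions and variables alongside its payload data, can be sketched as follows. The element names and the tiny example document are invented for illustration; the thesis's actual information model will differ in detail.

```python
# Sketch of an active hypertext document (AHD): one XML document that
# carries payload data plus variables and functions, so the "active"
# parts travel inside the document itself. Element names are invented.
import xml.etree.ElementTree as ET

AHD = """
<ahd>
  <data><item name="order-id">4711</item></data>
  <variables><var name="status">open</var></variables>
  <functions>
    <function name="close-order">status = 'closed'</function>
  </functions>
</ahd>
"""

doc = ET.fromstring(AHD)

# The runtime model would interpret the function bodies; here we only
# show that data, state and behaviour share one document.
status = doc.find("./variables/var[@name='status']")
print(status.text)  # → open
for fn in doc.findall("./functions/function"):
    print(fn.get("name"), "->", fn.text)
```

A cooperating peer receiving this document gets the state and the behaviour together, which is what makes scenarios like mobile documents possible.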
Digitale Editionsformen. Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels. Teil 3: Textbegriffe und Recodierung. [Finale Print-Fassung]
A scholarly edition aims at the reliable reproduction of a text. But what actually is this text? On closer inspection, only an extended concept of text and a new pluralistic model of text allow a description of all the textual phenomena that a scholarly edition must take into account. Our technologies and methodologies of text encoding, above all markup languages in general and the encoding recommendations of the Text Encoding Initiative (TEI) in particular, can also be described more precisely under this template and characterised with respect to their limits. Finally, the pluralistic model of text permits a more precise theoretical grounding of those processes which, as "transcription", form the foundation and core of every scholarly edition.
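As a small illustration of the kind of markup the TEI recommends, and of why transcription involves more than copying characters, a fragment might encode two readings of the same word at once (a minimal sketch using the standard TEI elements `choice`, `abbr`, and `expan`):

```xml
<p>The source reads
  <choice>
    <abbr>Dr</abbr>
    <expan>Doctor</expan>
  </choice>
  Smith, recording both the diplomatic and the edited form.</p>
```

A pluralistic model of text makes the theoretical status of such parallel encodings explicit: each reading answers to a different concept of what the text is.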
Digitale Editionsformen. Zum Umgang mit der Überlieferung unter den Bedingungen des Medienwandels. Teil 3: Textbegriffe und Recodierung. [Preprint-Fassung]
A scholarly edition aims at the reliable reproduction of a text. But what actually is this text? On closer inspection, only an extended concept of text and a new pluralistic model of text allow a description of all the textual phenomena that a scholarly edition must take into account. Our technologies and methodologies of text encoding, above all markup languages in general and the encoding recommendations of the Text Encoding Initiative (TEI) in particular, can also be described more precisely under this template and characterised with respect to their limits. Finally, the pluralistic model of text permits a more precise theoretical grounding of those processes which, as "transcription", form the foundation and core of every scholarly edition.