45 research outputs found

    Citizen Science, Fall/Winter 2016, Issue 33


    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units, coupled with efficient storage, communication, and visualisation systems, to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures

    Information is power, and that power has been largely enclosed by a handful of information conglomerates. The logic of the surveillance-driven information economy demands systems for handling mass quantities of heterogeneous data, increasingly in the form of knowledge graphs. An archaeology of knowledge graphs and their mutation from the liberatory aspirations of the semantic web gives us an underexplored lens for understanding contemporary information systems. I explore how the ideology of cloud systems steers two projects from the NIH and NSF, intended to build information infrastructures for the public good, toward inevitable corporate capture, facilitating the development of a new kind of multilayered public/private surveillance system in the process. I argue that understanding technologies like large language models as interfaces to knowledge graphs is critical to understanding their role in a larger project of informational enclosure and concentration of power. I draw from multiple histories of liberatory information technologies to develop Vulgar Linked Data as an alternative to the Cloud Orthodoxy, resisting the colonial urge for universality in favor of vernacular expression in peer-to-peer systems.

    Inferring implicit relevance from physiological signals

    Ongoing growth in data availability and consumption means users are increasingly faced with the challenge of distilling relevant information from an abundance of noise. Overcoming this information overload can be particularly difficult in situations such as intelligence analysis, which involves subjectivity, ambiguity, or risky social implications. Highly automated solutions are often inadequate, so new methods are needed for augmenting existing analysis techniques to support user decision making. This project investigated the potential for deep learning to infer the occurrence of implicit relevance assessments from users' biometrics. Internal cognitive processes manifest involuntarily within physiological signals, and are often accompanied by 'gut feelings' of intuition. Quantifying unconscious mental processes during relevance appraisal may be a useful tool during decision making by offering an element of objectivity to an inherently subjective situation. Advances in wearable and non-contact sensors have made recording these signals more accessible, whilst advances in artificial intelligence and deep learning have enhanced the discovery of latent patterns within complex data. Together, these techniques might make it possible to transform tacit knowledge into codified knowledge which can be shared. A series of user studies recorded eye gaze movements, pupillary responses, electrodermal activity, heart rate variability, and skin temperature data from participants as they completed a binary relevance assessment task. Participants were asked to explicitly identify which of 40 short-text documents were relevant to an assigned topic. Investigations found this physiological data to contain detectable cues corresponding with relevance judgements. Random forests and artificial neural networks trained on features derived from the signals were able to produce inferences moderately correlated with the participants' explicit relevance decisions. Several deep learning algorithms trained on the entire physiological time series data were generally unable to surpass the performance of feature-based methods, instead producing inferences only weakly correlated with participants' explicit personal truths. Overall, pupillary responses, eye gaze movements, and electrodermal activity offered the most discriminative power, with additional physiological data providing diminishing or adverse returns. Finally, a conceptual design for a decision support system is used to discuss the social implications and practicalities of quantifying implicit relevance using deep learning techniques. Potential benefits included assisting with introspection and collaborative assessment; however, quantifying intrinsically unknowable concepts using personal data and abstruse artificial intelligence techniques was argued to pose incommensurate risks and challenges. Deep learning techniques therefore have the potential for inferring implicit relevance in information-rich environments, but are not yet fit for purpose. Several avenues worthy of further research are outlined.

    Personality Traits, States, and Social Cognition – in life and everyday life

    Does our own variability affect how we think about others? Do personality state changes involve more than ourselves? How do others affect our personality development? How does focusing on oneself affect thinking about others? This dissertation explores the many relationships between an individual's personality and their relation to and interaction with other people across multiple areas of personality psychology research. Before summarizing the four publications of this cumulative project, I explain my theory-driven approach and introduce the field of personality dynamics and processes. In particular, I focus on the concepts of personality traits, within-person variability, personality development, self-focus, egocentrism, and egocentric bias, often in light of their relevance for Theory of Mind (ToM). The first publication proposes a two-tier framework of how within-person variability can facilitate Theory of Mind by broadening and relativizing a person's egocentrism. The second publication introduces the terminology and statistical tools of dynamic systems theory to the investigation of personality state levels and presents possible use cases. The third publication introduces a classification system for differentiating between personal and collective life events in a systematic way that is sensitive to the different mechanisms by which both kinds of life events can affect personality development. The fourth publication presents evidence for a small but robust positive relationship between mindful self-focus and Theory of Mind. Finally, I reflect on the publications' contributions to the field and suggest three lines of research stemming from risk management, personality psychology, and neuroscience that could further inform research on within-person variability and personality development, as well as on egocentrism and Theory of Mind.

    Open Strategies for Innovation in the Public Sector: Challenges and Opportunities

    Collaboration with external partners provides a means of expanding a firm's knowledge base, decreasing product development timelines, increasing innovation, and providing competitive advantage. This thesis contributes to the research in open innovation and user innovation by qualitatively exploring these strategies in the context of the public sector. By examining nascent innovation endeavors in European and American cities, the thesis seeks to understand the underlying drivers of civic innovation, how civic organizations foster communities of collaborators and civic platforms, and how governments access tacit user information by leveraging context and technology to provide innovative solutions. An Integrated Ecosystem Approach is proposed, expanding current conceptualizations of business ecosystems. An emphasis on desorptive capacity in civic organizations is considered as a circumvention of lockout, due to civic deficits in absorptive capacity. The importance of innovation processes situated in real-world environments is examined in living labs, as compared to other methodologies. Finally, an enhanced use of technology as a tool for accessing tacit user information is proposed in the context of Open Public Policy Innovation.

    Multi-objective Optimization Methods for Allocation and Prediction

    In this thesis, we focus on two different aspects of auctions, employing techniques and methods from both operations research and computer science. _First,_ we study the allocation of tasks to agents at the end of an auction. Usually, tasks are allocated in a way that minimizes the total cost for the auctioneer. This allocation is optimal in a one-shot auction, but if the auction is repeated, it can have negative consequences for the results in the long run. We therefore consider a fair allocation, which costs slightly more in a one-shot auction but has positive effects on agents' participation levels and on the auctioneer's total cost in repeated auctions. _Second,_ we consider auction design. How an auction is set up, such as which tasks should be auctioned first or what the starting price should be, impacts the result. Usually there are experts who know what occurred in previous auctions and how a future auction should be designed to obtain the best results. However, historical auctions can contain so much information that experts overlook things. We use a combination of machine learning and optimization models to extract information from historical auctions, and we use this information to help design future auctions for better results.
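    The trade-off between a cost-minimal and a fair allocation can be illustrated with a toy assignment problem. This is a hedged sketch, not the thesis's actual model: the cost matrix is made up, and "fairness" is proxied here by minimizing the largest cost borne by any single agent before breaking ties by total cost.

```python
# Toy sketch (hypothetical data): cost-minimal vs. "fair" task allocation.
from itertools import permutations

costs = [  # costs[agent][task], one task per agent
    [0, 5, 6],
    [5, 0, 6],
    [6, 6, 10],
]
n = len(costs)

def total_cost(assign):
    """Total cost to the auctioneer; assign[agent] = task index."""
    return sum(costs[a][assign[a]] for a in range(n))

def max_cost(assign):
    """Worst cost borne by any single agent (a simple fairness proxy)."""
    return max(costs[a][assign[a]] for a in range(n))

# Cost-minimal allocation: optimal in a one-shot auction.
cheapest = min(permutations(range(n)), key=total_cost)

# "Fair" allocation: balance the worst individual cost first,
# then break ties by total cost.
fairest = min(permutations(range(n)),
              key=lambda s: (max_cost(s), total_cost(s)))

print(cheapest, total_cost(cheapest), max_cost(cheapest))  # (0, 1, 2) 10 10
print(fairest, total_cost(fairest), max_cost(fairest))     # (0, 2, 1) 12 6
```

    In this toy instance the fair allocation costs slightly more in total (12 versus 10) but spreads the burden far more evenly across agents, mirroring the trade-off the abstract describes for repeated auctions.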