A Rubik's Cube inspired approach to Clifford synthesis
The problem of decomposing an arbitrary Clifford element into a sequence of
Clifford gates is known as Clifford synthesis. Drawing inspiration from
similarities between this and the famous Rubik's Cube problem, we develop a
machine learning approach for Clifford synthesis based on learning an
approximation to the distance to the identity. This approach is probabilistic
and computationally intensive. However, when a decomposition is successfully
found, it often involves fewer gates than existing synthesis algorithms.
Additionally, our approach is much more flexible than existing algorithms in
that arbitrary gate sets, device topologies, and gate fidelities may be
incorporated, thus allowing the approach to be tailored to a specific device.
Comment: 14 pages, 4 figures
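The abstract's search strategy can be illustrated with a toy analogue, assuming a greedy loop guided by a distance-to-identity estimate. The hand-coded heuristic below (number of misplaced entries of a permutation) stands in for the learned model, and permutations with adjacent transpositions stand in for Clifford elements and gates; this is a sketch of the search idea, not the paper's method:

```python
def heuristic(perm):
    # Stand-in for the learned distance-to-identity estimate.
    return sum(1 for i, p in enumerate(perm) if p != i)

def apply_gate(perm, i):
    # Toy "gate set": adjacent transpositions (i, i+1).
    q = list(perm)
    q[i], q[i + 1] = q[i + 1], q[i]
    return tuple(q)

def synthesize(target, max_steps=100):
    """Greedily apply the gate that most reduces the estimated distance."""
    state, sequence = tuple(target), []
    for _ in range(max_steps):
        if heuristic(state) == 0:
            return sequence  # reached the identity
        best = min(range(len(state) - 1),
                   key=lambda i: heuristic(apply_gate(state, i)))
        state = apply_gate(state, best)
        sequence.append(best)
    return None  # no decomposition found within the budget

print(synthesize((2, 0, 1, 3)))
```

As in the paper's setting, search guided by an approximate distance can fail or stall, which is why the loop returns `None` after a step budget rather than guaranteeing a decomposition.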
Power assignment problems in wireless communication
A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network, like broadcast, multicast, point-to-point routing, and the creation of a communication backbone, can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example, one problem studied before in (Vittorio Bilò et al.: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost Coverage of Point Sets by Disks, SCG 2006) aims to select a subset of the stations and assign powers to them such that all other stations are within reach of at least one of the selected stations. We improve the running time reported by Bilò et al. (ESA 2005) for obtaining an approximate solution to this problem; that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant-approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform multicasts within a bounded number of hops.
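The covering flavour of power assignment can be sketched with a simple greedy heuristic (not the paper's linear-time algorithm): pick stations and transmission radii so that every station lies in some chosen disk, trying to keep the total power `sum(r**alpha)` small. The fixed `open_cost` per selected disk is an assumption added here to rule out the degenerate zero-power solution; greedy set cover of this kind gives only a logarithmic approximation, but it shows the problem shape:

```python
import math

def greedy_power_cover(points, alpha=2.0, open_cost=4.0):
    """Pick stations and radii so every point lies in some chosen disk."""
    uncovered = set(range(len(points)))
    assignment = {}  # chosen station index -> transmission radius
    while uncovered:
        best = None
        for c in range(len(points)):
            for j in uncovered:
                r = math.dist(points[c], points[j])
                covered = {k for k in uncovered
                           if math.dist(points[c], points[k]) <= r}
                # Amortised cost: fixed opening cost plus power r**alpha,
                # spread over the points this disk would newly cover.
                cost = (open_cost + r ** alpha) / len(covered)
                if best is None or cost < best[0]:
                    best = (cost, c, r, covered)
        _, c, r, covered = best
        assignment[c] = max(assignment.get(c, 0.0), r)
        uncovered -= covered
    return assignment

stations = [(0, 0), (1, 0), (5, 0), (6, 0)]
print(greedy_power_cover(stations))
```

On the two well-separated pairs above, the greedy rule selects one station per pair with radius 1, rather than one large disk whose power `r**alpha` grows quadratically with its radius.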
A Data Mining Toolbox for Collaborative Writing Processes
Collaborative writing (CW) is an essential skill in academia and industry. Providing support during the process of CW can be useful not only for achieving better-quality documents, but also for improving the CW skills of the writers. In order to properly support collaborative writing, it is essential to understand how ideas and concepts are developed during the writing process, which consists of a series of steps of writing activities. These steps can be considered as sequence patterns comprising both time events and the semantics of the changes made during those steps. Two techniques can be combined to examine those patterns: process mining, which focuses on extracting process-related knowledge from event logs recorded by an information system; and semantic analysis, which focuses on extracting knowledge about what the student wrote or edited. This thesis contributes (i) techniques to automatically extract process models of collaborative writing processes and (ii) visualisations to describe aspects of collaborative writing. These two techniques form a data mining toolbox for collaborative writing that uses process mining, probabilistic graphical models, and text mining. First, I created a framework, WriteProc, for investigating collaborative writing processes, integrated with the existing cloud-based writing tools in Google Docs. Secondly, I created a new heuristic to extract the semantic nature of text edits that occur in document revisions and to automatically identify the corresponding writing activities. Thirdly, based on sequences of writing activities, I propose methods to discover writing process models and transitional state diagrams using a process mining algorithm (Heuristics Miner) and Hidden Markov Models, respectively. Finally, I designed three types of visualisations and made contributions to their underlying techniques for analysing writing processes.
All components of the toolbox are validated against annotated writing activities of real documents and a synthetic dataset. I also illustrate how the automatically discovered process models and visualisations are used in the process analysis with real documents written by groups of graduate students. I discuss how the analyses can be used to gain further insight into how students work and create their collaborative documents.
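The "transitional state diagram" idea can be sketched by estimating first-order transition probabilities between labelled writing activities from observed sequences. The thesis uses Heuristics Miner and Hidden Markov Models; the plain Markov-chain estimate below, over hypothetical activity labels, only illustrates the kind of structure extracted from activity logs:

```python
from collections import Counter, defaultdict

def transition_probabilities(sequences):
    """Estimate P(next activity | current activity) from activity logs."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):  # consecutive activity pairs
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

logs = [
    ["outline", "draft", "revise", "draft", "revise"],
    ["outline", "draft", "draft", "revise"],
]
probs = transition_probabilities(logs)
print(probs["draft"])
```

The resulting matrix maps directly onto the edges of a transition diagram, and also serves as an initial estimate for an HMM's transition parameters.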
Construct Validity of a Laboratory Aggression Paradigm: A Multitrait-Multimethod Approach
There continues to be doubt regarding the validity of laboratory aggression paradigms. This paper provides an investigation of the construct validity of one prominent aggression task, the Taylor Aggression Paradigm (TAP), within a Multitrait-Multimethod Matrix (MTMM) methodology. Participants consisted of 151 male undergraduate psychology students with a mean age of 19.45 years (SD = 2.03). Participants completed self-report and behavioral measures of aggression, impulsivity, and pro-social behavior, which were analyzed using a Correlated Trait – Correlated Method Confirmatory Factor Analysis model. Results supported the construct validity of the MTMM model and the TAP. This study provides one of the few a priori tests of construct validity for the TAP and provides a basis for additional validation studies using this methodology.
Data-Driven Subtyping of Executive Function-Related Behavioral Problems in Children.
OBJECTIVE: Executive functions (EF) are cognitive skills that are important for regulating behavior and for achieving goals. Executive function deficits are common in children who struggle in school and are associated with multiple neurodevelopmental disorders. However, there is also considerable heterogeneity across children, even within diagnostic categories. This study took a data-driven approach to identify distinct clusters of children with common profiles of EF-related difficulties, and then identified patterns of brain organization that distinguish these data-driven groups. METHOD: The sample consisted of 442 children identified by health and educational professionals as having difficulties in attention, learning, and/or memory. We applied community clustering, a data-driven clustering algorithm, to group children by similarities on a commonly used rating scale of EF-associated behavioral difficulties, the Conners 3 questionnaire. We then investigated whether the groups identified by the algorithm could be distinguished on white matter connectivity using a structural connectomics approach combined with partial least squares analysis. RESULTS: The data-driven clustering yielded 3 distinct groups of children with symptoms of one of the following: (1) elevated inattention and hyperactivity/impulsivity, and poor EF; (2) learning problems; or (3) aggressive behavior and problems with peer relationships. These groups were associated with significant interindividual variation in white matter connectivity of the prefrontal and anterior cingulate cortices. CONCLUSION: In sum, data-driven classification of EF-related behavioral difficulties identified stable groups of children, provided a good account of interindividual differences, and aligned closely with underlying neurobiological substrates.
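The grouping step can be sketched as follows (this is not the study's community clustering algorithm): connect children whose questionnaire profiles correlate strongly, then take connected components of that similarity graph as candidate groups. The profiles below are hypothetical subscale scores in the spirit of the three reported clusters:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def cluster_profiles(profiles, threshold=0.9):
    """Union-find over edges between highly correlated profiles."""
    n = len(profiles)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if pearson(profiles[i], profiles[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

profiles = [
    [90, 40, 30],  # hypothetical: high inattention/hyperactivity
    [85, 45, 35],
    [40, 88, 30],  # hypothetical: high learning problems
    [35, 92, 28],
    [30, 35, 90],  # hypothetical: high aggression/peer problems
]
print(cluster_profiles(profiles))
```

The point of a data-driven step like this is that group boundaries emerge from profile similarity rather than from pre-assigned diagnostic labels.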
A Framework for Hyper-Heuristic Optimisation of Conceptual Aircraft Structural Designs
Conceptual aircraft structural design concerns the generation of an airframe that will provide sufficient strength under the loads encountered during the operation of the aircraft. In providing such strength, the airframe greatly contributes to the mass of the vehicle, where an excessively heavy design can penalise the performance and cost of the aircraft. Structural mass optimisation aims to minimise the airframe weight whilst maintaining adequate resistance to load. The traditional approach to such optimisation applies a single optimisation technique within a static process, which prevents adaptation of the optimisation process in reaction to changes in the problem. Hyper-heuristic optimisation is an evolving field of research wherein the optimisation process is evaluated and modified in an attempt to improve its performance, and thus the quality of the solutions generated. Because the field is still relatively young, hyper-heuristics have not previously been applied to the problem of aircraft structural design optimisation. It is the thesis of this research that hyper-heuristics can be employed within a framework to improve the quality of airframe designs generated without incurring additional computational cost.
A framework has been developed to perform hyper-heuristic structural optimisation of a conceptual aircraft design. Four aspects of hyper-heuristics are included within the framework to promote improved process performance and subsequent solution quality. These aspects select multiple optimisation techniques to apply to the problem, analyse the solution space neighbouring good designs, and adapt the process based on its performance. The framework has been evaluated through its implementation as a purpose-built computational tool called AStrO. The results of this evaluation have shown that significantly lighter airframe designs can be generated using hyper-heuristics than are obtainable by traditional optimisation approaches. Moreover, this is possible without penalising airframe strength or necessarily increasing computational costs. Furthermore, improvements are possible over the existing aircraft designs currently in production and operation.
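The selection aspect of a hyper-heuristic can be sketched in miniature (this is an illustration, not AStrO): a high-level controller keeps a score for each low-level move operator and favours the operators that have recently improved the design. The "airframe" is stood in for by a vector of member sizes with a hypothetical penalised-weight objective:

```python
import random

random.seed(0)  # deterministic run for the illustration

def weight(x):
    # Hypothetical objective: total mass plus a large penalty whenever a
    # member falls below a minimum gauge (a crude stand-in for strength).
    return sum(x) + sum(100.0 for v in x if v < 1.0)

def shrink_one(x):
    i = random.randrange(len(x))
    return x[:i] + [x[i] - 0.1] + x[i + 1:]

def grow_one(x):
    i = random.randrange(len(x))
    return x[:i] + [x[i] + 0.1] + x[i + 1:]

def hyper_heuristic(x, steps=2000):
    """High-level controller: favour low-level moves that recently helped."""
    heuristics = [shrink_one, grow_one]
    scores = [1.0 for _ in heuristics]  # adaptive selection weights
    for _ in range(steps):
        k = random.choices(range(len(heuristics)), weights=scores)[0]
        candidate = heuristics[k](x)
        if weight(candidate) < weight(x):  # keep only improving moves
            x = candidate
            scores[k] += 1.0               # reward the operator that helped
    return x, scores

design, scores = hyper_heuristic([3.0, 2.5, 4.0])
print([round(v, 1) for v in design], scores)
```

The design choice this illustrates is the separation of concerns: the low-level heuristics know about the structure, while the high-level controller only observes objective improvements, so new operators can be added without changing the control logic.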