Syntactic Complexity of Finite/Cofinite, Definite, and Reverse Definite Languages
We study the syntactic complexity of finite/cofinite, definite and reverse
definite languages. The syntactic complexity of a class of languages is defined
as the maximal size of syntactic semigroups of languages from the class, taken
as a function of the state complexity n of the languages. We prove that (n-1)!
is a tight upper bound for finite/cofinite languages and that it can be reached
only if the alphabet size is greater than or equal to (n-1)!-(n-2)!. We prove
that the bound is also (n-1)! for reverse definite languages, but the minimal
alphabet size is (n-1)!-2(n-2)!. We show that \lfloor e\cdot (n-1)!\rfloor is a
lower bound on the syntactic complexity of definite languages, and conjecture
that this is also an upper bound, and that the alphabet size required to meet
this bound is \lfloor e \cdot (n-1)!\rfloor - \lfloor e \cdot (n-2)!\rfloor. We prove the
conjecture for n\le 4.
Comment: 10 pages. An error concerning the size of the alphabet has been corrected in Theorem
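The bounds stated in the abstract are easy to evaluate for small n. The following sketch (not from the paper; the function names are mine) computes the tight bound (n-1)! for finite/cofinite and reverse definite languages, and the conjectured bound \lfloor e\cdot(n-1)!\rfloor for definite languages:

```python
from math import factorial, floor, e

def finite_cofinite_bound(n):
    """Tight upper bound (n-1)! on the syntactic complexity of
    finite/cofinite languages with state complexity n (per the abstract)."""
    return factorial(n - 1)

def definite_bound(n):
    """Conjectured bound floor(e * (n-1)!) for definite languages,
    proved in the paper for n <= 4."""
    return floor(e * factorial(n - 1))

# Tabulate both bounds for small state complexities.
for n in range(2, 6):
    print(n, finite_cofinite_bound(n), definite_bound(n))
```

For n = 4, for example, the finite/cofinite bound is 3! = 6 and the conjectured definite bound is \lfloor 6e\rfloor = 16.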
On Reverse Engineering in the Cognitive and Brain Sciences
Various research initiatives try to utilize the operational principles of
organisms and brains to develop alternative, biologically inspired computing
paradigms and artificial cognitive systems. This paper reviews key features of
the standard method applied to complexity in the cognitive and brain sciences,
i.e. decompositional analysis or reverse engineering. The indisputable
complexity of brain and mind raises the issue of whether they can be understood
by applying the standard method. Indeed, recent findings in the experimental
and theoretical fields call into question central assumptions and hypotheses made for
reverse engineering. Using the modeling relation as analyzed by Robert Rosen,
the scientific analysis method itself is made a subject of discussion. It is
concluded that the fundamental assumption of cognitive science, i.e. that complex
cognitive systems can be analyzed, understood and duplicated by reverse
engineering, must be abandoned. Implications for investigations of organisms
and behavior as well as for engineering artificial cognitive systems are
discussed.
Comment: 19 pages, 5 figures
Can biological complexity be reverse engineered?
Concerns with the use of engineering approaches in biology have recently been raised. I examine two
related challenges to biological research that I call the synchronic and diachronic underdetermination
problem. The former refers to challenges associated with the inference of design principles underlying
system capacities when the synchronic relations between lower-level processes and higher-level systems
capacities are degenerate (many-to-many). The diachronic underdetermination problem regards the
problem of reverse engineering a system where the non-linear relations between system capacities and
lower-level mechanisms are changing over time. Braun and Marom argue that recent insights into biological
complexity leave the aim of reverse engineering hopeless, in principle as well as in practice.
While I support their call for systemic approaches to capture the dynamic nature of living systems, I take
issue with the conflation of reverse engineering with naïve reductionism. I clarify how the notion of
design principles can be more broadly conceived and argue that reverse engineering is compatible with a
dynamic view of organisms. It may even help to facilitate an integrated account that bridges the gap
between mechanistic and systems approaches.
Near-Optimal Active Learning of Halfspaces via Query Synthesis in the Noisy Setting
In this paper, we consider the problem of actively learning a linear
classifier through query synthesis where the learner can construct artificial
queries in order to estimate the true decision boundaries. This problem has
recently gained a lot of interest in automated science and adversarial reverse
engineering for which only heuristic algorithms are known. In such
applications, queries can be constructed de novo to elicit information (e.g.,
automated science) or to evade detection with minimal cost (e.g., adversarial
reverse engineering). We develop a general framework, called dimension coupling
(DC), that 1) reduces a d-dimensional learning problem to d-1 low dimensional
sub-problems, 2) solves each sub-problem efficiently, 3) appropriately
aggregates the results and outputs a linear classifier, and 4) provides a
theoretical guarantee for all possible schemes of aggregation. The proposed
method is proved resilient to noise. We show that the DC framework avoids the
curse of dimensionality: its computational complexity scales linearly with the
dimension. Moreover, we show that the query complexity of DC is near optimal
(within a constant factor of the optimum algorithm). To further support our
theoretical analysis, we compare the performance of DC with the existing work.
We observe that DC consistently outperforms the prior art in terms of query
complexity while often running orders of magnitude faster.
Comment: Accepted by AAAI 201
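The DC framework aggregates low-dimensional sub-problems; as a minimal illustration of the one-dimensional building block, the sketch below actively learns a threshold classifier by synthesizing queries at the midpoint of the current uncertainty interval. This is an assumption-laden simplification (a noiseless oracle, one dimension), not the paper's algorithm, which also handles noise and aggregation across dimensions:

```python
def learn_threshold(oracle, lo=0.0, hi=1.0, tol=1e-6):
    """Actively learn a 1-D threshold classifier via query synthesis.

    The learner constructs each query itself (the midpoint of the
    current interval) rather than drawing from a fixed pool; each
    answer halves the uncertainty, so the query complexity is
    O(log(1/tol)).

    oracle(x) returns True iff x lies on or above the unknown threshold.
    """
    while hi - lo > tol:
        q = (lo + hi) / 2.0   # synthesize the next query point
        if oracle(q):
            hi = q            # threshold is at or below q
        else:
            lo = q            # threshold is above q
    return (lo + hi) / 2.0

# Example: recover a hypothetical unknown threshold at 0.37.
true_t = 0.37
est = learn_threshold(lambda x: x >= true_t)
```

In higher dimensions, a reduction in the spirit of DC would solve many such one-dimensional problems and combine their answers into a linear classifier, which is where the aggregation guarantee of the paper becomes essential.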
Strategies for protecting intellectual property when using CUDA applications on graphics processing units
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
Assessing the Complexity of a Recovered Design and its Potential Redesign Alternatives
Organised by: Cranfield University
Reverse engineering techniques are applied to generate a part model where there is no existing
documentation or it is no longer up to date. To facilitate the reverse engineering tasks, a modular, multi-perspective
design recovery framework has been developed. An evaluation of the product and feature
complexity characteristics can readily be extracted from the design recovery framework by using a
modification of a rapid complexity assessment tool. The results from this tool provide insight with respect to
the original design and assist with the evaluation of potential alternatives and risks, as illustrated by the
case study.
Mori Seiki – The Machine Tool Company