Beyond a Usage Threshold, NO Form of Energy is Sustainable or Green: We are Running Out of “Garbage Dump Space” To Dissipate “Used” Energy Into
To date, almost all of the research on green/sustainable energy has been concerned with the procurement of ever-increasing amounts of energy for human consumption. This singular focus on the supply side of the problem completely overlooks what happens to the energy after we use it, thereby implicitly making the dangerously wrong assumption that the earth has unlimited capacity to dissipate energy. In this position paper, we remind the reader that the earth can dissipate only a finite amount of even the greenest of the green forms of energy while still maintaining thermal equilibria that have evolved over eons. Any long-term sustainable energy solution must therefore include curbing/limiting/controlling our demand for (and consequently, our consumption of) energy. Otherwise, even if and even after all greenhouse effects are fully eliminated, the earth might still eventually experience an unnaturally large temperature increase because the amount of energy dissipated is too large.
Student Misconceptions about Cybersecurity Concepts: Analysis of Think-Aloud Interviews
We conducted an observational study to document student misconceptions about cybersecurity using thematic analysis of 25 think-aloud interviews. By understanding patterns in student misconceptions, we provide a basis for developing rigorous evidence-based recommendations for improving teaching and assessment methods in cybersecurity, and we inform future research. This study is the first to explore student cognition and reasoning about cybersecurity. We interviewed students from three diverse institutions. During these interviews, students grappled with security scenarios designed to probe their understanding of cybersecurity, especially adversarial thinking. We analyzed student statements using a structured qualitative method, novice-led paired thematic analysis, to document patterns in student misconceptions and problematic reasoning that transcend institutions, scenarios, or demographics. Themes generated from this analysis describe a taxonomy of misconceptions but not their causes or remedies. Four themes emerged: overgeneralizations, conflated concepts, biases, and incorrect assumptions. Together, these themes reveal that students generally failed to grasp the complexity and subtlety of possible vulnerabilities, threats, risks, and mitigations, suggesting a need for instructional methods that engage students in reasoning about complex scenarios with an adversarial mindset. These findings can guide teachers’ attention during instruction and inform the development of cybersecurity assessment tools that enable cross-institutional assessments that measure the effectiveness of pedagogies.
DoubleMod and SingleMod: Simple Randomized Secret-Key Encryption with Bounded Homomorphicity
An encryption relation f ⊆ Z × Z with decryption function f⁻¹ is “group-homomorphic” if, for any suitable plaintexts x1 and x2, x1 + x2 = f⁻¹(f(x1) + f(x2)). It is “ring-homomorphic” if furthermore x1·x2 = f⁻¹(f(x1)·f(x2)); it is “field-homomorphic” if furthermore 1/x1 = f⁻¹(f(1/x1)). Such relations would support oblivious processing of encrypted data.
We propose a simple randomized encryption relation f over the integers, called DoubleMod, which is “bounded ring-homomorphic,” or what some call “somewhat homomorphic.” Here, “bounded” means that the number of additions and multiplications that can be performed, without the encrypted values going out of range, is limited (any pre-specified bound on the operation count can be accommodated). Let R be any large integer. For any plaintext x ∈ Z_R, DoubleMod encrypts x as f(x) = x + au + bv, where a and b are randomly chosen integers in some appropriate interval, while (u, v) is the secret key. Here u > R² is a large prime and the smallest prime factor of v exceeds u. With knowledge of the key, but not of a and b, the receiver decrypts the ciphertext y by computing f⁻¹(y) = (y mod v) mod u.
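The encryption and decryption maps described above can be sketched directly in Python. This is a toy illustration with small, insecure parameters chosen by us (R, u, v, and the ranges of a and b are our assumptions, not values from the paper); it only shows that (y mod v) mod u inverts x + au + bv and that the homomorphism holds while values stay in range.

```python
import random

# Toy DoubleMod sketch (illustrative parameters only, NOT secure).
R = 100                # plaintext space Z_R
u = 10_007             # prime, u > R^2
v = 2**61 - 1          # Mersenne prime; its smallest prime factor exceeds u

def encrypt(x, a_max=10, b_max=10):
    """f(x) = x + a*u + b*v with random a, b in a small interval."""
    a = random.randint(1, a_max)
    b = random.randint(1, b_max)
    return x + a * u + b * v

def decrypt(y):
    """f^-1(y) = (y mod v) mod u; correct while intermediate values stay in range."""
    return (y % v) % u

x1, x2 = 42, 17
c1, c2 = encrypt(x1), encrypt(x2)
assert decrypt(c1 + c2) == x1 + x2   # bounded additive homomorphism
assert decrypt(c1 * c2) == x1 * x2   # bounded multiplicative homomorphism
```

With these parameters, one multiplication keeps (x + au) products below v, so a single level of multiplication decrypts correctly; deeper circuits would require a correspondingly larger v.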
DoubleMod generalizes an independent idea of van Dijk et al. 2010. We present and
refine a new CCA1 chosen-ciphertext attack that finds the secret key of both systems (ours
and van Dijk et al.’s) in linear time in the bit length of the security parameter. Under a
known-plaintext attack, breaking DoubleMod is at most as hard as solving the Approximate
GCD (AGCD) problem. The complexity of AGCD is not known.
We also introduce the SingleMod field-homomorphic cryptosystems. The simplest SingleMod system, based on the integers, can be broken trivially. We had hoped that if SingleMod were implemented inside non-Euclidean quadratic or higher-order fields with large discriminants, where GCD computations appear difficult, it might be feasible to achieve a desired level of security. We show, however, that a variation of our chosen-ciphertext attack works against SingleMod even in non-Euclidean fields.
Fault tolerance of feedforward artificial neural nets and synthesis of robust nets
A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations to permanent stuck-at type faults of single components. A procedure is developed to build fault tolerant ANNs by replicating the hidden units. It exploits the intrinsic weighted summation operation performed by the processing units in order to overcome faults. It is simple, robust and is applicable to any feedforward net. Based on this procedure, metrics are devised to quantify the fault tolerance as a function of redundancy. Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is analytically derived. This bound demonstrates that less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. This general result establishes a necessary condition that holds for all feedforward nets, irrespective of the network topology or the task it is trained on. Extensive simulations indicate that the actual redundancy needed to synthesize a completely fault tolerant net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The data imply that the conventional TMR scheme of replication and majority vote is the best way to achieve complete fault tolerance in most ANNs. Although the redundancy needed for complete fault tolerance is substantial, the results do show that ANNs exhibit good partial fault tolerance to begin with and degrade gracefully. For large nets, exhaustive testing of all possible single faults is prohibitive. Hence, the strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing. The last part of the thesis develops improved learning algorithms that favor fault tolerance.
Here, the objective function for the gradient descent is modified to include extra terms that favor fault tolerance. Simulations indicate that the algorithm works only if the relative weight of the extra terms is small. There are two different ways to achieve fault tolerance: (1) search for the minimal net and replicate it, or (2) provide redundancy to begin with and use improved training algorithms. A natural question is: which of these two schemes is better? Contrary to expectation, the replication scheme seems to win in almost all cases. We provide a justification as to why this might be true. Several interesting open problems are discussed and future extensions are suggested.
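The hidden-unit replication procedure described in the abstract above can be sketched as follows. This is our own minimal illustration (the tiny network, weights, and fault-injection step are assumptions for demonstration): replicating each hidden unit k times and dividing its outgoing weights by k leaves the network function unchanged, while a single stuck-at-0 fault now perturbs the output by at most 1/k of that unit's original contribution.

```python
import math

def forward(x, W1, W2):
    """Single-hidden-layer net: h = tanh(W1 @ x), y = W2 @ h."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * hi for w, hi in zip(W2, h))

def replicate(W1, W2, k):
    """Replicate each hidden unit k times and scale outgoing weights by 1/k.
    The weighted summation at the output averages the replicas, so the
    network computes exactly the same function."""
    W1r = [row for row in W1 for _ in range(k)]
    W2r = [w / k for w in W2 for _ in range(k)]
    return W1r, W2r

x = [0.5, -1.0]
W1 = [[1.0, 2.0], [-0.5, 0.3]]   # two hidden units (toy weights)
W2 = [0.7, -1.2]
y = forward(x, W1, W2)

W1r, W2r = replicate(W1, W2, k=3)
assert abs(forward(x, W1r, W2r) - y) < 1e-12   # function preserved

# Stuck-at-0 fault on one replica's outgoing weight: error bounded by |W2[0]|/3.
faulty = W2r[:]
faulty[0] = 0.0
assert abs(forward(x, W1r, faulty) - y) <= abs(W2[0]) / 3
```

The same averaging argument underlies why less than triple replication with a majority vote cannot mask an arbitrary single fault exactly, only attenuate it.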
Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs
It is demonstrated that fault tolerance, generalization, and the Vapnik–Chervonenkis (VC) dimension (which is in turn related to the intrinsic capacity/complexity of the ANN) are inter-related attributes. It is well known that the generalization error, if plotted as a function of the VC dimension h, exhibits a well-defined minimum corresponding to an optimal value of h, say h_opt. We show that if the VC dimension h of an ANN satisfies h ≤ h_opt (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h > h_opt (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared toward improving fault tolerance can also lead to better generalization, and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.
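The U-shaped generalization-error curve referred to above can be illustrated with a generic, hypothetical error decomposition (the functional form, constants, and sample size below are our assumptions, not the paper's): approximation error falls as capacity h grows, while estimation error rises with h, giving a minimum at some h_opt.

```python
# Hypothetical illustration of the generalization-error minimum over VC dimension h:
# error(h) = c1/h (approximation) + c2*h/N (estimation), minimized near sqrt(c1*N/c2).
N = 10_000            # assumed training-set size
c1, c2 = 1.0, 1.0     # assumed constants

errors = {h: c1 / h + c2 * h / N for h in range(1, 1001)}
h_opt = min(errors, key=errors.get)
assert h_opt == 100   # analytic minimum at sqrt(c1 * N / c2) = 100
```

Below h_opt the model is capacity-starved, so spending capacity on redundancy (fault tolerance) competes with fitting the task; above h_opt the surplus capacity can be devoted to redundancy without hurting, and can even regularize, generalization.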