
    From the Ne’er-Do-Well to the Criminal History Category: The Refinement of the Actuarial Model in Criminal Law

    Harcourt discusses three developments in 20th century criminal law: the evolution of parole board decision-making in the early 20th century, the development of fixed sentencing guidelines in the late 20th century, and the growth of criminal profiling as a formal law enforcement tool since the 1960s. In each of these case studies, he focuses on the criminal law decision-making

    From the Ne’er-Do-Well to the Criminal History Category: The Refinement of the Actuarial Model in Criminal Law

    Criminal law in the United States experienced radical change during the course of the twentieth century. The dawn of the century ushered in an era of individualization of punishment. Drawing on the new science of positive criminology, legal scholars called for diagnosis of the causes of delinquency and for imposition of individualized courses of remedial treatment specifically adapted to these diagnoses. States gradually developed indeterminate sentencing schemes that gave corrections administrators and parole boards wide discretion over treatment and release decisions, and by 1970 every state in the country and the federal government had adopted a system of indeterminate sentencing. At the close of the century, the contrast could hardly have been greater. Practically every state had repudiated indeterminate sentencing in some way, imposing significant, in some cases complete, constraints on the discretion of sentencing judges and parole boards. In many states, parole boards were simply abolished. The period was marked by a new era of uniformity and consistency in sentencing.

    Cryptanalysis of LFSR-based Pseudorandom Generators - a Survey

    Pseudorandom generators based on linear feedback shift registers (LFSR) are a traditional building block for cryptographic stream ciphers. In this report, we review the general idea for such generators, as well as the most important techniques of cryptanalysis.
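The building block the survey reviews can be made concrete with a toy sketch. The following is a minimal Fibonacci-style LFSR in Python; the register width and tap positions are illustrative choices (they correspond to the primitive polynomial x^4 + x^3 + 1) and are not taken from the report:

```python
def lfsr_bits(state, taps, nbits):
    """Generate nbits of keystream from a Fibonacci LFSR.

    state: initial register contents as an int (must be nonzero)
    taps:  bit positions XORed to form the feedback (0 = output bit)
    """
    width = max(taps) + 1
    out = []
    for _ in range(nbits):
        out.append(state & 1)                       # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1                  # XOR the tapped bits
        state = (state >> 1) | (fb << (width - 1))  # shift, inject feedback at the top
    return out

# 4-bit register, taps {3, 0} (primitive polynomial x^4 + x^3 + 1):
# maximal period of 2**4 - 1 = 15 bits for any nonzero seed.
stream = lfsr_bits(state=0b1001, taps=[3, 0], nbits=30)
assert stream[:15] == stream[15:30]   # the sequence repeats with period 15
```

Such generators are fast and have well-understood statistical properties, which is precisely why, as the survey discusses, their linearity must be masked (e.g. by nonlinear combining or filtering) to resist algebraic and correlation attacks.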

    Expanding the Ponzi Scheme Presumption

    Ponzi schemes and other investment frauds inevitably end up in bankruptcy or receivership, leaving behind numerous victims—many of whom invested their life savings in the scheme without any knowledge of its fraudulent nature. Although trustees and receivers can sometimes recover some of the fraudulently acquired funds from the assets of the perpetrators, in most cases, those assets fall woefully short of the victims’ losses. This leads to fraudulent transfer lawsuits (claw-back actions) against those who are suspected to have profited from the wrongdoing. A transfer is fraudulent if it was made with the actual intent to defraud, but actual fraud is seldom proven by direct evidence. Generally, to determine whether circumstantial evidence supports an inference of fraud, courts examine certain badges of fraud. While badges of fraud are helpful, the analysis requires individual examination of the specific transaction at issue, the effect of which diminishes the returns to the victims of the fraud because of the substantial costs involved in undertaking such assessments. However, courts nationwide have recognized that simply establishing the existence of a Ponzi scheme is sufficient to prove the perpetrator’s intent to defraud. This Ponzi-scheme presumption provides receivers and trustees with an advantage and shifts the burden of showing the legitimacy of the benefits received to the perpetrator. Courts define Ponzi schemes differently, which creates uncertainty for receivers and trustees who are now required to take over the fraudulent enterprise and recover assets for the victims of the fraud. Expanding the actual fraud presumption beyond classic Ponzi-scheme cases avoids uncertainty and assists receivers and trustees in achieving a final, equitable distribution of assets. The focus should be whether applying the presumption will maximize the return to creditors and victims of the fraud.

    Dynamic Thermal Imaging for Intraoperative Monitoring of Neuronal Activity and Cortical Perfusion

    Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and then visualized to the surgeon. In the case of tumor resections this approach allows more fine-grained differentiation of healthy and pathological tissue which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we will discuss several approaches to establish thermal imaging as a novel neuroimaging technique to primarily visualize neural activity and perfusion state in case of ischaemic stroke. Both applications require novel methods for data-preprocessing, visualization, pattern recognition as well as regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework generally provides all necessary tools to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets like 3D MR or CT imaging. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings will be discussed throughout this thesis. Hereby, the standardized quantification of tissue perfusion by means of an approximated heating model can be achieved. Classifying the parameters of these models yields a map of connected areas, for which we have shown that these areas correlate with the demarcation caused by an ischaemic stroke segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative neural activity monitoring of the somatosensory cortex by somatosensory evoked potentials. These results were correlated with neural activity of optical imaging. 
We found that thermal imaging yields comparable results while avoiding the limitations of optical imaging. In this thesis we would like to emphasize that thermal imaging represents a novel and valid tool for both intraoperative functional and structural neuroimaging.
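The abstract's "approximated heating model" for quantifying perfusion after a NaCl rinsing is not spelled out here, but the general idea of fitting a rewarming curve can be sketched with a Newton-cooling-style model. The functional form, parameter ranges, and values below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def fit_rewarming(t, temps):
    """Fit T(t) = T_inf - dT * exp(-t / tau) to a rewarming curve.

    Grid-search the nonlinear time constant tau; for each candidate tau,
    solve the remaining linear parameters (T_inf, dT) by least squares.
    """
    best = None
    for tau in np.linspace(0.5, 60.0, 600):
        basis = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(basis, temps, rcond=None)
        err = np.sum((basis @ coef - temps) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], -coef[1], tau)   # SSE, T_inf, dT, tau
    return best[1:]

# Synthetic rewarming curve after a cool rinse: T_inf = 37 deg C, dT = 5 K, tau = 8 s.
t = np.linspace(0, 40, 200)
temps = 37.0 - 5.0 * np.exp(-t / 8.0)
T_inf, dT, tau = fit_rewarming(t, temps)
assert abs(T_inf - 37.0) < 0.1 and abs(tau - 8.0) < 0.2
```

Classifying the fitted parameters per pixel, as the abstract describes, would then yield a map of connected areas with similar perfusion behavior.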

    Design of secure and robust cognitive system for malware detection

    Machine learning based malware detection techniques rely on grayscale images of malware and tend to classify malware based on the distribution of textures in those grayscale images. Despite the advances and promising results shown by machine learning techniques, attackers can exploit their vulnerabilities by generating adversarial samples, which are crafted by intelligently adding perturbations to the input samples. The majority of existing adversarial attacks and defenses are software based. To defend against adversaries, existing malware detection based on machine learning and grayscale images requires preprocessing of the adversarial data, which adds overhead and can prolong real-time malware detection. As an alternative, we explore an RRAM (Resistive Random Access Memory) based defense against adversaries. The aim of this thesis is therefore to address these critical system-security issues by demonstrating techniques to design a secure and robust cognitive system. First, a novel technique to detect stealthy malware is proposed. The technique uses malware binary images, extracts different features from them, and then employs different ML classifiers on the resulting dataset. Results demonstrate that this technique successfully differentiates classes of malware based on the extracted features. Second, I demonstrate the effects of adversarial attacks on a reconfigurable RRAM-neuromorphic architecture with different learning algorithms and device characteristics, and I propose an integrated solution for mitigating the effects of the adversarial attack using the reconfigurable RRAM architecture.
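The binary-to-grayscale pipeline described above can be sketched as follows. The image width and the histogram-based texture descriptor are illustrative choices (the abstract does not specify them), and a real system would use richer texture features and a trained ML classifier:

```python
import numpy as np

def binary_to_grayscale(raw: bytes, width: int = 64) -> np.ndarray:
    """Reshape raw file bytes into a 2-D grayscale image (one byte = one pixel)."""
    buf = np.frombuffer(raw, dtype=np.uint8)
    rows = len(buf) // width
    return buf[: rows * width].reshape(rows, width)

def texture_features(img: np.ndarray) -> np.ndarray:
    """A crude texture descriptor: normalized byte histogram plus intensity stats."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    return np.concatenate([hist, [img.mean() / 255.0, img.std() / 255.0]])

# Toy usage: two synthetic "binaries" with very different byte distributions
# produce clearly separable feature vectors.
sample_a = bytes(range(256)) * 16          # uniform byte values
sample_b = b"\x00" * 4096                  # constant byte values
fa = texture_features(binary_to_grayscale(sample_a))
fb = texture_features(binary_to_grayscale(sample_b))
assert fa.shape == (258,) and not np.allclose(fa, fb)
```

An adversarial sample in this setting would be a binary perturbed so that its texture statistics cross the classifier's decision boundary while the malware's functionality is preserved, which is the attack surface the RRAM-based defense targets.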

    On the Nature of Bankruptcy: An Essay on Bankruptcy Sharing and the Creditor’s Bargain

    Finance theorists have long recognized that bankruptcy is a key component in any general theory of the capital structure of business entities. Legal theorists have been similarly sensitive to the substantial allocational and distributional effects of the bankruptcy law. Nevertheless, until recently, underlying justifications for the bankruptcy process have not been widely studied. Bankruptcy scholars have been content to recite, without critical analysis, the two normative objectives of bankruptcy: rehabilitation of overburdened debtors and equality of treatment for creditors and other claimants. The developing academic interest in legal theory has spurred a corresponding interest in expanding the theoretical foundations of bankruptcy law as well. One of us has developed over the past several years a conceptual paradigm, based on a hypothetical bargain among creditors, as a normative criterion for evaluating the bankruptcy system. The cornerstone of the creditors’ bargain is the normative claim that prebankruptcy entitlements should be impaired in bankruptcy only when necessary to maximize net asset distributions to the creditors as a group and never to accomplish purely distributional goals. The strength of the creditors’ bargain conceptualization is also its limitation. The hypothetical bargain metaphor focuses on the key bankruptcy objective of maximizing the welfare of the group through collectivization. This single-minded focus on maximizing group welfare helps to identify the underlying patterns in what appear to be unrelated aspects of the bankruptcy process. It also implies that other normative goals should be seen as competing costs of the collectivization process. Yet this claim uncovers a further puzzle. Despite the centrality of the maximization norm, persistent and systematic redistributional impulses are apparent in bankruptcy. Is redistribution in bankruptcy simply attributable to random errors or misperceptions by courts and legislators? 
Or are other forces present in the bankruptcy process as well? In this Article we undertake to examine the other forces that may be at work in bankruptcy. Many bankruptcy rules require sharing of assets with other creditors, shareholders, and third parties. Too often these distributional effects are grouped together under general references to equity, wealth redistribution, or appeals to communitarian values. These labels are unhelpful. They disguise the fact, for instance, that the justification and impact of consensual risk sharing among creditors is entirely different in character from the rationale for using bankruptcy to redistribute wealth to nonconsensual third parties. Understanding these diverse effects requires, therefore, a method of discriminating among the different motivations that impel redistributions in bankruptcy.

    Unconventional programming: non-programmable systems

    Unconventional and natural computing research offers controlled information modification processes in uncommon media, for example on the molecular scale or in bacteria colonies. Promising aspects of such systems are often their non-linear behavior and the high connectivity of the involved information-processing components, in analogy to neurons in the nervous system. Unfortunately, such properties make the system behavior hard to understand, hard to predict, and thus also hard to program with common engineering principles like modularization and composition, leading to the term non-programmable systems. In contrast to many unconventional computing works that focus on finding novel computing substrates and their potential applications, unconventional programming approaches for such systems are the theme of this thesis: how can new programming concepts open up new perspectives for unconventional, but hopefully also for traditional digital, computing systems? Mostly based on a model of artificial wet chemical neurons, different unconventional programming approaches drawing on evolutionary algorithms, information theory, self-organization, and self-assembly are explored. A particular emphasis is placed on the problem of symbol encodings: often there are multiple or even unlimited possibilities to encode information in the phase space of dynamical systems, e.g. spike frequencies or population coding in neural networks. But different encodings will probably be differently useful, depending on the system properties, the information transformation task, and the desired connectivity to other systems. Hence, methods are investigated that can evaluate, analyse, and identify suitable symbol encoding schemes.
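One way such an evaluation of symbol encodings could look, sketched with information theory: estimate the mutual information between a stimulus and each candidate encoding. The two codes and all parameters below are invented for demonstration and are not taken from the thesis:

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

rng = np.random.default_rng(0)
stimulus = rng.integers(0, 4, size=5000)                 # 4 equiprobable input symbols
rate = stimulus * 10 + rng.integers(0, 3, size=5000)     # noisy but separable rate code
binary = (stimulus >= 2).astype(int)                     # lossy 1-bit threshold code

i_rate = mutual_information(stimulus.tolist(), rate.tolist())
i_binary = mutual_information(stimulus.tolist(), binary.tolist())
assert i_rate > i_binary   # the rate code preserves more stimulus information
```

Here the rate code recovers roughly the full two bits of stimulus entropy while the threshold code caps at one bit, illustrating how an information-theoretic criterion can rank encodings for a given task before committing a hard-to-program substrate to one of them.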