Monoclonal antibodies against human astrocytomas and their reactivity pattern
The establishment of hybridomas after fusion of X63-Ag8.653 mouse myeloma cells and splenocytes from mice hyperimmunized against human astrocytomas is presented. The animals were primed with 5 × 10⁶ chemically modified uncultured or cultured glioma cells. Six weeks after the last immunization step an intrasplenic booster injection was administered, and 3 days later the spleen cells were prepared for fusion experiments. According to the specificity analysis of the generated antibodies, 7 hybridoma products (MUC 7-22, MUC 8-22, MUC 10-22, MUC 11-22, MUC 14-22, MUC 15-22 and MUC 2-63) react with gliomas, neuroblastomas and melanomas as well as with embryonic and fetal cells, but do not recognize non-neurogenic tumors. The selected monoclonal antibodies (McAbs), of IgG1 and IgG2a isotypes, have not been extensively characterized, but they have been demonstrated to react with a panel of glioma cell lines with varying patterns of antigen distribution. Using the McAbs described above on a series of cryosections of glioma biopsies, paraffin sections of the same material, and glioma cultures established from these, variable antigenic profiles among glioma cell populations could be demonstrated. From these results it is evident not only that there is a distinct degree of antigenic heterogeneity among and within brain tumors, but also that the pattern of antigenic expression can change continuously. Some of the glioma-associated antigens recognized by the selected antibodies persist after fixation with methanol/acetone and Karnovsky's fixative and are probably oncoembryonic/oncofetal antigen(s). The data suggest that the use of McAbs recognizing tumor-associated oncofetal antigens in immunohistochemistry facilitates objective typing of intracranial malignancies and precise analysis of fine-needle brain/tumor biopsies in a sensitive and reproducible manner.
National and firm-level drivers of the devolution of HRM decision making to line managers
Multinational companies must understand the influences on responsibility for managing people so that they can manage talent consistently, thus ensuring that it is transferable across locations. We examine the impact of firm- and national-level characteristics on the devolution of HRM decision making to line managers. Our analysis draws on data from 2335 indigenous organizations in 21 countries. At the firm level, we found that where the HR function has higher power, devolution is less likely. At the national level, devolution of decision making to line management is more likely in societies with more stringent employment laws and lower power distance.
Institutional duality and human resource management practice in foreign subsidiaries of multinationals
We examine how institutional context affects the decisions that subsidiaries of multinational corporations (MNCs) make in pursuing particular human resource management (HRM) practices in response to institutional duality. Drawing on Varieties of Capitalism, along with the concept of intermediate conformity, we argue that the use of particular HRM practices by MNC subsidiaries will differ depending on both the combination of home and host institutional contexts, and on the nature of the particular practice under consideration. Using data from a survey of HRM practices in 1196 firms across ten countries, we compare HRM practices in subsidiaries located and headquartered in different combinations of liberal and/or coordinated market economies. Our study suggests that MNC subsidiaries conform only to the most persuasive norms, while exercising their agency to take advantage of the opportunities presented by institutional duality to adopt practices that distinguish them from indigenous competitors.
W3Bcrypt: Encryption as a Stylesheet
While web-based communications (e.g., webmail or web chatrooms) are increasingly protected by transport-layer cryptographic mechanisms, such as the SSL/TLS protocol, there are many situations where even the web server (or its operator) cannot be trusted. The end-to-end (E2E) encryption of data becomes increasingly important in these trust models to protect the confidentiality and integrity of the data against snooping and modification. We introduce W3Bcrypt, an extension to the Mozilla Firefox platform that enables application-level cryptographic protection for web content. In effect, we view cryptographic operations as a type of style to be applied to web content, similar to and along with layout and coloring operations. Among the main benefits of using encryption as a stylesheet are (a) reduced workload on the web server, (b) targeted content publication, and (c) greatly increased privacy. This paper discusses our implementation for Firefox, although the core ideas are applicable to most current browsers.
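The following Python sketch only illustrates the general idea of treating decryption as a "style" applied to marked elements; W3Bcrypt itself is a Firefox extension, and the element class name, key handling, and AES-GCM framing below are assumptions made for illustration, not the paper's actual interface.

```python
# Illustrative sketch: decryption applied as a "style" to marked elements.
# The class name "x-w3bcrypt" and the nonce||ciphertext framing are assumed.
import base64
from html.parser import HTMLParser
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def decrypt_field(key: bytes, blob_b64: str) -> str:
    """Decrypt one base64(nonce || ciphertext) blob with AES-GCM."""
    blob = base64.b64decode(blob_b64)
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode("utf-8")


class CryptoStyler(HTMLParser):
    """Walk the document and 'restyle' encrypted spans into plaintext."""

    def __init__(self, key: bytes):
        super().__init__()
        self.key = key
        self.out = []
        self._decrypt_next = False

    def handle_starttag(self, tag, attrs):
        # Elements whose class carries the (hypothetical) crypto style are
        # flagged so their text content is decrypted on the client side.
        classes = dict(attrs).get("class") or ""
        self._decrypt_next = "x-w3bcrypt" in classes
        self.out.append(self.get_starttag_text())

    def handle_data(self, data):
        if self._decrypt_next and data.strip():
            data = decrypt_field(self.key, data.strip())
            self._decrypt_next = False
        self.out.append(data)

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")
```

Usage would amount to feeding the served HTML through the styler (`styler.feed(html); "".join(styler.out)`), so the server only ever stores and ships ciphertext.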
A wideband sine-wave signal generator for electrochemical impedance spectroscopy
The feasibility of implementing a wideband sine-wave signal generator by combining a direct digital synthesizer with a digital-to-analog converter is demonstrated.
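As an illustration of the approach described above (a direct digital synthesizer driving a DAC), the following sketch shows a phase-accumulator DDS producing sine samples; the clock rate, accumulator width, table size and DAC resolution are assumed values, not parameters of the generator in the abstract.

```python
# Minimal phase-accumulator DDS sketch feeding an N-bit DAC.
import numpy as np

F_CLK = 50e6          # accumulator update rate (assumed)
ACC_BITS = 32         # phase accumulator width
LUT_BITS = 12         # sine lookup-table address width
DAC_BITS = 14         # DAC resolution

SINE_LUT = np.round(
    (2 ** (DAC_BITS - 1) - 1)
    * np.sin(2 * np.pi * np.arange(2 ** LUT_BITS) / 2 ** LUT_BITS)
).astype(int)


def dds_samples(f_out: float, n: int) -> np.ndarray:
    """Generate n DAC codes for a sine of frequency f_out."""
    # The tuning word sets how far the phase advances per clock tick.
    tuning_word = int(round(f_out * 2 ** ACC_BITS / F_CLK))
    phase = (tuning_word * np.arange(n)) % (2 ** ACC_BITS)
    # Only the top LUT_BITS of the accumulator address the sine table.
    return SINE_LUT[phase >> (ACC_BITS - LUT_BITS)]


# With these assumed values the frequency resolution is
# F_CLK / 2**ACC_BITS ≈ 0.012 Hz, which is what makes such a generator
# attractive for sweeping an impedance spectrum over a wide band.
```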
Online Training and Sanitization of AD Systems
In this paper, we introduce novel techniques that enhance the training phase of Anomaly Detection (AD) sensors. Our aim is both to improve detection performance and to protect against attacks that target the training dataset. Our approach is two-pronged: we employ a novel sanitization method for large training datasets that removes attacks and traffic artifacts by measuring their frequency and position inside the dataset. Furthermore, we extend the training phase in the spatial dimension to include model information from other collaborative systems. We demonstrate that by doing so we can protect all the participating systems against targeted training attacks. Another aspect of our system is its ability to adapt and update the normality model when there is a shift in the nature of inspected traffic that reflects actual changes in the back-end servers. Such "on-line" retraining has traditionally been the "Achilles' heel" of AD sensors: they fail to adapt when there is a legitimate deviation in traffic behavior, thereby flooding the operator with false positives. To counter that, we discuss the integration of what we call a shadow sensor with the AD system. This sensor complements our techniques by acting as an oracle that analyzes and classifies the "suspect data" identified by the AD sensor. We show that our techniques can be applied to a wide range of unmodified AD sensors without incurring significant additional computational cost beyond the initial training phase.
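A minimal sketch of the frequency-and-position idea, assuming the training data can be treated as a sequence of hashable items (e.g., payload n-grams): content that appears rarely and only in a narrow region of the dataset is dropped as a likely attack or artifact. The thresholds and region definition are illustrative, not the authors' exact sanitization algorithm.

```python
# Hedged sketch: keep training content that is both frequent and spread
# across the dataset; drop rare, localized bursts.
from collections import defaultdict


def sanitize(items, n_regions=10, min_regions=3, min_count=5):
    """Return the items whose content recurs widely enough to be kept."""
    region_size = max(1, len(items) // n_regions)
    count = defaultdict(int)
    regions = defaultdict(set)
    for i, item in enumerate(items):
        count[item] += 1
        regions[item].add(i // region_size)  # which slice of the dataset it fell in

    def keep(item):
        # Stable normal behaviour recurs; attacks and artifacts tend to be
        # rare or confined to one region of the training window.
        return count[item] >= min_count and len(regions[item]) >= min_regions

    return [item for item in items if keep(item)]
```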
Adaptive Anomaly Detection via Self-Calibration and Dynamic Updating
The deployment and use of Anomaly Detection (AD) sensors often requires the intervention of a human expert to manually calibrate and optimize their performance. Depending on the site and the type of traffic it receives, the operators might have to provide recent and sanitized training data sets, the characteristics of expected traffic (i.e., the outlier ratio), and exceptions or even expected future modifications of the system's behavior. In this paper, we study the potential performance issues that stem from fully automating the AD sensors' day-to-day maintenance and calibration. Our goal is to remove the dependence on a human operator, using an unlabeled, and thus potentially dirty, sample of incoming traffic. To that end, we propose to enhance the training phase of AD sensors with a self-calibration phase, leading to the automatic determination of the optimal AD parameters. We show how this novel calibration phase can be employed in conjunction with previously proposed methods for training data sanitization, resulting in a fully automated AD maintenance cycle. Our approach is completely agnostic to the underlying AD sensor algorithm. Furthermore, the self-calibration can be applied in an online fashion to ensure that the resulting AD models reflect changes in the system's behavior which would otherwise render the sensor's internal state inconsistent. We verify the validity of our approach through a series of experiments in which we compare the manually obtained optimal parameters with the ones computed from the self-calibration phase. Modeling traffic from two different sources, the fully automated calibration shows, in the worst case, a 7.08% reduction in detection rate and a 0.06% increase in false positives when compared to the optimal selection of parameters. Finally, our adaptive models outperform the statically generated ones, retaining the gains in performance from the sanitization process over time.
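A minimal sketch of one way such a self-calibration pass could choose an AD threshold from unlabeled traffic: sweep candidate thresholds and keep the one whose alert rate is closest to an expected outlier ratio. The scoring interface, candidate grid, and target ratio are assumptions for illustration, not the calibration procedure from the paper.

```python
# Hedged sketch: pick the anomaly threshold whose alert rate on an unlabeled
# traffic sample best matches an assumed outlier ratio.
def self_calibrate(scores, target_outlier_ratio=0.01, candidates=None):
    """Choose the threshold whose alert rate best matches the target ratio.

    `scores` are per-item anomaly scores produced by the underlying sensor,
    so the procedure stays agnostic to the sensor's algorithm.
    """
    if candidates is None:
        candidates = sorted(set(scores))
    best_threshold, best_gap = None, float("inf")
    for threshold in candidates:
        alert_rate = sum(s > threshold for s in scores) / len(scores)
        gap = abs(alert_rate - target_outlier_ratio)
        if gap < best_gap:
            best_threshold, best_gap = threshold, gap
    return best_threshold
```

Run periodically over a sliding window of recent traffic, the same loop doubles as the online update that keeps the model consistent with shifts in system behavior.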
Post-Patch Retraining for Host-Based Anomaly Detection
Applying patches, although a disruptive activity, remains a vital part of software maintenance and defense. When host-based anomaly detection (AD) sensors monitor an application, patching the application requires a corresponding update of the sensor's behavioral model. Otherwise, the sensor may incorrectly classify new behavior as malicious (a false positive) or assert that old, incorrect behavior is normal (a false negative). Although the problem of "model drift" is an almost universally acknowledged hazard for AD sensors, relatively little work has been done to understand the process of re-training a "live" AD model, especially in response to legitimate behavioral updates such as vendor patches or repairs produced by a self-healing system. We investigate the feasibility of automatically deriving and applying a "model patch" that describes the changes necessary to update a "reasonable" host-based AD behavioral model (i.e., a model whose structure follows the core design principles of existing host-based anomaly models). We aim to avoid extensive retraining and regeneration of the entire AD model when only parts may have changed, a task that seems especially undesirable after the exhaustive testing necessary to deploy a patch.
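A hedged sketch of the "model patch" idea over a set-based behavioral model (for example, allowed system-call n-grams): the patch records only what changed between the pre- and post-patch models, so the deployed model can be updated without full retraining. The representation and patch format are assumptions, not the authors' mechanism.

```python
# Hedged sketch: a model patch as the set difference between the behavioural
# models before and after a software patch. The post-patch model here would
# come from retraining only the changed components, not the whole application.
def derive_model_patch(old_model: set, patched_model: set) -> dict:
    """Describe what the software patch changed in the behavioural model."""
    return {
        "add": patched_model - old_model,      # behaviour introduced by the patch
        "remove": old_model - patched_model,   # behaviour the patch retired
    }


def apply_model_patch(live_model: set, patch: dict) -> set:
    """Update a deployed model in place of full retraining."""
    return (live_model - patch["remove"]) | patch["add"]
```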
From STEM to SEAD: Speculative Execution for Automated Defense
Most computer defense systems crash the process that they protect as part of their response to an attack. In contrast, self-healing software recovers from an attack by automatically repairing the underlying vulnerability. Although recent research explores the feasibility of the basic concept, self-healing faces four major obstacles before it can protect legacy applications and COTS software. Besides the practical issues involved in applying the system to such software (e.g., not modifying source code), self-healing has encountered a number of problems: knowing when to engage, knowing how to repair, and handling communication with external entities. Our previous work on a self-healing system, STEM, left these challenges as future work. STEM provides self-healing by speculatively executing "slices" of a process. This paper improves STEM's capabilities along three lines: (1) applicability of the system to COTS software (STEM does not require source code, and it imposes a roughly 73% performance penalty on Apache's normal operation); (2) semantic correctness of the repair (we introduce virtual proxies and repair policy to assist the healing process); and (3) creating a behavior profile based on aspects of data and control flow.
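A rough sketch of the repair-policy idea: run a code "slice" speculatively and, if it faults, substitute a policy-chosen return value instead of crashing the process. The policy table and fallback values are hypothetical, and STEM itself operates on binaries rather than on Python callables like these.

```python
# Illustrative sketch of a repair policy guiding how a faulting slice is
# healed; the function names and fallback values are made up for the example.
REPAIR_POLICY = {
    "parse_request": {"on_fault": "return", "value": None},
    "render_page":   {"on_fault": "return", "value": b""},
}


def speculative_call(name, func, *args, **kwargs):
    """Execute func; on failure, fall back to the repair policy for `name`."""
    try:
        return func(*args, **kwargs)
    except Exception:
        policy = REPAIR_POLICY.get(name, {"on_fault": "reraise"})
        if policy["on_fault"] == "return":
            # Error virtualization: behave as if the slice returned an error
            # code the caller already knows how to handle, instead of crashing.
            return policy["value"]
        raise
```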
On the Infeasibility of Modeling Polymorphic Shellcode
Polymorphic malcode remains a troubling threat. The ability of malcode to automatically transform into semantically equivalent variants frustrates attempts to rapidly construct a single, simple, easily verifiable representation. We present a quantitative analysis of the strengths and limitations of shellcode polymorphism and consider its impact on current intrusion detection practice. We focus on the nature of shellcode decoding routines. The empirical evidence we gather helps show that modeling the class of self-modifying code is likely intractable by known methods, including both statistical constructs and string signatures. In addition, we develop and present measures that provide insight into the capabilities, strengths, and weaknesses of polymorphic engines. In order to explore countermeasures to future polymorphic threats, we show how to improve polymorphic techniques and create a proof-of-concept engine expressing these improvements. Our results indicate that the class of polymorphic behavior is too greatly spread and varied to model effectively. Our analysis also supplies a novel way to understand the limitations of current signature-based techniques. We conclude that modeling normal content is ultimately a more promising defense mechanism than modeling malicious or abnormal content.
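One way to make the notion of "spread" concrete, as a hedged illustration rather than one of the paper's actual measures: compare the byte-frequency distributions of samples emitted by a polymorphic engine. A large average pairwise distance suggests the variants share little statistical structure for a signature or content model to capture.

```python
# Illustrative spread measure over decoder variants: mean pairwise Euclidean
# distance between their byte-frequency distributions.
from collections import Counter
from itertools import combinations
from math import sqrt


def byte_distribution(sample: bytes):
    """Normalized frequency of each byte value 0..255 in the sample."""
    counts = Counter(sample)
    total = len(sample)
    return [counts.get(b, 0) / total for b in range(256)]


def spread(variants):
    """Mean distance between byte distributions of all variant pairs."""
    dists = [byte_distribution(v) for v in variants]
    pairs = list(combinations(dists, 2))
    if not pairs:
        return 0.0
    return sum(
        sqrt(sum((p - q) ** 2 for p, q in zip(a, b))) for a, b in pairs
    ) / len(pairs)
```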