558 research outputs found

    Transportation Futures: Policy Scenarios for Achieving Greenhouse Gas Reduction Targets, MNTRC Report 12-11

    Get PDF
    It is well established that GHG emissions must be reduced by 50% to 80% by 2050 in order to limit the global temperature increase to 2°C. Achieving reductions of this magnitude in the transportation sector is challenging and requires a multitude of policy and technology options. The research presented here analyzes three scenarios: changes in the perceived price of travel, land-use intensification, and increases in transit. Elasticity estimates are derived using an activity-based travel model for the state of California that is broadly representative of the U.S. The VISION model is used to forecast the technology and fuel changes currently expected to occur in the U.S., providing a life-cycle GHG forecast for the road transportation sector. Results suggest that aggressive policy action is needed, especially pricing policies, along with further progress on vehicle technology. Medium- and heavy-duty vehicles are in particular need of additional fuel- or technology-based GHG reductions.
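
    As a rough illustration of how a pricing scenario of this kind is typically evaluated, the sketch below applies a constant price elasticity of vehicle-miles traveled (VMT) and a life-cycle emission factor; the elasticity value, baseline VMT, and emission factor are illustrative placeholders, not figures from the report.

```python
# Minimal sketch of a price-elasticity scenario calculation.
# All numbers are illustrative placeholders, not values from MNTRC Report 12-11.

def vmt_change_pct(elasticity: float, price_change_pct: float) -> float:
    """Approximate percent change in vehicle-miles traveled for a given
    percent change in the perceived price of travel (point-elasticity form)."""
    return elasticity * price_change_pct

baseline_vmt = 330e9        # annual VMT, miles (illustrative)
emission_factor = 400.0     # g CO2e per vehicle-mile, life cycle (illustrative)
price_elasticity = -0.3     # assumed elasticity of VMT with respect to price

price_increase_pct = 50.0   # scenario: 50% increase in perceived price of travel
dvmt_pct = vmt_change_pct(price_elasticity, price_increase_pct)   # -15%

scenario_vmt = baseline_vmt * (1 + dvmt_pct / 100)
baseline_ghg = baseline_vmt * emission_factor / 1e12     # Mt CO2e
scenario_ghg = scenario_vmt * emission_factor / 1e12     # Mt CO2e

print(f"VMT change: {dvmt_pct:.1f}%")
print(f"GHG: {baseline_ghg:.0f} Mt CO2e -> {scenario_ghg:.0f} Mt CO2e")
```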

    Forecasting Employee Turnover in Large Organizations

    Get PDF
    Researchers and human resource departments have focused on employee turnover for decades. This study developed a methodology for forecasting employee turnover at the organizational and departmental levels in order to shorten the lead time for hiring employees. Various time series modeling techniques were used to identify optimal models for effective employee-turnover prediction based on a large U.S. organization's 11-year monthly turnover data. A dynamic regression model with an additive trend, seasonality, interventions, and an important economic indicator efficiently predicted turnover. Another turnover model predicted both retirement and quitting, incorporating early retirement incentives, demographics, and external economic indicators using the Cox proportional hazards model. A variety of biases in employee-turnover databases, along with modeling strategies and factors, were discussed. A simulation demonstrated the potential impact of sampling biases on predictions. A key factor in retirement was achieving full vesting, but employees who did not retire immediately maintained a reduced hazard after qualifying for retirement. The model also showed that external economic indicators related to S&P 500 real earnings were beneficial in predicting retirement, while dividends were most associated with quitting behavior. The third model examined voluntary turnover factors using logistic regression and forecasted employee tenure using a decision tree for four research and development departments. Company job title, gender, ethnicity, age, and years of service affected voluntary turnover behavior. However, employees with higher salaries and more work experience were more likely to quit than those with lower salaries and less experience. The results also showed that college major and education level were not associated with R&D employees' decision to quit.
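
    A minimal sketch of the dynamic-regression idea appears below: monthly turnover modeled with a trend, annual seasonality, and an external economic indicator. The column names, model orders, and the use of statsmodels' SARIMAX are assumptions standing in for the authors' exact specification.

```python
# Sketch: dynamic regression for monthly turnover with a trend, seasonality,
# and an exogenous economic indicator. Column names and model orders are
# assumptions, not the specification used in the study.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("turnover_monthly.csv", parse_dates=["month"], index_col="month")
# Hypothetical columns: "turnover" (count of leavers), "sp500_real_earnings".

model = SARIMAX(
    df["turnover"],
    exog=df[["sp500_real_earnings"]],
    order=(1, 0, 0),               # AR(1) errors
    seasonal_order=(1, 0, 0, 12),  # annual seasonality in monthly data
    trend="ct",                    # constant plus linear trend
)
fit = model.fit(disp=False)

# Forecast the next 12 months, holding the indicator at its last observed value.
future_exog = pd.DataFrame(
    {"sp500_real_earnings": [df["sp500_real_earnings"].iloc[-1]] * 12}
)
print(fit.forecast(steps=12, exog=future_exog))
```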

    Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient

    Full text link
    Recently, methods for skeleton-based human activity recognition have been shown to be vulnerable to adversarial attacks. However, these attack methods require either full knowledge of the victim model (i.e., white-box attacks), access to training data (i.e., transfer-based attacks), or frequent model queries (i.e., black-box attacks). All of these requirements are highly restrictive, raising the question of how detrimental the vulnerability really is. In this paper, we show that the vulnerability indeed exists. To this end, we consider a new attack task in which the attacker has no access to the victim model, the training data, or the labels, for which we coin the term hard no-box attack. Specifically, we first learn a motion manifold on which we define an adversarial loss to compute a new gradient for the attack, named the skeleton-motion-informed (SMI) gradient. Our gradient contains information about the motion dynamics, unlike existing gradient-based attack methods that compute the loss gradient assuming each dimension of the data is independent. The SMI gradient can augment many gradient-based attack methods, leading to a new family of no-box attack methods. Extensive evaluation and comparison show that our method poses a real threat to existing classifiers. They also show that the SMI gradient improves the transferability and imperceptibility of adversarial samples in both no-box and transfer-based black-box settings. Comment: Camera-ready version for ICCV 202
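
    A heavily simplified sketch of the manifold-guided gradient idea is given below; the manifold_encoder, the adversarial loss, and the step sizes are placeholders rather than the paper's SMI formulation, and only illustrate computing an attack gradient without any victim model.

```python
# Sketch: gradient-based perturbation of a skeleton sequence guided by a
# learned motion manifold. The encoder, loss, and step sizes are placeholders,
# not the SMI gradient defined in the paper.
import torch

def no_box_attack(motion, manifold_encoder, steps=50, alpha=0.005, eps=0.01):
    """motion: tensor [frames, joints, 3]; manifold_encoder: a pretrained module
    mapping motion sequences to a latent space (no victim model is used)."""
    x_adv = motion.clone().detach().requires_grad_(True)
    with torch.no_grad():
        z_clean = manifold_encoder(motion.unsqueeze(0))

    for _ in range(steps):
        z_adv = manifold_encoder(x_adv.unsqueeze(0))
        # Adversarial loss defined purely on the manifold: push the latent
        # code away from the clean motion's code (no classifier gradients).
        loss = -torch.norm(z_adv - z_clean)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv -= alpha * grad.sign()              # small signed step
            x_adv.clamp_(motion - eps, motion + eps)  # keep perturbation small
    return x_adv.detach()
```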

    "This is my unicorn, Fluffy": Personalizing frozen vision-language representations

    Full text link
    Large Vision & Language models pretrained on web-scale data provide representations that are invaluable for numerous V&L problems. However, it is unclear how they can be used to reason about user-specific visual concepts in unstructured language. This problem arises in multiple domains, from personalized image retrieval to personalized interaction with smart devices. We introduce a new learning setup called Personalized Vision & Language (PerVL), with two new benchmark datasets for retrieving and segmenting user-specific "personalized" concepts "in the wild". In PerVL, one should learn personalized concepts (1) independently of the downstream task, (2) in a way that allows a pretrained model to reason about them with free language, and (3) without requiring personalized negative examples. We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts. The model can then reason about them simply by using them in a sentence. We demonstrate that our approach learns personalized visual concepts from a few examples and can effectively apply them in image retrieval and semantic segmentation using rich textual queries.
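
    A schematic sketch of the vocabulary-extension idea follows; the pretrained model interface (encode_image, encode_text_with_embedding, token_embedding) is hypothetical and only illustrates learning a single new embedding from a few images while the backbone stays frozen.

```python
# Sketch: learn one new word embedding for a personalized concept while the
# pretrained vision-language backbone stays frozen. The model interface used
# here (encode_image / encode_text_with_embedding / token_embedding) is
# hypothetical, not a real library API.
import torch
import torch.nn.functional as F

def learn_concept_embedding(model, concept_images, steps=200, lr=1e-2):
    """concept_images: tensor [N, 3, H, W] with a few photos of the concept."""
    for p in model.parameters():          # freeze the backbone
        p.requires_grad_(False)

    emb_dim = model.token_embedding.embedding_dim
    new_emb = torch.randn(emb_dim, requires_grad=True)   # embedding for the new word
    opt = torch.optim.Adam([new_emb], lr=lr)

    with torch.no_grad():
        img_feats = F.normalize(model.encode_image(concept_images), dim=-1)

    for _ in range(steps):
        # Encode "a photo of <concept>" with the learned embedding spliced in
        # place of the placeholder token (hypothetical helper).
        txt_feat = model.encode_text_with_embedding("a photo of <concept>", new_emb)
        txt_feat = F.normalize(txt_feat, dim=-1)
        loss = -(img_feats @ txt_feat).mean()   # maximize image-text similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return new_emb.detach()
```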

    One size does not fit all: A game-theoretic approach for dynamically and effectively screening for threats

    Get PDF
    An effective way of preventing attacks in secure areas is to screen for threats (people, objects) before entry, e.g., screening of airport passengers. However, screening every entity at the same level may be both ineffective and undesirable. The challenge then is to find a dynamic approach for randomized screening, allowing for more effective use of limited screening resources, leading to improved security. We address this challenge with the following contributions: (1) a threat screening game (TSG) model for general screening domains; (2) an NP-hardness proof for computing the optimal strategy of TSGs; (3) a scheme for decomposing TSGs into subgames to improve scalability; (4) a novel algorithm that exploits a compact game representation to efficiently solve TSGs, providing the optimal solution under certain conditions; and (5) an empirical comparison of our proposed algorithm against the current state-of-the-art optimal approach for large-scale game-theoretic resource allocation problems.
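
    The resource-allocation tension behind this work can be illustrated with a toy linear program (below); it is far simpler than the threat screening games solved in the paper and ignores the adversary's strategic response, with all counts, capacities, and effectiveness values invented for illustration.

```python
# Toy illustration of screening as resource allocation: split passenger
# categories across screening teams to maximize expected detections subject
# to team capacities. This is NOT the TSG algorithm from the paper.
import numpy as np
from scipy.optimize import linprog

arrivals = np.array([100, 50])        # passengers per risk category
capacity = np.array([80, 90])         # screening-team capacities
risk = np.array([0.3, 0.8])           # prior threat weight per category
eff = np.array([[0.9, 0.7],           # eff[team, category] = detection prob.
                [0.6, 0.5]])

# Decision variables x[team, category] = passengers of that category assigned
# to that team, flattened row-major into a length-4 vector.
c = -(eff * risk).flatten()           # linprog minimizes, so negate the objective

A_eq = [[1, 0, 1, 0],                 # every category-0 passenger is screened
        [0, 1, 0, 1]]                 # every category-1 passenger is screened
b_eq = arrivals

A_ub = [[1, 1, 0, 0],                 # team-0 capacity
        [0, 0, 1, 1]]                 # team-1 capacity
b_ub = capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print(res.x.reshape(2, 2))            # screening plan (interpretable as a randomized policy)
```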

    Detecting Deepfakes Without Seeing Any

    Full text link
    Deepfake attacks, the malicious manipulation of media containing people, are a serious concern for society. Conventional deepfake detection methods train supervised classifiers to distinguish real media from previously encountered deepfakes. Such techniques can only detect deepfakes similar to those previously seen, but not zero-day (previously unseen) attack types. As current deepfake generation techniques are changing at a breathtaking pace, new attack types are proposed frequently, making this a major issue. Our main observations are that: (i) in many effective deepfake attacks, the fake media must be accompanied by false facts, i.e., claims about the identity, speech, motion, or appearance of the person. For instance, when impersonating Obama, the attacker explicitly or implicitly claims that the fake media show Obama; (ii) current generative techniques cannot perfectly synthesize the false facts claimed by the attacker. We therefore introduce the concept of "fact checking", adapted from fake news detection, for detecting zero-day deepfake attacks. Fact checking verifies that the claimed facts (e.g., the identity is Obama) agree with the observed media (e.g., is the face really Obama's?), and can thus differentiate between real and fake media. Consequently, we introduce FACTOR, a practical recipe for deepfake fact checking, and demonstrate its power in critical attack settings: face swapping and audio-visual synthesis. Although it is training-free, relies exclusively on off-the-shelf features, is very easy to implement, and never sees any deepfakes, it achieves better than state-of-the-art accuracy. Comment: Our code is available at https://github.com/talreiss/FACTOR
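
    A minimal sketch of the fact-checking idea in the face-swapping setting appears below: compare the identity claimed for the media against off-the-shelf face embeddings of the observed media. The embed_faces callable and the threshold are placeholders, not the exact FACTOR recipe.

```python
# Sketch: "fact checking" a claimed identity against the observed media using
# off-the-shelf face embeddings. embed_faces() stands in for any pretrained
# face-recognition encoder; the threshold is illustrative only.
import numpy as np

def is_probably_fake(claimed_reference_imgs, observed_frames, embed_faces,
                     threshold=0.6):
    """claimed_reference_imgs: genuine photos of the claimed person.
    observed_frames: face crops from the media under test.
    embed_faces: callable returning L2-normalized embeddings of shape [N, D]."""
    ref = embed_faces(claimed_reference_imgs)        # [R, D]
    obs = embed_faces(observed_frames)               # [F, D]
    # Cosine similarity of each observed frame to its closest reference photo.
    sims = (obs @ ref.T).max(axis=1)                 # [F]
    score = float(np.mean(sims))
    # If the observed faces do not match the claimed identity well, the
    # claimed fact disagrees with the media: flag as likely fake.
    return score < threshold, score
```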

    Rational Cybersecurity for Business

    Get PDF
    Use the guidance in this comprehensive field guide to gain the support of your top executives for aligning a rational cybersecurity plan with your business. You will learn how to improve working relationships with stakeholders in complex digital businesses, IT, and development environments. You will know how to prioritize your security program and motivate and retain your team. Misalignment between security and your business can start at the top, in the C-suite, or happen at the line-of-business, IT, development, or user level. It has a corrosive effect on any security project it touches. But it does not have to be like this. Author Dan Blum presents valuable lessons learned from interviews with over 70 security and business leaders. You will discover how to successfully solve issues related to risk management, operational security, privacy protection, hybrid cloud management, security culture and user awareness, and communication challenges. This open access book presents six priority areas to focus on to maximize the effectiveness of your cybersecurity program: risk management, control baseline, security culture, IT rationalization, access control, and cyber-resilience. Common challenges and good practices are provided for businesses of different types and sizes, and more than 50 specific keys to alignment are included.
    What You Will Learn
    Improve your security culture: clarify security-related roles, communicate effectively to businesspeople, and hire, motivate, or retain outstanding security staff by creating a sense of efficacy
    Develop a consistent accountability model, information risk taxonomy, and risk management framework
    Adopt a security and risk governance model consistent with your business structure or culture, manage policy, and optimize security budgeting within the larger business unit and CIO organization IT spend
    Tailor a control baseline to your organization’s maturity level, regulatory requirements, scale, circumstances, and critical assets
    Help CIOs, Chief Digital Officers, and other executives to develop an IT strategy for curating cloud solutions and reducing shadow IT, building up DevSecOps and Disciplined Agile, and more
    Balance access control and accountability approaches, leverage modern digital identity standards to improve digital relationships, and provide data governance and privacy-enhancing capabilities
    Plan for cyber-resilience: work with the SOC, IT, business groups, and external sources to coordinate incident response, recover from outages, and come back stronger
    Integrate your learnings from this book into a quick-hitting rational cybersecurity success plan
    Who This Book Is For
    Chief Information Security Officers (CISOs) and other heads of security, security directors and managers, security architects and project leads, and other team members providing security leadership to your business

    Maritime Boundary Disputes as a Constraint to the Commercialization of Ocean Thermal Energy Conversion in the Caribbean Sea

    Get PDF
    This thesis is an examination of the effect that the commercialization of Ocean Thermal Energy Conversion (OTEC) will have on the delimitation of maritime boundaries in the Caribbean, and the extent to which boundary disputes will reduce the area available for OTEC facility deployment there. The first chapter is a discussion of OTEC and the prospects for commercialization in the next ten to fifteen years. The second chapter is an analysis of the development of the international law of the sea, and how that evolution has stabilized in recent years, establishing a jurisdictional regime in which coastal states control the resources within 200 miles of their shores. The third chapter demonstrates how the international community has been unsuccessful at establishing a systematic body of rules for the delimitation of maritime boundaries. This lack of agreement creates the potential for boundary disputes to persist for many years, especially where the boundary results in the apportionment of valuable resources. The implications of these developments for the commercialization of OTEC in the Caribbean in particular are discussed in the fourth and fifth chapters. In the Caribbean, which is endowed throughout with an OTEC resource, no maritime space lies farther than 200 miles from some continental or insular territory. As a result, the entire region will fall within zones of national jurisdiction. As many as 60 maritime boundaries will have to be delimited, many of which will require the consideration of a unique or special circumstance such as the presence of a remote island or an unusual curvature of the coastline. The issues are complicated in some cases by disputed titles to sovereignty over small, uninhabited, or sparsely populated islands.

    Systematic Procedures to Determine Incentive / Disincentive Dollar Amounts for Highway Transportation Construction Projects, Research Report 11-22

    Get PDF
    The Federal Highway Administration has encouraged state transportation agencies to implement Incentive/Disincentive (I/D) contracting provisions for early project completion. Although general guidelines for determining the I/D dollar amount for a project are available, there is no systematic, practical tool in use to determine optimum I/D dollar amounts that accounts for road user cost, agency cost, the contractor's acceleration cost, and the contractor's cost savings. Therefore, systematic procedures and models to assist project planners and engineers in determining an appropriate I/D dollar amount are essential to optimizing the use of I/D contracting techniques. This research began with a literature review related to the determination of daily I/D dollar amounts. Caltrans I/D project data were then collected and evaluated. Project performance data were analyzed with regard to project outcomes in two key areas: project time and project cost. Statistical analyses were performed to identify the impact of the I/D dollar amount on project time and cost performance. Using the Construction Analysis for Pavement Rehabilitation Strategies (CA4PRS) software, Caltrans I/D projects were analyzed to illustrate three different levels of CA4PRS implementation for calculating I/D dollar amounts. Based on the results of the I/D project case studies, systematic procedures to determine appropriate I/D dollar amounts were developed using the CA4PRS schedule-traffic-cost integration process for the new I-5 rehabilitation project in LA. The proposed procedures were applied to a typical highway pavement rehabilitation project using hot mix asphalt (HMA) materials. Further research is needed to apply the proposed model to other types of highway projects, with adjustments for the type of project.
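
    As a toy illustration of the trade-off that such procedures formalize (all dollar figures and the simple capping rule below are assumptions, not the report's CA4PRS-based procedure), a daily incentive should at least offset the contractor's acceleration cost while not exceeding the daily road user and agency cost avoided by finishing early.

```python
# Toy illustration of bounding a daily incentive/disincentive (I/D) amount.
# The figures and the simple capping rule are assumptions for illustration,
# not the CA4PRS-based procedure developed in the report.

daily_road_user_cost = 120_000   # $/day of closure-related delay cost to the public
daily_agency_overhead = 8_000    # $/day of agency engineering/inspection cost
contractor_accel_cost = 60_000   # contractor's extra cost per day saved
days_saved = 10                  # schedule days the accelerated plan saves

# Benefit to the public and agency for each day the project finishes early.
daily_benefit = daily_road_user_cost + daily_agency_overhead

# A daily incentive must at least offset the contractor's acceleration cost to
# motivate early completion, and should not exceed the daily benefit.
if contractor_accel_cost >= daily_benefit:
    print("Acceleration is not cost-effective; no incentive recommended.")
else:
    daily_incentive = min(daily_benefit, contractor_accel_cost * 1.2)  # modest margin
    print(f"Daily I/D amount: ${daily_incentive:,.0f}")
    print(f"Total incentive if {days_saved} days saved: "
          f"${daily_incentive * days_saved:,.0f}")
    print(f"Net public benefit: ${(daily_benefit - daily_incentive) * days_saved:,.0f}")
```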

    Potential of microbiome-based solutions for agrifood systems

    Get PDF
    Host-associated microbiomes are central to food production systems and to human nutrition and health. Harnessing the microbiome may help increase food and nutrient security, enhance public health, mitigate climate change, and reduce land degradation. Although several microbiome solutions are currently under development or commercialized in the agrifood, animal nutrition, biotechnology, diagnostics, pharmaceutical, and health sectors, fewer products than expected have been successfully commercialized beyond food processing, and fewer still have achieved wider adoption by farming, animal husbandry, and other end-user communities. This raises concerns about the translatability of microbiome research into practical applications. The inconsistent efficiency and reliability of microbiome solutions are major constraints on their commercialization and further development, and demand urgent attention.