Machine Learning

Human Rights and Artificial Intelligence: A Universal Challenge


As artificially intelligent systems benefit citizens around the globe, many ethical questions remain about the intrusion of AI into every aspect of our private and professional lives. This paper raises awareness of the unprecedented challenge that governments and private industry face in managing these complex systems within an ecosystem that includes regulators, markets, and special interests.

Adversarial Attack’s Impact on Machine Learning Model in Cyber-Physical Systems


A deficiency of correctly implemented, robust defences leaves Internet of Things devices vulnerable to cyber threats such as adversarial attacks. A perpetrator can use adversarial examples when attacking Machine Learning (ML) models used in a cloud data platform service. Adversarial examples are malicious inputs to ML models that produce erroneous model outputs while appearing to be unmodified. This kind of attack can fool the classifier and can prevent ML models from generalizing well and from learning high-level representations; instead, the ML model learns superficial dataset regularities. This study focuses on investigating, detecting, and preventing adversarial attacks against a cloud data platform in the cyber-physical context.
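To make the idea of an adversarial example concrete, the following is a minimal sketch of the widely known Fast Gradient Sign Method applied to a toy logistic-regression classifier. The classifier, its weights, and the data here are all illustrative placeholders, not the models studied in the paper; the point is only that a perturbation bounded to a small amount per feature can push the model's score toward the wrong class.

```python
import numpy as np

# Hypothetical linear classifier: sigmoid(w @ x + b). The weights and
# inputs are random placeholders for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps=0.2):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the loss, bounded by eps per feature."""
    # For binary cross-entropy on a linear model, the gradient of the
    # loss with respect to x is (p - y) * w.
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=8)
y = 1.0 if predict(x) >= 0.5 else 0.0  # the model's own label for x
x_adv = fgsm(x, y)

# Each feature moves by at most eps, yet the score shifts toward the
# opposite class.
print(np.max(np.abs(x_adv - x)))
print(predict(x), predict(x_adv))
```

On real image or sensor data the same bounded perturbation is typically imperceptible to a human, which is what makes the attack dangerous in practice.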

Attack Scenarios in Industrial Environments and How to Detect Them: A Roadmap


Cyberattacks on industrial companies have increased in recent years. The Industrial Internet of Things increases production efficiency at the cost of an enlarged attack surface. Physical separation of production networks has fallen prey to the paradigm of interconnectivity presented by the Industrial Internet of Things. This leads to an increased demand for industrial intrusion detection solutions. There are, however, challenges in implementing industrial intrusion detection. Hardly any data sets that can be used to evaluate intrusion detection algorithms are publicly available. The biggest threat to industrial applications arises from state-sponsored and criminal groups.

Moving Big-Data Analysis from a ‘Forensic Sport’ to a ‘Contact Sport’ Using Machine Learning and Thought Diversity


Data characterization, trending, correlation, and sense making are almost always performed after the data is collected. As a result, big-data analysis is an inherently forensic (after-the-fact) process. In order for network defenders to be more effective in the big-data collection, analysis, and intelligence reporting mission space, first-order analysis (initial characterization and correlation) must be a contact sport—that is, it must happen at the point and time of contact with the data—on the sensor. This paper will use actionable examples (1) to advocate for running Machine-Learning (ML) algorithms on the sensor, as doing so results in more timely, more accurate (fewer false positives), automated, scalable, and usable analyses; and (2) to discuss why establishing thought-diverse (variety of opinions, perspectives, and positions) analytic teams to perform and produce analysis will not only result in more effective collection, analysis, and sense making, but also increase network defenders’ ability to counter and/or neuter adversaries’ ability to deny, degrade, and destabilize U.S. networks.
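The "contact sport" idea—scoring each record at the moment the sensor sees it, rather than after bulk collection—can be sketched with a simple streaming detector. The exponentially weighted mean/variance scheme below is an illustrative stand-in for whatever ML model a sensor would actually run; the class name, threshold, and traffic values are all assumptions for the example.

```python
import math

class StreamingAnomalyDetector:
    """Illustrative first-order analysis on the sensor: each record is
    scored as it arrives, using exponentially weighted running
    statistics, so no after-the-fact (forensic) pass is needed."""

    def __init__(self, alpha=0.1, threshold=5.0):
        self.alpha = alpha          # smoothing factor for the running stats
        self.threshold = threshold  # flag records this many std-devs out
        self.mean = None
        self.var = 0.0

    def score(self, value):
        if self.mean is None:       # first observation seeds the stats
            self.mean = value
            return 0.0
        # Score against the statistics seen so far...
        z = abs(value - self.mean) / math.sqrt(self.var) if self.var > 0 else 0.0
        # ...then fold the new observation into the running mean/variance.
        diff = value - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return z

    def is_anomalous(self, value):
        return self.score(value) > self.threshold

detector = StreamingAnomalyDetector()
traffic = [100, 101, 100, 99, 100, 101, 100, 500]  # e.g. packets/sec
flags = [detector.is_anomalous(v) for v in traffic]
# Only the final 500-packet spike is flagged; the small fluctuations
# stay below the threshold.
```

Because the detector keeps only constant-size state per stream, the same logic can run on resource-constrained sensor hardware, which is the practical argument for moving first-order analysis to the point of contact.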

Journal of Information Warfare

The definitive publication for the best and latest research and analysis on information warfare, information operations, and cyber crime. Available in traditional hard copy or online.

Get in touch

  • Journal of Information Warfare
    21 North Broad Street
    Suite 2-H
    Luray, VA 
  • 757.581.9550