Disinformation

Relating Credibility to Writing Style, Emotion, and Scope of Spread of Disinformation

Abstract:

This study focuses on Taiwan, a Chinese-speaking country suffering from disinformation attacks. To explore the situation in Taiwan fully, this study adopts a victim-oriented approach and addresses the following question: which factors determine the credibility of disinformation in Taiwan? The results of an exit-poll survey (n=892) and a series of behavioral experiments (n=86) indicate that, when countering disinformation, regulations focused only on the transparency of the source may have little impact, since the source is not the main variable determining credibility.

Ambiguous Self-Induced Disinformation (ASID) Attacks: Weaponizing a Cognitive Deficiency

Abstract:

Humans quickly and effortlessly impose context onto ambiguous stimuli, as demonstrated through psychological projective testing and ambiguous figures. This feature of human cognition may be weaponized as part of an information operation. Such Ambiguous Self-Induced Disinformation (ASID) attacks would employ the following elements: the introduction of a culturally consistent narrative, the presence of ambiguous stimuli, the motivation for hypervigilance, and a social network. ASID attacks represent a low-risk, low-investment tactic for adversaries with the potential for significant reward, making this an attractive option for information operations within the context of grey-zone conflicts.

Information Warfare: Leveraging the DMMI Matrix Cube for Risk Assessment

Abstract:

This paper presents the DMMI Matrix Cube and demonstrates its use in assessing risk in the context of information warfare. By delineating and ordinating the concepts of disinformation, misinformation, malinformation, and information, its purpose is to gauge a communication’s intention to cause harm and its likelihood of success; together, these define the severity of weaponised information, such as that employed within sophisticated information operations. The likelihood of the (information) risk is determined by the intention to harm, the apparent veracity of the information, and the probability of its occurrence. As an exemplar, COVID-19 anti-vaccine campaigns are mapped to the DMMI Matrix Cube, and recommendations are offered based on stakeholder needs, interests, and objectives.

False Information as a Threat to Modern Society: A Systematic Review of False Information, Its Impact on Society, and Current Remedies

Abstract:

False information, and by extension misinformation, disinformation, and fake news, is an ever-growing concern to modern democratic societies, which value the freedom of information alongside the right of individuals to express their opinions freely. This paper focuses on misinformation, with the aim of providing a collation of current research on the topic and a discussion of future research directions.

Social Cybersecurity: A Policy Framework for Addressing Computational Propaganda

Abstract:

After decades of Internet diffusion, the geopolitical and information threats posed by cyberspace have never been greater. While distributed denial-of-service (DDoS) attacks, email hacks, and malware are concerns, nuanced online strategies for psychological influence, including state-sponsored disinformation campaigns and computational propaganda, pose threats that democracies struggle to counter. Indeed, Western cybersecurity is failing to address the perspective of Russia’s ‘information security’: manipulation of the user as much as of the network. Grounded in computational social science, this paper argues that cybersecurity should adopt more proactive social and cognitive (non-kinetic) approaches to cyber and information defense. Doing so protects the cognitive, attitudinal, and behavioral capacities required for a democracy to function by preventing psychological mechanisms, such as confirmation bias and affective polarization, from triggering selective exposure, echo chambers, in-group tribalization, and out-group threat labelling.

Machine Intelligence to Detect, Characterise, and Defend against Influence Operations in the Information Environment

Abstract:

Deceptive content, whether misleading, falsified, or fabricated, is routinely created and spread online with the intent to sow confusion and widen political and social divides. This study presents a comprehensive overview of the content-intelligence capabilities of WatchOwl (https://watchowl.pnnl.gov/) to detect, describe, and defend against information operations, using Twitter as an example social platform, in order to explain the diffusion of misleading content and to enable those charged with defending against such manipulation to counter it. We first present deep learning models for misinformation and disinformation detection in multilingual and multimodal settings, followed by psycho-linguistic analysis across broad deception categories.

Influence Operations & International Law

Abstract: 

There is no treaty or specifically applicable customary international law that deals squarely with ‘Influence Operations’ (IO). Despite this, there are a number of discrete areas of international law that nonetheless apply indirectly to regulate this activity. These principally relate to the Use of Force (Jus ad Bellum), International Human Rights Law, and the Law of Armed Conflict. Influence Operations are presumptively lawful in each of these three areas provided that such activities do not cross relatively high thresholds of prohibition. In the event that an IO does cross a prohibition set by international law, there are a number of responses available to a targeted State.

Understanding and Assessing Information Influence and Foreign Interference

Abstract: 

The information influence framework was developed to identify and to assess hostile, strategy-driven, state-sponsored information activities. This research proposes and tests an analytical approach and assessment tool called information influence and interference to measure changes in the level of strategy-driven, state-sponsored information activities by the timeliness, specificity, and targeted nature of communications as well as the dissemination tactics of publicly available information. 

Disinformation in Hybrid Warfare: The Rhizomatic Speed of Social Media in the Spamosphere

Abstract:

In this paper, two case studies are analysed, namely Finland’s Rapid Reaction Force and the arrest of a Russian citizen in Finland at the request of U.S. officials. A so-called rhizomatic focus (Deleuze and Guattari 1983) is adopted to assess social networking spam and the implications that this phenomenon has for interaction in security cases. In both case studies, the respective timeline of events and the social media impacts on the rhizomatic ‘spam’ information context are analysed.

Twitter as a Vector for Disinformation

Abstract:

Twitter is a social network that represents a powerful information channel with the potential to be a useful vector for disinformation. This paper examines the structure of the Twitter social network and how this structure has facilitated the passing of disinformation, both accidental and deliberate. Examples from recent events of Twitter’s use as an information channel are examined. The possible effects of Twitter disinformation on the information sphere are explored, as well as the defensive responses users are developing to protect against tainted information.

Journal of Information Warfare

The definitive publication for the best and latest research and analysis on information warfare, information operations, and cyber crime. Available in traditional hard copy or online.


