Information warfare and influence operations (IWIO) can be described as the deliberate use of information by one party against an adversary population to confuse, mislead, and ultimately influence the actions that the targeted population takes. IWIO is inherently a hostile activity, conducted between parties whose interests are not well aligned.
In the context of US military doctrine, information warfare and influence operations are closely related to information operations and military information support operations. Within US doctrine, however, the focus of these operations is largely tactical. A classic example is psychological operations, such as propaganda broadcasts or leafleting, intended to induce adversary units to fight less hard or to surrender.
By contrast, this post focuses on information warfare and influence operations that seek to affect entire national populations. A basic lesson is that there are no noncombatants: every person in the adversary's society is a legitimate target. Furthermore, cyber-enabled information warfare and influence operations in the 21st century leverage modern computing and communications technologies, such as the internet, to achieve their effects at massive scale.
Three significant factors come into play:
A necessary but not sufficient condition for responding to IWIO is knowing that an adversary is conducting such a campaign. When an adversary uses kinetic weapons, a nation usually knows that it has been attacked, at least if the attack causes significant destruction or death. Detecting a cyber campaign may not be possible at all. The internal operations of computers are invisible to the human eye, and a malfunctioning PC, even when noticed, may be malfunctioning for reasons other than hostile cyber operations (such as user error). An offensive cyber campaign may become apparent if its effects were intended to be noticed, even while its causes remain hidden, as when the campaign seeks to cause physical damage. But an effective cyber campaign may go entirely undetected if its effects were deliberately kept secret, for instance if the campaign was conducted for espionage.
Any sensible defense against information warfare and influence operations, even a partial one, must start from this essential fact: while the velocity and volume of information have increased by orders of magnitude in the past few years, the architecture of the human mind has not changed appreciably in thousands of years, and human beings retain the same perceptual and cognitive limitations they have always had.
Potential defensive measures against IWIO fall into two categories: measures that help people resist the IWIO weapons directed at them, and measures that disrupt, degrade, or expose the adversary's arsenal of IWIO weapons as they are aimed at a target population.
Naturally, societal preferences evolve, and not every change in a society's preferences and orientation is the result of information warfare and influence operations. IWIO, however, is an intensive effort by a foreign adversary (state or non-state) to alter these indigenously driven evolutionary processes and thereby gain outsized influence over the destiny of the target society.
Helping humans resist IWIO:
One approach to remedying the effects of such cognitive biases is to present choices in ways that make it easier for individuals to engage their capacities for rational thought, an approach often described in the psychological literature as "debiasing". For instance, it may be possible to "inoculate" target audiences against fake news. One form of such inoculation, described by John Banas and Stephen Rains, consists of delivering a preliminary message that forewarns the audience of the false claims likely to follow, together with an explicit refutation of those claims.
Tools that make it easy to ascertain the truth or falsity of claims circulating online (whether from IWIO operators or traditional media) are unlikely to change the thinking of committed partisans, but they might facilitate more rational thought among people who have not yet become impervious to fact and reason. Researchers at Indiana University, for example, have demonstrated computational fact-checking tools that help humans quickly assess the accuracy of dubious claims.
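To make the idea of computational fact-checking concrete, the sketch below scores a claim by looking for a short path between its subject and object in a knowledge graph, discounting paths that run through highly connected "hub" nodes. This is a minimal illustration only: the hand-built toy graph and the exact scoring formula are assumptions for the example, not the Indiana University researchers' implementation.

```python
from collections import deque
from math import log

# Toy knowledge graph as an adjacency map over entities.
# Real systems mine graphs like this from structured data at scale;
# this hand-built graph is purely illustrative.
GRAPH = {
    "Barack Obama": {"United States", "Democratic Party"},
    "United States": {"Barack Obama", "Washington, D.C.", "Hawaii"},
    "Democratic Party": {"Barack Obama"},
    "Washington, D.C.": {"United States"},
    "Hawaii": {"United States"},
    "Canada": {"Ottawa"},
    "Ottawa": {"Canada"},
}

def truth_score(subject: str, obj: str) -> float:
    """Score a claim linking `subject` to `obj` by the shortest path
    between them, penalizing paths through high-degree (generic) nodes.
    Returns 0.0 when no connecting path exists."""
    if subject not in GRAPH or obj not in GRAPH:
        return 0.0
    # Breadth-first search, recording each node's predecessor.
    prev = {subject: None}
    queue = deque([subject])
    while queue:
        node = queue.popleft()
        if node == obj:
            break
        for nbr in GRAPH[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    if obj not in prev:
        return 0.0
    # Reconstruct the path, then discount each intermediate hub node.
    path = []
    node = obj
    while node is not None:
        path.append(node)
        node = prev[node]
    score = 1.0
    for node in path[1:-1]:  # intermediate nodes only
        score /= 1.0 + log(len(GRAPH[node]))
    return score

# A directly connected claim scores 1.0; an unsupported one scores 0.0.
print(truth_score("Barack Obama", "United States"))  # 1.0
print(truth_score("Barack Obama", "Ottawa"))         # 0.0
```

A claim supported only through an intermediate hub (for example, "Barack Obama" to "Hawaii" via "United States") receives a score between 0 and 1, reflecting weaker support; a human reviewer would still make the final judgment.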
The author, John Peterson, is a professional writer at a network security company in Malaysia, where he focuses on IT consulting and services. He was born in Fullerton, California, in June 1990, and has collected inspirational paintings as a hobby since childhood.