Introduction

Research background

Artificial intelligence technology has been widely applied in various domains, such as multi-objective decision analysis [1], image fusion [2], multi-criteria decision making [3] and others [4, 5]. An artificial intelligence system often needs to process vast amounts of information that is inherently uncertain [6]. Researchers have proposed numerous theories for processing and modeling uncertain information, such as intuitionistic fuzzy set theory [7, 8], Z-number theory [9, 10], evidential reasoning [11, 12], probability theory [13], evidence theory [14, 15], random permutation sets (RPS) theory [16, 17] and others [18, 19]. These theories are widely applied in information fusion [20, 21], pattern classification [22, 23], complex network analysis [24, 25] and other areas [26, 27, 28].

Among these theories, the essence of probability theory, evidence theory, and random permutation sets theory is similar, as they all involve the allocation of belief within specific event spaces. Probability theory, evidence theory, and random permutation sets represent belief assignments by the probability distribution (PD), the basic probability assignment (BPA), and the permutation mass (PM) function, respectively. In probability theory, belief is allocated to independent mutually exclusive events, while evidence theory expands the belief allocation space to encompass both independent mutually exclusive events and their combinations. Random permutation sets theory, initially proposed by Deng [17], is a further extension of evidence theory [16]. It simultaneously considers the random combinations and permutation orders of events, enabling a more refined modeling of uncertain information. Moreover, evidence theory and random permutation sets theory can degenerate into probability theory under certain conditions. Due to its excellent knowledge modeling capabilities, random permutation sets theory has gradually become a hot topic in recent research.

Besides, the representation and processing of information remain an open issue. Negation is an important perspective on information representation. Firstly, it provides a view akin to the opposite of a piece of information, which may facilitate the discovery of hidden, higher-order and deeper information. For instance, demonstrating the validity of a proposition can be challenging, whereas disproving it requires only a single counterexample; this concept is exemplified in mathematics by the principle of "proof by contradiction". Secondly, an uncertain proposition can be analyzed from both positive and negative aspects, and the conclusions drawn from the two perspectives can corroborate each other. Thirdly, the uncertainty of a piece of information can also be assessed by measuring the discrepancy (conflict) between the information itself and its negation: the greater the discrepancy, the lower the uncertainty (fuzziness) associated with that information. The process of "negation" can thus be seen as a bridge from the positive aspect of an event to its negative aspect. Zadeh formally introduced the concept of negation in probability theory on his blog, sparking widespread interest among researchers. Smets employed matrix methods to investigate how to determine the negation of belief functions [29]. Yager introduced an approach to obtain the negation of a probability distribution with maximal entropy [30]. Zhang et al. extended Yager's negation method from the aspect of Tsallis entropy [31]. Since evidence theory can be considered a generalization of probability theory, inspired by Yager's work, Yin et al. applied the concept of negation to the basic probability assignment (BPA) in evidence theory [32]. Luo et al. introduced a novel definition of negation BPA by using a matrix operator [33]. Under the quantum model of evidence theory, Xiao et al. studied the negation of the quantum mass function [34]. Liu et al. proposed the negation of discrete Z-numbers based on the combination of probability and fuzziness [35].

Motivation and contribution of this work

As previously mentioned, similar to probability theory and evidence theory, random permutation sets theory is a method for modeling uncertain information. However, the negation of the permutation mass function has not been studied. To address this issue and thereby enrich the theoretical framework of random permutation sets, the definition of the negation permutation mass function is proposed in this paper, providing a new approach to represent and process knowledge based on random permutation sets theory. In addition, based on Chen et al.'s method in [36] and the maximum entropy of random permutation sets in [37], the change in the entropy value and in the dissimilarity measure of the permutation mass function after each iteration of the negation process is presented. An example illustrates that, during information fusion, analysis can be conducted simultaneously from the perspective of the information itself and from that of its negation, which facilitates verifying the rationality of the fusion result. Additionally, as previously stated, the difference between a proposition and its negation can reflect the degree of uncertainty of the proposition to some extent. Based on this concept, this paper first defines a negation-based uncertainty measure \({H_N}\) for PM. Subsequently, a novel information fusion method is proposed based on \({H_N}\). A comparison with existing information fusion approaches substantiates the scientific validity of the proposed method.

Organization of this work

The rest of this work is organized as follows. In “Preliminaries”, the related preliminaries are briefly presented. In “The negation of permutation mass function”, the negation of the permutation mass function is proposed. The convergence, entropy and dissimilarity of the permutation mass function during the negation process are analyzed in “Uncertainty of permutation mass function in negation operation”. Two applications in information fusion based on the proposed negation of PM are presented in “Uncertainty measure for RPS using the negation of PM and its application”. Finally, “Conclusion” concludes this work.

Preliminaries

Dempster–Shafer evidence theory

Frame of discernment (FOD)

In Dempster–Shafer evidence theory [14, 15], the frame of discernment is a set containing all observable events. Assuming there are n mutually exclusive events, the corresponding FOD can be represented as \(\varPhi =\left\{ \varphi _1, \varphi _2, \cdots \varphi _n\right\} \). The power set of \(\varPhi \) is defined as:

$$\begin{aligned} 2^{\varPhi } = \{\emptyset , \{\varphi _1\},\ ...\ , \{\varphi _n\}, \{\varphi _1, \varphi _2\},\ ...\ , \{\varphi _1,\ ...\ , \varphi _n\}\}. \end{aligned}$$
(1)

The set \({2^\varPhi }\) contains all possible combinations of \({\varphi _i} \in \varPhi \), \(i = 1,2,3,...,n\), and its cardinality is \({2^n }\).

Basic probability assignment (BPA)/mass function

The basic probability assignment (BPA) or the mass function is a mapping m: \( 2^{\varPhi } \rightarrow [0,1] \) where \(m(\emptyset )=0\), \(\sum \nolimits _{A \in {2^\varPhi }} {m\left( A \right) = 1} \). If \(m(A) > 0\), then A is called a focal element.

Random permutation sets theory

Permutation event space

Similar to evidence theory, suppose the set \(\varTheta \) contains n mutually exclusive events, denoted as \(\varTheta = \left\{ {{\gamma _1},{\gamma _2}, \cdots {\gamma _n}} \right\} \). The corresponding permutation event space (PES) contains all possible permutations of the elements in \(\varTheta \), denoted as:

$$\begin{aligned} \begin{aligned} PES\big ( \varTheta \big )&= \big \{ {A_{pq}} \Bigg | p = 0,1,2, \cdots ,n;q = 1,2, \cdots ,P\big ( {n,p} \big ) \big \}\\&= \big \{ {\emptyset ,\big \{ {\gamma _1} \big \}, \cdots ,\big \{ {{\gamma _{n - 1}}} \big \},\big \{ {{\gamma _n}} \big \},\big \{ {{\gamma _1},{\gamma _2}} \big \},\big \{ {{\gamma _2},{\gamma _1}} \big \},} \\&\quad \cdots ,\big \{ {{\gamma _{n - 1}},{\gamma _n}} \big \}, \big \{ {{\gamma _n},{\gamma _{n - 1}}} \big \}, \cdots ,{\hspace{1.0pt}} \big \{ {{\gamma _1},{\gamma _2}, \cdots ,{\gamma _n}}\big \}{\hspace{1.0pt}} , \\&\quad \cdots \big \{{\gamma _n},{\gamma _{n - 1}}, \cdots {\gamma _1} \big \} \big \} \end{aligned} \end{aligned}$$
(2)

where p is the cardinality of the subset \({A_{pq}}\), q indexes the \(P\left( {n,p} \right) \) distinct p-permutations, and \(P\left( {n,p} \right) \) represents the number of p-permutations of n elements, calculated as \(P\left( {n,p} \right) = \frac{{n!}}{{\left( {n - p} \right) !}}\). The cardinality of \(PES\left( \varTheta \right) \) is denoted as \(\varDelta = \sum \nolimits _{p = 0}^n P (n,p)\).
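As an illustration, the PES and its cardinality \(\varDelta \) can be enumerated with a short script (a minimal sketch; the function and variable names are our own):

```python
# Enumerate PES(Theta) for a small frame and check Delta = sum_{p=0}^{n} P(n, p).
from itertools import permutations
from math import factorial

def pes(theta):
    """All ordered p-permutations of theta, p = 0..n (p = 0 gives the empty set)."""
    events = []
    for p in range(len(theta) + 1):
        events.extend(permutations(theta, p))
    return events

theta = ("g1", "g2", "g3")
n = len(theta)
space = pes(theta)
delta = sum(factorial(n) // factorial(n - p) for p in range(n + 1))
print(len(space), delta)  # both 16 for n = 3: 1 + 3 + 6 + 6
```

For n = 2 the same count gives \(\varDelta = 1 + 2 + 2 = 5\), which is the value used in the examples later in this paper.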

Permutation mass function

A permutation mass (PM) function is a mapping denoted as \(PM: PES\left( \varTheta \right) \rightarrow \left[ {0,1} \right] \), where \(PM\left( \emptyset \right) = 0\) and \(\sum \nolimits _{A \in PES\left( \varTheta \right) } {PM\left( A \right) = 1} \). If \(PM\left( A \right) > 0\), then A is called a permutation focal element. A random permutation set (RPS) is a set of pairs, defined as:

$$\begin{aligned} RPS\left( \varTheta \right) = \left\{ {\left. {\left\langle {A,PM\left( A \right) } \right\rangle } \right| A \in PES\left( \varTheta \right) } \right\} . \end{aligned}$$
(3)

PM(A) in a random permutation set can be regarded as the belief assigned to A, similar to the meaning of the mass function in evidence theory. Because \(PES\left( \varTheta \right) \) includes not only the random combinations of the events \({\gamma _i}\) but also their permutation orders, the belief in random permutation sets theory is assigned over a more refined space than in probability theory and evidence theory, giving it stronger information representation ability. Consider a target recognition scenario with three targets A, B and C. Probability theory can only express the probability of exactly one of A, B or C, and cannot represent the situation where multiple targets appear simultaneously. Evidence theory treats the multi-target situation as an independent case rather than simply adding the single-target beliefs (i.e., \(m\left( {A,B} \right) \) may not equal \(m\left( A \right) + m\left( B \right) \)), which alleviates this problem to some extent. On the basis of evidence theory, random permutation sets theory also considers the order in which the targets appear and assigns belief independently to each ordering, i.e., \(PM\left( {A,B,C} \right) \) may not equal \(PM\left( {C,B,A} \right) \), because the event (A, B, C) is different from (C, B, A). Figure 1 illustrates the differences in the belief assignment space (event space) of the different theories in this target recognition example. It should be highlighted that random permutation sets theory can degenerate into evidence theory and probability theory under certain circumstances.

Fig. 1

The illustration of belief assignment space (event space) of different theories

Intersection of permutation events

Assuming that \(A, B \in {\text {PES}}(\varTheta )\) are permutation events, the left intersection (LI) and right intersection (RI) of A and B are defined as:

$$\begin{aligned} A \cap _{LI} B = A \backslash \backslash \left( {A \backslash \backslash B} \right) ,\quad A \cap _{RI} B = B \backslash \backslash \left( {B \backslash \backslash A} \right) , \end{aligned}$$
(4)

where \(M \backslash \backslash N\) represents removing the elements of N from M while keeping the order of the remaining elements. Assuming that \(A=\left\{ \gamma _1, \gamma _2\right\} \) and \( B=\left\{ \gamma _3, \gamma _2, \gamma _1\right\} \), then \(A \cap _{LI} B = \left\{ \gamma _1, \gamma _2\right\} \) and \(A \cap _{RI} B = \left\{ \gamma _2, \gamma _1\right\} \).

Orthogonal sum of permutation mass functions

  1. (1)

    Left orthogonal sum (LOS) Given two PMs \(PM_1\) and \(PM_2\), \(PM_1 \oplus _{LI} PM_2\) is used to represent the left orthogonal sum of \(PM_1\) and \(PM_2\), which can be obtained as follows:

    $$\begin{aligned} \left( PM_1 \oplus _{LI} PM_2 \right) \left( A \right) = \frac{\sum \nolimits _{B \cap _{LI} C = A} PM_1\left( B \right) PM_2\left( C \right) }{1 - \sum \nolimits _{B \cap _{LI} C = \emptyset } PM_1\left( B \right) PM_2\left( C \right) },\quad A \ne \emptyset . \end{aligned}$$
    (5)
  2. (2)

    Right orthogonal sum (ROS) Similar to the LOS, \(PM_1 \oplus _{RI} PM_2\) is used to represent the right orthogonal sum of \(PM_1\) and \(PM_2\), which can be obtained as follows:

    $$\begin{aligned} \left( PM_1 \oplus _{RI} PM_2 \right) \left( A \right) = \frac{\sum \nolimits _{B \cap _{RI} C = A} PM_1\left( B \right) PM_2\left( C \right) }{1 - \sum \nolimits _{B \cap _{RI} C = \emptyset } PM_1\left( B \right) PM_2\left( C \right) },\quad A \ne \emptyset . \end{aligned}$$
    (6)

It should be noted that most current research fuses PMs from different data sources using the LOS. To maintain consistency with existing literature and facilitate comparison with existing methods, the LOS rule is employed for information fusion in this paper.

Negation information in probability theory and the evidence theory

The concept of negation has been applied in a number of theories, such as probability theory, complex-valued evidence theory, evidence theory, quantum evidence theory, discrete Z-number theory and so on. Two classical negation methods under the frameworks of probability theory and evidence theory, Yager’s negation [30] and Yin’s negation [32], are briefly introduced as follows.

Yager’s negation of probability distribution

Supposing that a probability distribution is \(P = \left\{ {{p_1},{p_2}, \cdots }\right. \left. { {p_n}} \right\} \), the corresponding negation of \({p_i}\) is defined as \({{\bar{p}}_i} = \frac{{1 - {p_i}}}{{n - 1}} = \frac{{1 - {p_i}}}{{\sum \nolimits _{i = 1}^n {\left( 1 - {p_i}\right) } }}\). It can be found that \(\sum \nolimits _{i = 1}^n {{{{\bar{p}}}_i}} = 1\), and \(n-1\) serves as the normalization factor.

Yin et al.’s negation of mass function for evidence theory

Assuming that FOD \(\varPhi =\left\{ \varphi _1, \varphi _2, \ldots , \varphi _n\right\} \), the BPA defined on \(2^{\varPhi }\) is \(\left\{ m(\varnothing ), m\left( A_1\right) , m\left( A_2\right) , \cdots m\left( A_{2^n-1}\right) \right\} \) where \(A_i \in 2^{\varPhi }\). The corresponding negation of \(m\left( {{A_i}} \right) \) is \(\bar{m}\left( {{A_i}} \right) = \frac{{1 - m\left( {{A_i}} \right) }}{{n - 1}} = \frac{{1 - m\left( {{A_i}} \right) }}{{\sum \nolimits _{i = 1}^n {\left( 1 - m\left( {{A_i}} \right) \right) } }}\). It should be noted that n is the number of focal elements, and there exists \(\sum \nolimits _{i = 1}^n {{\bar{m}}\left( {{A_i}} \right) } = 1\).
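Both negations above share the same linear form \((1 - v)/(n - 1)\), applied to probabilities in Yager's case and to focal-element masses in Yin et al.'s case. A minimal sketch (the function name `negate` is illustrative):

```python
# Yager's negation of a probability distribution and Yin et al.'s negation of a
# BPA both map each of n belief values v to (1 - v) / (n - 1).
def negate(values):
    """Negate a list of n belief values; n - 1 is the normalization factor."""
    n = len(values)
    return [(1 - v) / (n - 1) for v in values]

p = [0.6, 0.3, 0.1]          # a probability distribution (or a list of masses)
p_bar = negate(p)
print(p_bar)                  # [0.2, 0.35, 0.45]
print(sum(p_bar))             # the negation still sums to 1
```

Note that the negation inverts the ordering of the beliefs: the largest value becomes the smallest, consistent with the idea of viewing the information from its opposite side.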

The distance of random permutation sets

In order to measure the dissimilarity degree between two PMs, Chen et al. [36] proposed the distance of random permutation sets, which is defined as follows.

$$\begin{aligned}{} & {} {d_{RPS}}\left( {{P{M_1}},{P{M_2}} } \right) \nonumber \\{} & {} \quad = \sqrt{\frac{1}{2}\left( {{\textbf{P}}{{\textbf{M}}_1} - {\textbf{P}}{{\textbf{M}}_2}} \right) \underline{\underline{{\textbf{RD}}}} {{\left( {{\textbf{P}}{{\textbf{M}}_1} - {\textbf{P}}{{\textbf{M}}_2}} \right) }^T}}, \end{aligned}$$
(7)

where \({{\textbf{P}}{{\textbf{M}}_i}}\) is a vector defined as \({\textbf{P}}{{\textbf{M}}_i} = [P{M_i}\left( {{A_1}} \right) ,P{M_i} \) \( \left( {{A_2}} \right) , \cdots P{M_i}\left( {{A_\varDelta }} \right) ]\), \(i=1, 2\), \(A_1, A_2, \ldots , A_{\varDelta } \in {\text {PES}}(\varTheta )\). \(\underline{\underline{{\textbf{RD}}}} \) is a \(\varDelta \times \varDelta \) matrix defined as follows:

$$\begin{aligned} \underline{\underline{{\textbf{RD}}}} = \left( {\begin{array}{*{20}{c}} {\frac{{\left| {{A_1} \cap {A_1}} \right| }}{{\left| {{A_1} \cup {A_1}} \right| }} \times OD\left( {{A_1},{A_1}} \right) }&{}{\frac{{\left| {{A_1} \cap {A_2}} \right| }}{{\left| {{A_1} \cup {A_2}} \right| }} \times OD\left( {{A_1},{A_2}} \right) }&{} \cdots &{}{\frac{{\left| {{A_1} \cap {A_\varDelta }} \right| }}{{\left| {{A_1} \cup {A_\varDelta }} \right| }} \times OD\left( {{A_1},{A_\varDelta }} \right) }\\ {\frac{{\left| {{A_2} \cap {A_1}} \right| }}{{\left| {{A_2} \cup {A_1}} \right| }} \times OD\left( {{A_2},{A_1}} \right) }&{}{\frac{{\left| {{A_2} \cap {A_2}} \right| }}{{\left| {{A_2} \cup {A_2}} \right| }} \times OD\left( {{A_2},{A_2}} \right) }&{} \cdots &{}{\frac{{\left| {{A_2} \cap {A_\varDelta }} \right| }}{{\left| {{A_2} \cup {A_\varDelta }} \right| }} \times OD\left( {{A_2},{A_\varDelta }} \right) }\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {\frac{{\left| {{A_\varDelta } \cap {A_1}} \right| }}{{\left| {{A_\varDelta } \cup {A_1}} \right| }} \times OD\left( {{A_\varDelta },{A_1}} \right) }&{}{\frac{{\left| {{A_\varDelta } \cap {A_2}} \right| }}{{\left| {{A_\varDelta } \cup {A_2}} \right| }} \times OD\left( {{A_\varDelta },{A_2}} \right) }&{} \cdots &{}{\frac{{\left| {{A_\varDelta } \cap {A_\varDelta }} \right| }}{{\left| {{A_\varDelta } \cup {A_\varDelta }} \right| }} \times OD\left( {{A_\varDelta },{A_\varDelta }} \right) }. \end{array}} \right) \end{aligned}$$
(8)

\(OD\left( {{A_i},{A_j}} \right) \) is the ordered degree between \({{A_i}}\) and \({{A_j}}\), which is calculated as follows:

$$\begin{aligned} OD\left( {{A_i},{A_j}} \right) = \exp \left( { - \frac{{\sum \limits _{\theta \in {A_i} \cap {A_j}} {\left| {{{{\textrm{rank}} }_{{A_i}}}(\theta ) - {{{\textrm{rank}} }_{{A_j}}}(\theta )} \right| } }}{{|{A_i} \cup {A_j}|}}} \right) . \end{aligned}$$
(9)

where \({{\textrm{rank}} _{{A_i}}}(\theta )\) and \({{\textrm{rank}} _{{A_j}}}(\theta )\) are the order of an element \(\theta \) in \({{A_i}}\) and \({{A_j}}\) respectively. For example, if \({A_1} = \left\{ {{\gamma _2},{\gamma _3},{\gamma _1}} \right\} \) and \({A_2} = \left\{ {{\gamma _1},{\gamma _2}} \right\} \), then the order of \({\gamma _1}\) in \({A_1}\) is 3, the order of \({\gamma _1}\) in \({A_2}\) is 1, i.e., \({{\textrm{rank}} _{{A_1}}}({\gamma _1}) = 3\) and \({{\textrm{rank}} _{{A_2}}}({\gamma _1}) = 1\).
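The ordered degree of Eq. (9) can be sketched as follows, with permutation events represented as tuples (the function name `od` and the element labels are our own):

```python
# Ordered degree OD(A, B) of Eq. (9): penalize rank differences of shared
# elements, normalized by the size of the union.
from math import exp

def od(a, b):
    """Ordered degree between two permutation events given as tuples."""
    common = set(a) & set(b)
    union = len(set(a) | set(b))
    # index differences equal rank differences, since rank = index + 1
    diff = sum(abs(a.index(t) - b.index(t)) for t in common)
    return exp(-diff / union)

a1 = ("g2", "g3", "g1")
a2 = ("g1", "g2")
# rank of g1: 3 in A1 vs 1 in A2; rank of g2: 1 in A1 vs 2 in A2
print(od(a1, a2))  # exp(-(2 + 1) / 3) = exp(-1) ≈ 0.3679
```

This reproduces the worked example in the text: the shared elements \(\gamma _1\) and \(\gamma _2\) contribute rank differences 2 and 1 over a union of size 3.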

The entropy of random permutation sets

In order to measure the uncertainty of a random permutation set, Deng [37] proposed the entropy of random permutation sets, which is defined as:

$$\begin{aligned} {H_{RPS}}\left( {PM} \right)= & {} - \sum \limits _{p = 1}^N {\sum \limits _{q = 1}^{P(n,p)} {PM} } \left( {{A_{pq}}} \right) {\log _2}\left( {\frac{{PM\left( {{A_{pq}}} \right) }}{{F(p) - 1}}} \right) ,\nonumber \\ \end{aligned}$$
(10)

where \(F(p) = \sum \nolimits _{k = 0}^p {P\left( {p,k} \right) } = \sum \nolimits _{k = 0}^p {\frac{{p!}}{{(p - k)!}}} \). \({H_{RPS}}\) is compatible with existing theories: it degenerates to Deng entropy and Shannon entropy under certain circumstances.
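A minimal sketch of Eq. (10), with a PM stored as a dictionary mapping permutation events (tuples) to masses (the names `F` and `h_rps` are our own):

```python
# Entropy of random permutation sets, Eq. (10), with F(p) = sum_{k=0}^{p} P(p, k).
from math import factorial, log2

def F(p):
    return sum(factorial(p) // factorial(p - k) for k in range(p + 1))

def h_rps(pm):
    """H_RPS of a PM given as {permutation event (tuple): mass}."""
    h = 0.0
    for event, mass in pm.items():
        if mass > 0:
            p = len(event)
            h -= mass * log2(mass / (F(p) - 1))
    return h

pm = {("g1",): 0.1, ("g2",): 0.7, ("g1", "g2"): 0.2, ("g2", "g1"): 0.0}
print(h_rps(pm))  # ≈ 1.5568
```

Since \(F(1) - 1 = 1\), a PM concentrated on singletons reduces to Shannon entropy, illustrating the degeneration property mentioned above.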

The negation of permutation mass function

Negation is a novel perspective of knowledge representation. There are already various ways to obtain the negation in probability theory and evidence theory. However, negation in random permutation sets theory has not been explored. According to the previous analysis, negation can be viewed as a reassignment of belief. Based on this idea, in this section, a negation method is proposed within the framework of random permutation sets theory.

Definition of the proposed negation method

Let \(\overline{PM({A_{pq}})} \) be the negation of the permutation mass function \(PM({A_{pq}})\), which is defined as:

$$\begin{aligned} \overline{PM({A_{pq}})} = \frac{{1 - PM({A_{pq}})}}{{\varDelta - 2}}, \end{aligned}$$
(11)

where \(\varDelta = \sum \nolimits _{p = 0}^n {P\left( {n,p} \right) }\) is the cardinality of \(PES\left( \varTheta \right) \). The negation of the permutation mass function, denoted as \(\overline{PM}\), assigns belief to each \(A \in PES\left( \varTheta \right) \) with \(A \ne \emptyset \).

The detailed procedure of the proposed negation is shown as Algorithm 1 in Table 1 and Fig. 2.

Table 1 The procedure of algorithm 1
Fig. 2

The flowchart of proposed negation of PM

Belief reassignment in negation of permutation mass function

As previously mentioned, the negation operation is a belief reassignment in a specific event space. When the negation is performed with the proposed method, belief is assigned to every event in \(PES\left( \varTheta \right) \) except the empty set, regardless of whether the event is a focal element. The following example illustrates the belief reassignment.

Numerical Example: As a numerical example, assume that \(\varTheta = \left\{ {{\gamma _1},{\gamma _2}} \right\} \), and the permutation mass function defined on \(PES\left( \varTheta \right) \) is \(PM\left( {{\gamma _1}} \right) = 0.1, PM \left( {{\gamma _2}} \right) = 0.7, PM\left( {{\gamma _1},{\gamma _2}} \right) = 0.2, PM\left( {{\gamma _2},{\gamma _1}} \right) = 0\). Since \(\varDelta = 5\), the negation of PM by the proposed method is calculated as:

$$\begin{aligned} \begin{aligned} \overline{PM({\gamma _1})}&= \frac{{1 - PM({\gamma _1})}}{{\varDelta - 2}} = \frac{{1 - 0.1}}{3} = 0.3\\ \overline{PM({\gamma _2})}&= \frac{{1 - PM({\gamma _2})}}{{\varDelta - 2}} = \frac{{1 - 0.7}}{3} = 0.1\\ \overline{PM({\gamma _1},{\gamma _2})}&= \frac{{1 - PM({\gamma _1},{\gamma _2})}}{{\varDelta - 2}} = \frac{{1 - 0.2}}{3} = 0.2667\\ \overline{PM({\gamma _2},{\gamma _1})}&= \frac{{1 - PM({\gamma _2},{\gamma _1})}}{{\varDelta - 2}} = \frac{{1 - 0}}{3} = 0.3333. \end{aligned} \end{aligned}$$
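The example above can be reproduced with a one-line implementation of Eq. (11) (a sketch; the function and event names are illustrative):

```python
# Proposed negation of a PM, Eq. (11): each non-empty event gets
# (1 - PM(A)) / (Delta - 2), where Delta = |PES(Theta)| including the empty set.
def negate_pm(pm, delta):
    """Negation of a PM given as {permutation event (tuple): mass}."""
    return {a: (1 - v) / (delta - 2) for a, v in pm.items()}

# Theta = {g1, g2}: Delta = 1 + 2 + 2 = 5, so the normalization factor is 3.
pm = {("g1",): 0.1, ("g2",): 0.7, ("g1", "g2"): 0.2, ("g2", "g1"): 0.0}
pm_bar = negate_pm(pm, delta=5)
print(pm_bar)                 # masses 0.3, 0.1, 0.2667, 0.3333 as in the example
print(sum(pm_bar.values()))   # the negation still sums to 1
```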
Fig. 3

The belief reassignment process with negation of permutation mass function

Figure 3 visually shows the process of the belief reassignment. The analysis will be conducted using \(PM({\gamma _1})\) and its negation as an example.

Firstly, \(PM({\gamma _1})\) is divided into three equal parts, which are allocated to \(\overline{PM({\gamma _2})} \), \(\overline{PM({\gamma _1},{\gamma _2})} \) and \(\overline{PM({\gamma _2},{\gamma _1})} \), respectively. Secondly, the calculation of \(\overline{PM({\gamma _1})} \) can be seen as \(\overline{PM({\gamma _1})} = \frac{1}{3}PM({\gamma _2}) + \frac{1}{3}PM({\gamma _1},{\gamma _2}) + \frac{1}{3}PM({\gamma _2},{\gamma _1})\). Therefore, when the proposed method is used to take the negation of the original PM, every \(A \in PES\left( \varTheta \right) ,A \ne \emptyset \) (whether A is a focal element or not) participates in this belief redistribution process.

If PM is regarded as a \((\varDelta - 1)\)-dimensional vector (the empty set is not considered), then, as the numerical example shows, the negation operation redistributes the component of each dimension to all of the remaining dimensions.

Uncertainty of permutation mass function in negation operation

Consider the following negation iteration process: denote the initial permutation mass function as \(P{M_0}\), the negation of \(P{M_0}\) as \(P{M_1}\), the negation of \(P{M_1}\) as \(P{M_2}\), and so on; after the ith negation operation on \(P{M_0}\), the result is denoted \(P{M_i}\). Assuming that \(\varTheta = \left\{ {A,B} \right\} \) and \(P{M_0}\left( A \right) = 0.1,P{M_0}\left( B \right) = 0.7,P{M_0}\left( {A,B} \right) = 0.2,P{M_0}\left( {B,A} \right) = 0\), the convergence, entropy and dissimilarity during this negation iteration process will be investigated. In this section, nine consecutive negation operations are performed on the permutation mass function \(P{M_0}\).

Convergence of permutation mass function in negation operation

The values of \(P{M_i}\) obtained with the proposed method are presented in Table 2 and Fig. 4. According to Fig. 4, the following conclusions can be drawn: (1) as the number of negations increases, \(P{M_i{(A)}}\), \(P{M_i{(B)}}\), \(P{M_i{(A,B)}}\) and \(P{M_i{(B,A)}}\) converge to the fixed value 0.2500; (2) the original PM (\(P{M_0}\)) is not equal to the one obtained after taking the negation operation twice (\(P{M_2}\)). In other words, the negation process is irreversible.

Table 2 The values of \(P{M_i}\) with the proposed method
Fig. 4

The values of \(P{M_i}\) with the proposed method

The convergence of \(P{M_i}\) in the negation process is further analyzed. For the proposed negation method, there exists \(P{M_{i + 1}} = \frac{{1 - P{M_i}}}{{\varDelta - 2}}\). When \(i \rightarrow \infty \), \(P{M_i}\) converges to \(\frac{1}{{\varDelta - 1}}\). The proof is as follows. Based on the definition of the proposed negation method, we have:

$$\begin{aligned} P{M_{i + 1}} = \frac{{1 - P{M_i}}}{{\varDelta - 2}}. \end{aligned}$$

Then,

$$\begin{aligned} \begin{aligned} P{M_{i + 1}} - \frac{1}{{\varDelta - 1}} =&\frac{{1 - P{M_i}}}{{\varDelta - 2}} - \frac{1}{{\varDelta - 1}}\\ =&\frac{1}{{\varDelta - 2}} - \frac{{P{M_i}}}{{\varDelta - 2}} - \frac{1}{{\varDelta - 1}}\\ =&\frac{1}{{\varDelta - 2}} - \frac{{(\varDelta - 1)P{M_i} + (\varDelta - 2)}}{{(\varDelta - 2)(\varDelta - 1)}}\\ =&\left( { - \frac{1}{{\varDelta - 2}}} \right) \left( {P{M_i} - \frac{1}{{\varDelta - 1}}} \right) \end{aligned}. \end{aligned}$$

That is

$$\begin{aligned} \frac{{P{M_{i + 1}} - \frac{1}{{\varDelta - 1}}}}{{P{M_i} - \frac{1}{{\varDelta - 1}}}} = - \frac{1}{{\varDelta - 2}}. \end{aligned}$$

Assume that

$$\begin{aligned} P{M_i} - \frac{1}{{\varDelta - 1}} = {h_i},P{M_{i + 1}} - \frac{1}{{\varDelta - 1}} = {h_{i + 1}}, \end{aligned}$$

Then, it can be concluded that \({h_i}\) is a geometric sequence with common ratio \( - \frac{1}{{\varDelta - 2}}\). Since \({h_0} = P{M_0} - \frac{1}{{\varDelta - 1}}\), the general term of the sequence is \({h_i} = \left( {P{M_0} - \frac{1}{{\varDelta - 1}}} \right) \cdot {\left( { - \frac{1}{{\varDelta - 2}}} \right) ^i}\), and hence \(P{M_i} = \left( {P{M_0} - \frac{1}{{\varDelta - 1}}} \right) \cdot {\left( { - \frac{1}{{\varDelta - 2}}} \right) ^i} + \frac{1}{{\varDelta - 1}}\). Since \(\varDelta - 2 > 1\), it follows that \(\mathop {\lim }\nolimits _{i \rightarrow \infty } P{M_i} = \frac{1}{{\varDelta - 1}}\). Hence the negation operation tends to average the belief assignment.
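The limit \(\frac{1}{{\varDelta - 1}}\) can be checked numerically by iterating Eq. (11) (a sketch; the names are illustrative):

```python
# Iterate the proposed negation and observe convergence to 1/(Delta - 1).
def negate_pm(pm, delta):
    """One negation step of Eq. (11)."""
    return {a: (1 - v) / (delta - 2) for a, v in pm.items()}

delta = 5  # |PES(Theta)| for Theta = {A, B}, empty set included
pm = {("A",): 0.1, ("B",): 0.7, ("A", "B"): 0.2, ("B", "A"): 0.0}
for _ in range(30):
    pm = negate_pm(pm, delta)
print(pm)  # every mass is close to 1/(delta - 1) = 0.25
```

The error shrinks by the factor \(\frac{1}{{\varDelta - 2}} = \frac{1}{3}\) per step, so 30 iterations suffice for convergence to machine precision.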

Entropy of permutation mass function in negation operation

According to the previous analysis, the negation operation is irreversible: the value obtained after two consecutive negation operations on a PM is not equal to its initial value, which may be caused by the change of uncertainty during the negation process. In this section, the entropy of random permutation sets [37], \({H_{RPS}}\), is used to measure the uncertainty change in the negation process. The values of \({H_{RPS}}\) during the negation process with the proposed method are presented in Table 3 and Fig. 5.

Table 3 The values of \({H_{RPS}}\) in the negation calculation
Fig. 5

The values of \({H_{RPS}}\) in the negation calculation process with the proposed method

From Fig. 5, after the first negation operation, \({H_{RPS}}\) increases significantly. In the subsequent negation process, \({H_{RPS}}\) fluctuates around a relatively high value and gradually converges to a fixed value. It should be highlighted that the convergence of \({H_{RPS}}\) results from the convergence of \(P{M_i}\).

Distance-based dissimilarity of permutation mass function in negation operation

According to the aforementioned convergence analysis, in the negation process the value of \(P{M_i}\) converges gradually and its fluctuation becomes smaller and smaller; in other words, \(P{M_i}\) and \(P{M_{i + 1}}\) move closer and closer to each other. This can be verified by studying the variation trend of the distance between \(P{M_i}\) and \(P{M_{i + 1}}\). In this section, Chen et al.'s distance-based measure [36] is used to represent the dissimilarity (conflict) between \(P{M_i}\) and \(P{M_{i + 1}}\). The values of \(d\left( {P{M_i},P{M_{i + 1}}} \right) \) obtained with the proposed negation method are shown in Table 4.

Table 4 The value of \(d\left( {P{M_i},P{M_{i + 1}}} \right) \) in the negation calculation process

It can be found that \(d\left( {P{M_i},P{M_{i + 1}}} \right) \) forms a geometric sequence with common ratio \(\frac{1}{3}\). More generally, there exists a constant K such that \(\frac{{{d}\left( {P{M_{i + 1}},P{M_{i + 2}}} \right) }}{{{d}\left( {P{M_i},P{M_{i + 1}}} \right) }} = K\) and \(\mathop {\lim }\nolimits _{i \rightarrow \infty } {d}\left( {P{M_i},P{M_{i + 1}}} \right) = 0\). The proof is as follows.

According to the definition of the distance of random permutation sets defined in Eq.(7), the distance between \(P{M_1}\) and \(P{M_2}\) is defined as:

$$\begin{aligned}{} & {} {d_{RPS}}\left( {{P{M_1}},{P{M_2}} } \right) \\{} & {} \quad = \sqrt{\frac{1}{2}\left( {{\textbf{P}}{{\textbf{M}}_1} - {\textbf{P}}{{\textbf{M}}_2}} \right) \underline{\underline{{\textbf{RD}}}} {{\left( {{\textbf{P}}{{\textbf{M}}_1} - {\textbf{P}}{{\textbf{M}}_2}} \right) }^T}}. \end{aligned}$$

It can be transformed as

$$\begin{aligned} {d}\left( {P{M_1},P{M_2}} \right) = \sqrt{\frac{1}{2}\sum \limits _{r = 1}^\varDelta {\sum \limits _{s = 1}^\varDelta {\left( {P{M_1}\left( {{A_r}} \right) - P{M_2}\left( {{A_r}} \right) } \right) \left( {P{M_1}\left( {{A_s}} \right) - P{M_2}\left( {{A_s}} \right) } \right) \frac{{\left| {{A_r} \cap {A_s}} \right| }}{{\left| {{A_r} \cup {A_s}} \right| }} \times OD\left( {{A_r},{A_s}} \right) } } }, \end{aligned}$$

where \({A_r},{A_s} \in PES\left( \varTheta \right) \).

Since \(P{M_{i + 1}}({A_r}) = \frac{{1 - P{M_i}({A_r})}}{{\varDelta - 2}}\), then, \(P{M_i}({A_r}) - P{M_{i + 1}}({A_r}) = \frac{{(\varDelta - 1)P{M_i}\left( {{A_r}} \right) - 1}}{{\varDelta - 2}}\), \(P{M_i}({A_s}) - P{M_{i + 1}}({A_s}) = \frac{{(\varDelta - 1)P{M_i}\left( {{A_s}} \right) - 1}}{{\varDelta - 2}}\), thus,

$$\begin{aligned} \begin{aligned}&{d}\left( {P{M_i},P{M_{i + 1}}} \right) = \sqrt{\frac{1}{2}\sum \limits _{r = 1}^\varDelta {\sum \limits _{s = 1}^\varDelta {\frac{{\left[ {(\varDelta - 1)P{M_i}\left( {{A_r}} \right) - 1} \right] \left[ {(\varDelta - 1)P{M_i}\left( {{A_s}} \right) - 1} \right] }}{{{{\left( {\varDelta - 2} \right) }^2}}}} } \frac{{\left| {{A_r} \cap {A_s}} \right| }}{{\left| {{A_r} \cup {A_s}} \right| }} \times OD\left( {{A_r},{A_s}} \right) }. \end{aligned} \end{aligned}$$

Similarly, since:

$$\begin{aligned}{} & {} P{M_{i + 1}}\left( {{A_r}} \right) - P{M_{i + 2}}\left( {{A_r}} \right) \\{} & {} \quad = P{M_{i + 1}}\left( {{A_r}} \right) - \frac{{1 - P{M_{i + 1}}\left( {{A_r}} \right) }}{{\varDelta - 2}}\\{} & {} \quad = \frac{{1 - P{M_i}\left( {{A_r}} \right) }}{{\varDelta - 2}} - \frac{{P{M_i}\left( {{A_r}} \right) + \varDelta - 3}}{{{{(\varDelta - 2)}^2}}}\\{} & {} \quad = \frac{{1 - (\varDelta - 1)P{M_i}\left( {{A_r}} \right) }}{{{{(\varDelta - 2)}^2}}}, \end{aligned}$$

then, it can be obtained that:

$$\begin{aligned}{} & {} {d}\left( {P{M_{i + 1}},P{M_{i + 2}}} \right) = \sqrt{\frac{1}{2}\sum \limits _{r = 1}^\varDelta {\sum \limits _{s = 1}^\varDelta {\frac{{\left[ {(\varDelta - 1)P{M_i}\left( {{A_r}} \right) - 1} \right] \left[ {(\varDelta - 1)P{M_i}\left( {{A_s}} \right) - 1} \right] }}{{{{(\varDelta - 2)}^4}}}} } \frac{{\left| {{A_r} \cap {A_s}} \right| }}{{\left| {{A_r} \cup {A_s}} \right| }} \times OD\left( {{A_r},{A_s}} \right) }. \end{aligned}$$

Therefore, we have: \(\frac{{{d}\left( {P{M_{i + 1}},P{M_{i + 2}}} \right) }}{{{d}\left( {P{M_i},P{M_{i + 1}}} \right) }} = \frac{1}{{\varDelta - 2}}.\)

Assuming that \({d}\left( {P{M_0},P{M_1}} \right) = {d_0}\), there exists:

\({d}\left( {P{M_i},P{M_{i + 1}}} \right) = \frac{{{d_0}}}{{{{(\varDelta - 2)}^i}}},\ \mathop {\lim }\nolimits _{i \rightarrow \infty } {d}\left( {P{M_i},P{M_{i + 1}}} \right) = 0.\)

The above proof explains the decreasing trend of \(d\left( {P{M_i},P{M_{i + 1}}} \right) \) and offers a new perspective from which to investigate the convergence of \(P{M_i}\).
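Because the difference vector \(P{M_i} - P{M_{i+1}}\) shrinks by the factor \(-\frac{1}{{\varDelta - 2}}\) at each step, any distance built from a fixed quadratic form, including \(d_{RPS}\), decays geometrically with the same ratio. The sketch below uses the plain Euclidean distance as a proxy to exhibit that common ratio (names are illustrative):

```python
# Successive negations shrink PM_i - PM_{i+1} by -1/(Delta - 2), so consecutive
# distances form a geometric sequence with ratio 1/(Delta - 2) = 1/3 here.
from math import dist  # Euclidean distance (Python 3.8+)

def negate_pm(vec, delta):
    return [(1 - v) / (delta - 2) for v in vec]

delta = 5
pm = [0.1, 0.7, 0.2, 0.0]  # masses over the non-empty permutation events
seq = [pm]
for _ in range(5):
    seq.append(negate_pm(seq[-1], delta))

dists = [dist(seq[i], seq[i + 1]) for i in range(5)]
ratios = [dists[i + 1] / dists[i] for i in range(4)]
print(ratios)  # each ratio ≈ 1/3 = 1/(delta - 2)
```

The \(\underline{\underline{{\textbf{RD}}}}\) matrix only rescales the quadratic form; it does not change this ratio, which is why the Euclidean proxy reproduces the behavior reported in Table 4.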

Uncertainty measure for RPS using the negation of PM and its application

Application of the negation method in verifying information fusion result

As previously mentioned, negation offers an alternative perspective for viewing events. Consequently, after analyzing an event from a positive standpoint, it is advisable to also approach the analysis from a negative perspective. This allows corroboration with the results of the positive analysis, thereby validating the rationality of the conclusions. An application in information fusion is used to illustrate this point.

Suppose there are two possible targets, A and B, and two sensors independently generate two sets of PMs as follows:

$$\begin{aligned} \begin{aligned}&P{M_1}(A) = 0.2,P{M_1}(A,B) = 0.8,\\&P{M_2}(A) = 0.5,P{M_2}(B) = 0.5. \end{aligned} \end{aligned}$$

\(P{M_1}\) and \(P{M_2}\) can be fused by the left orthogonal sum (LOS); the fusion result is:

$$\begin{aligned} PM(A) = 0.55,PM(B) = 0.45. \end{aligned}$$

It can be observed that there should be only one ultimate target, namely A. This can be verified from the perspective of negation. Firstly, the negations of \(P{M_1}\) and \(P{M_2}\) are obtained:

$$\begin{aligned}{} & {} \overline{P{M_1}(A)} = 0.2667,\overline{P{M_1}(B)} = 0.3333,\\{} & {} \quad \overline{P{M_1}(A,B)} = 0.0667,\overline{P{M_1}(B,A)} = 0.3333,\\{} & {} \overline{P{M_2}(A)} = 0.1667,\overline{P{M_2}(B)} = 0.1667,\\{} & {} \quad \overline{P{M_2}(A,B)} = 0.3333,\overline{P{M_2}(B,A)} = 0.3333. \end{aligned}$$
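These values can be reproduced with a short sketch. It assumes the negation takes the form \(\overline{PM}(A_r) = \frac{1 - PM(A_r)}{\varDelta - 2}\) over the non-empty permutation events, with \(\varDelta = 5\) counting the empty event as well, which matches the numbers above.

```python
# Sketch reproducing the negation values above. Assumption: the negation
# is \bar{PM}(A_r) = (1 - PM(A_r)) / (Delta - 2), where Delta = |PES| = 5
# counts the empty event alongside (A), (B), (A,B), (B,A).
PES = [("A",), ("B",), ("A", "B"), ("B", "A")]  # non-empty permutation events
DELTA = len(PES) + 1  # include the empty event in |PES|

def negate_pm(pm):
    """Negation of a PM given as a dict over the events in PES."""
    return {ev: (1.0 - pm.get(ev, 0.0)) / (DELTA - 2) for ev in PES}

pm1 = {("A",): 0.2, ("A", "B"): 0.8}
neg1 = negate_pm(pm1)
print({ev: round(v, 4) for ev, v in neg1.items()})
# -> {('A',): 0.2667, ('B',): 0.3333, ('A', 'B'): 0.0667, ('B', 'A'): 0.3333}
```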

\(\overline{P{M_1}}\) and \(\overline{P{M_2}}\) can be fused by LOS; the fusion result is:

$$\begin{aligned}{} & {} \overline{PM(A)} = 0.321,\overline{PM(B)} = 0.383,\\{} & {} \quad \overline{PM(A,B)} = 0.049,\overline{PM(B,A)} = 0.247. \end{aligned}$$

Comparing the original PM with its negation, there exists \(\overline{PM(B)} > \overline{PM(A)} \); since a larger negation mass indicates weaker support for the corresponding target, the decision is more likely to regard the target as A. In other words, the best target is the one with a larger original PM and a smaller negation PM. This application illustrates the role of negation in validating the outcomes of information fusion.

Table 5 The procedure of Algorithm 2

In addition to the issues already investigated in this paper, several questions remain to be explored within RPS theory and its negation. Firstly, because the PES accounts for the various combinations of events as well as their permutations, exponential and factorial operations are inevitably involved. Consequently, as the number of elements in the identification framework increases, the size of the PES grows explosively, which may lead to large-scale matrix operations. Addressing this high computational complexity within the framework of RPS theory is a direction for future research. Secondly, how to generate the corresponding PM from the actual data measured by sensors is also an open issue. Thirdly, this paper only provides one possible form of negation for PM; determining a more scientific form of negation also needs further research.

Uncertainty measure for RPS theory using the negation of PM

The uncertainty measure of information is a significant issue. In the realm of probability theory, Shannon entropy, as a classic measure of uncertainty, has been applied across a multitude of domains. Within the framework of evidence theory, common measures of uncertainty primarily encompass methodologies based on entropy, those predicated on belief intervals, and approaches founded on interval probabilities, among others. In the theory of random permutation sets, as previously introduced, Deng et al. proposed the entropy of random permutation sets, which is compatible with both Deng entropy and Shannon’s entropy.

These methods are all designed based on the information itself. Studies of negation open a new door in the measurement of uncertain information. According to our intuitive understanding, if a proposition possesses greater uncertainty, then the disparity between the proposition and its negation tends to be smaller; conversely, if a proposition has lesser uncertainty, the divergence between the proposition and its negation tends to be larger. Based on this idea, Yin et al. introduced an uncertainty measure of BPA by calculating the conflict coefficient \(k\) between the given BPA and its negation.
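To make this idea concrete, the following sketch (our own illustration, not code from Yin et al.) computes the Dempster conflict coefficient \(k\) between two BPAs; the negation of the BPA is supplied by the caller, since several negation definitions exist in the literature.

```python
# Sketch of the conflict-coefficient idea: k is the total mass assigned by
# two BPAs to pairs of focal elements with empty intersection (Dempster's
# conflict coefficient). The negation operator is caller-supplied.
def conflict_coefficient(m1, m2):
    """Dempster conflict k between two BPAs given as dicts over frozensets."""
    return sum(v1 * v2
               for b, v1 in m1.items()
               for c, v2 in m2.items()
               if not (b & c))

def negation_based_uncertainty(m, negate):
    """Uncertainty of BPA m as the conflict between m and its negation."""
    return conflict_coefficient(m, negate(m))
```

For instance, for \(m_1(\{a\}) = 0.6\), \(m_1(\{b\}) = 0.4\) and \(m_2(\{a\}) = m_2(\{b\}) = 0.5\), the conflicting pairs contribute \(0.6 \times 0.5 + 0.4 \times 0.5 = 0.5\).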

Table 6 The threat assessment reports by different information sources represented by PM
Fig. 6 The proposed negation-based information fusion approach

Table 7 The negations \(\overline{P{M_1}} \), \(\overline{P{M_2}} \) and \(\overline{P{M_3}} \)

Inspired by Yin et al.'s work, this paper proposes a novel uncertainty measure within the framework of random permutation set theory. By employing Eq. (7) to calculate the distance between a PM and its negation, the uncertainty of the given PM can be obtained. In contrast to the \(H_{RPS}\) proposed by Deng et al., the negation-based method presented in this paper offers a novel perspective on the measurement of uncertainty within RPS theory. The proposed negation-based uncertainty measure \({H_N}(PM)\) is defined as:

$$\begin{aligned} {H_N}(PM) = d\left( {PM,{\overline{PM}} } \right) , \end{aligned}$$
(12)

where \(d\left( {PM,{\overline{PM}} } \right) \) is calculated by Eq. (7).
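As a sketch of how \({H_N}\) could be computed, the code below implements the quadratic form of the distance visible in the derivation of the earlier section; the ordered-degree term \(OD\left( {{A_r},{A_s}} \right) \) of Eq. (7) is not reproduced here and must be supplied by the caller, and all helper names are our own.

```python
# Hedged sketch of H_N(PM) = d(PM, negation(PM)). The distance follows
# the quadratic form appearing in the convergence derivation; the
# ordered-degree term OD(A_r, A_s) of Eq. (7) is caller-supplied.
import math

def jaccard(a, b):
    """|A_r intersect A_s| / |A_r union A_s| on the underlying element sets."""
    sa, sb = set(a), set(b)
    if not (sa | sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

def rps_distance(pm1, pm2, events, od):
    """Distance between two PMs given as dicts over the event tuples in `events`."""
    diff = [pm1.get(ev, 0.0) - pm2.get(ev, 0.0) for ev in events]
    total = 0.0
    for r, a_r in enumerate(events):
        for s, a_s in enumerate(events):
            total += diff[r] * diff[s] * jaccard(a_r, a_s) * od(a_r, a_s)
    return math.sqrt(0.5 * max(total, 0.0))  # guard against rounding below zero

def h_n(pm, negation, events, od):
    """Negation-based uncertainty measure H_N(PM) = d(PM, negation(PM))."""
    return rps_distance(pm, negation(pm), events, od)
```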

A multi-sensor information fusion approach based on the proposed uncertainty measure

Information fusion technology has been extensively applied across various domains. In random permutation set theory, the left orthogonal sum and the right orthogonal sum are used to fuse two PMs. It should be highlighted that, unlike in probability theory and evidence theory, within the framework of RPS theory different fusion sequences may lead to different fusion results. Previous studies have introduced strategies to improve the rationality of the fusion results, such as the expert knowledge-based method [17], the data driven-based method [36] and divergence-based methods [38, 39]. These strategies predominantly focus on ascertaining a rational fusion sequence. Inspired by these prior works and the proposed negation-based uncertainty measure \({H_N}(PM)\), this paper proposes a novel method for information fusion within the framework of RPS theory. The detailed procedures are shown as Algorithm 2 in Table 5.
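The procedure can be sketched as the skeleton below; `negate`, `distance`, and `left_orthogonal_sum` are caller-supplied stand-ins for the negation of a PM, Eq. (7), and the LOS rule respectively, and are not re-derived here.

```python
# Hedged skeleton of the proposed fusion procedure (Algorithm 2 in Table 5).
# `negate`, `distance`, and `left_orthogonal_sum` are caller-supplied
# stand-ins for the negation of a PM, Eq. (7), and the LOS combination rule.
from functools import reduce

def fuse_by_reliability(pms, negate, distance, left_orthogonal_sum):
    # Steps 1-2: uncertainty of each PM as the distance to its negation
    h = [distance(pm, negate(pm)) for pm in pms]
    # Step 3: support degree of each PM
    sup = [1.0 - hi for hi in h]
    # Step 4: normalized reliability weights
    w = [s / sum(sup) for s in sup]
    # Step 5: descending reliability fixes the fusion order
    order = sorted(range(len(pms)), key=lambda i: w[i], reverse=True)
    # Step 6: combine sequentially by the left orthogonal sum
    return reduce(left_orthogonal_sum, (pms[i] for i in order))
```

Passing the real negation, distance, and LOS operators in keeps the skeleton independent of their exact definitions, which live elsewhere in the paper.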

Table 8 The fusion result using proposed method
Table 9 The value of RTD by using proposed method
Table 10 The values of RTD obtained by different information fusion methods

The PMs presented in reference [39] are employed to illustrate the procedure of the proposed information fusion approach. The scenario in reference [39] is as follows. It is hypothesized that the adversary may dispatch three types of forces to attack our side, namely A-tank units, B-bombers, and C-helicopters. Three radars are capable of independently reporting the possible choice and sequence of forces dispatched by the adversary, with the generated PMs depicted in Table 6. The proposed negation-based information fusion method is shown in Fig. 6. The detailed steps are as follows.

Step 1: Calculate the negation of each PM, the result is shown in Table 7.

Step 2: Calculate the \({H_N}(P{M_i})\) (i=1, 2, 3):

$$\begin{aligned}{} & {} {H_N}(P{M_1}) = d(P{M_1},{{\overline{PM}} _1}) = 0.1638,\\{} & {} {H_N}(P{M_2}) = d(P{M_2},{{\overline{PM}} _2}) = 0.1375,\\{} & {} {H_N}(P{M_3}) = d(P{M_3},{{\overline{PM}} _3}) = 0.4018. \end{aligned}$$

Step 3: Calculate the support degree of \(P{M_i}\) (i=1, 2, 3):

$$\begin{aligned}{} & {} Su{p_1} = 1 - {H_N}(P{M_1}) = 0.8362,\\{} & {} Su{p_2} = 0.8625,\\{} & {} Su{p_3} = 0.5982. \end{aligned}$$

Step 4: Calculate the reliability of \(P{M_i}\) (i=1, 2, 3):

$$\begin{aligned}{} & {} W(P{M_1}) = \frac{{Su{p_1}}}{{Su{p_1} + Su{p_2} + Su{p_3}}} = 0.3640,\\{} & {} W(P{M_2}) = 0.3755,\\{} & {} W(P{M_3}) = 0.2605. \end{aligned}$$

Step 5: Determine the fusion order of PMs. Since there exists \(W(P{M_2})> W(P{M_1}) > W(P{M_3})\), the fusion order can be determined as \(P{M_2}\overleftarrow{\oplus }P{M_1}\overleftarrow{\oplus }P{M_3}\).
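Steps 2 through 5 amount to a few lines of arithmetic; the sketch below reuses the \({H_N}\) values computed in Step 2 and recovers the fusion order.

```python
# Reliability weights and fusion order from the H_N values of Step 2.
h_n = {"PM1": 0.1638, "PM2": 0.1375, "PM3": 0.4018}

sup = {k: 1.0 - v for k, v in h_n.items()}        # Step 3: support degrees
total = sum(sup.values())
w = {k: s / total for k, s in sup.items()}        # Step 4: reliabilities
order = sorted(w, key=w.get, reverse=True)        # Step 5: fusion order

print(order)  # ['PM2', 'PM1', 'PM3']
```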

Step 6: Combine the PMs by LOS. The fusion result is shown in Table 8.

Step 7: Calculate the relative threat degree. The value of RTD is shown in Table 9.

Step 8: Determine the final result. Based on the fusion result in Table 8 and the RTD values in Table 9, the following conclusion can be drawn. If the enemy dispatches only one type of troops, then the most likely scenario is (A). Should the enemy deploy two types of troops, the most probable composition and sequence would be (B, A). In the event that the enemy sends three types of troops, the most likely sequence would be (B, C, A). This fusion result is consistent with our intuition.

The rationality of the proposed method is further illustrated through comparison with existing methods. The values of RTD obtained by the different algorithms are shown in Table 10. Herein, RPSDF-i denotes the RPSDF method applied with different fusion sequences, which are determined based on expert experience. From Table 10, the following conclusions can be drawn:

  1. (1)

    Intuitively, the 1-type target should be (A), the 2-type target (B, A), and the 3-type target (B, C, A). The RPSDF-2, RPSDF-3, RAPJS, RASRP, and proposed methods all yield reasonable results; however, the RPSDF-1 and DDRD methods both obtain a 2-type target of (A, B) and a 3-type target of (A, B, C), which does not match our expectation. All methods correctly identify the 1-type target, and the values obtained for RTD(A) are identical across these methods. This uniformity is attributed to the absence of sequential factors within the 1-type target; hence, changing the fusion sequence does not influence the outcome.

  2. (2)

    The RPSDF-2, RASRP, and proposed methods not only yield a correct outcome, but also attain the highest RTD values for the events (A), (B, A) and (B, C, A) under their determined fusion sequences. However, the fusion sequence of RPSDF-2 is determined by expert knowledge, which inherently carries a degree of subjectivity. Therefore, the RASRP and the proposed method are more rational.

  3. (3)

    The work in [38] indicates that the first PM to be fused plays a decisive role in the fusion result. Consequently, despite the fusion sequence of RPSDF-2 being \(P{M_2}\overleftarrow{\oplus }P{M_3}\overleftarrow{\oplus }P{M_1}\), and the fusion sequences of both RASRP and the proposed method being \(P{M_2}\overleftarrow{\oplus }P{M_1}\overleftarrow{\oplus } P{M_3}\), they all obtain identical fusion results.

  4. (4)

    Both the RASRP and the proposed method determine the fusion order by calculating the reliability of the information sources (radars). For \(PM_2\), which participates first in the fusion process, the reliability obtained using the proposed method is higher than that achieved with the RASRP.

In summary, as shown in Table 10, the proposed method can overcome the limitation of the RPSDF approach, which is susceptible to the subjective influence of experts, while achieving results comparable to the state-of-the-art methods.

Conclusion

Negation is an important perspective for representing information. To date, negation within the frameworks of probability theory, evidence theory, and complex evidence theory has been systematically studied; however, how to apply the concept of negation to random permutation sets had not yet been explored. Firstly, this paper defines the negation of the permutation mass function. Subsequently, numerical examples are used to illustrate the convergence of the defined negation, as well as the changes in entropy and dissimilarity during the negation process. Furthermore, an example demonstrates that negation can help obtain two complementary views of the same data, which may enhance the accuracy of decision making. Finally, we propose a negation-based uncertainty measure \({H_N}\) and an information fusion method based on \({H_N}\). The proposed information fusion method is compared with existing methods.