## Abstract

In this paper, we examine the role of lies in human social relations by implementing some salient characteristics of deceptive interactions into an opinion formation model, so as to describe the dynamical behaviour of a social network more realistically. In this model, we take into account such basic properties of social networks as the dynamics of the intensity of interactions, the influence of public opinion and the fact that in every human interaction it might be convenient to deceive or withhold information depending on the instantaneous situation of each individual in the network. We find that lies shape the topology of social networks, especially the formation of tightly linked, small communities with loose connections between them. We also find that agents with a larger proportion of deceptive interactions are the ones that connect communities of different opinion, and, in this sense, they have substantial centrality in the network. We then discuss the consequences of these results for the social behaviour of humans and predict the changes that could arise due to a varying tolerance for lies in society.

## 1. Background

Deception, withholding information, making misleading statements or blunt lies are attitudes that most societies abhor, and parents, mentors and educators invest a great deal of effort in teaching that such behaviour is wrong and damages society [1–4]. Yet it is also true that deception and lies are present in practically all human interactions and societies [5–8]. This being so, we must conclude that there is a fundamental reason that prevents the social world from being totally honest.

Broadly speaking, trust-based exchange relationships play an important role in the emergence of cooperation and complex structure in many social, economic and biological systems [9–11]. In human societies, trust promotes people's willingness to engage in reciprocity [12], whereas deception is likely to destroy the stability of such relationships by only favouring particular individuals [13,14]. However, most research has been focused on how to detect and police deception [15,16], rather than on the mechanisms regulating the appearance of lies and their implications for the structure of social networks.

Previously, we have studied deception and its societal consequences by means of an agent-based opinion formation model [17] where the state of an agent *i* is described with two time-dependent variables, i.e. its true (but not public) opinion *x _{i}* and its public opinion *y _{i}*, in principle different from *x _{i}*. Their difference quantifies the lies that agent *i* tells to its neighbours, which are linked to it by weighted links *A _{ij}* representing social interactions. Agents and links constitute a highly structured social network where opinion formation takes place. Both state variables evolve with a characteristic time scale *dt*, whereas link weights change on a different time scale *D*. In addition, the network structure coevolves with the opinion dynamics via a network rewiring process with its own slower time scale, such that the weakest links are cut and the same number of new links are randomly created to conserve the average degree of the network. In the model, deception is defined as a partially truthful exchange of information between agents (i.e. a linear combination of *x _{i}* and *y _{j}*), with the overall proportion of honesty in the system regulated by a single parameter. Thus, lies may be considered as *pro-* or *anti-social* interactions if the information passed from agent *i* to agent *j* is proportional to *y _{j}* or −*y _{j}*, respectively. The selection of pro- or anti-social deception mimics the agent's intention to be as similar or as different as possible from its neighbour [18]. In this context, pro-social lies are those that benefit the recipient rather than the perpetrator, for example by continuing to reinforce the dyadic relationship between them. Common examples might be 'liking' something on someone's social media page even though one does not really like it, or asserting that something is fine when in fact it is not.

This quite simple model already gives us some hints about what the fundamental utility of lying might be. We discovered that, although anti-social lies destroy the connectivity of the social network, a certain frequency of pro-social deception actually enhances the centrality of liars (who serve as links between small communities of honest people). However, in this model individuals are assumed to pursue a fixed strategy: they are always honest individuals, pro-social liars or anti-social liars. In more realistic scenarios, of course, there are large fluctuations away from this simple fixed strategy set, and individuals vary their behaviour between the three strategies according to circumstances, even though they may naturally tend towards one strategy most of the time. An important step in exploring the dynamics of social deception, then, is to develop a model that incorporates a significant amount of strategic flexibility at the individual level. Apart from adding more realism to the model, this has the important consequence of allowing individuals and populations to evolve towards a natural equilibrium, as individuals adjust their own behaviour in accordance with the cost and benefit regimes they encounter [13,19,20].

The fundamental question in modelling deception is: why do people lie? In human deception, the key issue must be related to the benefits and costs when deciding what information to pass on in an interaction with another person. From this point of view, lying is a decision-making problem with an optimal solution dependent on the gains and risks of lying [21,22]. It is therefore important to include in the model some way of deciding the best possible answer in every instantaneous and directed dyadic interaction. In this paper, we propose a more realistic, evolutionary functional model for the dynamics of deception including these features. First, we describe the model in general, including the dynamics of link weights and the decision-making process. Then, we discuss the results of our numerical simulations and make concluding remarks.

## 2. Methods

As in our earlier study [23], the basic dynamical equation for the opinion of an agent can be written as equation (2.1), where the state variable *x _{i}* is bounded by [−1, 1] and represents the instantaneous opinion of agent *i*, such that −1 corresponds to total disagreement and +1 to total agreement with a given topic of discussion. The first term on the right-hand side describes an exchange of information between a pair of agents through discussion, i.e. the interaction is short range. The second term stands for the influence of the overall opinion in the network on agent *i*, and hence the interaction is long range. Both terms evolve with a time scale *dt* called 'transaction time'. The parameter *α _{i}* is a random bounded variable that represents the attitude of agent *i* to the overall opinion *f _{l}*(*i*), being near −1 if the agent is inclined to go against the crowd and near +1 otherwise.
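Since the displayed equation did not survive in this copy, the update can only be sketched. The functional form used below, dx_i/dt = f_s(i) + α_i x_i f_l(i), is our assumption reconstructed from the verbal description (a short-range exchange term plus an attitude-weighted long-range term), not the published equation; the clipping keeps opinions in the permissible interval.

```python
import numpy as np

def euler_opinion_step(x, f_s, f_l, alpha, dt=0.01):
    """One Euler 'transaction time' step of the opinion dynamics.
    Assumed form: dx_i/dt = f_s(i) + alpha_i * x_i * f_l(i),
    with opinions kept in the permissible interval [-1, 1]."""
    return np.clip(x + dt * (f_s + alpha * x * f_l), -1.0, 1.0)

# toy example: three agents, no short-range input, uniform long-range field
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=3)       # instantaneous true opinions
alpha = rng.uniform(-1, 1, size=3)   # attitude towards the overall opinion
x_next = euler_opinion_step(x, f_s=np.zeros(3), f_l=np.ones(3), alpha=alpha)
```

Iterating this step for all agents, as in §3, drives opinions towards the extremes ±1.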

In accordance with our earlier model of deceptive interactions between agents [17], we define a second state variable *y _{i}* corresponding to other agents' public perception of the true but private *x _{i}*, from which *y _{i}* may differ in value if agent *i* is not totally honest. The difference stands for the amount of dishonesty, or the size of the lie. Hence, the overall opinion *f _{l}*(*i*) should be formed from the publicly available information (through social meetings, rumours and news in the media) represented here by the time-dependent variables *y _{j}*, as in equation (2.2), where the second sum is over the set of all agents *j* separated from agent *i* by a shortest path length *ℓ* ≥ 2. We assume that the influence of an agent decays with the distance *ℓ*, i.e. the smallest number of links needed to reach *j* from *i* in the network. Without loss of generality, we also consider a 1/*ℓ*-dependence.
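The 1/ℓ decay can be made concrete with a small sketch. The exact normalization of equation (2.2) is not recoverable from this copy, so the function below simply sums the public opinions y_j weighted by the inverse shortest path length, which is the stated assumption.

```python
from collections import deque

def shortest_path_lengths(adj, src):
    """BFS distances from src over an unweighted adjacency list."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def long_range_field(adj, y, i):
    """Assumed overall-opinion field: f_l(i) = sum over j of y_j / l_ij,
    where l_ij is the shortest path length between i and j (1/l decay).
    Any overall normalization of equation (2.2) is omitted here."""
    dist = shortest_path_lengths(adj, i)
    return sum(y[j] / l for j, l in dist.items() if l > 0)

# chain 0-1-2: agent 0 feels y_1 at distance 1 and y_2 at distance 2
adj = {0: [1], 1: [0, 2], 2: [1]}
y = [0.0, 1.0, 1.0]
f = long_range_field(adj, y, 0)  # 1/1 + 1/2 = 1.5
```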

In equation (2.1), the short-range term is the direct interaction between agents with *ℓ* = 1, given by equation (2.3), where *w _{ij}*(*t*) is the instantaneous information that agent *j* passes to *i* (see equation (2.5)). Observe that, in general, the matrix **w** is not symmetric; that is, the information that agent *i* gives to *j* satisfies *w _{ji}* ≠ *w _{ij}*. Therefore, the sum of the elements of a row in **w** gives *f _{s}*(*i*), whereas the sum of the elements of each column in **w** is proportional to the average apparent opinion the agent holds in the network, equation (2.4), where *k _{i}* is the degree of agent *i*. Explicitly, the public opinion *y _{i}* is the average of the instantaneous information *w _{ji}* received by all neighbours *j*, and is thus bounded between −1 and +1. Finally, we define the instantaneous information *w _{ij}* through equation (2.5), where the optimal opinion *ϕ _{0}* that agent *j* shares with agent *i* (i.e. between truth and pro- or anti-social lies) is the result of an individual decision-making process, as explained in §2.2.

The nature of direct transactions is illustrated in figure 1. For example, the terms in equation (2.1) imply that if *w _{ij}* has the same sign as *x _{i}*, agent *i* will reinforce its position and get closer to the extreme opinion sign(*x _{i}*). Next, we introduce the dynamical processes involved in our deception model, as described in the following sections.
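A minimal sketch of how **y** follows from **w**, per the prose above: y_i is the average of the information w_ji that agent i sends out to its k_i neighbours j. The matrix convention below (row i, column j holds w_ij, the information j passes to i) is our own choice.

```python
import numpy as np

def public_opinions(w, A):
    """Public opinion y_i as the average of the instantaneous
    information w_ji that agent i sends to its k_i neighbours j,
    following the verbal statement of equation (2.4).
    Convention (an assumption): w[i, j] = w_ij, the information
    that agent j passes to agent i, so column i holds w_ji."""
    k = A.sum(axis=0)                 # degree of each agent
    return (w * A).sum(axis=0) / np.maximum(k, 1)

A = np.array([[0, 1],
              [1, 0]])               # two connected agents
w = np.array([[0.0,  0.5],           # w_01: information agent 1 passes to 0
              [-0.5, 0.0]])          # w_10: information agent 0 passes to 1
y = public_opinions(w, A)            # y_0 = w_10 = -0.5, y_1 = w_01 = 0.5
```

Because each w_ji is bounded by [−1, 1], the averages y_i stay in the same interval.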

### 2.1. Dynamics of link weights and rewiring scheme

In social networks, individuals are connected by weighted links that vary over time in the course of their dyadic interactions and decision-making. We assume that 'bad' decisions (not necessarily owing to lies) are punished by weakening the link weight *A _{ij}* between agents *i* and *j*. This can be incorporated into the model by introducing a simple dynamics for link weights, equation (2.6), where *D* sets the time scale of change and *T _{ij}* is a function of the four site variables associated with a link, namely (*x _{i}*, *y _{i}*) and (*x _{j}*, *y _{j}*). Because *A _{ij}* depends on two agents, we choose the symmetric form of equation (2.7), where the first square bracket represents similarity between agents according to the information agent *i* has at its disposal, the second bracket is the corresponding term for agent *j*, and *P _{ij}*(*t*) is the instant punishment for lying. Observe that the term in {…} varies between 3 and −1, such that links with *T _{ij}* < 0 are at risk of being cut, as *A _{ij}* approaches zero. The matrix **T** should be symmetric under exchange of *i* and *j*. In that case the punishment that society imposes on liars reads as in equation (2.8), where *e* is a parameter that measures the tolerance of society towards lies, being 0 if society is intolerant and 1 if it does not punish liars. Thus, the punishment *P _{ij}* is proportional to the difference between the true opinion of an agent and the instantaneous information it shares with its neighbour.
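Because equations (2.6)–(2.8) were not preserved here, the step below is an illustrative reconstruction only: each bracket rewards agreement between an agent's opinion and the information it receives (so the braced term spans [−1, 3], matching the range stated in the text), and the punishment grows with the size of the lies on the link, vanishing in a fully tolerant society (e = 1).

```python
def link_weight_step(Aij, xi, xj, wij, wji, e, D=3.0, dt=0.01):
    """One Euler step for a link weight, dA_ij/dt = D * T_ij (equation (2.6)).
    The concrete forms of the similarity brackets and of the punishment
    P_ij below are assumptions reconstructed from the prose, not the
    published expressions."""
    similarity = (1 - abs(xi - wij)) + (1 - abs(xj - wji)) + 1  # in [-1, 3]
    punishment = (1 - e) * (abs(xi - wji) + abs(xj - wij))      # lies on the link
    Tij = similarity - punishment
    return Aij + dt * D * Tij

# two honest, agreeing agents: the link strengthens
A_strong = link_weight_step(1.0, 0.5, 0.5, 0.5, 0.5, e=0.0, D=1.0, dt=1.0)
# two agents exchanging maximally mismatched information: the link decays
A_weak = link_weight_step(1.0, 1.0, -1.0, -1.0, 1.0, e=1.0, D=1.0, dt=1.0)
```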

_{ij}In a real social network, its topology may coevolve with the dyadic interactions between individuals [24]. Thus, we introduce a rewiring scheme with dynamics dependent on link weights. We proceed by cutting links with negative weight () and immediately creating a new link (with initial weight 1) to substitute the cut link, in order to maintain the average degree of the network constant. This sets the time scale for network rewiring larger than *dt* and variable, unlike in our former model in which rewiring is performed at fixed intervals [17,23].

The creation of new links is performed as follows. First, we identify the two agents involved in the cut link and choose the one with the smaller number of neighbours (i.e. the more isolated agent); then, we look at the second neighbours of this individual and create a link with the second neighbour (friend of a friend) that has the lowest degree. This reflects the social tendency of people to make new acquaintances by 'triadic closure' [25], whereas the bias for favouring agents with only a few links assumes that such agents are more keen on making new friends. If there are no second neighbours, we create a link with one of the agents with the lowest degree in the whole network. As a further remark, we note that the instantaneous information *w _{ij}* is not necessarily the same for everyone all the time (see equation (2.5)), the net effect of which is that the rewiring time is variable and controlled for each link by the slope *T _{ij}*.
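The rewiring rule just described can be sketched as follows; the tie-breaking among equal-degree candidates is our own arbitrary choice, as the text does not specify it.

```python
def rewire(adj, i, j):
    """Cut the link (i, j) and create one replacement link, following the
    scheme in the text: take the endpoint with fewer neighbours, link it to
    its second neighbour (friend of a friend) of lowest degree, and fall
    back to a lowest-degree agent elsewhere in the network if no second
    neighbours exist. adj is a dict mapping node -> set of neighbours."""
    adj[i].discard(j)
    adj[j].discard(i)
    a = i if len(adj[i]) <= len(adj[j]) else j           # more isolated endpoint
    second = {v for u in adj[a] for v in adj[u]} - adj[a] - {a}
    if second:
        b = min(second, key=lambda v: len(adj[v]))       # friend of a friend
    else:
        others = [v for v in adj if v != a and v not in adj[a]]
        b = min(others, key=lambda v: len(adj[v]))       # global fallback
    adj[a].add(b)
    adj[b].add(a)
    return a, b

# cutting the link 0-1 reconnects agent 0 to its low-degree second
# neighbour 3, closing a triad through their common friend 2
adj = {0: {1, 2}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {2}, 4: {1}}
a, b = rewire(adj, 0, 1)
```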

### 2.2. Decision-making process

In the Introduction, we state that a key issue for human deceptive interactions is related to the benefit and cost of lying, which an individual needs to evaluate in order to pass optimal information to others. This means that, in each transaction, acting agent *j* has to make a decision whether to lie or not to neighbour *i*, by finding the extremal values of a utility function *R* that includes all gains and costs of being deceitful or truthful,
as given in equation (2.9), where *ϕ* is the opinion agent *j* decides to share with *i*, either the truth (*ϕ* = *x _{j}*) or a lie (*ϕ* ≠ *x _{j}*). Note that the gain *G _{H}* and the cost *C _{H}* of being honest do not depend on *ϕ*, whereas the gain *G _{L}* and the cost *C _{L}* of being dishonest depend on the particular opinion *ϕ* that agent *j* wishes to share. Then, the optimal opinion *ϕ _{0}* is a stationary point of *R* (either a maximum or a minimum) in the permissible interval [−1, 1], implicitly defined by the stationarity condition of equation (2.10), ∂*R*/∂*ϕ* = 0 at *ϕ* = *ϕ*_{0}.

Under these conditions, the decision-making process for agent *j* is as follows. When interacting with neighbour *i*, agent *j* finds the optimal opinion *ϕ*_{0} by solving equation (2.10). If *R*(*ϕ*_{0}) > 0, then agent *j* ignores *ϕ*_{0} and shares its true opinion (i.e. *w _{ij}* = *x _{j}*), because being truthful is a 'better' decision than not being truthful. Otherwise, agent *j* shares the optimal opinion *ϕ*_{0}. Note that, in general, *ϕ*_{0} stands for a lie, except for the case when *ϕ*_{0} = *x _{j}*. This particular case could be interpreted as a situation where an agent (that has initially decided to lie) finds that the optimal decision is to be honest.

For the decision-making process to be complete, we need to find concrete expressions for the gains and costs in equation (2.9), based on the available sociological knowledge about interactions between individuals. The gain for being honest is considered to be 'prestige' or 'reputation' [8], which in our context is measured by the degree *k _{j}*. This is based on a previously studied sociological assumption [26,27], namely that the more connected you are, the more prestige you have, which means that you are considered trustworthy. Therefore, we write the gain *G _{H}* as in equation (2.11), where we have normalized the degree to compare agents within and between communities.

The risk associated with being honest is proportional to the apparent disparity of opinion, as this distance increases antagonism between agents. In other words, people tend to use small, 'white' lies to protect social links rather than put them at risk, because the difference in opinion revealed by complete honesty may create tension in the relationship [28]. Then, we write the cost *C _{H}* as in equation (2.12), which is normalized to make the gain and cost terms comparable.

If the main aim of an agent's deception is to avoid rejection by strengthening its social links, then everyday experience suggests that the gain owing to lying has two components. First, the liar benefits by not 'losing face', that is, by minimizing the distance *ϕ* − *y _{j}* between its lie and its own public opinion, so that the lie *ϕ* is not discovered easily. Second, agent *j* gains by mimicking the response *w _{ji}* that agent *i* is giving back, i.e. by pretending to be more similar to its peers than it is in reality. In this case, we write the gain *G _{L}* as in equation (2.13).

The risk of lying is also twofold. Agent *j* could pass information that is similar to its true opinion (*x _{j}*) and risk a large mismatch of opinions: the bigger this difference, the higher the penalty (or cost) that the liar will incur from being found out [29]. Simultaneously, the agent could try to mimic an agreement with the public opinion of agent *i*, thereby risking a mismatch if agent *i* is deceptive: the bigger the difference between the lie and the public opinion, the bigger the cost the liar bears from being found out. This being so, the risk is the product of the two possibilities, and we write the cost *C _{L}* as in equation (2.14).

We have normalized equations (2.11)–(2.13) such that all of them vary between zero and one. The coefficient *β* in equation (2.14) is a quantity that controls the relative weight of the cost of lying, which could depend on other social and cultural properties. We have examined the behaviour of the utility function *R* and determined that *β* = 4 balances the gains and costs between lying and being honest (see the electronic supplementary material).
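Since the displayed forms of equations (2.9)–(2.14) did not survive in this copy, the sketch below encodes only their verbal descriptions; every functional form, and the grid search standing in for the stationarity condition (2.10), is our assumption.

```python
import numpy as np

def decide(xj, yj, yi, wji, kj, kmax, beta=4.0):
    """Hypothetical decision rule for agent j interacting with agent i.
    Assumed forms following the prose: G_H is the normalized degree
    (prestige), C_H the apparent disparity of opinion, G_L rewards staying
    close to one's public image y_j and to the peer's response w_ji, and
    C_L = beta * (distance to the truth) * (distance to i's public opinion).
    phi_0 is found by grid search instead of solving dR/dphi = 0."""
    phi = np.linspace(-1.0, 1.0, 2001)
    G_H = kj / kmax
    C_H = abs(xj - yi) / 2
    G_L = (1 - np.abs(phi - yj) / 2) * (1 - np.abs(phi - wji) / 2)
    C_L = beta * (np.abs(phi - xj) / 2) * (np.abs(phi - yi) / 2)
    net_lie = G_L - C_L
    k = int(np.argmax(net_lie))          # best available lie phi_0
    R0 = (G_H - C_H) - net_lie[k]        # honesty versus the best lie
    return xj if R0 > 0 else float(phi[k])

# a well-connected agent in full agreement with its peer shares the truth...
truthful = decide(xj=0.5, yj=0.5, yi=0.5, wji=0.5, kj=10, kmax=10)
# ...whereas a poorly connected agent far from its peer shades its opinion
shaded = decide(xj=1.0, yj=0.0, yi=-1.0, wji=-1.0, kj=1, kmax=10)
```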

To summarize, the dynamics of our model is highly nonlinear: the elements of the adjacency matrix **A** depend on the vector **y** (the other agents' perception of an agent's true opinion), which, in turn, is calculated at every interaction using **w**. The matrix **w**, in turn, is the instantaneous flow of information through each link, resulting from an agent's decision about the optimal information to pass on (*ϕ*_{0}). Our new approach of casting transactions between agents as an optimized decision-making process constitutes a major difference from our earlier model [17]. The benefit is that we now avoid predefining individuals as either pro- or anti-social liars. Nevertheless, we can still classify the lie *ϕ*_{0} in a binary way by comparing the distances |*ϕ*_{0} − *y _{i}*| and |*ϕ*_{0} + *y _{i}*|, the lie being pro-social if the former is smaller and anti-social otherwise. Then, the threshold that classifies *ϕ*_{0} as a pro- or anti-social lie is 0, the midpoint between ±*y _{i}*. We emphasize that *ϕ*_{0} = *ϕ*_{0}(*j*, *i*, *t*) is a function of *j*, *i* and *t*, obtained by finding a stationary point of the utility function *R* given in equation (2.9). Allowing deception to vary in this way is more realistic than our previous approach of having fixed, predefined phenotypes that do not vary in their behaviour: everyday experience tells us that people do not say the same thing to everybody all the time.

## 3. Results

Using the above-described model, we performed extensive numerical simulations for networks of different sizes (*n* = 100, 200 and 500), starting from random initial conditions for the state variables *x _{i}*, *y _{i}* and the attitude parameters *α _{i}*, and following a simple Euler integration scheme for a sufficient number of time steps to obtain extreme opinions (*x _{i}* = ±1) for all agents. In some cases, we find a small number of agents remaining locked in an undecided state (*x _{i}* ≈ 0). This number depends on the values of the parameters *e* (tolerance of society towards lies) and *D* (time scale for the growth of link weights), the only two parameters whose variation we consider. We can follow the time history of the process and monitor the state variables, as well as the amount of instantaneous lying, for all agents in the system. We may also distinguish anti-social lies from pro-social ones by monitoring the optimal opinion *ϕ*_{0}(*j*, *i*, *t*): if this quantity is nearer to *y _{i}*, we consider it a pro-social lie, and if it is nearer to −*y _{i}*, we take it as an anti-social lie [17]. As the simulation results are qualitatively unaffected by network size, from now on we consider only *n* = 100.

In figure 2, we show typical realizations of the dynamics, keeping *D* = 3, for the two extreme values of the parameter *e*. Observe that honest agents with similar opinion form small clusters (i.e. the network is assortative with respect to *x _{i}* [30]), but there is also a large number of liars that serve as weak links between these tightly bonded communities and can dwell within very small communities. The effect of increasing social tolerance (*e* = 1) is small but, surprisingly, the relative number of liars is smaller when there is no punishment for lying. This result is in qualitative agreement with empirical observations made in schools with and without punishment [31], where the authors report that 'a punitive environment not only fosters increased dishonesty but also children's abilities to lie in order to conceal their transgressions'.

In figure 3, we show the proportion of pro- and anti-social lies in the instantaneous response of each agent to all of its neighbours, for the case *e* = 1 of figure 2. Observe that many agents lie pro-socially all the time and to all their neighbours. In contrast, there are very few anti-social lies and they do not persist, but instead disappear completely for some agents, whereas for others they become intermittent. If we reduce the social tolerance for lying, anti-social behaviour disappears completely. Note also that, despite using ‘ideal’ conditions for the appearance of big lies (*e* = 1), there are always some agents that behave in a totally honest manner all the time.

To analyse these results further, we find it convenient to quantify various groups of agents separately. We focus our attention on those agents who are totally honest throughout the time line of the model, those who tell only pro-social lies, those who tell anti- or pro-social lies indiscriminately, and those who only lie anti-socially. Note that, for this kind of analysis to succeed, we need many realizations to obtain an average value for each case. Also, we need to look at probability distributions rather than at well-defined categories, because the freedom to decide produces strategy changes in all agents. The model output suggests that agents who only lie anti-socially are very few in number, as can be seen from figure 4, where we show the probability distribution of the proportion of anti-social lies for the cases of zero tolerance (*e* = 0) and no punishment (*e* = 1). Note that social tolerance to lying has very little effect on the appearance of anti-social lies, and that most agents turn out to tell very few lies.

In figure 5, we show the probability distribution of the proportion of lies per dyadic interaction, *r*, for agents who lie indiscriminately (anti-pro case) and for those who tell only pro-social lies (pro case), for the two extreme values of the social parameter *e*. Explicitly, *r* is the fraction of the total number of interactions that are lies. These results suggest that nearly 50% of the agents lie pro-socially only a small fraction of the time (less than 10% of the total time). However, there are always a few agents who lie more frequently: about 20% of agents lie all the time, regardless of the level of social tolerance. This result implies that it is disadvantageous to lie all the time. Figure 5 also suggests that the pro and anti-pro strategies are qualitatively quite similar, in the sense that many agents lie sporadically (small *r*) and a few agents (approx. 20%) lie most of the time. Obviously, the relative numbers also depend on the social tolerance parameter. An interesting observation here is that the lack of punishment for lies makes very little difference to the relative appearance of anti- and pro-social lies.

DePaulo *et al*. [5] report a statistical study of the number and nature of lies told by a group of 144 individuals, the results of which are summarized in table 2 therein for comparison with the results of our model. For instance, the percentage of honest people (those who never tell lies) is 1.3% for individuals recruited from a college population, and 8.6% for individuals from a local community. Our results show that 2.7% of agents are honest if there is punishment, and 3.5% if there is not. Furthermore, the mean number of lies per day measured was roughly 2, and the mean number of social interactions 6, of which only 61% were dyadic interactions [5]. This means that 50% of the dyadic interactions were lies. The area under the curve in figure 5 (without distinguishing between pro- and anti-social lies) gives about 53%, thus roughly agreeing with the experimental findings. In addition, we predict that the number of lies per social interaction (obtained by calculating the mean value for the amount of dishonesty or the size of the lie, *d*) is 0.38, in close agreement with the value 0.31 ± 0.11 reported in table 2 of the experimental study [5].
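The back-of-envelope step from the reported statistics to the '50% of dyadic interactions' figure can be reproduced directly, assuming (as the text does implicitly) that all reported lies occurred in dyadic interactions:

```python
lies_per_day = 2.0           # mean number of lies told per day, from [5]
interactions_per_day = 6.0   # mean number of social interactions per day [5]
dyadic_fraction = 0.61       # fraction of interactions that were dyadic [5]

dyadic_per_day = interactions_per_day * dyadic_fraction  # about 3.7 per day
lying_rate = lies_per_day / dyadic_per_day               # about 0.55, i.e. roughly 50%
```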

We now investigate the social advantages of lying. This is done by examining network measures such as the weighted clustering coefficient (WCC) and betweenness centrality (BC). In a weighted network or graph, the WCC is the geometric average of link weights in a subgraph, where weights are normalized by the maximum weight in the network [32,33]. The BC is the sum of the fraction of all-pairs shortest paths that pass through a node [34]. With these measures, we see that liars serve as bridges between communities (figure 2); hence, they sacrifice their WCC (belonging to highly connected clusters) in order to improve their BC (easy communication with all members of the network).
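Both measures can be illustrated on a toy network that mirrors the situation described above: a weak 'bridge' node joining two tight communities. The implementations follow the cited definitions (geometric-average weighted clustering with weights normalized by the maximum, and Brandes-style shortest-path betweenness); the example graph is our own.

```python
from collections import deque
from itertools import combinations

def betweenness(adj):
    """Shortest-path betweenness (Brandes' algorithm, undirected, unnormalized)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0.0 for v in adj}; sigma[s] = 1.0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                                   # BFS counting shortest paths
            u = q.popleft(); order.append(u)
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]; preds[v].append(u)
        delta = {v: 0.0 for v in adj}
        for v in reversed(order):                  # back-propagate dependencies
            for u in preds[v]:
                delta[u] += sigma[u] / sigma[v] * (1 + delta[v])
            if v != s:
                bc[v] += delta[v] / 2              # halve the undirected double count
    return bc

def weighted_clustering(adj, w, v):
    """Geometric-average weighted clustering of node v, with link weights
    normalized by the maximum weight in the network."""
    wmax = max(w.values())
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    total = sum((w[frozenset((v, i))] * w[frozenset((v, j))]
                 * w[frozenset((i, j))]) ** (1 / 3) / wmax
                for i, j in combinations(nbrs, 2) if j in adj[i])
    return 2 * total / (k * (k - 1))

# two tight triangles {0,1,2} and {5,6,7} joined through the bridge node 4
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 4}, 4: {2, 5},
       5: {4, 6, 7}, 6: {5, 7}, 7: {5, 6}}
w = {frozenset(e): 1.0 for e in [(0, 1), (0, 2), (1, 2), (5, 6), (5, 7), (6, 7)]}
w.update({frozenset((2, 4)): 0.2, frozenset((4, 5)): 0.2})  # weak bridging links

bc = betweenness(adj)
# the bridge node sacrifices clustering (it closes no triangles) for centrality
```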

It is possible for a deceptive agent to increase its clustering coefficient provided it tells small lies (*d* ≤ 0.1), irrespective of whether these are anti- or pro-social, even in the face of social punishment. In figure 6, we show WCC averaged over 300 runs of the model for *n* = 100 agents. The conclusion is that, from the perspective of clustering, there is no benefit to lying unless the lie is small. We also see that a society with total tolerance to lying does not provide liars with much advantage. However, when there is punishment, the agents who lie pro- and anti-socially have an advantage over totally honest agents, provided the number of lies is small. This can be seen in figure 7, where we show the WCC probability distribution for selected values of the proportion of lies per dyadic interaction *r*.

In conclusion, the real advantage to being a liar is that BC increases for pro-social liars, provided they tell small lies. This could be interpreted as a mechanism people use to fit into a social network better. In figure 8, we show the BC median taken over 300 runs of the model as a function of the size of lies, and for different groups of agents. We present the median, instead of the average, because, for the form of distribution functions we have here, the median is a more robust quantity. In figure 8*a*, we show the case of zero tolerance (*e* = 0), where only pro-social liars have an advantage over honest agents, provided their lies are very small. In figure 8*b*, we show the same for tolerance *e* = 1.

## 4. Discussion and conclusions

Our model for the dynamics of lying in a social network of humans incorporates the relevant fact that the individual act of lying corresponds to a flexible, personal and instantaneous decision. Hence, we have mapped this action to a decision-making problem for individuals in society, so that they can adjust their behaviour to the situations they face. In contrast to our earlier model [17], where agents have fixed behavioural strategies, the present model is more realistic as the information an agent passes on (either as truth, or as a pro- or anti-social lie) is a function of the circumstances the agent encounters. In effect, we assume that agents learn and adjust their behaviour in the light of experience. In this respect, the present model lies at the opposite extreme from our previous model in that it does not assume that agents have inherited psychological predispositions to behave in a particular way. In all likelihood, of course, the real world probably lies somewhere between these two extremes. The fact that the findings from the two models are in broad agreement is therefore comforting, in that it suggests that, irrespective of where reality actually lies, our findings will be robust.

The model studied here does not have a network rewiring time scale that is proportional to the fundamental transaction time scale *dt*. Nevertheless, the rewiring time scale can still be tuned by using one of two parameters: *D* (the time scale for the growth of link weights) or *e* (the tolerance of society towards lies). In addition, we see that, as the tolerance parameter *e* increases, lies are punished less severely and the time at which bonds are deleted increases, thus making the rewiring process slower. Furthermore, in figure 2, we see that communities are much better defined when *e* = 0; that is, intolerance of lies and a potential for high punishment shorten the mean life of liars, segregating the network into communities with strong internal links.

In all our simulations, we find that the number of anti-social lies diminishes, whereas pro-social lies persist in considerable numbers throughout the dynamical evolution of the system. Here, we see that the social tolerance parameter *e* has little effect on the proportion of anti-social lies, although it regulates the total number of lies. Most agents lie sporadically, and only very few seem to lie all the time. This indicates that 'true' liars are very rare in society, although they are nonetheless very special because they have large BC. We also find that liars who tell small lies (*d* < 0.1) have larger WCC. In addition, we observe that the dynamics favours the formation of cliques of purely honest agents, and that liars are usually found on the perimeter of cliques, connected by weak links.

We also show that, in general, being honest pays off, but in some circumstances liars acquire an advantage over honest agents. For instance, agents who occasionally tell small lies have larger WCC and BC than honest agents (see figure 7 for *r* < 0.2). Moreover, an agent who tells a fair number of medium-sized lies (*d* = 1) could attain a larger BC than when it chooses to be honest.

In summary, it is interesting to note that, for small lies, all liars are better off than honest agents. Even more interesting is the fact that there is a maximal advantage for people who tell sizeable anti-social lies. In short, anti-social lies yield considerable benefits for liars in appropriate circumstances. We know that anti-social lies normally destroy the social network when they are widely distributed throughout society [17]. However, our findings suggest that, in certain specific circumstances, they could have the opposite effect and make the network more robust. This implies that we need to identify the conditions under which such a situation arises, by examining the local circumstances of those agents who present this peculiar property. Paradoxically, it might then be possible to increase the information flow in the network by adding appropriate motifs that allow agents to have both high BC and WCC.

## Authors' contributions

All authors conceived, designed and coordinated the study. R.A.B. and G.I. developed and analysed the model. R.A.B. and T.G. carried out the numerical and statistical analyses. All authors helped draft the manuscript and gave final approval for publication.

## Competing interests

We declare we have no competing interests.

## Funding

R.A.B. acknowledges support from CONACYT project no. 179616. R.D.'s research is supported by a European Research Council Advanced grant. G.I. and K.K. acknowledge support from the EU's FP7 FET Open STREP Project ICTeCollective no. 238597, and G.I. from the Academy of Finland.

- Received September 4, 2015.
- Accepted October 6, 2015.

- © 2015 The Author(s)