Building and Rebuilding Trust with Promises and Apologies
Eric Schniter, Roman Sheremeta, and Daniel Sznycer
Finally, we evaluate whether promises made by trustees who were distrusted in Game 1 facilitated higher joint payoffs in Game 2, and whether those promises reliably indicated subsequent trustee behavior. Game 2 investments in previously distrusted trustees paid off for both investors and trustees. Investors in Game 1 distrusted trustees were returned $6.88 on average, significantly higher than the OUT payoff of $5 (Wilcoxon signed-rank test, p-value = 0.05, n = 32), and the newly trusted trustees earned an average of $13.12.
On average, newly trusted trustees' promises were veridical: 62.5% (20/32) kept their promises. The remaining 37.5% (12/32) broke their promises, a higher rate than the 18.8% of trusted trustees who broke their promises in Game 1 (Wilcoxon rank-sum test, p = 0.05, n1 = 32, n2 = 191). One possibility is that the excess promise-breaking of newly trusted trustees reflects a reaction against investors' lack of trust in Game 1, perhaps based on a sense of entitlement to the profits that could have been earned had trust been extended in Game 1. By breaking their promises, these presumed punishers earned an average of $17.42 over the two games, closer to the two-game average of $21.99 for Game 1 trusted trustees than to the $10.55 average for newly trusted trustees who did not break promises in Game 2. The estimation of specification (2) in Table 5 does not reveal any significant predictor of the amount returned by trustees in Game 2.
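The two nonparametric tests reported above can be sketched as follows. The data here are simulated placeholders (we do not have the study's raw returns), so the numbers, random seed, and dispersion are illustrative assumptions only; the point is the structure of each test, not the reported p-values.

```python
# Sketch of the two nonparametric tests reported above, run on
# SIMULATED data. These are not the study's raw data; means and
# proportions are set to match the reported summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Game 2 returns to the n=32 investors who newly trusted
# previously distrusted trustees (mean near the reported $6.88).
returns_game2 = rng.normal(loc=6.88, scale=4.0, size=32)

# Signed-rank test: are returns higher than the sure OUT payoff of $5?
stat, p = stats.wilcoxon(returns_game2 - 5.0, alternative="greater")
print(f"signed-rank statistic={stat:.1f}, p={p:.3f}")

# Rank-sum test: do newly trusted trustees break promises more often
# than Game 1 trusted trustees? (1 = broke promise, 0 = kept)
broke_new = np.array([1] * 12 + [0] * 20)    # 37.5% of 32
broke_old = np.array([1] * 36 + [0] * 155)   # ~18.8% of 191
stat2, p2 = stats.ranksums(broke_new, broke_old, alternative="greater")
print(f"rank-sum statistic={stat2:.2f}, p={p2:.3f}")
```

With binary kept/broken data, the rank-sum test reduces to a comparison of the two promise-breaking proportions, which is why it is a natural fit for the 37.5% vs. 18.8% contrast above.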
5. Discussion and Conclusions

Opportunities for mutual gains often exist where previous exchange histories have not yet developed or where trust has been damaged by unmet expectations. While promises and apologies appear to be important tools for building and rebuilding trust in these problematic situations, most research on these remedial strategies is based on self-report, anecdotal, or archival evidence, or on experiments using fictional vignettes, videotaped dramatizations, or deception. Using a non-deceptive study in which financially motivated participants exchanged endogenously created and naturally distributed promises and apologies, we demonstrate that trustees send cheap signals to encourage new trust and rebuild damaged trust, and that these signals are often effective, benefiting both investor and trustee.
From the egoist perspective of non-cooperative game theory, no cooperation is predicted, yet our experiment yielded high rates of trust extension (e.g., 83.4% in Game 1) and trust re-extension (88% of those who went IN in Game 1 went IN again in Game 2). There are several non-exclusive accounts of these results. Profit-seeking investors must trade the risk of trusting under-reciprocators against the risk of not trusting reciprocators.10 As the efficiency of the investment increases, so do the possible forgone benefits for investors who choose OUT. While the multiplier of 4 used in our study, higher than the multiplier of 3 used in standard trust games, might have contributed to investors' willingness to choose IN, such effects are not commonly found across trust games. A meta-analysis by Johnson & Mislin (2011), examining games with different multipliers to evaluate whether a higher multiplier might increase the likelihood of investment, found no effect of the multiplier on investors and a strong negative effect on trustworthiness: a higher multiplier decreases the amount of money returned by the receiver.
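The payoff logic of the game can be reconstructed from the reported figures: the investor's outside option is $5, the multiplier is 4, and the reported Game 2 averages for investor and trustee ($6.88 and $13.12) sum to $20, consistent with a $20 pie that the trustee divides after an IN choice. The assumption that the trustee's OUT payoff is also $5 is ours, for illustration.

```python
# Stylized payoffs of the trust game described above, reconstructed from
# the reported numbers (OUT payoff $5, multiplier 4, pie of $20 when the
# investor goes IN). The trustee's outside option is assumed symmetric.
MULTIPLIER = 4
OUT_PAYOFF = 5.0
PIE = OUT_PAYOFF * MULTIPLIER  # $20 created when the investor goes IN

def payoffs(go_in, returned=0.0):
    """Return (investor, trustee) earnings for one game."""
    if not go_in:
        return OUT_PAYOFF, OUT_PAYOFF  # assumed symmetric outside option
    return returned, PIE - returned

# Reported Game 2 averages for newly trusted trustees:
inv, tru = payoffs(True, returned=6.88)
print(inv, round(tru, 2))  # 6.88 13.12

# Mutually beneficial promises: investor beats OUT when returned > $5,
# which motivates the predicted $6-$19 promise range discussed below.
```

On this reconstruction an even split is a returned amount of $10, which sits near the middle of the $6 to $19 range of mutually beneficial transfers.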
Another possibility, which our data suggest, is that promises in our game may have enhanced the trust-extension rate (consistent with Charness & Dufwenberg 2006, who showed IN rates of 74% with promises), and that the apologies issued may have enhanced the trust re-extension rate, despite broken promises in Game 1. Although we lack experimental controls without signals, within-sample comparisons suggest that investors lent credence to specific formulations of trustees' promises and, in Game 2, conditioned that credence on cues of trustworthiness and untrustworthiness (i.e., kept and broken promises, respectively). We expected that trustees, aware of investors' self-interest and motives for critical signal reception, would promise investors mutually beneficial transfers in the range of $6 to $19. Consistent with results from bargaining games (where even splits are reported as the modal offers and also tend to be accepted; see Guth et al. 1982; Guth & Tietz 1986; Carter & Irons 1991; Prasnikar & Roth 1992), even-split promises, which lie close to the middle of the predicted range, elicited more trust extension than uneven-split promises among dyads with no history of trust-based exchange.

10 In Game 1, for instance, investors who chose IN received back $8.19 on average, significantly higher than their alternative OUT payoff of $5. Despite the sizeable rate of under-reciprocation in Game 1, decisions to take on risk by choosing IN yielded higher profits for investors than decisions to avoid risk by choosing OUT.
We have argued that as long as the truth value of a signal can be reliably estimated, and updated in tandem with estimates of the signaler’s trustworthiness, cheap-to-produce signals such as promises can facilitate coordination and cooperation. In the context of repeated interactions, promises and apologies should be less trusted when issued by trustees whose past promises and apologies were followed by untrustworthy behavior. The proverbial boy who cried wolf illustrates this principle in the domain of predator calls. But does the principle apply in the domain of social exchange? Our experimental design did not include a third game, so we cannot know whether the investors who suffered broken promises in Game 1, were apologized to, and again suffered broken promises in Game 2, would have discounted further apologies. Future research with similar designs but more than two successive games is needed to test the prediction that the credibility attributed to apologies would be recalibrated based on subsequent behavior by offenders. However, our results do provide a partial answer to the question of how signal credibility is calibrated by relevant behavioral cues: as evidenced by Game 2 investment rates, Game 2 promises issued by trustees who previously returned less than promised were given less credence than the Game 2 promises issued by trustworthy trustees.
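The recalibration logic argued for here, in which a trustee's signals are discounted after past promises are contradicted by behavior, can be sketched as a simple Beta-Bernoulli update. This model is our illustration only; the experiment fits no such model, and the uniform prior is an assumption.

```python
# Illustrative Beta-Bernoulli model of how an investor might recalibrate
# the credibility of a trustee's promises from observed behavior.
# This sketch is ours; it is not part of the experimental design.

def updated_credibility(kept, broken, prior_a=1.0, prior_b=1.0):
    """Posterior mean probability that the next promise will be kept,
    given a Beta(prior_a, prior_b) prior and counts of kept/broken
    promises observed so far."""
    return (prior_a + kept) / (prior_a + prior_b + kept + broken)

print(updated_credibility(kept=0, broken=0))  # 0.5: no history, uniform prior
print(updated_credibility(kept=1, broken=0))  # rises after a kept promise
print(updated_credibility(kept=0, broken=1))  # falls after a broken promise
```

Under this sketch, a promise from a trustee who previously returned less than promised is given less weight than the same promise from a trustee with a clean record, matching the Game 2 investment pattern described above.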
Nevertheless, among investors whose trust was damaged in Game 1, messages with apologies elicited more re-extension of trust than messages without apologies. While Game 2 promise upgrades might signal an intention to make an economic contribution towards restituting the previously promised amount lost (i.e., atonement: a repair done for the sake of a damaged relationship), they could also be attempts at coercion: coaxing efforts by promise-breakers, calibrated to compensate for the investor's expectation that their Game 2 promises would be "exaggerated" (as they were in Game 1). The former explanation suggests an upgraded regard for the investor, the latter a strategically selfish regard. Evidence from our experiment suggests that most promise-breakers upgraded their Game 2 promises out of selfish regard, since the majority (60%) of promise-breakers who were invested in again went on to break their promises a second time.
We suggest that the rate of trust re-extension seen for trustees who turned out to be repeat promise-breakers would likely be lower outside the laboratory, where emotional states are reliably communicated through other channels simultaneously (e.g., facial expressions, voice, body language), in concert with additional reputational information and opportunities to sanction cheating. We suspect that in the "real world" of non-anonymous, face-to-face interactions, persuasive messages like promises and apologies are more effective and less likely to lead to further damaged trust, because receivers can evaluate the veracity of verbal messages against other reliable cues and signals (e.g., past demonstrations of trust or trustworthiness, facial expressions, tone of voice, eye movements, body language).11 This study provides evidence of how personal exchanges are often built on trust established via cheap-to-produce verbal signals, and of how these signals can encourage new trust where none previously existed or repair trust where it has been damaged.
Not only could this information improve our understanding of what to expect from everyday interpersonal relationships; it also complements our understanding of how market exchange systems (where interactions often take place between non-personal entities such as firms), politics, law, and religion are sometimes expected to work, with personal representatives making verbal and written promises of reciprocation or atonement, or else issuing apologies and personalized messages. Both interpersonal interactions and markets are built on the ancient human foundations of adaptive giving and receiving. As such, trust-based exchanges at any level often rest on trust established via signals such as verbal claims about reputation, verbal contracts, and apologies.
11 Hirshleifer (1984) theorized that emotions act as "guarantors of threats and promises," and several authors (Van Kleef et al. 2004, 2006; Sinaceur & Tiedens 2006; Wubben et al. 2008; Stouten & De Cremer 2010) have demonstrated experimentally that displays of emotion (including anger, guilt, happiness, disappointment, worry, and regret) are used by observers in subsequent decision-making in social dilemmas and negotiations.
Based on our findings and a review of the current literature, we suggest three steps that can be taken as a remedial strategy to restore damaged trust. First, when trust in a relationship has been damaged, the offender should acknowledge the offense and express any regret or sorrow stemming from having caused it (such as through some form of apology). An optimistic perspective on relationships fraught with damaged trust recognizes that they actually represent opportunities to develop better relations than previously established. Second, to persuade and assure victims that relationship repair is possible, the offender should signal (such as with a personalized message) recognition of the value of the other party, stemming from an internal recalibration, and commit (such as with a promise) to expectations of future cooperative behavior. In signaling recognition of relationship value, it is important to express not a selfish perspective but a shared-welfare or other-regarding perspective. Third, to actually begin changing and redefining the relationship, the offender must be willing to pay costs to expeditiously correct the previous imbalance of welfare (thereby increasing the offended party's welfare), or else to sacrifice wealth or status (thereby decreasing his or her own welfare). When corrective actions cannot be taken immediately, signals of intent to take them should be used. Each of these three steps has been identified as having an independent effect of improving impressions of the offender (Scher & Darley 1997; Schlenker 1980), and they are consistent with the prescriptions detailed by De Cremer (2010) for the financial world to restore its damaged trust with customers, as well as with the general conclusions of Lazare (2004).
As the natural occurrence of deceit in social exchanges is sampled, and the effectiveness of the strategies, tools, and institutions used to combat it is evaluated, practical insights are gleaned that can be extended to our personal lives and to the work of policy makers, and even applied to the practices of firms, religious clergy, and military relations. We strongly encourage further efforts to uncover effective strategies for building trust where trust-based exchange histories have not yet developed, or where trust has been damaged by reciprocation failure.
References

Axelrod, R., & Dion, D. (1988). The further evolution of cooperation. Science, 242, 1385-1389.
Balliet, D. (2010). Communication and cooperation in social dilemmas: A meta-analytic review. Journal of Conflict Resolution, 54(1), 39-57.
Barnett, M. (2003). Unringing the bell: Can industries reverse unfavorable institutional shifts triggered by their own mistakes? Southern Management Association Conference Proceedings, pp. 800-806.
Benoit, W. L., & Drew, S. (1997). Appropriateness and effectiveness of image repair strategies. Communication Reports, 10, 153-163.
Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity and social history. Games and Economic Behavior, 10, 122-142.
Bohnet, I., & Frey, B. S. (1999). The sound of silence in prisoner's dilemma and dictator games. Journal of Economic Behavior & Organization, 38, 43-57.
Bottom, W., Daniels, S., Gibson, K. S., & Murnighan, J. K. (2002). When talk is not cheap: Substantive penance and expressions of intent in the reestablishment of cooperation. Organization Science, 13, 497-515.
Bracht, J., & Feltovich, N. (2009). Whatever you say, your reputation precedes you: Observation and cheap talk in the trust game. Journal of Public Economics, 93, 1036-1044.
Buchan, N. R., Croson, R., & Johnson, E. J. (2006). Let's get personal: An international examination of the influence of communication, culture, and social distance on other-regarding preferences. Journal of Economic Behavior and Organization, 60(3), 373-398.
Carter, J. R., & Irons, M. D. (1991). Are economists different, and if so, why? Journal of Economic Perspectives, 5, 171-177.
Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74, 1579-1601.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
Conway, N., & Briner, R. B. (2002). A daily diary study of affective responses to psychological contract breach and exceeded promises. Journal of Organizational Behavior, 23, 287-302.
Dawkins, R., & Krebs, J. R. (1978). Animal signals: Information or manipulation? In J. R. Krebs & N. B. Davies (Eds.), Behavioural Ecology: An Evolutionary Approach (1st ed., pp. 282-309). Oxford: Blackwell Scientific.
De Cremer, D. (2010). Rebuilding trust. Business Strategy Review, 21(2), 79-80.
De Cremer, D., van Dijk, E., & Pillutla, M. M. (2010). Explaining unfair offers in ultimatum games and their effects on trust: An experimental approach. Business Ethics Quarterly, 20(1), 107-126.