Munich Personal RePEc Archive
Building and Rebuilding Trust with
Promises and Apologies
Eric Schniter and Roman Sheremeta and Daniel Sznycer
Online at http://mpra.ub.uni-muenchen.de/53596/
MPRA Paper No. 53596, posted 10. February 2014 15:14 UTC
Eric Schniter a, Roman M. Sheremeta b, Daniel Sznycer c

a Economic Science Institute, Chapman University, One University Drive, Orange, CA 92866
b Argyros School of Business and Economics, Chapman University, One University Drive, Orange, CA 92866
c Center for Evolutionary Psychology, University of California, Santa Barbara, CA 93106

September 9, 2012

Abstract

Using trust games, we study how promises and messages are used to build new trust where it did not previously exist and to rebuild damaged trust. In these games, trustees made non-binding promises of investment-contingent returns, then investors decided whether to invest, and finally trustees decided how much to return. After an unexpected second game was announced, but before it commenced, trustees could send a one-way message. This design allowed us to observe the endogenous emergence and natural distribution of trust-relevant behaviors and focus on naturally occurring remedial strategies used by promise-breakers and distrusted trustees, their effects on investors, and subsequent outcomes. In the first game 16.6% of trustees were distrusted and 18.8% of trusted trustees broke promises. Trustees distrusted in the first game used long messages and promises closer to equal splits to encourage trust in the second game. To restore damaged trust, promise-breakers used apologies and upgraded promises. On average, investments in each game paid off for investors and trustees, suggesting that effective use of cheap signals fosters profitable trust-based exchange in these economies.
Keywords: promise, atonement, apology, cheap talk, cheap signals, trust game, trust building, remedial strategies, reciprocity, experiments

Corresponding author: Eric Schniter, firstname.lastname@example.org

* An earlier version of this paper circulated under the title “Restoring Damaged Trust with Promises, Atonement and Apology.” For inspiration to pursue this study we thank John Dickhaut. We thank an advisory editor and an anonymous referee for their comments. Helpful comments were also received from Hilly Kaplan, Wojtek Przepiorka, and participants at the Workshop on Communication in Games (at the University of Zurich), the Human Behavior and Evolution Society annual meeting (in Montpellier, France), the Center for Evolutionary Psychology (at UC Santa Barbara), the John Dickhaut Memorial Conference (at Chapman University), and the Association for Religion, Economics and Culture annual meeting (at Chapman University). We also thank the Economic Science Institute at Chapman University for funding this research.
1. Introduction

In modern economies, where trust unlocks vast potential gains in transactions involving deferred or risky returns, problems associated with developing and restoring trust are particularly relevant. A scientific understanding of the processes that encourage trust where it did not previously exist and that restore trust when it is damaged is therefore of paramount importance.
Despite the large literature on damages to corporate reputation (e.g., see Barnett 2003 on US chemical industry disasters; see Robinson & Rousseau 1994 for a survey of corporate trust violations), very little research exists on how new trust can be encouraged where it did not previously exist and how damaged trust can be rebuilt (Dirks et al. 2009). Most of the existing research in this area (but see Fischbacher & Utikal 2010) is either exclusively theoretical (Lewicki & Bunker 1996; Mishra 1996; Lewicki & Wiethoff 2000; Ren & Gray 2009; Gillespie & Dietz 2009), based on anecdotal or archival evidence (Elsbach 1994; Knight & Pretty 1999), surveys (Slovic 1993), diary studies (Conway & Briner 2002), fictional vignettes (Tomlinson et al. 2004), videotaped dramatizations (Kim et al. 2004, 2006), or experimental designs using deception (Gibson et al. 1999; Bottom et al. 2002; Nakayachi & Watabe 2005; Schweitzer et al. 2006; Ohtsubo & Watanabe 2009).
To study how damaged trust can be rebuilt and new trust can be encouraged, we conducted a non-deceptive study wherein financially motivated participants used endogenously created and naturally distributed promises and apologies. Our study is based on a version of the “investment game” by Berg, Dickhaut & McCabe (1995). In the original investment game an investor is endowed with $10 and can invest any portion of her endowment by sending it to a trustee. The amount sent triples in value before reaching the trustee. Having received funds from this tripled investment, the trustee can reciprocate by returning any portion of these funds to the investor. Since sending money is risky, investments are usually interpreted as trust, and since returning money is costly, reciprocation via returns on investments is interpreted as evidence of trustworthiness.1 The investment game, therefore, has been extensively used to study trust and reciprocity in an investment setting (for a review see Ostrom & Walker 2005). A common finding in the literature is that investors tend to exhibit trust and trustees tend to reciprocate.

1 This interpretation is based on the assumption that participants identify and act in accordance with unstated if-then propositions and expect others to do so as well (Rousseau 1989), though there is no contract stating expected or contingent behavior in the classic “investment game” (see Berg et al. 1995). Because the assertion that the original game was universally understood to be about “trust” was debatable, John Dickhaut preferred calling it the “investment game,” as it is called in the 1995 Berg et al. article. By adding a new starting stage in which trustees promise to return a portion of the income from investment, the game becomes more explicitly about trust. For this reason we refer to our modified form of the classic investment game, described below, as a “trust game.”
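The payoff structure of the original investment game can be sketched in a few lines. This is an illustrative sketch by way of a reading aid, not the authors' experimental software; the function name and parameter defaults are ours.

```python
def investment_game_payoffs(endowment, invested, returned, multiplier=3):
    """Final payoffs in the Berg, Dickhaut & McCabe (1995) investment game.

    The investor keeps whatever she does not invest; the invested amount
    is multiplied (tripled, by default) before reaching the trustee, who
    may return any portion of what he received.
    """
    assert 0 <= invested <= endowment
    received = invested * multiplier
    assert 0 <= returned <= received
    investor_payoff = endowment - invested + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Full trust and an equal split of the tripled amount:
print(investment_game_payoffs(10, 10, 15))  # (15, 15)
# Full trust with no reciprocation:
print(investment_game_payoffs(10, 10, 0))   # (0, 30)
```

The two example calls show why investing is risky but potentially profitable: a reciprocating trustee leaves both parties better off than no investment at all, while a non-reciprocating trustee captures the entire tripled amount.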
It has also been well established that pre-play communication, even if “irrelevant” to game strategy, can induce higher contributions in public goods games (for meta-analyses see Sally 1995, Balliet 2010) and more cooperation in dyadic social dilemmas (Deutsch 1958, 1960;
Radlow & Weidner 1966; Buchan et al. 2006; Duffy & Feltovich 2006; Bracht & Feltovich 2009). However, with the exception of a few studies using deception, the experimental economic literature is silent as to what behavior ensues when promises fail to establish trust and what happens to trust and reciprocity in subsequent interactions after promises are broken and trust is damaged.
In this paper we describe a study using trust games that examines how promises and messages are used to build new trust where it did not previously exist and to rebuild damaged trust. In these games, trustees made non-binding promises of investment-contingent returns, then investors decided whether to invest, and finally trustees decided how much to return. After an unexpected second game was announced, but before it commenced, trustees could send a one-way message. This design allowed us to observe the endogenous emergence and natural distribution of trust-relevant behaviors and focus on naturally occurring remedial strategies used by promise-breakers and distrusted trustees, their effects on investors, and subsequent outcomes.
In the first game 16.6% of trustees were distrusted and 18.8% of trusted trustees broke promises.
Trustees distrusted in the first game used promises closer to equal splits and, compared to previously trusted promise-keepers, relatively longer messages to encourage new trust in the second game. Promise-breakers used relatively higher new promises (compared to all other trustees) and messages (usually with apology) to successfully restore damaged trust. On average, investments in each game paid off for investors and trustees, suggesting that the context-specific signaling described above can foster profitable trust-based exchanges in these economies.
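The trustee categories discussed above (distrusted, promise-keeper, promise-breaker) can be sketched as a simple classification rule. This is illustrative only; in particular, treating a promise as kept when the actual return is at least the promised amount is our simplifying assumption, not necessarily the paper's exact coding rule.

```python
def classify_trustee(invested: bool, promised: float, returned: float) -> str:
    """Classify a trustee's outcome in one trust game."""
    if not invested:
        # The investor declined to invest: the trustee was distrusted,
        # so the non-binding promise was never put to the test.
        return "distrusted"
    # Assumption: a promise counts as kept if the actual return is at
    # least the promised amount.
    return "promise-keeper" if returned >= promised else "promise-breaker"

print(classify_trustee(False, 15, 0))   # distrusted
print(classify_trustee(True, 15, 15))   # promise-keeper
print(classify_trustee(True, 15, 5))    # promise-breaker
```

Under this rule, the statistics reported above correspond to the fraction of trustees in the first category (16.6%) and, among the remainder, the fraction in the third (18.8%).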
2. Background

While mutually beneficial non-binding agreements help realize opportunities to gain from asynchronous trade, they are subject to exploitation by under-reciprocators or non-reciprocators.
Our research focuses on trustees’ cue and signal effects on investor trust in asynchronous exchanges that provide opportunity for mutual advantage. In these exchanges, we define trust as voluntarily ceding resources to another in the expectation that the other intends to reciprocate in accordance with signaled intentions. Trustworthiness is defined as reciprocation (of resources ceded by the investor) in accordance with signaled intentions.
To successfully navigate a trust-based cooperative interaction and avoid exploitation by cheaters, it is important for investors to obtain accurate information about the ability and willingness (propensity) of trustees to carry out their end of the cooperative deal. Trustworthy reputations that have been demonstrated by past actions serve as reliable cues upon which investors can make trust-based decisions. In initial interactions with unknown partners, informative cues about an investor’s willingness to trust or a trustee’s trustworthiness are unavailable. In the absence of information about the interactants’ past behavior, signals2 are often sent to receivers with the intention to communicate information about the sender (e.g., see Farrell & Rabin 1996); for example, that the sender is trustworthy. Where cues have informed investors of untrustworthiness, signals may be sent with the intention of persuading those investors that the sender is more trustworthy than inferred from those cues alone.
Signals encouraging trust appear to be important tools for developing mutually beneficial relationships under conditions where trust has not yet been established and where trust has been
damaged. Without the effective use of signals, cooperative interactions may be forgone:
potential investors may decide not to extend trust when they lack reputational information and when cues indicate a breach of trust. Further, when trust has been damaged, signals give investors access to relevant though invisible propensities of trustees, such as in the case of recalibrated upgrades in trustworthiness. This is true whether trust has been damaged intentionally or unintentionally (Axelrod & Dion 1988).
Although signals that accurately convey behavioral propensities are potentially useful to both senders and receivers, signalers may send “dishonest signals” to benefit at the expense of receivers. Critical receivers can incur lower costs than naïve receivers (Dawkins & Krebs 1978;
Maynard Smith 1982), and so natural selection favors those receivers who can accurately assess the cost-benefit tradeoffs associated with emitters’ signals, and calibrate their trustfulness accordingly.

2 We distinguish cues from signals from coercion (borrowing from similar definitions by Diggle et al. 2007; Scott-Phillips 2008) as follows. Cue: any behavior or feature that (i) affects the behavior of other organisms; (ii) which is effective because the effect has evolved to be affected by the behavior or feature; but which (iii) did not evolve because of those effects. Signal: any behavior or feature that (i) affects the behavior of other organisms; (ii) evolved because of those effects; and (iii) which is effective because the effect (the response) has evolved to be affected by the behavior or feature. Coercion: any behavior or feature that (i) affects the behavior of other organisms; (ii) evolved because of those effects; but which (iii) is effective for some reason other than that the effect has evolved to be affected by the behavior or feature.
Zahavi (1975) addressed the question of why signals are reliable, suggesting that the high production cost of a signal guarantees its reliability, insofar as the production cost outweighs the benefits gained from using the signal deceptively but not the benefits gained from using it honestly.
The prototypical example is the massive and colorful peacock’s tail, indexing the peacock’s genetic quality for peahens’ mate selection (Petrie et al. 1991). In this system, costly signals persuade the receivers while cheap signals fail to do so (Zahavi 1993; Grafen 1990).
Production costs are not the only warrantors of signal reliability, however. Human language, whether spoken or written, is an arbitrary communication system that often uses relatively cheap-to-produce signals to negotiate trust between individuals with conflicting interests (Lachmann et al. 2001). If these cheap signals are used and relied upon by humans globally and on average, what explains the maintenance of their reliability?
The reliability of cheap signals is supported by the actuality or threat of social sanctions that can more than offset the short-term benefits of cheating and deception (Rohwer 1977;
Kiyonari et al. 2000; Masclet et al. 2003). A selective regime characterized by repeated interactions among known others (Kelly 1995) has led to psychological mechanisms for social exchange that balance (i) the costs of mistaking a one-shot interaction for a repeated interaction with (ii) the far greater costs of mistaking a repeated interaction for a one-shot interaction (Delton et al. 2011). Hence, participants in explicitly one-shot anonymous experiments often behave as if they expect repeated interactions with trustworthy, intrinsically valuable partners (e.g., see Hoffman et al. 1996; Kiyonari et al. 2000).
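The cost asymmetry behind this argument can be made concrete with a bit of expected-value arithmetic. The numbers below are ours, chosen purely for illustration; they are not taken from Delton et al. (2011).

```python
def expected_error_costs(p_repeat, cost_defect_in_repeated, cost_cooperate_in_oneshot):
    """Expected cost of each classification-error policy when it is
    uncertain whether an interaction will repeat.

    Always defecting is costly only when the interaction turns out to be
    repeated (future gains are forfeited); always cooperating is costly
    only when it turns out to be one-shot (the cooperative act is wasted).
    """
    always_defect = p_repeat * cost_defect_in_repeated
    always_cooperate = (1 - p_repeat) * cost_cooperate_in_oneshot
    return always_defect, always_cooperate

# Even with a modest chance of repetition, defection is the costlier
# error policy when forfeited future gains (say, 50) dwarf the one-shot
# cost of a wasted cooperative act (say, 5):
defect_cost, cooperate_cost = expected_error_costs(0.2, 50, 5)
print(defect_cost, cooperate_cost)  # expected costs of roughly 10 vs. 4
```

Because the cost of mistaking a repeated interaction for a one-shot one dominates, a decision rule calibrated to treat ambiguous interactions as repeated can be favored, which is consistent with the one-shot generosity cited above.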