I’ve already written about trust and influence using two triangles; today I’m using a cross.
Every time we try to influence, persuade and convince someone to follow an IA recommendation, we’re trying to initiate a decision. Most of us are passionate about IA and love the detail of the challenge. We fall in love with the beauty of the solution once we find it. We fixate on those details. But it’s sometimes useful to step back from the detail. What happens if we treat gaining agreement for our recommendations like any other decision? Can we apply a generic model of decision-making to getting agreement for IA decisions? What happens when we consider the (perceived) risk, reward, uncertainty and control that’s wrapped up in the decisions that we’re inviting? Are we more effective when we consider how the person we’re hoping to influence relates to each of these aspects?
I can’t think of a decision or IA recommendation that I’ve made that couldn’t be framed in relation to risk and reward, or control and uncertainty. So why would I steam in with a detailed appraisal of the technical aspects of my recommendation? Instead, I can approach recommendations through these four characteristics of decisions.
I know that some people I interact with are all about reward. What are the potential gains? For others, I need to be careful with how I talk about risk. Sometimes re-framing delaying a decision as the riskiest option is the only way I can get agreement from risk-averse stakeholders. Some people crave control – others want to avoid constraints that risk unpicking decisions in the future. Some people want to avoid uncertainty. For other people, unless you address uncertainty and ambiguity, they might believe you’re only partially aware of the true situation.
Asking ourselves questions about the people we’re trying to influence is the easiest way to plan our approach. You can model where you think their attention is by using the cross. Describe the relative importance of each element to create a personalised risk, reward, uncertainty and control blob for each person you want to influence. What would happen if this was the lens through which you introduced your recommendation – rather than focusing on the IA detail?
When you’re in a project, or emerging from a project with a set of recommendations, you’re surrounded by the work. You’ve been living in the IA, so it’s easy to adopt this perspective when you try to invite others into your world to agree with your recommendations. But most of the people we try to influence and convince aren’t embedded in our world. Non-IAs are surrounded by their own worries and hopes – their own facts and passions. How might you build a bridge from their world into yours, so that they can more easily adopt a perspective of agreement?
Use the axes and construct a blob of risk, reward, uncertainty and control. How might you adopt the language of the most dominant concern to more quickly build rapport and relevance with the person you’re hoping to influence?