
Past offenders are five times more likely to commit misconduct than the average advisor, and approximately one-third of advisors with misconduct records are repeat offenders. These numbers reveal a truth: your technical expertise matters less than your clients' confidence in you.
In this piece, we'll get into the critical mistakes we've made in our practice and share the frameworks we built to avoid repeating them—from accepting the wrong clients to going silent during market downturns.
We said yes to clients we should have declined. The enthusiasm of growing a practice clouded our judgement, and we accepted relationships that showed warning signals from the first conversation.
The most telling indicator appeared in how much we talked versus how much we listened. When advisors dominate these meetings, they miss critical information about client expectations, values, and whether the relationship will work. We spent time explaining our services while failing to understand whether the prospective client's needs aligned with our expertise.
We accepted potential clients without proper verification. Their assertions about financial situations, investment experience and expectations went unchallenged. This surface-level assessment meant we missed inconsistencies that would create problems later. Background checks, discussions with previous advisors and verification of financial statements seemed excessive at the time. They weren't.
Chasing certain clients felt strategic. High-net-worth prospects or those with impressive professional titles seemed like they would elevate our practice. We bid work below our costs, convinced that landing these "trophy clients" would attract others. The logic was flawed. We ended up staffing these unprofitable relationships with less experienced team members to manage losses, which damaged our reputation and the client's experience.
Wrong-fit clients extracted costs beyond the obvious financial metrics. Every deliverable required more explanation. Every call extended past its scheduled time. Every result faced questioning, even when performance was strong. The pattern drained resources that could have served clients who valued our approach.
You can measure profitability in realisation percentages and billing rates, but the hidden costs run deeper. Time spent managing a difficult relationship is time not spent prospecting for better-fit clients or deepening relationships with existing ones. The opportunity cost compounds over months and years.
These relationships also created a referral problem. Bad-fit clients refer other bad-fit clients. They connect with people who share their expectations and approach, which perpetuates a cycle that pulls your practice further from your ideal client base.
We documented the attributes of our best clients based on data, not assumptions. This required dissecting profitability and longevity patterns. Which five characteristics appeared among clients who stayed longest and generated sustainable revenue? We moved from vague notions of "ideal clients" to measurable criteria.
Our due diligence process became non-negotiable. This included conversations with referral sources, previous advisors and other professional contacts. We implemented third-party background checks covering criminal records, tax liens, judgements, bankruptcies and regulatory actions. This level of scrutiny felt uncomfortable at first, but it prevented costly mistakes.
We created an annual client retention review where our finance team provides billing data and realisation percentages by client. Low realisation rates trigger deeper discussions about whether the relationship serves both parties. This approach removes emotion from necessary decisions.
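The arithmetic behind that review is simple, and a rough sketch helps make it concrete. Realisation here means fees actually collected divided by fees at standard billing rates; the 80% threshold and the field names below are illustrative choices for this example, not an industry standard.

```python
# Illustrative sketch of an annual realisation review.
# Field names, figures and the 80% threshold are hypothetical examples.

REVIEW_THRESHOLD = 0.80  # flag clients realising less than 80% of standard fees

clients = [
    {"name": "Client A", "standard_fees": 42_000, "collected_fees": 40_500},
    {"name": "Client B", "standard_fees": 65_000, "collected_fees": 38_000},
    {"name": "Client C", "standard_fees": 18_000, "collected_fees": 17_100},
]

def realisation_rate(client):
    """Realisation = fees actually collected / fees at standard billing rates."""
    return client["collected_fees"] / client["standard_fees"]

# Lowest realisation first, so the hardest conversations surface at the top.
for client in sorted(clients, key=realisation_rate):
    rate = realisation_rate(client)
    flag = "REVIEW" if rate < REVIEW_THRESHOLD else "ok"
    print(f"{client['name']}: {rate:.0%} realisation [{flag}]")
```

The value is not in the calculation itself but in the trigger: a flagged client forces a conversation that would otherwise be postponed.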
We stopped rationalising exceptions to our criteria. The temptation to accept a marginal client while convincing ourselves we could "improve the situation over time" had failed repeatedly. Some situations don't change, and even when they can, the effort required is usually not worth the outcome.
The filter now operates at discovery. We ask ourselves whether we would choose to work with this person if compensation were identical across all potential clients. If the answer is no, we refer them elsewhere.
Our strongest market years nearly destroyed us. Portfolios climbed steadily, and clients praised our expertise, but we mistook favourable conditions for superior skill.
Overconfidence bias plagued us as it does most financial professionals. Some 73% of people believe themselves to be better-than-average drivers, and investors are no different: 64% rate their own investment knowledge as high. We fell into the same trap and convinced ourselves that our results reflected pure skill.
The pattern proved dangerous because younger professionals tend to display more confidence than experienced ones, yet they answer fewer questions correctly on investment knowledge assessments. We embodied this contradiction. Our enthusiasm exceeded our actual expertise by a concerning margin.
The Dunning-Kruger effect describes what we experienced: people with limited knowledge overestimate their abilities, while true experts recognise how much more they need to learn. We knew enough to be dangerous but not enough to understand the risks we were taking. Each successful quarter reinforced our inflated self-assessment.
Clients amplified this problem. They preferred confidence over caution and gravitated toward our assertive recommendations rather than toward measured analysis. People choose confident advisors even when confidence has nothing to do with actual competence. We delivered what clients wanted, which made us believe we possessed something special.
Self-attribution bias distorted our understanding of performance. Portfolios performed well, and we credited our stock selection and strategic positioning. Results disappointed, and we blamed market conditions or external factors beyond our control.
Research on fund managers reveals this pattern: they attribute 59% of performance contributors to internal factors like skill but attribute 83% of performance detractors to external factors. Managers are also 40.6% more likely to credit themselves for success than to accept responsibility for failure.
We operated the same way. Bull market gains became evidence of our analytical prowess. Downturns reflected unfortunate timing or irrational market behaviour. This mental framework prevented us from understanding our actual decision quality.
The confusion between luck and skill created compounding problems. Probabilistic thinking helped us understand that winning and losing are loose signals of decision quality at best. You can make poor decisions and profit for a while, which validates ineffective processes. We had done that and then doubled down on flawed approaches because short-term results appeared to confirm our methods.
The statistics should have humbled us: only 25% of active funds outperformed their passive counterparts over ten years. Yet we believed we belonged to that minority without understanding whether our process justified such confidence.
We built safeguards against our own judgement. A formal decision review process now requires us to document the reasoning behind major recommendations before implementation. This creates a record we can examine later and separates decision quality from outcomes.
Diverse perspectives became non-negotiable. We encouraged an environment where junior team members could challenge senior recommendations without career consequences. Surrounding ourselves with people willing to question our thinking proved crucial for catching blind spots.
We implemented scenario testing that forces us to articulate both bull and bear cases for every position. This pre-mortem approach requires imagining potential failures before they occur. The exercise surfaces risks that confidence would otherwise obscure.
Regular calibration exercises test whether our certainty levels match actual accuracy. We express high confidence in a forecast and track whether that confidence is warranted. The feedback loop improved our ability to distinguish genuine expertise from overconfidence.
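A minimal sketch of what one of those calibration checks can look like, assuming each forecast is logged with a stated confidence and later marked correct or incorrect (the log format and figures below are illustrative, not our production records):

```python
# Illustrative calibration check: does stated confidence match the hit rate?
from collections import defaultdict

# Hypothetical forecast log: stated confidence and whether the call proved correct.
forecast_log = [
    {"confidence": 0.9, "correct": True},
    {"confidence": 0.9, "correct": False},
    {"confidence": 0.9, "correct": True},
    {"confidence": 0.7, "correct": True},
    {"confidence": 0.7, "correct": False},
    {"confidence": 0.6, "correct": False},
]

# Group outcomes by stated confidence level.
buckets = defaultdict(list)
for forecast in forecast_log:
    buckets[forecast["confidence"]].append(forecast["correct"])

# Compare stated confidence with the realised hit rate in each bucket.
for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    gap = confidence - hit_rate  # a positive gap suggests overconfidence
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} "
          f"over {len(outcomes)} calls (gap {gap:+.0%})")
```

If stated confidence consistently exceeds the realised hit rate, that gap is exactly the overconfidence the exercise is designed to expose.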
Risk questionnaires gave us false precision. We administered them during onboarding, filed the results, and proceeded as though we had captured something meaningful about our clients' psychological makeup.
Standard questionnaires explained only 13.1% of the variation in risky assets across investors' portfolios. When we factored in advisor influence, that figure rose to 31.6%. This revealed that our biases shaped portfolios more than the assessments we relied upon. Yet we treated these flawed instruments as definitive measures.
The questionnaires themselves had fundamental problems. Research that looked at risk profiling tools found that only 16.7% were fit for purpose. Among the remainder, 27.8% had poorly worded questions combining multiple factors, and 75% used arbitrary scoring models. We fell into both traps, using in-house questionnaires that merged risk tolerance with risk capacity without distinguishing between them.
Risk tolerance represents psychological willingness to accept uncertainty. Risk capacity reflects the financial ability to withstand losses based on income and time horizon. We confused the two often. Clients with high tolerance but low capacity received aggressive allocations they couldn't afford. Others with substantial capacity but modest tolerance ended up in portfolios too conservative for their circumstances.
We also had couples complete assessments together rather than individually, which missed significant differences between partners. The questionnaires captured snapshots during calm markets and asked people to hypothesise about reactions to downturns they hadn't experienced.
Market corrections exposed the gap between stated tolerance and actual behaviour. Research during the COVID-19 crash found that financial professionals reduced their risk-taking by 12% even though their price expectations hadn't changed and they saw markets as less risky. Risk aversion rose during stress and became disconnected from rational assessment.
Our clients displayed similar patterns. Those who had selected aggressive portfolios during bull markets panicked when values dropped. Many sold near bottoms and destroyed wealth and emotional well-being. The questionnaires had measured current market sentiment, not stable psychological traits.
We moved from hypothetical questions to behavioural observation. During market volatility, we now track actual client responses rather than rely on predictions. Did they buy more, sell everything, or freeze? Past behaviour during stress predicts future responses much better than survey answers.
We separated tolerance from capacity in our assessments and now reassess both annually, especially as clients approach retirement and sequencing risk increases. We also run scenario analysis that shows clients specific dollar losses rather than abstract percentages, forcing them to confront realistic declines before they occur.
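As a rough illustration of that scenario analysis, assuming a hypothetical portfolio value and a few example decline scenarios (none of the numbers below are recommendations), the exercise is simply to translate each percentage into the dollars a specific client would lose:

```python
# Illustrative scenario analysis: turn abstract decline percentages
# into dollar losses on a client's actual portfolio value.
# The portfolio value and scenarios below are hypothetical examples.

portfolio_value = 1_250_000

scenarios = {
    "Mild correction (-10%)": -0.10,
    "Bear market (-25%)": -0.25,
    "Severe crisis (-40%)": -0.40,
}

for label, decline in scenarios.items():
    loss = portfolio_value * decline
    remaining = portfolio_value + loss
    print(f"{label}: {loss:+,.0f} -> portfolio worth {remaining:,.0f}")
```

Framed this way, a 25% drawdown on the example portfolio reads as a loss of more than $300,000 rather than a percentage, which is the confrontation the exercise is meant to create.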
We disappeared exactly when our clients needed to hear from us most. Market downturns arrived, our clients watched portfolio values decline, and anxiety spiked. We retreated into silence.
Our mistake followed a predictable pattern: we waited for clients to reach out in panic rather than initiating contact ourselves. This reactive stance meant clients interpreted our silence as indifference or, worse, incompetence. When markets wobbled, they weren't thinking about sophisticated planning processes. They were wondering whether they would be all right.
Silence creates doubt far faster than falling markets do. Clients who don't hear from their advisors for extended periods begin asking themselves critical questions: Are they still monitoring things? Would someone else be more proactive? Am I missing opportunities? Those questions rarely get voiced before the client leaves.
We believed that if clients weren't calling, they were satisfied. That assumption proved expensive. Clients who feel uncertain don't schedule meetings or send emails. They quietly lose confidence while we remain oblivious to the erosion.
Research shows clients judge advisors not only by portfolio performance but also by how well they keep them informed and support them in tough times. During the 2008-09 recession, advisors with strong communication strategies saw increased referrals, while those with poor communication lost clients. Of clients who rated their advisor experience very highly, 68% attributed it to the advisor being available.
Communication frequency matters more than we acknowledged. If you communicate with clients fewer than twelve times per year, you increase your attrition risk. Advisors don't lose clients because of one poor quarter. They lose clients because of inconsistent visibility.
We flipped our approach entirely. Now we initiate contact before clients have a chance to worry themselves into rash decisions. Our framework includes timely market updates, framed in plain language and focused on what volatility means for each client's situation. We schedule personal calls with our most important client relationships during turbulence and offer calm reassurance rather than reactive, panicked responses.
We validate concerns and reinforce the strength of the client's financial strategy, and we avoid dismissive phrases like 'don't worry' or 'this is just noise'. The shift from reactive to proactive communication transformed client retention through subsequent downturns.
Commission structures compromised our objectivity before we recognised the problem. We believed we could separate our compensation from our recommendations, but the data proved otherwise.
When one mutual fund paid us higher commissions than another, we gravitated toward the more lucrative option. This wasn't conscious dishonesty. Commission-based advisors operate under a suitability standard, meaning recommendations only need to be appropriate for the client's situation, not the best option available. We recommended products that fit our clients, but we ignored better alternatives that paid us less.
Product manufacturers paid us to recommend specific investments, which created inherent bias. Insurance intermediaries in some markets derived up to 99% of their total revenue from commissions. We failed to see how this compensation model shaped our product selection and steered clients toward higher-cost options that carried higher commissions.
Fee-only advisors eliminate commission conflicts by accepting only direct client payments. We transitioned to this model and accepted the fiduciary duty that requires putting clients' interests first by law. This transparency meant clients knew exactly how we earned our compensation, with no hidden benefits from product sales.
We stopped receiving compensation from commissions and insurance trails, which meant selling our commission business and requesting removal of our insurance appointments. Client conversations about the fee structure changes felt uncomfortable, but we made the move to benefit clients, not exploit them.
These mistakes cost us clients, revenue, and credibility. Accepting wrong-fit clients, confusing luck with skill, relying on flawed risk assessments, disappearing during downturns, and putting commissions over outcomes damaged the foundation of trust that advisors must build.
The patterns are clear: your technical expertise matters nowhere near as much as your integrity and communication. Clients forgive temporary underperformance but won't forgive breaches of trust or silence that drags on.
Advisory practices of every size face similar temptations. Learn from our failures rather than repeat them. Build your practice on fiduciary principles, proactive communication, and honest self-assessment. Your clients deserve nothing less, and your long-term success depends on it.