Why fund operators need to backtest more frequently

Matthew Stiling, Senior Operational Due Diligence Analyst, Investcorp-Tages, discusses best practices for fair valuations, proprietary models, and enhancing data interpretation.

Andrew Putwain POSTED ON 7/25/2024 8:00:00 AM


Andrew Putwain: Can you discuss the major issues around data interpretation as you see them – for example, are they related to formulas around ad hoc data, standardisation, or something else?

Matthew Stiling: This is really split into two sections, which apply equally to Public and Private markets: inputs and models. These are the two drivers of a valuation and so define the valuation process. This follows on to the importance of this valuation to the fund administrator and their ability to check the underlying data being used as part of the valuation process.

The biggest emphasis throughout this process is on inputs, as these are what the accounting standards refer to in prescribing a valuation methodology. For the purposes of fair value measurement, we need to understand whether the price we ultimately arrive at is a price that can be transacted in the market.

"These service providers giving prices could be using erroneous data points, such as unrealistic broker quotes."

If we take a listed equity, for example, that's normally a close-of-day exchange price, typically taken from Bloomberg – which is Level 1 on the fair value hierarchy. Level 2 is where we deviate from an exchange-traded price and start to use broker quotes or third-party inputs as part of a model.

But a lot of the time those data points come from third-party service providers. Taking a step back, these service providers giving prices could be using erroneous data points, such as unrealistic broker quotes. You can't simply be a price taker: you need to backtest, check whether you disagree with the prices you're seeing from these service providers, and then challenge them or screen for outliers that might be skewing the pricing.
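The outlier screening described here can be sketched in a few lines. This is a minimal illustration, not a production control: the use of the median absolute deviation, the threshold of 3, and the quote data are all assumptions chosen for the example.

```python
from statistics import median

def screen_quotes(quotes, threshold=3.0):
    """Flag broker quotes that deviate sharply from the consensus.

    Uses the median absolute deviation (MAD), which is robust to the
    very outliers we are trying to detect. The threshold and data are
    illustrative, not an industry-prescribed standard.
    """
    mid = median(quotes)
    mad = median(abs(q - mid) for q in quotes)
    if mad == 0:  # all quotes agree: nothing to flag
        return []
    return [q for q in quotes if abs(q - mid) / mad > threshold]

# Five hypothetical broker quotes for the same bond; one looks unrealistic.
quotes = [99.4, 99.6, 99.5, 99.7, 104.2]
print(screen_quotes(quotes))  # the 104.2 quote is flagged for challenge
```

Quotes that survive the screen can feed the valuation; flagged ones go back to the service provider as part of the challenge process described above.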

That's a big challenge from an operational due diligence (ODD) perspective. A lot of underlying managers say they accept set pricing sources straight away, thinking that they've done a good job because it's “independent”. What we want to see is scrutiny of that level of independence – and that the manager is doing their due diligence regarding the underlying inputs and how they lead to a valuation.

A prime example could be in the structured credit space. Everyone uses INTEX as the cash flow modelling system. That's fine, but what they start to do is use different inputs – the assumptions that generate the price. These assumptions could be anything from using different interest rate durations or weighted-average-life, or even comparing assumptions to similar security profiles, and, as a result of that, it's these nuances that will create different valuations.

Following on from this, we should see managers use their own modelling to check whether they are in line with the service providers being used, and be able to drill down into the assumptions the service provider is using. Once they've done that, there should be a challenge process in place.

A lot of service providers are happily on board with taking additional colour for the purposes of valuations. It may be that there is an error in their model or assumptions. It's making sure there’s a challenging back-and-forth process to ensure the manager’s not just a price taker and relying on the independence of that service provider.

Andrew: The standardisation of interpretation models and frameworks is a challenge for many firms. What would your advice be in this area?

Matt: As a starting guide, I recommend the use of industry-standard guidelines, especially in the Private markets space. There are the IPEV guidelines, or you've got the AICPA Valuation Guide. The IPEV guidelines are just that – guidelines – whereas the AICPA Valuation Guide is more methodical. But those are good starting points when looking at the framework for the models being applied.

Data gathering for those models is imperative, especially in the Privates space, as this is typically what's going to delay your NAV process. If you're not receiving the underlying data, or you’re having to make assumptions because you haven't received that underlying data, it’s going to skew your valuation and have an impact on NAV volatility.

Once you get that data, and you put it into the model, the importance is then being able to see how accurate that model is. Does it need reassessing? If you are looking at a methodology for pricing an underlying security, you need to make sure it’s consistent.

Is it consistent across the board? If it's not, then you need to be able to explain why. Something we hear too often in the industry – a phrase that auditors don't like, and that ODD professionals definitely don't like – is pricing “conservatively”.

"I would emphasise back testing as the important part of assessing any model, because this will help you realise the insufficiencies within that model."

As soon as I hear a manager say they're pricing conservatively, that automatically tells me that what we're seeing is an intention to undervalue an asset as opposed to providing a fair value.

The best way to avoid this is to back-test your methodology. How does it compare to a realisation that you've actually seen? Use these real-world events to look at the model and see if it needs changing.

I would emphasise back testing as the important part of assessing any model, because this will help you realise the insufficiencies within that model.
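The backtest Matt emphasises – comparing carried marks against prices achieved at realisation – might look like this minimal sketch. The position names, prices, and the 5% tolerance are all hypothetical, not a prescribed methodology.

```python
def backtest_marks(marks, realisations, tolerance=0.05):
    """Compare the last carried mark for each position against the price
    actually achieved at realisation, and flag tolerance breaches.

    `marks` and `realisations` map a position id to a price. The names,
    data, and 5% tolerance are illustrative assumptions.
    """
    breaches = {}
    for position, realised in realisations.items():
        mark = marks[position]
        error = (realised - mark) / mark  # signed valuation error
        if abs(error) > tolerance:
            breaches[position] = round(error, 4)
    return breaches

marks = {"loan_a": 98.0, "equity_b": 12.5, "abs_c": 101.0}
realised = {"loan_a": 97.5, "equity_b": 15.0, "abs_c": 100.2}
print(backtest_marks(marks, realised))
# equity_b realised well above its mark: the model's assumptions need review
```

A persistent one-sided error like the `equity_b` breach is exactly the kind of insufficiency the backtest is meant to surface: it suggests a methodology that was systematically undervaluing, not pricing fairly.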

Andrew: What challenges and opportunities are you seeing in the valuation process, especially around accounting standards?

Matt: A formal governance structure around valuation is important to mention.

We’ve talked about backtesting, looking through data inputs, and cleaning them. The next thing we need to see is a formal valuation committee that is chaired by a non-investment professional, typically the CFO/COO, and supported by members of the risk, operations, and compliance teams. The committee should have arm's length independence from the investment team. Any valuation method is going to be owned by the investment analyst working on that particular investment.

They need to understand their inputs. They should own the model and be able to explain it to an independent committee that should have sufficient talent to be able to challenge them.

That’s an extra control function that prevents continuous flip-flopping in methodologies, which, again, would raise issues around whether the security is being valued appropriately.

"As you're mapping out a valuation policy, it must fundamentally apply to the accounting standard being used."

The valuation committee should have an understanding and the ability to challenge the investment professional who controls the methodology for that valuation. We do see the CIO, or the lead portfolio managers, sit on this committee too. They tend not to have voting rights in that committee, per se, but they can add colour that will further that committee's understanding, providing insight into the valuation.

As you're mapping out a valuation policy, it must apply to the accounting standard being used – be that IFRS or GAAP – because what we don't want to see during the audit process at the end of the year is changes in valuation methodologies that may conflict with that accounting standard.

Another issue, especially in the private markets space, is stale pricing. It's common – and its treatment differs between accounting methodologies in different jurisdictions, particularly when valuing at cost. Now, valuing at cost may be the most appropriate pricing methodology, but it should be explained in the context of the relevant accounting practice. For example, valuing at cost may be considered a fair value measurement, i.e., the last transacted price.

However, what we would expect from the manager is the ability to say they’ve done further analysis, to be able to benchmark what is the most appropriate valuation methodology, especially in the Private Equity/Venture Capital world.

Andrew: Let’s discuss proprietary models; a lot of emphasis on this is around the ESG sphere but are you seeing it in other places as well?

Matt: When it comes to ESG, due to the qualitative nature of quantifying ESG metrics, you do see variation. The problem there is that data providers have a huge variance in their output for the same company.

It goes back to screening your data – for example, what is one service provider saying versus another? Going back to the Privates side, a lot of your underlying data, especially for ESG metrics, is going to come from the underlying portfolio company. The portfolio companies that are slowest to get their data to you are typically the ones that are going to perform the least well, though that’s more a nuance of the venture capital world.

Ultimately, you can have all the data you want in the world but being able to clean and quantify it appropriately – and ensure those inputs are as observable as possible – is key. That might mean you have to do industry-wide analysis to compare to individual companies. This has been the stumbling block for ESG valuations – the variance in data and the quality of that data.
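The provider-variance problem can be made concrete with a small sketch that measures how far third-party ESG scores for the same company spread apart. The providers, companies, scores, and the 0–100 scale below are all invented for illustration.

```python
from statistics import pstdev

def esg_disagreement(scores_by_provider):
    """Measure how much ESG data providers disagree on each company.

    `scores_by_provider` maps provider -> {company: score on a 0-100
    scale}. Returns the per-company population standard deviation of
    scores. All names and numbers here are hypothetical.
    """
    companies = next(iter(scores_by_provider.values())).keys()
    return {
        company: round(pstdev(p[company] for p in scores_by_provider.values()), 2)
        for company in companies
    }

scores = {
    "provider_x": {"acme": 72, "globex": 40},
    "provider_y": {"acme": 35, "globex": 44},
    "provider_z": {"acme": 58, "globex": 38},
}
print(esg_disagreement(scores))
# wide dispersion on 'acme' means its score needs cleaning before use
```

A high-dispersion score is the signal to do the industry-wide analysis mentioned above, rather than feeding any single provider's number into a proprietary model unchallenged.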

This is where the proprietary model will start to come into focus but, again, there needs to be more scrutiny around that model, which refers back to the governance structures that we were describing earlier.

Andrew: That quantitative versus qualitative data ESG debate and making judgement calls of whether the data is good enough, etc. Does this situation mean you have to create the benchmark as you go along?

Matt: As it stands currently, yes. This is due to the lack of a defined standard for ESG rating and scoring. There are multiple rating agencies who conduct such scoring; however, their methodologies and rating systems differ. The whole point of ESG benchmarking is being able to draw a comparison with industry peers.

Greenwashing has been a focus over the last year, and the regulators aren't looking kindly on it, though some of it is inadvertent, due to the lack of a standard taxonomy.

Andrew: How is international standardisation developing?

Matt: There are international accounting bodies that are looking at ways that standards could be enhanced to incorporate ESG factors within the audited financial statements of publicly listed companies and larger companies.

This is going to create a lot of headaches for underlying companies. Fortunately, this is a good example of where the asset management industry is one step ahead of the rest. We're good at dealing with incoming regulation. However, unfortunately, the downside to this is the rest of the world hasn't caught up in terms of the data they need to provide to us for us to be able to move forward.

The Task Force on Climate-related Financial Disclosures (TCFD) is a good example – it assumes that we have emissions data, which means we have to collect that data. What's the methodology for that data? There will always be all these nuances because there isn't a set methodology on how to do this, creating scope for interpretation and misinterpretation.

Andrew: And different jurisdictions too?

Matt: As with any form of regulation, if you're marketing in two regulatory jurisdictions, managers typically adopt the most onerous conditions of those jurisdictions, because if you are scrutinised by a regulator, then you're beholden to that jurisdiction's rules.

Andrew: What would you like people to take away from this conversation?

Matt: Leverage your service providers, leverage your fund administrators, leverage your auditors. They’re all great sources of advice and so should be utilised. As an investor, I’d expect nothing less, given that the costs are borne by the Fund, and so, by us.

I’d also add that as we look at more illiquid or harder-to-value securities, consider using third-party valuation agents. They will either give you a price on a security, or they'll give you a range for assurance purposes. This is a great control function that will assist valuation committees, administrators, and auditors alike.

"A lot of managers fall into the trap of saying they have valuation policies, where instead they have procedures."

This isn't something that has to be done every NAV cycle for every underlying security and many managers work around tolerance levels to decide this. Has there been a dramatic pricing movement? Again, it's getting that additional assurance on prices from those service providers that's key. This gives us, as ODD practitioners, comfort when it comes to reviewing audited statements, because we know that the process has been enhanced and disclosed. Using valuation agents is also a good reflection point when evaluating methodologies.
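The tolerance-level approach described above – only referring a position to a valuation agent when there has been a dramatic pricing movement – can be sketched as a simple trigger. The 10% threshold and the rule itself are illustrative assumptions, not a prescribed industry level.

```python
def needs_independent_review(prior_price, current_price, tolerance=0.10):
    """Decide whether a price move since the last NAV cycle breaches the
    tolerance at which a manager might refer the position to a
    third-party valuation agent.

    A minimal sketch: the 10% tolerance is an assumption for
    illustration, not a recommended threshold.
    """
    move = abs(current_price - prior_price) / prior_price
    return move > tolerance

# A dramatic movement since the last NAV cycle triggers a referral.
print(needs_independent_review(100.0, 87.0))  # 13% move: refer it
print(needs_independent_review(100.0, 96.0))  # 4% move: within tolerance
```

In practice the threshold would itself be set out in the valuation policy, so that the decision to engage an agent is a documented control rather than an ad hoc judgement.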

This brings us around full circle to the biggest theme of this discussion, which is backtesting: backtest your inputs and backtest your methodology.

A final point is that a lot of managers fall into the trap of saying they have valuation policies, where instead they have procedures. There needs to be a clear definition between the valuation policy, which stipulates the accounting standards being used, the regulatory environment that these sit within and the controls in place, versus the process of getting to those valuations.

A lot of managers don't distinguish between them. I've sat in ODD meetings where a manager presents a valuation policy that doesn't prescribe any policies or controls – it only tells me the methodology of that valuation.

That clear delineation between valuation policy and valuation procedures is important, as the former is the control mechanism for the latter.

Matthew Stiling will speak at the Fund Operator Summit | Europe on 15 October 2024 in London. Read the agenda, and find out how to register here.
