In Engine B’s latest LinkedIn poll, we asked auditors: ‘Can sampling for substantive and controls testing deliver a high enough quality audit now that technology allows for 100% testing?’
The result, though clearly in favour of using technology to ensure quality over traditional sampling methods where possible, highlighted the large proportion of auditors still reliant on sampling to provide supporting evidence when forming their professional opinion.
The simple answer most auditors will give you is that getting the data required to perform 100% testing is too difficult, and that their clients’ systems inevitably throw up far too many ‘false positives’ to make such an approach realistic. Perhaps, though, the answer lies more in the simple truth that sampling has been around for many years, so it is no surprise there is a reluctance to embrace change. It’s clear that more education is needed to help auditors understand how they can leverage technology to produce a far more comprehensive audit that goes beyond traditional sampling methods.
ISA 530 defines audit sampling as:
“The application of audit procedures to less than 100% of items within a population of audit relevance such that all sampling units have a chance of selection in order to provide the auditor with a reasonable basis on which to draw conclusions about the entire population.”
Sampling provides the auditor with an opportunity to form a professional judgement based on the minimum amount of audit evidence, without having to audit every single item and transaction. However, by this very definition, the sampling process means that not all information is tested, and there remains the risk that the auditor’s conclusion could be different if it were based on the entire population rather than a sample.
ISA 530 describes the four types of erroneous conclusion that sampling risk can lead to: in a test of controls, concluding that controls are more effective than they actually are, or that they are less effective than they actually are; and in a test of details, concluding that a material misstatement does not exist when in fact it does, or that one exists when in fact it does not.
All four risks are problematic. In audit we tend to worry most about incorrectly claiming assurance where we have none, but there are also real costs in time, effort and lost opportunity when teams spend hours resolving apparently material issues extrapolated from a sample, only to find that the true scale of the error is small.
So now that 100% testing can be used in some areas, is it reasonable to still accept these sampling risks?
We think not.
When it comes to analysing 21st-century companies and markets with 21st-century volumes of data, sampling no longer delivers the quality assurance people look to the audit profession to deliver. Audit initially started using sampling because the volume of work required to test every item was unreasonable. With larger and larger data volumes and sample sizes, even a sampling approach is now enormously laborious for many firms. But for some substantive and controls areas, we can now confidently audit 100% of transactions with much less effort, using technology.
It’s time for audit firms to face the facts: audit is changing. If audit changes appropriately, including adopting the right kinds of technology in the right ways, we could have a renaissance of assurance.
It’s true that until recently there hasn’t been any choice, as we haven’t had the technology to do more. But there have always been problems with sampling. Many auditors are not statisticians, so they must trust the formulae that tell them how many samples to take, without necessarily having considered the kinds of stratification or judgement those formulae are based on. Audit quality teams help as best they can, but there is always the temptation to put in the numbers needed to get the smallest possible figure out of the sample size calculator. Even rigorous sampling is based significantly on hope – does anyone really trust that sampling 100 items from a population of several million will give you a good understanding of that population, even if the items are weighted for value or other determinants of risk? Fraud or underperformance becomes much easier to hide when it’s a needle in a haystack and your auditor is only testing 100 of the blades of straw.
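To make that intuition concrete, here is a back-of-the-envelope sketch. The population, error count and sample size are purely illustrative, not drawn from any real engagement: suppose 500 of a million transactions are fraudulent and the auditor draws a simple random sample of 100.

```python
def detection_probability(population, bad_items, sample_size):
    """P(sample contains at least one bad item) under simple random sampling.

    Hypergeometric probability, computed as a running product to avoid
    enormous factorials.
    """
    p_none = 1.0  # probability the sample misses every bad item
    for i in range(sample_size):
        p_none *= (population - bad_items - i) / (population - i)
    return 1 - p_none

# Illustrative numbers: 500 fraudulent items hidden in 1,000,000 transactions,
# sample of 100 drawn at random.
p = detection_probability(1_000_000, 500, 100)
print(f"Chance the sample contains even one fraudulent item: {p:.1%}")
```

On these assumed numbers the sample catches at least one fraudulent item only around one time in twenty; the other nineteen times, the auditor sees a clean sample and concludes nothing is wrong.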
Technology can do better. Knowledge Graphs link together all of the purchase orders, invoices, bank transactions, contracts with customers and internal emails to test 100% of expenditure purchases, automatically giving you complete assurance over expenditure, payables and accruals without a single human sample. The Knowledge Graph is performing the same test as a human being would on a test of detail sample, only it performs that test for every single transaction.
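To illustrate the kind of test being run at 100% coverage, here is a deliberately simplified three-way match in Python. The data structures and field names are invented for illustration only; Engine B’s actual Knowledge Graph is far richer, but the principle – check every invoice against its purchase order and bank payment, not a sample of them – is the same.

```python
# Toy "graph": each invoice points at its purchase order and bank payment.
# All identifiers and amounts are made up for this sketch.
invoices = {
    "INV-1": {"po": "PO-1", "payment": "PAY-1", "amount": 500},
    "INV-2": {"po": "PO-2", "payment": None,    "amount": 120},
}
purchase_orders = {"PO-1": 500, "PO-2": 120}
payments = {"PAY-1": 500}

def three_way_match(inv_id):
    """Does this invoice agree with both its PO and its bank payment?"""
    inv = invoices[inv_id]
    po_ok = purchase_orders.get(inv["po"]) == inv["amount"]
    pay_ok = payments.get(inv["payment"]) == inv["amount"]
    return po_ok and pay_ok

# Run the test over 100% of invoices, not a sample.
exceptions = [inv_id for inv_id in invoices if not three_way_match(inv_id)]
print(exceptions)
```

Here INV-2 is flagged because no matching payment exists – exactly the exception a test-of-detail sample might never have pulled.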
The same can be done in certain controls areas, for example checking that revenue postings have passed through every process gateway and that the details match at every stage. Once you have manually assessed the design of controls, you can see immediately whether there have been any exceptions in implementation, and how those relate to the financial statements.
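A hypothetical sketch of that gateway check, with invented posting and gateway names, might look like this:

```python
# Illustrative only: gateway names, postings and amounts are made up.
REQUIRED_GATEWAYS = ["order_approved", "goods_shipped", "invoice_issued"]

postings = [
    {"id": "REV-1", "amount": 900,
     "gateways": {"order_approved": 900, "goods_shipped": 900, "invoice_issued": 900}},
    {"id": "REV-2", "amount": 250,
     "gateways": {"order_approved": 250, "goods_shipped": 250}},
]

def control_exceptions(postings):
    """Flag any posting that skipped a gateway or whose details disagree."""
    flagged = []
    for p in postings:
        for gateway in REQUIRED_GATEWAYS:
            if p["gateways"].get(gateway) != p["amount"]:
                flagged.append((p["id"], gateway))
    return flagged

print(control_exceptions(postings))
```

Because the check runs over every posting, an implementation exception such as REV-2 never clearing its final gateway surfaces immediately, rather than only if it happens to land in a sample.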
Of course, moving to 100% testing is a big change of methodology, and firms will want to be confident using audit data and perhaps trialling some simpler analytics, like automated standard analytical procedures, before moving to 100% testing. The good news is that Engine B’s Audit Common Data Model (CDM) and Knowledge Graphs can help you make the transition confidently.
The Audit CDM and Knowledge Graphs alone don’t change your audit methodology – they just give you more options. Using analytics tools on top of the Audit CDM and Knowledge Graphs can give your audit much more power. Unlike traditional sampling techniques, Engine B’s analytics solutions have a laser-like focus on the real risks – so if we find anomalies or items for consideration, we group them like-with-like and present them in context to allow auditors to only test what needs to be tested. This is a change in methodology which is completely standards-compliant, and which provides greater assurance, but without generating significant additional work.
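The like-with-like grouping described above can be pictured with a trivial sketch; the anomaly records and reason codes here are invented for illustration:

```python
from collections import defaultdict

# Made-up flagged items, each tagged with the reason it was flagged.
anomalies = [
    {"id": "TXN-1", "reason": "missing_approval"},
    {"id": "TXN-2", "reason": "duplicate_invoice"},
    {"id": "TXN-3", "reason": "missing_approval"},
]

# Group like-with-like so the auditor assesses each cluster once,
# rather than working through every item individually.
groups = defaultdict(list)
for anomaly in anomalies:
    groups[anomaly["reason"]].append(anomaly["id"])

print(dict(groups))
```

Presenting two clusters instead of three isolated exceptions is what lets the auditor test only what needs to be tested.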
Part of the reason the auditor has more options by using Engine B’s Ingestion Engine is that, for the first time, it makes it realistic to bring in all of the data your client holds about their finance function.
Engine B is the only provider on the market extracting all of the data in your client’s ERP plus unstructured data (such as invoices or contracts) and conforming it into a Common Data Model, which you can then use with analytics tools of your choice. Once the data is out, you have complete control and you’re not tied to a specific company’s tools or environment. Because the data is conformed to a Common Model, it’s easy for an audit firm to take on new audits once a client is on Engine B: their data is already prepped and ready for you. All the data stays with you or at the client, so you don’t have to worry about data stored in someone else’s cloud. And we’re the only tool that layers a Knowledge Graph on top of a data model, allowing you to analyse the relationships between data points and provide next-generation assurance.
We know there remains a large proportion of auditors still reliant on sampling and resistant to change. But technology is here, and here to stay. The risks associated with sampling, and a string of high-profile audit failures, mean the profession can no longer count on this method to deliver the audit quality demanded by today’s clients, regulators and wider public. Yes, the volume of work required to test every item was once unreasonable, and sampling was chosen to solve that problem. But now audit software, in the shape of technologies like Engine B’s Knowledge Graphs and Audit Common Data Model, can do better, allowing for 100% testing and producing a more reliable audit. For firms choosing to move quickly and adopt good quality, tech-driven audits, there’s no end to the efficiency, assurance and quality the profession can deliver.
The question for audit firms is not whether to replace sampling with technology, but when.