Here, we discuss themes familiar to many in CRM. We build on the foundations laid in the following academic articles and add our own input to the discussion. Do check them out; we appreciate the efforts of their authors:
- Invisible data quality issues in a CRM implementation by Reid and Catterall
- Why do CRM efforts fail? A study of the impact of data quality and data integration by Missi, Alshawi and Fitzgerald
CRM offers a definitive customer view across functions, channels, products and customer data types, and it drives every customer interaction. CRM aspires to recreate the ‘traditional corner shop’ experience for millions of clients, and quality data is required to achieve this aspiration. We are also approaching an AI revolution for which quality data is the main fodder; it is often said that data is the new oil.
Surprisingly, data is often neglected in a typical Enterprise implementation.
Data problems occur at different points during an implementation: first, during data collection; second, when integrating and deduplicating data across systems and sources. The tools and processes deployed make a big difference in mitigating these problems.
Integrating customer data sets is challenging, as CRM has such a broad scope. For example, a typical bank could have up to 150 different systems containing customer data. To gain a single view of each customer and their value and needs, all of this data needs to be combined.
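To make the integration challenge concrete, a single customer view can be thought of as merging records from several systems on a normalised natural key. The sketch below is a minimal illustration only: the sources, field names and email-based key are assumptions, and real-world matching (fuzzy names, addresses, householding) is far more involved.

```python
# Illustrative sketch: combine customer records from multiple systems into
# a single view, keyed on a normalised email address. Field names and
# sources are hypothetical.
def normalise(email):
    """Crude key normalisation: trim whitespace, lowercase."""
    return email.strip().lower()

def single_customer_view(*sources):
    """Merge records from several systems. Earlier sources win on
    conflicting fields; later sources fill in missing ones."""
    merged = {}
    for source in sources:
        for record in source:
            key = normalise(record["email"])
            combined = merged.setdefault(key, {})
            for field, value in record.items():
                combined.setdefault(field, value)
    return merged

# Two systems hold fragments of the same customer under slightly
# different spellings of the same email address.
crm = [{"email": "Jo@Example.com", "name": "Jo Bloggs"}]
billing = [{"email": "jo@example.com ", "account": "ACC-42"}]

view = single_customer_view(crm, billing)
print(view["jo@example.com"])  # one record combining name and account
```

Even this toy version shows why a deliberate key and normalisation strategy matters: without it, the two fragments above would remain separate customers.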
Data warehousing has exposed data quality problems that become apparent when a company tries to integrate disparate data for CRM.
In one example, an insurance company was shocked when it downloaded data from its claims processing centre and found 80% of claims apparently involved broken legs. Investigation revealed that the code used to indicate a broken leg was the default code in the system used to process claims. Claims staff were paid according to speed and used a default code to accelerate processing each claim.
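Anomalies like the broken-leg default can often be caught by simple profiling before the data is integrated. The sketch below, with hypothetical field names and codes, flags any code value that accounts for a suspiciously large share of records.

```python
# Hypothetical profiling sketch: flag code values that dominate a data set,
# which often indicates an overused default rather than reality.
from collections import Counter

def dominant_codes(records, field="injury_code", threshold=0.5):
    """Return {code: share} for codes exceeding `threshold` share of records."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items() if n / total > threshold}

# Toy extract mimicking the anecdote: 80% of claims carry one code.
claims = [{"injury_code": "BROKEN_LEG"}] * 8 + [{"injury_code": "WHIPLASH"}] * 2
print(dominant_codes(claims))  # BROKEN_LEG stands out with an 80% share
```

A check like this costs minutes to run on an extract and would have surfaced the claims-centre problem long before the warehouse did.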
Additionally, traditional back office systems require only a limited number of people to process data, whereas in CRM almost everyone in an organisation interacts with some part of the application, resulting in a higher probability of poor data. Web-based data entry can add to data quality problems as well.
Research carried out on the impact of data quality reveals some interesting insights:
All organisations surveyed recognised the importance of data quality for a successful CRM implementation, but only 5% of them had invested optimally in their data strategies as suggested by the software vendor; these organisations got the maximum return from their CRM investment.
Data migration strategy is often siloed: the implementation partner loads data that the customer provides, in a format the partner publishes to accommodate the To Be Solution.
The responsibility for generating a data strategy and for cleansing, extracting and translating this data often lies with the customer.
Few organisations foresee data problems when planning a CRM project, often because data quality problems do not present themselves until the project is underway, when the resources required to address them can exceed tolerance.
If addressed at the strategy formulation stage, however, improved data quality can benefit operational costs, customer satisfaction, effective decision-making and, importantly, employee confidence in CRM.
High-quality, well-integrated customer data is the cornerstone of a successful CRM project.
A study conducted by the Data Warehousing Institute suggests that poor quality data can result in losses of 10–25% of organisations’ revenues.
Our experience of implementing CRM over the last two decades has shown us many projects delayed because of poor data quality. Often, the focus on data comes too late, with the initial phases focussing mainly on the build. The first real push for cleansed data comes just before System Integration Testing or User Acceptance Testing (if lucky), or, even worse, just before cutover.
Because a solution is typically delivered by an external partner at significant cost, this becomes the focus in the initial phases of a project. Data extraction and cleansing is often handled by an internal team and does not get much attention. This is a substantial missed opportunity, since work on data needs early focus, and it is crucial that the extraction team understand the To Be Solution and its needs in detail.
Data strategy is crucial both for new implementations as well as ongoing BAU.
As organisations increasingly recognise that their data is valuable, it is essential that a comprehensive data management strategy is in place at the beginning of any CRM implementation. The problem needs to be addressed enterprise-wide, with a holistic solution.
Such a solution is also required to resolve problems of ownership of the central integrated customer database — the CRM ‘powerhouse’.
- Develop a proper organisational data strategy: what are the customer touchpoints? How is data captured? What types of data are being captured, e.g. master data, operational data, text and conversations, streaming data?
- Identify data required for operational systems like CRM solutions.
- Undertake an assessment of these main data sources to establish a baseline for data quality: e.g. are 20 percent of cases missing the date of birth (DOB)? Is there malformed address information? Are fields used inconsistently, or do fields contain data that the company does not intend to keep?
- Decide on the cleansing strategy: will cleansing be done in the legacy system before extraction, or will it be dealt with during extraction and translation?
- Utilise this information to drive an immediate clean-up activity; this may involve matching against reference sources such as the Postal Address File (PAF). Many of these processes may need to be manual if data quality is very poor.
- For projects, invest time in understanding the data model of the target system through various discussions during the exploration phase especially through UI, workflow, integration, and reporting discussions.
- Ensure the person responsible for data understands the data requirements from the business thoroughly. They should also have a detailed understanding of the data models in both the legacy and target systems.
- Identify any data loss while moving from legacy to target systems.
- Form an ongoing programme of work to improve data quality; e.g., add a data capture step to capture DOB, change processes to enable better data entry, or improve staff understanding.
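The baseline-assessment step above can be sketched with a few standard-library checks over an extract. This is a minimal sketch under stated assumptions: the field names (`dob`, `postcode`) and the simplified UK postcode pattern are illustrative, not from any particular system.

```python
# Minimal data quality baseline sketch: percentage of records missing DOB
# and percentage with malformed postcodes. Field names and the (simplified)
# UK postcode pattern are assumptions for illustration.
import re

UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)

def quality_baseline(records):
    """Return headline quality metrics for a list of customer records."""
    total = len(records)
    missing_dob = sum(1 for r in records if not r.get("dob"))
    bad_postcode = sum(1 for r in records if not UK_POSTCODE.match(r.get("postcode", "")))
    return {
        "missing_dob_pct": 100 * missing_dob / total,
        "malformed_postcode_pct": 100 * bad_postcode / total,
    }

records = [
    {"dob": "1980-01-01", "postcode": "SW1A 1AA"},
    {"dob": "", "postcode": "NOT A CODE"},
    {"dob": "1975-06-12", "postcode": "M1 1AE"},
    {"dob": "", "postcode": "EC1A1BB"},
]
print(quality_baseline(records))  # half the records lack DOB; one postcode is malformed
```

Metrics like these, re-run on every extract, give the baseline and the evidence needed to decide whether cleansing happens in the legacy system or during extraction and translation.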
If done right, the additional bonus is that this work can serve as a fantastic foundation for data engineering pipelines feeding the company's AI engine, predicting and automating tasks not possible through existing IT solutions.