How High Quality Data Protects Against Digital Disruption
New products and innovations provide an organization with a competitive advantage, but that advantage invariably comes with a half-life. Competitors in any given market can mimic new developments, often within a short timeframe, leaving the product innovator an extremely small window of opportunity in which to profit.
To combat this scenario, the organization must turn to the only asset that truly differentiates it from competitors: data. Customer data accumulated over time, including product purchasing trends and pricing risk aversion, is unique to each business and an invaluable asset. With a global focus on customer experience and service delivery, this data will become the organization's new competitive edge, and it is already increasingly critical for sales representatives who engage with customers proactively.
This sounds straightforward, and it would be if we lived in a perfect world, but frustratingly most data stored by organizations is incomplete, inconsistent, or outdated. While data volumes across the globe are growing at a phenomenal rate, organizations have comparatively little insight into which data is relevant to their strategy and objectives. Consequently, organizations incur storage costs for data that bears little or no relevance to the task at hand.
Most organizational executives are excited by the prospect of data-driven insights giving them a competitive edge, but with varying levels of data quality, their ability to trust the data is severely hampered.
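To make the trust problem concrete, the sketch below shows one way those quality levels might be quantified before any insight work begins. It is a minimal illustration in Python with pandas; the column names (`customer_id`, `email`, `last_updated`) and the one-year staleness window are assumptions for the example, not a prescribed standard.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, stale_after_days: int = 365) -> dict:
    """Score the three failure modes above: incomplete, inconsistent, outdated."""
    # Completeness: share of rows with no missing values in the key fields.
    complete = df[["customer_id", "email", "last_updated"]].notna().all(axis=1).mean()
    # Consistency: share of rows whose email matches a basic well-formedness pattern.
    consistent = df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False).mean()
    # Freshness: share of rows updated within the staleness window.
    age_days = (pd.Timestamp.now() - pd.to_datetime(df["last_updated"])).dt.days
    fresh = (age_days <= stale_after_days).mean()
    return {"rows": len(df), "complete": round(complete, 3),
            "consistent": round(consistent, 3), "fresh": round(fresh, 3)}
```

A report like this gives executives a defensible number to attach to "trust" rather than a gut feeling about the data.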
The Challenge of Safeguarding Data Quality
A further problem arises when multiple systems are used to record product transactions, meaning related data is stored in several repositories both internal and external to the organization. Bringing this data together in a cohesive manner can be an extremely resource-intensive and costly exercise: billions of dollars are spent each year on data rectification exercises, with an average cost upwards of $5m per project becoming the norm. Data science is the latest organizational skill to be touted as a panacea for these issues, and the data scientist is now the most sought-after data role across the globe.
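To illustrate why even the mechanical join step is resource intensive, the sketch below merges two hypothetical extracts, one from a CRM and one from a billing system, on a shared customer key. All system names, fields, and normalization rules here are assumptions for the example.

```python
import pandas as pd

def consolidate(crm: pd.DataFrame, billing: pd.DataFrame) -> pd.DataFrame:
    """Merge two repositories on a shared key and surface mismatches."""
    # Normalize the join key so trivial formatting differences (case,
    # whitespace) do not split one customer into two records.
    for df in (crm, billing):
        df["customer_id"] = df["customer_id"].astype(str).str.strip().str.upper()
    merged = crm.merge(billing, on="customer_id", how="outer",
                       suffixes=("_crm", "_billing"), indicator=True)
    # Records present in only one system are the gaps that make
    # rectification projects expensive; surface them for manual review.
    orphans = merged[merged["_merge"] != "both"]
    print(f"{len(orphans)} of {len(merged)} records exist in only one system")
    return merged
```

Even this toy version hints at the real cost: every mismatched identifier it surfaces becomes a human investigation in a live rectification project.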
Consequently, executives will need to issue directives that make the workforce aware of the importance of data quality. Performance indicators covering the appropriate level of data quality will need to be incorporated into individual responsibilities.
Different roles should now take on the following objectives (a sketch of how their outputs fit together follows the list):
- Risk Managers: Introduce controls to ensure compliance with guidelines
- Enterprise Architects: Ensure all data exercises are limited to the data objects required by current organizational strategy
- Business Architects: Map business strategies and objectives to the relevant organizational capabilities
- Process Architects: Break down each process to a task-level representation of the capability, to achieve a better understanding of the risks generated by each task
- Data Architects: Provide the data and information for controls built into the risk framework
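How these outputs fit together can be sketched as a simple traceability chain from capability down to control. The data model below is illustrative only; the class and field names are assumptions, not an industry standard.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    data_feed: str | None = None    # populated by the data architects

@dataclass
class Task:
    name: str
    risks: list[str] = field(default_factory=list)          # from the process architects
    controls: list[Control] = field(default_factory=list)   # from the risk managers

@dataclass
class Capability:
    name: str
    objective: str                  # mapped by the business architects
    tasks: list[Task] = field(default_factory=list)

def open_gaps(cap: Capability) -> list[str]:
    """List risks with no control and controls with no data feed."""
    gaps = []
    for task in cap.tasks:
        if task.risks and not task.controls:
            gaps += [f"{task.name}: risk '{r}' has no control" for r in task.risks]
        gaps += [f"{task.name}: control '{c.name}' has no data feed"
                 for c in task.controls if c.data_feed is None]
    return gaps
```

Keeping the chain explicit is what lets gaps surface automatically rather than through periodic audits.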
With all this in place, the organization will have improved oversight of the data required to monitor progress against its strategy.
All components can be uploaded to a repository, ensuring the appropriate detail is available for dashboard creation and maintenance, and making all stakeholders aware of any outstanding gaps that might adversely affect decision making. At the same time, if policies, standards and guidelines are updated, incoming data from that point in time will meet the organizational standards, avoiding the need for a repeat exercise in the not-too-distant future.
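One way to honor that "from a certain point in time" requirement is to version each standard with an effective date and validate incoming records only against the standards in force when they arrive. The rules and dates below are hypothetical placeholders, not real organizational policy.

```python
from datetime import datetime, timezone

# Each standard carries the date from which it applies; rules and dates
# here are illustrative assumptions only.
STANDARDS = [
    (datetime(2024, 1, 1, tzinfo=timezone.utc), "email_required",
     lambda rec: bool(rec.get("email"))),
    (datetime(2024, 7, 1, tzinfo=timezone.utc), "country_code_iso2",
     lambda rec: len(rec.get("country", "")) == 2),
]

def validate(record: dict, received_at: datetime) -> list[str]:
    """Return the standards this record fails, applying only those
    already in force when the record was received."""
    return [name for effective, name, check in STANDARDS
            if received_at >= effective and not check(record)]
```

Because historical records predate the newer rules, they are not retroactively failed, which is exactly what avoids the repeat rectification exercise.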
Moving forward, data will be a key weapon for enterprises against disruptive startups. By ensuring its quality, they will be able to maximize the advantage of product innovations, drive new insights and deliver an enhanced customer experience.