Manual data collection is proving to be a painful thorn in the side of financial services firms seeking to implement enterprise data management (EDM) strategies. Reliance on manually collected data can scupper drives towards EDM across the industry.

While data projects remain high on the agenda for the industry as companies continue to centralise data functions and manage costs, many are struggling to deal with the sheer volume of manually gathered data that flows through their operations.

The financial services industry has dedicated considerable thought and resources towards achieving the nirvana of a single, complete and correct version of a core dataset. What is seldom discussed is how to deal with the exceptions and the non-vendor-sourced content. However, there is increasing awareness of the risks connected with manual data collection and its contribution to valuation errors, missed deadlines, overstretched resources, scalability constraints and operational risk.

The changing regulatory environment and international accounting standards are further adding to the need for greater transparency.

Eradicating manual data collection can help resolve all of these issues concurrently. The biggest challenges lie with illiquid fixed income and over-the-counter (OTC) derivatives, where the structured nature of the assets makes the data less clear and therefore harder to gather. But organisations also struggle to capture complete and accurate information for instruments such as American depositary receipts and contracts for difference, where an underlying security can add confusion, as well as for all variants of funds. Even mainstream activities such as unit trust pricing can prove troublesome. These challenges are not limited to pricing data but extend to income and capital events as well as asset identification and static data.

There are numerous technologies on the market that can help organisations build their own data management platforms. These can add value by managing the bulk collection, storage and processing of readily available data.

Building a process for capturing and processing existing data feeds may improve clarity and transparency. Nevertheless, this is far from an all-encompassing data initiative. Many organisations can achieve high levels of automated processing for the bulk of their data, but manually gathered data often remains untouched by the EDM strategy.

If manual data that is input to a central data platform is not subjected to the same strict routines as readily available data, it will create background noise and confusion. If multiple sources are used to validate listed content but only a single manually input entry exists for other datasets, consistency can never be achieved.
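To illustrate the consistency problem, the sketch below (Python; the identifiers, sources and tolerance are purely hypothetical, not taken from any particular platform) shows why a single manually keyed value can only ever be flagged for review, whereas multi-source content can be genuinely cross-validated.

```python
from statistics import median

def validate_price(instrument_id, prices, tolerance=0.005):
    """Cross-validate a price against however many sources exist.

    prices: mapping of source name -> quoted price.
    Returns (status, consensus). A single-source manual entry can only
    be flagged for review; it has nothing to be reconciled against.
    """
    if len(prices) >= 2:
        consensus = median(prices.values())
        outliers = {s: p for s, p in prices.items()
                    if abs(p - consensus) / consensus > tolerance}
        status = "validated" if not outliers else "exception"
        return status, consensus
    return "unverified - manual review required", next(iter(prices.values()))

# Listed content with three vendor feeds vs. a manually keyed OTC mark.
print(validate_price("XS0000000001",
                     {"vendorA": 101.20, "vendorB": 101.25, "vendorC": 101.22}))
print(validate_price("OTC-SWAP-42", {"desk_email": 98.70}))
```

The point is not the specific tolerance but the asymmetry: without a second independent source, the routine has nothing to reconcile against.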

Building solutions

It is widely recognised that any process is only as strong as its weakest component. In data management terms, that is the human element. Manual data collection will still exist once all of the readily available content has been automated, and it will continue to cause problems and chip away at quality, costs, resources, management and reputations.

The challenge facing the industry is finding a way to collect, database, normalise, reconcile and validate all data, even the labour intensive and risk-prone manual data. The core principle of automating data collection is surely the correct approach.
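As a rough illustration of that principle, the following Python sketch (entirely hypothetical; the databasing step is omitted here and sketched separately later) strings those stages together so that manually gathered records pass through exactly the same normalise, reconcile and validate path as vendor feeds.

```python
from dataclasses import dataclass

@dataclass
class RawRecord:
    source: str          # e.g. "vendor_feed", "email", "extranet"
    instrument_id: str
    payload: dict

def collect(sources):
    # Pull raw records from every source, automated or manual alike.
    return [rec for src in sources for rec in src()]

def normalise(record):
    # Map source-specific fields onto a common schema.
    return {"id": record.instrument_id,
            "price": float(record.payload.get("price", "nan")),
            "source": record.source}

def reconcile(records):
    # Group normalised records per instrument so they can be compared.
    grouped = {}
    for rec in records:
        grouped.setdefault(rec["id"], []).append(rec)
    return grouped

def validate(grouped):
    # Apply the same rule to every instrument, whatever the source count.
    return {iid: ("ok" if len(recs) > 1 else "single-source: review")
            for iid, recs in grouped.items()}

def run(sources):
    return validate(reconcile([normalise(r) for r in collect(sources)]))

feeds = [lambda: [RawRecord("vendor_feed", "XS0000000001", {"price": 101.25})],
         lambda: [RawRecord("vendor_feed_b", "XS0000000001", {"price": 101.22})],
         lambda: [RawRecord("email", "OTC-SWAP-42", {"price": "98.70"})]]
print(run(feeds))
```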

Automation is the key to removing the weakest link in the data management chain – human error. Computers do not care whether they are performing mundane tasks, nor do they resent, as senior staff might, being given junior work that fails to stretch them. Credit crunch worries or deciding what to have for lunch do not distract computer systems.

Add complexity to mundane jobs – having to collect data from numerous sources via different emails, websites, extranets, terminals and internal departments (whose primary function is not to provide data to other teams) and, ultimately, having to contact somebody at another organisation – and it is easy to see why data management is such a complex, labour intensive and fragmented process.

The information, once gathered, typically resides in an array of spreadsheets, with colour coding and bold fonts to stop the various users falling through the cracks, and locked cells to stop one user deleting another user's macros – all without audit trails to help unravel queries or problems.

In this scenario, it is easy to see why errors happen so frequently. This is a common situation that we see, and it is the starting point for defining a potential data solution. The data is generally available somewhere, just not in an easily accessible form. It could be embedded in an encrypted PDF, a Word document, a spreadsheet, the text of an email sent to a distribution list, or on a website or extranet, among many others.
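As a hedged example of what "not in an easily accessible form" means in practice, the Python sketch below routes a few of the simpler capture formats (a CSV extract, a plain-text email, a JSON response) through format-specific parsers into one uniform record. The message layouts and field names are invented for illustration, and the PDF and Word cases are deliberately left out because they require specialised extraction libraries.

```python
import csv
import email
import io
import json

def extract_price(raw_text, kind):
    """Route each capture to a format-specific parser so the output is uniform."""
    if kind == "csv":
        row = next(csv.DictReader(io.StringIO(raw_text)))
        return {"id": row["isin"], "price": float(row["price"])}
    if kind == "email":
        msg = email.message_from_string(raw_text)
        body = msg.get_payload()
        # Assume the desk mails a line such as "ISIN=XS... PRICE=101.5".
        fields = dict(part.split("=") for part in body.split())
        return {"id": fields["ISIN"], "price": float(fields["PRICE"])}
    if kind == "json_api":
        data = json.loads(raw_text)
        return {"id": data["isin"], "price": float(data["price"])}
    raise ValueError(f"unsupported source kind: {kind}")

print(extract_price("isin,price\nXS0000000001,101.25\n", "csv"))
print(extract_price("Subject: marks\n\nISIN=XS0000000001 PRICE=101.30", "email"))
```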

This problem is not likely to go away. The ever expanding range of instruments will continue to pose challenges. The advent of additional trading venues and the growth of off-exchange and algorithmic trading mean liquidity is being forced into less transparent pockets. The collection of pricing and reference data for these instruments will remain challenging.

Clearly a defined strategy for dealing with manual data collection is necessary. Technology alone cannot solve all the problems in this area. A rigid, purely technology-focused approach can lead to data silos, with individual successes and gains unable to be reused elsewhere, potentially resulting in a proliferation of uncoordinated mini data management solutions.

Flexible automation

Combining a flexible data model with process-managed applications is critical to successful data management initiatives. Furthermore, spreadsheets alone are not the solution. Databasing the content is the logical starting point, allowing it to be managed via controlled processes with complete audit trails.
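As a minimal sketch of what "controlled processes with complete audit trails" might look like (using an in-memory SQLite database; the table and column names are illustrative only), every manual price change below goes through one function that records who changed what, when and why.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prices (instrument_id TEXT PRIMARY KEY, price REAL, source TEXT);
    CREATE TABLE audit_log (ts TEXT, user TEXT, instrument_id TEXT,
                            old_price REAL, new_price REAL, reason TEXT);
""")

def set_price(user, instrument_id, price, source, reason):
    """Apply a manual price through a controlled process, never directly."""
    old = conn.execute("SELECT price FROM prices WHERE instrument_id = ?",
                       (instrument_id,)).fetchone()
    conn.execute("INSERT INTO prices VALUES (?, ?, ?) "
                 "ON CONFLICT(instrument_id) DO UPDATE SET "
                 "price = excluded.price, source = excluded.source",
                 (instrument_id, price, source))
    conn.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?, ?, ?)",
                 (datetime.now(timezone.utc).isoformat(), user, instrument_id,
                  old[0] if old else None, price, reason))
    conn.commit()

set_price("analyst1", "OTC-SWAP-42", 98.70, "desk_email", "initial capture")
set_price("analyst2", "OTC-SWAP-42", 98.65, "broker quote", "corrected after query")
print(conn.execute("SELECT * FROM audit_log").fetchall())
```

Even this toy version captures what the spreadsheet approach described earlier cannot: a before-and-after record of every manual intervention.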

A small gain in a single area is all well and good, but if it adds problems up or downstream, no overall gain is made. The objective is to automate every possible step, but flexibility must be maintained so that an automated process can be adjusted to accommodate change and evolve as similar requirements arise elsewhere in the overall process. Flexibility in individual solutions allows achievements to be combined, leading to marked enterprise-wide gains.

This type of solution is not easy to achieve and requires bespoke systems, skills and maintenance, but to make substantial gains and provide scalable solutions that can evolve as business requirements change, flexible automation is a must. With the ability to reuse the framework or template from each individual success for other, similar challenges, it is easy to see how flexible, technology-based solutions can contribute to executing the high-level data management strategy most efficiently.

However, exceptions do and will continue to occur, and when they do, experienced staff will be required to resolve them. With senior administration staff and analysts freed from the shackles and monotony of actually collecting data, they can concentrate on applying their knowledge and experience to the situations that justify it.

So is the ultimate goal of fully integrated, enterprise-wide data management actually achievable? Perhaps it is nirvana. It will certainly remain out of reach without a method for handling manually collected data.
With years of experience introducing solutions for the myriad challenges that customers and prospects face in the highly pressured valuation environment, we know that very few companies fully understand the principles needed to deliver them. Focused and experienced specialist suppliers of data validation services must develop tailored processes to automate 'manual data' capture and validation. Experience tells us there are no quick fixes.