Characteristics Driving a Data Replication Solution

April 27, 2013
Filed under: Data Integration, Replication 

In my last post, we discussed two (presumably) complementary business drivers for instituting a standard enterprise-wide strategy for data availability: the desire to absorb massive amounts of data for analytical purposes (AKA “big data”) while simultaneously enabling access to internal data stored across a variety of siloed systems that have evolved organically over the years. Yet while the desire to decrease the latency of data access, often to the point of what is fuzzily referred to as “real-time,” drives the expectation of immediate accessibility to all data sets, it is valuable to take a step back and consider the characteristics of the environment that need to be effectively addressed.

Ensuring that Data Availability Meets the Business Needs

April 24, 2013
Filed under: Data Integration, Replication 

Almost everywhere you look these days, there is talk about big data, big data analytics, and the value of massive data volumes. Underscoring the demand for exploiting big data is the need to manage it, which becomes critical when dovetailing the desire for analytical systems with real-time needs for operational decision-making. Whether your company is looking to streamline supply chain management and inventory control, or to derive insight for enhancing customer experiences by linking numerous data streams with existing customer profiles, the best advantage comes from integrating analytics with operational systems in real time, or at least within the window of a defined (typically short) time frame.

Papers on the Value of Data Quality Improvement

Thanks to my friends at Informatica, I have been able to allocate some time to organizing and documenting some ideas about understanding different types of business impacts related to data quality improvement and how they can be isolated, organized, measured, and communicated to a variety of stakeholders across the organization.

There are four papers:

  • Understanding the Financial Value of Data Quality Improvement
  • Improved Risk Management via Data Quality Improvement
  • Improving Organization Productivity through Improved Data Quality
  • Increasing Confidence and Satisfaction Through Improved Data Quality (link coming soon)

Optimized Maintenance and Physical Asset Data Quality

December 2, 2010
Filed under: Business Impacts, Business Intelligence, Data Analysis, Data Quality, Master Data, Metrics, Performance Measures 

It would be unusual for there to be a company that does not use some physical facility from which business is conducted. Even the leaders and managers of home-based and virtual businesses have to sit down at some point, whether to access the internet, make a phone call, check email, or pack an order and arrange for its delivery. Consequently, every company eventually must incur some overhead and administrative costs associated with running the business, such as rent and facility maintenance, as well as telephones, internet, furniture, hardware, and software purchase/leasing and maintenance.

Today’s thoughts are about that last item: the costs associated with building, furniture, machinery, software, and grounds maintenance. There is a balance required for effective asset maintenance – one would like to essentially optimize the program to allocate the most judicious amount of resources to provide the longest lifetime to acquired or managed assets.

As an example, how often do offices need to be painted? When you deal with one or two rooms, that is not a significant question, but when you manage a global corporation with hundreds of office buildings in scores of countries, the “office painting schedule” influences a number of other decisions regarding bulk purchasing of required materials (e.g. paint and brushes), competitive engagement of contractors to do the work, temporary office space for staff as offices are being painted, etc., which provide a wide opportunity for cost reduction and increased productivity.

And data quality fits in as a byproduct of the data associated with both the inventory of assets requiring maintenance and the information used for managing the maintenance program. In fact, this presents an interesting master data management opportunity, since it involves the consolidation of a significant amount of data from potentially many sources regarding commonly-used and shared data concepts such as “Asset.” The “Asset” concept can be hierarchically organized in relation to the different types of assets, each of which exists in a variety of representations and each of which is subject to analysis for maintenance optimization. Here are some examples:

  • Fixed assets (real property, office buildings, grounds, motor vehicles, large manufacturing machinery, other plant/facility items)
  • Computer assets (desktops, printers, laptops, scanners)
  • Telephony (PBX, handsets, mobile phones)
  • Furniture (desks, bookcases, chairs, couches, tables)
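
The hierarchical “Asset” concept described above can be sketched as a simple tree structure. This is a minimal illustration only; the class and category names are assumptions for the example, not part of any particular MDM product or model.

```python
from dataclasses import dataclass, field

@dataclass
class AssetType:
    """A node in a hierarchical 'Asset' master data taxonomy (illustrative)."""
    name: str
    children: list = field(default_factory=list)

    def add(self, child: "AssetType") -> "AssetType":
        self.children.append(child)
        return child

def leaf_names(node: AssetType) -> list:
    """Collect the most specific asset categories beneath a node."""
    if not node.children:
        return [node.name]
    names = []
    for child in node.children:
        names.extend(leaf_names(child))
    return names

# Build a small slice of the hierarchy from the list above.
asset = AssetType("Asset")
fixed = asset.add(AssetType("Fixed assets"))
fixed.add(AssetType("Motor vehicles"))
computer = asset.add(AssetType("Computer assets"))
computer.add(AssetType("Laptops"))
```

In a real master data environment each node would also carry identifiers, source-system lineage, and maintenance attributes; the tree here only shows how a shared “Asset” concept can subsume the different asset types.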

I think you see where I am going here: errors in asset data lead to improper analyses with respect to maintenance of those assets, such as arranging for a delivery truck’s oil to be changed twice in the same week, or painting some offices twice in a six-month period while other offices remain unpainted for years. Therefore, there is a direct dependence between the quality of asset data and the costs associated with asset maintenance.
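The “oil changed twice in the same week” symptom can be caught with a simple check over the maintenance log. The sketch below is a hypothetical example: the log layout, asset identifiers, and seven-day window are assumptions chosen for illustration.

```python
from datetime import date

def find_suspect_pairs(events, window_days=7):
    """Flag repeated maintenance of the same type on the same asset
    within a short window, a typical symptom of duplicate asset records.

    events: list of (asset_id, task, date) tuples.
    Returns a list of (asset_id, task, first_date, second_date) tuples.
    """
    suspects = []
    last_seen = {}
    for asset_id, task, when in sorted(events, key=lambda e: e[2]):
        key = (asset_id, task)
        if key in last_seen and (when - last_seen[key]).days <= window_days:
            suspects.append((asset_id, task, last_seen[key], when))
        last_seen[key] = when
    return suspects

# Hypothetical maintenance log entries.
log = [
    ("TRUCK-17", "oil change", date(2010, 11, 1)),
    ("TRUCK-17", "oil change", date(2010, 11, 4)),  # 3 days later: suspect
    ("TRUCK-22", "oil change", date(2010, 11, 1)),
]
```

A check like this does not fix the underlying duplicate records, but it surfaces the asset data quality problem where it shows up in maintenance costs.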

Data Quality and Transitions in the Customer Life Cycle

November 30, 2010
Filed under: Business Impacts, Data Analysis, Data Quality, Performance Measures 

I have been doing some further research into the interdependence of business value drivers, related data sets, and financial impacts. One area of focus is customer retention, and I have been looking at a number of related performance measures. The one I would like to look at today involves maintaining the relationship with the customer at various points across the customer life cycle.

There are two aspects to the concept of customer life cycle: events associated with the lifetime of a product once it has been purchased by the customer, and specific events associated with the customer’s own lifetime. An example of the first involves a product manufacturer’s warranty. The warranty is associated with some qualifying criteria, such as a time period or measurable wear (such as a “3 year/36,000 mile” automobile warranty). A product life cycle event could be associated with a customer touch point. For example, three months prior to the end of a product’s warranty period might be a good opportunity to contact the customer and propose an extension to the warranty. An example of a customer life cycle event is the purchase of a home, often registered as public data with a state registry. Customer life cycle events can also trigger touch point opportunities, such as contacting a new home buyer with a proposal for a new water filtering system.
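The warranty touch point described above is just date arithmetic, and can be sketched in a few lines. The 3-year term and the roughly-three-month (90-day) lead time are assumptions taken from the example, not fixed business rules.

```python
from datetime import date, timedelta

def warranty_touchpoint(purchase: date, term_years: int = 3,
                        lead_days: int = 90) -> date:
    """Date on which to contact the customer about a warranty extension:
    lead_days before the warranty expires (illustrative sketch)."""
    expiry = purchase.replace(year=purchase.year + term_years)
    return expiry - timedelta(days=lead_days)

# A purchase on June 15, 2010 under a 3-year warranty yields a
# contact date about three months before the June 2013 expiry.
contact_on = warranty_touchpoint(date(2010, 6, 15))
```

Note that a production version would also handle edge cases (for example, a February 29 purchase date) and the mileage-based qualifying criterion, which cannot be computed from dates alone.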

There is value in knowledge of customer life cycle events and transitions, especially in maintaining long-term relationships. A new mother who, in 2001, registered for diaper coupons is probably going to be dealing with a toddler in 2003, a kindergartener in 2006, and a teenager learning to drive in 2017. An effective long-term marketing strategy may take these life cycle events into account as part of customer analytics modeling.

That being said, the question of the impact of data quality on customer acquisition or retention has to do with the degree to which data errors increase or decrease the probability of long-term customer retention and/or continued conversion at critical life cycle events. And this implies a strong command of many different data sets, their potential integration, and corresponding analysis.

Let’s continue the example: the sale and installation of a water filtering system in a recently purchased home is but the first transaction in what should be an ongoing sequence of subsequent maintenance transactions. The filters will need to be replaced on a periodic basis (every 12 months or so?), and the entire system may need to be flushed and cleaned every few years. Therefore, it is to the water filter company’s benefit to maintain high quality information about customers and their transaction dates. But since the filter itself is associated with the property, that information needs to be managed separately as well.
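The ongoing maintenance sequence for the water filtering system can be sketched as a simple schedule generator. The intervals below (a filter replacement every 12 months, a full flush every third year) follow the rough figures in the paragraph above and are assumptions for illustration.

```python
from datetime import date

def maintenance_plan(installed: date, years: int = 3) -> list:
    """Generate the anticipated maintenance transactions for a water
    filtering system installed on a given date (illustrative sketch)."""
    plan = []
    for n in range(1, years + 1):
        anniversary = installed.replace(year=installed.year + n)
        # Every third anniversary, flush the whole system instead of
        # just replacing the filter.
        task = "flush and clean system" if n % 3 == 0 else "replace filter"
        plan.append((anniversary, task))
    return plan

plan = maintenance_plan(date(2010, 1, 10))
```

Because the filter is tied to the property rather than the person, a schedule like this would be keyed to the installation site, with the customer relationship maintained separately, which is exactly the point of the move scenario that follows.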

So here, if the customer moves to a new location, that could trigger two life cycle events: one to contact the existing customer at the new location and begin the sales cycle from the start, and one to contact the new customer at the existing site to establish a new maintenance relationship. The quality of the data is critical, since attempting to continue to provide maintenance to the existing customer at the new site would not really make sense until a new filter is installed.

But it is important to note that it is not just the quality of the data that is important: it is the business process scenarios in which the data is used. Without having specific tasks associated with the life cycle event trigger, the entire effort is wasted.
