For those who were surprised by this recent election, be prepared to be even more surprised by the next one.
The advantage of a schema-on-read strategy is that it separates the process of data collection from the process of applying a schema to interpret the results. We can more easily learn that our prior knowledge was wrong when we keep that prior knowledge out of the data store.
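The separation can be sketched in a few lines. This is a minimal illustration, not any particular product's API: the record fields, schemas, and store are all hypothetical. Raw events are stored exactly as collected, and a schema is a mapping applied only at read time, so a revised interpretation never requires rewriting the stored data.

```python
import json

# Raw events are stored as-is at collection time (hypothetical examples).
raw_store = [
    '{"user": "a1", "action": "login"}',
    '{"user": "a2", "action": "purchase", "amount": "19.99"}',
]

def read_with_schema(raw_records, schema):
    """Apply a schema (field -> converter) only when reading."""
    for raw in raw_records:
        record = json.loads(raw)
        yield {field: convert(record[field])
               for field, convert in schema.items()
               if field in record}

# A first interpretation assumes every event is just a user and an action.
schema_v1 = {"user": str, "action": str}
v1 = list(read_with_schema(raw_store, schema_v1))

# When we learn that assumption was wrong (some events carry amounts),
# we revise the schema without touching the stored data.
schema_v2 = {"user": str, "action": str, "amount": float}
v2 = list(read_with_schema(raw_store, schema_v2))
```

The stored records never change; only the interpretation applied on read does, which is what lets us discover and correct a wrong prior assumption.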
For the project of knowledge or hypothesis discovery, this sharding of history is more valuable than attempting a historical report against the operational database, because the sharded history retains the context of the data. For a business example, suppose a report for a previous period involves an action taken by an employee who has since been promoted to a different position. Querying the operational database for this historical information will return the erroneous result that the new position was responsible for the prior action, when in fact the action was taken in the capacity of the older position.
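The promotion example can be made concrete with a small sketch. The tables, names, and fields below are hypothetical, and the code is an illustration of the principle rather than a real reporting system: the operational store holds only current state, while the retained history captures the position as it was when the action occurred.

```python
from datetime import date

# Operational table holds only current state: employee -> current position.
# Pat was promoted to Director after the reporting period.
current_position = {"pat": "Director"}

# Retained history records the position *as it was* when each action occurred.
action_history = [
    {"employee": "pat", "action": "approved budget",
     "position_at_time": "Manager", "date": date(2023, 11, 2)},
]

def report_from_operational(actions, positions):
    # Erroneous: joins past actions to the *current* position.
    return [(a["action"], positions[a["employee"]]) for a in actions]

def report_from_history(actions):
    # Correct: uses the position captured with the action itself.
    return [(a["action"], a["position_at_time"]) for a in actions]
```

The operational join attributes the budget approval to the Director position, while the history preserves that it was done as Manager.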
The potential return for exploiting operational data will not justify the investment. This return is naturally limited by the short time period available to take advantage of the opportunity. The window is short because new operational data will present distracting new opportunities to pursue. In addition, competitors and customers are employing their own operational-data intelligence, so they will quickly close any advantage gap. Unfortunately, this investment distracts the organization from historical data, which offers more durable knowledge discovery.
Legacy applications can benefit from big data approaches without the need to replace the legacy architecture with new technologies. Instead, big data can augment the application by collecting higher-volume, higher-variety, and higher-velocity data about users' activity in the application. Analysis of this data can show decision makers where there may be problems with the work products. Correspondingly, it can provide requirements analysts with information about where improvements are needed, or with a more complete library of edge cases to consider for new designs.
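One minimal sketch of this augmentation, assuming hypothetical activity events logged alongside the legacy application (the screen names, outcomes, and threshold are all made up for illustration): aggregate the collected events and flag the parts of the application where users hit problems most often.

```python
from collections import Counter

# Hypothetical activity events collected alongside a legacy application.
events = [
    {"screen": "invoice_entry", "outcome": "error"},
    {"screen": "invoice_entry", "outcome": "ok"},
    {"screen": "invoice_entry", "outcome": "error"},
    {"screen": "report_view", "outcome": "ok"},
]

def trouble_spots(events, threshold=0.5):
    """Return screens whose error rate meets the threshold."""
    totals, errors = Counter(), Counter()
    for e in events:
        totals[e["screen"]] += 1
        if e["outcome"] == "error":
            errors[e["screen"]] += 1
    return {screen: errors[screen] / totals[screen]
            for screen in totals
            if errors[screen] / totals[screen] >= threshold}
```

Here `trouble_spots(events)` would surface `invoice_entry` as a candidate problem area for decision makers and requirements analysts, without any change to the legacy application itself.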
In an earlier post, I presented some interactive reporting based on custom categorization and aggregation of data available from Capital Bikeshare. Those reports used Excel pivot tools and SQL Server Reporting Services, drawing on both relational T-SQL and an Analysis Services cube I constructed to make the desired navigation and aggregation easier to report. My eventual…
With the modern speed of data retrieval, analysis, and visualization, we may be encountering a new form of the fallacy of appeal to authority, where the authority comes from the speed at which we can present affirming data for our theses. Assuming that human behavior is a product of evolution, there has not been enough time for evolution to adapt to the new reality of nearly instant affirmation of the consequent. Historically, we learned a pattern that affirming data can be trusted if it arrives quickly: before modern data technologies, the speed of finding affirming data indicated that such data was abundant around us, so it didn't take long to find. That mode of thinking is no longer valid. Instant access to a wide variety of data makes it possible to find affirming data very quickly for almost any thesis. It will take generations for evolution to catch up and teach us not to trust speed of affirmation as proof of a hypothesis.
When we look to data technology to solve problems, we should permit the technologies to identify the problems that can be solved with current capabilities instead of demanding that the technologies evolve to solve the hard problems we have been working on. There are many opportunities to make progress even if we don't touch the hard problems. Allowing technology to solve what it can solve now may make the hard problems narrower, or possibly even less visible. For example, we can improve overall life expectancy without curing any cancers, perhaps with investments in areas unrelated to health care. It is our nature to focus on objectives that catch our attention, and this focus can blind us to immediate opportunities that are realistic given our current situation.
If someone wants to cause trouble for the big data owner, they can leverage the known missing data to raise accusations against which the owner will have no data to use in defense. The accusations can suggest cheating, fraud, criminal activity, and so on, harming reputations or invoking costly and lengthy investigations that prevent the owner from realizing the potential benefits of big data analytics.
I’m describing this as the security of the datum instead of the security of the data: specific observations are vulnerable to exploitation, rather than everything observed by the sensors. The malware is in the population being observed instead of in the IT systems.
To combat this kind of problem, we will need an additional approach of datum governance to protect the observed population from deliberately inserted biases.
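One very simple form of datum-level screening can be sketched as follows. This is an illustrative assumption, not a complete defense: the function, baseline values, and cutoff are hypothetical, and a screen like this catches only crude insertions (a careful adversary would insert plausible-looking observations that pass it). The idea is that each incoming observation is checked against a trusted baseline before it is folded into the analytic store, and suspicious datums are quarantined for review.

```python
from statistics import mean, stdev

def screen_datum(value, baseline, z_cutoff=3.0):
    """Datum-level screening: accept an incoming observation only if it
    falls within z_cutoff standard deviations of a trusted baseline;
    otherwise it is flagged for quarantine and human review."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(value - mu) / sigma
    return z < z_cutoff  # True: accept; False: quarantine

# Hypothetical trusted baseline for some observed quantity.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
```

Screening each datum against the population it claims to come from shifts governance from the IT system (where the data is clean) to the observed population itself (where the bias is inserted).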