Historical data shards divorced from the methods, post object-oriented strategy

The advantage of a data-on-read strategy is that it separates the process of data collection from the process of applying a schema to interpret the results. We can more easily learn that our prior knowledge was wrong when that prior knowledge is kept out of the data store itself.
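As a minimal sketch of this separation (the field names and converters are hypothetical, not from the original post), raw records can be stored exactly as collected, with a schema applied only at read time:

```python
import json

# Collection: store raw observations verbatim, with no interpretive schema.
raw_store = []

def collect(record: dict) -> None:
    raw_store.append(json.dumps(record))  # persist exactly what was observed

# Interpretation: apply a schema only when the data is read.
def read_with_schema(schema: dict):
    """Project each raw record through a schema of {field: converter}."""
    for line in raw_store:
        raw = json.loads(line)
        yield {field: conv(raw.get(field)) for field, conv in schema.items()}

collect({"sensor": "A1", "reading": "21.5", "unit": "C"})

# Today's interpretation...
v1 = {"sensor": str, "reading": float}
# ...can be replaced later without touching the stored data, e.g. when we
# learn that readings should be kept as raw strings along with their units.
v2 = {"sensor": str, "reading": str, "unit": str}

print(list(read_with_schema(v1)))
print(list(read_with_schema(v2)))
```

Because the schema lives outside the store, discovering that v1 embodied a wrong assumption costs nothing more than defining v2.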


Exploiting sharding capabilities in cloud databases for better preservation of history

For the project of knowledge or hypothesis discovery, this sharding of history is more valuable than attempting a historical report against the operational database, because the sharded history retains the context of the data. For a business example, suppose a report for a previous period involves an action by an employee who has since been promoted to a different position. Querying the operational database for this historical information will return the erroneous result that the new position was responsible for the prior action, when in fact the action was done in the capacity of the earlier position.
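A minimal sketch of the difference, with hypothetical tables and names rather than anything from the post: the operational store keeps only current state, while each history-shard record carries the context that was true when the action happened.

```python
from dataclasses import dataclass

# Operational table: one row per employee, current state only.
operational = {"e42": {"name": "Ada", "position": "Manager"}}  # promoted since

# History shard: each action snapshot keeps the context as it was.
@dataclass(frozen=True)
class ActionRecord:
    employee_id: str
    position_at_time: str   # captured when the action happened
    action: str
    period: str

history_shard = [
    ActionRecord("e42", "Analyst", "approved the report", "2016-Q3"),
]

# Reporting by joining to the operational table misattributes the action...
for rec in history_shard:
    current = operational[rec.employee_id]["position"]
    print(f"Operational join: a {current} {rec.action} in {rec.period}")

# ...while the shard preserves the capacity in which it was actually done.
for rec in history_shard:
    print(f"History shard: an {rec.position_at_time} {rec.action} in {rec.period}")
```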

Revisiting Bikeshare reporting using ASP.MVC with Razor and JavaScript libraries

In an earlier post, I presented some interactive reporting based on custom categorization and aggregation of data available from Capital Bikeshare. Those reports used Excel pivot tools and SQL Server Reporting Services, using both relational T-SQL and an Analysis Services cube I constructed to make the desired navigation and aggregation easier to report. My eventual…
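The original reports were built with Excel pivots and SQL Server tooling; as a language-neutral illustration of the same kind of custom categorization and aggregation, here is a hypothetical pandas sketch (the column names are invented, loosely shaped like Capital Bikeshare trip exports):

```python
import pandas as pd

# Hypothetical trip records in the rough shape of Capital Bikeshare exports
# (column names are illustrative, not taken from the original post).
trips = pd.DataFrame({
    "start_time": pd.to_datetime(
        ["2012-06-01 08:10", "2012-06-01 17:45", "2012-06-02 09:30"]),
    "duration_sec": [540, 1260, 300],
    "member_type": ["Registered", "Casual", "Registered"],
})

# Custom categorization: bucket trips by time of day.
trips["daypart"] = pd.cut(
    trips["start_time"].dt.hour,
    bins=[0, 6, 10, 16, 20, 24],
    labels=["night", "morning", "midday", "evening", "late"],
    right=False,
)

# Pivot-style aggregation: trip counts and mean duration
# by member type and daypart.
pivot = trips.pivot_table(
    index="member_type", columns="daypart",
    values="duration_sec", aggfunc=["count", "mean"],
    observed=False,
)
print(pivot)
```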

Playing with some data: Capital Bikeshare data

With the modern speed of data retrieval, analysis, and visualization, we may be encountering a new form of the fallacy of appeal to authority, where the authority comes from the speed at which we can present affirming data for our theses. Assuming that human behavior is a product of evolution, there has not been enough time for evolution to adapt to the new reality of nearly instant affirmation of some consequent. Historically, we learned a pattern that affirming data can be trusted if it arrives quickly: before modern data technologies, finding affirming data fast was an indication that such data was abundant all around us, which is why it didn't take long to find. That mode of thinking is no longer valid, because instant access to a wide variety of data makes it possible to find affirming data very quickly for almost any thesis. It will take a few generations for evolution to catch up and teach us not to trust speed of affirmation as proof of a hypothesis.

Data governance vs Datum governance

I’m describing this as the security of the datum rather than of the data: specific observations are vulnerable to exploitation, rather than everything the sensors observe. The malware is in the population being observed rather than in the IT systems.

To combat this kind of problem, we are going to need an additional approach, datum governance, to protect the observed population from deliberately inserted biases.
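What datum-level governance would look like in practice is open; one hypothetical sketch is a per-observation plausibility gate that quarantines individual data points before they enter the store, on the theory that deliberately inserted biases surface as datum-level anomalies. The sensor name, threshold, and gating rule below are all illustrative assumptions, not a prescribed control.

```python
from statistics import median

# Data governance operates on the collection as a whole; datum governance
# vets each individual observation before it is admitted.

history: dict[str, list[float]] = {}

def admit(sensor_id: str, value: float, tolerance: float = 5.0) -> bool:
    """Return True if the datum passes a simple plausibility gate."""
    past = history.setdefault(sensor_id, [])
    # Compare against the recent history of the same sensor.
    if len(past) >= 5 and abs(value - median(past[-5:])) > tolerance:
        return False  # quarantine for review rather than ingesting
    past.append(value)
    return True

for v in [20.1, 20.4, 19.9, 20.2, 20.0, 48.7, 20.3]:
    print(v, "admitted" if admit("sensor-7", v) else "quarantined")
```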

Big data can re-identify de-identified data

The enthusiasm for the benefits of big data comes from widely promoted reports of past successes, and the promise of big data techniques is that they can provide similar successes in other contexts. Big data involves volume, velocity, and variety: the volume and velocity depend on automated queries and report building, while the variety introduces the opportunity for new benefits. It is this combination of automation and opportunity from variety that makes re-identification possible, or even very likely.
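A standard illustration of that combination, using invented data: two separately de-identified datasets that share quasi-identifiers can be linked automatically, and the linkage restores identities. The datasets, columns, and values below are hypothetical.

```python
import pandas as pd

# A "de-identified" health dataset: direct identifiers removed,
# but quasi-identifiers retained.
health = pd.DataFrame({
    "zip": ["20001", "20001", "22030"],
    "birth_date": ["1962-07-31", "1975-01-02", "1962-07-31"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "flu", "diabetes"],
})

# A second, public dataset from a different context (the "variety"),
# which still carries names alongside the same quasi-identifiers.
voters = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["20001", "22030"],
    "birth_date": ["1962-07-31", "1962-07-31"],
    "sex": ["F", "F"],
})

# Automated linkage on the quasi-identifiers re-identifies records:
# each unique (zip, birth_date, sex) combination maps a name to a diagnosis.
linked = health.merge(voters, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])
```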

Materialize the model to level the competition with observations

Having model data explicitly materialized into tables allows the data clerk to recognize the deficiency that this data is not observed data, and gives the clerk the opportunity to ask whether there can be another source for it. Perhaps, for example, some new sensor technology has become available that provides observations that previously required models to estimate. The analyst can then revise the analysis to use that new observed data instead of the model-generated data.
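A hypothetical sketch of what such materialization might look like (the table, columns, and values are invented for illustration): model outputs sit in the same table as observations, distinguished by an explicit provenance column, so they can be listed and later replaced.

```python
import pandas as pd

# Hypothetical table where model outputs are materialized alongside
# observed data, with an explicit provenance column.
readings = pd.DataFrame({
    "site": ["N1", "N2", "N3"],
    "rainfall_mm": [12.4, 9.8, 11.1],
    "provenance": ["observed", "model", "model"],
})

# Because the model rows are explicit, the data clerk can list exactly
# which values are estimates rather than observations...
print(readings[readings["provenance"] == "model"])

# ...and, when a new sensor begins reporting a site, swap the estimate
# out for the observation without disturbing the rest of the analysis.
def replace_with_observation(df: pd.DataFrame, site: str, value: float) -> pd.DataFrame:
    df = df.copy()
    df.loc[df["site"] == site, ["rainfall_mm", "provenance"]] = [value, "observed"]
    return df

readings = replace_with_observation(readings, "N2", 10.2)
print(readings)
```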