A Data Science Central Community

*Guest Blog post by* Jean-Paul Rasson

Much of the explosion of big data has been driven by gains in server performance, falling memory costs, and improvements in distributed architectures (the cloud, and truly parallel databases such as NoSQL): in essence, by reductions in what it costs, in both memory and bandwidth, to process a terabyte of data.

However, most very big data is very sparse from an information point of view: big data consists largely of noise or redundant information (think of video or tweet data, where redundancy is huge) and can be compacted by 90-95% without any significant information loss. Storing and processing the entire data set is very inefficient. I believe we can do much better by smartly sampling and smartly summarizing very big data (particularly data more than four weeks old), a process known as data reduction or signal processing, rather than storing everything. The sampling / summarizing process is a task that should be left to expert, very senior statisticians, not to computer scientists.
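One simple building block for this kind of smart sampling is reservoir sampling, which keeps a uniform random sample of a data stream in fixed memory, so you never need to store the full stream at all. The sketch below is an illustration of the general idea, not a technique named in the post:

```python
import random

random.seed(1)

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using only O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i replaces a reservoir slot with probability k/(i+1),
            # which keeps every item equally likely to end up in the sample.
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir

# 50 events retained out of a million-event stream, without ever
# holding the stream in memory.
sample = reservoir_sample(range(1_000_000), 50)
print(len(sample))
```

In practice the "smart" part is choosing what to sample and what to summarize (recent data densely, old data sparsely, as the post argues), which is exactly the statistical design work the author assigns to senior statisticians.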

At the end of the day, you should answer the following questions:

- How much lift, increased ROI, or reduced risk do you get by storing everything, rather than storing the 5% "core" of your data (even if this means you still store 100% of the most recent 60 minutes, but less than 1% of data five weeks old and older)? My guess is that you gain very little. But have you ever tested this?
- How much does it cost to store and keep everything, versus storing only a very carefully, smartly selected / sampled / summarized 5% "core" of your data?
- What about keeping the 5% core of your data but, in addition, adding three external big data sources for which you also keep only the core? You then have potentially four times as much predictive power as before, for 20% (20% = 4 x 5%) of the cost of storing all your internal big data, with minimal information loss.

Think about this: to extrapolate how many users visit your very large website in a particular month, you don't need to store all user cookies for 28 days in a row. You can sample 10% of your users, sample 7 days out of 28 (one Monday, one Tuesday, one Wednesday, and so on), and use a bit of statistical modeling and Monte Carlo simulation. So you can answer your question very accurately using 40 times less data than you might think.
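The scheme above can be sketched end to end. The simulation below is a hypothetical illustration: it assumes every user visits independently each day with a day-of-week dependent probability, and a closed-form independence model stands in for the Monte Carlo step. The 10% user panel observed on 7 of 28 days is exactly 1/40 of the full log:

```python
import random

random.seed(7)

N_USERS = 20_000
DAYS = 28  # four full weeks

# Assumed ground-truth model (illustration only): each user independently
# visits on a given day with a day-of-week dependent probability, Mon..Sun.
P_VISIT = [0.06, 0.06, 0.06, 0.06, 0.06, 0.03, 0.03]

# Full 28-day log: logs[d] = set of user ids seen on day d.
logs = [{u for u in range(N_USERS) if random.random() < P_VISIT[d % 7]}
        for d in range(DAYS)]
true_uniques = len(set().union(*logs))

# The reduced data set: a 10% user panel, observed on 7 sampled days
# (one randomly chosen day per weekday position) -- 1/40 of the data.
panel = set(random.sample(range(N_USERS), N_USERS // 10))
sampled_days = [dow + 7 * random.randrange(4) for dow in range(7)]

# Estimate each weekday's visit probability from the panel alone.
p_hat = [len(logs[d] & panel) / len(panel) for d in sampled_days]

# Extrapolate: probability a user is seen at least once in 28 days,
# assuming independence across days (the statistical-modeling step).
q_none = 1.0
for d in range(DAYS):
    q_none *= 1.0 - p_hat[d % 7]
est_uniques = round(N_USERS * (1.0 - q_none))

rel_err = abs(est_uniques - true_uniques) / true_uniques
print(f"true={true_uniques}  estimate={est_uniques}  rel_err={rel_err:.1%}")
```

Under these assumptions the estimate lands within a few percent of the true monthly unique count. Real traffic is not independent across days, which is where the heavier modeling and Monte Carlo work the post alludes to would come in.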

© 2020 TechTarget, Inc.