
An extensive glossary of big data terminology

Posted in Big Data Startups. I really like their NewSQL entry, although in my opinion it should also include traditional SQL with an intermediate, transparent layer where SQL is translated into something much faster for Hadoop, a bit like the new Teradata Aster product or the Pivotal HAWQ platform. Also read our own data science dictionary.

Anyway, here's the list:

A

Aggregation – a process of searching, gathering and presenting data
Algorithms – mathematical formulas or procedures that can perform certain analyses on data
Analytics – the discovery and communication of meaningful insights in data
Anomaly detection – the search for data items in a dataset that do not match a projected pattern or expected behaviour. Anomalies are also called outliers, exceptions, surprises or contaminants, and they often provide critical and actionable information; a minimal sketch follows this list.
Anonymization – making data anonymous; removing all data points that could be used to identify a person
Application – computer software that enables a computer to perform a certain task
Artificial Intelligence – developing intelligent machines and software that are capable of perceiving their environment, taking appropriate action when required, and even learning from those actions.
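
To make the anomaly-detection entry above more concrete, here is a minimal Python sketch that flags values lying far from the mean; the z-score cutoff of 2.5 standard deviations and the sample readings are illustrative assumptions, not something prescribed by the glossary.

```python
# Minimal anomaly detection sketch: treat values more than `threshold`
# standard deviations away from the mean as anomalies (outliers).
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical, nothing stands out
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Example: a mostly stable sensor reading with one obvious outlier.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 55.0, 10.0, 9.7, 10.3]
print(find_anomalies(readings))  # -> [55.0]
```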

B

Behavioural Analytics – analytics that explain the how, why and what instead of just the who and when; it looks at humanized patterns in the data
Big Data Scientist – someone who is able to develop the algorithms to make sense out of big data
Big data startup – a young company that has developed new big data technology
Biometrics – the identification of humans by their characteristics
Brontobytes – approximately 1,000 Yottabytes, and the anticipated size of the digital universe of tomorrow. The number of bytes in a Brontobyte is a 1 followed by 27 zeros (see the arithmetic sketch after this list)
Business Intelligence – the theories, methodologies and processes to make data understandable
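
As a quick arithmetic check on the Brontobytes entry, the Python sketch below walks up the decimal byte prefixes from kilobyte to brontobyte; note that "brontobyte" is an informal unit, and the decimal (powers-of-ten) interpretation used here is an assumption.

```python
# Decimal byte-size prefixes, each step being a factor of 1,000 (10**3).
# "Brontobyte" is an informal prefix commonly quoted as 10**27 bytes,
# i.e. roughly 1,000 Yottabytes -- a 1 followed by 27 zeros.
prefixes = [
    ("kilobyte", 10**3),
    ("megabyte", 10**6),
    ("gigabyte", 10**9),
    ("terabyte", 10**12),
    ("petabyte", 10**15),
    ("exabyte", 10**18),
    ("zettabyte", 10**21),
    ("yottabyte", 10**24),
    ("brontobyte", 10**27),
]

for name, size in prefixes:
    print(f"1 {name:<10} = {size:>30,} bytes")

# Sanity check: a brontobyte is 1,000 yottabytes.
assert 10**27 == 1_000 * 10**24
```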

C

Classification analysis – a systematic process for obtaining important and relevant information about data, also called metadata: data about data.
Cloud computing – a distributed computing system over a network used for storing data off-premises
Clustering analysis – the process of identifying objects that are similar to each other and clustering them in order to understand the differences as well as the similarities within the data.
Cold data storage – storing old, rarely used data on low-power servers; retrieving the data will take longer
Comparative analysis – a step-by-step procedure of comparisons and calculations to detect patterns within very large data sets.
Complex structured data – data that are composed of two or more complex, complicated, and interrelated parts that cannot be easily interpreted by structured query languages and tools.
Computer generated data – data generated by computers such as log files
Concurrency – performing and executing multiple tasks and processes at the same time
Correlation analysis – the analysis of data to determine a relationship between variables and whether that relationship is negative (down to -1.00) or positive (up to +1.00); a worked sketch follows this list.
Customer Relationship Management – managing the sales and business processes; big data will affect CRM strategies
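
For the correlation-analysis entry above, here is a minimal Python sketch that computes the Pearson correlation coefficient, which ranges from -1.00 (perfectly negative) to +1.00 (perfectly positive); the advertising-spend example data are invented purely for illustration.

```python
# Pearson correlation coefficient: covariance of x and y divided by the
# product of their standard deviations. The result lies between -1.0 and +1.0.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Example: advertising spend vs. sales (made-up numbers).
spend = [10, 20, 30, 40, 50]
sales = [12, 25, 31, 42, 55]
print(round(pearson(spend, sales), 2))  # ~0.99: a strong positive relationship
```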

D

Dashboard – a graphical representation of the analyses performed by the algorithms
Data aggregation tools – tools that transform scattered data from numerous sources into a single new source.
Data analyst – someone analysing, modelling, cleaning or processing data
Database – a digital collection of data stored via a certain technique
Database-as-a-Service – a database hosted in the cloud on a pay-per-use basis, for example Amazon Web Services
Database Management System – collecting, storing and providing access to data
Data centre – a physical location that houses the servers for storing data
Data cleansing – the process of reviewing and revising data in order to delete duplicates, correct errors and provide consistency (a minimal sketch follows this list)
Data custodian – someone who is responsible for the technical environment necessary for data storage
Data ethical guidelines – guidelines that help organizations be transparent with their data, ensuring simplicity, security and privacy
Data feed – a stream of data such as a Twitter feed or RSS
Data marketplace – an online environment to buy and sell data sets
Data mining – the process of finding certain patterns or information from data sets
Data modelling – the analysis of data objects using data modelling techniques to create insights from the data
Data set – a collection of data
Data virtualization – a data integration process used to gain more insights; usually it involves databases, applications, file systems, websites, big data techniques, etc.
De-identification – same as anonymization; ensuring a person cannot be identified through the data
Discriminant analysis – cataloguing of the data; distributing data into groups, classes or categories. A statistical analysis used where certain groups or clusters in the data are known upfront, using that information to derive the classification rule.
Distributed File System – systems that offer simplified, highly available access to storing, analysing and processing data
Document Store Databases – a document-oriented database that is especially designed to store, manage and retrieve documents, also known as semi structured data.
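
As an illustration of the data-cleansing entry earlier in this list, the short Python sketch below normalizes inconsistent values and removes duplicate records; the field names and records are hypothetical.

```python
# Minimal data cleansing sketch: normalize fields for consistency and
# drop duplicate records. Field names and values are hypothetical.
records = [
    {"name": "Alice Smith ", "country": "usa"},
    {"name": "alice smith",  "country": "USA"},
    {"name": "Bob Jones",    "country": "uk"},
]

def clean(record):
    # Trim whitespace, fix casing, and standardize the country code.
    return {
        "name": record["name"].strip().title(),
        "country": record["country"].strip().upper(),
    }

seen = set()
cleaned = []
for record in map(clean, records):
    key = (record["name"], record["country"])
    if key not in seen:          # keep only the first occurrence
        seen.add(key)
        cleaned.append(record)

print(cleaned)
# [{'name': 'Alice Smith', 'country': 'USA'}, {'name': 'Bob Jones', 'country': 'UK'}]
```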

Read full list at http://www.bigdata-startups.com/abc-big-data-glossary-terminology/
