Big Data & Data Science

Big data is a term for data sets that are so large or complex that traditional data-processing application software is inadequate to deal with them. Challenges include capture, storage, analysis, data curation, search, sharing, transfer, visualization, querying, updating, and information privacy. The term "big data" often refers simply to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and only seldom to a particular size of data set.


Big data can be described by the following characteristics:

Volume

The quantity of data generated and stored. The size of the data determines its value and potential insight, and whether it can be considered big data at all.

Variety

The type and nature of the data. Understanding the variety helps those who analyze the data use the resulting insight effectively.

Velocity

In this context, the speed at which the data is generated and processed to meet the demands and challenges of growth and development.
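Velocity implies that data often must be processed as it arrives rather than stored in full first. The following is a minimal sketch of that idea in Python, maintaining a running mean over a stream one event at a time; the event values are made up for illustration:

```python
# Minimal sketch: process a data stream event by event, keeping only
# a running mean instead of retaining every record in memory.

def running_mean(stream):
    count = 0
    total = 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count  # mean of all events seen so far

# Illustrative event values (not from any real feed)
events = [12.0, 18.0, 15.0, 21.0]
means = list(running_mean(events))
print(means[-1])  # mean after the final event
```

The same incremental pattern extends to other streaming aggregates (counts, minima, windowed sums) without ever holding the full data set.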

Variability

Inconsistency in the data set can hamper the processes that handle and manage it.

Veracity

The quality of captured data can vary greatly, affecting the accuracy of analysis.

MindTelligent's Big Data focus includes:
  • Predictive analytics.
  • NoSQL databases.
  • Search and knowledge discovery.
  • Stream analytics.
  • In-memory data fabric.
  • Distributed file stores.
  • Data virtualization.
  • Data integration.
  • Data preparation.
  • Data quality.
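Of the focus areas above, data quality and data preparation lend themselves to a small concrete example. The sketch below shows one common form of quality check, filtering out records with missing or out-of-range fields before analysis; the field names, sample records, and thresholds are illustrative assumptions only:

```python
# Minimal data-quality sketch: reject records whose "age" field is
# missing or outside a plausible range before they enter analysis.
# Field names, records, and bounds are illustrative assumptions.

records = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},  # missing value
    {"id": 3, "age": -5},    # out of the valid range
]

def is_valid(record):
    age = record.get("age")
    return age is not None and 0 <= age <= 120

valid = [r for r in records if is_valid(r)]
print(len(valid))  # count of records that pass the check
```

In practice such rules would be one step in a larger preparation pipeline, alongside deduplication, type normalization, and integration of data from multiple sources.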