Monday, October 22, 2018

Big Data: Spark

Apache Spark is an open-source, general-purpose cluster-computing framework. Its architectural foundation is the resilient distributed dataset (RDD): a read-only multiset of data items distributed over a cluster of machines and maintained in a fault-tolerant way.
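
To make the RDD idea concrete, here is a minimal PySpark sketch (the data is made up for illustration). An RDD is built from a collection, and each transformation produces a new read-only RDD rather than modifying the original:

from pyspark import SparkContext

sc = SparkContext("local[*]", "RDDExample")

# Build an RDD by distributing a local collection over the cluster
numbers = sc.parallelize([1, 2, 3, 4, 5, 6])

# Transformations return new RDDs; the original is read-only.
# Spark records this lineage, so a lost partition can be recomputed.
squares = numbers.map(lambda x: x * x)
evens = squares.filter(lambda x: x % 2 == 0)

# Actions trigger the actual distributed computation
print(evens.collect())  # [4, 16, 36]

sc.stop()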
Spark uses cluster computing for both its computational (analytics) power and its storage, meaning it can draw on resources from many computer processors linked together for its analytics. It's a scalable solution: if more oomph is needed, you can simply introduce more processors into the system.

With distributed storage, the huge datasets gathered for Big Data analysis can be stored across many smaller individual physical hard discs. This speeds up read/write operations because many discs can work in parallel, rather than a single disc's read/write head shuttling back and forth to serve every request. As with processing power, more storage can be added when needed, and because the system uses commonly available commodity hardware (standard computer hard discs), infrastructure costs stay low.
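
As a small sketch of what that scaling looks like in code (the partition counts here are arbitrary), an RDD can be split into however many partitions the cluster's processors can work on in parallel:

from pyspark import SparkContext

sc = SparkContext("local[4]", "PartitionExample")

# Split the data into 4 partitions, one per available core
rdd = sc.parallelize(range(1000), numSlices=4)
print(rdd.getNumPartitions())  # 4

# If more processors join the cluster, the same data can simply be re-split
print(rdd.repartition(16).getNumPartitions())  # 16

sc.stop()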
Unlike Hadoop, Spark does not come with its own file system - instead it can be integrated with many storage systems, including Hadoop's HDFS, MongoDB and Amazon's S3.
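
In practice this usually just means pointing Spark at a URI for whichever system holds the data. A sketch, assuming placeholder hosts, buckets and paths (S3 access also requires the Hadoop AWS connector and credentials, and MongoDB needs the mongo-spark-connector package):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("StorageExample").getOrCreate()

# The read API is the same regardless of where the data actually lives;
# only the URI scheme changes (hdfs://, s3a://, and so on)
hdfs_df = spark.read.text("hdfs://namenode:9000/data/logs.txt")
s3_df = spark.read.csv("s3a://my-bucket/data/events.csv", header=True)

print(hdfs_df.count(), s3_df.count())

spark.stop()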
Another element of the framework is Spark Streaming, which lets developers build applications that perform analytics on streaming data - such as video or social media feeds - on the fly, as the data arrives.
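
A minimal Spark Streaming sketch, assuming a text source on a placeholder host and port (real applications would more likely read from something like Kafka), that counts words in five-second micro-batches:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Two local threads: one to receive data, one to process it
sc = SparkContext("local[2]", "StreamingWordCount")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each batch's counts as they are computed

ssc.start()
ssc.awaitTermination()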
In fast-changing industries such as marketing, real-time analytics has huge advantages. For example, ads can be served based on a user's behavior at a particular moment rather than on historical behavior, increasing the chance of prompting an impulse purchase.
