Dot2Data Solutions

Hadoop


The core of Apache Hadoop consists of a storage part (HDFS, the Hadoop Distributed File System) and a processing part (MapReduce). Hadoop splits files into large blocks and distributes them across the nodes in the cluster. To process the data, Hadoop MapReduce ships packaged code to the nodes so that each node processes, in parallel, the data it holds locally. This approach takes advantage of data locality: nodes manipulate the data they already have on hand, which allows the data to be processed faster and more efficiently than in a conventional supercomputer architecture that relies on a parallel file system, where computation and data are connected via high-speed networking.
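As a brief illustration of the MapReduce programming model, the sketch below is the classic word-count job written against the standard Hadoop Java API: the mapper emits (word, 1) pairs from each block of input text, and the reducer sums the counts for each word after the shuffle phase. The class names and the input/output HDFS paths passed on the command line are placeholders for illustration only; this is a minimal sketch, not Dot2Data production code.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: runs on the nodes holding each HDFS block and emits (word, 1) pairs.
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: after the shuffle, sums the counts for each word.
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input path (placeholder)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output path (placeholder)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A job like this is typically packaged as a jar and submitted with the `hadoop jar` command; YARN then schedules the map tasks on the nodes that hold the corresponding HDFS blocks, which is the data locality described above.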

The base Apache Hadoop framework is composed of the following modules:

  • Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
  • Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster;
  • Hadoop YARN – a resource-management platform responsible for managing computing resources in clusters and using them for scheduling of users’ applications; and
  • Hadoop MapReduce – a programming model for large-scale data processing.

Dot2Data has an expert team in:

  • Hadoop administration and development
  • NoSQL and real-time analytics
  • R programming
  • Big Data/Hadoop solution architecture, design and development
  • Social media, community and viral marketing
  • Business development at global scale
  • Big Data strategy
