Abstract: Big data is one of the fastest growing application areas, and analysis of big data is essential for achieving higher productivity and efficient data utilization. MapReduce is emerging as an important programming model for large-scale data-parallel applications such as web indexing, data mining, and scientific simulation. Hadoop is an open-source implementation of the MapReduce framework. It is a highly robust system model in the form of a scalable and fault-tolerant distributed system, and it is important for data storage and processing. The goal of Hadoop is to offer efficient, high-performance processing of big data applications. Hadoop clusters are composed of hundreds of nodes that process terabytes of user data. This paper presents a survey of the different job schedulers for MapReduce.
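As background for the schedulers surveyed, the MapReduce model splits a job into a map phase that emits key-value pairs and a reduce phase that aggregates values per key. A rough, self-contained Python sketch of the classic word-count job (illustrative only; the function names are hypothetical and not part of Hadoop's API) might look like:

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield (word, 1)

def reduce_phase(pairs):
    # Reducer: group intermediate pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["big data analysis", "big data processing"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
result = reduce_phase(intermediate)
# result == {"big": 2, "data": 2, "analysis": 1, "processing": 1}
```

In a real Hadoop cluster, the framework runs many mapper and reducer tasks in parallel across nodes, and the job scheduler decides which tasks run where and when.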

Keywords: Cloud Computing, HDFS, MapReduce, Hadoop.


DOI: 10.17148/IARJSET.2020.7318
