Hadoop

* [https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-in-stand-alone-mode-on-ubuntu-20-04 Hadoop » Install Stand-Alone Mode on Ubuntu 20.04]
* [https://www.digitalocean.com/community/tutorials/an-introduction-to-big-data-concepts-and-terminology Hadoop » Big Data Concepts & Terminology]
* [https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html Hadoop » Setting up a Single Node Cluster]
* [https://archive.apache.org/dist/hadoop/common/ Hadoop » Download » Archive]
* [https://archive.apache.org/dist/hadoop/common/current/ Hadoop » Download » Current]
* [https://archive.apache.org/dist/hadoop/common/stable/ Hadoop » Download » Stable]
* [https://www.digitalocean.com/community/tutorials/an-introduction-to-hadoop Hadoop » An Introduction]
* [[VS Code on iPad Pro]]
* [[Machine Learning]]

Hadoop is a Java-based programming framework that supports the processing and storage of extremely large datasets on a cluster of inexpensive machines. It was the first major open source project in the big data field and is sponsored by the Apache Software Foundation. Hadoop consists of four main layers:

  1. Hadoop Common is the collection of utilities and libraries that support the other Hadoop modules.
  2. HDFS, which stands for Hadoop Distributed File System, is responsible for persisting data to disk.
  3. YARN, short for Yet Another Resource Negotiator, is the cluster's "operating system": it manages the resources and schedules the jobs that run on top of HDFS.
  4. MapReduce is the original processing model for Hadoop clusters. It distributes work across the cluster (the map step), then collects and combines the results from the nodes into a response to a query (the reduce step). Many other processing models are available for the 3.x version of Hadoop; a stand-alone run of one of the bundled MapReduce examples is sketched after this list.

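To make the MapReduce layer concrete, the sketch below runs one of the example jobs that ships with every Hadoop release in stand-alone mode (a single JVM working against the local filesystem, no HDFS or YARN daemons). The release number 3.3.6 and the unpack location ~/hadoop are assumptions for illustration; substitute whichever version you fetched from the download links above.

# assumes a Hadoop 3.x release has been unpacked to ~/hadoop
# and that JAVA_HOME points at a supported JDK
cd ~/hadoop
mkdir input
cp etc/hadoop/*.xml input

# run the bundled "grep" example: the map step scans the XML files,
# the reduce step collects every match of the regular expression
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar \
    grep input output 'dfs[a-z.]+'

# results land in the local output/ directory
cat output/*
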
Knowledge

# print the directory of the default Java installation (useful for JAVA_HOME)
readlink -f /usr/bin/java | sed "s:bin/java::"

# ssh is required and pdsh is recommended for the Hadoop daemon scripts
sudo apt-get install pdsh
sudo apt-get install ssh
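
With the prerequisites in place, a typical follow-up from the Single Node Cluster guide is to point Hadoop at the Java directory printed by the readlink command above and to enable passphraseless ssh to localhost. The OpenJDK 11 path and the pdsh tweak below are assumptions; adjust them to match your system.

# in etc/hadoop/hadoop-env.sh, set JAVA_HOME to the directory found above
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

# let the start-up scripts ssh to localhost without a passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

# pdsh defaults to rsh; tell it to use ssh instead
export PDSH_RCMD_TYPE=ssh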

References