Hadoop on Mesos Installation Guide

This post describes how you can set up Apache Hadoop to work with Apache Mesos. In another post I also describe how to set up Apache Spark to work on Mesos, so by following these instructions you can have Spark and Hadoop running on the same Mesos cluster. My cluster is a Eucalyptus private cloud (version 3.4.2) with an Ubuntu 12.04.4 LTS image, but the instructions should work for any cluster running Ubuntu, and for other Linux distributions after some small changes.


  • I assume you have already set up HDFS on your cluster. If not, follow my post here
  • I also assume you have already installed Mesos. If not, follow the instructions here

Installation Steps:

  1. Get a hadoop distribution:
    wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.3.0-cdh5.1.2.tar.gz

    Careful: if you already have HDFS installed through the apt package manager, make sure you download the same version as the one you are using.

  2. Run “hadoop version” to check the version you are using
  3. If for any reason you have two different versions, then the JobTracker running on your master and the TaskTracker running on the slaves will use different versions. This can lead to some subtle issues. For example, tasks will be marked as “finished” by Mesos while actually being unfinished, and you will see a fatal error in the log files of your executor. The error will look like this:
    FATAL mapred.TaskTracker: Shutting down. Incompatible version or revision.TaskTracker version 
    '2.3.0-cdh5.1.0' and build '2.3.0-cdh5.1.0 from 8e266e052e423af592871e2dfe09d54c03f6a0e     
    by jenkins source checksum 7ec68264497939dee7ab5b91250cbd9' and JobTracker version '2.3.0-cdh5.1.2' 
    and build '2.3.0-cdh5.1.2 from 8e266e052e423af592871e2dfe09d54c03f6a0e8 by jenkins source checksum 
    ec11b8ec19ca2bf3e7cb1bbe4ee182 and hadoop.relaxed.worker.version.check is
     enabled and hadoop.skip.worker.version.check is not enabled.

    To avoid this error, make sure hadoop.skip.worker.version.check is set to true inside the mapred-site.xml in the Cloudera configuration directory. Still, you might end up having other issues – so I highly recommend using the same version instead.
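    For reference, a minimal sketch of that override (the property name comes from the error message above; treat it as a stopgap only):

    ```xml
    <!-- mapred-site.xml: relax the TaskTracker/JobTracker version check.
         A stopgap only; matching the versions is the safer fix. -->
    <property>
      <name>hadoop.skip.worker.version.check</name>
      <value>true</value>
    </property>
    ```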

  4. Untar the file:
    $ tar zxf hadoop-2.3.0-cdh5.1.2.tar.gz
  5. Clone hadoopOnMesos:
    $ git clone https://github.com/mesos/hadoop.git hadoopOnMesos
  6. Build hadoopOnMesos:
    $ cd hadoopOnMesos
    $ mvn package
    $ cd ..
    1. The jar file will be located in the hadoopOnMesos/target directory
  7. Copy the jar both to the Cloudera installation directory and the hadoop distribution you just downloaded:
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib/
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar hadoop-2.3.0-cdh5.1.2/share/hadoop/common/lib/

    Next you need to modify Cloudera to run MapReduce v1 instead of YARN. Whether you have installed HDFS with the apt package manager or you are planning to use the distribution you just downloaded, you have to do the following to avoid getting this error:

    java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/TaskTracker
    at org.apache.hadoop.mapred.MesosExecutor.launchTask(MesosExecutor.java:75)
  8. Configure CDH5 to run with MRv1 instead of YARN:
    $ cd hadoop-2.3.0-cdh5.1.2
    $ mv bin bin-mapreduce2
    $ mv examples examples-mapreduce2
    $ ln -s bin-mapreduce1 bin
    $ ln -s examples-mapreduce1 examples
    $ pushd etc
    $ mv hadoop hadoop-mapreduce2
    $ ln -s hadoop-mapreduce1 hadoop
    $ popd
    $ pushd share/hadoop
    $ rm mapreduce
    $ ln -s mapreduce1 mapreduce
    $ popd
  9. Moreover, if you have installed Cloudera CDH5 with apt-get (see my post here) you should do the following configuration, as also described in this github issue:
    $ cp ../hadoopOnMesos/target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib
    $ cat > /etc/profile.d/hadoop.sh <<'EOF'
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    EOF
    $ chmod +x /etc/profile.d/hadoop.sh
    $ . /etc/profile.d/hadoop.sh

    (Note that the script is sourced with “.” rather than executed, so the variables are exported into your current shell.)
    $ cd ..
    $ rm hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz
    $ tar czf hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz hadoop-2.3.0-cdh5.1.2/
    $ hadoop fs -put hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /

    For this step, depending on your configuration, you might need to grant HDFS permissions to your root user (for Cloudera, the hdfs user is the HDFS superuser, not root). Alternatively, you can simply copy the .tar.gz file to a directory readable by the hdfs user and hadoop group, and then put the file into HDFS by running:

    sudo -u hdfs hadoop fs -put /hdfsuser/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /
  10. Put the Mesos properties into /etc/hadoop/conf.cluster-name/mapred-site.xml. For a detailed list of properties that you can set, take a look here. The following properties in the mapred-site.xml file are necessary:
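    As a sketch, assuming the property names documented in the mesos/hadoop framework README, the essential entries look like the following (all hostnames, ports, and paths are placeholders you must replace with your own values):

    ```xml
    <!-- mapred-site.xml: run the JobTracker with the Mesos scheduler.
         Hostnames, ports, and paths below are placeholder examples. -->
    <property>
      <name>mapred.job.tracker</name>
      <value>jobtracker-host:9001</value>
    </property>
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.MesosScheduler</value>
    </property>
    <property>
      <name>mapred.mesos.taskScheduler</name>
      <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
    </property>
    <property>
      <name>mapred.mesos.master</name>
      <value>mesos-master-host:5050</value>
    </property>
    <property>
      <name>mapred.mesos.executor.uri</name>
      <value>hdfs://namenode:9000/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz</value>
    </property>
    ```

    The mapred.mesos.executor.uri entry should point at the tarball you uploaded to HDFS in the previous step.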
  11. Edit hadoop startup default script with any editor you want:
    $ vim /usr/lib/hadoop-0.20-mapreduce/bin/hadoop-daemon.sh

    At the beginning of this script, export the location of MESOS_NATIVE_JAVA_LIBRARY:

    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so

    If you don’t do this you will get an error that looks like the following when you try to start the jobtracker:

    FATAL org.apache.hadoop.mapred.JobTrackerHADaemon: Error encountered requiring JT shutdown. Shutting down immediately.
    java.lang.UnsatisfiedLinkError: no mesos in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:54)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79)
    at org.apache.mesos.MesosSchedulerDriver.<init>(MesosSchedulerDriver.java:61)
    at org.apache.hadoop.mapred.MesosScheduler.start(MesosScheduler.java:188) 
  12. Start the jobtracker
    $ service hadoop-0.20-mapreduce-jobtracker start

    You should be able to see the jobtracker process by running “jps”.

  13. Now you should test the setup:
    $ su mapred
    $ echo "I love UCSB" > /tmp/file0
    $ echo "Do you love UCSB?" > /tmp/file1
    $ hadoop fs -mkdir -p /user/foo/data
    $ hadoop fs -put /tmp/file? /user/foo/data
    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples-2.3.0-mr1-cdh5.1.2.jar wordcount /user/foo/data /user/foo/out
    $ hadoop fs -ls /user/foo/out
    $ hadoop fs -cat /user/foo/out/part*
  14. *Notice that we didn’t install or start a TaskTracker on the slave machines. Mesos is responsible for starting a TaskTracker when you submit a Hadoop job.

9 thoughts on “Hadoop on Mesos Installation Guide”

  1. Hello,
    Thank you for your instruction.
    But I have a question: why don’t we configure and start HDFS from the package hadoop-2.3.0-cdh5.1.2 here? Why do we need to install it beforehand with another package?
    I followed your instructions at the other link for installing Cloudera HDFS CDH5, but they are quite incomplete and hard for me, because I am very new to Hadoop and Mesos. It has taken me a week to install Hadoop on Mesos, but I’m still unsuccessful.
    Could you please give me more detailed instruction?
    Thank you very much.

  2. Hello,
    I’m new to hadoop. I already have a Mesos cluster set up with the Marathon and Chronos frameworks. Now I need to add the Hadoop framework to this existing cluster. Will it be possible to do that by using the steps provided by you?

  3. Step 13 works, but I see the below in the JobTracker output. It also doesn’t show up as a task in the mesos web GUI, although I do see the framework for hadoop on just the hadoop master node. Is this expected?

    16/03/30 15:54:08 INFO mapred.ResourcePolicy: JobTracker Status
    Pending Map Tasks: 0
    Pending Reduce Tasks: 0
    Running Map Tasks: 0
    Running Reduce Tasks: 0
    Idle Map Slots: 0
    Idle Reduce Slots: 0
    Inactive Map Slots: 0 (launched but no hearbeat yet)
    Inactive Reduce Slots: 0 (launched but no hearbeat yet)
    Needed Map Slots: 0
    Needed Reduce Slots: 0
    Unhealthy Trackers: 0

    • Hi David,

      Since you haven’t submitted a job to your jobtracker yet, then yes, this is the expected output. As long as Hadoop registers with Mesos, it will start receiving offers from Mesos. This output refers to the number of tasks running etc., which, since there is no job submitted, are all 0. You don’t see a Mesos task because the jobtracker itself is not running on Mesos (so no Mesos tasks are created for it); it just registers with Mesos. When you submit a job you should see tasks running as well.
      ps: I am not sure I understand what you mean by “although I do see the framework for hadoop on just the hadoop master node”.

      • Ok. I was thinking that step 13 was submitting a hadoop job.

        By “framework for hadoop on just the hadoop master node”, I meant that when you click on “Frameworks”, I only see one active framework running on a single node. I assume this is expected, but it’s difficult to determine whether I should see the framework on a single node or on all 4 data nodes in my cluster.

      • ok – what you see on “Frameworks” is just the schedulers registered with Mesos. The hadoop scheduler (jobtracker) doesn’t run on Mesos. Its tasktrackers will run on Mesos…

      • I don’t see any tasktrackers getting run on Mesos. Do I need to set the below in /etc/hadoop/conf/mapred-site.xml? I think it’s defaulting to “local”, and that’s why my “hadoop jar” command is only running locally and I don’t see any tasktrackers getting started by Mesos.

