Hadoop on Mesos Installation Guide

This post describes how to set up Apache Hadoop to work with Apache Mesos. In another post I describe how to set up Apache Spark on Mesos, so by following both sets of instructions you can have Spark and Hadoop running on the same Mesos cluster. My cluster is a Eucalyptus private cloud (version 3.4.2) with an Ubuntu 12.04.4 LTS image, but the instructions should work for any cluster running Ubuntu, or even other Linux distributions with some small changes.


  • I assume you have already set up HDFS on your cluster. If not, follow my post here
  • I also assume you have already installed Mesos. If not, follow the instructions here

Installation Steps:

  1. Get a Hadoop distribution:
    wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.3.0-cdh5.1.2.tar.gz

    Careful: if you already have HDFS installed through the apt package manager, make sure you download the same version as the one you are already using.

  2. Run “hadoop version” to check which version you are using.
  3. If for any reason the two versions differ, the JobTracker running on your master and the TaskTracker running on a slave will use different versions. This can lead to subtle issues. For example, tasks will be marked as “finished” by Mesos while they are actually unfinished, and you will see a fatal error in the log files of your executor. The error will look like this:
    FATAL mapred.TaskTracker: Shutting down. Incompatible version or revision. TaskTracker version
    '2.3.0-cdh5.1.0' and build '2.3.0-cdh5.1.0 from 8e266e052e423af592871e2dfe09d54c03f6a0e
    by jenkins source checksum 7ec68264497939dee7ab5b91250cbd9' and JobTracker version '2.3.0-cdh5.1.2'
    and build '2.3.0-cdh5.1.2 from 8e266e052e423af592871e2dfe09d54c03f6a0e8 by jenkins source checksum
    ec11b8ec19ca2bf3e7cb1bbe4ee182' and hadoop.relaxed.worker.version.check is enabled
    and hadoop.skip.worker.version.check is not enabled.

    To avoid this error, make sure hadoop.skip.worker.version.check is set to true inside the mapred-site.xml in the Cloudera configuration directory. Still, you might end up with other issues, so I highly recommend using the same version instead.
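    For reference, the override looks like this inside mapred-site.xml (treat it as a last resort; matching the versions exactly is the real fix):

```xml
<!-- mapred-site.xml: skip the TaskTracker/JobTracker version check.
     Use only if you cannot make the two versions match exactly. -->
<property>
  <name>hadoop.skip.worker.version.check</name>
  <value>true</value>
</property>
```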

  4. Untar the file:
    $ tar zxf hadoop-2.3.0-cdh5.1.2.tar.gz
  5. Clone hadoopOnMesos:
    $ git clone https://github.com/mesos/hadoop.git hadoopOnMesos
  6. Build hadoopOnMesos:
    $ cd hadoopOnMesos
    $ mvn package
    $ cd ..
    1. The jar file will be located in the hadoopOnMesos/target/ directory
  7. Copy the jar both to the cloudera installation directories and the hadoop distribution you just downloaded:
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib/
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar hadoop-2.3.0-cdh5.1.2/share/hadoop/common/lib/

    Next, modify the Cloudera distribution to run MapReduce v1 instead of YARN. Whether you installed HDFS with the apt package manager or plan to use the distribution you just downloaded, you have to do the following, otherwise you will get this error:

    java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/TaskTracker
    at org.apache.hadoop.mapred.MesosExecutor.launchTask(MesosExecutor.java:75)
  8. Configure CDH5 to run with MRv1 instead of YARN:
    $ cd hadoop-2.3.0-cdh5.1.2
    $ mv bin bin-mapreduce2
    $ mv examples examples-mapreduce2
    $ ln -s bin-mapreduce1 bin
    $ ln -s examples-mapreduce1 examples
    $ pushd etc
    $ mv hadoop hadoop-mapreduce2
    $ ln -s hadoop-mapreduce1 hadoop
    $ popd
    $ pushd share/hadoop
    $ rm mapreduce
    $ ln -s mapreduce1 mapreduce
    $ popd
  9. Moreover, if you have installed Cloudera CDH5 with apt-get (see my post here), you should do the following configuration, as also described in this GitHub issue:
    $ cd ..
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib
    $ cat > /etc/profile.d/hadoop.sh
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    (press Ctrl-D to end the input)
    $ chmod +x /etc/profile.d/hadoop.sh
    $ . /etc/profile.d/hadoop.sh
    $ rm -f hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz
    $ tar czf hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz hadoop-2.3.0-cdh5.1.2/
    $ hadoop fs -put hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /

    Note that the two export lines are the contents of /etc/profile.d/hadoop.sh (they are not typed at the shell prompt), and that the script is sourced with “.” rather than executed, so the variables take effect in the current shell.

    For this step, depending on your configuration, you might need to add permissions for your root user, because in a Cloudera installation the hdfs user (not root) is the HDFS superuser. Alternatively, you can simply copy the .tar.gz file to a directory owned by the hdfs user and hadoop group, and then put the file into HDFS by running:

    $ sudo -u hdfs hadoop fs -put /hdfsuser/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /
  10. Put the Mesos properties into /etc/hadoop/conf.cluster-name/mapred-site.xml. For a detailed list of the properties you can set, take a look here. The following properties in the mapred-site.xml file are necessary:
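    The exact values depend on your cluster, so the following is only a sketch: it uses the property names documented by the mesos/hadoop project, with placeholder hostnames and the tarball uploaded in the previous step. Adjust the master address, ports, and HDFS path to your setup.

```xml
<!-- Illustrative values only: replace the hostnames, ports, and the
     HDFS path with the ones for your cluster. -->
<property>
  <name>mapred.job.tracker</name>
  <value>master.example.com:9001</value>
</property>
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.MesosScheduler</value>
</property>
<property>
  <name>mapred.mesos.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>
<property>
  <name>mapred.mesos.master</name>
  <value>master.example.com:5050</value>
</property>
<property>
  <name>mapred.mesos.executor.uri</name>
  <value>hdfs://master.example.com:9000/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz</value>
</property>
```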
  11. Edit the default Hadoop startup script with any editor you want:
    $ vim /usr/lib/hadoop-0.20-mapreduce/bin/hadoop-daemon.sh

    At the beginning of this script, export the location of the Mesos native library:

    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so

    If you don’t do this you will get an error that looks like the following when you try to start the jobtracker:

    FATAL org.apache.hadoop.mapred.JobTrackerHADaemon: Error encountered requiring JT shutdown. Shutting down immediately.
    java.lang.UnsatisfiedLinkError: no mesos in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:54)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79)
    at org.apache.mesos.MesosSchedulerDriver.<init>(MesosSchedulerDriver.java:61)
    at org.apache.hadoop.mapred.MesosScheduler.start(MesosScheduler.java:188) 
  12. Start the JobTracker:
    $ service hadoop-0.20-mapreduce-jobtracker start

    You should be able to see the JobTracker process by running “jps”.

  13. Now you should test the setup:
    $ su mapred
    $ echo "I love UCSB" > /tmp/file0
    $ echo "Do you love UCSB?" > /tmp/file1
    $ hadoop fs -mkdir -p /user/foo/data
    $ hadoop fs -put /tmp/file? /user/foo/data
    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples-2.3.0-mr1-cdh5.1.2.jar wordcount /user/foo/data /user/foo/out
    $ hadoop fs -ls /user/foo/out
    $ hadoop fs -cat /user/foo/out/part*
  14. Note that we didn’t install or start a TaskTracker on the slave machines. Mesos is responsible for starting a TaskTracker on a slave when you submit a Hadoop job.