Apache Thrift Installation and Usage Tutorial by Example

In the past I’ve worked with Apache Thrift in a research project where I wanted to expose the functionality of a common Python server to different clients running in Java and Python (you can find more about this project here). In this post you can find a short video tutorial for installing and using Apache Thrift. For those who do not like videos, you can just check the detailed installation documentation here and my github repo with sample code.
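
As a rough sketch of the workflow (the calc.thrift file name is just a placeholder for whatever service definition you write), the Thrift compiler generates the language bindings that the Python server and the Java/Python clients share:

$ thrift --gen py calc.thrift    # generates the Python stubs under gen-py/
$ thrift --gen java calc.thrift  # generates the Java stubs under gen-java/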


Using S3 with Eucalyptus

In order to use S3 with Eucalyptus you can choose mainly between two alternatives: the s3curl tools (https://github.com/eucalyptus/s3curl) or the s3cmd tools (https://github.com/eucalyptus/s3cmd). If you don’t have any specific reason to use s3curl I would recommend you go with the s3cmd tools; s3curl currently does not support copying directories, and this will make your life a little bit harder. This post, though, helps you set up s3curl to use with Eucalyptus. For s3cmd you can just follow the instructions on the github repo here.
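
If you do go with s3cmd, a minimal session looks roughly like this (the bucket and file names are just placeholders; s3cmd --configure will interactively ask for your access key, secret key and endpoint):

$ s3cmd --configure                          # interactive setup of keys and endpoint
$ s3cmd mb s3://my-bucket                    # create a bucket
$ s3cmd put backup.tar.gz s3://my-bucket/    # upload a file
$ s3cmd ls s3://my-bucket                    # list the bucket contents
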
* These instructions are tested on Eucalyptus 3.4.2 with Ubuntu 12.04.
You can download s3curl here
You first need to install some missing Perl modules – they are specified between lines 20 and 24 of the script. In my case they were:
perl -MCPAN -e "install Digest::HMAC_SHA1"
perl -MCPAN -e "install URI::Escape"

Then you need to edit the s3curl.pl file, commenting out the @endpoints definition at line 30 and replacing it with:

my @endpoints = ( 'walrus.whatever-is-your-url.com') 

Then for example you can run:

$ s3curl.pl --id=$AWS_ACCESS_KEY --key=$AWS_SECRET_KEY -- walrus.whatever-is-your-url.com:8773

For a detailed list of commands you can run with s3curl, take a look at the README file.
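
Beyond plain listing, a couple of operations I find handy are sketched below (the --createBucket and --put flags come from the s3curl README, and the /services/Walrus path is the usual Eucalyptus Walrus prefix – double-check both against your setup):

$ s3curl.pl --id=$AWS_ACCESS_KEY --key=$AWS_SECRET_KEY --createBucket -- http://walrus.whatever-is-your-url.com:8773/services/Walrus/mybucket
$ s3curl.pl --id=$AWS_ACCESS_KEY --key=$AWS_SECRET_KEY --put=backup.tar.gz -- http://walrus.whatever-is-your-url.com:8773/services/Walrus/mybucket/backup.tar.gz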

Apache Hama on Mesos

This post describes how you can set up Apache Hama to work with Apache Mesos. In another post I also describe how you can set up Apache Hadoop (http://wp.me/p3pTv0-j) and Apache Spark (http://wp.me/p3pTv0-c) to work on Mesos.

*The instructions have been tested with mesos 0.20.0 and Hama 0.7.0. My cluster is a Eucalyptus private cloud (Version 3.4.2) with an Ubuntu 12.04.4 LTS image, but the instructions should work for any cluster running Ubuntu or even different Linux distributions after some small changes.

Prerequisites:

  • I assume you have already set up HDFS CDH 5.1.2 on your cluster. If not, follow my post here
  • I also assume you have already installed Mesos. If not, follow the instructions here

Installation Steps:

  • IMPORTANT: The current git repo has a bug when working with CDH5 on Mesos. If you compile this version you will get an error similar to the following:
    ERROR bsp.MesosExecutor: Caught exception, committing suicide.
    java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.LocalFileSystem not found
            at java.util.ServiceLoader.fail(ServiceLoader.java:231)
            at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
            at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:365)
            at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
            at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2364)
            at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2375)
            at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
            at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
            at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
            at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
            at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
            at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:339)
            at org.apache.hama.bsp.GroomServer.deleteLocalFiles(GroomServer.java:483)
            at org.apache.hama.bsp.GroomServer.initialize(GroomServer.java:321)
            at org.apache.hama.bsp.GroomServer.run(GroomServer.java:860)
            at org.apache.hama.bsp.MesosExecutor$1.run(MesosExecutor.java:92)
    
  • SOLUTION: Thanks to the Hama community, and in particular Jeff Fenchel, the bug was quickly fixed. You can get a patched Hama 0.7.0 version from Jeff’s github repo here, or from my forked repo here in case Jeff deletes it in the future. When the patch is merged I will update the links.

So now we are good to go:

  1. Remove any previous versions of Hama from your HDFS:
    $ hadoop fs -rm /hama.tar.gz
  2. Build Hama for the particular HDFS and Mesos version you are using:
    $ mvn clean install -Phadoop2 -Dhadoop.version=2.3.0-cdh5.1.2 -Dmesos.version=0.20.0 -DskipTests
  3. Put it to HDFS (careful with the naming of your tar file – it has to be the same as the one in your configuration file):
    $ hadoop fs -put dist/target/hama-0.7.0-SNAPSHOT.tar.gz /hama.tar.gz
  4. Move to the distribution directory:
    $ cd dist/target/hama-0.7.0-SNAPSHOT/hama-0.7.0-SNAPSHOT/
  5. Make sure the LD_LIBRARY_PATH or MESOS_NATIVE_LIBRARY environment variables point to your Mesos installation libraries. These can be under /usr/lib/mesos (default) or wherever you specified when installing Mesos. For example:
    $ export LD_LIBRARY_PATH=/root/mesos-installation/lib/

    If you don’t set them correctly then you will get an ugly stack trace like this one in your bspmaster logs and the bspmaster won’t start:

    2014-11-08 01:23:50,646 FATAL org.apache.hama.BSPMasterRunner: java.lang.UnsatisfiedLinkError: Expecting an absolute path of the library:
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1062)
            at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:50)
            at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79)
            at org.apache.mesos.MesosSchedulerDriver.<init>(MesosSchedulerDriver.java:61)
            at org.apache.hama.bsp.MesosScheduler.start(MesosScheduler.java:63)
            at org.apache.hama.bsp.MesosScheduler.init(MesosScheduler.java:46)
            at org.apache.hama.bsp.SimpleTaskScheduler.start(SimpleTaskScheduler.java:273)
            at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:523)
            at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:500)
            at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
            at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
  6. Configure Hama with Mesos. I found that the instructions from the Hama wiki are missing some things. For example, they don't mention adding the fs.default.name property, and without it you will get an error like this on the master when running your code:
    Error reading task output: http://euca-x-x-x-x.eucalyptus.race.cs.ucsb.edu:40015/tasklog?plaintext=true&taskid=attempt_201411051242_0001_000000_0&filter=stdout

    and an ugly stack trace on the Groom Server executor log like this one:

    14/11/05 12:43:08 INFO bsp.GroomServer: Launch 1 tasks.
    14/11/05 12:43:38 WARN bsp.GroomServer: Error initializing attempt_201411051242_0001_000000_0:
    java.io.FileNotFoundException: File file:/mnt/bsp/system/submit_6i82le/job.xml does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:511)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:724)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:501)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:397)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
    at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1913)
    at org.apache.hama.bsp.GroomServer.localizeJob(GroomServer.java:707)
    at org.apache.hama.bsp.GroomServer.startNewTask(GroomServer.java:562)
    at org.apache.hama.bsp.GroomServer.access$000(GroomServer.java:86)
    at org.apache.hama.bsp.GroomServer$DispatchTasksHandler.handle(GroomServer.java:173)
    at org.apache.hama.bsp.GroomServer$Instructor.run(GroomServer.java:237)
    14/11/05 12:43:39 INFO bsp.GroomServer: Launch 1 tasks.
    14/11/05 12:43:40 INFO bsp.MesosExecutor: Killing task : Task_0
    14/11/05 12:43:40 INFO ipc.Server: Stopping server on 31000
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 0 on 31000: exiting
    14/11/05 12:43:40 INFO ipc.Server: Stopping IPC Server listener on 31000
    14/11/05 12:43:40 INFO ipc.Server: Stopping IPC Server Responder
    14/11/05 12:43:40 INFO ipc.Server: Stopping server on 50001
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 1 on 50001: exiting
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 3 on 50001: exiting
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 4 on 50001: exiting
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 0 on 50001: exiting
    14/11/05 12:43:40 INFO ipc.Server: IPC Server handler 2 on 50001: exiting
    14/11/05 12:43:40 INFO ipc.Server: Stopping IPC Server listener on 50001
    14/11/05 12:43:40 INFO ipc.Server: Stopping IPC Server Responder
    14/11/05 12:43:42 WARN mortbay.log: /tasklog: java.io.IOException: Closed
    
    

    To be fair, you are able to find all the configuration you need in the Hama configuration instructions, but it's not all on one page.

    So, to save you from some of the issues you might encounter configuring Hama, the hama-site.xml configuration in my case looks like this:

    <configuration>
     <property>
     <name>bsp.master.address</name>
     <value>euca-10-2-112-10.eucalyptus.internal:40000</value>
     <description>The address of the bsp master server. Either the
     literal string "local" or a host[:port] (where host is a name or
     IP address) for distributed mode.
     </description>
     </property>
     <property>
     <name>bsp.master.port</name>
     <value>40000</value>
     <description>The port master should bind to.</description>
     </property>
     <property>
     <name>bsp.master.TaskWorkerManager.class</name>
     <value>org.apache.hama.bsp.MesosScheduler</value>
     <description>Instructs the scheduler to use Mesos to execute tasks of each job
     </description>
     </property>
     <property>
     <name>fs.default.name</name>
     <value>hdfs://euca-10-2-112-10.eucalyptus.internal:9000</value>
     <description>
     The name of the default file system. Either the literal string
     "local" or a host:port for HDFS.
     </description>
     </property>
     <property>
     <name>hama.mesos.executor.uri</name>
     <value>hdfs://euca-10-2-112-10.eucalyptus.internal:9000/hama.tar.gz</value>
     <description>This is the URI of the Hama distribution
     </description>
     </property>
    
     <!-- Hama requires one cpu and memory defined by bsp.child.java.opts for each slot.
     This means that a cluster with bsp.tasks.maximum.total set to 2 and bsp.child.java.opts set to -Xmx1024m
     will need at least 2 cpus and 2048m of memory. -->
    
     <property>
     <name>bsp.tasks.maximum.total</name>
     <value>2</value>
     <description>This is an override for the total maximum tasks that may be run.
     The default behavior is to determine a value based on the available groom servers.
     However, if using Mesos, the groom servers are not yet allocated.
     So, a value indicating the maximum number of slots available in the cluster is needed.
     </description>
     </property>
    
     <property>
     <name>hama.mesos.master</name>
     <value>zk://euca-10-2-112-10.eucalyptus.internal:2181/mesos</value>
     <description>This is the address of the Mesos master instance.
     If you're using Zookeeper for master election, use the Zookeeper address here (i.e.,zk://zk.apache.org:2181/hadoop/mesos).
     </description>
     </property>
     <property>
     <name>bsp.child.java.opts</name>
     <value>-Xmx1024m</value>
     <description>Java opts for the groom server child processes.
     </description>
     </property>
    
     <property>
     <name>bsp.system.dir</name>
     <value>${hadoop.tmp.dir}/bsp/system</value>
     <description>The shared directory where BSP stores control files.
     </description>
     </property>
     <property>
     <name>bsp.local.dir</name>
     <value>/mnt/bsp/local</value>
     <description>local directory for temporal store.</description>
     </property>
     <property>
     <name>hama.tmp.dir</name>
     <value>/mnt/hama/tmp/hama-${user.name}</value>
     <description>Temporary directory on the local filesystem.</description>
     </property>
     <property>
     <name>bsp.disk.queue.dir</name>
     <value>${hama.tmp.dir}/messages/</value>
     <description>Temporary directory on the local message buffer on disk.</description>
     </property>
     <property>
     <name>hama.zookeeper.quorum</name>
     <value>euca-10-2-112-10.eucalyptus.internal</value>
     <description>Comma separated list of servers in the ZooKeeper Quorum.
     For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
     By default this is set to localhost for local and pseudo-distributed modes
     of operation. For a fully-distributed setup, this should be set to a full
     list of ZooKeeper quorum servers. If HAMA_MANAGES_ZK is set in hama-env.sh
     this is the list of servers which we will start/stop zookeeper on.
     </description>
     </property>
     <property>
     <name>hama.zookeeper.property.clientPort</name>
     <value>2181</value>
     <description>The port to which the zookeeper clients connect
     </description>
     </property>
    
    </configuration>
     
  7. You can also configure hama-env.sh if, for example, you want to output the logs to a different directory than the default. *Note: You don’t need to ship the configuration to your slaves.
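    For example, a minimal tweak would look like this (HAMA_LOG_DIR is the variable the stock hama-env.sh exposes for this; the directory itself is just an example):

    export HAMA_LOG_DIR=/mnt/hama/logs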
  8. Now we are ready to run bspmaster:
    $ ./bin/hama-daemon.sh start bspmaster
    

    Make sure that the bspmaster started without problems by checking the log files

    $ less hama-root-bspmaster-$HOSTNAME.log 
    $ less hama-root-bspmaster-$HOSTNAME.out
    
  9. If everything went ok you are ready to run some examples:
    $ ./bin/hama jar hama-examples-0.7.0-SNAPSHOT.jar gen fastgen 100 10 randomgraph 2

    Now list the generated graph partitions on your HDFS to make sure everything went fine.
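
    For example (assuming the randomgraph output name used in the command above):

    $ hadoop fs -ls /user/root/randomgraph

    You should get something like this: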

    Found 2 items
    -rw-r--r-- 3 root hadoop 2241 2014-11-07 16:49 /user/root/randomgraph/part-00000
    -rw-r--r-- 3 root hadoop 2243 2014-11-07 16:49 /user/root/randomgraph/part-00001

    These two files are two partitions of a graph with 100 nodes and 1K edges.

  10. Now run the PageRank example on the generated graph:
    $ ./bin/hama jar hama-examples-0.7.0-SNAPSHOT.jar pagerank randomgraph pagerankresult 4

    And again if you list your files in HDFS

    $ hadoop fs -ls /user/root/pagerankresult
    

    you should get something like this:

    Found 2 items
    -rw-r--r-- 3 root hadoop 1194 2014-11-05 16:02 /user/root/pagerankresult/part-00000
    -rw-r--r-- 3 root hadoop 1189 2014-11-05 16:02 /user/root/pagerankresult/part-00001

    You can find more examples to run on the Apache Hama website's examples page here

Spark on Mesos Installation Guide

This post describes how you can set up Apache Spark to work with Apache Mesos. In another post I also describe how you can set up Apache Hadoop to work on Mesos. So by following these instructions you can have both Spark and Hadoop running on the same Mesos cluster.
*The instructions below have been tested with Mesos 0.20 and Spark 1.1.0. (Update 3/16/2016: I have also tested with Mesos 0.27.2 and Spark 1.6.1 on Ubuntu Trusty 14.04 – most steps work as below just by changing the version numbers. I explicitly note where the compilation steps differ between the two versions.)

Prerequisites:

  • I assume you have already set up HDFS on your cluster. If not, follow my post here
  • I also assume you have already installed Mesos. If not, follow the instructions here

To run Spark with Mesos you should do the following:

  1. Download the Spark tar file
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.1.0.tgz 
$ tar zxfv spark-1.1.0.tgz
$ cd spark-1.1.0
  2. Build Spark:
    • (METHOD A)
      $ sbt/sbt assembly
      
    • (METHOD B)
      $ ./make-distribution.sh --tgz

      Careful! Adjust the build flags if you use a different HDFS version than the default (1.0.4). For example, for the Cloudera CDH5.1.2 that I described how to install in another post, use:

      $ ./make-distribution.sh --tgz -Dhadoop.version=2.3.0-cdh5.1.2 -Dprotobuf.version=2.5.0 -DskipTests
      • Compiling with -Dhadoop.version=2.3.0-mr1-cdh5.1.2 won’t work anymore, as the codehaus repo is no longer active and the files now hosted on Apache central have been renamed!
    • In the above example I am specifying the protobuf version in order to avoid the following error, which will be thrown when trying to run Spark (this is no longer needed on the newer Spark 1.6+ versions):
      Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
    • You can find a detailed list of how to build with different HDFS versions here
    • Careful with the Spark 1.1 version! I recommend you compile with JAVA_HOME set to Java 6; afterwards it's fine to set it back to Java 7. If you compile with Java 7 and then run executors with Java 6 you will get an error on the executors. Moreover, I’ve seen this error even with Java 7 on the executors. The worst part is that with a newer version of Spark (1.2.1) there is not even an error thrown on the executors – you just see executors getting lost…
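      For example (a sketch – the OpenJDK paths below are the Ubuntu 12.04 defaults, so adjust them to wherever your JVMs live):

      $ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64
      $ ./make-distribution.sh --tgz -Dhadoop.version=2.3.0-cdh5.1.2 -Dprotobuf.version=2.5.0 -DskipTests
      $ export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # switch back once the build is done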
  3. Tar the version you’ve built and put it into HDFS so it can be shipped to the executors:
    $ tar czfv spark-1.1.0.tgz spark-1.1.0
    $ sudo -u hdfs hadoop fs -put /hdfsuser/spark-x.x.x.tgz /
    

    Careful! If you deploy the wrong .tgz file (such as the one you just downloaded instead of the one produced after running make-distribution) then the tasks will fail. To debug you should check your executor log files. In this case the error will look like this:

    ls: cannot access /tmp/mesos/slaves/20140913-135403-421003786-5050-24966-0/frameworks/20140914-153830-421003786-5050-27925-0001/executors/20140913-135403-421003786-5050-24966-0/runs/b17fb191-4db1-4aa7-8363-3ff0f12d88a3/spark-1.1.0/assembly/target/scala-2.10: No such file or directory
    
  4. Modify spark-1.1.0/conf/spark-env.sh by adding the following lines of configuration:
    export MESOS_NATIVE_LIBRARY=/root/mesos-installation/lib/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://euca-10-2-24-25:9000/spark-1.1.0-bin-2.3.0.tgz
    export HADOOP_CONF_DIR=/etc/hadoop/conf.mesos-cluster/
    
  5. To run your applications with spark-submit without having to modify your code each time, edit the configuration file spark-1.1.0/conf/spark-defaults.conf by adding the following lines:
    # 10.2.24.25 is the internal Eucalyptus IP of Master
    spark.master mesos://zk://10.2.24.25:2181/mesos
    # euca-10-2-24-25 is the hostname – put there whatever you get by running the command `hostname`
    spark.executor.uri hdfs://euca-10-2-24-25:9000/spark-1.1.0-bin-2.3.0.tgz
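
    With the master and executor URI configured, a quick sanity check (a sketch; run-example ships with the Spark distribution and SparkPi is one of the bundled examples, so no extra code is needed) is:

    $ ./bin/run-example SparkPi 10

    If a Spark framework shows up in the Mesos web UI and the job prints an estimate of Pi, the setup is working.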
    

Hadoop on Mesos Installation Guide

This post describes how you can set up Apache Hadoop to work with Apache Mesos. In another post I also describe how you can set up Apache Spark to work on Mesos. So by following these instructions you can have Spark and Hadoop running on the same Mesos cluster. My cluster is a Eucalyptus private cloud (Version 3.4.2) with an Ubuntu 12.04.4 LTS image, but the instructions should work for any cluster running Ubuntu or even different Linux distributions after some small changes.

Prerequisites:

  • I assume you have already set up HDFS on your cluster. If not, follow my post here
  • I also assume you have already installed Mesos. If not, follow the instructions here

Installation Steps:

  1. Get a hadoop distribution:
    wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.3.0-cdh5.1.2.tar.gz

    Careful: if you already have HDFS installed through the apt package manager, make sure you download the same version as the one you are using.

  2. Run “hadoop version” to check the version you are using
  3. If for any reason you have two different versions, then the jobtracker running on your master and the TaskTracker running on the slaves will use different versions. This can lead to some confusing issues. For example, the tasks will be marked as “finished” by Mesos but will actually be unfinished, and you will see a fatal error in the log files of your executor. The error will look like this:
    FATAL mapred.TaskTracker: Shutting down. Incompatible version or revision.TaskTracker version 
    '2.3.0-cdh5.1.0' and build '2.3.0-cdh5.1.0 from 8e266e052e423af592871e2dfe09d54c03f6a0e     
    by jenkins source checksum 7ec68264497939dee7ab5b91250cbd9' and JobTracker version '2.3.0-cdh5.1.2' 
    and build '2.3.0-cdh5.1.2 from 8e266e052e423af592871e2dfe09d54c03f6a0e8 by jenkins source checksum 
    ec11b8ec19ca2bf3e7cb1bbe4ee182 and hadoop.relaxed.worker.version.check is
     enabled and hadoop.skip.worker.version.check is not enabled.

    To avoid this error, be sure to have hadoop.skip.worker.version.check set to true inside the mapred-site.xml in the Cloudera configuration directory. Still, you might end up having other issues – so I highly recommend using the same version instead.
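
    For reference, that property would look like this in mapred-site.xml (a sketch – and again, matching the versions is the cleaner fix):

    <property>
     <name>hadoop.skip.worker.version.check</name>
     <value>true</value>
    </property>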

  4. Untar the file:
    $ tar zxf hadoop-2.3.0-cdh5.1.2.tar.gz
  5. Clone hadoopOnMesos:
    $ git clone https://github.com/mesos/hadoop.git hadoopOnMesos
  6. Build hadoopOnMesos:
    $ mvn package
    1. The jar file will be located in the target/ directory
  7. Copy the jar both to the cloudera installation directories and the hadoop distribution you just downloaded:
     $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib/
    $ cp hadoopOnMesos/target/hadoop-mesos-0.0.8.jar hadoop-2.3.0-cdh5.1.2/share/hadoop/common/lib/

    Next, we modify Cloudera to run MapReduce v1 instead of YARN. Whether you have installed HDFS with the apt package manager or you are planning to use the distribution you just downloaded, you have to do the following to avoid getting this error:

    java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/TaskTracker
    at org.apache.hadoop.mapred.MesosExecutor.launchTask(MesosExecutor.java:75)
  8. Configure CDH5 to run with MRv1 instead of YARN:
    $ cd hadoop-2.3.0-cdh5.1.2
    $ mv bin bin-mapreduce2
    $ mv examples examples-mapreduce2
    $ ln -s bin-mapreduce1 bin
    $ ln -s examples-mapreduce1 examples
    $ pushd etc
    $ mv hadoop hadoop-mapreduce2
    $ ln -s hadoop-mapreduce1 hadoop
    $ popd
    $ pushd share/hadoop
    $ rm mapreduce
    $ ln -s mapreduce1 mapreduce
    $ popd
    
  9. Moreover, if you have installed Cloudera CDH5 with apt-get (see my post here), you should do the following configuration, as also described in this github issue:
    $ cp target/hadoop-mesos-0.0.8.jar /usr/lib/hadoop-0.20-mapreduce/lib
    $ cat > /etc/profile.d/hadoop.sh   # the next two export lines are the file contents; finish with Ctrl-D
    export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    
    $ chmod +x /etc/profile.d/hadoop.sh
    $ . /etc/profile.d/hadoop.sh       # source it so the variables take effect in the current shell
    
    $ cd ..
    $ rm hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz
    $ tar czf hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz hadoop-2.3.0-cdh5.1.2/
    $ hadoop fs -put hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /
    

    For this step, depending on your configuration, you might need to grant write permissions to your root user (because for Cloudera the HDFS superuser is hdfs, not root), or you can simply copy the .tar.gz file into a directory readable by the hdfs user and hadoop group (e.g. /hdfsuser) and then put the file into HDFS by running:

    $ sudo -u hdfs hadoop fs -put /hdfsuser/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz /
    
  10. Put the Mesos properties into /etc/hadoop/conf.cluster-name/mapred-site.xml. For a detailed list of properties that you can set take a look here. The following properties in the mapred-site.xml file are necessary:
    
    <property>
     <name>mapred.jobtracker.taskScheduler</name>
     <value>org.apache.hadoop.mapred.MesosScheduler</value>
    </property>
    <property>
     <name>mapred.mesos.taskScheduler</name>
     <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
    </property>
    <property>
     <name>mapred.mesos.master</name>
     <value>zk://euca-x-x-x-x.eucalyptus.internal:2181/mesos</value>
    </property>
    <property>
     <name>mapred.mesos.executor.uri</name>
     <value>hdfs://euca-x-x-x-x.eucalyptus.internal:9000/hadoop-2.3.0-cdh5.1.2-mesos-0.20.tar.gz</value>
    </property>
    <property>
     <name>mapred.job.tracker</name>
     <value>euca-x-x-x-x.eucalyptus.internal:9001</value>
    </property>
    
  11. Edit the hadoop daemon startup script with any editor you want:
    $ vim /usr/lib/hadoop-0.20-mapreduce/bin/hadoop-daemon.sh

    At the beginning of this script, export the location of the Mesos native library:

    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    

    If you don’t do this you will get an error that looks like the following when you try to start the jobtracker:

    FATAL org.apache.hadoop.mapred.JobTrackerHADaemon: Error encountered requiring JT shutdown. Shutting down immediately.
    java.lang.UnsatisfiedLinkError: no mesos in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:54)
    at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:79)
    at org.apache.mesos.MesosSchedulerDriver.<init>(MesosSchedulerDriver.java:61)
    at org.apache.hadoop.mapred.MesosScheduler.start(MesosScheduler.java:188) 
    
  12. Start the jobtracker
    $ service hadoop-0.20-mapreduce-jobtracker start

    You should be able to see the jobtracker process by running

    $ jps
  13. Now you should test the setup:
    $ su mapred
    $ echo "I love UCSB" > /tmp/file0
    $ echo "Do you love UCSB?" > /tmp/file1
    $ hadoop fs -mkdir -p /user/foo/data
    $ hadoop fs -put /tmp/file? /user/foo/data
    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples-2.3.0-mr1-cdh5.1.2.jar wordcount /user/foo/data /user/foo/out
    $ hadoop fs -ls /user/foo/out
    $ hadoop fs -cat /user/foo/out/part*
    
  14. *Notice that we didn’t install or start a TaskTracker on the slave machines. Mesos is responsible for starting a TaskTracker when you submit a Hadoop job.

Cloudera HDFS CDH5 Installation to use with Mesos

This post is a guide for installing Cloudera HDFS CDH5 on Eucalyptus (Version 3.4.2) in order to use it later with Apache Mesos. The difference from a regular HDFS installation is that we don’t start the TaskTracker on the slave nodes; Mesos will do that each time a Hadoop job runs.

The steps you should take are the following:

  • Disable iptables firewall:
    $ ufw disable
  • Disable selinux:
    $ setenforce 0
  • Make sure instances have unique hostnames
  • Make sure the /etc/hosts file on each system has the IP addresses and fully-qualified domain names (FQDN) of all the members of the cluster.
    • hostname --fqdn
    • An example configuration is the following:
      • For the datanode:
        127.0.0.1 localhost
        10.2.24.25 euca-10-2-24-25.eucalyptus.internal
        x.x.x.x euca-10-2-24-25
        
      • For the namenode:
        127.0.0.1 localhost
        10.2.85.213 euca-10-2-85-213.eucalyptus.internal
        y.y.y.y euca-10-2-85-213
        
      • where x.x.x.x and y.y.y.y are the external IPs of your namenode and datanode respectively.
  • Be careful to create the directories that the datanode and namenode will use; they will be set later in the configuration files inside the /etc/hadoop/conf.name directory. Also remember to change their ownership to hdfs:hdfs
    • on data node:
      $ mkdir -p /mnt/cloudera-hdfs/1/dfs/dn /mnt/cloudera-hdfs/2/dfs/dn /mnt/cloudera-hdfs/3/dfs/dn /mnt/cloudera-hdfs/4/dfs/dn
      $ chown -R hdfs:hdfs /mnt/cloudera-hdfs/1/dfs/dn /mnt/cloudera-hdfs/2/dfs/dn /mnt/cloudera-hdfs/3/dfs/dn /mnt/cloudera-hdfs/4/dfs/dn
      • Typically each of the /1 /2 /3 /4 directories should be a different mounted device, though using Eucalyptus volumes to do so might add some latency.
    • on name node:
      $ mkdir -p /mnt/cloudera-hdfs/1/dfs/nn /nfsmount/dfs/nn
      $ chown -R hdfs:hdfs /mnt/cloudera-hdfs/1/dfs/nn /nfsmount/dfs/nn
      $ chmod 700 /mnt/cloudera-hdfs/1/dfs/nn /nfsmount/dfs/nn
  • Deploy configuration to all nodes in the cluster
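    For example, something like the following from the node where you edited the files (a sketch; it assumes the conf.mesos-cluster directory name used in the next step and plain scp access to each node):
    $ scp -r /etc/hadoop/conf.mesos-cluster root@euca-x-x-x-x:/etc/hadoop/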
  • Set the alternatives on each node:
    $ update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.mesos-cluster 50
    $ update-alternatives --set hadoop-conf /etc/hadoop/conf.mesos-cluster
  • Add the configuration to core-site.xml, located under /etc/hadoop/
    • Make sure it uses the hostname, not the IP address, of the NameNode
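      The key property inside core-site.xml's <configuration> block would look roughly like this (fs.default.name matches the naming used in the rest of this post; the hostname and port are the namenode from the /etc/hosts example above, so adjust them to yours):
      <property>
       <name>fs.default.name</name>
       <value>hdfs://euca-10-2-85-213.eucalyptus.internal:9000</value>
      </property>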
  • Install cloudera CDH5: There are multiple ways to do this. Probably the easiest is the following:
    $ wget http://archive.cloudera.com/cdh5/one-click-install/precise/amd64/cdh5-repository_1.0_all.deb
    $ dpkg -i cdh5-repository_1.0_all.deb
    • On the master node:
      $ sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode
    • On the slave nodes:
      $ sudo apt-get update; sudo apt-get install hadoop-hdfs-datanode
      $ sudo apt-get update; sudo apt-get install hadoop-client

      $ cp /etc/hadoop/conf.empty/log4j.properties /etc/hadoop/conf.name/log4j.properties

  • Upgrade/format the namenode:
    $ service hadoop-hdfs-namenode upgrade
    $ sudo -u hdfs hdfs namenode -format
  • *If the storage directories change, HDFS should be reformatted
  • Start HDFS:
    • On master:
      $ service hadoop-hdfs-namenode start
    • On slave:
      $ service hadoop-hdfs-datanode start
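    • To verify that the datanodes have registered with the namenode, you can run the standard report command (run it as the hdfs user, as elsewhere in this post):
      $ sudo -u hdfs hdfs dfsadmin -report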
  • Optionally: Configure your cluster to start the services after a system restart
    • On master node:
      $ update-rc.d hadoop-hdfs-namenode defaults
      
      • If you have also set up ZooKeeper, then:
        $ update-rc.d zookeeper-server defaults
    • On the slaves:
      $ update-rc.d hadoop-hdfs-datanode defaults