Deployment on Heroku


Screen Shot 2014-12-05 at 12.50.10 PM

I recently pushed my AngularJS/Spring Boot/REST application to Heroku.

buildscript {
    repositories {
        maven { url "" }
    }
    dependencies {
    }
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'spring-boot'

mainClassName = "rest.controller.Application"

jar {
    baseName = 'Angular-Boot-Rest'
    version = '0.1.0'
}

repositories {
    maven { url "" }
}

tasks.withType(Copy) {
    eachFile { println it.file }
}

dependencies {
}

task wrapper(type: Wrapper) {
    gradleVersion = '1.11'
}

task stage(dependsOn: ["build"]) {}

I added a new stage task and the mainClassName property. Heroku's Gradle buildpack runs the stage task when the application is deployed.

Heroku allots a free port, exposed through the PORT environment variable, on which Tomcat must bind. If one hardcodes one's own port, the application fails to bind within the 60-second limit Heroku allows.
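A minimal sketch of wiring the embedded Tomcat to that port via application.properties (assuming a standard Spring Boot setup; the 8080 fallback for local runs is my own addition):

```properties
# Bind the embedded Tomcat to the port Heroku assigns via the PORT
# environment variable; fall back to 8080 when running locally.
server.port=${PORT:8080}
```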

Heroku also needs a Procfile, which tells it how to start the application.


web: java $JAVA_OPTS -jar target/Angular-Boot-Rest.jar

This is the screenshot. Note the URL that Heroku allots, too.

Screen Shot 2014-12-05 at 1.02.05 PM

Processed 0.25 TB on Amazon EMR clusters

I did that by provisioning 1 m1.medium master node and 15 m1.xlarge core nodes. This is easy and relatively cheap.
Since I work with Pig I don't have to design my own MapReduce jobs; learning to code MR jobs directly is something for the future.

This command stores the result in a file. I used to count the records in the file, but I realized I don't have to, because the command actually prints how many records it writes.

STORE variable INTO '/user/hadoop/file' USING PigStorage();


This execution cost me $1.76 for about 1 hour. The number of machines is the same as in the previous post.

X = FILTER ntriples BY (subject matches '.*business.*');
y = FOREACH X GENERATE subject AS subject2, predicate AS predicate2, object AS object2 PARALLEL 50;
j = JOIN X BY subject, y BY subject2 PARALLEL 50;
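To materialize the join results that are read back below, the storing step might look like this (a sketch; I am assuming the output path 'join-results', the same name the later LOAD uses):

```pig
-- Write the joined relation to HDFS; Pig reports the record count on completion.
STORE j INTO 'join-results' USING PigStorage();
```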

Screen Shot 2014-08-26 at 8.06.23 PM

Counting the records in the file.

FILE = LOAD 'join-results';
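The LOAD by itself only declares the relation; a minimal sketch of the actual counting step (the grouped and total aliases are my own):

```pig
FILE = LOAD 'join-results';
-- Collapse everything into a single group so the count covers all records.
grouped = GROUP FILE ALL;
-- COUNT_STAR counts every tuple, including those whose first field is null.
total = FOREACH grouped GENERATE COUNT_STAR(FILE);
DUMP total;
```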

Cluster configuration

Screen Shot 2014-08-22 at 11.40.38 AM

So this is the real deal. The Pig Job mentioned in the previous post failed when the actual file was processed on the EMR cluster. It succeeded only after I resized the cluster and added more heap space.

I used 1 m1.small master node, 10 m1.small core nodes and 5 m1.small task nodes. I don't think so many nodes are needed to process this file; the increased heap alone, without the task nodes, would probably have been sufficient.

Screen Shot 2014-08-22 at 11.47.09 AM
Screen Shot 2014-08-22 at 11.47.29 AM

Big Data analysis on the cloud

I was given this dataset (I believe it is RDF). More importantly, I executed some Pig jobs locally, and this is how it worked for me. The main idea here is how it helped me learn about Pig MapReduce jobs.

The data is in quads like this.

<> <> <> <> .
<> <> <> <> .

After processing by another Pig script I started working with this data.


The schema of the data is like this.

count_by_object: {group: chararray,count: long}

x = GROUP count_by_object BY count;
y = FOREACH x GENERATE group,COUNT(count_by_object);

Line 1 shown above groups the tuples by the count. This is what I get.


Line 2 of the Pig script gives me this result.


It is an interesting way to learn Pig, which internally spawns Hadoop MapReduce jobs. But the real fun is the Amazon Elastic MapReduce on-demand clusters. If the file is very large, EMR clusters should be used. It is basically Big Data analysis on the cloud.

My AWS Pig Job

I executed some Pig jobs on Elastic MapReduce by cloning the same cluster I used earlier (see the previous blog post). After that cluster setup, my billing details were these.

I am still learning Pig. A sample of my Pig commands:

grunt> fs -mkdir /user/hadoop
grunt> fs -ls /user/hadoop
grunt> register s3n://
2014-08-20 15:10:26,625 [main] INFO - Downloading file s3n:// to path /tmp/pig8610216688759169361tmp/myudfs.jar
2014-08-20 15:10:26,632 [main] INFO  org.apache.hadoop.fs.s3native.NativeS3FileSystem - Opening 's3n://' for reading
2014-08-20 15:10:26,693 [main] INFO  org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
grunt> raw = LOAD 's3n://' USING TextLoader as (line:chararray);
grunt> ntriples = foreach raw generate FLATTEN(myudfs.RDFSplit3(line)) as (subject:chararray,predicate:chararray,object:chararray);
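To check that the UDF's FLATTEN produced the expected three-column schema, one can ask Pig to print it (a sketch; DESCRIBE simply echoes the relation's schema):

```pig
-- Print the schema of the ntriples relation defined above.
grunt> DESCRIBE ntriples;
```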

After submitting the jobs, one can track them using the JobTracker UI.

The successful completion of the Hadoop Jobs.

Screen Shot 2014-08-20 at 9.03.01 PM

This is an emancipatory experience 🙂 One is set free from the local offshore job experience.

My first AWS cluster

I have deployed to the cloud before, but this time it is AWS.

Screen Shot 2014-08-20 at 10.40.17 AM

Screen Shot 2014-08-20 at 10.42.14 AM

Screen Shot 2014-08-20 at 10.45.01 AM

Screen Shot 2014-08-20 at 10.45.19 AM

A billing alarm for safety.

Screen Shot 2014-08-20 at 11.07.26 AM