Processed 0.25 TB on Amazon EMR clusters
August 28, 2014
I did this by provisioning one m1.medium master node and 15 m1.xlarge core nodes, which is easy to set up and relatively cheap.
Since I work with Pig, I don't have to design my MapReduce jobs myself; Pig compiles the script into MR jobs behind the scenes (see the sketch below). Learning to code MR jobs by hand is still on my list for the future.
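To illustrate, here is a minimal Pig Latin sketch; the input path, schema, and aliases are all hypothetical. Pig translates these few lines into one or more MapReduce jobs for you:

-- hypothetical input: tab-delimited (user, url) records
logs    = LOAD '/user/hadoop/input' USING PigStorage('\t') AS (user:chararray, url:chararray);
grouped = GROUP logs BY user;
counts  = FOREACH grouped GENERATE group AS user, COUNT(logs) AS hits;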
The STORE command below writes the result to a file on HDFS. I used to count the records in the output file myself, but I realized I don't have to, because Pig prints how many records it wrote when the job finishes.
-- writes the alias 'variable' to HDFS, tab-delimited by default
STORE variable INTO '/user/hadoop/file' USING PigStorage();
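PigStorage() with no argument writes tab-delimited text; passing a delimiter such as PigStorage(',') changes it. When the job finishes, Pig's job summary reports the write along these lines (the counts here are illustrative, not real output from my run):

Output(s):
Successfully stored 1000000 records (52428800 bytes) in: "/user/hadoop/file"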