Issue
I am trying to test a very simple Hadoop MapReduce job on my computer (Mac OS X 10.7) against the local filesystem (standalone mode). The job takes a .csv file (data-01) and counts the occurrences of some fields.
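For illustration, here is a minimal sketch of what such a counting job looks like with the new MapReduce API; the class names and the CSV column index are placeholders, not my actual code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FieldCount {

    // Emits (field value, 1) for one CSV column; the column index is a placeholder.
    public static class FieldMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text field = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] columns = value.toString().split(",");
            if (columns.length > 2) {          // assumed column of interest
                field.set(columns[2]);
                context.write(field, ONE);
            }
        }
    }

    // Sums the counts for each distinct field value.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "field count");
        job.setJarByClass(FieldCount.class);
        job.setMapperClass(FieldMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}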
I downloaded CDH4 Hadoop and ran the job; it seemed to start normally, but after all the splits were processed I got the following error:
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:9999220736+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000299_0
13/03/12 12:11:18 INFO mapred.Task: Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10032775168+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000300_0
13/03/12 12:11:18 INFO mapred.Task: Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10066329600+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000301_0
13/03/12 12:11:18 INFO mapred.Task: Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10099884032+33554432
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:18 INFO mapred.LocalJobRunner: Starting task: attempt_local2133287029_0001_m_000302_0
13/03/12 12:11:18 INFO mapred.Task: Using ResourceCalculatorPlugin : null
13/03/12 12:11:18 INFO mapred.MapTask: Processing split: file:/path/in/data-01:10133438464+32025555
13/03/12 12:11:18 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/03/12 12:11:19 INFO mapred.LocalJobRunner: Map task executor complete.
13/03/12 12:11:19 WARN mapred.LocalJobRunner: job_local2133287029_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:399)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:949)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:389)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:78)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:668)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:740)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
13/03/12 12:11:19 INFO mapreduce.Job: Job job_local2133287029_0001 failed with state FAILED due to: NA
13/03/12 12:11:19 INFO mapreduce.Job: Counters: 0
I get the same error no matter how small the input file is...
Solution
It turned out that the default options were overriding my local configuration (I still don't understand why).
export HADOOP_CLIENT_OPTS="-Xmx1024m"
solved the problem.
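For context: the OutOfMemoryError is thrown in MapTask$MapOutputBuffer.init, which allocates the in-memory map-side sort buffer, and with the LocalJobRunner every map task runs inside the single client JVM, so the heap set by HADOOP_CLIENT_OPTS is the one that matters here. If raising the client heap were not an option, shrinking the sort buffer in the job configuration should avoid the allocation failure as well. This is an untested alternative on my side, and the property name below is the Hadoop 2.x / CDH4 one (older releases call it io.sort.mb):

Configuration conf = new Configuration();
// Shrink the per-map-task sort buffer (default 100 MB) so it fits in a small heap.
// "mapreduce.task.io.sort.mb" is the Hadoop 2.x name; older releases use "io.sort.mb".
conf.setInt("mapreduce.task.io.sort.mb", 50);
Job job = Job.getInstance(conf, "field count");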
Answered By - ngrislain