A Collection of Hadoop 2.7.2 Runtime Errors

1. References

Understanding MapReduce memory parameters under YARN & XML parameter configuration
MapReduce memory tuning
Common Hive problems (continuously updated...)
Error when running MapReduce: Stack trace: ExitCodeException exitCode=1:
Solutions for "running beyond physical memory" or "beyond virtual memory limits" errors when running Hadoop
The ultimate fix for the Hive error "running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical"

2. Let's Begin

Problem ⑴ Cannot create directory xxx. Name node is in safe mode

The Crime Scene

Exception in thread "main" java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/d4f4a84a-b9a5-4f3f-92db-51e6f232d44f. Name node is in safe mode.

I Have a Plan

① First check whether all the DataNodes are up; if any are down, start them.
Once every DataNode is running, the NameNode should leave safe mode on its own, since it only stays in safe mode until enough block replicas have been reported. You can check the details on the Overview page of the web console at port 50070.

② If all the DataNodes are up and the problem still persists after waiting a while, run the following command to force the NameNode out of safe mode:

bin/hadoop dfsadmin -safemode leave

If the following output appears, safe mode has been turned off successfully:

[root@3217b3148ebd hadoop]# hadoop dfsadmin -safemode leave
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Safe mode is OFF
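
If you just want to check the current state first, the same tool accepts a get action (a read-only query, no side effects):

hdfs dfsadmin -safemode get
# prints "Safe mode is ON" or "Safe mode is OFF"

The DEPRECATED warning in the output above is harmless, by the way: it only means the hdfs command is now preferred over the older hadoop wrapper script.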

Problem ⑵ requested memory < 0, or requested memory > max configured

The Crime Scene

java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

I Have a Plan

This is purely a memory-sizing problem. In the trace above, requestedMemory=1536 is the ApplicationMaster's memory request (the default for yarn.app.mapreduce.am.resource.mb), while maxMemory=1024 is the per-container ceiling (yarn.scheduler.maximum-allocation-mb); since the request exceeds the ceiling, YARN rejects the application.
Check the following parameters:

  1. yarn.app.mapreduce.am.resource.mb
    Memory allocated to the MapReduce ApplicationMaster; defaults to 1536 MB

  2. yarn.nodemanager.resource.memory-mb
    Total physical memory this node makes available for containers; defaults to 8 GB

  3. yarn.scheduler.minimum-allocation-mb
    Minimum memory that can be allocated to a single container; defaults to 1 GB

  4. yarn.scheduler.maximum-allocation-mb
    Maximum memory that can be allocated to a single container; defaults to 8 GB

  5. mapreduce.map.memory.mb
    Memory requested for each map task; defaults to 1 GB

  6. mapreduce.reduce.memory.mb
    Memory requested for each reduce task; defaults to 1 GB

  7. mapreduce.task.io.sort.mb
    Size of the in-memory buffer a task uses when spilling sort data to disk; defaults to 100 MB
To change these, set the yarn.nodemanager.* and yarn.scheduler.* parameters in yarn-site.xml, and the job-level ones (yarn.app.mapreduce.am.resource.mb, despite its prefix, plus the mapreduce.* parameters) in mapred-site.xml, then sync the files to the other machines in the cluster. My configuration is as follows (see the XML sketch after the list):

yarn.app.mapreduce.am.resource.mb      = 200
yarn.nodemanager.resource.memory-mb    = 200
yarn.scheduler.minimum-allocation-mb   = 512
yarn.scheduler.maximum-allocation-mb   = 1024
mapreduce.map.memory.mb                = 1024
mapreduce.reduce.memory.mb             = 1024
mapreduce.task.io.sort.mb              = 100
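
For reference, each of these becomes a <property> entry in the corresponding file. A minimal yarn-site.xml sketch using the scheduler values above (the remaining entries take the identical form in mapred-site.xml):

<!-- yarn-site.xml: container memory bounds from the list above -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>1024</value>
</property>

After editing, copy the files to every node and restart YARN so the ResourceManager and NodeManagers pick up the new values.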

Problem ⑶ beyond virtual memory limits

The Crime Scene

Diagnostics: Container [pid=2848,containerID=container_1593078270463_0001_02_000001] is running 
beyond virtual memory limits. Current usage: 96.2 MB of 512 MB physical 
memory used; 2.6 GB of 1.0 GB virtual memory used. Killing container.

I Have a Plan

This error comes from the way YARN accounts for virtual memory. We set the container's physical memory to 512 MB, and YARN multiplies that by a ratio (default 2.1) to derive the container's virtual memory allowance; when the virtual memory the process actually uses exceeds that allowance, YARN kills the container and reports this error. Raising the ratio (yarn.nodemanager.vmem-pmem-ratio) or disabling the virtual memory check (yarn.nodemanager.vmem-check-enabled) both resolve the problem.
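The numbers in the log above bear this out: 512 MB × 2.1 = 1075.2 MB, roughly the 1.0 GB virtual memory allowance reported, and the container was killed because it actually used 2.6 GB of virtual memory, even though only 96.2 MB of the 512 MB of physical memory was in use.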
Of course, the best fix is still to give the container more memory.
To disable the checks instead, configure yarn-site.xml as follows:

<!-- Disable the physical memory check -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- Disable the virtual memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
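
If you would rather keep the checks and just loosen the virtual memory allowance, raising the ratio is the gentler option (a sketch; the value 4 is an arbitrary example, the default being 2.1):

<!-- Allow up to 4x the container's physical memory as virtual memory -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>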

Problem ⑷ java.lang.IllegalArgumentException: java.net.UnknownHostException: 3118b3248ebd

The Crime Scene

ERROR org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: Error trying to assign container token and NM token to an allocated container container_1593059362586_0003_01_000001
java.lang.IllegalArgumentException: java.net.UnknownHostException: 3118b3248ebd
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
        at org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:258)
        at org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:454)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:988)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:971)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:964)
        at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:789)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:105)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:795)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:776)
        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: 3118b3248ebd
        ... 19 more

I Have a Plan

In /etc/hosts on the affected machine, add an entry mapping the unresolvable hostname to its IP address:

172.173.16.10   3118b3248ebd
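
To confirm the mapping took effect, you can resolve the name locally (a hypothetical session; your IP and hostname will differ):

getent hosts 3118b3248ebd
# expected output: 172.173.16.10   3118b3248ebd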

3. Mission Accomplished

To be continuously updated.
