spark-shell fails to start

Background

Our team at work was allocated three virtual machines, on which we intended to set up a Hadoop cluster and run Spark on YARN.

Versions

Hadoop 2.7.2
Spark 2.3.2

Problem

After configuring the Hadoop cluster and Spark, launching spark-shell --master yarn fails with the following errors:

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/04/07 14:14:35 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/04/07 14:14:47 ERROR TransportClient: Failed to send RPC 4632628449949268526 to /172.17.1.150:41470: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
19/04/07 14:14:47 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 172.17.1.150:41480, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC 4632628449949268526 to /172.17.1.150:41470: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
    at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
19/04/07 14:14:47 ERROR YarnScheduler: Lost executor 1 on wangwei01: Slave lost
Spark context Web UI available at http://wangwei02:4040
Spark context available as 'sc' (master = yarn, app id = application_1554616735892_0001).
Spark session available as 'spark'.
19/04/07 14:14:52 ERROR YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
19/04/07 14:14:52 ERROR TransportClient: Failed to send RPC 7884125299619016997 to /172.17.1.150:41490: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
19/04/07 14:14:52 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 7884125299619016997 to /172.17.1.150:41490: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
19/04/07 14:14:53 ERROR Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:567)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:95)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:155)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:508)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1804)
    at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1931)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1361)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1930)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:112)
Caused by: java.io.IOException: Failed to send RPC 7884125299619016997 to /172.17.1.150:41490: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.2
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_201)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
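
The driver-side output above only shows that the RPC channel to the ApplicationMaster was closed and that the YARN application exited; the real cause has to be found on the YARN side. One way to start, sketched here with the application id printed above, is to ask YARN for the application's final status and diagnostics:

# query the final status and diagnostics of the failed application
yarn application -status application_1554616735892_0001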

The ApplicationMaster log is as follows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/filecache/10/__spark_libs__2498524289906141930.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/04/07 14:14:41 INFO SignalUtils: Registered signal handler for TERM
19/04/07 14:14:41 INFO SignalUtils: Registered signal handler for HUP
19/04/07 14:14:41 INFO SignalUtils: Registered signal handler for INT
19/04/07 14:14:41 INFO SecurityManager: Changing view acls to: root
19/04/07 14:14:41 INFO SecurityManager: Changing modify acls to: root
19/04/07 14:14:41 INFO SecurityManager: Changing view acls groups to: 
19/04/07 14:14:41 INFO SecurityManager: Changing modify acls groups to: 
19/04/07 14:14:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
19/04/07 14:14:42 INFO ApplicationMaster: Preparing Local resources
19/04/07 14:14:42 INFO ApplicationMaster: ApplicationAttemptId: appattempt_1554616735892_0001_000001
19/04/07 14:14:43 INFO ApplicationMaster: Waiting for Spark driver to be reachable.
19/04/07 14:14:43 INFO ApplicationMaster: Driver now available: wangwei02:34614
19/04/07 14:14:43 INFO TransportClientFactory: Successfully created connection to wangwei02/172.17.1.151:34614 after 47 ms (0 ms spent in bootstraps)
19/04/07 14:14:43 INFO ApplicationMaster: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>{{PWD}}/__spark_conf__/__hadoop_conf__
    SPARK_YARN_STAGING_DIR -> hdfs://wangwei01:9000/user/root/.sparkStaging/application_1554616735892_0001
    SPARK_USER -> root

  command:
    {{JAVA_HOME}}/bin/java \ 
      -server \ 
      -Xmx1024m \ 
      -Djava.io.tmpdir={{PWD}}/tmp \ 
      '-Dspark.driver.port=34614' \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://CoarseGrainedScheduler@wangwei02:34614 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      1 \ 
      --app-id \ 
      application_1554616735892_0001 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr

  resources:
    __spark_libs__ -> resource { scheme: "hdfs" host: "wangwei01" port: 9000 file: "/user/root/.sparkStaging/application_1554616735892_0001/__spark_libs__2498524289906141930.zip" } size: 235228182 timestamp: 1554617677670 type: ARCHIVE visibility: PRIVATE
    __spark_conf__ -> resource { scheme: "hdfs" host: "wangwei01" port: 9000 file: "/user/root/.sparkStaging/application_1554616735892_0001/__spark_conf__.zip" } size: 183160 timestamp: 1554617678303 type: ARCHIVE visibility: PRIVATE

===============================================================================
19/04/07 14:14:43 INFO RMProxy: Connecting to ResourceManager at wangwei01/172.17.1.150:8030
19/04/07 14:14:43 INFO YarnRMClient: Registering the ApplicationMaster
19/04/07 14:14:43 INFO YarnAllocator: Will request 2 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
19/04/07 14:14:43 INFO YarnAllocator: Submitted 2 unlocalized container requests.
19/04/07 14:14:43 INFO ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
19/04/07 14:14:44 INFO AMRMClientImpl: Received new token for : wangwei01:41680
19/04/07 14:14:44 INFO AMRMClientImpl: Received new token for : wangwei02:35865
19/04/07 14:14:44 INFO YarnAllocator: Launching container container_1554616735892_0001_01_000002 on host wangwei01 for executor with ID 1
19/04/07 14:14:44 INFO YarnAllocator: Launching container container_1554616735892_0001_01_000003 on host wangwei02 for executor with ID 2
19/04/07 14:14:44 INFO YarnAllocator: Received 2 containers from YARN, launching executors on 2 of them.
19/04/07 14:14:44 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
19/04/07 14:14:44 INFO ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
19/04/07 14:14:44 INFO ContainerManagementProtocolProxy: Opening proxy : wangwei01:41680
19/04/07 14:14:44 INFO ContainerManagementProtocolProxy: Opening proxy : wangwei02:35865
19/04/07 14:14:46 ERROR ApplicationMaster: RECEIVED SIGNAL TERM
19/04/07 14:14:46 INFO ApplicationMaster: Final app status: UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
19/04/07 14:14:46 INFO ShutdownHookManager: Shutdown hook called
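
The ApplicationMaster log above can also be pulled with the YARN CLI once the application has exited, assuming log aggregation is enabled (a sketch; otherwise it lives under the NodeManager's local log directory, /opt/hadoop-2.7.2/logs/userlogs/ in this setup):

# fetch all container logs for the application, including the AM's
yarn logs -applicationId application_1554616735892_0001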

The AM log above shows that the ApplicationMaster received SIGTERM and shut down before reporting a final status, which is why the driver's RPC connections failed.
The NodeManager log on the node where the AM was launched shows the following (excerpt):

2019-04-07 14:14:44,317 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/appcache/application_1554616735892_0001/container_1554616735892_0001_01_000002/default_container_executor.sh]
2019-04-07 14:14:44,321 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1554616735892_0001_01_000002 transitioned from LOCALIZED to RUNNING
2019-04-07 14:14:46,166 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1554616735892_0001_01_000002
2019-04-07 14:14:46,176 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 29146 for container-id container_1554616735892_0001_01_000002: 234.3 MB of 2 GB physical memory used; 2.8 GB of 4.2 GB virtual memory used
2019-04-07 14:14:46,188 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 29105 for container-id container_1554616735892_0001_01_000001: 330.8 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used
2019-04-07 14:14:46,188 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Process tree for container: container_1554616735892_0001_01_000001 has processes older than 1 iteration running over the configured limit. Limit=2254857728, current usage = 2498314240
2019-04-07 14:14:46,188 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=29105,containerID=container_1554616735892_0001_01_000001] is running beyond virtual memory limits. Current usage: 330.8 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1554616735892_0001_01_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 29110 29105 29105 29105 (java) 542 24 2382413824 84382 /opt/jdk1.8.0_201/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/appcache/application_1554616735892_0001/container_1554616735892_0001_01_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop-2.7.2/logs/userlogs/application_1554616735892_0001/container_1554616735892_0001_01_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg wangwei02:34614 --properties-file /tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/appcache/application_1554616735892_0001/container_1554616735892_0001_01_000001/__spark_conf__/__spark_conf__.properties 
    |- 29105 29104 29105 29105 (bash) 0 0 115900416 307 /bin/bash -c /opt/jdk1.8.0_201/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/appcache/application_1554616735892_0001/container_1554616735892_0001_01_000001/tmp -Dspark.yarn.app.container.log.dir=/opt/hadoop-2.7.2/logs/userlogs/application_1554616735892_0001/container_1554616735892_0001_01_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'wangwei02:34614' --properties-file /tmp/hadoop/hadoop-root/nm-local-dir/usercache/root/appcache/application_1554616735892_0001/container_1554616735892_0001_01_000001/__spark_conf__/__spark_conf__.properties 1> /opt/hadoop-2.7.2/logs/userlogs/application_1554616735892_0001/container_1554616735892_0001_01_000001/stdout 2> /opt/hadoop-2.7.2/logs/userlogs/application_1554616735892_0001/container_1554616735892_0001_01_000001/stderr 

2019-04-07 14:14:46,189 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Removed ProcessTree with root 29105
2019-04-07 14:14:46,189 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1554616735892_0001_01_000001 transitioned from RUNNING to KILLING

This log shows that the AM container was killed because its virtual memory usage (2498314240 bytes, about 2.3 GB) exceeded the allowed limit of 2254857728 bytes, about 2.1 GB, which is the container's 1 GB of physical memory multiplied by the default yarn.nodemanager.vmem-pmem-ratio of 2.1.

Solution

  1. Increase the virtual-to-physical memory ratio
    In yarn-site.xml, raise yarn.nodemanager.vmem-pmem-ratio (default: 2.1), for example:
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>10</value>
</property>
  2. Disable the virtual memory check
    In yarn-site.xml, set yarn.nodemanager.vmem-check-enabled to false:
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
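
Either change takes effect only after the updated yarn-site.xml has been distributed to every node and the NodeManagers have been restarted; a minimal sketch on a stock Hadoop 2.7.2 layout:

# restart YARN so the NodeManagers pick up the new yarn-site.xml
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# verify the fix
spark-shell --master yarn

Of the two options, raising yarn.nodemanager.vmem-pmem-ratio keeps the virtual memory check in place with more headroom, while setting yarn.nodemanager.vmem-check-enabled to false disables it entirely; the latter is common in practice, since JVM processes routinely reserve far more virtual address space than they actually use.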