Audio Signal Processing

Signals (to be expanded)

What is a signal? Loosely speaking, a time series. Informally: a discrete signal is a sequence x = x(0), x(1), ..., x(N-1), where N is the sequence length; a continuous signal is a function x = x(t), t \in R^{+}.
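
As a minimal illustration (a sketch assuming numpy is available; the 5 Hz tone and 100 Hz sampling rate are made-up parameters), here is a discrete signal obtained by sampling a continuous sinusoid:

import numpy as np

fs = 100                              # assumed sampling rate in Hz
n = np.arange(0, 100)                 # discrete sample indices n = 0, 1, ..., N-1
x = np.sin(2 * np.pi * 5 * n / fs)    # x(n): a 5 Hz sinusoid observed only at the sample instants
print(x[:5])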

  • Sampling: collecting discrete data points from continuous data. Sampling can be fast or slow, measured by the "sampling rate" in hertz (Hz). For example, a sampling rate of 1 Hz means one sample per second, and so on.
  • Sampling theorem: the sampling rate must be at least twice the highest frequency in the signal; otherwise frequency aliasing occurs.
  • Alias: when the sampling rate is too low, the same sample points can correspond to signals of different frequencies. Components that would be resolved as high frequencies at a sufficient sampling rate fold back onto lower frequencies and mix with the original content; this overlap is aliasing.
  • Quantize: mapping each sample's amplitude to one of a finite set of levels so it can be stored with a fixed number of bits.
  • Encode: representing the quantized samples in some chosen coding scheme.
  • Compress: a signal may contain redundancy, so its storage can be reduced either without losing any of the original information or while losing as little as possible; this is compression.
  • Convolve: (a * b)[n] = \sum_{k} a[k] b[n-k]
  • Auto-/cross-correlation: r_{xy}[n] = \sum_{k} x[k] y[k+n]. It is a form of convolution (with one signal time-reversed) and measures where two signals are most similar.
  • Analog (continuous) signal: see the description above.
  • Digital (discrete) signal: see the description above.
  • ADC (Analog-to-digital converter): the analog-to-digital chain is, roughly, sampling → quantization → encoding.
  • DAC (Digital-to-analog converter): the digital-to-analog chain is, roughly, decoding → reconstruction (e.g. a hold circuit) → low-pass smoothing.
  • Filter: a function/system that performs some operation on a signal, e.g. keeping certain frequency bands and attenuating others.
  • FIR/IIR (Finite/Infinite Impulse Response): an FIR filter has no feedback, so its impulse response dies out after a finite number of samples; an IIR filter has feedback, so its impulse response lasts indefinitely.
  • Comb/notch filters: a notch filter suppresses a narrow band around one frequency; a comb filter repeats such notches (or peaks) at regularly spaced frequencies, e.g. a fundamental and its harmonics.
  • Modulate: process the signal in some prescribed way, e.g. amplitude modulation or phase modulation. Why modulate? Low-frequency (baseband) signals are not well suited to direct transmission, so they are shifted onto a carrier.
  • Demodulate: the inverse operation that recovers the original signal from a modulated one.
  • Frequency/period: frequency is the number of cycles per second, the period is the duration of one full cycle, so f = \frac{1}{T}. A sinusoid with zero initial phase can be viewed as the projection of rotation at a constant angular velocity: \sin(\omega t) = \sin(\frac{2\pi}{T} t) = \sin(2\pi f t)
  • Angle/phase: for \sin(\omega t + \phi_0), the phase is \omega t + \phi_0 and the initial phase is \phi_0
  • Fourier transform (Fourier series form): f(t) \approx \frac{a_0}{2} + \sum_{n=1}^{N} (a_n \cos(n\omega t) + b_n \sin(n\omega t)); a signal can be approximated by a sum of sinusoids at different frequencies. After the transform, the coefficients a_n, b_n of those sinusoids act as amplitudes, which gives the signal's spectrum.
  • Spectrum: the plot of frequency content obtained from a stretch of signal via the transform above (in practice, the FFT).
  • Phase spectrum: the phase of each frequency component plotted against frequency, the counterpart of the amplitude spectrum.
  • Energy: defined as the squared magnitude summed over the signal, E = \sum_{n} |x(n)|^2
  • Frequency band: a range of frequencies, e.g. [a, b] Hz
  • Bandwidth: in signal terms, the width of the frequency band that a signal or channel occupies, in Hz; in communications the word is also used for the amount of data a channel can carry per unit time.
  • Baseband: the original electrical signal before any modulation (i.e. before any frequency shifting or transformation of its spectrum).
  • Harmonic: for a fundamental frequency f_0, its harmonics are the integer multiples k f_0, k \in N
  • Power-line (mains) frequency: electric power is transmitted at a fixed frequency, so electrical equipment produces interference at that frequency and its harmonics. In China, for example, household power is 220 V at 50 Hz, so interference shows up at 50 Hz and its integer multiples. Some signals, such as EMG, are badly affected by it.
  • Spectrogram: a plain spectrum carries no timing information; a spectrogram recovers it by windowing (or other operations, such as the scaling/translation used in wavelets), trading frequency resolution against time resolution.
  • Window function: to reduce effects such as spectral leakage, the raw signal is multiplied by a window so that each analyzed segment looks more like one period of a periodic signal.
  • Short-time Fourier transform: split the signal into frames, apply a window, and slide the window along the signal, taking a Fourier transform of the short segment inside each frame. This yields one spectrum per frame; stacking the spectra of successive frames produces the time-frequency image (see the sketch after this list).
  • Wavelet transform: my take is that the Fourier transform decomposes a signal onto sinusoidal bases, while the wavelet transform decomposes it onto a different family of basis functions, called wavelets.
  • Gabor transform: based on the theory of Gabor analysis.
  • WVD (Wigner-Ville Distribution): also its (pseudo-)WVD variant.
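
To make a few of these terms concrete (sampling rate, spectrum, STFT), here is a minimal sketch; the 50 Hz tone, its 150 Hz harmonic, the 1000 Hz sampling rate and the frame length are made-up parameters, and it assumes numpy and scipy are installed:

import numpy as np
from scipy import signal

fs = 1000                                  # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)                # one second of sample instants
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)  # 50 Hz tone plus a weaker harmonic

# Amplitude spectrum via the FFT (real-valued input, so rfft)
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(freqs[np.argmax(spectrum)])          # the dominant component, expected around 50 Hz

# Time-frequency view via the short-time Fourier transform
f, frame_times, Zxx = signal.stft(x, fs=fs, nperseg=256)
print(Zxx.shape)                           # (frequency bins, time frames); |Zxx| is the spectrogram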

Sound / Musical Tone (Acoustic/Music)

Sound is a mechanical wave produced by vibration; it propagates through a medium before reaching the receiver.
Musical tone vs. noise: it depends on whether the sound is pleasant at the moment; if it is, it counts as a musical tone, otherwise as noise.
Audio formats: broadly, lossy (compressed) and lossless. Common lossy formats include mp3; common lossless ones include wav and flac. Because wav is so widespread, most audio datasets are distributed as wav files.


Hands-on part:


1. Reading and visualization:

# Read a .wav file with the wave module.
import wave
# Import audio file as wave object
good_morning = wave.open("good-morning.wav", "r")
# Convert wave object to bytes
soundwave_gm = good_morning.readframes(-1)
# View the wav file in byte form
soundwave_gm
# Output:
b'\xfd\xff\xfb\xff\xf8\xff\xf8\xff\xf7\...
# wave returns raw bytes; convert them to a more useful numeric format such as int16, then print the first 10 samples.
import numpy as np
# Convert soundwave_gm from bytes to integers
signal_gm = np.frombuffer(soundwave_gm, dtype='int16')
# Show the first 10 items
signal_gm[:10]
# Output:
array([ -3,  -5,  -8,  -8,  -9, -13,  -8, -10,  -9, -11], dtype=int16)
# The sampling rate and other metadata can also be read.
# Get the frame rate
framerate_gm = good_morning.getframerate()
# Show the frame rate
framerate_gm
# Output:
48000
# Build the timestamps.
# Return evenly spaced values between start and stop
np.linspace(start=1, stop=10, num=10)
# Output:
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])
# Get the timestamps of the good morning sound wave
time_gm = np.linspace(start=0, 
                      stop=len(signal_gm)/framerate_gm,
                      num=len(signal_gm))
# View first 10 time stamps of good morning sound wave
time_gm[:10]
# Output:
array([0.00000000e+00, 2.08334167e-05, 4.16668333e-05, 6.25002500e-05, 
       8.33336667e-05, 1.04167083e-04, 1.25000500e-04, 1.45833917e-04, 
       1.66667333e-04, 1.87500750e-04])
import matplotlib.pyplot as plt
# Initialize figure and setup title
plt.title("Good Afternoon vs. Good Morning")
# x and y axis labels
plt.xlabel("Time (seconds)")
plt.ylabel("Amplitude")
# Add good morning and good afternoon values
# (time_ga and signal_ga are assumed to come from processing "good-afternoon.wav" the same way as above)
plt.plot(time_ga, signal_ga, label="Good Afternoon")
plt.plot(time_gm, signal_gm, label="Good Morning", alpha=0.5)
# Create a legend and show our plot
plt.legend()
plt.show()
(Figure: Good Morning vs. Good Afternoon waveforms)

2. Speech recognition libraries

Some existing Python libraries:

  • CMU Sphinx
  • Kaldi
  • SpeechRecognition
  • Wav2letter++ by Facebook

Here we use the SpeechRecognition library:
pip install SpeechRecognition

# Import the SpeechRecognition library
import speech_recognition as sr
# Create an instance of Recognizer
recognizer = sr.Recognizer()
# Set the energy threshold
recognizer.energy_threshold = 300
# Recognizer class has built-in functions which interact with speech APIs 

# - recognize_bing()
# - recognize_google()
# - recognize_google_cloud()
# - recognize_wit()
# Input: audio_file
# Output: transcribed speech from audio_file
# Import SpeechRecognition library
import speech_recognition as sr
# Setup recognizer instance
recognizer = sr.Recognizer()
# Read in audio file
clean_support_call = sr.AudioFile("clean-support-call.wav")
# Check type of clean_support_call
type(clean_support_call)

Output: <class 'speech_recognition.AudioFile'>

# clean_support_call is currently an AudioFile; it still needs to be converted to AudioData.
with clean_support_call as source:
  # Record the audio
  audio_data = recognizer.record(source, duration=x.x, offset=y.y) # duration is how many seconds of audio to take and offset is the time offset from the start, both in seconds (x.x / y.y are placeholders).
type(audio_data)

Output: <class 'speech_recognition.AudioData'>

# Transcribe speech using the Google web API 
recognizer.recognize_google(audio_data=audio_data, language="en-US") # Google is often unreachable (blocked), so switch to Microsoft instead:
recognizer.recognize_bing(audio_data=audio_data, language="en-US", key="xxxx") # key is the key of the corresponding Microsoft service (an Azure resource must be set up); the wait time inside this function may need to be increased and the URL may also need adjusting. language can be set to another language, following the official language codes, e.g. zh-CN for Chinese. Non-speech sounds (e.g. a bear growling) may return an empty result.

Output: hello I'd like to get some.


# The case of multiple speakers
# Import an audio file with multiple speakers
multiple_speakers = sr.AudioFile("multiple-speakers.wav")
# Convert AudioFile to AudioData
with multiple_speakers as source:
    multiple_speakers_audio = recognizer.record(source)
# Recognize the AudioData
recognizer.recognize_google(multiple_speakers_audio)

Output: one of the limitations of the speech recognition library is that it doesn't recognise different speakers and voices it will just return it all as one block of text

# Import audio files separately
speakers = [sr.AudioFile("s0.wav"), sr.AudioFile("s1.wav"), sr.AudioFile("s2.wav")]
# Transcribe each speaker individually
for i, speaker in enumerate(speakers):
  with speaker as source:
        speaker_audio = recognizer.record(source)
  print(f"Text from speaker {i}: {recognizer.recognize_google(speaker_audio)}"

Output:
Text from speaker 0: one of the limitations of the speech recognition library
Text from speaker 1: is that it doesn't recognise different speakers and voices
Text from speaker 2: it will just return it all as one block a text

# The noisy case
# Import audio file with background noise
noisy_support_call = sr.AudioFile("noisy_support_call.wav")
with noisy_support_call as source:
  # Adjust for ambient noise and record
  recognizer.adjust_for_ambient_noise(source, duration=0.5)
  noisy_support_call_audio = recognizer.record(source)
# Recognize the audio
recognizer.recognize_google(noisy_support_call_audio)

Output: hello ID like to get some help setting up my calories


More! To be studied together with the pydub part:

Create some helper functions to use

# Import os module
import os
# Check the folder of audio files
os.listdir("acme_audio_files")
# Output: ['call_1.mp3', 'call_2.mp3', 'call_3.mp3', 'call_4.mp3']

import speech_recognition as sr
from pydub import AudioSegment
# Import call 1 and convert to .wav
call_1 = AudioSegment.from_file("acme_audio_files/call_1.mp3")
call_1.export("acme_audio_files/call_1.wav", format="wav")
# Transcribe call 1
recognizer = sr.Recognizer()
call_1_file = sr.AudioFile("acme_audio_files/call_1.wav")
with call_1_file as source:
    call_1_audio = recognizer.record(source)
recognizer.recognize_google(call_1_audio)

Functions we'll create:

  • convert_to_wav() converts non-.wav files to .wav files.
  • show_pydub_stats() shows the audio attributes of a .wav file.
  • transcribe_audio() uses recognize_google() to transcribe a .wav file.
# Create function to convert audio file to wav
def convert_to_wav(filename):
  # "Takes an audio file of non .wav format and converts to .wav"
  # Import audio file  
  audio = AudioSegment.from_file(filename)
  # Create new filename  
  new_filename = filename.split(".")[0] + ".wav"
  # Export file as .wav  
  audio.export(new_filename, format="wav")  
  print(f"Converting {filename} to {new_filename}...")

convert_to_wav("acme_studios_audio/call_1.mp3")
#输出:Converting acme_audio_files/call_1.mp3 to acme_audio_files/call_1.wav...

def show_pydub_stats(filename):
  # "Returns different audio attributes related to an audio file."
  # Create AudioSegment instance  
  audio_segment = AudioSegment.from_file(filename)
  # Print attributes  
  print(f"Channels: {audio_segment.channels}")
  print(f"Sample width: {audio_segment.sample_width}")
  print(f"Frame rate (sample rate): {audio_segment.frame_rate}")
  print(f"Frame width: {audio_segment.frame_width}")
  print(f"Length (ms): {len(audio_segment)}")  
  print(f"Frame count: {audio_segment.frame_count()}")

show_pydub_stats("acme_audio_files/call_1.wav")
# Output:
# Channels: 2
# Sample width: 2
# Frame rate (sample rate): 32000
# Frame width: 4
# Length (ms): 54888
# Frame count: 1756416.0
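
The stats above show this call has two channels. Further below, a per-channel file such as call_3_channel_2.wav (the customer channel) gets transcribed; here is a hedged sketch of how such files could be produced with pydub's split_to_mono() (the output file names are assumptions, and call_3.mp3 is assumed to have been converted to .wav first with convert_to_wav()):

# Split a stereo call into one mono .wav file per channel (sketch; file names are assumptions)
call_3 = AudioSegment.from_file("acme_audio_files/call_3.wav")
channels = call_3.split_to_mono()           # one mono AudioSegment per channel
for i, channel in enumerate(channels, start=1):
    channel.export(f"call_3_channel_{i}.wav", format="wav")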

# Create a function to transcribe audio
def transcribe_audio(filename):
  # "Takes a .wav format audio file and transcribes it to text."
  # Setup a recognizer instance
  recognizer = sr.Recognizer()
  # Import the audio file and convert to audio data
  audio_file = sr.AudioFile(filename)
  with audio_file as source:
    audio_data = recognizer.record(source)
  # Return the transcribed text
  return recognizer.recognize_google(audio_data)

transcribe_audio("acme_audio_files/call_1.wav")
#输出:"hello welcome to Acme studio support line my name is Daniel how can I best help you hey Daniel this is John I've recently bought a smart from you guys and I know that's not good to hear John let's let's get your cell number and then we can we can set up a way to fix it for you one number for 1757 varies how long do you reckon this is going to take about an hour now while John we're going to try our best hour I will we get the sealing member will start up this support case I'm just really really really really I've been trying to contact 34 been put on hold more than an hour and half so I'm not really happy I kind of wanna get this issue 6 is fossil"

Sentiment analysis

$ pip install nltk

# Download required NLTK packages
import nltk
nltk.download("punkt")
nltk.download("vader_lexicon")
# Import sentiment analysis class
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Create sentiment analysis instance
sid = SentimentIntensityAnalyzer()
# Test sentiment analysis on negative text
print(sid.polarity_scores("This customer service is terrible."))
# Output: {'neg': 0.437, 'neu': 0.563, 'pos': 0.0, 'compound': -0.4767}
# Transcribe customer channel of call_3
call_3_channel_2_text = transcribe_audio("call_3_channel_2.wav")
print(call_3_channel_2_text)
#输出:"hey Dave is this any better do I order products are currently on July 1st and I haven't received the product a three-week step down this parable 6987 5"
# Sentiment analysis on customer channel of call_3
sid.polarity_scores(call_3_channel_2_text)
# Output: {'neg': 0.0, 'neu': 0.892, 'pos': 0.108, 'compound': 0.4404}
call_3_paid_api_text = "Okay. Yeah. Hi, Diane. This is paid on this call and obvi..."
# Import sent tokenizer
from nltk.tokenize import sent_tokenize
# Find sentiment on each sentence
for sentence in sent_tokenize(call_3_paid_api_text):
  print(sentence)
  print(sid.polarity_scores(sentence))

# Output:
# Okay. {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.2263}
# Yeah. {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.296}
# Hi, Diane. {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
# This is paid on this call and obviously the status of my orders at three weeks ago, and that service is terrible. {'neg': 0.129, 'neu': 0.871, 'pos': 0.0, 'compound': -0.4767}
# Is this any better? {'neg': 0.0, 'neu': 0.508, 'pos': 0.492, 'compound': 0.4404}
# Yes...
# Install spaCy
$ pip install spacy
# Download spaCy language model
$ python -m spacy download en_core_web_sm

import spacy
# Load spaCy language model
nlp = spacy.load("en_core_web_sm")
# Create a spaCy doc
doc = nlp("I'd like to talk about a smartphone I ordered on July 31st from your Sydney store, my order number is 40939440. I spoke to Georgia about it last week.")

# Show different tokens and positions
for token in doc:  
  print(token.text, token.idx)
# Output:
# I 0
# 'd 1
# like 4
# to 9
# talk 12
# about 17
# a 23
# smartphone 25...

# Show sentences in doc
for sentence in doc.sents:
  print(sentence)

# Output:
# I'd like to talk about a smartphone I ordered on July 31st from your Sydney store, my order number is 4093829.
# I spoke to one of your customer service team, Georgia, yesterday.

Some of spaCy's built-in named entities:

  • PERSON People, including fictional.
  • ORG Companies, agencies, institutions, etc.
  • GPE Countries, cities, states.
  • PRODUCT Objects, vehicles, foods, etc. (Not services.)
  • DATE Absolute or relative dates or periods.
  • TIME Times smaller than a day.
  • MONEY Monetary values, including unit.
  • CARDINAL Numerals that do not fall under another type.
# Find named entities in doc
for entity in doc.ents:
  print(entity.text, entity.label_)

# Output:
# July 31st DATE
# Sydney GPE
# 4093829 CARDINAL
# one CARDINAL
# Georgia GPE
# yesterday DATE
# Import EntityRuler class
from spacy.pipeline import EntityRuler
# Check spaCy pipeline
print(nlp.pipeline)
# Output: [('tagger', <spacy.pipeline.pipes.Tagger at 0x1c3aa8a470>), ('parser', <spacy.pipeline.pipes.DependencyParser at 0x1c3bb60588>), ('ner', <spacy.pipeline.pipes.EntityRecognizer at 0x1c3bb605e8>)]
# Create EntityRuler instance
ruler = EntityRuler(nlp)
# Add token pattern to ruler
ruler.add_patterns([{"label":"PRODUCT", "pattern": "smartphone"}])
# Add new rule to pipeline before ner
nlp.add_pipe(ruler, before="ner")
# Check updated pipeline
nlp.pipeline
# Output: [('tagger', <spacy.pipeline.pipes.Tagger at 0x1c1f9c9b38>), ('parser', <spacy.pipeline.pipes.DependencyParser at 0x1c3c9cba08>), ('entity_ruler', <spacy.pipeline.entityruler.EntityRuler at 0x1c1d834b70>), ('ner', <spacy.pipeline.pipes.EntityRecognizer at 0x1c3c9cba68>)]

# Test new entity rule (re-process the text so the new rule takes effect)
doc = nlp(doc.text)
for entity in doc.ents:
    print(entity.text, entity.label_)
# Output:
# smartphone PRODUCT
# July 31st DATE
# Sydney GPE
# 4093829 CARDINAL
# one CARDINAL
# Georgia GPE
# yesterday DATE
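
A hedged way to tie this back to the audio pipeline (a sketch, not from the original: it reuses call_4.mp3 from the folder listing above plus the convert_to_wav(), transcribe_audio() and nlp objects defined earlier):

# Transcribe another call and run the updated spaCy pipeline over the transcript (sketch)
convert_to_wav("acme_audio_files/call_4.mp3")
call_4_text = transcribe_audio("acme_audio_files/call_4.wav")
call_4_doc = nlp(call_4_text)
for entity in call_4_doc.ents:
    print(entity.text, entity.label_)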

Classification with sklearn

# Inspect post purchase audio folder
import os
post_purchase_audio = os.listdir("post_purchase")
print(post_purchase_audio[:5])
# Output: ['post-purchase-audio-0.mp3',  'post-purchase-audio-1.mp3',  'post-purchase-audio-2.mp3',  'post-purchase-audio-3.mp3',  'post-purchase-audio-4.mp3']

# Loop through mp3 files
for file in post_purchase_audio:
  print(f"Converting {file} to .wav...")
  # Use previously made function to convert to .wav
  convert_to_wav(file)

# Output:
# Converting post-purchase-audio-0.mp3 to .wav...
# Converting post-purchase-audio-1.mp3 to .wav...
# Converting post-purchase-audio-2.mp3 to .wav...
# Converting post-purchase-audio-3.mp3 to .wav...
# Converting post-purchase-audio-4.mp3 to .wav...

# Transcribe text from wav files
def create_text_list(folder):
  text_list = []
  # Loop through folder
  for file in folder:
    # Check for .wav extension
    if file.endswith(".wav"):
      # Transcribe audio
      text = transcribe_audio(file)
      # Add transcribed text to list
      text_list.append(text)
  return text_list

# Re-list the folder so the newly created .wav files are picked up, then convert post purchase audio to text
post_purchase_audio = os.listdir("post_purchase")
post_purchase_text = create_text_list(post_purchase_audio)
print(post_purchase_text[:5])
# Output: ['hey man I just water product from you guys and I think is amazing but I leave a li
#          'these clothes I just bought from you guys too small is there anyway I can change t
#          "I recently got these pair of shoes but they're too big can I change the size",
#          "I bought a pair of pants from you guys but they're way too small",
#          "I bought a pair of pants and they're the wrong colour is there any chance I can ch

import pandas as pd
# Create post purchase dataframe
post_purchase_df = pd.DataFrame({"label": "post_purchase", "text": post_purchase_text})
# Create pre purchase dataframe (pre_purchase_text is assumed to be built the same way from a pre_purchase folder)
pre_purchase_df = pd.DataFrame({"label": "pre_purchase", "text": pre_purchase_text})
# Combine pre purchase and post purhcase
df = pd.concat([post_purchase_df, pre_purchase_df])
# View the combined dataframe
df.head()

  label                                               text
0  post_purchase  yeah hello someone this morning delivered a pa...
1  post_purchase  my shipment arrived yesterday but it's not the...
2  post_purchase  hey my name is Daniel I received my shipment y...
3  post_purchase  hey mate how are you doing I'm just calling in...
4   pre_purchase  hey I was wondering if you know where my new p...

# Import text classification packages
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split

# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(df["text"], df["label"], test_size=0.3)

# Create text classifier pipeline
text_classifier = Pipeline([
  ("vectorizer", CountVectorizer()),
  ("tfidf", TfidfTransformer()),
  ("classifier", MultinomialNB())
])
# Fit the classifier pipeline on the training data
text_classifier.fit(X_train, y_train)

# Make predictions and compare them to test labels
predictions = text_classifier.predict(X_test)
accuracy = 100 * np.mean(predictions == y_test)
print(f"The model is {accuracy:.2f}% accurate.")
# Output: The model is 97.87% accurate.
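
To close the loop, a sketch of using the fitted pipeline on a brand-new call (the file name "new-call.wav" and the printed label are assumptions; transcribe_audio() is the helper defined earlier):

# Transcribe a new call and classify it with the trained pipeline (sketch)
new_text = transcribe_audio("new-call.wav")
prediction = text_classifier.predict([new_text])
print(prediction[0])   # e.g. "post_purchase" or "pre_purchase"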

Next steps:

  • Practice your skills with a project of your own.
  • Check out speech_recognition's Microphone() class (a minimal sketch follows).
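
A hedged sketch of live transcription with the Microphone() class (it requires PyAudio to be installed; the ambient-noise duration is an assumed value):

import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
  # Calibrate to the room noise, then capture one phrase from the mic
  recognizer.adjust_for_ambient_noise(source, duration=0.5)
  audio = recognizer.listen(source)
print(recognizer.recognize_google(audio))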