I recently needed batch inserts at work. The amount of data was large, and when I concatenated everything into one statement with a single foreach, the generated SQL exceeded the size MyBatis could handle, threw an exception, and the insert failed. The solutions I found online were scattered and incomplete and none of them worked as-is, so after piecing things together myself I got a working version. I'm recording it here for future reference.
Mapper snippet:
<insert id="insertBatch" parameterType="java.util.List">
    insert into ics.DAY_REPORT_PROJECT (PID, REPORTDATE, PROJECTID, PROJECTAMT, BIDTIME, BIDURL, TRANSFERSTATUS,
        LIMITTYPE, PROJECTLIMIT, BIDRATE, VALUEDATE, REPAYTYPE, ASSURETYPE, CREATETIME)
    select ics.SEQ_report_project_PID.Nextval, a.* from (
    <foreach collection="list" item="item" index="index" separator="union all">
        select #{item.reportDate}, #{item.projectId}, #{item.projectAmt}, #{item.bidTime}, #{item.bidUrl},
               #{item.transferStatus}, #{item.limitType}, #{item.projectLimit}, #{item.bidRate},
               #{item.valueDate}, #{item.repayType}, #{item.assureType}, #{item.createTime}
        from dual
    </foreach>
    ) a
</insert>
DAO snippet:
/**
 * Batch-insert project records, committing every 1000 rows.
 *
 * @param reportProjects the records to insert
 * @throws Exception if the batch insert fails
 */
public void insert(List<DayReportProject> reportProjects) throws Exception {
    int result = 1; // note: with ExecutorType.BATCH the insert() return value is not the real row count
    SqlSession batchSqlSession = null;
    try {
        // open a batch-mode session with auto-commit disabled
        batchSqlSession = this.getSqlSessionFactory().openSession(ExecutorType.BATCH, false);
        int batchCount = 1000;           // rows committed per batch
        int batchLastIndex = batchCount; // exclusive end index of the current batch
        for (int index = 0; index < reportProjects.size();) {
            if (batchLastIndex >= reportProjects.size()) {
                batchLastIndex = reportProjects.size();
                result = result * batchSqlSession.insert("cn.hepai.model.DayReportProject.insertBatch",
                        reportProjects.subList(index, batchLastIndex));
                batchSqlSession.commit();
                batchSqlSession.clearCache(); // clear the session cache to avoid running out of memory
                log.info("index : " + index + ", batchLastIndex : " + batchLastIndex
                        + ", rows inserted so far : " + batchLastIndex);
                break; // all rows inserted, exit the loop
            } else {
                result = result * batchSqlSession.insert("cn.hepai.model.DayReportProject.insertBatch",
                        reportProjects.subList(index, batchLastIndex));
                batchSqlSession.commit();
                batchSqlSession.clearCache();
                log.info("index : " + index + ", batchLastIndex : " + batchLastIndex
                        + ", rows inserted so far : " + batchLastIndex);
                index = batchLastIndex;              // start of the next batch
                batchLastIndex = index + batchCount; // exclusive end of the next batch (subList's end is exclusive)
            }
        }
    } catch (Exception e) {
        log.error(e.getMessage(), e);
        if (batchSqlSession != null) {
            batchSqlSession.rollback();
        }
    } finally {
        if (batchSqlSession != null) {
            batchSqlSession.close();
        }
    }
}
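For completeness, a minimal calling sketch. The job class and the DAO field name below are assumptions (they are not shown in the original post); the point is simply that the caller hands over the whole list once and the DAO splits it into 1000-row batches internally.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class DayReportJob {

    // hypothetical DAO reference; wired up however the project normally injects its DAOs
    private DayReportProjectDao dayReportProjectDao;

    public void run() throws Exception {
        List<DayReportProject> rows = new ArrayList<DayReportProject>();
        for (int i = 0; i < 5000; i++) {
            DayReportProject p = new DayReportProject();
            p.setProjectId("P" + i);
            p.setProjectAmt(new BigDecimal("10000"));
            p.setReportDate(new Date());
            p.setCreateTime(new Date());
            rows.add(p);
        }
        // one call: the DAO commits every 1000 rows and rolls back on failure
        dayReportProjectDao.insert(rows);
    }
}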
Testing passed, execution is fast, and it has been running stably since going live.