When an application needs to talk to a database, choosing a good ORM framework matters a great deal. A solid ORM handles SQL injection, database switching, data-model migration and similar concerns for you, and gives you a readable, elegant API so you can stop concatenating SQL strings by hand.
TypeORM, the ORM with the best TypeScript support, offers all of the above plus out-of-the-box features such as caching, relations and logging. Its QueryBuilder can express arbitrarily complex SQL without losing the return types, and a query() method is provided for running raw SQL directly, which makes migrating legacy SQL statements straightforward. TypeORM runs on NodeJS, Browser, Cordova, PhoneGap, Ionic, React Native, Expo and Electron, and officially supports MySQL / MariaDB / Postgres / SQLite / Microsoft SQL Server / Oracle / sql.js / MongoDB.
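As a quick illustration of the typed QueryBuilder and the raw query() escape hatch mentioned above, here is a minimal sketch. It is not part of the official sample: the `User` entity, its `name` column and the `user` table name are hypothetical, and the connection is assumed to come from an ormconfig file.

```typescript
import { createConnection } from "typeorm";
// `User` is a hypothetical entity with `id` and `name` columns, defined elsewhere.
import { User } from "./entity/User";

createConnection().then(async connection => {
    // QueryBuilder keeps the return type: `users` is inferred as User[]
    const users = await connection
        .createQueryBuilder(User, "user")
        .where("user.name LIKE :name", { name: "%orm%" })
        .getMany();

    // Raw SQL escape hatch for legacy statements
    const rows = await connection.query("SELECT COUNT(*) AS cnt FROM user");
    console.log(users.length, rows);
});
```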
This article dissects the TypeORM source code by answering a few questions:

- How schema synchronization works when `synchronize: true` is set, and why it must not be used in production
- How `repository.save()` executes, how relations are saved along with the entity when `cascade: true` is set, and how "update when the row exists, insert when it doesn't" is implemented
- Whether querying with `relations` causes performance problems, and how relations are maintained
- How `queryBuilder` assembles all kinds of complex SQL statements
Preparation
We can clone the official TypeORM repository directly. It contains a sample directory with example code for each feature; we will debug these samples to analyze how each feature is implemented.
git clone https://github.com/typeorm/typeorm
cd typeorm
npm install
npm run compile
Debugging with VS Code, launch.json:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "pwa-node",
"request": "launch",
"name": "Launch Program",
"skipFiles": [
"<node_internals>/**",
"${workspaceRoot}/node_modules/**/*.js"
],
"program": "${workspaceFolder}/sample/sample3-many-to-one/app.ts",
//"preLaunchTask": "tsc: build - tsconfig.json",
"outFiles": ["${workspaceFolder}/build/compiled/**/*.js"]
}
]
}
This configuration debugs sample3-many-to-one/app.ts under the sample directory. Let's look at its main code first:
createConnection(options).then(connection => {
let details = new PostDetails();
details.authorName = "Umed";
details.comment = "about post";
details.metadata = "post,details,one-to-one";
let post = new Post();
post.text = "Hello how are you?";
post.title = "hello";
post.details = details;
let postRepository = connection.getRepository(Post);
postRepository
.save(post)
.then(post => console.log("Post has been saved"))
.catch(error => console.log("Cannot save. Error: ", error));
}).catch(error => console.log("Error: ", error));
The Post entity holds a reference to PostDetails through a many-to-one relation, a classic many-to-one setup. We will use this sample to work through the four questions posed above.
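For reference, the entity definitions look roughly like the following. This is a simplified sketch rather than the exact files from the sample: the table names, the trimmed decorator options and the `cascade: true` setting are illustrative.

```typescript
import { Entity, PrimaryGeneratedColumn, Column, ManyToOne, OneToMany } from "typeorm";

@Entity("sample3_post_details")
export class PostDetails {
    @PrimaryGeneratedColumn()
    id: number;

    @Column({ nullable: true })
    authorName: string;

    @Column({ nullable: true })
    comment: string;

    @Column({ nullable: true })
    metadata: string;

    // inverse side of the relation: one details row can be referenced by many posts
    @OneToMany(type => Post, post => post.details)
    posts: Post[];
}

@Entity("sample3_post")
export class Post {
    @PrimaryGeneratedColumn()
    id: number;

    @Column()
    title: string;

    @Column()
    text: string;

    // owning side: the post table carries the foreign key column;
    // cascade lets save(post) persist an unsaved details object as well
    @ManyToOne(type => PostDetails, details => details.posts, { cascade: true })
    details: PostDetails;
}
```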
How synchronize works
Schema synchronization happens right after we connect to the database. Set a breakpoint at createConnection(options); the work is done in the connect method:
async connect(): Promise<this> {
if (this.isConnected)
throw new CannotConnectAlreadyConnectedError(this.name);
// connect to the database via its driver
await this.driver.connect();
// connect to the cache-specific database if cache is enabled
if (this.queryResultCache)
await this.queryResultCache.connect();
// set connected status for the current connection
ObjectUtils.assign(this, { isConnected: true });
try {
// build all metadatas registered in the current connection
this.buildMetadatas();
await this.driver.afterConnect();
// if option is set - drop schema once connection is done
if (this.options.dropSchema)
await this.dropDatabase();
// if option is set - automatically synchronize a schema
if (this.options.synchronize)
await this.synchronize();
// if option is set - automatically synchronize a schema
if (this.options.migrationsRun)
await this.runMigrations({ transaction: this.options.migrationsTransactionMode });
} catch (error) {
// if for some reason build metadata fail (for example validation error during entity metadata check)
// connection needs to be closed
await this.close();
throw error;
}
return this;
}
this.driver is set when the Connection object is initialized; each database gets its own driver — MySQL, for example, uses the mysql Node.js package. The driver then connects to the database from the configuration, and to the cache database if query caching is enabled. Next comes a key step, buildMetadatas, which initializes metadata for every table and column we defined with decorators such as @Entity and @Column. It mainly calls connectionMetadataBuilder.buildEntityMetadatas(this.options.entities || []), which in turn runs new EntityMetadataBuilder(this.connection, getMetadataArgsStorage()).build(allEntityClasses). getMetadataArgsStorage() returns the entities and properties we registered through the @Entity() and @Column() decorators; the heavy lifting happens in build, so let's step into it:
build(entityClasses?: Function[]): EntityMetadata[] {
// if entity classes to filter entities by are given then do filtering, otherwise use all
const allTables = entityClasses ? this.metadataArgsStorage.filterTables(entityClasses) : this.metadataArgsStorage.tables;
// filter out table metadata args for those we really create entity metadatas and tables in the db
const realTables = allTables.filter(table => table.type === "regular" || table.type === "closure" || table.type === "entity-child" || table.type === "view");
// create entity metadatas for a user defined entities (marked with @Entity decorator or loaded from entity schemas)
const entityMetadatas = realTables.map(tableArgs => this.createEntityMetadata(tableArgs));
// compute parent entity metadatas for table inheritance
entityMetadatas.forEach(entityMetadata => this.computeParentEntityMetadata(entityMetadatas, entityMetadata));
// after all metadatas created we set child entity metadatas for table inheritance
entityMetadatas.forEach(metadata => {
metadata.childEntityMetadatas = entityMetadatas.filter(childMetadata => {
return metadata.target instanceof Function
&& childMetadata.target instanceof Function
&& MetadataUtils.isInherited(childMetadata.target, metadata.target);
});
});
// build entity metadata (step0), first for non-single-table-inherited entity metadatas (dependant)
entityMetadatas
.filter(entityMetadata => entityMetadata.tableType !== "entity-child")
.forEach(entityMetadata => entityMetadata.build());
// build entity metadata (step0), now for single-table-inherited entity metadatas (dependant)
entityMetadatas
.filter(entityMetadata => entityMetadata.tableType === "entity-child")
.forEach(entityMetadata => entityMetadata.build());
// compute entity metadata columns, relations, etc. first for the regular, non-single-table-inherited entity metadatas
entityMetadatas
.filter(entityMetadata => entityMetadata.tableType !== "entity-child")
.forEach(entityMetadata => this.computeEntityMetadataStep1(entityMetadatas, entityMetadata));
// then do it for single table inheritance children (since they are depend on their parents to be built)
entityMetadatas
.filter(entityMetadata => entityMetadata.tableType === "entity-child")
.forEach(entityMetadata => this.computeEntityMetadataStep1(entityMetadatas, entityMetadata));
// calculate entity metadata computed properties and all its sub-metadatas
entityMetadatas.forEach(entityMetadata => this.computeEntityMetadataStep2(entityMetadata));
// calculate entity metadata's inverse properties
entityMetadatas.forEach(entityMetadata => this.computeInverseProperties(entityMetadata, entityMetadatas));
// go through all entity metadatas and create foreign keys / junction entity metadatas for their relations
entityMetadatas
.filter(entityMetadata => entityMetadata.tableType !== "entity-child")
.forEach(entityMetadata => {
// create entity's relations join columns (for many-to-one and one-to-one owner)
entityMetadata.relations.filter(relation => relation.isOneToOne || relation.isManyToOne).forEach(relation => {
const joinColumns = this.metadataArgsStorage.filterJoinColumns(relation.target, relation.propertyName);
const { foreignKey, columns, uniqueConstraint } = this.relationJoinColumnBuilder.build(joinColumns, relation); // create a foreign key based on its metadata args
if (foreignKey) {
relation.registerForeignKeys(foreignKey); // push it to the relation and thus register there a join column
entityMetadata.foreignKeys.push(foreignKey);
}
if (columns) {
relation.registerJoinColumns(columns);
}
if (uniqueConstraint) {
if (this.connection.driver instanceof MysqlDriver || this.connection.driver instanceof AuroraDataApiDriver
|| this.connection.driver instanceof SqlServerDriver || this.connection.driver instanceof SapDriver) {
const index = new IndexMetadata({
entityMetadata: uniqueConstraint.entityMetadata,
columns: uniqueConstraint.columns,
args: {
target: uniqueConstraint.target!,
name: uniqueConstraint.name,
unique: true,
synchronize: true
}
});
if (this.connection.driver instanceof SqlServerDriver) {
index.where = index.columns.map(column => {
return `${this.connection.driver.escape(column.databaseName)} IS NOT NULL`;
}).join(" AND ");
}
if (relation.embeddedMetadata) {
relation.embeddedMetadata.indices.push(index);
} else {
relation.entityMetadata.ownIndices.push(index);
}
this.computeEntityMetadataStep2(entityMetadata);
} else {
if (relation.embeddedMetadata) {
relation.embeddedMetadata.uniques.push(uniqueConstraint);
} else {
relation.entityMetadata.ownUniques.push(uniqueConstraint);
}
this.computeEntityMetadataStep2(entityMetadata);
}
}
if (foreignKey && this.connection.driver instanceof CockroachDriver) {
const index = new IndexMetadata({
entityMetadata: relation.entityMetadata,
columns: foreignKey.columns,
args: {
target: relation.entityMetadata.target!,
synchronize: true
}
});
if (relation.embeddedMetadata) {
relation.embeddedMetadata.indices.push(index);
} else {
relation.entityMetadata.ownIndices.push(index);
}
this.computeEntityMetadataStep2(entityMetadata);
}
});
// create junction entity metadatas for entity many-to-many relations
entityMetadata.relations.filter(relation => relation.isManyToMany).forEach(relation => {
const joinTable = this.metadataArgsStorage.findJoinTable(relation.target, relation.propertyName)!;
if (!joinTable) return; // no join table set - no need to do anything (it means this is many-to-many inverse side)
// here we create a junction entity metadata for a new junction table of many-to-many relation
const junctionEntityMetadata = this.junctionEntityMetadataBuilder.build(relation, joinTable);
relation.registerForeignKeys(...junctionEntityMetadata.foreignKeys);
relation.registerJoinColumns(
junctionEntityMetadata.ownIndices[0].columns,
junctionEntityMetadata.ownIndices[1].columns
);
relation.registerJunctionEntityMetadata(junctionEntityMetadata);
// compute new entity metadata properties and push it to entity metadatas pool
this.computeEntityMetadataStep2(junctionEntityMetadata);
this.computeInverseProperties(junctionEntityMetadata, entityMetadatas);
entityMetadatas.push(junctionEntityMetadata);
});
});
// update entity metadata depend properties
entityMetadatas
.forEach(entityMetadata => {
entityMetadata.relationsWithJoinColumns = entityMetadata.relations.filter(relation => relation.isWithJoinColumn);
entityMetadata.hasNonNullableRelations = entityMetadata.relationsWithJoinColumns.some(relation => !relation.isNullable || relation.isPrimary);
});
// generate closure junction tables for all closure tables
entityMetadatas
.filter(metadata => metadata.treeType === "closure-table")
.forEach(entityMetadata => {
const closureJunctionEntityMetadata = this.closureJunctionEntityMetadataBuilder.build(entityMetadata);
entityMetadata.closureJunctionTable = closureJunctionEntityMetadata;
this.computeEntityMetadataStep2(closureJunctionEntityMetadata);
this.computeInverseProperties(closureJunctionEntityMetadata, entityMetadatas);
entityMetadatas.push(closureJunctionEntityMetadata);
});
// generate keys for tables with single-table inheritance
entityMetadatas
.filter(metadata => metadata.inheritancePattern === "STI" && metadata.discriminatorColumn)
.forEach(entityMetadata => this.createKeysForTableInheritance(entityMetadata));
// build all indices (need to do it after relations and their join columns are built)
entityMetadatas.forEach(entityMetadata => {
entityMetadata.indices.forEach(index => index.build(this.connection.namingStrategy));
});
// build all unique constraints (need to do it after relations and their join columns are built)
entityMetadatas.forEach(entityMetadata => {
entityMetadata.uniques.forEach(unique => unique.build(this.connection.namingStrategy));
});
// build all check constraints
entityMetadatas.forEach(entityMetadata => {
entityMetadata.checks.forEach(check => check.build(this.connection.namingStrategy));
});
// build all exclusion constraints
entityMetadatas.forEach(entityMetadata => {
entityMetadata.exclusions.forEach(exclusion => exclusion.build(this.connection.namingStrategy));
});
// add lazy initializer for entity relations
entityMetadatas
.filter(metadata => metadata.target instanceof Function)
.forEach(entityMetadata => {
entityMetadata.relations
.filter(relation => relation.isLazy)
.forEach(relation => {
this.connection.relationLoader.enableLazyLoad(relation, (entityMetadata.target as Function).prototype);
});
});
entityMetadatas.forEach(entityMetadata => {
entityMetadata.columns.forEach(column => {
// const target = column.embeddedMetadata ? column.embeddedMetadata.type : column.target;
const generated = this.metadataArgsStorage.findGenerated(column.target, column.propertyName);
if (generated) {
column.isGenerated = true;
column.generationStrategy = generated.strategy;
if (generated.strategy === "uuid") {
column.type = "uuid";
} else if (generated.strategy === "rowid") {
column.type = "int";
} else {
column.type = column.type || Number;
}
column.build(this.connection);
this.computeEntityMetadataStep2(entityMetadata);
}
});
});
return entityMetadatas;
}
There is a lot going on here: each forEach has its own job — resolving table names, collecting relations, and so on — and the end result is an EntityMetadata instance with a huge number of properties. When the debugger reaches this point you can inspect each field, or read the class definition directly; every value comes from the decorators TypeORM provides. The finished metadatas are kept globally and used heavily throughout the rest of the codebase. Once they are built, we reach the check on synchronize:
if (this.options.synchronize)
await this.synchronize();
When synchronize is enabled, this.synchronize() is executed directly:
async synchronize(dropBeforeSync: boolean = false): Promise<void> {
if (!this.isConnected)
throw new CannotExecuteNotConnectedError(this.name);
if (dropBeforeSync)
await this.dropDatabase();
const schemaBuilder = this.driver.createSchemaBuilder();
await schemaBuilder.build();
}
The core call is await schemaBuilder.build(). MongoDB has its own build logic, different from the relational databases; here we focus on the relational one:
async build(): Promise<void> {
this.queryRunner = this.connection.createQueryRunner();
// CockroachDB implements asynchronous schema sync operations which can not been executed in transaction.
// E.g. if you try to DROP column and ADD it again in the same transaction, crdb throws error.
const isUsingTransactions = (
!(this.connection.driver instanceof CockroachDriver) &&
this.connection.options.migrationsTransactionMode !== "none"
);
if (isUsingTransactions) {
await this.queryRunner.startTransaction();
}
try {
const tablePaths = this.entityToSyncMetadatas.map(metadata => metadata.tablePath);
// TODO: typeorm_metadata table needs only for Views for now.
// Remove condition or add new conditions if necessary (for CHECK constraints for example).
if (this.viewEntityToSyncMetadatas.length > 0)
await this.createTypeormMetadataTable();
await this.queryRunner.getTables(tablePaths);
await this.queryRunner.getViews([]);
await this.executeSchemaSyncOperationsInProperOrder();
// if cache is enabled then perform cache-synchronization as well
if (this.connection.queryResultCache)
await this.connection.queryResultCache.synchronize(this.queryRunner);
if (isUsingTransactions) {
await this.queryRunner.commitTransaction();
}
} catch (error) {
try { // we throw original error even if rollback thrown an error
if (isUsingTransactions) {
await this.queryRunner.rollbackTransaction();
}
} catch (rollbackError) { }
throw error;
} finally {
await this.queryRunner.release();
}
}
The key call is await this.queryRunner.getTables(tablePaths), which internally runs loadTables: it queries the relational database's INFORMATION_SCHEMA to fetch information about every table — names, primary and foreign keys, column types, column sizes, and so on — and stores it in loadedTables. Once getTables finishes, the real schema synchronization happens in executeSchemaSyncOperationsInProperOrder():
await this.dropOldViews();
await this.dropOldForeignKeys();
await this.dropOldIndices();
await this.dropOldChecks();
await this.dropOldExclusions();
await this.dropCompositeUniqueConstraints();
// await this.renameTables();
await this.renameColumns();
await this.createNewTables();
await this.dropRemovedColumns();
await this.addNewColumns();
await this.updatePrimaryKeys();
await this.updateExistColumns();
await this.createNewIndices();
await this.createNewChecks();
await this.createNewExclusions();
await this.createCompositeUniqueConstraints();
await this.createForeignKeys();
await this.createViews();
The names speak for themselves: drop old views, drop old foreign keys, drop old indices, rename columns, create new tables, add columns, and so on. Let's pick createNewTables() and take a look:
protected async createNewTables(): Promise<void> {
const currentSchema = await this.queryRunner.getCurrentSchema();
for (const metadata of this.entityToSyncMetadatas) {
// check if table does not exist yet
const existTable = this.queryRunner.loadedTables.find(table => {
const database = metadata.database && metadata.database !== this.connection.driver.database ? metadata.database : undefined;
let schema = metadata.schema || (<SqlServerDriver|PostgresDriver|SapDriver>this.connection.driver).options.schema;
// if schema is default db schema (e.g. "public" in PostgreSQL), skip it.
schema = schema === currentSchema ? undefined : schema;
const fullTableName = this.connection.driver.buildTableName(metadata.tableName, schema, database);
return table.name === fullTableName;
});
if (existTable)
continue;
this.connection.logger.logSchemaBuild(`creating a new table: ${metadata.tablePath}`);
// create a new table and sync it in the database
const table = Table.create(metadata, this.connection.driver);
await this.queryRunner.createTable(table, false, false);
this.queryRunner.loadedTables.push(table);
}
}
It iterates over this.entityToSyncMetadatas — the metadata of all tables defined through decorators that we built earlier — and tries to find each metadata's table in the loadedTables we just fetched. If the table is found, the loop moves on; if not, the table does not exist in the database yet, so the SQL to create it is executed.

From this one createNewTables() routine the pattern is clear: synchronization compares the actual state of the database with the table metadata defined by our decorators, decides whether to insert, delete or update, and then runs the corresponding SQL directly. As a consequence, renaming a column may be executed as "drop the old column, add a new one" rather than an ALTER that renames it, which wipes out all the data in the old column. This is why synchronize: true must be used with great care in production. If you really do need to rename a column, TypeORM offers migrations: by writing migration scripts you can evolve the schema safely and roll back by version, which is far more forgiving.
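For reference, a minimal migration for such a rename might look like the sketch below. The class name, table name and column names are made up for illustration; the QueryRunner API calls are the standard TypeORM ones.

```typescript
import { MigrationInterface, QueryRunner } from "typeorm";

export class RenamePostTitle1650000000000 implements MigrationInterface {
    // forward migration: rename the column while keeping its data
    public async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.renameColumn("post", "title", "headline");
    }

    // rollback: restore the original column name
    public async down(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.renameColumn("post", "headline", "title");
    }
}
```

Such a migration is applied with `typeorm migration:run` and rolled back with `typeorm migration:revert`.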
How save executes
Set a breakpoint at postRepository.save(post):
save<Entity, T extends DeepPartial<Entity>>(targetOrEntity: (T|T[])|EntityTarget<Entity>, maybeEntityOrOptions?: T|T[], maybeOptions?: SaveOptions): Promise<T|T[]> {
// normalize mixed parameters
let target = (arguments.length > 1 && (targetOrEntity instanceof Function || targetOrEntity instanceof EntitySchema || typeof targetOrEntity === "string")) ? targetOrEntity as Function|string : undefined;
const entity: T|T[] = target ? maybeEntityOrOptions as T|T[] : targetOrEntity as T|T[];
const options = target ? maybeOptions : maybeEntityOrOptions as SaveOptions;
if (target instanceof EntitySchema)
target = target.options.name;
// if user passed empty array of entities then we don't need to do anything
if (Array.isArray(entity) && entity.length === 0)
return Promise.resolve(entity);
// execute save operation
return new EntityPersistExecutor(this.connection, this.queryRunner, "save", target, entity, options)
.execute()
.then(() => entity);
}
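The mixed parameters normalized at the top of this method exist because save can be called in several shapes. A hedged sketch of the two most common ones (Post is the sample entity; the surrounding setup is illustrative):

```typescript
import { createConnection } from "typeorm";
import { Post } from "./entity/Post"; // the sample entity

createConnection().then(async connection => {
    const post = new Post();
    post.title = "hello";
    post.text = "Hello how are you?";

    // Called on a repository: the entity target is implied by the repository itself
    await connection.getRepository(Post).save(post);

    // Called on the entity manager: the target class is passed explicitly,
    // which is why save() has to normalize targetOrEntity / maybeEntityOrOptions
    await connection.manager.save(Post, post);
});
```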
The save() method boils down to creating an EntityPersistExecutor and calling its execute() method, whose core logic (abridged here) is:
const executors = await Promise.all(entitiesInChunks.map(async entities => {
const subjects: Subject[] = [];
// create subjects for all entities we received for the persistence
entities.forEach(entity => {
const entityTarget = this.target ? this.target : entity.constructor;
if (entityTarget === Object)
throw new CannotDetermineEntityError(this.mode);
subjects.push(new Subject({
metadata: this.connection.getMetadata(entityTarget),
entity: entity,
canBeInserted: this.mode === "save",
canBeUpdated: this.mode === "save",
mustBeRemoved: this.mode === "remove",
canBeSoftRemoved: this.mode === "soft-remove",
canBeRecovered: this.mode === "recover"
}));
});
// console.time("building cascades...");
// go through each entity with metadata and create subjects and subjects by cascades for them
const cascadesSubjectBuilder = new CascadesSubjectBuilder(subjects);
subjects.forEach(subject => {
// next step we build list of subjects we will operate with
// these subjects are subjects that we need to insert or update alongside with main persisted entity
cascadesSubjectBuilder.build(subject, this.mode);
});
// console.timeEnd("building cascades...");
// load database entities for all subjects we have
// next step is to load database entities for all operate subjects
// console.time("loading...");
await new SubjectDatabaseEntityLoader(queryRunner, subjects).load(this.mode);
// console.timeEnd("loading...");
// console.time("other subjects...");
// build all related subjects and change maps
if (this.mode === "save" || this.mode === "soft-remove" || this.mode === "recover") {
new OneToManySubjectBuilder(subjects).build();
new OneToOneInverseSideSubjectBuilder(subjects).build();
new ManyToManySubjectBuilder(subjects).build();
} else {
subjects.forEach(subject => {
if (subject.mustBeRemoved) {
new ManyToManySubjectBuilder(subjects).buildForAllRemoval(subject);
}
});
}
// console.timeEnd("other subjects...");
// console.timeEnd("building subjects...");
// console.log("subjects", subjects);
// create a subject executor
return new SubjectExecutor(queryRunner, subjects, this.options);
}));
// console.timeEnd("building subject executors...");
// make sure we have at least one executable operation before we create a transaction and proceed
// if we don't have operations it means we don't really need to update or remove something
const executorsWithExecutableOperations = executors.filter(executor => executor.hasExecutableOperations);
if (executorsWithExecutableOperations.length === 0)
return;
// start execute queries in a transaction
// if transaction is already opened in this query runner then we don't touch it
// if its not opened yet then we open it here, and once we finish - we close it
let isTransactionStartedByUs = false;
try {
// open transaction if its not opened yet
if (!queryRunner.isTransactionActive) {
if (!this.options || this.options.transaction !== false) { // start transaction until it was not explicitly disabled
isTransactionStartedByUs = true;
await queryRunner.startTransaction();
}
}
// execute all persistence operations for all entities we have
// console.time("executing subject executors...");
for (const executor of executorsWithExecutableOperations) {
await executor.execute();
}
// console.timeEnd("executing subject executors...");
// commit transaction if it was started by us
// console.time("commit");
if (isTransactionStartedByUs === true)
await queryRunner.commitTransaction();
// console.timeEnd("commit");
entities is whatever we passed to save — in this example just post. A Subject is created for every entity; it carries everything needed to persist it. Here there is only the one subject for post, but the following cascadesSubjectBuilder.build(subject, this.mode) goes on to build subjects for all of post's relations as well:
build(subject: Subject, operationType: "save"|"remove"|"soft-remove"|"recover") {
subject.metadata
.extractRelationValuesFromEntity(subject.entity!, subject.metadata.relations) // todo: we can create EntityMetadata.cascadeRelations
.forEach(([relation, relationEntity, relationEntityMetadata]) => {
// we need only defined values and insert, update, soft-remove or recover cascades of the relation should be set
if (relationEntity === undefined ||
relationEntity === null ||
(!relation.isCascadeInsert && !relation.isCascadeUpdate && !relation.isCascadeSoftRemove && !relation.isCascadeRecover))
return;
// if relation entity is just a relation id set (for example post.tag = 1)
// then we don't really need to check cascades since there is no object to insert or update
if (!(relationEntity instanceof Object))
return;
// if we already has this entity in list of operated subjects then skip it to avoid recursion
const alreadyExistRelationEntitySubject = this.findByPersistEntityLike(relationEntityMetadata.target, relationEntity);
if (alreadyExistRelationEntitySubject) {
if (alreadyExistRelationEntitySubject.canBeInserted === false) // if its not marked for insertion yet
alreadyExistRelationEntitySubject.canBeInserted = relation.isCascadeInsert === true && operationType === "save";
if (alreadyExistRelationEntitySubject.canBeUpdated === false) // if its not marked for update yet
alreadyExistRelationEntitySubject.canBeUpdated = relation.isCascadeUpdate === true && operationType === "save";
if (alreadyExistRelationEntitySubject.canBeSoftRemoved === false) // if its not marked for removal yet
alreadyExistRelationEntitySubject.canBeSoftRemoved = relation.isCascadeSoftRemove === true && operationType === "soft-remove";
if (alreadyExistRelationEntitySubject.canBeRecovered === false) // if its not marked for recovery yet
alreadyExistRelationEntitySubject.canBeRecovered = relation.isCascadeRecover === true && operationType === "recover";
return;
}
// mark subject with what we can do with it
// and add to the array of subjects to load only if there is no same entity there already
const relationEntitySubject = new Subject({
metadata: relationEntityMetadata,
parentSubject: subject,
entity: relationEntity,
canBeInserted: relation.isCascadeInsert === true && operationType === "save",
canBeUpdated: relation.isCascadeUpdate === true && operationType === "save",
canBeSoftRemoved: relation.isCascadeSoftRemove === true && operationType === "soft-remove",
canBeRecovered: relation.isCascadeRecover === true && operationType === "recover"
});
this.allSubjects.push(relationEntitySubject);
// go recursively and find other entities we need to insert/update
this.build(relationEntitySubject, operationType);
});
}
It walks the relations stored in the metadata built at initialization time; for every relation whose property is present in the data being saved, another Subject is created and pushed into the array of all subjects. The method is recursive, so if the related data has relations of its own, it keeps descending. Note that when values such as canBeInserted are computed, the condition involves isCascadeInsert, which comes straight from our cascade option. In this example the PostDetails relation is present, so it is added to the array, leaving us with two subjects to synchronize to the database: Post and PostDetails. Next up:
await new SubjectDatabaseEntityLoader(queryRunner, subjects).load(this.mode);
This loader checks whether the data being saved carries its primary key (e.g. an id). If there is no id, the data is new and will simply be inserted; if there is an id, the row already exists, so the existing record is loaded from the database by its primary key in preparation for an update:
/**
* Loads database entities for all subjects.
*
* loadAllRelations flag is used to load all relation ids of the object, no matter if they present in subject entity or not.
* This option is used for deletion.
*/
async load(operationType: "save"|"remove"|"soft-remove"|"recover"): Promise<void> {
// we are grouping subjects by target to perform more optimized queries using WHERE IN operator
// go through the groups and perform loading of database entities of each subject in the group
const promises = this.groupByEntityTargets().map(async subjectGroup => {
// prepare entity ids of the subjects we need to load
const allIds: ObjectLiteral[] = [];
const allSubjects: Subject[] = [];
subjectGroup.subjects.forEach(subject => {
// we don't load if subject already has a database entity loaded
if (subject.databaseEntity || !subject.identifier)
return;
allIds.push(subject.identifier);
allSubjects.push(subject);
});
// if there no ids found (means all entities are new and have generated ids) - then nothing to load there
if (!allIds.length)
return;
const loadRelationPropertyPaths: string[] = [];
// for the save, soft-remove and recover operation
// extract all property paths of the relations we need to load relation ids for
// this is for optimization purpose - this way we don't load relation ids for entities
// whose relations are undefined, and since they are undefined its really pointless to
// load something for them, since undefined properties are skipped by the orm
if (operationType === "save" || operationType === "soft-remove" || operationType === "recover") {
subjectGroup.subjects.forEach(subject => {
// gets all relation property paths that exist in the persisted entity.
subject.metadata.relations.forEach(relation => {
const value = relation.getEntityValue(subject.entityWithFulfilledIds!);
if (value === undefined)
return;
if (loadRelationPropertyPaths.indexOf(relation.propertyPath) === -1)
loadRelationPropertyPaths.push(relation.propertyPath);
});
});
} else { // remove
// for remove operation
// we only need to load junction relation ids since only they are removed by cascades
loadRelationPropertyPaths.push(...subjectGroup.subjects[0].metadata.manyToManyRelations.map(relation => relation.propertyPath));
}
const findOptions: FindManyOptions<any> = {
loadEagerRelations: false,
loadRelationIds: {
relations: loadRelationPropertyPaths,
disableMixedMap: true
},
// the soft-deleted entities should be included in the loaded entities for recover operation
withDeleted: true
};
// load database entities for all given ids
const entities = await this.queryRunner.manager
.getRepository<ObjectLiteral>(subjectGroup.target)
.findByIds(allIds, findOptions);
// now when we have entities we need to find subject of each entity
// and insert that entity into database entity of the found subjects
entities.forEach(entity => {
const subjects = this.findByPersistEntityLike(subjectGroup.target, entity);
subjects.forEach(subject => {
subject.databaseEntity = entity;
if (!subject.identifier)
subject.identifier = subject.metadata.hasAllPrimaryKeys(entity) ? subject.metadata.getEntityIdMap(entity) : undefined;
});
});
// this way we tell what subjects we tried to load database entities of
for (let subject of allSubjects) {
subject.databaseEntityLoaded = true;
}
});
await Promise.all(promises);
}
As you can see, for data that is about to be updated, the subject's databaseEntity property is set to the entity just loaded from the database, and the subject's identifier property is set to that row's primary key.

Next comes a rather important piece of logic, and one of the most convenient features TypeORM gives us:
if (this.mode === "save" || this.mode === "soft-remove" || this.mode === "recover") {
new OneToManySubjectBuilder(subjects).build();
new OneToOneInverseSideSubjectBuilder(subjects).build();
new ManyToManySubjectBuilder(subjects).build();
} else {
subjects.forEach(subject => {
if (subject.mustBeRemoved) {
new ManyToManySubjectBuilder(subjects).buildForAllRemoval(subject);
}
});
}
This handles saving an entity that carries relation data but has no foreign-key column of its own. In our example, PostDetails relates to Post as one-to-many: the Post table carries the foreign key referencing PostDetails, while PostDetails has no Post-related column at all. If the postDetails object we save contains a posts property, that amounts to declaring "these are exactly the posts associated with this details row". Saving a postDetails with posts: [], for instance, means no post should reference it any more, so every post that used to be associated with it must be detached (orphanedRowAction configures whether detaching means deleting the row or setting its foreign key to NULL). Let's see how new OneToManySubjectBuilder(subjects).build() handles these relations:
protected buildForSubjectRelation(subject: Subject, relation: RelationMetadata) {
let relatedEntityDatabaseRelationIds: ObjectLiteral[] = [];
if (subject.databaseEntity) { // related entities in the database can exist only if this entity (post) is saved
relatedEntityDatabaseRelationIds = relation.getEntityValue(subject.databaseEntity);
}
let relatedEntities: ObjectLiteral[] = relation.getEntityValue(subject.entity!);
if (relatedEntities === null) // we treat relations set to null as removed, so we don't skip it
relatedEntities = [] as ObjectLiteral[];
if (relatedEntities === undefined) // if relation is undefined then nothing to update
return;
const relatedPersistedEntityRelationIds: ObjectLiteral[] = [];
relatedEntities.forEach(relatedEntity => { // by example: relatedEntity is a category here
let relationIdMap = relation.inverseEntityMetadata!.getEntityIdMap(relatedEntity); // by example: relationIdMap is category.id map here, e.g. { id: ... }
let relatedEntitySubject = this.subjects.find(subject => {
return subject.entity === relatedEntity;
});
if (relatedEntitySubject)
relationIdMap = relatedEntitySubject.identifier;
if (!relationIdMap) {
if (!relatedEntitySubject)
return;
relatedEntitySubject.changeMaps.push({
relation: relation.inverseRelation!,
value: subject
});
return;
}
const relationIdInDatabaseSubjectRelation = relatedEntityDatabaseRelationIds.find(relatedDatabaseEntityRelationId => {
return OrmUtils.compareIds(relationIdMap, relatedDatabaseEntityRelationId);
});
if (!relationIdInDatabaseSubjectRelation) {
if (!relatedEntitySubject) {
relatedEntitySubject = new Subject({
metadata: relation.inverseEntityMetadata,
parentSubject: subject,
canBeUpdated: true,
identifier: relationIdMap
});
this.subjects.push(relatedEntitySubject);
}
relatedEntitySubject.changeMaps.push({
relation: relation.inverseRelation!,
value: subject
});
}
relatedPersistedEntityRelationIds.push(relationIdMap!); // track the ids that remain related after this save
});
EntityMetadata
.difference(relatedEntityDatabaseRelationIds, relatedPersistedEntityRelationIds)
.forEach(removedRelatedEntityRelationId => { // by example: removedRelatedEntityRelationId is category that was bind in the database before, but now its unbind
const removedRelatedEntitySubject = new Subject({
metadata: relation.inverseEntityMetadata,
parentSubject: subject,
identifier: removedRelatedEntityRelationId,
});
if (!relation.inverseRelation || relation.inverseRelation.orphanedRowAction === "nullify") {
removedRelatedEntitySubject.canBeUpdated = true;
removedRelatedEntitySubject.changeMaps = [{
relation: relation.inverseRelation!,
value: null
}];
} else if (relation.inverseRelation.orphanedRowAction === "delete") {
removedRelatedEntitySubject.mustBeRemoved = true;
}
this.subjects.push(removedRelatedEntitySubject);
});
}
The core idea is to compare the databaseEntity just loaded from the database with the entity we are saving: related rows that exist in databaseEntity but are missing from entity are marked for removal (or nullification) and wrapped in subjects of their own. Finally, all subjects that need to be saved or removed are handed to a SubjectExecutor; a transaction is opened with await queryRunner.startTransaction(), SubjectExecutor.execute runs over every subject — inserting what must be inserted, updating what must be updated, removing what must be removed — and the transaction is committed. That completes the save flow.
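To tie this back to the sample, here is a hedged sketch of what the flow means in practice. The entity names come from the sample; the SQL in the comments is only a rough description, and the orphanedRowAction option shown is illustrative.

```typescript
import { createConnection } from "typeorm";
import { Post } from "./entity/Post";               // sample entity
import { PostDetails } from "./entity/PostDetails"; // sample entity

createConnection().then(async connection => {
    // Cascade insert: details has no id yet, so a subject is built for it too and
    // the transaction roughly performs an INSERT for details followed by an
    // INSERT for post that carries the freshly generated foreign key.
    const details = new PostDetails();
    details.authorName = "Umed";

    const post = new Post();
    post.title = "hello";
    post.text = "Hello how are you?";
    post.details = details;
    await connection.getRepository(Post).save(post);

    // Orphan handling on the inverse side: saving details with an empty posts
    // array declares that no post should reference it any more. Whether the old
    // posts are deleted or get a NULL foreign key depends on the relation's
    // orphanedRowAction option ("delete" or "nullify").
    details.posts = [];
    await connection.getRepository(PostDetails).save(details);
});
```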
How find with relations works
After the analysis above, it is easy to guess how relations are loaded. Just as with save, the full entityMetadata lets us resolve any relation: as long as we declared many-to-one, one-to-one and similar relations, the metadata holds everything about them, so fetching relation data during a query is straightforward. As for performance, getQuery shows that loading relations is simply a JOIN; deep multi-table joins scan a lot of data, so the performance question is really a JOIN question. If performance matters, using queryBuilder and constraining the join with ON and WHERE conditions leaves plenty of room for improvement.
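A hedged illustration of the two styles (the conditions, alias names and the imported entity path are made up for the example):

```typescript
import { createConnection } from "typeorm";
import { Post } from "./entity/Post"; // sample entity

createConnection().then(async connection => {
    // Convenient but unconstrained: every related PostDetails row is joined in.
    const posts = await connection.getRepository(Post).find({
        relations: ["details"],
    });

    // QueryBuilder: the same join, but ON/WHERE conditions narrow what is scanned.
    const filteredPosts = await connection
        .getRepository(Post)
        .createQueryBuilder("post")
        .leftJoinAndSelect("post.details", "details", "details.authorName = :author", { author: "Umed" })
        .where("post.title = :title", { title: "hello" })
        .getMany();

    console.log(posts.length, filteredPosts.length);
});
```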
How queryBuilder executes
Whether it is schema synchronization or querying and saving data, everything discussed above ultimately runs through queryBuilder — it is the final stop for every database operation — so let's analyze it. Set a breakpoint at any createQueryBuilder call; the essential step is new SelectQueryBuilder(this, entityOrRunner as QueryRunner|undefined). Looking at QueryBuilder, the base class of SelectQueryBuilder, it contains:
/**
* Contains all properties of the QueryBuilder that needs to be build a final query.
*/
readonly expressionMap: QueryExpressionMap;
This single property holds everything the final query will be built from. SelectQueryBuilder itself contains an enormous number of methods — all the methods available when using queryBuilder, and it is precisely this breadth that makes queryBuilder so flexible: innerJoin, innerJoinAndSelect, andWhere, select and so on. Every one of them exists to populate expressionMap. Let's briefly look at how .where() works:
where(where: Brackets|string|((qb: this) => string)|ObjectLiteral|ObjectLiteral[], parameters?: ObjectLiteral): this {
this.expressionMap.wheres = []; // don't move this block below since computeWhereParameter can add where expressions
const condition = this.getWhereCondition(where);
if (condition)
this.expressionMap.wheres = [{ type: "simple", condition: condition }];
if (parameters)
this.setParameters(parameters);
return this;
}
It resolves the where condition and assigns it to expressionMap.wheres. The other intermediate queryBuilder methods work the same way; the SQL is only actually built and executed by terminal methods such as getOne() and execute(). Let's take a quick look at execute():
const [sql, parameters] = this.getQueryAndParameters();
const queryRunner = this.obtainQueryRunner();
try {
return await queryRunner.query(sql, parameters); // await is needed here because we are using finally
} finally {
if (queryRunner !== this.queryRunner) { // means we created our own query runner
await queryRunner.release();
}
}
Simple enough: the queryRunner directly executes the SQL obtained from this.getQueryAndParameters(). Let's follow getQueryAndParameters():
const query = this.getQuery();
const parameters = this.getParameters();
return this.connection.driver.escapeQueryWithParameters(query, parameters, this.expressionMap.nativeParameters);
It fetches the SQL, fetches the parameters we passed in, and combines them. Here is getQuery():
getQuery(): string {
let sql = this.createComment();
sql += this.createSelectExpression();
sql += this.createJoinExpression();
sql += this.createWhereExpression();
sql += this.createGroupByExpression();
sql += this.createHavingExpression();
sql += this.createOrderByExpression();
sql += this.createLimitOffsetExpression();
sql += this.createLockExpression();
sql = sql.trim();
if (this.expressionMap.subQuery)
sql = "(" + sql + ")";
return sql;
}
Using the expressionMap assembled earlier, it concatenates the SQL clause by clause — the logic is very clear. The parameters are attached at the end, the SQL is executed, and the queryBuilder run is complete.
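Putting it together, a hedged example of how a chain of builder calls ends up as a single SQL string. The entity, the imported path and the SQL in the comment are illustrative; the exact quoting and LIMIT syntax depend on the driver.

```typescript
import { createConnection } from "typeorm";
import { Post } from "./entity/Post"; // sample entity

createConnection().then(async connection => {
    const qb = connection
        .getRepository(Post)
        .createQueryBuilder("post")
        .select(["post.id", "post.title"])
        .where("post.title = :title", { title: "hello" })
        .orderBy("post.id", "DESC")
        .take(10);

    // Each call above only filled qb.expressionMap; getQuery() is where the
    // clauses are concatenated, producing roughly:
    //   SELECT post.id, post.title FROM post post
    //   WHERE post.title = ? ORDER BY post.id DESC LIMIT 10
    console.log(qb.getQuery());

    const posts = await qb.getMany();
    console.log(posts);
});
```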
Summary
TypeORM gives us decorators for describing table structure, builds complete database metadata from them, and bases every subsequent operation on that metadata. The source layout is actually quite clear — essentially a handful of very large classes. On top of that, TypeORM offers convenient schema synchronization, migration scripting, relation modeling and more, which greatly improves the efficiency of developing and maintaining a project. Next, I may write an article on good practices for using typeorm together with nestjs — stay tuned!