package constraints
Type Members
- class Check extends BaseConstraint
A CHECK constraint.
A CHECK constraint defines a condition that each row in a table must satisfy. Connectors can define such constraints either in SQL (Spark SQL dialect) or using a predicate if the condition can be expressed using a supported expression. A CHECK constraint can reference one or more columns. Such a constraint is considered violated if its condition evaluates to FALSE, but not to NULL. The search condition must be deterministic and cannot contain subqueries or certain functions such as aggregates or UDFs.
Spark supports enforced and not enforced CHECK constraints, allowing connectors to control whether data modifications that violate the constraint must fail. Each constraint is either valid (the existing data is guaranteed to satisfy the constraint), invalid (some records violate the constraint), or unvalidated (the validity is unknown). If the validity is unknown, Spark will check #rely() to see whether the constraint is believed to be true and can be used for query optimization.
- Annotations
- @Evolving()
- Since
4.1.0
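The FALSE-but-not-NULL violation rule above follows SQL three-valued logic. A minimal Scala sketch of that rule (not the Spark API; `violatesCheck` and the `Option[Boolean]` encoding are illustrative assumptions, with `None` standing in for a NULL condition result):

```scala
// Sketch of CHECK violation semantics: a row violates the constraint
// only when the condition is definitively FALSE; a TRUE or unknown
// (NULL, modeled as None) result does not count as a violation.
def violatesCheck(conditionResult: Option[Boolean]): Boolean =
  conditionResult.contains(false)
```

So `violatesCheck(Some(true))` and `violatesCheck(None)` both pass, while only `violatesCheck(Some(false))` reports a violation.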
- trait Constraint extends AnyRef
A constraint that restricts states of data in a table.
- Annotations
- @Evolving()
- Since
4.1.0
- class ForeignKey extends BaseConstraint
A FOREIGN KEY constraint.
A FOREIGN KEY constraint specifies one or more columns (referencing columns) in a table that refer to corresponding columns (referenced columns) in another table. The referenced columns must form a UNIQUE or PRIMARY KEY constraint in the referenced table. For this constraint to be satisfied, each row in the table must contain values in the referencing columns that exactly match values of a row in the referenced table.
Spark doesn't enforce FOREIGN KEY constraints but leverages them for query optimization. Each constraint is either valid (the existing data is guaranteed to satisfy the constraint), invalid (some records violate the constraint), or unvalidated (the validity is unknown). If the validity is unknown, Spark will check #rely() to see whether the constraint is believed to be true and can be used for query optimization.
- Annotations
- @Evolving()
- Since
4.1.0
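The matching requirement above can be sketched in Scala (not the Spark API; `satisfiesForeignKey` and the `Option` encoding of NULL are illustrative assumptions, using the SQL "simple match" rule where a referencing row with any NULL key value passes):

```scala
// Sketch of FOREIGN KEY satisfaction: every referencing row either
// contains a NULL (None) in its key, or its full key tuple must match
// a row of the referenced UNIQUE / PRIMARY KEY columns.
def satisfiesForeignKey(
    referencing: Seq[Seq[Option[Any]]],
    referenced: Seq[Seq[Any]]): Boolean = {
  val referencedKeys = referenced.toSet
  referencing.forall { row =>
    row.exists(_.isEmpty) ||                 // any NULL => trivially satisfied
    referencedKeys.contains(row.map(_.get))  // full key must exist in referenced table
  }
}
```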
- class PrimaryKey extends BaseConstraint
A PRIMARY KEY constraint.
A PRIMARY KEY constraint specifies one or more columns as a primary key. Such a constraint is satisfied if and only if no two rows in a table have the same non-null values in the primary key columns and none of the values in the specified column or columns are NULL. A table can have at most one primary key.
Spark doesn't enforce PRIMARY KEY constraints but leverages them for query optimization. Each constraint is either valid (the existing data is guaranteed to satisfy the constraint), invalid (some records violate the constraint), or unvalidated (the validity is unknown). If the validity is unknown, Spark will check #rely() to see whether the constraint is believed to be true and can be used for query optimization.
- Annotations
- @Evolving()
- Since
4.1.0
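The two conditions above (no NULL key values, no duplicate key tuples) can be sketched in Scala (not the Spark API; `satisfiesPrimaryKey` and the `Option` encoding of NULL are illustrative assumptions):

```scala
// Sketch of PRIMARY KEY satisfaction: every key value must be
// non-null (no None), and every full key tuple must be unique.
def satisfiesPrimaryKey(keyRows: Seq[Seq[Option[Any]]]): Boolean = {
  val noNulls = keyRows.forall(_.forall(_.isDefined))
  noNulls && {
    val keys = keyRows.map(_.map(_.get))
    keys.distinct.size == keys.size  // no duplicate key tuples
  }
}
```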
- class Unique extends BaseConstraint
A UNIQUE constraint.
A UNIQUE constraint specifies one or more columns as unique columns. Such a constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.
Spark doesn't enforce UNIQUE constraints but leverages them for query optimization. Each constraint is either valid (the existing data is guaranteed to satisfy the constraint), invalid (some records violate the constraint), or unvalidated (the validity is unknown). If the validity is unknown, Spark will check
#rely() to see whether the constraint is believed to be true and can be used for query optimization.
- Annotations
- @Evolving()
- Since
4.1.0
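Unlike PRIMARY KEY, the rule above only compares non-null key values, so rows containing a NULL never conflict. A minimal Scala sketch (not the Spark API; `satisfiesUnique` and the `Option` encoding of NULL are illustrative assumptions):

```scala
// Sketch of UNIQUE satisfaction: rows with a NULL (None) in any
// unique column are ignored; among fully non-null key tuples, each
// may appear at most once.
def satisfiesUnique(keyRows: Seq[Seq[Option[Any]]]): Boolean = {
  val nonNullKeys = keyRows.filter(_.forall(_.isDefined)).map(_.map(_.get))
  nonNullKeys.distinct.size == nonNullKeys.size
}
```

For example, two rows whose unique column is NULL do not violate the constraint, while two rows with the same non-null value do.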