It is common wisdom that the database field is firmly grounded in two math disciplines: predicate logic and set theory. However, neither logic nor set theory is a dominant math subject. Just counting tags at a popular math Q&A site can give a rough idea of the importance of an individual math topic, and it is evident that algebraic geometry governs the math world. This field is so rich that it spawned new sub-fields, such as category theory. Furthermore, the same tag counts hint that category theory is more popular than either predicate logic or set theory! Some category theorists have even proposed to rebuild the entire math foundation with the category theory toolkit…

Does algebraic geometry offer any insights to database theory? First, let us recall the basic mathematical objects of algebraic geometry. The basic geometric object is the affine variety — the solution set of a system of polynomial equations, that is, a set of tuples in R^n. Our focus on database applications prompts us to narrow the scope to finite varieties, where the polynomial system has finitely many roots.

OK, this abstract motivation is fine, but given a [database] relation, how do we construct a variety out of it? This becomes clear once we learn the two fundamental operations over varieties (a code sketch of both operations follows the list below).

1. The set intersection of two varieties V and W is a set which is, again, a variety. The defining set of polynomials for V \cap W is the union of the polynomial sets defining V and W.
2. The set union of two varieties V and W is a set which is, again, a variety. The defining set of polynomials for V \cup W is the set of all pairwise products of the polynomials defining V and W.
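
To make the two operations concrete, here is a minimal Python sketch (assuming sympy is available; the two single-point varieties V and W are toy inputs chosen for illustration):

from sympy import symbols, solve

x, y = symbols('x y')

V = [x - 1, y]   # variety V: the single point (1, 0)
W = [x - 2, y]   # variety W: the single point (2, 0)

# intersection: the union of the two defining polynomial sets
print(solve(V + W, [x, y], dict=True))   # [] -- the two points do not meet

# union: all pairwise products of the defining polynomials
print(solve([p * q for p in V for q in W], [x, y], dict=True))
# [{x: 1, y: 0}, {x: 2, y: 0}]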

Union and intersection of varieties are exactly what we need in order to exhibit a variety corresponding to a relation. Consider a unary relation with a single tuple

[x]
 1

Geometrically, it corresponds to a single point on the x axis, so it becomes immediately obvious what equation defines it:

x - 1 = 0

Likewise, the relation

[y]
 1

is defined by

y - 1 = 0

Next, we construct the join of the two relations

[x y]
 1 1

Join is set intersection, and the intersection of varieties gives us the system of defining equations

x - 1 = 0
y - 1 = 0

Let’s expand our example and add one more attribute:

[x y z]
 1 1 1

Our system of equations gains one more constraint:

x - 1 = 0
y - 1 = 0
z - 1 = 0

Now, knowing how to construct “single tuple varieties”, we are ready to move on to relations with more than one tuple. This is accomplished via union. Consider a relation

[x y z]
 1 1 1
 2 1 1

which is a union of the already familiar relation

[x y z]
 1 1 1

and the relation

[x y z]
 2 1 1

To build the union of the varieties we need a polynomial system defining the second variety:

x - 2 = 0
y - 1 = 0
z - 1 = 0

The polynomial system for the union is

(x-1)(x-2) = 0
(y-1)(x-2) = 0
(z-1)(x-2) = 0
(x-1)(y-1) = 0
(y-1)(y-1) = 0
(z-1)(y-1) = 0
(x-1)(z-1) = 0
(y-1)(z-1) = 0
(z-1)(z-1) = 0

Wait a minute, 9(!) equations for a relation with just a couple of tuples — this doesn’t look promising… The critical observation is that not all of these equations are independent. A well-known method to greatly reduce a polynomial system is the Groebner basis. Executing the following command at Wolfram Alpha:

GroebnerBasis[{(x-1)(x-2), (y-1)(x-2), (z-1)(x-2),
(x-1)(y-1), (y-1)(y-1), (z-1)(y-1),
(x-1)(z-1), (y-1)(z-1), (z-1)(z-1)} ,{z, y, x}]

we obtain the system

x^2-3 x+2 = 0
y-1 = 0
z-1 = 0

As an afterthought, this result is obvious. The first equation constrains x to being either 1 or 2, the second equation asserts that y equals 1, while the third one asserts z = 1. It is good to learn the general method, though.
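
For readers without Wolfram Alpha at hand, the same reduction can be sketched in Python with sympy (lex order with generators z > y > x mirrors the {z, y, x} ordering above):

from sympy import symbols, groebner

x, y, z = symbols('x y z')
polys = [(x-1)*(x-2), (y-1)*(x-2), (z-1)*(x-2),
         (x-1)*(y-1), (y-1)*(y-1), (z-1)*(y-1),
         (x-1)*(z-1), (y-1)*(z-1), (z-1)*(z-1)]

# expected basis: z - 1, y - 1, x**2 - 3*x + 2 (up to ordering)
print(groebner(polys, z, y, x, order='lex'))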

With this technique we can proceed and find a variety corresponding to the relation

[x y z]
 1 1 1
 2 1 1
 3 2 1
 3 2 2

-6 + 11 x - 6 x^2 + x^3 = 0
-4 + 3 x - x^2 + 2 y = 0
3 - x - 3 z + x z = 0
2 - 3 z + z^2 = 0
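
The recipe is mechanical enough to automate. Here is a hedged sympy sketch (the helper relation_variety is my own illustration, not a library function) that should reduce to the system above:

from functools import reduce
from sympy import symbols, groebner

def relation_variety(attrs, rows):
    gens = symbols(attrs)
    # one linear system per tuple: attribute minus value
    systems = [[g - v for g, v in zip(gens, row)] for row in rows]
    # fold the tuples together via the pairwise-product union
    union = reduce(lambda F, G: [p * q for p in F for q in G], systems)
    return groebner(union, *reversed(gens), order='lex')

# reduces (up to normalization) to the four-equation system quoted above
print(relation_variety('x y z', [(1, 1, 1), (2, 1, 1), (3, 2, 1), (3, 2, 2)]))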

An inquisitive reader might have already noticed the functional dependency x -> y hiding in plain sight. In the next installment we’ll investigate what functional dependency is from the algebraic geometry perspective.

When it comes to teaching query languages within an introductory database class, a system that supports relational algebra is a must. The first step is ASCII-matizing relational algebra syntax. One approach is to stay faithful to RA notation by adopting LaTeX syntax. Here is an example from the popular Stanford class2go course (registration required):

\select_{20 <= age and gender='female'}
(Eats \join Person);

When implementing support for Relational Algebra in QBQL, we decided not to render queries in their “native” relational algebra form, in favor of a more readable ASCII syntax. Here is the same query adapted to QBQL syntax:

project pizza
select "20 <= age" ^ "gender='female'"
(Eats join Person);

It is instructive to compare it to native QBQL syntax, where one can leverage the full power of Relative Composition (aka Set Intersection Join):

Eats /^ Person /^ "20 <= age" /^ "gender='female'";

With the following input (QBQL syntax):

Eats = [name, pizza]
Amy mushroom
Amy pepperoni
Ben cheese
Ben pepperoni
Cal supreme
Dan cheese
Dan mushroom
Dan pepperoni
Dan sausage
Dan supreme
Eli cheese
Eli supreme
Fay mushroom
Gus cheese
Gus mushroom
Gus supreme
Hil cheese
Hil supreme
Ian pepperoni
Ian supreme

Person = [name, age, gender]
Amy 16 female
Ben 21 male
Cal 33 male
Dan 13 male
Eli 45 male
Fay 21 female
Gus 24 male
Hil 30 female
Ian 18 male

all three queries output

[pizza]
cheese
mushroom
supreme
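
As a sanity check, here is the same restrict-project-join spelled out in plain Python set comprehensions over the data above:

eats = {("Amy", "mushroom"), ("Amy", "pepperoni"), ("Ben", "cheese"),
        ("Ben", "pepperoni"), ("Cal", "supreme"), ("Dan", "cheese"),
        ("Dan", "mushroom"), ("Dan", "pepperoni"), ("Dan", "sausage"),
        ("Dan", "supreme"), ("Eli", "cheese"), ("Eli", "supreme"),
        ("Fay", "mushroom"), ("Gus", "cheese"), ("Gus", "mushroom"),
        ("Gus", "supreme"), ("Hil", "cheese"), ("Hil", "supreme"),
        ("Ian", "pepperoni"), ("Ian", "supreme")}
person = {("Amy", 16, "female"), ("Ben", 21, "male"), ("Cal", 33, "male"),
          ("Dan", 13, "male"), ("Eli", 45, "male"), ("Fay", 21, "female"),
          ("Gus", 24, "male"), ("Hil", 30, "female"), ("Ian", 18, "male")}

# join on name, select on age and gender, project pizza
pizzas = {pizza
          for name, pizza in eats
          for pname, age, gender in person
          if pname == name and 20 <= age and gender == "female"}
print(sorted(pizzas))   # ['cheese', 'mushroom', 'supreme']
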
Equations are ubiquitous in mathematics, physics, and computer science. In the database field, an optimizer rewriting a query into the standard restrict-project-join form leverages relational algebra identities. This craft of equational reasoning culminates in Automated Theorem Proving systems.

Modern systems, such as Prover9, employ many sophisticated techniques, but as far as equality is concerned, paramodulation rules them all. In a seminal paper Robinson & Wos give an example of a problem whose solution takes 47 steps with paramodulation vs. 136 steps with resolution.

The difference can be even more dramatic. Consider deriving the lattice idempotence law from the two absorption axioms:

x ^ (x v y) = x.
x v (x ^ y) = x.

The goal:

x ^ x = x.

It takes Prover9 just a single paramodulation step and 0.00 (+ 0.03) seconds. To compare it with resolution, we have to explicitly write down the equality axioms fitted to the two lattice operations:

x = x.
x != y | y = x.
x != y | y != z | x = z.
x != y | x ^ z = y ^ z.
x != y | z ^ x = z ^ y.
x != y | x v z = y v z.
x != y | z v x = z v y.

This explosion of axioms alone hints that the proof length must increase. It is the magnitude of the increase that caught me by surprise: 98 steps and 101 seconds.

Many people criticize SQL for bloated and inconsistent syntax. However, one must admit that some of its features give Relational Algebra a run for its money. Consider the recent Stack Overflow question “Relational Algebra equivalent of SQL “NOT IN””:

Is there a relational algebra equivalent of the SQL expression NOT IN? For example, if I have the relation:

A1 | A2
x | y
a | b
y | x

I want to remove all tuples in the relation for which A1 is in A2. In SQL I might query:

SELECT * FROM R
WHERE A1 NOT IN (SELECT A2 FROM R);

What is really stumping me is how to subquery inside the relational algebra selection operator, is this possible?:

\sigma_{some subquery here}R

The answer, which was accepted, suggested the query:

R - \rho_{A1,A2} ( \pi_{a_{11},a_{22}} \sigma_{a_{11}=a_{22}} (\rho_{a_{11},a_{12}} R \bowtie \rho_{a_{21},a_{22}} R) )

The length and complexity of this expression is puzzling: one can’t convincingly argue for the inferiority of SQL to “truly relational” languages without being able to crack it.

On close inspection it becomes evident that the author used more renaming than is necessary, so we have:

R - ( \pi_{A1,A2} \sigma_{A1=A2} (\rho_{A1,a_{12}} R \bowtie \rho_{a_{21},A2} R) )

It is still longer and less intuitive than the SQL query. Can Relational Algebra do better than this? Also, if SQL is allowed extra operations which are equivalent to the Relational Algebra antijoin, then perhaps the competition ground is not fair?
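
For reference, the intended semantics is a one-line antijoin; here is a minimal Python sketch of it over the question’s data:

R = {("x", "y"), ("a", "b"), ("y", "x")}

a2_values = {a2 for _, a2 in R}                  # pi_{A2}(R)
print({t for t in R if t[0] not in a2_values})   # antijoin: {('a', 'b')}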

There is one operation — relational division — initially included by Codd in the list of base operations, but excluded later when it was discovered that it can be expressed in terms of the others. We’ll demonstrate that with division-like operators at hand we can achieve a much shorter query.

Modern database theory treats relational division as a set containment join. Set containment join is easy to describe if we temporarily step outside of first normal form relations. Consider the following relations

Certified=[name  skill]
           claire  java
           max  java
           max  sql
           peter  html
           peter  java
Requirements=[job  skill]
              appl  java
              appl  sql
              teaching  java
              web  html

We can nest their common attribute skill into a set-valued attribute

Certified=[name  skillSet]
           claire  {java}
           max  {java,sql}
           peter  {html,java}
Requirements=[job  skillSet]
              appl  {java,sql}
              teaching  {java}
              web  {html}

Next, for each pair of applicant and job we check whether the applicant’s skills match the job requirements, that is, whether the requirement set is a subset of the skill set, and keep those pairs which meet this criterion.
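
Here is a minimal Python sketch of this check over the nested form (plain dictionaries stand in for the nested relations):

certified = {"claire": {"java"}, "max": {"java", "sql"},
             "peter": {"html", "java"}}
requirements = {"appl": {"java", "sql"}, "teaching": {"java"},
                "web": {"html"}}

# keep the (name, job) pairs where the job's requirements are covered
qualified = {(name, job)
             for name, skills in certified.items()
             for job, reqs in requirements.items()
             if reqs <= skills}   # set containment
print(sorted(qualified))
# [('claire', 'teaching'), ('max', 'appl'), ('max', 'teaching'),
#  ('peter', 'teaching'), ('peter', 'web')]

Note that nothing stops us from adding entries with empty sets to these dictionaries — which is exactly the amendment discussed next.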

This is the classic set containment join, but a mathematically inclined person would be tempted to do something about empty sets. What about the other applicants and jobs in their respective domains — shouldn’t they be included into both relations as well? Suppose there is a person named “Joe” who doesn’t have any qualifications, and let the “entry” job assume no requirements at all:

Certified=[name  skillSet]
           claire  {java}
           max  {java,sql}
           peter  {html,java}
           joe {}
Requirements=[job  skillSet]
              appl  {java,sql}
              teaching  {java}
              web  {html}
              entry {}

Then, the amended set containment join would output additional tuples — in particular, Joe qualified for the entry-level job.

Certainly, the amended set join is no longer domain independent, but one may argue that the benefits outweigh the drawbacks. For example, compared to the classic set containment join, the amended operation expressed in terms of the basic ones is much more concise. We’ll show that relational algebra with the amended set containment join greatly simplifies our puzzle query as well.

In general, to sharpen intuition around problems like this it is useful to leverage a relational algebra query engine. As usual, we use QBQL. Most QBQL operations generalize Relational Algebra and, therefore, are easily translatable into the latter.

Here is the QBQL query, step by step. The most surprising is the very first step, where we divide the input relation R over the empty relation with attribute A2:

[A1   A2]
x     y
a     b
y     x
R /< [A2];

which outputs

[A1]
b

Here an explanation may be warranted. In order to evaluate the set containment join we have to partition the attributes of R and [A2] into exclusive and common ones. The attribute set of the result is the exclusive attributes, that is, the symmetric difference of the attribute sets of both inputs. In our example the output header is, as expected, {A1}. Now, let’s evaluate the output content. For R the exclusive/common partition is easy: A1 is the exclusive attribute, while A2 is common. In “de-first-normalized” form the table R looks like this:

[A1 A2]
x {y}
y {x}
a {b}
b {}

For the relation [A2] the common attribute is again A2, while the set of exclusive attributes is empty. How do we set-join such a relation? In de-normalized form the table [A2] looks like this:

[A2 {}]
{} R01

That’s right, sets are relations with a single attribute, and when we are challenged to exhibit a nested relation with no attributes, then we have only two choices: R01 and R00 (i.e. TABLE_DEE and TABLE_DUM). After the set containment join we obtain

[A1 {}]
b R01

where the fictitious empty set column should be excluded from the normalized result.

After obtaining the relation

[A1]
b

the next steps are easy: rename the attribute to A2 and join with R. In QBQL, renaming is technically a set-intersection join (/^) with the binary equality relation (although set-equality and set-containment joins would produce the same result here), so the query becomes:

R ^ ("A1=A2" /^ (R /< [A2]));

and it outputs

[A1 A2]
a b

as expected.
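
The whole pipeline can be traced in plain Python (a sketch of the semantics, not of QBQL internals):

R = {("x", "y"), ("a", "b"), ("y", "x")}

# active domain: the values from both columns
domain = {v for t in R for v in t}

# "de-first-normalized" R: nest A2 by A1 over the whole domain
nested = {d: {a2 for a1, a2 in R if a1 == d} for d in domain}

# dividing by the empty [A2] keeps the values whose nested set is empty
quotient = {d for d, a2s in nested.items() if not a2s}
print(quotient)                                # {'b'}

# rename the attribute to A2 and join back with R
print({t for t in R if t[1] in quotient})      # {('a', 'b')}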

Proving that the standard relational algebra operations are independent is considered an exercise in a standard database theory course. Here is an extract from Prof. Kolaitis’ slides:

Theorem: Each of the five basic relational algebra operations is
independent of the other four, that is, it cannot be expressed by a
relational algebra expression that involves only the other four.

Proof Sketch: (projection and cartesian product only)
- Property of projection:
  It is the only operation whose output may have arity smaller than its input.
  Show, by induction, that the output of every relational algebra expression
  in the other four basic operations is of arity at least as big as the
  maximum arity of its arguments.
- Property of cartesian product:
  It is the only operation whose output has arity bigger than its input.
  Show, by induction, that the output of every relational algebra expression
  in the other four basic operations is of arity at most as big as the
  maximum arity of its arguments.
Exercise: Complete this proof.

This seems convincing until we carry this question over to the relational bilattice. Then it is evident that \{ \wedge,\vee,\lhd NOT \rhd, \lhd INV \rhd \}, that is natural join, generalized union, negation, and inversion, are independent. Indeed, natural join and generalized union are monotonic in both attributes and tuples, so the proof sketch from relational algebra can be employed here as well. Next, negation is a domain-dependent operation which can produce tuples not present in the original relation. Likewise, inversion is an “attribute”-dependent operation which can add attributes not present in the original relation.

What about the second lattice structure — are its operations independent of these four? Outer union can be expressed via De Morgan’s law:
x \lhd OR \rhd y = \lhd NOT \rhd (\lhd NOT \rhd x \wedge \lhd NOT \rhd y )
and likewise its lattice dual:
\lhd NOT \rhd (\lhd NOT \rhd x \vee \lhd NOT \rhd y )

Next, there are 0-ary operations, that is constants R00 (aka TABLE_DUM), R01 (aka TABLE_DEE), R10, and R11 (not to be confused with the universal relation). The latter three can be expressed in terms of the first one

R10 = \lhd INV \rhd R00
R01 = \lhd NOT \rhd R00
R11 = \lhd INV \rhd \lhd NOT \rhd R00

so the question extends to whether the set \{ \wedge,\vee,\lhd NOT \rhd, \lhd INV \rhd, R00 \} is independent. It is instructive to compare our conjecture with Boolean Algebra. There are many independent sets of operations in BA, the most ubiquitous being \{ \wedge,\vee,\lhd NOT \rhd \}. The boolean constants are then expressible in terms of those, e.g.:

0 = \lhd NOT \rhd x \wedge x

Here is a similar relational lattice expression:

R00 = \lhd NOT \rhd(x \vee \lhd NOT \rhd(x \vee \lhd INV \rhd x))

To conclude, relational algebra cast in a genuine algebraic shape contains a set of four independent operations.

A long long proof

May 7, 2012

Decomposition of a relation into a join of its projections serves as motivation for database normalization theory. In relational lattice terms, relation x projected onto the sets of attributes (that is, empty relations) s and t satisfies:

x = (x v s) ^ (x v t)

Let’s investigate the dual perspective and switch the roles of join and inner union:

x = (x ^ s) v (x ^ t)

One particular instance of this identity is known as the fundamental decomposition identity

x = (x ^ R00) v (x ^ R11)

which informally asserts that a relation is an inner union of its header (i.e. set of attributes) and its content (set of tuples). The fundamental decomposition identity can be generalized into

x = (x ^ y`) v (x ^ y')

where ' is relation complement and ` is attribute inversion. Both operations are domain dependent (which might explain the reluctance of some researchers to adopt them). The automated proof of the generalized decomposition identity

% Language Options

op(300, postfix, "`" ).


% Standard lattice axioms
x ^ y = y ^ x.
(x ^ y) ^ z = x ^ (y ^ z).
x ^ (x v y) = x.
x v y = y v x.
(x v y) v z = x v (y v z).
x v (x ^ y) = x.

% Litak
x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
(x ^ y) v (x ^ z) = x ^ ( ((x v z) ^ y) v ((x v y) ^ z) ).
(R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y).

% Unary postfix negation operation ' aka <NOT>
x' ^ x = x ^ R00.
x' v x = x v R11.

% Unary postfix inversion operation ` aka <INV>
x` ^ x = R11 ^ x.
x` v x = R00 v x.

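% Fundamental decomposition identity (header/content split)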
x = (x ^ R00) v (x ^ R11).
% Distributivity of join over outer union aka <OR>
x ^ (y' ^ z')' = ((x ^ y)' ^ (x ^ z)')'.


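% Goal: generalized decomposition identity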
x = (x ^ y') v (x ^ y`).


turned out to be quite CPU-consuming:

% -------- Comments from original proof --------
% Proof 1 at 4894.53 (+ 50.37) seconds.
% Length of proof is 867.
% Level of proof is 47.
% Maximum clause weight is 61.
% Given clauses 10875.

How fast is your system?

Shadows of the Truth by Alexandre Borovik offers an interesting perspective on the human learning experience. The book abounds with examples of concepts that are challenging at an early age but clarified later (sometimes, much later) in life. One of my stumbling blocks was the definition of implicit and explicit dependencies. Now, with the modest help of relational thinking (that is, basic dependency theory) I can report some progress.

The intuition behind implicit and explicit dependencies is clear. For example, given the two functions

y(x,t) = x^2 + t
x(t) = t^2

then, in the formula for y we notice two variables x and t, which suggests that y explicitly depends on t. Compare it with

y(x,t) = x^2
x(t) = t^2

where the formula for y involves the variable x only. Since the second line expresses x in terms of t, y still depends on t, but the dependence is implicit.

The concept of implicit and explicit dependencies surfaces in many places, for example Partial Differential Equations and Noether’s conservation theorems, which are both part of the undergraduate math and physics curriculum. Nevertheless, most textbooks take this concept for granted, perhaps implying that a mathematically mature reader should have no problem understanding it. Wikipedia offers a couple of dedicated articles: Dependent and Independent Variables, giving overview and intuition, and the more ambitious Time-Invariant System, with an attempt at a formal definition.

The concept of a time-invariant system belongs to physics, the idea being that shifting the time variable t doesn’t affect a time-invariant system’s behavior. This is illustrated by a “formal example” in the middle of the page, where by comparing values of y(x,t) at the two arguments t vs. t+\delta they suggest that y(x,t) = t x(t) is time-dependent, while y(x,t) = 10 x(t) is not. Compared to math, physics standards of rigor are lax, so it takes little effort to find a flaw. Take x(t) = t; then y(x,t) = t x(t) = {x(t)}^2, so y(x,t) is time-invariant with a proper choice of x(t)!

Can relational dependency theory suggest any insight? By glancing over standard definitions of functional dependency:

\forall y_{1} \forall y_{2} \forall x : S(x, y_{1}) \wedge S(x, y_{2}) \implies y_{1} = y_{2}

and independence:

\forall y_{1} \forall y_{2} \forall x_{1} \forall x_{2} : S(x_{1}, y_{1}) \wedge S(x_{2}, y_{2}) \implies y_{1} = y_{2}

it becomes obvious that dependency concepts hinge upon equality/inequality (amended with some quantifiers, perhaps), and not upon the domain’s algebraic structure (time shifts). Let’s examine two examples closely:

y(x,t) = t-x
x(t) = t^2-2 t


y(x,t) = x^2-3 x
x(t) = t^2-2 t

Tabulating values at the points t = 0, 1, 2, 3 we get the relations

R=[t   x   y]
   0   0   0
   1  -1   2
   2   0   2
   3   3   0

S=[t   x   y]
   0   0   0
   1  -1   4
   2   0   0
   3   3   0

correspondingly. The second relation S honors the FDs t->x and x->y (and, by Armstrong transitivity, t->y), while the first one R honors only t->x and t->y. Therefore, the formal statement that y does not [explicitly] depend on t is equivalent to the presence of the functional dependency x->y — if not counterintuitive, then terminologically confusing to say the least!
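
Checking a functional dependency mechanically is straightforward; here is a small Python sketch (the helper holds_fd is my own illustration) run over R and S:

def holds_fd(rel, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in rel."""
    seen = {}
    for row in rel:
        key, val = row[lhs], row[rhs]
        if seen.setdefault(key, val) != val:
            return False
    return True

# column indices: 0 = t, 1 = x, 2 = y
R = [(0, 0, 0), (1, -1, 2), (2, 0, 2), (3, 3, 0)]
S = [(0, 0, 0), (1, -1, 4), (2, 0, 0), (3, 3, 0)]

print(holds_fd(R, 1, 2))   # False: x -> y fails in R (x = 0 maps to both 0 and 2)
print(holds_fd(S, 1, 2))   # True:  x -> y holds in S
print(holds_fd(R, 0, 2))   # True:  t -> y holds in R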

