A relation that distinguishes one tuple from another is binary; in fact, it is an *equivalence relation*. Equivalence relations correspond to sets of equivalence classes or, in other words, *partitions* of a set. More than one equivalence relation can be defined on a single relation X: one can consider equivalence classes of tuples projected onto a single attribute (or a set of attributes). This construction proves useful for formalizing *functional dependency*.

We start with the well-known fact that partitions form a *lattice*. The lattice *meet* operation is partition intersection. For example, for partitions of the set `{1,2,3}`:

`(1 | 2 3) ^ (1 2 | 3) = 1 | 2 | 3`

The lattice *join* definition is a little more involved: the join's blocks are the equivalence classes of elements reachable via transitive closure across the blocks of both partitions. For example, for partitions of the set `{1,2,3,4,5,6}`:

`(1 2 | 3 4 | 5 | 6) v (1 | 2 3 | 4 5 | 6) = 1 2 3 4 5 | 6`

Here `5` is reachable from `1` via the chain `1->2->3->4->5`: both `1` and `2` are in the same equivalence class of the first partition, then `2` and `3` are in the same equivalence class of the second partition, and so on.
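Both operations are easy to compute directly. Here is a minimal Python sketch (the set-of-blocks representation and the `meet`/`join` helpers are my own illustration, not part of QBQL):

```python
from itertools import product

def meet(p, q):
    """Partition meet: the nonempty pairwise intersections of blocks."""
    return [a & b for a, b in product(p, q) if a & b]

def join(p, q):
    """Partition join: equivalence classes of the transitive closure
    of 'lies in a common block of p or q'."""
    merged = []
    for b in list(p) + list(q):
        b = set(b)
        rest = []
        for c in merged:
            if b & c:
                b |= c       # absorb every existing class that overlaps b
            else:
                rest.append(c)
        merged = rest + [b]
    return merged

# The join example from the text: 5 ends up with 1 via 1->2->3->4->5.
p = [{1, 2}, {3, 4}, {5}, {6}]
q = [{1}, {2, 3}, {4, 5}, {6}]
print(sorted(map(sorted, join(p, q))))   # [[1, 2, 3, 4, 5], [6]]

# The meet example from the text.
m1 = [{1}, {2, 3}]
m2 = [{1, 2}, {3}]
print(sorted(map(sorted, meet(m1, m2)))) # [[1], [2], [3]]
```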

How can partitions be applied to dependency theory? Consider the following relation, written in QBQL notation:

```
X = [p q r]
     0 a 0
     0 a 1
     1 c 0
     1 c 1
     2 a 0
;
```

Relation partitioning **x#y** is a binary operation between two relations `x` and `y`. It partitions the tuples of the left-hand relation `x` based on the values of the attributes of the right-hand relation `y`. Therefore the result doesn't depend on the content of `y`, which may be assumed to be empty. In the above example we have

```
X#R00   = <0,a,0> <0,a,1> <1,c,0> <1,c,1> <2,a,0>;
X#[p]   = <0,a,0> <0,a,1> | <1,c,0> <1,c,1> | <2,a,0>;
X#[q]   = <0,a,0> <0,a,1> <2,a,0> | <1,c,0> <1,c,1>;
X#[r]   = <0,a,0> <1,c,0> <2,a,0> | <0,a,1> <1,c,1>;
X#[p q] = <0,a,0> <0,a,1> | <1,c,0> <1,c,1> | <2,a,0>;
X#[q r] = <0,a,0> <2,a,0> | <0,a,1> | <1,c,0> | <1,c,1>;
X#[r p] = <0,a,0> | <0,a,1> | <1,c,0> | <1,c,1> | <2,a,0>;
X#R10   = <0,a,0> | <0,a,1> | <1,c,0> | <1,c,1> | <2,a,0>;
```

which is the actual output of the QBQL program:

```
X = [p q r]
     0 a 0
     0 a 1
     1 c 0
     1 c 1
     2 a 0
;
X#R00; X#[p]; X#[q]; X#[r]; X#[p q]; X#[q r]; X#[r p]; X#R10;
```
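The partitioning operation can be sketched in a few lines of Python (the dict-based relation encoding and the `partition` helper are my own illustration; only the block counts are checked against the QBQL output above):

```python
# A relation is a list of dicts; only the attribute *names* of the
# right-hand operand matter, matching the remark that its content
# may be assumed empty.
X = [
    {"p": 0, "q": "a", "r": 0},
    {"p": 0, "q": "a", "r": 1},
    {"p": 1, "q": "c", "r": 0},
    {"p": 1, "q": "c", "r": 1},
    {"p": 2, "q": "a", "r": 0},
]

def partition(rel, attrs):
    """Group the tuples of rel by their projection onto attrs."""
    blocks = {}
    for t in rel:
        key = tuple(t[a] for a in attrs)
        blocks.setdefault(key, []).append(t)
    return list(blocks.values())

print(len(partition(X, [])))               # 1: X#R00, a single block
print(len(partition(X, ["p"])))            # 3: X#[p]
print(len(partition(X, ["q", "r"])))       # 4: X#[q r]
print(len(partition(X, ["p", "q", "r"])))  # 5: X#R10, all singletons
```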

Here is the partitions lattice diagram:

A lattice, in general, defines an order relation via the following equivalences:

`x < y <-> x v y = y.`

`x < y <-> x ^ y = x.`

Informally, the element `y` is greater than `x` when it is reachable from the node `x` and positioned higher in the lattice diagram. The higher we ascend in the lattice order, the coarser the partition granularity becomes.

Functional dependency `x->y` between two arbitrary relations `x` and `y` with respect to the host relation `r` is defined as the constraint:

`r#x < r#y.`

Informally, the partitioning of the relation `r` defined by tuple values projected onto the attributes of the relation `x` has finer granularity than the partitioning induced by the attributes of `y`.
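This definition translates directly into a refinement test. A short Python sketch (the `part`, `refines`, and `fd_holds` helpers are illustrative, not QBQL):

```python
# A functional dependency x -> y holds in rel iff the partition by x
# refines the partition by y: each x-block lies inside some y-block.
X = [(0, "a", 0), (0, "a", 1), (1, "c", 0), (1, "c", 1), (2, "a", 0)]
P, Q, R = 0, 1, 2  # column positions for attributes p, q, r

def part(rel, attrs):
    """Partition rel by the projection onto the given attribute positions."""
    blocks = {}
    for t in rel:
        blocks.setdefault(tuple(t[a] for a in attrs), set()).add(t)
    return list(blocks.values())

def refines(p, q):
    """True iff every block of p lies inside some block of q (i.e. p < q)."""
    return all(any(b <= c for c in q) for b in p)

def fd_holds(rel, x, y):
    return refines(part(rel, x), part(rel, y))

print(fd_holds(X, [P], [Q]))  # True:  p -> q holds in X
print(fd_holds(X, [Q], [P]))  # False: q -> p fails (q=a occurs with p=0 and p=2)
```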

Our critical observation is that the partition lattice and the relational lattice are connected via a semilattice homomorphism. Specifically:

`r#(x ^ y) = (r#x) ^ (r#y).`

is a meet homomorphism (^H). The dual assertion

`r#(x v y) = (r#x) v (r#y).`

which would have elevated it to a full lattice homomorphism, is not valid.
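The meet homomorphism can be spot-checked on the running example, assuming that for header-only relations the relational-lattice meet `[q] ^ [r]` is simply `[q r]` (the union of the attribute sets), while the partition-lattice meet intersects blocks pairwise:

```python
from itertools import product

X = [(0, "a", 0), (0, "a", 1), (1, "c", 0), (1, "c", 1), (2, "a", 0)]

def part(rel, attrs):
    """Partition rel by projection onto the given attribute positions."""
    blocks = {}
    for t in rel:
        blocks.setdefault(tuple(t[a] for a in attrs), set()).add(t)
    return [frozenset(b) for b in blocks.values()]

def pmeet(p, q):
    """Partition-lattice meet: nonempty pairwise block intersections."""
    return [a & b for a, b in product(p, q) if a & b]

lhs = part(X, [1, 2])                    # X#([q] ^ [r]) = X#[q r]
rhs = pmeet(part(X, [1]), part(X, [2]))  # (X#[q]) ^ (X#[r])
print(set(lhs) == set(rhs))              # True
```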

This meet homomorphism allows us to prove the *Armstrong axioms*:

**Reflexivity:**

`x < y -> r#x < r#y.`

**Augmentation:**

`r#x < r#y -> r#(x^z) < r#(y^z).`

**Transitivity:**

`r#x < r#y & r#y < r#z -> r#x < r#z.`

The reflexivity proof starts by rewriting the inequality hypothesis of the implication in terms of lattice meet and equality:

`x ^ y = x.`

We need to prove partition inequality which, likewise, can be rewritten as equality in lattice terms:

`r#x ^ r#y = r#x. `

This is accomplished in just two steps:

`r#x ^ r#y = (^H) r#(x ^ y) = (given) r#x`

The augmentation proof follows a similar route. Transitivity is the easiest of all: it is just the transitivity of the partition lattice order.

**Formal Concept Analysis**

Another way to formalize functional dependencies is to convert the database relation into a boolean matrix. The matrix columns retain their attribute structure, while the rows correspond to differences between pairs of tuples. In our running example, the relation

```
X = [p q r]
     0 a 0   -- t1
     0 a 1   -- t2
     1 c 0   -- t3
     1 c 1   -- t4
     2 a 0   -- t5
;
```

is converted into

```
X = [p   q   r ]
     =   =   !=   --(t1,t2)
     !=  !=  =    --(t1,t3)
     !=  !=  !=   --(t1,t4)
     !=  =   =    --(t1,t5)
     !=  !=  !=   --(t2,t3)
     !=  !=  =    --(t2,t4)
     !=  =   !=   --(t2,t5)
     =   =   !=   --(t3,t4)
     !=  !=  =    --(t3,t5)
     !=  !=  !=   --(t4,t5)
;
```

Here the first row `<=,=,!=>` is the difference between `<0,a,0>` and `<0,a,1>`. We can eliminate duplicate rows in the matrix, or just feed it into the Concept Explorer program as it is:

and then generate the FCA lattice:

The lattice is isomorphic to the lattice of partitions, which leads to the conclusion that partitions are formal concepts.
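The conversion to the difference matrix is mechanical; here is a short Python sketch (the tuple encoding and variable names are illustrative):

```python
from itertools import combinations

X = [(0, "a", 0), (0, "a", 1), (1, "c", 0), (1, "c", 1), (2, "a", 0)]

# One row per unordered pair of tuples, comparing attribute by attribute.
rows = [
    tuple("=" if a == b else "!=" for a, b in zip(t, u))
    for t, u in combinations(X, 2)
]

print(rows[0])         # ('=', '=', '!='): the difference of t1 and t2
print(len(rows))       # 10 rows, one per C(5,2) tuple pair
print(len(set(rows)))  # 5 distinct rows after eliminating duplicates
```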

August 6, 2013 at 9:40 am

Hi Vadim, I’ve recently come across this:

Functional dependencies in relational databases: A lattice point of view (1992)

http://citeseerx.ist.psu.edu/showciting?cid=964420

(I don’t think it came up when I searched for relational lattice material a few months ago.)

From what I can make out, they use Armstrong’s axioms to generate the lattice of dependencies. But the paper doesn’t seem to treat relation values as a lattice.

Judging by the paper’s references, Demetrovics and team have done quite a bit of work around relations and lattice — but it’s mostly behind pay walls.

August 7, 2013 at 11:05 am

Hi Vadim, I’ve finally put my finger on what’s been confusing the heck out of me.

On this Functional Dependencies page you have:

“Lattice, in general, defines order relation via the following equivalences …

x < y <-> x ^ y = x.”

I understand that is the usual equivalence given (for example on Wikipedia).

In your 2009 paper, section 8 re Dependency Theory you have:

“This section uses the standard definition of lattice order

x < y <-> x ^ y = y.”

And that is the ordering in most of your papers and in the Rel Lattice web pages.

This ordering puts R10 as the supremum (top of the diagram) and R01 (aka Dee) as the infimum (bottom of the diagram). Alongside you have a diagram of a boolean algebra lattice. This has 111 (True) at the top and 000 (False) at the bottom.

R01 is usually taken to be True (in D&D’s algebra, for example). That makes sense: R10 is the identity for v.

Now, R10 ^ x = R10 (taking ^ as lattice meet = Natural Join).

And R01 ^ x = x

I expected Rel lattice ordering to work the other way up to how you show it. For example, I expected the sub-lattice for a single relation to work like the ordering for the powerset of all possible tuples, with < meaning subset and ^ meaning intersection (Nat Join).

But section 8 discussing IND shows s is a subset of r as r < s.

So what is the basis for your lattice ordering?

Is the ordering on this Func Deps page different?

Is that because you're dealing with intersections of partitions?

August 7, 2013 at 5:28 pm

You are correct, I flipped the order to the conventional definition since that 2009 paper. Note that the lattice order must generalize a subset order either on attributes or on tuples, and initially I thought that generalizing the attribute order was “more natural.”

August 9, 2013 at 10:05 am

Whew! I’m glad I’m not ‘losing it’.

I don’t want to take away from the amazing work you’ve put together with the Relational Lattice.

I do think there might be more insights (and more help for people coming to it) in following more of the ‘textbook’ approach.

For example, use a conventional definition of complement:

x ^ x’ = R10

x v x’ = R01

(This is something like your .)

Then we could see that the complement is not unique in general. (To be precise: the complement’s heading is always unique; the content is unique (empty!) when x is non-empty.)

We could use the axiom of choice to determine a unique complement. (I’d suggest we make it such that:

R00′ = R11 — because R11′ = R00

But in general the Involution law fails.)

We could see that it is futile to expect distributivity.

But it might yield insights to think in terms of modular pairs (intervals), or properties weaker than distributivity.

August 12, 2013 at 6:33 pm

WordPress has eaten part of your comment, but I guess you suggested that the “boolean-like” complement you introduce is equivalent to the composition NOT*INV, where NOT is defined uniquely as

`NOT(x) ^ x = x ^ R00.`

`NOT(x) v x = x v R11.`

and INV as

`INV(x) ^ x = x ^ R11.`

`INV(x) v x = x v R00.`

Formally we have:

`x ^ NOT(INV(x)) = R10. (1)`

`x v NOT(INV(x)) = R01. (2)`

For comparison,

`x v INV(NOT(x)) = R01.`

is also true, but

`x ^ INV(NOT(x)) = R10.`

fails.

In one of your previous comments you mentioned that R01 is the analog of boolean True, while R00 (not R10) is False — one more reason to be dissatisfied with identities (1) and (2) as axioms defining a “boolean-like” complement.

August 13, 2013 at 3:29 am

Thanks Vadim, then NOT*INV is what I’m looking for, and I’m satisfied with (1) and (2).

False is R10. It’s common (but not correct) in the D&D algebra to take R00 as False. Indeed any empty relation is ‘False enough’. But R00 is not ‘as False as possible’; R10 is ‘Falser’:

R00 ^ R10 = R10.

x ^ R10 = R10, for any x.

x v R10 = x. (Identity for v.)

So R10 is ‘absolutely False’.
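These identities can be spot-checked with a tiny relational-lattice sketch over a fixed attribute universe, taking `^` as natural join and `v` as inner union (the representation and helper names are my own illustration):

```python
U = frozenset({"p", "q"})  # the attribute universe for this toy example

def rel(header, tuples):
    """A relation is (header, set of tuples); tuples are frozen attr->value maps."""
    return (frozenset(header), frozenset(frozenset(t.items()) for t in tuples))

def meet(x, y):
    """Natural join: union the headers, keep pairs of tuples that agree."""
    hx, tx = x; hy, ty = y
    tuples = set()
    for a in tx:
        for b in ty:
            da, db = dict(a), dict(b)
            if all(da[k] == db[k] for k in da.keys() & db.keys()):
                tuples.add(frozenset({**da, **db}.items()))
    return (hx | hy, frozenset(tuples))

def join(x, y):
    """Inner union: intersect the headers, union the projected tuples."""
    hx, tx = x; hy, ty = y
    h = hx & hy
    proj = lambda ts: {frozenset((k, v) for k, v in t if k in h) for t in ts}
    return (h, frozenset(proj(tx) | proj(ty)))

R10 = rel(U, [])                        # full header, no tuples
x = rel({"p"}, [{"p": 0}, {"p": 1}])

print(meet(x, R10) == R10)  # True: x ^ R10 = R10, for any x
print(join(x, R10) == x)    # True: x v R10 = x (identity for v)
```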

August 13, 2013 at 7:47 pm

Glad you are on my side. I had a similar discussion with Marshall Spight (who also took the D&D position that R00 = false). My argument was that the relational lattice is weaker than boolean algebra; therefore the dyadic intuition about true and false, although helpful, eventually breaks.