Let X and Y be sets. A function f: X → Y is called an isomorphism if there exists a function g: Y → X such that g ∘ f = id_X and f ∘ g = id_Y.
We also say that f is invertible and that g is the inverse of f. If there exists an isomorphism X → Y we say that X and Y are isomorphic sets and may write X ≅ Y.
We represent each type as a box containing a singular indefinite noun phrase…
The text in the box should:
begin with the word “a” or “an”.
refer to a distinction made and recognizable by the olog’s author.
refer to a distinction for which instances can be documented.
declare all variables in a compound structure.
We consider aspects as functions. For example:
we can say “a woman has an aspect of being a person”, and “a molecule has an aspect (usually called its mass) which is a positive real number.”
However, “a person has a child” is not a valid aspect, since there exist people who do not have children.
Remark: since a father can have many children, we write “a child” −has→ “a father” instead of “a father” −has→ “a child”.
A path in an olog is a head-to-tail sequence of arrows. The number of arrows is the length of the path.
The two paths express the same fact: “the woman among one’s parents is called the mother”, or read the other way, “the mother is the woman among one’s parents” (though that phrasing sounds odd). So the two paths are equivalent and the triangle commutes; the check mark √ is placed in the region bounded by the two paths.
Let X and Y be sets. The product of X and Y, denoted X×Y, is defined as
X×Y={(x,y)∣x∈X,y∈Y}
There are two natural projection functions π_1: X × Y → X and π_2: X × Y → Y.
Now we can construct more examples of non-commutative diagrams:
Lemma(Universal property for product)
Let X and Y be sets. For any set A and functions f:A→X and g:A→Y, there exists a unique function A→X×Y such that the following diagram commutes.
We may write the unique function as ⟨f,g⟩: A → X × Y; explicitly, ⟨f,g⟩(a) = (f(a), g(a)).
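To make the universal property concrete, here is a minimal Python sketch (the sets X, Y, A and the functions f, g are made up for illustration): the induced map ⟨f,g⟩ is forced to send a to (f(a), g(a)), and composing with the projections recovers f and g.

```python
from itertools import product

X, Y, A = {1, 2}, {"a", "b"}, {0, 1, 2}

# The product X × Y and its two projections.
XxY = set(product(X, Y))
pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

# Given f: A -> X and g: A -> Y ...
f = {0: 1, 1: 2, 2: 1}
g = {0: "a", 1: "a", 2: "b"}

# ... the induced map <f, g>: A -> X × Y must be a |-> (f(a), g(a)).
pair = {a: (f[a], g[a]) for a in A}
assert set(pair.values()) <= XxY

# The diagram commutes: pi1 ∘ <f, g> = f and pi2 ∘ <f, g> = g.
assert all(pi1(pair[a]) == f[a] and pi2(pair[a]) == g[a] for a in A)
```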
Definition(coproducts)
Let X and Y be sets. The coproduct of X and Y, denoted X ⊔ Y, is defined as the “disjoint union” of X and Y. If an element lies in both X and Y, it appears twice in X ⊔ Y, tagged so that we can tell which copy came from X and which from Y.
There are two natural inclusion functions i_1: X → X ⊔ Y and i_2: Y → X ⊔ Y.
Example:
{a, b} ⊔ {a, c} = {i_1 a, i_1 b, i_2 a, i_2 c}
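A quick way to see the “copying” is to tag each element with the summand it came from. Below is a small Python sketch; the tags 1 and 2 standing in for i_1 and i_2 are just one possible encoding.

```python
def coproduct(X, Y):
    # Tag each element by the summand it came from; the tags 1 and 2
    # play the role of the inclusions i1 and i2.
    return {(1, x) for x in X} | {(2, y) for y in Y}

i1 = lambda x: (1, x)
i2 = lambda y: (2, y)

XuY = coproduct({"a", "b"}, {"a", "c"})
print(XuY)   # {(1, 'a'), (1, 'b'), (2, 'a'), (2, 'c')}
# Both copies of 'a' survive, and we can tell them apart:
assert i1("a") in XuY and i2("a") in XuY and i1("a") != i2("a")
```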
Lemma(Universal property for coproduct)
Let X and Y be sets. For any set A and functions f:X→A and g:Y→A, there exists a unique function X⊔Y→A such that the following diagram commutes.
Suppose given the diagram of sets and functions below:
Its fiber product is the set
X ×_Z Y := {(x, z, y) ∣ f(x) = z = g(y)}
There are obvious projections π_1: X ×_Z Y → X and π_2: X ×_Z Y → Y. Note that if W = X ×_Z Y then the diagram
commutes. So given the setup diagram we define the pullback of X and Y over Z to be any set W for which we have an isomorphism W ≅ X ×_Z Y.
The corner symbol ┘ indicates that W is the pullback.
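Since the fiber product is given by a concrete formula, it is easy to compute for finite sets. Here is a small Python sketch; the sets and the functions f, g are invented for illustration.

```python
def fiber_product(X, Y, f, g):
    """X ×_Z Y = {(x, z, y) | f(x) = z = g(y)}, for f: X -> Z and g: Y -> Z."""
    return {(x, f[x], y) for x in X for y in Y if f[x] == g[y]}

X, Y = {1, 2, 3}, {"a", "b"}
f = {1: 0, 2: 1, 3: 1}      # f: X -> Z with Z = {0, 1}
g = {"a": 1, "b": 0}        # g: Y -> Z

W = fiber_product(X, Y, f, g)
print(W)                    # {(1, 0, 'b'), (2, 1, 'a'), (3, 1, 'a')}

# The projections are pi1(x, z, y) = x and pi2(x, z, y) = y, and the
# square commutes: f ∘ pi1 = g ∘ pi2 on every element of W.
assert all(f[x] == g[y] for (x, z, y) in W)
```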
Lemma(Universal property for pullback)
Suppose given the diagram as below:
For any set A and commutative solid arrow diagram as below (i.e. t∘f=u∘g)
there exists a unique arrow ⟨f,g⟩_Z: A → X ×_Z Y making everything commute.
Definition(Preimage)
Let f: X → Y be a function and y ∈ Y an element. The preimage of y under f, denoted f⁻¹(y), is the subset f⁻¹(y) := {x ∈ X ∣ f(x) = y}. If Y′ ⊆ Y is any subset, the preimage of Y′ under f, denoted f⁻¹(Y′), is the subset f⁻¹(Y′) := {x ∈ X ∣ f(x) ∈ Y′}.
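Preimages are easy to compute directly from the definition; here is a small Python sketch with an invented function f.

```python
def preimage(f, targets):
    """For f given as a dict X -> Y, return f⁻¹(targets) for a subset of Y."""
    return {x for x, y in f.items() if y in targets}

f = {1: "a", 2: "a", 3: "b", 4: "c"}
print(preimage(f, {"a"}))        # {1, 2}    = f⁻¹(a)
print(preimage(f, {"a", "c"}))   # {1, 2, 4} = f⁻¹({a, c})
```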
Proposition: Consider the diagram below:
where B′ ≅ B ×_C C′ is a pullback. Then there is an isomorphism A ×_B B′ ≅ A ×_C C′. Said another way,
A ×_B (B ×_C C′) ≅ A ×_C C′
Exercise: Consider the diagram on the left below, where both squares commute.
Let W = X ×_Z Y and W′ = X′ ×_Z Y′, and form the diagram to the right. Use the universal property of fiber products to construct a map W → W′ such that all squares commute.
Solution:
Definition
Given sets A and B, a span on A and B is a set R together with functions f:R→A and g:R→B.
Definition
Let A, B and C be sets, and let A ←f− R −g→ B and B ←f′− R′ −g′→ C be spans. Their composite span is given by the fiber product R ×_B R′ as in the diagram below:
Let’s look at an example of spans, one which may be a little confusing at first but reveals the relationship between spans and matrix addition.
Let A and B be sets and let A ← R → B be a span. By the universal property of products, we have a unique map p: R → A × B.
We make a matrix of natural numbers out of this data as follows. The set of rows is A, the set of columns is B. For elements a ∈ A and b ∈ B, the (a,b)-entry is the cardinality of the preimage, |p⁻¹(a,b)|, i.e. the number of elements of R that are sent by p to (a,b). For example, if A = B = {1,2}, R = {(1,1),(1,2),(2,1),(2,2)} ⊆ A × B, and p is the inclusion, then the matrix is the 2×2 matrix all of whose entries equal 1.
Given two spans A ← R → B and A ← R′ → B, by the universal property of coproducts we have a unique span A ← R ⊔ R′ → B making the diagram commute. The matrix corresponding to this new span is the sum of the two matrices corresponding to the spans R and R′.
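Here is a small Python sketch of the span-to-matrix construction and of the fact that the coproduct of spans corresponds to entrywise matrix addition. The spans R and R2 and the orderings of A and B are chosen purely for illustration.

```python
from collections import Counter

A, B = [1, 2], [1, 2]

def span_matrix(R, f, g):
    """Given a span A <-f- R -g-> B, return the matrix whose (a, b) entry
    counts the elements of R sent by p = <f, g> to (a, b)."""
    counts = Counter((f(r), g(r)) for r in R)
    return [[counts[(a, b)] for b in B] for a in A]

# R and R2 sit inside A × B, with f and g the two coordinate projections.
R  = [(1, 1), (1, 2), (2, 1), (2, 2)]
R2 = [(1, 1), (2, 2)]
fst, snd = (lambda r: r[0]), (lambda r: r[1])

M  = span_matrix(R,  fst, snd)   # [[1, 1], [1, 1]]
M2 = span_matrix(R2, fst, snd)   # [[1, 0], [0, 1]]

# The coproduct span R ⊔ R2 has matrix M + M2, entry by entry.
R_plus_R2 = [("r", r) for r in R] + [("r2", r) for r in R2]
M_sum = span_matrix(R_plus_R2, lambda t: fst(t[1]), lambda t: snd(t[1]))
assert M_sum == [[M[i][j] + M2[i][j] for j in range(2)] for i in range(2)]
print(M_sum)                     # [[2, 1], [1, 2]]
```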
Definition
Suppose given two parallel arrows:
The equalizer of f and g is the commutative diagram shown to the right above, where we define Eq(f,g) := {x ∈ X ∣ f(x) = g(x)}, together with its inclusion into X.
Suppose given the diagram of sets and functions below:
Its fiber sum, denoted X ⊔_W Y, is defined as the quotient of X ⊔ W ⊔ Y by the equivalence relation ∼ generated by w ∼ f(w) and w ∼ g(w) for all w ∈ W:
X ⊔_W Y := (X ⊔ W ⊔ Y)/∼
where w ∼ f(w) and w ∼ g(w) for all w ∈ W.
There are obvious inclusions i_1: X → X ⊔_W Y and i_2: Y → X ⊔_W Y. The following diagram commutes:
We define the pushout of X and Y over W to be any set Z for which we have an isomorphism Z ≅ X ⊔_W Y. The corner symbol ┌ indicates that Z is the pushout.
Example: W = {1,2}, X = {0,1,2,3}, Y = {0,1,2,3,4}, with f: W → X given by f(w) = w + 1 and g: W → Y given by g(w) = w. Then the classes of X ⊔_W Y are {2_X, 1_W, 1_Y}, {3_X, 2_W, 2_Y}, and the five singletons 0_X, 1_X, 0_Y, 3_Y, 4_Y, so X ⊔_W Y has seven elements.
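The quotient by a generated equivalence relation can be computed with a union–find pass. Here is a minimal Python sketch of the example above; the tags "X", "W", "Y" are just a way to keep the three summands disjoint.

```python
# Elements of X ⊔ W ⊔ Y, tagged by which summand they come from.
W, X, Y = {1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}
f = lambda w: w + 1          # f: W -> X
g = lambda w: w              # g: W -> Y

elements = [("X", x) for x in X] + [("W", w) for w in W] + [("Y", y) for y in Y]
parent = {e: e for e in elements}

def find(e):
    while parent[e] != e:
        e = parent[e]
    return e

def union(a, b):
    parent[find(a)] = find(b)

# Generate the equivalence relation from w ~ f(w) and w ~ g(w).
for w in W:
    union(("W", w), ("X", f(w)))
    union(("W", w), ("Y", g(w)))

classes = {}
for e in elements:
    classes.setdefault(find(e), []).append(e)
for cls in classes.values():
    print(cls)
# Seven classes: {2_X, 1_W, 1_Y}, {3_X, 2_W, 2_Y}, and the five singletons.
```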
Example: For any sets A and B, consider the evaluation function ev: B^A × A → B, ev(f, a) = f(a).
In particular, ev: 5³ × 3 → 5. Since it is not an isomorphism, the corresponding arithmetic expression is not an equality; writing it as a function 375 → 5 looks weird and meaningless.
Definition
Let V be a set and let P(V) be its powerset. A subset X ⊆ P(V) is called downward-closed if, for every u ∈ X and every u′ ⊆ u, we have u′ ∈ X. We say that X contains all atoms if for every v ∈ V the singleton set {v} is an element of X.
A simplicial complex is a pair (V, X) where V is a set and X ⊆ P(V) is a downward-closed subset that contains all atoms. The elements of X are called simplices. We have a function X → ℕ sending each simplex to its cardinality; the set of simplices with cardinality n + 1 is denoted X_n. Since X contains all atoms, X_0 ≅ V. Sometimes we call 0-simplices vertices and 1-simplices edges.
Example: let n be a natural number and V = n + 1 (the set {1, …, n+1}). Define the n-simplex, denoted Δⁿ, to be (V, P(V)); clearly P(V) is downward-closed and contains all atoms.
We can draw Δ⁰, Δ¹, Δ², Δ³ as follows:
Definition
Define the subobject classifier for Set, denoted Ω, to be the set Ω:={True,False}, together with the function {♡}→Ω sending the unique element to True.
Proposition: Let B be a set. There is an isomorphism P(B) ≅ Hom_Set(B, Ω).
Let f:X→Y be a function. We say that f is surjective if, for all y∈Y there exists some x∈X such that f(x)=y. We say that f is injective if, for all x∈X and all x′∈X with f(x)=f(x′) we have x=x′.
Definition
Let f:X→Y be a function. We say that f is a monomorphism if for all sets A and pairs of functions g, g′:A→X,
if f∘g=f∘g′ then g=g′.
We say that f is an epimorphism if for all sets B and pairs of functions h,h′:Y→B,
if h ∘ f = h′ ∘ f then h = h′.
Proposition: Let f:X→Y be a function. Then f is injective if and only if it is a monomorphism; f is surjective if and only if it is an epimorphism.
Proposition: Let f: X → Y be a monomorphism. Then for any function g: A → Y, the top map f′: X ×_Y A → A in the diagram
is a monomorphism.
Proof: Consider the diagram:
B is an arbitrary set and we have
f′ ∘ m = f′ ∘ n;
we need to prove that m = n. First we construct two morphisms
q = g′ ∘ m,  r = g′ ∘ n.
Then
f ∘ q = f ∘ g′ ∘ m = g ∘ f′ ∘ m = g ∘ f′ ∘ n = f ∘ g′ ∘ n = f ∘ r.
Since f is a monomorphism, q = r. By the universal property of the pullback, there is a unique morphism B → X ×_Y A whose composites with g′ and f′ are q and f′ ∘ m; both m and n are such morphisms, so m = n.
Remark: once the composites from B to A and from B to X are pinned down (here f′ ∘ m = f′ ∘ n and q = r), the morphism from B to X ×_Y A is unique.
Corollary: Let i:A→X be a monomorphism. Then there is a fiber product square of the form:
Hmm… it is quite self-evident if you think it through carefully; the behavior of f and f′ is forced.
Definition
A multiset is a sequence X := (E, B, π) where E and B are sets and π: E → B is a surjective function. We refer to E as the set of element instances of X, to B as the set of element names of X, and to π as the naming function for X. Given an element name x ∈ B, let π⁻¹(x) ⊆ E be its preimage; the number of elements in π⁻¹(x) is called the multiplicity of x.
Example: E = {1, 2, 3, 4, 5}, B = {a, b, c},
π(1) = a, π(2) = a, π(3) = b, π(4) = c, π(5) = c;
then X represents the multiset {a, a, b, c, c}.
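In code, a multiset in this sense is just a surjection, and multiplicities are preimage sizes. A minimal Python sketch of the example above:

```python
from collections import Counter

E = {1, 2, 3, 4, 5}
B = {"a", "b", "c"}
pi = {1: "a", 2: "a", 3: "b", 4: "c", 5: "c"}

# The naming function pi must be surjective onto the set of element names.
assert set(pi.values()) == B

# The multiplicity of a name is the size of its preimage under pi.
multiplicity = Counter(pi.values())
print(multiplicity)   # Counter({'a': 2, 'c': 2, 'b': 1})  i.e. {a, a, b, c, c}
```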
Suppose that X = (E, B, π) and X′ = (E′, B′, π′) are multisets. A mapping from X to X′, denoted f: X → X′, consists of a pair (f_1, f_0) such that f_1: E → E′ and f_0: B → B′ are functions and such that the following diagram commutes:
We can understand this as saying that the map preserves the instance-to-name assignment.
Definition(Relative set)
Let B be a set. A relative set over B, or simply a set over B, is a pair (E, π) such that E is a set and π: E → B is a function. A mapping of relative sets over B, denoted f: (E, π) → (E′, π′), is a function f: E → E′ such that the triangle below commutes, i.e. π = π′ ∘ f.
Definition(Indexed set)
Let A be a set. An A-indexed set is a collection of sets S_a, one for each element a ∈ A; for now we denote this by (S_a)_{a∈A}. If (S′_a)_{a∈A} is another A-indexed set, a mapping of A-indexed sets from (S_a)_{a∈A} to (S′_a)_{a∈A}, denoted
(f_a)_{a∈A}: (S_a)_{a∈A} → (S′_a)_{a∈A},
is a collection of functions f_a: S_a → S′_a, one for each element a ∈ A.
A monoid is a sequence (M, e, ∗), where M is a set, e ∈ M is an element, and ∗: M × M → M is a function, such that the following conditions hold for all m, n, p ∈ M:
m∗e=m
e∗m=m
(m∗n)∗p=m∗(n∗p)
We refer to e as the identity element and to ∗ as the multiplication formula for the monoid. We call the first two rules identity laws and the third rule the associativity law for monoids.
Example (Trivial monoid) There is a monoid with only one element, M = ({e}, e, ∗) where ∗(e, e) = e. We call this monoid the trivial monoid and sometimes denote it 1.
Definition
Let X be a set. A list in X is a pair (n,f) where n∈N is a natural number (called the length of the list) and f:n→X is a function. We may denote such a list by
(n,f)=[f(1),f(2),...,f(n)]
Given two lists L=(n,f),L′=(n′,f′), the concatenation of L and L′ is denoted L++L′=(n+n′,f++f′). The behavior of f++f′:n+n′→X is natural:
(f ++ f′)(i) := f(i) if i ≤ n, and f′(i − n) if n + 1 ≤ i ≤ n + n′.
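Here is a small Python sketch of lists as pairs (n, f) and of the concatenation formula above; the particular lists are made up.

```python
def as_list(n, f):
    """Render the pair (n, f) as [f(1), ..., f(n)]."""
    return [f(i) for i in range(1, n + 1)]

def concat(L, Lp):
    """Concatenate (n, f) ++ (n', f') as defined above."""
    (n, f), (np, fp) = L, Lp
    return (n + np, lambda i: f(i) if i <= n else fp(i - n))

L  = (3, {1: "x", 2: "y", 3: "z"}.get)
Lp = (2, {1: "u", 2: "v"}.get)
n, h = concat(L, Lp)
print(as_list(n, h))   # ['x', 'y', 'z', 'u', 'v']
```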
Definition(Free monoid)
Let X be a set. The free monoid generated by X is the sequence M:=(List(X),[],++), where List(X) is the set of lists of elements in X. We refer to X as the set of generators for the monoid M.
Definition(Presented monoid)
Let G be a finite set, let n be a natural number, and for each 1 ≤ i ≤ n let m_i and m_i′ be elements of List(G). The monoid presented by generators G and relations {(m_i, m_i′) ∣ 1 ≤ i ≤ n} is the monoid M := (M, e, ∗) defined as follows:
Let ∼ denote the equivalence relation on List(G) generated by x ++ m_i ++ y ∼ x ++ m_i′ ++ y for all 1 ≤ i ≤ n and all x, y ∈ List(G);
define M=List(G)/∼
e=[]
∗ is concatenating lists.
Example: Let G={a,b,c,d} representing four buttons that can be pressed. The free monoid List(G) is the set of all ways of pressing buttons, like [a,a,c,c,d] means you push button a, then a, then c, then c, then d.
But suppose you notice that pressing [a,a,c] always has the same effect as pressing [d,d], and that pressing [a,c,a,c] does nothing at all. Then we impose the relations:
([a,a,c],[d,d]),([a,c,a,c],[])
These declarations generate the equivalence relation ∼ used in the presented monoid:
A monoid is called cyclic if it has a presentation involving only one generator; in other words, |G| = 1.
Example: Consider the monoid with G = {a} and no relations. We get the additive monoid of natural numbers, since for example [a,a] ++ [a] = [a,a,a]. With the very strong relation [a] ∼ [] we would instead get the trivial monoid, whose only element is [].
Let M = (M, e, ∗) and M′ = (M′, e′, ∗′) be monoids. A monoid homomorphism f from M to M′, denoted f: M → M′, is a function f: M → M′ satisfying:
f(e)=e′
f(m_1 ∗ m_2) = f(m_1) ∗′ f(m_2)
The set of monoid homomorphisms from M to M′ is denoted Hom_Mon(M, M′).
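Checking the two homomorphism laws is a finite computation for finite monoids. Below is a minimal Python sketch using the monoids (ℤ/4, 0, +) and (ℤ/2, 0, +) and the candidate map f(m) = m mod 2, all chosen for illustration.

```python
from itertools import product

# (Z/4, 0, + mod 4) and (Z/2, 0, + mod 2) as finite monoids.
M,  e,  star  = range(4), 0, (lambda a, b: (a + b) % 4)
Mp, ep, starp = range(2), 0, (lambda a, b: (a + b) % 2)
f = lambda m: m % 2

def is_monoid_hom(f, M, e, star, ep, starp):
    preserves_identity = (f(e) == ep)
    preserves_mult = all(f(star(a, b)) == starp(f(a), f(b))
                         for a, b in product(M, M))
    return preserves_identity and preserves_mult

print(is_monoid_hom(f, M, e, star, ep, starp))   # True
```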
Example: Given any monoid M there is a unique monoid homomorphism from M to the trivial monoid 1. There is also a unique homomorphism 1→M. These facts together have an upshot: we can always construct a homomorphism:
M −!→ 1 −!→ M′
which we call the trivial homomorphism.
Proposition: Let M=(Z,0,+) and M′=(N,0,+). The only monoid homomorphism f:M→M′ sends every element m∈Z to 0∈N.
Proposition: Let M = (M, e, ∗) and M′ = (M′, e′, ∗′) be monoids, f: M → M′ a monoid homomorphism, S a set, and suppose that α: M′ × S → S is an action of M′ on S. Then Δ_f(α): M × S → S, defined by Δ_f(α)(m, s) := α(f(m), s), is an action of M on S.
Let (M,e,∗) be a monoid. An element m∈M is said to have an inverse if there exists an m′∈M such that mm′=e and m′m=e. A group is a monoid in which every element m∈M has an inverse.
Proposition: The inverse element is unique.
Definition (Group action)
Let (G,e,∗) be a group and S a set. An action of G on S is a function:
↺:G×S→S
such that for all s∈S and g,g′∈G, we have
e↺s=s
g↺(g′↺s)=(g∗g′)↺s.
Definition(orbit)
Let G be a group acting on a set X. For any point x∈X, the orbit of x, denoted Gx, is the set
Gx:={x′∈X∣∃g∈G,g↺x=x′}
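For a finite group acting on a finite set, the action laws and the orbits can be checked by brute force. Here is a minimal Python sketch using the group ℤ/2 acting on {0,…,4} by negation mod 5 (an invented example).

```python
# The group Z/2 = ({0, 1}, 0, + mod 2) acting on X = {0, 1, 2, 3, 4}
# by g ↺ s = ((-1)**g * s) mod 5: the nontrivial element negates mod 5.
G, X = [0, 1], range(5)
act = lambda g, s: ((-1) ** g * s) % 5

# Check the two action laws.
assert all(act(0, s) == s for s in X)
assert all(act(g, act(h, s)) == act((g + h) % 2, s)
           for g in G for h in G for s in X)

def orbit(x):
    return {act(g, x) for g in G}

print([orbit(x) for x in X])   # [{0}, {1, 4}, {2, 3}, {2, 3}, {1, 4}]
```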
Definition
Let G and G′ be groups. A group homomorphism f: G → G′ is defined to be a monoid homomorphism between the underlying monoids.
Definition(Graph)
A graph G consists of a sequence G := (V, A, src, tgt), where V is a set of vertices, A is a set of arrows, and:
src: A → V is a function, called the source function.
tgt: A → V is a function, called the target function.
Definition(Path)
Let G = (V, A, src, tgt) be a graph. A path of length n in G, denoted p ∈ Path_G(n), is a head-to-tail sequence
p = (v_0 −a_1→ v_1 −a_2→ v_2 −a_3→ ⋯ −a_n→ v_n)
of arrows in G, which we denote by v_0 a_1 a_2 … a_n. In particular we have canonical isomorphisms Path_G(1) ≅ A and Path_G(0) ≅ V. And
Path_G = ⋃_{n∈ℕ} Path_G(n).
We denote the source and the target vertex of a path by
srcˉ, tgtˉ: Path_G → V.
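Paths of a given length can be enumerated directly from src and tgt. Here is a small Python sketch on an invented three-vertex graph.

```python
# A small graph G = (V, A, src, tgt); each arrow name maps to (src, tgt).
V = {"x", "y", "z"}
A = {"f": ("x", "y"), "g": ("y", "z"), "h": ("x", "z")}

def paths(n):
    """All paths of length n, as (start vertex, list of arrow names)."""
    if n == 0:
        return [(v, []) for v in V]
    return [(v0, p + [a]) for (v0, p) in paths(n - 1)
            for a, (s, t) in A.items()
            if s == (A[p[-1]][1] if p else v0)]

print(paths(0))   # Path_G(0) ≅ V
print(paths(1))   # Path_G(1) ≅ A
print(paths(2))   # [('x', ['f', 'g'])]
```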
Definition
Let G = (V, A, src, tgt) and G′ = (V′, A′, src′, tgt′) be graphs. A graph homomorphism f from G to G′, denoted f: G → G′, consists of two functions f_0: V → V′ and f_1: A → A′ such that the two diagrams below commute:
Let S be a set and R ⊆ S × S a binary relation on S; if (s, s′) ∈ R we write s ≤ s′. We say that R is a preorder if, for all s, s′, s″ ∈ S, we have
s≤s
if s≤s′ and s′≤s′′ then s≤s′′
If we want it to be a partial order, then we need to add
If s≤s′ and s′≤s then s=s′
If we want it to be a linear order, then we need to add
Either s≤s′ or s′≤s.
Definition(clique)
Let (S,≤) be a preorder. A clique is a subset S′⊆S such that for each a,b∈S′ one has a≤b.
Definition(meet, join)
Let (S,≤) be a preorder and let s,t∈S be elements. A meet of s and t is an element w∈S satisfying
w≤s and w≤t
for any x∈S, if x≤s and x≤t then x≤w
We write w ≅ s ∧ t.
A join of s and t is an element w∈S satisfying
s≤w and t≤w
for any x∈S, if s≤x and t≤x then w≤x
We write w ≅ s ∨ t.
Definition(opposite order)
Let S = (S, ≤) be a preorder. The opposite preorder, denoted S^op, is the preorder (S, ≤^op) having the same set of elements but where s ≤^op s′ iff s′ ≤ s.
Definition(morphism of preorders)
Let S = (S, ≤) and S′ = (S′, ≤′) be preorders. A morphism of preorders, denoted f: S → S′, is a function f: S → S′ such that, for every pair of elements s_1, s_2 ∈ S, if s_1 ≤ s_2 then f(s_1) ≤′ f(s_2).
A category C is defined as follows: one announces some constituents (objects, morphisms, identities, compositions) and asserts that they conform to some laws (the identity law and the associativity law). Specifically, one announces:
a collection Ob(C), elements of which are called objects.
for every pair x, y ∈ Ob(C), a set Hom_C(x, y). It is called the hom-set from x to y; its elements are called morphisms from x to y.
for every object x ∈ Ob(C), a specified morphism denoted id_x ∈ Hom_C(x, x), called the identity morphism on x.
for every three objects x, y, z ∈ Ob(C), a function
∘: Hom_C(y, z) × Hom_C(x, y) → Hom_C(x, z)
called the composition formula.
Given objects x, y ∈ Ob(C), we can denote a morphism f ∈ Hom_C(x, y) by f: x → y; we say that x is the domain of f and that y is the codomain of f. Given also g: y → z, the composition formula is written using infix notation, so g ∘ f: x → z means ∘(g, f) ∈ Hom_C(x, z).
for every x, y ∈ Ob(C) and every morphism f: x → y, we have
f ∘ id_x = f and id_y ∘ f = f;
if w, x, y, z ∈ Ob(C) are any objects and f: w → x, g: x → y and h: y → z are any morphisms, then the two ways to compose are the same:
(h∘g)∘f=h∘(g∘f)
Remark: Although we say Ob(C) is a collection of objects (because sometimes we want to talk about categories like Set, whose objects are all sets and form more than a set), at this stage of study we may still think of Ob(C) as a set.
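Here is a minimal Python sketch of the definition for a finite category (two objects and a single non-identity morphism u, all invented), together with brute-force checks of the identity and associativity laws.

```python
from itertools import product

# A tiny category: objects {1, 2} and one non-identity morphism u: 1 -> 2.
Ob = ["1", "2"]
Hom = {("1", "1"): ["id1"], ("2", "2"): ["id2"], ("1", "2"): ["u"], ("2", "1"): []}
ident = {"1": "id1", "2": "id2"}
dom = {"id1": "1", "id2": "2", "u": "1"}
cod = {"id1": "1", "id2": "2", "u": "2"}

def compose(g, f):
    """The composite g ∘ f, defined when cod(f) = dom(g)."""
    assert dom[g] == cod[f]
    if f == ident[dom[f]]:
        return g
    if g == ident[cod[g]]:
        return f
    raise ValueError("no such composite in this category")

# Identity law: f ∘ id_x = f = id_y ∘ f for every f: x -> y.
for (x, y), fs in Hom.items():
    for f in fs:
        assert compose(f, ident[x]) == f == compose(ident[y], f)

# Associativity law: (h ∘ g) ∘ f = h ∘ (g ∘ f) for all composable triples.
morphisms = [m for fs in Hom.values() for m in fs]
for f, g, h in product(morphisms, repeat=3):
    if cod[f] == dom[g] and cod[g] == dom[h]:
        assert compose(compose(h, g), f) == compose(h, compose(g, f))
```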
Example: Inside the category Set is a subcategory Fin ⊆ Set, called the category of finite sets. Whereas an object S ∈ Ob(Set) is a set of arbitrary cardinality, the objects of Fin are exactly the finite sets S, i.e. those with |S| = n for some natural number n. Every object of Fin is an object of Set, but not vice versa. The morphisms are the same:
Hom_Fin(S, S′) = Hom_Set(S, S′)
Example(The category Mon of monoids): objects are monoids, and morphisms are monoid homomorphisms.
Example(The category Grp of groups): objects are groups, and morphisms are group homomorphisms.
Example(The category PrO of preorders): objects are preorders, and morphisms are morphisms of preorders.
Example(The category Grph of graphs): objects are graphs, and morphisms are graph homomorphisms. Notice that since the left and the right rectangles commute, the whole diagram commutes,
so the composite of two graph homomorphisms is again a graph homomorphism and composition in Grph is well defined.
Definition(isomorphism)
Let C be a category and let X, Y ∈ Ob(C) be objects. An isomorphism f from X to Y is a morphism f: X → Y in C such that there exists a morphism g: Y → X in C with
g ∘ f = id_X and f ∘ g = id_Y.
In this case we say that the morphism f is invertible and that g is the inverse of f. We may also say that the objects X and Y are isomorphic.
Lemma
Let C be a category and let ∼ be the relation on Ob(C) given by saying X∼Y iff X and Y are isomorphic. Then ∼ is an equivalence relation.
Let’s consider another definition of category:
Definition
A category C consists of a sequence (Ob(C), Hom_C, dom, cod, ids, ∘) where
Ob(C) is a set
Hom_C is a set, and dom, cod: Hom_C → Ob(C) are functions
ids: Ob(C) → Hom_C is a function
∘ is a function as depicted in the commutative diagram below
Definition(functors)
Let C and C′ be categories. A functor F from C to C′, denoted F:C→C′, is defined as follows: One announces some constituents and asserts that they conform to some laws. Specifically, one announces
a function Ob(F):Ob(C)→Ob(C′) which we sometimes denote simply by F:Ob(C)→Ob(C′)
for every pair of objects c,d∈Ob(C), a function
Hom_F(c, d): Hom_C(c, d) → Hom_{C′}(F(c), F(d))
which we sometimes denote simply by F: Hom_C(c, d) → Hom_{C′}(F(c), F(d)).
One asserts that the following laws hold:
Identities are preserved by F:
F(id_c) = id_{F(c)}
Composition is preserved by F:
F(h∘g)=F(h)∘F(g)
Example: Consider the functor
U: Mon → Set,
which sends every monoid to its underlying set and every monoid homomorphism to its underlying function between sets.
Proposition: Let PrO be the category of preorders and Grph the category of graphs. There is a functor P: PrO → Grph such that for any preorder X = (X, ≤), the graph P(X) has vertices X and an arrow x → y whenever x ≤ y.
Proposition: There exists a category, called the category of small categories and denoted Cat, in which the objects are the small categories and the morphisms are the functors:
Hom_Cat(C, D) = {F: C → D ∣ F is a functor}
# Categories and functors commonly arising in mathematics
Let (M, e, ∗) be a monoid. We consider it as a category M with one object, Ob(M) = {Δ}, and
Hom_M(Δ, Δ) = M.
The identity morphism id_Δ serves as the monoid identity e, and the composition formula
∘: Hom_M(Δ, Δ) × Hom_M(Δ, Δ) → Hom_M(Δ, Δ)
is given by ∗: M × M → M. The associativity and identity laws for the monoid match precisely with the associativity and identity laws for categories.
A homomorphism between two monoids can be interpreted as a functor between the corresponding one-object categories.
"A monoid is a category with one object. A monoid homomorphism is just a functor between one-object categories."
Theorem:
There is a functor i:Mon→Cat with the following properties:
for every monoid M∈Ob(Mon), the category i(M)∈Ob(Cat) itself has exactly one object.
for every pair of monoids M,M′∈Ob(Mon) the function
Hom_Mon(M, M′) → Hom_Cat(i(M), i(M′))
induced by the functor i, is a bijection.
Example: Let C be a category and x ∈ Ob(C) an object. Let M = Hom_C(x, x). Note that for any two elements f, g ∈ M we have f ∘ g ∈ M. Let M = (M, id_x, ∘). It is easy to check that M is a monoid; it is called the endomorphism monoid of x in C.
A preorder (X,≤) consists of a set X and a binary relation. We can make from (X,≤)∈Ob(PrO) a category X∈Ob(Cat) as follows. Define Ob(X)=X and for every two objects x,y∈X:
Hom_X(x, y) := {“x ≤ y”} if x ≤ y, and Hom_X(x, y) := ∅ if x ≰ y.
Theorem:
There is a functor i:PrO→Cat with the following properties
the category X=i(X,≤) has objects Ob(X)=X.
for each pair of elements x, x′ ∈ Ob(X), the set Hom_X(x, x′) has at most one element.
"A preorder is a category in which every hom-set has either 0 or 1 element. A preorder morphism is just a functor between such categories."
"Naturality works like this: Using a functionf:X→Yto convert a list of lists ofX's into a list of list ofY's and then concatenating to get a simple list ofY's does the same thing as first concatenating our list of lists ofX's into a simple list ofX's and then using our functionfto convert it into a list ofY's***"***.
Example: Let X={a,b,c} and Y={1,2,3} and let f:X→Y assign f(a)=1,f(b)=1,f(c)=2. Our naturality condition says the following for any list of lists of X's, in particular for [[a,b],[a,c,a,b,c],[c]]:
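A small Python sketch of what the naturality condition says for that list (here mu is concatenation, the component μ_X, and list_map applies f elementwise):

```python
from itertools import chain

def list_map(f, xs):          # List(f): List(X) -> List(Y)
    return [f(x) for x in xs]

def mu(xss):                  # the component mu_X: List(List(X)) -> List(X)
    return list(chain.from_iterable(xss))

f = {"a": 1, "b": 1, "c": 2}.get
xss = [["a", "b"], ["a", "c", "a", "b", "c"], ["c"]]

# Naturality square: applying f after concatenating equals
# concatenating after applying f inside each inner list.
lhs = list_map(f, mu(xss))
rhs = mu(list_map(lambda xs: list_map(f, xs), xss))
print(lhs)           # [1, 1, 1, 2, 1, 1, 2, 2]
assert lhs == rhs
```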
Keep these μ_X in mind in the following definition – they serve as the “components” of a natural transformation List ∘ List → List of functors C → D, where C = D = Set.
Definition
Let C and D be categories and let F:C→D and G:C→D be functors. A natural transformation α from F to G, denoted α:F→G, is defined as follows: one announces some constituents and asserts they conform to some laws. Specifically, one announces:
For each object c ∈ Ob(C), a morphism α_c: F(c) → G(c) in D, called the c-component of α.
One asserts that the following law holds:
For every morphism h:c→c′ in C, the following square, called the naturality square for h, must commute:
Example: Consider the categories C≅[1] and D≅[2] drawn below:
Consider the functors F,G:[1]→[2] where
F(0) = A, F(1) = B, G(0) = A, G(1) = C.
Then the naturality square for h, the only morphism in C, is
Of course it commutes.
Lemma:
Let C and D be categories, let F, G: C → D be functors, and for every object c ∈ Ob(C), let α_c: F(c) → G(c) be a morphism in D. Suppose given a path
c_0 −f_1→ c_1 −f_2→ ⋯ −f_n→ c_n
such that the naturality square
commutes for each 1 ≤ i ≤ n. Then the naturality square for the composite p = f_n ∘ ⋯ ∘ f_2 ∘ f_1: c_0 → c_n
also commutes.
In particular, when n = 0 this says that the naturality square for every identity morphism id_c commutes automatically.
The proof is straightforward: paste the naturality squares for the f_i side by side and use the fact that the functors preserve composition:
Proposition: Let C and D be categories. There exists a category, called the category of functors from C to D and denoted Fun(C, D), whose objects are the functors C → D and whose morphisms are the natural transformations.
Proposition: Let C and D be categories and let F, G: C → D be functors. A natural transformation α: F → G is an isomorphism in Fun(C, D) if and only if the component α_c: F(c) → G(c) is an isomorphism for each object c ∈ Ob(C). In this case α is called a natural isomorphism.
Proof: First suppose that α is an isomorphism with inverse β: G → F, and let β_c: G(c) → F(c) denote its c-component. We know that α ∘ β = id_G and β ∘ α = id_F. So for every c ∈ Ob(C), α_c ∘ β_c = id_{G(c)} and β_c ∘ α_c = id_{F(c)}. So α_c is an isomorphism.
Second suppose that each αc is an isomorphism with inverse βc. We need to see that these components assemble into a natural transformation, i.e. the right square commutes:
Let B, C, D and E be categories, let G_1, G_2: C → D be functors, and let α: G_1 → G_2 be a natural transformation. Suppose that F: B → C is a functor (respectively H: D → E), depicted below:
Then the pre-whiskering of α by F, denoted α ⋄ F: G_1 ∘ F → G_2 ∘ F (respectively, the post-whiskering of α by H, denoted H ⋄ α: H ∘ G_1 → H ∘ G_2), is defined as follows:
For each b ∈ Ob(B) the component (α ⋄ F)_b: G_1 ∘ F(b) → G_2 ∘ F(b) is defined to be α_{F(b)}. (Respectively, for each c ∈ Ob(C) the component (H ⋄ α)_c: H ∘ G_1(c) → H ∘ G_2(c) is defined to be H(α_c).) Checking that the naturality squares commute is straightforward.
Definition
Let B, C and D be categories, let F_1, F_2: B → C and G_1, G_2: C → D be functors, and let α: F_1 → F_2 and β: G_1 → G_2 be natural transformations, as depicted below:
By pre- and post-whiskering in one order or the other we get the following diagram
It is straightforward to show that this diagram commutes, so we can take the composition to be our definition of the horizontal composition
β ⋄ α: G_1 ∘ F_1 → G_2 ∘ F_2
Theorem:
Given a setup of categories, functors, and natural transformations as above, we have
A database is a collection of tables, each table T of which consists of a set of columns and a set of rows.
The primary key column is tasked with uniquely identifying different rows, like the “ID” column in the left table.
The data columns house elementary data of a certain sort.
The foreign key columns link one table to another, creating a connection pattern between tables. Each foreign key column houses data that needs to be further unpacked. For example, the Source column is a foreign key to the supplier table.
What’s more, we could even say that data columns are specific kinds of foreign keys. That is, each data column constitutes a foreign key to some non-branching leaf table, which has no additional data.
The five tables from (3.13) and (3.15) are seen as five vertices.
The six foreign key columns from (3.13) and (3.15) are seen as six arrows, each pointing from a table to a foreign table.
The two rules below are seen as statements at the top of Display (3.16).
Rule 1. For every employee e, the manager of e works in the same department that e works in.
Rule 2. For every department d, the secretary of d works in department d.
Think carefully about the diagram, and you can understand it quickly. Remember that there are only three kinds of columns: the primary key column (ID), data columns (first, last, name), and foreign key columns (manager, worksIn, secretary). The entries of a foreign key column are row IDs of some other table.
Definition
Let G = (V, A, src, tgt) be a graph, and let Path_G denote the set of paths in G. A path equivalence declaration is an expression of the form p ≃ q where p, q ∈ Path_G have the same source and target:
src(p)=src(q),tgt(p)=tgt(q)
A congruence on G is a relation ≃ on Path_G that has the following properties:
The relation ≃ is an equivalence relation.
If p≃q then src(p)=src(q).
If p≃q then tgt(p)=tgt(q).
Suppose p, q: b → c are paths, and m: a → b is an arrow. If p ≃ q then mp ≃ mq.
Suppose p,q:a→b are paths, and n:b→c is an arrow. If p≃q then pn≃qn.
Any set of path equivalence declarations (PEDs) generates a congruence. We tend to elide the difference between a congruence and the set of PEDs that generates it.
Lemma
Suppose that G is a graph and ≃ is a congruence on G. Suppose p≃q:a→b and r≃s:b→c. Then pr≃qs.
Definition(Schema)
A database schema C consists of a pair C = (G, ≃) where G is a graph and ≃ is a congruence on G.
Converting a schema C=(G,≃) into a table layout should be done as follows:
There should be a table for every vertex in G and if the vertex is named, the table should have that name.
Each table should have a left-most column called ID, set apart from the other columns by a double vertical line.
To each arrow a in G having source vertex s = src(a) and target vertex t = tgt(a), there should be a foreign key column a in table s, referring to table t; if the arrow a is named, that column should have its name.
Let C = (G, ≃) where G = (V, A, src, tgt). An instance on C, denoted (PK, FK): C → Set, is defined as follows: One announces some constituents and asserts that they conform to a law. Specifically, one announces:
a function PK: V → Set, i.e. to each vertex v ∈ V one provides a set PK(v);
for every arrow a ∈ A with v = src(a) and w = tgt(a), a function FK(a): PK(v) → PK(w).
One asserts that the following law holds for any vertices v, w and paths p = v a_1 a_2 … a_m and q = v a′_1 a′_2 … a′_n from v to w: if p ≃ q, then the composites FK(a_m) ∘ ⋯ ∘ FK(a_1) and FK(a′_n) ∘ ⋯ ∘ FK(a′_1) are equal as functions PK(v) → PK(w).
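Here is a minimal Python sketch of an instance on the employee/department schema discussed above; the particular row IDs and assignments are invented, and the two path equations (Rules 1 and 2) are checked directly.

```python
# PK assigns a set of row IDs to each vertex (table).
PK = {"Employee": {101, 102, 103}, "Department": {"q", "x"}}

# FK assigns a function PK(src(a)) -> PK(tgt(a)) to each arrow (foreign key).
FK = {
    "manager":   {101: 103, 102: 102, 103: 103},   # Employee -> Employee
    "worksIn":   {101: "q", 102: "x", 103: "q"},   # Employee -> Department
    "secretary": {"q": 101, "x": 102},             # Department -> Employee
}

# Rule 1: worksIn ∘ manager = worksIn  (a manager works in the same department).
assert all(FK["worksIn"][FK["manager"][e]] == FK["worksIn"][e]
           for e in PK["Employee"])

# Rule 2: worksIn ∘ secretary = id_Department  (a department's secretary works in it).
assert all(FK["worksIn"][FK["secretary"][d]] == d
           for d in PK["Department"])
```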
Let C be a schema (or category). The category of instances on C, denoted C−Set, is Fun(C,Set). Its objects are C-instances (i.e. functors C→Set) and its morphisms are natural transformations.
First, let's think about paths. Paths can be viewed as a functor Grph → Grph, in that
Paths(G) = (V, Path_G, srcˉ, tgtˉ) ∈ Ob(Grph),
and given a graph homomorphism f: G → G′, every path in G is sent by f to a path in G′. So Paths is a functor.
Definition
Let G = (V, A, src, tgt) and G′ = (V′, A′, src′, tgt′) be graphs, and let C = (G, ≃_G) and D = (G′, ≃_{G′}) be schemas. A schema morphism F from C to D, denoted F: C → D, is a graph homomorphism
F:G→Paths(G′)
that satisfies the following condition for any paths p,q in G: