
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus and analysis. Discrete objects can often be enumerated by integers. More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (sets that have the same cardinality as subsets of the natural numbers, including rational numbers but not real numbers). However, there is no exact, universally agreed definition of the term "discrete mathematics." Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions.
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in discrete steps and store data in discrete bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems, such as in operations research.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well.
Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Set theory is the branch of mathematical logic that studies sets, which are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics. The language of set theory can be used in the definitions of nearly all mathematical objects.
The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the discovery of paradoxes in naive set theory, numerous axiom systems were proposed in the early twentieth century, of which the Zermelo–Fraenkel axioms, with the axiom of choice, are the best-known.
Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right, with an active research community. Contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals.
Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, write o ∈ A. Since sets are objects, the membership relation can relate sets as well.
A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B. For example, {1,2} is a subset of {1,2,3}, but {1,4} is not. From this definition, it is clear that a set is a subset of itself; for cases where one wishes to rule this out, the term proper subset is defined. A is called a proper subset of B if and only if A is a subset of B, but B is not a subset of A.
Set operations and Venn Diagrams
http://www.math.hawaii.edu/~williamdemeo/Math371-Summer2011/SetOperationsAndVenDiagrams.pdf
Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. The main ones are the following:
1) Union of the sets A and B, denoted A ∪ B, is the set of all objects that are a member of A, or B, or both. The union of {1, 2, 3} and {2, 3, 4} is the set {1, 2, 3, 4}.
2) Intersection of the sets A and B, denoted A ∩ B, is the set of all objects that are members of both A and B. The intersection of {1, 2, 3} and {2, 3, 4} is the set {2, 3}.
3) Set difference of U and A, denoted U \ A, is the set of all members of U that are not members of A. The set difference {1,2,3} \ {2,3,4} is {1}, while, conversely, the set difference {2,3,4} \ {1,2,3} is {4}. When A is a subset of U, the set difference U \ A is also called the complement of A in U. In this case, if the choice of U is clear from the context, the notation A′ is sometimes used instead of U \ A, particularly if U is a universal set as in the study of Venn diagrams.
4) Symmetric difference of sets A and B, denoted A △ B or A ⊖ B, is the set of all objects that are a member of exactly one of A and B (elements which are in one of the sets, but not in both). For instance, for the sets {1,2,3} and {2,3,4}, the symmetric difference set is {1,4}. It is the set difference of the union and the intersection, (A ∪ B) \ (A ∩ B) or (A \ B) ∪ (B \ A).
5) Cartesian product of A and B, denoted A × B, is the set whose members are all possible ordered pairs (a,b) where a is a member of A and b is a member of B. The Cartesian product of {1, 2} and {red, white} is {(1, red), (1, white), (2, red), (2, white)}.
6) Power set of a set A is the set whose members are all possible subsets of A. For example, the power set of {1, 2} is { {}, {1}, {2}, {1,2} } .
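These operations map directly onto Python's built-in set type, so they are easy to experiment with. A minimal sketch (the helper name power_set is ours for illustration, not part of the standard library):

```python
from itertools import chain, combinations, product

A = {1, 2, 3}
B = {2, 3, 4}

print(A | B)  # union: {1, 2, 3, 4}
print(A & B)  # intersection: {2, 3}
print(A - B)  # set difference: {1}
print(A ^ B)  # symmetric difference: {1, 4}
print(set(product({1, 2}, {"red", "white"})))  # Cartesian product: 4 ordered pairs

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

print(power_set({1, 2}))  # [set(), {1}, {2}, {1, 2}]
```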
Some basic sets of central importance are the empty set (the unique set containing no elements), the set of natural numbers, and the set of real numbers.
The empty set, Partition, and Power sets
Definition:
A set with no element in it is called an empty set.
Theorem
If Ø is an empty set, then it is a subset of any set A. (This is vacuously true: the claim "every element of Ø is also an element of A" cannot fail, because Ø has no elements.)
Corollary
There is only one empty set. (If two sets are both empty, each is a subset of the other by the theorem above, so they are equal.)
Notation
Since there is only one empty set, we use the symbol Ø to denote this unique empty set, and we will call it the empty set.
The Foundational Rules of Set Theory
The laws listed below can be described as the Foundational Rules of Set Theory. We derive them by going back to the definitions of intersection, union, universal set and empty set, and by considering whether a given element is in, or not in, one or more sets.
The Idempotent Laws
As an example, we'll consider the "I heard you the first time" Laws – more correctly called the Idempotent Laws – which say that:
A ∩ A = A and A ∪ A = A
This law might be familiar to you if you've studied logic. The above relationship is comparable to the tautologies p ∧ p ≡ p and p ∨ p ≡ p.
These look pretty obvious, don't they? A simple explanation for the intersection part of these laws goes something like this:
The intersection of two sets A and B is defined as just those elements that are in A and in B. If we replace B by A in this definition, we get that the intersection of A and A is the set comprising just those elements that are in A and in A. Obviously, we don't need to say this twice, so we can simply say that the intersection of A and A is the set of elements in A. In other words:
A ∩ A = A
We can derive the explanation for A ∪ A = A in a similar way.
De Morgan's Laws
There are two laws, called De Morgan's Laws, which tell us how to remove brackets when there's a complement symbol - ′ - outside. One of these laws looks like this:
(A ∪ B)′ = A′ ∩ B′
(If you've done Exercise 3, question 4, you may have spotted this law already from the Venn diagrams.)
Look closely at how this Law works. The complement symbol after the bracket affects all three symbols inside the bracket when the brackets are removed:
A becomes A′
B becomes B′
and ∪ becomes ∩.
To prove this law, note first of all that when we defined a subset we said that if
A ⊆ B and B ⊆ A, then A = B
So we prove:
(i) (A ∪ B)′ ⊆ A′ ∩ B′
and then the other way round:
(ii) A′ ∩ B′ ⊆ (A ∪ B)′
The proof of (i) goes like this:
Let's pick an element at random, x ∈ (A ∪ B)′. We don't know anything about x; it could be a number, a function, or indeed an elephant. All we do know about x is that
x ∈ (A ∪ B)′
So
x ∉ (A ∪ B)
because that's what complement means.
This means that x answers "No" to both questions "Are you in A?" and "Are you in B?" (otherwise it would be in the union of A and B). Therefore
x ∉ A and x ∉ B
Applying complements again we get
x ∈ A′ and x ∈ B′
Finally, if something is in two sets, it must be in their intersection, so
x ∈ A′ ∩ B′
So, any element we pick at random from (A ∪ B)′ is definitely in A′ ∩ B′. So by definition
(A ∪ B)′ ⊆ A′ ∩ B′
The proof of (ii) is similar:
First, we pick an element at random from the first set, x ∈ A′ ∩ B′
Using what we know about intersections, that means
x ∈ A′ and x ∈ B′
So, using what we know about complements,
x ∉ A and x ∉ B
And if something is in neither A nor B, it can't be in their union, so
x ∉ A ∪ B
So, finally:
x ∈ (A ∪ B)′
So:
A′ ∩ B′ ⊆ (A ∪ B)′
We've now proved (i) and (ii), and therefore:
(A ∪ B)′ = A′ ∩ B′
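The element-chasing argument above can also be checked mechanically for small universes. The brute-force sketch below (a finite check over an assumed 4-element universal set, not a proof) verifies the law for every pair of subsets:

```python
from itertools import chain, combinations

U = {1, 2, 3, 4}

def subsets(s):
    """All subsets of s."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

# Exhaustively check (A ∪ B)′ = A′ ∩ B′ for every pair of subsets of U,
# taking complements relative to the universal set U.
for A in subsets(U):
    for B in subsets(U):
        assert U - (A | B) == (U - A) & (U - B)
print("law verified for all", len(subsets(U)) ** 2, "pairs")  # 256 pairs
```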
This gives you a taste of what's behind these laws. So here they all are.
The Laws of Sets
Commutative Laws
A ∪ B = B ∪ A
A ∩ B = B ∩ A
Associative Laws
(A ∪ B) ∪ C = A ∪ (B ∪ C)
(A ∩ B) ∩ C = A ∩ (B ∩ C)
Distributive Laws
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Idempotent Laws
A ∪ A = A
A ∩ A = A
Identity Laws
A ∪ ø = A
A ∩ U = A
A ∪ U = U
A ∩ ø = ø
Involution Law
(A′)′ = A
Complement Laws
A ∪ A′ = U
A ∩ A′ = ø
U′ = ø
ø′ = U
De Morgan's Laws
(A ∪ B)′ = A′ ∩ B′
(A ∩ B)′ = A′ ∪ B′
Duality and Boolean Algebra
You may notice that the above Laws of Sets occur in pairs: if in any given law, you exchange ∪ for ∩ and vice versa (and, if necessary, swap U and ø) you get another of the laws. The partner laws in each pair are called duals, each law being the dual of the other.
For example, each of De Morgan's Laws is the dual of the other.
The first complement law, A ∪ A′ = U, is the dual of the second: A ∩ A′ = ø.
... and so on.
This is called the Principle of Duality. In practical terms, it means that you only need to remember half of this table!
This set of laws constitutes the axioms of a Boolean Algebra. See Boolean Algebra for more.
Proofs using the Laws of Sets
We may use these laws - and only these laws - to determine whether other statements about the relationships between sets are true or false. Venn diagrams may be helpful in suggesting such relationships, but only a proof based on these laws will be accepted by mathematicians as rigorous.
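As a small example of such a proof, here is a derivation of the absorption law A ∪ (A ∩ B) = A (not one of the listed laws, but a standard consequence) using only the laws above:

A ∪ (A ∩ B)
= (A ∩ U) ∪ (A ∩ B)  (Identity Law)
= A ∩ (U ∪ B)  (Distributive Law)
= A ∩ (B ∪ U)  (Commutative Law)
= A ∩ U  (Identity Law)
= A  (Identity Law)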
Logic
Laws of logic, Normal Form
A proposition is a statement which has truth value: it is either true (T) or false (F).
Example 1
Which of the following are propositions?
(a) 17 + 25 = 42
(b) July 4 occurs in the winter in the Northern Hemisphere.
(c) The population of the United States is less than 250 million.
(d) Is the moon round?
(e) 7 is greater than 12.
(f) x is greater than y.
Answers
(a) is a proposition; and of course it has the truth value true.
(b) is a proposition. Of course, it's false, but it's still a proposition.
(c) is a proposition, but we may not actually know whether it's true or false. Nevertheless, the fact is that the statement itself is a proposition, because it is definitely either true or false.
(d) is not a proposition. It's a question.
(e) is a proposition. It's false again, of course: 7 is not greater than 12.
(f) is a bit more debatable! It's certainly a potential proposition, but until we know the values of x and y, we can't actually say whether it is true or false. Note that this isn't quite the same as (c), where we may not know the truth value because we aren't well-enough informed. See the next paragraph.
Propositional Functions
A function is, loosely defined, an operation that takes as input one or more parameter values, and produces a single, well-defined output.
You're probably familiar with the sine and cosine functions in trigonometry, for example. These are examples of functions that take a single number (the size of an angle) as an input and produce a decimal number (which in fact will lie between +1 and -1) as output.
If we want to, we can define a function of our own, say RectangleArea, which could take two numbers (the length and width of a rectangle) as input, and produce a single number as output (formed by multiplying the two input numbers together).
In (f) above, we have an example of a Propositional Function. This is a function that produces as output not a number, like sine, cosine or RectangleArea, but a truth value. It's a statement, then, that becomes a proposition when it is supplied with one or more parameter values. In (f), the parameters are x and y. So if x = 2 and y = 7, its output is False; if x = 4 and y = -10, its output is True.
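In a programming language this distinction is easy to see: an ordinary function returns a number, while a propositional function returns a truth value. A quick sketch (the function names are ours for illustration):

```python
def rectangle_area(length, width):
    """An ordinary function: numbers in, a single number out."""
    return length * width

def greater_than(x, y):
    """A propositional function: parameters in, a truth value out."""
    return x > y

print(rectangle_area(3, 4))  # 12
print(greater_than(2, 7))    # False, as in (f) with x = 2, y = 7
print(greater_than(4, -10))  # True, as in (f) with x = 4, y = -10
```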
More about propositional functions later.
Notation
We shall often represent propositions by lower-case letters p, q, ...
Compound Propositions
Propositions may be modified by means of one or more logical operators to form what are called compound propositions.
There are three logical operators:
conjunction: ∧ meaning AND
disjunction: ∨ meaning OR
negation: ¬ meaning NOT
Example 2
p represents the proposition "Henry VIII had six wives".
q represents the proposition "The English Civil War took place in the nineteenth century".
(a) Connect these two propositions with OR. Is the resulting compound proposition true or false?
(b) Now connect them with AND. Is this compound proposition true or false?
(c) Is the opposite of p true or false?
Answers
(a) p ∨ q is "Henry VIII had six wives or the English Civil War took place in the nineteenth century"
This is true. The first part of the compound proposition is true, and this is sufficient to make the whole statement true – if a little odd-sounding!
If you think that this example seems very artificial, imagine that you're taking part in a History Quiz; there are two questions left, and you need to get at least one right to win the quiz. You make the two statements above about Henry VIII and the Civil War. Do you win the quiz? Yes, because it is true that either Henry VIII had six wives or the English Civil War took place in the nineteenth century.
Note that this is an inclusive OR: in other words, we don't rule out both parts being true. So p ∨ q means "Either p is true, or q is true, or both".
(b) p ∧ q is "Henry VIII had six wives and the English Civil War took place in the nineteenth century"
This is false.
To be true, both parts would have to be true. This is equivalent to your needing both questions right to win the quiz. You fail, because you got the second one wrong.
(c) The opposite of p, which we write as ¬p, is "Henry VIII did not have six wives". This is clearly false. And in general, if p is true, then ¬p is false, and vice versa.
Example 3
p is "The printer is off-line"
q is "The printer is out of paper"
"r" is "The document has finished printing"
Write as English sentences, in as natural a way as you can:
(a) p ∨ q
(b) r ∧ q
(c) q ∧ ¬r
(d) ¬(p ∨ q)
Answers
(a) The printer is off-line or out of paper.
Notice how we often leave words out when we're writing or speaking English. This sounds much more natural than "The printer is off-line or the printer is out of paper".
(b) The document has finished printing and the printer is out of paper.
The subject of each part of the sentence is different now, so no words are missing this time.
(c) The printer is out of paper and the document has not finished printing.
But and And
The statement in (c) could be someone reporting a problem, and they might equally well say:
(c) The printer is out of paper but the document has not finished printing.
So note that, in logic, but and and mean the same thing. It's just that we use but to introduce a note of contrast or surprise. For example, we might well say:
The sun is shining brightly, but its freezing cold outside.
Logically, we could use and to connect both parts of this sentence, but(!) it's more natural to use but.
In (d) what does ¬(p ∨ q) mean? Well, p ∨ q means either p is true or q is true (or both). When we place ¬ in front, we negate this. So it means (literally):
It is not true that either the printer is off-line or the printer is out of paper.
In other words:
(d) The printer is neither off-line nor out of paper.
Notice that it's often convenient to translate ¬ using the phrase It is not true that ….
Truth Tables
Consider the possible values of the compound proposition p ∧ q for various combinations of values of p and q. The only combination of values that makes p ∧ q true is where p and q are both true; any other combination will include a false and this will render the whole compound proposition false. On the other hand, the compound proposition p ∨ q will be true if either p or q (or both) is true; the only time p ∨ q is false is when both p and q are false.
We summarise conclusions like these in what is called a Truth Table, the truth table for AND being:
p q p ∧ q
T T T
T F F
F T F
F F F
The truth table for OR is:
p q p ∨ q
T T T
T F T
F T T
F F F
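Truth tables like these can be generated mechanically. A minimal Python sketch (the helper truth_table is ours), which reproduces the AND and OR tables above:

```python
from itertools import product

def truth_table(expr, names=("p", "q")):
    """Print a truth table for a compound proposition given as a
    function from booleans to a boolean."""
    print(*names, "result")
    for values in product([True, False], repeat=len(names)):
        cells = ["T" if v else "F" for v in values]
        print(*cells, "T" if expr(*values) else "F")

truth_table(lambda p, q: p and q)  # AND
truth_table(lambda p, q: p or q)   # OR
```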
Equivalence, Implications
We cannot construct more than 16 truth tables involving two statements. This is because such a truth table has 4 rows, the truth value of each row is T or F, and 2⁴ = 16. However, we can certainly construct more than 16 statements involving two statements. What happens is that many (in fact infinitely many) statements have identical truth tables. We say that the statements r and s are logically equivalent if their truth tables are identical. For example, the truth tables of ¬(p ∨ q) and ¬p ∧ ¬q are identical, so these two statements are logically equivalent.
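Equivalence can be checked mechanically by comparing truth tables row by row; a small sketch (the helper name is ours), testing the equivalence just mentioned:

```python
from itertools import product

def equivalent(f, g, n=2):
    """True iff f and g have identical truth tables on n variables."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n))

print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q)))  # True
print(equivalent(lambda p, q: p or q,
                 lambda p, q: p and q))              # False
```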
Directed Graphs (Digraphs)
If tasks are represented by vertices and precedence relations are represented by arrows pointing from one vertex to another, then the essence of a scheduling problem can be displayed visually by a graph known as a directed graph (digraph, for short).

This digraph represents a project consisting of four tasks: task A (takes 3 hours), task Q (takes 2 hours), task X (takes 4 hours), and task F (takes 2 hours). The arrows reflect the precedence relations; namely, task X must precede (come before) tasks A & F and task Q must precede tasks X & A.
Before we continue using digraphs to help us solve scheduling problems, let's look more closely at digraphs.
A digraph is a special type of graph, one in which the relationships among vertices are asymmetrical. These asymmetrical relationships are indicated by the arrows of the digraph. An asymmetrical relationship is one in which the relationship applies in one direction only. For instance, "is the father of" is an asymmetrical relationship. If "Paul Jones, Sr. is the father of Paul Jones, Jr." is a true statement, then it would be false to say that "Paul Jones, Jr. is the father of Paul Jones, Sr." In the same sense, precedence relationships in scheduling problems are asymmetrical. (As above, when task X must precede task A.)
When dealing with ordinary graphs, the line segments connecting vertices were called edges. For digraphs, the arrows connecting vertices will be called arcs. Just think of arcs as edges with arrows.
In the language of digraphs, an arc from X to Y is read "X is incident to Y" or "Y is incident from X."
For ordinary graphs, we defined the degree of a vertex. For digraphs, we define the indegree of a vertex to be the number of arrows pointing in toward the vertex and the outdegree of a vertex to be the number of arrows pointing out away from the vertex.

For instance, in the digraph above, vertex A has an indegree of 2 (since 2 arrows point in toward A) and an outdegree of 1 (since 1 arrow points out away from A).
In a digraph, a path from vertex A to vertex B is a sequence of arcs starting at A and ending at B in which no arc is repeated and each vertex in the sequence is incident to (precedes) the next one.

For instance, Q X A is a path from Q to A.
Q A is also a path from Q to A.
X Q A is NOT a path from X to A (since X does not precede Q).
In a digraph, a cycle is a path that starts and ends at the same vertex.

For instance, A F X A is a cycle.
Notice something VERY important here.
A digraph that represents a scheduling problem CANNOT have a cycle.
For instance, if vertices A, F, and X are tasks and the arrows represent the precedence relations, then task A must be incident to (precede) task F, task F must be incident to (precede) task X, and task X must be incident to (precede) task A - CLEARLY AN IMPOSSIBLE SITUATION. (Task A can't begin until task X is completed, but task X can't begin until task F is completed, and task F can't begin until task A is completed.)
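A scheduling digraph is easy to represent as an adjacency structure, and the "no cycles" requirement can be checked by depth-first search. A sketch using the four-task example from the start of this section (the dictionary encoding and helper names are ours):

```python
# Arcs point from a task to the tasks it must precede:
# Q precedes X and A; X precedes A and F.
digraph = {
    "Q": ["X", "A"],
    "X": ["A", "F"],
    "A": [],
    "F": [],
}

def indegree(g, v):
    return sum(v in targets for targets in g.values())

def outdegree(g, v):
    return len(g[v])

def has_cycle(g):
    """Depth-first search for a cycle; a digraph with a cycle
    cannot represent a scheduling problem."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in g}
    def visit(v):
        colour[v] = GREY
        for w in g[v]:
            if colour[w] == GREY or (colour[w] == WHITE and visit(w)):
                return True
        colour[v] = BLACK
        return False
    return any(colour[v] == WHITE and visit(v) for v in g)

print(indegree(digraph, "A"), outdegree(digraph, "A"))  # 2 0
print(has_cycle(digraph))  # False: these tasks can be scheduled
```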
Project Digraphs

Schedulers often take digraphs representing scheduling problems (for example, the one above) and modify them slightly so that they have "fictitious" START and END tasks of 0 time duration (as shown below).

Notice that the START vertex has arrows pointing away from it toward all tasks that could be the first task (as determined by the precedence relations). Any vertex with an indegree of 0 could be the first task of the project.
And, the END vertex has arrows pointing toward it away from all tasks that could be the last task (as determined by the precedence relations). Any vertex with an outdegree of 0 could be the last task of the project.
The addition of START and END vertices does not change the analysis of the scheduling problem at all. It is simply a nice way to visualize a project, giving it a definite START and END.
Hasse diagram
The power set of a 2-element set ordered by inclusion
In order theory, a Hasse diagram (/ˈhæsə/; German: [ˈhasə]) is a type of mathematical diagram used to represent a finite partially ordered set, in the form of a drawing of its transitive reduction. Concretely, for a partially ordered set (S, ≤) one represents each element of S as a vertex in the plane and draws a line segment or curve that goes upward from x to y whenever y covers x (that is, whenever x < y and there is no z such that x < z < y). These curves may cross each other but must not touch any vertices other than their endpoints. Such a diagram, with labeled vertices, uniquely determines its partial order.
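The covering relation that a Hasse diagram draws can be computed directly from the order. A sketch for the power set of {1, 2} ordered by inclusion (the helper names are ours):

```python
from itertools import chain, combinations

def subsets(s):
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def covers(elements, less):
    """Pairs (x, y) with y covering x: x < y and no z strictly between."""
    return [(x, y) for x in elements for y in elements
            if less(x, y) and not any(less(x, z) and less(z, y)
                                      for z in elements)]

P = subsets({1, 2})
proper_subset = lambda a, b: a < b  # '<' is proper inclusion on frozensets
for x, y in covers(P, proper_subset):
    print(set(x), "is covered by", set(y))
```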
Hasse diagrams are named after Helmut Hasse (1898–1979); according to Birkhoff (1948), they are so-called because of the effective use Hasse made of them. However, Hasse was not the first to use these diagrams; they appear, e.g., in Vogt (1895). Although Hasse diagrams were originally devised as a technique for making drawings of partially ordered sets by hand, they have more recently been created automatically using graph drawing techniques.
The phrase "Hasse diagram" may also refer to the transitive reduction as an abstract directed acyclic graph, independently of any drawing of that graph, but this usage is eschewed here.
A "good" Hasse diagram
Although Hasse diagrams are simple as well as intuitive tools for dealing with finite posets, it turns out to be rather difficult to draw "good" diagrams. The reason is that there will in general be many possible ways to draw a Hasse diagram for a given poset. The simple technique of just starting with the minimal elements of an order and then drawing greater elements incrementally often produces quite poor results: symmetries and internal structure of the order are easily lost.
The following example demonstrates the issue. Consider the power set of a 4-element set ordered by inclusion ⊆. Below are four different Hasse diagrams for this partial order. Each subset has a node labelled with a binary encoding that shows whether a certain element is in the subset (1) or not (0):
The first diagram makes clear that the power set is a graded poset. The second diagram has the same graded structure, but by making some edges longer than others, it emphasizes that the 4-dimensional cube is a union of two 3-dimensional cubes. The third diagram shows some of the internal symmetry of the structure. In the fourth diagram the vertices are arranged like the fields of a 4×4 matrix.
Lattice (order)
In mathematics, a lattice is a partially ordered set in which every two elements have a supremum (also called a least upper bound or join) and an infimum (also called a greatest lower bound or meet). An example is given by the natural numbers, partially ordered by divisibility, for which the supremum is the least common multiple and the infimum is the greatest common divisor.
Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras. These "lattice-like" structures all admit order-theoretic as well as algebraic descriptions.
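In the divisibility example, the join and meet of two numbers can be computed directly; a quick sketch:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Natural numbers ordered by divisibility form a lattice:
# join = least common multiple, meet = greatest common divisor.
a, b = 12, 18
print("join (lcm):", lcm(a, b))  # 36
print("meet (gcd):", gcd(a, b))  # 6
```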
Pigeonhole principle
An image of pigeons in holes. Here there are n = 10 pigeons in m = 9 holes. Since 10 is greater than 9, the pigeonhole principle says that at least one hole has more than one pigeon.
In mathematics, the pigeonhole principle states that if n items are put into m containers, with n > m, then at least one container must contain more than one item. This theorem is exemplified in real-life by truisms like "there must be at least two left gloves or two right gloves in a group of three gloves". It is an example of a counting argument, and despite seeming intuitive it can be used to demonstrate possibly unexpected results; for example, that two people in London have the same number of hairs on their heads.
The first formalization of the idea is believed to have been made by Peter Gustav Lejeune Dirichlet in 1834 under the name Schubfachprinzip ("drawer principle" or "shelf principle"). For this reason it is also commonly called Dirichlet's box principle, Dirichlet's drawer principle or simply "Dirichlet principle" — a name that could also refer to the minimum principle for harmonic functions. The original "drawer" name is still in use in French ("principe des tiroirs"), Polish ("zasada szufladkowa"), Hungarian ("skatulyaelv"), Italian ("principio dei cassetti"), German ("Schubfachprinzip"), Danish ("Skuffeprincippet"), and Chinese ("抽屉原理").
Though the most straightforward application is to finite sets (such as pigeons and boxes), it is also used with infinite sets that cannot be put into one-to-one correspondence. To do so requires the formal statement of the pigeonhole principle, which is "there does not exist an injective function on finite sets whose codomain is smaller than its domain". Advanced mathematical proofs like Siegel's lemma build upon this more general concept.
A generalized version of this principle states that, if n discrete objects are to be allocated to m containers, then at least one container must hold at least ⌈n/m⌉ objects, where ⌈x⌉ is the ceiling function, denoting the smallest integer larger than or equal to x. Similarly, at least one container must hold no more than ⌊n/m⌋ objects, where ⌊x⌋ is the floor function, denoting the largest integer smaller than or equal to x.
A probabilistic generalization of the pigeonhole principle states that if n pigeons are randomly put into m pigeonholes with uniform probability 1/m, then at least one pigeonhole will hold more than one pigeon with probability 1 − m!/((m − n)! mⁿ) when n ≤ m, and probability 1 otherwise (this is the calculation behind the famous birthday problem).
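Both the counting bounds and the probabilistic version are easy to compute; a sketch (the helper names are ours):

```python
import math

def load_bounds(n, m):
    """Pigeonhole bounds: some container holds at least ceil(n/m)
    objects, and some container holds at most floor(n/m)."""
    return math.ceil(n / m), math.floor(n / m)

def collision_probability(n, m):
    """Probability that n items dropped uniformly into m containers
    leave at least one container with more than one item."""
    if n > m:
        return 1.0
    return 1 - math.perm(m, n) / m ** n  # 1 - m!/((m-n)! m^n)

print(load_bounds(10, 9))              # (2, 1): the pigeon example above
print(collision_probability(23, 365))  # ~0.507: the birthday problem
```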
Recurrence relation
In mathematics, a recurrence relation is an equation that recursively defines a sequence, once one or more initial terms are given: each further term of the sequence is defined as a function of the preceding terms.
The term difference equation sometimes (and for the purposes of this article) refers to a specific type of recurrence relation. However, "difference equation" is frequently used to refer to any recurrence relation.
An example of a recurrence relation is the logistic map: x_{n+1} = r x_n (1 − x_n).
Some simply defined recurrence relations can have very complex (chaotic) behaviours, and they are a part of the field of mathematics known as nonlinear analysis.
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Fibonacci numbers
The Fibonacci numbers are the archetype of a linear, homogeneous recurrence relation with constant coefficients (see below). They are defined using the linear recurrence relation
F_n = F_{n−1} + F_{n−2}
with seed values:
F_0 = 0, F_1 = 1
Explicitly, the recurrence yields the equations:
F_2 = F_1 + F_0 = 1
F_3 = F_2 + F_1 = 2
F_4 = F_3 + F_2 = 3
etc.
We obtain the sequence of Fibonacci numbers, which begins:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
It can be solved by methods described below, yielding the closed-form expression F_n = (φⁿ − ψⁿ)/√5, which involves powers of the two roots φ = (1 + √5)/2 and ψ = (1 − √5)/2 of the characteristic polynomial t² = t + 1; the generating function of the sequence is the rational function x/(1 − x − x²).
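The recurrence and the closed form are easy to compare numerically; a minimal sketch:

```python
from math import sqrt

def fib(n):
    """Fibonacci by the recurrence F(n) = F(n-1) + F(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    """Binet's closed form, built from the roots of t^2 = t + 1."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** n - psi ** n) / sqrt(5))

print([fib(n) for n in range(12)])  # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89
print(all(fib(n) == fib_closed(n) for n in range(40)))  # True
```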

Graphs and Subgraphs
Paths and circuits: Eulerian and Hamiltonian
Euler Paths and Circuits
1. An Euler circuit (or Eulerian circuit) in a graph G is a simple circuit that contains every edge of G.
-Reminder: a simple circuit doesn't use the same edge more than once.
-So, a circuit around the graph passing by every edge exactly once.
-We will allow simple or multigraphs for any of the Euler stuff.
2. Euler circuits are one of the oldest problems in graph theory.
-People wanted to walk around the city of Königsberg and cross every bridge as they went.
-… and end up back at home.
-The problem turns into one about graphs if you let each bridge be an edge and each island/land mass be a vertex.
-And Euler solved it, so he gets his name on it.
In the modern world: you want to walk around the mall without missing any stores, or wasting time walking the same hall again.
For example, the first graph has an Euler circuit, but the second doesn't.
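Euler's solution gives a simple test: a connected (multi)graph has an Euler circuit exactly when every vertex has even degree. A sketch of the check (the adjacency-list encoding is ours; connectivity is assumed):

```python
def has_euler_circuit(adjacency):
    """For a connected multigraph, an Euler circuit exists iff
    every vertex has even degree (Euler's theorem)."""
    return all(len(neighbours) % 2 == 0
               for neighbours in adjacency.values())

# A 4-cycle: every degree is 2, so an Euler circuit exists.
square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
# Add both diagonals: every degree becomes 3, so no Euler circuit.
square_with_diagonals = {1: [2, 4, 3], 2: [1, 3, 4],
                         3: [2, 4, 1], 4: [1, 3, 2]}

print(has_euler_circuit(square))                 # True
print(has_euler_circuit(square_with_diagonals))  # False
```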
Hamilton Paths and Circuits
1. The Euler circuits and paths wanted to use every edge exactly once.
2. It seems obvious to then ask: can we make a circuit of a graph using every vertex exactly once?
-Such a circuit is a Hamilton circuit or Hamiltonian circuit.
-Similarly, a path through each vertex that doesn't end where it started is a Hamilton path.
3. It seems like finding a Hamilton circuit (or conditions for one) should be more-or-less as easy as an Euler circuit.
-Unfortunately, it's much harder.
Isomorphism of graphs
In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H, f : V(G) → V(H), such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. This kind of bijection is commonly called an "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection.
If an isomorphism exists between two graphs, then the graphs are called isomorphic and we write G ≃ H. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G.
Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs.
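For small graphs, the definition can be tested directly by trying every bijection between the vertex sets. A brute-force sketch (the dictionary representation and helper names are ours):

```python
from itertools import permutations

def are_isomorphic(g, h):
    """Brute-force isomorphism test for graphs given as
    dicts mapping each vertex to its set of neighbours."""
    if len(g) != len(h):
        return False
    gv = list(g)
    for perm in permutations(h):
        f = dict(zip(gv, perm))  # candidate bijection V(G) -> V(H)
        if all((f[v] in h[f[u]]) == (v in g[u]) for u in g for v in g):
            return True
    return False

path_abc = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
path_123 = {1: {2}, 2: {1, 3}, 3: {2}}
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(are_isomorphic(path_abc, path_123))  # True: both are 3-vertex paths
print(are_isomorphic(path_abc, triangle))  # False
```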
Trees
Spanning trees and minimum spanning tree
Trees and weighted trees
A weight-balanced binary tree is a binary tree which is balanced based on knowledge of the probabilities of searching for each individual node. Within each subtree, the node with the highest weight appears at the root. This can result in more efficient searching performance.
Construction of such a tree is similar to that of a Treap, but node weights are chosen randomly in the latter.
The diagram
In the diagram to the right, the letters represent node values and the numbers represent node weights. Values are used to order the tree, as in a general binary search tree. The weight may be thought of as a probability or activity count associated with the node. In the diagram, the root is G because its weight is the greatest in the tree. The left subtree begins with A because, out of all nodes with values that come before G, A has the highest weight. Similarly, N is the highest-weighted node that comes after G.
Timing analysis
A weight balanced tree gives close to optimal values for the expected length of successful search calculations. From the above example we get
ELOSS = depth(node A)*probability(node A) + depth(node C)*probability(node C) + ...
ELOSS = 2*0.17 + 5*0.03 + 4*0.09 + 3*0.12 + 1*0.20 + 3*0.11 + 3*0.10 + 2*0.18
ELOSS = 2.4
This is the expected number of nodes that will be examined before finding the desired node.
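The calculation above is just a probability-weighted sum of depths; a sketch (the (depth, probability) pairs are read off the example):

```python
# (depth, probability) pairs for the eight nodes in the example.
nodes = [(2, 0.17), (5, 0.03), (4, 0.09), (3, 0.12),
         (1, 0.20), (3, 0.11), (3, 0.10), (2, 0.18)]

eloss = sum(depth * prob for depth, prob in nodes)
print(round(eloss, 2))  # 2.4 nodes examined per successful search, on average
```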
Algebraic Structures
In mathematics, and more specifically in abstract algebra, the term algebraic structure generally refers to a set (called carrier set or underlying set) with one or more finitary operations defined on it.[1]
Examples of algebraic structures include groups, rings, fields, and lattices. More complex structures can be defined by introducing multiple operations, different underlying sets, or by altering the defining axioms. Examples of more complex algebraic structures include vector spaces, modules, and algebras.
The properties of specific algebraic structures are studied in abstract algebra. The general theory of algebraic structures has been formalized in universal algebra. Category theory is used to study the relationships between two or more classes of algebraic structures, often of different kinds. For example, Galois theory studies the connection between certain fields and groups, algebraic structures of two different kinds.
In a slight abuse of notation, the word "structure" can also refer only to the operations on a structure, and not the underlying set itself. For example, the phrase "we have defined a ring structure (a structure of ring) on the set A" means that we have defined ring operations on the set A. For another example, the group (ℤ, +) can be seen as a set ℤ that is equipped with an algebraic structure, namely the operation +.
Isomorphism, Homomorphism and Automorphism
Isomorphisms, automorphisms and homomorphisms are all very similar in their basic concept. An isomorphism is a one-to-one correspondence between two abstract mathematical systems which are structurally, algebraically, identical. The structures might be groups, rings, integral domains, fields, vector spaces, etc. In general these structures consist of a set of elements with one or more operations defined on them, such as addition or multiplication, and represent axiomatically defined mathematical structures. The isomorphic correspondence is so defined that it respects the operations of the structures. If the two structures are groups, an isomorphism between them is defined as a correspondence such that if elements x and y of the first group correspond to elements x′ and y′ respectively of the second group, then the product xy of the first group corresponds to the product x′y′ of the second group. This simple condition is adequate to ensure that like elements correspond, i.e. that identity elements correspond, inverse elements correspond, etc., and that the two groups are structurally, algebraically identical. When we say that two mathematical structures are isomorphic with each other we mean that they are identical structurally, algebraically; that corresponding elements correspond and their internal workings and mechanisms are identical. Their differences lie only in superficial things like the names we give the elements and the way we denote the law of combination. So, to repeat: An isomorphism is a one-to-one correspondence between two mathematical structures which are structurally, algebraically, equivalent. Two mathematical structures are said to be isomorphic to each other if it is possible to establish a one-to-one correspondence between them in such a way that it meets the criterion for an isomorphism for that particular kind of structure (the criterion for a group isomorphism, a ring isomorphism, etc.).
An automorphism is defined as an isomorphism of a set with itself. Thus where an isomorphism is a one-to-one mapping between two mathematical structures an automorphism is a one-to-one mapping within a mathematical structure, a mapping of one subgroup upon another, for example.
A homomorphism is also a structure-preserving correspondence between two mathematical structures of the same kind. However, there is an important difference between a homomorphism and an isomorphism. An isomorphism is a one-to-one mapping of one mathematical structure onto another. A homomorphism is a many-to-one mapping of one structure onto another. Where an isomorphism maps one element into another element, a homomorphism maps a set of elements into a single element. Its definition sounds much the same as that for an isomorphism but allows for the possibility of a many-to-one mapping. The best way to illustrate a homomorphism is in its application to the mapping of quotient groups. Quotient groups are groups whose elements are sets, namely cosets of a normal subgroup of some group. The cosets of any normal subgroup N of a group G form a group under complex multiplication, and this group is called the quotient group (or factor group) of G by N and is denoted by G/N. The normal subgroup N plays the role of the identity in the quotient group. Now let us state a theorem fundamental for the whole theory of homomorphic mappings:
Theorem. Under a homomorphic mapping of an arbitrary group G onto a group G′, the set N of elements of G that are mapped into the neutral element e′ of G′ is a normal subgroup of G; the set of elements of G that are mapped into an arbitrary fixed element of G′ is a coset of G with respect to N; and the one-to-one correspondence so established between the cosets of G with respect to N and the elements of G′ is an isomorphism between the factor group G/N and G′.
From this theorem we see that under this homomorphism all the elements in a particular coset are mapped into a single element of G′. Elements from separate cosets are mapped into separate elements of G′. Thus we see that under a homomorphic mapping, on transition from G to G′, distinct elements of G coalesce into single elements of G′. Classes or sets of elements map into single elements.
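A concrete instance of the theorem, sketched in code: the map x ↦ x mod 5 is a homomorphism from the integers under addition onto Z₅; its kernel is the normal subgroup 5ℤ, and each coset of 5ℤ collapses to a single element of Z₅ (the example and names are ours for illustration):

```python
n = 5
f = lambda x: x % n  # homomorphism from (Z, +) onto (Z_5, + mod 5)

xs = range(-10, 11)
# The homomorphism property: f(a + b) = f(a) + f(b) (mod 5).
assert all(f(a + b) == (f(a) + f(b)) % n for a in xs for b in xs)

# The kernel: everything mapped to the neutral element 0 of Z_5.
kernel = [x for x in xs if f(x) == 0]
print(kernel)  # [-10, -5, 0, 5, 10]: a slice of the normal subgroup 5Z

# Each coset k + 5Z maps to the single element k of Z_5.
for k in range(n):
    coset = [x for x in xs if f(x) == k]
    print(k, "<-", coset)
```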