Cut property

For any cut C of the graph, if the weight of an edge e in the cut-set of C is strictly smaller than the weights of all other edges of the cut-set of C, then this edge belongs to all MSTs of the graph.

Proof: Assume that there is an MST T that does not contain e. Adding e to T will produce a cycle that crosses the cut once at e and crosses back at another edge e'. Deleting e', we get a spanning tree T∖{e'}∪{e} of strictly smaller weight than T. This contradicts the assumption that T was an MST.

By a similar argument, if more than one edge is of minimum weight across a cut, then each such edge is contained in some minimum spanning tree.

https://en.wikipedia.org/wiki/Minimum_spanning_tree#Cut_property

Definition in class

Let (S,V\S) be a cut and let e∈E be a min weight crossing edge for the cut. Then the MST contains e. (i.e. e is a “safe edge”)

Proof in class

Suppose the MST T doesn't contain e = (u,v) (e ∉ T)

Since T is a spanning tree, there is a path P from u to v in T. Because e crosses the cut (S, V\S), P must also cross the cut at some edge e', and w(e') ≥ w(e) since e is a min weight crossing edge.

Exchange e' for e: T' = T ∖ {e'} ∪ {e} is still a spanning tree, and

W(T') = W(T) - w(e') + w(e) < W(T)

(strictly, when e is the unique min weight crossing edge), contradicting the assumption that T was an MST.

Greedy MST Algorithm

A = ∅
for j = 1 to |V|-1

  • Find a cut (S, V\S) s.t. no edges in A cross the cut
  • Add a min weight crossing edge e for that cut [A <- A ∪ {e}]

Prim’s Algorithm

A = ∅

while |A| < |V| - 1

(1) Find an edge (u,v) of min weight that connects the tree A to a vertex not yet in the tree

(2) A <- A ∪ {(u,v)}
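
A minimal runnable sketch of this loop, not the code from class: a lazy O(V^2) version in Java, assuming a connected graph given as an adjacency matrix w where w[u][v] = 0 means "no edge".

import java.util.Arrays;

public class PrimMST {
    // Grow the tree A one min weight crossing edge at a time.
    static int[] prim(int[][] w) {            // returns parent[]: tree edges (parent[v], v)
        int n = w.length;
        int[] parent = new int[n];
        int[] best = new int[n];              // best[v] = cheapest edge from tree to v
        boolean[] inTree = new boolean[n];
        Arrays.fill(best, Integer.MAX_VALUE);
        best[0] = 0; parent[0] = -1;          // start the tree at vertex 0
        for (int it = 0; it < n; it++) {
            int u = -1;                       // cheapest vertex not yet in the tree
            for (int v = 0; v < n; v++)
                if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
            inTree[u] = true;
            for (int v = 0; v < n; v++)       // update crossing edges for the new cut
                if (w[u][v] != 0 && !inTree[v] && w[u][v] < best[v]) {
                    best[v] = w[u][v]; parent[v] = u;
                }
        }
        return parent;
    }
}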

Example

Proof

In each iteration, the cut is specified via S = the set of vertices in the tree A

Prim adds min weight crossing edge for cut (S,V\S) (Apply C.P. Theorem)

Pseudocode

Kruskal’s Algorithm

A = ∅

while |A| < |V| - 1

(1) Find the edge (u,v) of min weight not yet considered

(2) If there is no u-v path using edges in A, then A <- A ∪ {(u,v)}

Return A once |A| = |V| - 1
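
A minimal Java sketch of this loop (assumptions: edges are given as {u, v, weight} triples, and the u-v path test is done with the union-find structure covered later in these notes, not by an explicit path search):

import java.util.Arrays;

public class KruskalMST {
    static int[] id;                                   // union-find forest
    static int find(int i) { while (id[i] != i) i = id[i]; return i; }

    static long kruskal(int n, int[][] edges) {        // returns total MST weight
        Arrays.sort(edges, (a, b) -> Integer.compare(a[2], b[2]));  // min weight first
        id = new int[n];
        for (int i = 0; i < n; i++) id[i] = i;
        long total = 0; int taken = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {                            // no u-v path using edges in A
                id[ru] = rv;                           // A <- A ∪ {(u,v)}
                total += e[2];
                if (++taken == n - 1) break;           // |A| = |V| - 1: done
            }
        }
        return total;
    }
}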

Example

Proof by Contradiction

First prove that the result is a spanning tree, then prove that the constructed spanning tree is of minimal weight.

Pseudocode

Boruvka’s Algorithm

The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest.

Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest.

Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of its former value, so after logarithmically many repetitions the process finishes.

When it does, the set of edges it has added forms the minimum spanning forest.

https://en.wikipedia.org/wiki/Bor%C5%AFvka%27s_algorithm

Pseudocode

Proof

Optional Material: http://www.csee.wvu.edu/~ksmani/courses/fa01/random/lecnotes/lec11/MST.pdf

Boruvka’s Algorithm is based upon the following lemma:

Let v ∈ V be any vertex in G. The minimum spanning tree for G must contain the edge (v, w)
that is the minimum weight edge incident on v.

This can be proved by contradiction. Assume the MST does not contain the minimum weight
edge incident on v. Then there must be some other edge in the MST connecting v to the rest of
the tree. If we remove that edge and add the minimum weight edge instead (removing first, so
that no cycle is created), the total weight would be less than or equal to that of the original
tree, and strictly less when edge weights are distinct. This is a contradiction.

Additionally, at each contraction, the algorithm creates a forest in the original graph.
This forest does not have any edges that would not be in the MST.
Finally, the algorithm runs until there is only one component.

http://www-student.cse.buffalo.edu/~atri/cse331/fall16/recitations/Recitation10.pdf

Example

Initial Graph

Initially the MST is empty. Every vertex is its own single-element component.

For every component, find the cheapest edge that connects it to some other component.

Component   Cheapest edge connecting it to some other component
{0}         0-1
{1}         0-1
{2}         2-8
{3}         2-3
{4}         3-4
{5}         5-6
{6}         6-7
{7}         6-7
{8}         2-8

Adding these cheapest edges, the MST becomes {0-1, 2-8, 2-3, 3-4, 5-6, 6-7}.

After the above step, the components are { {0,1}, {2,3,4,8}, {5,6,7} }.

We again repeat the step, i.e., for every component, find the cheapest edge that connects it to some other component.

Component    Cheapest edge connecting it to some other component
{0,1}        1-2 (or 0-7)
{2,3,4,8}    2-5
{5,6,7}      2-5

Adding these edges, the MST becomes {0-1, 2-8, 2-3, 3-4, 5-6, 6-7, 1-2, 2-5}.

At this stage, there is only one component, {0, 1, 2, 3, 4, 5, 6, 7, 8}, containing all the vertices. Since only one component is left, we stop and return the MST.

https://www.geeksforgeeks.org/boruvkas-algorithm-greedy-algo-9/
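
A compact Java sketch of the rounds walked through above (with assumptions: edges as {u, v, weight} triples, distinct weights so the per-component cheapest edge is unambiguous, and the same simple union-find forest used for Kruskal):

import java.util.Arrays;

public class BoruvkaMST {
    static int find(int[] id, int i) { while (id[i] != i) i = id[i]; return i; }

    static long boruvka(int n, int[][] edges) {       // weight of the min spanning forest
        int[] id = new int[n];
        for (int i = 0; i < n; i++) id[i] = i;
        long total = 0;
        int components = n;
        boolean merged = true;
        while (components > 1 && merged) {
            merged = false;
            int[] cheapest = new int[n];              // cheapest[c] = index of the cheapest
            Arrays.fill(cheapest, -1);                // edge leaving component (root) c
            for (int k = 0; k < edges.length; k++) {
                int cu = find(id, edges[k][0]), cv = find(id, edges[k][1]);
                if (cu == cv) continue;               // edge inside one component: skip
                if (cheapest[cu] == -1 || edges[k][2] < edges[cheapest[cu]][2]) cheapest[cu] = k;
                if (cheapest[cv] == -1 || edges[k][2] < edges[cheapest[cv]][2]) cheapest[cv] = k;
            }
            for (int c = 0; c < n; c++) {             // add all selected edges to the forest
                int k = cheapest[c];
                if (k == -1) continue;
                int ru = find(id, edges[k][0]), rv = find(id, edges[k][1]);
                if (ru != rv) { id[ru] = rv; total += edges[k][2]; components--; merged = true; }
            }
        }
        return total;
    }
}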

Disjoint Set (Union-Find)

A disjoint-set data structure is a data structure that keeps track of a set of elements partitioned into a number of disjoint (non-overlapping) subsets. A union-find algorithm is an algorithm that performs two useful operations on such a data structure:

Find

Determine which subset a particular element is in. This can be used for determining if two elements are in the same subset.

Union

Join two subsets into a single subset.

One application of the disjoint-set data structure is to check whether a given graph contains a cycle or not.

The union-find algorithm can be used to check whether an undirected graph contains a cycle: process the edges one by one, and if both endpoints of an edge are already in the same subset, that edge closes a cycle. This method assumes that the graph doesn’t contain any self-loops.

https://www.geeksforgeeks.org/union-find/

Example

Pseudocode

Find(i):
    while id[i] != i
        i <- id[i]          // keep following parent links, id[id[...id[i]...]], until id[i] = i
    return i

Worst case: O(n) iterations of the while loop (a tall, path-shaped tree).

Quick-Union(i,j):
    // already know root of i = a and root of j = b
    set either id[a] <- b or id[b] <- a

Weighted-Quick-Union(i,j):
    // assume we keep track of the size (# nodes) of each tree
    if the tree rooted at a is larger
        id[b] <- a
    otherwise
        id[a] <- b
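
A directly runnable Java version of the pseudocode above (a sketch: an array-based forest with size tracking):

public class WeightedQuickUnion {
    int[] id, size;                    // id[i] = parent of i; size[r] = # nodes under root r

    WeightedQuickUnion(int n) {
        id = new int[n]; size = new int[n];
        for (int i = 0; i < n; i++) { id[i] = i; size[i] = 1; }
    }

    int find(int i) {                  // follow parent links until id[i] = i
        while (id[i] != i) i = id[i];
        return i;
    }

    void union(int i, int j) {
        int a = find(i), b = find(j);
        if (a == b) return;
        if (size[a] >= size[b]) {      // hang the smaller tree under the larger root
            id[b] = a; size[a] += size[b];
        } else {
            id[a] = b; size[b] += size[a];
        }
    }
}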

Proposition

Weighted-Quick-Union ensures that all nodes have depth <= log_2(n), where n is the number of nodes

Proof

let v be some node

(1) Depth of v increases (by 1) only if the root of v changes
(2) The root of v changes only if v's tree is merged under a tree at least as large, i.e. only if the size of v's tree at least doubles
(3) Let S_j be the size (# nodes) of the tree containing v after j root changes
(4) S_j >= 2·S_(j-1) >= 2·2·S_(j-2) >= … >= 2^j·1 = 2^j. Since S_j <= n, we get 2^j <= n, i.e. j <= log_2(n)

W-Q-U Runtime

FIND: O(log n)

UNION: O(1) once the two roots are known (otherwise dominated by the two FINDs)
Given a mixture of m connected/union operations on n elements, total runtime = O(m·log n)

Union-Rank

The idea is to always attach the tree of smaller depth under the root of the deeper tree. This technique is called union by rank. The term rank is preferred over height because if the path compression technique (discussed below) is used, then rank is not always equal to height. Also, size (in place of height) of trees can be used as rank; using size as rank also yields a worst-case time complexity of O(log n).

Let us see the above example with union by rank.
Initially, all elements are single-element subsets.

0  1  2  3

Do Union(0, 1)
   1   2   3
  /
 0

Do Union(1, 2)
   1   3
  / \
 0   2

Do Union(2, 3)
     1
   / | \
  0  2  3

Path Compression

The second optimization to the naive method is path compression. The idea is to flatten the tree whenever find() is called.

When find() is called for an element x, root of the tree is returned.

The find() operation traverses up from x to find root.

The idea of path compression is to make the found root as parent of x so that we don’t have to traverse all intermediate nodes again.

If x is root of a subtree, then path (to root) from all nodes under x also compresses.

Let the subset {0, 1, ..., 9} be represented as below, and let find() be called
for element 3.

            9
          / | \
         4  5  6
        / \   / \
       0   3 7   8
          / \
         1   2

When find() is called for 3, we traverse up and find 9 as the representative
of this subset. With path compression, we also make 3 a child of 9, so
that when find() is called next time for 1, 2 or 3, the path to the root is reduced.

            9
         / / \ \
        4 5   6      3
        |    / \    / \
        0   7   8  1   2

The two techniques complement each other.

The time complexity of each operation becomes even smaller than O(log n).

In fact, the amortized time complexity effectively becomes a small constant (it is O(α(n)), where α is the extremely slow-growing inverse Ackermann function).

https://www.geeksforgeeks.org/union-find-algorithm-set-2-union-by-rank/
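
A minimal Java sketch combining the two optimizations (union by rank plus path compression, as described above; the variable names are my own):

public class UnionFind {
    int[] parent, rank;

    UnionFind(int n) {
        parent = new int[n]; rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    int find(int x) {                         // path compression: point x straight at the root
        if (parent[x] != x) parent[x] = find(parent[x]);
        return parent[x];
    }

    void union(int x, int y) {                // union by rank
        int rx = find(x), ry = find(y);
        if (rx == ry) return;
        if (rank[rx] < rank[ry]) { int t = rx; rx = ry; ry = t; }
        parent[ry] = rx;                      // attach the lower-rank root under the higher
        if (rank[rx] == rank[ry]) rank[rx]++;
    }
}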

BFS (solution for unweighted graph)

Breadth First Traversal (or Search) for a graph is similar to Breadth First Traversal of a tree.

The only catch here is, unlike trees, graphs may contain cycles, so we may come to the same node again.

To avoid processing a node more than once, we use a boolean visited array.

For simplicity, it is assumed that all vertices are reachable from the starting vertex.

Example

procedure BFS(G, root) is
    let Q be a queue
    label root as explored
    Q.enqueue(root)
    while Q is not empty do
        v := Q.dequeue()
        if v is the goal then
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as explored then
                label w as explored
                Q.enqueue(w)
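
Since the heading bills BFS as the shortest-path solution for unweighted graphs, here is a minimal Java sketch that returns distances instead of searching for a goal (adjacency-list representation assumed):

import java.util.*;

public class BFS {
    // dist[w] = # edges on a shortest root-to-w path, or -1 if w is unreachable.
    static int[] distances(List<List<Integer>> adj, int root) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, -1);           // -1 doubles as "not labeled as explored"
        Queue<Integer> q = new ArrayDeque<>();
        dist[root] = 0;
        q.add(root);
        while (!q.isEmpty()) {
            int v = q.remove();
            for (int w : adj.get(v))
                if (dist[w] == -1) {     // visit each vertex at most once
                    dist[w] = dist[v] + 1;
                    q.add(w);
                }
        }
        return dist;
    }
}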

Path Relaxation Property

Weighted DAG (Directed Acyclic Graph)

For a general weighted graph, we can calculate single source shortest distances in O(VE) time using Bellman–Ford Algorithm.

For a graph with no negative weights, we can do better and calculate single source shortest distances in O(E + V log V) time using Dijkstra’s algorithm.

We can calculate single source shortest distances in O(V+E) time for DAGs. The idea is to use Topological Sorting.

We initialize distances to all vertices as infinite and distance to source as 0, then we find a topological sorting of the graph.

Topological sorting of a graph represents a linear ordering of its vertices.

Once we have the topological order (or linear representation), we process all vertices one by one in that order. For every vertex being processed, we update the distances of its adjacent vertices using the distance of the current vertex.

https://www.geeksforgeeks.org/shortest-path-for-directed-acyclic-graphs/

Example

Algorithm

(1) Use topological sort (via DFS) to obtain topological ordering of vertices

(2) For each vertex u (in topo order): for all adjacent vertices v, call RELAX(u,v) (sketched below)
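
A minimal Java sketch of these two steps (assumed representation: adj.get(u) holds {v, w(u,v)} pairs; topological order obtained via DFS reverse postorder):

import java.util.*;

public class DagShortestPaths {
    static long[] shortestPaths(List<List<int[]>> adj, int src) {
        int n = adj.size();
        Deque<Integer> order = new ArrayDeque<>();   // used as a stack
        boolean[] seen = new boolean[n];
        for (int u = 0; u < n; u++)                  // (1) topological sort via DFS
            if (!seen[u]) dfs(u, adj, seen, order);
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[src] = 0;
        for (int u : order)                          // (2) vertices in topo order
            if (dist[u] != Long.MAX_VALUE)
                for (int[] e : adj.get(u))           // RELAX(u, v)
                    if (dist[u] + e[1] < dist[e[0]]) dist[e[0]] = dist[u] + e[1];
        return dist;
    }

    static void dfs(int u, List<List<int[]>> adj, boolean[] seen, Deque<Integer> order) {
        seen[u] = true;
        for (int[] e : adj.get(u)) if (!seen[e[0]]) dfs(e[0], adj, seen, order);
        order.push(u);                               // reverse postorder = topo order
    }
}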

Proof of Correctness

Consider shortest path from s to v (v_0,…,v_k) with v_0 = s and v_k = v

Since vertices are processed in topo order, the sequence of RELAX calls includes the subsequence RELAX(v_0,v_1), RELAX(v_1,v_2), …, RELAX(v_(k-1),v_k). By the path relaxation property, after this subsequence of calls d(v) equals the weight of the shortest path from s to v.

Edge Relaxation

Edge relaxation is the operation that lowers the calculated cost of reaching a vertex. More concretely, the operation is:

For the edge from the vertex u to the vertex v, if d[u]+w(u,v)<d[v] is satisfied, update d[v] to d[u]+w(u,v)

Here the vertices u and v are neighbors in the graph, d[u] and d[v] denote the current cost of reaching u and v respectively, and w(u,v) denotes the weight of the edge from the vertex u to the vertex v.

Suppose we already know some path from the starting vertex S to the vertex u with cost d[u], and some path from S to the vertex v with cost d[v].
Here, edge relaxation updates d[v] to d[u]+w(u,v) when d[u]+w(u,v) is less than d[v].

In other words, it updates the current reaching cost to the vertex v (d[v]) to the lower reaching cost (d[u]+w(u,v)).

The cost is updated because the path through the vertex u is shorter: its reaching cost is lower than the cost of the current path to v.

Actually, the algorithms for the shortest paths problem solve the problem by repeatedly using the edge relaxation.

https://towardsdatascience.com/algorithm-shortest-paths-1d8fa3f50769
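
As a sketch, the relaxation rule above is only a few lines in Java (the pred array, which records the shortest-path tree edges, is an addition not mentioned in the quoted passage):

// RELAX(u, v): if the path through u is cheaper, lower d[v] and remember u.
static void relax(int u, int v, long w, long[] d, int[] pred) {
    if (d[u] != Long.MAX_VALUE && d[u] + w < d[v]) {
        d[v] = d[u] + w;     // found a cheaper way to reach v
        pred[v] = u;         // tentative shortest-path tree edge (u, v)
    }
}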

Dijkstra’s Algorithm (a greedy algorithm)

Conceptual Version

Example

Code
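
This section is still blank in the notes; below is a minimal binary-heap sketch in Java, assuming adj.get(u) holds {v, w(u,v)} pairs and all weights are non-negative (not necessarily the version shown in class):

import java.util.*;

public class Dijkstra {
    // O(E log V): greedily finalize the closest unfinished vertex, then relax its edges.
    static long[] dijkstra(List<List<int[]>> adj, int src) {
        int n = adj.size();
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[src] = 0;
        PriorityQueue<long[]> pq =                      // entries: {vertex, tentative dist}
            new PriorityQueue<>((a, b) -> Long.compare(a[1], b[1]));
        pq.add(new long[]{src, 0});
        while (!pq.isEmpty()) {
            long[] top = pq.remove();
            int u = (int) top[0];
            if (top[1] > dist[u]) continue;             // stale entry: u already finalized
            for (int[] e : adj.get(u))
                if (dist[u] + e[1] < dist[e[0]]) {      // RELAX(u, v)
                    dist[e[0]] = dist[u] + e[1];
                    pq.add(new long[]{e[0], dist[e[0]]});
                }
        }
        return dist;
    }
}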

Code compare with Prim

Runtime

Claim

Correctness

Bellman-Ford Algorithm

Code
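
This section is also blank; a minimal Java sketch, assuming edges as {u, v, weight} triples (weights may be negative):

import java.util.Arrays;

public class BellmanFord {
    // |V|-1 passes of relaxing every edge suffice; an improvement on pass |V|
    // means a negative cycle is reachable from src (signaled by returning null).
    static long[] bellmanFord(int n, int[][] edges, int src) {
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[src] = 0;
        for (int pass = 1; pass <= n; pass++) {
            boolean changed = false;
            for (int[] e : edges)
                if (dist[e[0]] != Long.MAX_VALUE && dist[e[0]] + e[2] < dist[e[1]]) {
                    dist[e[1]] = dist[e[0]] + e[2];   // RELAX(u, v)
                    changed = true;
                }
            if (!changed) return dist;                // converged early
            if (pass == n) return null;               // still improving: negative cycle
        }
        return dist;                                  // unreachable
    }
}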

Correctness

From Lectures

Floyd-Warshall Algorithm

Problem

Find the shortest path from i to j

Subproblem

Find the shortest path from i to j where the intermediate vertices belong to {1,2,…,k}

The full problem is the case k = n: find the shortest path from i to j where the intermediate vertices may be any of {1,2,…,n}

D_ij^(k) = shortest i-to-j distance with intermediate vertices restricted to the set {1,2,…,k}, with base case D_ij^(0) = w(i,j) and recurrence D_ij^(k) = min( D_ij^(k-1), D_ik^(k-1) + D_kj^(k-1) )
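
The recurrence turns into three nested loops; a minimal Java sketch (dist must be pre-filled with the edge weights, 0 on the diagonal and a large value such as INF for "no edge"):

public class FloydWarshall {
    static final long INF = Long.MAX_VALUE / 2;   // half of max: safe to add two of these

    // After round k of the outer loop, dist[i][j] = D_ij^(k):
    // shortest i-to-j distance with intermediate vertices restricted to {1..k}.
    static void floydWarshall(long[][] dist) {
        int n = dist.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }
}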

Flow Network Example

Maximum flow Problem

Min-cut Problem

Residual Graph G_f

Ford-Fulkerson Algorithm

Flow Value Lemma

Weak Duality

Max-flow min-cut theorem

Bad case for Ford-Fulkerson Algorithm

Shortest augmenting path

Lemma

Proof

Theorem

Blocking-flow Algorithm

Selection: medians and order statistics

A naive solution

Quickselect

Quickselect Worst-case Analysis

Pick a Good Pivot

Runtime

Probability

Coin Example

Dice Example

Events

Random Variables

Events Based on Random Variables

Expected Value

Linearity of Expectation

Independence

Recall Quickselect

Randomized Quickselect
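
A minimal Java sketch of randomized quickselect (iterative, with a uniformly random pivot; the partition scheme is one standard choice, not necessarily the one from lecture):

import java.util.Random;

public class Quickselect {
    static final Random RNG = new Random();

    // Returns the k-th smallest element of a (k is 0-based); expected O(n) time.
    static int select(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int p = partition(a, lo, hi, lo + RNG.nextInt(hi - lo + 1));
            if (k == p) return a[k];
            else if (k < p) hi = p - 1;      // recurse only into the side holding k
            else lo = p + 1;
        }
        return a[lo];
    }

    static int partition(int[] a, int lo, int hi, int pivotIdx) {
        int pivot = a[pivotIdx];
        swap(a, pivotIdx, hi);               // park the pivot at the end
        int store = lo;
        for (int i = lo; i < hi; i++)
            if (a[i] < pivot) swap(a, i, store++);
        swap(a, store, hi);                  // pivot lands in its final sorted position
        return store;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}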

Pick a Good Pivot

Sketch of Bound on Expected Runtime

Dictionary

Warm-up

Definition

Operations

Unordered list

Ordered list

Balanced binary search tree

Direct-address table

Hash tables

Collision

Not a natural number

Handling collisions

Chaining

Operations

Load factor

Assumption: Simple uniform hashing

Design hash function

Division method

Multiplication method

Universal hashing

Average case

Construct

Open addressing

Average-case

Linear probing

Quadratic probing

Double hashing

Amortized Analysis

The peril of per-operation worst-case analysis

Aggregate analysis

Accounting method

Incrementing Binary Counter Example

Dynamic tables

Worst case

Challenge 1

Horner’s Method
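
A sketch of Horner's rule applied to hashing a string key (R and Q, the radix and modulus, are assumed parameters in the Rabin-Karp style):

// Treat s[0..m-1] as a degree-(m-1) polynomial in R and evaluate it mod Q:
// h = (s[0]*R^(m-1) + s[1]*R^(m-2) + ... + s[m-1]) mod Q, one multiply-add per char.
static long hash(String s, int m, long R, long Q) {
    long h = 0;
    for (int i = 0; i < m; i++)
        h = (h * R + s.charAt(i)) % Q;
    return h;
}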

Challenge 2

Efficiently computing the hash function

Rabin-Karp substring search example

Rabin-Karp analysis

Substring search: Knuth-Morris-Pratt

Deterministic finite state automaton (DFA)

Java implementation

DFA construction

Build DFA from pattern

DFA construction in linear time

Java implementation

KMP substring search analysis

Greedy Algorithms

Interval scheduling

Theorem

Proof

Analysis

Interval partitioning

Scheduling to minimize lateness

Optimal caching

Keep updating……