\begin{subparag}{Remark}
To verify that a set of vertices indeed forms an SCC, we first verify that every vertex can reach every other vertex. We then also need to verify that the set is maximal, which we can do by adding any vertex that has a connection to the candidate set, and checking that the result is no longer strongly connected.
\end{subparag}
\begin{subparag}{Example}
For instance, the first example is not an SCC since $c\not\leadsto b$; the second is not one either since we could add $f$, so it is not maximal:
\imagehere[0.7]{Lecture15/WrongSCCExample.png}
However, here are all of the SCCs of the graph:
\imagehere[0.7]{Lecture15/SCCExample.png}
\end{subparag}
\end{parag}
\begin{parag}{Theorem: Existence and unicity of SCCs}
Every vertex belongs to exactly one SCC.
\begin{subparag}{Proof}
First, we notice that a vertex always belongs to at least one SCC: we can start from the strongly connected set containing only this vertex, and add elements until the set becomes maximal. This shows the existence.
Second, let us suppose for contradiction that SCCs are not unique. Then, for some graph, there exists a vertex $v$ such that $v \in C_1$ and $v \in C_2$, where $C_1$ and $C_2$ are two SCCs such that $C_1 \neq C_2$. By definition of SCCs, for all $u_1 \in C_1$, we have $u_1 \leadsto v$ and $v \leadsto u_1$, and similarly for all $u_2 \in C_2$. By transitivity, this means that $u_1 \leadsto u_2$ and $u_2 \leadsto u_1$ for all such $u_1, u_2$. Thus $C_1 \cup C_2$ is strongly connected, which contradicts the maximality of $C_1$ and $C_2$ and thus shows the unicity.
\qed
\end{subparag}
\end{parag}
\begin{parag}{Definition: Component graph}
For a directed graph (digraph) $G = \left(V, E\right)$, its \important{component graph} $G^{SCC} = \left(V^{SCC}, E^{SCC}\right)$ is defined to be the graph where $V^{SCC}$ has a vertex for each SCC in $G$, and $E^{SCC}$ has an edge $\left(C_1, C_2\right)$ whenever $G$ has an edge from some vertex of the SCC $C_1$ to some vertex of the SCC $C_2$.
\begin{subparag}{Example}
For instance, for the digraph hereinabove:
\imagehere[0.5]{Lecture15/ComponentGraphExample.png}
\end{subparag}
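\begin{subparag}{Sketch}
As an illustration, here is a minimal Python sketch (our own, not from the course; the function name is ours) that builds the component graph once every vertex has been labelled with the index of its SCC:
\begin{filecontents*}[overwrite]{Lecture15/ComponentGraphSketch.code}
def component_graph(edges, scc_id):
    """Build the component graph of a digraph given as an edge list.

    edges:  iterable of (u, v) pairs
    scc_id: dict mapping each vertex to the index of its SCC
    """
    vertices = set(scc_id.values())          # one vertex per SCC
    scc_edges = {(scc_id[u], scc_id[v])      # edges between distinct SCCs
                 for (u, v) in edges
                 if scc_id[u] != scc_id[v]}
    return vertices, scc_edges

# Two SCCs: {a, b} (index 0) and {c} (index 1), with the edge b -> c.
print(component_graph([("a", "b"), ("b", "a"), ("b", "c")],
                      {"a": 0, "b": 0, "c": 1}))
# ({0, 1}, {(0, 1)})
\end{filecontents*}
\importcode{Lecture15/ComponentGraphSketch.code}{python}
\end{subparag}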
\end{parag}
\begin{parag}{Theorem}
For any digraph $G$, its component graph $G^{SCC}$ is a DAG (directed acyclic graph).
\begin{subparag}{Proof}
Let's suppose for contradiction that $G^{SCC}$ has a cycle. This means that two (or more) SCCs of $G$ are reachable from one another; thus any element of the first SCC has a path to any element of the second SCC, and reciprocally. However, this means that we could merge these SCCs into a single strongly connected set, contradicting their maximality.
\qed
\end{subparag}
\end{parag}
\begin{parag}{Definition: Graph transpose}
Let $G$ be a digraph (directed graph).
The \important{transpose} of $G$, written $G^T$, is the graph where all the edges have their direction reversed:
\[G^T = \left(V, E^T\right), \mathspace \text{where } E^T = \left\{\left(u, v\right): \left(v, u\right) \in E\right\}\]
\begin{subparag}{Remark}
We call this a transpose since the transpose of $G$ is basically given by transposing its adjacency matrix.
\end{subparag}
\begin{subparag}{Observation}
We can create $G^T$ in $\Theta\left(V + E\right)$ if we are using adjacency lists.
\end{subparag}
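\begin{subparag}{Sketch}
As an illustration, here is a minimal Python sketch (ours, not from the course) of this construction on adjacency lists; every vertex and every edge is visited exactly once:
\begin{filecontents*}[overwrite]{Lecture15/TransposeSketch.code}
def transpose(adj):
    """Return the transpose of a digraph given as adjacency lists."""
    adj_t = {u: [] for u in adj}      # keep every vertex, even isolated ones
    for u, neighbours in adj.items():
        for v in neighbours:
            adj_t[v].append(u)        # reverse each edge (u, v) into (v, u)
    return adj_t

print(transpose({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))
# {'a': ['c'], 'b': ['a', 'c'], 'c': ['b']}
\end{filecontents*}
\importcode{Lecture15/TransposeSketch.code}{python}
\end{subparag}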
\end{parag}
\begin{parag}{Theorem}
A graph and its transpose have the same SCCs.
\end{parag}
\begin{parag}{Kosaraju's algorithm}
The idea of Kosaraju's algorithm to compute component graphs efficiently is the following (a Python sketch is given right after the list):
\begin{enumerate}
\item Call \texttt{DFS($G$)} to compute the finishing times $u.f$ for all $u$.
\item Compute $G^T$.
\item Call \texttt{DFS($G^T$)}, but consider the vertices in the main loop of this procedure in order of decreasing $u.f$ (as computed in the first DFS).
\item Output the vertices of each tree of the depth-first forest formed in the second DFS as a separate SCC. Cross-edges represent links in the component graph.
\end{enumerate}
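\begin{subparag}{Sketch}
As an illustration, here is a compact Python sketch of these steps (our own, not the course's reference code); the first DFS is written iteratively to avoid recursion limits, and every vertex is assumed to appear as a key of the adjacency dict:
\begin{filecontents*}[overwrite]{Lecture15/KosarajuSketch.code}
def kosaraju(adj):
    """Return the SCCs of a digraph (adjacency lists) as a list of sets."""
    # First pass: iterative DFS on G, recording vertices by finishing time.
    finished, visited = [], set()
    for root in adj:
        if root in visited:
            continue
        visited.add(root)
        stack = [(root, iter(adj[root]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in visited:
                    visited.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:
                finished.append(u)    # all of u's neighbours are done
                stack.pop()

    # Second pass: DFS on the transpose, in decreasing finishing time.
    adj_t = {u: [] for u in adj}
    for u in adj:
        for v in adj[u]:
            adj_t[v].append(u)
    sccs, assigned = [], set()
    for root in reversed(finished):
        if root in assigned:
            continue
        assigned.add(root)
        component, stack = set(), [root]
        while stack:
            u = stack.pop()
            component.add(u)
            for v in adj_t[u]:
                if v not in assigned:
                    assigned.add(v)
                    stack.append(v)
        sccs.append(component)        # one tree of the second DFS forest
    return sccs

# The cycle a -> b -> c -> a is one SCC; d (pointing into it) is another.
print(kosaraju({"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}))
# [{'d'}, {'a', 'b', 'c'}]
\end{filecontents*}
\importcode{Lecture15/KosarajuSketch.code}{python}
\end{subparag}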
\begin{subparag}{Unicity}
Since SCCs are unique, the result will always be the same, even though graphs can be traversed in very different ways with DFS.
\end{subparag}
\begin{subparag}{Analysis}
Since every step runs in $\Theta\left(V + E\right)$, our algorithm runs in $\Theta\left(V + E\right)$.
\end{subparag}
\begin{subparag}{Intuition}
The main intuition for this algorithm is to realise that elements of an SCC can be reached from one another both when going forwards (in the regular graph) and backwards (in the transposed graph). Thus, we first compute some kind of ``topological sort'' (not a real one, since we don't have a DAG), and use its reverse order as starting points to go in the other direction. If two elements can reach each other in both directions, they will indeed end up in the same tree. If there is a direction in which one element cannot reach the other, then the first DFS orders them so that the second DFS starts from the one which cannot reach the other.
\end{subparag}
\begin{subparag}{Personal remark}
The Professor used the name ``magic algorithm'' since we do not prove this theorem and it seems very magical. I feel like it is better to give it its real name, but probably it is important to know its informal name for exams.
\end{subparag}
\end{parag}
\lecture{16}{2022-11-18}{This date is really nice too, though}{}
\begin{parag}{Basic problem}
The basic problem solved by flow networks is shipping as much of a resource as possible from one node to another. Edges have a weight which, if they were pipes, would represent their flow capacity. The question is then how to maximise the rate of flow from the source to the sink.
\begin{subparag}{Applications}
This has many applications, for instance evacuating people out of a building: given the exits and the sizes of the corridors, we can compute how many people we could evacuate in a given time.
Another application is finding the best way to ship goods over a road network, or how to best disrupt such shipments in another country.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Flow network}
A \important{flow network} is a directed graph $G = \left(V, E\right)$, where each edge $\left(u, v\right)$ has a capacity $c\left(u, v\right) \geq 0$. This function is such that $c\left(u, v\right) = 0$ if and only if $\left(u, v\right) \not\in E$. Finally, we have a \important{source} node $s$ and a \important{sink} node $t$.
We also assume that there are never antiparallel edges (both $\left(u, v\right) \in E$ and $\left(v, u\right) \in E$). This assumption is more or less without loss of generality since we could break one of the antiparallel edges into two edges linking a new node $v'$ (see the picture below). It will simplify notations in our algorithms.
\imagehere[0.6]{Lecture16/AntiparallelEdgesWLOG.png}
\end{parag}
\begin{parag}{Definition: Flow}
A \important{flow} is a function $f: V \times V \to \mathbb{R}$ satisfying the two following constraints. First, the capacity constraint states that, for all $u, v \in V$, we have:
\[0 \leq f\left(u, v\right) \leq c\left(u, v\right)\]
In other words, the flow cannot be greater than what is supported by the pipe. The second constraint is flow conservation, which states that for all $u \in V \setminus \left\{s, t\right\}$:
\[\sum_{v \in V}^{} f\left(v, u\right) = \sum_{v \in V}^{} f\left(u, v\right)\]
In other words, the flow coming into $u$ is the same as the flow coming out of $u$.
\begin{subparag}{Notation}
We will denote flows on a flow network by writing $f\left(u, v\right) / c\left(u, v\right)$ on each edge. For instance, we could have:
\imagehere[0.55]{Lecture16/FlowNetworkExample.png}
\end{subparag}
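\begin{subparag}{Sketch}
To make the two constraints concrete, here is a small Python sketch (ours; the function name and the dict-of-edges representation are our own choices) checking whether a candidate flow is feasible:
\begin{filecontents*}[overwrite]{Lecture16/FlowCheckSketch.code}
def is_feasible_flow(c, f, s, t):
    """Check the capacity and flow-conservation constraints.

    c, f: dicts mapping an edge (u, v) to its capacity / flow
    s, t: source and sink vertices
    """
    if any(e not in c for e in f):
        return False                  # no flow on non-edges (c = 0 there)
    vertices = {u for (u, v) in c} | {v for (u, v) in c}
    for e in c:                       # capacity: 0 <= f(u, v) <= c(u, v)
        if not 0 <= f.get(e, 0) <= c[e]:
            return False
    for u in vertices - {s, t}:       # conservation: inflow = outflow
        inflow = sum(f.get((v, u), 0) for v in vertices)
        outflow = sum(f.get((u, v), 0) for v in vertices)
        if inflow != outflow:
            return False
    return True

c = {("s", "a"): 3, ("a", "t"): 2}
print(is_feasible_flow(c, {("s", "a"): 2, ("a", "t"): 2}, "s", "t"))  # True
print(is_feasible_flow(c, {("s", "a"): 3, ("a", "t"): 2}, "s", "t"))  # False
\end{filecontents*}
\importcode{Lecture16/FlowCheckSketch.code}{python}
\end{subparag}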
\end{parag}
\begin{parag}{Definition: Value of a flow}
The value of a flow $f$, denoted $\left|f\right|$, is:
\[\left|f\right| = \sum_{v \in V}^{} f\left(s, v\right) - \sum_{v \in V}^{} f\left(v, s\right)\]
which is the flow out of the source minus the flow into the source.
\begin{subparag}{Observation}
By the flow conservation constraint, this is equivalent to the flow into the sink minus the flow out of the sink:
\[\left|f\right| = \sum_{v \in V}^{} f\left(v, t\right) - \sum_{v \in V}^{} f\left(t, v\right)\]
\end{subparag}
\begin{subparag}{Example}
For instance, for the flow graph and flow hereinabove:
\[\left|f\right| = \left(1 + 2\right) - 0 = 3\]
\end{subparag}
\end{parag}
\begin{parag}{Goal}
The goal is now to develop an algorithm that, given a flow network, finds a maximum flow. A basic idea that could come to mind is to take an arbitrary path through our network, consider its bottleneck link, and send this value of flow along this path. We then obtain a new graph with reduced capacities and possibly fewer links (when a capacity happens to reach 0). We can continue iteratively until the source and the sink are disconnected.
This idea would work, for example, on the following (very simple) flow network:
\imagehere[0.4]{Lecture16/FlowNetworkExample2.png}
Indeed, its bottleneck link has capacity $3$, so we send 3 units of flow on the only path. This leads to a new graph with one edge fewer, where the source and the sink are no longer connected.
However, this greedy idea runs into problems on other graphs. For instance, it may produce the following sub-optimal result on the following flow network:
\imagehere[0.4]{Lecture16/FlowNetworkExampleSubOptimal.png}
This means that we need a way to ``undo'' bad choices of paths made by our algorithm. To do so, we will need the following definitions.
\end{parag}
\begin{parag}{Definition: Residual capacity}
Given a flow network $G$ and a flow $f$, the \important{residual capacity} is defined as:
\begin{functionbypart}{c_f\left(u, v\right)}
c\left(u, v\right) - f\left(u, v\right), \mathspace \text{if } \left(u, v\right) \in E \\
f\left(v, u\right), \mathspace \text{if } \left(v, u\right) \in E \\
0, \mathspace \text{otherwise}
\end{functionbypart}
The main idea of this function is its second case: the first case is just the capacity left in the pipe, while the second case is a new, reversed edge that we add. This new edge holds a capacity representing the amount of flow that can be cancelled.
\begin{subparag}{Example}
For instance, if we have an edge $\left(u, v\right)$ with capacity $c\left(u, v\right) = 5$ and current flow $f\left(u, v\right) = 3$, then $c_f\left(u, v\right) = 5 - 3 = 2$ and $c_f\left(v, u\right) = f\left(u, v\right) = 3$.
\end{subparag}
\begin{subparag}{Remark}
This definition is the reason why we do not want antiparallel edges: the notation is much simpler without them.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Residual network}
Given a flow network $G$ and flow $f$, the \important{residual network} $G_f$ is defined as:
\[G_f = \left(V, E_f\right), \mathspace \text{where } E_f = \left\{\left(u, v\right) \in V \times V: c_{f}\left(u, v\right) > 0\right\}\]
We basically use our residual capacity function, removing edges with 0 capacity left.
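\begin{subparag}{Sketch}
As an illustration, here is a minimal Python sketch (ours; the dict-of-edges representation is our own choice) that computes the residual capacities and keeps only the edges with $c_f > 0$:
\begin{filecontents*}[overwrite]{Lecture16/ResidualSketch.code}
def residual_network(c, f):
    """Build the residual network from capacities c and flow f.

    c, f: dicts mapping an edge (u, v) to its capacity / flow.
    Assumes no antiparallel edges, as in the definition above.
    """
    cf = {}
    for (u, v), cap in c.items():
        if cap - f.get((u, v), 0) > 0:
            cf[(u, v)] = cap - f.get((u, v), 0)   # capacity left in the pipe
        if f.get((u, v), 0) > 0:
            cf[(v, u)] = f[(u, v)]                # flow that can be cancelled
    return cf

# The example from the residual capacity definition: c = 5, f = 3.
print(residual_network({("u", "v"): 5}, {("u", "v"): 3}))
# {('u', 'v'): 2, ('v', 'u'): 3}
\end{filecontents*}
\importcode{Lecture16/ResidualSketch.code}{python}
\end{subparag}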
\end{parag}
\begin{parag}{Definition: Augmenting path}
Given a flow network $G$ and flow $f$, an \important{augmenting path} is a simple path (never going twice on the same vertex) from $s$ to $t$ in the residual network $G_f$.
\important{Augmenting the flow} $f$ by this path means pushing the minimum residual capacity of the path along it: we add it to the flow of edges that were in the original graph, and subtract it from the flow of the original edges whose reversals we took in the residual network. This can be seen by looking at the definition of residual capacity (if $\left(u, v\right) \in E$, the flow appears with a negative sign; if $\left(v, u\right) \in E$, it appears with a positive sign).
\end{parag}
\begin{filecontents*}[overwrite]{Lecture16/FordFulkersonMethod.code}
procedure FordFulkersonMethod(G, s, t):
    initialise flow f to 0
    while there exists an augmenting path p in the residual network Gf: // find paths where we have edges with still some flow we could use
        augment flow f along p
    return f
\end{filecontents*}
\begin{parag}{Ford-Fulkerson algorithm}
The idea of the Ford-Fulkerson greedy algorithm for finding a maximum flow in a flow network is, like the one we had before, to improve our flow iteratively, but using residual networks in order to be able to cancel wrong choices of paths.
\begin{subparag}{Example}
Let's consider again our non-trivial flow network, and the suboptimal flow our naive algorithm found:
\imagehere[0.5]{Lecture16/FlowNetworkExampleSubOptimal.png}
Now, the residual network looks like:
\imagehere[0.5]{Lecture16/FlowNetworkSubOptimal-ResidualNetwork.png}
The new algorithm will indeed be able to take the new path. Taking the edge going from bottom to top basically cancels the choice made before. Being careful to apply the new path correctly (meaning that we add the bottleneck to edges from $G$ and subtract it on edges introduced by the residual network), we get the following flow and residual network:
\imagehere{Lecture16/FlowNetworkSubOptimal-Optimal.png}
\end{subparag}
\begin{subparag}{Proof of optimality}
We will want to prove its optimality. However, to do so, we need the following definitions.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Cut of flow network}
A \important{cut of flow network} $G = \left(V, E\right)$ is a partition of $V$ into $S$ and $T = V \setminus S$ such that $s \in S$ and $t \in T$.
In other words, we split our graph into nodes on the source side and on the sink side.
\begin{subparag}{Example}
For instance, we could have the following cut (where nodes from $S$ are coloured in black, and ones from $T$ are coloured in white):
\imagehere[0.6]{Lecture16/FlowNetworkCutExample.png}
Note that the cut does not necessarily have to be a straight line (since, anyway, straight lines make no sense for a graph).
\end{subparag}
\end{parag}
\begin{parag}{Definition: Net flow across a cut}
The \important{net flow across a cut} $\left(S, T\right)$ is:
\[f\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} f\left(u, v\right) - \sum_{\substack{u \in S \\ v \in T}}^{} f\left(v, u\right)\]
This is basically the flow leaving $S$ minus the flow entering $S$.
\begin{subparag}{Example}
For instance, on the graph hereinabove, it is:
\[f\left(S, T\right) = 12 + 11 - 4 = 19\]
\end{subparag}
\end{parag}
\begin{parag}{Property}
Let $f$ be a flow. For any cut $S, T$:
\[\left|f\right| = f\left(S, T\right)\]
\begin{subparag}{Proof}
We make a proof by induction on the size of $S$.
\begin{itemize}[left=0pt]
\item If $S = \left\{s\right\}$, then the net flow is the flow out from $s$ minus the flow into $s$, which is exactly equal to the value of the flow.
\item Let's say $S = S' \cup \left\{w\right\}$, supposing $\left|f\right| = f\left(S', T'\right)$. We know that, then, $T = T' \setminus \left\{w\right\}$.
By conservation of flow, we know that everything coming into this new node $w$ also comes out. Thus, moving it from $T'$ to $S'$ does not change the net flow: $w$ neither adds nor removes any flow across the cut, it only relays it:
\[f\left(S, T\right) = f\left(S', T'\right) \underbrace{- \sum_{v \in V}^{} f\left(v, w\right) + \sum_{v \in V}^{} f\left(w, v\right)}_{= 0} = f\left(S', T'\right)\]
\end{itemize}
\qed
\end{subparag}
\end{parag}
\begin{parag}{Definition: Capacity of a cut}
The \important{capacity of a cut} $S, T$ is defined as:
\[c\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} c\left(u, v\right)\]
\begin{subparag}{Example}
For instance, on the graph hereinabove, the capacity of the cut is:
\[12 + 14 = 26\]
Note that we do not add the 9, since it goes in the wrong direction.
\end{subparag}
\begin{subparag}{Observation}
This value, however, \textit{depends} on the cut.
\end{subparag}
\end{parag}
\lecture{17}{2022-11-21}{The algorithm may stop, or may not}{}
\begin{parag}{Property}
For any flow $f$ and any cut $\left(S, T\right)$, then:
\[\left|f\right| \leq c\left(S, T\right)\]
\begin{subparag}{Proof}
Starting from the left hand side:
\[\left|f\right| = f\left(S, T\right) = \sum_{\substack{u \in S \\ v \in T}}^{} f\left(u, v\right) - \underbrace{\sum_{\substack{u \in S\\ v \in T}}^{} f\left(v, u\right)}_{\geq 0}\]
And thus:
\[\left|f\right| \leq \sum_{\substack{u \in S \\ v \in T}}^{} f\left(u, v\right) \leq \sum_{\substack{u \in S \\ v \in T}}^{} c\left(u, v\right) = c\left(S, T\right)\]
\qed
\end{subparag}
\end{parag}
\begin{parag}{Definition: Min-cut}
A \important{min-cut} is a cut with minimum capacity. In other words, it is a cut $\left(S_{min}, T_{min}\right)$ such that, for any cut $\left(S, T\right)$:
\[c\left(S_{min}, T_{min}\right) \leq c\left(S, T\right)\]
\begin{subparag}{Remark}
By the property above, the value of the flow is less than or equal to the min-cut:
\[\left|f\right| \leq c\left(S_{min}, T_{min}\right)\]
We will prove right after that, in fact, $\left|f_{max}\right| = c\left(S_{min}, T_{min}\right)$.
\end{subparag}
\end{parag}
\begin{parag}{Max-flow min-cut theorem}
Let $G = \left(V, E\right)$ be a flow network, with source $s$, sink $t$, capacities $c$ and flow $f$. Then, the following propositions are equivalent:
\begin{enumerate}
\item $f$ is a maximum flow.
\item $G_f$ has no augmenting path.
\item $\left|f\right| = c\left(S, T\right)$ for some cut $\left(S, T\right)$.
\end{enumerate}
\begin{subparag}{Remark}
This theorem shows that the Ford-Fulkerson method gives the optimal value. Indeed, it terminates when $G_f$ has no augmenting path, which is, as this theorem says, equivalent to having found a maximum flow.
\end{subparag}
\begin{subparag}{Proof $\left(1\right) \implies \left(2\right)$}
Let's suppose for contradiction that $G_f$ has an augmenting path $p$. Then, the Ford-Fulkerson method would augment $f$ by $p$ and obtain a flow with strictly greater value. This contradicts the fact that $f$ is a maximum flow.
\end{subparag}
\begin{subparag}{Proof $\left(2\right) \implies \left(3\right)$}
Let $S$ be the set of nodes reachable from $s$ in the residual network, and $T = V \setminus S$. Note that $t \in T$: since there is no augmenting path, $t$ is not reachable from $s$ in $G_f$, so $\left(S, T\right)$ is indeed a cut.
Every edge going out of $S$ in $G$ must be at capacity: otherwise, we could reach a node outside $S$ in the residual network, contradicting the construction of $S$. Similarly, every edge going from $T$ to $S$ must carry zero flow: otherwise, its reversed edge in the residual network would again let us leave $S$.
Since every outgoing edge is at capacity and every incoming edge carries no flow, we get that $f\left(S, T\right) = c\left(S, T\right)$. However, since $\left|f\right| = f\left(S, T\right)$ for any cut, we indeed find that:
\[\left|f\right| = c\left(S, T\right)\]
\end{subparag}
\begin{subparag}{Proof $\left(3\right) \implies \left(1\right)$}
We know that $\left|f\right| \leq c\left(S, T\right)$ for all cuts $S, T$. Therefore, if the value of the flow is equal to the capacity of some cut, it cannot be improved. This shows its maximality.
\qed
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture17/FordFulkerson-MaxFlow.code}
start with 0-flow
while there is an augmenting path from s to t in the residual network:
    find an augmenting path
    compute the bottleneck // the min capacity on the path
    increase the flow on the path by the bottleneck and update the residual network
// flow is maximal
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture17/FordFulkerson-MinCut.code}
if no augmenting path exists in the residual network:
    find the set of nodes S reachable from s in the residual network
    set T = V \ S
    // (S, T) is a minimum cut
\end{filecontents*}
\begin{parag}{Summary}
All this shows that our Ford-Fulkerson method for finding a max-flow works:
\importcode{Lecture17/FordFulkerson-MaxFlow.code}{pseudo}
Also, when we have found a max-flow, we can use our flow to find a min-cut:
\importcode{Lecture17/FordFulkerson-MinCut.code}{pseudo}
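\begin{subparag}{Sketch}
As an illustration, here is a compact, runnable Python sketch of the whole method (our own, not the course's reference code; all names are ours). It finds augmenting paths with BFS, i.e. the shortest-path rule discussed in the complexity analysis below. Keying the edges by $\left(u, v\right)$ pairs makes the residual capacity a one-line formula mirroring its three-case definition (the ``otherwise'' case is the default 0):
\begin{filecontents*}[overwrite]{Lecture17/EdmondsKarpSketch.code}
from collections import deque

def max_flow(c, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    c: dict mapping each edge (u, v) to its capacity, with no
    antiparallel edges, as in the definition of flow networks.
    Returns the pair (|f|, f).
    """
    vertices = {u for (u, v) in c} | {v for (u, v) in c}
    f = {e: 0 for e in c}

    def residual(u, v):
        # c_f(u, v): leftover capacity, plus flow that can be cancelled.
        return c.get((u, v), 0) - f.get((u, v), 0) + f.get((v, u), 0)

    while True:
        # BFS from s in the residual network, remembering predecessors.
        pred, queue = {s: None}, deque([s])
        while queue and t not in pred:
            u = queue.popleft()
            for v in vertices:
                if v not in pred and residual(u, v) > 0:
                    pred[v] = u
                    queue.append(v)
        if t not in pred:             # no augmenting path: f is maximal
            value = (sum(f[e] for e in c if e[0] == s)
                     - sum(f[e] for e in c if e[1] == s))
            return value, f
        # Recover the augmenting path and its bottleneck, then augment.
        path, v = [], t
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        b = min(residual(u, v) for (u, v) in path)
        for (u, v) in path:
            if (u, v) in c:
                f[(u, v)] += b        # push along an original edge
            else:
                f[(v, u)] -= b        # cancel flow on a reversed edge

capacities = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
              ("a", "t"): 2, ("b", "t"): 3}
print(max_flow(capacities, "s", "t")[0])  # 5
\end{filecontents*}
\importcode{Lecture17/EdmondsKarpSketch.code}{python}
\end{subparag}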
\end{parag}
\begin{parag}{High complexity analysis}
It takes $O\left(E\right)$ to find a path in the residual network (using breadth-first search, for instance). If the capacities are integers, the flow value increases by at least 1 at every iteration. Thus, the running time has a worst case of $O\left(E \left|f_{max}\right|\right)$.
We can note that there are indeed cases where we reach such a complexity, if we always choose a bad path (one taking the middle link here, which will always exist in the residual network):
\imagehere[0.4]{Lecture17/WorstCaseFordFulkerson.png}
On this graph, the algorithm would not terminate before the heat death of the universe.
\end{parag}
\begin{parag}{Lower complexity analysis}
In fact, if we don't choose our paths arbitrarily and if the capacities are integers (or rational numbers, which does not really matter since we could then just multiply everything by the least common multiple of the denominators and get an equivalent problem), then we can get a much better complexity.
If we always take the shortest augmenting path, as given by BFS, then the number of iterations is bounded by $\frac{1}{2} E V$. If we take the fattest path (the path whose bottleneck has the largest capacity), then the number of iterations is bounded by $E \log\left(E \left|f_{max}\right|\right)$.
\begin{subparag}{Proof}
We will not prove these two claims in this course.
\end{subparag}
\end{parag}
\begin{parag}{Observation}
If the capacities of our network are irrational, then the Ford-Fulkerson method might not terminate at all.
\end{parag}
\begin{parag}{Application: Bipartite matching problem}
Let's consider the bipartite matching problem, which is easier to explain through an example. We have $N$ students applying for $M \geq N$ jobs, where each student gets several offers. Every job can be taken at most once, and every student can have at most one job.
\imagehere[0.4]{Lecture17/BipartiteMatchingExample.png}
We want to know if it is possible to match all students to jobs. To do so, we add a source linked to all students, and a sink linked to all jobs, where all edges have capacity 1.
\imagehere[0.8]{Lecture17/BipartiteMatchingExampleFlow.png}
If the Ford-Fulkerson method gives us that $\left|f_{max}\right| = N$, then every student was able to find a job. Indeed, flows obtained by Ford-Fulkerson are integer-valued if the capacities are integers, so the value on every edge is 0 or 1. Since every student has an in-flow of at most one (from the source) and each job has an out-flow of at most one (to the sink), by conservation of the flow there cannot be any student matched to two jobs, or any job matched to two students.
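\begin{subparag}{Sketch}
As an illustration, here is a small Python sketch of this reduction (ours, with invented names). Since all capacities are 1, Ford-Fulkerson specialises to repeatedly searching an augmenting path with DFS: either a free job, or a chain of re-assignments that frees one:
\begin{filecontents*}[overwrite]{Lecture17/BipartiteMatchingSketch.code}
def bipartite_match(offers):
    """Maximum bipartite matching, i.e. max flow with unit capacities.

    offers: dict student -> list of jobs they applied to.
    Each DFS below is one augmenting path search: take a free job,
    or recursively re-route the student currently holding it.
    Returns (number matched, dict job -> student).
    """
    match = {}                        # job -> student currently holding it

    def try_assign(student, seen):
        for job in offers[student]:
            if job not in seen:
                seen.add(job)
                if job not in match or try_assign(match[job], seen):
                    match[job] = student
                    return True
        return False

    matched = sum(try_assign(s, set()) for s in offers)
    return matched, match

offers = {"alice": ["dev"], "bob": ["dev", "ops"], "carol": ["ops", "qa"]}
print(bipartite_match(offers))
# (3, {'dev': 'alice', 'ops': 'bob', 'qa': 'carol'})
\end{filecontents*}
\importcode{Lecture17/BipartiteMatchingSketch.code}{python}
\end{subparag}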
\end{parag}
\begin{parag}{Application: Edge-disjoint paths problem}
In an undirected graph, we may want to know the maximum number of routes between two points that do not share a common road. To do so, we set an edge of capacity 1 for both directions for every road (in a non-antiparallel fashion, as seen earlier).
Then, the max-flow is the number of edge-disjoint paths, and the min-cut gives the minimum number of roads that need to be closed so that no route going from the start to the end remains.
\end{parag}
\lecture{18}{2022-11-25}{Either Levi or Mikasa made this function}{}
\begin{parag}{Disjoint-set data structures}
The idea of \important{disjoint-set data structures} is to maintain a collection $\mathcal{S} = \left\{S_1, \ldots, S_k\right\}$ of disjoint sets, which can change over time. Each set is identified by a representative, which is some member of the set. It does not matter which element is the representative, as long as asking for the representative twice without modifying the set gives the same answer both times.
We want our data structure to have the following operations:
\begin{itemize}
\item \texttt{Make-Set(x)} makes a new set $S_i = \left\{x\right\}$, and adds $S_i$ to our collection $\mathcal{S}$.
\item \texttt{Union(x, y)} modifies $\mathcal{S}$ such that, if $x \in S_x$ and $y \in S_y$, then:
\[\mathcal{S} = \left(\mathcal{S} \setminus \left\{S_x, S_y\right\}\right) \cup \left\{S_x \cup S_y\right\}\]
In other words, we destroy $S_x$ and $S_y$, and create a new set $S_x \cup S_y$, whose representative is any member of $S_x \cup S_y$.
\item \texttt{Find(x)} returns the representative of the set containing $x$.
\end{itemize}
\begin{subparag}{Remark}
This data structure is also known under the name union-find.
\end{subparag}
\end{parag}
\begin{parag}{Linked list representation}
A way to represent this data structure is through linked lists: each set is an object shaped like a singly linked list. Each set object has a pointer to the head of the list (which we take as the representative) and a pointer to the tail of the list. Moreover, each element in the list has a pointer to the set object and a pointer to the next element.
\imagehere{Lecture18/LinkedListRepresentation.png}
\begin{subparag}{Make-Set}
For the procedure \texttt{Make-Set(x)}, we can just create a singleton list containing $x$. This is easily done in time $\Theta\left(1\right)$.
\end{subparag}
\begin{subparag}{Find}
For the procedure \texttt{Find(x)}, we can follow the pointer back to the set object, and then follow the head pointer to the representative. This is also done in time $\Theta\left(1\right)$.
\end{subparag}
\begin{subparag}{Union}
For the procedure \texttt{Union(x, y)}, things get more complicated. We can append one list to the end of the other; however, we then need to update all the elements of the appended list to point to the right set object, which takes a lot of time if that list is big. So, we simply append the smaller list to the larger one (if their sizes are equal, we make an arbitrary choice). This method is named the \important{weighted-union heuristic}.
We notice that, on a single operation, both ideas have exactly the same bound. So, to understand why this heuristic is better, let's consider the following theorem.
\end{subparag}
\end{parag}
\begin{parag}{Theorem}
Let us consider a linked-list implementation of a disjoint-set data structure.
With the weighted-union heuristic, a sequence of (any) $m$ operations takes $O\left(m + n\log\left(n\right)\right)$ time, where $n$ is the number of elements our structure ends with after those operations. Without this heuristic, this bound becomes $O\left(m + n^2\right)$.
\begin{subparag}{Proof with}
The inefficiency comes from constantly rewiring our elements when running the \texttt{Union} procedure. Let us count how many times an element $i$ may get rewired if, amongst those $m$ operations, there are $n$ \texttt{Union} calls.
When we merge a set $A$ containing $i$ with another set $B$, if we have to update the wiring of $i$, then the list of $A$ was smaller than the one of $B$, and thus the size of the merged list $A \cup B$ is at least twice the size of the one of $A$. However, the size of a list can double at most $\log\left(n\right)$ times, meaning that the element $i$ has been rewired at most $\log\left(n\right)$ times. Since we have $n$ elements for which we can make the exact same analysis, we get a complexity of $O\left(n\log\left(n\right)\right)$ for this scenario.
Note that we also need to consider the case where there are many more \texttt{Make-Set} and \texttt{Find} calls than \texttt{Union} ones. This case is straightforward since both are $\Theta\left(1\right)$, and it is thus $\Theta\left(m\right)$.
Putting everything together, we get a worst case complexity of $O\left(m + n\log\left(n\right)\right) = O\left(\max\left\{m, n\log\left(n\right)\right\}\right)$.
\end{subparag}
\begin{subparag}{Proof without}
Let's say that we have $n$ elements, each in a singleton set, and that our $m$ operations consist of unions which always append the constantly growing first list to a singleton, so that all of its elements must be rewired each time. We will thus have to rewire $1 + 2 + \ldots + \left(n-1\right)$ elements in total, leading to a worst case complexity of $O\left(n^2\right)$ for this scenario.
Again, the case where there are mostly \texttt{Make-Set} and \texttt{Find} calls leads to a complexity of $\Theta\left(m\right)$. Putting everything together, we indeed get a worst case of $O\left(m + n^2\right)$.
\qed
\end{subparag}
\begin{subparag}{Remark}
This kind of analysis is an amortised complexity analysis: we do not bound a single operation (a single one may be really expensive), but the total cost of a sequence of operations, which is small on average.
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture18/DisjointSetForestMakeSet.code}
procedure MakeSet(x):
    x.p = x
    x.rank = 0
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture18/DisjointSetForestFindSet.code}
procedure FindSet(x):
    if x != x.p:
        x.p = FindSet(x.p) // update parent
    return x.p
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture18/DisjointSetForestUnion.code}
procedure Union(x, y):
    Link(FindSet(x), FindSet(y))

procedure Link(x, y):
    if x.rank > y.rank:
        y.p = x
    else:
        x.p = y
        if x.rank == y.rank:
            y.rank = y.rank + 1
\end{filecontents*}
\begin{parag}{Forest of trees}
Now, let's consider instead a much better idea. We make a forest of trees (which are \textit{not} binary), where each tree represents one set, and the root is the representative. Also, since we are working with trees, naturally each node only points to its parent.
\imagehere[0.4]{Lecture18/DisjointSetForrestTree.png}
\begin{subparag}{Make-Set}
\texttt{Make-Set(x)} can be done easily by making a single-node tree.
\importcode{Lecture18/DisjointSetForestMakeSet.code}{pseudo}
The rank will be defined and used in the \texttt{Union} procedure.
\end{subparag}
\begin{subparag}{Find}
For \texttt{Find(x)}, we can just follow pointers to the root.
However, we can also use the following great heuristic: \important{path compression}. The \texttt{Find(x)} procedure follows a path to the root; we can thus make all the elements on this path point directly to the representative (in order to make the following calls quicker).
\importcode{Lecture18/DisjointSetForestFindSet.code}{pseudo}
\end{subparag}
\begin{subparag}{Union}
For \texttt{Union(x, y)}, we can make the root of one of the trees the child of another.
Again, we can optimise this procedure with another great heuristic: \important{union by rank}. For the \texttt{Find(x)} procedure to be efficient, we need to keep the height of our trees as small as possible. So, the idea is to append the tree with the smallest height to the other one. However, maintaining exact heights is not really practical (they change often, especially with path compression), so we use ranks instead, which are upper bounds on the heights and give the same kind of insight.
\importcode{Lecture18/DisjointSetForestUnion.code}{pseudo}
\end{subparag}
\begin{subparag}{Complexity}
Let's again consider applying $m$ operations to a data structure with $n$ elements.
We can show that, using both union by rank and path compression, we get a complexity of $O\left(m \alpha\left(n\right)\right)$, where $\alpha\left(n\right)$ is the inverse Ackermann function. This function grows extremely slowly: we can consider $\alpha\left(n\right) \leq 5$ for any $n$ that makes sense when compared to the size of the universe. In other words, our complexity is \textit{approximately} $O\left(m\right)$.
Note that this complexity is tight; the weird inverse Ackermann function is not just an artifact of the analysis.
\end{subparag}
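\begin{subparag}{Sketch}
As an illustration, here is a runnable Python sketch (our own; the class and method names are ours) combining both heuristics, together with a small usage example that mirrors the connected-components application below:
\begin{filecontents*}[overwrite]{Lecture18/DisjointSetSketch.code}
class DisjointSet:
    """Forest implementation with union by rank and path compression."""

    def __init__(self):
        self.parent, self.rank = {}, {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        x, y = self.find(x), self.find(y)
        if x == y:
            return
        if self.rank[x] > self.rank[y]:
            x, y = y, x                 # ensure y is the higher-ranked root
        self.parent[x] = y              # union by rank
        if self.rank[x] == self.rank[y]:
            self.rank[y] += 1

ds = DisjointSet()
for v in "abcde":
    ds.make_set(v)
for u, v in [("a", "b"), ("b", "c"), ("d", "e")]:
    ds.union(u, v)
print(ds.find("a") == ds.find("c"), ds.find("a") == ds.find("d"))
# True False
\end{filecontents*}
\importcode{Lecture18/DisjointSetSketch.code}{python}
\end{subparag}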
\end{parag}
\begin{filecontents*}[overwrite]{Lecture18/ConnectedComponents.code}
procedure ConnectedComponents(G):
    for each vertex v in G.V:
        MakeSet(v)
    for each edge (u, v) in G.E:
        if FindSet(u) != FindSet(v):
            Union(u, v)
\end{filecontents*}
\begin{parag}{Application: Connected components}
For instance, we can construct a disjoint-set data structure for all the connected components of an undirected graph. Using the fact that, in an undirected graph, two elements are connected if and only if there is a path between them:
\importcode{Lecture18/ConnectedComponents.code}{pseudo}
\begin{subparag}{Example}
For instance, in the following graph, we have two connected components:
\imagehere[0.7]{Lecture18/ExampleConnectedComponents.png}
This means that our algorithm will give us two disjoint sets in the end.
\end{subparag}
\begin{subparag}{Analysis}
We notice that we have $V$ elements, and we have at most $V + 3E$ union or find operations.
Thus, using the best implementation we saw for disjoint set data structures, we get a complexity of $O\left(\left(V + E\right) \alpha\left(V\right)\right) \approx O\left(V + E\right)$. For the other implementation we would get $O\left(V \log\left(V\right) + E\right)$.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Spanning tree}
A spanning tree of a graph $G$ is a set $T$ of edges that is acyclic and spanning (it connects all vertices).
\begin{subparag}{Example}
For instance, the following is a spanning tree:
\imagehere[0.6]{Lecture18/ExampleSpanningTree.png}
However, the following is not a spanning tree since it has no cycle but is not spanning (the node $e$ is never reached):
\imagehere[0.6]{Lecture18/NotSpanningTreeExample1.png}
Similarly, the following is not a spanning tree since it is spanning but has a cycle:
\imagehere[0.6]{Lecture18/NotSpanningTreeExample2.png}
\end{subparag}
\begin{subparag}{Remark}
The number of edges of a spanning tree is $E_{span} = V - 1$.
\end{subparag}
\end{parag}
\begin{parag}{Minimum spanning tree (MST)}
Our goal is now, given an undirected graph $G = \left(V, E\right)$ and weights $w\left(u, v\right)$ for each edge $\left(u, v\right) \in E$, to output a spanning tree of minimum total weight (one whose sum of edge weights is the smallest).
\begin{subparag}{Application: Communication networks}
This problem can have many applications. For instance, let's say we have some cities between which we can make communication lines at different costs. Finding how to connect all the cities at the smallest cost possible is exactly an application of this problem.
\end{subparag}
\begin{subparag}{Application: Clustering}
Another application is clustering. Let's consider the following graph, where each edge's weight equals the distance between its endpoints:
\imagehere[0.5]{Lecture18/MinimumSpanningTreesClustering.png}
Then, to find $n$ clusters, we can build the minimum spanning tree (which will prefer the small edges), and remove its $n-1$ fattest edges.
\end{subparag}
\end{parag}
\begin{parag}{Definition: Cut}
Let $G = \left(V, E\right)$ be a graph. A \important{cut} $\left(S, V \setminus S\right)$ is a partition of the vertices into two non-empty disjoint sets $S$ and $V \setminus S$.
\end{parag}
\begin{parag}{Definition: Crossing edge}
Let $G = \left(V, E\right)$ be a graph, and $\left(S, V \setminus S\right)$ be a cut. A \important{crossing edge} is an edge connecting a vertex from $S$ to a vertex from $V \setminus S$.
\end{parag}
\lecture{19}{2022-11-28}{Finding the optimal MST}{}
\begin{parag}{Theorem: Cut property}
Let $S, V \setminus S$ be a cut. Also, let $T$ be a tree on $S$ which is part of a MST, and let $e$ be a crossing edge of minimum weight.
Then, there is a MST of $G$ containing both $e$ and $T$.
\imagehere[0.5]{Lecture19/CutProperty.png}
\begin{subparag}{Proof}
Let us consider the MST $T$ is part of.
If $e$ is already in it, then we are done.
Since there must be a crossing edge (to span both $S$ and $V \setminus S$), if $e$ is not part of this MST, then adding $e$ to it creates a cycle (adding an edge to a spanning tree always does), and this cycle must cross the cut through some other crossing edge $f$ of the MST. We can then replace $f$ by $e$: since $w\left(e\right) \leq w\left(f\right)$ by hypothesis, the new spanning tree has a weight less than or equal to the one of the MST we considered. But, since the latter was minimal, our new tree is also minimal (and in fact $w\left(f\right) = w\left(e\right)$). Note that the new tree is indeed spanning: removing any edge of a cycle ($f$ in particular) keeps the graph connected.
In both cases, we have been able to create a MST containing both $T$ and $e$, finishing our proof.
\qed
\end{subparag}
\end{parag}
\begin{filecontents*}[overwrite]{Lecture19/PrimsAlgorithm.code}
procedure Prim(G, w, r):
    let Q be an empty min-priority queue
    for each u in G.V:
        u.key = infinity
        u.pred = Nil
        Insert(Q, u)
    decreaseKey(Q, r, 0) // set r.key to 0
    while !Q.isEmpty():
        u = extractMin(Q)
        for each v in G.Adj[u]:
            if v in Q and w(u, v) < v.key:
                v.pred = u
                decreaseKey(Q, v, w(u, v))
\end{filecontents*}
\begin{parag}{Prim's algorithm}
The idea of Prim's algorithm for finding MSTs is to greedily construct the tree by always picking the crossing edge with smallest weight.
\begin{subparag}{Proof}
Let's do this proof by structural induction on the number of nodes in $T$.
Our base case is trivial: starting from any point, a single element is always a subtree of a MST. For the inductive step, we can just see that starting with a subtree of a MST and adding the crossing edge with smallest weight yields another subtree of a MST by the cut property.
\qed
\end{subparag}
\begin{subparag}{Implementation}
We need to keep track of all the crossing edges at every iteration, and to be able to efficiently find the minimum crossing edge at every iteration.
Scanning all outgoing edges of the tree is not really good, since it leads to $O\left(E\right)$ comparisons at every iteration and thus a total running time of $O\left(EV\right)$.
Let's consider a better solution. For every node $w$, we keep a value $\text{dist}\left(w\right)$ that measures the ``distance'' (the minimum weight of an edge reaching it) of $w$ from the current tree. When a new node $u$ is added to the tree, we check whether the neighbours of $u$ got closer to the tree and, if so, we decrease their distance. To extract the minimum efficiently, we keep the nodes and their distances in a min-priority queue. In pseudocode, it looks like:
\importcode{Lecture19/PrimsAlgorithm.code}{pseudo}
\end{subparag}
\begin{subparag}{Analysis}
Initialising $Q$ and the first for loop take $O\left(V \log\left(V\right)\right)$ time. Then, decreasing the key of $r$ takes $O\left(\log\left(V\right)\right)$. Finally, in the while loop, we make $V$ \texttt{extractMin} calls---leading to $O\left(V \log\left(V\right)\right)$---and at most $E$ \texttt{decreaseKey} calls---leading to $O\left(E\log\left(V\right)\right)$.
In total, this sums up to $O\left(E \log\left(V\right)\right)$.
\end{subparag}
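\begin{subparag}{Sketch}
As an illustration, here is a runnable Python sketch of Prim's algorithm (ours, not the course's reference code). Python's \texttt{heapq} has no \texttt{decreaseKey}, so instead of decreasing keys we push duplicate entries and skip the stale ones when popped; the asymptotics remain $O\left(E \log\left(V\right)\right)$:
\begin{filecontents*}[overwrite]{Lecture19/PrimSketch.code}
import heapq

def prim(adj, r):
    """Prim's algorithm on an undirected weighted graph.

    adj: dict u -> list of (v, weight) pairs; r: starting vertex.
    Returns the MST as a list of (u, v, weight) edges.
    """
    mst, in_tree = [], {r}
    heap = [(w, r, v) for (v, w) in adj[r]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)   # lightest crossing edge candidate
        if v in in_tree:
            continue                    # stale entry, skip it
        in_tree.add(v)
        mst.append((u, v, w))
        for (x, wx) in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst

adj = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 2)],
       "c": [("a", 4), ("b", 2)]}
print(prim(adj, "a"))                   # [('a', 'b', 1), ('b', 'c', 2)]
\end{filecontents*}
\importcode{Lecture19/PrimSketch.code}{python}
\end{subparag}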
\end{parag}
\begin{filecontents*}[overwrite]{Lecture19/KruskalsAlgorithm.code}
procedure Kruskal(G, w):
    let result be an empty set of edges
    for each vertex v in G.V:
        makeSet(v)
    sort the edges of G.E into nondecreasing order by weight w
    for each (u, v) from G.E, taken in this order:
        if findSet(u) != findSet(v):
            result = SetUnion(result, (u, v))
            Union(u, v)
    return result
\end{filecontents*}
\begin{parag}{Kruskal's algorithm}
Let's consider another way to solve this problem. The idea of Kruskal's algorithm for finding MSTs is to start from a forest $T$ with all nodes being in singleton trees. Then, at each step, we greedily add the cheapest edge that does not create a cycle.
The forest will have been merged into a single tree at the end of the procedure.
|
algo_notes
|
latex
|
algo_notes_block_292
|
\begin{subparag}{Proof}
Let's do a proof by structural induction on the number of edges in $T$ to show that $T$ is always a sub-forest of a MST.
The base case is trivial since, at the beginning, $T$ is a union of singleton vertices and thus, definitely, it is the sub-forest of any tree on the graph (and of any MST, in particular).
For the inductive step, by hypothesis, the current forest $T$ is contained in some MST $M$. Let $e$ be an edge of minimum weight that does not create a cycle in $T$, and let's suppose for contradiction that $T \cup \left\{e\right\}$ is not part of any MST. Adding $e$ to $M$ creates a cycle, since adding an edge to a spanning tree always does, and this cycle must contain an edge $f \notin T$ (otherwise $T \cup \left\{e\right\}$ would already contain a cycle). Now $f$ does not create a cycle with $T$ either (it belongs to the acyclic $M$), so by the choice of $e$ we have $w\left(f\right) \geq w\left(e\right)$. Replacing $f$ by $e$ in $M$ thus yields a spanning tree of weight at most the one of $M$, i.e. an MST containing $T \cup \left\{e\right\}$, which is our contradiction.
\qed
\end{subparag}
\begin{subparag}{Implementation}
To implement this algorithm, we need to be able to efficiently check whether the cheapest edge creates a cycle. This is the same as checking whether its endpoints already belong to the same component, meaning that we can use a disjoint-set data structure.
We can thus implement our algorithm by making each singleton vertex a set, and then, when an edge $\left(u, v\right)$ is added to $T$, taking the union of the connected components of $u$ and $v$.
\importcode{Lecture19/KruskalsAlgorithm.code}{pseudo}
\end{subparag}
\begin{subparag}{Analysis}
Initialising \texttt{result} is in $O\left(1\right)$, the first for loop represents $V$ \texttt{makeSet} calls, sorting $E$ takes $O\left(E \log\left(E\right)\right)$ and the second for loop is $O\left(E\right)$ \texttt{findSets} and \texttt{unions}. We thus get a complexity of:
\[\underbrace{O\left(\left(V + E\right) \alpha\left(V\right)\right)}_{= O\left(E \alpha\left(V\right)\right)} + O\left(E \log\left(E\right)\right) = O\left(E \log\left(E\right)\right) = O\left(E \log\left(V\right)\right)\]
since $E = O\left(V^2\right)$ for any graph.
We can note that, if the edges are already sorted, then we get a complexity of $O\left(E \alpha\left(V\right)\right)$, which is almost linear.
\end{subparag}
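\begin{subparag}{Sketch}
As an illustration, here is a runnable Python sketch of Kruskal's algorithm (ours, not the course's reference code). For brevity its union-find only does path halving; union by rank would be added to reach the bound from the analysis above:
\begin{filecontents*}[overwrite]{Lecture19/KruskalSketch.code}
def kruskal(vertices, edges):
    """Kruskal's algorithm.

    vertices: iterable of vertices; edges: list of (w, u, v) triples.
    Returns the MST as a list of (w, u, v) triples.
    """
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

print(kruskal("abc", [(1, "a", "b"), (2, "b", "c"), (4, "a", "c")]))
# [(1, 'a', 'b'), (2, 'b', 'c')]
\end{filecontents*}
\importcode{Lecture19/KruskalSketch.code}{python}
\end{subparag}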
\end{parag}
\begin{parag}{Definition: Shortest path problem}
Let $G = \left(V, E\right)$ be a directed graph with edge-weights $w\left(u, v\right)$ for all $\left(u, v\right) \in E$.
We want to find the path from $a \in V$ to $b \in V$, $\left(v_0, v_1, \ldots, v_k\right)$, such that its weight $\sum_{i=1}^{k} w\left(v_{i-1}, v_i\right)$ is minimum.
\begin{subparag}{Variants}
Note that there are many variants of this problem.
In \important{single-source}, we want to find the shortest path from a given source vertex to every other vertex of the graph. In \important{single-destination}, we want to find the shortest path from every vertex in the graph to a given destination vertex. In \important{single-pair}, we want to find the shortest path from $u$ to $v$. In \important{all-pairs}, we want to find the shortest path from $u$ to $v$ for all pairs $u, v$ of vertices.
We can observe that single-destination can be solved by solving single-source and by reversing edge directions. For single-pair, no algorithm better than the one for single-source is known for now. Finally, for all-pairs, it can be solved using single-source on every vertex, even though better algorithms are known.
\end{subparag}
\end{parag}
\begin{parag}{Negative-weight edges}
Note that we will try to allow negative weights, as long as there is no negative-weight cycle (a cycle whose weights sum to a negative value) reachable from the source (otherwise we could just keep going around the cycle, and all the nodes it can reach would have distance $-\infty$). In fact, one of our algorithms will be able to detect such negative-weight cycles.
\begin{subparag}{Remark}
Dijkstra's algorithm, which we will present in the following course, only works with positive weights.
\end{subparag}
\begin{subparag}{Application}
This can for instance be really interesting for exchange rates. Let's say we have exchange rates between some given currencies. We wonder whether we can make an infinite amount of money by trading one currency for another, then another, and so on until, coming back to the first currency, we have made more money than we started with.
To detect this, we need to look at the product of the rates along a cycle. Since shortest-path algorithms work with sums, we take a logarithm of every exchange rate: minimising a sum of logarithms is equivalent to minimising the logarithm of the product of the rates, and thus the product itself. Moreover, since we want to find the way to make the maximum amount of money, we consider the opposite of those logarithms. To sum up, we set $w\left(u, v\right) = -\log\left(r\left(u, v\right)\right)$.
Now, we only need to find negative cycles: they allow making an infinite amount of money.
\end{subparag}
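\begin{subparag}{Sketch}
As a tiny illustration of this transformation (ours; the function name is invented), here is how the weights are derived, and why a profitable cycle of trades becomes a negative cycle:
\begin{filecontents*}[overwrite]{Lecture19/ArbitrageSketch.code}
from math import log

def arbitrage_weights(rates):
    """Turn exchange rates into shortest-path weights.

    rates: dict (u, v) -> exchange rate r(u, v).
    A negative-weight cycle in the result corresponds to a cycle of
    trades whose product of rates exceeds 1, i.e. free money.
    """
    return {(u, v): -log(r) for (u, v), r in rates.items()}

# EUR -> USD -> EUR with a product of rates 1.10 * 0.95 = 1.045 > 1:
w = arbitrage_weights({("EUR", "USD"): 1.10, ("USD", "EUR"): 0.95})
print(sum(w.values()) < 0)  # True: this 2-cycle has negative total weight
\end{filecontents*}
\importcode{Lecture19/ArbitrageSketch.code}{python}
\end{subparag}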
\end{parag}
\lecture{20}{2022-12-02}{I like the structure of maths courses}{}
\begin{filecontents*}[overwrite]{Lecture20/BellmanFordRelax.code}
procedure Relax(u, v, w):
    if u.d + w(u, v) < v.d:
        v.d = u.d + w(u, v)
        v.pred = u
\end{filecontents*}
\begin{filecontents*}[overwrite]{Lecture20/BellmanFord.code}
procedure initSingleSource(G, s):
    for each v in G.V:
        v.d = infinity
        v.pred = Nil
    s.d = 0

procedure BellmanFord(G, w, s):
    initSingleSource(G, s)
    // Main algorithm
    for i = 1 to len(G.V) - 1:
        for each edge (u, v) in G.E:
            relax(u, v, w)
    // Detect negative cycles (does not modify the graph)
    for each edge (u, v) in G.E:
        if v.d > u.d + w(u, v): // there would be a modification with relax
            return false
    return true
\end{filecontents*}