As promised, today I’m going to talk about how to compute the strongly connected components
of a directed graph. I’m going to go through one method, called Kosaraju’s algorithm, which is
the easiest to understand. It’s possible to do better than Kosaraju’s by a factor of 2, using
an algorithm called Tarjan’s algorithm, but Tarjan’s is really just a variation on the theme
of Kosaraju’s.
Kosaraju’s algorithm is amazingly simple. It uses a very clever trick based on the fact that
if you reverse all of the edges in a graph, the resulting graph has the same strongly connected components as the original. So using that, we can get the SCCs by doing a forward traversal to find an ordering of vertices, then doing a traversal of the reverse of the graph in the order generated by the first traversal.
That may sound a bit mysterious, but it’s really very simple. Take the graph G, and
do a recursive depth-first traversal of it, keeping an initially empty stack of vertices alongside. As the recursion started at a vertex V finishes, push V onto the stack. At the end of the traversal, you’ll have all of the vertices of G in the stack, and the second traversal will process them starting from the top of that stack.
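Here’s what that first pass might look like in Python – a minimal sketch of my own, not code from the post, assuming the graph is represented as a dict mapping each vertex to a list of its successors (the names finish_order and dfs are mine):

```python
def finish_order(graph):
    """First pass: DFS, pushing each vertex as its recursion finishes."""
    visited = set()
    stack = []  # vertices in order of increasing finishing time

    def dfs(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        stack.append(v)  # v's recursion is done; push it

    # Restart from any vertex we haven't reached yet, so vertices
    # unreachable from the first starting point are covered too.
    for v in graph:
        if v not in visited:
            dfs(v)
    return stack
```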
So you reverse all of the edges in the graph, creating the reverse graph, G’. Start with the
vertex on top of the stack, and do a traversal from that vertex. All of the not-yet-visited nodes reachable from that vertex form one strongly connected component. Remove everything in that SCC from the stack, and
then repeat the process with the new top of the stack. When the stack is empty, you’ll have accumulated all of the SCCs.
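Continuing the sketch (reverse_graph and kosaraju_sccs are again my own hypothetical names, building on finish_order above):

```python
def reverse_graph(graph):
    """Build G' by flipping every edge v -> w into w -> v."""
    rev = {v: [] for v in graph}  # assumes every vertex appears as a key
    for v, succs in graph.items():
        for w in succs:
            rev[w].append(v)
    return rev

def kosaraju_sccs(graph):
    """Second pass: peel off one SCC per unvisited stack top."""
    rev = reverse_graph(graph)
    assigned = set()
    sccs = []
    # Work down the stack from the top (latest finisher first).
    for v in reversed(finish_order(graph)):
        if v in assigned:
            continue
        # Everything reachable from v in G' that isn't already
        # assigned to a component forms one SCC.
        component, worklist = set(), [v]
        while worklist:
            u = worklist.pop()
            if u not in assigned:
                assigned.add(u)
                component.add(u)
                worklist.extend(rev[u])
        sccs.append(component)
    return sccs
```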
As usual, things are a whole lot clearer with an example, so let’s work through one using the graph to the right.
First, we do a depth-first traversal of the graph, putting vertices on a stack as we finish the
recursive step started with that node. We’ll start the traversal with “A”. A,F,E,C,G,H – H is finished,
so it goes on the stack. Then G goes on the stack; then C, then E, then F, and we’re back to A. So
at this point, the stack is [H, G, C, E, F]. Then we keep going from A: A, B, D. D is done, so it goes
on the stack; then B, then A. So the final stack is [H, G, C, E, F, D, B, A].
Then, we reverse the graph, and start traversing from whatever is on top of the stack. So first, we start from A in the reversed graph. That gives us A, D, B. So {A, D, B} form one strongly connected
component. Now, the top of the stack that hasn’t been traversed yet is F. So we do F, E. {F,E} is
another SCC. The remaining stack is now [H, G, C], which traverses straightforwardly as C, H, G. So
we end up with {A, B, D}, {E, F}, and {C, G, H} as our SCCs.
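To make that concrete, here’s the sketch above run on this example, with an edge set I’ve reconstructed from the walkthrough (the post shows the graph only as a diagram, so treat these edges as my reading of it):

```python
graph = {
    'A': ['F', 'B'],
    'B': ['D'],
    'C': ['G'],
    'D': ['A'],
    'E': ['F', 'C'],
    'F': ['E'],
    'G': ['H'],
    'H': ['C'],
}

print(kosaraju_sccs(graph))
# [{'A', 'B', 'D'}, {'E', 'F'}, {'C', 'G', 'H'}]
```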
What’s the time complexity? We need to do two traversals, and one graph reversal. If we’re using an adjacency list representation, we can create the reverse graph while we do the first traversal, so it’s really just two depth-first traversals. So it’s twice the complexity of a traversal; in an adjacency list representation, traversal is O(V+E), where V is the number of vertices and E is the number of edges – so Kosaraju’s SCC algorithm is also O(V+E). (If you use an adjacency matrix, then it’s O(V²).)
Can we do better than that? In order of complexity, no – you can’t do better than the cost of a single traversal of the graph. But you can eliminate the second traversal. Instead of doing that second traversal, you can do the forward traversal using the stack, and then pull things off the stack, checking whether each one is the root of a strongly connected component, and if so, removing all members of that component. Tarjan’s algorithm works this way, and ends up doing a single traversal that considers each edge only once, where Kosaraju’s considers each edge twice (once in each direction). So it’s somewhat, but not dramatically, faster. It’s generally preferred just because it avoids the computation of the reverse graph.
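For comparison, here’s a minimal sketch of Tarjan’s algorithm in the same style (again my own code, not from the post). Each vertex gets a discovery index and a “lowlink” – the smallest index reachable from it; a vertex whose lowlink equals its own index is the root of an SCC, and everything above it on the explicit stack gets popped off as that component:

```python
def tarjan_sccs(graph):
    index = {}     # discovery order of each vertex
    lowlink = {}   # smallest index reachable from each vertex
    stack, on_stack = [], set()
    sccs = []

    def dfs(v):
        index[v] = lowlink[v] = len(index)
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                dfs(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of an SCC
            component = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.add(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in index:
            dfs(v)
    return sccs
```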
In terms of what we were talking about in the last post, this is good news. The computation of the SCCs is quite fast – quite a lot better than NP-complete. So we can, feasibly, use the division into SCCs as the basis of parallelization of graph algorithms.
As usual, all diagrams were drawn using OmniGraffle.
Your posts about connections are timely. I just read a story about AT&T’s Hancock programming language and how it identifies “communities of interest”.
http://blog.wired.com/27bstroke6/2007/10/att-invents-pro.html
Great explanation, that was amazingly clear. You should write a textbook. 🙂
Some newer readers might need clarification on the distinction in digraphs between weakly connected, strongly connected, and weakly connected but not strongly connected.
For large random graphs, there’s the relationship to gels and the phase change in coagulation-diffusion systems (where a large connected subgraph sharply emerges, whether the components are molecules or stars in galaxies). This is a universal phenomenon.
You have not really explained what happens in the case that not all the nodes are reachable from A in the first traversal. Still, I found this explanation clearer than others I have read.
Great! When you have time, how about a post on identifying cliques, and one of computing all shortest paths?
Hi Mark,
How are Kosaraju’s or Tarjan’s algorithms related to, or more advantageous than, the ‘dynamic programming’ or ‘optimization’ techniques developed by Richard Bellman?
wiki
http://en.wikipedia.org/wiki/Dynamic_programming
A node that is not reachable from A cannot be in the same SCC as A or any node reachable from A. So, if after finishing the subgraph reachable from A, you have nodes left over, you pick one of them and start over. Eventually you run out of nodes.
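In code, that’s just an outer loop around the DFS, as in the sketch in the post:

```python
for v in graph:
    if v not in visited:
        dfs(v)  # restart from a leftover, unreached vertex
```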
Doug: Dynamic programming isn’t an algorithm, but rather a class of programming techniques. Mark had a post on it a while back, I think.
Dynamic Programming methods probably really wouldn’t do anything here – as Mark said, you can’t do better than a single graph traversal. There’s always that chance that a quantum algorithm can cut it down, but they’re sufficiently non-intuitive for me that I wouldn’t be able to say anything.
Hey Mark C.C., not to get you off topic, but could you maybe investigate what the UK student figured out in winning Wolfram’s $20k contest for his universal Turing machine proof? I’m really interested in what the guy figured out. 🙂
Same here, but Scott Aaronson had an explanation of why quantum algorithms can sometimes cleverly use phase interference to find probable solutions fast. As Shor’s algorithm, for example, seems to be based on a quantum Fourier transform, I guess the first question is how to recognize eigenfrequencies of a “resonating” SCC, whatever that means.
Physically it seems to me naively there should be something there. Picking up a set of elastic bands connecting point masses and shaking it, it should be possible to identify the SCC’s movements as they are more tightly bound. But if it can be parallelized and identified, I dunno. That model builds on a many-body problem…
Hi Xanthir,
Thanks for clarifying the relation of dynamic programming to these algorithms.
Richard Bellman, credited with this development, was a RAND applied mathematician who, through the Hamilton-Jacobi-Bellman equation, is also prominent in mathematical game theory, optimal control theory, and quantum dynamics.
http://en.wikipedia.org/wiki/Hamilton-Jacobi-Bellman_equation
Example: Shlomo E. Sklarz, David J. Tannor, and Navin Khaneja, “Optimal Control of Quantum Dissipative Dynamics: Analytic solution for cooling the three level Λ system”, arXiv:quant-ph/0402143, 19 Feb 2004.
Biography – ‘621 papers, 41 books, 21 book translations authored (or co-authored)’
http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Bellman.html
OT, but Mark, you might be interested in this video (or maybe you already know it): http://video.google.com/videoplay?docid=-6626464599825291409
I found it here: http://blog.sciam.com/index.php?title=video_of_a_sphere_turning_inside_out_wit&more=1&c=1&tb=1&pb=1
Torbjörn, thanks so much for that link! That was extremely clarifying. It was *so* good, in fact, that I want to repost a comment that really cemented how Shor’s algorithm works (and thus points at the major benefit of quantum algorithms):
Originally from Robin Blume-Kohout
That’s absolutely fascinating, and so enlightening in its simplicity that I’m astounded. I now have a decent grasp on quantum algorithmic limits.
The other comments on there are quite good as well. I recommend reading them!
Mark, it seems that the blockquote isn’t working all of a sudden. My quote *should* start just below the italic text, and end above “That’s absolutely fascinating…”.
Xanthir, my pleasure. I had forgotten about the quantum localization stuff. But it’s true, the quantum realization of classical Fourier “uncertainty” is actually our friend, as QM minimizes our uncertainty of the system.
Btw, I think localization ties into “my” graph problem dual. In principle we don’t have to solve the full mass movements, only perturbations, to localize such stuff as SCC’s. But that and a $ will get me a coffee.
Not that it matters, but the question about dynamic programming didn’t come from the same Doug as in the set theory conversation.
Does anything top the Fourier transform in terms of applicability and importance?
Xanthir: So wait, that’s interesting. Do I understand this correctly?
– The classical DFT works on N samples with complex values X_0, X_1, …, X_{N−1}, and produces N bins, complex values representing the frequencies/phases of the sinusoids those samples decompose into.
– The quantum DFT works on one value which is a superposition of N basis states, with complex amplitudes X_0|0⟩ + X_1|1⟩ + … + X_{N−1}|N−1⟩, and produces a new value which is a superposition of N basis states, with each state corresponding to a bin in the classical DFT and having a complex amplitude corresponding to the value that bin would have had in the DFT.
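For the classical half of that correspondence, a quick sketch (numpy’s fft computes exactly those N complex bins):

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0])  # N = 4 samples of a sine wave
bins = np.fft.fft(x)                 # N complex bins (frequency/phase)
print(bins)                          # approximately [0, -2j, 0, 2j]
```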
Fuzzy Doug: Good to know! I had assumed it was you.
Coin: Not completely familiar with all the details, but that seems to be accurate. The quantum DFT does exactly the same thing as the classical DFT, only faster, but you can only get a much smaller amount of information out of it.
Maybe just a formality, but we call the “reversed” graph the “transpose” graph. Good article!
Thank you for this very simple, very easy to follow example of Kosaraju’s algorithm for calculating SCC’s. I was struggling with some of the more mathy descriptions, and your description plus example helped make it all clear.
You lost me here: “That gives us A, D, B. So {A, D, B} form one strongly connected component.”. What was on the stack was [H, G, C, E, F, D, B, A], so how did you make the connection between {A, D, B} and {A, B, D}?
And what if there isn’t a link between A and D? You’d get just {A} – what do you do then?
Hi there,
it’s a 10-year-old post, but I don’t understand something…
If my starting graph is simply a chain A -> B -> C -> D
your version of Kosaraju will produce a stack
[D C B A] in the first phase.
In the second phase, you will obtain an SCC {D C B A}, since
these are all reachable from D in the transposed graph.
Or did I miss something?