# HG changeset patch
# User deba
# Date 1157638607 0
# Node ID c790d04e192af5355c1e1a92e0e4a7f49777ebae
# Parent 25aab9493dd2cfd9dde4236abfd30bb96a01ff2a
Hao-Orlin algorithm

It is based on Attila's work.
It is tested on all DIMACS files in the data directory.
It may need more execution control - possible interruption after each
findNewSink.

diff -r 25aab9493dd2 -r c790d04e192a lemon/Makefile.am
--- a/lemon/Makefile.am	Thu Sep 07 14:04:31 2006 +0000
+++ b/lemon/Makefile.am	Thu Sep 07 14:16:47 2006 +0000
@@ -57,6 +57,7 @@
 	lemon/graph_utils.h \
 	lemon/graph_writer.h \
 	lemon/grid_ugraph.h \
+	lemon/hao_orlin.h \
 	lemon/hypercube_graph.h \
 	lemon/iterable_maps.h \
 	lemon/johnson.h \
diff -r 25aab9493dd2 -r c790d04e192a lemon/hao_orlin.h
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/lemon/hao_orlin.h	Thu Sep 07 14:16:47 2006 +0000
@@ -0,0 +1,521 @@
+/* -*- C++ -*-
+ * lemon/hao_orlin.h - Part of LEMON, a generic C++ optimization library
+ *
+ * Copyright (C) 2005 Egervary Jeno Kombinatorikus Optimalizalasi Kutatocsoport
+ * (Egervary Research Group on Combinatorial Optimization, EGRES).
+ *
+ * Permission to use, modify and distribute this software is granted
+ * provided that this copyright notice appears in all copies. For
+ * precise terms see the accompanying LICENSE file.
+ *
+ * This software is provided "AS IS" with no warranty of any kind,
+ * express or implied, and with no claim as to its suitability for any
+ * purpose.
+ *
+ */
+
+#ifndef LEMON_HAO_ORLIN_H
+#define LEMON_HAO_ORLIN_H
+
+#include <vector>
+#include <list>
+#include <limits>
+
+#include <lemon/maps.h>
+#include <lemon/graph_utils.h>
+#include <lemon/graph_adaptor.h>
+#include <lemon/iterable_maps.h>
+#include <lemon/tolerance.h>
+
+
+/// \file
+/// \ingroup flowalgs
+/// Implementation of the Hao-Orlin algorithm class for testing network
+/// reliability.
+
+namespace lemon {
+
+  /// \addtogroup flowalgs
+  /// @{
+
+  /// %Hao-Orlin algorithm for calculating a minimum cut in directed graphs.
+  ///
+  /// Hao-Orlin calculates a minimum cut in directed graphs. It
+  /// separates the nodes of the graph into two disjoint sets so that
+  /// the sum of the edge capacities going from the first set to the
+  /// second set is minimal. The algorithm is a modified
+  /// push-relabel preflow algorithm and it calculates the minimum cut
+  /// in \f$ O(n^3) \f$ time. The purpose of such an algorithm is testing
+  /// network reliability. For sparse undirected graphs you can use the
+  /// algorithm of Nagamochi and Ibaraki which solves the undirected
+  /// problem in \f$ O(n^3) \f$ time.
+  ///
+  /// \author Attila Bernath and Balazs Dezso
+  template <typename _Graph,
+            typename _CapacityMap = typename _Graph::template EdgeMap<int>,
+            typename _Tolerance = Tolerance<typename _CapacityMap::Value> >
+  class HaoOrlin {
+  protected:
+
+    typedef _Graph Graph;
+    typedef _CapacityMap CapacityMap;
+    typedef _Tolerance Tolerance;
+
+    typedef typename CapacityMap::Value Value;
+
+
+    typedef typename Graph::Node Node;
+    typedef typename Graph::NodeIt NodeIt;
+    typedef typename Graph::EdgeIt EdgeIt;
+    typedef typename Graph::OutEdgeIt OutEdgeIt;
+    typedef typename Graph::InEdgeIt InEdgeIt;
+
+    const Graph* _graph;
+    const CapacityMap* _capacity;
+
+    typedef typename Graph::template EdgeMap<Value> FlowMap;
+
+    FlowMap* _preflow;
+
+    Node _source, _target;
+    int _node_num;
+
+    typedef ResGraphAdaptor<const Graph, Value, CapacityMap, FlowMap> ResGraph;
+    typedef typename ResGraph::Node ResNode;
+    typedef typename ResGraph::NodeIt ResNodeIt;
+    typedef typename ResGraph::EdgeIt ResEdgeIt;
+    typedef typename ResGraph::OutEdgeIt ResOutEdgeIt;
+    typedef typename ResGraph::Edge ResEdge;
+    typedef typename ResGraph::InEdgeIt ResInEdgeIt;
+
+    ResGraph* _res_graph;
+
+    typedef typename Graph::template NodeMap<ResEdge> CurrentArcMap;
+    CurrentArcMap* _current_arc;
+
+
+    typedef IterableBoolMap<Graph> WakeMap;
+    WakeMap* _wake;
+
+    typedef typename Graph::template NodeMap<int> DistMap;
+    DistMap* _dist;
+
+    typedef typename Graph::template NodeMap<Value> ExcessMap;
+    ExcessMap* _excess;
+
+    typedef typename Graph::template NodeMap<bool> SourceSetMap;
+    SourceSetMap* _source_set;
+
+    std::vector<int> _level_size;
+
+    int _highest_active;
+    std::vector<std::list<Node> > _active_nodes;
+
+    int _dormant_max;
+    std::vector<std::list<Node> > _dormant;
+
+
+    Value _min_cut;
+
+    typedef typename Graph::template NodeMap<bool> MinCutMap;
+    MinCutMap* _min_cut_map;
+
+    Tolerance _tolerance;
+
+  public:
+
+    HaoOrlin(const Graph& graph, const CapacityMap& capacity,
+             const Tolerance& tolerance = Tolerance()) :
+      _graph(&graph), _capacity(&capacity),
+      _preflow(0), _source(), _target(), _res_graph(0), _current_arc(0),
+      _wake(0), _dist(0), _excess(0), _source_set(0),
+      _highest_active(), _active_nodes(), _dormant_max(), _dormant(),
+      _min_cut(), _min_cut_map(0), _tolerance(tolerance) {}
+
+    ~HaoOrlin() {
+      if (_res_graph) {
+        delete _res_graph;
+      }
+      if (_min_cut_map) {
+        delete _min_cut_map;
+      }
+      if (_current_arc) {
+        delete _current_arc;
+      }
+      if (_source_set) {
+        delete _source_set;
+      }
+      if (_excess) {
+        delete _excess;
+      }
+      if (_dist) {
+        delete _dist;
+      }
+      if (_wake) {
+        delete _wake;
+      }
+      if (_preflow) {
+        delete _preflow;
+      }
+    }
+
+  private:
+
+    // Relabels node i: lifts it to the lowest admissible level, or moves
+    // it (or a whole emptied level) into the dormant set when no awake
+    // residual neighbour remains.
+    void relabel(Node i) {
+      int k = (*_dist)[i];
+      if (_level_size[k] == 1) {
+        ++_dormant_max;
+        for (NodeIt it(*_graph); it != INVALID; ++it) {
+          if ((*_wake)[it] && (*_dist)[it] >= k) {
+            (*_wake)[it] = false;
+            _dormant[_dormant_max].push_front(it);
+            --_level_size[(*_dist)[it]];
+          }
+        }
+        --_highest_active;
+      } else {
+        ResOutEdgeIt e(*_res_graph, i);
+        while (e != INVALID && !(*_wake)[_res_graph->target(e)]) {
+          ++e;
+        }
+
+        if (e == INVALID) {
+          ++_dormant_max;
+          (*_wake)[i] = false;
+          _dormant[_dormant_max].push_front(i);
+          --_level_size[(*_dist)[i]];
+        } else {
+          Node j = _res_graph->target(e);
+          int new_dist = (*_dist)[j];
+          ++e;
+          while (e != INVALID) {
+            Node j = _res_graph->target(e);
+            if ((*_wake)[j] && new_dist > (*_dist)[j]) {
+              new_dist = (*_dist)[j];
+            }
+            ++e;
+          }
+          --_level_size[(*_dist)[i]];
+          (*_dist)[i] = new_dist + 1;
+          _highest_active = (*_dist)[i];
+          _active_nodes[_highest_active].push_front(i);
+          ++_level_size[(*_dist)[i]];
+          _res_graph->firstOut((*_current_arc)[i], i);
+        }
+      }
+    }
+
+    // Moves the current target into the source set and selects the awake
+    // node with the smallest label as the new target.  Returns false when
+    // every node has been moved to the source set.
+    bool selectNewSink() {
+      Node old_target = _target;
+      (*_wake)[_target] = false;
+      --_level_size[(*_dist)[_target]];
+      _dormant[0].push_front(_target);
+      (*_source_set)[_target] = true;
+      if ((int)_dormant[0].size() == _node_num) {
+        _dormant[0].clear();
+        return false;
+      }
+
+      bool wake_was_empty = false;
+
+      if (_wake->trueNum() == 0) {
+        while (!_dormant[_dormant_max].empty()) {
+          (*_wake)[_dormant[_dormant_max].front()] = true;
+          ++_level_size[(*_dist)[_dormant[_dormant_max].front()]];
+          _dormant[_dormant_max].pop_front();
+        }
+        --_dormant_max;
+        wake_was_empty = true;
+      }
+
+      int min_dist = std::numeric_limits<int>::max();
+      for (typename WakeMap::TrueIt it(*_wake); it != INVALID; ++it) {
+        if (min_dist > (*_dist)[it]) {
+          _target = it;
+          min_dist = (*_dist)[it];
+        }
+      }
+
+      if (wake_was_empty) {
+        for (typename WakeMap::TrueIt it(*_wake); it != INVALID; ++it) {
+          if (_tolerance.positive((*_excess)[it])) {
+            if ((*_wake)[it] && it != _target) {
+              _active_nodes[(*_dist)[it]].push_front(it);
+            }
+            if (_highest_active < (*_dist)[it]) {
+              _highest_active = (*_dist)[it];
+            }
+          }
+        }
+      }
+
+      for (ResOutEdgeIt e(*_res_graph, old_target); e != INVALID; ++e) {
+        if (!(*_source_set)[_res_graph->target(e)]) {
+          push(e, _res_graph->rescap(e));
+        }
+      }
+
+      return true;
+    }
+
+    Node findActiveNode() {
+      while (_highest_active > 0 && _active_nodes[_highest_active].empty()) {
+        --_highest_active;
+      }
+      if (_highest_active > 0) {
+        Node n = _active_nodes[_highest_active].front();
+        _active_nodes[_highest_active].pop_front();
+        return n;
+      } else {
+        return INVALID;
+      }
+    }
+
+    ResEdge findAdmissibleEdge(const Node& n) {
+      ResEdge e = (*_current_arc)[n];
+      while (e != INVALID &&
+             ((*_dist)[n] <= (*_dist)[_res_graph->target(e)] ||
+              !(*_wake)[_res_graph->target(e)])) {
+        _res_graph->nextOut(e);
+      }
+      if (e != INVALID) {
+        (*_current_arc)[n] = e;
+        return e;
+      } else {
+        return INVALID;
+      }
+    }
+
+    void push(ResEdge& e, const Value& delta) {
+      _res_graph->augment(e, delta);
+      if (!_tolerance.positive(delta)) return;
+
+      (*_excess)[_res_graph->source(e)] -= delta;
+      Node a = _res_graph->target(e);
+      if (!_tolerance.positive((*_excess)[a]) && (*_wake)[a] && a != _target) {
+        _active_nodes[(*_dist)[a]].push_front(a);
+        if (_highest_active < (*_dist)[a]) {
+          _highest_active = (*_dist)[a];
+        }
+      }
+      (*_excess)[a] += delta;
+    }
+
+    // Capacity of the cut between the non-awake nodes (source side) and
+    // the awake nodes.
+    Value cutValue() {
+      Value value = 0;
+      for (typename WakeMap::TrueIt it(*_wake); it != INVALID; ++it) {
+        for (InEdgeIt e(*_graph, it); e != INVALID; ++e) {
+          if (!(*_wake)[_graph->source(e)]) {
+            value += (*_capacity)[e];
+          }
+        }
+      }
+      return value;
+    }
+
+  public:
+
+    /// \brief Initializes the internal data structures.
+    ///
+    /// Initializes the internal data structures. It creates
+    /// the maps, the residual graph adaptor and some bucket structures
+    /// for the algorithm.
+    void init() {
+      init(NodeIt(*_graph));
+    }
+
+    /// \brief Initializes the internal data structures.
+    ///
+    /// Initializes the internal data structures. It creates
+    /// the maps, the residual graph adaptor and some bucket structures
+    /// for the algorithm. The \c source node is used as the push-relabel
+    /// algorithm's source.
+    void init(const Node& source) {
+      _source = source;
+      _node_num = countNodes(*_graph);
+
+      _dormant.resize(_node_num);
+      _level_size.resize(_node_num, 0);
+      _active_nodes.resize(_node_num);
+
+      if (!_preflow) {
+        _preflow = new FlowMap(*_graph);
+      }
+      if (!_wake) {
+        _wake = new WakeMap(*_graph);
+      }
+      if (!_dist) {
+        _dist = new DistMap(*_graph);
+      }
+      if (!_excess) {
+        _excess = new ExcessMap(*_graph);
+      }
+      if (!_source_set) {
+        _source_set = new SourceSetMap(*_graph);
+      }
+      if (!_current_arc) {
+        _current_arc = new CurrentArcMap(*_graph);
+      }
+      if (!_min_cut_map) {
+        _min_cut_map = new MinCutMap(*_graph);
+      }
+      if (!_res_graph) {
+        _res_graph = new ResGraph(*_graph, *_capacity, *_preflow);
+      }
+
+      _min_cut = std::numeric_limits<Value>::max();
+    }
+
+
+    /// \brief Calculates the minimum cut with the \c source node
+    /// in the first partition.
+    ///
+    /// Calculates the minimum cut with the \c source node
+    /// in the first partition.
+    void calculateOut() {
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        if (it != _source) {
+          calculateOut(it);
+          return;
+        }
+      }
+    }
+
+    /// \brief Calculates the minimum cut with the \c source node
+    /// in the first partition.
+    ///
+    /// Calculates the minimum cut with the \c source node
+    /// in the first partition. The \c target is the initial target
+    /// for the push-relabel algorithm.
+    void calculateOut(const Node& target) {
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        (*_wake)[it] = true;
+        (*_dist)[it] = 1;
+        (*_excess)[it] = 0;
+        (*_source_set)[it] = false;
+
+        _res_graph->firstOut((*_current_arc)[it], it);
+      }
+
+      _target = target;
+      (*_dist)[target] = 0;
+
+      for (ResOutEdgeIt it(*_res_graph, _source); it != INVALID; ++it) {
+        push(it, _res_graph->rescap(it));
+      }
+
+      _dormant[0].push_front(_source);
+      (*_source_set)[_source] = true;
+      _dormant_max = 0;
+      (*_wake)[_source] = false;
+
+      _level_size[0] = 1;
+      _level_size[1] = _node_num - 1;
+
+      do {
+        Node n;
+        while ((n = findActiveNode()) != INVALID) {
+          ResEdge e;
+          while (_tolerance.positive((*_excess)[n]) &&
+                 (e = findAdmissibleEdge(n)) != INVALID) {
+            Value delta;
+            if ((*_excess)[n] < _res_graph->rescap(e)) {
+              delta = (*_excess)[n];
+            } else {
+              delta = _res_graph->rescap(e);
+              _res_graph->nextOut((*_current_arc)[n]);
+            }
+            if (!_tolerance.positive(delta)) continue;
+            _res_graph->augment(e, delta);
+            (*_excess)[_res_graph->source(e)] -= delta;
+            Node a = _res_graph->target(e);
+            if (!_tolerance.positive((*_excess)[a]) && a != _target) {
+              _active_nodes[(*_dist)[a]].push_front(a);
+            }
+            (*_excess)[a] += delta;
+          }
+          if (_tolerance.positive((*_excess)[n])) {
+            relabel(n);
+          }
+        }
+
+        Value current_value = cutValue();
+        if (_min_cut > current_value) {
+          for (NodeIt it(*_graph); it != INVALID; ++it) {
+            _min_cut_map->set(it, !(*_wake)[it]);
+          }
+
+          _min_cut = current_value;
+        }
+
+      } while (selectNewSink());
+    }
+
+    /// \brief Calculates the minimum cut with the \c source node
+    /// in the second partition.
+    void calculateIn() {
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        if (it != _source) {
+          calculateIn(it);
+          return;
+        }
+      }
+    }
+
+    /// \brief Runs the algorithm with an arbitrarily chosen source node.
+    void run() {
+      init();
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        if (it != _source) {
+          calculateOut(it);
+          // calculateIn(it);
+          return;
+        }
+      }
+    }
+
+    /// \brief Runs the algorithm with the given source node.
+    void run(const Node& s) {
+      init(s);
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        if (it != _source) {
+          calculateOut(it);
+          // calculateIn(it);
+          return;
+        }
+      }
+    }
+
+    /// \brief Runs the algorithm with the given source and initial
+    /// target nodes.
+    void run(const Node& s, const Node& t) {
+      init(s);
+      calculateOut(t);
+      calculateIn(t);
+    }
+
+    /// \brief Returns the value of the minimum value cut with node \c
+    /// source on the source side.
+    ///
+    /// Returns the value of the minimum value cut with node \c source
+    /// on the source side. This function can be called after running
+    /// \ref run().
+    Value minCut() const {
+      return _min_cut;
+    }
+
+
+    /// \brief Returns a minimum value cut.
+    ///
+    /// Sets \c nodeMap to the characteristic vector of a minimum
+    /// value cut with node \c source on the source side. This
+    /// function can be called after running \ref run().
+    /// \pre \c nodeMap should be a bool-valued node-map.
+    template <typename NodeMap>
+    Value minCut(NodeMap& nodeMap) const {
+      for (NodeIt it(*_graph); it != INVALID; ++it) {
+        nodeMap.set(it, (*_min_cut_map)[it]);
+      }
+      return minCut();
+    }
+
+  }; //class HaoOrlin
+
+  /// @}
+
+} //namespace lemon
+
+#endif //LEMON_HAO_ORLIN_H
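
For reference, a minimal usage sketch of the new class. It is not part of the
changeset: it assumes LEMON's ListGraph with an int edge map for the
capacities, and the tiny three-node graph is made up purely for illustration.

  #include <lemon/list_graph.h>
  #include <lemon/hao_orlin.h>

  using namespace lemon;

  int main() {
    ListGraph graph;
    ListGraph::EdgeMap<int> capacity(graph);

    // Made-up example: n0 -> n1 with capacity 3, n1 -> n2 with capacity 1.
    ListGraph::Node n0 = graph.addNode();
    ListGraph::Node n1 = graph.addNode();
    ListGraph::Node n2 = graph.addNode();
    capacity[graph.addEdge(n0, n1)] = 3;
    capacity[graph.addEdge(n1, n2)] = 1;

    // Minimum cut with n0 forced to the source side.
    HaoOrlin<ListGraph, ListGraph::EdgeMap<int> > ho(graph, capacity);
    ho.run(n0);

    ListGraph::NodeMap<bool> cut(graph);
    int value = ho.minCut(cut);
    // Expected here: value == 1, with n0 and n1 marked true (source side).
    return value == 1 ? 0 : 1;
  }

Calling run(n0) fixes n0 on the source side of every cut considered;
minCut(cut) then fills the bool node map with the characteristic vector of
the best cut found and returns its value.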